· 2021
Biased software tools that use artificial intelligence (AI) and machine learning (ML) algorithms can exacerbate societal inequities. Ensuring equitable outcomes from such tools, in particular those used by law enforcement agencies, is crucial. Researchers from the Homeland Security Operational Analysis Center developed a notional acquisition framework of five steps at which ML bias concerns can emerge: acquisition planning; solicitation and selection; development; delivery; and deployment, maintenance, and sustainment. Bias can be introduced into the acquired system during development and deployment, but the other three steps can influence whether, and to what extent, that happens. Therefore, to eliminate harmful bias, efforts to address ML bias need to be integrated throughout the acquisition process. As various U.S. Department of Homeland Security (DHS) components acquire technologies with AI capabilities, actions that the department could take to mitigate ML bias include establishing standards for measuring bias in law enforcement uses of ML; broadly accounting for all costs of biased outcomes; and developing and training law enforcement personnel in AI capabilities. More-general courses of action for mitigating ML bias include performance tracking and disaggregated evaluation, certification labels on ML resources, impact assessments, and continuous red-teaming. This Perspective describes ways to identify and address bias in these systems.
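One of the general mitigation approaches mentioned above, disaggregated evaluation, can be illustrated with a minimal sketch. The example below is hypothetical and not drawn from the report: it assumes a binary classifier's predictions and a demographic attribute are available, and it simply computes error rates separately for each group so that performance gaps become visible.

```python
# Minimal sketch of disaggregated evaluation (hypothetical example, not from the report):
# compute error rates separately for each demographic group so gaps become visible.
from collections import defaultdict

def disaggregated_error_rates(y_true, y_pred, groups):
    """Return per-group false positive and false negative rates."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for truth, pred, group in zip(y_true, y_pred, groups):
        stats = counts[group]
        if truth == 1:
            stats["pos"] += 1
            if pred == 0:
                stats["fn"] += 1
        else:
            stats["neg"] += 1
            if pred == 1:
                stats["fp"] += 1
    return {
        group: {
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else None,
            "false_negative_rate": s["fn"] / s["pos"] if s["pos"] else None,
        }
        for group, s in counts.items()
    }

# Toy usage: group B is misclassified far more often than group A,
# the kind of gap that would flag the model for closer review.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disaggregated_error_rates(y_true, y_pred, groups))
```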
· 2023
The United States has considerable interests in the Arctic and is one of just eight countries with territory in the region. It also has a responsibility to prepare and protect its armed forces, which could be called upon to secure its Arctic interests as the region becomes an increasingly active security environment. Russia continues to maintain and upgrade large-scale, credible Arctic military capabilities. Moreover, China's growing economic and scientific activities in the region could enable it to expand its influence and capabilities there. Beyond strategic competition and growing concerns over the possibility of a North Atlantic Treaty Organization (NATO)-Russia clash, the armed forces of the United States, particularly the U.S. Coast Guard (USCG), continually contend with safety, law enforcement, legal, other national security, and environmental issues in the region. The National Defense Authorization Act for Fiscal Year 2021 requires a report on the Arctic capabilities of the armed forces. This report summarizes the findings of that research; it is intended to address, at a minimum, the congressional request and to contribute related, independent findings about needs and issues.
· 2023
Maintaining and even increasing force readiness in light of changing climate threats is a key part of meeting high-level U.S. strategic goals. In this report, researchers describe a study they conducted to develop links between climate and readiness.
· 2022
Using patent filings, the authors analyze the current position of the United States relative to China in selected technology areas of interest to the Department of the Air Force.
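The patent-based comparison described above lends itself to a simple illustration. The sketch below is a hypothetical example, not the authors' method: it assumes a table of patent filings with country and technology-area columns (the column names and sample data are assumptions) and tallies filings per country within each area as a crude indicator of relative position.

```python
# Hypothetical sketch of a patent-filing tally by country and technology area.
# Column names ("country", "tech_area") and the sample rows are assumptions,
# not the report's data or schema.
import pandas as pd

filings = pd.DataFrame(
    {
        "country": ["US", "US", "CN", "CN", "CN", "US"],
        "tech_area": ["AI", "hypersonics", "AI", "AI", "hypersonics", "AI"],
    }
)

# Count filings per technology area and country, then pivot for side-by-side comparison.
position = (
    filings.groupby(["tech_area", "country"])
    .size()
    .unstack(fill_value=0)
)
print(position)
```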
· 2022
"A large body of academic literature describes myriad attack vectors and suggests that most of the U.S. Department of Defense's (DoD's) artificial intelligence (AI) systems are in constant peril. However, RAND researchers investigated adversarial attacks designed to hide objects (causing algorithmic false negatives) and found that many attacks are operationally infeasible to design and deploy because of high knowledge requirements and impractical attack vectors. As the researchers discuss in this report, there are tried-and-true nonadversarial techniques that can be less expensive, more practical, and often more effective. Thus, adversarial attacks against AI pose less risk to DoD applications than academic research currently implies. Nevertheless, well-designed AI systems, as well as mitigation strategies, can further weaken the risks of such attacks."--
This report provides policymakers and developers of machine learning algorithms with a framework and tools to produce algorithms that are consistent with the U.S. Department of Defense's equity priorities.
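To make the idea of checking an algorithm against an equity priority concrete, the sketch below is a minimal, hypothetical example and not the report's framework or tools: it computes the gap in selection rates between groups, a common demographic-parity-style check, so that a large gap can trigger further review.

```python
# Minimal, hypothetical demographic-parity-style check (not the report's framework):
# compare positive-prediction (selection) rates across groups and report the gap.
def selection_rate_gap(y_pred, groups):
    """Return per-group selection rates and the max-min gap."""
    rates = {}
    for group in set(groups):
        preds = [p for p, g in zip(y_pred, groups) if g == group]
        rates[group] = sum(preds) / len(preds)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy usage: a gap near 0 suggests similar treatment; a large gap warrants review.
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rate_gap(y_pred, groups))
```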