Identifying Systemic Bias in the Acquisition of Machine Learning Decision Aids for Law Enforcement Applications

by Douglas Yeung · 2021

Page count: 23

Biased software tools that use artificial intelligence (AI) and machine learning (ML) algorithms can exacerbate societal inequities. Ensuring equitable outcomes from such tools, particularly those used by law enforcement agencies, is crucial. Researchers from the Homeland Security Operational Analysis Center developed a notional acquisition framework of five steps at which ML bias concerns can emerge: acquisition planning; solicitation and selection; development; delivery; and deployment, maintenance, and sustainment. Bias can be introduced into the acquired system during development and deployment, but the other three steps can influence whether, and to what extent, that happens. Therefore, to eliminate harmful bias, efforts to address ML bias must be integrated throughout the acquisition process.

As various U.S. Department of Homeland Security (DHS) components acquire technologies with AI capabilities, actions that the department could take to mitigate ML bias include establishing standards for measuring bias in law enforcement uses of ML; broadly accounting for all costs of biased outcomes; and developing and training law enforcement personnel in AI capabilities. More-general courses of action for mitigating ML bias include performance tracking and disaggregated evaluation, certification labels on ML resources, impact assessments, and continuous red-teaming. This Perspective describes ways to identify and address bias in these systems.
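
One of the mitigation measures named above is performance tracking and disaggregated evaluation. As a minimal sketch of what disaggregated evaluation involves, the Python snippet below computes per-group accuracy and false-positive rates on synthetic data; the group labels, the data, and the function name are hypothetical illustrations, not anything specified in the Perspective itself.

    # Minimal sketch of disaggregated evaluation: compare a classifier's
    # error rates across demographic groups instead of reporting only a
    # single aggregate score. All data below is synthetic illustration.
    from collections import defaultdict

    def disaggregated_rates(records):
        """Compute per-group accuracy and false-positive rate.

        `records` is an iterable of (group, y_true, y_pred) tuples,
        where labels are 0 (negative) or 1 (positive).
        """
        counts = defaultdict(lambda: {"n": 0, "correct": 0, "fp": 0, "neg": 0})
        for group, y_true, y_pred in records:
            c = counts[group]
            c["n"] += 1
            c["correct"] += int(y_true == y_pred)
            if y_true == 0:
                c["neg"] += 1
                c["fp"] += int(y_pred == 1)
        return {
            g: {
                "accuracy": c["correct"] / c["n"],
                "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else float("nan"),
            }
            for g, c in counts.items()
        }

    if __name__ == "__main__":
        # Synthetic predictions for two hypothetical groups, A and B.
        data = [
            ("A", 0, 0), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
            ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
        ]
        for group, metrics in sorted(disaggregated_rates(data).items()):
            print(group, metrics)

Reporting metrics separately for each group, rather than as one aggregate score, is what makes the evaluation disaggregated: in this toy example the two groups have the same aggregate-looking label mix, but group B's false-positive rate is twice group A's, the kind of disparity the acquisition framework aims to surface before deployment.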