The White House has introduced new regulations requiring U.S. federal agencies to ensure that their artificial intelligence (AI) tools do not compromise public safety or rights. Vice President Kamala Harris emphasized that agencies must be able to verify that the AI tools they use do not endanger the public. By December, each agency must have concrete safeguards in place covering applications ranging from facial recognition at airports to AI systems used in critical areas such as the electric grid and mortgage determinations.
The directive issued by the White House’s Office of Management and Budget builds upon President Joe Biden’s broader executive order on AI issued in October. While Biden’s order aims to regulate advanced commercial AI systems, the new directive specifically addresses longstanding AI tools used by government agencies in areas like immigration, housing, and healthcare.
For instance, Harris highlighted the need to ensure that AI-assisted diagnoses in Veterans Administration hospitals are not racially biased. Agencies that cannot apply the required safeguards must stop using the AI system, unless leadership can justify that discontinuing it would increase risks to safety or rights or impede critical agency operations.
The policy also requires federal agencies to appoint a chief AI officer to oversee their AI technologies and to publish an annual inventory of their AI systems along with the associated risks. Intelligence agencies and the Department of Defense, which is holding separate deliberations on autonomous weapons, are exempt from these rules.
Shalanda Young, director of the Office of Management and Budget, stressed that the requirements are meant to strengthen the government's responsible use of AI, which, when deployed correctly, can make public services more efficient, accurate, and accessible.