The White House has unveiled new rules that require U.S. federal agencies to show that their artificial intelligence tools do not harm the public, or to stop using them altogether. Vice President Kamala Harris emphasized the importance of ensuring that these AI tools do not endanger the rights and safety of the American people. Each agency must have concrete safeguards in place by December governing its use of AI in areas such as facial recognition screenings at airports, control of the electric grid, and decisions about mortgages and home insurance. This new policy directive stems from President Joe Biden's AI executive order signed in October, which aims to safeguard both commercial AI systems and AI tools used by the government.
For instance, if the Department of Veterans Affairs wants to use AI to help diagnose patients in VA hospitals, it must first demonstrate that the AI does not produce racially biased diagnoses. An agency that cannot apply the required safeguards must stop using the AI system, unless it can justify that ceasing use would increase overall risks to safety or rights, or create an unacceptable impediment to critical agency operations. The new policy also mandates that each federal agency appoint a chief AI officer with the experience, expertise, and authority to oversee all AI technologies the agency uses. In addition, agencies must publish an annual inventory of their AI systems, including an assessment of the risks those systems may pose.
Despite these new requirements, intelligence agencies and the Department of Defense are exempt from some of the rules; the use of AI in autonomous weapons is the subject of a separate, ongoing debate. Shalanda Young, the director of the Office of Management and Budget, said the new requirements are intended to strengthen the positive uses of AI by the U.S. government. When responsibly used and overseen, AI can help agencies reduce wait times for critical government services, improve accuracy, and expand access to essential public services. These regulations are part of a broader effort to ensure that AI tools used by the government do not inadvertently harm the public or violate their rights.
The implementation of these new rules signifies a shift towards greater accountability and transparency in the use of AI by federal agencies. By requiring agencies to demonstrate that their AI tools do not pose risks to safety or rights, the government aims to protect the public from potential harm. The appointment of chief AI officers and the annual publication of AI system inventories are steps towards ensuring proper oversight and evaluation of AI technologies within government agencies. While these rules may pose challenges for some agencies, the overall goal is to promote responsible and beneficial uses of AI for the public good.
Moving forward, federal agencies will need to comply with these new rules and work to demonstrate the safety and effectiveness of their AI tools. This shift in policy reflects the growing role of AI in government operations and the need for clear guidelines governing its ethical and responsible use. By fostering a culture of accountability and transparency, the White House aims to build public trust in the government's use of AI and ensure that these technologies benefit society as a whole.