Statehouses nationwide are facing pressure to regulate artificial intelligence programs that play a hidden role in hiring, housing, and medical decisions for millions of Americans. Colorado Governor Jared Polis recently signed a bill aimed at preventing AI discrimination, though he did so with reservations, citing concerns about stifling innovation. The bill requires companies to assess their AI systems for the risk of discrimination and to inform customers when AI has been used to make a consequential decision.
Across the country, similar bills have faltered amid battles between civil rights groups and the tech industry, lawmakers wary of wading into an unfamiliar technology, and governors afraid of spooking AI startups. Only one of the seven bills addressing AI discrimination has passed, signaling a divide between supporters and opponents of regulation. The bills aim to curb bias and discrimination in AI decision-making, a complex problem the US has been slow to regulate.
Most AI-related bills discussed this year have focused on narrow aspects of AI use, such as deepfakes in elections or pornography. By contrast, the seven bills tackling discrimination in AI span multiple industries and take on one of the technology’s most complex problems. Experts emphasize that existing anti-discrimination laws are ill-equipped to handle biased decisions made by AI algorithms at scale, decisions that can put certain groups at a disadvantage in hiring, housing, and medical care.
AI algorithms are often trained on historical data that inadvertently encodes past bias. Amazon’s hiring algorithm, for example, learned to favor male applicants because it was trained on old resumes that came predominantly from men. Examples like this have fueled concerns about the fairness and transparency of AI decision-making. While some lawsuits have shed light on specific instances of AI discrimination, most algorithms remain opaque, leaving the public largely unaware of how widely they are used in consequential decisions.
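To make the mechanism concrete, here is a minimal, hypothetical sketch of how a model trained on skewed historical hiring data can reproduce that skew. The data, features, and model below are invented for illustration and have no connection to Amazon’s actual system.

```python
# Illustrative sketch (synthetic data, not any real company's system):
# a classifier trained on historically skewed hiring decisions learns
# to reproduce that skew for equally qualified applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" data: a gender flag (0 = male, 1 = female) and a
# skill score. Past hiring decisions favored male applicants regardless of skill.
gender = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
hired = ((skill + 1.5 * (gender == 0) + rng.normal(0, 0.5, n)) > 1.0).astype(int)

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill but different gender: the model
# assigns a lower hiring probability to the female applicant because
# that is the pattern present in the training data.
same_skill = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(same_skill)[:, 1])
```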
Colorado’s bill, along with similar proposals in California and Connecticut, aims to increase transparency around AI use by requiring companies to assess their AI systems for bias, inform customers when AI is used in decision-making, and implement oversight programs. However, concerns remain about the effectiveness of corporate self-regulation and the potential for trade-secret claims to hinder transparency. AI companies, from large corporations to startups, agree that addressing algorithmic discrimination is crucial but differ on how best to enforce regulations and prevent bias in AI systems.
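Bias assessments can take many forms, and the bills do not prescribe a single method. As a purely illustrative sketch, one common screening heuristic is the "four-fifths rule," which flags a group whose selection rate falls below 80 percent of the highest group’s rate; the function names, group labels, and threshold below are assumptions for illustration, not language from any of the bills.

```python
# A minimal sketch of one common bias screen: the "four-fifths" (80%) rule,
# which compares selection rates across groups. Everything here is illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate is below `threshold` times
    the highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Example: group B is selected half as often as group A and gets flagged.
audit = [("A", True)] * 60 + [("A", False)] * 40 + \
        [("B", True)] * 30 + [("B", False)] * 70
print(selection_rates(audit))         # {'A': 0.6, 'B': 0.3}
print(disparate_impact_flags(audit))  # {'A': False, 'B': True}
```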
As discussions around AI regulation continue, experts advocate measures that go beyond Colorado’s bill, including independent organizations that test AI algorithms for potential bias. The complexity of rooting out bias in AI systems, particularly when it is embedded throughout an institution, calls for innovative solutions and collaboration among stakeholders. The ongoing debate highlights the challenge of regulating AI while balancing innovation and accountability.