Artificial intelligence (AI) systems are becoming increasingly powerful, raising concerns about potential security risks if they are not properly regulated. The threshold at which AI models must be reported to the U.S. government is set at 10 to the 26th power floating-point operations used in training, a level of computing power that supporters argue could enable the creation of weapons of mass destruction or catastrophic cyberattacks. Critics view these thresholds as arbitrary attempts to regulate math and question their usefulness as a measure of AI capability.

President Joe Biden’s executive order and California’s newly passed AI safety legislation both rely on the 10-to-the-26th-power threshold, with California adding the requirement that regulated AI models also cost at least $100 million to build. The European Union’s AI Act sets a slightly lower bar of 10 to the 25th power, and China has also considered measuring computing power to decide which AI systems need safeguards. No publicly available model currently meets California’s combined threshold, but some companies may already be working on models that will, and those companies will have to share details and safety precautions with the U.S. government.
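To make the comparison concrete, the short sketch below checks a reported training-compute figure against the thresholds described above. The numerical limits (10 to the 26th and 10 to the 25th power operations, and the $100 million cost floor) come from the article; the function name and the example figures are purely illustrative and do not reflect how any regulator actually applies the rules.

```python
# Illustrative only: thresholds as described in the article; the function
# and the example model figures below are hypothetical.

US_REPORTING_THRESHOLD_FLOPS = 1e26      # Biden executive order / California bill
EU_AI_ACT_THRESHOLD_FLOPS = 1e25         # EU AI Act (slightly lower)
CA_MIN_TRAINING_COST_USD = 100_000_000   # California's added cost requirement


def regulatory_flags(training_flops: float, training_cost_usd: float) -> dict:
    """Return which regimes a model's training run would fall under."""
    return {
        "us_reporting": training_flops >= US_REPORTING_THRESHOLD_FLOPS,
        "california": (training_flops >= US_REPORTING_THRESHOLD_FLOPS
                       and training_cost_usd >= CA_MIN_TRAINING_COST_USD),
        "eu_ai_act": training_flops >= EU_AI_ACT_THRESHOLD_FLOPS,
    }


if __name__ == "__main__":
    # Hypothetical model: 3e25 FLOPs of training compute, $40M spent.
    print(regulatory_flags(3e25, 40_000_000))
    # -> {'us_reporting': False, 'california': False, 'eu_ai_act': True}
```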

Debates among AI researchers continue over how to evaluate the capabilities of the latest generative AI technology and assess the risks it poses. Tests can measure an AI system’s ability to solve puzzles, reason logically, or predict text accurately, but there is no clear way to determine which systems might pose a danger to humanity. The number of floating-point operations used to train a model, each a simple addition or multiplication of numbers, has emerged as a common proxy for a model’s capability and risk. Some tech leaders argue, however, that this metric is too simplistic and lacks scientific support as a stand-in for risk.
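For readers unfamiliar with the metric, a widely cited rule of thumb estimates a transformer model’s training compute as roughly six floating-point operations per parameter per training token. The sketch below applies that heuristic; it is a rough approximation for intuition, not how any regulator formally measures compute.

```python
# Back-of-the-envelope estimate of training compute using the common
# 6 * parameters * tokens heuristic. An approximation only.

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute via the 6 * N * D rule of thumb."""
    return 6.0 * n_parameters * n_training_tokens


# Example: a 175-billion-parameter model trained on 300 billion tokens
# (roughly the published GPT-3 figures) lands around 3e23 FLOPs,
# well below both the 1e25 and 1e26 regulatory lines.
flops = estimated_training_flops(175e9, 300e9)
print(f"{flops:.2e} FLOPs")  # ~3.15e+23
```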

Critics, including the venture capitalists Marc Andreessen and Ben Horowitz, have warned that AI regulation could stifle innovation in the emerging AI startup industry, arguing that limits tied to computing power could deter companies from developing more advanced models. The sponsor of California’s legislation has defended the bill, saying the thresholds are meant to exclude models that, on current evidence, lack the ability to cause critical harm. Both California and the Biden administration treat the metric as a temporary measure that may be adjusted in the future.

While some view the thresholds as a necessary first step in regulating increasingly capable AI systems, others argue they may not accurately capture the risks of advancing AI technology. As developers build smaller models that require less computing power, the current thresholds may miss the potential harms of widely used AI products. Researchers advise regulators to stay flexible and adjust the metric so that it continues to cover the systems most likely to have a significant impact on society. Despite the criticism, many believe some form of regulation is needed to guard against unforeseen dangers from powerful AI systems.
