A Silicon Valley firm called Rhombus Power used generative AI to collect and analyze non-classified data on illicit Chinese fentanyl trafficking in a 2019 operation called Sable Spear. The AI identified twice as many companies and 400% more people engaged in illegal activity than human-only analysis had. U.S. intelligence officials were impressed with the results and shared them with Beijing authorities, urging a crackdown on the illicit trade. Generative AI also produced evidence summaries for potential criminal cases, saving significant work hours.

Rhombus Power later used generative AI to predict Russia’s full-scale invasion of Ukraine with 80% certainty four months in advance for a different U.S. government client. The firm claims its AI technology has also alerted government customers to imminent North Korean missile launches and Chinese space operations. U.S. intelligence agencies recognize the need to embrace the AI revolution to keep pace with exponential data growth. However, they remain cautious about the technology’s immaturity and potential vulnerabilities, especially in generative AI prediction models trained on vast datasets.

While U.S. intelligence agencies are actively experimenting with generative AI, much of the work is happening in secret. The CIA has developed a generative AI tool called Osiris, used by thousands of analysts across the 18 U.S. intelligence agencies. It runs on unclassified and publicly available data, providing annotated summaries and a chatbot function for deeper queries. The CIA’s chief technology officer emphasizes the importance of ensuring the accuracy and security of information generated by AI models, especially as they continue to evolve.

Generative AI is seen as a valuable tool for enhancing predictive analysis in the intelligence community. Rhombus Power’s CEO believes that the ability to predict an adversary’s likely actions will be a significant paradigm shift in national security. Other major AI players, such as Microsoft and Primer AI, are vying for contracts with U.S. intelligence agencies to supply advanced AI technologies for various purposes. The ongoing concern for U.S. officials is ensuring the privacy and security of sensitive data while also countering adversaries’ potential use of AI to undermine U.S. defenses.

As U.S. intelligence agencies continue to explore the capabilities of generative AI, there are concerns about potential abuse and unintended consequences. Government officials are cautious about adopting AI too swiftly or completely, given the complexities of intelligence analysis and the limitations of current AI models. While AI technology can enhance certain aspects of intelligence work, analysts still rely on instinct, collaboration, and institutional knowledge to make critical decisions. The challenge lies in finding the right balance between leveraging AI technology and maintaining the human intelligence essential to the intelligence community’s success.
