Hugging Face, a popular online repository for generative AI models, has recently come under scrutiny after security researchers discovered thousands of malicious models hosted on the platform, hiding code that could steal information and compromise data. Researchers from the security startups Protect AI, HiddenLayer, and Wiz have identified more than 3,000 malicious models, some uploaded by hackers with code that can harvest the access tokens users rely on to pay AI and cloud providers.

Some malicious actors have gone so far as to create fake profiles on Hugging Face posing as well-known technology companies such as Meta, Facebook, and Visa, in an attempt to lure unsuspecting users into downloading infected models. One such model, masquerading as the genetic-testing company 23andMe, deceived users and was downloaded thousands of times before being detected. The malicious code hidden in the fake model was designed to hunt for AWS passwords, which could be used to steal cloud resources.

In response to these security concerns, Hugging Face has integrated Protect AI's scanning tool into its platform to flag malicious code in models before they are downloaded. The company has also begun verifying the profiles of major companies such as OpenAI and Nvidia to build trust in the models available on its site. As AI and machine learning technologies spread, stronger security measures against malicious actors targeting the AI community have become increasingly important.
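Many model files on Hugging Face are distributed as pickle-based PyTorch checkpoints, and scanners of this kind typically work by inspecting a pickle stream for opcodes that can trigger code execution on load. As a rough sketch of the idea (this is not Protect AI's actual tool, and the opcode allow-list here is illustrative), Python's standard `pickletools` module can walk a pickle stream without executing it:

```python
import io
import pickle
import pickletools

# Opcodes that can resolve or invoke arbitrary Python callables when the
# pickle is loaded -- the mechanism malicious model files typically abuse.
DANGEROUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data: bytes) -> list:
    """Return the names of code-execution opcodes found in a pickle stream,
    without ever loading (and thus executing) the pickle itself."""
    found = []
    for opcode, _arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in DANGEROUS_OPCODES:
            found.append(opcode.name)
    return found

# A benign pickle of plain data contains none of the flagged opcodes...
safe = pickle.dumps([1, 2, 3])
print(scan_pickle(safe))  # → []

# ...while one built from a __reduce__ payload does.
class Payload:
    def __reduce__(self):
        # At load time this would call print("pwned"); a real attack
        # would call something like os.system instead.
        return (print, ("pwned",))

malicious = pickle.dumps(Payload())
print(scan_pickle(malicious))  # e.g. ['STACK_GLOBAL', 'REDUCE']
```

Production scanners are far more sophisticated, but the core trade-off is the same: static inspection catches the common payload patterns, which is why safer formats such as safetensors avoid the problem entirely by carrying no executable content.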

The United States’ Cybersecurity and Infrastructure Security Agency, along with security agencies from Canada and Britain, issued a joint warning in April urging businesses to scan pre-trained models for potentially dangerous code and to run them only on non-critical systems. Hackers typically inject rogue instructions into code downloaded from Hugging Face, allowing them to hijack the model when it is run by unsuspecting users. Though the technique is a classic one, such attacks can be difficult to detect and to trace back to their source.
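One concrete defense along the lines of the agencies' advice is to load untrusted checkpoint files with a restricted unpickler that refuses to resolve any callable outside an allow-list, so a rogue payload fails to load instead of executing. A minimal sketch, assuming a pickle-format model file (the allow-list here is illustrative, not a vetted policy):

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Unpickler that blocks all globals except an explicit allow-list,
    defeating the __reduce__-style payloads hidden in malicious models."""
    ALLOWED = {("builtins", "list"), ("builtins", "dict")}  # illustrative

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

# A payload that would execute on a plain pickle.load()...
class Payload:
    def __reduce__(self):
        return (eval, ("1 + 1",))  # stand-in for a real attacker's command

data = pickle.dumps(Payload())

# ...is rejected outright by the restricted unpickler.
try:
    SafeUnpickler(io.BytesIO(data)).load()
except pickle.UnpicklingError as e:
    print(e)  # → blocked global: builtins.eval
```

This is exactly the hardening the `pickle` documentation itself suggests via `Unpickler.find_class`; it complements, rather than replaces, the advice to scan models and run them on non-critical systems.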

Hugging Face, founded by Clément Delangue, Julien Chaumond, and Thomas Wolf, has pivoted from a teenager-focused chatbot app into a platform for machine learning, raising $400 million to date and earning a valuation of $4.5 billion. As interest in AI research grows, so does the incentive for bad actors to target the AI community. The company’s partnership with Protect AI and its broader security efforts aim to improve trust in machine learning artifacts and make sharing and adopting AI models easier and safer for users.
