February 12, 2026

Google says its AI chatbot Gemini is facing large-scale “distillation attacks”

By Noor Bazmi | usagoldmines.com

Google’s AI chatbot Gemini has become the target of a large-scale information heist, with attackers hammering the system with questions to copy how it works. One operation alone sent more than 100,000 queries to the chatbot, trying to pull out the secret patterns that make it smart.

The company reported Thursday that these so-called “distillation attacks” are getting worse. Bad actors send wave after wave of questions to figure out the logic behind Gemini’s responses. Their goal is simple: steal Google’s technology to build or improve their own AI systems without spending billions on development.
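Mechanically, a distillation attack needs only black-box access: the attacker sends inputs, records the model’s outputs, and trains a cheap “student” to mimic the expensive “teacher.” A toy sketch of that idea is below — the secret coefficients and the two-query recovery are purely illustrative assumptions, not details from Google’s report, and a real attack on an LLM would need far more queries and a far more complex student.

```python
# Toy illustration of a distillation attack. The attacker never sees the
# teacher's internals; they only observe its answers to their queries.

def teacher(x):
    # Hypothetical proprietary model: these coefficients are the "secret"
    # the attacker wants to copy.
    a_secret, b_secret = 3.5, -1.25
    return a_secret * x + b_secret

# Step 1: query the API. Two probes suffice for this toy linear model;
# the operation described in the article sent 100,000+ queries to Gemini.
b_est = teacher(0.0)            # intercept leaks from one query
a_est = teacher(1.0) - b_est    # slope leaks from a second

# Step 2: build a "student" that reproduces the teacher's behavior
# without ever having access to its parameters.
def student(x):
    return a_est * x + b_est

print(student(10.0) == teacher(10.0))  # the behavior is copied exactly
```

The same logic scales up: with enough question–answer pairs, a student model can absorb much of what the teacher “knows,” which is why query volume itself is the attack signature defenders look for.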

Google believes most attackers are private businesses or researchers looking to get ahead without doing the hard work. The attacks came from around the world, according to the company’s report. John Hultquist, who leads Google’s Threat Intelligence Group, said smaller companies using custom AI tools will likely face similar attacks soon.

Tech firms have thrown billions of dollars at building their AI chatbots. The inner workings of these systems are treated like crown jewels. Even with defenses in place to catch these attacks, major AI systems remain easy targets because anyone with internet access can talk to them.

Last year, OpenAI pointed fingers at Chinese company DeepSeek, claiming it used distillation to make its models better. Cryptopolitan reported on January 30 that Italy and Ireland banned DeepSeek after OpenAI accused the Chinese firm of using distillation to steal its AI models. The technique lets companies copy expensive technology at a fraction of the cost.

Why are attackers doing this?

The economics are brutal. Building a state-of-the-art AI model costs hundreds of millions or even billions of dollars. DeepSeek reportedly built its R1 model for around six million dollars using distillation, while ChatGPT-5’s development topped two billion dollars, according to industry reports. Stealing a model’s logic cuts that massive investment to almost nothing.

Many of the attacks on Gemini targeted the algorithms that help it “reason” or process information, Google said. Companies that train their own AI systems on sensitive data – like 100 years of trading strategies or customer information – now face the same threat.

“Let’s say your LLM has been trained on 100 years of secret thinking of the way you trade. Theoretically, you could distill some of that,” Hultquist explained.

Nation-state hackers join the hunt

The problem goes beyond money-hungry companies. APT31, a Chinese government hacking group hit with US sanctions in March 2024, used Gemini late last year to plan actual cyberattacks against American organizations.

The group paired Gemini with Hexstrike, an open-source hacking tool that can run more than 150 security programs. They analyzed remote code execution flaws, ways to bypass web security, and SQL injection attacks – all aimed at specific US targets, according to Google’s report.

Cryptopolitan covered similar AI security concerns previously, warning that hackers were exploiting AI vulnerabilities. The APT31 case shows those warnings were spot-on.

Hultquist pointed to two major worries: adversaries operating across entire intrusions with minimal human help, and the automated development of attack tools. “These are two ways where adversaries can get major advantages and move through the intrusion cycle with minimal human interference,” he said.

The window between discovering a software weakness and getting a fix in place, called the patch gap, could widen dramatically. Organizations often take weeks to deploy defenses. With AI agents finding and testing vulnerabilities automatically, attackers could move much faster.

“We are going to have to leverage the advantages of AI, and increasingly remove humans from the loop, so that we can respond at machine speed,” Hultquist told The Register.

The financial stakes are enormous. IBM’s 2024 data breach report found that intellectual property theft now costs organizations $173 per record, with IP-focused breaches jumping 27% year-over-year. AI model weights represent the highest-value targets in this underground economy – a single stolen frontier model could fetch hundreds of millions on the black market.

Google has shut down accounts linked to these campaigns, but the attacks keep coming from “throughout the globe,” Hultquist said. As AI becomes more powerful and more companies rely on it, expect this digital gold rush to intensify. The question isn’t whether more attacks will come, but whether defenders can keep up.


This article is written by: Nermeen Nabil Khear Abdelmalak

All rights reserved to: USAGoldMines. www.usagoldmines.com
