As someone who has experience with most of the new and upcoming AI tools, I think the attack on the Mexican Government, which resulted in a very large breach of sensitive data, is not getting the headlines it deserves. The attackers used the government's support chatbot, which was connected to Anthropic's Claude, to dump the data. With AI being implemented in just about everything we do now, you would think the security of these tools would be robust and a priority. However, with the amount of money being poured into the AI craze, corners are being cut, and that causes a few large issues I would like to talk about.
The first thing that stands out about the Mexico attack is that this is the first time a threat actor has used an AI chatbot to spit out sensitive data held in a database elsewhere, straight from a prompt. We have seen people extract an AI's system prompt and rule sets through prompting before, but this takes the cake. It almost seems like satire, but the hackers were able to dump and export the government's entire tax history database, as well as the voting system history, revealing exactly who voted for whom, just by insisting to the chatbot over and over. They even said please and thank you when it finally complied, after hours of constant begging.
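An incident like this is a reminder that prompt-level refusals can be worn down by sheer persistence, so a separate check that sits outside the model is a sensible backstop. Here is a minimal, purely illustrative sketch of an output-side filter that scans a chatbot's reply for record-like patterns before it ever reaches the user; the patterns and the line-count heuristic are my own assumptions, not anything from the actual incident:

```python
import re

# Hypothetical output-side filter: scan the model's reply for patterns that
# look like sensitive records before sending it to the user. The patterns
# below are illustrative examples only.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-style identifier
    re.compile(r"\b[A-Z]{4}\d{6}[A-Z0-9]{3}\b"),   # RFC-style tax ID (Mexico)
]
MAX_REPLY_LINES = 20  # crude heuristic: a support answer should never be a bulk dump

def reply_is_safe(reply: str) -> bool:
    """Return False if the model's reply looks like a data dump."""
    if len(reply.splitlines()) > MAX_REPLY_LINES:
        return False
    return not any(p.search(reply) for p in SENSITIVE_PATTERNS)

print(reply_is_safe("Your ticket has been escalated."))   # True
print(reply_is_safe("Record: 123-45-6789, Juan Perez"))   # False
```

A filter like this would not have made the chatbot smart enough to refuse, but it would have stopped a multi-thousand-row dump from ever leaving the server, no matter how politely it was requested.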
The other issue that is becoming more prominent is exploits and payloads with an LLM embedded in them. While it is still too early to see the full capabilities of these tools, we are already seeing the first signs of what could be a major problem in the future. Large-scale attacks built on AI exploits can adapt on the fly. Tools like WormGPT or EvilAI are smart and able to quickly create and push attacks, while also sending back more detailed, and possibly sensitive, reports on what they find, which helps the attacker later down the road, since the AI is more likely to surface relevant data.
What led us here
How did we get here? AI is fresh to the industry and things change around it all the time. Here are at least three things I can see that contribute to this issue.
- The majority of large tech companies are using tools like Claude, ChatGPT, or OpenClaw (ClawdBot), which can be integrated into many services and even come with pre-built connectors for them. The developers at these companies use these AI tools heavily because they're fast, they work non-stop, and they're cheap.
- Instead of having help desk staff or dedicated support teams, many companies are relying on AI to handle customer inquiries and support tickets. Aside from the very annoying fact that you now have to deal with a fairly low-level LLM whenever you have an issue, these AI helpers are reliable as a basic support tool and can be implemented in phone systems as well. However, these bots are usually trained on company-specific data, and just as we saw with AI prompt jailbreaking, that is an issue: the AI is built to do what it is asked, even when what is being asked for is regulated and protected data. It's a machine.
- The final issue is that companies are not spending the time or money to secure these integrations, perhaps because they come from a large AI company like OpenAI, which is assumed to be bulletproof. Or perhaps it is because these AI companies pitch their product with "speed to market" as the selling point. This is not a new theme; we have seen time and time again how a company gets attacked at a weak point that could easily have been prevented, had they done actual, rigorous testing before launch.
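The support-bot problem in the list above gets much smaller if the model is never allowed to query protected data directly. A minimal sketch of the idea, with made-up lookup names and a deny-by-default gate between the model's requested action and the database (none of this reflects any specific vendor's API):

```python
# Hypothetical sketch: the LLM never talks to the database directly. It can
# only request named, pre-approved lookups; anything outside the allow-list
# is refused no matter how the user (or the model) phrases the request.

ALLOWED_LOOKUPS = {
    "order_status",   # safe: shipping state for a single order ID
    "store_hours",    # safe: public information
}

PROTECTED_LOOKUPS = {
    "tax_history",    # regulated data: must go through a human
    "voter_records",
}

def run_lookup(name: str, arg: str) -> str:
    """Gate every model-initiated lookup through an explicit allow-list."""
    if name not in ALLOWED_LOOKUPS:
        # Deny by default: this covers PROTECTED_LOOKUPS and anything unknown.
        return f"refused: '{name}' is not an approved support lookup"
    return f"ok: ran {name}({arg})"

print(run_lookup("order_status", "A-1042"))
print(run_lookup("tax_history", "everyone"))
```

The point of the allow-list shape is that no amount of begging changes what the bot is physically able to fetch; the jailbreak surface shrinks from "everything the model was trained on or connected to" down to a short, auditable list.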
This leads me to my main point. I manage the security of a few services for the company I work for, and I have recently integrated AI into some of them for social media marketing generation (specifically Docker for the services, a ChatGPT business environment, and Dropbox to store what the AI produces). I see how these large AI companies ship and sell their products, and I can see how a company that is not security-focused could easily overlook the security these integrations need. However, with the craze of AI adoption, as well as the possible monetary benefits for a company using it, AI is being jammed into every possible place.
This will not be the last attack using AI that you see or hear about. Data breach after data breach occurs, and yet there is no serious talk about improving security or the laws around it. In 2024 the entire U.S. social security database was compromised and nothing came of it. Most likely a settlement was reached between vendors under the table, and that was that. This is just one example of how customer data, and the security it requires, gets overlooked in the name of profits or shareholder value. Companies are actively forgetting how important the security of your data, and your customers' data, really is.
My closing remark is that AI is a great tool, as are all of the fun connectors you can find for it on GitHub, or even the libraries in code editors like VS Code. But just as we have seen with all of those, they are prime targets that can turn a small, outdated, insecure Python library into a backdoor for everyone who downloads it from what they think is a secure service. Never assume that something is secure, and always keep security at the forefront of your mind. As we continue to rapidly push software out for the sake of profit, worse and worse cyber attacks happen. The affected service usually just points a finger at a vendor it uses, pays a settlement, and the actual users who lost their data get nothing, and now have their information sitting on someone else's computer. Don't be that company.
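One concrete habit that blunts the poisoned-library scenario above is pinning dependencies by cryptographic hash, so a swapped or backdoored artifact fails loudly instead of installing silently. A minimal sketch using only Python's standard library (the payload bytes here are made up for illustration):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Compute the SHA-256 digest of a downloaded artifact."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_hash: str) -> bool:
    """Refuse to use an artifact whose hash doesn't match the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_hash

# Illustrative only: in real projects you would pin hashes in
# requirements.txt and install with
#   pip install --require-hashes -r requirements.txt
payload = b"print('hello')"
pinned = sha256_of(payload)
print(verify_artifact(payload, pinned))               # True
print(verify_artifact(b"print('backdoor')", pinned))  # False
```

Hash pinning doesn't tell you whether the original library was trustworthy, but it does guarantee that the thing you audited is the thing that actually lands on every developer's machine.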
Thank you!