Artificial Intelligence (AI) can clearly be a force for good in our future; it's already being used to advance things like medical research. But what about it being a force for bad?
The thought that somewhere out there, there's a James Bond-like villain in an armchair stroking a cat and using generative AI to hack your PC may seem like fantasy, but quite frankly, it's not. Cybersecurity experts are already scrambling to thwart millions of threats from hackers who use generative AI to break into PCs and steal money, credentials, and data, and with the rapid proliferation of new and improved AI tools, it's only going to get worse.
The types of cyberattacks hackers are using aren't necessarily new. They're just more prolific, sophisticated, and effective now that hackers have weaponized AI. Here's what to look out for…
AI-generated malware
Next time you see a pop-up, you may want to hit Ctrl-Alt-Delete real quick! Why? Because hackers are using AI tools to write malware like there’s no tomorrow and it’s showing up in browsers.
Security experts can tell when malware has been written by generative AI by looking at its code. Malware written with AI tools is quicker to produce, can be better targeted at victims, and is more effective at bypassing security platforms than code written by hand, according to a paper in the journal Artificial Intelligence Review.
One example is malware discovered by HP's threat research team and highlighted in its September 2024 Threats Insights Report. The company said it found malicious code hidden in an extension that hackers used to take over browser sessions and direct users to websites flogging fake PDF tools.
The team also found SVG images harboring malicious code that could launch infostealer malware. The malware in question featured code with "native language and variables that were consistent with an AI generative tool," a clear indicator of its AI origin.
Evading security systems
It's one thing to write malware with AI tools; it's quite another to keep it effective at bypassing security. Hackers know that cybersecurity companies move quickly to detect and block new malware, which is why they're using Large Language Models (LLMs) to obfuscate or slightly change it.
AI can be used to blend code into known malware or create whole new variants that security detection systems won't recognize. Doing this is most effective against security software that relies on recognizing known patterns of malicious activity, cybersecurity professionals say. In fact, it's quicker to do this than to create malware from scratch, according to Palo Alto Networks Unit 42 researchers.
The Unit 42 researchers demonstrated how this is possible. They used LLMs to generate 10,000 variants of known malicious JavaScript code, each with the same functionality as the original.
These variants were highly successful at avoiding detection by machine learning-based malware classifiers like Innocent Until Proven Guilty (IUPG), the researchers found. They concluded that with enough code transformations it was possible for hackers to "degrade the performance of malware classification systems" enough to avoid detection.
Two other kinds of malware that hackers are using to evade detection are possibly even more alarming because of their smart capabilities.
Dubbed "adaptive malware" and "dynamic malware payloads," these threats learn and adjust their code, encryption, and behavior in real time to bypass security systems, cybersecurity experts say.
While these types predate LLMs and AI, generative AI is making them more responsive to their environments and therefore more effective, they explain.
Stealing data and credentials
AI software and algorithms are also being used to steal user passwords and logins and break into accounts more successfully, according to cybersecurity firms.
Cybercriminals generally use three techniques to do this: credential stuffing, password spraying, and brute force attacks, and AI tools are useful for all of these techniques, they say.
Predictive biometric algorithms are making it easier for hackers to spy on users as they type their passwords, and therefore easier to break into large databases containing user information.
Additionally, hackers deploy scanning and analysis algorithms to quickly map networks, identify hosts and open ports, and fingerprint the software in operation to discover vulnerabilities.
Brute force attacks have long been a favorite method of amateur hackers. This type of attack involves bombarding a large number of companies or individuals with trial-and-error attempts in the hope that just a few will be penetrated.
Traditionally, only one in 10,000 attacks is successful thanks to the effectiveness of security software. But this software is becoming less effective due to the rise of password-cracking algorithms that can quickly analyze large data sets of leaked passwords and direct brute force attacks more effectively.
Algorithms can also automate hacking attempts across multiple websites or platforms at once, cybersecurity experts warn.
More effective social engineering and phishing
Conventional generative AI tools like Gemini and ChatGPT, as well as their dark web counterparts like WormGPT and FraudGPT, are being used by hackers to mimic the language, tone, and writing styles of individuals to make social engineering and phishing attacks more personalized to victims.
Hackers are also using AI algorithms and chatbots to harvest data from user social media profiles, search engines, and other websites (and directly from the victims themselves) to create dynamic phishing pitches based on an individual’s location, interests, or their responses.
With AI modeling, hackers can even predict the likelihood that their hacks and scams will be successful.
This is another area where hackers are deploying smart bots that can learn from previous attacks and change their behavior to make future attacks more likely to succeed.
Phishing emails generated by hackers using AI software are more successful at fooling people, research shows. One reason is that they tend to involve fewer red flags like grammatical errors or spelling mistakes that give them away.
Singapore's Government Technology Agency (GovTech) demonstrated this at the Black Hat USA cybersecurity convention in 2021. At the convention, it reported on an experiment in which spear phishing emails generated by OpenAI's GPT-3 and ones written by hand were sent to participants.
The experiment found the participants were much more likely to click on the AI-generated emails than the handwritten ones.
Science fiction-like impersonation
The use of generative AI for impersonation gets a little science-fictiony when you start talking about deep-fake videos and voice clones.
Even so, hackers are using AI tools to copy the likenesses and voices of people known to their victims in videos and recordings (voice cloning attacks are known as voice phishing, or vishing) in order to pull off their swindles.
One high-profile case happened back in 2024 when a finance worker was conned into paying out $25m to hackers who used deep-fake video technology to pose as the company’s chief financial officer and other colleagues.
These aren’t the only AI impersonation techniques, though. In our article “AI impersonators will wreak havoc in 2025. Here’s what to watch out for,” we cover eight ways AI impersonators are trying to scam you, so be sure to check it out for a deeper dive on the topic.