Weaponized Generative AI: Combating This Rising Threat To Security

According to an article in Wired, generative AI has been compared by both supporters and detractors to the atom bomb. As with the bomb, humanity now faces the sudden unleashing of technology so powerful, so all-consuming, that it is certain to radically change the world as we know it.

This poses a challenge for enterprises, and in some ways an even greater one than those of the atomic age. With the Manhattan Project, only Robert Oppenheimer and a small cadre of physicists and generals knew what the technology could do. There were rigorous background checks, stringent security protocols and constant supervision. And since the project's completion, nuclear weapons technology has remained the most closely guarded capability on Earth.

Not so with AI. This technology is already available to anyone and everyone, from innovative startups to main street corporations to criminals, rogue nation states and other bad actors. There is nothing holding anyone back from abusing this technology.

However, it is not all bad news. The same accessibility that makes generative AI a potential threat to enterprise security also makes it a great defensive tool. Here’s how I view the upside and downside of this technology for cybersecurity.

Not Your Grandparents’ Cyber Weapon

Whereas the most dangerous cybercriminals today are those with deep pockets and large workforces (state actors and organized crime), generative AI tools like ChatGPT can easily be weaponized by bad actors of all sizes, targeting individual consumers as well as large institutions.

Consider deepfake and voice-cloning technology, for instance. Spear-phishing scams can be extremely effective when an email, voicemail or video call looks and sounds exactly like someone you know. According to Darktrace research, there has been a 135% uptick in sophisticated phishing attacks since the arrival of ChatGPT. Soon, it may be nearly impossible to distinguish a digital imposter from your employee, vendor or boss.

Security Risks To Watch For

Generative AI can also help cybercriminals rapidly, cheaply and effectively write malware. One example of fraud businesses should watch for is the man-in-the-middle attack, which could now be carried out without a human in the loop. Even if OpenAI blocks such code, workaround prompts are already available on the internet. And whatever malware-detection tools exist, generative AI can be used to modify code to evade them.

Generative AI can also help create synthetic identities and data to use in financial fraud, right down to realistic-looking identity documents, receipts or other credentials. It can also create and propagate misinformation online in order to short a stock, discredit an organization or blackmail somebody.

This technology can be especially useful for any form of hacking that in the past required laborious human effort. Data scraping for later use in illicit activities, for example, can be done through generative AI tools like Scrapeghost.
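Tools like Scrapeghost use large language models to do this at scale. As a much simpler illustration of the underlying idea (and not Scrapeghost's actual API), here is a minimal standard-library sketch that harvests email addresses from a page's HTML:

```python
import re
from html.parser import HTMLParser

# Matches common email shapes; requires word characters after each dot
# so trailing sentence punctuation is not captured.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

class TextExtractor(HTMLParser):
    """Collects the visible text chunks of an HTML page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def scrape_emails(html: str) -> list[str]:
    """Return the unique email addresses found in a page's text."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks)
    return sorted(set(EMAIL_RE.findall(text)))

page = "<html><body>Contact <b>alice@example.com</b> or bob@example.com.</body></html>"
print(scrape_emails(page))  # ['alice@example.com', 'bob@example.com']
```

Where this toy version needs a hand-written regex per data type, an LLM-driven scraper can be pointed at arbitrary pages and told in plain language what to extract, which is exactly what lowers the bar for abuse.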

Brute-force attacks can be accomplished by learning the common patterns that make human-generated passwords vulnerable, then generating and testing endless candidates. Even advanced persistent threats (APTs), which used to require a large amount of human resources on a prolonged basis, can now be executed by a lone hacker with minimal effort.
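The pattern-learning idea above can be sketched in miniature. The word lists below are hypothetical stand-ins for what a model trained on leaked password corpora would actually supply at vastly greater scale:

```python
import hashlib
from itertools import product

# Hypothetical "learned" building blocks: in a real attack, a model
# trained on breach data would rank and supply these patterns.
COMMON_WORDS = ["summer", "dragon", "admin"]
COMMON_YEARS = ["2023", "2024"]
COMMON_SUFFIXES = ["!", "123"]

def candidate_passwords():
    """Yield guesses following the common word+year+suffix habit."""
    for word, year, suffix in product(COMMON_WORDS, COMMON_YEARS, COMMON_SUFFIXES):
        yield f"{word.capitalize()}{year}{suffix}"

def brute_force(target_hash, hash_fn):
    """Test each candidate against a stolen hash; return the match, if any."""
    for guess in candidate_passwords():
        if hash_fn(guess) == target_hash:
            return guess
    return None

h = lambda s: hashlib.sha256(s.encode()).hexdigest()
print(brute_force(h("Summer2024!"), h))  # Summer2024!
```

The point of the sketch is that a structured guesser covering the patterns people actually use searches a tiny fraction of the space a blind brute force would, which is why "Summer2024!" falls instantly while a random string of the same length does not.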

Thanks to generative AI, imagination is now the only limit to what cybercriminals can do.

Cyber Defenders Must Fight Fire With Fire

While generative AI offers criminals countless new attack opportunities, defenders can get a lot of mileage with this technology as well. Here are just a few of the possibilities for including generative AI in your organization’s security apparatus.

• Threat detection. Generative AI's ability to understand language-based data allows it to learn about the most recent threats from online intelligence communities and to discover and respond to threats using simple voice prompts. This can save cybersecurity professionals untold hours of research and speed up the response to new threats.

• Training defense models. Generative adversarial network (GAN) models can be used to simulate various attack scenarios and train cyber defenses to recognize and thwart them. This offers the opportunity to vastly improve both the speed and accuracy of defenses.

• Vulnerability testing. The current model of using red teams or posting bug bounties to discover vulnerabilities in your system is usually expensive and time-consuming. If the bad guys can use generative AI to probe for weaknesses, the best defense is to beat them to the punch and resolve potential chinks in the armor before they become a problem.

• Employee education. The majority of cybersecurity incidents involve some element of human error. Training staff to identify realistic phishing emails composed by generative AI is one way to shore up your defenses. Using generative AI to study user logs and flag unsafe employee behavior is another.

• Cyber management. After an incident occurs, generative AI can help create management reports explaining the incident and making recommendations. It can also scan logs and compile routine compliance reports, freeing up cybersecurity professionals to focus on higher-level activities. This is good news amid a global shortfall of 3.4 million security professionals.
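To give a flavor of the log-scanning idea in the last bullet: a generative model would summarize messy real-world logs in natural language, but a deterministic sketch of the same task (assuming a toy "key=value" log format of my own invention) looks like this:

```python
from collections import Counter

def summarize_auth_log(lines):
    """Compile a minimal incident summary from auth-log lines.

    Assumes a simple 'TIMESTAMP LEVEL user=NAME action=RESULT' format;
    a generative model would handle messier real-world logs directly.
    """
    failures = Counter()
    total = 0
    for line in lines:
        total += 1
        fields = dict(f.split("=", 1) for f in line.split() if "=" in f)
        if fields.get("action") == "login_failed":
            failures[fields.get("user", "unknown")] += 1
    # Flag accounts with repeated failures for the compliance report.
    flagged = [user for user, n in failures.items() if n >= 3]
    return {
        "events": total,
        "failed_logins": sum(failures.values()),
        "flagged_users": flagged,
    }

log = [
    "2024-01-05T09:00 INFO user=alice action=login_ok",
    "2024-01-05T09:01 WARN user=bob action=login_failed",
    "2024-01-05T09:02 WARN user=bob action=login_failed",
    "2024-01-05T09:03 WARN user=bob action=login_failed",
]
print(summarize_auth_log(log))
```

Today a script like this is written by hand for each log format; the promise of generative AI is that the summarization and the prose report around it come from a single natural-language request.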

A Generative AI Arms Race Is Brewing

Because this technology is adaptive, both attackers and defenders will constantly improve, each attempting to gain the upper hand. This marks the beginning of a perpetual AI arms race in cybersecurity.

Whereas in the past, many new cyber threats were met with focused, individual point solutions, the emergence of generative AI looks set to change the entire cyber-defense landscape. Instead of going out and buying a new widget to ward off the latest attack trend, I see cyber defenders increasingly looking to generative AI to update their methodologies.

Although cybersecurity investment is down dramatically this year, there could still be huge opportunities for startups amid this arms race. Recognizing that time is of the essence, OpenAI launched a $1 million grant program in early June 2023 for innovative cyber defense using generative AI. I see untold opportunities for cybersecurity companies that discover novel ways to use this technology for good.

I know one thing—the bad guys are not hesitating.


A version of this article originally appeared on Forbes