AI allows a lot of work to be done at scale. Sounds good, right? But when your job is scamming organizations and people online, this translates into a lot of crime—and massive expenses for organizations that are constantly vulnerable.
The cost of cybercrime globally in 2023 is projected to reach $8 trillion. That means that if it were measured as a country, cybercrime would be the world’s third-largest economy after the U.S. and China. That estimate is largely driven by the scale and innovation that AI makes possible in cybercrime.
On the flip side, AI is a vital and potent tool for mitigating cyber risks. Here’s an overview of how AI is both driving explosive growth in cyber attacks and helping defend against four of the most prevalent types of attacks.
AI is an engine for cybercrime
AI makes online scamming easier in several ways. It increases cybercriminals’ bandwidth by allowing them to target larger audiences, faster. And it puts tools for malfeasance in more hands.
It no longer takes a lot of resources to get a cybercrime effort up and running. Buyers can purchase ready-made, out-of-the-box phishing kits, complete with fake web pages that credibly impersonate companies and step-by-step instructions for running an email phishing scam. Untrained hackers can now launch a cybercrime campaign for a few bucks. The FBI is in the midst of prosecuting a darknet user who allegedly created malware using generative AI, then offered other cybercriminals instructions on how to use it to recreate malware strains.
Creating and countering more personalized, on-brand phishing
Phishing—tricking users into giving up security credentials, organizational data, or financial information—has gone next level with generative AI like ChatGPT. Fraudsters can now create fake emails, texts (smishing), voicemails (vishing), and social media posts without grammatical errors, in a brand’s voice, peppered with personal details gathered from the entire internet—and appealing emojis. Malicious attachments arrive not just when a user clicks through to a sketchy free streaming site but via services in everyday business use.
The lures are so believable that it’s becoming nearly impossible for victims to distinguish scams from genuine communication. This is a real killer for productivity, as staff members spend more and more time just trying to sort scams from authentic messages.
Until recently, the traditional method of stopping email-delivered threats relied on assessments of previous attacks. But historical data is becoming less useful as a predictor of phishing when the creativity of cybercriminals combines with AI.
Cybersecurity tools that leverage AI learn about threats faster, produce fewer false positives, and offer superior pattern recognition, helping data analysts, information security specialists, and administrators detect the keywords, phrases, grammatical styles, and suspicious links unique to phishing (a simplified detection sketch follows the list below). With AI, cybersecurity teams can:
- Intelligently explore wide sets of data for signals, across all of an organization’s channels: email, customer service text threads, social media feeds, and corporate voicemails.
- Assess the behaviors that led victims to fall for a ruse, analyzing language, types of requests, and other elements that typify attacks.
- Detect and display communities and commonalities that exist within the data, like third-party contractors who are most at risk of scams, or trending phishing messages.
- Move quickly and seamlessly from finding insights to making recommendations, like how to target employee education to best defend against sophisticated scams.
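To make that concrete, here is a minimal sketch of a text-based phishing classifier along the lines described above. It uses scikit-learn, and the handful of sample messages, labels, and the quarantine threshold are invented for illustration; a production system would train on an organization’s own labeled mail and many more signals, such as link reputation and sender history.

```python
# Minimal phishing-text classifier sketch (illustrative only).
# Assumes scikit-learn is installed; the tiny training set below is invented
# purely to show the shape of the pipeline, not to reflect real attack data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = phishing, 0 = legitimate.
messages = [
    "Urgent: your account is locked, verify your password at http://secure-login.example now",
    "Invoice overdue! Click here to confirm your payment details immediately",
    "Reminder: team standup moved to 10am tomorrow in room 4B",
    "Here are the Q3 budget figures we discussed in yesterday's meeting",
]
labels = [1, 1, 0, 0]

# Word n-grams pick up on wording, urgency cues, and odd URLs.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(messages, labels)

incoming = "Your mailbox quota is full - confirm your credentials at http://mail-fix.example"
score = model.predict_proba([incoming])[0][1]
print(f"Phishing probability: {score:.2f}")
if score > 0.5:  # Threshold chosen arbitrarily for this sketch.
    print("Route to quarantine and alert the security team for review.")
```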
AI-guided forensic investigations and repairs after data breaches
AI is making the theft of sensitive data faster and easier. The list of vulnerabilities bad actors are probing is long: outdated authentication methods, unpatched software, unwitting phishing targets, lax third-party contractors, misconfigured cloud services, data sent unencrypted, insecurely coded web applications, and intercepted communications. With the average price tag of a data breach running $3.86 million, efficient countermeasures that keep up are a must.
The most time-consuming recovery tasks after these attacks are cleaning and repairing infected systems and getting to the bottom of what went wrong; the good news is that AI is useful in both efforts. For example, machine learning can analyze large amounts of data to identify existing and potential threats, such as malicious files or suspicious network activity. This reduces the time and effort security analysts must spend manually reviewing logs and alerts and points teams to specific countermeasures. AI can also identify instances when sensitive data was transmitted without encryption and suggest encryption protocols to safeguard data in transit.
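As one illustration of the log-analysis piece, here is a minimal sketch that uses an unsupervised model (scikit-learn’s IsolationForest) to flag unusual network flows. The flow features and example values are invented; a real investigation would draw on much richer telemetry from firewalls, proxies, and endpoint agents.

```python
# Sketch: flagging anomalous network flows with an unsupervised model.
# The feature values below are invented for illustration; a real pipeline
# would extract features from firewall, proxy, or NetFlow logs.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent_kb, session_duration_s, distinct_dest_ports]
normal_traffic = np.random.default_rng(0).normal(
    loc=[200, 30, 3], scale=[50, 10, 1], size=(500, 3)
)
suspicious = np.array([
    [9000, 4, 1],     # huge outbound transfer in a very short session (possible exfiltration)
    [150, 600, 120],  # long session touching many ports (possible scanning)
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

for row, verdict in zip(suspicious, model.predict(suspicious)):
    status = "ANOMALY - escalate to analyst" if verdict == -1 else "looks normal"
    print(row, status)
```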
Countering zero-day attacks with AI
Fraudsters are using large language models (LLMs) to find flaws in code and craft zero-day attacks. In one recent example, a bad actor prompted ChatGPT to “act as if this was a zero-day flaw” and pointed it at code vulnerable to the SigRed DNS flaw. AI can reduce the impact of such attacks and minimize the damage by automating responses such as isolating infected systems, blocking malicious traffic, and patching vulnerabilities.
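Below is a minimal, rule-based sketch of what that kind of automated containment might look like. The alert fields, severity thresholds, and action functions are hypothetical placeholders; in a real deployment they would wrap calls to an organization’s EDR, firewall, and patch-management tooling.

```python
# Sketch of an automated incident-response playbook for zero-day style alerts.
# The alert structure and the action functions are hypothetical placeholders;
# in practice they would wrap calls to EDR, firewall, and patching tooling.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    source_ip: str
    severity: float  # 0.0 - 1.0, e.g. from a detection model

def isolate_host(host: str) -> None:
    print(f"[action] isolating {host} from the network (placeholder)")

def block_ip(ip: str) -> None:
    print(f"[action] adding firewall rule to drop traffic from {ip} (placeholder)")

def open_patch_ticket(host: str) -> None:
    print(f"[action] opening emergency patch ticket for {host} (placeholder)")

def respond(alert: Alert) -> None:
    """Map alert severity to containment steps; thresholds are illustrative."""
    if alert.severity >= 0.9:
        isolate_host(alert.host)
        block_ip(alert.source_ip)
        open_patch_ticket(alert.host)
    elif alert.severity >= 0.6:
        block_ip(alert.source_ip)
        open_patch_ticket(alert.host)
    else:
        print(f"[action] logging low-severity alert for {alert.host} for analyst review")

respond(Alert(host="dns-01.internal", source_ip="203.0.113.7", severity=0.95))
```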
Gartner predicts that by 2025, 45% of organizations worldwide will have experienced attacks on their software supply chains, a three-fold increase from 2021. But AI can also potentially detect software vulnerabilities ahead of zero-day attacks, protecting everything from sensitive health data to digital supply chains, a growing target.
How AI helps prevent ransomware incursions
Though some generative AI systems have built-in protections to reduce risk, cybercriminals are using LLMs to write ransomware code, circumventing the prohibition by breaking the task into discrete parts. AI can help prevent these sorts of attacks by identifying and patching software vulnerabilities, pinpointing the security training employees need, and enforcing security policies, all of which makes it more difficult for attackers to gain access to systems and networks. Organizations that are victimized can also use AI to develop tools that help identify and remove malicious code or inputs.
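One behavioral signal defenses can watch for is a burst of file writes whose contents look encrypted, measured as high byte entropy. Here is a minimal sketch of that heuristic; the sample data and thresholds are invented for illustration, and real ransomware detection combines many more signals.

```python
# Sketch: flagging ransomware-like activity by spotting bursts of
# high-entropy (encrypted-looking) file writes. Sample data and the
# thresholds are invented for illustration.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; close to 8.0 means the content looks encrypted or compressed."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_ransomware(recent_writes: list[bytes],
                          entropy_threshold: float = 7.5,
                          burst_threshold: int = 3) -> bool:
    """Flag when several recent writes in a row are near-random bytes."""
    high_entropy = [w for w in recent_writes if shannon_entropy(w) >= entropy_threshold]
    return len(high_entropy) >= burst_threshold

# Simulated write buffers: plain text vs. random (encrypted-looking) bytes.
plain = [b"Quarterly report draft, section " + bytes([65 + i]) * 40 for i in range(3)]
encrypted_like = [os.urandom(4096) for _ in range(5)]

print("Normal activity flagged:", looks_like_ransomware(plain))            # expected False
print("Suspicious burst flagged:", looks_like_ransomware(encrypted_like))  # expected True
```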
Denying denials of service
One recent, epic distributed denial of service (DDoS) campaign racked up 8,396 hours of attack traffic, including one sustained 87-hour attack specifically timed to coincide with the day of the 2022 World Cup Final. We can expect more such “carpet bomb” attacks as cybercriminals enlist AI-powered bots to generate massive amounts of traffic, making websites or servers inaccessible. Even the most secure websites can be vulnerable.
On the defensive side, AI can analyze incoming traffic, classify requests as safe or unsafe based on hundreds of different properties, and block those that match known attack patterns. AI can even fool attackers into thinking their mission has been accomplished when it actually has not, disrupting the attack further. And AI can learn from each incident, potentially recognizing future attacks even faster.
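Here is a minimal sketch of that property-based scoring idea. The handful of properties, weights, and the block threshold are invented for illustration; production DDoS mitigation evaluates hundreds of signals and runs at the network edge rather than in application code.

```python
# Sketch: scoring incoming requests on a few properties and blocking the
# ones that resemble known attack patterns. Properties, weights, and the
# block threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Request:
    client_ip: str
    requests_last_minute: int
    user_agent: str
    path: str

def score(req: Request) -> float:
    """Higher score = more likely to be attack traffic."""
    s = 0.0
    if req.requests_last_minute > 300:                    # abnormal request rate from one client
        s += 0.5
    if not req.user_agent:                                # bots often omit or fake the user agent
        s += 0.3
    if req.path.count("/") > 10 or len(req.path) > 200:   # junk paths typical of floods
        s += 0.2
    return s

BLOCK_THRESHOLD = 0.6

traffic = [
    Request("198.51.100.9", 1200, "", "/" + "a/" * 40),
    Request("192.0.2.14", 12, "Mozilla/5.0", "/products/42"),
]

for req in traffic:
    verdict = "BLOCK" if score(req) >= BLOCK_THRESHOLD else "allow"
    print(req.client_ip, f"score={score(req):.2f}", verdict)
```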
Cybersecurity and AI: the ultimate partners in fighting crime
The cost of cybercrime is expected to exceed $11 trillion this year and hit $20 trillion by 2026, a 150 percent jump from 2022. Cybercriminals will only get faster and better as they use AI to continuously analyze data, craft more convincing bait, get smarter about timing, and evade detection.
To effectively combat their endlessly creative, constantly evolving schemes, organizations can use AI for good. They can seek out and detect previously unseen patterns by intelligently exploring their data. They can find commonalities among billions of data points, improve preventive measures, and speed up responses when an attack does succeed. And they can extrapolate from yesterday’s attacks to determine what tomorrow’s will look like.