As artificial intelligence (AI) advances, organizations and governments are scrambling to find its best applications. While ChatGPT and other large language models (LLMs) have captivated the media's attention, the potential uses for AI are far broader than text generation. One such area is security: especially the repetitive, large-scale task of identifying software vulnerabilities.
But whether AI leads to better or worse security depends on who or what is doing the vulnerability identification — and for what purpose.
Some flaws in software are essentially benign. But some flaws, known as vulnerabilities, can give an attacker who exploits them a foothold within the system, leading to compromise. A significant chunk of cyber security practice is devoted to identifying and patching these vulnerabilities.
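To make this concrete, consider a classic injection flaw. The snippet below is a simplified, hypothetical example (not drawn from any real incident): it builds a database query by pasting user input directly into SQL, which lets an attacker change the meaning of the query.

```python
import sqlite3

def find_user(db: sqlite3.Connection, username: str):
    # VULNERABLE: user input is concatenated directly into the SQL string.
    # Input such as  ' OR '1'='1  alters the query's logic and returns every row.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return db.execute(query).fetchall()
```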
Exploited vulnerabilities leading to compromise are too numerous to list, but some high-profile examples include:
The 2017 Equifax breach, which started with an unpatched vulnerability
The 2022 LastPass breach, which was partially caused by a vulnerability in third-party software
The 2023 hack of the Norwegian government's IT systems, which exploited a zero-day vulnerability
The consequences of vulnerability exploits can be disastrous, from data leaks to ransomware infections that freeze up an organization's systems. Organizations need to identify and patch vulnerabilities as rapidly as possible to avoid such occurrences.
Analyzing complex software programs in search of mistakes is a repetitive task that would seem to be a good fit for automation. Noted technologist Bruce Schneier has observed: "Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like."
And indeed, machine learning (a subset of AI capabilities) has long been used for finding potential vulnerabilities in code. GitHub, for example, includes machine learning in its code scanning feature, which identifies security vulnerabilities in code. Naturally, this approach sometimes results in false positives, but when paired with manual analysis, a well-trained machine learning model can accelerate vulnerability identification.
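As a rough illustration of the concept (not a depiction of how GitHub's scanner actually works), the toy sketch below trains a classifier on a handful of made-up labeled snippets using scikit-learn. A real system would learn from vast labeled datasets and use far richer features than raw token counts.

```python
# Toy sketch of ML-based vulnerability detection; assumes scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled training data: 1 = risky pattern, 0 = safe.
snippets = [
    "query = \"SELECT * FROM users WHERE name = '\" + name + \"'\"",  # string-built SQL
    "db.execute(\"SELECT * FROM users WHERE name = ?\", (name,))",    # parameterized query
    "os.system(\"ping \" + host)",                                    # shell injection risk
    "subprocess.run([\"ping\", host])",                               # argument list, safer
]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer(token_pattern=r"[A-Za-z_]+|\S")
X = vectorizer.fit_transform(snippets)
model = LogisticRegression().fit(X, labels)

# Score a new snippet; a human still has to review whatever gets flagged.
candidate = "query = \"DELETE FROM logs WHERE id = '\" + log_id + \"'\""
score = model.predict_proba(vectorizer.transform([candidate]))[0][1]
print(f"estimated vulnerability probability: {score:.2f}")
```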
As artificial intelligence advances by leaps and bounds, the possibility arises of training this technology to find vulnerabilities even more effectively. In fact, in 2023 the US agency DARPA announced a program called Intelligent Generation of Tools for Security (INGOTS). (DARPA, notably, was the agency that created ARPANET, the precursor to the Internet.)
The program "aims to identify and fix high-severity, chainable vulnerabilities before attackers can exploit them" by using "new techniques driven by program analysis and artificial intelligence to measure vulnerabilities." INGOTS looks for vulnerabilities in "modern, complex systems, such as web browsers and mobile operating systems."
But is AI actually good at finding vulnerabilities? DARPA aims to find out, but their program is still somewhat exploratory.
Back in 2016, DARPA hosted the "Cyber Grand Challenge," in which seven teams of engineers created autonomous AI hacking programs, then faced off against each other in a digital game of "Capture the Flag." The idea was to see how well an automated program could hack a secure system. After several hours, the program "Mayhem," designed by a team from Carnegie Mellon, won the competition.
The DEF CON 2016 conference was being hosted nearby, and "Mayhem" was invited to participate in DEF CON's own Capture the Flag game against human hackers. Mayhem came in last place, and it wasn't close.
AI has advanced a great deal since then, and researchers continue to release machine learning models for vulnerability discovery. But the findings of even the latest models still require human review to weed out false positives and catch false negatives.
There is no denying that AI can find vulnerabilities. But human penetration testing still appears to have its place. This may change in the future, as AI becomes more robust.
Patching a vulnerability involves writing code that corrects the flaw. AI tools can certainly generate code. But to do so, they require specific prompts generated by their human users.
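Returning to the hypothetical injection example above, a patch might swap string concatenation for a parameterized query, so the database driver treats user input strictly as data:

```python
import sqlite3

def find_user(db: sqlite3.Connection, username: str):
    # PATCHED: the ? placeholder makes the driver handle user input as data,
    # so it can no longer change the structure of the SQL statement.
    query = "SELECT id, username FROM users WHERE username = ?"
    return db.execute(query, (username,)).fetchall()
```

Generating a fix like this is well within reach of current AI coding tools, but someone still has to point the tool at the right flaw and verify that the change does not break the program.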
Even INGOTS does not plan to rely fully on automated processes for remediating vulnerabilities, instead aiming to "create a computer-human pipeline that seamlessly allows human intervention in order to fix high-severity vulnerabilities."
But the same caveat applies: As AI becomes more advanced, it may be able to rapidly and efficiently generate patches in the future.
It is inevitable that, if a tool or technology is widely available, one side will use it to defend systems from attacks, and the other side will use it to generate attacks.
If AI can effectively find and patch vulnerabilities in software, then attackers will certainly use it to find those vulnerabilities before they are patched, and to write exploits for them.
Not all cyber attackers have access to such resources. But those who do will likely have no qualms about selling the vulnerabilities their AIs find, or the exploits they write, to the highest bidder on the dark web. Malware authors are already incorporating AI into their tools, and they will surely continue to do so as AI improves.
The possibility looms of an escalating, AI-driven arms race between legitimate software developers and malicious attackers, in which vulnerabilities are identified and exploited almost instantaneously, or (hopefully) patched just as quickly.
Of course, attackers are already combing through code looking for undiscovered vulnerabilities — such "zero-day" vulnerabilities are extremely valuable and can either be used by the discoverer for purposes of hacking the system, or sold on underground markets for a high price. Malicious use of AI may become a game-changer, but it's the same old game.
As with patching, AI-assisted exploit writing is possible, but the process still requires human guidance. It therefore may not actually save attackers much labor; many of them buy exploit kits anyway, rather than writing their own code.
This may change five or even 10 years from now, and security teams should be preparing for a wave of fully automated vulnerability exploits targeting their systems.
All networks are vulnerable to compromise — indeed, given enough time and a determined attacker, compromise is inevitable.
Even if AI brings a new world of vulnerability discovery for the side looking to secure their systems, attackers will be using the same methods to try to find vulnerabilities first, or at least before they can be patched. AI is becoming another tool in the toolbox for attackers, just as it is for defenders.
Forward-thinking organizations start with the assumption that compromise has occurred: that their security may fail, their data is at risk, and that attackers may already be inside the network.
They assume that their external-facing security may not always work perfectly, and therefore microsegment their networks so that malicious parties cannot extend their reach beyond the one segment they have already accessed. Think of how a ship can be sealed off into separate watertight compartments to prevent a leak from spreading: ideally, security teams can use this same approach for containing attacks.
This approach is called "Zero Trust," and there are strong reasons for this philosophy's growing adoption. As AI tools enable escalating exploits, Zero Trust can help ensure that those exploits remain restricted to a small corner of the network, and that attackers never gain a big enough foothold to cause real damage.
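As a minimal sketch of the default-deny idea behind Zero Trust (hypothetical names and policy, not any vendor's actual API), every request is evaluated against an explicit allow-list, and anything not expressly permitted is blocked:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_trusted: bool
    segment: str  # the network segment or application being accessed

# Hypothetical allow-list: which users may reach which segments.
ALLOWED = {
    ("alice", "finance-app"),
    ("bob", "build-servers"),
}

def authorize(req: Request) -> bool:
    # Zero Trust: no implicit trust from being "inside" the network.
    # Deny by default; allow only explicit, verified combinations.
    if not req.device_trusted:
        return False
    return (req.user, req.segment) in ALLOWED

# Even a valid user is confined to their own segment, so a compromised
# account cannot roam across the rest of the network.
print(authorize(Request("alice", True, "finance-app")))    # True
print(authorize(Request("alice", True, "build-servers")))  # False (default deny)
```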
Vulnerability discovery and exploitation may accelerate, but Zero Trust offers the most hopeful path forward. And Cloudflare is the only vendor that consolidates Zero Trust technologies such as secure web gateways, DNS filtering, and data loss prevention (DLP) into a single platform with a unified dashboard, backed by points of presence all over the world. The distributed nature of the Cloudflare network makes it possible to enforce granular, default-deny access controls across cloud and on-premises applications with no latency added to the user experience.
In fact, Cloudflare has taken a Zero Trust approach to securing its own network and employees against attacks. Learn more about how Cloudflare equips organizations to do the same.
This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.
Learn more about Zero Trust in the whitepaper A Roadmap to Zero Trust Architecture.
After reading this article, you will be able to understand:
Why AI is well-suited for finding vulnerabilities, with some caveats
How both sides of the security fight can use automation to either exploit or patch vulnerabilities
Why assuming compromise is the safest approach