16 February 2024

How Generative AI is Changing the Cybersecurity Landscape Part 1

By Daniel Karp and Dan Pilewski

What impact will generative AI (gen AI) have on the cybersecurity landscape? This is a big question for founders and investors, not only because it’s on everyone’s mind, but because it’s a complex issue you’ll need to navigate successfully.


Like other new technologies that have reshaped the way business is done, gen AI presents both challenges and opportunities. Consider the recent kerfuffle over OpenAI and its leadership, a clear indicator of the tension between fear and possibility. And in order to plot a path forward, we need to keep in view where we have been.


In this blog post, we’re going to attempt to break it down for you. We’ll tell you how gen AI is accelerating the evolution of threats, how traditional systems will need to evolve to face those threats, and how emerging systems are already transforming the ecosystem. In short, we’ll explain why you might be concerned and what you can do about it.


Generative AI and cybersecurity: How nervous should you be?


The onset of generative AI was less a sudden change than an evolution. In fact, AI has been around in the security space for a while. The coming-of-age moment of generative AI presented new ways to harness the technology, offering a beefed-up version of what we have already seen: the technology is far more automated and intelligent. These characteristics create new risks and opportunities, which in turn call for a transformation of our toolsets.


We’ve seen this pattern before. For example, with the movement to the cloud, organizations became more distributed. Whereas a traditional security solution might look like a firewall, which protects the organization by essentially setting up a perimeter around it, the move to the cloud made that perimeter fuzzier and more nuanced. Now, organizations have to think about all the various distributed assets they have so that they can protect them. In other words, the move to the cloud created new vulnerabilities and expanded the attack surface. For years, security vendors said that the cloud was not secure enough; but the operational model and its value persisted, and so the security industry had to adjust to the legitimacy of the cloud as an integral part of the organization it needs to protect.


To a certain extent those claims amounted to fear-mongering. Indeed, there’s always a level of alarmism in every platform transition, and we can see it again in the transition to LLMs (large language models) and gen AI. The resulting nervousness is evident within the enterprise, where many LLM deployments remain in trial stages rather than production-grade use.


However, there is also a degree of truth to this fear. Generally speaking, the more a new platform is in use, the more it needs security measures and capabilities. Right now, a broad platform shift is underway toward embedding gen AI and LLMs into software. And whether the model is embedded directly or consumed as a capability inside an application, that environment needs protection.


With the meteoric rise in popularity of LLMs, there is an urgent need to put adequate security protections in place. And current security vendors, whether they’re incumbents or new companies, have to think about how they will protect this new environment.


The impact of gen AI on the cybersecurity landscape can be divided into two main categories. The first is its effect on the current cybersecurity environment; the second is the new types of vulnerabilities that emerge with the use of this technology. In this blog post, we focus on the first category, which encompasses both threats and opportunities. Known threats, like phishing, may become significantly harder to counter, potentially making existing solutions obsolete and creating the need for a new technological approach. Conversely, areas where effective solutions have lagged, such as vulnerability patching and management, might see material enhancements thanks to gen AI.


Enhanced cyber threats and methods of attack

There are several attack methods and vectors where threat actors can benefit from gen AI to cause material disruptions to targets.

Phishing

Ever get an email from a purported Nigerian prince promising you a long-lost inheritance? This was one of the first iterations of phishing-based attacks. Since those early versions, phishing has become increasingly sophisticated (and lucrative for hackers). Bad actors can create a lot of havoc through phishing by prompting people to give away their information, whether it’s their identity or their bank account. There are also sophisticated ways to weaponize email in order to harvest credentials and infiltrate software a victim has access to as an employee of a company or as an individual.

One reason traditional phishing attacks are so popular is that phishing is a numbers game. Attackers can send email blasts to hundreds of thousands of people or more, trying multiple variations and tweaks. While most people won’t fall for the scam, some will click through. And that yield is what attackers are after, because it generates economic value. Moreover, the more sophisticated attacks use social engineering and harvest contextual details to better obfuscate their intent (for example, attackers can make emails appear to come from a connection the victim trusts, with context related to their line of work or role).
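
To see why the economics work, here is a back-of-the-envelope sketch. Every figure in it is an illustrative assumption, not a measured rate:

```python
# Illustrative phishing economics; every figure below is an assumption.
emails_sent = 500_000       # size of one email blast (assumed)
click_rate = 0.001          # fraction of recipients who click (assumed)
conversion_rate = 0.05      # fraction of clickers who give up credentials (assumed)
value_per_victim = 400      # average dollars extracted per victim (assumed)
cost_per_email = 0.00001    # near-zero marginal cost of sending (assumed)

victims = emails_sent * click_rate * conversion_rate
profit = victims * value_per_victim - emails_sent * cost_per_email
print(f"victims: {victims:.0f}, expected profit: ${profit:,.0f}")
# Even at a 0.1% click rate, one blast nets roughly $10,000;
# automation that multiplies the volume multiplies the yield.
```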

Add gen AI to the equation and there are a few ways to make this scam more advanced. First, gen AI allows attackers to automate email blasts, so that instead of sending thousands of emails it becomes possible to send billions. Second, through learning, it becomes possible to make the emails vastly more believable: they can be fine-tuned to sound more relatable and human. In addition, gen AI is quickly advancing in the more sophisticated realms of voice, image, and video simulation (deepfake technology).

In this way it will become more difficult for recipients to distinguish whether the voicemail or message they have received is truly from their friend, family, or acquaintance. In a universe where it’s nearly impossible to discern what’s real and what’s not, how do you protect yourself? The market is ripe for a solution that goes beyond current anti-phishing software. 

Phishing is all about credibility. It’s rare for someone to follow a link from an email claiming they’ve been chosen to inherit a vast fortune from a mysterious Nigerian prince. However, many people would engage with a reputable businessperson who appears to have a legitimate LinkedIn profile and a professional company website, especially if this person is willing to have live Zoom or phone conversations. The risk increases significantly if someone believes they know the person on the other side of the line, for example their boss.

The use of deepfake technology, combined with the ability to quickly generate convincing websites through no-code generative AI tools, poses a significant threat to organizations. For instance, just last week, a scammer used deepfake technology to impersonate the CFO of a leading company and deceive a finance employee into transferring $25 million during a fake Zoom meeting. Traditional email-protection solutions fail to adequately address this level of threat.

This situation presents two key opportunities for addressing the problem: in the short term, deepfake detection technologies can provide a temporary fix, so long as the underlying deepfake models have flaws. For a longer-term solution, it is essential to develop watermarking technologies that verify media was produced by a certified device.
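
As a toy illustration of that watermarking idea, the sketch below tags media bytes at capture time and verifies the tag on receipt. It assumes a shared secret for brevity; real provenance schemes (such as C2PA-style content credentials) use per-device certificates and public-key signatures instead:

```python
import hashlib
import hmac

# Assumed shared secret held by a certified capture device. Real provenance
# systems would use per-device asymmetric keys and a certificate chain.
DEVICE_KEY = b"certified-device-secret"

def sign_media(media_bytes: bytes) -> str:
    """Issue a provenance tag for media captured on a trusted device."""
    return hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check that the media still matches the tag issued at capture time."""
    expected = hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"raw video frames..."
tag = sign_media(original)
print(verify_media(original, tag))                 # True: untampered
print(verify_media(b"deepfaked frames...", tag))   # False: content changed
```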

Viruses and malware

Another common form of attack aims at the endpoint itself, whether that’s a smartphone, a desktop, a laptop, or a server. An endpoint is contaminated when malware prompts the device to install malicious code. In order to remain undetected by protective systems on the endpoint, hackers evolve known malicious code through various renditions, to the point that it is no longer recognized as malicious by defensive cyber software. Traditionally, this process takes time, and many attackers choose simply to go to the dark web, where they can purchase malware developed by others, to contaminate a site, a company, or an individual.

Endpoint protection software works by detecting anomalies in the code it inspects. If it determines that the code is harmless, it creates a static signature confirming that the code has been checked and is safe.

However, with gen AI, malware can be trained to morph so quickly that it remains undetected by the underlying endpoint protection platforms. While this “polymorphic malware” essentially contains the same DNA as traditional malware, it can generate mutations at a much greater velocity and magnitude. As a result, a static signature that is valid at one moment may no longer be valid ten minutes later. And that means that with the development of polymorphic malware, the static signature detection used in traditional antivirus software will become obsolete.
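
To make the limitation concrete, here is a minimal sketch of static signature matching. The payloads and hashes are placeholders, and real products keep far larger signature databases, but the core weakness is the same: change one byte and the hash no longer matches:

```python
import hashlib

# Placeholder signature database: hashes of known-malicious files.
KNOWN_MALWARE_SHA256 = {
    hashlib.sha256(b"original malware payload").hexdigest(),
}

def is_known_malicious(file_bytes: bytes) -> bool:
    """Classic static check: hash the file and look it up."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MALWARE_SHA256

print(is_known_malicious(b"original malware payload"))   # True: exact match
# A polymorphic variant with identical behavior but mutated bytes:
print(is_known_malicious(b"original malware payload!"))  # False: evades the signature
```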

Even as traditional antivirus software gave way to “next-generation antivirus” and subsequently to extended detection and response (“XDR”) software, those platforms will need to leapfrog further to handle the velocity of polymorphic malware.

In addition to amplifying traditional threats, gen AI poses new challenges to other existing security systems. Here we’ll look at the supply chain, and then at attack surface management and pen testing.

Supply Chain

Imagine that you are a subscription-based customer of a software company that gets attacked. How can you make sure that as a customer you are protected? From the company perspective, how can you make sure that every component inside of your organization is secured and isn't vulnerable? 

The same thing goes for your code base, as evident in the SolarWinds and Log4j attacks. How do you make sure that the code base you’re using is protected? This could be in the form of a SaaS product you have bought, or in the form of the open source components that others are using as part of their code base. Indeed, the majority of written code is built on open source material. So how do you make sure that the open source code you’ve used as part of your software has protection? How do you make sure that you don’t have vulnerabilities?

The supply chain attack vector is top of mind right now when it comes to gen AI. For example, GitHub reports that among developers using its Copilot gen AI assistant, 46%(!) of their code is generated by Copilot. This creates something of a black box problem, because developers don’t entirely understand how their code was actualized. The code base could harbor vulnerabilities that developers would be unable to solve; in fact, developers may not even be aware of a vulnerability in the first place.

The inherent skill gap also creates a problem. Here, a layer of abstraction goes into a process that is orchestrated by machines. If 50% of your code base is auto-generated, you will have no idea how to solve emerging problems. In other words, if there’s a vulnerability within your code base, how will you know where the root of the problem is? Without being able to pinpoint the specific vulnerability within its unit, there isn’t a way to solve the issue short of replacing significant portions, or the entirety, of your code. And of course, as mentioned, you may not be able to detect the vulnerability in the first place.

On a positive note, there is an opportunity to improve supply chain security with gen AI. Today, security patching and remediation is a nagging problem for security organizations, causing endless security tickets and a never-ending marathon of updates, patches, and remediation. Using gen AI, one can further automate this process and bring it up to speed with the velocity of code fixes.

Possible solutions: The industry will have to standardize around code attribution and software bill of materials (SBOM) solutions to help assure the integrity of code. Moreover, vulnerability scanners will need to expand, adding explainability and remediation capabilities to address the remediation skill gap.
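
As a minimal sketch of the SBOM idea, the snippet below records every shipped component and version, then diffs that inventory against an advisory feed. The component list and advisory data here are hypothetical placeholders:

```python
# Hypothetical software bill of materials: (component, version) pairs you ship.
sbom = [
    ("log4j-core", "2.14.1"),
    ("openssl", "3.0.7"),
    ("left-pad", "1.3.0"),
]

# Hypothetical advisory feed mapping components to known-vulnerable versions.
advisories = {
    "log4j-core": {"2.14.1", "2.15.0"},
}

def audit(sbom, advisories):
    """Flag every shipped component whose version appears in an advisory."""
    return [(name, version) for name, version in sbom
            if version in advisories.get(name, set())]

for name, version in audit(sbom, advisories):
    print(f"VULNERABLE: {name} {version} -- open a remediation ticket")
```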

Attack Surface and Pen Testing

We’ve talked about gen AI’s impact on endpoint protection, email phishing attacks, and contamination. Attack surface management and pen testing are ways to make sure that there are no open attack vectors or attack surfaces like those above within a given organization.

By testing the perimeter of an organization’s digital footprint to see whether its virtual walls can be infiltrated, organizations can make sure the digital assets within the company are secured. This simulation of how an attacker might attempt to access an organization’s digital assets is called a pen test, or penetration testing. In a broader sense, an organization orchestrating attacks on its own footprint is sometimes labeled external attack surface management, or red-teaming (analogous to red teams in a capture-the-flag simulation).

Gen AI increases the threat because agents can be programmed to penetrate an organization more quickly and easily. Consider this: typically, an external attacker will attempt to find vulnerabilities by way of the easiest target. Then they’ll use trial and error to slowly make their way further into the organization until they reach a point that’s valuable. It’s not a “one-and-done” form of attack, but rather a multi-step process.

With gen AI, the whole process can be automated and expanded. Instead of trying one method at a time, gen AI can pursue multiple methods at once to find a vulnerability. And once it infiltrates the main system, it can continue to dig into the organization at an accelerated pace, automating each step as it heads toward critical assets and doing far more damage, far faster, than is possible today.
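
To illustrate the kind of step that gets automated, here is a minimal sketch of the reconnaissance phase, the first link in that multi-step chain. The target hostname is a placeholder, and this should only ever be run against assets you are authorized to test; a gen AI-driven agent would chain many such probes and decide its next move from each result:

```python
import socket

# Placeholder hostname: only probe assets you own or are authorized to test.
TARGET = "scanme.example.com"
COMMON_PORTS = [22, 80, 443, 3389, 5432, 8080]

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

open_ports = [port for port in COMMON_PORTS if probe(TARGET, port)]
print(f"exposed services on {TARGET}: {open_ports}")
# A human red-teamer reads this output and picks the next move;
# an automated agent can branch on it immediately, at machine speed.
```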


In conclusion, we believe that gen AI technology has the potential to disrupt most parts of the existing cybersecurity market. The level and number of threats, such as phishing and malware, are going to rise dramatically, raising the need for more sophisticated defense technologies. Conversely, remediation solutions are also likely to leverage gen AI to improve dramatically, resulting in higher walls of defense and fewer mistakes.

So far, we’ve examined how gen AI impacts the traditional cybersecurity segments. In the second part of this blog, we’ll discuss new attack surfaces and the broader impact LLM usage introduces to the security, trust, and privacy community.


Thank you to Dan Pilewski whose research project as an intern at Cervin provided the foundation for this blog.