theNet by CLOUDFLARE

Preparing for the future of AI in cyber security

AI is on everyone’s mind today. Large language models (LLMs), like those that power OpenAI’s ChatGPT, have sparked our imagination: We are seeing new possibilities for using generative AI to build innovative solutions.

As security professionals, we’re thinking about how cyber criminals are using AI to create new types of attacks and strengthen their existing methods. Meanwhile, we’re contemplating defensive AI and exploring ways to use AI to better detect and defend against threats.

Still, too many organizations are focusing only on what AI can do today or in the short term. Too few are looking ahead to the tremendous impact AI will likely have on our businesses and our lives in 5, 10, or 20 years. We need to envision that future, though, because to adequately prepare for it, we have to start planning now.


Understanding today and envisioning tomorrow

AI is still a somewhat immature technology. While machine learning (ML) is more mature, the application of ML is still in its early stages. Organizations today are mostly collecting data and doing basic analytics. Some are experimenting with LLMs to generate text or using AI-infused design tools to create unique illustrations. But true AI inference is not yet available: Models cannot yet draw conclusions or generate meaningful insights from live data, even if some vendors tell you otherwise.

That lack of inference is clear when we attempt to use AI and ML for anything more than generating text or images. When I was working on my master’s degree in data science and ML, we had a project that tried to predict which baseball players would be inducted into the Hall of Fame. The model we created found that the top predictor was the number of at-bats — how many times the player batted over his career. The longer the career, and the more at-bats, the greater the likelihood he would make it into the Hall of Fame.

It was an interesting result, but the model couldn’t produce any insights from it. The model couldn’t tell us why at-bats might be the best predictor. We humans realized, though, that to have a long career, with lots of at-bats, you need to be an excellent player. And when you have a lot of at-bats, you are more likely to have a decent number of hits and home runs.
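A toy version of that exercise can be sketched as follows. The data here is entirely synthetic and the setup is illustrative, not the actual project: the point is just that a model can rank predictors without explaining why the top one matters.

```python
# Toy reconstruction of the Hall of Fame exercise with synthetic data.
# The stat names are real baseball stats, but every number is invented:
# this illustrates ranking candidate predictors, not the original model.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
at_bats = rng.uniform(100, 12000, n)              # career at-bats (career-length proxy)
hits = at_bats * rng.uniform(0.15, 0.35, n)       # hits scale with at-bats
home_runs = at_bats * rng.uniform(0.0, 0.08, n)   # so do home runs

# In this toy setup, induction is driven by career length plus noise.
inducted = (at_bats + rng.normal(0, 1500, n) > 9000).astype(float)

# Rank each stat by its correlation with induction.
features = {"at_bats": at_bats, "hits": hits, "home_runs": home_runs}
ranking = {
    name: float(np.corrcoef(values, inducted)[0, 1])
    for name, values in features.items()
}
for name, corr in sorted(ranking.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {corr:.2f}")
```

The model surfaces the correlation, but the "why" — that long careers imply excellent players — still has to come from a human.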

When models start to understand meaning and gain the ability to make inferences, then we’ll see more dramatic effects from AI.

Given the speed of change in AI, inference will be here before we know it. Just think about how far LLMs have progressed in the past few years. They have gone from providing basic text completion to supporting robust text-based conversations that allow for long prompts, accept visual input, and retain context between sessions. Meanwhile, continual increases in computing power are enabling models to rapidly ingest and learn from more and more data, which in turn helps those models produce better, more accurate results.

In 20 years, AI and other advanced technologies will likely reshape our world considerably. For example, we might see fewer people owning cars, because we’ll be able to summon fully autonomous cars on demand. Grocery shopping, house cleaning, lawn mowing, and other daily tasks will likely be streamlined by technology.

And yes, it’s possible that in the future AI will change many of our current jobs. In the tech industry, we might not have to write code anymore, for example. But we’ll still need people to understand how to design AI-based solutions that meet business needs, how to manage this technology, and how to secure it.


Taking the first steps toward a long-term strategy

How do you start developing a strategy today that will help your organization maximize the value of AI in the future?

  1. Understand the technology: The first step in building a long-term AI strategy is to understand the technology enough so that you can use it. You don’t have to become an expert in data science or ML. But as a security leader, you should understand how attackers are using AI and how your team can apply AI to anticipate and defend against those attacks.

    You should also start thinking about how your organization can aggregate data sets, which you will need to do for AI and ML to work. You’ll then need to secure those aggregated data sets.

  2. Plan to support business use cases: Today, some organizations are so eager to adopt AI that they are immediately focused on choosing the best ML model. Then they try to find a business problem to solve. But you should identify the business problems you need to address first. It’s vitally important to lead with the business problem instead of the technology.

What types of business problems might be good candidates for AI? Many repetitive tasks — such as summarizing meetings or responding to initial customer service requests — are already handled by AI.

In the future, organizations could use AI and ML in situations where the models are learning more from experiences and external inputs. So, for example, in the banking industry, fraud teams could employ AI to improve the accuracy of fraud detection by learning from actual outcomes. In addition, marketing teams across industries could use AI to streamline content creation: They could input writing preferences and styles, and they could provide feedback that the models would use to improve subsequent results.
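The fraud-detection idea — a model that improves by learning from confirmed outcomes — can be sketched as a simple online-learning loop. Everything below is a hypothetical illustration: the feature names, thresholds, and training data are invented, and a real fraud system would use far richer features and proper ML tooling.

```python
# Minimal sketch of a fraud score that improves as confirmed outcomes
# arrive. Feature names and data are hypothetical.
import math

weights = {"amount_zscore": 0.0, "new_device": 0.0, "foreign_ip": 0.0}
bias = 0.0
LEARNING_RATE = 0.1

def fraud_score(txn):
    """Probability-like score in [0, 1] for a transaction dict."""
    z = bias + sum(weights[k] * txn[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

def learn_from_outcome(txn, was_fraud):
    """One online gradient step once the true outcome is confirmed."""
    global bias
    error = fraud_score(txn) - (1.0 if was_fraud else 0.0)
    bias -= LEARNING_RATE * error
    for k in weights:
        weights[k] -= LEARNING_RATE * error * txn[k]

# Feed back confirmed outcomes: fraudulent transactions in this toy
# stream share an obvious pattern, so the score learns to separate them.
for _ in range(200):
    learn_from_outcome({"amount_zscore": 2.5, "new_device": 1, "foreign_ip": 1}, True)
    learn_from_outcome({"amount_zscore": -0.2, "new_device": 0, "foreign_ip": 0}, False)

risky = {"amount_zscore": 2.5, "new_device": 1, "foreign_ip": 1}
routine = {"amount_zscore": -0.2, "new_device": 0, "foreign_ip": 0}
print(fraud_score(risky), fraud_score(routine))
```

The same feedback pattern applies to the marketing example: user corrections become labeled outcomes that the model folds into its next round of results.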

As security leaders, we need to start planning for these and other business use cases. We need to find ways to help business teams incorporate AI into their workflows to solve business problems without jeopardizing compliance and security.


Modernizing security

To prepare for the AI future, we need to modernize security — and we need to start that modernization now. Many security teams are still operating the way they have for the past 15 or 20 years. They are reacting to the latest threats and implementing multiple point solutions to address vulnerabilities. But this type of approach led to where we are now: More and more breaches occur every day. We need to be less enamored with the next great point product and more focused on the mission of protecting our customers, workers, and companies.

As security teams lag behind, attackers are moving forward quickly. In the not-too-distant future, we’ll see more attempts to bypass AI models used for cyber security by exploiting data that is not factored into those models. We’ll also see AI-driven malware that can learn from defensive methods and rapidly modify its attack.

To modernize security, we have to rethink what tools we need, what skills we need, and how we should work. Envisioning where we’ll be in 5, 10, or 20 years will help us figure out what we need to do today. Here are 4 recommendations on how to get started:

  1. Implement the right technologies and tools: Modernizing security requires advanced tools. Organizations need technology that can automatically anticipate and detect new types of AI-driven threats. Today, if we see a new IP address trying to VPN into Cloudflare, our SOC team might have to write a static rule to prevent access. But in that case, we are solving a specific problem instead of addressing a larger issue.

    In the future, we’ll likely be using built-in AI and ML capabilities that can automatically pick up and mitigate those types of threats.

    Tools with AI capabilities could also help us streamline compliance. We might have tools that could recommend changes in controls or policies whenever new regulations are introduced.

  2. Address the skills gap: Because AI will take over some low-level tasks, IT and security teams will need people with more advanced skill sets. IT teams will need people with expertise in data science, ML, and neural networks.

    At the same time, IT and business teams will need domain expertise: If you want to use AI to improve medical decision making, for example, you’ll need to collaborate with medical professionals. Data science should be the intersection of computer science, math and statistics, and domain knowledge.

    Security teams need people who can interpret the results generated by AI models and determine how strategies or policies should change to address evolving threats. One of the most common mistakes I have seen is listening to the output of the model without understanding the meaning. Remember that baseball Hall of Fame example: If we only listen to the output (which found that at-bats are the best predictor of who gets in), young players might assume that they should just focus on having the most at-bats. In fact, they should work on developing their skills so they can have a long and outstanding career.

  3. Modify operations: Security teams also need to change how they operate. Will we still need a 250-person, globally dispersed SOC, operating 24/7 in the future? We might still need round-the-clock operations, but computers will likely handle many of the tasks humans perform today, freeing people to focus on other work.

    We could have an autonomous SOC in 5 to 10 years — which means that we should start formulating a plan now for how it will operate. We need that lead time because of the complexity of our security environments. It’s difficult enough to secure those environments — incorporating AI will be challenging.

    If we’re able to save some time with an autonomous SOC, we’ll also need to invest more time in maintaining data integrity and sustaining regulatory compliance. The more we use AI models internally, the more data we’ll need to manage and control. We need to know where data is, make sure it is accurate, and ensure we can safeguard confidentiality.

    There has been a lot of discussion about AI governance, but the most important factor is data. There are already numerous global laws about how and where we can use and store data. Security teams will have to ensure that their organizations continue to comply with data sovereignty laws as well as data privacy regulations. Compliance might become more difficult if IT teams shift where they are running AI models, for example, moving data from the cloud back to on-premises environments.

  4. Begin to consolidate technology partners: It can be difficult to aggregate data and generate the AI-driven insights you need when you use solutions from numerous vendors. Our IT and security environments are complex enough without having to manage all of those distinct solutions. As your organization looks toward the future, you should start to consolidate solutions. Working with four or five key partners is much simpler than trying to manage data from 50.
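The static-rule problem described in recommendation 1 can be sketched in a few lines. The IP addresses and thresholds below are invented for illustration: the contrast is between a one-off rule that solves a specific problem and an adaptive check that catches new offenders without a new rule.

```python
# Sketch contrasting a static SOC rule with an adaptive check.
# All IP addresses (documentation ranges) and thresholds are invented.
from collections import defaultdict

# Static approach: the SOC writes a one-off rule for a specific IP.
BLOCKED_IPS = {"203.0.113.7"}

def static_check(ip: str) -> bool:
    """Blocks only IPs someone has already written a rule for."""
    return ip in BLOCKED_IPS

# Adaptive approach: flag IPs whose attempt volume exceeds a learned
# baseline, so previously unseen offenders are caught automatically.
attempts = defaultdict(int)
BASELINE_MAX = 5   # assumed typical attempts per window

def adaptive_check(ip: str) -> bool:
    """Records an attempt and flags the IP once it exceeds the baseline."""
    attempts[ip] += 1
    return attempts[ip] > BASELINE_MAX
```

A new IP sails past the static rule forever, but trips the adaptive check as soon as its behavior deviates from the baseline — the kind of built-in mitigation the tooling recommendation anticipates.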


Looking ahead — and planning in the present

AI is here, but we’ve only seen the first glimpses of its impact on our work and our lives. As security leaders, we have to start planning now for the changes to come. With the right longer-term strategy, we will be better positioned to support internal use of AI for business objectives, employ AI to bolster security, and defend against AI-driven threats.

Learn how Cloudflare can help your organization start building and deploying AI applications on Cloudflare’s global network today. And discover how you can modernize security for the AI future with Cloudflare’s connectivity cloud.

This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.


Author

Grant Bourzikas — @grantbourzikas
Chief Security Officer, Cloudflare



Key takeaways

After reading this article, you will be able to understand:

  • The importance of planning now to secure the future of AI

  • 2 first steps in developing a long-term strategy

  • 4 recommendations for how to modernize security


