“AI’s Impact in Cybersecurity” is a weekly blog series based on interviews with a variety of experts at Palo Alto Networks and Unit 42, whose roles span AI research, product management, consulting, engineering and more. Our objective is to present different viewpoints and predictions on how artificial intelligence is impacting the current threat landscape, how Palo Alto Networks protects itself and its customers, as well as implications for the future of cybersecurity.
The rapid evolution of artificial intelligence (AI), including a new wave of generative AI capabilities, has already had a dramatic impact on cybersecurity. Hackers are using AI to script ransomware, author phishing threats and create more adaptive and evasive botnets, while AI-powered cybersecurity systems are bringing new speed and precision to threat detection and response. And there’s much more still to come.
With these critical developments at hand, we reached out to our own teams at Palo Alto Networks for candid opinions about the impact of AI in cybersecurity, both near and long term. Based on those conversations, we’ve compiled an overview of predictions and observations from the perspectives of our diverse teams. The following opinions stood out and bear attention:
When creating a new attack, cybercriminals often seek to evade defenses by chaining together a finite set of tactics, techniques and procedures (TTPs) in new combinations. AI is making this process easier for attackers, but it offers similar benefits for defenders as well.
In the past, threat detection systems could be trained effectively on existing examples of individual techniques, but new variations in the way the malware was constructed and delivered would need to be captured individually over time. “There are a lot of exciting things happening with using LLMs [large language models] to construct datasets where it was very hard to get data before. Now you can just make your own data,” says Billy Hewlett, leader of the AI research team here at Palo Alto Networks.
Matt Kraning, CTO, Cortex, explains, “With AI, we’re now able to simulate many more examples of the ways different techniques can combine, rather than hardening against only a limited set of variations. For example, from a single piece of malware that contains a novel attack vector, we can automatically simulate what that attack vector would look like paired with other known malware TTPs, generating thousands of new attack simulations that our threat detectors can be trained on. In this way we greatly improve the robustness and comprehensiveness of our training data, both improving accuracy and lowering false positives.”
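To make the combinatorial idea concrete, here is a minimal, hypothetical sketch. Everything in it is invented for illustration: the TTP labels are placeholders, and a real pipeline would mutate actual malware artifacts and telemetry rather than simple label sequences.

```python
from itertools import permutations

# Hypothetical placeholder labels, not real detections.
known_ttps = ["T1566-phishing", "T1059-powershell", "T1486-encrypt"]
novel_ttp = "TXXXX-novel-loader"  # stand-in for a newly observed technique

def simulate_variants(novel, known, chain_len=3):
    """Pair a novel technique with every ordering of known TTPs,
    producing synthetic attack chains a detector could be trained on."""
    variants = set()
    for combo in permutations(known, chain_len - 1):
        for pos in range(chain_len):
            chain = list(combo)
            chain.insert(pos, novel)
            variants.add(tuple(chain))
    return sorted(variants)

variants = simulate_variants(novel_ttp, known_ttps)
print(len(variants))  # 18 synthetic chains from one novel technique
```

Even in this toy form, one novel technique fans out into many labeled training examples, which is the asymmetry Kraning describes: the defender multiplies coverage faster than the attacker multiplies variants.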
For attackers, it will no longer be enough to create novelty through minor variations on existing themes. “For an attack to be truly unseen, they’re going to have to go back to basics,” says Kraning. “This is a great example of the way AI asymmetrically raises costs on attackers, which is ultimately the way that we win.”
Security operations can encompass distinctly different skill sets. It takes one type of expertise to deeply analyze data and determine whether a threat is present, while an entirely different type of knowledge is needed to build systems for data analysis. Kraning believes:
“Being really good at breaking things doesn’t necessarily mean you’re great at building things, or vice versa. Right now, security analysts have to be this kind of unicorn, able to understand not only how the attackers might get in but also how to set up complex automations and queries that are highly performant over high volumes of data.”
Now generative AI will make it possible to interact with data more easily. “It’s what I call natural language SecOps,” says Kraning. “People with deep domain expertise will be able to focus on analyzing a situation without worrying about a whole slew of requirements: gathering data, understanding its vagaries and biases, or becoming a SQL triple black belt or security data lake database administrator.”
Similar capabilities will simplify tasks across the security operations center (SOC). Greg Heon, senior director of product management, says, “There’s huge potential for new generative AI-powered interfaces. I see this in ChatGPT every day, where just asking a high-level question instead of clicking around a graphical user interface can often give me a better answer. That approach will replace more and more of the traditional interfaces in the web applications built to serve security operations teams.”
Data throughout the enterprise can have value for threat detection and prevention, as well as for remediating threats and improving an organization’s security posture. For example, integrating the data in a company's HR system can make it easier to ensure that a user requesting access to a Zero Trust network is actually a company employee or contractor. But first, organizations need a way to join these diverse datasets. Heon says:
“AI is quite good at what I think of as ‘fuzzy joins.’ If you have different databases where the fields don’t quite line up or there are inconsistencies in data standards, AI can help you stitch them together to provide visibility you wouldn’t otherwise have. AI will play more and more of a role in breaking down these silos to aid security decisions.”
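As an illustration of the “fuzzy join” idea, here is a minimal sketch using only the Python standard library. The record fields, names and similarity threshold are assumptions made up for the example; a production system would use learned embeddings or entity-resolution models rather than plain string similarity.

```python
from difflib import SequenceMatcher

# Hypothetical records from two systems whose key fields don't quite line up.
hr_records = [{"name": "Alice Smith", "dept": "Engineering"},
              {"name": "Bob Jones", "dept": "Finance"}]
vpn_logs = [{"user": "alice.smith", "src_ip": "10.0.0.5"},
            {"user": "b.jones", "src_ip": "10.0.0.9"}]

def normalize(s):
    """Reduce formatting noise before comparing."""
    return s.lower().replace(".", " ").replace("_", " ")

def fuzzy_join(left, right, left_key, right_key, threshold=0.6):
    """Pair records whose key fields are similar, not identical."""
    joined = []
    for l in left:
        best, best_score = None, threshold
        for r in right:
            score = SequenceMatcher(None, normalize(l[left_key]),
                                    normalize(r[right_key])).ratio()
            if score > best_score:
                best, best_score = r, score
        if best is not None:
            joined.append({**l, **best})
    return joined

for row in fuzzy_join(hr_records, vpn_logs, "name", "user"):
    print(row)
```

Here “Alice Smith” and “alice.smith” line up even though no exact equality join would connect them, which is the visibility gain Heon describes.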
After a years-long talent shortage in cybersecurity, AI may finally help enterprises scale their security operations without adding headcount. Hewlett says:
“Security can’t be done just by human experts anymore. When attackers are using polymorphism to generate millions of files, we can’t cover them anymore using signatures. We need to automate the classification of files, web pages and other things, and the only way to do it is with artificial intelligence.”
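A toy sketch can show why signature matching breaks down against polymorphism while statistical features, the raw material of ML classifiers, do not. The mutation model (random junk bytes prepended to a fixed payload) and the byte-histogram feature are deliberate simplifications invented for illustration.

```python
import hashlib
import random
from collections import Counter
from math import sqrt

random.seed(0)
payload = bytes(range(64)) * 4  # stand-in for the core malicious logic

def make_variant():
    """Toy polymorphic mutation: prepend random junk bytes."""
    junk = bytes(random.randrange(256) for _ in range(32))
    return junk + payload

variant_a, variant_b = make_variant(), make_variant()

# Exact signatures differ for every variant...
assert hashlib.sha256(variant_a).digest() != hashlib.sha256(variant_b).digest()

# ...but a simple statistical feature (the byte histogram) stays
# nearly identical, the kind of signal a trained classifier generalizes over.
def histogram(data):
    counts = Counter(data)
    n = len(data)
    return [counts.get(i, 0) / n for i in range(256)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

print(cosine(histogram(variant_a), histogram(variant_b)) > 0.8)  # True
```

A signature database would need one entry per variant, which is exactly the "millions of files" problem Hewlett describes; a feature-based classifier needs only the stable statistical fingerprint.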
Building on that point, Kraning believes:
“Even the AI systems that are already available can have a similar impact on productivity that we saw with the cloud. Cloud took a data center team that was previously 15 people and made it a DevOps team of three to five people. Some of the current AI systems, especially large language models, are increasing individual SecOps throughput by a factor of five or more.”
The impact of AI on scale will be especially dramatic in terms of security automation, which has been constrained to date by engineering requirements. “We haven’t had automated automation,” says Kraning:
“Combining automation with AI will democratize it and allow many, many more automations to take place. What might have taken one person 10 weeks, or 10 people a month, will now take one person a single week, and they’ll be able to pervasively orchestrate automation across the enterprise.”
As LLMs consume data for training, attackers may seek to open a new vector for exploitation. Yoni Allon, vice president of research, says, “We might see something like AI pollution or data pollution where attackers deliberately try to create a fake reality. Models will train on that reality and produce hallucinations or malicious content based on the attacker’s intent.”
Another possibility might be injecting exploits directly into prompts. Preventing such tactics will call for protective measures around access both to and from the model. Heon believes that protecting AI models requires a different point of view:
“In terms of protecting AI models from adversarial attacks, you can think of models as just another form of code. As with any other type of system, you need to think about OWASP [Open Worldwide Application Security Project], vulnerabilities and making sure the code doesn’t start doing anything unexpected. You don’t want to allow anyone, whether an adversary or a regular user, to have direct access to model artifacts.”
This kind of classical security applies in the other direction as well. Heon added, “At the end of the day, a model ends up looking quite similar to the kind of systems we’ve already been protecting, just a new type of content.”
A year ago, few would have predicted the tsunami of innovation triggered by the wave of generative AI. A year from now, we may well be looking back at further developments unforeseen by even the most astute observers today, and looking ahead at still greater advances. If there’s one thing we can be certain of, it’s that we’ve only just begun to see the impact of AI on cybersecurity.
Discover how this innovative approach leverages AI to enhance, not replace, your security teams.
The GigaOm Radar Report on Autonomous Security Operations Center (SOC) solutions has been published, and Cortex XSIAM has been recognized as both a Leader and an Outperformer. In this dynamic landscape, ensuring the utmost security for your organization is paramount. The question isn't whether to embrace AI and automation, but rather how to stay ahead of the curve by choosing the most advanced and comprehensive solution.
Cortex XSIAM is an award-winning and groundbreaking AI-driven platform that converges SOC capabilities, leverages AI for accurate threat protection and applies an automation-first approach to security operations. See the latest innovations from XSIAM 2.0 in action through our on-demand demo.