Can you trust your AI systems with your business, or are they just another attack surface waiting to be exploited?
Aaron Isaksen leads AI Research and Engineering at Palo Alto Networks, where he advances state-of-the-art AI in cybersecurity while overseeing Cortex Xpanse's teams automating attack surface management across some of the world's largest networks. In this episode of Threat Vector, host David Moulton sits down with Dr. Aaron Isaksen to explore why engineering excellence must precede ethical AI debates, how adversarial AI is reshaping cybersecurity, and what it actually takes to build AI systems resilient enough to operate in hostile environments.
Before Palo Alto Networks, Aaron spent 15+ years building products across wildly different domains, from co-founding mobile gaming companies and funding independent game developers through Indie Fund to leading ML engineering at ASAPP, where his teams prototyped state-of-the-art neural networks for NLP. With a PhD from NYU (automated software design), a Master's from MIT (light field rendering), and a BS from UC Berkeley, Aaron brings a unique perspective: AI security isn't about philosophical debates. It's about rigorous engineering, continuous red teaming, and building systems that can withstand determined adversaries.
This episode is essential listening if you're: deploying AI in production systems, building security programs around generative AI tools, leading attack surface management initiatives, trying to separate AI security theater from actual resilience, or wondering whether your AI agents can operate safely on the open web.
by dmoulton on 10-23-2025 09:00 AM