Palo Alto Networks Delivers First-Ever Reference Architecture for AI Runtime Security With NVIDIA NIM

Showing results for 
Show  only  | Search instead for 
Did you mean: 
Please sign in to see details of an important advisory in our Customer Advisories area.
Community Team Member

This blog was authored by Jesse Ralston, NetSec CTO, with contributing authors: Jaimin Patel, Dennis Payton, James Kline, Victor Aranda, Rich Campagna, Francesco Vigo

Palo Alto Networks is collaborating with NVIDIA to deliver the first ever joint reference architecture for AI security with NVIDIA NIM. Enterprises can safely build and run intelligent automation (IA) technologies with a reference architecture that secures generative AI deployments built on NVIDIA NIM in enterprise environments with Palo Alto Networks AI Runtime Security technologies. 


Generative AI is rapidly transforming modern enterprises, and existing comprehensive solutions around securing AI are in rapid flux and minimally deployed. Securing generative AI applications requires defense in depth as there’s no single point solution. This reference architecture represents a holistic approach to securing generative AI applications that aligns with and addresses many of the AI security risks defined by the Open Worldwide Application Security Project (OWASP)


Growth in adoption for generative AI across enterprises is exploding faster than any previous disruptive technology—faster than the internet, mobile phones, SaaS applications, and even the public cloud, which has enabled this explosion. There’s a critical lack of security tools available today that both recognize and understand generative AI applications, AI models, and internal enterprise data. Moreover, the lack of visibility into user interaction and brand-new threats associated with AI ecosystems adds to the complexity of securing enterprise AI.


As enterprises increasingly adopt intelligent automation (IA) technologies, such as those based on NVIDIA full-stack accelerated computing, to streamline operations and enhance productivity, the need for a robust security reference architecture becomes paramount. The convergence of AI, GenAI, machine learning, and robotic process automation presents unprecedented opportunities for business transformation and also introduces complex security challenges. A well-defined security reference architecture serves as a foundational framework to guide enterprises in implementing and maintaining a secure IA ecosystem. 


Generative AI is transforming every industry. NVIDIA NIM inference microservices are foundational to quick enterprise deployments of generative AI across on-premises and public cloud infrastructure. Securing these environments will be critically important to safely secure enterprises as they adopt generative AI solutions and require the same speed of deployment as AI adoption.


Enterprises are fast-tracking generative AI development with NVIDIA NIM to run state-of-the-art models at full performance,” said Justin Boitano, vice president, Enterprise Products, NVIDIA. “Monitoring NVIDIA NIM with Palo Alto’s generative AI firewall provides security teams visibility into prompts, data access patterns and responses to help protect modern generative AI applications.


NVIDIA NIM is a collection of easy-to-use microservices for accelerating the deployment of generative AI. These are foundational to how enterprises deploy generative AI across a wide range of available generative AI models and infrastructures from public clouds to on-premises environments.


AI Runtime Security from Palo Alto Networks is purpose-built to discover, protect, and defend against enterprise AI and LLM vulnerabilities, exploits, and attacks. AI Runtime Security combines continuous runtime threat analysis of AI apps, models, and datasets with AI-powered security to stop attackers in their tracks. 

  • Stop zero-day threats in zero time anywhere AI applications run. 
  • Protect AI apps, models, and data from foundational threats and brand-new threats to AI ecosystems at runtime with no code changes
  • Reject prompt injections, malicious responses, LLM denial-of-service, training data poisoning, malicious URLs, command and control, embedded unsafe URLs, and lateral threat movement 


Palo Alto Networks NVIDIA NIM: First-Ever Reference Architecture for AI Runtime SecurityPalo Alto Networks NVIDIA NIM: First-Ever Reference Architecture for AI Runtime Security



Palo Alto Networks delivers a reference architecture for securing NVIDIA-powered generative AI deployments for the enterprise. A detailed design guide, “Securing generative AI Applications,” serves as a reference architecture that enables data scientists and developers to continue to innovate quickly as they further adopt intelligent automation (IA) technologies while also giving enterprises the oversight and control they need to do so securely.


This guide provides an architectural overview for using Palo Alto Networks AI Runtime Security with NIM, providing visibility, control, and protection to your generative AI applications operating in public clouds or private data center architectures. With this reference architecture, you’ll be able to:


  • Securely innovate and gain a competitive edge: Generative AI is a powerful driver of innovation, enabling the creation and delivery of solutions and directly contributing to competitive advantages. Its adoption can quickly transform a company’s offerings and position it as an industry pioneer and desirable employer, attracting customers and talent alike. Palo Alto Networks AI Runtime Security allows you to quickly innovate while keeping your organization and data secure.
  • Customize for market relevance: Generative AI empowers businesses to tailor solutions that cater to industry-specific requirements. By training models on domain-specific data, companies can generate outputs that accurately reflect industry jargon and context, enhancing relevance and effectiveness.
  • Deliver cost-efficiency and scalability: Although initial investments in generative AI might be considerable, the long-term reduction in operational expenses could be significant. By developing proprietary AI solutions, organizations can minimize dependencies on external AI service providers, resulting in cost savings and greater control over integration and scalability and with existing systems.
  • Maintain performance: Control over model updates and data handling enables alignment with strategic objectives. By maintaining autonomy over AI systems, companies can robustly adjust and scale their AI capabilities in accordance with their strategic growth and operational needs.
  • Secure data sovereignty and maintain IP ownership: In-house generative AI development allows for the retention of intellectual property rights and adherence to data governance standards. This is particularly critical in regulated industries where data privacy and protection are paramount.


Get started today with “Securing Your generative AI Applications” with this comprehensive holistic reference architecture.

For more information, see the AI Runtime Security webpage.


Register or Sign-in