Use TensorFlow models inside XSOAR automation



L4 Transporter

Hello,

We'd like to create our own TensorFlow models to improve the system. The models will be trained and tested outside XSOAR, and the production model will run inside an automation. The main question is whether XSOAR containers have enough resources to make this work. The other option is to stand up a separate server that communicates with XSOAR via API, but that will obviously be more time-consuming to deploy.
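For the API-server option, the automation itself can stay lightweight: the model runs elsewhere (for example behind TensorFlow Serving, whose REST predict endpoint takes a JSON body of the form `{"instances": [...]}`), and the automation just POSTs feature rows to it. A minimal sketch using only the standard library; the URL, model name, and input shape are assumptions you would adapt to your deployment:

```python
import json
import urllib.request

# Hypothetical endpoint -- adjust host, port, and model name to your server.
TF_SERVING_URL = "http://model-server:8501/v1/models/my_model:predict"

def build_predict_payload(instances):
    """Build the JSON body expected by TensorFlow Serving's REST predict API."""
    return json.dumps({"instances": instances})

def predict(instances, url=TF_SERVING_URL):
    """POST feature rows to the model server and return its predictions."""
    req = urllib.request.Request(
        url,
        data=build_predict_payload(instances).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]
```

Inside an XSOAR automation you would wrap `predict()` with the usual result-handling calls; the point is that only JSON and HTTP, not TensorFlow, need to fit in the container.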

 

Thanks for your help

 

 

1 REPLY

L2 Linker

It is difficult to comment without knowing what kind of model this is and how long it takes to return an answer.

There are server configurations to adjust the memory available to the container:

https://docs-cortex.paloaltonetworks.com/r/Cortex-XSOAR/6.8/Cortex-XSOAR-Administrator-Guide/Configu...
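As a sketch of that memory adjustment (key name as described in the linked admin guide; the value here is only an example, so verify the exact syntax against your XSOAR version before applying it as a server configuration):

```
Key:   python.pass.extra.keys
Value: --memory=1g##--memory-swap=-1
```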

So, if your model is packaged as a file in the Docker image, you can access it from the automation.
