We'd like to create our own TensorFlow models to improve the system. The models will be trained and tested outside XSOAR, while the production model will run inside an automation. The main question is whether an XSOAR container has enough resources to make this work. The other option is to stand up a separate server that communicates with XSOAR via API. Obviously, the latter would be more time-consuming to deploy.
Thanks for your help
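For what it's worth, the API-server option can be a fairly thin client on the XSOAR side. Here is a hedged sketch of an automation-side call, assuming the model is served with TensorFlow Serving's REST API (which exposes `/v1/models/<name>:predict` and accepts an `{"instances": [...]}` body); the hostname, port, and model name `phishing` below are placeholders, not anything XSOAR provides:

```python
import json
import urllib.request

# Hypothetical endpoint of a model server running outside XSOAR.
# TensorFlow Serving's REST API convention is /v1/models/<name>:predict.
SERVING_URL = "http://model-server:8501/v1/models/phishing:predict"

def build_request(instances):
    """Build the JSON body TensorFlow Serving expects."""
    return json.dumps({"instances": instances}).encode("utf-8")

def predict(instances, url=SERVING_URL):
    """POST feature rows to the model server and return its predictions.

    Not executed here; it requires a reachable server. TensorFlow
    Serving replies with {"predictions": [...]}.
    """
    req = urllib.request.Request(
        url,
        data=build_request(instances),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())["predictions"]
```

The advantage of this layout is that the heavy TensorFlow dependency lives on the server, and the automation's container only needs the standard library.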
It is difficult to comment without knowing what kind of model this is and how long it takes to return an answer.
There are server configurations to adjust the memory available to the container.
So, if your model is a file inside the Docker image, you can access it from the automation.
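To illustrate the file-in-the-image approach, here is a minimal sketch of the load-and-predict pattern an automation could use. The `ThresholdModel` class, the path, and the pickle serialization are stand-ins so the sketch stays self-contained; a real TensorFlow model bundled in the image would instead be loaded with `tf.keras.models.load_model(path)`:

```python
import os
import pickle
import tempfile

class ThresholdModel:
    """Stand-in for a trained model: flags scores above a threshold.

    In a real automation this role is played by a TensorFlow model
    restored from a file baked into the Docker image.
    """
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, rows):
        return ["malicious" if r > self.threshold else "benign" for r in rows]

def load_model(path):
    """Read the serialized model from the container filesystem."""
    with open(path, "rb") as f:
        return pickle.load(f)

def classify(model, score):
    """Run the loaded model on a single input."""
    return model.predict([score])[0]

# Simulate a model file shipped in the image by writing one to a temp dir.
model_path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(model_path, "wb") as f:
    pickle.dump(ThresholdModel(0.5), f)

model = load_model(model_path)
print(classify(model, 0.9))  # -> malicious
print(classify(model, 0.1))  # -> benign
```

Loading the model once and reusing it also matters in practice, since deserializing a large model on every invocation is where the container memory limits will bite first.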