04-28-2021 07:53 AM
Hi,
We are about to activate the Cortex XDR agent with Default Policy Rules (i.e. the default Exploit, Malware, Restrictions, Agent Settings, and Exceptions profiles) on some Windows servers that contain a huge amount of data (terabytes).
Are there any recommended best practices to follow, or functionalities that should be disabled, to avoid any performance impact on these kinds of servers?
For example, we were told that the "File Search and Destroy" feature could cause significant overhead for some time after the agent is activated.
Furthermore, can anyone provide an estimate of how long a Cortex XDR malware scan on 1 TB of data might take?
Thanks in advance.
05-04-2021 11:55 AM
@MCereda wrote: [original question quoted above]
Hi @MCereda ,
As I understand it, you're planning to deploy the Cortex XDR agent to Windows servers that house a considerable amount of data. There are a few considerations to keep in mind:
Note: Subsequent scans will be faster as the agent will not rescan unmodified files.
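To illustrate why subsequent scans are faster, here is a minimal sketch of the skip-unmodified idea: remember each file's modification time and size from the last run, and only rescan files whose signature changed. This is purely illustrative and not the agent's actual implementation.

```python
import os

def incremental_scan(paths, cache):
    """Return (files to scan, count skipped).

    `cache` maps path -> (mtime, size) from the previous run.
    Files whose signature is unchanged are skipped, which is why
    a second pass over the same data finishes much faster.
    (Illustrative sketch only -- not the Cortex XDR agent's code.)
    """
    scanned, skipped = [], 0
    for path in paths:
        st = os.stat(path)
        sig = (st.st_mtime, st.st_size)
        if cache.get(path) == sig:
            skipped += 1          # unchanged since last scan: skip
            continue
        scanned.append(path)      # new or modified: scan it
        cache[path] = sig
    return scanned, skipped
```

On the first pass everything is scanned; on the next pass, only files touched in the meantime are.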
As for how long a scan could take, this varies with the system resources available and the number and size of the files, so there's no reliable way to provide an estimate. Predicting the time to fully scan a target would be a useful capability; consider submitting it as a feature request for future consideration.
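That said, you can get a rough back-of-envelope figure from a throughput you benchmark yourself on the target hardware. The 50 MB/s used below is purely an assumed illustrative number, not a published Cortex XDR figure:

```python
def estimated_scan_hours(total_gb, throughput_mb_s):
    """Rough scan-time estimate: data size divided by sustained
    scan throughput. The throughput value must be measured on
    your own hardware -- it is an assumption, not a vendor spec."""
    seconds = (total_gb * 1024) / throughput_mb_s
    return seconds / 3600

# e.g. 1 TB (1024 GB) at an assumed 50 MB/s sustained:
# estimated_scan_hours(1024, 50) -> roughly 5.8 hours
```

Doubling the measured throughput halves the estimate, so benchmarking a small representative directory first gives you a usable planning number.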
Finally, Search and Destroy first performs a cloud-side hash match to identify all endpoints reported to have a file with that hash (Search), then sends a delete job to remove the target file from the filesystem on each matching endpoint (Destroy). No major resource utilization is expected on the endpoint during this process, since the endpoint is only tasked with deleting the target file if it exists.
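The two-phase flow above can be sketched as a hash lookup followed by a conditional delete. The `endpoints_index` structure and function names below are hypothetical stand-ins for the agent-reported file inventory, just to show why the endpoint-side work is minimal:

```python
import os

def search(endpoints_index, target_hash):
    """'Search' phase: a cloud-side lookup -- return the endpoints
    whose reported file inventory contains the target hash.
    `endpoints_index` maps endpoint -> {hash: path} (hypothetical)."""
    return {ep: files[target_hash]
            for ep, files in endpoints_index.items()
            if target_hash in files}

def destroy(path):
    """'Destroy' phase: the endpoint only deletes the file if it
    still exists, so its resource cost is a single unlink at most."""
    if os.path.exists(path):
        os.remove(path)
        return True
    return False
```

Because the matching happens against the cloud-side index, endpoints that never had the file are never contacted, and matching endpoints do no scanning, only a delete.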
05-11-2021 02:57 AM
Hey @MCereda
If those are production servers, I suggest deploying the agents with a profile configured to 'Report' instead of 'Block'. Give it a week to collect alerts and activity from those servers, and monitor the alerts triggered on the hosts. If you see false positives, create the appropriate exclusions; once you feel comfortable moving to 'Block' mode, do so.
Thanks,