File server, backup server and storage server profiles


L0 Member

Hi,

 

we are about to activate the Cortex XDR agent with the Default Policy Rules (i.e. the default Exploit, Malware, Restrictions, Agent Settings and Exceptions profiles) on some Windows servers that contain a huge amount of data (terabytes).

 

Are there any recommended best practices to follow, or features that should be disabled, to avoid impacting the performance of these kinds of servers?

For example, we were told that the "File Search and Destroy" feature could cause significant overhead for some time after the agent has been activated.

 

Furthermore, can anyone provide an estimate of how long a Cortex XDR malware scan of 1 TB of data might take?

 

Thanks in advance.

 


L4 Transporter

Hi @MCereda ,

As I understand it, you're planning to deploy the Cortex XDR agent to Windows servers that house a considerable amount of data. There are a few considerations to make:

  1. Install the Cortex XDR agent during a maintenance window.
  2. Expect an immediate malware scan to follow the installation.
  3. Schedule subsequent scans for off-hours when the server is not in use.

Note: Subsequent scans will be faster as the agent will not rescan unmodified files.
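The incremental behavior described in that note can be pictured as a cache of last-seen modification times, where only files changed since the previous pass get rescanned. This is a minimal, hypothetical sketch of the idea, not the agent's actual implementation:

```python
import os

def files_to_scan(root, cache):
    """Yield only files that are new or modified since the last pass.

    `cache` maps path -> last-seen mtime and is updated in place, so a
    second call over an unchanged tree yields nothing.
    """
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if cache.get(path) != mtime:
                cache[path] = mtime
                yield path
```

On a multi-terabyte server this is why only the first scan is the expensive one.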

As for how long a scan could take, this varies with the available system resources and the amount of data, so there's no way to provide an estimate. Predicting the time to fully scan a target would be an excellent capability, and I'd suggest submitting it as a feature request for future consideration.
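That said, a back-of-the-envelope calculation can still help size a maintenance window. The throughput figure below is purely an assumption for illustration; it is not a measured Cortex XDR scan rate:

```python
def estimated_scan_hours(data_gb, throughput_mb_s):
    """Rough duration estimate: total size divided by sustained scan rate."""
    return (data_gb * 1024) / throughput_mb_s / 3600

# 1 TB (1024 GB) at an assumed 50 MB/s sustained rate comes to roughly
# 6 hours; real throughput depends heavily on disk speed, CPU, and the
# mix of file sizes on the server.
hours = estimated_scan_hours(1024, 50)
```

Measuring actual throughput on a small directory first would give a far better input than any assumed constant.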

Finally, Search and Destroy performs a hash match in the cloud to find all endpoints reporting a file with that hash (Search), then sends a delete job to remove the target file from the filesystem on each matching endpoint (Destroy). No major resource utilization is expected on the endpoint during this process, as the endpoint is only tasked with deleting the target file if it exists.
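Conceptually, that two-step flow looks like the sketch below. The inventory structure and job format are invented for illustration and are not the real Cortex XDR data model or API:

```python
# Hypothetical cloud-side inventory: endpoint -> {file path: file hash}
INVENTORY = {
    "srv-files-01": {"C:/share/tool.exe": "hash-abc",
                     "C:/share/doc.pdf": "hash-def"},
    "srv-backup-02": {"D:/archive/tool.exe": "hash-abc"},
}

def search(inventory, target_hash):
    """Search: match the hash against every endpoint's reported inventory."""
    return [(endpoint, path)
            for endpoint, files in inventory.items()
            for path, digest in files.items()
            if digest == target_hash]

def destroy(matches):
    """Destroy: queue one delete job per matching endpoint/file pair."""
    return [{"endpoint": endpoint, "action": "delete", "path": path}
            for endpoint, path in matches]
```

The point is that the expensive matching happens against already-collected hashes in the cloud; the endpoint itself only executes the final delete.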

--gjenkins

L1 Bithead

Hey @MCereda 

 

If those are production servers, I'd suggest deploying the agents with a profile configured to 'Report' instead of 'Block'. Give it a week to collect alerts and activity from those servers, and monitor the triggered alerts on the hosts. If you see false positives, create the appropriate exclusions, and once you feel comfortable moving to 'Block' mode, please do so.

 

Thanks, 
