AWS VM-100 (2 vCPU limit) on M4/M5.xlarge (4 vCPU onboard) - wasted vCPUs?

L1 Bithead

Hello Experts,

 

What happens when one runs an AWS VM-100 (2 vCPU limit) on an M4.xlarge or M5.xlarge (4 vCPUs onboard)?

I have definitely seen it work fine. But does one waste vCPUs in such a setup?

I tried to use CloudWatch to see CPU utilization, but I was unable to see per-vCPU stats; only the combined result for the entire EC2 instance is visible.
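For reference, this is roughly the kind of query I ran. The built-in CPUUtilization metric in the AWS/EC2 namespace is reported per instance, not per vCPU, so there is no per-core breakdown without an in-guest agent (which, as far as I know, cannot be installed on PAN-OS). A minimal sketch with boto3 - the region, instance ID and time window are just placeholders:

# Pull the instance-wide CPUUtilization metric for the last 24 hours.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",          # hypervisor-side average across all vCPUs
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=start,
    EndTime=end,
    Period=300,                           # 5-minute datapoints
    Statistics=["Average", "Maximum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), round(point["Maximum"], 1))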

Please see some bits from my research below.

 

Regards,

Sergg

 

Details about VM-100 virtual hardware support

VM-Series on Amazon Web Services Performance and Capacity

Details about the amount of resources supported by each VM license type, from VM-Series for AWS Sizing

Please note – the VM-100 only supports 2 vCPUs

[screenshot: SergGur_0-1599826668014.png]

 

 

Details about AWS EC2 types

Details about EC2 instances from https://aws.amazon.com/ec2/instance-types/

Please note that xlarge instances provide 4 vCPUs (while the VM-100 can only consume 2)

M4 options:

[screenshot: SergGur_1-1599826668018.png]

 

M5 options:

[screenshot: SergGur_2-1599826668020.png]
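Side note: the onboard vCPU and memory figures above can also be confirmed programmatically via the EC2 API. A minimal sketch with boto3 (the region is a placeholder):

# Confirm onboard vCPU/memory for the instance types discussed above.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

resp = ec2.describe_instance_types(InstanceTypes=["m4.xlarge", "m5.xlarge"])
for it in resp["InstanceTypes"]:
    print(
        it["InstanceType"],
        it["VCpuInfo"]["DefaultVCpus"], "vCPU,",
        it["MemoryInfo"]["SizeInMiB"] // 1024, "GiB",
    )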

 

 

Details about declared VM throughput

From VM-Series on Amazon Web Services Performance and Capacity

[screenshot: SergGur_3-1599826668025.png]

 

L4 Transporter

You may see a nominal performance increase by running the bigger instance size due to some of the underlying AWS hashing to hardware. The increase will be nowhere close to the performance of running a VM-300 on the same instance types.

 

L1 Bithead

Update: I discovered that SNMP monitoring does indeed report only 2 CPUs, with an individual graph for each.
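For anyone who wants to reproduce the per-CPU view, here is a sketch of the kind of walk my poller does against HOST-RESOURCES-MIB::hrProcessorLoad (one row per CPU). It assumes SNMPv2c is enabled on the firewall's management interface and the classic pysnmp hlapi is installed; the host and community string are placeholders.

# Walk hrProcessorLoad (1.3.6.1.2.1.25.3.3.1.2), which returns one row per CPU.
from pysnmp.hlapi import (
    CommunityData, ContextData, ObjectIdentity, ObjectType,
    SnmpEngine, UdpTransportTarget, nextCmd,
)

HR_PROCESSOR_LOAD = "1.3.6.1.2.1.25.3.3.1.2"     # HOST-RESOURCES-MIB::hrProcessorLoad

for err_ind, err_stat, err_idx, var_binds in nextCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),          # SNMPv2c, placeholder community
    UdpTransportTarget(("192.0.2.10", 161)),     # placeholder management IP
    ContextData(),
    ObjectType(ObjectIdentity(HR_PROCESSOR_LOAD)),
    lexicographicMode=False,                     # stop at the end of the subtree
):
    if err_ind or err_stat:
        print(err_ind or err_stat.prettyPrint())
        break
    for oid, value in var_binds:
        print(oid.prettyPrint(), "=", value.prettyPrint(), "%")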

 

Now I suspect the AWS CloudWatch graphs do not represent the true load of the AWS VM-Series firewall. This is because AWS combines the load of all 4 vCPUs (2 busy and 2 totally idle), so in this situation the AWS CloudWatch results need to be multiplied by two (to compensate for the vCPUs provided by AWS but not used by the firewall software).
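If that suspicion is right, the correction is just a back-of-the-envelope scaling of the CloudWatch average by total vCPUs over licensed vCPUs. A rough sketch, not an official formula:

# Rough correction, assuming the 2 unlicensed vCPUs sit completely idle:
# CloudWatch averages utilization over all vCPUs on the instance, while the
# VM-100 only drives the 2 licensed ones.
def estimated_firewall_cpu(cloudwatch_avg_pct, instance_vcpus=4, licensed_vcpus=2):
    """Scale the instance-wide average up to the vCPUs the firewall actually uses."""
    return min(100.0, cloudwatch_avg_pct * instance_vcpus / licensed_vcpus)

# Example: CloudWatch shows 30% on an m5.xlarge -> roughly 60% on the 2 licensed vCPUs.
print(estimated_firewall_cpu(30.0))   # 60.0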

 

My sample data for a firewall I'm examining (24 hours - though the two graphs cannot be compared directly because of the massive difference in polling frequency and methods):

 

AWS CloudWatch:

[screenshot: SergGur_0-1599828528873.png]

 

SNMP monitoring:

[screenshot: SergGur_1-1599828579561.png]

 

 

L1 Bithead

Similar to physical firewalls, there is a management plane and dataplane separation, and at least one dedicated vCPU is assigned to the management plane. I'm getting there but am still confused. On one hand, there is a document (see below) saying that unlicensed vCPUs are allocated to the management plane. On the other hand, my SNMP monitoring only reports two CPUs back.

Does my VM-100 in AWS effectively only have 1 vCPU for traffic?

 

From VM-Series System Requirements

 

The number of vCPUs assigned to the management plane and those assigned to the dataplane differs depending on the total number of vCPUs assigned to the VM-Series firewall. If you assign more vCPUs than those officially supported by the license, any additional vCPUs are assigned to the management plane.

 
Total vCPUs    Management Plane vCPUs    Dataplane vCPUs
2              1                         1
4              2                         2
8              2                         6
16             4                         12
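If I read that table right, my 2-vCPU VM-100 gets 1 management and 1 dataplane vCPU, i.e. a single vCPU for traffic. As a note to self, the published rows boil down to a simple lookup (values copied straight from the table above):

# vCPU split from the VM-Series System Requirements table above.
# Only the rows published in that table are included.
VCPU_SPLIT = {
    # total vCPUs: (management-plane vCPUs, dataplane vCPUs)
    2:  (1, 1),
    4:  (2, 2),
    8:  (2, 6),
    16: (4, 12),
}

total = 2                          # VM-100 license cap
mgmt, dataplane = VCPU_SPLIT[total]
print(f"{total} vCPUs -> {mgmt} management, {dataplane} dataplane")
# 2 vCPUs -> 1 management, 1 dataplane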
L1 Bithead

@jmeurer with regard to your statement:

You may see a nominal performance increase by running the bigger instance size due to some of the underlying AWS hashing to hardware. The increase will be nowhere close to the performance of running a VM-300 on the same instance types.

Thank you for sharing your first-hand experience with running the VM-100 and VM-300 on the same instance type. Perhaps the difference is due to the number of vCPUs used by the dataplane.
