05-15-2025 10:55 PM
Hi,
We are using a PA-820.
I have an ISP connection of 700 Mbps up/down and an internal server that is reachable from the public Internet; the domain points to its public IP. The internal server is in my DMZ zone and the ISP link is in the untrust zone. Only the untrust interface has QoS configured; the DMZ interface has none.
A speed test shows 500 Mbps up/down, but when a client uploads or downloads a file from the server over the public Internet, they get 40-45 Mbps max. Throughput increases when multiple clients upload at once; I have tested up to 200 Mbps of aggregate upload to the server. Why can't a single connection go above 40-45 Mbps?
I tried uploading a file from the private network and got 500 Mbps upload speed, so I don't think it's the server. Then I suspected the firewall or QoS was throttling the bandwidth, so I disabled QoS on untrust, but no luck. I even configured the public IP directly on the server to bypass the firewall, with the same result. Finally I suspected the ISP and tried a different ISP, again with the same result, so I'm stuck.
Please help if you have any suggestions on this matter.
05-16-2025 07:46 AM - edited 05-16-2025 07:47 AM
@pyrainath wrote:
Hi,
We are using a PA-820.
I have an ISP connection of 700 Mbps up/down and an internal server that is reachable from the public Internet; the domain points to its public IP. The internal server is in my DMZ zone and the ISP link is in the untrust zone. Only the untrust interface has QoS configured; the DMZ interface has none.
A speed test shows 500 Mbps up/down, but when a client uploads or downloads a file from the server over the public Internet, they get 40-45 Mbps max. Throughput increases when multiple clients upload at once; I have tested up to 200 Mbps of aggregate upload to the server. Why can't a single connection go above 40-45 Mbps? I tried uploading a file from the private network and got 500 Mbps upload speed, so I don't think it's the server. Then I suspected the firewall or QoS was throttling the bandwidth, so I disabled QoS on untrust, but no luck. I even configured the public IP directly on the server to bypass the firewall, with the same result. Finally I suspected the ISP and tried a different ISP, again with the same result, so I'm stuck.
Please help if you have any suggestions on this matter.
I've got a PA-5250 in my DC with symmetrical 10 Gb Internet circuits. On my best day I only get about 200 Mbps from a VPN client at home with symmetrical 1 Gb Internet and Wi-Fi running 802.11ax with 80 MHz channels.
The 800 series (820 or 850) really isn't a "datacenter" series firewall; it's more SOHO. You're saying that from the Internet, connecting to a server in your DMZ, a remote download only reaches ~50 Mbps (~6 MBps)? Your download speed probably has more to do with your server (OS) and client setup than with the firewall. You also mentioned QoS being set up on the firewall; that will have no bearing here, since QoS on the firewall only shapes traffic locally, not the path across the larger Internet between your server and the client.
Published performance specs are "best case scenario" numbers under specific test criteria; they don't really equate to end-user performance. For this hardware, I'd say the download speeds you're seeing aren't worth your time trying to improve.
05-16-2025 07:46 AM
Hi @pyrainath ,
From what you're describing, this doesn't look like an issue with the firewall or QoS config. You've already tested various layers (server performance, firewall bypass, ISP switch, QoS off), and since you’re still hitting the same ~40–45 Mbps per client even after bypassing the firewall, I’d lean toward this being related to per-flow limitations.
Single-connection speed limits can have various causes, though: TCP window scaling, latency, buffer tuning on the client or server, ISP-applied per-flow limits, and more O_o
You mentioned that when multiple clients upload at once, you bypass the 40-50 Mbps limitation ... that leads me to believe that the link supports higher throughput, but each flow is being bottlenecked individually. Also, since internal LAN testing reaches 500 Mbps, your server and DMZ setup seem fine.
Just in case, confirm you're not running into fragmentation issues between your DMZ and untrust zones; you can test this with tools like ping -f -l or tracepath. Have you already taken PCAPs on the client or server? Do you see anything odd in the PCAP, such as many retransmissions, out-of-order packets, or zero-window updates? Retransmissions and out-of-order packets can point to MTU/fragmentation problems, while zero-window updates point to receive-buffer limits.
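A quick sketch of the MTU probes mentioned above, assuming a standard 1500-byte MTU (the hostname is a placeholder):

```shell
# Windows: -f sets Don't Fragment, -l sets the ICMP payload size.
# 1472 = 1500 (MTU) - 20 (IP header) - 8 (ICMP header)
ping -f -l 1472 server.example.com

# Linux equivalent: -M do forbids fragmentation, -s sets payload size
ping -M do -s 1472 server.example.com

# tracepath reports the path MTU hop by hop (Linux)
tracepath server.example.com
```

If a 1472-byte probe succeeds but 1473 fails with a "needs to be fragmented" error, the path MTU is the standard 1500 and fragmentation is unlikely to be your bottleneck.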
Some bottlenecks can be caused by ISP peering paths. You can test this with mtr or traceroute and you should be able to test it from different looking glass tools on the internet. Use multiple looking glass tools from different regions to compare latency and path hops — this should help detect ISP peering bottlenecks.
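For the path checks, something like this (again, hostname is a placeholder):

```shell
# mtr combines traceroute and ping: per-hop loss and latency,
# averaged here over 30 probe cycles in report mode
mtr --report --report-cycles 30 server.example.com

# plain traceroute as a fallback to see the hop-by-hop path
traceroute server.example.com
```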
Looking glass tools are great for checking routing paths, latency, BGP visibility, and traceroutes — but they don't directly test connection speeds or bandwidth to a public IP.
For that I'd use something like iperf3 where you can test with multiple streams (e.g., using 'iperf3 -P 4' to test parallel transfers) ... you’ll likely see higher aggregate throughput.
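An iperf3 session along those lines might look like this (hostname is a placeholder; you'd need iperf3 installed on both ends and the port allowed through the firewall):

```shell
# On the DMZ server:
iperf3 -s

# On the remote client: single stream first, 30-second test...
iperf3 -c server.example.com -t 30

# ...then 4 parallel streams; compare per-stream vs aggregate throughput
iperf3 -c server.example.com -t 30 -P 4

# reverse mode (-R) tests the download direction (server -> client) too
iperf3 -c server.example.com -t 30 -P 4 -R
```

If each stream individually tops out near 40-45 Mbps but the aggregate scales with -P, that strongly suggests a per-flow limit rather than a link-capacity limit.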
If that's the case then I'm leaning towards it being a tcp tuning issue.
It's also unclear how your speed test was performed ... if I'm not mistaken, Speedtest also uses multiple threads to measure upload/download speeds, so its numbers reflect aggregate rather than per-flow throughput.
Either way I think further debugging is required at this point to identify the root cause.
Kind regards,
-Kim
05-19-2025 02:24 AM
Thank you for your time.
I believe there is no fragmentation issue because when I use the command `ping -f -l 1472`, I can ping without any problems. I conducted tests with multiple streams and observed higher aggregate throughput. However, when uploading a file using SFTP, I only achieved a maximum speed of 30 Mbps. I captured the packets during the test and did not find any frequent retransmissions, but I did notice some messages indicating that the TCP window was full.
05-20-2025 02:10 AM
Hi @pyrainath ,
That likely explains the slowness.
When the TCP receive buffer is full, the receiver can't accept more data, so the sender has to pause or slow down even though it could send more.
Your server or the client is likely advertising a small TCP receive window and/or not scaling it effectively, which would explain the slowness over the WAN link.
Enabling window scaling or increasing the buffer sizes should improve throughput.
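If the server runs Linux, the relevant knobs are sysctls; a sketch, with illustrative values rather than tuned recommendations (on Windows, `netsh interface tcp show global` shows the equivalent autotuning state):

```shell
# Window scaling is on by default on modern kernels; verify it:
sysctl net.ipv4.tcp_window_scaling

# Raise the TCP buffer auto-tuning limits (min / default / max, bytes)
sysctl -w net.ipv4.tcp_rmem="4096 131072 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 131072 16777216"

# Raise the global socket buffer caps to match
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
```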
Run PCAP on both the server and a client to compare advertised window sizes and verify where/if you can improve it.
The standard TCP window size without scaling is limited to ~64 KB. That works fine on a LAN with low latency, but is way too small for high-speed WAN links with higher latency. Enabling TCP Window Scaling (RFC 1323) allows the receiver to advertise a much larger window (up to 1 GB).
On your local LAN, latency is low, so the small TCP window fills and drains quickly. Even with a small window there are enough round trips per second to sustain good throughput, which is why you could see speeds like 500 Mbps inside your network.
However, WAN links usually have much higher latency and when the TCP window is full, the sender has to wait longer for an ACK before sending more data. To fix this you need a larger window to keep the pipe full over longer distances.
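The bandwidth-delay product makes this concrete; a quick sketch, where the 20 ms WAN RTT is an assumed figure for illustration:

```python
def max_tcp_throughput_mbps(window_bytes: float, rtt_seconds: float) -> float:
    """TCP can have at most one window of unacknowledged data in flight
    per round trip, so throughput <= window / RTT."""
    return window_bytes * 8 / rtt_seconds / 1e6

WINDOW = 65535  # classic 16-bit TCP window, no scaling

# ~1 ms RTT on the LAN: even the small window sustains ~524 Mbps
print(round(max_tcp_throughput_mbps(WINDOW, 0.001)))

# 20 ms RTT over the WAN: the same window caps out near ~26 Mbps
print(round(max_tcp_throughput_mbps(WINDOW, 0.020)))

# To fill a 500 Mbps pipe at 20 ms RTT, the window must be at least
# the bandwidth-delay product:
bdp_bytes = 500e6 * 0.020 / 8
print(int(bdp_bytes))  # ~1.25 MB, far beyond 64 KB -> needs window scaling
```

That ~26 Mbps ceiling for an unscaled window at WAN-like latency lines up with the 30-45 Mbps per flow you've been observing, which is why window scaling and buffer sizes are the first things I'd check.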
Hope this helps,
-Kim.