Apache HTTPD on FreeBSD and Linux Load Test

Comparison of infrastructure resource usage between Linux and FreeBSD HTTPD instances

For various reasons, I’ve had to perform a series of tests to ensure our Measuring Agent can generate traffic from a large number of source IP addresses. Aside from validating that capability, a by-product of the test is a somewhat interesting comparison of a FreeBSD-based and a Linux-based Apache HTTPD server.

Generating Load From Multiple IPs

First, a quick overview of what I wanted to prove: I needed to make sure that we can run a Load Test simulating a large number of source IP addresses. To validate this requirement, I’ve configured one of our Measuring Agents with approx. 12k IP addresses, using a bash script, as doing it by hand would take forever. All IPs are assigned as aliases to the NIC from which the load is generated, and all IPs are within the same /16 subnet.
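In essence, the script just loops over the address block and adds each address as an alias (a minimal sketch, not the exact script; the NIC name eth0 and the 10.1.0.0/16 block are placeholder assumptions):

#!/bin/bash
# Add ~12k (48 x 254) alias IPs to the load-generating NIC.
# eth0 and 10.1.0.0/16 are placeholders; adjust to your environment.
NIC=eth0
for octet3 in $(seq 0 47); do
    for octet4 in $(seq 1 254); do
        ip addr add 10.1.${octet3}.${octet4}/16 dev ${NIC}
    done
done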

Finally, I’ve configured my Real Load test script with two additional steps:

  1. Step 0, which selects a random IP address configured on the NIC and stores it in a variable.
  2. Step 2, which instructs the load test to use the address stored in the variable as the source IP (the idea is sketched below).
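Outside of the Real Load script itself, the idea behind the two steps can be illustrated with standard tools (a sketch only; eth0 and the URL are placeholders, and curl’s --interface option does the source-address binding here):

# Step 0 equivalent: pick a random IPv4 address configured on the NIC
SRC_IP=$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1 | shuf -n 1)
# Step 2 equivalent: issue the request using that address as the source IP
curl --interface "$SRC_IP" http://server.example/index.html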

Infrastructure Details

The hypervisor is a Windows 2019 Server Standard edition machine, running Hyper-V and fitted with a somewhat old Xeon E5-2683 v3 CPU. The measuring agent and the tested servers are connected to the same virtual switch.

The Linux and FreeBSD VMs are minimal installs of their distributions, onto which I’ve installed the latest Apache HTTPD build offered by the built-in software distribution mechanisms. That’s why the HTTPD versions are not identical.

In order for the results to be somewhat comparable, I’ve deployed the same set of static HTML pages on both servers. I’ve also aligned several key HTTPD config parameters on both systems, as shown in the table below.

Parameter           Measuring Agent   FreeBSD HTTPD VM   Linux HTTPD VM
OS Version          RHEL 8.4          FreeBSD 13.0       Oracle Linux 8.4
RAM                 4 GB              4 GB               4 GB
vCPUs               10                4                  4
HTTPD Version       n/a               2.4.53             2.4.37
HTTPD MPM           n/a               event              event
ServerLimit         n/a               8192               8192
MaxRequestWorkers   n/a               2048               2048
ThreadsPerChild     n/a               25                 25

See further down for other tuning parameters applied to the HTTPD VMs.
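For reference, the aligned values from the table correspond to an event MPM block along these lines (a sketch; the exact config file holding these directives differs between the two OSes):

<IfModule mpm_event_module>
    ServerLimit          8192
    MaxRequestWorkers    2048
    ThreadsPerChild      25
</IfModule>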

Load Test Execution and Result Metrics

I’ve then executed a 20-minute, 1000 VU load test, which is configured to maximize the number of HTTP requests generated. Apache is configured to serve some static HTML pages, made up of text and some images.

This table summarizes metrics observed once the max. load was reached, approx. 10 minutes into the test. The linked PDF reports give a closer look at the test results.

Metric                   Linux HTTPD         FreeBSD HTTPD
User CPU usage           21%                 20%
System CPU usage         47%                 70%
Avg reqs/s               8.8k                10.3k
Avg network throughput   1.1 Gbps            1.3 Gbps
Hyper-V CPU usage        10%                 11%
Test report              Linux Report PDF    FreeBSD Report PDF
Test progress            (screenshot)        (screenshot)

Notes

  • CPU usage was measured with the “iostat 20” command (see the note after this list).
  • Hyper-V CPU usage was taken from Windows Admin Center.
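For reference, the same command works on both OSes, but the CPU columns read differ (on Linux, iostat comes from the sysstat package):

# Report CPU (and device) statistics every 20 seconds until interrupted.
# Linux (sysstat): read %user / %system from the avg-cpu block.
# FreeBSD: read us / sy from the trailing cpu columns.
iostat 20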

And the winner is…

… is difficult to pick, to be honest.

  • CPU usage, as measured by Hyper-V, was a little bit higher for FreeBSD. CPU metrics measured within the VMs seem to indicate an overall higher CPU usage by FreeBSD (in particular System CPU). Perhaps the Linux NIC driver is better optimized for Hyper-V.
  • FreeBSD HTTPD seems to deliver a higher throughput (network and avg requests/s).
  • FreeBSD HTTPD also seems to offer a higher HTTP Keep-Alive efficiency, which might partially explain the higher throughput (the relevant directives are shown after this list).
  • Observations (like CPU usage, etc…) were averaged by “eyeballing” metrics displayed on screen. Expect some rounding error…
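For context, HTTP keep-alive behavior in HTTPD is controlled by a handful of directives; the values below are the stock 2.4 defaults, not values I verified on these two servers:

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5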

Had I had more time to tune and align the two platforms, I might have been able to squeeze out a bit more performance from each server, but I doubt that would have materially changed the result in favor of one OS or the other. Obviously I’m happy to be proven wrong…

Feel free to email us with your feedback; I’ll be more than happy to test any further tuning suggestions.

OS Tuning

Below is the OS-level tuning that was applied to the Linux and FreeBSD servers. I didn’t have time to research each of the parameters below in full; they were mentioned in various other online sources and adopted. I’ve implemented the ones that seemed to make the most sense…

Linux HTTPD (/etc/sysctl.conf)

The last two tunables were required because, without them, the Linux server would at some point stop accepting connections…

# Raise the system-wide open file handle limit.
fs.file-max = 524288
# Deeper queue of half-open connections; fewer SYN-ACK retries.
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 3
# Cap orphaned sockets; shorten FIN-WAIT-2 so sockets recycle faster.
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_fin_timeout = 30
# Ephemeral port range and listen backlog.
net.ipv4.ip_local_port_range = 16384 60999
net.core.somaxconn = 256
# Max socket receive/send buffer sizes (bytes).
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576
# Disable rate limiting of kernel network log messages.
net.core.message_cost=0
# Raise the neighbor (ARP) table hard limit; needed with ~12k peer IPs on one subnet.
net.ipv4.neigh.default.gc_thresh3=64000
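These settings are loaded at boot; on a running system, sysctl -p re-applies /etc/sysctl.conf.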

FreeBSD HTTPD (/etc/sysctl.conf)

# Per-process thread limit (the event MPM runs many threads per child).
kern.threads.max_threads_per_proc=4096
# Listen backlog, socket count and network buffer (mbuf cluster) limits.
kern.ipc.somaxconn=4096
kern.ipc.maxsockets=204800
kern.ipc.nmbclusters=262144
# File descriptor and vnode limits.
kern.maxfiles=204800
kern.maxfilesperproc=200000
kern.maxvnodes=200000
# Disable delayed ACKs; shorten the TCP maximum segment lifetime (ms).
net.inet.tcp.delayed_ack=0
net.inet.tcp.msl=5000
# Allow many sockets in TIME_WAIT.
net.inet.tcp.maxtcptw=200000
# Deeper IP input queue; dummynet fast path (relevant only if dummynet is used).
net.inet.ip.intr_queue_maxlen=4096
net.inet.ip.dummynet.io_fast=1
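On FreeBSD, service sysctl restart re-applies /etc/sysctl.conf without a reboot.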