Scalability & Latency
MockServer is built to support massive scale from a single instance:
- Apache Bench tested 73,307 requests per second with a p99 of less than 1 ms
- Locust tested 987 requests per second with a p99 of 19 ms
The two frameworks show different results, most likely because Locust re-creates TCP connections for each request whereas Apache Bench re-uses them.
The following frameworks & techniques are used to maximise scalability:
- Netty, an asynchronous event-driven network application framework, to maximise the scalability of HTTP and TLS
- LMAX Disruptor, a high performance inter-thread messaging library, to maximise the scalability of recording events (i.e. state) and logging
- ScheduledThreadPoolExecutor, a thread pool that can schedule delayed tasks, to execute delayed responses without blocking threads (sketched below)
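As a concrete illustration of the last point, the following minimal sketch shows the general pattern of scheduling a delayed response instead of sleeping; it is not MockServer's actual implementation, and the class and method names are invented for illustration:

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DelayedResponseSketch {

    private static final ScheduledThreadPoolExecutor SCHEDULER = new ScheduledThreadPoolExecutor(2);

    // instead of Thread.sleep(delayMs) the response write is scheduled,
    // leaving the calling thread free to handle other requests meanwhile
    static void respondWithDelay(Runnable writeResponse, long delayMs) {
        SCHEDULER.schedule(writeResponse, delayMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        respondWithDelay(() -> System.out.println("delayed response written"), 500);
        System.out.println("calling thread returned immediately");
        SCHEDULER.shutdown();
        SCHEDULER.awaitTermination(2, TimeUnit.SECONDS);
    }
}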
Performance Tests
MockServer has been performance tested using Apache Bench and Locust with the following scenario:
- four basic expectations, including method, path and headers
- basic GET request matching the third expectation (i.e. three match attempts per request)
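The exact expectations used in the test are not reproduced here, but a comparable setup could be created with the Java client along the following lines (the paths other than /simple, the header and the response bodies are assumptions):

import static org.mockserver.model.HttpRequest.request;
import static org.mockserver.model.HttpResponse.response;

import org.mockserver.client.MockServerClient;

public class PerformanceTestExpectations {
    public static void main(String[] args) {
        MockServerClient client = new MockServerClient("127.0.0.1", 1080);
        // four basic expectations matching on method, path and headers;
        // the benchmarked GET /simple request matches the third, so
        // three match attempts occur per request
        client.when(request().withMethod("GET").withPath("/first").withHeader("Host", "127.0.0.1:1080"))
              .respond(response().withBody("first"));
        client.when(request().withMethod("GET").withPath("/second").withHeader("Host", "127.0.0.1:1080"))
              .respond(response().withBody("second"));
        // a 20 byte body, consistent with the Document Length reported by ab below
        client.when(request().withMethod("GET").withPath("/simple").withHeader("Host", "127.0.0.1:1080"))
              .respond(response().withBody("simple response body"));
        client.when(request().withMethod("GET").withPath("/fourth").withHeader("Host", "127.0.0.1:1080"))
              .respond(response().withBody("fourth"));
    }
}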
During the test MockServer was run on a Java 12 JVM (chosen for its improved GC) with the following command:
java -XX:+UnlockExperimentalVMOptions -XX:+AlwaysPreTouch -XX:-UseBiasedLocking -Xms8192m -Xmx8192m -Dmockserver.logLevel=ERROR -Dmockserver.disableSystemOut=true -Dmockserver.nioEventLoopThreadCount=500 -jar mockserver-netty-jar-with-dependencies.jar -serverPort 1080
Apache Bench Performance Test
Apache Bench showed that, for a simple test, MockServer can scale up to 73,307 requests per second with a p99 of less than 1 ms.
The following command was executed:
ab -k -n 10000000 -c 10 http://127.0.0.1:1080/simple
The test results are:
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 1000000 requests
Completed 2000000 requests
Completed 3000000 requests
Completed 4000000 requests
Completed 5000000 requests
Completed 6000000 requests
Completed 7000000 requests
Completed 8000000 requests
Completed 9000000 requests
Completed 10000000 requests
Finished 10000000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 1080
Document Path: /simple
Document Length: 20 bytes
Concurrency Level: 10
Time taken for tests: 136.413 seconds
Complete requests: 10000000
Failed requests: 0
Keep-Alive requests: 10000000
Total transferred: 830000000 bytes
HTML transferred: 200000000 bytes
Requests per second: 73307.04 [#/sec] (mean)
Time per request: 0.136 [ms] (mean)
Time per request: 0.014 [ms] (mean, across all concurrent requests)
Transfer rate: 5941.88 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 0 0 0.2 0 30
Waiting: 0 0 0.2 0 30
Total: 0 0 0.2 0 30
Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 0
95% 0
98% 0
99% 0
100% 30 (longest request)
A lower request rate also shows similar results:
$ ab -k -n 1000 -c 5 http://127.0.0.1:1080/simple
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 1080
Document Path: /simple
Document Length: 20 bytes
Concurrency Level: 5
Time taken for tests: 0.023 seconds
Complete requests: 1000
Failed requests: 0
Keep-Alive requests: 1000
Total transferred: 83000 bytes
HTML transferred: 20000 bytes
Requests per second: 42868.78 [#/sec] (mean)
Time per request: 0.117 [ms] (mean)
Time per request: 0.023 [ms] (mean, across all concurrent requests)
Transfer rate: 3474.72 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 0 0 0.1 0 2
Waiting: 0 0 0.1 0 2
Total: 0 0 0.1 0 2
Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 0
95% 0
98% 0
99% 0
100% 2 (longest request)
Locust Performance Test
The Locust results are as follows (all response times in ms):
req/s | Min | Avg | Median | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 99.90% | 99.99% | Max |
49.73 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 3 | 3 |
49.63 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 3 | 3 | 3 |
99.3 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 3 | 3 |
247.96 | 0 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 5 | 7 | 7 |
495.73 | 0 | 2 | 1 | 1 | 2 | 3 | 4 | 6 | 8 | 10 | 12 | 18 | 21 | 21 |
741.66 | 0 | 3 | 2 | 2 | 4 | 5 | 6 | 8 | 10 | 13 | 15 | 22 | 24 | 24 |
986.75 | 0 | 4 | 3 | 3 | 5 | 6 | 7 | 10 | 13 | 17 | 19 | 29 | 31 | 31 |
2299.28 | 0 | 44 | 21 | 21 | 42 | 65 | 83 | 120 | 180 | 200 | 210 | 230 | 250 | 259 |
This test was run using the following typical command, altering the values for -c NUM_CLIENTS, -r HATCH_RATE and -t RUN_TIME:
locust --loglevel=INFO --no-web --only-summary -c 500 -r 50 -t 50 --host=http://127.0.0.1:1080
The following locustfile.py was used:
from locust import TaskSet, task, between
import locust.stats
locust.stats.CONSOLE_STATS_INTERVAL_SEC = 60
from locust.contrib.fasthttp import FastHttpLocust

class UserBehavior(TaskSet):
    @task
    def simple(self):
        self.client.get("/simple", verify=False)

class WebsiteUser(FastHttpLocust):
    task_set = UserBehavior
    wait_time = between(1, 1)
Clustering MockServer
MockServer supports a very high request throughput; however, if a higher requests-per-second rate is required, it is possible to cluster MockServer so that all nodes share expectations.
There is currently no support for clustering the MockServer log, therefore request verifications will only work against the node that received the request.
To create a MockServer cluster all instances need to:
- share a read-write file system, i.e. the same physical / virtual machine, NFS, AWS EFS, Azure Files, etc
- be configured with identical expectation initialiser and expectation persistence settings
- bind to a free port, i.e. separate ports if on the same physical / virtual machine
Each node could be configured as follows (adjusting the port as necessary):
MOCKSERVER_WATCH_INITIALIZATION_JSON=true \
MOCKSERVER_INITIALIZATION_JSON_PATH=mockserverInitialization.json \
MOCKSERVER_PERSIST_EXPECTATIONS=true \
MOCKSERVER_PERSISTED_EXPECTATIONS_PATH=mockserverInitialization.json \
java -jar ~/Downloads/mockserver-netty-5.9.0-jar-with-dependencies.jar -serverPort 1080 -logLevel INFO
or
java \
-Dmockserver.watchInitializationJson=true \
-Dmockserver.initializationJsonPath=mockserverInitialization.json \
-Dmockserver.persistExpectations=true \
-Dmockserver.persistedExpectationsPath=mockserverInitialization.json \
-jar ~/Downloads/mockserver-netty-5.9.0-jar-with-dependencies.jar -serverPort 1080 -logLevel INFO
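Because the log is per-node (as noted above), any verification must be sent to the node that received the traffic. For example, with the Java client (assuming the node that served the requests listens on port 1080):

import static org.mockserver.model.HttpRequest.request;

import org.mockserver.client.MockServerClient;
import org.mockserver.verify.VerificationTimes;

public class ClusterNodeVerification {
    public static void main(String[] args) {
        // point the client at the specific cluster node that received the
        // requests being verified; verifying against other nodes will fail
        new MockServerClient("127.0.0.1", 1080)
            .verify(request().withPath("/simple"), VerificationTimes.atLeast(1));
    }
}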
Scalability Configuration
Number of threads for main event loop
These threads are used for fast non-blocking activities such as:
- reading and de-serialising all requests
- serialising and writing control plane responses
- adding, updating or removing expectations
- verifying requests or request sequences
- retrieving logs
Expectation actions are handled in a separate thread pool to ensure slow object or class callbacks and response / forward delays do not impact the main event loop.
Type: int Default: the greater of 20 and the number of available processors
Java Code:
ConfigurationProperties.nioEventLoopThreadCount(int count)
System Property:
-Dmockserver.nioEventLoopThreadCount=...
Environment Variable:
MOCKSERVER_NIO_EVENT_LOOP_THREAD_COUNT=...
Property File:
mockserver.nioEventLoopThreadCount=...
Example:
-Dmockserver.nioEventLoopThreadCount="20"
Number of threads for the action handler thread pool
These threads are used for handling actions such as:
- serialising and writing expectation or proxied responses
- handling response delays in a non-blocking way (i.e. using a scheduler)
- executing class callbacks
- handling method / closure callbacks (using web sockets)
Type: int Default: the greater of 20 and the number of available processors
Java Code:
ConfigurationProperties.actionHandlerThreadCount(int count)
System Property:
-Dmockserver.actionHandlerThreadCount=...
Environment Variable:
MOCKSERVER_ACTION_HANDLER_THREAD_COUNT=...
Property File:
mockserver.actionHandlerThreadCount=...
Example:
-Dmockserver.actionHandlerThreadCount="20"
Number of threads for each expectation with a method / closure callback (i.e. web socket client) in the org.mockserver.client.MockServerClient
This setting only affects the Java client and determines how many requests each method / closure callback can handle in parallel; the default of 5 should be suitable except in extreme cases.
Type: int Default: 5
Java Code:
ConfigurationProperties.webSocketClientEventLoopThreadCount(int count)
System Property:
-Dmockserver.webSocketClientEventLoopThreadCount=...
Environment Variable:
MOCKSERVER_WEB_SOCKET_CLIENT_EVENT_LOOP_THREAD_COUNT=...
Property File:
mockserver.webSocketClientEventLoopThreadCount=...
Example:
-Dmockserver.webSocketClientEventLoopThreadCount="20"
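For example, the value could be raised programmatically before an expectation with a closure callback is created; in the following sketch the path, response body and thread count are illustrative only:

import static org.mockserver.model.HttpRequest.request;
import static org.mockserver.model.HttpResponse.response;

import org.mockserver.client.MockServerClient;
import org.mockserver.configuration.ConfigurationProperties;

public class CallbackThreadCountExample {
    public static void main(String[] args) {
        // raise the per-callback event loop size before the client is created;
        // the value 10 is illustrative only
        ConfigurationProperties.webSocketClientEventLoopThreadCount(10);
        new MockServerClient("127.0.0.1", 1080)
            .when(request().withPath("/callback"))
            .respond(httpRequest -> response().withBody("dynamic response"));
    }
}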
The minimum level of logs to record in the event log and to output to system out (if system out log output is not disabled). The lower the log level, the more log entries are captured, particularly at TRACE level.
Type: string Default: INFO
Java Code:
ConfigurationProperties.logLevel(String level)
System Property:
-Dmockserver.logLevel=...
Environment Variable:
MOCKSERVER_LOG_LEVEL=...
Property File:
mockserver.logLevel=...
The log level, which can be TRACE, DEBUG, INFO, WARN, ERROR, OFF (SLF4J levels) or FINEST, FINE, INFO, WARNING, SEVERE (Java logger levels)
Example:
-Dmockserver.logLevel="DEBUG"
Disable logging to the system output
Type: boolean Default: false
Java Code:
ConfigurationProperties.disableSystemOut(boolean disableSystemOut)
System Property:
-Dmockserver.disableSystemOut=...
Environment Variable:
MOCKSERVER_DISABLE_SYSTEM_OUT=...
Property File:
mockserver.disableSystemOut=...
Example:
-Dmockserver.disableSystemOut="true"
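For completeness, here is a sketch that sets several of the properties above programmatically before starting MockServer; the values simply mirror the system property examples in this section and are not recommendations:

import org.mockserver.configuration.ConfigurationProperties;
import org.mockserver.integration.ClientAndServer;

public class ProgrammaticConfiguration {
    public static void main(String[] args) {
        // configuration must be applied before the server is started
        ConfigurationProperties.nioEventLoopThreadCount(20);
        ConfigurationProperties.actionHandlerThreadCount(20);
        ConfigurationProperties.logLevel("ERROR");
        ConfigurationProperties.disableSystemOut(true);
        ClientAndServer mockServer = ClientAndServer.startClientAndServer(1080);
    }
}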