My boss asked me to try building a high-TPS HTTP server with Node.js. The business logic itself is unimportant; the point is simply to test how much better the famously I/O-friendly technology is than a Java web container. The test notes, in English:
I tried several approaches to increase the TPS of a Node.js HTTP server, to check whether it is competitive as an easy tool for certain specific tasks.
I created a simple HTTP server based on Node.js's native http module. It receives HTTP requests, records their information into a (remote) MongoDB, then responds with 'Okay'.
The test tool is Apache Bench, installed on the same host machine as the HTTP server: a Dell OptiPlex 7010 desktop with an 8-core CPU and 8 GB of memory, running Oracle Linux Server 6.8.
Optimization approaches include:
- Increasing the host's open-files limit with `ulimit -n 99999` (the default is 1024), and increasing V8's default heap size in Node with `--max-old-space-size=2048` (this flag sets the old-generation heap size, not the stack):

  ```
  [root@pu ~]# ulimit -a
  core file size          (blocks, -c) 0
  data seg size           (kbytes, -d) unlimited
  scheduling priority             (-e) 0
  file size               (blocks, -f) unlimited
  pending signals                 (-i) 31197
  max locked memory       (kbytes, -l) 64
  max memory size         (kbytes, -m) unlimited
  open files                      (-n) 99999
  pipe size            (512 bytes, -p) 8
  POSIX message queues     (bytes, -q) 819200
  real-time priority              (-r) 0
  stack size              (kbytes, -s) 10240
  cpu time               (seconds, -t) unlimited
  max user processes              (-u) 31197
  virtual memory          (kbytes, -v) unlimited
  file locks                      (-x) unlimited
  ```
- Reusing TCP connections for successive requests, i.e. making use of the Keep-Alive feature of HTTP/1.0.
- Making the HTTP server a cluster, to make use of more CPU cores.
- Changing the business logic to return immediately upon receiving a request, instead of waiting for the database to finish recording.
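The last item can be sketched like this; `recordRequest` is a hypothetical stand-in for the asynchronous Mongo insert:

```javascript
// Hypothetical stand-in for the asynchronous MongoDB insert.
function recordRequest(info, callback) {
  setImmediate(() => callback(null));
}

// Optimized handler: acknowledge immediately, let the DB write finish
// in the background (fire-and-forget).
function fastHandler(req, res) {
  const body = 'Okay';
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(body); // the client is released here, before the DB write
  recordRequest({ url: req.url, time: Date.now() }, (err) => {
    // the response has already been sent, so errors can only be logged
    if (err) console.error('mongo write failed:', err);
  });
}
```

The trade-off is that a failed write can no longer be reported to the client, which is acceptable here since the business logic does not matter for the benchmark.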
Increasing the max open files (and hence the number of sockets) as well as the V8 heap size didn't improve performance, which means we had reached neither the limit on parallel sockets nor the memory limit.
The HTTP header 'Connection: keep-alive' is needed for HTTP/1.0 to reuse a connection for further requests, while for HTTP/1.1 connections are keep-alive by default. Apache Bench uses HTTP/1.0, and with the parameter `-k` it adds the keep-alive header.
Since HTTP/1.0 can't make use of 'Transfer-Encoding: chunked', there is only one way for the client to determine the boundary between successive responses on a single connection: 'Content-Length'. The content length is easy to know when serving a static file, but for a dynamic page we need to calculate 'Content-Length' manually and set it in the response header. This is what we do by adding code to the Node.js HTTP server.
By doing this, the throughput increased.
Introducing concurrency involves two aspects:
- increasing the concurrency level of the test client;
- increasing the concurrency level of the HTTP server.
Since we run the test on the very same machine where the HTTP server is deployed, the bottleneck can shift between client and server, so blindly raising the concurrency level won't always improve performance.
Increasing the concurrency of Apache Bench is easy: just raise the value of the `-c` parameter. This increases the TPS, but only within a certain range, approximately 1-50. Within this range, a higher concurrency level yields a higher TPS; beyond it, the TPS neither increases nor decreases. For example, raising the concurrency level to a nonsensically high value yields no more TPS than 50 does.
To increase the concurrency of the Node.js HTTP server, we use Node's built-in 'cluster' module, creating several workers that contend for a single port. After some tuning, I found that 4 workers gave the best performance. Unlike with Apache Bench, raising the server's concurrency level above 4 caused the total TPS to decrease, because the extra workers occupy CPU resources that Apache Bench needs.
It has been argued that several workers contending for the same port is less efficient than four workers each listening on its own port, with a reverse proxy such as Nginx in front to balance the load. This approach has not been tried yet.
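That untried setup would look roughly like the following hypothetical nginx config (the ports and the upstream name are made up):

```nginx
# Load-balance across four independent node processes,
# each listening on its own port.
upstream node_backend {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
    server 127.0.0.1:8084;
    keepalive 64;  # reuse upstream connections
}

server {
    listen 8080;
    location / {
        proxy_pass http://node_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # enable upstream keep-alive
    }
}
```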
I tried removing the code snippet that writes to MongoDB and ran the test again. In that situation, the Node.js server has about the same TPS as the Apache httpd server.
So for static pages Node.js is not especially powerful; its value lies in the fact that when business logic is added, the TPS doesn't drop rapidly.
Stability: the last time I tried this HTTP server, it showed periodic TPS drops, probably related to V8's GC; this needs more detailed investigation.