Talking about performance is cool
A few days ago, I read Craig Morten's post: What Is The Best Deno Web Framework? It gives an overview of the Deno frameworks and compares them across many aspects. If you haven't read it, I highly encourage you to do so.
The "Performance" section of Craig's post is fascinating: in his environment, the Deno HTTP module gets almost the same result as (or even better than) the Node HTTP module, which amazed me, even though it's just a "Hello Deno!" benchmark.
But I knew that isn't true in most cases. I originally put this down to the Deno HTTP module being written in TypeScript: type checking is a pleasure during development, but it costs extra resources at compile time. As the comments below point out, though, compile time is not runtime, so type checking alone doesn't explain a runtime performance gap.
(Thanks to Oghenovo Usiwoma and Jonathan Beaumont for pointing this out in the comments.)
But I had tested this before and never got the same results as his, so I wanted to see what would happen when I ran the tests on a more powerful machine.
Environment:
- CPU: i5-9600KF @3.7GHz
- RAM: 16GB DDR4 2133MHz
- OS: Windows 10
- Benchmark tool: autocannon
Benchmark scripts (minimal sketches below):
- "Hello World!" deno server
- "Hello World!" node server
Let's look at the results first:
100 concurrent connections
Name | Version | Avg req/sec |
---|---|---|
node.http | 12.16.3 | 47969.2 |
deno.http | 1.0.0 | 47376 |
deno.http | 1.1.0 | 46953.7 |
node.http | 14.2.0 | 44409 |
Wow! It looks like Deno beat Node (at least version 14.2.0; Node 12.16.3 still edges it out).
But wait: this benchmark is based on 100 concurrent connections (`autocannon http://localhost:3000/ -c100`), which is a lot for a small-to-medium-sized server. So I gave it another try, this time with `-c10`, which keeps sending requests over 10 concurrent connections.
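If you want to reproduce the runs from a script rather than the CLI, autocannon also exposes a programmatic API. This is only a sketch under that assumption; the post itself used the CLI commands quoted above.

```ts
// bench.ts — programmatic equivalent of `autocannon http://localhost:3000/ -cN`
import autocannon from "autocannon";

async function run(connections: number) {
  const result = await autocannon({
    url: "http://localhost:3000",
    connections, // 100 for the first run, 10 for the second
    duration: 10, // seconds per run
  });
  // "Avg req/sec" in the result tables corresponds to result.requests.average
  console.log(`${connections} connections:`, result.requests.average, "req/sec");
}

run(100).then(() => run(10));
```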
10 concurrent connections
Name | Version | Avg req/sec |
---|---|---|
node.http | 12.16.3 | 49926.69 |
node.http | 14.2.0 | 45345.33 |
deno.http | 1.1.0 | 34806.79 |
deno.http | 1.0.0 | 34742.37 |
This time, Node is clearly the winner. There is a significant difference between the two: Node handles roughly 10K more requests per second than Deno.
But this raises a new question:
Why does the concurrent connection number matter?
A concurrent connection is a connection that is open at the same time as another connection. If a user requests an HTML page and, 5 ms later, after that request has already finished, another user requests a CSS file, those two requests are not considered concurrent connections.
Calculating concurrent connections
Let's say you have 100 active users on an app. You track these users for an hour and find that each of them makes 60 clicks per hour. That's 6,000 total requests to the server, and each request takes 2 seconds to process (which is very slow). That's 12,000 seconds of active connection time within 3,600 seconds (an hour), which works out to 3.33 concurrent connections on average (12000 / 3600 = 3.33). The short sketch after the questions below runs the same arithmetic.
What if the server is powerful enough to process each request within 1 second?
- The concurrency is even lower: fewer than 1.7 concurrent connections.
How about 1000 active users?
- We get 33.3 concurrent connections
Are 100 concurrent connections overkill?
- Yes, absolutely! Unless you want to build an enterprise-level application that serves 3,000+ active users (in the scenario above).
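Here is that back-of-the-envelope calculation as code. It is only a sketch of the estimate described above; `concurrentConnections` is a hypothetical helper name, not something from either runtime.

```ts
// Rough average concurrency: (total requests per hour * seconds per request) / 3600
function concurrentConnections(
  activeUsers: number,
  requestsPerUserPerHour: number,
  secondsPerRequest: number,
): number {
  const totalRequests = activeUsers * requestsPerUserPerHour;
  const busySeconds = totalRequests * secondsPerRequest;
  return busySeconds / 3600; // seconds in an hour
}

console.log(concurrentConnections(100, 60, 2));  // ≈ 3.33
console.log(concurrentConnections(100, 60, 1));  // ≈ 1.67
console.log(concurrentConnections(1000, 60, 2)); // ≈ 33.33
console.log(concurrentConnections(3000, 60, 2)); // ≈ 100 — the enterprise-level scenario
```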
So, can you tell me which one performs better: Node or Deno?
With that said, the result with 10 concurrent connections is closer to reality, so Node wins the performance round.
Deno does perform better in the `-c100` test, though, which suggests it handles higher-traffic networks better than Node. That's great. Still, from what I've seen, companies would rather use Java or other well-known, proven languages than Deno to handle huge amounts of traffic, since Deno is still young and not production-ready at this point. However, everyone in the Deno community is making it happen 🔥.
Lastly, I want to mention one more point: performance is not the key to success for a dev community. Look at Python and PHP; they perform much slower than Node and Deno. What makes them popular? The ecosystem and the people!
Extra topic
Consider that Google handles over 75,000 queries per second.
Can your server handle this many requests (>40K) per second?
Top comments (15)
Hi! You mentioned that Deno should be slower because it has to type check at compile time. I understand compile time to be different from runtime. Unless type-checking is done at runtime, I don't see how it should affect the runtime performance of Deno.
It's also worth pointing out that runtime type information can have a large positive effect on performance.
The more information the runtime has about the code that it's executing, the more optimisations it can apply whilst the application is cold!
Spot on.
Very good point!
Well, you are right. After rethinking it and running a few more tests, I should update my post a little bit.
Hey man, you should really update this post instead of writing another (or at least add a disclaimer and link to the comment thread). It is super confusing to share misinformation publicly.
Deno has different performance to Node because it is a different implementation; the server internals are written in Rust as opposed to C/C++, and they haven't been optimised heavily yet. A quick GitHub issue search actually shows that several recent PRs have improved performance lately, but I digress.
With respect, you really shouldn't be writing a blog post about performance if you don't know the difference between compile time and runtime.
Thanks for the comment. Sorry about that.
I should have done more research and made it clearer.
Nice one man! Reading this back, sorry if that comment came across a little harsh. I've read a lot about deno but haven't had the time to build much yet, so kudos. Keep on posting 🎉
My understanding of the benefit of Node is high I/O performance in heavy-traffic scenarios compared to other languages/frameworks, with relatively less code, and I think Deno is aiming to be a better Node. Given that purpose/goal of Deno, it's a good thing that Deno shows higher performance than Node under heavier traffic.
Honestly, I have been looking for a post on Deno vs Node performance for a while now, thank you for writing this.
V8 is V8; what you should be comparing is build/parsing time.
Does HTTP2 make things easier?
Not in this scenario, no. Autocannon is used to test concurrent unique users, rather than just one user hammering the API.
The main improvement with HTTP2 is reducing the number of roundtrip requests that a single user might make (by reusing the TCP connection), but it doesn't have any meaningful effect on a group of unique users. Hope that helps!
It would be an interesting test though, to see if there was a meaningful difference. But I'm doubtful, because the biggest overhead in an HTTP request is the TCP connection itself.
Deno claims to be a better Node.js, but it can only be a different Node.js if it can't beat Node.js's performance! At least until the authors make that improvement (which potentially may never happen)!
If it's the same performance-wise but can still offer other benefits, it would still be considered better. Also hello world tests are virtually meaningless, I feel 🤔
This middleware for Deno brings good results: deno.land/x/faster