Premature optimization is the root of all evil -- Donald Knuth
Between the ever-growing complexity of modern JS frameworks, high-end developer hardware, and the "premature optimization" mantra, we've ended up with performance problems across the modern web, and especially the modern mobile web (that deserves a separate article).
You probably can't comfortably use modern web sites on a five-year-old mid- or low-budget device, yet most people don't expect to upgrade a PC they use just for web surfing every year. I ended up buying a high-end Android phone just to use social sites comfortably.
One could argue that a user can't tell the difference between 1 ms and 5 ms, but given that a page or data load can involve hundreds of operations, all these small overheads add up to a noticeable performance hit and a poor user experience.
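To make that concrete, here is a minimal, artificial sketch (the operation count and the busyWait helper are invented purely for illustration): it simulates a fixed number of operations that each burn a few milliseconds, just to show how quickly per-operation overhead accumulates.

```js
// Artificial illustration: how small per-operation overheads accumulate.
const OPERATIONS = 300;

function busyWait(ms) {
  // Burn roughly `ms` milliseconds of synchronous work.
  const start = performance.now();
  while (performance.now() - start < ms) { /* spin */ }
}

function run(perOperationMs) {
  const start = performance.now();
  for (let i = 0; i < OPERATIONS; i++) {
    busyWait(perOperationMs);
  }
  return performance.now() - start;
}

console.log(`~1 ms each: ${run(1).toFixed(0)} ms total`); // roughly 300 ms
console.log(`~5 ms each: ${run(5).toFixed(0)} ms total`); // roughly 1500 ms
```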
So I'd say that every millisecond matters in JavaScript. Premature optimization is bad when you spend a significant amount of time on a piece of code that is later thrown away during refactoring. But what if you could write highly performant JavaScript from the beginning? Then you don't need to optimize later, and the whole premature optimization argument is no longer relevant. So learn how to write fast JavaScript; I believe it will pay off.
In the next posts, which will form a series, I will be collecting tips on how to write fast JavaScript.
Top comments (5)
I did some performance testing too, but I found that performance issues in JavaScript are particularly difficult to predict. Often the results are unexpected or completely illogical; in some cases large strings are processed faster than short ones, and so on. And the results may vary from browser to browser.
I'm not sure if there is any reliable source for this kind of hint, but in any case it would have to be a more or less complete review of JS features. We have this for browser features (CanIUse.com), but we would need something like "IsItFast".
Currently the only way is to try it and measure, which makes sense only for really time-critical operations. And even then you may get some surprises...
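A minimal way to "just try" is a tiny comparison harness like the sketch below (assuming performance.now() is available, i.e. a browser or a recent Node). It only illustrates the idea of measuring on the engine you actually care about; real benchmarking tools handle warm-up, dead-code elimination and statistics far more carefully.

```js
// Minimal comparison harness; illustrative only.
function bench(label, fn, iterations = 100_000) {
  for (let i = 0; i < 1_000; i++) fn(i); // warm-up so the JIT can optimize first
  let sink = 0;                          // consume results so the work isn't optimized away
  const start = performance.now();
  for (let i = 0; i < iterations; i++) sink += fn(i);
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(1)} ms (sink=${sink})`);
}

const shortStr = 'abcdef';
const longStr = 'abcdef'.repeat(10_000);

// Results often differ between engines and string sizes -- which is the point.
bench('indexOf, short string', () => shortStr.indexOf('f'));
bench('indexOf, long string', () => longStr.indexOf('zzz'));
```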
In V8 several string formats exist: github.com/danbev/learning-v8/blob...
I use Array#join() to create long strings in my benchmarks, to avoid cons strings (which skew benchmarks).
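For context, that setup might look something like the sketch below: building the same long string two ways. Whether the concatenated version actually ends up as a ConsString is an engine detail (V8 typically builds one for repeated +=), so treat this as illustrative.

```js
const N = 100_000;

// Flat string built in one go via Array#join():
const flat = new Array(N).fill('x').join('');

// Repeated concatenation, which in V8 typically builds a ConsString
// (a tree of string pieces) rather than a flat buffer:
let concatenated = '';
for (let i = 0; i < N; i++) concatenated += 'x';

// Same contents, potentially different internal representation --
// which is enough to skew a benchmark that reads from the string.
console.log(flat.length === concatenated.length); // true
```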
Strings in JS are immutable, so you get a copy any time you "mutate" a string. So it depends a lot on the operation you use. charAt() and even indexOf() are often incredibly fast, while other operations can tank performance. It's just hopeless to try to make any predictions without actually trying.
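A small sketch of the immutability point (the calls are just examples): "mutating" operations return new strings, while reads like charAt() or indexOf() don't have to copy the whole string.

```js
const s = 'hello';

// "Mutation" returns a copy; the original is untouched.
const upper = s.toUpperCase();
console.log(s, upper); // 'hello' 'HELLO'

// Reads don't copy the whole string:
console.log(s.charAt(1));      // 'e'
console.log(s.indexOf('llo')); // 2

// Repeatedly rebuilding large strings, on the other hand, copies data
// each time -- that's where the cost usually hides.
```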
I think this comes down to optimising your own tiny piece of code and then seeing the results not matter at all because you're sitting on top of an enormous collection of frameworks and libraries.
Reducing the number and type of dependencies is the number one way to improve performance, except in very specific circumstances. Optimising expensive repetitive things like third-party API calls is another biggie. For day-to-day JavaScript, I think readability wins over performance in every case.
A concrete example is a complex effect scope in a reactive framework running as you type in an input field, with no 3rd-party code involved except your own. If the typing stutters, there may be some milliseconds being stolen by your readable code...
Also, wasting an extra 25-50 ms after an API call to transform a large chunk of items received from the backend won't make a user any happier...
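As a hypothetical illustration of that kind of transform (field names, sizes and any timings are made up; measure on your own data): the same reshaping done with chained array methods versus a single pass.

```js
// Fake backend payload, just for timing purposes.
const items = Array.from({ length: 200_000 }, (_, i) => ({
  id: i,
  price: Math.random() * 100,
  active: i % 3 !== 0,
}));

// Readable, but three passes and two intermediate arrays:
function transformChained(list) {
  return list
    .filter((it) => it.active)
    .map((it) => ({ ...it, price: Math.round(it.price * 100) / 100 }))
    .map((it) => ({ id: it.id, label: `#${it.id}`, price: it.price }));
}

// Still readable, single pass, no intermediate arrays:
function transformSinglePass(list) {
  const out = [];
  for (const it of list) {
    if (!it.active) continue;
    out.push({
      id: it.id,
      label: `#${it.id}`,
      price: Math.round(it.price * 100) / 100,
    });
  }
  return out;
}

console.time('chained');
transformChained(items);
console.timeEnd('chained');

console.time('single pass');
transformSinglePass(items);
console.timeEnd('single pass');
```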
And writing fast code doesn't always make it less readable...
So I guess we'd need concrete examples here to discuss, but I suspect people would pick a side anyway, regardless of the arguments.