Why is JavaScript so popular?
With an estimated 16+ million developers, JavaScript is a remarkably popular language. We often attribute that to its high-level nature and to the fact that you just need a browser and a text editor: you can write a small program or a lengthy app and see the result right away.
The feedback loop is immediate; we can see the results of code execution as soon as it runs.
There is no need to compile it (or at least, as a developer, you don't need to intervene in the compilation). Of course, modern frontend setups and build tools come equipped with transpilation, bundling, and packaging systems, plus ahead-of-time processes like static analysis, but in essence you can run JavaScript from a text file with nothing more than a browser.
JavaScript needs no compilation...but, does it?
For a program written in JavaScript to run, the source code goes through the following just-in-time (JIT) flow:
- the parser in the JS engine generates an Abstract Syntax Tree (AST), which is a hierarchy of nodes,
- which the interpreter reads and
- generates bytecode (a high-level binary representation) from, for
- the optimizing compiler to finally generate the machine code (low-level binary) that may be optimised as instructions for a specific CPU architecture.
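The first two stages can be sketched in plain JavaScript. This is a toy illustration only: the AST is hand-built rather than produced by a parser, and real engines like V8 go on to generate bytecode and optimized machine code, which this sketch does not model.

```javascript
// Hand-built AST for the expression: 1 + 2 * 3
// (a real parser would produce this hierarchy of nodes from source text)
const ast = {
  type: 'BinaryExpression',
  operator: '+',
  left: { type: 'Literal', value: 1 },
  right: {
    type: 'BinaryExpression',
    operator: '*',
    left: { type: 'Literal', value: 2 },
    right: { type: 'Literal', value: 3 },
  },
};

// A minimal tree-walking interpreter over that hierarchy of nodes
function evaluate(node) {
  if (node.type === 'Literal') return node.value;
  const left = evaluate(node.left);
  const right = evaluate(node.right);
  return node.operator === '+' ? left + right : left * right;
}

console.log(evaluate(ast)); // 7
```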
This cycle of execution uses the client's resources, be it a desktop or a mobile device.
Client-side dynamics
Until very recently, ever since libraries like jQuery and JavaScript frameworks like AngularJS, Ember, and Backbone emerged and became popular around 2009, almost everything dynamic happened on the client side. On top of the cost of interpreting JavaScript during execution, the stars had to align to offer good performance: the requested data had to be distributed (served from the cache of a CDN) to avoid high latency, the user's connection had to be fast, and they needed to be on a high-end, high-capacity device.
In reality, we never know the capacity of the user's device at any given time. And that's a hindrance.
We try to avoid exhausting end-user device resources
In theory, if we ran lower-level code, we would use fewer resources. And it's more than a theory: see this video, where I demonstrate Pagefind, written in Rust and compiled with Wasm as the target, as a static app that ingests and indexes HTML documents and runs highly efficient search queries, all client-side.
Going lower, to reach higher
That's Wasm: a (web) standard binary instruction format and compilation target that runs in the browser, implementing a system of imports and exports.
You literally write the code in the language you prefer and, provided the toolchain is in place (and for JavaScript it is, in experimental or preview state, with teams working on it, for example on JCO), you can compile with Wasm as the target.
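To make the imports-and-exports system concrete, here is a minimal sketch you can run in any modern browser or in Node.js. The module bytes are hand-assembled for the sake of a self-contained example; in practice a toolchain would emit them for you.

```javascript
// A hand-assembled Wasm module that exports one function: add(a, b) -> a + b
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// The second argument is the import object; this module imports nothing,
// but this is where JS functions and memory would be passed in.
WebAssembly.instantiate(bytes, {}).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
});
```

The exports side is what JS calls into (`instance.exports.add`); the imports side is how the module reaches back out to the host.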
At the edge of the cloud
We have seen how we can run Wasm in the browser to attain a high level of performance with low bandwidth cost, client-side. But what about server-side, or in an isomorphic way?
Cloud technologies (and emerging JavaScript meta-frameworks) allow us to render, cache, and then rehydrate dynamic parts of a (server-rendered) static page based on rules, cookies, or headers, and other composability strategies.
This allows us to build dynamic experiences with a backend-for-frontend approach, preventing JavaScript execution from happening client-side, with its unavoidable performance degradation.
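As a sketch of that idea (the names and shapes here are illustrative, not any specific provider's API), an edge handler might choose between a cached static shell and a server-rendered personalized response based on a cookie:

```javascript
// Hypothetical backend-for-frontend edge handler: serve the cached static
// shell to anonymous users, render dynamic parts server-side when a
// session cookie is present. None of this is a real provider's API.
const cache = new Map([['/home', '<html>cached static shell</html>']]);

function render(path, user) {
  return `<html>rendered for ${user} at ${path}</html>`;
}

function handle(path, cookieHeader) {
  const match = /session=(\w+)/.exec(cookieHeader || '');
  if (!match) {
    // Anonymous request: reusable for every user, so the cache can serve it
    return cache.get(path) ?? render(path, 'anonymous');
  }
  // Logged-in request: personalized, so skip the shared cache
  return render(path, match[1]);
}

console.log(handle('/home', ''));            // the cached shell
console.log(handle('/home', 'session=ada')); // a personalized render
```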
And where does this compute or execution run?
We can run it on serverless functions, with Node.js as the runtime, on a host built specifically for that purpose. Yet these serverless environments, which run at the heart of data centers, have some drawbacks. They experience what is known as a cold start: a mix of the time necessary for the runtime to start, load dependencies, and execute the function, plus the latency imposed by the physical distance between the server and the end user's device.
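The instance-reuse part of this behavior can be illustrated with a toy sketch (not a real serverless runtime): work at module scope runs once per instance, the handler runs once per request, so only the first invocation on a fresh instance pays the initialization cost.

```javascript
// Module scope: runs once per instance, i.e. the "cold" part of a cold start
const initStart = Date.now();
const config = { loadedAt: initStart }; // stand-in for loading deps/config
const initMs = Date.now() - initStart;

let invocations = 0;

// Handler scope: runs once per request, on a warm instance if one exists
function handler(event) {
  invocations += 1;
  return { cold: invocations === 1, initMs, event };
}

console.log(handler('a').cold); // true  (first call pays the init cost)
console.log(handler('b').cold); // false (warm instance reuses module state)
```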
Lighter runtimes
How can we spin up lighter runtimes in the cloud, at the edge of the network? Intuitively, we know that the edge means we're closer to the user, so we can reduce latency. But how do we run lower-level code, which is faster, more lightweight, and hence more performant, in a non-browser environment?
We can do that server-side too (or in non-browser environments) with WASI-enabled Wasm. If WebAssembly is a compilation target that uses a system of imports and exports to enable access to Web Platform APIs, with the browser as the execution engine, then WASI is a POSIX-compatible system interface that enables that imports-and-exports strategy to exist where the browser is not part of the host.
This enables the ultimate level of portability for a sandboxed runtime, from frontend to backend, across a myriad of systems and devices.
Don't just read about it, watch me do it!
Here are the slides for the conference talk this article is based on: https://slides.com/anfibiacreativa/wasm-runtimes-portable-secure-lightweight/fullscreen