Thanapon Jindapitak
Server-Sent Events Explained

Hi there, it's been a while 👨🏼‍🦳

In the Internet era, people are mostly connected through Web technology, and even though we talk about gigabit networks nowadays, some people are still obsessed with network utilization: make it faster, don't waste a single byte! They come up with different ideas to improve the protocol, or even try to hack the existing one.

Let me give you an example. In the history of the HTTP protocol, HTTP/1.0 was introduced in 1996, but just one year later they introduced HTTP/1.1. That's fair, because HTTP/1.0 has real limitations: every request needs its own TCP connection, and HTTP/1.x in general suffers from Head-of-Line blocking (HOL blocking, see https://engineering.cred.club/head-of-line-hol-blocking-in-http-1-and-http-2-50b24e9e3372).

OK, HTTP/1.1 then became the standard protocol (a standard here is a document published by the Internet Engineering Task Force (IETF); you may have heard of RFC-XXXX, and HTTP/1.1 was first standardized in RFC 2068). It stayed that way until Google introduced SPDY, which evolved into HTTP/2, standardized by the IETF in 2015 (RFC 7540). HTTP/2 improves many areas and brings features such as multiplexing, header compression, and server push.

As you can predict, Google (again) was still not satisfied with just HTTP/2, so they introduced yet another improved version, QUIC, which became the basis of HTTP/3, standardized in 2022! Earlier versions ride on TCP, but QUIC goes down to layer 4 (OSI Transport layer) and switches from TCP to UDP, implementing its own reliability on top. How crazy! 🤯

We have all been told that TCP is reliable and UDP is unreliable, so how come QUIC (and therefore HTTP/3) can use UDP?

Here is the general idea: QUIC uses packet numbers (the same idea as TCP's sequence numbers) to track which packets arrived and to retransmit lost data itself. (A segment, by the way, is the Protocol Data Unit, PDU, of the Transport layer.) There is more to it, and I'll cover the details in my next blog, but if you want to read more now, see RFC 9000: https://datatracker.ietf.org/doc/html/rfc9000#section-13

...

Phew ~
Let's take a short break

...

Alright, let's go

...

There is no such thing as perfection in this world, and that applies to the world of technology as well. I'm 100% sure there will be more versions of HTTP to come in the future.

But what are we discussing here ...
Just to see, understand, and use a non-perfect protocol?

The answer is Yes! ✅
There are pros and cons to everything. What we need is to understand the tools and options we have on hand, why they were implemented that way, and when to use them and when not to. This reasoning is really useful when we have to analyze or estimate a solution in the future.

...

So much talking, sorry, like I said, it's been a while 😁

...

There is another interesting case of people trying to "utilize" every single byte possible: Server-Sent Events

TL;DR

Server-Sent Events (SSE) leverages the HTTP protocol to keep the connection open until the server decides to close it. It is one-directional communication: only the server can push messages to the client.

A bit of history about SSE (rephrased from Wikipedia: https://en.wikipedia.org/wiki/Server-sent_events)

SSE was first proposed by Ian Hickson in 2004, got an experimental implementation in Opera in 2006, and was later standardized as part of HTML5, back in the HTTP/1.1 days.

...

I'll cover 4 parts today

  1. How SSE works
  2. Example
  3. Pros/Cons
  4. Learn

Not wasting time, let's dive into part 1 (I already wasted too much time above 😅)

Here we go ... Part 1: how SSE actually works.
SSE uses the HTTP protocol, and HTTP relies on TCP to transfer the data. We have to understand that TCP comes with a lot of overhead, because it has to guarantee that segments are delivered reliably. In this blog I'll only mention some parts of TCP, for the sake of simplicity.

At the start of every TCP connection, the client does a 3-way handshake with the server to exchange something called sequence numbers; each side has its own. These numbers are used to track the ordering and delivery of the data sent (I'll explain them in more detail in the next blog).

Here is how TCP's 3-way handshake works:
[Diagram: TCP 3-way handshake]
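The steps above can be sketched as follows (the initial sequence numbers x and y are illustrative):

```
Client                              Server
  |                                    |
  | -------- SYN (seq = x) ----------> |
  | <------- SYN-ACK (seq = y,         |
  |               ack = x + 1) ------- |
  | -------- ACK (ack = y + 1) ------> |
  |                                    |
     connection is now ESTABLISHED
```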

After that, we can say that the connection is OPEN.
And now the client can send HTTP messages to the server via this connection.

Alright, now SSE.
Because SSE is not an everyday HTTP message, there should be a way for the client to tell the server: "hey server, this is a request for SSE; let's keep the connection open until I close it, and you can send me data via this open connection".

OK, here is how. After the connection is open, the client sends an HTTP request, GET /, with the header Accept: text/event-stream

When the server receives the request, it understands that it should treat this connection as an event stream and keep it open. (Note that, unlike WebSocket, there is no real protocol upgrade here; it is still plain HTTP.) That's it!

And when the server wants to send data to the client, it responds once with these headers and then keeps writing events into the never-ending body:

Headers:
Content-Type: text/event-stream
Transfer-Encoding: chunked

Body (one event):
"data: <text>\n\n"


And I'm not joking here... the server has to send data in exactly this format, data: <text>\n\n, or else the client will not understand it

I'll demonstrate it with a diagram, to make it clearer:
[Diagram: SSE message flow between client and server]

data: is not the only field; there are more. If you are interested, please check https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#fields
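To make the format concrete, here is a tiny sketch (the function name is my own) that parses a finished event-stream buffer into its data payloads, ignoring every other field:

```javascript
// Minimal parser sketch for the text/event-stream format:
// events are separated by a blank line, and each "data:" line
// contributes one line to that event's payload.
function parseEventStream(raw) {
  return raw
    .split("\n\n") // a blank line terminates an event
    .map((block) =>
      block
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.replace(/^data: ?/, ""))
        .join("\n") // multiple data: lines join with a newline
    )
    .filter((data) => data.length > 0); // drop comments / empty blocks
}

// parseEventStream("data: 0\n\ndata: 1\n\n") → ["0", "1"]
```

A real client would parse incrementally as bytes arrive, but the field rules are the same.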

That is it! That's the general idea of how SSE works, and Part 1 is already covered.

...

Next is Part 2 Example

I have a Node.js Express server running on port 8080



import express from "express";

const server = express();

server.get("/", (req, res) => res.send("hello"));

server.get("/stream", (req, res) => {
  console.log("SSE Connected ....");

  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.flushHeaders();

  // push an incrementing counter every second
  let i = 0;
  const timer = setInterval(() => {
    res.write("data: " + i++ + "\n\n");
  }, 1000);

  // stop the timer when the client disconnects, otherwise it leaks
  req.on("close", () => clearInterval(timer));
});

server.listen(8080, () => console.log("Listening on 8080"));



And I have multiple clients connected to it:
[Screenshot: many clients connected to the server over SSE]

My client is curl -H Accept:text/event-stream http://localhost:8080/stream

You can see the clients connect to the server using SSE, and then our server sends back i++ every second

That's cool, isn't it? And that covers Part 2; that was quick 😉

...

Part 3 Pros/Cons

Pros

  • while the server is pushing messages, there are no per-message headers! The response headers (Content-Type: text/event-stream, etc.) are sent only once, when the stream starts; after that you just follow the pattern data: <text>\n\n
  • HTTP/2 compatible
  • Firewall friendly, because it's plain HTTP, not any fancy new protocol

Cons

  • Proxies can be a problem (a buffering proxy may delay or break the stream)
  • Layer 7 load balancing can be hard (long-lived connections pin clients to servers)
  • It is hard to scale (though less so than WebSocket)
  • Features are limited; if requirements change (say, the client needs to send messages too), we might have to re-implement with WebSocket

...

Part 4 Learn

I really like this part, because it is purely my own deduction and reasoning about why Ian came up with this idea, so there is a lot of room for discussion. Understanding why something was invented, and trying to make sense of it, is super beneficial.

Ian pushed SSE to become a standard in HTML5 even though WebSocket also exists (see https://html.spec.whatwg.org/multipage/server-sent-events.html), so I think we should compare SSE with WebSocket, just to see in which cases we should use SSE over WebSocket. With both SSE and WebSocket, the server can send data to the client without the client having to ask every single time.

WebSocket

  • It is full-duplex, which means client and server can talk to each other simultaneously
  • The protocol starts with the TCP 3-way handshake; then the client sends a GET request with an Upgrade header. If the server supports WebSocket, it acknowledges, and after that the communication switches to a different protocol, not HTTP anymore, with its own WebSocket framing
  • The client has to implement reconnection logic by itself, whereas with SSE the client is supported by the EventSource interface, which reconnects automatically (supported by ~95% of browsers, see https://developer.mozilla.org/en-US/docs/Web/API/EventSource)
  • Horizontal scaling is hard, because it is stateful
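For comparison with SSE's plain GET request, the WebSocket opening handshake looks roughly like this on the wire (the key and accept values are the sample ones from RFC 6455):

```http
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this 101 response, the connection stops speaking HTTP entirely, which is exactly what some proxies and firewalls dislike.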

Knowing that, I think Ian introduced SSE because he wanted an alternative way for the server to send data to the client (without the client asking every single time), one that stays compatible with existing network devices, for example routers and firewalls. It is also lighter than WebSocket: WebSocket frames carry their own header, while SSE messages do not (the only overhead is the data: <text>\n\n format we have to follow).

That's it! I cannot think of any other reason why Ian invented SSE. If you can think of another good one, please let me know.

End of part 4

...

In the next chapter, let's deep dive into TCP,
and WebSocket afterward.

Cheers !

...

References

QUIC: https://http3-explained.haxx.se/en/quic-v2
Pitfalls of EventSource over HTTP/1.1: https://textslashplain.com/2019/12/04/the-pitfalls-of-eventsource-over-http-1-1/
