Mario Baltazar Reséndiz

An Exploration of Software-defined networks in video streaming, Part One: Where are we now, and why should we change it?

Hello, and welcome to my blog! My name is Mario, and I'm from Mexico City. I'm a senior undergraduate computer science student at Universidad Autónoma Metropolitana.

As part of my final degree project, I have been conducting research and testing on a topic that has stayed consistently relevant in the entertainment space: streaming services. It's safe to say almost everyone we know (ourselves included) is subscribed to one or more of these services, whether they provide music, movies, education, or anything in between. These products continue to evolve and grow at a breakneck pace, and if we stopped to think about what running, maintaining, and expanding applications of such size implies, it would almost seem like a miracle that they work as well as they do.

However, none of this is obscure arcane magic. It is the result of years of R&D across a wide range of fields, which produced the innovations that allowed online multimedia services to get to where they are today.

One such innovation is the software-defined network. These new types of networks represent a radical shift in the way we think about and design infrastructure for mass-scale services, such as streaming. SDNs have been integrated into a wide variety of services, and the benefits they bring have allowed for faster and more reliable applications running on the web. But how have SDNs specifically impacted streaming services?

This entry is part one in a three-part series where I'll discuss the research and tests I conducted on the use of software-defined networks in streaming services. This time, we'll touch briefly on how modern streaming services work, the technology that powered them before the advent of SDNs, and how that technology limited the services' scalability and performance.

Why SDNs?


To start things off, I'd like to give a bit of context as to why I chose this topic as my area of research.

There's no denying that our daily lives are now irreversibly entwined with the digital world. I believe this change has gone beyond simply leaving behind typewriters, printed maps, or video rental stores for their digital counterparts; rather, the digital has become an extension of our observable reality. Of course, such a change is bound to have both positive and negative aspects, but let's try and focus on the good: more people are able to access a wealth of knowledge, lowering the entry barrier to technology and other fields as time goes by and giving everyone an opportunity to pursue their passions and seek personal improvement.

The services that provide these opportunities need to grow beyond what the forefathers of computing and computer networks ever envisioned, and it is technologies like software-defined networks that help push them forward. While this research focuses on video streaming aimed at entertainment, the same technology also powers education, health care, and even personal relationships.

It continues to amaze me how we went from slow DSL connections and limited internet services at the turn of the millennium to being able to complete an entire university degree online, play modern AAA video games in a web browser, communicate in high-definition video with people on the other side of the world, or use online office suites, all powered by high-performance cloud computing and gigabit internet connections.

We can and should continue to pursue the improvement of these technologies for the benefit of everyone around the world.

The true impact of streaming

With that out of the way, let's get to the central topic of this entry by talking about the impact of streaming services in the modern day. I'm sure not a lot of convincing is needed to drive home that these services have had a huge cultural and economic impact over the last 10 to 15 years. However, I'd like to support this argument with cold, hard numbers. According to Forbes and Business of Apps:

  • Netflix dominates as the most-subscribed video streaming service, with 260.28 million subscribers
  • Revenue for the video streaming app industry reached $82.3 billion in 2022, with most of it coming from the United States, and is projected to reach $115 billion by 2026
  • The video streaming industry as a whole is valued at $544 billion

Impressive, isn't it? Suffice to say, streaming is a central part of the Internet, both in the traffic it generates and in the revenue it brings in. The demand for and profitability of streaming has led to increased competition between providers, with each one constantly improving the quality of their product, not only in terms of the content they offer, but also in the experience of using it (fast loading times, an easy-to-use application, etc.).

But how often do we stop to think about how things like this work? It might be that, due to how common and simple (to the user, at least) they have become, we don't really see them as technology worth dissecting and explaining. I certainly think they deserve a second, in-depth look (hence my research), so allow me to explain the inner workings of a streaming service using something everyone loves: pizza!

As easy as (a pizza) pie

Let's set the stage for this explanation, starting from the most fundamental element of any streaming service: the content, whether we're talking about a YouTube video or a movie on Netflix. Now, let's imagine that each movie or video is like a pizza. We know there are places where pizzas are cooked and stored before they are delivered to the customer, such as a pizzeria.

In our streaming service, a pizzeria would be a server, which is a high-performance computer that processes and stores the video. Video processing could be seen as the steps a cook would follow to make a pizza: make the dough, add the ingredients, and so on until we have a fully cooked pizza.
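To make the "cooking" step a little more concrete, here's a minimal sketch of the kind of processing a server might do, assuming ffmpeg is available on the machine and using hypothetical file names: the original video is transcoded into a few renditions at different resolutions and bitrates, something that will come in handy later when we talk about adaptive quality.

```python
import subprocess

# Hypothetical rendition ladder: (label, height in pixels, video bitrate)
RENDITIONS = [
    ("1080p", 1080, "5000k"),
    ("720p", 720, "3000k"),
    ("480p", 480, "1500k"),
]

def transcode(source: str) -> None:
    """Produce one output file per rendition using ffmpeg (assumed to be installed)."""
    for label, height, bitrate in RENDITIONS:
        output = f"movie_{label}.mp4"
        subprocess.run(
            [
                "ffmpeg", "-y",                      # overwrite output if it exists
                "-i", source,                        # input file
                "-vf", f"scale=-2:{height}",         # resize, keeping the aspect ratio
                "-c:v", "libx264", "-b:v", bitrate,  # video codec and target bitrate
                "-c:a", "aac",                       # audio codec
                output,
            ],
            check=True,
        )

if __name__ == "__main__":
    transcode("movie_master.mp4")  # hypothetical source file
```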

Moving on to the next layer of our system, let's imagine that our pizzeria is part of a chain of restaurants that operates with a system like this: in each city, there's a central kitchen that cooks and distributes pizzas and other items to each restaurant in the city. This central kitchen is also home to the central office for the chain's administrative staff. In each restaurant throughout the city, pizzas and other food items are kept warm and sold to customers.

In our streaming service, the central kitchen would be the data center, which is a set of servers that handles not only the tasks I previously mentioned, but everything related to the service's management, such as subscriptions, the application's backend logic, etc.

On the other hand, each restaurant would be analogous to what we know as a content delivery network (CDN). These networks also consist of a set of servers, but unlike the data center, they are only responsible for maintaining a copy of the content. CDNs are placed at strategic points around the globe so users worldwide can access the content faster, and so that internet traffic to the service stays evenly distributed across separate geographical areas.
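To illustrate the "nearest restaurant" idea in code, here's a minimal sketch with made-up edge locations and simulated latencies: the service simply sends the user to the edge that answers fastest. Real CDNs usually make this decision through DNS resolution or anycast routing, but the principle is the same.

```python
import random

def measure_latency(edge: str) -> float:
    """Stand-in for a real latency probe (e.g. a small HTTP request or a ping)."""
    return random.uniform(5, 150)  # simulated round-trip time in milliseconds

def pick_edge(edges: list[str]) -> str:
    """Return the edge server with the lowest measured latency."""
    measurements = {edge: measure_latency(edge) for edge in edges}
    for edge, latency in sorted(measurements.items(), key=lambda kv: kv[1]):
        print(f"{edge:>12}: {latency:6.1f} ms")
    return min(measurements, key=measurements.get)

if __name__ == "__main__":
    # Hypothetical edge locations
    edges = ["mexico-city", "dallas", "sao-paulo", "frankfurt"]
    print("Serving user from:", pick_edge(edges))
```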

The last piece of our system is Quality of Experience (QoE). It refers to the overall performance and user experience provided by the streaming service, and it encompasses factors such as video resolution, playback smoothness, latency, and reliability. Going back to our chain of pizza restaurants: for it to be successful, those who manage it must look after the many aspects that make up the entire service through quality control.

To give an example, let's consider pizza delivery. We want the delivery person to be able to deliver all orders without taking too many detours and in the shortest amount of time possible; all of this while also keeping the driver's safety in mind. The pizzeria could implement a system that provides the delivery person the best route to take based on the addresses to visit, traffic on main streets, etc.

We see something similar in our streaming service: many measures are put in place to keep the user experience at its best, from ensuring that the application works correctly and is easy to use, to including content that appeals to subscribers, to implementing solutions that keep interruptions and problems while viewing content to a minimum.
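As a toy example of that kind of quality control, here's a sketch of how a provider might combine a few playback measurements into a single QoE score. The metrics and weights below are entirely made up for illustration; real providers use far more elaborate models.

```python
from dataclasses import dataclass

@dataclass
class PlaybackSession:
    avg_bitrate_kbps: float   # average video bitrate delivered to the viewer
    startup_delay_s: float    # time from pressing play to the first frame
    rebuffer_ratio: float     # fraction of watch time spent stalled (0..1)
    quality_switches: int     # number of resolution changes during playback

def qoe_score(s: PlaybackSession) -> float:
    """Toy QoE score in [0, 100]; the weights are illustrative, not a standard."""
    score = 100.0
    score -= 300 * s.rebuffer_ratio                               # stalls hurt the most
    score -= 4 * min(s.startup_delay_s, 10.0)                     # slow startup
    score -= 2 * min(s.quality_switches, 10)                      # jarring quality changes
    score -= 20 * (1.0 - min(s.avg_bitrate_kbps / 5000.0, 1.0))   # low picture quality
    return max(0.0, min(100.0, score))

if __name__ == "__main__":
    session = PlaybackSession(avg_bitrate_kbps=3200, startup_delay_s=1.8,
                              rebuffer_ratio=0.02, quality_switches=3)
    print(f"QoE score: {qoe_score(session):.1f}/100")
```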

Challenges of a streaming service

Though simplified, the explanation I've just provided encompasses the essential components of the system that has served as the blueprint for most modern streaming services.

That said, real-life streaming systems are rarely as simple as I've just made them out to be, and setting up and managing one comes with its own set of challenges. We've all experienced a video suddenly stopping in the middle of playback to load, or the image quality dropping out of nowhere. Such behaviors are the result not of a fault in the system itself, but of two important and clever solutions that allow a streaming service to keep working even under adverse conditions: buffering and adaptive quality. We'll discuss how they work and both the benefits and drawbacks they bring to the viewing experience.

As I previously mentioned, Quality of Experience is one of the most important aspects to consider for streaming. Service providers will always look to ensure that the viewing experience is consistent, which means there should be no interruptions, or as few as possible, when watching content.

To that end, streaming services implement Quality of Experience techniques: a series of measures put in place to prioritize real-time multimedia traffic over other types of data, such as web browsing or file downloads. Two such measures are buffering and adaptive quality. They ensure that the service can keep working when it faces disruptions, such as a slow or unreliable internet connection.

Let's see how both of them work:

This is what your player looks like while it's buffering

On the one hand, video buffering is like having a little reservoir of video content stored up while you're watching a video online. When you start streaming a video, the content is loaded into this reservoir faster than you're watching it. So, if there are any hiccups or slowdowns in your internet connection while you're watching, the video can keep playing smoothly from the reservoir without interruption. It's like having a backup plan to make sure your video keeps playing without any annoying pauses or stops, even if your internet isn't perfectly steady.
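If you prefer code to analogies, here's a tiny simulation of that reservoir, with made-up numbers: every second the player adds whatever it managed to download to the buffer and plays one second out of it, and when the network dips and the buffer runs dry, playback stalls until enough video has accumulated again.

```python
# Minimal playback-buffer simulation (all numbers are made up for illustration)
download_rates = [2.0, 2.0, 0.0, 0.0, 0.5, 2.0, 2.0, 2.0]  # seconds of video fetched per second
buffer_s = 0.0       # seconds of video currently sitting in the buffer
MIN_BUFFER = 1.0     # playback needs at least this much buffered to keep going

for t, downloaded in enumerate(download_rates):
    buffer_s += downloaded                 # new video arrives into the reservoir
    if buffer_s >= MIN_BUFFER:
        buffer_s -= 1.0                    # play one second of video from the buffer
        state = "playing"
    else:
        state = "stalled (buffering)"      # the reservoir ran dry, playback pauses
    print(f"t={t}s  downloaded={downloaded:.1f}s  buffer={buffer_s:.1f}s  -> {state}")
```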

A horrible, but illustrative example of adaptive quality

On the other hand, video adaptive quality is like having different versions of the same video available to watch, each with different levels of picture quality. When you're streaming a video online, the system constantly checks your internet speed and adjusts the quality of the video accordingly. If your internet is fast and stable, you get to watch the video in high definition with crisp, clear images. But if your internet slows down or gets a bit shaky, the system automatically switches to a lower quality version of the video, so it can keep playing smoothly without buffering or interruptions. It's like having a flexible video player that adapts to your internet speed to give you the best viewing experience possible, no matter what.
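Sketched in code, an adaptive quality decision can be as simple as measuring the recent download speed and picking the highest rendition that comfortably fits within it. The rendition ladder and safety margin below are assumptions made for the example; real players, and the streaming formats they rely on such as MPEG-DASH and HLS, are considerably more sophisticated.

```python
# Hypothetical rendition ladder: (label, required bandwidth in kbps), best quality first
LADDER = [
    ("1080p", 5000),
    ("720p", 3000),
    ("480p", 1500),
    ("240p", 700),
]

SAFETY_MARGIN = 0.8  # only budget 80% of measured throughput, to absorb fluctuations

def pick_rendition(measured_kbps: float) -> str:
    """Return the highest-quality rendition that fits the measured throughput."""
    budget = measured_kbps * SAFETY_MARGIN
    for label, required in LADDER:
        if required <= budget:
            return label
    return LADDER[-1][0]  # fall back to the lowest quality if nothing fits

if __name__ == "__main__":
    for throughput in [8000, 3500, 1200, 400]:
        print(f"{throughput:>5} kbps available -> play {pick_rendition(throughput)}")
```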

Clearly, adaptive quality and buffering are incredibly clever solutions that are essential for a streaming service to work. However, I tend to think of them as a "necessary evil", or, in the case of adaptive streaming, a fallback for when something goes wrong, to be relied on as sparingly as possible. Don't get me wrong, it's great that a slow connection doesn't mean you can't watch a movie at all, but fluctuating video quality and long load times are not exactly great either.

But what if the fault is not the user's, but rather the provider's? What could be going wrong on their side that leads to excessive buffering or use of adaptive streaming?

If it ain't broke, why fix it?

The lazy man's dilemma

To this day, many streaming services still struggle with these technical problems. Understandably, not all of them have the resources or manpower that Netflix or Google have to deploy advanced solutions at a large scale, nor do they all need to. But the underlying issues are not exclusive to streaming; many other Internet services are afflicted by them too, though the visible symptoms look different for each one. So what's causing all this?

One of the main reasons for slow internet infrastructure is the devices powering the services' networks. No matter how advanced or costly a router, a switch, or a firewall might be, its design rarely changes, and we're talking about designs conceived in the last century, when high-definition multimedia streaming was still a fantasy from the realm of science fiction. Thus, none of these devices inherently provides any tool to guarantee Quality of Experience.

In fact, they work under the concept of "best effort", which means that the device will try to complete its tasks to the best of its abilities, but has no way of providing troubleshooting or error correction should something go wrong during transmission. This is, of course, a problem that engineers have found ways to mitigate through software and hardware solutions, but the underlying limitation remains and can become an even bigger problem in the long run.

Another reason is the size of data center networks and the complexity involved in their configuration and management. Network devices like the ones I've just described, which allow communication between data centers and the outside world, are robust and autonomous, but very inflexible by design. Modifying them to adapt to services with ever-increasing demands for scalability and flexibility is costly and difficult, so those who plan the infrastructure of these services must adapt to the limitations of the devices, not the other way around.

The autonomy of these devices can also be a detriment to the network engineer, especially in services that require fast, low-latency data transmission (such as streaming!). All of this makes setting up and managing networks at this scale an overly complicated process that requires highly skilled specialists and years of specialized training.

It could also be argued that legacy network technology like this does not offer any way to monitor everything that happens on the network. This makes detecting device failures nearly impossible, increasing the chance of a domino effect that could result in prolonged, and thus costly, service downtime.

Another disadvantage of not being able to easily monitor the network is that managing the traffic flowing into it becomes difficult. If many users want to connect to the service and their connections all converge on a couple of network devices, the quality of service suffers. Additionally, traffic redirection is not only something these devices do not do inherently, but something costly and difficult to implement on top of them.

Lastly, because external hardware or software solutions are needed to mitigate these issues, the market is now flooded with them. Most are incompatible with each other and usually sold under the predatory Software-as-a-Service model. This only adds fuel to the network management nightmare, and that's without counting the legacy solutions that big companies still use and are unlikely to replace, given how much doing so would cost.

Looking towards the future

After reading through all this, you might be wondering if there's a better way to do the Internet. Surely, in the era of 4K video and instant gratification, it would seem unthinkable that some of the biggest names in technology are still doing things in such an old-fashioned way. Indeed, times have changed, and for the better.

In 2008, a group was formed to create a revolutionary new paradigm that would completely change the way we think about and build networks: a technology designed with modern Internet needs in mind that has already been implemented in the most important streaming services. That technology is...

Software-defined Networks!

In part 2 of this series, we'll discuss the what, why, and how of SDNs, as well as the ways they're improving streaming services. Stay tuned, and thanks for reading!

