Nik L.
JSON is Slower. Here Are Its 4 Faster Alternatives

Edit 2: Lots of insightful comments at the bottom, do give them a read, too, before going with any alternatives!
Edit 1: Added a new take on 'Optimizing JSON Performance' from comments


Your users want instant access to information, swift interactions, and seamless experiences. JSON, short for JavaScript Object Notation, has been a loyal companion for data interchange in web development, but could it be slowing down your applications? Let's dive deep into the world of JSON, explore its potential bottlenecks, and discover faster alternatives and optimization techniques to make your apps sprint like cheetahs.


You might want to check this tutorial too: Using Golang to Build a Real-Time Notification System - A Step-by-Step Notification System Design Guide


What is JSON and Why Should You Care?

Before we embark on our journey to JSON optimization, let's understand what JSON is and why it matters.

JSON is the glue that holds together the data in your applications. It’s the language in which data is communicated between servers and clients, and it’s the format in which data is stored in databases and configuration files. In essence, JSON plays a pivotal role in modern web development.

Understanding JSON and its nuances is not only a fundamental skill for any web developer but also crucial for optimizing your applications. As we delve deeper into this blog, you’ll discover why JSON can be a double-edged sword when it comes to performance and how this knowledge can make a significant difference in your development journey.

The Popularity of JSON and Why People Use It

JSON’s popularity in the world of web development can’t be overstated. It has emerged as the de facto standard for data interchange for several compelling reasons:

  1. Human-Readable Format: JSON uses a straightforward, text-based structure that is easy for both developers and non-developers to read and understand. This human-readable format enhances collaboration and simplifies debugging.


   // Inefficient
   {
     "customer_name_with_spaces": "John Doe"
   }

   // Efficient
   {
     "customerName": "John Doe"
   }


  2. Language Agnostic: JSON is not tied to any specific programming language. It’s a universal data format that can be parsed and generated by almost all modern programming languages, making it highly versatile.

  3. Data Structure Consistency: JSON enforces a consistent structure for data, using key-value pairs, arrays, and nested objects. This consistency makes it predictable and easy to work with in various programming scenarios.



   // Inefficient
   {
     "order": {
       "items": {
         "item1": "Product A",
         "item2": "Product B"
       }
     }
   }

   // Efficient
   {
     "orderItems": ["Product A", "Product B"]
   }


  4. Browser Support: JSON is supported natively in web browsers, allowing web applications to communicate with servers seamlessly. This native support has contributed significantly to its adoption in web development.

  5. JSON APIs: Many web services and APIs provide data in JSON format by default. This has further cemented JSON’s role as the go-to choice for data interchange in web development.

  6. JSON Schema: Developers can use JSON Schema to define and validate the structure of JSON data, adding an extra layer of clarity and reliability to their applications (a minimal sketch follows below).
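   For illustration (not part of the original list), a minimal JSON Schema sketch that validates the customer object used elsewhere in this post might look like this:

   {
     "$schema": "https://json-schema.org/draft/2020-12/schema",
     "type": "object",
     "properties": {
       "customerName": { "type": "string" }
     },
     "required": ["customerName"]
   }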

Given these advantages, it’s no wonder that developers across the globe rely on JSON for their data interchange needs. However, as we explore deeper into the blog, we’ll uncover the potential performance challenges associated with JSON and how to address them effectively.

The Need for Speed

Users expect instant access to information, swift interactions, and seamless experiences across web and mobile applications. This demand for speed is driven by several factors:

User Expectations

Users have grown accustomed to lightning-fast responses from their digital interactions. They don’t want to wait for web pages to load or apps to respond. A delay of even a few seconds can lead to frustration and abandonment.

Competitive Advantage

Speed can be a significant competitive advantage. Applications that respond quickly tend to attract and retain users more effectively than sluggish alternatives.

Search Engine Rankings

Search engines like Google consider page speed as a ranking factor. Faster-loading websites tend to rank higher in search results, leading to increased visibility and traffic.

Conversion Rates

E-commerce websites, in particular, are acutely aware of the impact of speed on conversion rates. Faster websites lead to higher conversion rates and, consequently, increased revenue.

Mobile Performance

With the expansion of mobile devices, the need for speed has become even more critical. Mobile users often have limited bandwidth and processing power, making fast app performance a necessity.

Is JSON Slowing Down Our Apps?

Now, let’s address the central question: Is JSON slowing down our applications?

JSON, as mentioned earlier, is an immensely popular data interchange format. It’s flexible, easy to use, and widely supported. However, this widespread adoption doesn’t make it immune to performance challenges.

JSON, in certain scenarios, can be a culprit when it comes to slowing down applications. The process of parsing JSON data, especially when dealing with large or complex structures, can consume valuable milliseconds. Additionally, inefficient serialization and deserialization can impact an application’s overall performance.

Parsing Overhead

When JSON data arrives at your application, it must undergo a parsing process to transform it into a usable data structure. Parsing can be relatively slow, especially when dealing with extensive or deeply nested JSON data.



// JavaScript example using JSON.parse for parsing
const jsonData = '{"key": "value"}';
const parsedData = JSON.parse(jsonData);



Serialization and Deserialization

JSON requires data to be serialized (encoding objects into a string) when sent from a client to a server and deserialized (converting the string back into usable objects) upon reception. These steps can introduce overhead and affect your application’s overall speed.



// Node.js example using JSON.stringify for serialization
const data = { key: 'value' };
const jsonString = JSON.stringify(data);



String Manipulation

JSON is text-based, relying heavily on string manipulation for operations like concatenation and parsing. String handling can be slower compared to working with binary data.

Lack of Data Types

JSON has a limited set of data types (e.g., strings, numbers, booleans). Complex data structures might need less efficient representations, leading to increased memory usage and slower processing.



{
  "quantity": 1.0
}



Verbosity

JSON’s human-readable design can result in verbosity. Redundant keys and repetitive structures increase payload size, causing longer data transfer times.



// Inefficient
{
  "product1": {
    "name": "Product A",
    "price": 10
  },
  "product2": {
    "name": "Product A",
    "price": 10
  }
}



No Binary Support

JSON lacks native support for binary data. When dealing with binary data, developers often need to encode and decode it into text, which can be less efficient.
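As a rough sketch (assuming Node.js Buffers; not from the original post), this is what that encode/decode round trip typically looks like, with base64 inflating the payload by roughly a third:

// Node.js sketch: smuggling binary data through JSON via base64
const imageBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // pretend these are file bytes

// Encode to text so the bytes can live inside a JSON string
const payload = JSON.stringify({ image: imageBytes.toString('base64') });

// Decode back to binary on the receiving side
const decoded = Buffer.from(JSON.parse(payload).image, 'base64');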

Deep Nesting

In some scenarios, JSON data can be deeply nested, requiring recursive parsing and traversal. This computational complexity can slow down your application, especially without optimization.
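To make the traversal cost concrete, here is a small hypothetical helper (not from the original post) that recursively walks a parsed structure; every extra level of nesting adds another round of calls like this:

// Recursively count leaf values in a parsed JSON structure
function countLeaves(node) {
  if (node === null || typeof node !== 'object') return 1;
  return Object.values(node).reduce((sum, child) => sum + countLeaves(child), 0);
}

const deeplyNested = { a: { b: { c: { d: [1, 2, 3] } } } };
console.log(countLeaves(deeplyNested)); // 3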


Similar to this, I, along with other open-source loving dev folks, run a developer-centric community on Slack, where we discuss these kinds of topics, implementations, integrations, some truth bombs, weird chats, virtual meets, open-source contributions, and everything that will help a developer remain sane ;) After all, too much knowledge can be dangerous too.

I'm inviting you to join our free community (no ads, I promise, and I intend to keep it that way), take part in discussions, and share your freaking experience & expertise. You can fill out this form, and a Slack invite will ring your email in a few days. We have amazing folks from some of the great companies (Atlassian, Gong, Scaler), and you wouldn't wanna miss interacting with them. Invite Form

Let's continue...


Alternatives to JSON

While JSON is a versatile data interchange format, its performance limitations in certain scenarios have led to the exploration of faster alternatives. Let’s delve into some of these alternatives and understand when and why you might choose them:

Protocol Buffers

Protocol Buffers, also known as protobuf, is a binary serialization format developed by Google. It excels in terms of speed and efficiency. Here’s why you might consider using Protocol Buffers:

  1. Binary Encoding: Protocol Buffers use binary encoding, which is more compact and faster to encode and decode compared to JSON’s text-based encoding.

  2. Efficient Data Structures: Protocol Buffers allow you to define efficient data structures with precise typing, enabling faster serialization and deserialization.

  3. Schema Evolution: Protocol Buffers support schema evolution, meaning you can update your data structures without breaking backward compatibility.



syntax = "proto3";

message Person {
  string name = 1;
  int32 age = 2;
}


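A hedged usage sketch, assuming the protobufjs package and that the schema above is saved as person.proto:

// JavaScript example using protobufjs (assumed dependency) for encoding/decoding
const protobuf = require('protobufjs');

protobuf.load('person.proto').then((root) => {
  const Person = root.lookupType('Person');

  // Encode to a compact binary buffer
  const buffer = Person.encode(Person.create({ name: 'John Doe', age: 30 })).finish();

  // Decode back into a message object
  const decoded = Person.decode(buffer);
  console.log(decoded.name, decoded.age);
});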

MessagePack

MessagePack is another binary serialization format designed for efficiency and speed. Here’s why you might consider using MessagePack:

  1. Compactness: MessagePack produces highly compact data representations, reducing data transfer sizes.

  2. Binary Data: MessagePack provides native support for binary data, making it ideal for scenarios involving binary information.

  3. Speed: The binary nature of MessagePack allows for rapid encoding and decoding.



// JavaScript example using MessagePack for serialization
const msgpack = require('msgpack-lite');
const data = { key: 'value' };
const packedData = msgpack.encode(data);


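For completeness, decoding with the same msgpack-lite library is symmetric (a small illustrative addition to the example above):

// Decoding the packed buffer back into an object
const unpackedData = msgpack.decode(packedData);
console.log(unpackedData.key); // 'value'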

BSON (Binary JSON)

BSON, often pronounced as "bee-son" or "bi-son," is a binary serialization format used primarily in databases like MongoDB. Here’s why you might consider using BSON:

  1. JSON-Like Structure: BSON maintains a JSON-like structure with added binary data types, offering a balance between efficiency and readability.

  2. Binary Data Support: BSON provides native support for binary data types, which is beneficial for handling data like images or multimedia.

  3. Database Integration: BSON seamlessly integrates with databases like MongoDB, making it a natural choice for such environments.



{
  "_id": ObjectId("60c06fe9479e1a1280e6bfa7"),
  "name": "John Doe",
  "age": 30
}


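A hedged round-trip sketch, assuming the bson package for Node.js:

// JavaScript example using the bson package (assumed dependency)
const { serialize, deserialize, ObjectId } = require('bson');

const doc = { _id: new ObjectId(), name: 'John Doe', age: 30 };

// Serialize to a binary Buffer and back
const bytes = serialize(doc);
const roundTripped = deserialize(bytes);
console.log(roundTripped.name); // 'John Doe'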

Avro

Avro is a data serialization framework developed within the Apache Hadoop project. It emphasizes schema compatibility and performance. Here’s why you might consider using Avro:

  1. Schema Compatibility: Avro prioritizes schema compatibility, allowing you to evolve your data structures without breaking compatibility.

  2. Binary Data: Avro uses a compact binary encoding format for data transmission, resulting in smaller payloads.

  3. Language-Neutral: Avro supports multiple programming languages, making it suitable for diverse application ecosystems.



{
  "type": "record",
  "name": "Person",
  "fields": [
    { "name": "name", "type": "string" },
    { "name": "age", "type": "int" }
  ]
}


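A hedged usage sketch, assuming the avsc package (a widely used Avro implementation for Node.js) and the schema above:

// JavaScript example using avsc (assumed dependency)
const avro = require('avsc');

const personType = avro.Type.forSchema({
  type: 'record',
  name: 'Person',
  fields: [
    { name: 'name', type: 'string' },
    { name: 'age', type: 'int' }
  ]
});

// Encode to a compact binary buffer and decode it again
const buf = personType.toBuffer({ name: 'John Doe', age: 30 });
const person = personType.fromBuffer(buf);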

The choice between JSON and its alternatives depends on your specific use case and requirements. If schema compatibility is crucial, Avro might be the way to go. If you need compactness and efficiency, MessagePack and Protocol Buffers are strong contenders. When dealing with binary data, MessagePack and BSON have you covered. Each format has its strengths and weaknesses, so pick the one that aligns with your project's needs.

Optimizing JSON Performance

But what if you're committed to using JSON, despite its potential speed bumps? How can you make JSON run faster and more efficiently? The good news is that there are practical strategies and optimizations that can help you achieve just that. Let's explore these strategies with code examples and best practices.

1. Minimize Data Size

a. Use Short, Descriptive Keys: Choose concise but meaningful key names to reduce the size of JSON objects.



   // Inefficient
   {
     "customer_name_with_spaces": "John Doe"
   }

   // Efficient
   {
     "customerName": "John Doe"
   }



b. Abbreviate When Possible: Consider using abbreviations for keys or values when it doesn’t sacrifice clarity.



   // Inefficient
   {
     "transaction_type": "purchase"
   }

   // Efficient
   {
     "txnType": "purchase"
   }



2. Use Arrays Wisely

a. Minimize Nesting: Avoid deeply nested arrays, as they can increase the complexity of parsing and traversing JSON.



   // Inefficient
   {
     "order": {
       "items": {
         "item1": "Product A",
         "item2": "Product B"
       }
     }
   }

   // Efficient
   {
     "orderItems": ["Product A", "Product B"]
   }



3. Optimize Number Representations

a. Use Integers When Possible: If a value can be represented as an integer, use that instead of a floating-point number.



   // Inefficient
   {
     "quantity": 1.0
   }

   // Efficient
   {
     "quantity": 1
   }



4. Remove Redundancy

a. Avoid Repetitive Data: Eliminate redundant data by referencing shared values.



   // Inefficient
   {
     "product1": {
       "name": "Product A",
       "price": 10
     },
     "product2": {
       "name": "Product A",
       "price": 10
     }
   }

   // Efficient
   {
     "products": [
       {
         "name": "Product A",
         "price": 10
       },
       {
         "name": "Product B",
         "price": 15
       }
     ]
   }



5. Use Compression

a. Apply Compression Algorithms: If applicable, use compression algorithms like Gzip or Brotli to reduce the size of JSON payloads during transmission.



   // Node.js example using zlib for Gzip compression
   const zlib = require('zlib');

   const jsonData = {
     // Your JSON data here
   };

   zlib.gzip(JSON.stringify(jsonData), (err, compressedData) => {
     if (!err) {
       // Send compressedData over the network
     }
   });



Following up on Samuel's comment (quoted in full below), I am adding an edit.

This is an interesting collection of notes and options. Thanks for the article!

Can you provide links, especially for the "Real-World Optimizations" section? I would appreciate being able to learn more about the experiences of these different companies and situations.

In the "Optimizing JSON Performance" section the example suggests using compression within Javascript for performance improvement. This should generally be avoided in favor of HTTP compression at the connection level. HTTP supports the same zlib, gzip, and brotli compression options but with potentially much more efficient implementations.

While protocol buffers and other binary options undoubtedly provide performance and capabilities that JSON doesn't, I think it undersells how much HTTP compression and HTTP/2 matter.

I did some small work optimizing JSON structures a decade ago when working in eCommerce to offset transfer size and traversal costs. While there are still some benefits to using columnar data (object of arrays) over the usual "Collection" (array of objects), a number of the concerns identified, like verbose keys, are essentially eliminated by compression if they are used in repetition.

HTTP/2 also cuts down overhead costs for requests, making it more efficient to request JSON – or any format – in smaller pieces and accumulate them on the client for improved responsiveness.

There are some minor formatting issues, and it is lacking in sources, but it provides a great base of information and suggestions.



As Samuel rightly observes, the adoption of HTTP/2 has brought significant advancements, particularly in optimizing data interchange formats like JSON. HTTP/2's multiplexing capabilities efficiently manage multiple requests over a single connection, enhancing responsiveness and reducing overhead.

In practical terms, a comprehensive optimization strategy may involve both embracing HTTP/2 and utilizing compression techniques per your use-case, recognizing that each approach addresses specific aspects of network efficiency and performance. HTTP/2 excels in network-level optimization, while compression strategies enhance application-level efficiency, and the synergy between them can lead to substantial gains in data handling speed and resource utilization.
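To make that concrete, here is one hedged sketch of pushing compression down to the HTTP layer instead of doing it in application code, assuming an Express server with the compression middleware (a reverse proxy such as Nginx can do the same job even more efficiently):

// Node.js sketch: let HTTP middleware negotiate compression instead of compressing JSON by hand
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression()); // responses are compressed based on the client's Accept-Encoding header

app.get('/api/data', (req, res) => {
  res.json({ message: 'The JSON itself stays untouched; the transport layer shrinks it.' });
});

app.listen(3000);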




6. Employ Server-Side Caching

a. Cache JSON Responses: Implement server-side caching to store and serve JSON responses efficiently, reducing the need for repeated data processing.
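A minimal in-memory sketch of the idea (the names and TTL here are assumptions, not a production cache):

// Naive in-memory cache for already-stringified JSON responses
const cache = new Map();
const TTL_MS = 60 * 1000; // keep entries for one minute

function getCachedJson(key, buildData) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.body; // reuse the cached, pre-serialized payload
  }
  const body = JSON.stringify(buildData());
  cache.set(key, { body, storedAt: Date.now() });
  return body;
}

// Usage (hypothetical): getCachedJson('products', () => loadProductsFromDb());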

7. Profile and Optimize

a. Profile Performance: Use profiling tools to identify bottlenecks in your JSON processing code, and then optimize those sections.
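For example, a quick sketch using Node's built-in perf_hooks to see how long parsing actually takes (illustrative only):

// Node.js sketch: timing JSON.parse with perf_hooks
const { performance } = require('perf_hooks');

const bigJson = JSON.stringify({ items: Array.from({ length: 100000 }, (_, i) => ({ id: i })) });

const start = performance.now();
JSON.parse(bigJson);
console.log(`JSON.parse took ${(performance.now() - start).toFixed(2)} ms`);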

Remember that the specific optimizations you implement should align with your application’s requirements and constraints.

Real-World Optimizations: Speeding Up JSON in Practice

Now that you've explored the theoretical aspects of optimizing JSON, it's time to dive headfirst into real-world applications and projects that encountered performance bottlenecks with JSON and masterfully overcame them. These examples provide valuable insights into the strategies employed to boost speed and responsiveness while still leveraging the versatility of JSON.

1. LinkedIn’s Protocol Buffers Integration

Challenge: LinkedIn's Battle Against JSON Verbosity and Network Bandwidth Usage

LinkedIn, the world's largest professional networking platform, faced an arduous challenge. Their reliance on JSON for microservices communication led to verbosity and increased network bandwidth usage, ultimately resulting in higher latencies. In a digital world where every millisecond counts, this was a challenge that demanded a solution.

Solution: The Power of Protocol Buffers

LinkedIn turned to Protocol Buffers, often referred to as protobuf, a binary serialization format developed by Google. The key advantage of Protocol Buffers is its efficiency, compactness, and speed, making it significantly faster than JSON for serialization and deserialization.

Impact: Reducing Latency by up to 60%

The adoption of Protocol Buffers led to a remarkable reduction in latency, with reports suggesting improvements of up to 60%. This optimization significantly enhanced the speed and responsiveness of LinkedIn's services, delivering a smoother experience for millions of users worldwide.

2. Uber’s H3 Geo-Index

Challenge: Uber's JSON Woes with Geospatial Data

Uber, the ride-hailing giant, relies heavily on geospatial data for its operations. JSON was the default choice for representing geospatial data, but parsing JSON for large datasets proved to be a bottleneck, slowing down their algorithms.

Solution: Introducing the H3 Geo-Index

Uber introduced the H3 Geo-Index, a highly efficient hexagonal grid system for geospatial data. By shifting from JSON to this innovative solution, they managed to reduce JSON parsing overhead significantly.

Impact: Accelerating Geospatial Operations

This optimization substantially accelerated geospatial operations, enhancing the efficiency of Uber's ride-hailing services and mapping systems. Users experienced faster response times and more reliable service.

3. Slack’s Message Format Optimization

Challenge: Slack's Battle with Real-time Message Rendering

Slack, the messaging platform for teams, needed to transmit and render large volumes of JSON-formatted messages in real-time chats. However, this led to performance bottlenecks and sluggish message rendering.

Solution: Streamlining JSON Structure

Slack optimized their JSON structure to reduce unnecessary data. They started including only essential information in each message, trimming down the payload size.

Impact: Speedier Message Rendering and Enhanced Chat Performance

This optimization led to a significant improvement in message rendering speed. Slack users enjoyed a more responsive and efficient chat experience, particularly in busy group chats.

4. Auth0’s Protocol Buffers Implementation

Challenge: Auth0's Authentication and Authorization Data Performance

Auth0, a prominent identity and access management platform, faced performance challenges with JSON when handling authentication and authorization data. This data needed to be processed efficiently without compromising security.

Solution: Embracing Protocol Buffers for Data Serialization

Auth0 turned to Protocol Buffers as well, leveraging its efficient data serialization and deserialization capabilities. This switch significantly improved data processing speeds, making authentication processes faster and enhancing overall performance.

Impact: Turbocharging Authentication and Authorization

The adoption of Protocol Buffers turbocharged authentication and authorization processes, ensuring that Auth0's services delivered top-notch performance while maintaining the highest security standards.

These real-world examples highlight the power of optimization in overcoming JSON-related slowdowns. The strategies employed in these cases are a testament to the adaptability and versatility of JSON and alternative formats in meeting the demands of the modern digital landscape.

The closing section below summarizes the key takeaways and provides you with a roadmap for optimizing JSON performance in your own projects.

Closing Remarks

JSON stands as a versatile and indispensable tool for data exchange. Its human-readable structure and cross-language adaptability have solidified it as a cornerstone of contemporary applications. However, as our exploration in this guide has revealed, JSON's pervasive use does not grant it immunity from performance challenges.

The crucial takeaways from our journey into enhancing JSON performance are evident:

    1. Performance is Paramount: Speed and responsiveness are of utmost importance in today's digital landscape. Users demand applications that operate at lightning speed, and even slight delays can result in dissatisfaction and missed opportunities.
    2. Size Matters: The size of data payloads directly impacts network bandwidth usage and response times. Reducing data size is typically the initial step in optimizing JSON performance.
    3. Exploring Alternative Formats: When efficiency and speed are critical, it's beneficial to explore alternative data serialization formats like Protocol Buffers, MessagePack, BSON, or Avro.
    4. Real-World Examples: Learning from real-world instances where organizations effectively tackled JSON-related slowdowns demonstrates that optimization efforts can lead to substantial enhancements in application performance.

Similar to this, I, along with other open-source loving dev folks, run a developer-centric community on Slack, where we discuss these kinds of topics, implementations, integrations, some truth bombs, weird chats, virtual meets, open-source contributions, and everything that will help a developer remain sane ;) After all, too much knowledge can be dangerous too.

I'm inviting you to join our free community (no ads, I promise, and I intend to keep it that way), take part in discussions, and share your freaking experience & expertise. You can fill out this form, and a Slack invite will ring your email in a few days. We have amazing folks from some of the great companies (Atlassian, Gong, Scaler), and you wouldn't wanna miss interacting with them. Invite Form

And I would be highly obliged if you can share that form with your dev friends, who are givers.

Top comments (48)

Ben Sinclair

First off, efficiency in data formats like these only matters when you're manipulating a lot of data. If you're using a gigabyte of JSON then you're probably doing something wrong.

As far as your optimisations are concerned, I disagree on a few points.

This example:

   // Inefficient
   {
     "order": {
       "items": {
         "item1": "Product A",
         "item2": "Product B"
       }
     }
   }

   // Efficient
   {
     "orderItems": ["Product A", "Product B"]
   }

isn't a way of making JSON more efficient, it's a way of changing your schema. If you want to store relationships, or have a collection of order objects, then you use the "inefficient" hierarchy, and if you need to have keys for the items (not really relevant in this example) then you use a keyed object, otherwise you use an array. My point is that these are things you will change depending on your schema, and they have little bearing on efficiency and none on JSON in particular.

Use Short, Descriptive Keys

Rather, use keys which are consistent with the rest of your code.

Abbreviate When Possible

Don't do this. When you're loading data, unless you're going through another parsing stage, these abbreviations are going to directly correspond with objects with confusing names. If you wouldn't write it as a variable name in your clear, self-documenting code, don't use it as a key in a data structure.

You talk about JSON being language agnostic, and efficiency in things like parsing numerical data, but really we know we're talking about the performance over the web. Your native application storing its configuration in JSON isn't going to notice any of these performance changes. This post is more like, "improving performance of sending human-readable data structures over the wire".

Nik L.

True Ben, your point makes sense. I was considering that for some use-cases, where data may need to be serialized or deserialized frequently, changing the schema may reduce verbosity.
But, yeah, your points are valid overall.

magnus

Thanks, you've spared me some minutes writing the exact same comment.

Maxi Contieri

Indeed. This hurts readability and seems unnecessary most of the time

Remember that premature optimization is the root of all evil

Nicolás Danelón
Comment deleted
Ben Sinclair

I think that's unfair - there's nothing particularly wrong with the article if it's pitched as saving bandwidth, and the author has clearly made an effort to make it readable and interesting.

Nicolás Danelón

saving bandwidth in 2023 =P is not 100% shitty tho

Raí B. Toffoletto

"Language Agnostic" !? It literally has JavaScript in its acronym... I think you mean it's supported by several languages. Almost all languages will have a json converter. But it's not agnostic... let's take numbers for example: numbers is JS are just numbers, independently if they have or not decimals... however it's the converter job to transform it into int/float/decimal whatever type of numbers the other language work with.

punund

Numbers in JSON are symbolic representations of numeric data, whose type is not imposed by the format. It is up to the implementation to interpret, for example, numbers without a decimal separator as integers, others as double-precision 64-bit just like JS, or as single-precision 32-bit numbers if the mantissa is short enough.

Anchorwave

"'Language Agnostic' !? It literally has JavaScript in its acronym..."
You can use JSON in a Golang program, yeah? So it's language agnostic.

Samuel Rouse

This is an interesting collection of notes and options. Thanks for the article!

Can you provide links, especially for the "Real-World Optimizations" section? I would appreciate being able to learn more about the experiences of these different companies and situations.

In the "Optimizing JSON Performance" section the example suggests using compression within Javascript for performance improvement. This should generally be avoided in favor of HTTP compression at the connection level. HTTP supports the same zlib, gzip, and brotli compression options but with potentially much more efficient implementations.

While protocol buffers and other binary options undoubtedly provide performance and capabilities that JSON doesn't, I think it undersells how much HTTP compression and HTTP/2 matter.

I did some small work optimizing JSON structures a decade ago when working in eCommerce to offset transfer size and traversal costs. While there are still some benefits to using columnar data (object of arrays) over the usual "Collection" (array of objects), a number of the concerns identified, like verbose keys, are essentially eliminated by compression if they are used in repetition.

HTTP/2 also cuts down overhead costs for requests, making it more efficient to request JSON – or any format – in smaller pieces and accumulate them on the client for improved responsiveness.

There are some minor formatting issues, and it is lacking in sources, but it provides a great base of information and suggestions.

Nik L.

You're right about the HTTP compression, Samuel. I have added your perspective in that section. As for reference resources for the examples, I've added the links in those sections.

Zhang Li

On point 4.a (avoid repetitive data), you might actually need to use the "inefficient" way, as you will most likely need an "id" type of field.

In the "inefficient" way, the property name can serve as an "id", but in the "efficient" way, you would need a new "id" property.

For most of these tips, they seem to all be micro-optimizations, as the majority of the slowness would be down to the frontend framework you use, as well as the many third party packages.

Adaptive Shield Matrix

Being able to improve performance is nice,
but as a dev I care the most about the developer experience:

  • How can I debug/decrypt messages if something goes wrong?
  • Do problems, for example inconsistencies between versions, happen often?
  • How to avoid these?

Also missing for me are real examples, with open source code, showing

  • what the differences look like
  • whether it's readable / maintainable
  • performance/benchmark/message size comparisons
  • the increase in browser bundle size because of the library

All the features listed for each serialization format sound completely identical to me.

  • Binary Encoding
  • Efficient Data Structures
  • Compactness
  • Binary Data Support

Since most of these tools come from server environments, it's even questionable whether you can

  • make use of any of the features,
  • since a client library has to exist,
  • has to provide types for TypeScript,
  • has to implement all protocol features,
  • optionally has to do all that without exploding the shipped bundle size,
  • and then still has to do it faster/more efficiently than JSON (that's not a given).
Junior Hernandez

Sending JSON compressed in Brotli or GZip format is a very good solution. But one thing: if you run an application in production, the safest approach is to run it behind a proxy server (Nginx, Apache, Cloudflare, ...) configured to serve all string responses (json, html, xml, css, js, ...) in compressed mode (brotli, gzip). Better to let the proxy compress the responses and not your application, or you will use up important application resources that it may need for other processes.

On the other hand, although it may seem like a closed way of thinking, I have never been inclined towards JSON Schema. This has its reason for being, adding validation of the value, but it has a serious disadvantage: The type of data (among other things) that allows validating the value must be passed through each key, leaving an extremely heavy JSON. In essence, it should only be used when the response of a form is sent to an API, so that the server can also validate what it receives. Schema would be terribly inefficient for the response of a list (for example, a list of products from a catalog) because of the burden of unnecessary extra data.

Speaking of catalog products, a while ago I was in charge of an e-commerce that, instead of saving the shopping cart in the application, temporarily saved it in the browser's localStorage. When the buyer adds a product to the cart, the SKU and the requested quantity ["SKU89038", 5] are accumulated in an array and then go down to localStorage.

When placing the purchase order, what is sent from the customer to the application is a list of SKU's + Quantity. No further information is needed at this point, making the process quite efficient. But when it is the other way around in the case of displaying a list of products, only exactly the data that is needed is shown and no more.

While designing how I would return responses from product lists I explored the idea of returning a json containing a list of arrays with only the values. Then, the first member of that list would be all the keys in each column (focused on the human who was programming the front-end), in order to avoid adding extra bytes to the response. In the end it was an over-optimization and we ruled out doing it, returning the json with their respective traditional key-values.

David

Thanks for the article. I learnt some things :)

Impact: Turbocharging Authentication and Authorization

Auth0 wasn't as impressed by Protocol Buffers as I was expecting them to be.

I have to be honest, I was hoping to come across a more favorable scenario for Protobuf. Of course, being able to handle, on a Java to Java communication, 50 thousand instances of Person objects in 25ms with Protobuf, while JSON took 150ms, is amazing. But on a JavaScript environment these gains are much lower.

Daniel G. Taylor

I'm kind of shocked that CBOR and the excellent github.com/fxamacker/cbor library aren't covered in this. You get JSON interop without the wackiness of protobuf serialization and can be equally compact to protobufs with the keyasint and toarray struct tags when needed.

On top of that you can still utilize JSON schema fairly easily, for example huma.rocks/ supports JSON/CBOR (via client-driven content negotiation) out of the box and uses OpenAPI 3.1 & JSON Schema for validation.

Rick Delpo

Hey THANKS for the inspiration. I always wanted to try a Mongodb Atlas Cluster but I never considered the notion that BSON is much faster than JSON. So I gave it a whirl and found that manipulating data in Mongo is 90% faster according to my testing.

When inserting a record into a JSON array there is no append method, so we need to fetch the entire array, add the record, and then re-save the array, which overwrites the original structure. In my app this takes 3000ms to add a record. Over in Mongo we have the insertOne() method, which only takes 200ms. My collection has 14k records, so inserting via JSON is not practical. But for much smaller use cases, like dashboards, using JavaScript array methods on JSON can be practical.

I use AWS Lambda. In ~7 lines I can insert a record into Mongo with the following:

import { MongoClient } from "mongodb"; // a very small library

// connection string saved as a key/pair in the env file
const client = new MongoClient(process.env.MONGODB_CONNECTION_STRING);

export const handler = async (event) => {
  const db = await client.db("test");
  const collection = await db.collection("tracker3");

  const myobj = {
    "xyz": "value",
    "other key": "123"
  };

  // const body = await collection.find().toArray(); // gets the entire collection
  const result = await db.collection("tracker3").insertOne(myobj); // this is the insert command
  // return body; // the toArray method puts it in a JSON array, which can then be exported to csv
};

To do the same with a JSON array I have a bigger process, and also a much bigger import:

const AWS = require('aws-sdk'); // a very large library
const fetch = require('node-fetch');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const res = await fetch('xxxxx.s3.us-east-2.amazonaws.com/t...');
  const json = await res.json();

  // add a new record using the array method push
  json.push({
    country: event.country2,
    session: event.ses,
    page_name: event.date2,
    hit: event.hit2,
    ip: event.ip2,
    time_in: event.time2,
    time_out: event.time3,
    event_name: event.city2
  });

  // then re-write the whole updated file back to s3; it will overwrite the existing file, which now has the appended update from above
  var params = {
    Bucket: 'xxxxxx',
    Key: 'tracker2.json',
    Body: JSON.stringify(json), // pass the fetch result into Body with the update from push
    ContentType: 'json',
  };

  var s3Response = await s3.upload(params).promise();
};

Nik L.

That's pretty time-saving, imo. Did you try any other, more efficient options?

Retiago Drago

how about gRPC?

Nik L.

An efficient option for getting smaller payloads. Have you tried it? Also, this was something I wrote some time back with gRPC: dev.to/nikl/building-production-gr...
