tnfy.link - One more shortener?

Because English is not my native language, I used some AI tools to improve the text; you can find the original version here. I'd appreciate your feedback on which version you like more.

Hello, everyone!

The Beginning

This story begins with a simple task: I needed to create posts containing a link to a service I didn’t control and track the number of clicks on that link.

The first thing I did was turn to Google and search for URL shortener services. I found plenty of options. However, some were overly complicated, others included ads or required registration, and some were frustratingly slow. None of them fit my requirements.

I also came across several Open Source projects on GitHub. But then I thought: I’m a developer too. Why not create my own URL shortener—simple, functional, and tailored to my needs?


What Should I Use?

Having prior experience with Go and Fiber, I found choosing my web stack easy. Initially, I experimented with Huma, but I ran into some limitations, such as restricted support for sub-routers. Since my project required only a few REST endpoints (the area where Huma excels), I decided to stick with raw Fiber.
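To give a rough idea of the shape of the service, here is a minimal Fiber sketch with two illustrative endpoints. The route names, port, and handler bodies are placeholders for this post, not the actual tnfy.link code:

```go
package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
)

func main() {
	app := fiber.New()

	// Illustrative endpoint for creating a short link; the real API
	// surface of tnfy.link may differ.
	app.Post("/api/links", func(c *fiber.Ctx) error {
		// parse the target URL, generate an id, store the pair...
		return c.SendStatus(fiber.StatusNotImplemented)
	})

	// Illustrative redirect endpoint: look up the id and redirect.
	app.Get("/:id", func(c *fiber.Ctx) error {
		_ = c.Params("id")                 // the short id from the path
		targetURL := "https://example.com" // placeholder for the stored target
		return c.Redirect(targetURL)
	})

	log.Fatal(app.Listen(":3000"))
}
```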

Next came the question of storage—for mapping short and long links and storing statistics. Since I wasn’t planning a high-traffic service, I could use any solution: RDBMS, NoSQL, or key-value storage.

I prefer self-hosted solutions, so SaaS offerings like Firebase were off the table. If this series interests you, I'll write a post about the deployment process.

I value simplicity. Storing a short-long link pair sounded like a “key-value” use case, so I chose Redis.

Some may think I picked Redis for its speed. That's partly right, but not entirely. Yes, Redis is fast because it stores everything in RAM, and redirects in my service took less than 1 ms. However, users' requests must still travel to and from the server, which often adds 50–100 ms or more to the response time. In most cases, Redis' performance advantage is negligible to end users.

Note: While response times impact user experience, they also affect the service itself. Faster response times reduce resource consumption, enabling higher requests per second (RPS).


The Storage

I’ll describe the steps I took during development. Some initial ideas weren’t optimal, and I’ll explain why.

Since I chose Redis, my options were limited to its data types. RAM is finite, so I decided to add a TTL (Time to Live) of 7 days for links, with flexibility to adjust later.

Initially, Redis’ String type seemed ideal. Keys like links:<id> could store target URLs as values. Commands like SET with NX (to prevent overwriting) and EX (to set TTL) provided a straightforward solution.
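As a rough sketch with the go-redis client (the package name, key prefix, and TTL value are illustrative), the String variant boils down to a single `SetNX` call, which combines NX and EX:

```go
package storage // illustrative package name

import (
	"context"
	"errors"
	"time"

	"github.com/redis/go-redis/v9"
)

var ErrIDCollision = errors.New("short id already exists")

// saveString sketches the String-based variant: one key per link,
// NX to avoid overwriting an existing id, EX for the TTL.
func saveString(ctx context.Context, rdb *redis.Client, id, targetURL string) error {
	ok, err := rdb.SetNX(ctx, "links:"+id, targetURL, 7*24*time.Hour).Result()
	if err != nil {
		return err
	}
	if !ok {
		return ErrIDCollision
	}
	return nil
}
```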

However, what if I wanted to store additional metadata in the future, such as a creation date or user ID? I could create separate keys (links:<id>:createdDate, links:<id>:userId, etc.), but managing TTL across multiple keys would become cumbersome and error-prone.

Thankfully, Redis’ Hash data type solved this issue. With hashes, I could store all link-related data under one key (links:<id>:meta) and set fields with HSET. TTL could be applied to the entire key using EXPIRE, ensuring consistency.
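Continuing the same sketch (field names are illustrative and the imports from the snippet above are reused), the Hash variant writes all metadata with one HSET and expires the whole key:

```go
// saveHash sketches the Hash-based variant: all metadata lives under
// links:<id>:meta and a single EXPIRE covers the whole key.
func saveHash(ctx context.Context, rdb *redis.Client, id, targetURL string) error {
	key := "links:" + id + ":meta"
	if err := rdb.HSet(ctx, key,
		"targetUrl", targetURL,
		"createdAt", time.Now().Format(time.RFC3339),
	).Err(); err != nil {
		return err
	}
	return rdb.Expire(ctx, key, 7*24*time.Hour).Err()
}
```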

But there was a catch: HSET has no NX option, and HSETNX covers only a single field, so there is no atomic way to create the whole metadata hash only if it doesn't already exist. Checking for key existence with EXISTS before HSET isn't reliable in concurrent environments. A distributed lock could help, but it would reduce write throughput.

Even without high throughput requirements, I needed to plan for potential load increases.

Redis 7.4 introduced a better solution: the HEXPIRE command. It allows TTL to be set on individual hash fields, which was perfect for this use case.

Here’s my final approach:

  1. Generate an ID and attempt to save the ID-URL pair in an index hash (links:index) using HSETNX links:index <id> <target>. If the result is 0, there’s a collision.
  2. If successful, set metadata using HSET links:<id>:meta key1 value1 key2 value2 ....
  3. Set TTL for the <id> field in links:index using HEXPIRE links:index <id> <ttl>, and for the metadata with EXPIRE links:<id>:meta <ttl>.

Steps 2 and 3 can be pipelined to minimize round-trips to Redis.
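Here is a sketch of the whole flow with go-redis, reusing the names from the earlier snippets. HEXPIRE requires Redis 7.4+ and a client version that exposes `HExpire`, so treat this as an assumption about your setup rather than a drop-in implementation:

```go
// saveLink sketches the final flow. HEXPIRE needs Redis 7.4+ and a
// go-redis version that exposes HExpire; key names are illustrative.
func saveLink(ctx context.Context, rdb *redis.Client, id, targetURL string, ttl time.Duration) error {
	// Step 1: claim the id atomically in the index hash.
	ok, err := rdb.HSetNX(ctx, "links:index", id, targetURL).Result()
	if err != nil {
		return err
	}
	if !ok {
		return ErrIDCollision // collision: the caller can retry with a new id
	}

	metaKey := "links:" + id + ":meta"

	// Steps 2 and 3: write the metadata and set both TTLs in one round-trip.
	pipe := rdb.Pipeline()
	pipe.HSet(ctx, metaKey,
		"targetUrl", targetURL,
		"createdAt", time.Now().Format(time.RFC3339),
		"validUntil", time.Now().Add(ttl).Format(time.RFC3339),
	)
	pipe.HExpire(ctx, "links:index", ttl, id)
	pipe.Expire(ctx, metaKey, ttl)
	_, err = pipe.Exec(ctx)
	return err
}
```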

Data Model

links
  | index - HASH
      | <id> -> target URL
  | <id>
      | meta - HASH
          | targetUrl -> target URL
          | createdAt -> created at date
          | validUntil -> valid until date

Conclusion

Today, we began engineering a new URL shortener and discussed one critical component: link storage.

If this project interests you, the next part will explore ID generation and possibly start writing code.

I’d love to hear your thoughts and feedback on this project! The service is currently in alpha, but feel free to try it out.

The source code is available on GitHub: tnfy-link. Documentation will be updated as the service approaches beta.

Thanks for reading!
