
Max Daunarovich for FindLabs


This is how we utilized our expertise and built a solid data-driven application

Introduction

One of the key projects FindLabs has created and currently supports is FlowDiver - the Flow blockchain block explorer. It lets you inspect all kinds of information about the blockchain without interfacing with it directly (which can require specific setup and knowledge). For example, you can explore which transactions were included in a recent block, what arguments and code those transactions used, and what events were emitted as a side effect of changes to the network’s state.

There was a single solution on the market back then - FlowScan - which provided the same kind of information, but there was no public API that projects could tap into, and the private one had its own issues: inconsistent data, complex data fetching, no way to filter out specific results, and so on.

We saw the potential and interest in the community and started the work from the data layer. Once we had confirmation and understood the gargantuan scale of the task, we bootstrapped the front-end app and continued from there.

This post is about our journey - the decisions we made and where they led us. Grab some popcorn and prepare for a ride! :)

How we tailored it

Pick your poison

Given our team's collective proficiency within the React ecosystem, we decided to leverage this expertise for our project. Initially, we contemplated Next.js; however, given the limited practical experience with it among key engineers and the pressing timeline to deliver the first prototype, we opted for a Single-Page Application (SPA) approach. For bundling, we selected Vite, primarily for its very fast build times, simple configuration, and the potential for a nearly seamless transition to server-side rendering.
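For illustration, a minimal Vite setup for a React SPA looks roughly like this - the manual vendor chunking shown here is an assumption for the sake of the example, not our exact config:

```typescript
// vite.config.ts - a minimal React SPA setup (illustrative, not our exact config)
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  build: {
    rollupOptions: {
      output: {
        // keep the big vendor dependencies in their own cacheable chunk
        manualChunks: { vendor: ["react", "react-dom"] },
      },
    },
  },
});
```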

Styled Components

The next key elements are styled-components and the accompanying styled-system library. Together they gave us a clear separation of CSS rules at the component level and a nice way to implement our building blocks.

Yes, they increased the initial bundle size, but that chunk is reusable and only loaded once. Additionally, lazy loading the routes allowed us to cleanly split the app into separate blocks, further minimizing the loading time.
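To make the styled-system idea concrete, here is a framework-free sketch of what it does under the hood - resolving shorthand props against a theme scale and turning them into CSS. This is a simplified illustration, not the library's actual implementation:

```typescript
// Simplified sketch of the styled-system idea: shorthand props
// are resolved against a theme scale and turned into CSS.
type Theme = { space: number[] };

const theme: Theme = { space: [0, 4, 8, 16, 32] };

function space(props: { p?: number; m?: number; theme: Theme }): Record<string, string> {
  const css: Record<string, string> = {};
  if (props.p !== undefined) css.padding = `${props.theme.space[props.p]}px`;
  if (props.m !== undefined) css.margin = `${props.theme.space[props.m]}px`;
  return css;
}

// space({ p: 2, theme }) resolves to { padding: "8px" }
```

The real library covers many more prop groups (layout, color, typography), but the principle is the same: components accept terse props, and the theme keeps spacing consistent across the app.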

Atomic System

We loosely followed the Atomic Design principle to slice our interface into reusable parts. The smallest possible elements live under components/atoms; their bigger “brothers” and “sisters” form molecules, which in turn are composed into organisms. This methodology provided a solid foundation and established component composition as the core principle of our design approach.
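In practice, the folder layout looked roughly like this (the component names are illustrative, not the actual FlowDiver file tree):

```
src/
  components/
    atoms/       # Button, Badge, Spinner
    molecules/   # SearchBar, TransactionRow
    organisms/   # TransactionsTable, BlockHeader
  pages/         # route-level components
```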

React Router

React Router v6 was picked as a stable routing solution that supports lazy-loaded routes and domain-level code splitting, and it promised an easier path for porting the app to Next.js. Following Atomic Design, we created a pages folder and gathered the files for the respective routes there. In retrospect, this was a great decision, and it made the transition to server-side rendering a breeze.

State

The application architecture is rather simple and can be seen as a loose composition of separate modules. Data fetched on one route is never reused on another, which makes it easy to describe using React’s useState and urql’s useQuery. Occasional calls to the chain were sewn together with backend data using useEffect resolvers. Overall, the system doesn’t require complex state management or data juggling, which made it easier to split tasks between team members and work in parallel.
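Those useEffect resolvers ultimately boil down to merging two result sets. Stripped of the React plumbing, the "sewing" step looks roughly like this - the types and field names are illustrative, not our actual schema:

```typescript
// Illustrative types: a row from the GraphQL backend and a status
// fetched from the chain for the same transaction.
type BackendTx = { id: string; blockHeight: number };
type ChainStatus = { id: string; status: "SEALED" | "PENDING" };

// "Sew" chain data onto backend rows by id; rows without a chain
// result simply keep their status undefined.
function mergeChainStatus(
  rows: BackendTx[],
  statuses: ChainStatus[],
): Array<BackendTx & { status?: ChainStatus["status"] }> {
  const byId = new Map(statuses.map((s) => [s.id, s.status] as const));
  return rows.map((row) => ({ ...row, status: byId.get(row.id) }));
}
```

In the app, a useEffect resolver fetches the chain side after the query resolves and stores the merged result in local state.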

Context

A good portion of the application consists of tabular representations of data. The data in these tables can be sorted, paginated, and filtered. During the design stage we were able to identify repetitive patterns across these components and plan the implementation accordingly.

Disregarding the props-drilling technique in favor of a more reliable and elegant solution, we looked for inspiration elsewhere. Another project of ours, .find, used Typesense/Algolia components, which looked a bit like black-box magic but at the same time provided a clean approach to building complex, highly customizable solutions.

Our Table component lives within the confines of our data provider and can be extended with pagination, sorting, and filtering. All of the aforementioned parts are decoupled from each other and can be customized for the needs at hand. Some tables use on-chain data and some use locally stored data; in some, the filtering/sorting logic is purely client-side, while others pull filtered/sorted data directly from the backend and require customized variables for their GraphQL queries.
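Framework aside, the state such a provider exposes to its decoupled parts can be sketched as follows - the field names are illustrative, and in the app this shape lives inside a React context:

```typescript
type SortDir = "asc" | "desc";

// The shape of the table state a provider hands to pagination,
// sorting and filtering components (illustrative, simplified).
interface TableState {
  page: number;
  pageSize: number;
  sortBy?: { field: string; dir: SortDir };
  filters: Record<string, string>;
}

// Clicking a column header toggles its direction and, as in our
// tables, resets pagination back to the first page.
function setSort(state: TableState, field: string): TableState {
  const dir: SortDir =
    state.sortBy?.field === field && state.sortBy.dir === "asc" ? "desc" : "asc";
  return { ...state, sortBy: { field, dir }, page: 0 };
}
```

Because pagination, sorting, and filtering only talk to this shared state, each piece can be swapped out - a client-side sorter or one that rewrites GraphQL variables - without touching the others.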

Hasura, urql and GraphQL

Our backend sports a huge Postgres database, which is exposed via the Hasura service as a GraphQL endpoint. The endpoint provides a schema, which we paired with the graphql-codegen generator to get an accurate representation on the client side, with all the necessary types and variables for the hooks.
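Conceptually, the setup is close to a graphql-codegen config like this one - the endpoint URL, paths, and exact plugin list are assumptions for the sake of the example:

```typescript
// codegen.ts - roughly the shape of a Hasura + urql codegen setup
// (URL, paths, and plugins are illustrative).
import type { CodegenConfig } from "@graphql-codegen/cli";

const config: CodegenConfig = {
  schema: "https://hasura.example.com/v1/graphql",
  documents: "src/**/*.graphql",
  generates: {
    "src/generated/graphql.ts": {
      // emit types plus ready-to-use urql hooks for every operation
      plugins: ["typescript", "typescript-operations", "typescript-urql"],
    },
  },
};

export default config;
```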

Typescript

While initially we fancied the idea of dropping TypeScript to maintain momentum and maximum velocity, in the end we’re happy we didn’t go that route and decided to do it “right” from the start.

TypeScript offers several benefits to a React project, particularly when collaborating with multiple team members. It introduces static typing, which helps catch errors during compile-time rather than at runtime. This not only enhances the quality of the code but also cuts down on the time needed for debugging. The clear typing system also makes the code more readable and acts as its own documentation, making it easier for team members to understand one another's work. Additionally, TypeScript provides robust tooling that includes autocompletion, type checking, and advanced refactoring capabilities, all of which significantly boost developer efficiency. Moreover, the use of generated types from codegen has allowed for the expansion of generic types, ensuring accurate typing of table elements. It's akin to watching puzzle pieces fit together perfectly.
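Those generics-plus-codegen puzzle pieces fit together roughly like this - the row type is illustrative, standing in for a type that graphql-codegen would emit:

```typescript
// A generic column definition: the row type parameter comes straight
// from the codegen output, so every cell renderer is fully typed.
interface Column<Row> {
  header: string;
  render: (row: Row) => string;
}

// Pretend this type was generated by graphql-codegen (illustrative):
type TransactionRow = { hash: string; blockHeight: number };

const columns: Column<TransactionRow>[] = [
  { header: "Hash", render: (row) => row.hash },
  { header: "Block", render: (row) => String(row.blockHeight) },
];
```

A typo in a field name or a schema change that removes a field fails compilation immediately, instead of rendering an empty cell at runtime.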

Deployment

Choosing Vercel was a natural decision as it has become the default method for launching apps that are accessible to a wide audience. The simplicity of configuring environment variables, domains, and other settings facilitated this choice. We have implemented feature branch deployment to guarantee that the code is operational and prepared for peer review.

What went “wrong”

Exposed token

We were initially okay with using an exposed token because there were ways to restrict the token’s rights - either by limiting calls to those originating from FlowDiver or by rate limiting the data and frequency of requests. However, this was not an ideal solution, as we aimed for a scenario where the token would never be exposed to the client.

No SEO whatsoever

The technology we chose did not offer an easy way for pages to be indexed by search engines. The control over page titles was subpar—looking at you, Helmet—and definitely needed improvement.

Over-engineering

In our quest to perfectly compose complex blocks, we ventured a bit too far into premature optimization. It wasn’t disastrous, but it did mean that some aspects were overcomplicated when they could have been simpler and easier to maintain. We learned our lesson and refactored this overly complex code when we transitioned to a Next.js solution.

Hooks limitations

While the useQuery hooks from urql provided straightforward and clean access to backend data, things got tricky when we needed to combine data from multiple sources. Hooks can’t be called conditionally, which led to more complex logic involving one or more useEffect hooks. Although urql allows you to call a query directly and convert the result to a promise, hooks were more convenient, so we endured the complexity and later refactored it in Next.js.

Atomic, shmatomic…

We did our best to adhere to the Atomic design approach when creating components. However, the system sometimes didn’t clearly indicate where components should be placed. In some cases, it was obvious that an atom could go anywhere, but in other scenarios, it made more sense to position it near a molecule or organism, especially if it was seldom reused.

This approach didn’t break the app but certainly led to an uneasy feeling that the next person to maintain it might face some challenges and a somewhat rough path.

What Next?

As we continued to roll out new features, we began to explore what it would take to migrate everything to Next.js. This move could address some of the fundamental issues we faced with our current setup and provide a clearer path for the app's growth.

Find out how we did it and the challenges we faced in the next chapter! 😉
