Stefano Magni
Some things I learnt from working on big frontend codebases

Until now (May 2024), I have had three experiences working on very big front-end (React+TypeScript) codebases: WorkWave RouteManager, Hasura Console, and Preply.com. The first two are ~250K LOC each, while Preply.com is close to 1M LOC, and the experiences were very different. In this article, I report the most important problems I saw while working on them: things that usually are not a big deal in smaller codebases, but that become a source of big friction when the app scales.



My direct experience

First of all, let me describe the main characteristics of the three projects:

  1. WorkWave RouteManager: the product is very complex due to some back-end limitations that force the front-end to absorb a lot of extra complexity. Still, thanks to the strong presence of a great front-end architect (that's Matteo Ronchi, by the way), the codebase can be considered front-end perfection. The codebase is completely new (rewritten from scratch between 2020 and 2022), the latest tools were tried and adopted at a high cadence (for instance, we started using Recoil way sooner than the rest of the world, and we migrated the codebase from Webpack to Vite in 2021), and the coding patterns are respected everywhere. The team was made up of four front-end engineers, including the architect and me. Here I was the team leader of the front-end team.

  2. Hasura Console: the complexity of the project is not that high, but the startup's needs (pushing out new features as soon as possible) and the very back-end nature of the platform resulted in huge technical debt and antipatterns that turned into big friction points for the developers working on the front-end. The team was made up of 12 front-end engineers; later on, the company decided to ditch the front-end project, creating a 50x smaller one and keeping only the back-end/CLI projects. Here, I joined as a senior front-end engineer and then became the tech lead of the platform team.

  3. Preply.com: Preply scaled following a very experiment- and data-driven approach, given its B2C nature and the millions of users taking lessons on the platform every day. The natural business orientation led to heavily outdated front-end dependencies and hard-to-work-with front-end projects. At the same time, Preply's goal of becoming a strong brand and growing in the B2B market, along with its tireless dedication to internal culture and employee satisfaction, drove the company to care a lot about internal tech excellence, to create a DevEx team inside the Platform team, and to lead some exemplary tech initiatives. The company counted ~40 front-end engineers, some of them dedicated to React Native. I joined the platform team as a senior front-end engineer and then moved to the Design System team.

What follows is a non-exhaustive list of examples drawn from some of the characteristics/activities/problems I saw, grouped by category.


Generic approaches

Managing more cases than needed

This innocent-looking approach leads to big problems and wasted time when you have to refactor a lot of code while trying to preserve the existing features. Some examples are:

  1. Components/functions with optional props/parameters and fallback default values: when you need to refactor the components, you need to understand which consumers indirectly rely on the default values... But what happens if the usage of the default values is driven by network responses? You need to understand and simulate all the edge cases! And what happens if you find out that the default values are not used at all? I once saw a colleague of mine waste four hours during a refactor on an unused default value...

  2. Values typed as a generic string or a generic Record<string, any> when in reality the possible values are known in advance. The result is a lot of code that manages generic strings and objects, while managing the real, finite set of cases would be 10x easier. Again, when you need to refactor the code managing "generic" values, you are going to waste time.
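As a sketch of the difference (the order-status domain here is hypothetical), compare a function accepting a generic string with one accepting the real, finite set of values:

```typescript
// ❌ generic string: impossible values must still be handled
function labelLoose(status: string): string {
  if (status === "ready") return "Ready";
  if (status === "inProgress") return "In progress";
  return "Unknown"; // dead branch you still have to write and test
}

// ✅ finite union: the compiler enforces exhaustiveness
type OrderStatus = "ready" | "inProgress" | "complete";

function labelStrict(status: OrderStatus): string {
  switch (status) {
    case "ready":
      return "Ready";
    case "inProgress":
      return "In progress";
    case "complete":
      return "Complete";
    // no default needed: a new status becomes a compile-time error here
  }
}
```

When a refactor adds or removes a status, every labelStrict-like consumer fails to compile immediately, while the labelLoose-like ones silently fall through to the "Unknown" branch.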

I touched on these topics in my How I ease the next developer reading my code article.

Leaving dead code around

You refactor a module, you remove an import of an external module, and you are fine. But what happens if your module was the last consumer of the external one? The external module becomes dead code that will not be embedded in the application (nice) but that will confuse everyone browsing the codebase looking for solutions/utilities/patterns, and will frustrate the future refactorer, who will blame whoever left the unused module there!

And obviously, it's a waterfall... the external module could import other unused modules and they could depend on an external NPM dependency that could be removed from the package.json, etc.

Internal code dependencies and boundaries

Not enforcing strong boundaries among product features/libraries/utilities (through ESLint rules or a proper monorepo structure) brings unexpected breaks as a result of innocent changes. Something like FeatureA importing from internal modules of FeatureB, which imports from internal modules of FeatureA and FeatureC, etc., lets you break 50% of the product by changing a simple prop in one of FeatureA's components. And if you have a lot of JavaScript modules never converted to TypeScript, you will also have a hard time understanding the dependency tree among features...
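As a minimal sketch of the ESLint route, the built-in no-restricted-imports rule can forbid deep imports; the directory layout (each feature exposing a public index and keeping the rest under an internal/ directory) is an assumption for the example:

```json
{
  "rules": {
    "no-restricted-imports": [
      "error",
      {
        "patterns": [
          {
            "group": ["**/features/*/internal/**"],
            "message": "Import from the feature's public index, not from its internals."
          }
        ]
      }
    ]
  }
}
```

Dedicated tools (e.g. eslint-plugin-boundaries or a monorepo with per-package visibility) can enforce richer rules than this sketch.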

I strongly suggest reading React project structure for scale: decomposition, layers and hierarchy.

Implicit dependencies

They are the hardest things to deal with. Some examples?

  • Global styles that impact your UI's look&feel in unexpected ways
  • A global listener on some HTML attributes that does things without the developer knowing about them
  • A generic MSW mock server that all the tests use, where it's impossible to know which handlers are used by which tests

Again, pity the refactorer who will have to deal with those. Instead, explicit imports, self-describing HTML attributes, inversion of control, etc. allow you to easily recognize who consumes what.

Spreading external dependencies and implementation details

External dependencies should be hidden behind and consumed through controlled code: writing a custom addOneDayToDate function is better than spreading dateFns.addDays(currentDate, 1) everywhere, because only that function depends on DateFns, which makes the dependency centralized, easy to test, and easy to change.
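A minimal sketch of such a wrapper; the implementation here is inlined with plain Date arithmetic, whereas in the scenario above this would be the only module calling dateFns.addDays:

```typescript
// dateUtils.ts (hypothetical): the only module that knows which date
// library is used. Consumers import addOneDayToDate, never the library.
export function addOneDayToDate(date: Date): Date {
  // Simplified: adds exactly 24 hours; a real implementation would
  // delegate to the centralized date library (e.g. dateFns.addDays).
  return new Date(date.getTime() + 24 * 60 * 60 * 1000);
}
```

Swapping the date library later then touches this one file instead of every call site.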

Big modules

This is another very subjective topic: I prefer a lot of small, single-function modules over long ones. I know that a lot of people prefer the opposite, so it's mostly a matter of respecting what is important for the team.

Code readability

I'm a fan of The Art of Readable Code, and after spending 2.5 years working on a big and complex codebase with zero (!!!) tests, I can tell how important code readability is.

This also really depends on the number of developers working on a codebase, but I think it's worth investing in some shared coding patterns that must be enforced in PRs (or even better if they can be automated through Prettier or similar tools).

I publicly shared the ones we were using in WorkWave in this 7-article series: RouteManager UI coding patterns. The internal rule we had was that "patterns must be recognizable in the code, but not authors".

No silver bullets here, the important thing IMO is that readability and refactorability are kept in mind by everyone when writing code.

Uniformity is better than perfection

If you are about to refactor a module but you do not have time to also refactor the two modules coupled to it... consider not refactoring it at all, to keep the three modules uniform (uniformity means predictability and less ambiguity).

Working flow

Not tracking architectural decisions

Architectural decisions and changes are key to comprehending why a project was designed and how it evolved over time. Usually, these decisions are not 100% reflected in the codebase since big codebases always require incremental approaches.

It is important to track those decisions to avoid dealing with approaches partially applied, refactors partially done, etc., without a precise idea about the timeline of those decisions and what they were trying to solve.

Usually, this problem explodes when the engineers who remember those decisions leave the company, and the new ones are doomed to remain ignorant forever.

On a small scale, this also applies to the awkward changes and/or shenanigans you make to get something done. See this great example from Josh W. Comeau.

No PR description and big PRs

That's such an important topic that I wrote four articles about it. Start with the most important one: Support the Reviewers with detailed Pull Request descriptions

And if you are curious you can dig into some real-life examples I documented here

Suggesting big changes and approaches during code reviews

PRs are not the best place to suggest big changes or a completely different approach, because you are indirectly blocking the release of a feature or a fix. Sometimes it's crucial to do it, but the initial analysis and estimation steps, pair programming sessions, etc. usually work better for shaping the approach and the code.

When to fix the technical debt?

That's a great question, and there's no silver bullet here... I can only share my experience so far:

  1. In WorkWave, we dealt with technical debt on a daily basis: fixing tech debt is part of the engineers' everyday job. This can slow down feature development in favour of having deep knowledge of the context and keeping the codebase in good shape. It's like knowing that you are slowing down today's development to keep tomorrow's development at the current pace.
  2. In Hasura, we could not deal with technical debt due to the need to deliver new features. This translated into a lot of front-enders going slower than their potential, sometimes introducing bugs, and offering an imperfect UX to the customers. It happened after years, obviously.
  3. In Preply, engineers can dedicate 20% of their time to tech excellence initiatives, some of them driven by the company and other ones proposed by the teams themselves.

You can read more about a good example of Hasura's problems in my Frontend Platform use case - Enabling features and hiding the distribution problems article. Also, you could read what happened to our E2E tests here after all the tech debt problems we were facing.

Major product changes and refactors

Major product changes (the complete rewrite of WorkWave RouteManager, Preply's complete rebrand, etc.) are also perfect for introducing refactors or clearing tech debt that has been there for ages. The reason is that all the knowledge accumulated in the previous years gives us a more comprehensive vision of what is needed and what needs to be cleared, leaving the new product way better than when it started (a sort of greenfield project inside an existing product).

No front-end oriented back-end APIs

By "no front-end oriented" I mean APIs not designed with the end customers' UX in mind, with a lot of complexity pushed to the front-end in order to keep the back-end development lean (e.g., embedding a lot of DB queries in the front-end to avoid exposing a new API from the back-end). This approach is natural during the initial evolution of a product, but it leads to more and more complex front-ends when the product needs to scale.

Never updating the NPM dependencies

Again, based on my own experiences:

  1. In WorkWave, I used to update the external dependencies on a weekly basis. Usually it took me 30 minutes, sometimes 4 hours.
  2. In Hasura, we were used to not updating them, finding ourselves enabling legacy-peer-deps by default, leveraging NPM's overrides, and being unable to update any GraphQL-related dependency. On top of that, a lot of PRs completely broke the build because of a new dependency.
  3. In Preply, the outdated TypeScript version made it impossible to enable exactOptionalPropertyTypes and noUncheckedIndexedAccess, which caused more than one production incident. At the same time, the need to become SOC 2 compliant (necessary to expand in the B2B market) made caring about the dependencies a first-class concern (after a big, months-long initiative to update all of them).

And since maintaining dependencies has a cost, you should carefully consider whether you really need an external dependency or not. Is it maintained? Does it solve a complex problem I prefer to delegate to an external party?

As an alternative approach, valid only for very, very small projects, you can also consider copying/pasting the code of some dependencies into a "vendor" directory, linking the original project and tracking which version the code belongs to (at the cost of not being able to update it, and others must know they should not install the same dependency).

TypeScript

Bad practice: Generic TypeScript types and optional properties

It is very common to find types like this:

type Order = {
  status: string
  name: string
  description?: string
  at?: Location
  expectedDelivery?: Date
  deliveredOn?: Date
}

that should instead be represented with a discriminated union like this:

type Order = {
  name: string
  description?: string
  at: Location
} & ({
  status: 'ready'
} | {
  status: 'inProgress'
  expectedDelivery: Date
} | {
  status: 'complete'
  expectedDelivery: Date
  deliveredOn: Date
})

that is more verbose but acts as pure domain documentation, removes tons of ambiguity, and allows writing better and clearer code.
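For example (a simplified version of the union above), narrowing on status makes the conditional fields available exactly where they are guaranteed to exist:

```typescript
// Simplified Order union: fields exist only for the statuses that need them
type Order = { name: string } & (
  | { status: "ready" }
  | { status: "inProgress"; expectedDelivery: Date }
  | { status: "complete"; expectedDelivery: Date; deliveredOn: Date }
);

function describeOrder(order: Order): string {
  switch (order.status) {
    case "ready":
      return `${order.name}: not delivered yet`;
    case "inProgress":
      // expectedDelivery is guaranteed here, no optional chaining needed
      return `${order.name}: expected on ${order.expectedDelivery.toDateString()}`;
    case "complete":
      return `${order.name}: delivered on ${order.deliveredOn.toDateString()}`;
  }
}
```

With the original optional-everything type, every branch would need undefined checks for fields that the domain guarantees are present.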

The topic is so important and has so many great advantages that I wrote a dedicated article to the topic: How I ease the next developer reading my code.

Type assertions (as)

Type assertions are a way to tell TypeScript "shut up, I know what I'm doing", but the reality is that you rarely know what you are doing, especially when it comes to the consequences of what you are doing...

This happens very frequently in tests, where big objects are "typed" with type assertions... resulting in the object becoming outdated compared to the original type... but you only realize it when the tests fail, and by then you have left room for a lot of doubts about the test failures...

The solution: type everything correctly, and when needed, prefer @ts-expect-error with an explanation of the error you expect.

Read Why You Should Avoid Type Assertions in TypeScript to know more about the topic (and keep in mind that also the JSON.parse example shown there can be typed by using Zod parsers).

@ts-ignore instead of @ts-expect-error and broad scope

@ts-expect-error comments become errors themselves once the underlying issue is fixed, so they can be cleaned up automatically in the future, unlike @ts-ignore (which is just another way to shut TypeScript up).

Moreover, @ts-expect-error should be applied to the smallest possible scope, to avoid TypeScript accepting unintended errors.

// ❌ don't
// @ts-expect-error TS 4.5.2 does not infer correctly the type of typedChildren.
return React.cloneElement(typedChildren, htmlAttributes); // <-- the whole line is impacted by @ts-expect-error

// ✅ do
return React.cloneElement(
  // @ts-expect-error TS 4.5.2 does not infer correctly the type of typedChildren.
  typedChildren, // <-- only typedChildren is impacted by @ts-expect-error
  htmlAttributes
);

any instead of unknown

TypeScript's any gives you the freedom (and that's generally bad) to do everything you want with a variable, while unknown forces you to strictly check the runtime value before consuming it. any is like turning off TypeScript, while unknown is like turning on all the possible TypeScript alerts.
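A minimal sketch of the contrast:

```typescript
// With `any`, this compiles and only explodes at runtime on bad input
function lengthOfAny(value: any): number {
  return value.length;
}

// With `unknown`, TypeScript rejects `value.length` until the shape is proven
function lengthOfUnknown(value: unknown): number {
  if (typeof value === "string" || Array.isArray(value)) {
    return value.length;
  }
  return 0; // every other case must be handled explicitly
}
```

Calling lengthOfAny(null) throws at runtime; the unknown version forces that case to be considered at compile time.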

ESLint rules kept as warnings

ESLint warnings are useless, they only add a lot of background noise and they are completely ignored. Rules should be on or off, but never warnings.

Validating the external data

In the software world, the rule "never trust what the front-end sends to the back-end" is crucial, but I'd say that in a front-end application armed with TypeScript types, you should not trust any kind of external data either. Server responses, query strings, local storage, JSON.parse, etc. are all potential sources of runtime problems if not validated through type guards (read my Keeping TypeScript Type Guards safe and up to date article) or, even better, Zod parsers.
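A minimal sketch with a hand-written type guard (a Zod parser would replace isUser with a schema); the User shape is hypothetical:

```typescript
type User = { id: number; name: string };

// Type guard: the only gate between untrusted data and typed code
function isUser(data: unknown): data is User {
  if (typeof data !== "object" || data === null) return false;
  const record = data as Record<string, unknown>;
  return typeof record.id === "number" && typeof record.name === "string";
}

function parseUserResponse(raw: string): User {
  const parsed: unknown = JSON.parse(raw); // treated as unknown, not any
  if (!isUser(parsed)) {
    throw new Error("Unexpected user payload");
  }
  return parsed; // safely narrowed to User
}
```

The key point is that JSON.parse's result is handled as unknown, so nothing downstream can consume it before validation.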

React

HTML templating instead of clear JSX

JSX that includes a lot of conditions, loops, ternaries, etc. is hard to read and sometimes unpredictable. I call it "HTML templating". Instead, smaller components with a clear separation of concerns are a better way to write clear and predictable JSX.

Again, I touched on this topic in my How I ease the next developer reading my code article.

Lots of React hooks and logic in the component's code

I'm a great fan of hiding a React component's logic in custom hooks whose names clearly indicate their scope, and then consuming those hooks inside the component. The reason is always the same: long code before the JSX makes reading the JSX harder.

Components accepting className

Components are designed to encapsulate and hide some logic and give the external world the result of this logic. Their UI is part of the encapsulated API the consumer should not be able to change. Usually, components also accept a className to allow consumers to customize small parts of the component's UI (this is the initial goal). Instead, the result is an uncontrolled and hard-to-predict backdoor that can wreck all the UI details of the components and their children in seconds.

Like all the JavaScript details, the styling details should be encapsulated and hidden, exposing only some generic configurations to the consumer. These configurations explicitly mark what the component offers and what the consumer wants to obtain (like variants, type, mode, etc.).

As a refactorer, when you see that a component accepts a className, you already know how hard your life will be.
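A sketch of the alternative: a hypothetical Button exposing a finite variant prop while the class names stay internal (JSX omitted, only the styling logic shown):

```typescript
// The component decides the class names; consumers only pick a variant
type ButtonVariant = "primary" | "danger";

const variantClasses: Record<ButtonVariant, string> = {
  primary: "btn btn--primary",
  danger: "btn btn--danger",
};

// Inside the component: className is computed, never accepted from outside
function classesFor(variant: ButtonVariant): string {
  return variantClasses[variant];
}
```

Consumers write variant="danger" and can only obtain the looks the component explicitly offers, which keeps every UI detail refactorable.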

Hide stores implementation details

Something I saw working like a charm on WorkWave RouteManager is hiding the stores under modules that export React-only, store-independent APIs. We started using Recoil way before the rest of the world, and later on, we migrated to Valtio because it better covered our needs. The migration was painless because Recoil was just an implementation detail of modules that export pure React APIs like useSelection.

Consuming Swiss army knifes

Some React components are generic by design due to the huge number of cases they manage (think of a table, a date picker, a modal, etc.). This makes it hard to track which of the 100 features those generic components offer each consumer actually needs. As a result, refactoring those components or their consumers is hard. My suggestion is to create intermediate, vertical components that act as proxies for the more complex ones.

The vertical components' names and descriptions allow the reader to understand what they do and need without digging into the details of how the original complex components are consumed (for instance, a UserList that uses just the sorting options of Table is clearer than digging into how the Tutors, Students, and Managers pages use Table).
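A sketch of the idea; Table and UserList here are hypothetical stand-ins, with rendering reduced to plain functions so the proxy relationship is visible:

```typescript
type SortDirection = "asc" | "desc";

// Stand-in for the generic Table: imagine dozens more options here
type TableProps = {
  rows: string[];
  sortDirection?: SortDirection;
};

function renderTable({ rows, sortDirection }: TableProps): string[] {
  const sorted = [...rows].sort();
  return sortDirection === "desc" ? sorted.reverse() : sorted;
}

// Vertical proxy: the name says what it is, and the signature documents
// which subset of Table it needs (only sorting)
function renderUserList(users: string[], sortDirection: SortDirection): string[] {
  return renderTable({ rows: users, sortDirection });
}
```

A refactorer reading renderUserList immediately knows that only the sorting feature of Table matters here, without auditing every Table prop.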

Tests

Bad tests

As a test lover and instructor (I teach front-end testing at private companies and conferences), I can say that bad tests are the result of a lack of experience on this topic, and the only solution is help, mentoring, help, mentoring, help, mentoring, etc.

Anyway, the false confidence that tests can offer is a big problem in every codebase.

I suggest reading two of my articles:

E2E tests everywhere

E2E tests do not scale well because of the need for real data, a real back-end, etc.

From this perspective, Preply is a great example (and the only successful one I have seen in my working life) of what you can achieve when the user experience is considered crucial by the leadership: E2E tests are mandatory, and the strong Continuous Delivery approach led to 98%+ stability of the E2E test suite, which ensures that a lot of happy paths are always functional.

Also, in this case, I suggest reading some of my articles:

Developer Experience

Deprecated APIs

When code is marked as @deprecated, the IDE shows it struck through and presents the documentation, helping developers realize that they should not use it.

An example:


/**
 * @deprecated Please use the new toast API /new-components/Toasts/hasuraToast.tsx
 */
export const showNotification = () => { /* ... */ }


Care about the browser logs

Console warnings (coming from ESLint, TypeScript, React, Storybook, etc.) add a lot of background noise that mixes with the important logs you may want to trace. Care about them and remove them, to avoid developers ignoring your own important alerts due to the high noise.

Developer alerts for unexpected things

Runtime data (e.g., server responses) might not be aligned with the front-end types. If you do not want to break the user flow by throwing an error, at least track the error through something that can alert you about it (like Sentry, or any other tool), so that only a short time passes between the error appearing and you fixing it.
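A minimal sketch of the pattern; reportToMonitoring is a hypothetical stand-in for Sentry.captureException or a similar call:

```typescript
// Collects reports; in real code this would forward to Sentry/Datadog/etc.
const reported: string[] = [];
function reportToMonitoring(message: string): void {
  reported.push(message);
}

// Reads a value the types promise but the server might not deliver
function readOrderStatus(response: { status?: string }): string {
  if (response.status === undefined) {
    reportToMonitoring("Order response is missing `status`");
    return "unknown"; // graceful fallback: the user flow keeps working
  }
  return response.status;
}
```

The user sees a degraded but working UI, while the team gets an alert pointing at the exact mismatch to fix.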

React-only APIs

If you are creating an internal library, prefer to expose only React APIs. The big advantage is that you count on React's reactivity system, and managing dynamic/reactive cases in the future will be easier because you are sure the consumers of your React APIs are re-rendered for free and always deal with fresh data.

Non-straightforward CI scripts

CI pipelines should just launch scripts present in the package.json, without additional logic that increases the cognitive load and makes it harder to replicate errors locally or in another environment. Think about the painful process of trying to decipher what a CI step does in order to replicate it locally and dig into the root cause of an issue. Maybe CI uses a tool you do not master, maybe CI uses a particular configuration, and all of this leaves you depending on the colleagues/teams who own everything CI-related.

The CI pipeline should only care about setting up everything with the correct Node.js version (set by the front-enders who maintain the codebase) and launching some CI-dedicated scripts (for instance: ci:lint, ci:build, ci:ts-check, ci:test:unit, ci:test:e2e, etc.). This keeps the scripts launched in CI under the control of the people who know the JS ecosystem best, making everything simpler.
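A sketch of how that looks in package.json; only the ci:* naming comes from the paragraph above, while the concrete commands are assumptions for illustration:

```json
{
  "scripts": {
    "ci:lint": "eslint src",
    "ci:ts-check": "tsc --noEmit",
    "ci:test:unit": "jest --ci",
    "ci:build": "vite build"
  }
}
```

The CI workflow then reduces to installing dependencies and running npm run ci:lint, npm run ci:ts-check, etc., and every step can be replicated locally with the exact same command.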

Credit where credit is due

Thank you so much to M. Ronchi and N. Beaussart for teaching me so many important things in the last few years ❤️ a lot of content included in this article comes from working with them on a daily basis ❤️

Top comments (37)

Jeremy Smith

Great article!! Agree to it all!

Stefano Magni

Thank you, Jeremy!! 😊

stefanonepa

Lots of wise advices!
Thanks

Stefano Magni

Thanks to you for leaving your appreciation here 😊

Dipanjan Ghosal

This is a great read! Saving this so as to go through all the links later.

Stefano Magni

Sure, I know reading all of them takes some time 😅

Hassan Suhaib

This is gold! Thanks for sharing Stefano. Learned a ton!

React Hunter

Thank you

Omri Lavi

Amazing article, thank you! I plan to read most of the linked contents very soon.
I have a question - what do you do when your opinions about tests (or readable code) are different than your team's opinions? For example, if you find yourself working with a team that doesn't find the value of tests.
Thanks again :)

Stefano Magni

what do you do when your opinions about tests (or readable code) are different than your team's opinions?

That's a great question 😊

I've never dealt with such a situation for a long time. Time makes the difference here, because when we speak about the short term, honestly, there is no difference. If we speak about the medium and long term, instead, tests and code readability make a huge difference.

Anyway, the approach is always the same: I focus on the most important parts to improve (for instance, in a distributed company with a lot of devs, tests are more important than the readability of the code itself; TypeScript discriminated unions are more important than code indentation, etc., especially if you consider the advent of Copilot) and:

  1. I act as a model: most people do not have strong opinions, and when they see "the quality" of how more seasoned devs work they tend to emulate.
  2. I propose things: I get in touch with the authors of the code, I propose improvements, I jump on a sync call to discuss them, I listen to the proposals of the others, and I also show/demonstrate the added value of my approaches.
  3. I do some refactors on my own and jump on a call to discuss them with the author. Please note that, in this case, it's important to leave the original code as is, even if it leaves room for improvements and even if I refactored it. From the authors' perspective, you are respecting their work if you do not change it. The next time, there is a chance they will follow your suggestions because they know you respect them.
  4. I keep track of real-life examples that show that I'm right, and I keep them in a separate txt file of mine. Then, when needed, I can recall them and point people there. It's hard to deny the evidence 😊

All the above means accepting that 90% of your suggestions (especially if you are a nitpicker) will not be considered at all... but the remaining 10%, the most important ones, maybe will. And it's a great exercise for me too! Because, as a perfectionist, I always need to learn that not everything has the same importance.

What do you think? Do you have different direct experiences? 😊

Omri Lavi

Thank you for the detailed reply!

in a distributed company with a lot of devs, tests are more important than readability of the code itself

I never thought about it, but I totally agree. In larger companies, each team usually owns its own codebase. They usually know who to approach when needing clarifications about the code. What they usually don't know is which part will break when something changes - that's where the importance of tests really shines.

I really like the 90%-10% approach, it makes a lot of sense to me. In some way, it can be parallelized to an important skill of a good developer: understanding what's important, and compromising where needed (e.g. on a quality vs. speed consideration).

I also think of myself as a perfectionist, but I try taking a pragmatic approach. Usually when I review a PR, I tend to be very pedantic, and leave comments about "smaller" things as well. However, I make sure to emphasize what's important and what's not, and in the summary note I explicitly say what needs to change to get an approval from my end. I make sure not to become a burden, otherwise people will avoid approaching me.
When taking part on "live" sessions (e.g. design reviews), I try being more "nice", considering what's really important to me, and start by raising only these issues. In some ways, it's a bit harder than doing it "offline", since you need to analyze the details quickly. On the other hand, since it's usually face-to-face, people tend to be more receptive to the feedback.

What's your experience on this regard?

P.S.
I really like the content you write, both the subjects and the style. Keep up the amazing work!

Stefano Magni

Sorry for the delay, I was on vacation 😊

I really like the 90%-10% approach, it makes a lot of sense to me. In some way, it can be parallelized to an important skill of a good developer: understanding what's important, and compromising where needed (e.g. on a quality vs. speed consideration).

Could you tell me how you all use it in your company? 😊

However, I make sure to emphasize what's important and what's not, and on the summary note I explicitly say what needs to change to get an approval from my end.

That's an interesting approach I did not think about. Here, I always use Conventional Comment to express the importance of every single comment (since most of them are nitpicks).

When taking part on "live" sessions (e.g. design reviews), I try being more "nice", considering what's really important to me, and start by raising only these issues.

I do the same 😊

In some ways, it's a bit harder than doing it "offline", since you need to analyze the details quickly.

I have the same problem, usually my mind needs a bit more time to analyze things and lot of times I get back to the other dev later in the next hour with more thoughts 😊

On the other hand, since it's usually face-to-face, people tend to be more receptive to the feedback.

Also: it requires less time to convince people face by face than async IMO 😊

I really like the content you write, both the subjects and the style. Keep up the amazing work!

Thank you so much, it means a lot to me 🤗

Omri Lavi

Hey Stefano, Thank you for the reply! I hope you had a good vacation 😊

Could you tell me how you all use it in your company?

I'm not sure if it's used by everyone in the company. Personally, I keep a mindset similar to what you described: I understand people have a lot on their plates, and not all my suggestions can be applied. Since I understand that 90% of my suggestions may not be implemented, I try hard to find the 10% that are crucial and should not be ignored (as I see it). It's a matter of prioritization and compromise 😊

... I always use Conventional Comment ...

Wow, that's a great convention, I never heard of it! I think that using this depends heavily on the company's culture and size. In a smaller company, I believe it's easier to get a broader agreement about such convention. In a medium or large company, with different groups and sites, it's very likely that not everyone will agree about the benefits of such convention. This may add friction in a worse way than PR comments with no convention 😆
I wonder how larger companies integrate with such conventions in a way that is accepted by most of the workers... Do you happen to know about such processes?

... lot of times I get back to the other dev later in the next hour with more thoughts

I need to start doing this more 😆 I find myself many times trying to provide a solution quickly, just so I won't have to deal with another thing on my plate. (And perhaps since I want to be seen as "the guy with the answers" 😋). I should follow you as example more often, and take the time offline to think of an answer.

Stefano Magni

I wonder how larger companies integrate with such conventions in a way that is accepted by most of the workers... Do you happen to know about such processes?

I can tell you that in my experience, it simply happened organically. Those who leave 0-1 comments don't use it. Those who leave a lot of comments start using it when they see you using it. I think it's the best approach, instead of pushing it on everyone 😊

I should follow you as example more often, and take the time offline to think of an answer.

FYI: this takes time, obviously 😊 and I need to balance when to do it and when not, otherwise it could ruin my days after a while 😊

Craig McNicholas

Nice article, you pretty much encapsulate all my experiences.

To expand on your typed-unions example: I think this extends into correct data-modelling techniques more generally.

Too often I find hacky, procedural, scripting-like code when people model front-end data, but you should take as much care with it as with your DB, API, etc. If application state is correctly modelled, the decision of what to guard against or implement in the resulting component becomes much simpler and unambiguous. Too many front-end devs don't appreciate good OO here, and it hurts as the apps grow and scale.
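To make the point concrete, here is a minimal TypeScript sketch of modelling async state as a discriminated union (the `FetchState` and `User` names are illustrative, not from the article):

```typescript
// Each state carries only the data that is valid in that state.
type User = { id: number; name: string };

type FetchState =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; users: User[] }
  | { status: "error"; message: string };

function fetchStateLabel(state: FetchState): string {
  // TypeScript narrows `state` in each branch, so `state.users`
  // is only accessible where it is guaranteed to exist.
  switch (state.status) {
    case "idle":
      return "Not started";
    case "loading":
      return "Loading...";
    case "success":
      return `Loaded ${state.users.length} user(s)`;
    case "error":
      return `Failed: ${state.message}`;
  }
}
```

Impossible combinations (e.g. `status: "error"` together with a `users` array) simply cannot be constructed, which is exactly the unambiguity described above.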

Stefano Magni

I 100% agree, thanks for sharing it 👏👏👏

I have only frontend experience, so I can't say whether it's something specific to frontend devs or simply a lack of a "great" mindset, independent of frontend or backend.

Webber Takken

Excellent article. Thank you for sharing!

A quick note about your remark on ESLint warnings:

ESLint warnings are useless, they only add a lot of background noise and they are completely ignored.

Note that ESLint warnings can show different squiggly lines in your IDE (yellow instead of red), meaning you can keep coding and fix them later, as they're less distracting. They're only useless if you don't enforce that they eventually get fixed.

I would recommend using `eslint src --ext ts,tsx --max-warnings 0` as a script in `package.json` and invoking it from a CI workflow. You could also add a pre-commit hook for faster feedback on staged files (explained) to improve the developer experience.
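For instance, a minimal `package.json` sketch of that idea (the script name is illustrative):

```json
{
  "scripts": {
    "lint": "eslint src --ext ts,tsx --max-warnings 0"
  }
}
```

Running `npm run lint` in CI then fails the build as soon as a single warning slips in.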

Stefano Magni • Edited

Thank you, Webber! Did the "warnings during dev, errors in CI and on pre-push" approach work well for your team(s)? I would prefer to set a strong alert from the beginning, so developers know they are hiding the dust and need to fix the problems in the short term, compared to giving soft warnings first and errors later... But I'm very curious about the pros and cons you found with your approach!! 😊

Webber Takken

The yellow ones are clearly less daunting and can momentarily help you abstract over the exact syntax of your implementation, which reduces cognitive load and allows focusing on the feature at hand.

Errors for `no-console`, `react-hooks/exhaustive-deps` and `no-unused-vars`, to name a few, can be distracting during development, especially if you cannot differentiate them from more structural problems.

Blocking them at pre-commit just means you have to fix them before committing. So it does the same thing you're describing, just with a bit more nuance: multiple colours, but all must be fixed before committing.
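As a sketch of that pre-commit setup, assuming `lint-staged` together with a Git hook runner such as Husky (an assumption on my part, not necessarily the tools the linked article uses):

```json
{
  "lint-staged": {
    "*.{ts,tsx}": "eslint --max-warnings 0"
  }
}
```

`lint-staged` appends the staged file paths to the command, so only the files in the commit are linted, and any warning blocks the commit.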

Stefano Magni

It makes sense, thank you 😊

Matías Herranz • Edited

Great article! I agree on most points, and I wanted to emphasize how crucial code style and uniformity of approach are, in my opinion. Ideally, you should never have to discuss these concerns at the PR level, but enforce them beforehand with automated tools (Prettier, ESLint, etc.).

Stefano Magni

I agree. One of the next things I want to study is ASTs, to leverage ESLint for more advanced cases than the ones covered by the various (and great) existing plugins 😊

Bobby Connolly

I really like your approach and learned some things from your article, like the discriminated union.

Personally, I often tell TypeScript to shut up and turn off `noImplicitAny` and `strictNullChecks`. However, I work alone and understand my "loose TS shrinkwrap". I really love how TS infers return types, and I don't mind typing my parameters for the most part.
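For contrast, a small illustrative sketch of what `strictNullChecks` buys you on a team (the `Task` type and `titleOf` helper are made up):

```typescript
type Task = { id: number; title: string };

// `Array.prototype.find` returns `Task | undefined`.
// With strictNullChecks on, accessing `task.title` without the
// guard below is a compile-time error; with it off, the unguarded
// access would compile and could crash at runtime.
function titleOf(tasks: Task[], id: number): string {
  const task = tasks.find((t) => t.id === id);
  return task ? task.title : "unknown";
}
```

The flag essentially forces every caller to handle the "not found" case explicitly, which matters most when many developers touch the same code.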

Stefano Magni

However, I work alone

This changes a lot of things. My approach was the same as yours when I was almost working alone on the frontend; I changed my mind when I needed to ensure everything was stable and secure for a lot of devs other than me 😊