Welcome to my new series about Effect. In this series, I want to discuss why I think Effect is one of the most pragmatic technology choices for most software companies. Before we start talking about the technology itself, we have to establish what all software companies seek. How do their needs change based on their size and the economic environment in which they operate?
In a utopia, all software companies would produce high-quality software that costs them nothing to develop and maintain, and ship it immediately to their customers. This is not possible in the real world. Companies often need to weigh their options and settle for a mix of trade-offs that suits them the most in a given moment. There are many things to be considered. The following is a non-exhaustive list of examples:
- Timeline: does the project need to be shipped before a certain event (board meeting, launch event, etc.)?
- Resources: are there enough resources with the right qualifications to deliver the project?
- Cost: is the current operating budget enough for the delivery and long-term maintenance of the project?
- Regulation: privacy policy, cookies, GDPR, HIPAA, SOC2, FedRAMP, WCAG, etc.
Typically, on one end of the spectrum, we have startups prioritizing faster time-to-market over anything else just so they can prove their product fits the market. The long-term viability and software maintenance are of secondary importance to them. Once they succeed, they can hire engineers who can fix any mistake. At least that is what they think. On the other end of the spectrum, we have large enterprises prioritizing software quality above everything else to protect their existing customer base from experiencing service disruptions. Remember, the bigger they are, the harder they fall. This rule also applies to vast and complex systems with large codebases. One small change in one part of such a system can have unintended, unpredictable, and potentially cascading effects in other parts of the same system. These companies trade the pace of innovation in favor of stability. It is very rare to see large companies that can innovate fast. I mean true innovation here, not a yearly announcement of minor feature improvements in their product catalog.
Let's take a quick look at the various development stages of companies and what their business goals are for the given stage:
- Startup: The primary goal is to validate product ideas and achieve product-market fit as quickly as possible.
- Scale-Up: The focus is on accelerating growth. This requires hiring talented people rapidly, making popular technologies advantageous due to the larger pool of available engineers. The company must also develop the capability to release new features at an incredible pace—any productivity bottleneck could jeopardize its growth targets.
- Corporate: After a period of substantial growth, the company shifts its focus to optimizing systems and processes. Cost efficiency and operational effectiveness become major priorities. With established revenue streams, stakeholders now expect increasing profit margins.
- Enterprise: Having streamlined operations and improved profit margins, the company is ready to scale even further. Instead of hiring individual engineers, it can acquire entire companies. Success depends on selecting the right acquisitions and integrating them quickly. The right technologies and processes play a crucial role in achieving operational efficiency at scale.
- Mega-Corp (or conglomerate): As a well-established enterprise capable of seamlessly integrating acquired companies, the focus shifts to diversification to mitigate risk. Expanding into new industries and markets becomes essential, with an emphasis on establishing new revenue streams.
The global economy also plays a role in these considerations. In the past, when inflation and interest rates were low, and VC funding was abundant, cost was rarely a consideration. A company would throw more engineers or hardware at a given problem to solve it faster and better. Engineers would not experience pressure from the business to optimize their workloads for cost efficiency. These days are very different. In the pursuit of sustainable growth, companies are obsessed with achieving the principle called the "Rule of 40"[1], and mass layoffs have become the norm. With these macroeconomic headwinds in play, software companies are now forced to figure out how to deliver high-quality software not only fast but also cheap (i.e. efficiently).
The industry is facing a serious dilemma. Companies want conventional technologies (languages, frameworks, libraries, etc.) to be used to write their software. The idea is that freshly hired engineers can be productive immediately and, in case of turnover, the market can easily provide substitutes. If the supply of engineers who can work with a specific technology is low, employers have to pay higher wages or invest time and money in upskilling and educating their workforce. The problem with conventional tech such as Ruby on Rails, React, etc. is that it is designed to get one started fast. Building high-quality software with such tools requires years of experience, which often ends up negating the productivity benefits these tools were designed to offer. To offset the shortfalls of these technologies, engineers often resort to advanced design patterns and clever abstractions. This ends up hurting the companies because new hires have to spend time understanding and learning how to work with these abstractions in the context of the specific company. Thus, the idea of hiring an engineer who can be immediately productive is implausible. Furthermore, once we commit to a specific design pattern, it has to be adhered to. Otherwise, we would end up with a Big Ball of Mud[2]. This takes discipline and time, further lowering overall engineering productivity.
Some engineering managers I talk to about this topic like pointing out that a lack of discipline is a "people problem" and bringing more technology will not fix it. I agree, but that is not the point. We do not apply technology to fix people. We apply technology to make people's lives and work easier. This is what humanity has been doing for millennia. We invent and improve things to solve a specific set of problems. Yes, this often introduces a new set of problems. However, if the invention is good, the new set of problems is more enjoyable to deal with than the old one until a new invention comes along and does the same, ad infinitum.
Other people think that AI could be the answer. Imagine a future where fallible humans could be taken almost entirely out of the software production loop. Machines are generally better at work that requires discipline, aren't they? This way of thinking is an open admission that humans are incapable of producing high-quality software fast.
Okay, but we are still quite far away from a future like that. Today's LLMs produce code of average quality at best because, guess what, they have been trained on publicly available code produced by humans. What shall we do until we reach the amazing future, though?
One thing that comes to mind is the use of cutting-edge technologies specifically designed to solve the difficult problems one faces at scale; technologies that give us the productivity of Ruby on Rails and the reliability of Rust. Technologies that teach us about advanced programming concepts while using them, and not the other way around where we have to learn the concepts first to use them. Utilizing such technologies could also be considered an investment in our future software-producing AI systems that are very much in need of high-quality training data. Sounds too good to be true? Well, I have some good news. This technology exists and it is called Effect.
Effect is a general-purpose programming framework written in TypeScript. It is designed to provide end-to-end type safety[3], immutable data structures[4], formalized dependency injection[5], structured concurrency[6], and much more. It fills the gap left by the missing standard library in the TypeScript ecosystem. Its main goal is to empower engineers to ship high-quality software fast.
That is enough of marketing talk. These are pretty strong claims that need to be backed up. The rest of this series is dedicated to doing just that. We dive deep into software quality: reliability, security, observability, maintainability, testability, performance, and so on. We explore how each of these quality traits supports the business and why it is important for the produced software to possess these traits if one wants to build a lasting product. Last but not least, we discuss how Effect helps us achieve these traits efficiently.
Effect is designed to help us tackle the difficult parts of engineering. Therefore, throughout the series, I often demonstrate its capabilities through fairly complex code examples. Simple, short examples that could convey the same message are tough to devise, but don't worry! I will do my best to break the complexity down and explain the examples bit by bit at a higher level. Ignore the details. The goal is not to learn how Effect works internally or how to use it; the goal is to demonstrate its benefits as clearly as possible.