DEV Community

Tiago Barbosa for Rely.io

Why does improving Engineering Performance feel broken?

Many engineering teams conduct performance reviews (not individual performance reviews, but assessments of a team's assets and practices) that feel like a blame game or an exercise in vanity metrics, leading to frustration, finger-pointing, and no meaningful improvement.

Yet companies continue this exercise because the stakes are high: engineering performance directly impacts delivery speed, product quality, developer satisfaction, and ultimately, business outcomes. Without a structured review process, teams can miss critical bottlenecks, lose visibility into progress, and stall improvements.

Done right, engineering reviews can uncover bottlenecks, celebrate successes, and provide a clear, actionable plan to improve delivery, quality, and developer experience.

Or so goes the common belief… especially among management.

This guide is for Engineering Leaders – CTOs, VPs, Heads, Directors, EMs – who want to drive performance but are still tinkering to find the right process and align teams on outcomes.

Do we even need Engineering Performance Reviews?

What even constitutes a review of Engineering Performance?

At its core, an engineering performance review is a process of analyzing how teams, systems, and workflows contribute to delivering software efficiently, reliably, and sustainably. It can include quantitative metrics, qualitative feedback, and outcomes alignment.

The reality is, we all do it, whether in a systematic, organized way or not, and at every scale:

  • In startups, “reviews” happen organically as developers debate and settle on a few practices to follow so they have some semblance of consistency in their code (tabs vs. spaces, documentation practices, etc.).
  • In small teams within larger companies, reviews ensure alignment and smooth handoffs between developers, increasing shared ownership, reducing ramp-up times and facilitating internal rotations.
  • In mid-sized organizations, reviews address collaboration across teams to ensure operational excellence.
  • In large companies or enterprise settings, they scale to departments, business units, and entire organizations to tackle systemic challenges and improve accountability.

What do we typically hope to get from it?

  • Continuous Improvement: Reviews uncover strengths and weaknesses, enabling systemic changes to improve productivity, delivery speed, and code quality. In the end, it’s also about DevEx – imagine how painful collaboration would be if every code change or resource adjustment felt as convoluted as a bureaucratic process. Now project that inefficiency across thousands of developers.
  • Accountability: Reviews promote transparency and shared ownership. Teams gain clarity on how their contributions tie to the bigger picture and business outcomes.
  • Alignment: Reviews offer a structured forum to align engineering work with overarching company goals. They ensure teams focus on solving the right problems.
  • Celebrating Wins: Often overlooked, reviews recognize achievements, highlight progress, and motivate teams to sustain momentum.
  • Visibility: Engineering leaders gain a clear understanding of team performance, bottlenecks, and progress toward shared objectives.

So, is this just a “big company thing”?

No. The need for engineering reviews exists at every scale because improving collaboration, accountability, and efficiency is essential for all teams.

The difference? At scale, processes that worked informally begin to break down, creating frustrations for both leadership and teams. What once enabled quick collaboration now feels like a painful exercise in finger-pointing or red tape.

The challenge is clear: how do we scale engineering performance reviews so they deliver the intended value without creating overhead or mistrust?

Why Traditional Engineering Reviews Fall Short

Engineering reviews often begin with good intentions but fall apart in execution. Why? Because the process itself feels disconnected from the teams it aims to serve.

First, the exercise feels forced. Too often, reviews are initiated by leadership without input from the engineers on the ground. Instead of an opportunity for teams to reflect and improve, they can feel like a mandate from above, imposed rather than embraced.

Second, they are painful and unfair from the outset. Measuring engineering performance relies on KPIs that most organizations don’t have readily available. The irony? The burden of collecting these metrics almost always falls on engineers. Instead of focusing on their work, they find themselves wrestling with sporadic code instrumentation, custom database queries, or extracting data from multiple developer tools – tedious, manual, and thankless tasks performed for leadership’s benefit.
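To make that burden concrete, here is a hypothetical sketch of the kind of one-off glue script this chore usually produces: parsing raw `git log` timestamps just to count merges per week. The sample data is illustrative, not from any real team; in practice this would be piped in from the repository itself.

```python
from collections import Counter
from datetime import datetime

# Illustrative stand-in for `git log --merges --format=%cI` output.
raw_log = """\
2024-03-04T10:12:00+00:00
2024-03-06T15:40:00+00:00
2024-03-11T09:05:00+00:00
2024-03-13T17:22:00+00:00
2024-03-14T11:30:00+00:00
"""

def merges_per_week(log: str) -> Counter:
    """Count merge commits per (ISO year, ISO week) from timestamp lines."""
    weeks = Counter()
    for line in log.splitlines():
        ts = datetime.fromisoformat(line.strip())
        year, week, _ = ts.isocalendar()
        weeks[(year, week)] += 1
    return weeks

print(merges_per_week(raw_log))
```

Scripts like this are easy to write once and painful to maintain: they capture a single metric, for a single repository, in a format nobody else can drill into.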

The next problem is priority. Engineering reviews inevitably compete with business as usual – and rarely win. Product engineers, for example, are evaluated and rewarded for delivering features, not collecting data for compliance or operational reviews. Why would they prioritize a process that doesn’t contribute to their career growth? It becomes an exercise without a clear value proposition.

Even when the data is collected, reviews face a trust problem. Metrics alone are rarely enough. They offer a partial picture at best, stripped of the context that explains why a team’s performance may look the way it does. Worse, these metrics are often compiled manually into static datasets – outdated, inaccurate, and impossible to drill into. A single point-in-time snapshot rarely reflects the nuanced, ongoing story of engineering work.

And finally, the outcome feels disconnected. Reviews often produce results that engineers simply don’t recognize. A spreadsheet of metrics suggesting “team X isn’t deploying often enough” lands like a punchline without context. Teams roll their eyes at comparisons to Google’s benchmarks or abstract best practices, which often ignore the unique challenges they face.

When reviews are forced, painful, and untrusted, they lose their potential to drive change. Instead of creating alignment and improvement, they become divisive exercises that highlight friction between leadership and teams.

Treat Engineering Teams Like Customers, Not Direct Reports

As the title of this section suggests, there is a way out of the cursed corner that engineering reviews have painted themselves into—and the good news is that it doesn’t require recreating rotary-to-linear motion conversion devices. The key is to apply well-known principles, often borrowed from platform engineering and product management, to how reviews are structured and executed.

A core tenet of platform engineering is to treat developers as customers. As a platform team, you act like a product manager, delivering capabilities that solve real pain points for your users. These aren’t features that make your life easier; they’re built to empower developers, removing obstacles and enhancing productivity.

So why can’t engineering leaders apply the same mindset when approaching engineering performance?

Imagine this: Instead of saying, “We need to report DORA metrics to leadership,” make it a collaborative journey with the teams themselves. Even if it’s a top-down request, the framing matters—engaging teams as co-creators ensures buy-in and ownership.
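When DORA metrics do come up in that conversation, it helps that two of the four keys are simple enough to compute together from deployment records. Below is a minimal sketch of deployment frequency and lead time for changes; the record format is an assumption for illustration, not a prescribed schema.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records: when the change's first commit was
# made, and when it reached production.
deploys = [
    {"committed": datetime(2024, 3, 1, 9),  "deployed": datetime(2024, 3, 2, 14)},
    {"committed": datetime(2024, 3, 3, 11), "deployed": datetime(2024, 3, 5, 10)},
    {"committed": datetime(2024, 3, 6, 16), "deployed": datetime(2024, 3, 7, 9)},
]

def deployment_frequency(records, window_days: int) -> float:
    """Average deployments per day over the observation window."""
    return len(records) / window_days

def median_lead_time(records) -> timedelta:
    """Median time from commit to production deploy."""
    return median(r["deployed"] - r["committed"] for r in records)

print(deployment_frequency(deploys, window_days=7))
print(median_lead_time(deploys))
```

The point of co-creating even a toy computation like this is that the team agrees up front on what counts as a "deploy" and where the clock starts, before any number reaches a leadership slide.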

Start the Journey with Questions

Much like product discovery, onboarding teams into the performance improvement journey starts with understanding their perspectives. Ask open-ended, collaborative questions like:

  • Where do you believe we are most inefficient? Why?
  • How do you think we fare compared to the rest of the org?
  • Where should we improve first, and why?
  • How much effort should we put into this issue? For what result, knowing what we have on our plate already?
  • What impact do we foresee on deliveries? How should we sell this up the chain?

Let Teams Cook

Answering these questions is no easy task, even for experienced teams. Resist the temptation to jump in with pre-baked solutions—instead, give teams the space to propose their own ideas.

The process is more valuable than the outcome at this stage. Let them ideate and push back constructively, much like a product brief review:

  • Would this be an accurate representation of our performance?
  • Would this be easily understandable by other teams? By higher-ups?
  • Can this be extended or reproduced by other teams, or is it too context-specific to our work?

Facilitate the Team’s Own Goals

This is where the engineering leader’s role becomes pivotal: as a facilitator, not a dictator. Your experience in aligning internal team goals with organizational priorities is crucial. Teams need your help to shape their ideas into coherent, actionable plans that resonate both internally and with leadership.

Propose frameworks, compliance standards, or engineering performance tools as means to an end—not as rigid mandates:

  • “Like developing a product or a feature, let’s start small and iterate success after success.”
  • “Yes, all these frameworks and tools might have inaccuracies or inherent unfairness, but they offer a battle-tested starting point.”
  • “Most importantly, they save us time and effort, allowing us to focus on gaining internal momentum, showing progress and delivering early wins.”

By treating engineering teams like customers, engineering leaders can transform reviews into a collaborative, empowering process. Teams become active participants in their performance journey, owning both the challenges and the solutions, while leadership gains the actionable insights they need without alienating the people doing the work.

Conclusion: Reimagining Engineering Performance Reviews

Engineering performance reviews are critical but often feel broken because of misaligned processes, forced mandates, and a lack of trust. By treating engineering teams like customers—engaging them as co-creators, asking the right questions, and facilitating their goals—leaders can transform reviews from painful chores into empowering, collaborative exercises. The key is to focus on creating alignment, celebrating wins, and fostering continuous improvement without the overhead or mistrust that typically derails these efforts.

Engineering performance reviews shouldn’t be about hitting abstract benchmarks or generating metrics for leadership’s sake. Instead, they should serve as opportunities for teams to uncover bottlenecks, align on priorities, and drive meaningful, systemic change. With the right mindset and approach, reviews can become a cornerstone of engineering success, balancing leadership’s need for visibility with teams’ desire for autonomy and recognition.

What’s Next? How to Conduct Effective Engineering Performance Reviews

In the next article, we’ll dive into the how. You’ve understood the pitfalls of traditional reviews and the principles for a better approach. But what does an effective process look like in practice? How do you define goals, measure progress, and ensure actionable outcomes? We’ll explore frameworks, tools, and real-world examples to help you design and execute engineering performance reviews that deliver results—without the pain.

Stay tuned for a step-by-step guide to transforming engineering performance reviews into a process that empowers teams and drives sustainable improvements.
