How to Set Metrics That Matter in Digital Delivery

At a time when digital products are being built and shipped faster than ever, the real challenge isn’t speed - it’s knowing whether the work is actually making an impact.

That’s the question we tackled in a recent webinar with UKTN, where we brought together product and tech leaders to discuss what good looks like when it comes to measurement. From outdated KPIs to aligning teams around business outcomes, the conversation reinforced a simple point: velocity alone isn’t enough - we need to be tracking what actually matters.

It’s also what inspired us to build Pulse, our own internal platform for measuring delivery. Pulse helps product teams understand both performance and sentiment across sprints - giving a clearer view of what’s working, what’s not and where to improve. Because the best decisions come when data and team insight go hand in hand.

Understanding which metrics truly matter is key to driving a team to success. With so many data points available, it's easy to fall into the trap of tracking the wrong things - vanity metrics that look good on a dashboard but fail to provide actionable insights. This post explores how to define meaningful KPIs that align with your team’s goals and business objectives, ensuring you're making data-driven decisions that genuinely improve product delivery and team health. 

For product and engineering managers, metrics serve as both a guiding light and a validation tool. The right ones help refine processes, measure success and make informed decisions. They offer clarity in cross-functional teams where engineers, designers and stakeholders may have different perspectives on what progress looks like. The key is selecting metrics that not only quantify output but also drive behaviors that improve long-term outcomes.

The 2022 Accelerate State of DevOps Report by DORA found that organisations tracking key software delivery metrics - such as deployment frequency, lead time for changes, change failure rate and time to restore service - demonstrate higher software delivery and operational performance, which correlates with improved organisational outcomes. Similarly, our own research found that 100% of respondents agreed measuring productivity is beneficial.

Not all metrics are created equal. Some look impressive but fail to tell us anything useful, while others provide deep insight into efficiency and performance. A meaningful metric should be actionable (the team can influence it through their work), aligned with product and business goals, and balanced, combining qualitative and quantitative measures for a holistic view.

Too often, teams focus on raw output, such as the number of features shipped or lines of code written, rather than the impact of their work or the quality of their process. A great product isn't just built quickly; it delivers real value. Research from Google's DevOps Research and Assessment (DORA) team highlights that elite teams deploy code 208 times more frequently than low-performing teams while maintaining higher system reliability. This underscores the importance of tracking meaningful metrics that drive positive behaviours.

There are so many metrics and frameworks out there that it can be overwhelming to know where to start. Rather than trying to track everything, it's important to focus on a few key metrics that drive meaningful impact. A good starting point is the four DORA measures already mentioned - deployment frequency, lead time for changes, change failure rate and time to restore service - alongside qualitative signals such as customer satisfaction and team sentiment.

It’s crucial to balance numbers with real-world context. High deployment frequency might seem great until customer satisfaction scores indicate a quality problem. Faster cycle times might come at the cost of burnout, making qualitative feedback from teams just as important.

Bad metrics can be worse than no metrics. Vanity metrics that don’t drive meaningful change, overcomplicating measurement with too many KPIs, or using metrics punitively can lead teams in the wrong direction. Studio Graphene’s research found that 58% of industry leaders warn against misleading metrics, citing potential negative impacts on morale and productivity. A study by Harvard Business Review also found that 67% of employees feel pressured to game metrics when they are tied to performance evaluations, which can create misleading data. The best approach is to keep things simple and focused, ensuring the team understands why each metric exists and how it contributes to success.

Once the right metrics are in place, the focus should shift to how they’re used. They should be visible and accessible through dashboards or reports, regularly reviewed and adapted as teams and products evolve and openly discussed to ensure shared understanding. Numbers don’t exist in isolation and fostering a culture where teams analyse and iterate based on data leads to more effective decision-making.

Ultimately, metrics are a means to an end, not the end itself. The goal isn’t just to measure more but to measure better. By focusing on meaningful, actionable and balanced KPIs, product teams can avoid distractions and concentrate on what truly moves the needle. A great product isn’t the one with the most features or the fastest release cycle - it’s the one that delivers lasting value. With the right approach to metrics, teams can unlock efficiency, drive continuous improvement and build products that genuinely make an impact.
