Measuring and Improving ROI From Your Technology Org

How do you define developer productivity? What is the difference between effectiveness and efficiency? 

Measuring developer productivity is complicated because workflows are nonlinear – unlike business functions like marketing or sales, where progress can be clearly tracked across a funnel, engineering is iterative and progress toward one goal (e.g., shipping code) doesn’t necessarily result in progress toward a larger goal (e.g., releasing a new feature).

Engineering productivity should be evaluated through 2 types of metrics:  

  • Efficiency metrics – how quickly the engineering team can develop, and the quality of their output.
  • Effectiveness metrics – whether the engineering team is building the right things that contribute to business success.

Productivity benchmarks are always relative – there are no industry standards for story points, commit frequency, or code quality metrics. A complex feature might receive 10 story points at one company and 20 at another. Establish a well-defined structure for what you want your organization to report against, then measure baseline performance and strive for continuous improvement rather than adopting arbitrary targets that were designed for another company.
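One way to operationalize "measure baseline, then improve" is to compare recent sprints against the team's own history rather than an external benchmark. A minimal sketch in Python (the function name and sprint velocities are hypothetical):

```python
from statistics import mean

def velocity_trend(sprint_velocities, baseline_window=6):
    """Compare recent sprints against this team's own rolling baseline.

    Story points are only comparable within a single team, so the
    benchmark is the team's own history, not an industry number.
    """
    baseline = mean(sprint_velocities[:baseline_window])
    recent = mean(sprint_velocities[baseline_window:])
    return {
        "baseline": round(baseline, 1),
        "recent": round(recent, 1),
        "change_pct": round(100 * (recent - baseline) / baseline, 1),
    }

# Hypothetical velocities for one team's last 10 sprints
print(velocity_trend([32, 35, 30, 33, 36, 34, 38, 40, 37, 41]))
# → {'baseline': 33.3, 'recent': 39.0, 'change_pct': 17.0}
```

The same pattern applies to any relative metric: establish the window that represents "normal" for this team, then track direction of change rather than absolute values.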

What tools can Technology teams leverage to help them measure developer productivity? 

Standard project management tools automatically generate most basic engineering reports – tools like JIRA provide standard scrum reports that capture sprint burndown, velocity, and cumulative flow.

Using GitHub Actions to document workflows can help improve productivity – workflow documentation allows your team to operate according to a defined series of steps, gates, and reviews/approvals across the development lifecycle. This tooling isn’t new, but teams often fail to use it or properly customize their workflows.

Advanced productivity metrics (e.g., code churn, defect density) require data extraction from multiple tools – the data behind metrics like code churn, defect density, and DORA measurements must be pulled from multiple systems and connected manually:

  • Git analytics tools like GitPrime (now Pluralsight Flow), CodeClimate Velocity, or LinearB to pull code churn metrics from version control systems
  • Issue tracking APIs from Jira, Azure DevOps, or GitHub Issues to extract defect data
  • CI/CD pipeline APIs from Jenkins, GitLab CI, CircleCI, or Azure DevOps to gather deployment frequency and lead times
  • Monitoring/incident tools APIs like PagerDuty, Datadog, or New Relic for mean time to recovery data
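As a sketch of what that manual extraction looks like, code churn can be approximated straight from version control with `git log --numstat` (function names here are illustrative; assumes `git` is on the PATH):

```python
import subprocess
from collections import defaultdict

def parse_numstat(numstat_output):
    """Sum lines added + deleted per file from `git log --numstat` output.

    Each data line is "added<TAB>deleted<TAB>path"; binary files show
    "-" in the count columns and are skipped.
    """
    churn = defaultdict(int)
    for line in numstat_output.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            churn[parts[2]] += int(parts[0]) + int(parts[1])
    return dict(churn)

def churn_by_file(repo_path, since="90 days ago"):
    """Rank files in a local git repo by recent churn, highest first."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted(parse_numstat(out).items(), key=lambda kv: kv[1], reverse=True)
```

Renamed files appear as separate paths in this naive version; dedicated tools like Pluralsight Flow or LinearB also track renames, authorship, and rework on top of the same raw data.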

Specialized productivity tools are emerging but unproven – some companies are starting to offer tools that automatically analyze productivity metrics across the development cycle, but these options are still nascent.

Assessing Efficiency

How can you measure developer efficiency?

Efficiency metrics focus on the mechanics of software delivery.

Velocity metrics:
  • Story point velocity – how much is retired against what was planned for each sprint
  • Tech debt accumulation – the longer-term cost of shortcuts developers take to meet short-term deadlines
  • Time to merge code – the time it takes to review a pull request

Quality metrics:
  • Code churn – how often code is modified, rewritten, or deleted after its initial creation. High churn indicates that people are misunderstanding requirements or creating flawed outputs, both of which contribute to instability.
  • Defect density – the number of defects in a code base relative to its size (typically per thousand lines of code)
  • Defect discovery location – where defects are discovered reveals the cost of speed (i.e., engineering efficiency) because bugs found during development are roughly 5X less expensive to fix than defects found in production.

Note: Avoid metrics like Number of Commits or Lines of Code, which can be gamed without accurately representing developer output.
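The quality metrics above reduce to simple ratios once the defect data has been extracted. A minimal sketch with hypothetical numbers:

```python
def defect_density(defect_count, loc):
    """Defects per thousand lines of code (KLOC)."""
    return round(defect_count / (loc / 1000), 2)

def escape_rate(found_pre_release, found_in_production):
    """Share of defects that escaped to production, where fixes are
    roughly 5X more expensive than those caught during development."""
    total = found_pre_release + found_in_production
    return round(found_in_production / total, 2)

# Hypothetical quarter: 120 defects in a 400k-line codebase,
# 90 caught before release and 30 found in production.
print(defect_density(120, 400_000))  # → 0.3
print(escape_rate(90, 30))           # → 0.25
```

Tracked over time, a falling escape rate shows that shift-left testing is working even if total defect counts stay flat.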

Assessing Effectiveness: “Building the right thing” and “Building it the right way”

How do you measure effectiveness?

Effectiveness metrics assess the extent to which product effort drives business value – there are no benchmarks for these metrics because business needs are relative. For example, if product stability is such a large problem that your team needs to spend most of its time fixing issues instead of developing new features, releasing a limited quantity of new features that quarter could be an indicator of effectiveness.

Main effectiveness metrics include:

  • Work allocation adherence – measuring how effectively development teams stick to planned work distributions and resource assignments. This tracks the percentage of time developers spend on their designated tasks versus unplanned work, context switching, or scope changes.
    • Amount of time spent supporting new bookings – developing features that prospects are asking for.
    • Amount of time spent supporting retention – developing features that customers are asking for or that drive competitive parity.
    • Amount of time spent driving innovation – developing innovative features that provide marketplace differentiation, become your USP, or drive efficiency.
  • Business outcome contribution – evaluating how directly software engineering work translates into measurable business results. This measures the connection between development efforts and key business indicators such as revenue growth, user engagement, cost reduction, or market share expansion

Achieving desired ratios across these areas equals effectiveness – time allocation is the bridge between product investment and business outcomes. If you spent the right amount of time on each of these categories, you delivered according to business needs (assuming those needs were accurately prioritized).

What are the steps to assessing whether engineering resources are allocated effectively?

Step 1: Product gathers inputs – Product requests feedback and takes requests from:
  • Sales
  • Marketing
  • Customer Success
  • Customer Support

Step 2: Product determines the target work allocation plan for the quarter – business needs drive prioritization: Product reconciles inputs from different departments to identify the features and activities that will drive the greatest impact for the business as a whole.

For example, a target allocation plan might assign engineering time across the following work areas:
  • Developing features that will help drive new bookings: 60%
  • Developing features that will help reduce churn / achieve competitive parity: 10%
  • Bug fixes: 10%
  • Debt remediation: 10%
  • Something else: 10%

Step 3: Engineering executes and tags the work – tagging drives accountability and measures effectiveness: each time engineering completes a JIRA story or ticket throughout the quarter, they tag it with its corresponding allocation category.

Step 4: Compare actual effort to allocation goals – the outputs of effective teams align with business priorities: achieving as much of the allocation plan as possible indicates that a team is well-managed and executing work in accordance with what delivers value into the market.

If you deviated from the plan, understand why – did you underestimate story points? Did a production crisis pull the entire team into all-hands-on-deck mode? Was the deviation in the business’s best interest?

Note: this comparison assessment should occur on a monthly or quarterly basis.
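The tag-and-compare loop above can be sketched as a small report over exported tickets (the tag names, point values, and function name are hypothetical):

```python
from collections import Counter

def allocation_variance(tagged_tickets, target):
    """Compare actual story points per allocation tag to the quarterly plan.

    `tagged_tickets` is a list of (allocation_tag, story_points) pairs as
    exported from a tracker like JIRA; `target` maps tag -> planned share.
    """
    points = Counter()
    for tag, sp in tagged_tickets:
        points[tag] += sp
    total = sum(points.values())
    return {
        tag: {
            "planned": planned,
            "actual": round(points.get(tag, 0) / total, 2),
            "variance": round(points.get(tag, 0) / total - planned, 2),
        }
        for tag, planned in target.items()
    }

# Hypothetical quarter of tagged tickets
tickets = [("new_bookings", 55), ("new_bookings", 65), ("retention", 10),
           ("bug_fixes", 30), ("debt", 20), ("other", 20)]
target = {"new_bookings": 0.60, "retention": 0.10, "bug_fixes": 0.10,
          "debt": 0.10, "other": 0.10}
report = allocation_variance(tickets, target)
print(report["bug_fixes"])  # over plan: actual 0.15 vs. planned 0.10
```

A large positive variance on bug fixes, as in this example, is exactly the kind of deviation worth investigating in the monthly or quarterly review.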

What percentage of your team’s time should be spent on planned work versus unplanned work or interruptions?

Ideally, close to 100% of time should be spent on planned work – teams that allocate effort according to plan have better control over their systems and are more likely to invest time and resources against activities that are aligned with business priorities. High levels of unplanned work indicate poor planning, system instability, and/or poor management of technical debt.

Always allocate at least 5-10% of time to paying down technical debt – while spending more time on technical debt can become necessary, it’s good hygiene to spend a small amount of time each quarter proactively reducing tech debt before it becomes an urgent issue.

Note: some technical debt accumulation is normal – it is extremely hard to avoid. Fortunately, most problems only emerge if teams don’t take proactive debt resolution seriously.

How is responsibility for effectiveness shared between Product vs. Engineering? How can they collaborate to ensure development is valuable?

Product is where value is translated and defined – the product team is the gatekeeper/translation layer between all other functions and engineering. They decide what gets built and how different features are prioritized.

Engineers are the builders of value – they assess how to deliver the value identified by the product team and are responsible for the velocity of feature development.

Example of how Product and Engineering contribute to productivity:

  • Product → prioritizes the features they want Engineering to create based on the value each feature will drive for customers and the brand
  • Engineering → determines how many story points a feature is worth. For example, they might decide that a specific feature Product prioritized is worth 100 story points.
  • Upon receiving Engineering’s estimate, Product → realizes that they’d under-anticipated the work required to create the feature. They decide to swap that feature out for 3 other features that together will provide more customer value.
  • Engineering → builds according to the updated prioritization.

Communicating Productivity Across Stakeholders

How does the perspective on developer productivity differ between the PE sponsor, the engineering function, and individual developers? 

Perspective: PE Sponsor

Priorities & Key Questions:
  • What business outcomes or impacts did engineering deliver this quarter?
  • Among all the releases this quarter, how much did that work output contribute to driving value?

Note: board reporting occurs on a quarterly basis.

Tips:
When presenting to the board, all engineering work must tie to one of 3 key outcomes:
  • Booking new revenue
  • Reducing churn
  • Saving on operational costs

Tell a value “story”. For example:
  • Prospects were asking for new features, which your team created last quarter. Sellers were able to get X new bookings as a direct result of those features.
  • Customer Y said they would churn if we didn’t provide this feature. We made the feature, and they renewed.
Perspective: Engineering Org

Priorities & Key Questions:
Standard pipeline visibility:
  • How often can a team push new code into production?
  • How long does it take for code committed to a repository to make its way into production?
  • What percentage of deployments result in failures that require rolling back the change or issuing a hot fix?
  • How long does it take to restore service after an unplanned incident?

Developer experience:
  • How satisfied are my developers?
  • How long does it take to onboard new engineers?
  • How effective is that onboarding process?

Note: engineering teams use a monthly reporting cycle.

Tips:
Use DevOps Research and Assessment (DORA) metrics to measure:
  • Deployment frequency – agility and responsiveness to deliver change. Some teams can deliver change daily, while others can deliver change just 1-2X per year.
  • Lead time for change – efficiency of the pipeline from developer to production. Some teams have a 5-day lead time, while others have a 15-day lead time.
  • Change failure rate – ability to drive quality, stable releases into production
  • Mean time to recover – how quickly you can recover after an update accidentally brings the site down

Developer experience (DX) metrics and methods are becoming increasingly popular:
  • Satisfaction surveys – ask for developers’ opinions on tooling, process, and work environment.
  • Time to first commit – indicates the effectiveness of onboarding, new hire comprehension, and the complexity of your code base.
  • Employee net promoter scores – measure how likely someone is to recommend the workplace to another.
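The DORA metrics can be computed once deployment, commit, and incident timestamps are joined into one dataset. A sketch assuming a simple record shape (field names and the sample data are illustrative):

```python
from datetime import datetime, timedelta
from statistics import mean

def dora_metrics(deployments, period_days=30):
    """Compute the four DORA metrics from joined deployment records.

    Each record needs `committed_at`, `deployed_at`, `failed`, and (for
    failed deploys) `restored_at` -- fields that in practice are joined
    from CI/CD and incident-management APIs.
    """
    lead_times = [(d["deployed_at"] - d["committed_at"]).total_seconds() / 3600
                  for d in deployments]
    failures = [d for d in deployments if d["failed"]]
    recoveries = [(d["restored_at"] - d["deployed_at"]).total_seconds() / 3600
                  for d in failures]
    return {
        "deploys_per_day": round(len(deployments) / period_days, 2),
        "lead_time_hours": round(mean(lead_times), 1),
        "change_failure_rate": round(len(failures) / len(deployments), 2),
        "mttr_hours": round(mean(recoveries), 1) if recoveries else None,
    }

# Hypothetical month: four deploys, one of which failed and took 2h to restore
base = datetime(2024, 1, 1)
def _deploy(day, lead_h, failed=False, recover_h=0):
    committed = base + timedelta(days=day)
    rec = {"committed_at": committed,
           "deployed_at": committed + timedelta(hours=lead_h), "failed": failed}
    if failed:
        rec["restored_at"] = rec["deployed_at"] + timedelta(hours=recover_h)
    return rec

deploys = [_deploy(1, 24), _deploy(8, 48, failed=True, recover_h=2),
           _deploy(15, 12), _deploy(22, 36)]
print(dora_metrics(deploys))
```

The hard part in practice is the join itself: commit SHAs must flow from the repository through the pipeline into incident records so these timestamps line up.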
Perspective: Individual Engineer

Priorities & Key Questions:
Unless you have a very small team, individual-level productivity metrics aren’t useful because outcomes are driven at the team level.

Tips:
Teams are moving away from individual-level metrics – number of commits, lines of code, and PRs are vanity metrics that can easily be gamed and don’t effectively evaluate contribution.

How should the technology leaders address developer productivity in board meetings?

Board presentations should be business-oriented – CTOs usually have an OKR guiding their business goals (e.g., “This quarter, I want to drive down hosting costs by 15%”). Structure your board presentation around that goal.

Show how you progressed on your goals (at a high level) – for example, you might have driven down hosting by slashing a bunch of servers, changing code to run more efficiently, or moving from AWS to Azure because you got a better deal.

Tie engineering effort to business outcomes – always connect quarterly activities to effectiveness metrics. Provide numbers and specific examples of how engineering work directly supported a revenue bump or drove down churn.

Only discuss technical hygiene if it has become a problem – board members understand that technical debt and code churn are important, but they are rarely highlighted during board meetings unless either area has become a strategic priority.

Improving Productivity

What DevOps practices should you consider to improve developer productivity? 

Focus on DORA metrics to improve the SDLC – the development pipeline can get messy due to the many moving parts, different people, and orchestration required to get code from A to B to C. All DORA metrics help CI/CD pipelines perform better.

Shift tests left to reduce expensive fixes and security risks across 2 areas:

  • QA – instead of waiting for a piece of code to go into QA, UAT, or an integration environment, identify and resolve issues earlier to expedite and simplify later testing processes.
  • Secure software development – many junior and intermediate programmers might not understand secure development practices (e.g., how threat actors can siphon data or shut down your environment based on vulnerabilities). Add a static code analyzer to your CI/CD pipeline to identify SQL injection vulnerabilities, security flaws, and code quality issues during development. Early detection costs significantly less (and exposes you to less risk) than post-deployment remediation.

How do you accelerate the creation of code using AI automation tooling?

AI coding tools can improve efficiency by increasing both the volume and the quality of work a dev team can produce – tools like Copilot, CodeWhisperer, Cursor, and Windsurf can read entire codebases, generate code, and make contextually appropriate suggestions that align with existing architecture and patterns.

AI tools excel at writing test cases – test cases are the Achilles’ heel of dev teams because writing tests can be tedious and less exciting than creating the features that they’re testing. The result is that testing is frequently omitted or skipped over, which creates risks that can have significant impacts down the road. AI tools can analyze new code and automatically generate corresponding test suites, which dramatically improves code quality while reducing the manual effort required for comprehensive testing.

Use AI-powered tools to identify edge cases – this interactive approach to coding can help you think through edge cases that you might not have discovered. The process of identifying edge cases might now look like this:

  • Explain what you’re trying to do – for example, you might explain that you’re trying to write code to connect to an API, and you need to save the data as a CSV file.
  • Generate the code – the tool automatically writes the code for you.
  • Ask about edge cases – ask what other edge cases you did not consider for this code.
  • Generate suggestions – for example, the tool might point out that since your CSV file is a comma separated file, any commas that make their way into Column B of the CSV file will break the formatting. The tool might also suggest considering putting everything in quotes to prevent this from becoming an issue.
  • Solve for the edge case – ask the tool to handle the suggested updates itself.
  • Repeat – continue prompting for edge cases until you are satisfied or don’t believe that the edge cases it comes up with are likely to happen.  
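The comma-in-CSV edge case described above is exactly what quoting solves; a minimal sketch using Python's standard csv module (the rows stand in for hypothetical API data):

```python
import csv
import io

# Rows pulled from a hypothetical API; the first value contains a comma
rows = [
    {"company": "Acme, Inc.", "arr": "125000"},
    {"company": "Globex", "arr": "98000"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["company", "arr"], quoting=csv.QUOTE_ALL)
writer.writeheader()
writer.writerows(rows)

# Quoting keeps the embedded comma from splitting the field on re-read
parsed = list(csv.reader(io.StringIO(buf.getvalue())))
print(parsed[1])  # → ['Acme, Inc.', '125000']
```

This is the "put everything in quotes" suggestion from the example: `QUOTE_ALL` wraps every field, so embedded commas survive the round trip.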

Note: documentation is becoming slightly less important due to AI tools – because these tools can read code and explain, in clear English, what that code does, the right tooling can reduce the need for explicit documentation.

How should you structure a team for efficient development? 

How to share and divide resources across an engineering org:

Align a separate pod to each product line – each product should have a dedicated team that includes a product manager, engineering lead or manager, engineers, QA resources, and (depending on team size) DevOps support. This structure concentrates domain and engineering expertise and enables each pod to prioritize engineering resources for that product on a quarterly basis.

For example, Product A’s priority might be allocating 60% of work to new features, while Product B’s priority is spending 50% of its time on bug fixes. Each product needs an independent allocation plan to succeed.

Use shared horizontal functions to standardize practices across products – each pod doesn’t necessarily need its own DevOps or Infrastructure person, but every pod needs to do CI/CD in the same way. Shared teams of 2-3 people support all pods and implement CI/CD best practices across the organization.

Shared functions should think of their customers as the pods that want to release software into the company’s infrastructure. This standardization is especially important for areas like cloud service usage, databases, logging, and monitoring.

Cross-pod councils emerge as engineering orgs scale beyond basic product teams – the larger the company, the more specialized councils cut across product boundaries to support strategic and technical best practices and standardize key areas of operation. Each council might comprise individuals from a cross-section of different pods. Examples include:

  • Architecture Council – ensures consistent technical approaches
  • Platform Team – builds internal tooling used by all product teams
  • Governance and Compliance Team – limits risk. For example, they might manage open source licensing to reduce the extent to which products are exposed to IP license risk.

Team composition should prevent single points of failure in code review processes – if the team structure isn’t right, a single person (e.g., an engineering manager) can become a bottleneck and delay the entire team’s code delivery.

How should you incorporate a global workforce or reductions in force into your considerations of developer productivity? 

Cutting headcount spend should typically not result in decreased effectiveness – the goal is to accomplish the same or more for less spend. Cutting spend should not result in lower productivity unless that decrease is reflected in the company’s strategic plan.

Global workforce models can enable pricing arbitrage savings of 20-60% – finding a professional services or outsourcing company can allow you to buy the capacity of engineers based in other geographies with less expensive salaries. This approach doesn’t necessarily improve efficiency, but it can significantly support cost optimization efforts. Outsourcing partners can help you hire in another geography (e.g., Eastern Europe or South America), onboard engineers, train them, and help manage their development.

Team Dynamics & Culture

How do you maintain a culture of adoption of efficiency measures? How do you measure and increase uptake of new workflows?

Focus on how change will improve the developer experience – engineers are more likely to embrace changes that improve satisfaction with their tooling, processes, and work environment. Explain why their work matters and how these changes will empower them to create meaningful impact.

Roll out adoption one pod at a time to create internal champions – prioritize building momentum instead of requiring wide scale, instant change. Work with the first pod to tweak the new methodology until the team is happy with it. Then, the team will naturally advocate on behalf of the initiative to the rest of the pods.

Make engineers feel like they’re part of the process – engineers respond better when they are part of creating a new system/solution instead of being told what to do. A collaborative, iterative approach leverages the engineering team’s natural problem-solving inclinations and results in better solutions and stronger buy-in.

How do you approach productivity knowledge sharing within your team? What meetings or reporting cadences should you hold? 

A big town hall or leadership-style meeting is necessary to secure buy-in – operational meetings like scrums and retrospectives can reinforce changes, but the initial communication must come from leadership. Ideally, the CEO, founder, and/or CTO will kick off a big change by explaining what’s coming, sharing the goals and reasoning behind the initiative, and explaining how the team will measure success.

Overall

What are common pitfalls? 

Vanity metrics foster a culture of looking busy instead of driving impact – measuring lines of code, numbers of commits, or hours worked can be easily gamed and aren’t necessarily helpful. Just because tools let you measure something doesn’t mean it’s the most useful thing to measure.

Tooling chaos (or a lack of tooling) breaks the tool chain – when GitHub, JIRA, and CI/CD systems aren’t properly tagged or don’t share metadata, teams lose the ability to connect effort, efficiency, and effectiveness. For example, if a developer spends days on JIRA ticket #475 but commits the code without tagging the JIRA ticket, there is no way to tie that work back to the relevant work allocation category.
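A lightweight guard against exactly this failure is scanning commit messages for ticket keys before they merge. A sketch (the key format and commit messages are hypothetical):

```python
import re

# Matches keys like "APP-12"; the project prefix here is hypothetical
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def split_by_ticket_link(commit_messages):
    """Separate commits that reference a JIRA key from those that don't;
    untagged commits can't be tied back to an allocation category."""
    tagged, untagged = {}, []
    for msg in commit_messages:
        keys = JIRA_KEY.findall(msg)
        if keys:
            tagged[msg] = keys
        else:
            untagged.append(msg)
    return tagged, untagged

commits = ["APP-12 fix export timeout", "quick hotfix", "APP-14 add retry logic"]
tagged, untagged = split_by_ticket_link(commits)
print(untagged)  # → ['quick hotfix']
```

Run as a commit-message hook or CI check, this keeps the metadata chain intact so effort can later be rolled up by allocation category.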

Incomplete feedback loops prevent teams from understanding the “why” behind their work – scrum ceremonies always focus on what was accomplished and how quickly it was accomplished. However, engineers need to understand why their work is important, why something is or isn’t urgent, and how it connects to the business.
