Introduction

wallowa is a data fetch and analysis tool focused on providing straightforward, transparent insight into the system-level behavior of your Software Development Life Cycle (SDLC), with both out-of-the-box DORA/SPACE-like measures and flexible ad hoc queries using SQL or various dataframe dialects. It is designed to give anyone on your team the ability to query your software operations and development data using familiar tools.
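
For a taste of what an ad hoc query might look like, here is a minimal SQL sketch. The github_pulls table and its columns are hypothetical stand-ins for illustration, not wallowa's actual schema, and interval arithmetic varies by SQL engine:

    -- Hypothetical sketch: average time from PR creation to merge, per repository.
    -- Table and column names here are illustrative assumptions.
    SELECT
        repo,
        AVG(merged_at - created_at) AS avg_time_to_merge
    FROM github_pulls
    WHERE merged_at IS NOT NULL
    GROUP BY repo
    ORDER BY avg_time_to_merge DESC;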

Transparency, openness, and empowerment are all intentionally designed into wallowa.

Get started or learn more about the philosophy behind the tool.

Features

Key system-level DORA/SPACE-like measurements are available out of the box to provide immediate insight into the bottlenecks in your tooling and processes. Go further by using SQL, data frames, and other familiar data analysis tools to query any measured aspect of your SDLC that you can think of. Specific features include:

wallowa is not a "productivity" measurement tool

wallowa is expressly not intended for "productivity" measurement. Instead, it is a tool to help you and everyone on your team understand your SDLC quantitatively. These measurements can help you identify opportunities for continued improvement, often the bottlenecks in your system.

Measuring the "productivity" of individuals in software development is misguided and fundamentally flawed. Instead, measuring the behavior of the system(s) that support your people throughout the SDLC will highlight opportunities for improvement that can lead to higher satisfaction for developers and users, improved quality, and more impactful output. Making those improvements takes investment in the areas identified; data that shows the bottleneck can help make the case for that investment, and because you have a baseline, improvement can be quantified as it happens.

The performance and contribution of each individual are important, of course, but measuring "productivity" doesn't make sense. System-level behaviors tend to dominate a group's achievement (especially as the group gets larger), and performance and contribution are complex emergent properties that involve many factors. Individual performance and contribution are best addressed through individual development of competence and motivation in the context of the team(s) and project(s) that people are involved in, often with the guidance and support of a manager, mentor, or peer. Dan North's The Worst Programmer I Know is a great example, and Kent Beck and Gergely Orosz wrote a great pair of articles (Kent's part 1 and part 2; Gergely's part 1 and part 2) in response to McKinsey publishing their methodology for measuring developer "productivity".

Resources that inspired these views:

  • "Accelerate" by Forsgren, Humble, Kim
  • "Out of the Crisis" by Deming
  • "The Goal" by Goldratt and "The Phoenix Project" by Kim, Behr, Stafford

Advice on careful, intentional use of measurement

Like any tool, measurement can cause more harm than benefit if used inappropriately. Here is some advice on using measurement effectively for your SDLC.

  • Approach measurement in the context of feedback loops (see OODA loop for a useful example). Does the measure provide quick and actionable insight into an area to improve? Try it. If not, don't bother.
  • Measure your feedback loop(s) in order to bring attention to shortening them. Two of the DORA metrics, Deployment Frequency and Lead Time for Changes, are practical and empirically justified measures to use (see the first sketch after this list).
  • Be data-enabled, not data-driven. It takes knowledge, data, and intuition to achieve great outcomes.
  • Align your measurements and focus areas to your overall organizational/business goals so there is a clear line from the overall goals to the small number of measurements you're optimizing for.
  • Talk with developers, designers, and PMs about where they experience friction, in general or in the context of specific metrics. They know best what’s holding them back. In larger organizations, fan out these chats through your managers. Surveys can help at high organizational scale or in low-trust environments (if anonymous), but nothing replaces the insights that come from high-trust dialogue.
  • Synthesize and share your thinking and any results openly and transparently. Include details on which insights you’re taking action on, if any, why you picked those areas to focus on, and how you're taking action.
  • Only use metrics for short-term goals or standards. This avoids stagnant thinking, entrenching the status quo, or the proliferation of gamed incentives that tends to happen with long-term metrics.
  • Metrics can and will be gamed; it is human nature (see Goodhart's law). Pick metrics that, when gamed, will lead to positive behaviors anyway. For example, short Pull/Merge Request merge times drive the positive behaviors of keeping PRs smaller and feedback loops shorter. When people find ways to game that metric, the outcome will probably still be a net positive.
  • Use counterbalancing metrics to mitigate some of the problems with gaming a specific metric. Continuing the example of short Pull/Merge Request merge times, optimizing for that metric alone can lead to cursory reviews or many interruptions. Counterbalance that behavior by finding a way to measure the number of interruptions or the quality of reviews (see the second sketch after this list).
  • Every month or two, depending on your organization’s cadence, review all of the measures you've been keeping an eye on to see where you’d like to focus next, if anywhere.
  • Keep only a small number of metrics in focus or with set goals at any given time. Fewer metrics are better than many. Cull metrics that are not useful at the moment.
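
As a sketch of the feedback-loop measures above, Deployment Frequency might be computed roughly as follows. The deployments table and its columns are hypothetical assumptions, and date functions vary by SQL engine:

    -- Hypothetical sketch: weekly Deployment Frequency per service.
    -- The deployments table and its columns are illustrative assumptions.
    SELECT
        service,
        date_trunc('week', deployed_at) AS week,
        COUNT(*) AS deploy_count
    FROM deployments
    GROUP BY service, date_trunc('week', deployed_at)
    ORDER BY service, week;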
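
And one way to counterbalance the merge-time metric with a review-depth measure, again using illustrative table and column names rather than a real schema:

    -- Hypothetical sketch: merge time counterbalanced by review depth.
    -- github_pulls and github_reviews are illustrative names, not a real schema.
    SELECT
        p.repo,
        AVG(p.merged_at - p.created_at)  AS avg_time_to_merge,
        AVG(COALESCE(r.review_count, 0)) AS avg_reviews_per_pr
    FROM github_pulls AS p
    LEFT JOIN (
        SELECT pull_id, COUNT(*) AS review_count
        FROM github_reviews
        GROUP BY pull_id
    ) AS r ON r.pull_id = p.id
    WHERE p.merged_at IS NOT NULL
    GROUP BY p.repo;

If average reviews per PR trend toward zero while merge times shrink, the "win" on merge time is probably coming from cursory review rather than a healthier feedback loop.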

Please open a documentation Pull Request to add advice or refine/debate these points.

Alternative tools

There are other tools available for similar insights (in lexicographic order). You can also build something similar yourself.

Please open a documentation Pull Request to add other tools to this list.

Thank you to these projects

This project would take far more effort (effort that I probably wouldn't undertake) without these great open projects to build upon. Thank you!

and all of the projects these tools depend on.