This is part of our How We Work series of blog posts. Today, we're talking about the metrics we use to measure the health of our development projects.

The Key Questions

The questions we'll answer here are:

  • How do you scope individual features?
  • How will we prioritize and make trade-offs along the way?
  • What specific metrics will you track on an ongoing basis and what should I look out for?

How Do You Scope Individual Features?

To understand the scope of a feature, we use one primary metric: the story point. A point is benchmarked against an ideal day in the life of an engineer or designer, a day without any interruptions, blockers, or unknowns. But story points aren't estimates of units of time; they're estimates of relative complexity.

Story points can use a few different scales. At Very, we use the Fibonacci sequence, an approximation of the logarithmic "golden spiral," where greater uncertainty exists as requirements get larger. With significant platform builds comes significant uncertainty, which means the more precise we try to be, the less accurate we will be. (To learn more about planning, read about Planning Poker.)
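To make the scale concrete, here's a tiny Python sketch. The scale values and the helper are illustrative only; the upper bound of the scale is a team choice, not something fixed by the sequence.

```python
# A common Fibonacci-style story point scale; the cap at 13 is a team choice.
STORY_POINT_SCALE = [1, 2, 3, 5, 8, 13]

def nearest_point(raw_estimate: float) -> int:
    """Snap a gut-feel estimate to the nearest value on the scale.

    The widening gaps force larger items onto coarser values, mirroring
    the greater uncertainty that comes with bigger features.
    """
    return min(STORY_POINT_SCALE, key=lambda p: abs(p - raw_estimate))

print(nearest_point(2.4))  # 2
print(nearest_point(6.8))  # 8
```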

Every card in Pivotal represents an end-user-facing feature that the platform needs to contain, and each card is assigned a story point value. As the team completes features, a velocity is established. Velocity represents the number of story points completed in each iteration, and it indicates the speed at which the team can create and deliver value to the market.

Our goal is to set and maintain a consistent velocity week over week. By tracking this over the course of four to six iterations, we can measure volatility — the change in velocity from iteration to iteration.

With these metrics in hand, we consider all the features in the Backlog and use historical project performance to calculate delivery time for specific features well into the future. We encourage you to learn more about story points, velocity, and volatility to get a better grasp on why these simple metrics and data points are so valuable in measuring our progress and managing complex development projects.
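As a rough illustration of the arithmetic (not our actual tooling; the function names and sample numbers below are invented for this sketch), velocity, volatility, and a simple delivery forecast can be computed from per-iteration point totals like this:

```python
from statistics import mean, stdev

def velocity(points_per_iteration, window=10):
    """Average story points delivered over the most recent iterations."""
    return mean(points_per_iteration[-window:])

def volatility(points_per_iteration, window=10):
    """Standard deviation of per-iteration points divided by their mean,
    i.e., how much velocity swings from iteration to iteration."""
    recent = points_per_iteration[-window:]
    return stdev(recent) / mean(recent)

def iterations_to_deliver(backlog_points, points_per_iteration):
    """Rough forecast: remaining Backlog points divided by current velocity."""
    return backlog_points / velocity(points_per_iteration)

# Six iterations of completed story points
history = [24, 12, 18, 21, 15, 18]
print(velocity(history))                   # 18.0
print(round(volatility(history), 2))       # 0.24
print(iterations_to_deliver(90, history))  # 5.0
```

A low, stable volatility is what makes the forecast trustworthy: if the team reliably delivers about 18 points per iteration, a 90-point Backlog is roughly five iterations of work.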

How Will We Prioritize and Make Trade-Offs?

As a stakeholder, you will be an important part of the discussion around priorities. There are four variables to consider:

  • Scope
  • Quality
  • Time
  • Budget

You will determine which two of the four variables have the greatest importance. With a clear understanding of your priorities, our team can proactively manage the project day to day and navigate tough decision points when they arise.

What Specific Metrics Will You Track on an Ongoing Basis and What Should I Look Out For?

The following is a detailed list of the metrics used to evaluate the health of the project.

  • Backlog: Every week, we enter the total number of story points in the project's Backlog, i.e., the Pivotal lane that contains all cards that are planned for the current release cycle but haven't entered the workflow. Non-prioritized items, like enhancements, belong in a "wish list" lane until you choose to increase scope by prioritizing them in the Backlog. There should be no cards in the Backlog that are not "good cards" with story points. (The sketch after this list shows how these lane tallies and ratios are computed.)
  • Work in Progress: Work in Progress points are the sum of story points in lanes between the Backlog and Done lanes. This shows how much "value" is currently in the manufacturing, production and delivery process.
  • Done Points in Aggregate: We leave all "done-done" cards in the Done lane until the end of the project. At the end of each iteration, we enter the total number of story points in the Done lane. The value in this box is additive from iteration to iteration (e.g., 24 done in iteration one, 12 done in iteration two = 36 points in "done" lane).
  • Actual Points in Aggregate: We follow a similar process to Done Points in Aggregate, but instead of tallying the estimated story points, we tally the actual story points. We add the actual value to each card once it hits the Done lane.
  • Actual Points Per Iteration: The actual story points delivered each iteration, calculated using the same methodology as Delivered Points.
  • Velocity: The average number of story points delivered over the previous iterations (up to 10).
  • Volatility: The change in velocity from iteration to iteration, calculated as Volatility = stddev(weekly velocity) / mean(weekly velocity).
  • Hours Spent: We tally the sum of all billable hours team members logged to the project over the iteration within Harvest, our online time tracking software.
  • Hours Per Point Ratio: We calculate this ratio as Hours Per Point = Hours Spent / Delivered Points.
  • Defects Completed: We track the number of defect cards completed and delivered to the Done lane each iteration.
  • Estimation Accuracy: We estimate the accuracy of our predictions as (Delivered Points - Actual Points) / Delivered Points.
  • PR Opens: The number of GitHub Pull Requests (PRs) opened during the iteration.
  • Production Deploys: The number of production (not staging) deploys during the iteration.
  • Accrued Expenses: Pulled from Harvest to show the hourly rate for your project multiplied by the billable hours recorded in "Hours Spent" above.
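To ground the lane tallies and ratios above, here's a minimal sketch assuming a simplified card model. The Card class, lane labels, and numbers are invented for illustration; our real boards live in Pivotal.

```python
from dataclasses import dataclass

@dataclass
class Card:
    lane: str                # e.g., "backlog", "in_progress", "done"
    estimated_points: int    # story points assigned when the card is scoped
    actual_points: int = 0   # filled in once the card hits the Done lane

def points_in(cards, lane):
    """Sum of estimated story points sitting in a given lane."""
    return sum(c.estimated_points for c in cards if c.lane == lane)

cards = [
    Card("backlog", 5), Card("backlog", 3),
    Card("in_progress", 8),
    Card("done", 13, actual_points=21), Card("done", 5, actual_points=3),
]

backlog = points_in(cards, "backlog")      # 8  -> Backlog
wip = points_in(cards, "in_progress")      # 8  -> Work in Progress
delivered = points_in(cards, "done")       # 18 -> Done Points in Aggregate
actual = sum(c.actual_points for c in cards if c.lane == "done")  # 24

# Hours Per Point = Hours Spent / Delivered Points
print(round(120 / delivered, 2))                   # 6.67

# Estimation Accuracy = (Delivered Points - Actual Points) / Delivered Points
print(round((delivered - actual) / delivered, 2))  # -0.33 (we under-estimated)
```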