Articulation of success criteria is pervasive in every agile approach, at every level of value creation. In many ways, the core behavior of agility is defining what success looks like for an activity, acting to make that true, and validating that it remains true and continues meeting the customer's needs. This is the essence of PDCA. It is a core behavior of TDD and BDD. It's embedded in how we define stories, features, epics, and strategies. I don't believe a company can say it's agile if it lacks this capability.
And yet, I frequently see confusion and lazy language around the different components of success as we measure them. The confusion arises at a variety of levels, yet the core disconnect follows a very common pattern. This post describes the core questions people should be answering, and then briefly outlines their application in a number of different situations.
Now: What is happening now? This is the realm of operational data monitoring, business performance metrics, development team metrics, and any other metric or measure that we want to build decisions or goals around. Before you can address the "new", you have to be able to articulate the "now" with some degree of confidence. Usually, this means you can easily monitor or collect the current state of these measures, whether from your production systems, your general ledger, your CRM, or from easy data gathering via surveys and/or observation.
Never: What should not change? Drawn from the "now", these metrics are the foundation of stability that your business relies on for safety and comfort. Examples include "the development organization consistently delivers ~10 features per PI", "the conversion rate in funnel stage 3 is around 93%", or "our revenue was the same or higher than last quarter". These form the basis of every variety of "don't let this slip" metric, and are often framed as target ranges like "Cost per active customer should stay between $0.15 and $0.17 per month" or "Service unavailability should be below 2 minutes per month". In practice, goals around these metrics should be clear extensions of the current values rather than "ambitious goals". For example, don't set a target of "2 minutes per month" if you're down 600 minutes per month, or set a revenue growth goal of 30% when you've only been growing at 10%. The general goal is to "never" leave the target zone. If you want ambition-focused goals, see below!
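To make the idea concrete, a "never" metric amounts to a simple guard-band check against monitored values. This is an illustrative sketch, not anything from a specific tool; the metric name, months, and thresholds are hypothetical:

```python
# Hypothetical "never" metric: cost per active customer should stay
# between $0.15 and $0.17 per month (target band from the example above).

def within_band(value: float, low: float, high: float) -> bool:
    """Return True while the metric stays inside its 'never leave' zone."""
    return low <= value <= high

# Invented monthly readings for illustration.
cost_per_customer = {"Jan": 0.16, "Feb": 0.18, "Mar": 0.15}

# Months that left the target zone and should trigger attention.
alerts = [month for month, cost in cost_per_customer.items()
          if not within_band(cost, 0.15, 0.17)]
```

Here `alerts` ends up holding only February, the one month outside the band; the point is that a "never" metric is evaluated continuously against a fixed range, not toward a finish line.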
New: What should change (and how)? This is where the organization gets focused on the changes that matter. These metrics are also drawn from the “now” metrics, but tend to focus on leading and coincident indicators rather than lagging indicators, because they are being used for active steering of change and need to provide a fast feedback cycle. They also need to have a target timeline in which the change should happen, and are often framed as “Move from X to Y by Z” like “Get from 600 minutes of outage to 60 minutes of outage per month by the end of Q2”.
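Because "new" metrics carry a timeline, the "Move from X to Y by Z" framing can be checked mid-flight against a straight-line path from start to target. The sketch below is a hypothetical on-track check using the outage example; the linear pacing assumption and all numbers are illustrative:

```python
# Hypothetical steering check for a "new" metric framed as
# "Move from X to Y by Z", e.g. "600 minutes of outage to 60 by end of Q2".

def expected_value(start: float, target: float,
                   elapsed: float, total: float) -> float:
    """Straight-line expected value at this point in the timeline."""
    fraction = min(elapsed / total, 1.0)
    return start + (target - start) * fraction

def on_track(current: float, start: float, target: float,
             elapsed: float, total: float) -> bool:
    """True if the current reading is at or ahead of the linear path.

    Works whether the goal is to decrease the metric (outage minutes)
    or increase it (conversion rate).
    """
    expected = expected_value(start, target, elapsed, total)
    if target < start:
        return current <= expected
    return current >= expected

# At week 6 of a 13-week quarter, the linear path from 600 to 60 minutes
# expects roughly 351 minutes of outage.
```

With those numbers, a reading of 300 minutes at week 6 counts as on track and 400 does not, which is exactly the fast feedback the "new" metric is meant to provide.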
Example: OKR vs. KPI. We use both "Objectives and Key Results" (OKRs) and "Key Performance Indicators" (KPIs) extensively for articulating business strategy and goals, because they serve as great aligning tools for broad organizations. We often use them as part of portfolio, ART, and business unit vision statements. We use them for setting performance goals when that's necessary. OKRs are "new" metrics. They define what will be different if we are successful, and tell us if our strategies and tactics are working or not. They also need to be paired with KPIs, which are the "never" metrics telling us that we're not putting our business at risk by breaking something important in the pursuit of our strategy. Providing either set without the other introduces risk and inspires poor choices.
Example: Acceptance Criteria vs. Definition of Done. Agile teams rely on a consistent definition of done (DoD) to define and respect the trust relationship with other teams and their business stakeholders. The DoD says "We will never release a story/feature without meeting these standards", and is taken very seriously as a commitment. Its standards are consistent from feature to feature and from team to team. As a "never" metric, they provide the backplane of trust that allows the organization to move fast and confidently. The acceptance criteria, on the other hand, are a "new" metric that clearly articulates what will be different upon completion of a single, specific feature or story, and thus helps bring focus to the work the team does to finish that feature.
Example: Feature vs. NFR. This pair isn't explicitly metrics, but it does represent a pairing of intent that is very consistent with the "new" vs. "never" construct, and extends the previous example into the full backlog. A feature (or story) represents the "new": it describes change being made to the solution. A non-functional requirement (NFR), by comparison, represents something that should never be violated, and that must remain true regardless of what features are introduced. This example is useful because it highlights what happens when an NFR needs to change, which I've elaborated below.
Changing an NFR
Changing an NFR requires strong clarity of intent and careful attention to the sequence of operations if you want to maintain trust in the development organization's quality commitment. The sequence should go as follows:
- Determine the future-state intent for the NFR, including the business reason. “We want to reduce the maximum load time of this transaction flow’s pages from 2s to 1.5s because we are seeing lower funnel conversion rate when our site is lagging compared to when it’s operating at better speeds.”
- Determine the work/experiments required to achieve that goal. “We believe that introducing an elastic scaling framework on the core application flow servers and pairing it with an improved caching system will achieve the desired NFR results.”
- Write the feature. Note AC3* and AC4 capturing the future-state NFR goal. “Improve transaction flow load times. AC1: Elastic scaling framework being used. AC2: Dynamic predictive query caching in use. AC3: Site meets goal of 1.5s load time at P95 reliability under all load conditions. AC4: Performance test library updated to validate against 1.5s load time. BBH: We will see conversion rates increase by 2% in core transaction steps when site is operating at load”
- Implement it, achieving the goals stated in the feature description.
- Update the NFR. Publish and communicate the new, lower goal, AFTER it has been fully implemented. NEVER change an NFR before it's being met; otherwise every other team's features are suddenly in violation of the NFR, and no team can be successful.
* AC – Acceptance Criteria.
* BBH – Business Benefit Hypothesis.
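To ground AC3 and AC4 above, the updated performance test is essentially a P95 assertion against the future-state goal. This is a minimal sketch assuming timings are already collected as seconds; the sample data, helper name, and nearest-rank percentile choice are all illustrative, not a real test harness:

```python
# Hypothetical check behind AC4: validate P95 page load time against
# the future-state NFR of 1.5s. Sample timings are invented.
import math

def p95(samples: list[float]) -> float:
    """95th percentile via the nearest-rank method (1-based rank)."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

# Twenty invented load-time measurements, in seconds.
load_times = [1.1] * 18 + [1.4, 1.6]

meets_nfr = p95(load_times) <= 1.5
```

Note the ordering the post insists on: this assertion becomes the published NFR only after it passes consistently; until then it lives inside the feature's acceptance criteria, where only the owning team is accountable for it.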