
5 Rules for Successful Success Metrics

Here’s an analytics truism: everyone wants a dashboard (a.k.a. key performance indicators (KPIs), success metrics, or scorecards). Managers want a barometer of performance, a hammer to use on their subordinates, and a straightforward quantification of their business. Below are a few of the guidelines we use when we take on this task:

1. Actionable metrics

Ask yourself: what would I do if this metric got out of line? Do I have levers that can impact it? Measures that track final outcomes, like revenue or total customers, don’t give you much time to react or much guidance about what to do next.

 

2. Less than five

When I first started at AOL, a friend of mine pointed to the dozens of reports flying around the organization and remarked (I paraphrase): "This many ’important’ metrics just indicates that nobody really understands this business." If you struggle to boil your metrics down to a handful, you should spend more time defining success and understanding the factors that drive performance.

Sprint Advertising Campaign


3. Simplicity over comprehensiveness

We don’t agree with Thomas Davenport’s call for more proprietary metrics:

You know you compete on analytics when...You not only are expert at number crunching but also invent proprietary metrics for use in key business processes.

In our experience, you’re better off if you choose metrics that can be understood outside your corner of the world. One common trap we’ve seen is the desire to create a single comprehensive metric, often an index that combines a number of factors into an overall measure of performance. The result: numbers that are meaningless without a lot of context, and deltas that are difficult to interpret.

NFL Passer Rating Formula


4. Presentation matters

Your dashboard should be easy to understand and provide enough data to give your audience context. I’ve seen many dashboards that stubbornly show only the current state of a metric and the change from the previous week. Why so stingy with historical data? At Juice, we always show trending and try to give users a means to "cut" the data - by business line, customer type, month, etc. 
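If it helps to make "cutting" concrete, it’s just the same metric grouped along different dimensions. A toy sketch with hypothetical columns:

```python
import pandas as pd

# Toy data: one row per sale, with the dimensions a user might want to cut by.
sales = pd.DataFrame({
    "month": ["Jan", "Jan", "Feb", "Feb"],
    "business_line": ["Retail", "Online", "Retail", "Online"],
    "customer_type": ["New", "Returning", "New", "Returning"],
    "revenue": [120.0, 80.0, 150.0, 95.0],
})

# Trending: the same metric shown over time...
print(sales.groupby("month")["revenue"].sum())

# ...and the same metric cut by another dimension.
print(sales.groupby("business_line")["revenue"].sum())
```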

Juicebox dashboard


5. Evolve to goals

Metrics without goals can be a waste. Unfortunately, getting people to agree to specific targets can be painful. After all, goals start us down a slippery slope toward clear accountability. Here’s what I’ve found works: start by focusing your energy on getting people to buy in to the success metrics. Get clarity on definitions, show trending, and incorporate the metrics into the organization’s vernacular. Be patient: one day someone will raise their hand in a meeting and ask if there are targets for the metrics. Pretend to be surprised by the cleverness of the suggestion.

SMART goal setting


Franken-measures…or How to Construct a Useful Composite Measure

Franken-measures

Sometimes a simple metric isn’t enough. It can’t fully describe the behavior or performance of a system. That’s when you need a Franken-measure: a made-up metric monster, stitched together as a comprehensive composite to capture a complex concept.

Franken-measures go by many names—indexes, scales, ratings, composite or compound measures—and show up in all sorts of places:

Web analytics has an ongoing discussion about a measure of visitor engagement; the famous Google PageRank measures the “importance” of sites using a complex and mysterious algorithm.

Sports have embraced Franken-measures to evaluate player and team performance, e.g. passer ratings, Rating Percentage Index for college basketball, and judging of Olympic events like gymnastics, ski jumping, and ice dancing.

Economists love indexes, e.g. the Consumer Price Index, Consumer Confidence Index, and Gross Happiness Index.

Marketers use “scores” to simplify their lives, e.g. Q scores measure the familiarity and appeal of popular-culture entities, and credit scores judge your value as a human being.

Why would I want a Franken-measure?

You are probably already up to here with measures, so why would you want another one—much less one that is going to need extra effort and explanation? Here are a few things Franken-measures can offer:

A short-hand way to communicate about a complex concept. For example, a concept like customer loyalty may encompass everything from share-of-wallet to frequency of interactions to average sales amount.

A mechanism to operationalize a complex concept. Systems can take action on a single number more easily than an array of variables.

A definitive weighting of factors. Rather than constantly bickering about the relative importance of various measures, a Franken-measure can lock down the weighting, avoiding individual biases (in exchange for a systematic bias).

A balance of components. By combining multiple measures, variation in one measure doesn’t unduly bias the results.

What does it take to design a useful Franken-measure?

Not all Franken-measures are effective at achieving these benefits. There are at least four elements that contribute to a good design: completeness, concision, measurability, and independence. These factors can be combined into the Franken-measure Effectiveness Index (FEI) using Juice’s proprietary weighting model.

Completeness. Modeling all relevant performance factors to provide a holistic measurement of the concept.

Concision. A calculation that is as simple and straightforward as possible, making it understandable and logical to users.

Measurability. Using direct performance data rather than relying too heavily on proxies or subjective measures. And from a practical perspective, if you can’t reliably gather valid data, the exercise is futile.

Independence. The components of the measure need to be independent so that variation in one component doesn’t directly drive another.
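To make this concrete, here is a minimal sketch of how a composite measure can be assembled: normalize each component onto a common scale, then apply fixed weights. The components, ranges, and weights below are entirely hypothetical illustrations (and decidedly not Juice’s FEI):

```python
# A minimal sketch of a composite "Franken-measure": normalize each component
# to a 0-1 scale, then combine with fixed weights. Component names, expected
# ranges, and weights are hypothetical examples, not a real index.

def normalize(value, lo, hi):
    """Scale a raw value onto 0-1, clamped to its expected range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

# Fixed weights lock down the relative importance of each component.
WEIGHTS = {"share_of_wallet": 0.40, "visit_frequency": 0.35, "avg_sale": 0.25}

# Expected ranges used for normalization (hypothetical).
RANGES = {"share_of_wallet": (0, 1), "visit_frequency": (0, 20), "avg_sale": (0, 500)}

def loyalty_index(raw):
    """Combine raw component values into a single 0-100 'customer loyalty' score."""
    score = sum(w * normalize(raw[name], *RANGES[name]) for name, w in WEIGHTS.items())
    return round(100 * score, 1)

print(loyalty_index({"share_of_wallet": 0.55, "visit_frequency": 8, "avg_sale": 120}))  # 42.0
```

Locking the weighting into the calculation is the point: it trades a roomful of individual biases for one systematic, documented bias.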

What can go wrong?

Finally, here are a few of the pitfalls to avoid when setting out to create your perfect Franken-measure:

Complexity. A complex calculation can confuse and infuriate your audience because it is hard to understand what is driving performance and why the measure is moving. Leigh Steinberg, the famous NFL agent, said of the NFL passer rating: “Other than one attorney in our office, I am unaware of a single human being who has the capacity to figure a quarterback rating.” The formula isn’t quite as impenetrable as that, but it isn’t for the faint of heart:

passer rating
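To give a sense of what he’s complaining about, here is a minimal Python sketch of the formula as it is commonly published (each of the four components is clamped between 0 and 2.375 before averaging):

```python
# NFL passer rating as commonly published: four per-attempt components, each
# clamped to [0, 2.375], averaged, then scaled so the maximum is 158.3.

def clamp(x, lo=0.0, hi=2.375):
    return max(lo, min(hi, x))

def passer_rating(attempts, completions, yards, touchdowns, interceptions):
    a = clamp((completions / attempts - 0.3) * 5)        # completion percentage
    b = clamp((yards / attempts - 3) * 0.25)             # yards per attempt
    c = clamp((touchdowns / attempts) * 20)              # touchdown percentage
    d = clamp(2.375 - (interceptions / attempts) * 25)   # interception percentage
    return round((a + b + c + d) / 6 * 100, 1)

# Example season line: 350 attempts, 230 completions, 2,800 yards, 20 TD, 10 INT
print(passer_rating(350, 230, 2800, 20, 10))  # 97.3
```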

Changing the baseline. There will be inevitable pressure to change the Franken-measure formula, and any change automatically invalidates comparisons against historical performance.

In search of comprehensiveness. A desire to be comprehensive can hamstring the effort. Take Eric T. Peterson’s Engagement Model. He is clearly striving for completeness but at the risk of feasibility, in my opinion.

Eric T. Peterson’s engagement metric

Black box and credibility. For the people impacted by a Franken-measure, it is important to understand what is going on under the covers. And if it is impossible to share the algorithm or approach, the credibility of the creator is all that remains. PageRank succeeds to the extent that people trust that Google has an objective, well-intentioned algorithm. A whiff of agenda or bias would undermine it in the eyes of the audience. Take the National Review’s “Liberal Rankings,” which have managed to label the last two Democratic Presidential nominees as the “Most Liberal Senators.” Coincidences like that can undermine credibility.


Analytics Roundup: Expensive cup of Joe-l

On the Fahrenheit scale, do 0 and 100 have any special meaning?
The story of a mixed-up metric.

At Last, a $20,000 Cup of Coffee - New York Times
Monstrous $20k coffee brewing system for fanatics, err, I mean, purists.

Five whys - Joel on Software
Incredible blog post on system uptime, SLAs, the ridiculousness of "Six 9’s", black swans, and how superbly Fog Creek Software handles customer service issues.

Browser History Timeline
Chronicle of the lives of six popular Web browsers.

TV Ratings and Online Audiences… Or, Where to Find Skeet Ulrich’s Bio

The TV ratings system is broken. Everyone knows it, but nobody wants to admit it. Nielsen ratings struggle to accurately measure audience quantity (limited tracking of DVR usage and online viewers) and quality (are viewers engaged? are they skipping the ads?). However, admitting so would undermine the delicate balance TV networks share with their advertisers.

I caught an interesting segment on KCRW’s "The Business" podcast about TV series that find themselves on the "bubble," i.e. at risk of getting canceled. The producer of CBS’s Jericho, "a post-apocalyptic drama starring Skeet Ulrich" (shouldn’t that description alone put it on the chopping block?), explained how they received a temporary stay of execution when their small but loyal audience protested network plans to cancel the show. The interview raised questions about the validity of Nielsen ratings and how a fervent online audience can bring additional perspective to the performance of a show.

All this talk of measurement gave me an itch to look at some real data. I tracked down the Nielsen audience size (Subscription required) for TV series over the 2006-2007 TV season. Then I pulled from comScore (a Juice client and leading source for data about Internet traffic and usage behaviors) the unique visitors and time spent on websites of TV shows over the same September to May time period.

I had a few questions I was curious about:

  1. Which shows have disproportionately larger internet audiences—an indicator of a loyal and rabid fan base? Are there other shows like Jericho that struggle to build a large TV audience, but have a strong online following?
  2. Which TV show sites have the most engaged audiences?
  3. Which TV networks have been most successful at building online traffic to their sites? Which types of shows spawn online audiences?

The table below shows the top 20 TV series by ratio of monthly unique website visitors to average TV viewership. This metric suggests an ability to get viewers to look for more content, whether it is additional video, information about the actors, or discussion boards. If Jericho’s 9.5 million TV viewers (tied for 48th overall) represents the proverbial bubble, there are eight other shows with bubble-level ratings that can also claim strong online support (highlighted in this list).

Ratings Table 1
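For the mechanically inclined, the ratio itself is trivial once the two sources are joined. Here is a minimal pandas sketch with made-up placeholder numbers (illustrative only, not the actual Nielsen or comScore figures):

```python
import pandas as pd

# Illustrative sketch: placeholder numbers, not real Nielsen or comScore data.
# Assumes the two sources have already been merged by series name.
shows = pd.DataFrame({
    "series": ["Show A", "Show B", "Show C"],
    "avg_tv_viewers": [9_500_000, 14_200_000, 6_800_000],        # Nielsen season average
    "monthly_unique_visitors": [1_100_000, 950_000, 1_300_000],  # comScore monthly uniques
})

# The Table 1 metric: online audience relative to TV audience, a rough proxy
# for how motivated viewers are to seek out extra content about a show.
shows["visitors_per_viewer"] = (
    shows["monthly_unique_visitors"] / shows["avg_tv_viewers"]
)

# Rank series by the ratio, largest first (the table shows the top 20).
print(shows.sort_values("visitors_per_viewer", ascending=False).head(20))
```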

I also wanted to get a sense of how engaged the online audience was. Were people simply stopping by the website to check the TV schedule, or were they digging deep for more content? One measure that gets at this question is minutes per unique visitor. The top 20 websites are listed below. Interestingly, 12 of these sites are also found in the previous table. Jericho is one of four of the bad-Nielsen-ratings/strong-online-audience group that overlap with the table above. (NBC, if you are grousing about ratings for The Office, hopefully these numbers will make you feel a little better.)

Ratings Table 2

The final table addresses my third question about the TV networks and types of shows that are best at building an online audience. ABC has done more than twice as well as CBS in getting viewers online, which may be a reflection of the traditionally older CBS audience. Note: I pulled the top-end outliers (American Idol, So You Think You Can Dance, and Deal or No Deal) from the network comparison.

The second half of the table brings those TV series back into the mix in the reality/contest category, and you can see the impact. I was surprised at the dearth of sitcoms on this list. It may be that a website simply doesn’t make sense for a sitcom.

Ratings Table 3

With all the money spent on TV advertising, I can only hope the networks go beyond the top-line Nielsen ratings to try to get a complete picture of their audiences.

Choosing the Right Metric

Misaligned goals, distorted behaviors, and a misguided sense of success... no, I’m not referring to college graduates. I’m talking about the problems caused by using the wrong metrics in your organization. You’ve probably seen examples like tracking average customer profitability while losing sight of the variance in profitability, or evaluating customer service reps on calls handled without regard for the quality of the experience. I’d like to offer up a quick-bake recipe for choosing the right metric.

Step 1: Set the context

Metrics generally serve one of two purposes. Start by understanding what you are trying to achieve.

1. Identifying problems. Defining the right metrics in this case requires you to do a little detective work: What is the data residue of a problem? What evidence can be found and how exactly does it show up?

2. Measuring performance. The right success metrics need to focus on measures that can be controlled and where improvement in the number is unambiguously a good thing.

Step 2: Balance the four dimensions of a good metric

The four dimensions of a good metric: common interpretation, actionable, accessible and credible data, transparent and simple calculation

Lots of metrics fail in at least one of these dimensions. A few examples:

  • Common interpretation: We had a client who made a distinction between "leads" and "prospects" in their marketing organization. Prospects had theoretically expressed more interest in the service through their actions. Unfortunately the line between leads and prospects was always hard to decipher and the definitions were hard to communicate. On a related note, we got a kick out of Tom Davenport’s (author of "Competing on Analytics") assertion that a company competing on analytics needs to "invent proprietary metrics for use in key business processes." There is nothing inherently wrong with "invented proprietary metrics" but it sounds like something that is designed to confuse anyone outside of the inner sanctum.

  • Actionable: Metrics are frequently too broad for the impact that a particular group can have. Customer satisfaction is a popular dashboard staple, but it is hard for most managers to see how they can have a significant impact on the number.

  • Accessible, credible data: Sometimes the most valuable and obvious metrics are frustratingly hard to track. In the web analytics world, unique visitors is important to know, but user deletion of cookies has thrown a wrench into the works.

  • Transparent, simple calculation: Top NFL agent Leigh Steinberg says of the famous quarterback rating metric: "Other than one attorney in our office, I am unaware of a single human being who has the capacity to figure a quarterback rating." I don’t know what kind of art majors he hires, but all they need to do is use the simplified formula (with the percentages expressed as decimals): (83.33 * Comp %) + (4.16667 * Yds per att) + (333.333 * TD pct) - (416.667 * INT pct) + 25/12. (A quick sketch follows this list.)
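That simplified version really is a one-liner (percentages as decimals, and without the official formula’s clamping of each component):

```python
# Simplified, unclamped passer rating; percentages expressed as decimals.
def simple_passer_rating(comp_pct, yds_per_att, td_pct, int_pct):
    return (83.333 * comp_pct + 4.16667 * yds_per_att
            + 333.333 * td_pct - 416.667 * int_pct + 25 / 12)

# e.g. 65% completions, 7.5 yards per attempt, 5% TDs, 2% INTs
print(round(simple_passer_rating(0.65, 7.5, 0.05, 0.02), 1))  # 95.8
```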

(Want a little validation of this framework? Avinash, a respected web analytics guru, just published a post with "Four Attributes of Great Metrics" and landed on a strikingly similar set of four: 1) instantly useful (i.e. actionable); 2) relevant (i.e. common interpretation); 3) timely (i.e. accessible); 4) uncomplex (i.e. transparent and simple).)

Step 3: Avoid the metrics bugaboos

Finally, here are a few traps that I’ve seen in deciding on appropriate metrics:

  • Trending and distributions: Don’t always try to compress a metric into a single number. Often it is more revealing to show the metric across time or as a distribution to uncover variance.

  • Edge cases: There will always be edge cases where a metric may not mean what you think it means. These situations are worth understanding, but you shouldn’t allow the perfect to be the enemy of the good.

  • Setting goals: Could you hold someone accountable for this metric without them throwing out a half-dozen reasons why it doesn’t make sense? It’s a decent test of the value of the metric.

  • Self-serving: Be careful that you don’t select metrics simply because you know they’ll make you look good.

Esurance–Competing on Analytics

Recently I caught up with my college friend John Swigart, who now runs the marketing organization at Esurance. When the conversation inevitably drifted to business, I asked about how Esurance was using data to make decisions. I expected to hear the same old story—big failed data warehouse projects, piles of underutilized reports, frustration about not being able to understand how the business was performing. I was way off.

It seems that John works for the rare company that has managed to live the analytics dream. Esurance competes on analytics—not in the idealistic model highlighted by Tom Davenport, whose "full-bore" analytics competitors are defined by:

"Top management had announced that analytics was key to their strategies; they had multiple initiatives under way involving complex data and statistical analysis, and they managed analytical activity at the enterprise (not departmental) level...

...Employees hired for their expertise with numbers or trained to recognize their importance are armed with the best evidence and the best quantitative tools. As a result, they make the best decisions: big and small, every day, over and over and over."

That’s window-dressing. John didn’t make any grandiose pronouncements of Esurance’s analytical achievement or talk of the best tools and most complicated models. He simply stated that data-based decision-making has been a part of the culture from the very beginning and he considers it essential to running a smart business. A few points that he emphasized:

  • Clear linkages between metrics. There needs to be a well-understood hierarchy that has important financial measures at the top (e.g. revenue) and connects them to the underlying drivers.
  • Frequent reviews of reporting. Senior managers get together on a regular basis to look through the core reporting. These meetings are detailed, but somehow useful enough that people stay committed to the process.
  • Learning takes time. John recognized that Esurance could not be as evolved in its understanding of the business without a commitment to this approach from the very beginning.

After getting off the phone with John, I asked him to respond to a few questions so our readers could get a taste of their approach:

How has Esurance managed to develop a culture that embraces decisions using data?

We don’t make decisions based on "I think we should do this." We look at data to find out what we know, then decide what to do based on the facts. We identify expected outcomes up front and determine how we are going to measure the change before we implement something. Also, a data-driven culture starts at the top of our organization.

What processes do you have in place to get the right data in front of the right people?

We have a centralized data warehouse and reporting structure. Everyone gets their data from the same place and the metrics are universal. It took 3-4 years to get this right, and we built it from scratch. It takes a substantial commitment to pull off.

What is the role of the analyst in your organization? What tools do they use?

We have technical analysts and DBAs in our business intelligence group who deal with the more technical issues. Within Marketing, we have analysts on the individual marketing teams who work closely with the business people. They use some basic tools, nothing terribly fancy.

From an analysis perspective, what do you do when you are testing new marketing opportunities?

All tests are done in as controlled an environment as possible. With so many moving parts, this can be difficult, but it is important.

How has analytics contributed to the success of Esurance?

Truly one of our competitive advantages. We would not be where we are today without great data and a dedication to using it through all levels of the organization.

Baby Dashboard

The folks at Trixie Tracker very nearly read my mind. They created an online service that helps you keep track of the daily patterns of your infant. Users enter information about nap times, feedings, and diaper activity—then have access to a variety of informative charts and graphs. "Learn more about your baby’s needs and behavior... get more sleep," they promise. Here’s an example of their "sleep telemetry" graphic:

The idea is good...but it doesn’t cover our most pressing need as parents. Like a business, we need real-time information that will help us make game-time decisions. Is he ready for a nap? What’s going on in that diaper? Does he need to eat now? These are the answers we need to ensure a contented baby.

With these concerns in mind, I took baby analytics a step further: I developed a real-time baby dashboard with heads-up display. Using the Trixie Tracker log data as a starting point, I added a durable in-diaper sensor to capture the, err, "raw data" necessary for timely action. The final step was to attach a wearable DLP, high-lumen (indoor, outdoor usage) projection lens to baby’s outfit. Now I’m always one step ahead of an unhappy kid.

You are not alone — common enterprise data problems

I like the bumper sticker that goes: "Never forget that you are unique, like everyone else." Most of our clients believe they suffer from the ugliest pile of unmanageable numbers possible. Guess again; you’re probably no worse off than the next guy.

In an attempt to ease your fears of being alone with your data troubles, we’ve put together a list of common data-related issues we see in our client work:

  1. No unique identifier. Faced with numerous enterprise systems and any number of customers and employees entering data, many organizations are unable to maintain unique customer identifiers. With unique identifiers, you can match customers across their interactions with the organization; without them, it becomes very hard to get a full view of the customer experience.
  2. Blocked by the reporting front-end. Many enterprise systems (CRM, ERP, etc.) do you the "favor" of bolting on a reporting engine. Set up correctly, these tools can be modestly satisfying in their ability to spit out metrics and support basic slicing-and-dicing analysis. However, when you start to ask complex questions or want to dig into the raw data, you find that your reporting engine is more of a door than a window.
  3. Too many reports. A big stack of dashboards, key performance indicators, and success metrics is piling up on your desk—and yet you don’t feel like you understand the state of the business. As we pointed out here, too many metrics can mean you don’t fully understand the business drivers, but want to create that facade.
  4. Inconsistent data definitions. What does "customers" mean? For marketing, it is the number of people brought in the door. Operations only counts the number of active users. This leads to any number of  unproductive conversations trying to explain discrepancies in the numbers.
  5. Messy, unstructured data. Data rarely arrives in an easy-to-use form. Sometimes it is spread across tables in Excel (or worse, PDFs). Dimensions and measures are poorly labeled and undefined.
  6. Access. Getting to the right data can mean a well-formed and informed request to the IT group. When IT and business folks struggle to communicate, data stays locked away. Our friend Dratz writes a great blog that offers nice perspective for business folk like me.

  7. Data shmata. Sometimes all the good analysis falls on deaf ears. "Some people in an enterprise apparently aren’t really looking for a single version of the truth. It can be easier for them to work with common assumptions and ’dance with numbers’ during management meetings", says Bill Hostmann of Gartner.
  8. The data warehouse is late to the party. This has to be our favorite. While working for AOL, I watched on two separate occasions as expensive data warehouses were delivered just as the business changed direction.

Do any of these sound familiar? We’d love to hear your stories of data trouble. Or leave us a comment if you’re interested in our strategies for tackling these problems.

A few other resources on data and business intelligence troubles:

  • The Open Source Analytics blog discusses data marts, data warehouses and the related philosophical debates.
  • Gartner’s take on "BI’s seven fatal flaws"
  • SearchDataManagement.com has a host of articles about enterprise and customer data integration

November Learning

Yesterday I had an opportunity to sit in on a presentation by Alan November of November Learning. Alan’s expertise in learning communities and technology in education offered a few tasty nuggets:

He was emphatic that our education system needs to create a “global work ethic” if our country wants to be competitive in a global economy. This special new work ethic requires three key skills (all of which I’d take as an employer):

  • The ability to access and understand massive amounts of information
  • Global communication skills
  • Self-direction

Alan’s other bold point: if “No Child Left Behind” continues as the primary focus of the US educational system, the consequences will be catastrophic for our economy. He cited studies showing the performance of top-level students declining even as the bottom group of students improves its test scores. Metrics matter, and the focus on improving proficiency (as compared against ourselves) neglects the fact that education systems around the world are leaving us behind.

Goals for your success metrics

Have you ever seen that movie scene where the wizened old cowboy tames the impossibly wild stallion? The wise frontiersman takes his time, careful not to frighten the horse, and gradually shows it what he expects. He uses a soft voice and doesn’t ask too much at first. Once he has established trust, it isn’t long before he hops on the stallion and lets it do what it wants to do naturally: gallop into the sunset.

There is a metrics lesson in this scene.

If you have successfully defined and tracked a few critical metrics for your business, you’ve made admirable progress towards life as a metrics-driven organization. However, without specific goals for these metrics, you can’t create the optimum focus and accountability. You’ve got your peanut butter, but no chocolate…

Don’t be daunted by the task of getting executives to agree on target levels of performance. It will come naturally if you approach the problem with patience.

First, invest the time and energy to socialize your success metrics. The challenge is to offer a clear definition of what is being measured and demonstrate its importance. Do not initiate discussions about goals at this point. You will probably be dealing with a jittery cast of characters—no sudden movements or loud noises.

Second, introduce the success metrics into periodic meetings and other venues. You want these metrics to become part of the organizational vernacular. Again, have patience in getting people accustomed to the metrics and their implications for the business.

Now here is the wondrous part: having built a foundation of awareness and understanding, you can reasonably expect people to start coming to you wondering why there aren’t goals associated with the success metrics. In all likelihood, they’ll ask this question as if you hadn’t thought of the concept. Resist temptation and let it be their idea.

Soon you will be conducting meetings to define reasonable yet aggressive goals for your metrics.