Choosing the Right Proposal Measure

Folks in the research administration community are talking more and more about data management and reporting at their respective universities. When we talk about data, we also need to talk about metrics. Tracey Robertson, the Director of Sponsored Research Accounting at Princeton University, tells us that choosing the correct metric can:

  1. Change behavior
  2. Drive performance
  3. Support investments

Failing to choose the right metric to present research activity data will not only confuse people but also lead to missed opportunities and leave important questions from researchers and campus leaders unanswered.

A couple of years ago we wrote an article about using the right metric for your data presentations, and people really loved it. It’s summarized by this diagram:

However, we wanted to make it “real” for our research community, so we decided to share more insight into how we used these concepts to design our office of research reporting application. Here’s what we came up with.

Actionable

To make a metric actionable, start by making sure it accurately addresses a real question or need. If your goal is to create a report on how successful a college or department is at getting funding for its proposals, your report would be lacking if it only included the number of awards received. Why? Because that metric alone does not adequately capture proposal success. Pairing the number of awards granted with the number of proposals submitted (in other words, a success rate) captures the performance metric and accurately addresses the need. Here are the metrics we selected:
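To make that pairing concrete, here’s a minimal sketch in Python; the numbers and column names are hypothetical, not pulled from our application:

```python
import pandas as pd

# Hypothetical monthly activity data; a real report would pull these
# figures from the sponsored-programs system of record.
activity = pd.DataFrame({
    "month": ["2023-01", "2023-02", "2023-03"],
    "proposals_submitted": [412, 388, 455],
    "awards_received": [97, 104, 97],
})

# Awards alone say little about performance; dividing by proposals
# submitted yields a success rate that answers the actual question.
activity["success_rate"] = (
    activity["awards_received"] / activity["proposals_submitted"]
).round(3)

print(activity)
```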

To enhance the actionability of these metrics, we also added the change from the previous month for each metric. In this example, the number of proposals was down 157 from the prior month. This gives users insight into context and flags hotspots for follow-up action.
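Here’s a minimal, self-contained sketch of that month-over-month delta (again with made-up numbers, chosen only to reproduce the 157-proposal drop):

```python
import pandas as pd

# Hypothetical proposal counts for two consecutive months.
proposals = pd.Series(
    [612, 455],
    index=pd.Index(["2023-02", "2023-03"], name="month"),
)

# The delta from the prior month is what turns a static count
# into a hotspot worth following up on.
change = proposals.diff().iloc[-1]
print(f"Proposals this month: {proposals.iloc[-1]} ({change:+.0f})")
```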

Additionally, when a user selects a metric, other information on the page (such as the trend over time or the breakout by sponsor) is updated to reflect more detail on that selection. Interesting detail means action.
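Under the hood, this kind of cross-filtering amounts to re-aggregating the detail data for whichever metric is selected. Here’s a rough sketch of the idea (an illustration with invented records and sponsors, not how Juicebox actually implements it):

```python
import pandas as pd

# Hypothetical proposal-level records with made-up sponsors.
records = pd.DataFrame({
    "month": ["2023-01", "2023-01", "2023-02", "2023-02", "2023-03"],
    "sponsor": ["NSF", "NIH", "NSF", "DOE", "NIH"],
    "proposal_dollars": [250_000, 1_100_000, 480_000, 320_000, 900_000],
})

def detail_views(metric: str):
    """Recompute the supporting views when a user selects a metric."""
    trend = records.groupby("month")[metric].sum()         # trend over time
    by_sponsor = records.groupby("sponsor")[metric].sum()  # sponsor breakout
    return trend, by_sponsor

trend, by_sponsor = detail_views("proposal_dollars")
print(trend)
print(by_sponsor)
```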

Common Interpretation

Your metric should be one that everyone can easily understand without much thought. Keep in mind that some (if not most) of the people to whom you are reporting your school’s funding data are not analytical experts. Think layman's terms here.

In the Research app, we made sure the labels of the metrics were simple, common, and easily understandable. The labels “Proposals” and “Proposal Dollars” say exactly what they measure and are part of the everyday lexicon of our target users.

Additionally, we wanted to make sure there was a clear delineation between proposal and award metrics, so we separated the key metrics into two rows, one for proposals and one for awards, using the Gestalt rules of association to connect the related metrics.

Accessible, Credible Data

A good metric is one that is easily accessible and credible. Many schools struggle to track down and organize the data for their grant funding activity reporting.

The platform that we used to create our research application (i.e., Juicebox™) is based on the premise of accessibility. But the credibility factor is tied to the data. Make sure that the data you use to calculate your metrics is well understood and comes from a respected source. A good litmus test is to ask your users: “If you wanted to know the number of awards, where would you look?” Your data source carries more weight when people confirm it as one they already trust for their work.

Transparent, Simple Calculation

When an administrator, dean, or professor looks at the reported metrics, they should be able to recognize how your team reached that value and what it represents. If they cannot decipher how it was calculated, you lose credibility and gain confusion.

The metrics we selected for our research application are what we call “simple metrics” in that they are not complex assemblies of multiple metrics (otherwise known as composites, indexes, or franken-measures). To keep the selected metrics as simple as possible, we narrowed them down to core concepts people understand: the number of proposals and awards, and the dollars associated with proposals, awards, and expenses. These are concepts almost anyone in the research world readily understands.
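To illustrate what “simple” means in practice: each of these metrics is a single count or sum over one field, along the lines of the hypothetical sketch below, so a reader can trace exactly how each value was reached.

```python
import pandas as pd

# Hypothetical records; field names and values are made up.
records = pd.DataFrame({
    "type": ["proposal", "proposal", "award", "award"],
    "dollars": [500_000, 750_000, 400_000, 300_000],
})

# Each simple metric is one count or one sum; there are no weights,
# indexes, or franken-measures to decipher.
metrics = {
    "Proposals": int((records["type"] == "proposal").sum()),
    "Proposal Dollars": records.loc[records["type"] == "proposal", "dollars"].sum(),
    "Awards": int((records["type"] == "award").sum()),
    "Award Dollars": records.loc[records["type"] == "award", "dollars"].sum(),
}

for name, value in metrics.items():
    print(f"{name}: {value:,}")
```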

Want to see more about useful proposal metrics?

We’ve taken the principles illustrated in this article, and more, and applied them to our own product, which delivers accessible, actionable data insights to anyone who uses it. Check out the demo video.