Data Storytelling Workshops, Part 1

The Juice team has been traveling from conference to conference showcasing our method of quickly and easily creating data stories from a data set. We got the opportunity to use Nashville’s Open Data project to source the data for the workshops. Attendees were divided into several groups and given the option to choose among several personas for whom they would build their data story. By focusing on a particular type of user’s goals, attendees were easily able to create questions that the data should answer. These questions or “goals” for their data story were written out on sticky notes by each group member and shared with the entire group.


Once the goals for their stories were distilled into just three questions, attendees chose the metrics and dimensions that would best answer those questions and achieve the goals of their personas. Making these decisions about an effective data story typically takes hours, if not days. We were able to accomplish it in less than an hour and saw attendees leave with a full understanding of how a great data story is built for a particular audience or user.

Some workshop moments that were captured can be found in the 30-second video below:

Stay tuned for part two, in which we will showcase a data story that was created by one of our attendees. You won’t want to miss what he created in just under an hour with Juice’s guidance!

If you can't wait and want to see how you can start making your own data stories with Juice, send us a message using the button below.

Why We Prototype

At Juice, we’ve spent the last year relentlessly pushing to make it easier to build world-class interactive analytical applications, or "data stories.” This was an important change for us. In the past, like a design agency, we would create carefully-crafted user interface mock-ups with detailed descriptions of functionality and interactions. Anything we couldn’t show in a static picture we would describe in words. Now we can do something massively more effective: we can build a live, interactive prototype in the time it takes us to draw all those pictures.

Here are the most important reasons we felt it was necessary to be able to prototype with ease:

1. Non-designers don't speak the language of mock-ups

With a decade of experience designing analytical interfaces, we became adept at making the mental leap between a static mock-up and the live application it would become. Static mock-ups imply — but don’t show — interaction points. They suggest what the data may look like, but don’t try to accurately show the data. They highlight dynamic content, but can’t show it change.

Take the following visualization mock-up as an example. Can you tell:

  • How the orange button will change as you interact with the visualization?
  • What happens when you roll over the points?
  • Why the title indicates “4 categories”?

The image implies a lot of functionality to an experienced information design audience. That doesn’t help everyone else.

2. Uncover data difficulties early

Your data isn’t always what you think it is. It certainly isn’t as clean or complete as you might hope. By prototyping with real data, you discover some of the issues in your data that run counter to your assumptions. You may also find trends or patterns that reshape what information you want to show.

Recently we built an application for a client that delivers an assessment checklist. We expected that we’d be able to look at the average scores to see how well students were performing. But in reality, students didn’t need to submit their scores until they were complete (100%). As a result, all the scores were perfect. And perfectly lacking in insight.

Here are just a few of the common things we run into when we prototype with real data:

  • Missing values where data should be
  • Multiple date fields, sometimes with confusing meanings
  • Averages that need to be weighted
  • Unexpected behaviors captured in the data that lead to surprising results
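These checks don't require a full application; a few lines of exploratory code can surface them before design work begins. Here is a minimal sketch in pandas covering two of the items above, a missing-value check and an average that needs weighting (the schools, scores, and column names are made-up illustrations, not client data):

```python
import pandas as pd
import numpy as np

# Hypothetical assessment data: scores with a gap and uneven class sizes.
df = pd.DataFrame({
    "school": ["A", "B", "C", "D"],
    "avg_score": [88.0, None, 72.0, 95.0],   # a missing value where data should be
    "students": [120, 45, 300, 15],          # the weight behind each average
})

# Missing values where data should be
missing = df["avg_score"].isna().sum()
print(f"{missing} school(s) missing a score")

# Averages that need to be weighted: a naive mean treats a
# 15-student school the same as a 300-student one.
complete = df.dropna(subset=["avg_score"])
naive = complete["avg_score"].mean()
weighted = np.average(complete["avg_score"], weights=complete["students"])
print(f"naive mean: {naive:.1f}, weighted mean: {weighted:.1f}")
```

In this toy data the naive mean (85.0) overstates performance relative to the student-weighted mean (about 77.2), because the largest school scores lowest. That is exactly the kind of surprise a prototype with real data reveals.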

3. Validate hypotheses about the story you want to tell

Designs are based on a lot of assumptions about users. How will users interact with the data? What data is important to them? What views will be most impactful?

Prototypes give us the opportunity to test these hypotheses. We use a user experience tool called FullStory to see in detail how users interact with their data story. We can see where they get confused and where they focus their attention. We also ask pointed questions to ensure our assumptions are playing out as we expected.


4. Gather user feedback to sand-off the rough edges

User feedback isn’t only helpful for the big things. It can help you understand whether you’re on track with the small, but important, details. A great data application needs to communicate the meaning of the content, including everything from the metrics to the labels to the descriptive notes. A few things to look for:

  • Do users understand the meaning of the metrics accurately?
  • Do the descriptions and labels convey the right meaning?
  • Does the styling — color, contrast — work for users or is it distracting?

5. Build buy-in and a bandwagon

Making the transition from standard reporting to an interactive data application can be a big step for some organizations. For example, it can be scary to imagine giving your customers the ability to explore data by themselves. What will they find?

Taking this big step sometimes requires baby steps. Prototyping is an easy baby step. If you can create a real, working version of a solution to put in front of senior leadership, it will go a long way towards helping them get on board. Now people don’t need to envision what is possible, they can see it.


Interested in building a prototype with your data? Get started by sending us a message!

The Art of Data Storytelling: Structure

This is the second in a series of posts on The Art of Storytelling, a video series from Pixar that shares its storytelling methodology. In this post, we will be examining how the lesson on Story Structure can be applied to data storytelling. For part one on storytelling and character, click here.

Introduction to Structure

While traditional storytelling and data storytelling are not identical mediums, there is quite a bit of overlap between the two, and many of the best practices for one can be applied to the other. Take for example the idea of structure when it comes to storytelling. Structure, or in simpler terms, “what do you want the audience to know, and when?” is hugely important when it comes to the practice of data storytelling.

It may seem counterintuitive to consider modeling your data presentations after traditional storytelling structure. After all, storytelling is an inherently subjective act. The storyteller is crafting something that helps the audience learn about a theme that the storyteller finds important, and consequently a moral that should be learned. Applying this to data can seem like enemy territory for analysts who feel that their job in presenting data is to “let the data tell the story.” It’s important to note, however, that the data doesn’t have an opinion on what is important. For example, I was speaking to an HR Analytics team recently and it was clear to me that they wanted to use data to share important lessons with the business. It was less clear that they felt empowered to do so because they felt the data should speak for itself. Data often needs a voice to give it meaning.

When creating the structure of your data stories, keep in mind that it often takes a while to get to the structure that works best for what you are trying to accomplish. That is why it is important to create something ‒ even in a rough form ‒ and get it in front of people who will give you feedback. Does it resonate and connect with the audience ‒ or is it more like the unpopular original structure of Finding Nemo? Without this knowledge, you’re more lost than Dory and Marlin ever were.

Story Beats & Story Spine

An effective way of organizing story structure is by utilizing story beats, the most important moments in your story, and story spine, a pattern into which most stories can fit. While your data story most likely won’t open with “once upon a time…” and end with “and ever since then…” the lesson can still be applied. Using a structure that is broadly familiar to audiences and hitting familiar story beats will help ensure that a data story leverages the hooks that storytelling already has in people. Your audience is looking for certain things in a data story, just like they would in a Pixar film. Who or what are the key players? What’s the conflict? How can it be resolved? Utilizing these when appropriate will make your data stories much more effective.

Act 1

The first act of a film serves to introduce the audience to a protagonist, establish the setting, provide information into how the characters’ world works, and introduce an obstacle that sets the rest of the story in motion.

In traditional dashboards and reports, this information is often missing and leads to users not knowing where to start. If your audience is going to go on a data adventure with you, they should start off by caring about the situation that exists. Data stories should start with a high-level summary that then lets users progressively and logically drill into more complex details and context.

Act 2

Pixar describes the second act of a story as “a series of progressive complications.” My favorite way of describing act two is “the part of the story in which you throw rocks at your characters.” Either way, what happens in the next part of your data story is clear: addressing conflict.

When it comes to data stories, act two is the back-and-forth exploration of the problem. In the traditional story spine they refer to it as “because of that…”; for analytics we call it “slicing-and-dicing.” Throughout act two of your data story you are showing your audience the drivers of problems and identifying any outliers.

Act 3

In traditional storytelling, the third act is the part of the story where the main character learns what she truly needs, as opposed to what she thought she wanted. The character has gone on a transformation along the course of the story, and that is evidenced in the final act.

This is much harder to pull off in data storytelling. In data storytelling, I believe the protagonist is the audience. Much like the main character, the audience needs to be transformed and understand something new and important. A satisfying story is one in which a problem is fixed and the world is set right in some way. Great data stories deliver that change, but to do so they need to do more than change the audience’s perspective. They need to make the audience act on, not just discuss, this transformation.


The best bit of advice from the Pixar storytellers is simple: work backwards. This is how we do it at Juice: we consider what is the endpoint, the change or impact that we want to make on the audience, and then craft the story that can help get us there.

We know that crafting data stories can be a challenging process, and that’s why we’re here to help. If you’d like to talk to us about how we create data stories for organizations like the Virginia Chamber of Commerce, send us a message using the link below.

Q&A with Treasure Data: Everything You Ever Wanted to Know about Data Viz and Juice

This post originally appeared on the Treasure Data blog. 

Tell us the story behind Juice Analytics. What’s your mission?

My brother and I started Juice Analytics over a decade ago. From the beginning, our mission has been to help people communicate more effectively with data. We saw the same problem then that still exists today: organizations can’t bridge the “last mile” of data. They have valuable data at their fingertips but struggle to package and present that data in ways that everyday decision makers can act on. Even with the emergence of visual analytics tools, data remains the domain of a small group of specialized analysts, leaving a lot of untapped value.

Our company has worked with dozens of companies, from media (Cablevision, U.S. News & World Report) to healthcare (Aetna, United Healthcare), to help them build analytical tools that make it easy and intuitive to explore data. We published a popular book in 2014 titled Data Fluency: Empowering Your Organization with Effective Data Communication (Wiley) with a framework and guidance to enable better data communication. To bring our best practices and technology to a broad audience, we built a SaaS platform called Juicebox that enables any organization with data to create an interactive and visual data storytelling application.

Why is data visualization so important to an organization’s ability to understand its data?

Data visualization is one of the most useful tools in bridging the gap between an organization’s valuable data and the minds of decision makers. For most people, it is difficult to extract insights or find patterns from raw data. When we tap into the power of visuals to help us recognize patterns, data becomes more accessible to a broader audience.

For many of the organizations we work with, data visualization has the added value of uncovering issues with the data. Once you start visualizing trends and outliers, the weaknesses or mistaken assumptions about your data come to the surface.

What is data storytelling? How can it be useful to marketing professionals?

The term data storytelling has become increasingly popular over the last few years. We know that data is important to reflect reality — but absorbing data, even in the form of dashboards or data visualizations, can still feel like eating your vegetables. We all recognize the power of storytelling to engage an audience and help them remember important messages. People who focus on communicating data — like our team at Juice — feel that there is an opportunity to use some of the elements of storytelling to carry the message. Stories have a narrative flow and cohesiveness that distinguishes them from most data presentations.

However, data storytelling is different from standard storytelling in some important ways. For one thing, in a data story the reader is encouraged to discover insights that matter to them. One analogy I like to use is a “guided safari.” Data storytelling should take the audience to the views of data where new insights are likely to occur, but it is up to the audience to “take a picture” of what is more relevant to them.

In our experience, data storytelling is particularly valuable to marketing professionals. For internal audiences, data storytelling techniques can help you explain the impact of your marketing efforts to your stakeholders. For customers or prospects, data stories can lend credibility to your marketing messages and enable deeper insights into your product.

What are essential tools for data storytelling?

The tools for data storytelling fall into a couple of categories: human skills and technology solutions.

The most critical skill you can have for data storytelling is empathy for your audience. You want to know where they are coming from, what they care about, how data can influence their decisions, and what actions they would take based on the right data. Knowing your audience allows you to shape a story that emphasizes the most important data and leads them to conclusions that will help them. Data storytellers must remember that an audience has a scarcity of attention and a need for the most relevant information.

At Juice, we’ve thought a lot about the capabilities that make data storytelling most effective — after all, we’ve created a technology solution that lets people build interactive data stories. Here are six features that we consider most crucial:

  1. Human-friendly visualizations. Your audience should be able to understand your data presentation the first time they see it.
  2. Combine text and visuals. There are lots of tools for creating graphs and charts. But data stories are a combination of data visuals flowing together with thoughtful prose and carefully-constructed explanations.
  3. Narrative flow. The text and visuals should carry your audience from a starting point (often the big picture of a situation) to the insights or outcomes that will influence decisions.
  4. Connected stories. In many cases, it takes more than one data story to paint the whole picture. Think of presenting your data as a “Choose Your Own Adventure” book, in which the audience can pick a path at the end of each section to follow their interests.
  5. Saving your place. The bigger and more flexible a data story becomes, the more important it is to let the audience save the point they’ve arrived at in their exploration journey.
  6. Sharing and collaboration. Data stories are often a social exercise with many people in an organization trying to find the source of a problem and decide what they should do about it. Therefore, it is critical to let people share their insights, discuss what they’ve found, and decide on actions together.

Where do you see organizations struggling the most with managing and understanding the data they collect? What should they be doing differently?

A common problem is that organizations don’t truly understand the data they are collecting. Ideally, data is truth — it should allow us to capture and save the reality of historical events, such as customer interactions and transactions. However, more often than not, what the data is capturing isn’t exactly what people imagine. We find it useful when we can get a data expert in the same room as the business folks who will be using the data. A deep-dive discussion about the meaning of individual data fields will often reveal mistaken assumptions or gaps in understanding. Working together to build a data dictionary can be invaluable as you continue to use data.
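A first draft of that data dictionary can even be generated from the data itself, leaving only the business meaning to be filled in during the deep-dive discussion. Here is an illustrative sketch (the `transactions` table and its fields are hypothetical, not a specific client's schema):

```python
import pandas as pd

def build_data_dictionary(df: pd.DataFrame) -> pd.DataFrame:
    """Draft a data dictionary skeleton: one row per field, ready for
    the data expert and business users to annotate together."""
    return pd.DataFrame({
        "field": df.columns,
        "dtype": [str(t) for t in df.dtypes],
        "non_null": df.notna().sum().values,
        "unique_values": df.nunique().values,
        "example": [df[c].dropna().iloc[0] if df[c].notna().any() else None
                    for c in df.columns],
        "business_meaning": "",  # to be filled in by the team
    })

# Hypothetical transaction data with a gap in ship_date.
transactions = pd.DataFrame({
    "order_id": [101, 102, 103],
    "ship_date": ["2017-01-03", None, "2017-01-09"],
    "amount": [19.99, 5.00, 42.50],
})
print(build_data_dictionary(transactions))
```

Even this mechanical skeleton prompts useful questions, such as why one of three orders has no ship date, which is exactly where mistaken assumptions tend to hide.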

Data exploration is an iterative process. Answering one question will raise a few more. In this way, organizations will eventually identify where they lack understanding of their data. The faster you can iterate on analyzing and presenting data, the sooner you will resolve the issues.

Is all data visualization created equal? What do organizations need to know about finding the right type of visualization to help better understand their story?

Not all data visualization is created equal. There are visualization approaches — charts and graphs — that could be a good fit for your data and message, and there are poor data visualization choices that will obscure your data. One mistake we see is indifference toward finding the right chart for the job. You may have seen dashboards that default to showing data as a bar chart, but also give users the ability to pick from a variety of other chart types. Why not choose the best chart to convey your data and unburden users from making any more decisions?

There are also well executed and poorly executed data visualizations. Good data visualization emphasizes the data over unnecessary styling, clearly labels the content and directs attention to the most important parts of the data.

From where you sit, how should organizations approach their data management – from collection to storing to analyzing?

We start from the end, then work our way backward. One of the biggest mistakes we see is organizations trying to collect and consolidate all the data they may possibly need in one place. These types of data warehouse projects quickly spin out of control with endless requirements and increasing complexity. It doesn’t have to be that way. Instead, we’d encourage people to start with three simple questions:

  1. What important action do we take today that could be better informed by data? Only include high impact actions where you have the data to answer the question.
  2. How would we present that data to the people who will take those actions? Most of the time it isn’t a data analyst who is going to be acting on the data on a day-to-day basis. Consider the simplest possible view of the data that would enable the end users.
  3. What data is necessary to deliver that view? Now you’ve narrowed down to just the critical data that is going to make an impact.

Once you’ve answered these questions for one specific action, you can go back and do it again for another.

What trends or innovations in Big Data are you following today?

Here are a few of the areas that are interesting to us:

  • Data narratives. Companies like Narrative Science are turning data into textual summaries. Like us, they are looking for ways to transform complex data into a form that is readable to humans.
  • The intersection of enterprise collaboration (e.g. Slack), data communication (e.g. Juice), and business workflows (e.g. Salesforce). Our goal isn’t just to help visualize data more effectively. We want people to act on that data. To do so, data visualization needs to connect to places where people are having conversations and into systems where people make business decisions.
  • Specialized analytical tools. The pendulum appears to be swinging away from do-it-all business intelligence platforms and toward best-in-class, modular solutions. Companies like Looker, Alteryx and Juice aren’t trying to be everything to everyone — rather, they serve a specific portion of the data analysis value chain. We’ve found more and more companies that are looking for the best tool for the job, but require mobility of the data between these tools.

Do you have a question about data viz, data storytelling, or Juice that we didn't answer? Send us a message or fill out the form below.

New Ebook: 5 Strategies for Getting Started with Workforce Analytics

Picture this: you're an HR executive in a top healthcare organization. You love your job, and you're committed to providing the absolute best patient care possible. But with increased demands and a tightening resource base, doing so is becoming more and more challenging. How are you supposed to provide more when you're being given less?

Thankfully, there's a solution. Workforce analytics can provide invaluable insight into healthcare organizations that can have a direct impact on patient care and satisfaction. However, getting started with workforce analytics can be a confusing process. That's where we come in.

For years we've been working with healthcare organizations to address these very issues using workforce analytics. We've got some of the best minds in the industry tackling the same problems you face, and now they're sharing what they've learned about workforce analytics in our newest ebook. It will walk you through what workforce analytics are and the steps you can take to implement workforce analytics in your organization right away.


So if you're feeling ready to get started with workforce analytics, download the ebook for free now! 

Building Really Great Data Products, Phase 3: Make It Available and Scalable

Over the past few weeks, we’ve talked about what it takes to build really great data products. We started with how to go from a blank canvas to design the right data product. This week we want to touch on how to maximize the reach of your data product with Phase 3: Make it available and scalable.

There are two primary areas that facilitate scaling: 1) how the product connects with the target users, and 2) how the technology of the data product enables a higher volume of users and data.

Connect with your target

Just like any other product you might think of, data products need to be used by their target to accomplish their one job. If you followed our Guided Story Design™ process, you’ve already done most of the heavy lifting to connect with your target audience. But there are some post-design considerations that you need to make if you want to maximize how your data product connects with your target.

Before people will use a product, they have to know about it. When you begin the process of telling others about your product, don’t take the “build it and they will come” approach and toss it out there and see what happens. Instead, be intentional about how you introduce folks to your product. Begin with properly-crafted messaging about the problem your product solves. Frame it in a way that they understand how it helps them. Avoid “Hey look at this cool thing I made.” (i.e., what it does for you) and focus on “This application will point you to departments with high staff turnover” (what it does for them). You’ll want to make this message as simple as possible, focusing on the chief problem it solves and leaving discussion of features for later. Realize that if it takes you a paragraph to get someone to understand why they should use it, you’re gonna lose folks before they’ve even tried it out.

Once you’ve connected with your target in a way that makes them want to use the product, you have to make it so that they can actually start using it. Don’t forget that the first time they see the product, they’re going to have to build their own mental framework for how they engage with it; any structure you can put in place to help them with this makes onboarding so much better. Some tricks to lower the barrier to use include gradual reveal, simple introduction videos, and step-by-step guides on how to accomplish common tasks. We love to use new-user tours in our Juicebox apps, but these can also be accomplished through other less automated means (such as onboarding emails, training, or documentation).

In addition to those “push” onboarding ideas, you’ll also want to consider encouraging “pull” engagement -- allowing your users to connect with you (for user feedback and support) and with other users (to discuss findings and questions). Believe it or not, interpersonal connections about the product will most certainly help them connect better with the product.

Technology scaling and operations

The second component of scaling a data product is how well the underlying technology enables more people to use more data. Because effective scaling is a very complex topic, we’re just going to touch on it briefly here with some scaling questions you’ll want to consider. As you ponder these questions, ask yourself how important each of them is to the success of your product.

Capabilities that make it easier to operate the product on a daily basis:

* Can I bulk add new users? Adding a handful of users by hand is no problem, but if you have to add dozens or hundreds, that’s no fun.

* Can I assign users to group access permissions? If different people need to have restricted access to different things, it may be more efficient to have permission groups to which you assign users so that there are no privacy slip-ups.

* Can I monitor what users are doing and how they’re using the data product? When you know who’s actually using the product you can better tune onboarding efforts.

* Can I load data using automation? Automation reduces error; if data quality is important, this may help.

* Do system resources (e.g., servers, data storage) autoscale to accommodate both growth and idle time? Making sure response times stay reasonable keeps users happier.
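To make the bulk-add question concrete, here is a small illustrative sketch of turning a one-at-a-time user call into a bulk import from CSV. The CSV format and the `create_user` callback are hypothetical; substitute whatever single-user call your platform actually exposes:

```python
import csv
import io

def bulk_add_users(csv_text, create_user):
    """Add users in bulk from CSV rows of (email, group).
    `create_user` stands in for whatever single-user call your
    platform exposes; here it is just a callback."""
    added = 0
    for row in csv.DictReader(io.StringIO(csv_text)):
        create_user(email=row["email"], group=row["group"])
        added += 1
    return added

# A tiny roster; in practice this would be read from a file.
roster = "email,group\nana@example.com,viewers\nbo@example.com,editors\n"
created = []
count = bulk_add_users(roster, lambda **user: created.append(user))
print(f"added {count} users")
```

Note that the roster also carries a group column, which ties the first two questions together: assigning each user to a permission group at import time leaves less room for privacy slip-ups later.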

Capabilities that report on system health:

* Do I know who’s using the data product (now and in the past)? When you know who’s actually using the product and what they’re doing, you can better respond to questions and feature requests.

* Do I know if data loads ran successfully? Everything works perfectly… until it doesn’t. Then you’ll want to know.

* Can I effectively identify performance bottlenecks? If you know things that impact user experience, you can improve user experience.

* Am I notified when there’s a system issue? You won’t have to spend too much time looking for broken things before you’ll really appreciate smart issue notifications.

Capabilities that enable future improvements:

* Does my technology support my data product’s life cycle? Design → Develop → Production → Upgrade → Retire.

* Can I work on new features and bug fixes without disrupting production? Being able to make changes in a development environment prevents oh so many embarrassing moments.

* Can I reliably deploy a new release without breaking the data product? Don’t miss any pieces and don’t include pieces that don’t belong.

* Can I provide branded versions to my customers that have the same core code? White labeling and customer-specific configurations.

* Can I set up users that have access to different versions of the data product for testing purposes? Giving existing users access to pre-release versions can cure headaches before they happen.

All of these are important things to take into consideration when making your data product available and scalable. It can be a difficult undertaking, but it's not an impossible one. If you have questions or want to know more about the approach we take to build our data products, send us an email at or send us a message using the form below.

Building Really Great Data Products, Phase 1: Narrow the Story Options

This is the second of three posts in a series that discusses best practices for designing data products. This post focuses on narrowing all of the “blank canvas” options down to the right design. Check out part one of the series here.

The Starting Line

Folks who have data of any size in their possession typically also have some ideas and goals for what insights they want extracted from that data. While a sense of curiosity about data is never a bad thing, it’s often too broad to home in on the important insights the data holds. Think of it like an artist who starts with a blank canvas: transforming the canvas into a beautiful work takes expertise, focus, and execution. By adhering to a carefully crafted process that narrows the focus of the story the data will ultimately tell, an author can go from broad ideas to a data story with purpose and direction.

How to do it

We use a process called Guided Story Design™. This process takes the infinite "blank canvas" options that every data visualization tool offers and narrows the options down to the one that best enables the target audience to act on the data. We do this by helping the data product author see the reporting challenge from the perspective of their users/audience, and then put the data into a context that is easily understood and acted upon. This all-important process of narrowing the purpose and function of the data application is accomplished in 3 steps.

Step 1: Identifying the Audience

The author’s attention when thinking about his or her data must be narrowed to focus on how the data will be used for the good of the business. In order for authors to truly consider and understand how their data will be consumed, they must step into the shoes of their users. These should be users that will have specific roles and goals within the data application.

For instance, consider a data product intended to enable a state chamber of commerce to better plan for future economic development in their state. Clearly identifying the users as policy makers, as opposed to investors or target corporations, introduces a critical nuance: full disclosure rather than showing only the most flattering results. This can dramatically impact the focus and purpose of your data story.

This laser focus on the user persona gives the author a sense of context as to what the purpose of the data application is and encourages them to consider who the audience of their data application will be. When someone is forced to consider what his or her users’ goals are, the metrics and dimensions that will be most valuable to their audience come to the surface.

Step 2: Designing the App

Now that the focus and the goal of the application are in place, the design is the next key factor to make the data story one with which users want to engage. When we lay out the scope of a design, it includes three components: content, layout and flow, and styling. All three play an important role in connecting with the users and deserve intentional attention.

Picking the right content typically starts with identifying the metrics that support the goals of the audience. Once the metrics are defined, the next step is specifying how to reveal additional detail about each one. For example, a metric about sales revenue might be most useful when trended across time, or perhaps when shown as a breakout across regions. Resist the urge to show every breakdown you can think of: your target user most likely prefers just one or a few of your options, and more breakouts frequently lead to more confusion.

Once you have the right content, it needs to be laid out in the proper sequence with the proper visual and interaction connections so that the user can understand it. Think of it like writing a thesis: there’s an introduction (typically key metrics), a body (the breakouts for each metric you’ve identified), followed by a conclusion (either a summary of findings or perhaps a listing of lowest-level elements such as students or transactions). The key thing to remember: there should be a flow through the content that feels natural and leads the audience to an action.

The purposeful styling of the application should invite the user to engage and seek understanding while supporting any branding guidelines that are necessary. Company logos, color palettes, and relevant images should be embedded into the app to fulfill styling guidelines and to make the application feel personalized.

Step 3: First Eyes on the App

Once the data application is ready, publishing it to a small group of actual users who fit the target gives you the ability to test and refine your design. From this test group, insights about the effectiveness and usefulness of the application will come pouring in. Give this subset of users a defined window of time to explore the data, give their feedback, and test that feedback with fellow users -- the amount of time can vary, but a specific period keeps things moving, and we find a few weeks is typically enough. The conversations and direct feedback generated through this process will make the path forward for final touch-ups very clear. That feedback also affirms to the author that the application they’ve put together is actually useful to its target audience.

The author has gone from having a blank canvas to a data application that users interact with and give feedback on, all thanks to the Guided Story Design process that puts the focus of the application’s design onto its actual end-users. In the next post, we will take you through the steps required to take the application from a small subset of test users to a living, breathing product that supports thousands of users.

Have questions about our Guided Story Design process? We've got answers! Send them our way, or check out our Contact page to shoot us a quick message.

3 Phases of Building a Data Product

This post is the first part of a three-part series. To begin, we’ll discuss the difference between reporting and data products. The second part will talk about what it takes to design an effective data product. The third and final post will review what factors to consider to get to scale with your data product.

Over the years, we’ve written about the virtues of proper data visualization and use, from Chart Chooser to dashboard design best practices. As we’ve practiced these principles with our customers to help their audiences use data, we’ve observed over and over again that the most impactful results come not as data visualizations, but rather as persistent data applications that are purpose-built and long-lived. We call these data applications "data products."

Let’s review what we mean by “data product”

In our Data Reporting Maturity Model (above), there’s a full spectrum of attributes from raw data to lifecycle management. While not all data reporting opportunities justify the attention and effort necessary to productize the data, true data products demonstrate nearly all of those attributes. These attributes fall into four broad groups: data, defined audience, accessible and usable, and productized.

Data

Data products begin with quantitative and specific data every time. This includes raw data as well as “re-formatted” data such as tables or charts. Thoughtful, credible qualitative explanation and description lets the meaning fully bloom and extends the reach to the full audience, but the qualitative part always serves the quantitative data.

Defined Audience

“Reporting” implies some specific audience and distribution along with a level of summarization and interpretation. We like to clarify this by thinking about reporting as primarily intended for “up and out.” “Up” refers to people in the author’s own organization who are less familiar with the details of the data (e.g., up the reporting chain), and “out” refers to people outside the author’s domain who are less familiar with the domain itself (e.g., peer departments, or customers). In both cases, the target audience needs the guidance of a trusted advisor in order to fully understand the importance of the data, among other things.

Accessible and Usable

Most of the attention in the data visualization space has been on the side of what we refer to as ad-hoc reporting -- "I have some data and need to explore it to find out what it might tell me." This is a necessary part of the data value chain, but let’s not be deceived: it’s by no means the last mile. The goal is to get people to act on what the data reveals. This means crafting an intentional message and providing that message in a form that is consistently available in the same way. It supports access management, enables easy new-user onboarding through guidance and help, features robust interactivity, and provides operational support (such as usage, error reporting, auto-scaling) — all those things you would expect from any real application platform.

Productized

All data reports have a lifecycle, but for the really important stuff we’ve found it helpful to think about it from the perspective of what it will take to provide it to customers over an extended period of time. You’ll want to consider its market, what sort of feature planning is required, what it takes to manufacture its first version versus future versions, how you take it to market and get users to adopt it, how you provide support and answer questions about it, and how you retire it when its time has come.

The building phases 

Now that we know what data products are, how do we build them? We break this journey down into three phases.


Phase 1: Narrow the Story Options

This first part is about narrowing a virtually infinite number of options presented by a “blank canvas” approach down to the right design. We use our own process called Guided Story Design™ to solve this problem (see chart above). 

Phase 2: Build It

This phase is about taking what you’ve designed and Pinocchio-ing it into a "real boy" (no blue fairies needed).

Phase 3: Make It Available and Scalable

The final phase of making a data product is about enabling what you’ve made to scale in both usage and capacity. This means making it accessible and usable, with a low barrier to entry and guidance for new users, along with interpretation, easy sharing, discussion, and support.

What comes next

Now that we’ve set the stage and defined what a data product is, you may be interested in more detail on how to make it happen. Over the coming weeks, we’ll be delving into the first and third phases*, so stay tuned.

*Phase two is left up to the interested reader for self-exploration based on your tool/technology of choice. Once you have the first stage result, you can implement it using many different technologies. Juicebox is what we use to create the best apps for telling data stories. If you want to learn more about it, contact us.

Six Essential Features for a Data Storytelling Solution

Traditional dashboards are good at showing a full-status picture all at once. Visual analytics tools are great for flexible exploration. But neither of those solutions was designed to tell stories with data. Data storytelling is a new model for communicating information to an audience using narrative flow, text, and visuals to engage, educate, and move people to action.

In this new era where the audience needs to come first, the priorities are different. Here are six essential features necessary to deliver compelling data story applications.

1. Human-friendly visualizations. Your audience should be able to easily understand your data presentation the first time they see it. Using common language and clear images is key to achieving this effect.

2. Integration of text and visuals. There are lots of tools for creating graphs and charts, but data stories are a combination of data visuals flowing together with thoughtful prose and carefully-constructed explanations. It's important to first set the stage for what you're presenting, then give context and add detail to your data story. 

3. Narrative flow. The text and visuals should carry your audience from a starting point (often the big picture of a situation) to the insights or outcomes that will influence decisions. Every user selection helps craft a relevant story.

4. Connected stories. In many cases, it takes more than one data story to tell the whole story. Think of exploring your data as a 'Choose Your Own Adventure' book, in which the audience can pick a path at the end of each section to follow their interests.

5. Saving your place. The bigger and more flexible a data story becomes, the more important it is to let the audience save the point they’ve arrived at in their exploration journey. In this way, they can come back to the analysis over time and share it with colleagues.

6. Sharing and collaboration. Data stories are often a social exercise with many people in an organization trying to find the source of a problem and what they should do about it. Therefore it is critical to give users an easy way to share their insights, discuss what they’ve found, and decide on actions together.

Juice has built the world’s most complete solution for creating interactive data stories. Interested in learning more? Give us a holler, or let us know how we can help at the link below.

"Choose Your Own Adventure" Data Stories

Before the days of iPads, smartphones, gaming systems, and on-demand TV, children read to keep themselves entertained. I know what you're thinking -- "What?! How could that be possible? Kids hate reading!" False! When I was growing up in the 80’s and early 90’s, one of my favorite modes of entertainment was reading, and I especially loved the “choose your own adventure” genre. I can remember reading with a flashlight under the covers, eagerly awaiting the next page so I could choose what happened to the main character. Even though I was choosing from a set number of options, I still felt in control of the adventure. At Juice, we see multiple parallels between “choose your own adventure” stories and data storytelling.

One of the main challenges when it comes to data storytelling is being able to get both analytical and non-analytical users on the same page. Data always tells a story, and we want to enable people to communicate the story to their audience and ultimately deliver something of value, regardless of their level of data fluency. This means giving users a common language in which to communicate and a platform to do so.

Some data stories are simple: they have a few metrics and a number of ways you can slice and dice the data. But what if a user wants to aggregate different sets of data and find trends, commonality, and meaning? This is one of the challenges we have taken on in Blueprint, and the starting point for finding such commonality is deciding on a root unit of measure. For Blueprint, that is the employee of a hospital or health system. In our conversations with these organizations, we have discovered that leadership wants to see their employees through many different lenses (such as hiring, turnover, tenure, engagement, and compensation). The problem is that each of those lenses is a different data set. With Blueprint, we have created an aggregator where those disparate data sets live together. By filtering the data down to an organization, department, or supervisor, we can allow a leader to “choose their own adventure” and find the story in the data that is most important to them. This lets them see more clearly into their organization and make smart, thoughtful, data-driven decisions.

Blueprint may be the first of its kind, as demonstrated by its use of shorter modules/stacks that allow the user to make his or her selection and then carry it into the next module, but we know it won't be the last. We're truly excited about what this “choose your own adventure” style of navigating means not only for the future of our products, but for the industry as a whole. And now the choice is up to you -- what will be the next step of your data storytelling adventure?