All the Colours, All the Space

Everyone knows inflation is a thing. If not, when was the last time you went shopping? Last week the Boston Globe looked specifically at children’s shoes. I don’t have kids, but I can imagine how a rapidly growing miniature human requires numerous pairs of shoes, and frequently at that. The article explores some of the factors behind the high price of shoes and uses, not very surprisingly, some line charts to show prices for components and the final product over time. But the piece also contains a few bar charts, and that’s what I’d like to briefly discuss today, starting with the screenshot below.

What is going on here?

What we see here is a list of countries and the share of production for select inputs—leather, rubber, and textiles—in 2020. At the top we have a button that allows the user to toggle between the materials, and a little movement of the bars provides the transition. The length of the bar encodes the country in question’s market share for the selected material.

We also have all this colour, but what is it doing? What data point does the colour encode? Initially I thought perhaps geographic regions, but then you have the US and Mexico, or Italy and Russia, or Argentina and Brazil, all pairs of countries in the same geographic regions and yet all coloured differently. Colour encodes nothing and thus becomes a visual distraction that adds confusion.

Then we have the white spaces between the bars. The gap between bars exists because the country labels attach to the top of the bars. But, especially near the top of the chart, the labels are small and the gap sits at just the right height that the white spaces become white bars competing with the coloured bars for visual attention.

The spaces and the colours muddy the picture of what the data is trying to show. How do we know this? Because later in the article we get this chart.

Ahh, much better. Much clearer.

This works much better. The focus is on the bars, the labelling is clear, and almost nothing else competes visually with the data. I have a few quibbles with this design as well, but it’s certainly an improvement over the earlier screenshot we discussed. (I should note that in the article, as here, this graphic also comes after the earlier one.)

My biggest issue is that when I first look at the piece, I want to see it sorted, say greatest to least. In other words, Furniture and bedding sits at the top with its 15.8% increase, year-on-year, and then Alcoholic beverages last at 3.7%. The issue here, however, is that we are not necessarily looking at goods at the same hierarchical level.

The top of the list is pretty easy to consider: food, new vehicles, alcoholic beverages, shelter, furniture and bedding, and appliances. We can look at all those together. But then we have All apparel. And then immediately after that we have Men’s, Women’s, Boys’, Girls’, and Infants’ and toddlers’ apparel. In other words, we are now looking at subsets of All apparel. All apparel is at the same level as Food or Shelter, but Men’s apparel is not.

At that point we would need to differentiate between the two levels, whilst also grouping them together, because the values for those sub-apparel groups together comprise the aggregate value for All apparel. And showing them all next to Food is not an apples-to-apples comparison.

If I were to sort these, I would sort from greatest to least by the parent group and then immediately beneath each parent I would display its children. To differentiate between the parent level and the children level, I would probably make the children’s bars shorter in the vertical and then address the different levels typographically with the labels, maybe with smaller type or by setting the children in italic.
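For what it’s worth, here is a rough sketch of that parent-then-children sort in pandas. The table, column names, and most of the numbers are hypothetical; only the 15.8% and 3.7% figures come from the chart discussed above.

```python
import pandas as pd

# Hypothetical data: parent categories plus the apparel children
df = pd.DataFrame({
    "category": ["Food", "All apparel", "Men's apparel", "Women's apparel",
                 "Furniture and bedding", "Alcoholic beverages"],
    "parent":   ["Food", "All apparel", "All apparel", "All apparel",
                 "Furniture and bedding", "Alcoholic beverages"],
    "is_child": [False, False, True, True, False, False],
    "pct_change": [9.0, 6.8, 6.5, 7.1, 15.8, 3.7],
})

# Rank the parent groups by their own value, greatest to least
parent_values = df[~df["is_child"]].set_index("parent")["pct_change"]
df["parent_rank"] = df["parent"].map(parent_values.rank(ascending=False))

# Sort by parent rank, keep each parent above its children,
# and sort children by their own values within the group
df = df.sort_values(["parent_rank", "is_child", "pct_change"],
                    ascending=[True, True, False])
print(df[["category", "pct_change"]])
```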

Finally, again, whilst this is a massive improvement over the earlier graphic, I’d make one more addition, an addition that would also help the first graphic. As we are talking about inflation year-on-year, we can see how much greater costs are from Furniture and bedding to Alcoholic beverages and that very much is part of the story. But what is the inflation rate overall?

According to the Bureau of Labor Statistics, inflation over that period was 8.5%. In other words, a number of the categories above actually saw price increases below the average inflation rate—that’s good—even though they were probably higher than increases had been prior to the pandemic—that’s bad. But, more importantly for this story, with the addition of a benchmark line running vertically at 8.5%, we could see how almost all of the apparel and footwear child-level line items sat below the overall inflation rate, whilst the children’s and infants’ items far exceeded that benchmark, hence the point of the article. I made a quick edit to the screenshot to show how that could work in theory.

To the right, not so good.
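And here is roughly how that vertical benchmark could be drawn in matplotlib. Only the 8.5%, 15.8%, and 3.7% figures come from the article; the other values and the styling are invented for the sake of the sketch.

```python
import matplotlib.pyplot as plt

# Hypothetical category values; only the 8.5% benchmark, 15.8%, and 3.7% are cited above
categories = ["Furniture and bedding", "All apparel", "Men's apparel", "Alcoholic beverages"]
pct_change = [15.8, 6.8, 6.5, 3.7]

fig, ax = plt.subplots()
ax.barh(categories, pct_change, color="#4c78a8")
ax.invert_yaxis()  # greatest to least, top to bottom

# The benchmark: overall year-on-year inflation
ax.axvline(8.5, color="#333333", linestyle="--", linewidth=1)
ax.text(8.7, 3, "8.5% overall inflation", va="center", fontsize=8)

ax.set_xlabel("Year-on-year price change (%)")
plt.tight_layout()
plt.show()
```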

Overall, an interesting article worth reading, but it contained one graphic in need of some additional work and then a second that, with a few improvements, would have been a better fit for the article’s story.

Credit for the piece goes to Daigo Fujiwara.

The Potential Impacts of Throwing Out Roe v Wade

Spoiler: they are significant.

Last night we had breaking news on two very big fronts. The first is that somebody inside the Supreme Court leaked an entire draft of the majority opinion, written by Justice Alito, to Politico. Leaks from inside the Supreme Court, whilst they do happen, are extremely rare. This alone is big news.

But let’s not bury the lede: the majority opinion is to throw out Roe v. Wade in its entirety. For those not familiar, perhaps especially those of you who read me from abroad, Roe v. Wade is the name of a court case that went before the United States Supreme Court in 1971 and was decided in 1973. It established a woman’s right to an abortion as constitutionally protected, whilst allowing states to enact some regulations to balance their interest in women’s public health and in the health of the fetus as it nears birth. Regardless of how you feel about the issue—and people have very strong feelings about it—that’s largely been the law of the United States for half a century.

Until now.

To be fair, the draft opinion is just that, a draft. And the supposed 5-3 vote—Chief Justice Roberts is reportedly undecided, but against the wholesale overthrow of Roe—could well change. But let’s be real, it won’t. And even if Roberts votes against the majority he would only make the outcome 5-4. In other words, it looks like at some point this summer, probably June or July, tens of millions of American women will lose access to reproductive healthcare.

And to the point of this post, what will that mean for women?

This article by Grid runs down some of the numbers, starting with who chooses to have abortions and ultimately getting to the map I screenshotted below.

Those are pretty long distances in the south…

The map shows how far women in a state would need to travel for an abortion with Roe active as law and without. I’ve used the toggle to show without. Women in the south in particular will need to travel quite far. The article further breaks out distances today with more granularity to paint the picture of “abortion deserts” where women have to travel sometimes well over 200 miles to have a safe, legal abortion.

I am certain that we will be returning to this topic frequently in coming months, unfortunately.

Credit for the piece goes to Alex Leeds Matthews.

Where’s My (State) Stimulus?

Here’s an interesting post from FiveThirtyEight. The article explores where different states have spent their pandemic relief funding from the federal government. The nearly $2 trillion relief package included a $350 billion block grant given to the states to do with as they saw fit. After all, every state has different needs and priorities. Huzzah for federalism. But where has that money been going?

Enter the bubbles.

I mean bubbles need water distribution systems, right?

This decision to use a bubble chart fascinates me. We know that people are not great at comparing areas. That’s why bars, dots, and lines remain the most effective forms of visually communicating differences in quantities. And as with the piece we looked at on Monday, we don’t have a legend that tells us how big the circles are relative to the dollar values they represent.

And I mention that part because what I often find with these types of charts is that designers simply say the width of the circle represents, in this case, the dollar value. But the problem is we don’t see just the diameter of the circle, we actually see the area. And if you recall your basic maths, the area of a circle is πr². In other words, if the diameter is scaled to the value, the designer is showing you far more than the value you want to see and it distorts the relationship. I am not saying that is what is happening here, but without a legend we cannot confirm it either way.
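To see why that matters, here is a small worked example with made-up values: scale the radius (or diameter) directly to a value that is twice another and the resulting bubble has four times the area. Scaling the radius to the square root of the value keeps the areas proportional to the data.

```python
import math

# Hypothetical spending values: B is twice A
value_a, value_b = 100, 200

# Naive approach: radius proportional to the value itself
radius_a, radius_b = value_a, value_b
area_ratio_naive = (math.pi * radius_b**2) / (math.pi * radius_a**2)
print(area_ratio_naive)  # 4.0 -- the bubble looks four times as big, not twice

# Correct approach: radius proportional to the square root of the value
radius_a, radius_b = math.sqrt(value_a), math.sqrt(value_b)
area_ratio_correct = (math.pi * radius_b**2) / (math.pi * radius_a**2)
print(area_ratio_correct)  # 2.0 -- the area now matches the underlying ratio
```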

This sort of piece would also be helped by some limited interactivity. Because, as a Pennsylvanian, I am curious to see where the Commonwealth is choosing to spend its share of the relief funds. But there is no way at present to dive into the data. Of course, if Pennsylvania is not part of the overall story—and it’s not—then an inline graphic need not show the Keystone State. In these kinds of stories, however, I often enjoy an interactive piece at the end wherein I can explore the breadth and depth of the data.

So if we accept that a larger interactive piece is off the table, could the graphic have been redesigned to show more of the state-level data with more labelling? A tree map would be an improvement over the bubbles because comparing lengths and heights is easier than comparing circles, though it still presents the area problem. What a tree map allows is inherent grouping, so one could group by either spending category or by state.

I would bet that a smart series of bar charts could work really well here. It would require some clever grouping and probably colouring, but a well-structured set of bars could capture both the states and the categories and could be grouped by either.
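As a very rough sketch of what I have in mind, with entirely invented states, categories, and dollar figures (this is not the FiveThirtyEight data):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical allocations in billions of dollars; not the actual relief data
states = ["Pennsylvania", "Ohio", "New Jersey"]
categories = {"Infrastructure": [2.1, 1.4, 0.9],
              "Public health": [1.3, 1.1, 0.8],
              "Revenue replacement": [3.0, 2.2, 1.6]}

x = np.arange(len(states))
width = 0.25

fig, ax = plt.subplots()
for i, (label, values) in enumerate(categories.items()):
    ax.bar(x + i * width, values, width, label=label)

ax.set_xticks(x + width)  # centre the tick under each group of bars
ax.set_xticklabels(states)
ax.set_ylabel("Allocated funds ($bn)")
ax.legend(frameon=False)
plt.tight_layout()
plt.show()
```

The same data could just as easily be grouped the other way, with categories on the axis and one bar per state within each group.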

Overall a fascinating idea, but I’m left just wanting a little more from the execution.

Credit for the piece goes to Elena Mejia.

How Accurate Is Punxsutawney Phil?

For those unfamiliar with Groundhog Day—the event, not the film, because as it happens your author has never seen the film—since 1887 in the town of Punxsutawney, Pennsylvania (60 miles east-northeast of Pittsburgh), a groundhog named Phil has risen from his slumber, climbed out of his burrow, and gone looking for his shadow. Phil prognosticates upon the continuance of winter—whether we receive six more weeks of winter or an early spring—based upon the appearance of his shadow.

But as any meteorology fan will tell you, a groundhog’s shadow does not exactly compete with the latest computer models running on servers and supercomputers. And so we are left with the all-important question: how accurate is Phil?

Thankfully the National Oceanic and Atmospheric Administration (NOAA) published an article several years ago that they continue to update. And their latest update includes 2021 data.

Not exactly an accurate depiction of Phil.

I am loath to be super critical of this piece because, again, relying upon a groundhog for long-term weather forecasting is…for the birds (the best I could do). But critiquing information design is largely what this blog is for.

Conceptually, dividing up the piece between a long-term view, i.e. since 1887, and a shorter-term one, i.e. since 2012, makes sense. The long-term view focuses more on how Phil’s forecasts split out—clearly Phil likes winter. I dislike the use of the dark blue here for the years for which we have no forecast data. I would have opted for a neutral colour, say grey, or something visibly less impactful than the two light colours (blue and yellow) that represent winter and spring.

Whilst I don’t love the icons used in the pie chart, they do make sense because the designers repeat them within the table. If they’re selling the icon use, I’ll buy it. That said, I wonder whether using those icons more purposefully could have been more impactful. What would have happened if they had used a timeline and each year were represented by an icon of a snowflake or a sun? Or if we simply had icons grouped in blocks of ten or twenty?

The table I actually enjoy. I would tweak some of the design elements—for example, the green check marks almost fade into the light blue sky background; a darker green would have worked well there. But conceptually this makes a lot of sense: take each prognostication, compare it with the temperature deviation for February and March (as a proxy for “winter” or “spring”), and then assess whether Phil was correct.
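As a rough sketch of that check, assuming a hypothetical table of Phil’s calls and February/March temperature deviations (NOAA’s actual methodology and data will differ):

```python
import pandas as pd

# Hypothetical records: Phil's call and the Feb/Mar temperature deviation (°F vs average)
records = pd.DataFrame({
    "year": [2018, 2019, 2020, 2021],
    "forecast": ["winter", "spring", "spring", "winter"],
    "feb_mar_deviation": [1.5, -0.3, 2.1, -1.0],
})

# Treat a warmer-than-average Feb/Mar as "spring", colder as "winter"
records["outcome"] = records["feb_mar_deviation"].apply(
    lambda dev: "spring" if dev > 0 else "winter")
records["phil_correct"] = records["forecast"] == records["outcome"]

print(records)
print(f"Accuracy: {records['phil_correct'].mean():.0%}")
```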

I would like to know more about what a slightly above or below measurement means compared to above or below. And I would like to know more about the impact of climate change upon these measurements. For example, was Phil’s accuracy higher in the first half of the 20th century? The end of the 19th?

Finally, the overall article makes a point about how difficult it would be for a single groundhog in western Pennsylvania to forecast the weather for the entire United States, let alone its various regions. But what about Pennsylvania? Northern Appalachia? I would be curious about a more regionally specific analysis of Phil’s prognostication prowess.

Credit for the piece goes to the NOAA graphics department.

Graduate Degrees

Many of us know the debt that comes along with undergraduate degrees. Some of you may still be paying yours down. But what about graduate degrees? A recent article from the Wall Street Journal examined the discrepancies between debt incurred in 2015–16 and the income earned two years later.

The designers used dot plots for their comparisons, which narratively reveal themselves through a scrolling story. The author focuses on the differences between the University of Southern California and California State University, Long Beach. This screenshot captures the differences between the two in both debt and income.

Pretty divergent outcomes…

Some simple colour choices guide the reader through the article, and their consistent use makes it easy to visually compare the schools.

From a content standpoint, these two series, income and debt, can be combined to create an income-to-debt ratio. Simply put, does the degree pay for itself?
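A trivial sketch of that ratio, with invented figures rather than the Journal’s data:

```python
# Hypothetical median figures for two programmes (not the WSJ's actual data)
programmes = {
    "School A, MFA Design": {"median_debt": 92_000, "median_income": 46_000},
    "School B, MS Engineering": {"median_debt": 55_000, "median_income": 98_000},
}

for name, figures in programmes.items():
    ratio = figures["median_income"] / figures["median_debt"]
    verdict = "earns more than it cost" if ratio >= 1 else "earns less than it cost"
    print(f"{name}: income-to-debt ratio {ratio:.2f} ({verdict})")
```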

What’s really nice from a personal standpoint is that the end of the article features an exploratory tool that allows the user to search the data set for schools of interest. Better still, the tool isn’t limited to graduate degrees; you can search for undergraduate degrees as well.

Below the dot plot you also have a table that provides the exact data points, instead of cluttering up the visual design with that level of information. And when you search for a specific school through the filtering mechanism, you can see that school highlighted in the dot plot and brought to the top of the table.

Fortunately my alma mater is included in the data set.

Welp.

Unfortunately you can see that the data suggests that graduates with design and applied arts degrees earn less (as a median) than they spend to obtain the degree. That’s not ideal.

Overall this was a really nice, solid piece. And it probably speaks to the broader discussions we need to have about post-secondary education in the United States. But that’s for another post.

Credit for the piece goes to James Benedict, Andrea Fuller, and Lindsay Huth.

Philadelphia’s Wild Winters

Winter is coming? Winter is here. At least meteorologically speaking, because winter by that definition lasts from December through February. But winters in Philadelphia can be a bit scattershot in terms of their weather. Yesterday the temperature hit 19°C before a cold front passed through and knocked the overnight low down to 2°C. From a warm autumn or spring day to just above freezing in the span of a few hours.

But when we look more broadly, we can see that winters range just that much as well. And look the Philadelphia Inquirer did. The article this morning looked at historical temperatures and snowfall and, whilst I won’t share all the graphics, it used a number of dot plots to highlight the temperature ranges both in winter and across the year.

Yep, I still prefer winter to summer.

The screenshot above focuses attention on the ranges in January and July, and you can see how the spread between the minimum and maximum is greater in winter than in summer. Philadelphia may have days with summer temperatures in the winter, but we don’t have winter temperatures in summer. And I say that’s unfair. But c’est la vie.

Design-wise there are a couple of things going on here that we should mention. The most obvious is the blue background. I don’t love it. Presently the blue dots that represent colder temperatures begin to recede and blend into the background, especially around that 50°F mark. If the background were white or even a light grey, we would be able to see the full range of the temperatures clearly, without the optical illusion of a separation in those January observations.

Less visible here is the snowfall. If you look just above the red dots representing the range of July temperatures, you can see a little white dot near the top of the screenshot. The article has a snowfall effect with little white dots “falling” down the page. I understand how the snowfall fits with the story about winter in Philadelphia. Whilst the snowfall is light enough to not be too distracting, I personally feel it’s a bit too cute for a piece that is data-driven.

The snowfall is also an odd choice because, as the article points out, Philadelphia winters do feature snow, but on days when precipitation falls, snow accounts for less than a third of them, with rain and wintry mixes accounting for the vast majority.

Overall, I really like the piece as it dives into the meteorological data and tries to accurately paint a portrait of winters in Philadelphia.

And of course the article points out that the trend is pointing to even warmer winters due to climate change.

Credit for the piece goes to Aseem Shukla and Sam Morris.

Data Analysis and Baseball

First, a brief housekeeping thing for my regular readers. It is that time of year, as I alluded to last week, when I’ll be taking quite a bit of holiday. This week that includes yesterday and Friday, so no posts on those days. After that, unless I have the entire week off—and I do on a few occasions—it’s looking like three days’ worth of posts, Monday through Wednesday. Then I’m enjoying a number of four-day weekends.

But to start this week, we have Game 6 of the World Series tonight between the Atlanta Braves and the Houston Astros. That should be the Braves vs. the Red Sox, but whatever. If you want your bats to fall asleep, you deserve to lose. Anyways, rest in peace, RemDawg.

Yesterday the BBC posted an article about baseball, which is weird at first because baseball is largely an American sport played in relatively few countries. Here’s looking at you, Japan, and your gold medal in the sport earlier this year. Nevertheless I fully enjoyed having a baseball article on the BBC homepage. But beyond that, it also combined baseball with history and with data and its visualisation.

You might say they hit the sweet spot of the bat.

There really isn’t much in the way of graphics, because we’re talking about work from the 1910s. So I recommend reading the piece; it’s fascinating. Overall it describes how Hugh Fullerton, a sportswriter, determined that the 1919 White Sox had thrown the World Series.

Fullerton, long story short, loved baseball and he loved data. He went to games well before the era of Statcast and recorded everything from pitches to hits and locations of batted balls. He used this to create mathematical models that helped him forecast winners and losers. And he was often right.

For the purposes of this blog post, he explained in 1910 how his system of notation worked and what it allowed him to see in terms of how games were won and lost. Below is a screen capture of the only graphic relevant to our purposes.

Grooves on the diamond

In it we see the areas where the batter is likely safe or out depending upon where the ball is hit. Along the first and third base foul lines we see thin strips of what all baseball fans fear: doubles or triples down the line. If you look closely you can see the dark lines become small blobs near home plate. We’ve all seen those little tappers off the end of the bat that die in front of the plate, effectively a bunt.

Then in the outfield we have the two power alleys in right- and left-centre. When your favourite power hitter hits a blast deep to the outfield for a home run, it’s usually in one of those two areas.

We also have some light grey lines, which mark where batted balls are more likely to get through the infielders. We are talking ground balls up the middle and between the middle infielders and the corners. Of course this was baseball in the early 20th century, and while, yes, shifting was a thing, it was nowhere near as prevalent. Consequently defenders usually lined up in their regular positions, and these lines correspond to those defensive alignments.

Finally, the vast majority of the infield is coloured another dark grey, representing how infielders can usually soak up any ground ball and make the play.

The whole article is well worth the read, but I loved this graphic from 1910 that explains (unshifted) baseball in the 21st century.

Credit for the piece goes to Hugh Fullerton.

Covid Vaccination and Political Polarisation

I will try to get to my weekly Covid-19 post tomorrow, but today I want to take a brief look at a graphic from the New York Times that sat above the fold outside my door yesterday morning. And those who have been following the blog know that I love print graphics above the fold.

On my proverbial stoop this morning.

Of the six-column layout, you can see that this graphic gets three, in other words half a page width, and the accompanying column of text for the article brings it to nearly two-thirds of the front page.

When we look more closely at the graphic, you can see it consists of two separate parts, a scatter plot and a line chart. And that’s where it begins to fall apart for me.

Pennsylvania is thankfully on the more vaccinated side of things

The scatter plot uses colour to indicate the vote share that went to Trump. My issue is that the colour isn’t necessary. If you look along the top, where the x-axis labelling should sit, you can see that the axis itself represents the same data. If, however, the designer chose colour to show the range of the state vote, well, that’s what the axis labelling should be for…except there is none.

If the scatter plot used proper x-axis labels, you could easily read the range on either side of the political spectrum, and colour would no longer be necessary. I don’t entirely understand the lack of labelling here, because on the y-axis the scatter plot does use labelling.

On a side note, I would probably have added the US unvaccination rate as a benchmark, to see which states sit above and below the national average.

Now if we look at the second part of the graphic, the line chart, we do see labelling for the axis. But what I’m not fond of here is that the line for counties with large Trump shares significantly exceeds the maximum range of the chart. And then the dotted line for 0.5 deaths per 100,000 mysteriously ends short of the edge of the chart. It’s not as if the line would have overlapped with the data series. And even if it did, that’s the point of an axis line: so the user can know when the data has exceeded an interval.

I really wanted to like this piece, because it is a graphic above the fold. But the more I looked at it in detail, the more issues I found with the graphic. A couple of tweaks, however, would quickly bring it up to speed.

Credit for the piece goes to Ashley Wu.

Misleading Graphics Aren’t Limited to US Elections

Last week I wrote about how CBS News’ coverage of the California recall election featured a misleading graphic. In particular, the graphic created the appearance that the results were closer than they really were.

This week we had another election and, sadly, I find that I have to write the same sort of piece again. Except this time we are headed north of the border to Canada.

I was watching the CBC’s coverage last night and noticed early on that the vote share bar chart looked off given the data points. The next time it popped up, I took a screenshot.

Look at the bars

First we need to note that these bars are three-dimensional and the camera angle kept swinging around—not ideal for a fair comparison. This was the most straight-on angle I captured.

Second, at first glance we have the Conservative share at a little more than three-quarters of the Liberal vote share, and that looks to be about right. Then you have the New Democratic Party (NDP) at roughly half the vote of the Conservatives, and the bar looks about half the height of the blue Conservative bar. Checks out. Then you have the People’s Party of Canada at roughly a quarter of the NDP’s votes. But now look at the bar’s height: the purple bar is nearly the same height as the orange bar.

Clearly that is wrong and misleading.

The problem, I think, is that the designers artificially inflated the height of the bars to include the labels and data points within the bars. The designers should have dropped the labelling below the bars and let the bars represent only the data.

I created the following graphic to show how the chart should have looked.

And my take…

Here you can more clearly see how much greater the NDP’s vote total was than the People’s Party’s. The labelling falls below the bars and doesn’t distort the height comparison between them. In some respects, it wasn’t even close. But the original graphic made it look otherwise.

I just wish I knew what the designers were thinking. Why did they inflate the bars? Like with the CBS News graphic, I hope it wasn’t intentional. Rather, I hope it was some kind of mistake or even ignorance.

Credit for the original piece goes to the CBC graphics department.

Credit for the updated version is mine.

Correcting CBS News Charts

One of the long-running critiques of Fox News Channel’s on-air graphics is that they often distort the truth. They choose questionable, if not flat-out misleading, baselines and scales, and adjust other elements to create differences where they don’t exist or smooth over problematic issues.

But yesterday a friend sent me a graphic that shows Fox News isn’t alone. This graphic came from CBS News and looked at the California recall election vote totals.

If you just look at the numbers, 66% and 34%, we can see that 34 is roughly half of 66. So why does the top bar look more like two-thirds the length of the bottom one? I don’t actually know the intent of the designer who created the graphic, but I hope it’s more ignorance or sloppiness than malice. I wonder if the designer simply thought: 66%, well, that means the top bar should be, like, two-thirds the length of the bottom.

The effect, however, makes the election seem far closer than it really was. For every yes vote, there were almost two no votes. And the above graphic does not capture that fact. And so my friend asked if I could make a graphic with the correct scale. And so I did.

One really doesn’t need a chart to compare the two numbers, and I touch on that with the last option, using two factettes to simply state the results. But let’s assume we need to make it sexy, give it sizzle, make it flashy. Because I think every designer has heard that request.

A simple scale of 0 to 66 could work, and we can see how that would differ from the original graphic. Or, if we use a scale of 0 to 100, we can see how the two bars relate both to each other and to the total vote. That approach would also allow for a stacked bar chart, as I made in the third option. The advantage there is that you can easily see the victor by who crosses the 50% line at the centre of the graphic.
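For what it’s worth, here is roughly how that stacked-bar option could be drawn, using the 66/34 split from the article; the styling and labels are mine, not CBS News’.

```python
import matplotlib.pyplot as plt

no_share, yes_share = 66, 34  # the recall results cited above

fig, ax = plt.subplots(figsize=(8, 1.5))
ax.barh(0, no_share, color="#4c78a8", label="No: 66%")
ax.barh(0, yes_share, left=no_share, color="#e45756", label="Yes: 34%")

# The 50% line makes the victor immediately obvious
ax.axvline(50, color="#333333", linewidth=1)
ax.set_xlim(0, 100)
ax.set_yticks([])
ax.set_xlabel("Share of the vote (%)")
ax.legend(frameon=False, ncol=2, loc="lower center", bbox_to_anchor=(0.5, 1.05))
plt.tight_layout()
plt.show()
```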

Basically doing anything but what we saw in the original.

Credit for the original goes to the CBS News graphics department.

Credit for the correction is mine.