As Russia redeploys its forces in and around Ukraine, you can expect to hear more about how they are attempting to reconstitute their battalion tactical groups. But what exactly is a battalion tactical group?
In recent years the Russian army has been reorganised away from regiments and divisions and towards smaller, more integrated units that can, in theory, operate more independently: battalion tactical groups (BTGs). They typically comprise fewer than a thousand soldiers, about 200 of whom are infantry. But they also include a number of tanks, infantry fighting vehicles (IFVs), armoured personnel carriers (APCs), artillery, and other support units.
In an article from two weeks ago, the Washington Post explained why the Russian army had stalled out in Ukraine. As part of that, it explained what a battalion tactical group is with a nice illustration.
Just some of the vehicles in a BTG
Russia’s problem is that in the first month of the war, Ukrainian anti-armour weapons like US-made Javelins and UK-made NLAWs ripped apart Russian tanks, IFVs, and APCs. On top of that, Ukrainian drones and artillery took out more armour. The units that Russia withdrew from Ukraine now have to be rebuilt and resupplied. Once refreshed, these can be deployed into the Donbas and southern Ukraine.
This graphic isn’t terribly complicated, but the nice illustrations go a long way to showing what comprises a battalion tactical group. And when you see photos of five or six tanks destroyed along the side of a Ukrainian road, you now understand that such a loss constitutes half of a typical unit’s available armour. In other words, a big deal.
I expect to hear more out of Russia and Ukraine in coming days about how Russia is providing new vehicles and fresh soldiers to resupply exhausted units.
Credit for the piece goes to Bonnie Berkowitz and Artur Galocha.
Here’s an interesting post from FiveThirtyEight. The article explores where different states have spent their pandemic relief funding from the federal government. The nearly $2 trillion relief package included a $350 billion block grant given to the states to do with as they saw fit. After all, every state has different needs and priorities. Huzzah for federalism. But where has that money been going?
Enter the bubbles.
I mean bubbles need water distribution systems, right?
This decision to use a bubble chart fascinates me. We know that people are not great at comparing areas. That’s why bars, dots, and lines remain the most effective forms of visually communicating differences in quantities. And as with the piece we looked at on Monday, we don’t have a legend that informs us how big the circles are relative to the dollar values they represent.
And I mention that because what I often find with these types of charts is that designers simply say the width of the circle represents, in this case, the dollar value. But the problem is we don’t see just the diameter of the circle, we actually see the area. And if you recall your basic maths, the area of a circle is πr². In other words, the designer is showing you far more than the value you want to see, and it distorts the relationship. I am not saying that is what is happening here—without a legend, we cannot confirm it either way.
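To make the distortion concrete, here is a minimal sketch in Python with made-up dollar values, showing what happens when a designer scales the diameter rather than the area:

```python
import math

def circle_area(radius):
    """Area of a circle: pi * r^2."""
    return math.pi * radius ** 2

# Two hypothetical spending values, one double the other.
value_a, value_b = 100, 200

# Flawed encoding: the radius (diameter) scales directly with the value.
wrong_ratio = circle_area(value_b) / circle_area(value_a)

# Correct encoding: the area scales with the value, so radius ~ sqrt(value).
right_ratio = circle_area(math.sqrt(value_b)) / circle_area(math.sqrt(value_a))

print(wrong_ratio)  # ≈ 4.0 — a 2× value looks 4× as big
print(right_ratio)  # ≈ 2.0 — the visual matches the data
```

The upshot: doubling a value quadruples the circle’s apparent size under the flawed encoding, which is exactly the distortion a legend would let us rule out.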
This sort of piece would also be helped by some limited interactivity. Because, as a Pennsylvanian, I am curious to see where the Commonwealth is choosing to spend its share of the relief funds. But there is no way at present to dive into the data. Of course, if Pennsylvania is not part of the overall story—and it’s not—then an inline graphic need not show the Keystone State. In these kinds of stories, however, I often enjoy an interactive piece at the end wherein I can explore the breadth and depth of the data.
So if we accept that a larger interactive piece is off the table, could the graphic have been redesigned to show more of the state-level data with more labelling? A tree map would be an improvement over the bubbles, because scaling by length and height is easier to read than a circle’s area, but it still presents the area problem. What a tree map does allow is inherent grouping: one could group by either spending category or by state.
I would bet that a smart series of bar charts could work really well here. It would require some clever grouping and probably colouring, but a well structured set of bars could capture both the states and categories and could be grouped by either.
Overall a fascinating idea, but I’m left just wanting a little more from the execution.
The National Oceanic and Atmospheric Administration (NOAA) released its 2022 Sea Level Rise Technical Report, which details projected changes to sea level over the next 30 years. Spoiler alert: it’s not good news for the coasts. In essence, the roughly one foot of sea level rise we have seen over the past 100 years we will witness again in just the thirty years to 2050—more than a threefold acceleration in the rate of rise.
Now I’ve spent a good chunk of my life “down the shore”, as we say in the Philadelphia dialect, and those shore towns will always have a special place in my life. But they increasingly look like cherished memories fading into time. I took a screenshot of the Philadelphia region and South Jersey in particular.
Not just the Shore, but also the Beaches
To be fair, that big blob of blue is Delaware Bay. That’s already the inlet to the Atlantic. But the parts that ought to disturb people are just how much blue snakes into New Jersey and Delaware, and how little space there is between those very thin ribbons of land off the Jersey coast.
You can also see little blue dots. When the user clicks on those, the application presents a small interactive popup that models sea level rise on a representative photograph. In this case, the dot nearest to my heart is that of the Avalon Dunes, with which I’m very familiar. As the sea level rises, more and more of the street behind the protective dunes disappears.
My only real issue with the application is how long it takes to load and refresh the images every single time you adjust the zoom or change your focus. I had a number of additional screenshots I wanted to take, but frankly the application was taking too long to load the data. That could be down to a million things, true, but it frustrated me nonetheless.
Regardless of my frustration, I do highly recommend you check out the application, especially if you have any connection to the coast.
Today we have an interesting little post, a choropleth map in a BBC article examining the changes occurring in the voting systems throughout the United States. Broadly speaking, we see two trends in the American political system when it comes to voting: make it easier because democracy; make it more restrictive because voter fraud/illegitimacy. The underlying issue, however, is that we have not seen any evidence of widespread or concerted efforts of voter fraud or problems with elections.
Think mail-in ballots are problematic? They’ve been used for decades without issue in many states. That’s not to say a new state couldn’t screw up the implementation of mail-in voting, but it’s a proven, safe, and valid system for elections.
Think there were issues of fraudulent voters? We had something like sixty cases brought before the courts, and I believe in only one or two instances were the claims even remotely proven. The article cites some Associated Press (AP) reporting that identified only 500 cases of fraudulent votes. Out of over 14 million votes cast.
500 out of 14,000,000. That works out to roughly 0.004 percent.
Anyway, the map in the article colours states by whether they have passed expansive or restrictive changes to voting. Naturally there are categories for no changes as well as when some expansive changes and some restrictive changes were both passed.
Normally I would expect to see a third colour for the overlap. Imagine we had red and blue; a blend of those colours, like purple, would often be a designer’s choice. Here, however, we have a hatched pattern with alternating stripes of orange and blue. You don’t see this done very often, and so I just wanted to highlight it.
I don’t know if this marks a new stylistic direction for the BBC graphics department. I don’t necessarily love the pattern itself—the colours make it difficult to read the text—though the designers outlined said text, so points for that.
But I’ll be curious to see if I, well, see more of this in coming weeks and months.
Credit for the piece goes to the BBC graphics department.
We’re starting this week with an article from the Philadelphia Inquirer. It looks at the increasing number of guns confiscated by the Transportation Security Administration (TSA) at Philadelphia International Airport. Now while this is a problem we could discuss, one of the graphics therein has a problem that we’ll discuss here.
We have a pretty standard bar chart here, with the number of guns “detected” at all US airports from 2008 through 2021. The previous year is highlighted with a darker shade of blue. But what’s missing?
We have two light grey lines running across the graphic. But what do they represent? We do have the individual data points labelled above each bar, and that gives us a clue that the grey lines are axis lines, specifically representing 2,000 and 4,000 guns, because they run between the bars straddling those two values.
However, we also have the data labels themselves. Are they even necessary? If we look at the amount of space taken up by the labels, we can imagine that three axis labels—2k, 4k, and 6k—would use significantly less visual real estate than the individual labels. The data contained in the labels could be relegated to a mouseover state, revealed only when the user interacts directly with the graphic. Here they serve as “sparkle”, distracting from the visual relationships of the bars.
If the actual data values, down to the single digit, are important, a table would be a better format for displaying the information. A chart should show the visual relationship. Now, perhaps the Inquirer decided to display data labels and no axes for all its charts. I may disagree with that, but it’s a house data visualisation style choice.
But then we have the above screenshot. In this bar chart, we have something similar. Bars represent the number of guns detected specifically at Philadelphia International Airport, although the time frame is narrower, covering only 2017–2021. We do have grey lines in the background, but now, on the left of the chart, we have numbers: axis labels displaying 10, 20, and 30. Interestingly, the maximum value in the data set is 39 guns detected last year, but the chart does not include an axis line at 40 guns, which would make sense given the increments used.
At the end of the day, this is just a frustrating series of graphics. Whilst I do not understand the use of the data labels in the first chart, the inconsistent treatment of them within one article is maddening.
This is an older piece that I stumbled across doing some other work. I felt like it needed sharing. The interactive graphic shows the high and low note vocal ranges of major musical artists.
Good to see some of my favourite artists in the mix.
Interactive controls allow the user to sort the bars by the greatest vocal range, high notes, or low notes. Colour coding distinguishes male from female vocalists.
In particular I enjoy the bottom of the piece that uses the keyboard to show the range of notes. When the user mouses over a particular singer, the ends of the range display the particular song in which the singer hit the note.
Again, this is an older piece that I just discovered, but I did enjoy it. I would be curious to see how these things could change over time. As an artist ages, how does that change his or her vocal range? Are there differences between albums? This could be a fascinating starting point for further research.
Today is just a quick little follow-up to my post from Monday. There I talked about how a Boston Globe piece using three-dimensional columns to show snowfall amounts in last weekend’s blizzard failed to clearly communicate the data. Then I showed a map from the National Weather Service (NWS) that showed the snowfall ranges over an entire area.
Well, scrolling through the weather feeds on the Twitter yesterday, I saw this graphic from the NWS that comes closer to the Globe’s original intent but again offers a far clearer view of the data.
Much better
Whilst we lose the exact values of the individual reports—that is to say, the reports are grouped into bins and assigned a colour—we have a much more granular view than we did with the first NWS graphic I shared.
The only comment I have on this graphic is that I would probably drop the terrain element of the map. The dots work well when placed atop the white map, but the lighter blues and yellows fade out of view when placed atop the green.
But overall, this is a much clearer view of the storm’s snowfall.
Credit for the piece goes to the National Weather Service graphics department.
For those unfamiliar with Groundhog Day—the event, not the film, because as it happens your author has never seen the film—since 1887 in the town of Punxsutawney, Pennsylvania (60 miles east-northeast of Pittsburgh), a groundhog named Phil has risen from his slumber, climbed out of his burrow, and gone to see if he could see his shadow. Phil prognosticates upon the continuance of winter—whether we receive six more weeks of winter or an early spring—based upon the appearance of his shadow.
But as any meteorological fan will tell you, a groundhog’s shadow does not exactly compete with the latest computer modelling running on servers and supercomputers. And so we are left with the all important question: how accurate is Phil?
Thankfully the National Oceanic and Atmospheric Administration (NOAA) published an article several years ago that they continue to update. And their latest update includes 2021 data.
Not exactly an accurate depiction of Phil.
I am loath to be super critical of this piece because, again, relying upon a groundhog for long-term weather forecasting is…for the birds (the best I could do). But critiques of information design are largely what this blog is for.
Conceptually, dividing the piece between a long-term view (since 1887) and a shorter-term one (since 2012) makes sense. The long-term view focuses more on how Phil split his forecasts—clearly Phil likes winter. I dislike the use of the dark blue here for the years for which we have no forecast data. I would have opted for a neutral colour, say grey, or something visibly less impactful than the two light colours (blue and yellow) that represent winter and spring.
Whilst I don’t love the icons used in the pie chart, they do make sense because the designers repeat them within the table. If they’re selling the icon use, I’ll buy it. That said, I wonder if using those icons more purposefully could have been more impactful? What would have happened if they had used a timeline and each year was represented by an icon of a snowflake or a sun? What about if we simply had icons grouped in blocks of ten or twenty?
The table I actually enjoy. I would tweak some of the design elements, for example the green check marks almost fade into the light blue sky. A darker green would have worked well there. But, conceptually this makes a lot of sense. Run each prognostication and compare it with temperature deviation for February and March (as a proxy for “winter” or “spring”) and then assess whether Phil was correct.
I would like to know more about what a slightly above or below measurement means compared to above or below. And I would like to know more about the impact of climate change upon these measurements. For example, was Phil’s accuracy higher in the first half of the 20th century? The end of the 19th?
Finally, the overall article makes a point about how difficult it would be for a single groundhog in western Pennsylvania to determine weather for the entire United States let alone its various regions. But what about Pennsylvania? Northern Appalachia? I would be curious about a more regionally-specific analysis of Phil’s prognostication prowess.
Credit for the piece goes to the NOAA graphics department.
During the pandemic, media reports of the rise of crime have inundated American households. Violent crimes, we are told, are at record highs. One wonders if society is on the verge of collapse.
But last night a few friends asked me to take a look at the data during the pandemic (2020–2021) and see what is actually going on out on the streets in a few big cities. Naturally I agreed and that’s why we have this post today. The first thing to understand, however, is that we do not have a federal-level database where we can cross compare crimes in cities using standardised definitions. The FBI used to produce such a thing, but in 2020 retired it in favour of a new system that, for reasons, local and state agencies have yet to fully embrace. Consequently, just when we need some real data, we have a notable lack of it.
At the very least we have national-level reporting on violent crimes and homicides, the latter of which are a subset of violent crimes. Though these reports are also dependent on local and state agencies self-reporting to the FBI. I also wanted to look at not just whether crime is up of late, but whether it is up over the last several years. I chose to go back 30 years—a generation.
We can see one important trend here: at a national level, violent crimes are largely stable at a rate of about 400 per 100,000 people. Homicides, however, have climbed by nearly a third. Violent crimes are not rising, but murders are.
My initial charge was to look at cities and violent crime. However, knowing that nationally violent crimes are largely stable, the issue of concern would be how the rise in murders is playing out on American city streets. With the caveat that we do not have a single database to review, I pulled data directly from the five cities of interest: Philadelphia, Chicago, New York, Washington, and Detroit.
I also considered that large cities will have more murders simply by dint of their larger populations. And so when I collected the data, I also tried to find the Census Bureau’s population estimates of the cities during the same time frame. Unfortunately the 2021 estimates are not yet available so I had to use the 2020 population estimates for my 2021 calculations.
First we can see that not all cities report data for the same time period. And for Detroit in particular that makes comparisons tricky. In fact, only New York had data back to the beginning of the century. Despite the data set’s incompleteness, we can see that in all five cities homicides rose in 2020 and 2021.
Second, however, if we squint through that lack of full data, we see a trend at the city level that aligns with the national one. Homicides, tragically, are indeed up. However, in New York and Washington homicides remain below their levels from around 2000, and at that time homicides already appeared to be on a downward trajectory. I would bet that homicides were even higher during the 1990s and that the 2000s represented a long-run decline. In other words, whilst homicides are up, they are still below their peaks. A worrying trend, but far from the sky falling.
That cannot quite be said for other cities. Let’s start with Detroit. Sadly we have too few years of data to draw any conclusion other than that homicides rose compared to the years preceding the pandemic.
That leaves us with Philadelphia and Chicago. Philadelphia has less data available, and it’s harder to make a determination of what is happening. But we can say that since 2007, homicides have not been higher. If you look closely, though, you can see that there does appear to be a downward trend at the beginning of the line. We do not have as much data as we do with New York and Washington, but I would bet homicides are up in Philadelphia yet still far short of what they were in the 1990s.
Chicago is the oddball. Yes, it saw a peak in homicides during the pandemic. But in 2016 the city didn’t miss the pandemic peak by much. In other words, homicides were staggeringly high in Chicago before the pandemic. If anything, we see a failure to combat high crime rates. But even before that spike in 2016, we see more of a valley floor in homicides. True, at the beginning of the century homicides appear to have trended down. But unlike the other cities here, homicides bottomed out at around 450 per year. I’m not so certain Chicago had a persistent, long-run decline to start with.
And as I said above, we would expect larger populations to have more murders, simply because there are more potential criminals and victims. When we equalise for population, we see the same trends as before—the city populations have been relatively stable over the last 20 years. What we also see is that, relative to each other, murders are more common in some cities and less so in others.
New York is a great example, with nearly 500 murders last year, a number on par with Philadelphia’s. But New York has over 8 million inhabitants. Philadelphia has just 1.6 million. Consequently New York’s homicide rate is a surprisingly low 5.9 per 100,000 people. Philadelphia’s, on the other hand? 35.6.
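The per-capita arithmetic is simple enough to sketch in Python. The counts and populations below are rounded, illustrative figures consistent with the discussion above, not the exact data I pulled:

```python
def homicide_rate(homicides, population, per=100_000):
    """Homicides per `per` residents."""
    return homicides / population * per

# Rounded, illustrative figures.
new_york = homicide_rate(500, 8_400_000)      # ≈ 6 per 100,000
philadelphia = homicide_rate(570, 1_600_000)  # ≈ 36 per 100,000

print(round(new_york, 1), round(philadelphia, 1))
```

Nearly identical murder counts, wildly different rates—which is exactly why equalising for population matters.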
Philadelphia is near the top of that list, with Washington and Chicago having similar, albeit lower, rates of 31.7 and 30.1, respectively. But sadly Detroit surpasses them all and is in a league of its own: 47.5 in 2021.
On Friday, I mentioned in brief that the East Coast was preparing for a storm. One of the cities the storm impacted was Boston, and naturally the Boston Globe covered the story. One aspect the paper covered? The snowfall amounts. They did so like this:
All the lack of information
This graphic fails to communicate the breadth and literal depth of the snow. We have two big reasons for that and they are both tied to perspective.
First we have a simple one: bars hiding other bars. I live in Greater Centre City, Philadelphia. That means lots of tall buildings. But if I look out my window, the tall buildings nearer me block my view of the buildings behind. That same approach holds true in this graphic. The tall red columns in southeastern Massachusetts block those of eastern and northeastern parts of the state and parts of New Hampshire as well. Even if we can still see the tops of the columns, we cannot see the bases and thus any real meaningful comparison is lost.
Second: distance. Pretty simple here as well: later today, go outside. Look at things on your horizon. Note that those things, while perhaps tall, such as a tree or a skyscraper, look relatively small compared to the things immediately around you. The same applies here. Bars of the same data, when at opposite ends of the map, will appear differently sized. Below I took the above screenshot and highlighted two observations that differed by only 0.5 inches of snow. But one of the boxes I had to draw—a rough proxy for the columns’ actual heights—is 44% larger than the other.
These bars should be about the same.
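Under a simple pinhole-perspective model, apparent height falls off with distance from the viewer, which is enough to produce a gap like the 44% above. A sketch with invented heights and distances (none of these numbers come from the Globe’s map):

```python
def apparent_height(true_height, distance, focal_length=1.0):
    """Projected height on the image plane: proportional to height / distance."""
    return focal_length * true_height / distance

# Two columns encoding nearly identical snowfall (20.5" vs 20.0"),
# placed at different, made-up depths in the scene.
near = apparent_height(20.5, distance=10.0)
far = apparent_height(20.0, distance=14.0)

print(near / far)  # ≈ 1.4 — the nearer bar draws over 40% taller
```

The exact ratio depends on camera placement, but any perspective projection will size equal values unequally—hence the preference for a flat map.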
This map probably looks cool to some people with its three-dimensional perspective and bright colours on a dark grey map. But it fails where it matters most: clearly presenting the regional differences in snowfall accumulation.
Compare the above to this graphic from the Boston office of the National Weather Service (NWS).
No, it does not have the same cool factor. And some of the labelling design could use a bit of work. But the use of a flat, two-dimensional map allows us to more clearly compare the ranges of snowfall and get a truer sense of the geographic patterns in this weekend’s storm. And in doing so, we can see some of the subtleties, for example the red pockets of greater snowfall amounts amid the wider orange band.
Credit for the Globe piece goes to John Hancock.
Credit for the NWS piece goes to the graphics department of NWS Boston.