I filtered the dataset to show only the 12 states where more than 20% of residents traveled out of state for abortion care. This is the most consequential deceptive choice in the visualization: it cherry-picks the states that most dramatically support the displacement narrative while hiding the majority of states where travel is minimal. I considered showing all 50 states on the dumbbell, but the effect was diluted. Most states have small gaps, and the chart became noisy. The filtered version is far more persuasive precisely because it excludes the counterevidence.
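For the record, the filter is a single pandas expression. A minimal sketch, assuming hypothetical column names (`state`, `pct_traveled_out`) and an illustrative file path rather than the Guttmacher dataset's actual schema:

```python
import pandas as pd

# Hypothetical schema: one row per state with the share of residents who
# obtained care out of state. Column names and path are assumptions, not
# the Guttmacher dataset's actual headers.
df = pd.read_csv("guttmacher_2023.csv")

# The deceptive filter: keep the 12 extreme states, silently drop the rest.
high_travel = df[df["pct_traveled_out"] > 0.20].sort_values(
    "pct_traveled_out", ascending=False
)
print(f"{len(high_travel)} of {len(df)} states survive the filter")
```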
Red is visually dominant and emotionally charged; gray recedes. By encoding the residence rate (the “true demand” number) in bold red and the occurrence rate (the “suppressed” number) in muted gray, the chart guides the viewer to see gray as the artificially deflated figure and red as the reality being hidden. An alternative was to use neutral colors for both and let the gap speak for itself, but that weakened the emotional pull. The current encoding essentially tells the viewer which number to trust before they’ve formed their own judgment.
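A sketch of the encoding with placeholder rates (not the real figures); the persuasive work is done entirely by two hex colors and the marker sizes:

```python
import matplotlib.pyplot as plt

# Placeholder rates for three states; the real chart shows all 12.
states = ["Missouri", "Mississippi", "South Dakota"]
occurrence = [0.3, 1.2, 1.5]   # muted gray: where procedures occurred
residence = [10.6, 9.8, 7.4]   # bold red: where patients live

fig, ax = plt.subplots()
for i in range(len(states)):
    ax.plot([occurrence[i], residence[i]], [i, i], color="lightgray", zorder=1)
    ax.scatter(occurrence[i], i, color="#9e9e9e", s=60, zorder=2)  # recedes
    ax.scatter(residence[i], i, color="#c0392b", s=90, zorder=3)   # dominates
ax.set_yticks(range(len(states)))
ax.set_yticklabels(states)
ax.set_xlabel("Rate per 1,000 women")
# plt.show() deferred until the annotation sketched below is added
```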
Missouri is annotated with a callout box reading “99% traveled out of state — 170 in-state vs 11,710 by residents.” The data is accurate, and Missouri genuinely is the most extreme case. But by placing it at the top of the chart with a prominent annotation, it becomes the cognitive anchor for interpreting every other state. Viewers unconsciously generalize from the most extreme example. I considered annotating a mid-range state instead, but that undermined the persuasive impact. The score is only −0.5 because the underlying data is truthful; the deception is in emphasis, not fabrication.
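Continuing the dumbbell sketch above and reusing its `ax` (the coordinates are placeholders), the anchor is a single annotate call:

```python
# The callout that anchors interpretation. The counts are the ones quoted
# in the annotation itself; coordinates are placeholders chosen to sit
# above Missouri's row (index 0 in the sketch above).
ax.annotate(
    "99% traveled out of state\n170 in-state vs 11,710 by residents",
    xy=(10.6, 0),                 # Missouri's red (residence) dot
    xytext=(4.0, 0.6),
    arrowprops=dict(arrowstyle="->"),
    bbox=dict(boxstyle="round", facecolor="#fdecea", edgecolor="#c0392b"),
)
plt.show()
```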
The title presupposes a causal mechanism: that clinic closures caused the cross-state travel. The chart actually shows a correlation between low clinic access and high out-of-state travel, but the title converts this into a confident causal claim. “Pushed” implies coercion and assigns agency to the closures themselves. I considered more neutral titles like “Occurrence vs. Residence Rates in High-Travel States,” but the editorial framing is what makes the chart persuasive at first glance: many viewers read the title and form their conclusion before examining the data.
The core visual encoding is a genuinely informative analytical choice. The occurrence-vs-residence gap is a real and well-documented metric in reproductive health research, and it directly measures displacement. This is the one design decision that is earnest: if you stripped away the cherry-picking, the color framing, and the editorialized title, the underlying comparison would still be analytically sound. It is the honest foundation that the deceptive choices exploit.
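The metric is simple arithmetic. A quick check against the Missouri counts quoted in the annotation:

```python
# Checking the displacement arithmetic with the Missouri counts above.
occurred_in_state = 170       # procedures performed in Missouri
by_residents = 11_710         # abortions obtained by Missouri residents
displaced = by_residents - occurred_in_state
print(f"{displaced / by_residents:.0%} traveled out of state")  # -> 99%
```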
The scatter plot displays every state (except DC), a trend line, and an R² value. This creates a strong impression of statistical rigor and completeness. The deception is that statistical completeness is not the same as analytical completeness: showing all states on one metric distracts from the fact that the most revealing metric (occurrence vs. residence comparison) is entirely absent. The R² badge acts as an authority signal that discourages further questioning. I initially tried the chart without R² and it felt less convincing — the number gives viewers a reason to stop analyzing.
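Both the trend line and the badge come from a single regression call; a minimal sketch with placeholder arrays rather than the real state values:

```python
import numpy as np
from scipy import stats

# Placeholder arrays; the real chart regresses all 50 state values.
clinics = np.array([0.2, 0.5, 1.1, 1.8, 2.4, 3.0])  # units assumed
rate = np.array([4.0, 6.5, 9.0, 11.2, 13.8, 15.1])

fit = stats.linregress(clinics, rate)
print(f"R² = {fit.rvalue ** 2:.2f}")   # the badge that stops analysis
# trend line: y = fit.intercept + fit.slope * x
```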
This is the most important deceptive choice in the scatter plot. Residence-based rates count abortions by where the patient lives, not where the procedure occurs, meaning that displaced abortions (e.g., a Missouri resident traveling to Illinois) still count toward Missouri’s rate. Despite this, restricted states still show lower residence-based rates, which the chart frames as evidence that restrictions “work.” By hiding the occurrence-vs-residence comparison, the chart suppresses the very evidence that would reveal displacement. I considered showing both metrics as a dual-axis chart, but that would have undermined the persuasive narrative entirely.
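A toy record-level example of the counting rule, with illustrative rows; the same four records produce two different tallies:

```python
import pandas as pd

# Toy records showing the counting rule. A Missouri resident treated in
# Illinois counts toward Illinois's occurrence tally but Missouri's
# residence tally. Rows are illustrative, not real data.
records = pd.DataFrame({
    "state_of_residence": ["MO", "MO", "MO", "IL"],
    "state_of_occurrence": ["IL", "IL", "MO", "IL"],
})
print(records["state_of_occurrence"].value_counts())  # IL 3, MO 1
print(records["state_of_residence"].value_counts())   # MO 3, IL 1
# Figure 2 plots only the residence tally, so the displacement visible
# in the occurrence column never reaches the viewer.
```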
The color scale maps clinic access to an emotional spectrum: teal for states with few clinics, red for states with many. This subtly frames clinic availability as a problem and restriction as the normative baseline. The effect is powerful precisely because it operates below conscious awareness. Viewers rarely interrogate why a particular color was chosen, but the emotional association shapes their interpretation. A neutral single-color scheme would have been more honest but less persuasive.
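A sketch of the mapping with illustrative clinic counts; `RdYlBu_r` here is a stand-in for whatever diverging colormap the chart actually uses:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative clinic counts; the framing comes entirely from mapping
# "many clinics" onto the red (alarm) end of a diverging colormap.
clinic_counts = np.array([2, 15, 60, 152])
norm = plt.Normalize(clinic_counts.min(), clinic_counts.max())
colors = plt.cm.RdYlBu_r(norm(clinic_counts))   # low -> blue, high -> red
# A neutral alternative: colors = plt.cm.Greys(norm(clinic_counts))
```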
Only ~10 states are labeled, and they are deliberately chosen: high-rate “blue” states (New York, New Jersey, California) at the top-left and low-rate “red” states (Wyoming, South Dakota, Utah) at the bottom-right. This anchors viewers on the most politically polarized comparisons and implicitly ties abortion rates to political identity rather than clinic access. The unlabeled middle states fade into the background. I considered labeling all states, but the chart became unreadable and the clean narrative was lost.
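A sketch of the selective labeling, with placeholder positions chosen only to mirror the layout described above:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Positions are placeholders; only the mechanism matters. The polarized
# extremes get names, the middle of the distribution stays anonymous.
pts = pd.DataFrame({
    "state": ["New York", "New Jersey", "California", "Ohio", "Kansas",
              "Wyoming", "South Dakota", "Utah"],
    "x": [0.5, 0.7, 0.9, 2.0, 2.2, 3.4, 3.6, 3.8],
    "y": [14.0, 13.2, 13.8, 8.1, 7.6, 4.0, 3.7, 4.3],
})
labeled = {"New York", "New Jersey", "California",
           "Wyoming", "South Dakota", "Utah"}

fig, ax = plt.subplots()
ax.scatter(pts["x"], pts["y"], color="steelblue")
for _, row in pts.iterrows():
    if row["state"] in labeled:  # unlabeled middle states fade out
        ax.annotate(row["state"], (row["x"], row["y"]),
                    xytext=(4, 4), textcoords="offset points", fontsize=8)
plt.show()
```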
“Fewer Clinics, Fewer Abortions” presents a correlation as an implied causal chain. The structure (“X, Y”) suggests that the first causes the second. In reality, conservative states have both fewer clinics and lower baseline demand for abortion. The causal arrow almost certainly runs from cultural/political attitudes to both variables simultaneously, not from clinic counts to abortion rates. I considered “Clinic Access and Abortion Rates Across 50 States” but it felt descriptive rather than persuasive. The causal framing is what gives the chart its argumentative force.
The most straightforward part of this exercise was the data work: the Guttmacher dataset is clean, well-structured, and rich enough to support both arguments without any fabrication. What surprised me was how little it took to flip the narrative. The two visualizations use the same source, the same year, and overlapping subsets of the same columns. The only difference is which metric is foregrounded and which is hidden. Figure 1 shows the gap between occurrence and residence rates, revealing displacement. Figure 2 shows only residence rates, concealing it. Neither chart contains a single false data point, yet they lead to opposite conclusions. The most effective persuasive techniques were not visual at all. They were decisions about what not to show.
I now think “ethical visualization” is less about any individual design choice and more about whether the overall presentation enables or prevents the viewer from reaching an independent conclusion. Color choices, annotations, and even cherry-picking can be acceptable when they serve clarity. The line is crossed when design choices systematically prevent the viewer from noticing what’s missing. Figure 1’s cherry-picking is deceptive not because filtering data is inherently wrong, but because the filter hides contradictory evidence and the chart offers no signal that it exists. Figure 2’s R² badge is deceptive not because statistics are misleading, but because it weaponizes the viewer’s trust in quantitative rigor to shut down further inquiry. The hard boundary I’d draw: a persuasive visualization becomes misleading when a reasonably attentive viewer cannot, from the chart alone, identify the strongest counterargument to the claim being made.