#misleading graphs
Text
Do you ever read someone’s argument where they are using a graph and you think “They’re reading that wrong, it’s not supporting their viewpoint as much as they think”?
#I want to correct them#however#I do not care enough about them or the subject in general#I know that realistically I only have a surface amount of information to bring to the discussion/argument#I just know that the graph is misleading#the y axis went up by percentages of 5 ending at 40%#The X axis has the years 1900 1940 1960 1980 2018#this is a line graph so it’s showing huge ass jumps but of course you would with x and y axis like that#in reality the numbers are on steady inclines and declines over 20-40 year periods#probably following world events the economies etc#but it’s more interesting to see the pretty lines jump like my heart monitor after having to run for like half a minute#the largest percentage difference that mattered to the argument was a 7% increase over a 60year period
1 note
Text
In case you don't think your figures look professional enough, this is Carl Linnaeus's figure from his thesis depicting plant pollination.
0 notes
Text
I don’t think Fox believes in fall damage. He 100% believes he can survive any fall, to the great frustration of his medics. He’s gotten into enough arguments about it that he makes a graph of the height of the fall vs the amount of time spent in the med bay afterwards. While technically correct, the results are misleading because Fox refuses to stay in the med bay longer than 26 hours (that’s their record; he was sedated most of the time, but he’s since become immune to sedatives).
537 notes
Text
Hey. Hey buddy. You maybe... wanna... run the numbers into those pie charts again?
Especially the Undertale one?
I don't see a way that 15.8k is 25% of a graph while 5.2k is 23%.
Hey guys remind me how many properties there are on ao3 where m/m outnumbers f/f for no good reason
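For what it's worth, the mismatch is easy to check with arithmetic alone. The 15.8k and 5.2k figures come from the post above; everything else is just implied by them:

```python
# Sanity check on the pie chart percentages quoted above.
# If 15.8k really is 25% of the whole, the implied total is fixed:
total = 15.8e3 / 0.25          # 63,200 works in total
# ...and 5.2k's share of that total would then be:
share = 5.2e3 / total
print(f"implied total: {total:.0f}, 5.2k share: {share:.1%}")  # about 8.2%, nowhere near 23%
```

No consistent total makes both slices true at once, which is exactly the complaint.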
11K notes
Text
For the past six years or so, this graph has been making its rounds on social media, always reappearing at conveniently timed moments…
The insinuation is loud and clear: parallels abound between 18th-century France and 21st-century USA. Cue the alarm bells—revolution is imminent! The 10% should panic, and ordinary folk should stock up on non-perishables and, of course, toilet paper, because it wouldn’t be a proper crisis without that particular frenzy. You know the drill.
Well, unfortunately, I have zero interest in commenting on the political implications or the parallels this graph is trying to make with today’s world. I have precisely zero interest in discussing modern-day politics here. And I also have zero interest in addressing the bottom graph.
This is not going to be one of those "the [insert random group of people] à la lanterne” (1) kind of posts. If you’re here for that, I’m afraid you’ll be disappointed.
What I am interested in is something much less click-worthy but far more useful: how historical data gets used and abused and why the illusion of historical parallels can be so seductive—and so misleading. It’s not glamorous, I’ll admit, but digging into this stuff teaches us a lot more than mindless rage.
So, let’s get into it. Step by step, we’ll examine the top graph, unpick its assumptions, and see whether its alarmist undertones hold any historical weight.
Step 1: Actually Look at the Picture and Use Your Brain
When I saw this graph, my first thought was, “That’s odd.” Not because it’s hard to believe the top 10% in 18th-century France controlled 60% of the wealth—that could very well be true. But because, in 15 years of studying the French Revolution, I’ve never encountered reliable data on wealth distribution from that period.
Why? Because to the best of my knowledge, no one was systematically tracking income or wealth across the population in the 18th century. There were no comprehensive records, no centralised statistics, and certainly no detailed breakdowns of who owned what across different classes. Graphs like this imply data, and data means either someone tracked it or someone made assumptions to reconstruct it. That’s not inherently bad, but it did get my spider senses tingling.
Then there’s the timeframe: 1760–1790. Thirty years is a long time—especially when discussing a period that included wars, failed financial policies, growing debt, and shifting social dynamics. Wealth distribution wouldn’t have stayed static during that time. Nobles who were at the top in 1760 could be destitute by 1790, while merchants starting out in 1760 could be climbing into the upper tiers by the end of the period. Economic mobility wasn’t common, but over three decades, it wasn’t unheard of either.
All of this raises questions about how this graph was created. Where’s the data coming from? How was it measured? And can we really trust it to represent such a complex period?
Step 2: Check the Fine Print
Since the graph seemed questionable, the obvious next step was to ask: Where does this thing come from? Luckily, the source is clearly cited at the bottom: “The Income Inequality of France in Historical Perspective” by Christian Morrisson and Wayne Snyder, published in the European Review of Economic History, Vol. 4, No. 1 (2000).
Great! A proper academic source. But, before diving into the article, there’s a crucial detail tucked into the fine print:
“Data for the bottom 40% in France is extrapolated given a single data point.”
What does that mean?
Extrapolation is a statistical method used to estimate unknown values by extending patterns or trends from a small sample of data. In this case, the graph’s creator used one single piece of data—one solitary data point—about the wealth of the bottom 40% of the French population. They then scaled or applied that one value to represent the entire group across the 30-year period (1760–1790).
Put simply, this means someone found one record—maybe a tax ledger, an income statement, or some financial data—pertaining to one specific year, region, or subset of the bottom 40%, and decided it was representative of the entire demographic for three decades.
Let’s be honest: you don’t need a degree in statistics to know that’s problematic. Using a single data point to make sweeping generalisations about a large, diverse population (let alone across an era of wars, famines, and economic shifts) is a massive leap. In fact, it’s about as reliable as guessing how the internet feels about a topic from a single tweet.
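If it helps to see the mechanics, here is a minimal sketch of single-point extrapolation. Every number in it is hypothetical, chosen only to illustrate why the method is shaky:

```python
# A minimal sketch of what "extrapolated from a single data point" means.
# All numbers are hypothetical, purely to illustrate the method's weakness.
years = list(range(1760, 1791))

# One lone observation: say, the bottom 40% held 6% of income in 1775.
single_observation = 0.06

# Flat extrapolation: pretend that one value holds for every year.
extrapolated = {year: single_observation for year in years}

# Reality would fluctuate with wars, famines, and harvests; a flat line
# drawn from one point simply erases all of that variation.
print(extrapolated[1760], extrapolated[1790])  # 0.06 0.06, identical by construction
```

The output is constant by definition, not because the underlying economy was.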
This immediately tells me that whatever numbers they claim for the bottom 40% of the population are, at best, speculative. At worst? Utterly meaningless.
It also raises another question: What kind of serious journal would let something like this slide? So, time to pull up the actual article and see what’s going on.
Step 3: Check the Sources
As I mentioned earlier, the source for this graph is conveniently listed at the bottom of the image. Three clicks later, I had downloaded the actual article: “The Income Inequality of France in Historical Perspective” by Morrisson and Snyder.
The first thing I noticed while skimming through the article? The graph itself is nowhere to be found in the publication.
This is important. It means the person who created the graph didn’t just lift it straight from the article—they derived it from the data in the publication. Now, that’s not necessarily a problem; secondary analysis of published data is common. But here’s the kicker: there’s no explanation in the screenshot of the graph about which dataset or calculations were used to make it. We’re left to guess.
So, to figure this out, I guess I’ll have to dive into the article itself, trying to identify where they might have pulled the numbers from. Translation: I signed myself up to read 20+ pages of economic history. Thrilling stuff.
But hey, someone has to do it. The things I endure to fight disinformation...
Step 4: Actually Assess the Sources Critically
It doesn’t take long, once you start reading the article, to realise that regardless of what the graph is based on, it’s bound to be somewhat unreliable. Right from the first paragraph, the authors of the paper point out the core issue with calculating income for 18th-century French households: THERE IS NO DATA.
The article is refreshingly honest about this. It states multiple times that there were no reliable income distribution estimates in France before World War II. To fill this gap, Morrisson and Snyder used a variety of proxy sources like the Capitation Tax Records (2), historical socio-professional tables, and Isnard’s income distribution estimates (3).
After reading the whole paper, I can say their methodology is intriguing and very reasonable. They’ve pieced together what they could by using available evidence, and their process is quite well thought-out. I won’t rehash their entire argument here, but if you’re curious, I’d genuinely recommend giving it a read.
Most importantly, the authors are painfully aware of the limitations of their approach. They make it very clear that their estimates are a form of educated guesswork—evidence-based, yes, but still guesswork. At no point do they overstate their findings or present their conclusions as definitive.
As such, instead of concluding with a single, definitive version of the income distribution, they offer multiple possible scenarios.
It’s not as flashy as a bold, tidy graph, is it? But it’s far more honest—and far more reflective of the complexities involved in reconstructing historical economic data.
Step 5: Run the numbers
Now that we’ve established the authors of the paper don’t actually propose a definitive income distribution, the question remains: where did the creators of the graph get their data? More specifically, which of the proposed distributions did they use?
Unfortunately, I haven’t been able to locate the original article or post containing the graph. Admittedly, I haven’t tried very hard, but the first few pages of Google results just link back to Twitter, Reddit, Facebook, and Tumblr posts. In short, all I have to go on is this screenshot.
I’ll give the graph creators the benefit of the doubt and assume that, in the full article, they explain where they sourced their data. I really hope they do—because they absolutely should.
That being said, based on the information in Morrisson and Snyder’s paper, I’d make an educated guess that the data came from Table 6 or Table 10, as these are the sections where the authors attempt to provide income distribution estimates.
Now, which dataset does the graph use? Spoiler: None of them.
How can we tell? Since I don’t have access to the raw data or the article where this graph might have been originally posted, I resorted to a rather unscientific method: I used a graphical design program to divide each bar of the chart into 2.5% increments and measure the approximate percentage for each income group.
Here’s what I found:
Now, take a moment to spot the issue. Do you see it?
The problem is glaring: NONE of the datasets from the paper fit the graph. Granted, my measurements are just estimates, so there might be some rounding errors. But the discrepancies are impossible to ignore, particularly for the bottom 40% and the top 10%.
In Morrisson and Snyder’s paper, the lowest estimate for the bottom 40% (1st and 2nd quintiles) is 10%. Even if we use the most conservative proxy, the Capitation Tax estimate, it’s 9%. But the graph claims the bottom 40% held only 6%.
For the top 10% (10th decile), the highest estimate in the paper is 53%. Yet the graph inflates this to 60%.
Step 6: For fun, I made my own bar charts
Because I enjoy this sort of thing (yes, this is what I consider fun—I’m a very fun person), I decided to use the data from the paper to create my own bar charts. Here’s what came out:
What do you notice?
While the results don’t exactly scream “healthy economy,” they look much less dramatic than the graph we started with. The creators of the graph have clearly exaggerated the disparities, making inequality seem starker than even the paper’s own estimates support.
Step 7: Understand the context before drawing conclusions
Numbers, by themselves, mean nothing. Absolutely nothing.
I could tell you right now that 47% of people admit to arguing with inanimate objects when they don’t work, with printers being the most common offender, and you’d probably believe it. Why? Because it sounds plausible—printers are frustrating, I’ve used a percentage, and I’ve phrased it in a way that sounds “academic.”
You likely wouldn’t even pause to consider that I’m claiming 3.8 billion people argue with inanimate objects. And let’s be real: 3.8 billion is such an incomprehensibly large number that our brains tend to gloss over it.
If, instead, I said, “Half of your friends probably argue with their printers,” you might stop and think, “Wait, that seems a bit unlikely.” (For the record, I completely made that up—I have no clue how many people yell at their stoves or complain to their toasters.)
The point? Numbers mean nothing unless we put them into context.
The original paper does this well by contextualising its estimates, primarily through the calculation of the Gini coefficient (4).
The authors estimate France’s Gini coefficient in the late 18th century to be 0.59, indicating significant income inequality. However, they compare this figure to other regions and periods to provide a clearer picture:
Amsterdam (1742): Much higher inequality, with a Gini of 0.69.
Britain (1759): Lower inequality, with a Gini of 0.52, which rose to 0.59 by 1801.
Prussia (mid-19th century): Far less inequality, with a Gini of 0.34–0.36.
This comparison shows that income inequality wasn’t unique to France. Other regions experienced similar or even higher levels of inequality without spontaneously erupting into revolution.
Accounting for Variations
The authors also recalculated the Gini coefficient to account for potential variations. They assumed that the income of the top quintile (the wealthiest 20%) could vary by ±10%. Here’s what they found:
If the top quintile earned 10% more, the Gini coefficient rose to 0.66, placing France significantly above other European countries of the time.
If the top quintile earned 10% less, the Gini dropped to 0.55, bringing France closer to Britain’s level.
Ultimately, the authors admit there’s uncertainty about the exact level of inequality in France. Their best guess is that it was comparable to other countries or somewhat worse.
Step 8: Drawing Some Conclusions
Saying that most people in the 18th century were poor and miserable—perhaps the French more so than others—isn’t exactly a compelling statement if your goal is to gather clicks or make a dramatic political point.
It’s incredibly tempting to look at the past and find exactly what we want to see in it. History often acts as a mirror, reflecting our own expectations unless we challenge ourselves to think critically. Whether you call it wishful thinking or confirmation bias, it’s easy to project the future onto the past.
Looking at the initial graph, I understand why someone might fall into this trap. Simple, tidy narratives are appealing to everyone. But if you’ve studied history, you’ll know that such narratives are a myth. Human nature may not have changed in thousands of years, but the contexts we inhabit are so vastly different that direct parallels are meaningless.
So, is revolution imminent? Well, that’s up to you—not some random graph on the internet.
Notes
(1) À la lanterne was a revolutionary cry during the French Revolution, symbolising mob justice, where individuals were sometimes hanged from lampposts as a form of public execution.
(2) The capitation tax was a fixed head tax implemented in France during the Ancien Régime. It was levied on individuals, with the amount owed determined by their social and professional status. Unlike a proportional income tax, it was based on pre-assigned categories rather than actual earnings, meaning nobles, clergy, and commoners paid different rates regardless of their actual wealth or income.
(3) Achille-Nicolas Isnard was an 18th-century French economist. His estimates attempted to describe the theoretical distribution of income among different social classes in pre-revolutionary France. Isnard’s work aimed to categorise income across groups like nobles, clergy, and commoners, providing a broad picture of economic disparity during the period.
(4) The Gini coefficient (or Gini index) is a widely used statistical measure of inequality within a population, specifically in terms of income or wealth distribution. It ranges from 0 to 1, where 0 indicates perfect equality (everyone has the same income or wealth), and 1 represents maximum inequality (one person or household holds all the wealth).
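For the curious, here is one common way to approximate a Gini coefficient from grouped data such as quintile income shares (the trapezoid approximation of the area under the Lorenz curve). The shares below are hypothetical, not Morrisson and Snyder's figures; they are merely skewed enough to land near the 0.59 range discussed above:

```python
def gini_from_quintiles(shares):
    """Gini coefficient from bottom-to-top quintile income shares,
    via the trapezoid approximation of the area under the Lorenz curve."""
    assert abs(sum(shares) - 1.0) < 1e-9, "shares must sum to 1"
    gini = 1.0
    cumulative = 0.0
    for share in shares:
        # Each quintile covers 20% of the population; subtract twice the
        # trapezoid area under the Lorenz curve for that segment.
        gini -= 0.2 * (2 * cumulative + share)
        cumulative += share
    return gini

# Hypothetical quintile shares (bottom 20% ... top 20%). NOT the paper's data,
# just a distribution skewed enough to land near 0.59.
print(round(gini_from_quintiles([0.03, 0.05, 0.08, 0.14, 0.70]), 2))  # → 0.57
```

A perfectly equal distribution ([0.2] * 5) returns 0, as the definition in note (4) requires.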
#frev#french revolution#history#disinformation#income inequality#critical thinking#amateurvoltaire's essay ramblings#don't believe everything you see online#even if you really really want to
221 notes
Note
https://x.com/ferra_ria/status/1817615372104740934?s=46
what is that ass top speed omg how is it so low compared to everyone? were we not running a low downforce setup i am confused
This is the graph shown in the tweet. And this is a very good example of how data can be very misleading without context.
So the reason for this is simple. Charles was not in DRS as much as the others. He made passes quickly, while many of these others had longer runs in DRS. The speed difference here is made by DRS, not the car. All this shows is who was in DRS and for the longest. It doesn't show anything substantive about the cars.
And it's pretty clear because, while the Ferrari is not the fastest car, are you trying to tell me it's slower than the Sauber? Really?
This kph difference is basically the exact difference that comes from DRS. All this shows is that Charles didn't have any long runs in DRS compared to the others. He also made his passes quickly instead of being stuck. Running on the straights without DRS was around 314 kph (give or take) for everyone.
Mid field cars often have the chance to hit higher speeds simply because they are more likely to be in DRS for longer.
And this is why data in context is important. Just presenting the raw data this way does not inform anything about the cars and the race and borders on misinformation.
#luci answers#charles leclerc#scuderia ferrari#my beef with big data accounts is never ending when they put out shit like this
120 notes
Note
Hey, as someone who has never done statistics before, is there a noob-friendly way to calculate how many kudos the average fic on ao3 has? Or does that statistic already exist somewhere?
I kinda feel it would be something like <50 kudos but that’s just me.
--
What kind of "average"?
One thing you have to consider is whether a straight average is misleading.
My guess is that many fandoms have a pattern where the majority of works have 0 or 1 kudos. Then there's some single popular fic with a gajillion kudos. Is an average of 3 kudos representative of that fandom?
It's like life expectancy. People are all: "Everyone in the Middle Ages died at 35!!! Wharrrrrrrgarbl!" Except they didn't. Shittons of people lived to be what we would consider a fairly normal old age now. The average was low because fucktons of small children died. If you made it past 5, you had an okay-ish life expectancy thereafter.
What you most likely actually want is a graph showing X stories with 0 kudos, Y with 1, Z with 2, etc.
You can use the Works Search to search by number of kudos. Currently, I'm getting:
How you choose to break it out really affects the vibe of the chart even if the numbers are all accurate. How you present numbers matters.
So decide what you're trying to say and make the chart brackets reflect that.
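To see how badly a straight average can mislead on this kind of data, here's a toy example. The kudos counts are invented, but the shape (lots of near-zero works, one breakout hit) matches the pattern described above:

```python
import statistics

# Hypothetical kudos counts for a small fandom: most works get 0-2 kudos,
# one breakout fic gets thousands. All numbers invented for illustration.
kudos = [0, 0, 0, 1, 1, 1, 1, 2, 2, 3000]

mean = statistics.mean(kudos)      # 300.8, dragged up by the single hit
median = statistics.median(kudos)  # 1.0, what a "typical" fic actually gets

print(f"mean: {mean}, median: {median}")
```

Same numbers, two wildly different "averages": which one you report is itself a framing choice.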
53 notes
Text
People smarter than me tell me that this graph is a little misleading, but the sentiment on a whole is one I agree with: we need to see some charts and data that show us exactly what being servants to a billionaire class costs the rest of us.
#billionaires should not exist#ceo mindset#ceo down#fuck corporate greed#end citizens united#tax the rich#eat the rich#anti capitalism#healthcare for all#universal basic income
41 notes
Text
Silicon Valley has its own variety of racism. And you'll never guess who is the leading figure in spreading this poisonous ideology. [CLUE: He left South Africa at age 18 when the country had just begun the process of eliminating apartheid and moving to majority black rule.]
Racist pseudo-science is making a comeback thanks to Elon Musk. Recently, the tech billionaire has been retweeting prominent race scientist adherents on his platform X (formerly known as Twitter), spreading misinformation about racial minorities’ intelligence and physiology to his audience of 176.3 million followers—a dynamic my colleague Garrison Hayes analyzes in his latest video for Mother Jones. X, and before it Twitter, has a long-held reputation for being a breeding ground for white supremacy. [ ... ] Musk is amplifying users who will incorporate cherry-picked data and misleading graphs into their argument as to why people of European descent are biologically superior, showing how fringe accounts, like user @eyeslasho, experience a drastic jump in followers after Musk shares their tweets. The @eyeslasho account has even thanked Musk for raising “awareness” in a thread last year. (Neither @eyeslasho nor Musk, via X, responded to Garrison’s request for an interview.) “People are almost more susceptible to simpler charts with race and IQ than they are to the really complicated stuff,” Will Stancil, a lawyer and research fellow at the Institute on Metropolitan Opportunity, told Garrison in a video interview. He added: “This is the most basic statistical error in the book: Correlation does not equal causation.”
Racist pseudo-science simply sprays cologne on the smelly bullshit of plain old irrational bigotry. Warped theology, which was used to justify slavery, passed the baton of officially sanctioned race prejudice to pseudo-science in the late 19th and early 20th century.
DNA and other real science not only undermine the pseudo-science of racism but have also revealed that "race" is not even a valid scientific concept among humans. What is widely regarded as race is defined by rather generalized phenotypes.
There has always been petty bigotry. But racial pseudo-science has been used to justify exploitation, colonialism, and territorial expansion by the powerful and ignorant. Elon Musk certainly qualifies as both powerful and ignorant.
In 2022, just one week after Musk purchased Twitter, the Center for Countering Digital Hate —an online civil rights group— found that racial slurs against Black people had increased to three times the year’s average, with homophobic and transphobic epithets also seeing a significant uptick, according to the Associated Press. More than a year later, Musk made headlines once again for tweeting racist dog whistles in a potential attempt to “woo” a recently fired Tucker Carlson. But his new shift into sharing tech-bro-friendly bigotry carries its own unique set of consequences.
If you are still on Twitter/X then you are indirectly supporting the propagation of pseudo-scientific racism – as well as just plain hate. Like quitting alcohol and tobacco, ditching Twitter/X can be difficult. But after doing so, you'll eventually notice how much better you feel.
#racism#white supremacy#pseudo-science#silicon valley#twitter#x#elon musk#hate speech#center for countering digital hate#leave twitter#quit twitter#delete twitter
84 notes
Note
https://twitter.com/ZakugaMignon/status/1739703106466627976?t=6BRheBvMK4MlCt5gXaj4ig&s=19
this is a comically misleading graph lol (copy-pasted it below)
like you look at this and obviously think 'wow training an AI model is so much more carbon-intensive than planes or cars' -- but if you look at the actual units being chosen, they're fucking stupid. like, why is the air travel one the only one measured per passenger? the AI model will presumably be used by more than one person. fuck, even the average car will often be transporting more than one person! why is the car the only one that has its lifetime use and manufacturing cost combined when the AI model only has its manufacturing cost and the plane only has the cost of a single flight divided per passenger?
these are absolutely fucking nonsensical things to compare, and obviously chosen in ways that make AI look disproportionately bad (and air travel look disproportionately good, for some reason?). i'm not even touching the profoundly Fucked political implications of measuring the CEO emissions of 'a human life'
but let's, just for funsies, assume that these are reasonable points of comparison -- in 2021, 9 million cars were made in the USA. for the environmental impact of training AI to be even equal to the effects that those cars will have over their lifetimes (which this graph obviously seeks to imply), there would have to be over a million AI models trained in that time. which--hey, hang on a minute, that reminds me, what the fuck is 'training an AI model'! AI models come in all kinds of sizes -- i don't think the machine learning model that the spiderverse animators used to put the outlines on miles' face cost as much to train as, say, DALL-E 3! there is no indicator that this is an 'average' AI model, or whether this is on the low or high end of the scale.
so having just had that above question, i went to look at the original study and lol, lmao even:
the 626,000 lbs figure is actually just the cost of that one specific model, and of course the most expensive in the study. there's no convincing evidence that this is a reasonable benchmark for most AI training -- for all this study's extremely limited set of examples tells us, this could be an enormous outlier in either direction!
so yeah, this is officially Bad Data Visualization, & if you originally saw this and thought "oh, it looks like AI art really has a huge environmental impact!" then, like, you're not stupid -- the person making this chart obviously wanted you to think that -- but i recommend taking some time when you look at a chart like this to ask questions like 'how was this data obtained?' and 'are these things it makes sense to compare on the same scale?', because there's a lot of misleading charts out there, both purposefully and through sheer incompetence.
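a toy calculation makes the unit-mixing trick concrete. every emission figure below is an invented placeholder; the point is purely how the choice of denominator flips the ranking:

```python
# How denominator choice games a comparison. All emission figures below are
# invented placeholders -- the point is the arithmetic, not the values.
plane_flight_total = 1_000.0   # hypothetical tonnes CO2 for one flight
passengers = 300

car_lifetime_total = 60.0      # hypothetical tonnes CO2, manufacture + lifetime use

model_training_total = 300.0   # hypothetical tonnes CO2 for one training run
model_users = 1_000_000

# Mixed units, as in the chart: per-passenger plane vs. total car vs. total model
mixed = {
    "plane (per passenger)": plane_flight_total / passengers,   # 3.33
    "car (lifetime total)": car_lifetime_total,                 # 60
    "AI model (training total)": model_training_total,          # 300
}

# Same units: divide everything by the people it serves, and the story flips
per_person = {
    "plane (per passenger)": plane_flight_total / passengers,   # 3.33
    "car (per occupant, ~1.5 avg)": car_lifetime_total / 1.5,   # 40
    "AI model (per user)": model_training_total / model_users,  # 0.0003
}
print(max(mixed, key=mixed.get), "|", max(per_person, key=per_person.get))
```

same fake numbers, opposite villain, depending entirely on which denominators you quietly pick.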
370 notes
Text
Hey guys I don’t usually pull out my infectious disease epidemiologist card but I recommend that if you see the blog below you approach all claims with extreme skepticism. This account seems to post a lot of screenshots of tweets and sensationalist headlines, with misleading, undersourced graphs, stats, and forecasts from uncredentialed people with big claims (read: grifters and conspiracy theorists).
I do not judge anyone for being bamboozled by these - they put a lot of effort into looking convincing. I’m just piping in as a so-called “expert” that there are other, better sources for COVID-19 info and forecasts. Let me know if you want me to post those separately, or feel free to DM me.
Peace out everybody and happy new year!!!!
18 notes
Text
You’re not supposed to be like me. You deserve to be like you.
Never think, “I’m probably not really aromantic, because I’m not as completely aromantic as that person.”
Never think, “I’m probably not really asexual, because I’m not as completely asexual as that person.”
People like me are over-represented on the aromantic and asexual spectra, because we have the time and social appetite to be more visible. It can give the misleading impression that we represent what aromanticism, asexuality, or aromantic asexuality look like.
I was in the middle of writing a 1,500 word essay complete with scatter graphs and shit, but this was important, so for now I’ll just say it, and prove it later, because I think you know this is true, or at least probable:
There are a lot of people who feel like they experience a little less romantic or sexual attraction than other people do, but who still have found a way to make a relationship work. Many of them will never even hear about aromanticism or asexuality, and if they do they’ll think it must apply to someone other than them, because they’re with someone. Their “invisibility” (by way of not self-identifying as aspec) means their ends of the asexual and aromantic spectra are underrepresented or even erased.
1 2 3 4 5 6 7 8 9 10
On this scale, 5 and 6 are safely in the middle of the purple numbers.
1 2 3 4 5 6 7 8 9 10
On this scale, where 1-4 have excluded themselves, 5 and 6 are on the edge of the purple numbers.
The under representation or erasure of non-self-identifying aromantic and asexuals means you might feel like more of an edge-case or an outlier than you are, because it’s the real edge-cases and outliers—the very asexual or very aromantic—who generate most of the written discourse about aromanticism or asexuality, because we have more time to, and we have a stronger motivation to connect with others like us.
But you belong here. You’re probably the invisible majority of us, and have a lot to teach us about your far-more-complicated end of our spectra.
26 notes
Text
Also preserved in our archive (Daily updates!)
By Benjamin Mateus
Stanford University held a conference last month with the misleading title, “Pandemic Policy: Planning the Future, Assessing the Past.” Given the utter bankruptcy of the US and global policy in the ongoing COVID-19 pandemic, one would conclude that a discussion on how the world can address the current and future pandemics is of immense importance and has significant relevance to international public health policies.
However, the one-day conference held at the prestigious university was funded through Collateral Global and supported by Brownstone Institute, promoters of pandemic misinformation and COVID-19 contrarians. It was the opposite of a serious discussion on pandemic preparedness.
To place these organizations into a proper perspective, it bears noting that Robert Dingwall, a British sociologist who has been heavily promoted by Collateral Global, wrote on his blog in March 2020 that the elderly would be better off to die from COVID-19 than to be protected and later die from a degenerative disease like dementia. This was a thinly disguised version of fascistic eugenics—weeding out the “unfit” from society.
The Stanford symposium showcased a panel of discredited scientists and supposed policy health experts associated with the reactionary Great Barrington Declaration, better characterized as a manifesto of death, set on promoting the notion that no broad-based public health initiatives should have ever been undertaken during the COVID-19 pandemic or when the next pandemic strikes.
At the core of the debunked “declaration” is the claim that there can be “focused protection” against the pandemic for those at high risk, which would allow those at minimal risk of death to lead normal lives while building up immunity to the virus through natural immunity.
Well-respected global health advocate Peter Hotez said of the conference, “This is awful, a full-on anti-science agenda (and revisionist history), tone deaf to how this kind of rhetoric contributed to the deaths of thousands of Americans during the pandemic by convincing them to shun vaccines or minimize COVID.”
The panel included discredited figures like Dr. Jay Bhattacharya, a Stanford public policy professor; Dr. Scott Atlas, former Trump administration adviser on the Coronavirus Task Force; and Anders Tegnell, Sweden’s former state epidemiologist, who advocated for a policy of mass infection to achieve “herd immunity” that had horrendous consequences for the population, in particular the elderly and most frail. Tegnell’s most consequential remark during the conference gave the flavor of the event: “We have focused so much on mortality as a measure of outcome, but there are more important things.”
Also on the panel was Marty Makary, a prominent Johns Hopkins University surgeon, who had repeatedly predicted that the population was on the verge of achieving natural immunity and that the pandemic would thus come to an end. Joining him were Oxford Professor of Epidemiology Sunetra Gupta, one of the original signers of the declaration with Bhattacharya, and Harvard University biostatistician Martin Kulldorff.
[Graph: COVID deaths in the US, Sweden and Norway (which adopted a far more rigorous pandemic mitigation program). Photo: Our World in Data]

Gupta has called for mass infection of the young and declared during the conference that her idea of focused protection had evolved into what she termed “individual risk reduction,” where each person would decide for him or herself the level of protection and mitigation they wanted to assume during a deadly outbreak. This is the literal opposite of public health, treating infection with a highly contagious and potentially lethal disease as a purely individual matter.
That institutions like Johns Hopkins, Harvard and Stanford are at the forefront of promoting such anti-science and anti-public health initiatives speaks to the deep political and moral decline in academic circles. Similarly, these “elite” institutions have embraced censorship and attacks on democratic rights of students protesting the US backing of Israel’s genocidal policies in Gaza.
Closing remarks at the Stanford conference were given by John Ioannidis, professor of epidemiology and one of the principal investigators of the fallacious, non-peer-reviewed Santa Clara County study, released in April 2020, which suggested that COVID-19 was no deadlier than the flu and that the pandemic measures to protect populations needed to be lifted forthwith.
At the time of that study, the COVID-19 pandemic was inundating the healthcare system of New York City. The CDC had noted that close to 20,000 people had died in the three-month window (March through May), with an overall crude fatality rate of 9.2 percent. Also, 30 percent of hospitalized patients with laboratory-confirmed COVID-19 were known to have died.
Bhattacharya, who had locked arms with AFT President Randi Weingarten in forcing students and teachers back into schools in 2021 and served as a pandemic adviser to Florida’s fascistic Governor Ron DeSantis, attempted to sell the conference as a forum for people with opposing views coming together to air out their differences.
“What can we do in the future? The pandemic was by any measure a disaster,” he declared. Although he cited the correct number of deaths and economic turmoil caused by the pandemic affecting the poorest in the world, he blamed these losses, not on the failure to carry out systematic public health measures but on the measures themselves. It was a translation into academic jargon of the notorious declaration by New York Times columnist Thomas Friedman that “the cure can’t be worse than the disease.”
Bhattacharya had the audacity to assert, “this conference was four years too late, but this is not too late, this is not the last pandemic the world will face.” The purpose of his efforts to codify the perspectives put forth by the Great Barrington Declaration is to ensure no real effort is taken by any government to address any threat, including the current bird flu outbreak that threatens to ignite another pandemic.
His ideas have nothing to do with the field of epidemiology or any scientific comprehension of the nature of pandemics. If he has a bone to pick with the Biden administration over its response to the pandemic, it is that Biden and Harris did not adopt the mass infection policy officially from the beginning, but only implemented it piecemeal.
Additionally, Bhattacharya has positioned himself as a fellow traveler of the anti-vaxxers, promoting the patently false notions that the current mRNA vaccines are unsafe and that the process through which they were brought forward violated safety standards.
He wrote for the Brownstone Institute a report published on November 16, 2022, stating, “The Biden plan enshrines former president Donald Trump‘s Operation Warp Speed as the model response to the next century of pandemics. Left unsaid is that, for the new pandemic plan to work as envisioned, it will require us to conduct dangerous gain-of-function research. It will also require cutting corners in the evaluation of the safety and efficacy of novel vaccines. And while the studies are underway, politicians will face tremendous pressure to impose draconian lockdowns to keep the population ‘safe’.”
Scott Atlas blurted out the real purpose of the conference. Reading a prepared statement, he said that the lockdowns failed to stop the dying, and they failed to stop the spread. He blamed the economic lockdowns for the excess deaths rather than the virus. He blamed Dr. Anthony Fauci for implementing the lockdowns and not enforcing “targeted protection.”
Atlas later also called for complete US divestment from the World Health Organization and called for the termination of all middle-level scientists at the CDC, for which he received applause from his colleagues on the panel.
The Stanford conference was entirely divorced from the actual history of the pandemic, particularly its early weeks. The initial outbreak of COVID in Wuhan showed it was propagated by airborne transmission and was both highly contagious and lethal.
When, on January 30, 2020, the WHO declared the outbreak a Public Health Emergency of International Concern, Europe, the US, and other countries chose not to act. They could have rapidly eliminated and eradicated the virus but did nothing until the virus had spread globally and began its deadly rampage.
It was in early March, six weeks later, with horrific scenes emerging out of Italy, that the working class began to demand a shutdown. Auto workers took the lead in many countries, including the United States and Canada, and it was only out of fear of a mass rebellion among workers internationally that the ruling elites were forced to respond with limited lockdowns to stem the tide of infections.
The Great Barrington Declaration, the right-wing campaigns against mask and vaccine mandates and last month’s conference at Stanford were essentially rooted in fear of the independent initiative of workers insisting on serious public health measures. The populist demagogy about allowing people the “freedom” to work in the midst of a deadly pandemic cannot disguise what is a fundamentally anti-working class perspective.
The maliciously false point being driven home by the organizers of the conference was that social interventions—masking, closure of schools and businesses, lockdowns, and maintaining social distancing—were worse than the disease, despite studies that have shown when such policies were actually implemented, they saved many, many lives.
As one 2023 study published in The Lancet found, in the period from January 1, 2020, to July 31, 2022, Hawaii, with stricter anti-COVID measures, saw 147 deaths per 100,000 compared to 581 per 100,000 in Arizona and 526 per 100,000 in Washington D.C. The national rate was 372 deaths per 100,000.
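For concreteness, the per-capita figures above can be compared directly. A minimal sketch, using only the rates reported in the article (the underlying study data is not reproduced here):

```python
# Per-100,000 COVID death rates as reported in the 2023 Lancet study cited above.
rates = {"Hawaii": 147, "Arizona": 581, "Washington DC": 526, "US overall": 372}

baseline = rates["Hawaii"]
for place, rate in rates.items():
    # Relative risk versus the state with the strictest measures in the comparison
    print(f"{place}: {rate} per 100k ({rate / baseline:.1f}x Hawaii)")
```

By this comparison, Arizona's cumulative death rate was roughly four times Hawaii's, and the national rate roughly two and a half times.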
Similar conclusions were reached in a more recent comprehensive study that evaluated US states one by one, comparing the restrictions in place with their impact on excess deaths. As the authors of that study noted, “COVID-19 restrictions were associated with substantial reductions in excess pandemic deaths in the US. If all states had weak restrictions, as defined in the Methods section, estimated excess deaths from July 2020 to June 2022 would have been 25 percent to 48 percent higher than if all had imposed strong restrictions. Behavioral responses provided a potentially important mechanism for this, being associated with 49 percent to 78 percent of the overall difference.” This last part of the statement underscores the importance of open channels of communication and an all-in approach to such matters. Public health is first and foremost a social concern.
And still another study, published in January 2022, found that the limited measures that were employed saved between 870,000 and 1.7 million American lives.
The most insidious issue that the COVID-19 contrarians fail to mention is that herd immunity is not achievable with a virus like SARS-CoV-2, which mutates so rapidly. Nor do they even consider the issues raised by Long COVID and reinfections, with the concomitant long-term health impacts that will debilitate the population. Current estimates place the number of those suffering from Long COVID across the globe at over 410 million as of the end of 2023.
The response to pandemics requires a social investment in public health on an international scale. The global nature of the economy means that a national approach, as was seen in China with its Zero COVID policy, cannot withstand an anti-public health policy imposed on the rest of the global population. This raises the need for a socialist perspective not only on the global economy but on the health of the international working class.
#mask up#covid#pandemic#public health#wear a mask#covid 19#wear a respirator#still coviding#coronavirus#sars cov 2
Text
here, this is a better article.
apparently usage of "delve" is appearing in greater numbers in written text but it's unclear whether this is directly linked to AI or simply a language trend.
apparently ai writing includes some words that human writing doesn't include at much higher frequencies, such as the word "delve". supposedly ai writing is using this word with such higher frequency it's misleading data collection, as datasets focused on word usage over time are showing major skews since 2021
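for anyone curious how those word-usage-over-time charts get built, here's a toy sketch of the usual relative-frequency calculation (the two-line "corpus" is invented, not real data):

```python
from collections import Counter

# Invented mini-corpus: one blob of text per year.
corpus = {
    2019: "we examine the data and discuss the results in detail",
    2023: "we delve into the data and delve deeper into the results",
}

for year, text in corpus.items():
    words = text.split()
    # Relative frequency: occurrences of "delve" per word written that year.
    rel_freq = Counter(words)["delve"] / len(words)
    print(year, round(rel_freq, 3))
```

normalizing by total words per year is what makes a jump in a single word visible even when the overall volume of text grows.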
#not as massive increase as i thought (the graph is a little misleading)#but significant nonetheless
Text
I hand-knit the folklore cardigan so [with my v important pointers] you totally can, too!
Pattern
Taylor Swift's folklore the "Cardigan" by Lion Brand (free)
I have several qualms about this pattern, and though it’s easy to comprehend for the most part, I kind of hate it. But! I have tips below so that you can use this free pattern and OG cardigan reference pictures to make the perfect finished project. It’s also super easy actually even if you’ve never done cables or a large project before.
Materials (used as recommended by the pattern)
Needles: Takumi Clover US 9 (5.5 mm), 29" circular needles—My first time trying bamboo needles and this brand, I LOVED it. It made continental knitting so easy and fluid. I would recommend longer cables to make the button band part less stressful, and perhaps smaller diameter needles to make the ribbing prettier
Lion Brand Wool Ease: see rant below
Buttons: three 1.25"-diameter La Mode buttons (there are prettier ones out there though they can get frighteningly expensive, pick what you like)
A summary of issues
the sizing runs very large
the button band (and, by extension, side panels) is all wrong for sizes other than S/M (the whole pattern is based around S/M with suggested alterations for other sizes)
the arms turn out way too long for any size if you follow the instructions
the back cables (and possibly some others) are spaced distinctly differently from the OG folklore cardigan from Taylor’s site
the suggested yarn (Lion Brand Wool Ease) is scratchy on sensitive skin, stiff, thicker, more fuzzy than the folklore cardigan (and sheds a lot!), and stretches a lot which makes the cardigan larger than expected
Biggest tips (if you want to knit a cardigan similar to the OG)
CHECK YOUR GAUGE
measure yourself to pick size, and size down
find a bunch of pictures of the OG cardigan in the size that you want & count the stitches from the photos + graph the Lion Brand pattern, and compare before you begin
make alterations as needed
DO NOT BLIND BUY LION BRAND WOOL EASE
My best advice would be to just do a big gauge swatch (as recommended on the pattern), run it through the wash, block it, measure it, and assume that the cardigan will additionally stretch out on your body whenever worn. (Also, if you’ve never knitted a garment before, the individual pieces absolutely look bigger once assembled and seamed than they do on the needles while being knit.) The button band will add some width as well.
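Since gauge math trips a lot of people up, here's the arithmetic in miniature (all numbers below are hypothetical examples, not taken from the Lion Brand pattern):

```python
def finished_width(cast_on_stitches, swatch_stitches, swatch_width_in):
    """Width in inches implied by your measured, washed-and-blocked gauge."""
    stitches_per_inch = swatch_stitches / swatch_width_in
    return cast_on_stitches / stitches_per_inch

# Hypothetical: a 4" swatch measures 13 stitches -> 3.25 sts/inch,
# so a 100-stitch back panel comes out around 30.8" wide.
print(round(finished_width(100, 13, 4), 1))  # -> 30.8
```

Even half a stitch per inch off from the pattern's gauge adds up to several inches across a whole panel, which is exactly how a cardigan turns into a tent.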
The button band is the current object of my misery. The cardigan fits like a cute tent, but the buttons beginning near my stomach is a no-go. I would definitely recommend double checking the spacing of the buttonholes on the button band because I kind of wish I’d altered them a little bit according to how I want the front to look. But then, the side panels would have to start slanting higher up towards the neck, so the whole neck should have been a smaller V. And I don’t have the heart to frog all the way back to that. Still wondering if I should just shift the buttons higher and redo the button band, but I might just leave it as is and call it a day.
The recommended yarn is Lion Brand Wool-Ease, but I actually regret using it because it’s so stretchy and bulky, so the cardigan turned out a lot more chunky (and, I’m guessing, a lot more stretchy too) than the OG. I even found the finished measurements on the pattern misleading because the cardigan stretches under its own weight.
The pattern also calls for very long arms so I would advise just doing 4.5 diamonds for the back and then 4 diamonds for the arms, just like the OG! I thought 4 diamonds would be too short but the off-shoulder fit makes 5 diamonds incredibly long for me, and 4 would have been perfect!
I’m not sure why the instructions were that misleading with the sizing—partly it’s me messing up my gauge, but I’m thinking it might also be because Lion Brand was basing it off the OG folklore cardigans from Taylor’s website, which I’ve heard run immensely large in a similar fashion. Still, I’m not sure exactly how the sizing compares to that of the XS/S and M/L OG cardigans.
I usually am an S for perfectly fitted T shirts, and I get M sized crewnecks/hoodies for a perfect, comfy, borderline oversized fit that isn’t snug over layers. I was confused between knitting the S/M and L/XL because I wanted an oversized fit. I worried the S/M might be too snug and figured it was better to err on the side of it being a bit larger than expected because it’s still possible to style that, while a too-small cardigan would just be unwearable. But I think sizing down is the best way to go for that pattern and yarn if you’re picking between two sizes. The S/M pattern would probably produce something that fits more like a regular L/XL you would expect to see in a store.
Also, the yarn is fuzzy and pills a lot! It’s also slightly scratchy even after conditioning. So I would say just pick a durable yarn that creates a fabric that you love first before you start the project!
The Lion Brand pattern’s back cables are spaced slightly differently from the OG cardigans. (The OG had some moss stitched space between the two left cables on either end and the group of other cables in the center.) There might be other differences too. I know there are some other patterns out there you can pay for and they might be more accurate to the OG, but I would recommend simply looking up pictures of the OG cardigan in the size that you’re aiming for, and then taking note of the differences and making the alterations yourself! The stitches are fairly easy to count!
I have a breadth of regrets about this project (and some of it is just post-project blues, y’know?), but you live and you learn, folks! And I definitely learned a lot from this project. :) Will come back here and update once I add the (very expensive) silver star patches I’ve been procrastinating to buy because I’m so broke and so sad about how it turned out. I’m confident all the time I’ve spent on her will culminate in me surely falling in love with her soon enough. <3
#taylor swift#folklore#folklore cardigan#knitting#cardigan taylor swift#taylor swift cardigan#folklore aesthetic#folklovermore#folkmore#evermore#willow taylor swift#fiber arts#fibre arts#fiber art#fibre art#fiber artist#fibre artist#knitter#crocheter#cardigan#t swift#ravelry#lion brand yarn#knitting pattern#free knitting pattern#free pattern#cottagecore#winter#fall#autumn