#bias science
space-dreams-world · 2 years ago
Text
Ambient Amity or the real monsters are human
Or where, since there is a lot of ectoplasm surrounding Amity, it’s most likely that people will become ghosts when they pass. So instead of Danny fearing experimentation, let it be one of the Fenton parents or one of the G.I.W. agents. One of them dies, and the other does not recognize them, or is convinced that they are evil, clinging to the unsound belief that ghosts can’t feel pain. So the adult becomes the lab rat, realizing that their bias was unsound, but only now that they are the lab torturee.
Doesn’t have to be a dp crossover fic, but it’s an interesting idea worth exploring.
31 notes · View notes
fatliberation · 1 year ago
Note
they have a point though. you wouldn't need everyone to accommodate you if you just lost weight, but you're too lazy to stick to a healthy diet and exercise. it's that simple. I'd like to see you back up your claims, but you have no proof. you have got to stop lying to yourselves and face the facts
Must I go through this again? Fine. FINE. You guys are working my nerves today. You want to talk about facing the facts? Let's face the fucking facts.
In 2022, the US market cap of the weight loss industry was $75 billion [1, 3]. In 2021, the global market cap of the weight loss industry was estimated at $224.27 billion [2]. 
In 2020, the market shrank by about 25%, but has since rebounded and then some [1, 3]. By 2030, the global weight loss industry is expected to be valued at $405.4 billion [2]. If diets really worked, this industry would collapse overnight. 
1. LaRosa, J. March 10, 2022. "U.S. Weight Loss Market Shrinks by 25% in 2020 with Pandemic, but Rebounds in 2021." Market Research Blog. 2. Staff. February 09, 2023. "[Latest] Global Weight Loss and Weight Management Market Size/Share Worth." Facts and Factors Research. 3. LaRosa, J. March 27, 2023. "U.S. Weight Loss Market Partially Recovers from the Pandemic." Market Research Blog.
Over 50 years of research conclusively demonstrates that virtually everyone who intentionally loses weight by manipulating their eating and exercise habits will regain the weight they lost within 3-5 years. And 75% will actually regain more weight than they lost [4].
4. Mann, T., Tomiyama, A.J., Westling, E., Lew, A.M., Samuels, B., Chatman, J. (2007). "Medicare’s Search For Effective Obesity Treatments: Diets Are Not The Answer." The American Psychologist, 62, 220-233. U.S. National Library of Medicine, Apr. 2007.
The annual odds of a fat person attaining a so-called “normal” weight and maintaining it for 5 years are approximately 1 in 1,000 [5].
5. Fildes, A., Charlton, J., Rudisill, C., Littlejohns, P., Prevost, A.T., & Gulliford, M.C. (2015). “Probability of an Obese Person Attaining Normal Body Weight: Cohort Study Using Electronic Health Records.” American Journal of Public Health, July 16, 2015: e1–e6.
Doctors became so desperate that they resorted to amputating parts of the digestive tract (bariatric surgery) in the hopes that it might finally result in long-term weight loss. Except that doesn’t work either [6]. And it turns out it causes death [7], addiction [8], malnutrition [9], and suicide [7].
6. Magro, Daniéla Oliveira, et al. “Long-Term Weight Regain after Gastric Bypass: A 5-Year Prospective Study - Obesity Surgery.” SpringerLink, 8 Apr. 2008. 7. Omalu, Bennet I, et al. “Death Rates and Causes of Death After Bariatric Surgery for Pennsylvania Residents, 1995 to 2004.” JAMA Network, 1 Oct. 2007.  8. King, Wendy C., et al. “Prevalence of Alcohol Use Disorders Before and After Bariatric Surgery.” JAMA Network, 20 June 2012.  9. Gletsu-Miller, Nana, and Breanne N. Wright. “Mineral Malnutrition Following Bariatric Surgery.” Advances In Nutrition: An International Review Journal, Sept. 2013.
Evidence suggests that repeatedly losing and gaining weight is linked to cardiovascular disease, stroke, diabetes and altered immune function [10].
10. Tomiyama, A Janet, et al. “Long‐term Effects of Dieting: Is Weight Loss Related to Health?” Social and Personality Psychology Compass, 6 July 2017.
Prescribed weight loss is the leading predictor of eating disorders [11].
11. Patton, GC, et al. “Onset of Adolescent Eating Disorders: Population Based Cohort Study over 3 Years.” BMJ (Clinical Research Ed.), 20 Mar. 1999.
The idea that “obesity” is unhealthy and can cause or exacerbate illnesses is a biased misrepresentation of the scientific literature that is informed more by bigotry than credible science [12]. 
12. Medvedyuk, Stella, et al. “Ideology, Obesity and the Social Determinants of Health: A Critical Analysis of the Obesity and Health Relationship” Taylor & Francis Online, 7 June 2017.
“Obesity” has no proven causative role in the onset of any chronic condition [13, 14] and its appearance may be a protective response to the onset of numerous chronic conditions generated from currently unknown causes [15, 16, 17, 18].
13. Kahn, BB, and JS Flier. “Obesity and Insulin Resistance.” The Journal of Clinical Investigation, Aug. 2000. 14. Cofield, Stacey S, et al. “Use of Causal Language in Observational Studies of Obesity and Nutrition.” Obesity Facts, 3 Dec. 2010.  15. Lavie, Carl J, et al. “Obesity and Cardiovascular Disease: Risk Factor, Paradox, and Impact of Weight Loss.” Journal of the American College of Cardiology, 26 May 2009.  16. Uretsky, Seth, et al. “Obesity Paradox in Patients with Hypertension and Coronary Artery Disease.” The American Journal of Medicine, Oct. 2007.  17. Mullen, John T, et al. “The Obesity Paradox: Body Mass Index and Outcomes in Patients Undergoing Nonbariatric General Surgery.” Annals of Surgery, July 2005. 18. Tseng, Chin-Hsiao. “Obesity Paradox: Differential Effects on Cancer and Noncancer Mortality in Patients with Type 2 Diabetes Mellitus.” Atherosclerosis, Jan. 2013.
Fatness was associated with only one-third of the deaths that previous research estimated, and being “overweight” conferred no increased risk at all; it may even be a protective factor against all-cause mortality relative to lower weight categories [19].
19. Flegal, Katherine M. “The Obesity Wars and the Education of a Researcher: A Personal Account.” Progress in Cardiovascular Diseases, 15 June 2021.
Studies have observed that about 30% of so-called “normal weight” people are “unhealthy” whereas about 50% of so-called “overweight” people are “healthy”. Thus, using the BMI as an indicator of health results in the misclassification of some 75 million people in the United States alone [20]. 
20. Rey-López, JP, et al. “The Prevalence of Metabolically Healthy Obesity: A Systematic Review and Critical Evaluation of the Definitions Used.” Obesity Reviews : An Official Journal of the International Association for the Study of Obesity, 15 Oct. 2014.
While epidemiologists use BMI to calculate national obesity rates (nearly 35% for adults and 18% for kids), the distinctions can be arbitrary. In 1998, the National Institutes of Health lowered the overweight threshold from 27.8 to 25—branding roughly 29 million Americans as fat overnight—to match international guidelines. But critics noted that those guidelines were drafted in part by the International Obesity Task Force, whose two principal funders were companies making weight loss drugs [21].
21. Butler, Kiera. “Why BMI Is a Big Fat Scam.” Mother Jones, 25 Aug. 2014. 
Body size is largely determined by genetics [22].
22. Wardle, J. Carnell, C. Haworth, R. Plomin. “Evidence for a strong genetic influence on childhood adiposity despite the force of the obesogenic environment” American Journal of Clinical Nutrition Vol. 87, No. 2, Pages 398-404, February 2008.
Healthy lifestyle habits are associated with a significant decrease in mortality regardless of baseline body mass index [23].  
23. Matheson, Eric M, et al. “Healthy Lifestyle Habits and Mortality in Overweight and Obese Individuals.” Journal of the American Board of Family Medicine : JABFM, U.S. National Library of Medicine, 25 Feb. 2012.
Weight stigma itself is deadly. Research shows that weight-based discrimination increases risk of death by 60% [24].
24. Sutin, Angela R., et al. “Weight Discrimination and Risk of Mortality.” Association for Psychological Science, 25 Sept. 2015.
Fat stigma in the medical establishment [25] and society at large arguably [26] kills more fat people than fat does [27, 28, 29].
25. Puhl, Rebecca, and Kelly D. Brownell. “Bias, Discrimination, and Obesity.” Obesity Research, 6 Sept. 2012. 26. Engber, Daniel. “Glutton Intolerance: What If a War on Obesity Only Makes the Problem Worse?” Slate, 5 Oct. 2009.  27. Teachman, B. A., Gapinski, K. D., Brownell, K. D., Rawlins, M., & Jeyaram, S. (2003). Demonstrations of implicit anti-fat bias: The impact of providing causal information and evoking empathy. Health Psychology, 22(1), 68–78. 28. Chastain, Ragen. “So My Doctor Tried to Kill Me.” Dances With Fat, 15 Dec. 2009. 29. Sutin, Angela R., Yannick Stephan, and Antonio Terracciano. “Weight Discrimination and Risk of Mortality.” Psychological Science, 26 Nov. 2015.
There's my "proof." Where is yours?
10K notes · View notes
prokopetz · 2 years ago
Text
One of the big sources of statistical bias in medical research that never occurred to me until I started studying the subject but which seems incredibly obvious in retrospect is "all of the participants who didn't experience immediate positive results after their first appointment stopped showing up".
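A toy simulation makes the distortion concrete (a minimal sketch with invented numbers, assuming a treatment with zero real effect and dropout by everyone who felt no immediate benefit):

```python
import random

random.seed(0)

# A "treatment" with zero real effect: each participant's measured
# improvement after the first appointment is pure noise.
improvements = [random.gauss(0, 1) for _ in range(10_000)]

# What an honest full-cohort follow-up would report: roughly zero.
full_mean = sum(improvements) / len(improvements)

# Attrition: participants who didn't improve stop showing up,
# so only the lucky half is still around to be measured.
returners = [x for x in improvements if x > 0]
observed_mean = sum(returners) / len(returners)

print(f"full cohort:    {full_mean:+.3f}")
print(f"returners only: {observed_mean:+.3f}")  # a phantom benefit near +0.8
```

This is why intention-to-treat analyses count every participant who enrolled, dropouts included: averaging only the people who kept coming back answers a different, rosier question.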
2K notes · View notes
post-futurism · 1 year ago
Text
"The myth that female reproductive capabilities somehow render them incapable of gathering any food products beyond those that cannot run away does more than just underestimate Paleolithic women. It feeds into narratives that the contemporary social roles of women and men are inherent and define our evolution. Our Paleolithic ancestors lived in a world where everyone in the band pulled their own weight, performing multiple tasks. It was not a utopia, but it was not a patriarchy."
458 notes · View notes
mostlysignssomeportents · 1 year ago
Text
The surprising truth about data-driven dictatorships
Here’s the “dictator’s dilemma”: they want to block their country’s frustrated elites from mobilizing against them, so they censor public communications; but they also want to know what their people truly believe, so they can head off simmering resentments before they boil over into regime-toppling revolutions.
These two strategies are in tension: the more you censor, the less you know about the true feelings of your citizens and the easier it will be to miss serious problems until they spill over into the streets (think: the fall of the Berlin Wall or Tunisia before the Arab Spring). Dictators try to square this circle with things like private opinion polling or petition systems, but these capture only a small slice of the potentially destabilizing moods circulating in the body politic.
Enter AI: back in 2018, Yuval Harari proposed that AI would supercharge dictatorships by mining and summarizing the public mood — as captured on social media — allowing dictators to tack into serious discontent and defuse it before it erupted into an unquenchable wildfire:
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
Harari wrote that “the desire to concentrate all information and power in one place may become [dictators’] decisive advantage in the 21st century.” But other political scientists sharply disagreed. Last year, Henry Farrell, Jeremy Wallace and Abraham Newman published a thoroughgoing rebuttal to Harari in Foreign Affairs:
https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making
They argued that — like everyone who gets excited about AI, only to have their hopes dashed — dictators seeking to use AI to understand the public mood would run into serious training data bias problems. After all, people living under dictatorships know that spouting off about their discontent and desire for change is a risky business, so they will self-censor on social media. That’s true even if a person isn’t afraid of retaliation: if you know that using certain words or phrases in a post will get it autoblocked by a censorbot, what’s the point of trying to use those words?
The phrase “Garbage In, Garbage Out” dates back to 1957. That’s how long we’ve known that a computer that operates on bad data will barf up bad conclusions. But this is a very inconvenient truth for AI weirdos: having given up on manually assembling training data based on careful human judgment with multiple review steps, the AI industry “pivoted” to mass ingestion of scraped data from the whole internet.
But adding more unreliable data to an unreliable dataset doesn’t improve its reliability. GIGO is the iron law of computing, and you can’t repeal it by shoveling more garbage into the top of the training funnel:
https://memex.craphound.com/2018/05/29/garbage-in-garbage-out-machine-learning-has-not-repealed-the-iron-law-of-computer-science/
When it comes to “AI” that’s used for decision support — that is, when an algorithm tells humans what to do and they do it — then you get something worse than Garbage In, Garbage Out — you get Garbage In, Garbage Out, Garbage Back In Again. That’s when the AI spits out something wrong, and then another AI sucks up that wrong conclusion and uses it to generate more conclusions.
To see this in action, consider the deeply flawed predictive policing systems that cities around the world rely on. These systems suck up crime data from the cops, then predict where crime is going to be, and send cops to those “hotspots” to do things like throw Black kids up against a wall and make them turn out their pockets, or pull over drivers and search their cars after pretending to have smelled cannabis.
The problem here is that “crime the police detected” isn’t the same as “crime.” You only find crime where you look for it. For example, there are far more incidents of domestic abuse reported in apartment buildings than in fully detached homes. That’s not because apartment dwellers are more likely to be wife-beaters: it’s because domestic abuse is most often reported by a neighbor who hears it through the walls.
So if your cops practice racially biased policing (I know, this is hard to imagine, but stay with me /s), then the crime they detect will already be a function of bias. If you only ever throw Black kids up against a wall and turn out their pockets, then every knife and dime-bag you find in someone’s pockets will come from some Black kid the cops decided to harass.
That’s life without AI. But now let’s throw in predictive policing: feed your “knives found in pockets” data to an algorithm and ask it to predict where there are more knives in pockets, and it will send you back to that Black neighborhood and tell you to throw even more Black kids up against a wall and search their pockets. The more you do this, the more knives you’ll find, and the more you’ll go back and do it again.
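The loop is easy to simulate (a minimal sketch; the rates, the 60/40 starting split, and the confidence-sharpening exponent are all invented for illustration, not taken from any real predictive policing product):

```python
# Two neighborhoods with IDENTICAL true offence rates.
TRUE_RATE = 0.3
# A biased starting week: 60 of 100 daily stops happen in neighborhood A.
stops = [60.0, 40.0]

for day in range(30):
    # You only find crime where you look: finds scale with stops, not offending.
    found = [s * TRUE_RATE for s in stops]
    # "Predictive" step: tomorrow's 100 stops follow today's finds, with the
    # model's growing confidence sharpening the allocation (exponent > 1).
    weights = [f ** 1.5 for f in found]
    total = sum(weights)
    stops = [100 * w / total for w in weights]

print([round(s, 1) for s in stops])  # → [100.0, 0.0]: the bias has locked in
```

The point isn’t the particular constants: any rule that allocates tomorrow’s attention in proportion to (or more steeply than) yesterday’s detections ends up treating its own search pattern as evidence.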
This is what Patrick Ball from the Human Rights Data Analysis Group calls “empiricism washing”: take a biased procedure and feed it to an algorithm, and then you get to go and do more biased procedures, and whenever anyone accuses you of bias, you can insist that you’re just following an empirical conclusion of a neutral algorithm, because “math can’t be racist.”
HRDAG has done excellent work on this, finding a natural experiment that makes the problem of GIGOGBI crystal clear. The National Survey On Drug Use and Health produces the gold standard snapshot of drug use in America. Kristian Lum and William Isaac took Oakland’s drug arrest data from 2010 and asked Predpol, a leading predictive policing product, to predict where Oakland’s 2011 drug use would take place.
Tumblr media
[Image ID: (a) Number of drug arrests made by Oakland police department, 2010. (1) West Oakland, (2) International Boulevard. (b) Estimated number of drug users, based on 2011 National Survey on Drug Use and Health]
Then, they compared those predictions to the outcomes of the 2011 survey, which shows where actual drug use took place. The two maps couldn’t be more different:
https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x
Predpol told cops to go and look for drug use in a predominantly Black, working class neighborhood. Meanwhile the NSDUH survey showed the actual drug use took place all over Oakland, with a higher concentration in the Berkeley-neighboring student neighborhood.
What’s even more vivid is what happens when you simulate running Predpol on the new arrest data that would be generated by cops following its recommendations. If the cops went to that Black neighborhood and found more drugs there and told Predpol about it, the recommendation gets stronger and more confident.
In other words, GIGOGBI is a system for concentrating bias. Even trace amounts of bias in the original training data get refined and magnified when they are output through a decision support system that directs humans to go and act on that output. Algorithms are to bias what centrifuges are to radioactive ore: a way to turn minute amounts of bias into pluripotent, indestructible toxic waste.
There’s a great name for an AI that’s trained on an AI’s output, courtesy of Jathan Sadowski: “Habsburg AI.”
And that brings me back to the Dictator’s Dilemma. If your citizens are self-censoring in order to avoid retaliation or algorithmic shadowbanning, then the AI you train on their posts in order to find out what they’re really thinking will steer you in the opposite direction, so you make bad policies that make people angrier and destabilize things more.
Or at least, that was Farrell (et al)’s theory. And for many years, that’s where the debate over AI and dictatorship has stalled: theory vs theory. But now, there’s some empirical data on this, thanks to “The Digital Dictator’s Dilemma,” a new paper from UCSD PhD candidate Eddie Yang:
https://www.eddieyang.net/research/DDD.pdf
Yang figured out a way to test these dueling hypotheses. He got 10 million Chinese social media posts from the start of the pandemic, before companies like Weibo were required to censor certain pandemic-related posts as politically sensitive. Yang treats these posts as a robust snapshot of public opinion: because there was no censorship of pandemic-related chatter, Chinese users were free to post anything they wanted without having to self-censor for fear of retaliation or deletion.
Next, Yang acquired the censorship model used by a real Chinese social media company to decide which posts should be blocked. Using this, he was able to determine which of the posts in the original set would be censored today in China.
That means Yang knows what the “real” sentiment in the Chinese social media snapshot is, and what Chinese authorities would believe it to be if Chinese users were self-censoring all the posts that would be flagged by censorware today.
From here, Yang was able to play with the knobs, and determine how “preference-falsification” (when users lie about their feelings) and self-censorship would give a dictatorship a misleading view of public sentiment. What he finds is that the more repressive a regime is — the more people are incentivized to falsify or censor their views — the worse the system gets at uncovering the true public mood.
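A crude sketch of that mechanism (the sentiment scale and the self-censorship rule here are invented for illustration, not Yang’s actual model):

```python
import random

random.seed(42)

# True public sentiment on a -1..1 scale: the population is unhappy.
posts = [max(-1.0, min(1.0, random.gauss(-0.5, 0.4))) for _ in range(50_000)]
true_mean = sum(posts) / len(posts)
print(f"true mean sentiment: {true_mean:+.2f}")

estimates = {}
for repression in (0.0, 0.5, 0.9):
    # The more repressive the regime, the more likely a negative post
    # is self-censored, i.e. never written at all.
    visible = [p for p in posts if p >= 0 or random.random() > repression]
    estimates[repression] = sum(visible) / len(visible)
    print(f"repression={repression:.1f}  regime's estimate: {estimates[repression]:+.2f}")
```

The regime’s estimate drifts toward neutral as repression rises, while the population is exactly as unhappy as before; feeding the model more of the same censored posts can’t recover the missing tail.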
What’s more, adding additional (bad) data to the system doesn’t fix this “missing data” problem. GIGO remains an iron law of computing in this context, too.
But it gets better (or worse, I guess): Yang models a “crisis” scenario in which users stop self-censoring and start articulating their true views (because they’ve run out of fucks to give). This is the most dangerous moment for a dictator, and depending on how the dictatorship handles it, they either get another decade of rule, or they wake up with guillotines on their lawns.
But “crisis” is where AI performs the worst. Trained on the “status quo” data where users are continuously self-censoring and preference-falsifying, AI has no clue how to handle the unvarnished truth. Both its recommendations about what to censor and its summaries of public sentiment are the least accurate when crisis erupts.
But here’s an interesting wrinkle: Yang scraped a bunch of Chinese users’ posts from Twitter — which the Chinese government doesn’t get to censor (yet) or spy on (yet) — and fed them to the model. He hypothesized that when Chinese users post to American social media, they don’t self-censor or preference-falsify, so this data should help the model improve its accuracy.
He was right — the model got significantly better once it ingested data from Twitter than when it was working solely from Weibo posts. And Yang notes that dictatorships all over the world are widely understood to be scraping western/northern social media.
But even though Twitter data improved the model’s accuracy, it was still wildly inaccurate, compared to the same model trained on a full set of un-self-censored, un-falsified data. GIGO is not an option, it’s the law (of computing).
Writing about the study on Crooked Timber, Farrell notes that as the world fills up with “garbage and noise” (he invokes Philip K Dick’s delightful coinage “gubbish”), “approximately correct knowledge becomes the scarce and valuable resource.”
https://crookedtimber.org/2023/07/25/51610/
This “probably approximately correct knowledge” comes from humans, not LLMs or AI, and so “the social applications of machine learning in non-authoritarian societies are just as parasitic on these forms of human knowledge production as authoritarian governments.”
The Clarion Science Fiction and Fantasy Writers’ Workshop summer fundraiser is almost over! I am an alum, instructor and volunteer board member for this nonprofit workshop whose alums include Octavia Butler, Kim Stanley Robinson, Bruce Sterling, Nalo Hopkinson, Kameron Hurley, Nnedi Okorafor, Lucius Shepard, and Ted Chiang! Your donations will help us subsidize tuition for students, making Clarion — and sf/f — more accessible for all kinds of writers.
Libro.fm is the indie-bookstore-friendly, DRM-free audiobook alternative to Audible, the Amazon-owned monopolist that locks every book you buy to Amazon forever. When you buy a book on Libro, they share some of the purchase price with a local indie bookstore of your choosing (Libro is the best partner I have in selling my own DRM-free audiobooks!). As of today, Libro is even better, because it’s available in five new territories and currencies: Canada, the UK, the EU, Australia and New Zealand!
[Image ID: An altered image of the Nuremberg rally, with ranked lines of soldiers facing a towering figure in a many-ribboned soldier's coat. He wears a high-peaked cap with a microchip in place of insignia. His head has been replaced with the menacing red eye of HAL9000 from Stanley Kubrick's '2001: A Space Odyssey.' The sky behind him is filled with a 'code waterfall' from 'The Matrix.']
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
 — 
Raimond Spekking (modified) https://commons.wikimedia.org/wiki/File:Acer_Extensa_5220_-_Columbia_MB_06236-1N_-_Intel_Celeron_M_530_-_SLA2G_-_in_Socket_479-5029.jpg
CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0/deed.en
 — 
Russian Airborne Troops (modified) https://commons.wikimedia.org/wiki/File:Vladislav_Achalov_at_the_Airborne_Troops_Day_in_Moscow_%E2%80%93_August_2,_2008.jpg
“Soldiers of Russia” Cultural Center (modified) https://commons.wikimedia.org/wiki/File:Col._Leonid_Khabarov_in_an_everyday_service_uniform.JPG
CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en
832 notes · View notes
a-dinosaur-a-day · 1 year ago
Text
if I could erase the terms "Lower X" and "higher X" from all scientific literature, I would
this post is inspired by a paper I just had to cite that calls birds "lower vertebrates"
888 notes · View notes
bonefall · 3 months ago
Note
It's so funny that your idea is that DOVE should be the one mad because of course they make Ivy throw a hissy fit about their circumstances being different. Girl, your daughter chose to make a valiant sacrifice. Her kid's death was entirely preventable
To be clear; I'm unsure if they're going to give Ivypool another hissy fit. I only read that there is apparently going to be a scene where Dovewing and Ivypool (and also Icewing) bond over having dead kids.
And, I don't really like the idea of that, on its face. Even if this scene is written to be a very straightforward, positive moment of understanding between all of them, I feel like Dovewing's situation is so different from the two of them that I'd find it interesting if she was kind of offended by the comparison.
ESPECIALLY if it was Ivypool making an attempt to connect with Dovewing over this, y'know? Assuming she's being totally well-intentioned here, legitimately trying to connect with her estranged sister over what she thinks is a similarity.
Bottom line is, Bristlefrost laid down her life to end tyranny. Beetlewhisker was an adult who chose to take the offer of demon training, and died standing up to Brokenstar. Rowankit was a baby, sick and in pain for days at his mother's belly while his father and older sister raced for a cure, as a parcel of lifesaving medicine sat untouched in the next territory over.
If anything, I'd prefer an Ivypool hissy fit, because I'd like to see her be framed as unreasonable, OR show how bereavement is causing her to lapse into old, bad behaviors. I strongly hope the narrative will examine the differences here (ESPECIALLY if this SE's theme is grief) instead of having the three of them "connect" in an uncomplicated way.
95 notes · View notes
ominousvibez · 5 months ago
Text
i know a lot of people tend to define dani/ellie's "obsession" as the ocean to go with danny's space obsession but tbh i feel like as dani gets older she's going to do a lot to separate herself from her identity as danny's "clone" and become her own person
like, that's why i love that the fandom has adopted the "ellie" nickname so much for her. i so wish the show went on for longer with more competent writers or the comic could expand on this maybe but i think finding her own identity and voice would be such a powerful narrative. you're made for one thing but you don't wanna do that, so now what do you do?
(tbh that kind of identity exploration is probably too anti-christian for b*tch h*rtman so i'm glad he didn't get to touch it)
it's canon that dani goes off and travels the world, so why not have her obsession be traveling? she's got a huge case of wanderlust. she likes the outdoors, she likes hiking and camping and is a huge granola girl. like, i think that kinda just fits her a bit more, in my opinion. traveling/exploration.
63 notes · View notes
mindblowingscience · 2 years ago
Link
The daylong implicit bias-oriented training programs now common in most United States police departments are unlikely to reduce racial inequity in policing, research finds.
“Our findings suggest that diversity training as it is currently practiced is unlikely to change police behavior,” says lead author Calvin Lai, an assistant professor of psychological and brain sciences at Washington University in St. Louis.
“Officers who took the training were more knowledgeable about bias and more motivated to address bias at work,” Lai says. “However, these effects were fleeting and appear to have little influence on actual policing behaviors just one month after the training session.”
Published in the journal Psychological Science, the study evaluates the experiences of 3,764 police officers from departments across the nation who participated in one-day bias training sessions provided by the nonprofit Anti-Defamation League.
346 notes · View notes
quert-ii · 2 months ago
Text
sociology is political science for gay ppl
poli sci is sociology for annoying ppl
criminal justice is sociology for narcs
anthropology is sociology for gayer ppl
economics is poli sci for boring ppl
philosophy is poli sci for pussies
i DO make the rules here
12 notes · View notes
ghastigiggles · 2 months ago
Text
things i cannot recommend enough to my new followers from pressure;
- watch promare
- play okami
- play portal
- play final fantasy fourteen
15 notes · View notes
avernine · 6 months ago
Text
Tumblr media
HEY FOLKS WHO ENJOY ACTUAL PLAY PODCASTS?!
Ya like WHIMSY?
Ya like it WEIRD?
Ya like it when you get to see into a world where a garden gnome, a guy clothed entirely in swords, and an anglerfish-faced fella in a coat go on quests for money to buy Wasp Drinks, navigating their way around sapient mandrills and riddling sphinxes?
Do you want to meet Big Jim Concrete?
You do want to meet Big Jim Concrete.
Listen to Ludonauts! on Spotify :D
18 notes · View notes
agrumina · 2 months ago
Text
Sorry to the "Fiddleford is a saint who did nothing wrong, and the things he did wrong were because of that big fat meanie Stanford / because of anxiety / because of others" crowd. I actually find it very interesting that he basically founded a cult and gaslit his colleague. (And sure, said colleague didn't treat him right, but come on: he did actually tell Fidd, "Um, this gun thingie is a bad idea, have you considered destroying it, doing meditation, and going outside?" He didn't tell Fiddleford to do all of what Fidd then did.) I like the concept of him being a bit problematic in general.
8 notes · View notes
mostlysignssomeportents · 2 years ago
Link
I am on record on the subject of science fiction writers predicting the future: we do not. Thank goodness we don’t predict the future! If the future were predictable, then nothing any of us did would matter, because the same future would arrive, just the same. The unpredictability of the future is due to human agency, something the best science fiction writers understand to their bones. Fatalism is for corpses.
(One time, at a science fiction convention, I was on a panel with Robert Silverberg, discussing this very topic, and the subject of Heinlein’s belief in his predictive powers came up. “Oh,” Silverberg sniffed, “you mean Robert A. Timeline?” He’s a pistol!)
Science fiction does something a lot more interesting than predicting the future — sometimes, it inspires people to make a given future, and sometimes, it sparks people to action to prevent a given future.
Mostly, though, I think science fiction is a planchette on a vast, ethereal Ouija board on which all our fingers rest. We writers create all the letters the planchette can point at, all the futures we can imagine, and then readers’ ideomotor responses swing the planchette towards one or another, revealing their collective aspirations and fears about the future.
But sometimes, if you throw enough darts, you might hit the target, even if the room is pitch black and even if you’re not sure where the target is, or whether there even is a target.
Lately, I’ve been thinking about three times I managed to, well, not predict the future, but at least make a lucky guess. These three stories — all on the web — have been much on my mind lately, because of how they relate to crisis points in our wider world.
In chronological order they are:
Nimby and the D-Hoppers (2003)
Other People’s Money (2007)
Chicken Little (2011)
260 notes · View notes
a-dinosaur-a-day · 1 year ago
Text
I know in my life the most annoying one is mammal bias, but I'm a bird researcher, and honestly, I want our insect family to rise up in a single cry of agony
374 notes · View notes