Facebook thrives on criticism of "disinformation"
The mainstream critique of Facebook is surprisingly compatible with Facebook's own narrative about its products. FB critics say that the company's machine learning and data-gathering slides disinformation past users' critical faculties, poisoning their minds.
Meanwhile, Facebook itself tells advertisers that it can use data and machine learning to slide past users' critical faculties, convincing them to buy stuff.
In other words, the main line of Facebook criticism starts from the presumption that FB is a really good product and that advertisers are definitely getting their money's worth when they shower billions on the company.
Which is weird, because these same critics (rightfully) point out that Facebook lies all the time, about everything. It would be bizarre if the only time FB was telling the truth was when it was boasting about how valuable its ad-tech is.
Facebook has a conflicted relationship with this critique. I'm sure they'd rather not be characterized as a brainwashing system that turns good people into monsters - but given the choice between "brainwashers" and "con artists selling garbage to credulous ad execs," they'll take "brainwashers."
As FB investor and board member Peter Thiel puts it: "I'd rather be seen as evil than incompetent." In other words, the important word in "evil genius" is "genius," not "evil."
https://twitter.com/doctorow/status/1440312271511568393
The accord of tech critics and techbros gives rise to a curious hybrid, aptly named by Maria Farrell: the Prodigal Techbro.
A prodigal techbro is a self-styled wizard of machine-learning/surveillance mind control who has seen the error of his ways.
https://crookedtimber.org/2020/09/23/story-ate-the-world-im-biting-back/
This high-tech sorcerer doesn't disclaim his magical powers - rather, he pledges to use them for good, to fight the evil sorcerers who invented a mind-control ray to sell your nephew a fidget-spinner, then let Robert Mercer hijack it to turn your uncle into a Qanon racist.
There's a great name for this critique - criticism that takes its subjects' claims to genius at face value: "criti-hype," coined by Lee Vinsel, describing a discourse that turns critics into "the professional concern trolls of technoculture."
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
The thing is, Facebook really is terrible - but not because it uses machine learning to brainwash boomers into iodine-guzzling Qnuts. And likewise, there really is a problem with racist, science-denying, epistemologically chaotic conspiratorialism.
Addressing that problem requires that we understand the direction of the causal arrow - that we understand whether Facebook is the cause or the effect of the crisis, and what role it plays.
"Facebook wizards turned boomers into orcs" is a comforting tale, in that it implies that we need merely fix Facebook and the orcs will turn back into our cuddly grandparents and get their shots. The reality is a lot gnarlier and, sadly, less comforting.
There's been a lot written about Facebook's sell-job to advertisers, but less about the concern over "disinformation." In a new, excellent longread for Harper's, Joe Bernstein makes the connection between the two:
https://harpers.org/archive/2021/09/bad-news-selling-the-story-of-disinformation/
Fundamentally: if we question whether Facebook ads work, we should also question whether the disinformation campaigns that run amok on the platform are any more effective.
Bernstein starts by reminding us of the ad industry's one indisputable claim to persuasive powers: ad salespeople are really good at convincing ad buyers that ads work.
Think of department store magnate John Wanamaker's lament that "Half the money I spend on advertising is wasted; the trouble is I don't know which half." Whoever convinced him that he was only wasting half his ad spend was a true virtuoso of the con.
As Tim Hwang documents brilliantly in his 2020 pamphlet "Subprime Attention Crisis," ad-tech is even griftier than the traditional ad industry. Ad-tech companies charge advertisers for ads that are never served, or never rendered, or never seen.
https://pluralistic.net/2020/10/05/florida-man/#wannamakers-ghost
They rig ad auctions, fake their reach numbers, and fake their conversions (they also lie to publishers about how much they've taken in for serving ads on their pages, shortchanging them by millions).
Bernstein cites Hwang's work and asks, essentially: shouldn't this apply to "disinformation" too?
If ads don't work well, then maybe political ads don't work well. And if regular ads are a swamp of fraudulently inflated reach numbers, wouldn't that be true of political ads as well?
Bernstein traces the history of ads as a political tool, starting with Eisenhower's 1952 "Answers America" campaign, designed and executed at great expense by the Madison Avenue giant Ted Bates.
Hannah Arendt, whom no one can accuse of being soft on the consequences of propaganda, was skeptical of this kind of enterprise: "The psychological premise of human manipulability has become one of the chief wares that are sold on the market of common and learned opinion."
The ad industry ran an ambitious campaign to give scientific credibility to its products. As Jacques Ellul wrote in 1962, propagandists were engaged in "the increasing attempt to control its use, measure its results, define its effects."
Appropriating the jargon of behavioral scientists let ad execs "assert audiences, like workers in a Taylorized workplace, need not be persuaded through reason, but could be trained through repetition to adopt the new consumption habits desired by the sellers" (Zoe Sherman).
These "scientific ads" had their own criti-hype attackers, like Vance "Hidden Persuaders" Packard, who admitted that "researchers were sometimes prone to oversell themselves - or in a sense to exploit the exploiters."
Packard cites Yale's John Dollard, a scientific ad consultant, who accused his colleagues of promising advertisers "a mild form of omnipotence," which was "well received."
Today's scientific persuaders aren't in a much better place than Dollard or Packard. Despite all the talk of political disinformation's reach, a 2017 study found that "sharing articles from fake news domains was a rare activity," practiced by fewer than 10% of users.
https://www.science.org/doi/10.1126/sciadv.aau4586
So, how harmful is this? One study estimates that "if one fake news article were about as persuasive as one TV campaign ad, the fake news in our database would have changed vote shares by an amount on the order of hundredths of a percentage point."
https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211
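That estimate is easy to sanity-check with a toy calculation. The sketch below uses made-up illustrative numbers (the per-adult exposure count, the per-ad persuasion effect) - not figures from the study - solely to show why multiplying small factors lands in the hundredths-of-a-percentage-point range:

```python
# Back-of-envelope check of the "hundredths of a percentage point" claim.
# Every number below is an illustrative assumption, NOT a figure from the study.

articles_seen_per_adult = 1.0   # assumed avg. fake-news articles seen per adult
shift_per_tv_ad = 0.0002        # assumed: one TV ad moves vote share by 0.02 points
relative_persuasiveness = 1.0   # the premise: one article is about as persuasive as one ad

# Total vote-share shift, as a fraction of the electorate
vote_share_shift = (articles_seen_per_adult
                    * relative_persuasiveness
                    * shift_per_tv_ad)

print(f"vote-share shift: {vote_share_shift * 100:.4f} percentage points")
```

Even if you triple any one of the assumed inputs, the product stays far below a single percentage point - which is the shape of Bernstein's skepticism.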
Now, all that said, American politics certainly feels and acts differently today than in years past. The key question: "is social media creating new types of people, or simply revealing long-obscured types of people to a segment of the public unaccustomed to seeing them?"
After all, American politics has always had its "paranoid style," and the American right has always had a sizable tendency towards unhinged conspiratorialism, from the John Birch Society to the Goldwater Republicans.
Social media may not be making more of these yahoos, but rather, making them visible to the wider world, and to each other, allowing them to make common cause and mobilize their adherents (say, to carry tiki torches through Charlottesville in Nazi cosplay).
If that's true, then elite calls to "fight disinformation" are unlikely to do much, except possibly inflame things. If "disinformation" is really people finding each other (not infecting each other), then labelling their posts as "disinformation" won't change their minds.
Worse, plans like the Biden admin's National Strategy for Countering Domestic Terrorism lump 1/6 insurrectionists in with anti-pipeline activists, racial justice campaigners, and animal rights groups.
Whatever new powers we hand over to fight disinformation will be felt most keenly by people without deep-pocketed backers to foot the bill for crack lawyers.
Here's the key to Bernstein's argument: "One reason to grant Silicon Valley's assumptions about our mechanistic persuadability is that it prevents us from thinking too hard about the role we play in taking up and believing the things we want to believe. It turns a huge question about the nature of democracy in the digital age - what if the people believe crazy things, and now everyone knows it? - into a technocratic negotiation between tech companies, media companies, think tanks, and universities."
I want to "Yes, and" that.
My 2020 book How To Destroy Surveillance Capitalism doesn't dismiss the idea that conspiratorialism is on the rise, nor that tech companies are playing a key role in that rise - but it does so without engaging in criti-hype.
https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59
In my book, I propose that conspiratorialism isn't a crisis of what people believe so much as how they arrive at their beliefs - it's an "epistemological crisis."
We live in a complex society plagued by high-stakes questions none of us can answer on our own.
Do vaccines work? Is Oxycontin addictive? Should I wear a mask? Can we fight covid by sanitizing surfaces? Will distance ed make my kid an ignoramus? Should I fly in a 737 Max?
Even if you have the background to answer one of these questions, no one can answer all of them.
Instead, we have a process: neutral expert agencies use truth-seeking procedures to sort out competing claims, showing their work, recusing themselves when they have conflicts, and revising their conclusions in light of new evidence.
It's pretty clear that this process is breaking down. As companies (led by the tech industry) merge with one another to form monopolies, they hijack their regulators and turn truth-seeking into an auction, where shareholder preferences trump evidence.
This perversion of truth has consequences - take the FDA's willingness to accept the expensively manufactured evidence of Oxycontin's safety, a corrupt act that kickstarted the opioid epidemic, which has killed 800,000 Americans to date.
If the best argument for vaccine safety and efficacy is "we used the same process and experts that pronounced judgment on Oxy," then it's not unreasonable to be skeptical - especially if you're still coping with the trauma of lost loved ones.
As Anna Merlan writes in her excellent Republic of Lies, conspiratorialism feeds on distrust and trauma, and we've got plenty of legitimate reasons to experience both.
https://memex.craphound.com/2019/09/21/republic-of-lies-the-rise-of-conspiratorial-thinking-and-the-actual-conspiracies-that-fuel-it/
Tech was an early adopter of monopolistic tactics - the Apple ][+ went on sale the same year Ronald Reagan hit the campaign trail, and the industry's growth tracked perfectly with the dismantling of antitrust enforcement over the past 40 years.
What's more, while tech may not persuade people, it is indisputably good at finding them. If you're an advertiser looking for people who recently looked at fridge reviews, tech finds them for you. If you're a boomer looking for your old high school chums, it'll do that too.
Seen in that light, "online radicalization" stops looking like the result of mind control, instead showing itself to be a kind of homecoming - finding the people who share your interests, a common online experience we can all relate to.
I found out about Bernstein's article from the Techdirt podcast, where he had a fascinating discussion with host Mike Masnick.
https://www.techdirt.com/articles/20210928/12593747652/techdirt-podcast-episode-299-misinformation-about-disinformation.shtml
Towards the end of that discussion, they talked about FB's Project Amplify, in which the company tweaked its news algorithm to uprank positive stories about Facebook, including stories its own PR department wrote.
https://pluralistic.net/2021/09/22/kropotkin-graeber/#zuckerveganism
Project Amplify is part of a larger, aggressive image-control effort by the company, which has included shuttering internal transparency portals, providing bad data to researchers, and suing independent auditors who tracked its promises.
I'd always assumed that this truth-suppression and wanton fraud was about hiding how bad the platform's disinformation problem was.
But listening to Masnick and Bernstein, I suddenly realized there was another explanation.
Maybe Facebook's aggressive suppression of accurate assessments of disinformation on its platform is driven by a desire to hide the fact that the expensive (and profitable) political advertising it depends on is pretty useless.
Image: Anthony Quintano (modified) https://commons.wikimedia.org/wiki/File:Mark_Zuckerberg_F8_2018_Keynote_(41793470192).jpg
Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY: https://creativecommons.org/licenses/by/3.0/deed.en