#and as an end user of someone else's ai model you have been robbed of the ability to respect the labour that went into making the data
bigsnorp · 1 year ago
Text
imo the recent wave of "ai art is valid actually" discourse often misses the point of the reaction against ai art. And I think the reaction itself often targets the wrong thing too.
The end product is not the problem. Like it or not, AI generated art is now a valid subset of art. It makes something that wasn't there before, usually has some purpose or intent in the process, and it makes people feel something. Hating it is still something.
The problem, as always, is capitalism. Just like that old man said.
Creating an AI model always involves training it on a dataset. The bigger the better, every single time. How do you do that? You write a program to scrape everything from whatever you point it at, and run it through various levels of processing until your AI reliably spits out something you want to see.
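The scraping step itself is not exotic. Here is a minimal sketch (my own illustration in Python, assuming the requests and beautifulsoup4 libraries and a placeholder example.com URL, not any company's actual pipeline) of the kind of collector that gets pointed at other people's work:

```python
# Illustrative only: a crude bulk scraper feeding a training corpus.
# The URL pattern and output file are placeholders, not a real pipeline.
import requests
from bs4 import BeautifulSoup

seed_urls = [f"https://example.com/gallery?page={i}" for i in range(1, 500)]

with open("scraped_corpus.tsv", "w", encoding="utf-8") as corpus:
    for url in seed_urls:
        page = requests.get(url, timeout=10)
        if page.status_code != 200:
            continue  # skip failures; volume matters more than care
        soup = BeautifulSoup(page.text, "html.parser")
        for img in soup.find_all("img"):
            # Keep the image link and whatever caption text is attached.
            # No licence check, no attribution, no opt-out.
            corpus.write(f"{img.get('src', '')}\t{img.get('alt', '')}\n")
```

The point is how little of the original context survives: one link and one caption per row, ready to be fed into training.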
Doing this well, at scale, is hard, so there is a lot of monetary value in creating such a model, hosting it, and offering it to others. At no point do you have to create anything of value yourself; you just have to take what other people made and extract surplus value from it. And you do this while interacting as little as possible with the art you've used for this massive dataset, or god forbid with the artists that made it.
In return these artists get nothing, usually. Not so much as a note of acknowledgement in the readme.
Human artists can and do steal from, plagiarise, and piss off other artists all the time. But they cannot do it at the scale of AI art, nor can they completely divorce the original art from its context. Even when this is done for financial gain, it's on an entirely different level, with entirely different stakes. I'm not talking about copyright law or intellectual property; I'm talking about large-scale labour devaluation and deskilling for the sake of corporate capitalism, which is an inherent part of most of the AI generative art tools people use today.
There are a lot of cool possibilities and interesting, thought-provoking questions that have become immediately obvious with AI art, and it's not in any rush to go away. I think spending hours debating the artistic merit of the end product of AI generated art is the wrong thing to focus on right now. Humans also make art that is uninspired or derivative or mass-produced and it can still be artistically valuable. Look at Duchamp's Fountain: literally mass-produced, likely plagiarised from another artist, and one of the most valuable pieces of modern art history we have.
It's about the scale. The decontextualisation. The capital gain for a generally uninterested company seeking to extract surplus value from the unconsenting labour. The picture that gets generated at the end is kind of the least interesting part of the discussion.
30 notes
Text
AI is Theft, plain and simple.
I'm seeing a group of posts circulating in which fanfiction authors forbid folks from feeding their WIPs to an AI to get a quick ending. I am both horrified that there are actual readers who would do that and resigned that some readers will do it anyway.
A lot of us have already been robbed.
1,000,000 words of my writing were consumed by ChatGPT when its trainers took massive amounts of AO3 works and added them to its training dataset. Nearly every word I've written in my adult life was taken without my consent to build that machine.
I'm locking all my existing and future fics to registered AO3 users only for this reason. It's the best precaution to prevent future scraping of works on the website by AI. I don't want to do that. Half my Kudos and some of my comments come from guests. I want to be able to share my stories with those of you who can't get an AO3 account. But I don't want my work stolen by an AI again.
To folks who would rather use AI to generate the ending of someone else's WIP, or to write a whole story for them: know that you're condoning the theft of billions of words.
Some may say that all writing is created thanks to inspiration from other writing, and maybe you think it's not a big deal that others' work was used to train an AI. But there are differences between how a human mind writes and how a machine generates text. A human being can be inspired by another writer or dozens of writers. But the work they create is their own, crafted from their unique human experiences. Humans select words based on their definition, connotation, linguistic history, and dozens of other unique factors to convey whatever idea they are striving to put onto a page.
ChatGPT selects words based primarily on their function, which is one of the reasons it has been demonstrated to be unable to tell the difference between falsehood and fact. It selects words based on how often it knows they have been paired with other words. ChatGPT does not have its own emotions. It does not think. It does not create. It only reuses the turns of phrase created by real people. None of its words are its own. It has no original ideas of its own. It's producing a facsimile of creativity - a facsimile made possible by my own and millions of other writers' stolen, unconsented contributions. Its creators are profiting off of our work.
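To make that difference concrete, here is a toy sketch (my own illustration in Python; not how ChatGPT is actually built, which uses a large neural network over sub-word tokens, but a crude picture of generation driven by observed pairings rather than meaning):

```python
# Toy next-word generator: count which word follows which in a corpus,
# then sample continuations by frequency. No meaning, just pairings.
import random
from collections import defaultdict, Counter

corpus = "the ship sank beneath the waves and the crew sang as the ship sank".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Weighted choice: pairings seen more often are picked more often.
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Run it a few times and it produces fluent-looking fragments of its corpus with no idea whether any of them are true, which is exactly the falsehood-versus-fact problem described above.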
The WGA is striking to ensure its writers' hard work is never used for AI models. Those of us who are fanfiction authors deserve the same choice. I never agreed to have my work used for anyone else's profit, certainly not for an AI which, by design, steals other people's ideas each time it generates a word.
If you're too impatient to wait for one of my WIPs to be finished, and for some reason don't just want to message me and beg me to spoil the ending, then go ahead, give my work to the AI to finish if you're that impatient. It already ate every word that's ever mattered to me. But know that whatever ending it spits out, it will be no more real than a trick of the light and not half as entertaining. The equivalent of eating a pack of red dye number 2 when you wanted a red apple. And it will be theft. Is that really worth your instant gratification?
190 notes
viralleakszone-blog · 7 years ago
Text
Social media is giving us trypophobia
Something is rotten in the state of technology. But amid all the hand-wringing over fake news, the cries of election-deforming Kremlin disinformation plots, the calls from political podia for tech giants to locate a social conscience, a knottier realization is taking shape. Fake news and disinformation are just a few of the symptoms of what's wrong and what's rotten. The problem with platform giants is something far more fundamental.

The problem is these vastly powerful algorithmic engines are blackboxes. And, at the business end of the operation, each individual user only sees what each individual user sees.

The great lie of social media has been to claim it shows us the world. And their follow-on deception: that their technology products bring us closer together. In truth, social media is not a telescopic lens — as the telephone actually was — but an opinion-fracturing prism that shatters social cohesion by replacing a shared public sphere and its dynamically overlapping discourse with a wall of increasingly concentrated filter bubbles.

Social media is not connective tissue but engineered segmentation that treats each pair of human eyeballs as a discrete unit to be plucked out and separated off from its fellows. Think about it: it's a trypophobic's nightmare. Or the panopticon in reverse — each user bricked into an individual cell that's surveilled from the platform controller's tinted glass tower.

Little wonder lies spread and inflate so quickly via products that are not only hyper-accelerating the rate at which information can travel but deliberately pickling people inside a stew of their own prejudices. First it panders, then it polarizes, then it pushes us apart.

We aren't so much seeing through a lens darkly when we log onto Facebook or peer at personalized search results on Google; we're being individually strapped into a custom-moulded headset that's continuously screening a bespoke movie — in the dark, in a single-seater theatre, without any windows or doors. Are you feeling claustrophobic yet?

It's a movie that the algorithmic engine believes you'll like. Because it's figured out your favorite actors. It knows what genre you skew to. The nightmares that keep you up at night. The first thing you think about in the morning. It knows your politics, who your friends are, where you go. It watches you ceaselessly and packages this intelligence into a bespoke, tailor-made, ever-iterating, emotion-tugging product just for you. Its secret recipe is an infinite blend of your personal likes and dislikes, scraped off the Internet where you unwittingly scatter them. (Your offline habits aren't safe from its harvest either — it pays data brokers to snitch on those too.)

No one else will ever get to see this movie. Or even know it exists. There are no adverts announcing it's screening. Why bother putting up billboards for a movie made just for you? Anyway, the personalized content is all but guaranteed to strap you in your seat.

If social media platforms were sausage factories we could at least intercept the delivery lorry on its way out of the gate to probe the chemistry of the flesh-colored substance inside each packet — and find out if it's really as palatable as they claim. Of course we'd still have to do that thousands of times to get meaningful data on what was being piped inside each custom sachet. But it could be done. Alas, platforms involve no such physical product, and leave no such physical trace for us to investigate.
Smoke and mirrors

Understanding platforms' information-shaping processes would require access to their algorithmic blackboxes. But those are locked up inside corporate HQs — behind big signs marked: 'Proprietary! No visitors! Commercially sensitive IP!' Only engineers and owners get to peer in. And even they don't necessarily always understand the decisions their machines are making.

But how sustainable is this asymmetry? If we, the wider society — on whom platforms depend for data, eyeballs, content and revenue; we are their business model — can't see how we are being divided by what they individually drip-feed us, how can we judge what the technology is doing to us, one and all? And figure out how it's systemizing and reshaping society? How can we hope to measure its impact? Except when and where we feel its harms.

Without access to meaningful data how can we tell whether time spent here or there or on any of these prejudice-pandering advertiser platforms can ever be said to be "time well spent"?

What does it tell us about the attention-sucking power that tech giants hold over us when — just one example — a train station has to put up signs warning parents to stop looking at their smartphones and point their eyes at their children instead? Is there a new idiot wind blowing through society all of a sudden? Or are we being unfairly robbed of our attention? What should we think when tech CEOs confess they don't want kids in their family anywhere near the products they're pushing on everyone else? It sure sounds like even they think this stuff might be the new nicotine.

External researchers have been trying their best to map and analyze flows of online opinion and influence in an attempt to quantify platform giants' societal impacts. Yet Twitter, for one, actively degrades these efforts by playing pick and choose from its gatekeeper position — rubbishing any studies with results it doesn't like by claiming the picture is flawed because it's incomplete. Why? Because external researchers don't have access to all its information flows. Why? Because they can't see how data is shaped by Twitter's algorithms, or how each individual Twitter user might (or might not) have flipped a content suppression switch which can also — says Twitter — mould the sausage and determine who consumes it. Why not? Because Twitter doesn't give outsiders that kind of access. Sorry, didn't you see the sign?

And when politicians press the company to provide the full picture — based on the data that only Twitter can see — they just get fed more self-selected scraps shaped by Twitter's corporate self-interest. (This particular game of 'whack an awkward question' / 'hide the unsightly mole' could run and run and run. Yet it also doesn't seem, long term, to be a very politically sustainable one — however much quiz games might be suddenly back in fashion.)

And how can we trust Facebook to create robust and rigorous disclosure systems around political advertising when the company has been shown failing to uphold its existing ad standards? Mark Zuckerberg wants us to believe we can trust him to do the right thing. Yet he is also the powerful tech CEO who studiously ignored concerns that malicious disinformation was running rampant on his platform. Who even ignored specific warnings that fake news could impact democracy — from some pretty knowledgeable political insiders and mentors too.
Biased blackboxes

Before fake news became an existential crisis for Facebook's business, Zuckerberg's standard line of defense to any raised content concern was deflection — that infamous claim 'we're not a media company; we're a tech company'. Turns out maybe he was right to say that. Because maybe big tech platforms really do require a new type of bespoke regulation. One that reflects the uniquely hypertargeted nature of the individualized product their factories are churning out at — trypophobics look away now! — 4BN+ eyeball scale.

In recent years there have been calls for regulators to have access to algorithmic blackboxes to lift the lids on engines that act on us yet which we (the product) are prevented from seeing (and thus overseeing). Rising use of AI certainly makes that case stronger, with the risk of prejudices scaling as fast and far as tech platforms if they get blindbaked into commercially privileged blackboxes.

Do we think it's right and fair to automate disadvantage? At least until the complaints get loud enough and egregious enough that someone somewhere with enough influence notices and cries foul? Algorithmic accountability should not mean that a critical mass of human suffering is needed to reverse engineer a technological failure.

We should absolutely demand proper processes and meaningful accountability. Whatever it takes to get there. And if powerful platforms are perceived to be foot-dragging and truth-shaping every time they're asked to provide answers to questions that scale far beyond their own commercial interests — answers, let me stress it again, that only they hold — then calls to crack open their blackboxes will become a clamor because they will have fulsome public support.

Lawmakers are already alert to the phrase algorithmic accountability. It's on their lips and in their rhetoric. Risks are being articulated. Extant harms are being weighed. Algorithmic blackboxes are losing their deflective public sheen — a decade+ into platform giants' huge hyperpersonalization experiment.

No one would now doubt these platforms impact and shape the public discourse. But, arguably, in recent years, they've made the public street coarser, angrier, more outrage-prone, less constructive, as algorithms have rewarded trolls and provocateurs who best played their games.

So all it would take is for enough people — enough 'users' — to join the dots and realize what it is that's been making them feel so uneasy and queasy online — and these products will wither on the vine, as others have before. There's no engineering workaround for that either. Even if generative AIs get so good at dreaming up content that they could substitute a significant chunk of humanity's sweating toil, they'd still never possess the biological eyeballs required to blink forth the ad dollars the tech giants depend on. (The phrase 'user generated content platform' should really be bookended with the unmentioned yet entirely salient point: 'and user consumed'.)

This week the UK prime minister, Theresa May, used a World Economic Forum speech in Davos to slam social media platforms for failing to operate with a social conscience. And after laying into the likes of Facebook, Twitter and Google — for, as she tells it, facilitating child abuse, modern slavery and spreading terrorist and extremist content — she pointed to an Edelman survey showing a global erosion of trust in social media (and a simultaneous leap in trust for journalism).
Her subtext was clear: where tech giants are concerned, world leaders now feel both willing and able to sharpen the knives. Nor was she the only Davos speaker roasting social media.

"Facebook and Google have grown into ever more powerful monopolies, they have become obstacles to innovation, and they have caused a variety of problems of which we are only now beginning to become aware," said billionaire US philanthropist George Soros, calling — out-and-out — for regulatory action to break the hold platforms have built over us.

And while politicians (and journalists — and most probably Soros too) are used to being roundly hated, tech firms most certainly are not. These companies have basked in the halo that's perma-attached to the word "innovation" for years. 'Mainstream backlash' isn't in their lexicon. Just like 'social responsibility' wasn't until very recently. You only have to look at the worry lines etched on Zuckerberg's face to see how ill-prepared Silicon Valley's boy kings are to deal with roiling public anger.

Guessing games

The opacity of big tech platforms has another harmful and dehumanizing impact — not just for their data-mined users but for their content creators too. A platform like YouTube, which depends on a volunteer army of makers to keep content flowing across the countless screens that pull the billions of streams off of its platform (and stream the billions of ad dollars into Google's coffers), nonetheless operates with an opaque screen pulled down between itself and its creators.

YouTube has a set of content policies which it says its content uploaders must abide by. But Google has not consistently enforced these policies. And a media scandal or an advertiser boycott can trigger sudden spurts of enforcement action that leave creators scrambling not to be shut out in the cold.

One creator, who originally got in touch with TechCrunch because she was given a safety strike on a satirical video about the Tide Pod Challenge, describes being managed by YouTube's heavily automated systems as an "omnipresent headache" and a dehumanizing guessing game.

"Most of my issues on YouTube are the result of automated ratings, anonymous flags (which are abused) and anonymous, vague help from anonymous email support with limited corrective powers," Aimee Davison told us. "It will take direct human interaction and negotiation to improve partner relations on YouTube and clear, explicit notice of consistent guidelines."

"YouTube needs to grade its content adequately without engaging in excessive artistic censorship — and they need to humanize our account management," she added.

Yet YouTube has not even been doing a good job of managing its most high profile content creators. Aka its 'YouTube stars'. But where does the blame really lie when 'star' YouTube creator Logan Paul — an erstwhile Preferred Partner on Google's ad platform — uploads a video of himself making jokes beside the dead body of a suicide victim?

Paul must manage his own conscience. But blame must also scale beyond any one individual who is being algorithmically managed (read: manipulated) on a platform to produce content that literally enriches Google because people are being guided by its reward system. In Paul's case YouTube staff had also manually reviewed and approved his video. So even when YouTube claims it has human eyeballs reviewing content, those eyeballs don't appear to have adequate time and tools to be able to do the work. And no wonder, given how massive the task is.
Google has said it will increase the headcount of staff who carry out moderation and other enforcement duties to 10,000 this year. Yet that number is as nothing vs the amount of content being uploaded to YouTube. (According to Statista, 400 hours of video were being uploaded to YouTube every minute as of July 2015; it could easily have risen to 600 or 700 hours per minute by now.)

The sheer size of YouTube's free-to-upload content platform all but makes it impossible to meaningfully moderate. And that's an existential problem when the platform's massive size, pervasive tracking and individualized targeting technology also gives it the power to influence and shape society at large. The company itself says its 1BN+ users constitute one-third of the entire Internet.

Throw in Google's preference for hands-off (read: lower cost) algorithmic management of content and some of the societal impacts flowing from the decisions its machines are making are questionable — to put it politely. Indeed, YouTube's algorithms have been described by its own staff as having extremist tendencies. The platform has also been accused of essentially automating online radicalization — by pushing viewers towards increasingly extreme and hateful views. Click on a video about a populist right wing pundit and end up — via algorithmic suggestion — pushed towards a neo-nazi hate group. And the company's suggested fix for this AI extremism problem? Yet more AI…

Yet it's AI-powered platforms that have been caught amplifying fakes and accelerating hate and incentivizing sociopathy. And it's AI-powered moderation systems that are too stupid to judge context and understand nuance like humans do. (Or at least can when they're given enough time to think.)

Zuckerberg himself said as much a year ago, as the scale of the existential crisis facing his company was beginning to become clear. "It's worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more," he wrote then. "At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years."

'Many years' is tech CEO speak for 'actually we might not EVER be able to engineer that'. And if you're talking about the very hard, very editorial problem of content moderation, identifying terrorism is actually a relatively narrow challenge. Understanding satire — or even just knowing whether a piece of content has any kind of intrinsic value at all vs being purely worthless algorithmically groomed junk? Frankly speaking, I wouldn't hold my breath waiting for the robot that can do that. Especially not when — across the spectrum — people are crying out for tech firms to show more humanity. And tech firms are still trying to force-feed us more AI.
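A rough sense of that mismatch, taking the figures above at face value and assuming, generously, that every one of the 10,000 staff does nothing but watch video for a full 8-hour shift: 600 hours uploaded per minute works out to roughly 600 × 60 × 24 ≈ 864,000 hours of new video per day, while 10,000 reviewers × 8 hours each covers at most 80,000 hours per day. That is less than a tenth of the inflow, before anything gets watched a second time for context.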
0 notes
isaacscrawford · 7 years ago
Text
The Boys From Silicon Valley
By MARGALIT GUR-ARIE
A few weeks ago one man, named @jack, decided that millions of people would be allowed to use up to 280 characters when expressing themselves on Jack's public square platform. One man decides how many letters each and every one of us, including the "leader of the free world", can use when we talk to each other. Just like that. Nobody seemed the least bit perturbed by this notion. Another dude, named Mark, decided to ask people for nude pictures of themselves, so he could better protect them from the bad guys. We shrugged that off too. Then, in a most embarrassing exercise in public humiliation, our democratically elected representatives begged three slick lawyers representing these platforms to effectively regulate what people can say or see on "their" platforms.
So here we are, in the land of the free and the home of the brave, where Jack and Mark decide what you can or cannot say, and what you can or cannot hear or see. This, my friend, is the power of "platforms". In the old days, it used to be that he who pays the piper calls the tune. In the artificially intelligent technology age there are no pipers. He who owns the pipe makes it play whatever the hell he wants it to play. And as Sean Parker, Facebook's founding president, elegantly put it, "God only knows what it's doing to our children's brains". Perhaps God knows, but he is certainly not the only one who knows, because these platforms are built with the explicit intent to get people addicted to and dependent on the platform.
Funded with cash from sexist pigs and harassers, a startup, whose business model is to help other startups “hook” people on trashy little apps, is calling itself Dopamine Labs. “Dopamine makes your app addictive” is their promise. According to the website, they use AI and neuroscience to deliver jolts of dopamine that “don’t just feel good: they rewire the brain’s habit centers” of users to “boost usage, loyalty, and revenue”. “Your users will crave it. And they’ll crave you”.
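The mechanism being sold there is, at bottom, the old slot-machine trick: a variable reward schedule, where the payoff arrives unpredictably. A toy sketch in Python of the general idea (my own illustration, not Dopamine Labs' actual product or API):

```python
import random

# Variable-ratio reinforcement: the "reward" (a like, a streak, a badge,
# a confetti animation) arrives on an unpredictable fraction of actions.
# Unpredictable payoffs are the schedule most associated with compulsive
# checking, which is exactly what an engagement-optimised app wants.
REWARD_RATE = 0.3  # hypothetical tuning knob

for open_count in range(1, 21):
    if random.random() < REWARD_RATE:
        print(f"app open #{open_count}: reward shown (notification, streak, confetti)")
    else:
        print(f"app open #{open_count}: nothing this time, keep pulling the lever")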
At its rotten core, Silicon Valley is a drug cartel, a very clever and savvy cartel who managed to convince the world that its brand of drug addiction is actually good for you and either way, it’s inevitable.
But just getting billions of people on techno-drugs is obviously not the end game here. After extracting trillions of dollars from addicts who would rather go without food and medicine than go without an iPhone X that costs more than a full-blown, top-of-the-line computer, the Capos of Silicon Valley Inc. are now realizing that there is plenty more left to extract from the armies of zombies they are creating. "Because I'm a billionaire, I'm going to have access to better health care so … I'm going to be like 160 and I'm going to be part of this, like, class of immortal overlords. [Laughter] Because, you know the [Warren Buffett] expression about compound interest. … [G]ive us billionaires an extra hundred years and you'll know what … wealth disparity looks like."
Ah, yes, health care, the final frontier. When Keytruda (the Jimmy Carter drug) became available, it was considered too expensive at around $150,000, but times are changing. The FDA recently approved the immunotherapy drug Kymriah from Novartis with a price tag of $475,000, although Novartis says it could have charged more, presumably because this drug is a lifesaver of last resort for small children with cancer. Next, another CAR-T cell therapy cancer drug, Yescarta, was approved by the FDA for adult cancer, and Gilead Sciences priced it at only $373,000 a pop (that's how value-based health care works). At this rate of innovation, it should not be too difficult to project a precise date for the emergence of that immortal class of overlords.
Developing personalized drugs, like immunotherapy, requires mountains of data from millions of people, and this is where the app-addicted public has a crucial role to play. Before the overlords can become immortal, we all need to “donate” our medical data, submit to experimentation, get sick and die, and yes, here and there a few lucky bastards will benefit from therapies their children will never be able to afford. Not surprisingly, Mr. Parker, the aspiring overlord, is now invested in an immunotherapy platform to coordinate research, or something like that. But Mr. Parker is a diversified investor. He has a couple more platforms. One is there to save the world from the AIDS epidemic by providing support to the Clinton Foundation.
The other platform is designed to help us vote. Yes, vote. The guy who promises to show us what wealth disparity really looks like is building platforms, complete with little dopamine jolts and colored pictures of bananas, to teach us all about "civic engagement", because according to Mr. Parker's venture buddy, "the tools we build in Silicon Valley represent the best hope for fixing our democracy". Everything was just fine with "our democracy" until all investments in the Clinton Foundation came crashing down like a house of cards on one fateful night in November 2016, when the overlords were positively robbed by a dopamine-deficient populist mob. In a wholesome democracy, when you pay for a President, you're supposed to get a President.
Of course “our democracy” has been “broken” in one way or another for upwards of two hundred and forty years, but I think we can all agree that “our democracy” today is less broken than “our democracy” in 1789. There is great utility though, in declaring something to be broken, especially something big and nebulous like “our democracy”, because such declarations are almost always followed by assertions that the diagnosticians of brokenness are uniquely positioned to become the fixers of all broken things. Our health care is broken. Our education is broken. Our justice system is broken. Our economy is broken. Our tax system is broken. Our infrastructure is broken. Our entire goddamn country is broken. Oh, what the hell, the entire freaking world is broken. And Silicon Valley is our only hope.
Silicon Valley has essentially only one product, a very versatile product indeed, but a single product nevertheless. Silicon Valley doesn’t actually make this product. They harvest it by casting gigantic computerized platforms and collecting everything caught in their digital nets, very much like Bubba’s shrimp: “… shrimp is the fruit of the sea.  You can barbecue it, boil it, broil it, bake it, sauté it. Dey’s uh, shrimp-kabobs, shrimp creole, shrimp gumbo. Pan fried, deep fried, stir-fried. There’s pineapple shrimp, lemon shrimp, coconut shrimp, pepper shrimp, shrimp soup, shrimp stew, shrimp salad, shrimp and potatoes, shrimp burger, shrimp sandwich. That- that’s about it.”
Information is the fruit of humanity. You can boil it and broil it to intimidate doctors and manipulate people, to extract immortality (and cash) for you and yours, thus fixing health care. You can sauté it and puree it to terrorize teachers and crush the minds of small children, to generate armies of drones (and cash), thus fixing education. You can sift it, scramble it, steam it, and serve it to nullify judges and juries, to protect property rights (and cash), thus fixing justice for all. You can slice it, dice it, can it and ban it as needed to keep all that cash flowing, thus fixing “our democracy”. 
Remember Jack and Mark? Unlike Mark, Jack is allowing users to remain anonymous on his platform. On Jack’s platform, if you see a blue checkmark next to the name of someone, you can reasonably conclude that you are talking, or rather listening, to a “real” person, instead of, say, a Russian bot.  Over time, it became clear that according to Jack, real people are those who are rich, powerful, or have enough “followers” to influence public opinion. Everybody else on Jack’s platform is shrimp. But Jack is an honorable man.
Jack is fixing "our democracy" by revoking the coveted blue checkmarks from some white supremacists. Presumably Messrs. Spencer and Kessler are no longer real. On the other hand, the multitude of rich and powerful rapists, pedophiles and garden variety perverts are still very real according to Jack's superior morality framework. Mark is fighting the good fight on behalf of "our democracy" in a different way. His platform is pursuing the enemy from without, by tracking enemy advertising paid for with rubles, not yuans or riyals or euros or dinars or wons or yens, only rubles, because the legendary KGB masterminds always pay in rubles (with a return address of Моско́вский Кремль 103073) for all their international spying needs.
Now that "our democracy" is all nice and fixed, the Cartel can apply lessons learned to "democratize" medicine and fix "our health care" too. Health care is rife with old people and old-fashioned ideas, and it is scattered all over the place. Nothing a big platform, dripping with dopamine jolts, can't fix though. Uber for health care. Facebook for health care. Health care is like the iPhone. Information "blockers" will be prosecuted (this one is for real). Structured data. Metadata. e-Visits. Remote monitoring. Predictive analytics. Population management. This stuff is just begging for a medical platform with hundreds of millions of patients "sharing" their health, their illness and their medical experience with each other, with doctors, researchers and of course the platform overlord and his customers.
You will share your symptoms, your concerns, your treatments, your outcomes. You will "like" CT scans, "star" lab results, and rate doctors, health insurers, drugs or devices. Perhaps they'll have a "dislike" button too. You will post videos of your colonoscopy and maybe live stream your telehealth session. You will ask for advice from patients like you and "clap" for the ones you like best. Your cancer remission could go viral. The platform will ensure you see things you care about and shield you from unsettling content. Before you know it, you will feel compelled to check your "health" every 5 minutes, and certainly when your iPhone vibrates with new images from Bertha's mammogram, or when your Apple "watch" beeps with updates from your fantasy clinical trials league or with an urgent reminder to record your pre-hypertension medication intake so you can receive the coveted 20% discount on Christmas fruit cakes at CVS just in time.
Platformized health care will be cheap, convenient and readily available. And just like communications, shopping, porn, and news, it will be fake, manipulative, addictive and designed to “protect consumers” instead of benefitting citizens, or patients in this case.  Jack doesn’t converse with his buddies on Twitter. Mark doesn’t get his news from Facebook. Jeff doesn’t shop for deals on Amazon. And none of them will be getting medical care from a phone or a watch. You will. Your children will too.
Facebook just introduced a “safe” messenger for children under 13. Parents are supposed to set this up for their babies. Many will do just that. And experts will be exalting the thoughtfulness of the Cartel for creating a less toxic version, suitable for hooking children on the product. Why would a six year old need to message his “friends” online, instead of chasing them in the backyard? Why would a three year old need to watch sickly YouTube videos prepared exclusively for toddlers, instead of playing with alphabet blocks on the carpet? Why would the most powerful 71 year old man in the world self-destruct on Twitter instead of running said world? Why can’t you read an entire book anymore? Such is the power of the Silicon Valley Cartel.
Article source: The Health Care Blog
0 notes