#tim's idea of a meme is an image with impact text and an image with impact text Only
Text
read a fic recently that had an interaction like
sasha: tim get like one social media PLEASE I am BEGGING
tim: I have linkedin?
and I cannot describe the Joy I felt reading that. they get it.
#ramblings with major#the magnus archives#tma#i love tim but he is so offline to me#if he does social media its very lightly#i mean his canon sense of humor is pop culture references like movie quotes and stuff. not memes.#tim's idea of a meme is an image with impact text and an image with impact text Only#danny was the fun online one finding all the weird niche communities and hobbies and whatnot#tim is the boring one#im telling y'all hes the 'straight man' of the archives
723 notes
Text
As the United States prepares for the upcoming November election, misinformation and disinformation have spread through memes. A meme is any idea, expression, or opinion conveyed through text or visual imagery (e.g., a photo, video, or GIF), with or without sound, that can be copied and shared online. For researchers, memes are co-constructions whose meaning derives primarily from humor, shared among multiple users and, on the internet, across platforms. The convenient availability of commercial artificial intelligence (AI) tools has also fed the existing meme economy. Software like Grok, X's AI chatbot, can quickly generate such images, including ones that use famous people's likenesses or copyrighted material, or that depict violence or pornography.
The reality is that AI-generated memes have been inserted into the political conversation. These altered images often seem harmless to voters, particularly because they are sometimes shared by people they trust in their personal and professional networks. It can be difficult for policymakers or content moderators to definitively assess their impact because the humor or profile of memes makes them appear innocuous. In the current policy environment where content moderation and domestic AI policy are still evolving, those who develop and disseminate memes can potentially influence voter information about candidate issues, character, and other relevant election information without the usual guardrails that either regulate speech in the U.S. or trigger attention based on the manipulation of political content.
What are memes?
Tugging on the emotions of voters is a critical part of influencing them to act. Memes present unique opportunities because they can disseminate information and foster a sense of community through humor and other apolitical ways. Examples of the persuasive power of memes have appeared in efforts to spread political messaging, alter the stock market, or even influence the way the public thinks about war.
The current campaigns are no exception. The Trump campaign and its allies have created an assortment of AI-generated memes to uplift the former president, which have been shared on the social media platform X and on Truth Social. Other visualizations have ridiculed the opposing side, including one depicting Vice President Kamala Harris leading a communist rally and another showing crowds of Taylor Swift fans endorsing Trump's campaign. Trump supporters like Elon Musk have even reposted an AI-altered "parody" video of Harris calling herself a "deep state puppet" and "the ultimate diversity hire."
On the other side, Vice President Harris' campaign and allies have generated their own share of memes. During the 2024 Democratic National Convention, more content creators and online influencers than ever were in attendance to capture these and other messages for attendees and viewers. Throughout the event, robust online content was created, including memes capturing emotional responses to the various activities and speakers. Doug Emhoff, the second gentleman, was often the subject of memes as he openly conveyed his support for his wife, Kamala Harris. Tim Walz's son, Gus, also became a viral meme as he mouthed his admiration for his father during the acceptance speech. The Harris campaign and its allies have been equally culpable in their use of satirical tools. Her team has also been accused of falsely captioning AI-generated videos and memes of the former president. Using clips from a Trump rally in North Carolina, the Harris social media account played up the theme that the former president was "lost and confused" in suggesting he was in another state—a point that was later fact-checked and clarified by the Harris team.
Probably the most viral memes shared by both parties and their allies have been AI-generated images of cats, which tout the conspiracy theory, referenced by Trump, that Haitian immigrants were eating pets in Springfield, Ohio—some of which were posted by the former president himself.
As these visualizations continue to become a part of the political landscape, memes will increasingly feed into misinformation and disinformation efforts, and cloak facts in humor and satire to elicit more emotional responses from voters. Due to congressional inaction on copyright protections for the data training large language models (LLMs) or more stringent legislation to curb the flow of false information, memes can flourish and, under current election laws, be perceived to be harmless in nature.
Memes are also not necessarily deepfakes
Congress has also made it clear how it categorizes memes when it comes to election and other voter interference. Pending legislation, including the DEEPFAKES Accountability Act, creates carve-outs for memes, humor, satire, parody, and other commentary as protected expressions of an individual's freedom of speech. However, the task of deciphering what is parody and what is deceptive can be very challenging. Despite the deceptive posts by Trump and Musk, for example, the label of satire provides some immunity from liability. Such posts also receive additional protections under Section 230, which shields the online platforms that disseminate the information from liability or association—even if the content is vile or offensive.
Memes also can provoke “rage-baiting,” which refers to using online content to elicit strong negative emotional reactions from users. However, the significant gaps in policy make their dissemination possible and plausible. An examination of the handling of memes in a global context makes the case for stronger guardrails and increased community awareness.
Globally, memes have been perceived as fueling extremist behavior. In 2024, a memo from the Netherlands' National Coordinator for Counterterrorism and Security (NCCS) considered memes to be an "online weapon," suggesting that the lack of strong content moderation on online platforms has made it easier for memes to thrive and nest themselves in mainstream messaging that disguises its goal of radicalizing unsuspecting online users. In their new book "Lies That Kill," co-authors Darrell West and Elaine Kamarck point to these and other examples that persuade not just voters, but other everyday people, to consume false information at rapid speeds.
Memes are the next form of political influence
Misinformation and disinformation will continue to be a focus leading up to the November election. While time has run out for meaningful legislation to counter deepfakes before this election cycle, AI-generated memes are something that policymakers, and especially campaigns, need to monitor. Because of their ability to cloak deeply hateful and vitriolic messages in humorous and satirical images, they have been downplayed in the flow of political rhetoric. Given this, Congress should reconsider the carve-outs in pending legislation to quell deepfakes, especially regarding their use of copyrighted materials and their role in rapidly spreading disinformation. The loose creation and dissemination of memes should also encourage Congress and other lawmakers to consider real investments in AI literacy, so that everyday people understand the consequences of what they share online. In the meantime, campaigns need to be on the lookout for memes that are harmful or that could potentially lead to violence.
6 notes
Text
EU copyright proposal has free speech advocates worried
Note: This is something I originally published on the New Gatekeepers blog at the Columbia Journalism Review, where I’m the chief digital writer
It hasn’t been that long since the European Union caused upheaval on the internet with the launch of the GDPR or General Data Protection Regulation, which brought in a host of cumbersome rules on how consumer data should be protected. Now, some internet activists and free-speech advocates are warning that the EU could take an even larger step in the wrong direction with a proposed copyright law that is up for a vote later this week. If passed, the law could give platforms like Google and Facebook unprecedented power to remove content on the basis that it might be infringing on copyright.
The bill is Article 13 of the proposed Directive for Copyright in the Digital Single Market, and among other things it would require any internet service that hosts content to proactively filter uploads in order to remove copyright infringement. A letter opposing the law was released last week by a group of internet luminaries including Ethernet inventor Vint Cerf, world wide web inventor Sir Tim Berners-Lee, Wikipedia co-founder Jimmy Wales, net neutrality expert Tim Wu, Internet Archive founder Brewster Kahle and Mozilla Project co-founder Mitchell Baker. The letter says:
By requiring Internet platforms to perform automatic filtering of all of the content that their users upload, Article 13 takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users. The damage that this may do to the free and open Internet as we know it is hard to predict, but in our opinions could be substantial.
The European market currently follows a “notice and takedown” copyright system, in much the same way that the US does. In the US, the Digital Millennium Copyright Act gives platforms and content providers a certain amount of immunity (known as “safe harbor”) for hosting content that might infringe on copyright, provided they act immediately to remove it if infringement is brought to their attention. The proposed EU law would replace notice and takedown with a requirement to remove any infringement before it ever goes online.
One risk of this approach is that service providers will remove content that doesn’t infringe because they are afraid of contravening the law. So, for example, they might block a “meme” that uses a copyrighted image to make fun of something, even though that kind of use is typically allowed under “fair use” rules (known as “fair dealing” in the UK and a number of other countries). The signatories of the letter also argue that the cost of this new filtering approach will hit smaller internet services harder, since larger platforms like Google and Facebook will have more than enough resources to comply.
The filtering/censorship risk isn't the only downside of the proposed law. It also includes a "link tax," which would give copyright holders the ability to charge online platforms or providers for using even short snippets of text from a work such as a news article. Germany and several other countries have been working on variations of this idea as a way of charging Google and Facebook for taking their content, but critics of the law say its real impact could be a crippling of the internet's inherent power to link to original source material, since even an innocent link could infringe on this new copyright.
The proposed law goes to a vote by the European Union’s legislative committee on June 20. They could decide to include Article 13, Article 11 (the link tax) or both, or they could decide to include neither one. Judging by one ranking of the potential votes of committee members, however, it looks as though the filtering proposal will almost certainly pass, and the link tax appears to be close. And that could change the way the internet works — in the EU at least — on a fairly fundamental level.
EU copyright proposal has free speech advocates worried was originally published on mathewingram.com/work
0 notes
Text
From Macross to Miku: A History of Virtual Idols
Idols are one of Japanese pop culture’s most ubiquitous fixtures. While the line defining what is and isn’t an idol is a frequent source of debate, there’s one commonality: they’re young personalities that exude a desirable image of decency to the public. They appear in many forms and talents across a wide variety of mediums; some are musicians, others voice actresses, TV personalities, and the list goes on. Name an entertainment industry and you’ll likely find idols within it.
Yet while idol culture has been increasingly expanding in Japan since its inception in the early 1970s, something has taken hold in more recent times that redefined the industry: virtual idols. These intangible stars have ushered in new possibilities in the idol space, ranging from fan content creation to untapped storytelling potential. Let’s take a look at some key moments in the storied history of virtual idols that made them a unique corner of idol culture.
While artificial intelligence was no new concept to anime in the ‘90s, 1994’s Macross Plus introduced a new flavor of virtual sentience to audiences with Sharon Apple. Sharon was an intergalactic pop superstar who also happened to be a computer program. In a plot device that predicted the future of virtual idols like Hatsune Miku and Kizuna AI, the Sharon AI was incomplete and required someone to give her a personality during performances. She essentially became a vehicle through which people could express themselves. That is, until she was corrupted and used her music as a method for mind-control (a cautionary tale that hasn’t quite come to reflect virtual idols as we know them… yet). Sharon’s influence can be seen as recently as the Spring season’s Caligula, in which troubled teens unknowingly reside in a digital world presided over by a virtual idol.
In the year following Sharon’s appearance, an exciting announcement was made by talent agency Horipro that the first real-life virtual idol was in production. Kyoko Date – the idol in question – took both the domestic and international media by storm upon her reveal, hailed as an evolution of the entertainment industry. However, due to a release delay to late 1996 and jarring technical hurdles such as unnatural movement, she failed to capture an audience and faded away after a few months.
Still, Kyoko’s existence was one ahead of its time. She was treated by her agency as talent no differently than her human counterparts. Horipro recognized the value of a digital entity’s ability to be anywhere at any time and not get caught up in scandals. These are the same attributes that companies like Crypton would point to a decade later upon the release of Vocaloid software like Hatsune Miku. Kyoko was different from Miku in that Horipro gave her a predetermined personality and were the sole proprietors of her assets and music output. Her impact had ripples that changed the idea of what an idol could be, even if it would take time to come to fruition.
Horipro’s ambition was realized in 2007 when Crypton Future Media used Yamaha’s Vocaloid 2 software to create Hatsune Miku. Miku was different than any virtual idol that had come before in that she was an entity with no personality beyond a character design, and no predetermined songs to speak of. Instead, she was consumer-facing software that synthesized vocal samples which, when strung together, simulate language. Fans were left to impart themselves upon her by creating music, dances, art, videos, manga and more. They did so in droves as Hatsune Miku became an overnight success with some creators going as far as building careers off of their work (like supercell, a dojin music circle that started with Vocaloid songs and progressed to creating themes for anime like Bakemonogatari and Guilty Crown). Hatsune Miku’s crash-landing on the scene was a watershed moment in the virtual idol landscape that fully realized their potential and made them a worldwide sensation.
What has to be emphasized about Hatsune Miku is her community-building power. Nico Nico Douga served as a hotbed for creators to reach an audience, who would in turn create derivative works of their own, sometimes through collaboration. Those without a creative inclination would use Nico Nico's scrolling text chat feature as a means to engage with the content of others, and share their findings in online communities. Through these means, Miku spread like wildfire, with top videos garnering millions of views.
Congregation around Miku and other Crypton Vocaloid characters importantly crossed from the online realm to the physical with live performances. Backed by a band, a projection of Miku sings and dances as the audience reciprocate in unison with arm motions and cheers. This experience is not unlike those of real-life idols, a testament to the connection that these digital icons fostered with their audience. It all comes back to grassroots creation; fans make Miku into what they want her to be and grow bonds with her and others through sharing their vision.
A more recent evolution in the virtual idol space is Kizuna AI. She represents a return to the Kyoko Date model, in which a company crafts a personality for its virtual idol and propagates her through official channels. Kizuna has proven to be the success that Kyoko failed to be. She doesn't primarily sing and dance, but rather makes YouTube videos, generally vlogs and Let's Plays. Through exuberant and sometimes sassy videos, she's very much like any other YouTuber you'd watch, except in the form of an anime girl. Kizuna has garnered over 170 million views across her two channels, inspired a bevy of memes, and sparked a trend of other channels using digital avatars as a means of self-expression.
Virtual idols have become an important part of internet culture worldwide and continue to expand into new frontiers. They redefine what an idol can be: a vehicle for self-expression, an entity that can be anywhere at any time, and of course, cute anime girls (and boys!). Particularly in the case of Hatsune Miku, we’ve even seen them be launchpads for artists to start incredibly successful careers. What comes next is unclear, but that’s what makes virtual idols so exciting! As the internet evolves so too will they.
Now, let’s just hope that doesn’t involve them turning against us like Sharon in Macross Plus...
---
Tim Rattray (@thoughtmotion) is a features and video writer for Crunchyroll and founder of Thoughts That Move.
0 notes
Text
Europe takes another step towards copyright pre-filters for user generated content
In a key vote this morning the European Parliament’s legal affairs committee has backed the two most controversial elements of a digital copyright reform package — which critics warn could have a chilling effect on Internet norms like memes and also damage freedom of expression online.
In the draft copyright directive, Article 11, "Protection of press publications concerning online uses" — which targets news aggregator business models by setting out a neighboring right for snippets of journalistic content that requires a license from the publisher to use this type of content (aka 'the link tax', as critics dub it) — was adopted by a 13:12 majority of the legal committee.
Meanwhile, Article 13, "Use of protected content by online content sharing service providers" — which makes platforms directly liable for copyright infringements by their users, thereby pushing them towards creating filters that monitor all content uploads, with all the associated potential chilling effects (aka 'censorship machines') — was adopted by a 15:10 majority.
MEPs critical of the proposals have vowed to continue to oppose the measures, and the EU parliament will eventually need to vote as a whole.
#Article13, the #CensorshipMachines, has been adopted by @EP_Legal with a 15:10 majority. Again: We will take this fight to plenary and still hope to #SaveYourInternet pic.twitter.com/BLguxmHCWs
— Julia Reda (@Senficon) June 20, 2018
EU Member State representatives in the EU Council will also need to vote on the reforms before the directive can become law. Though, as it stands, a majority of European governments appear to back the proposals.
European digital rights group EDRi, a long-standing critic of Article 13, has a breakdown of the next steps for the copyright directive here. It’s possible there could be another key vote in the parliament next month — ahead of negotiations with the European Council, which could be finished by fall. A final vote on a legally checked text will take place in the parliament — perhaps before the end of the year.
Derailing the proposals now essentially rests on whether enough MEPs can be convinced it’s politically expedient to do so — factoring in a timeline that includes the next EU parliament elections, in May 2019.
We can still turn this around! The #linktax and #uploadfilters passed a critical hurdle today. But in just 2 weeks, all 751 MEPs will be asked to take a stand either for or against a free & open internet. The people of Europe managed to stop ACTA, we can #SaveYourInternet again! pic.twitter.com/883ID7CKDE
— Julia Reda (@Senficon) June 20, 2018
Last week, a coalition of original Internet architects, computer scientists, academics and supporters — including Sir Tim Berners-Lee, Vint Cerf, Bruce Schneier, Jimmy Wales and Mitch Kapor — penned an open letter to the European Parliament’s president to oppose Article 13, warning that while “well-intended” the requirement that Internet platforms perform automatic filtering of all content uploaded by users “takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users”.
“As creators ourselves, we share the concern that there should be a fair distribution of revenues from the online use of copyright works, that benefits creators, publishers, and platforms alike. But Article 13 is not the right way to achieve this,” they write in the letter.
“By inverting this liability model and essentially making platforms directly responsible for ensuring the legality of content in the first instance, the business models and investments of platforms large and small will be impacted. The damage that this may do to the free and open Internet as we know it is hard to predict, but in our opinions could be substantial.”
The Wikimedia Foundation also blogged separately, setting out some specific concerns about the impact that mandatory upload filters could have on Wikipedia.
“[A]ny sort of law which mandates the deployment of automatic filters to screen all uploaded content using AI or related technologies does not leave room for the types of community processes which have been so effective on the Wikimedia projects,” it warned last week. “As previously mentioned, upload filters as they exist today view content through a broad lens, that can miss a lot of the nuances which are crucial for the review of content and assessments of legality or veracity.”
More generally critics warn that expressive and creative remix formats like memes and GIFs — which have come to form an integral part of the rich communication currency of the Internet — will be at risk if the proposals become law…
This may be illegal under #Article13. If just one of the four images is copyrighted, Twitter would be compelled to take this picture off. #Article13 #forDummies https://t.co/aSbYoiPDm0
— Ziga Turk (@ZigaTurkEU) June 12, 2018
Regarding Article 11, Europe already has experience with a neighboring right for news: an ancillary copyright law was enacted in Germany in 2013. But local publishers ended up granting Google free consent to display their snippets after they saw traffic fall substantially when Google stopped showing their content rather than pay for it.
Spain also enacted a similar law for publishers in 2014, but its implementation required publishers to charge for using their snippets — leading Google to permanently close its news aggregation service in the country.
Critics of this component of the digital copyright reform package also warn it’s unclear what kinds of news content will constitute a snippet, and thus fall under the proposal — even suggesting a URL including the headline of an article could fall foul of the copyright extension; ergo that the hyperlink itself could be in danger.
They also argue that an amendment giving Member States the flexibility to decide whether or not a snippet should be considered “insubstantial” (and thus freely shared) or not, does not clear up problems — saying it just risks causing fresh fragmentation across the bloc, at a time when the Commission is keenly pushing a so-called ‘Digital Single Market’ strategy.
“Instead of one Europe-wide law, we’d have 28,” warns Reda on that. “With the most extreme becoming the de-facto standard: To avoid being sued, international internet platforms would be motivated to comply with the strictest version implemented by any member state.”
However several European news and magazine publisher groups have welcomed the committee’s backing for Article 11. In a joint statement on behalf of publishing groups EMMA, ENPA, EPC and NME a spokesperson said: “The Internet is only as useful as the content that populates it. This Publisher’s neighbouring Right will be key to encouraging further investment in professional, diverse, fact-checked content for the enrichment and enjoyment of everyone, everywhere.”
Returning to Article 13, the EU’s executive, the Commission — the body responsible for drafting the copyright reforms — has also been pushing online platforms towards pre-filtering content as a mechanism for combating terrorist content, setting out a “one hour rule” for takedowns of this type of content earlier this year, for example.
But again critics of the copyright reforms argue it’s outrageously disproportionate to seek to apply the same measures that are being applied to try to clamp down on terrorist propaganda and serious criminal offenses like child exploitation to police copyright.
“For copyrighted content these automated tools simply undermine copyright exceptions. And they are not proportionate,” Reda told us last year. “We are not talking about violent crimes here in the way that terrorism or child abuse are. We’re talking about something that is a really widespread phenomenon and that’s dealt with by providing attractive legal offers to people. And not by treating them as criminals.”
Responding to today’s committee vote, Jim Killock, executive director of digital rights group, the Open Rights Group, attacked what he dubbed a “dreadful law”, warning it would have a chilling effect on freedom of expression online.
“Article 13 must go,” he said in a statement. “The EU Parliament’s duty is to defend citizens from unfair and unjust laws. MEPs must reject this law, which would create a Robo-copyright regime intended to zap any image, text, meme or video that appears to include copyright material, even when it is entirely legal material.”
Also reacting to the vote today, Monique Goyens, director general of European consumer rights group BEUC, said: “The internet as we know it will change when platforms will need to systematically filter content that users want to upload. The internet will change from a place where consumers can enjoy sharing creations and ideas to an environment that is restricted and controlled. Fair remuneration for creators is important, but consumers should not be at the losing end.”
Goyens blamed the “pressure of the copyright industry” for scuppering “even modest attempts to modernise copyright law”.
“Today’s rules are outdated and patchy. It is high time that copyright laws take into account that consumers share and create videos, music and photos on a daily basis. The majority of MEPs failed to find a solution that would have benefitted consumers and creators,” she added in a statement.
0 notes
Text
Propaganda from the Uncanny Valley
Art has always been an ideal vessel for propaganda: persuading with emotion can cut through the need for rational argument. With Facebook's release of thousands of examples of propaganda created for social media in 2016, it's becoming clear that artlessness is just as good. After Congressional hearings in the United States, Facebook has announced an "Action Plan Against Foreign Interference" that would double its security team in 2018, and is planning to release a tool for users to check if they've clicked on any of this propaganda in 2016. Two conservative activists on Twitter were recently revealed to be bots; that's two out of the company's estimated 36,746 Russian-backed bot accounts, though a private investigation found 150,000 such bots operated to influence the Brexit campaign. Russia denies any involvement. Third-party tools, such as botcheck.me, have been developed to evaluate Twitter account histories for bot-like patterns.
Today's propaganda artists are on the frontlines of the "creative" algorithm: the emerging trend of data channeled into "inspiration" for content and channeled back into creative products. In line with our past events examining cyberthreats and digital humanitarianism, we're looking at how creative algorithms work (or fail) and how that is influencing the next wave of propaganda. What happens when bots talk — and people listen?
Batman Elsa Birthday Babies
Artist and researcher James Bridle recently took a critical look at YouTube videos crafted for children. The children's market is a ripe target for this kind of content: toddlers love repetition, parents love the endless stream of (unwatched) content, and producers love their low costs and production values. Bridle writes that the algorithms aren't just curating this content. They are surfacing the most powerful combinations of keywords, and using them to dictate what content is produced for the site.
YouTube selects videos matching similar keywords for its "up next" queue, which are played automatically when one video ends. Create a video that matches these keywords, and you assure that your video will join the infinite stream of content shown to a child searching for Elmo or Frozen videos. There is no shortage of cheap and quickly created content with word-salad titles like "Batman Finger Family Song — Superheroes and Villains! Batman, Joker, Riddler, Catwoman." The audience for that title isn't a child, or parents. The audience isn't human at all: the audience is the YouTube algorithm. Once the keywords are crafted for that algorithm, the content is second nature. Throw those characters together and back it with the "family finger song." The keywords dictate the content, not to benefit any child, but to ensure that the algorithm plays that video in automated queues of videos related to any of those title terms.
Bridle points out that something is amiss in these videos. They certainly allow less-than-scrupulous actors to inject weird content into a child's stream. One nightmarish example shows Spiderman, the Hulk, and Elsa all being bashed in the head by the Joker and other villains, who then bury these favorite children's characters alive in quicksand. That's blatantly outrageous content created by anonymous bad actors.
But even in harmless videos, there's something weird about inverting the relationship between keywords and content. Keywords are a categorization of what content contains. By knowing the types of content people are looking for, breaking those words apart from any context and re-assembling them, you create something like a formula to guarantee search results or, at least, high placement in auto-generated content streams.
The Dark Art of SEO
This is what used to be considered the dark arts of "SEO" — Search Engine Optimization. It's a tool used for writing blog spam that could show up in search results.
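The keyword-to-content inversion described above can be sketched as a toy script: start from pools of high-performing search terms and mechanically assemble them into word-salad titles aimed at a ranking algorithm rather than at any viewer. The keyword lists below are invented for illustration.

```python
import random

# Hypothetical pools of high-performing children's-content keywords.
CHARACTERS = ["Batman", "Elsa", "Spiderman", "Joker", "Catwoman"]
FORMATS = ["Finger Family Song", "Learn Colors", "Surprise Eggs"]
HOOKS = ["Superheroes and Villains!", "Nursery Rhymes for Kids"]

def make_title(rng: random.Random) -> str:
    """Assemble a word-salad title: the keywords dictate the content,
    not the other way around."""
    cast = rng.sample(CHARACTERS, k=3)
    return f"{cast[0]} {rng.choice(FORMATS)} - {rng.choice(HOOKS)} {', '.join(cast)}"

rng = random.Random(42)
for _ in range(3):
    print(make_title(rng))
```

Each run yields a grammatical-looking but meaningless title; a video need only loosely match it to slot into the automated "up next" queue.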
The impact of blogspam was somewhat limited: 500-word texts redirecting you to purchase products. Today, we're seeing SEO create epic, 30-minute-long animated videos that don't explicitly ask you for money but generate revenue anyway. The content of these videos is secondary. Kids watch whatever is dictated by the most valuable keywords. Humans create this content quickly in response, resulting in something with no educational value, reflecting a surrealist mash-up of arbitrary search terms: the digital storytelling equivalent of empty calories.

Machine learning processes take human inputs, strip them into basic units, and then reassemble them into infinite variations. It's this blend of human and alien processes that makes "AI consciousness" such a weird concept. But it's a very specific kind of weird: uncanniness.

Rethinking the Uncanny

For an example of uncanniness, there may be no easier place to start than the Dadabots' album "Deep the Beatles!" The album is the result of a machine learning system "listening" to (or scanning sound data from) Beatles records and producing something that is, simultaneously, very much the Beatles and very much not the Beatles.

Ernst Jentsch first defined the emotion of "uncanniness" in 1906: "In telling a story, one of the most successful devices for easily creating uncanny effects is to leave the reader in uncertainty [of] whether a particular figure in the story is a human being or an automaton, and to do it in such a way that his attention is not focused directly upon his uncertainty, so that he may not be led to go into the matter and clear it up immediately."

It's an oddly prescient line of thinking that seems to describe the entire internet experience as of 2016. The uncanny has moved from literature into the real (albeit virtual) world, spreading a residue of low-grade, unsettling surrealism into our everyday lives.
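The strip-and-reassemble process described above has a classic miniature: a Markov-chain text generator, which cuts a source into word pairs and recombines them into output that resembles the source without being it. The sketch below is a toy illustration of that idea, not how the Dadabots model actually works.

```python
import random

# Toy Markov chain: strip text into word bigrams, then reassemble them
# into new sequences. A miniature of the "strip into basic units and
# recombine" process, not the Dadabots' audio model.
def build_chain(text):
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length=8, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("love me do you love me do you love the beatles love me")
print(generate(chain, "love"))
```

Every word pair in the output occurs somewhere in the source, yet the whole is something the source never said — the same uncanny effect, at textual scale, as an album that is and is not the Beatles.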
Looking at a Twitter account with 38,800 followers posting nothing but unsourced political memes in 2015, we might have asked how this person had so much time on their hands. Today, we have to ask if they're human at all.

In its congressional hearings, Facebook shared 3,000 images it claims originated from a shadowy organization in St. Petersburg, Russia, intended to influence American voters. What we see in these images is the surface-skimming of keywords, created from real political debates, boiled down to their most toxic and potent forms.

Facebook transcribes your online actions and reduces them into easily digestible traits. It can tell if you're neurotic, a reader, a beach-lover, extroverted. It can tell if you're gay or straight, married, religious, or have children. It can tell if you're worried about immigrants, guns, or unemployment. These categories can then be skimmed and recycled into content. Just as the algorithm knows a four-year-old wants to watch an Elsa video, advertisers can tell if you want to see anti-immigrant content, and then deliver it.

The Meme War

Two anonymous researchers are creating an online archive of these political images. They include groups across the spectrum, from "Army of Jesus" to gay groups, "Woke Blacks," "Missouri News," and "Feminist Tag." They target pro- and anti-immigrant sentiment. If there was a set of keywords that could be targeted with divisive political rhetoric, there was a group created to appeal to them. From there, real people, selected by the algorithms, boosted and amplified messages that were essentially dictated by those same algorithms.

The social media propaganda images aren't sophisticated. They're full of spelling errors and extremist language and imagery. One had Satan suggesting that Hillary Clinton would win the election if he beat Jesus in an arm-wrestling contest.
The viewer was encouraged to "like" the post to "help Jesus win." That content was created specifically for people whose profiles showed a strong affinity for the Bible, Jesus, God, Christianity, and Fox News commentator Bill O'Reilly.

The ads can also create associations that rely on several layers of deception. Some targeted Facebook users with clear anti-immigrant bias and presented them with advertisements from a fake pro-Muslim group. The ads included an image of Hillary Clinton hugging a woman in a burka with the message "Support Hillary to Save American Muslims." The idea is that this would be shown to Islamophobic voters, who would share it out of a sense of outrage.

When Propaganda Goes Viral

Sharing is an impulse built into all social media, and it's the real mechanism being "hacked" in contemporary propaganda. We share things we relate and respond to, because they reflect who we are, how we want to be seen, and who we want to connect with.

After Freud, the psychoanalyst Jacques Lacan took up the study of the uncanny. For Lacan, the uncanny reflects a conflicted appeal to our ideas of ourselves. The images and messages reveal a sense of our identities being reduced, partitioned, and invaded. Something uncanny emerges in this process. These are strange objects pretending to be familiar.

Looking at these archives of propaganda images is unsettling because it reveals parts of us we know — the political memes, ideas, and philosophies we believe in — and so they belong to us. But they also push the boundaries of those beliefs, including our ideas of what other people believe about us. It's an environment that contributed to an especially toxic online atmosphere in 2016.

What's Next?

Not all creative-algorithm content is created equal. In 2013, Netflix analyzed the extensive tags it had created for every piece of its content to see what worked for most of its subscribers.
From that data, they were able to discern a "Venn diagram" for a successful streaming series, which they agreed to produce sight unseen. That show was "House of Cards."

But it wasn't just the product of blind faith in data. Instead, it pointed to a new kind of intelligence, as Tim Wu described in his New Yorker piece about the show: "It is a form of curation … whose aim is guessing not simply what will attract viewers, but what will attract fans—people who will get excited enough to spread the word. Data may help, but what may matter more is a sense of what appeals to the hearts of obsessive people, and who can deliver that."

The similarities between the art of crafting algorithms into fan-favorite entertainment and the art of crafting successful online propaganda campaigns? You might say it's uncanny.

---

swissnex San Francisco is exploring a number of topics around AI and ethics in 2018. Subscribe to our event newsletter to stay up to date.

https://nextrends.swissnexsanfrancisco.org/propaganda-from-the-uncanny-valley/ (Source of the original content)