#tim's idea of a meme is an image with impact text and an image with impact text Only
Text
read a fic recently that had an interaction like
sasha: tim get like one social media PLEASE I am BEGGING
tim: I have linkedin?
and I cannot describe the Joy I felt reading that. they get it.
#ramblings with major#the magnus archives#tma#i love tim but he is so offline to me#if he does social media its very lightly#i mean his canon sense of humor is pop culture references like movie quotes and stuff. not memes.#tim's idea of a meme is an image with impact text and an image with impact text Only#danny was the fun online one finding all the weird niche communities and hobbies and whatnot#tim is the boring one#im telling y'all hes the 'straight man' of the archives
Text
As the United States prepares for the upcoming November election, misinformation and disinformation have spread through memes. A meme is any idea, expression, or opinion conveyed through text or visual imagery (e.g., a photo, video, or GIF), with or without sound, that can be copied and shared online. For researchers, memes are co-constructions whose meaning lies primarily in humor and which can be shared by multiple users and, in the case of the internet, across various platforms. The ready availability of commercial artificial intelligence (AI) tools has also fed the existing meme economy. Software like X's AI chatbot Grok can quickly generate such images, especially given its ability to use famous people's likenesses and copyrighted material and to produce violent or pornographic imagery.
The reality is that AI-generated memes have been inserted into the political conversation. These altered images often seem harmless to voters, particularly because they are sometimes shared by people they trust in their personal and professional networks. It can be difficult for policymakers or content moderators to definitively assess their impact because the humor or profile of memes makes them appear innocuous. In the current policy environment where content moderation and domestic AI policy are still evolving, those who develop and disseminate memes can potentially influence voter information about candidate issues, character, and other relevant election information without the usual guardrails that either regulate speech in the U.S. or trigger attention based on the manipulation of political content.
What are memes?
Tugging on the emotions of voters is a critical part of influencing them to act. Memes present unique opportunities because they can disseminate information and foster a sense of community through humor and other apolitical means. Examples of the persuasive power of memes have appeared in efforts to spread political messaging, alter the stock market, or even influence the way the public thinks about war.
The current campaign efforts on both sides are no exception. The Trump campaign and its allies have created an assortment of AI-generated memes to uplift the former president, which have been shared on the social media platform X and on Truth Social. There has also been a share of visualizations ridiculing the other side, including one of Vice President Kamala Harris leading a communist rally and another showing crowds of Taylor Swift fans endorsing his campaign. Trump supporters like Elon Musk have even reposted an AI-altered "parody" video of Harris calling herself a "deep state puppet" and "the ultimate diversity hire."
On the other side, Vice President Harris' campaign and allies have generated their own share of memes. During the 2024 Democratic National Convention, more content creators and online influencers than ever before were in attendance to capture these and other messages for attendees and viewers. Throughout the event, robust online content was created, including memes capturing emotional responses to the various activities and speakers. Doug Emhoff, the second gentleman, was often the subject of memes as he openly conveyed his support for his wife, Kamala Harris. Tim Walz's son, Gus, also became a viral meme as he mouthed his admiration for his father during the acceptance speech. The Harris campaign and its allies have been equally culpable in their use of satirical tools. Her team has been accused of falsely captioning AI-generated videos and memes of the former president. Using clips from a Trump rally in North Carolina, the Harris social media account played up the theme that the former president was "lost and confused" when he suggested he was in another state, a point that was later fact-checked and clarified by the Harris team.
Probably the most viral memes shared by both parties and their allies have been the AI-generated images of cats circulating on social media platforms, which tout the conspiracy theory and disinformation, referenced by Trump, that Haitian immigrants in Springfield, Ohio, were eating pets; some of these were posted by the former president himself.
As these visualizations become a fixture of the political landscape, memes will increasingly feed misinformation and disinformation efforts, cloaking facts in humor and satire to elicit more emotional responses from voters. Given congressional inaction on copyright protections for the data used to train large language models (LLMs) and on more stringent legislation to curb the flow of false information, memes can flourish and, under current election laws, be perceived as harmless in nature.
Memes are also not necessarily deepfakes
Congress has also made clear how it categorizes memes when it comes to election and other voter interference. Pending legislation, including the DEEPFAKES Accountability Act, creates carve-outs for memes, humor, satire, parody, and other commentary as exercises of an individual's freedom of expression. However, deciphering what is parody and what is deceptive can be very challenging. Despite the deceptive posts by Trump and Musk, for example, the label of satire provides some immunity from liability. Such posts also receive additional protection under Section 230, which shields the online platforms that disseminate the information from liability or association, even when the content is vile or offensive.
Memes can also be used for "rage-baiting," which refers to using online content to elicit strong negative emotional reactions from users. Significant gaps in policy make their dissemination both possible and plausible. An examination of how memes have been handled in a global context makes the case for stronger guardrails and increased community awareness.
Globally, memes have been perceived as fueling extremist behavior. In 2024, a memo from the Netherlands' National Coordinator for Counterterrorism and Security (NCCS) described memes as an "online weapon," suggesting that the lack of strong content moderation on online platforms has made it easier for memes to thrive and nest themselves in mainstream messaging that disguises their goal of radicalizing unsuspecting online users. In their new book "Lies That Kill," co-authors Darrell West and Elaine Kamarck point to these and other examples that persuade not just voters, but other everyday people, to consume false information at rapid speed.
Memes are the next form of political influence
Misinformation and disinformation will continue to be a focus leading up to the November election. While time has run out for meaningful legislation to counter deepfakes before this election cycle, AI-generated memes are something that policymakers, and especially campaigns, need to monitor. Because they can cloak deeply hateful and vitriolic messages in humorous and satirical images, they have been downplayed in the flow of political rhetoric. Given this, Congress should reconsider the carve-outs in pending legislation to quell deepfakes, especially regarding their use of copyrighted materials and their role in rapidly spreading disinformation. The loose creation and dissemination of memes should also encourage Congress and other lawmakers to consider real investments in AI literacy, so that everyday people understand the consequences of what they share online. In the meantime, campaigns need to be on the lookout for memes that are harmful or that could potentially lead to violence.
Text
EU copyright proposal has free speech advocates worried
Note: This is something I originally published on the New Gatekeepers blog at the Columbia Journalism Review, where I'm the chief digital writer.
It hasn't been that long since the European Union caused upheaval on the internet with the launch of the GDPR, or General Data Protection Regulation, which brought in a host of cumbersome rules on how consumer data should be protected. Now, some internet activists and free-speech advocates are warning that the EU could take an even larger step in the wrong direction with a proposed copyright law that is up for a vote later this week. If passed, the law could give platforms like Google and Facebook unprecedented power to remove content on the basis that it might be infringing on copyright.
The bill is Article 13 of the proposed Directive for Copyright in the Digital Single Market, and among other things it would require any internet service that hosts content to proactively filter uploads in order to remove copyright infringement. A letter opposing the law was released last week by a group of internet luminaries including Ethernet inventor Vint Cerf, World Wide Web inventor Sir Tim Berners-Lee, Wikipedia co-founder Jimmy Wales, net neutrality expert Tim Wu, Internet Archive founder Brewster Kahle, and Mozilla Project co-founder Mitchell Baker. The letter says:
By requiring Internet platforms to perform automatic filtering of all of the content that their users upload, Article 13 takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users. The damage that this may do to the free and open Internet as we know it is hard to predict, but in our opinions could be substantial.
The European market currently follows a "notice and takedown" copyright system, in much the same way that the US does. In the US, the Digital Millennium Copyright Act gives platforms and content providers a certain amount of immunity (known as "safe harbor") for hosting content that might infringe on copyright, provided they act immediately to remove it if infringement is brought to their attention. The proposed EU law would replace notice and takedown with a requirement to remove any infringement before it ever goes online.
One risk of this approach is that service providers will remove content that doesn't infringe because they are afraid of contravening the law. So, for example, they might block a "meme" that uses a copyrighted image to make fun of something, even though that kind of use is typically allowed under "fair use" rules (known as "fair dealing" in the UK and a number of other countries). The signatories of the letter also argue that the cost of this new filtering approach will hit smaller internet services harder, since larger platforms like Google and Facebook will have more than enough resources to comply.
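To make the overblocking risk concrete, here is a minimal, hypothetical sketch (in Python) of the kind of context-blind check an upload filter implies. The fingerprint values and function names are invented, and real systems use perceptual fingerprinting of audio, video, and images rather than exact hashes; the point is only that a match-based filter cannot see parody or commentary.

```python
import hashlib

# Hypothetical fingerprints of copyrighted reference works (invented values).
REFERENCE_FINGERPRINTS = {"3f8a9c1d"}

def fingerprint(data: bytes) -> str:
    """Stand-in for a content fingerprint; exact hashing is used here only for brevity."""
    return hashlib.sha256(data).hexdigest()[:8]

def allow_upload(data: bytes) -> bool:
    # The only question this check can answer is "does the upload match a protected work?"
    # It cannot ask whether the use is parody, quotation, or otherwise fair use/fair dealing,
    # so a meme built on a copyrighted still is treated the same as outright piracy.
    return fingerprint(data) not in REFERENCE_FINGERPRINTS
```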
The filtering/censorship risk isn't the only downside of the proposed law. It also includes a "link tax," which would give copyright holders the ability to charge online platforms or providers for using even short snippets of text from a work such as a news article. Germany and several other countries have been working on variations of this idea as a way of charging Google and Facebook for taking their content, but critics of the law say its real impact could be a crippling of the internet's inherent power to link to original source material, since even an innocent link could infringe on this new copyright.
The proposed law goes to a vote by the European Union's legislative committee on June 20. They could decide to include Article 13, Article 11 (the link tax), or both, or they could decide to include neither one. Judging by one ranking of the potential votes of committee members, however, it looks as though the filtering proposal will almost certainly pass, and the link tax appears to be close. And that could change the way the internet works, in the EU at least, on a fairly fundamental level.
"EU copyright proposal has free speech advocates worried" was originally published on mathewingram.com/work
Text
From Macross to Miku: A History of Virtual Idols
Idols are one of Japanese pop culture's most ubiquitous fixtures. While the line defining what is and isn't an idol is a frequent source of debate, there's one commonality: they're young personalities who exude a desirable image of decency to the public. They appear in many forms and with many talents across a wide variety of mediums; some are musicians, others voice actresses, TV personalities, and the list goes on. Name an entertainment industry and you'll likely find idols within it.
Yet while idol culture has been steadily expanding in Japan since its inception in the early 1970s, something has taken hold in more recent times that has redefined the industry: virtual idols. These intangible stars have ushered in new possibilities in the idol space, ranging from fan content creation to untapped storytelling potential. Let's take a look at some key moments in the storied history of virtual idols that have made them a unique corner of idol culture.
While artificial intelligence was no new concept to anime in the '90s, 1994's Macross Plus introduced a new flavor of virtual sentience to audiences with Sharon Apple. Sharon was an intergalactic pop superstar who also happened to be a computer program. In a plot device that predicted the future of virtual idols like Hatsune Miku and Kizuna AI, the Sharon AI was incomplete and required someone to give her a personality during performances. She essentially became a vehicle through which people could express themselves. That is, until she was corrupted and used her music as a method of mind control (a cautionary tale that hasn't quite come to reflect virtual idols as we know them... yet). Sharon's influence can be seen as recently as the spring season's Caligula, in which troubled teens unknowingly reside in a digital world presided over by a virtual idol.
In the year following Sharon's appearance, talent agency Horipro made an exciting announcement: the first real-life virtual idol was in production. Kyoko Date, the idol in question, took both the domestic and international media by storm upon her reveal, hailed as an evolution of the entertainment industry. However, due to a release delay to late 1996 and jarring technical hurdles such as unnatural movement, she failed to capture an audience and faded away after a few months.
Still, Kyoko's existence was ahead of its time. Her agency treated her as talent no differently than her human counterparts. Horipro recognized the value of a digital entity's ability to be anywhere at any time and to steer clear of scandals. These are the same attributes that companies like Crypton would point to a decade later upon the release of Vocaloid software like Hatsune Miku. Kyoko differed from Miku in that Horipro gave her a predetermined personality and was the sole proprietor of her assets and music output. Her impact had ripples that changed the idea of what an idol could be, even if it would take time to come to fruition.
Horipro's ambition was realized in 2007, when Crypton Future Media used Yamaha's Vocaloid 2 software to create Hatsune Miku. Miku was different from any virtual idol that had come before in that she was an entity with no personality beyond a character design and no predetermined songs to speak of. Instead, she was consumer-facing software that synthesized vocal samples which, when strung together, simulate language. Fans were left to impart themselves upon her by creating music, dances, art, videos, manga, and more. They did so in droves, and Hatsune Miku became an overnight success, with some creators going as far as building careers off of their work (like supercell, a dojin music circle that started with Vocaloid songs and progressed to creating themes for anime like Bakemonogatari and Guilty Crown). Hatsune Miku's crash landing on the scene was a watershed moment in the virtual idol landscape that fully realized their potential and made them a worldwide sensation.
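As a loose illustration of what "vocal samples strung together" means in practice, here is a toy sketch of concatenative synthesis. It is not Crypton's or Yamaha's actual engine: the syllable handling is invented, and sine tones stand in for the recorded vocal fragments a real voicebank would use.

```python
import numpy as np

SAMPLE_RATE = 44100

def syllable_wave(syllable: str, pitch_hz: float, seconds: float = 0.25) -> np.ndarray:
    """Toy stand-in for one recorded vocal fragment, rendered as a pitched sine tone."""
    t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    amplitude = 0.3 + (hash(syllable) % 100) / 500.0  # vary "timbre" per syllable
    return amplitude * np.sin(2 * np.pi * pitch_hz * t)

def sing(lyrics: list[str], pitches: list[float]) -> np.ndarray:
    # Stringing fragments end to end is the core idea: lyrics plus a melody go in,
    # a synthesized vocal line comes out. Real engines also resample and crossfade.
    return np.concatenate([syllable_wave(s, p) for s, p in zip(lyrics, pitches)])

phrase = sing(["mi", "ku", "mi", "ku"], [440.0, 494.0, 523.0, 440.0])
```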
What has to be emphasized about Hatsune Miku is her community-building power. Nico Nico Douga served as a hotbed for creators to reach an audience, who would in turn create derivative works of their own, sometimes through collaboration. Those without creative inclinations would use Nico Nico's scrolling text-chat feature to engage with the content of others and share their findings in online communities. Through these means, Miku spread like wildfire, with top videos garnering millions of views.
Congregation around Miku and other Crypton Vocaloid characters importantly crossed from the online realm to the physical one with live performances. Backed by a band, a projection of Miku sings and dances as the audience reciprocates in unison with arm motions and cheers. This experience is not unlike those of real-life idols, a testament to the connection that these digital icons have fostered with their audience. It all comes back to grassroots creation; fans make Miku into what they want her to be and grow bonds with her and with others through sharing their vision.
A more recent evolution in the virtual idol space is Kizuna AI. She represents a return to the Kyoko Date model, in which a company crafts a personality for its virtual idol and propagates her through official channels. Kizuna has proven to be the success that Kyoko failed to be. She doesn't primarily sing and dance, but rather makes YouTube videos, generally vlogs and Let's Plays. Through exuberant and sometimes sassy videos, she's very much like any other YouTuber you'd watch, except in the form of an anime girl. Kizuna has garnered over 170 million views across her two channels, inspired a bevy of memes, and sparked a trend of other channels using digital avatars as a means of self-expression.
Virtual idols have become an important part of internet culture worldwide and continue to expand into new frontiers. They redefine what an idol can be: a vehicle for self-expression, an entity that can be anywhere at any time, and, of course, cute anime girls (and boys!). Particularly in the case of Hatsune Miku, we've even seen them become launchpads for artists to start incredibly successful careers. What comes next is unclear, but that's what makes virtual idols so exciting! As the internet evolves, so too will they.
Now, let's just hope that doesn't involve them turning against us like Sharon in Macross Plus...
---
Tim Rattray (@thoughtmotion) is a features and video writer for Crunchyroll and founder of Thoughts That Move.
Text
Europe takes another step towards copyright pre-filters for user generated content
In a key vote this morning, the European Parliament's legal affairs committee has backed the two most controversial elements of a digital copyright reform package, which critics warn could have a chilling effect on Internet norms like memes and also damage freedom of expression online.
In the draft copyright directive, Article 11, "Protection of press publications concerning online uses", which targets news aggregator business models by setting out a neighboring right for snippets of journalistic content that requires a license from the publisher to use this type of content (aka "the link tax", as critics dub it), was adopted by a 13:12 majority of the legal committee.
Meanwhile, Article 13, "Use of protected content by online content sharing service providers", which makes platforms directly liable for copyright infringements by their users, thereby pushing them towards creating filters that monitor all content uploads, with all the associated potential chilling effects (aka "censorship machines"), was adopted by a 15:10 majority.
MEPs critical of the proposals have vowed to continue to oppose the measures, and the EU parliament will eventually need to vote as a whole.
#Article13, the #CensorshipMachines, has been adopted by @EP_Legal with a 15:10 majority. Again: We will take this fight to plenary and still hope to #SaveYourInternet pic.twitter.com/BLguxmHCWs
- Julia Reda (@Senficon) June 20, 2018
EU Member State representatives in the EU Council will also need to vote on the reforms before the directive can become law. Though, as it stands, a majority of European governments appear to back the proposals.
European digital rights group EDRi, a long-standing critic of Article 13, has a breakdown of the next steps for the copyright directive here. It's possible there could be another key vote in the parliament next month, ahead of negotiations with the European Council, which could be finished by fall. A final vote on a legally checked text will take place in the parliament, perhaps before the end of the year.
Derailing the proposals now essentially rests on whether enough MEPs can be convinced it's politically expedient to do so, factoring in a timeline that includes the next EU parliament elections in May 2019.
We can still turn this around! The #linktax and #uploadfilters passed a critical hurdle today. But in just 2 weeks, all 751 MEPs will be asked to take a stand either for or against a free & open internet. The people of Europe managed to stop ACTA, we can #SaveYourInternet again! pic.twitter.com/883ID7CKDE
- Julia Reda (@Senficon) June 20, 2018
Last week, a coalition of original Internet architects, computer scientists, academics, and supporters, including Sir Tim Berners-Lee, Vint Cerf, Bruce Schneier, Jimmy Wales, and Mitch Kapor, penned an open letter to the European Parliament's president to oppose Article 13, warning that while "well-intended", the requirement that Internet platforms perform automatic filtering of all content uploaded by users "takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users".
"As creators ourselves, we share the concern that there should be a fair distribution of revenues from the online use of copyright works, that benefits creators, publishers, and platforms alike. But Article 13 is not the right way to achieve this," they write in the letter.
"By inverting this liability model and essentially making platforms directly responsible for ensuring the legality of content in the first instance, the business models and investments of platforms large and small will be impacted. The damage that this may do to the free and open Internet as we know it is hard to predict, but in our opinions could be substantial."
The Wikimedia Foundation also blogged separately, setting out some specific concerns about the impact that mandatory upload filters could have on Wikipedia.
"[A]ny sort of law which mandates the deployment of automatic filters to screen all uploaded content using AI or related technologies does not leave room for the types of community processes which have been so effective on the Wikimedia projects," it warned last week. "As previously mentioned, upload filters as they exist today view content through a broad lens, that can miss a lot of the nuances which are crucial for the review of content and assessments of legality or veracity."
More generally, critics warn that expressive and creative remix formats like memes and GIFs, which have come to form an integral part of the rich communication currency of the Internet, will be at risk if the proposals become law...
This may be illegal under #Article13. If just one of the four images is copyrighted, Twitter would be compelled to take this picture off. #Article13 #forDummies https://t.co/aSbYoiPDm0
- Ziga Turk (@ZigaTurkEU) June 12, 2018
Regarding Article 11, Europe already has experience with a neighboring right for news, after an ancillary copyright law was enacted in Germany in 2013. But local publishers ended up granting Google free consent to display their snippets after they saw traffic fall substantially when Google stopped showing their content rather than pay for using it.
Spain also enacted a similar law for publishers in 2014, but its implementation required publishers to charge for the use of their snippets, leading Google to permanently close its news aggregation service in the country.
Critics of this component of the digital copyright reform package also warn that it's unclear what kinds of news content will constitute a snippet and thus fall under the proposal, even suggesting that a URL including the headline of an article could fall foul of the copyright extension; ergo, the hyperlink itself could be in danger.
They also argue that an amendment giving Member States the flexibility to decide whether a snippet should be considered "insubstantial" (and thus freely shared) does not clear up the problems, saying it just risks causing fresh fragmentation across the bloc at a time when the Commission is keenly pushing a so-called "Digital Single Market" strategy.
"Instead of one Europe-wide law, we'd have 28," warns Reda on that point. "With the most extreme becoming the de-facto standard: To avoid being sued, international internet platforms would be motivated to comply with the strictest version implemented by any member state."
However, several European news and magazine publisher groups have welcomed the committee's backing of Article 11. In a joint statement on behalf of publishing groups EMMA, ENPA, EPC, and NME, a spokesperson said: "The Internet is only as useful as the content that populates it. This Publisher's neighbouring Right will be key to encouraging further investment in professional, diverse, fact-checked content for the enrichment and enjoyment of everyone, everywhere."
Returning to Article 13, the EU's executive, the Commission (the body responsible for drafting the copyright reforms), has also been pushing online platforms towards pre-filtering content as a mechanism for combating terrorist content, setting out a "one hour rule" for takedowns of this type of content earlier this year, for example.
But again, critics of the copyright reforms argue it's outrageously disproportionate to apply the same measures used to clamp down on terrorist propaganda and serious criminal offenses like child exploitation to the policing of copyright.
"For copyrighted content these automated tools simply undermine copyright exceptions. And they are not proportionate," Reda told us last year. "We are not talking about violent crimes here in the way that terrorism or child abuse are. We're talking about something that is a really widespread phenomenon and that's dealt with by providing attractive legal offers to people. And not by treating them as criminals."
Responding to today's committee vote, Jim Killock, executive director of the digital rights group the Open Rights Group, attacked what he dubbed a "dreadful law", warning it would have a chilling effect on freedom of expression online.
"Article 13 must go," he said in a statement. "The EU Parliament's duty is to defend citizens from unfair and unjust laws. MEPs must reject this law, which would create a Robo-copyright regime intended to zap any image, text, meme or video that appears to include copyright material, even when it is entirely legal material."
Also reacting to the vote today, Monique Goyens, director general of European consumer rights group BEUC, said: "The internet as we know it will change when platforms will need to systematically filter content that users want to upload. The internet will change from a place where consumers can enjoy sharing creations and ideas to an environment that is restricted and controlled. Fair remuneration for creators is important, but consumers should not be at the losing end."
Goyens blamed the "pressure of the copyright industry" for scuppering "even modest attempts to modernise copyright law".
"Today's rules are outdated and patchy. It is high time that copyright laws take into account that consumers share and create videos, music and photos on a daily basis. The majority of MEPs failed to find a solution that would have benefitted consumers and creators," she added in a statement.
Text
Propaganda from the Uncanny Valley
Art has always been an ideal vessel for propaganda: persuading with emotion can cut through the need for rational argument. With Facebook's release of thousands of examples of propaganda created for social media in 2016, it's becoming clear that artlessness is just as good. After Congressional hearings in the United States, Facebook has announced an "Action Plan Against Foreign Interference" that would double its security team in 2018, and it is planning to release a tool for users to check if they've clicked on any of this propaganda in 2016. Two conservative activists on Twitter were recently revealed to be bots; that's two out of the company's estimated 36,746 Russian-backed bot accounts, though a private investigation found 150,000 such bots operated to influence the Brexit campaign. Russia denies any involvement. Third-party tools, such as botcheck.me, have been developed to evaluate Twitter account histories for bot-like patterns. Today's propaganda artists are on the frontlines of the "creative" algorithm: the emerging trend of data channeled into "inspiration" for content and channeled back into creative products. In line with our past events examining cyberthreats and digital humanitarianism, we're looking at how creative algorithms work (or fail) and how that is influencing the next wave of propaganda. What happens when bots talk, and people listen?

Batman Elsa Birthday Babies

Artist and researcher James Bridle recently took a critical look at YouTube videos crafted for children. The children's market is a ripe target for this kind of content: toddlers love repetition, parents love the endless stream of (unwatched) content, and producers love their low costs and production values. Bridle writes that the algorithms aren't just curating this content. They are surfacing the most powerful combinations of keywords and using them to dictate what content is produced for the site. YouTube selects videos matching similar keywords for its "up next" queue, which are played automatically when one video ends. Create a video that matches these keywords, and you ensure that your video will join the infinite stream of content shown to a child searching for Elmo or Frozen videos. There is no shortage of cheap and quickly created content with word-salad titles like "Batman Finger Family Song? Superheroes and Villains! Batman, Joker, Riddler, Catwoman." The audience for that title isn't a child, or parents. The audience isn't human at all: the audience is the YouTube algorithm. Once the keywords are crafted for that algorithm, the content is second nature. Throw those characters together and back it with the "family finger song." The keywords dictate the content, not to benefit any child, but to ensure that the algorithm plays that video in automated queues of videos related to any of those title terms.

Bridle points out that something is amiss in these videos. They certainly allow less-than-scrupulous actors to inject weird content into a child's stream. One nightmarish example shows Spiderman, the Hulk, and Elsa all being bashed in the head by the Joker and other villains, who then bury these favorite children's characters alive in quicksand. That's blatantly outrageous content created by anonymous bad actors. But even in harmless videos, there's something weird about inverting the relationship between keywords and content. Keywords are a categorization of what content contains. By knowing the types of content people are looking for, breaking those words apart from any context, and reassembling them, you create something like a formula to guarantee search results or, at least, high placement in auto-generated content streams.
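As a rough illustration of the keyword-driven production pipeline Bridle describes, recombining high-traffic search terms into titles is trivial to automate. This is a hypothetical sketch, not any producer's actual tooling; the keyword pools are invented, and a real operation would harvest them from trending searches and autocomplete suggestions.

```python
import itertools
import random

# Invented keyword pools, echoing the kinds of terms the article mentions.
CHARACTERS = ["Batman", "Elsa", "Spiderman", "Joker"]
FORMATS = ["Finger Family Song", "Learn Colors", "Surprise Eggs"]
HOOKS = ["Superheroes and Villains!", "Nursery Rhymes for Kids"]

def generate_titles(n: int) -> list[str]:
    """Recombine keywords into word-salad titles aimed at the recommender, not the viewer."""
    combos = list(itertools.product(CHARACTERS, FORMATS, HOOKS))
    random.shuffle(combos)
    return [f"{character} {fmt} - {hook}" for character, fmt, hook in combos[:n]]

for title in generate_titles(3):
    print(title)
```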
The Dark Art of SEO

This is what used to be considered the dark art of "SEO", or search engine optimization. It's a tool used for writing blog spam that could show up in search results. The impact of blog spam was somewhat limited to 500-word texts redirecting you to purchase products. Today, we're seeing SEO create epic, 30-minute-long animated videos that don't explicitly ask you for money but generate revenue anyway. The content of these videos is secondary. Kids watch whatever is dictated by the most valuable keywords. Humans create this content quickly in response, resulting in something with no educational value, reflecting a surrealist mash-up of arbitrary search terms: the digital storytelling equivalent of empty calories. Machine learning processes take human inputs, strip them into basic units, and then reassemble them into infinite variations. It's this blend of human and alien processes that makes "AI consciousness" such a weird concept. But it's a very specific kind of weird: uncanniness.

Rethinking the Uncanny

For an example of uncanniness, there may be no easier one to understand than the Dadabots' album "Deep the Beatles!" The album is the result of a machine learning computer "listening" (or scanning sound data) to Beatles records and producing something that is, simultaneously, very much the Beatles and very much not the Beatles. Ernst Jentsch first defined a certain emotion, "uncanniness," in 1906: "In telling a story, one of the most successful devices for easily creating uncanny effects is to leave the reader in uncertainty [of] whether a particular figure in the story is a human being or an automaton, and to do it in such a way that his attention is not focused directly upon his uncertainty, so that he may not be led to go into the matter and clear it up immediately." It's an oddly prescient line of thinking that seems to describe the entire internet experience as of 2016. The uncanny has moved from literature into the real (albeit virtual) world, spreading a residue of low-grade, unsettling surrealism into our everyday lives. Looking at a Twitter account with 38,800 followers posting nothing but unsourced political memes in 2015, we might have asked how this person had so much time on their hands. Today, we have to ask if they're actually human.

In its congressional hearings, Facebook shared 3,000 images it claims originated from a shadowy organization in St. Petersburg, Russia, intended to influence American voters. What we see in these images is the surface-skimming of keywords, created from real political debates, boiled down to their most toxic and potent forms. Facebook is transcribing your online actions and reducing them into easily digestible traits. It can tell if you're neurotic, a reader, a beach-lover, extroverted. It can tell if you're gay or straight, married, religious, or have children. It can tell if you're worried about immigrants, guns, or unemployment. These categories can then be skimmed and recycled into content. Just like a four-year-old who wants to watch an Elsa video, advertisers can tell if you want to see anti-immigrant content, and then deliver it.

The Meme War

Two anonymous researchers are creating an online archive of these political images.
They include groups across the spectrum, from "Army of Jesus" to gay groups, "Woke Blacks," "Missouri News," and "Feminist Tag." They target pro- and anti-immigrant sentiment. If there was a set of keywords that could be targeted with divisive political rhetoric, there was a group created to appeal to them. From there, real people, selected by the algorithms, boosted and amplified messages that were essentially dictated by those same algorithms.

The social media propaganda images aren't sophisticated. They're full of spelling errors, extremist language, and extremist imagery. One had Satan suggesting that Hillary Clinton would win the election if he beat Jesus in an arm-wrestling contest. The viewer was encouraged to "like" the post to "help Jesus win." That content was created specifically for people whose personalities showed a strong affinity for the Bible, Jesus, God, Christianity, and Fox News commentator Bill O'Reilly. The ads can also create associations that rely on several layers of deception. A few targeted Facebook accounts of people with clear anti-immigrant bias and presented advertisements from a fake pro-Muslim group. The ads included an image of Hillary Clinton hugging a woman in a burka with the message "Support Hillary to Save American Muslims." The idea is that this would be shown to Islamophobic voters, who would share it out of a sense of outrage.

When Propaganda goes viral

Sharing is an impulse built into all social media, and it's the real mechanism being "hacked" in contemporary propaganda. We share things we relate and respond to, because they reflect who we are, how we want to be seen, and who we want to connect with. After Freud, psychoanalyst Jacques Lacan took on the study of the uncanny. For Lacan, the uncanny reflects a conflicted appeal to our ideas of ourselves. The images and messages reveal a sense of our identities being reduced, partitioned, and invaded. Something uncanny emerges in this process. These are strange objects pretending to be familiar. Looking at these archives of propaganda images is unsettling because it reveals parts of us we know (the political memes, ideas, and philosophies we believe in) and so they belong to us. But they also push the boundaries of those beliefs, including our ideas of what other people believe about us. It's an environment that contributed to an especially toxic online atmosphere in 2016.

What's next?

Not all creative algorithm content is created equal. In 2013, Netflix analyzed the extensive tags it had created for every piece of its content to see what worked for most of its subscribers. From that data, they were able to discern a "Venn diagram" for a successful streaming series, which they agreed to produce, sight unseen. That show was "House of Cards." But that wasn't just the product of blind faith in data. Instead, it pointed to a new kind of intelligence, as described by Tim Wu in his New Yorker piece about the show: "It is a form of curation ... whose aim is guessing not simply what will attract viewers, but what will attract fans - people who will get excited enough to spread the word. Data may help, but what may matter more is a sense of what appeals to the hearts of obsessive people, and who can deliver that." The similarities between the art of crafting algorithms into fan-favorite entertainment and crafting successful online propaganda campaigns? You might say it's uncanny.

---

swissnex San Francisco is exploring a number of topics around AI and ethics in 2018. Stay tuned to our event newsletter to stay up to date.
https://nextrends.swissnexsanfrancisco.org/propaganda-from-the-uncanny-valley/ (Source of the original content)