translating saga's new year's line to en since it's missing:
Ah, Doctor. Happy New Year! I'm writing a letter. Though I've roamed for many a year, and have long grown used to this lifestyle, every time this time of year comes around, I can't help but think of my old head priest and the rest of my family at the monastery. It's been far too long—I should return to visit soon.
#arknights #hg FLOP #seems like characters without en dubs are affected #family is not literal btw #amateur translation attempt please go easy on me etc etc
How to read fiction to build a startup
“The book itself is a curious artefact, not showy in its technology but complex and extremely efficient: a really neat little device, compact, often very pleasant to look at and handle, that can last decades, even centuries. It doesn’t have to be plugged in, activated, or performed by a machine; all it needs is light, a human eye, and a human mind. It is not one of a kind, and it is not ephemeral. It lasts. It is reliable. If a book told you something when you were 15, it will tell it to you again when you’re 50, though you may understand it so differently that it seems you’re reading a whole new book.”—Ursula K. Le Guin
Every year, Bill Gates goes off-grid, leaves friends and family behind, and spends two weeks holed up in a cabin reading books. His annual reading list rivals Oprah’s Book Club as a publishing kingmaker. Not to be outdone, Mark Zuckerberg shared a reading recommendation every two weeks for a year, dubbing 2015 his “Year of Books.” Susan Wojcicki, CEO of YouTube, joined the board of Room to Read when she realized how books like The Evolution of Calpurnia Tate were inspiring girls to pursue careers in science and technology. Many a biotech entrepreneur treasures a dog-eared copy of Daniel Suarez’s Change Agent, which extrapolates the future of CRISPR. Yuval Noah Harari’s sweeping account of world history, Sapiens, is de rigueur for Silicon Valley nightstands.
This obsession with literature isn’t limited to founders. Investors are equally avid bookworms. “Reading was my first love,” says AngelList’s Naval Ravikant. “There is always a book to capture the imagination.” Ravikant reads dozens of books at a time, dipping in and out of each one nonlinearly. When asked about his preternatural instincts, Lux Capital’s Josh Wolfe advised investors to “read voraciously and connect dots.” Foundry Group’s Brad Feld has reviewed 1,197 books on Goodreads and especially loves science fiction novels that “make the step function leaps in imagination that represent the coming dislocation from our current reality.”
This raises a fascinating question: Why do the people building the future spend so much of their scarcest resource — time — reading books?
Don’t Predict, Reframe
Do innovators read in order to mine literature for ideas? The Kindle was built to the specs of a science-fictional children’s storybook featured in Neal Stephenson’s novel The Diamond Age; in fact, the Kindle project team was originally codenamed “Fiona” after the novel’s protagonist. Jeff Bezos later hired Stephenson as the first employee at his space startup Blue Origin. But this literary prototyping is the exception that proves the rule. To understand the extent of the feedback loop between books and technology, it’s necessary to attack the subject from a less direct angle.
David Mitchell’s Cloud Atlas is full of indirect angles that all manage to reveal deeper truths. It’s a mind-bending novel that follows six different characters through an intricate web of interconnected stories spanning three centuries. The book is a feat of pure M.C. Escher-esque imagination, featuring a structure as creative and compelling as its content. Mitchell takes the reader on a journey ranging from the 19th-century South Pacific to a far-future Korean corpocracy and challenges the reader to rethink the very idea of civilization along the way. “Power, time, gravity, love,” writes Mitchell. “The forces that really kick ass are all invisible.”
The technological incarnations of these invisible forces are precisely what Kevin Kelly seeks to catalog in The Inevitable. Kelly is an enthusiastic observer of the impact of technology on the human condition. He was a co-founder of Wired, and the insights explored in his book are deep, provocative, and wide-ranging. In his own words, “When answers become cheap, good questions become more difficult and therefore more valuable.” The Inevitable raises many important questions that will shape the next few decades, not least of which concern the impacts of AI:
“Over the past 60 years, as mechanical processes have replicated behaviors and talents we thought were unique to humans, we’ve had to change our minds about what sets us apart. As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. Each step of surrender—we are not the only mind that can play chess, fly a plane, make music, or invent a mathematical law—will be painful and sad. We’ll spend the next three decades—indeed, perhaps the next century—in a permanent identity crisis, continually asking ourselves what humans are good for. If we aren’t unique toolmakers, or artists, or moral ethicists, then what, if anything, makes us special? In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science—although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.”
It is precisely this kind of AI-influenced world that Richard Powers describes so powerfully in his extraordinary novel The Overstory:
“Signals swarm through Mimi’s phone. Suppressed updates and smart alerts chime at her. Notifications to flick away. Viral memes and clickable comment wars, millions of unread posts demanding to be ranked. Everyone around her in the park is likewise busy, tapping and swiping, each with a universe in his palm. A massive, crowd-sourced urgency unfolds in Like-Land, and the learners, watching over these humans’ shoulders, noting each time a person clicks, begin to see what it might be: people, vanishing en masse into a replicated paradise.”
Taking this a step further, Virginia Heffernan points out in Magic and Loss that living in a digitally mediated reality impacts our inner lives at least as much as the world we inhabit:
“The Internet suggests immortality—comes just shy of promising it—with its magic. With its readability and persistence of data. With its suggestion of universal connectedness. With its disembodied images and sounds. And then, just as suddenly, it stirs grief: the deep feeling that digitization has cost us something very profound. That connectedness is illusory; that we’re all more alone than ever.”
And it is the questionable assumptions underlying such a future that Nick Harkaway enumerates in his existential speculative thriller Gnomon:
“Imagine how safe it would feel to know that no one could ever commit a crime of violence and go unnoticed, ever again. Imagine what it would mean to us to know—know for certain—that the plane or the bus we’re travelling on is properly maintained, that the teacher who looks after our children doesn’t have ugly secrets. All it would cost is our privacy, and to be honest who really cares about that? What secrets would you need to keep from a mathematical construct without a heart? From a card index? Why would it matter? And there couldn’t be any abuse of the system, because the system would be built not to allow it. It’s the pathway we’re taking now, that we’ve been on for a while.”
Machine learning pioneer, former President of Google China, and leading Chinese venture capitalist Kai-Fu Lee loves reading science fiction in this vein — books that extrapolate AI futures — like Hao Jingfang’s Hugo Award-winning Folding Beijing. Lee’s own book, AI Superpowers, provides a thought-provoking overview of the burgeoning feedback loop between machine learning and geopolitics. As AI becomes more and more powerful, it becomes an instrument of power, and this book outlines what that means for the 21st-century world stage:
“Many techno-optimists and historians would argue that productivity gains from new technology almost always produce benefits throughout the economy, creating more jobs and prosperity than before. But not all inventions are created equal. Some changes replace one kind of labor (the calculator), and some disrupt a whole industry (the cotton gin). Then there are technological changes on a grander scale. These don’t merely affect one task or one industry but drive changes across hundreds of them. In the past three centuries, we’ve only really seen three such inventions: the steam engine, electrification, and information technology.”
So what’s different this time? Lee points out that “AI is inherently monopolistic: A company with more data and better algorithms will gain ever more users and data. This self-reinforcing cycle will lead to winner-take-all markets, with one company making massive profits while its rivals languish.” This tendency toward centralization has profound implications for the restructuring of world order:
“The AI revolution will be of the magnitude of the Industrial Revolution—but probably larger and definitely faster. Where the steam engine only took over physical labor, AI can perform both intellectual and physical labor. And where the Industrial Revolution took centuries to spread beyond Europe and the U.S., AI applications are already being adopted simultaneously all across the world.”
Cloud Atlas, The Inevitable, The Overstory, Gnomon, Folding Beijing, and AI Superpowers might appear to predict the future, but in fact they do something far more interesting and useful: reframe the present. They invite us to look at the world from new angles and through fresh eyes. And cultivating “beginner’s mind” is the challenge for anyone hoping to build or bet on the future.
source https://techcrunch.com/2019/02/16/the-best-fiction-for-building-a-startup/
February 27, 2017 by Silvio Lorusso
What Design Can’t Do — Graphic Design between Automation, Relativism, Élite and Cognitariat
"The thing that pisses me off the most is the degradation of the intellectual role of the designer." […] that made me wonder what constitutes that role, whether it actually existed, how it vanished and what replaced it. […] focus on graphic design
An example from the gig economy (a notion the BBC describes as follows: “it is either a working environment that offers flexibility with regard to employment hours, or... it is a form of exploitation with very little workplace protection”).
She “claims to be a self-taught graphic designer, since all her previous jobs included tasks where she created graphics. This generalization of graphic design is the keystone of my argument on its potential intellectual role.”
This sounds like the familiar promise that “the reduction of working hours would have been made possible by machines, able to automate an increasing number of processes” [which is back to the hype] “thanks to the advances in artificial intelligence and to an ultra-cited 2013 study according to which 47% of the jobs currently performed in the US are put at risk by computerization.”
[…]
graphic design (in which I include web design as well) has already undergone the drastic effects of automation, […the web is already full of] generators capable of producing endless permutations of logos [and graphic design has been impacted by new technologies ever since the popularisation of the computer]
[…]
In his dystopian novel Player Piano (1952), Kurt Vonnegut depicts an almost completely automated society where it is sufficient to simply record the movements of a worker on a disk to ensure that the machine will endlessly repeat them with full accuracy.
[…]
That’s how template culture comes to life, where every new project actually derives from a long sequence of previous projects.
[…] No More Rules. This is the title of the study on the influence of postmodernist mentality on graphic design conducted by Rick Poynor. […]’80s to the early 2000s, designers subverted, one by one, all the commandments […]
A few years later, Tibor Kalman turns his gaze to "low", vernacular visual cultures, […] "design without designers"
[…] a few years later, rules forcefully reappeared. […] many now consider the postmodern period a sort of Dark Age. However, the seed of doubt has never been fully eradicated […]. Take German designer Manuel Buerger, who went to a copy shop in Mumbai to commission the design of his own business card.
Manuel Buerger (2009)
[this] deprived graphic design of any anchor
[…] Perhaps partly because of this visual relativism and the democratization of tools, graphic design […] underwent a process of intellectualization. […] a tendency to make writing and research the pivot of a designer’s practice.
[…] several designers gradually shy away from doing graphic design as it is commonly understood, and they become primarily writers.
//I’m not sure about that… it might indeed happen, but conversely, non-designers (or unfulfilled/untrained designers) actually turn into researchers/writers.
As a result of the developments I described, graphic design culture has, at least superficially, become part of popular culture
https://www.youtube.com/watch?time_continue=112&v=frBO8PkEQPA
According to Ian Bogost, we’re all "hyperemployed". Instead of merely practicing our official job, we perform many of them
//I still don’t agree: is “work” restricted to a single (Fordist) activity?
Graphic design becomes micromanagement, while the designer turns into a Kafkaesque, voice-controlled mouse.
[…] Before the advent of TaskRabbit, Italian graphic designers were already active in the defense of their professional and cultural position.
[…] When attacking non-professionals, graphic designers generally appeal to the notion of quality. Thus, they argue that the client should be educated in order to recognize it.
[…] Given the results, it seems that the real communicative hurricane of the last elections (exponentially more chaotic and “authentic”, if by this we mean grassroots, than Sanders’) was largely ignored by the design community: Trump’s “meme machine”.
[…]
In order to weigh the perceived economic value attributed by the public to the work of graphic designers, I’d like to resurrect the story behind the logo of Madre, a contemporary art museum in Naples. Apparent cost of the restyling: 20,000 euros. Result: popular uprising on social media accompanied by a plethora of parodies.
[Graphic design is competitive like Highlander (she says), but…] Once one is ahead of the competition, where should they devote their talent? Recently, I read a compelling interview with Ruben Pater, who teaches at the Royal Academy of Art, The Hague, […] he tries “to motivate designers to focus on making great work, and while doing that, also imagining the needs and interests of the other 99%.”
What has been dubbed critical design is partly driven by this […] motivation. However, this attitude may lead to a curious situation: aspiring designers develop an urgency to solve an ambitious problem affecting some distant countries,
////No… that would rather be the US understanding of Social Design///
while in the evening or in the weekends they do a bullshit job, […David Graeber] as they concentrate on the "other" 99%, they forget to be part of it themselves.
///this critique is made clearer by others (cf. Pedro Oliviera, Luiza Prado, Decolonising-design group), but OK
Furthermore, both the design field and tech startups are affected by what Evgeny Morozov calls “solutionism”: the idea that a technical solution is sufficient to solve a “wicked” social problem.
//I just insert here a quote by Findeli, for inspiration about so-called “wicked problems”, in which he describes the “paradigm shift” that took place in the 1990s:
This is how it was discovered that design was not, contrary to what everyday language still often suggests, a problem-solving activity; that its “problems” were always ill-defined (wicked or ill-defined); that the phase of constructing the problematic was essential to the proper conduct of a project; that the formulation of a project brief always demanded to be questioned and reconstructed; that a model should help one think rather than spare one from thinking; that the recipients of our projects were not merely consumers or users with needs to satisfy or fulfil, but that they too were bearers of projects; that acting should be distinguished from doing and making; that a design product was not necessarily material; that beneath the look lay meanings other than those that marketing imposed on us, and many other quite instructive things besides.
(Findeli, 2003: 14)
(via this critique of social design & participation)
However, such confidence in technology and science seems dubious when expressed within graphic design […]
The fact that design is always imbued with ideology is indeed what Ruben Pater, an avid critic of design solutionism himself, highlights in The Politics of Design. He begins the book by pointing out a series of often overlooked expressions of privilege, such as the mere possibility of reading a text online, a benefit granted to only 40% of the world population. Clearly, we shouldn’t dismiss the denunciation of privilege as a sanctimonious do-good design attitude. And we must take into account the dramatic material imbalances within the 99%. But if it’s true that, as Tony Fry argues, design “either serves or subverts the status quo”, it is legitimate to ask whether within certain instances of design education, the impetus to subvert the status quo is precisely what ultimately serves it.
Both the designer in the call center and the startupper in the garage are victims of the cognitive dissonance of what I call the entreprecariat.
/////
MORE FROM http://networkcultures.org/entreprecariat/what-is-the-entreprecariat/
On a basic level, the entreprecariat refers to the reciprocal influence of an entrepreneurialist regime and pervasive precarity. […] The entreprecariat is the semi-young creative worker who puts effort into her own studio while freelancing for Foodora […] But one can’t properly describe the precariat without referring to a genuine enthusiasm, sometimes of a euphoric kind, that often emerges from these conditions. […] Entrepreneurialism is a common response to the identity crisis of the precariat. Can this energy provide emancipation? This is the fundamental dilemma of the entreprecariat. […] All these characters share the urgency to optimize their time, their mind, their body, and their soul in order to deal with precarious conditions, be they financial, psychological, affective, physiological, temporal, geographical. Lifehacker.com well represents this urgency […] In the entreprecarious society, everyone is an entrepreneur and nobody is stable. Precarious conditions demand an entrepreneurial attitude, while affirmative entrepreneurialism dwells in constructive instability and change. Thus, entreprecarity is characterized by a cognitive dissonance.
/////
One possible way out for the aspiring designer is, precisely, education. Since market dynamics rarely allow designers to develop a practice that is intellectually fulfilling, some designers aim to become teachers, tutors, guest lecturers. […] Referring to master’s programs in creative writing, Timothy Small speaks of a big Ponzi scheme, “a process in which one writer without money begins to teach in order to supplement their income, creating 15 writers without money who in turn will begin to teach thanks to the recommendation of the first writer-teacher, and that in turn will produce another 15 writers with no money, etc, etc.”
[…]
I propose that marginality should be precisely the starting point of any educational project about graphic design. With such a premise in mind, it would be easier to recognize the transformation of graphic design from creative work to cognitive work. This mutation, which involves many other sectors of the service economy, has a tragic implication: the war against free labor and the devaluation of graphic design is lost […] in a context characterized by an unceasing cascade of free content and tools, it is increasingly difficult to agree to pay even a modest sum for an intangible asset.
MyCreativity (2006)
What is to be done? First of all, we need to acknowledge the fact that having economic problems as a cognitive worker is a structural consequence, not an individual one. In the 2000s, Richard Florida theorized the advent of the "creative class" ///like Pierre Giorgini, la Fulgurante Recréation///, whose transformative potential he praised. A few years later, the MyCreativity group reformulated this concept pragmatically, speaking of self-exploitation, insecurity and creative underclass. We must admit that design schools contribute to populating this creative underclass. So I think it makes sense to talk about design schools as precarity factories. At the same time, however, these schools could be described, to tweak Hakim Bey’s concept, as temporary autonomous élites, since they constitute a space where one can literally buy a degree of control over their time.
If gratuity is unstoppable […] the issue must be addressed at a broader level. […] such as universal basic income, perhaps not achievable in practice or even counterproductive […] reframe the very meaning of work […] I envision a design school functioning like a think tank. Its area of action would be the redefinition of work and the development of strategies to produce a new cultural hegemony.
[…] some efforts in this direction have already been made […] Brave New Alps, a graphic design duo militating from the heart of the Alps […] on designers’ working conditions for years. [›› Construction Site for Non-Affirmative Practices] The collective produced a one-of-a-kind investigation into the economic and social profile of those who identify as designers. The Designers’ Inquiry was launched, not by chance, during the Salone del Mobile, the epitome of rich, opulent and glittering entrepreneurial design. Brave New Alps didn’t stop there and, together with Caterina Giuliani, set up the Precarity Pilot, a physical and virtual platform that includes a set of “best practices” to organize one’s career, redefine notions of success, enable cooperation dynamics, and so on. Maybe this path, the political path, is precisely the one to follow in order to reaffirm the intellectual role of the designer and, while we’re at it, of the cognitariat in general.
Author: Silvio Lorusso
Silvio Lorusso is a Rotterdam-based artist, designer, and researcher. His current research focuses on the relationship between entrepreneurship and precarity, i.e. the entreprecariat. His work has been shown at Transmediale (Berlin, Germany), NRW-Forum (Düsseldorf, Germany), Impakt (Utrecht, Netherlands), Sight & Sound (Montreal, Canada), Adhocracy (Athens, Greece), and the Biennale Architettura (Venezia, Italy). He holds a Ph.D. in Design Sciences from the School of Doctorate Studies – Iuav University of Venice. His writing has appeared in Prismo, Printed Web 3, Metropolis M, Progetto Grafico, Digicult, Diid, and Doppiozero. His work has been featured in, among others, The Guardian, The Financial Times, and Wired. Currently, he works as a mentor at the Amsterdam University of Applied Sciences’ PublishingLab.
What Design Can’t Do — Graphic Design between Automation, Relativism, Élite and Cognitariat | THE ENTREPRECARIAT
Essay and Delivery Plan
LO6: Make judgements and present arguments through engagement with fundamental historical, cultural and ethical concepts and theories associated with your subject.
Engage in research into the background of the Alice books – why they were written, and whether they have morals or simply licence to be creative. What sets them apart from other children’s stories? They allow a more creative outlook on literature, unlike books made with morals and lessons intact. Research the illustrations of John Tenniel and his tongue-in-cheek nature – why did they suit this book? What else did he illustrate? What medium did he use?
Research the background of all the Alice movies – incorporate the knowledge that Walt Disney largely left his animations to the Nine Old Men while he focused on Disneyland and the rest of his empire. Write about the low success of Pinocchio and Fantasia, Walt’s “babies”, and how this affected the animation department and the outlook on Alice. Write about his major success as a feature-length animation studio, his rivals the Fleischer brothers, and his early successes.
Sources: Walt Disney documentary, the movies themselves, animation history books, the Alice books.
Carry on writing about Disney and how the animation studio has changed and morphed into something almost wholly 3D/CGI-based, and how Tim Burton’s adaptation is both live action and CGI. Explain the benefits and compare/contrast with the original 1951 2D adaptation. Explain how animation has advanced to the point where a believable combination of live action and CGI is possible (NASA praised the realism of Gravity), which would be almost impossible without CGI – relate to the Beauty and the Beast live-action CGI notes.
Explore the cultural backgrounds of fairytales themselves – maybe the PowerfulJRE would help with this? How could Alice’s story be used as a lesson or a moral – does it even have one?
Create appropriate relations to theories – Christopher Vogler, Joseph Campbell, Christopher Booker, Vladimir Propp? (maybe).
Points to Mention
The plot; what type of narrative structure do the adaptations use? How can they be compared and contrasted? What makes that particular type of structure good for that style of movie, if at all? Can it be compared to 3- or 5-act structure?
Does it follow a Hero’s Journey or Propp’s Morphology of the Folktale? If only one, suggest/discuss why the other adaptations do not, or whether they follow different types of narrative structure.
Visual storytelling; is it told or shown? (Hitchcock’s idea of “pure” cinema or things explained through dialogue.)
Aesthetics; what type of animation is it? Does this style suit the plot and overall theme of the movie? Refer to mise en scène and semiotics.
Fables and fairytales usually have morals or hidden messages. Do any of these adaptations contain any, and if so, what are they and how can they be compared? Are they the same?
Audience; who is it aimed at? How can you tell? Are the adaptations aimed at different audiences? What is the audience supposed to feel – what is the tone and genre? Are there any genre conventions?
History; write about the political and historical context surrounding each of these movies. Explain how they came about and what the period in which they were animated/created was like.
Introduction
Before Oswald the Lucky Rabbit and Mickey Mouse, there was Alice in Cartoonland – or the “Alice Comedies”, 50 silent shorts featuring a live-action girl in a cartoon world.
This was the beginning of the Disney company’s long-standing relationship with Alice. Based on characters from Lewis Carroll’s classic stories, Alice became the groundwork on which Disney’s reputation grew into what it is today.
Alice’s Adventures in Wonderland
“Begin at the beginning,” the King said gravely, “and go on till you come to the end: then stop.” (Carroll and Pullman, 2015, pg. 182)
Written by Charles Lutwidge Dodgson under the pseudonym Lewis Carroll, after a boat trip from Folly Bridge in Oxford with companions Reverend Robinson Duckworth and the three daughters of Henry Liddell: Lorina Charlotte, Alice Pleasance, and Edith Mary.
Illustrated by Sir John Tenniel, whose drawings were reproduced as wood engravings (a form of printmaking). Tenniel was one of the leading illustrators of the time, known for the satirical and often radical cartoons he produced as chief cartoonist for Punch, a weekly magazine of humour and satire.
Compared to other literature of the time (the Brothers Grimm), Alice was a revelation to Victorian society: a story with no moral or rules, written to entertain rather than to warn children that behaving in a certain way would see them punished.
“It’s sometimes said that Lewis Carroll’s Alice books were the origin of all children’s literature” and “In Alice, for the first time, we find a realistic child taking part in a story whose intention was entirely fun.” (Carroll and Pullman, 2015, pg. VI)
Follows a ‘Voyage and Return’ story structure, as described by Christopher Booker. In Voyage and Return stories, the protagonist voyages from the real world into a fantasy world where “everything seems disconcertingly abnormal” (Booker, 2005, pg. 87).
In any story there must first be an inciting incident, which “occurs toward, or at, the end of that first act, and the protagonist ‘falls down the rabbit hole’” (Yorke, 2014, pg. 30). Insert a figure with the customised structure done in Photoshop. In Alice’s case, her voyage and return was “just a dream” (Booker, 2005, pg. 106), and she learnt no moral or meaning from it, other than “what a wonderful dream it had been” (Carroll and Pullman, 2015, pg. 189).
Alice in Wonderland, 1951
“Sorry, you’re much too big. Simply impassible.” “You mean impossible?” “No, impassible. Nothing’s impossible.” (Alice in Wonderland, 1951)
Based on both Alice’s Adventures in Wonderland and Through the Looking-Glass. Overall, the film stays close to the source material, with a majority of the dialogue taken directly from the books themselves.
During the period after the release of Walt’s beloved Fantasia and Pinocchio, which did not achieve the appreciation he had hoped for, Walt’s creativity “was focused on his ideas for Disneyland and he left the animation department largely in the hands of his trusted ‘nine old men’” (Cavalier, 2011, pg. 154). Disney’s version of Alice was met with reviews no better than those for Fantasia, and The New York Times declared: […]
Meant to be animated as a piece of visual storytelling – where the images and the movie itself tell the story, rather than the dialogue driving it.
Alice featured a much more striking and vibrant colour palette than its predecessors; movies such as Snow White and Bambi were created with muted colours, for fear that bright colour would distract from the emotion and make a feature-length animated movie difficult to watch over 70 minutes of viewing time.
Created using ‘cel’ animation (so called because each frame was drawn and painted on sheets of celluloid before being photographed), which is “achieved when ‘key drawings’ are produced indicating the ‘extreme’ first and last movement which are then ‘in-betweened’ to create the process of the move” (Wells, 2011, pg. 36).
The plot of the movie was carved out by first creating a soundtrack documenting the conflicts and substantial meetings between characters; the animation was then fleshed out to fit these songs, and the voiceovers were added afterward. This approach is called “proto-narrative”.
Fans of the Lewis Carroll classic disapproved of the “Disneyfication” (the removal of unpleasant plot elements and the act of making a story “safe” for younger audiences and their parents) of the movie, and Disney fans were “uncomfortable with the story’s surrealist episodes and the spiky and somewhat grotesque nature of the supporting characters.” (Cavalier, 2011, pg. 154)
Alice in Wonderland, 2010
“I’m not crazy. My reality is just different than yours.” (Alice in Wonderland, 2010)
More than 50 years after the release of the original Alice movie by Disney, the story was recreated as a hybrid of live action and CGI, with Tim Burton directing.
(It is reminiscent of Burton’s macabre stop-motion animation films, such as The Nightmare Before Christmas and Corpse Bride.)
It could almost be seen as a sequel to the 1951 version, as it occurs later in Alice’s life, after she has already visited Wonderland – now dubbed “Underland” – and it does not follow the plot of the original text, but draws characters and scenarios from it instead.
The colour palette used for Underland is very dark and muted (see figure), with the notable exceptions of the Mad Hatter (who wears crazy, clashing colour combinations that draw your attention) and the Queen of Hearts (clad in blood-red crimson).
Alice as Animation
It would not be possible to fully recreate the story of Alice without some form of animation, essentially CGI. Fairy tales and fables contain whimsical elements and call for special effects that cannot be recreated any other way, even with modern technology at its current level. NASA complimented the accuracy of the portrayal of space in the film Gravity – “What I really liked was the feel of space, the look of space, what the earth looked like up on the big screen. It brought back a lot of memories very accurately.” (NASA, 2014) – which could not have been achieved without the amount of special effects and CGI used to recreate space itself.
Similarly, the upcoming 2017 live-action adaptation of Beauty and the Beast relies heavily on CGI for the Beast’s appearance. “You’re only going to see Dan Stevens’ eyes essentially, the rest of him is going to be a CGI creation… He had to have his face covered in basically UV paint, a special new technology… and then he was plopped in front of 30 little cameras and then had to perform the facial part of the Beast” (Clark Collis, PEOPLE, 2017).
Features anthropomorphism (attributing human characteristics to entirely non-human things, such as animals and objects) in the form of the Cheshire Cat, the Caterpillar, etc., something that “remains the constant locus of a great deal of animation” (Wells, 1998, pg. 5).
Conclusion
Since its publication in 1865, Alice has resonated with people worldwide through its timeless story and fun adventures with “mad” characters. It continues to influence pop culture to this day, serving as inspiration for movies, books, television shows and games.