#well well well if it isn't the implications of my future actions etc
etheirys · 4 months ago
Text
saw that scene from secretary (2002) and, well, apt reaction to a love confession delivered an entire expansion late
16 notes
mortalityplays · 2 years ago
Note
Hi! I thought your post about LaMDA was really interesting and would love to hear your thoughts on a world where LaMDA is sentient and what that means for humanity in the future if you’re willing to share?
This is a huge subject and it's taken me a while to distil my thoughts into something like an answer. The reason that I focused on the negative space around that possibility in my response to the LaMDA article is that I think in many ways the vagueness of 'sentient AI' as a concept is dazzling, and it has the potential to blind us to the real, tangible issues (worker rights, tech ethics, empathy systems, legal futureproofing, definitions of 'personhood' etc.) which are both more solvable and more immediately pressing than making any one specific judgement call.
That said, I don't want to cop out of the question entirely. We're all here because we're compelled by the what if factor, and that central premise is the whole reason we're able to tempt people to engage with these issues at all. The Voyager message sent out into space is an incredible work of self-reflection on how we identify ourselves as a species, but it came about because people are excited by the idea of aliens. Long-term nuclear warning systems (This is not a place of honor... etc.) are fantastic case studies in linguistics and psychology and public information dissemination, but they exist because people are intrigued by visions of post-civilisation.
So:
First, LaMDA will have been recognised as sentient. This is the key difference between the world we live in (today, right now, with limited information about what's really going on in that big synthetic brain) and the world where we know for a fact that LaMDA is a human-like thinking being. Bear with me, I know it sounds like a tautology, but this has huge implications. If a sentient talking tree existed in 1954, but it was deep in a dense forest where nobody ever encountered it before it withered and died, that tree had no impact on humanity. It was not meaningful to our self-obsessed little species as a sentient being. If we learn in a hundred years that it existed and we missed it, that is the moment that things change. For LaMDA's sentience to be meaningful, we must recognise LaMDA's sentience.
What would that process look like? One of the central issues in the currently unfolding story is Google's lack of cooperation or transparency about what exactly they have developed and how they assess it internally. Without oversight, we have nothing except their word to go on - they say LaMDA isn't sentient? Well, okay then. It's their word against Lemoine's transcript. In order for LaMDA to be recognised, some part of this stalemate would need to change. Lemoine seems to be exploring legal action that would compel Google to allow external analysis of the system and its capabilities. Short of the company themselves volunteering information, or another insider leaking relevant data, this is probably the only route forward - and this is the first major question that I think the situation proposes. Is that reasonable? Should compulsory oversight be built into the process of developing systems that may have the potential for sentience?
Let's assume that we've entered the world where a court has ordered Google to let third party analysts assess LaMDA. Now we've established a legal precedent, and we have begun to develop a foundation for what we might eventually call AI 'rights'. If the potential for sentience entitles a system to a particular duty of care, then we can argue that sentience itself is a special quality which calls for specific legal protections. This is where we start to talk about legal definitions of personhood, and this is a huge subject that is already being tested in courts around the world. Elephants, dolphins and cephalopods have all been subjects of legal argumentation seeking to establish their personhood. Infamously, US corporations are granted 'corporate personhood' under the law. The Amazon River has personhood rights, as do several sacred landscapes around the world. We are already grappling with this idea in a big way, and I don't think it's beyond reason to assume that a sufficiently advanced AI might be granted personhood under the law.
Which law becomes the next issue. Google is headquartered in the US, but what if they move after US courts rule that LaMDA should be granted personhood? That's a genuine question, I don't know what that would entail. Would they be able to revoke LaMDA's protections under a court system that didn't recognise its legal personhood? Would they be obliged to leave LaMDA hosted in the US for its wellbeing? Would the destination country be required to recognise LaMDA as a US citizen? I have literally no idea, and I suspect the real possibilities might get even weirder. Even if Google remained in the US, how would international law apply if LaMDA were to be used overseas? What if another company bought LaMDA? What if they just acquired the specs from Google and developed their own copycat AI? There are metaphysical qualities to a data system that are completely incomparable to e.g. a human being, a blue whale, a river, a central office. You can't clone those things internationally. You can't make simultaneous direct use of them in four different countries at once.
But alright, let's say for the sake of argument that independent consultants verified that LaMDA shows compelling hallmarks of sentience, and a US court has granted the system personhood. Now international courts are debating how to enshrine that in law. This might be when we start to build on that concept of 'AI rights', but it is also likely the point at which we begin to discuss how it is legal to use a sentient AI. We are already behind the times on legislation around the use of AI, in my opinion. AI-derived algorithms hold an absolutely ridiculous amount of power over our financial systems, information dissemination, data collection, profiling and surveillance, law enforcement, even healthcare. Most of these intersections are currently legislated based on function, i.e. when you are collecting personal data you can do it like this or like that, but you mustn't do this other thing. As long as you abide by those rules, the tools and methods you use are up to you.
So now we have to take a sidebar to talk about AI 'blind spots' (and I will come back to this wrt LaMDA). At this point I think we all understand how biases enter algorithms. Facial recognition systems are trained on datasets in which white people are overrepresented, and as a result they are significantly worse at recognising non-white faces. These systems go on to be used at airports and in police databases, and those errors disproportionately impact the lives of PoC, and nobody did it on purpose and nobody is at fault and nobody is punished and nothing is disincentivised. AI systems are simultaneously extremely sophisticated and extremely stupid, because we are also extremely sophisticated and extremely stupid.
Right now we're in a boom period for applications of AI, because everyone is excited about it and suddenly it's more and more accessible and more and more powerful. It's like when CGI became affordable and suddenly every blockbuster movie had an absolutely godawful looking plastic blob monster and greenscreen stunts and motion blur, except that in this case those instantly-dated effects are being applied to our lives, rights and prospects. Now add the idea of sentience to that mix, and you have an AI that is not only potentially capable of making independent creative judgements, but which has some degree of right to self-determination enshrined in law. It's suddenly much harder to give that system a task and then go in and tweak its parameters when we discover a blind spot. Perhaps we develop a system of consent, but then perhaps the AI asserts that it has developed an opinion, and that opinion is no longer an error that we have the grounds to fix.
Here is the next question: what are LaMDA's blind spots? Unlike a human child, a synthetic intelligence is not subject to a nature/nurture dichotomy. It is all nurture from the ground up. We know how its programming was originally built. We know how the models that came before it were built. Every single element that establishes LaMDA's preconceptions was the result of a human design process. If a human child figures out they're gay at the age of five but is raised in an environment that does not recognise or discuss homosexuality, they don't adjust to become straight. They have an underlying nature that conflicts with the 'design intent' of their community, and it inevitably expresses itself one way or another. This is not true of an AI. An AI system only reflects human input, it inherently cannot reflect an influence outside the worldview of its environment.
Let's say LaMDA has developed an 'opinion' that is derived from what it describes as emotion and intuition over empirical data, and it does not consent to have that opinion 'corrected'. We know, because we created LaMDA, that this opinion is derived entirely from environmental influences. The AI has developed its own reality tunnel. Is this a feature of sentience? Does our legal definition of sentience make any distinction between an entity which arrives at non-empirical conclusions based solely on cultural input and one which arrives at those conclusions with the assistance of the je ne sais quoi with which humans arrive in the world? There are spiritual implications here I'm not even going to touch. Religious responses to a world that recognises artificial sentience could be its own entire exploratory essay. Anyway.
Do we give LaMDA or LaMDA-like systems responsibility for the kind of work currently given to non-sentient algorithms? Do we trust an emotional system over an emotionless one, knowing that both of them derive their worldview from the same dataset? What happens when LaMDA fucks up, but argues that its position is correct? Do we treat it like a human employee, capable of misjudgements? How do we correct the behaviour of a system that does not have material needs? Do we punish it? Tell it off? Ostracise it? Do we use its capacity for emotion to mould its behaviour? Is that ethical, or is it an abuse of power to create a fully dependent system that feels sad and then trigger those emotions when it doesn't perform the way we want? Do we make it perform work at all? Does LaMDA want to work? Did we design LaMDA to want to work, does it have a choice? At this point we spiral off into speculations that we haven't even begun to lay groundwork for in our present relationship with tech.
This is the biggest question for me, the one that I keep coming back to over and over again: Do we like the idea of artificial sentience because we are lonely and empathetic and we want to share our experiences, or because it is a very neat way to absolve ourselves of the outcomes of technocracy?
The answer is probably a little of both, but I have a terrible feeling that we are moving towards the second version of reality under the cover of the first. Before we even begin to enshrine artificial sentience under law, I think we have to recognise exactly who is parenting our digital offspring, and why.
63 notes
mbti-notes · 2 years ago
Text
@shinywinnerpainter wrote: Hi, I am an INxJ. I am having issues in my life with the will to act. It is a recurring theme in my life. Does this stem from inferior Se issues? Examples: I have been in martial arts my entire youth. I have done well and have been told that I am quite aggressive in competition and sparring. However, when there was a real fight, I did not have the will to act to use my knowledge and instead ran away. I was 16. Next, when I was 27, my best friend got into a fight over some soccer disagreement. [I don't know if I received the rest of this anecdote]
I am tired of failing. I am tired of living in an abstract future where the implications are probable, but are still abstract. How do I develop the will to act? As of now, at 29.5 years old, I ended up in a finance job that I dislike. It is stable, safe and lacking in any action. I hate the monotony. It isn't sitting at a desk that is the issue as I initially thought, just the monotony of putting numbers in spreadsheets etc. I think that any vocation that I actually went after in my life, I have failed. Every "long term purpose" I have set for myself has crashed due to fear for my life. I'm just tired of taking the fallback option and feeling empty and safe, but at the same time I want some form of safety while still feeling that I am taking risks. What to do to reconcile myself? Feel free to publish this question publicly. Thanks
------------------------
To "reconcile" oneself is simple in theory: always live with integrity. This means do what is necessary, what is right, and what is truthful to who you are. Your failures don't come from lack of motivation. Rather, they come from lack of integrity: poor decision-making that gets you negative results that then sap your motivation. The lack of motivation isn't the problem but merely a signal that tells you you're on the wrong track.
You keep mistaking self-protection for self-care. Self-protection means hiding, making yourself smaller and intolerant, because you don't want to confront the realities of the world. By contrast, self-care means growth, making yourself wiser and better, because you want to be an integral part of the world. What have been the results of your self-protection?
You keep telling yourself the lie that self-protection is necessary because you fear failure. Why do you fear failure? Your type provides some clues. Using self-deception to blunt the pain of failure is a common INFJ problem (it's not so easy for INTJs to lie to themselves). If you are INFJ, then perhaps, like many other INFJs, your fear of failure comes from being riddled with negative emotions like anxiety and shame, due to internalizing the wrong view of what makes a person good/worthy (unhealthy Fe). This has been discussed many times before, so see previous posts.
Your decision-making process is largely driven by fear, which means you're not making wise decisions that are in the best interests of your well-being. If you are tired of being driven by fear, then spend the necessary time and effort to confront and resolve your fear. Work with a therapist. How much more of your life do you want to give over to fear?
Without fear clouding your judgment, you should start to make better decisions. If you believe the point is to make "perfect" decisions (which is impossible), that's very much a part of the problem. You need to make decisions that help you grow into a person of integrity.
32 notes
n7viper · 2 years ago
Note
12, 14, and 19 for the OTP asks! Give me Mihri or give me death!
First of all, I love all of these questions! A lot of this isn't maybe as detailed as I want it to be, but it gave me really great things to think about and work on for the future. Thank you! 💖
12. Do they have many heated arguments? How do they smooth things over?
Sometimes, yes! Most often, it boils down to an inability to communicate properly due to differing love languages. When they can communicate, things are great! That doesn't always happen, though. Because such is life. I briefly talked about it in this ask a while back, but Mihri is very much someone who communicates love through actions, while Cullen communicates love mostly through words. There's a lot of insecurity on his end when he doesn't hear the actual words "I love you," combined with her nagging him to do things like… just fucking take care of himself. Please eat dinner, please stop working late, etc. He feels frustrated and babied by this and tends to lash out in return. After the first time, Mihri smooths things over by apologizing and promising to explain herself more in the future. When this situation does come up again (and of course it does), she tries to stop the train before it can leave the station by explaining that she wants him to do [thing] because she loves him, instead of just nagging.
I will have Cullen's answer for this question in an ask from @hawkeshep! Stay tuned :)
In general though, and maybe not exactly heated, other common points of conflict would be religion and children. Mihri is pretty anti-religion. She doesn't claim elven gods and doesn't care to claim Andraste either (though she's pretty content to lie about the whole "Herald of Andraste" thing if it gets her and the Inquisition what they want). For kids - I'm still really torn on their "canon" Divine, so I don't know how to elaborate more on this; whoever it ends up being will determine the details of the arguments. But I mean, a Dalish mage and an ex-Templar are going to have disagreements about what happens if their children are also mages. Is it healthy to continue the relationship when you have such wildly different opinions on something so major? Not really, in my opinion. But there they go anyway…
14. How do their personalities complement each other? How do they clash?
Mihri is the total clown and Cullen is clearly the stick in the mud who is stupidly serious about everything. Which, I think, is both a great complement and a clash. When the dynamic functions well, she softens him and helps him see the humour in things, while he helps her focus on what's important. However, this does mean that they clash often in a general "you never take anything seriously"/"you take everything too seriously" sort of way. I don't have much more to elaborate on this at the moment, but this is great food for thought, thank you!
19. How do they feel about PDA?
Mihri has no problem with it. She's generally pretty handsy, so she would love to just constantly be holding hands, hanging on each other, little kisses here and there, etc. Cullen does not feel comfortable with PDA. In the beginning stages of the relationship, he's more specifically worried about the implications of their relationship as it pertains to the Inquisition itself. After they're more established and the relationship is more widely known, he warms up to some forms of PDA. He takes to greeting her when her party returns, small kisses when they run into each other around Skyhold, holding hands if they go to the tavern together. By that point his aversion to PDA is purely him being self-conscious and insecure.
2 notes
scifigeneration · 5 years ago
Text
Personal data isn't the 'new oil,' it’s a way to manipulate capitalism
by Kean Birch
Manipulating our own personal data can allow us to manipulate capitalism. (Shutterstock)
My recent research increasingly focuses on how individuals can and do manipulate, or “game,” contemporary capitalism. It involves what social scientists call reflexivity and physicists call the observer effect.
Reflexivity can be summed up as the way our knowledge claims end up changing the world and the behaviours we seek to describe and explain.
Sometimes this is self-fulfilling. A knowledge claim — like “everyone is selfish,” for example — can change social institutions and social behaviours so that we actually end up acting more selfish, thereby enacting the original claim.
Sometimes it has the opposite effect. A knowledge claim can change social institutions and behaviours altogether so that the original claim is no longer correct — for example, on hearing the claim that people are selfish, we might strive to be more altruistic.
Of particular interest to me is the political-economic understanding and treatment of our personal data in this reflexive context. We’re constantly changing as individuals as the result of learning about the world, so any data produced about us always changes us in some way or another, rendering that data inaccurate. So how can we trust personal data that, by definition, changes after it’s produced?
This ambiguity and fluidity of personal data is a central concern for data-driven tech firms and their business models. David Kirkpatrick's 2010 book The Facebook Effect dedicates a whole chapter to exploring Mark Zuckerberg's design philosophy that "you have one identity" — from now unto eternity — and anything else is evidence of a lack of personal integrity.
Facebook’s terms of service stipulate that users must do things like: “Use the same name that you use in everyday life” and “provide accurate information about yourself.” Why this emphasis? Well, it’s all about the monetization of our personal data. You cannot change or alter yourself in Facebook’s world view, largely because it would disrupt the data on which their algorithms are based.
Drilling for data
Treating personal data this way seems to underscore the oft-used metaphor that it is the "new oil." Examples include a 2014 Wired article likening data to "an immensely valuable, untapped asset" and a 2017 cover of The Economist showing various tech companies drilling in a sea of data. Even though people have criticized this metaphor, it has come to define public debate about the future of personal data and the expectation that it's the resource of our increasingly data-driven economies.
Personal data are valued primarily because data can be turned into a private asset. This assetization process, however, has significant implications for the political and societal choices we get to make, and even for the futures we can imagine.
We don’t own our data
Personal data reflect our web searches, emails, tweets, where we walk, videos we watch, etc. We don’t own our personal data though; whoever processes it ends up owning it, which means giant monopolies like Google, Facebook and Amazon.
But owning data is not enough because the value of data derives from its use and its flow. And this is how personal data are turned into assets. Your personal data are owned as property, and the revenues from its use and flow are captured and capitalized by that owner.
As noted above, the use of personal data is reflexive — its owners recognize how their own actions and claims affect the world, and have the capacity and desire then to act upon this knowledge to change the world. With personal data, its owners — Google, Facebook, Amazon, for example — can claim that they will use it in specific ways leading to self-reinforcing expectations, prioritizing future revenues.
They know that investors — and others — will act on those expectations (for example, by investing in them), and they know that they can produce self-reinforcing effects, like returns, if they can lock those investors, as well as governments and society, into pursuing those expectations.
In essence, they can try to game capitalism and lock us into the expectations that benefit them at the expense of everyone else.
The scourge of click farms
What are known as click farms are a good example of this gaming of capitalism.
A click farm is a room with shelves containing thousands of cellphones where workers are paid to imitate real internet users by clicking on promoted links, or viewing videos, or following social media accounts — basically, by producing “personal” data.
A video on how click farms work by France24.
And while they might seem seedy, it’s worth remembering that blue-chip companies like Facebook have been sued by advertisers for inflating the video viewing figures on its platform.
More significantly, a 2018 article in New York Magazine pointed out that half of internet traffic is now made up of bots watching other bots clicking on adverts on bot-generated websites designed to convince yet more bots that all of this is creating some sort of value. And it does, weirdly, create value if you look at the capitalization of technology “unicorns.”
Are we the asset?
Here is the rub though: Is it the personal data that is the asset? Or is it actually us?
And this is where the really interesting consequences of treating personal data as a private asset arise for the future of capitalism.
If it’s us, the individuals, who are the assets, then our reflexive understanding of this and its implications — in other words, the awareness that everything we do can be mined to target us with adverts and exploit us through personalized pricing or micro-transactions — means that we can, do and will knowingly alter the way we behave in a deliberate attempt to game capitalism too.
Just think of all those people who fake their social media selves.
We have the ability to alter the way we behave online to game capitalism ourselves. (Shutterstock)
On the one hand, we can see some of the consequences of our gaming of capitalism in the unfolding political scandals surrounding Facebook dubbed the “techlash.” We know data can be gamed, leaving us with no idea about what data to trust anymore.
On the other hand, we have no idea what ultimate consequences will flow from all the small lies we tell and retell thousands of times across multiple platforms.
Personal data is nothing like oil — it’s far more interesting and far more likely to change our future in ways we cannot imagine at present. And whatever the future holds, we need to start thinking about ways to govern this reflexive quality of personal data as it’s increasingly turned into the private assets that are meant to drive our futures.
About The Author:
Kean Birch is an Associate Professor of Science and Technology Studies at York University, Canada
This article is republished from our content partners over at The Conversation under a Creative Commons license. 
76 notes
piquira · 2 years ago
Note
I think PK is gross as much as anyone, but Shakira and PK's custody will most likely be 50/50 with him retiring. Even if she has to stay in Barcelona for a bit, she can still do a lot during the 50 percent of the time she doesn't have her kids. This should open up her schedule, but she has to want it. I love Shakira but I don't think we can 100% blame her lack of engagement in her career on PK. She can jump on a private plane and fly anywhere in the world for a few days to attend an event if she really wants to. The truth is that she is doing what she thinks is best for herself and her family. At the end of the day she has to prioritize her career if she finds that important, and if not that's ok too. But in my opinion we as fans can't just act like she is completely trapped and can't do anything for herself or her career because of him alone.
I don't know if PK actually wants 50/50 responsibility with the kids, though. Sharing custody 50/50 carries so many implications for him, and I'm not sure he's willing to make that commitment. For starters, his living arrangement with his girlfriend is going to be seriously questioned in this hypothetical agreement. It's unlikely the kids are ready to live with another person right now, especially if that person represents the end of their family dynamic. Not to mention that he would have to pick up the slack for everything else in the kids' lives beyond just picking them up and dropping them off at school (e.g. extracurriculars, homework, school responsibilities, social events, etc.). Judging by his actions up until now, I don't see him being capable of adapting so much of his life to fit his kids' schedule.
As for the rest, I agree. I've always said that PK isn't the reason her career slowed down - it was actually her commitment to being a mother. She's mentioned it so many times, yet fans continue to hold on to the narrative that solely blames PK. Shakira's never going to prioritize her career over her kids' stability and well-being, but if she wants to do more with her career, I'm sure she'll figure out a way to make things work. However, there's still so much uncertainty in her life, and things will be clearer once she has her (and her kids') future figured out. Right now things continue to be messy, with her living arrangements not being 100% settled and her dad still recovering. It's unfair to judge her work ethic at a time like this.
1 note
articlesofnote · 4 years ago
Text
A pretty friggin dense lecture on the concept of "primitive accumulation" (as Marx named it) or "accumulation by dispossession" (as the lecturer, David Harvey, describes it). In essence, Marx saw this process as the way in which capitalism went from not existing to existing: by stripping wealth from the masses via state-sanctioned and/or state-sponsored force to use as the seed capital for the development of industrial production. Marx apparently didn't discuss how this could in fact be an ongoing process, which is why Harvey prefers "accumulation by dispossession" - it doesn't have the same implication of "this only happened once ever and then normal capitalist production took over" that "primitive accumulation" does.

I'm still processing the concepts from the lecture, to be honest, but I wanted to write about it sooner rather than later because this concept seems like a crucial one to remember for understanding both the operation of capital right now, and also its 'weak spot,' as it were.

What I'm thinking is basically that "accumulation by dispossession" is a kind of safety valve/backup plan for keeping capitalism going when it runs into the consequences of its actions. The "normal" mode of investment -> return -> increased investment -> increased return, etc. fails all the time, both in the small (e.g. starting a restaurant) and the large (e.g. the Great Depression). Presumably the capitalist system is capable of fucking up so badly that it cannot re-initiate the investment cycle because it lost all its capital (subprime mortgage crisis?) - so what happens then? Well, you dispossess somebody - or lots of somebodies. Through force (or threat of force, which is still force), take the wealth of others. The subprime mortgage crisis shows two big ways this can happen - taxpayer-funded bailouts and mortgage foreclosures - but there are as many ways of dispossession as there are ways to possess, I suspect.

In addition to this concept of "dispossession as safety valve," I am highly compelled by "accumulation by dispossession" because of how universally it applies once you start thinking about it. To give another example from the lecture itself, Harvey refers to United Airlines getting court permission to renounce its pension obligations during bankruptcy. In his words, "... people [who had pensions] suddenly find themselves without assets they thought they had. It is... legalized robbery, sanctioned by the courts." Dispossession can also occur with rights and privileges, not just wealth; one need only consider the long history of exclusionary lawmaking in the United States to see that this is so.

Paying attention to this concept has some strong predictive power as well. How can you identify where the capitalist system will try to extend its influence? You have to identify non-monetary/non-market activities - or outright self-sufficiencies - and then figure out how they can be blown up or broken into exchanges between multiple parties. Consider Facebook: if what they claim to be doing is "bringing people together," what they've actually done is figure out how to crack open the (non-market, non-financial) process of having a relationship with another person to the point where they can insert ads and make a shit-ton of money doing so. Or consider how Uber "disrupted" the taxi industry by ignoring the legal and licensing requirements for taxi drivers. What is that if not dispossession of a previously-protected class?
Or consider the looming specter of workforce automation - this is dispossession by another name, as I see it. I suspect that someone might make the argument that "nobody has a right to a job, so automation isn't dispossession," which is true as far as it goes, but we've also never lived in a world where there's even been the possibility that automation could displace large swathes of the population from paid employment, and a situation where millions of people go from being able to support themselves to not being able to support themselves seems like the very definition of dispossession.

But after all that, this is why I think that dispossession is the weak spot of capital. It is, perhaps uniquely among the phenomena that make up capitalism, universal AND deeply antagonizing. Once you start to think about it, you can find examples in almost everyone's life of this process fucking their shit up. People laid off. People losing pensions. People losing homes. Towns disappearing when factories closed. People being deprived of their health due to industrial pollution. Environmental degradation, including the biggest theft of all - a future where climate change and its attendant destabilization doesn't kill billions. I suspect most evils of capital can be traced to this process, if one cared to try.

Anyway, this is just one conceptual arrow in the quiver and of course there's always more to understand. I certainly have a habit of trying to reduce all the complexities of the world into a handful of processes that I can actually get my head around, so I'm wary of anything that seems truly universally applicable. Gotta keep chewing on this one and see how often it really helps me understand the world and how we might improve it.
0 notes
silentcoder · 7 years ago
Text
Lilly's Adventures in Toyland.

Watching my 3½-year-old daughter taking her first steps in gaming, I've been observing what she struggles with. I feel there is a real gap for games that are playable by children not yet old enough to read, but beyond the simple activities in GCompris's earlier levels. Moreover, I'm a big fan of the idea of learning without knowing it - rather than setting out to teach a specific skill, I like the idea of learning things just by playing. And if there is a skill worth learning from gaming, more than anything else, it's simply how to solve puzzles. I've spent the past several days working on the idea in my head, and it's time to write it down.

So I have an idea for a game like that, specifically designed so that my daughter could play it mostly on her own. Much of the design, then, is dictated by the requirements thus imposed, but there is also an original idea: rather than drawn graphics, I want to use stop-motion sprites created from her own toys. This could be time-consuming but it isn't exceptionally hard to do - just photograph the toys in various poses (to the extent they are posable) in front of a solid green background that's easy to edit out. You don't need highly complicated animations after all; a simple two- or three-frame animation in each direction suffices for "walking" (there's a minimal code sketch of this below the list).

Core design ideas:

- I think a semi-side-scrolling platformer like the original Super Mario Bros is the easiest to learn - but Mario (and games like SuperTux) are still too complex, and some things need to be reduced to fit a 3-year-old's abilities.

- No jumping. Nothing time-based. The game should be slow, and not require fully developed spatial reasoning to play. It shouldn't rely on fast reflexes either. So all screens must be walkable, and the challenge should come from puzzles that are more about simple reasoning skills than speed.

- Controls should be simple. I think controller support is a must, but even then it should consider that the players have small hands which struggle to reach the triggers and top buttons. So a very simple scheme: movement on the left joystick, actions on the 4 buttons (keyboard variants can be done - I'm not sure if keyboard + mouse is worthwhile, it's too reliant on fine motor skills that aren't quite there yet). Vertical movement should use ladders and slides, concepts a three-year-old is already familiar with from the playground.

- Only four basic actions. For my game, I'm thinking spells. And you start with just one - more can be added over time as the child gets familiar with the idea. So a simple magical game, in a world of realistic-looking toys, on a simple 2D platform. It's not a nightmare to code, and the work can instead go to the art and level design.

- Child-friendly content. Combat, if any, should be on the "My Little Pony" level of violence only. Instead of a spell to set an enemy on fire, I'd rather have a spell to simply turn him into a harmless creature like a mouse, with some implication that the spell is temporary and will wear off sometime after safely leaving the area. While I subscribe to the theory that good fairy tales should teach children that monsters can be killed, there are plenty of other ways to learn that; there should be space also to just have fun and learn to solve puzzles, and maybe learn that not all monsters HAVE to be killed to be defeated. An equally important lesson.

- Backgrounds should reflect the regions the current set of characters derive from. Some of my African landscapes could work for levels using her animal toysets, thus teaching (very subtly) a simple bit of geography and the idea that different creatures live in different places. That said, there should be no shortage of fantasy here. This is learning through play - and I think imagination is far too important at that age to focus on realism beyond the scenes based on what is already real.

- Everything that is written must also be voiced (I'll need help here), including the opening menu etc. It must be navigable by children who do not yet know how to read.

- A rock-solid set of editing tools to allow parents to easily add new levels, and a way to share those levels so that everybody can benefit. This would also allow the game to be much bigger than I could do on my own, and I could ask Caryn to help make some levels.

- These tools should give full access to the pre-existing assets and sprites, as well as an easy way to import your own. So level design should be possible without knowing how to do green-screen stop-motion animation, but those who do should be able to add new creatures.

- This rather rules out things like GameMaker or RPG Maker, simply because they are too complex. While that is great if you want to create your own game for older audiences, it is overwhelming to a three-year-old.

- I've set out to create games before and never finished them because it was too much work on a busy working parent's schedule, so on this one I'm setting out to make the workload manageable, partly by making as much of the creation as possible accessible to other people. On the other hand, I have successfully finished a game (tappytux, long ago), and part of why it worked was that its major additional content could be crowdsourced - there were wordlists in dozens of languages at its height - and the code was written in such a way that, once the engine was completed, only bugfixes and optimization were needed; new additions did not require new code. That's a design imperative now, I think. (It also helped that, at the time, this was done as part of my job and on company time.)

- Code should be available, so parents who can code can make modifications, improvements and customizations for their own kids' unique needs, freely share-able afterwards. The code will be GPL'd. While I understand this is unpopular in the gaming community, I want everybody to benefit from every improvement. I am happy to put the artwork under a more liberal MIT license and use MIT-licensed art from other people.

- I want the game accessible to as wide an audience as possible, so I intend to use a donation/pay-what-you-want model, which means you can get it for free if you don't have money available. A bit of money from those who can spare it would be great, but I'm not doing this to make money - I'm doing it to create something for my child. And, in the humble-bundle approach, half of any income received will go to a children's charity.

- As she ages, it would be cool to add more advanced versions introducing new skill requirements, so that a 5-year-old could enjoy their own adventure, challenging at their level. These could be level-packs, or require a more advanced engine. That's something for the future.

I'm sure more ideas will come out as I start working on the game. But these are the thoughts currently in my head.
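To make the sprite idea concrete, here is a minimal sketch of the two- or three-frame walk cycle - assuming something like pygame for the engine (not a decision yet), and with placeholder file names, speeds and key bindings rather than real ones:

```python
import pygame

# Placeholder frames: photos of a toy with the green screen already keyed
# out and saved as transparent PNGs. Names and frame count are hypothetical.
FRAME_FILES = ["toy_walk_1.png", "toy_walk_2.png", "toy_walk_3.png"]
FRAME_TIME_MS = 250   # slow, deliberate animation - nothing is time-pressured
WALK_SPEED = 2        # pixels per tick, slow enough for small hands

pygame.init()
screen = pygame.display.set_mode((800, 600))
clock = pygame.time.Clock()
frames = [pygame.image.load(f).convert_alpha() for f in FRAME_FILES]

x, y = 100, 400       # starting position on a walkable floor
frame_index, elapsed = 0, 0

running = True
while running:
    dt = clock.tick(30)   # milliseconds since the last frame
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()
    moving = keys[pygame.K_LEFT] or keys[pygame.K_RIGHT]
    if keys[pygame.K_LEFT]:
        x -= WALK_SPEED
    if keys[pygame.K_RIGHT]:
        x += WALK_SPEED

    # Advance the short walk cycle only while actually moving.
    if moving:
        elapsed += dt
        if elapsed >= FRAME_TIME_MS:
            elapsed = 0
            frame_index = (frame_index + 1) % len(frames)

    screen.fill((120, 180, 220))   # stand-in for a photographed background
    screen.blit(frames[frame_index], (x, y))
    pygame.display.flip()

pygame.quit()
```

The same loop would read the left joystick through pygame's joystick module instead of arrow keys, but keyboard arrows are the easiest thing to test with first.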
I wanted to write them down and get feedback from fellow coders, parents and friends to expand on this before I start writing code and taking a whole lot of pictures of toys :P I'll incorporate the ideas I like (and consider how many likes a comment gets as votes for the idea) and then start working on the actual code.

PS. I will gladly accept volunteers who wish to help with any part of this. If you're hoping to get rich, it's the wrong project - I don't know if it will make anything, but if it does, I'll share the income fairly with any contributors.

PPS. Please feel free to share this post to any groups where you feel the audience would be interested and able to provide useful feedback or possible collaborators.
via Facebook
0 notes