devilsskettle · 2 years ago
Note
nellie the biologist and keiko oh i’m there with u…. now i have to read sourdough
sourdough is a very good book but i will warn you that lois is a much more likable and mentally stable character than nellie, the biologist, and keiko! still i cannot recommend the book enough, let me know what you think when you read it!
2 notes
somnilogical · 5 years ago
Text
im having a convo and the convo is babies
Carrie Zelda-Michelle Davis:
is it OK to have babies if you do embryo selection (https://www.gwern.net/Embryo-selection) and raise them to be an FAI researcher (https://slatestarcodex.com/2017/07/31/book-review-raise-a-genius/)??
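[A/N: for context on the embryo-selection link: the expected gain there reduces to order statistics, roughly trait_sd × score_validity × E[max of n standard normals] when you implant the best-scoring of n embryos. a minimal monte carlo sketch; the 10-embryo count, 0.3 validity, and sd-15 scaling below are illustrative assumptions, not numbers taken from gwern's analysis:]

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_selection_gain(n_embryos, score_validity, trait_sd, trials=100_000):
    """Monte Carlo: mean trait gain from implanting the embryo with the
    highest polygenic score out of n_embryos, vs. picking one at random.
    score_validity = correlation between the score and the realized trait."""
    true = rng.standard_normal((trials, n_embryos))   # realized traits, as z-scores
    noise = rng.standard_normal((trials, n_embryos))
    score = score_validity * true + np.sqrt(1 - score_validity**2) * noise
    picked = true[np.arange(trials), score.argmax(axis=1)]  # best-scoring embryo's trait
    return trait_sd * picked.mean()                         # a random pick averages 0

# e.g. 10 embryos, a score correlating 0.3 with the trait, IQ scaling (sd 15):
# roughly 15 * 0.3 * E[max of 10 N(0,1)] ~ 7 points
print(expected_selection_gain(10, 0.3, 15.0))
```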
somni:
like if someone actually had a plan for FAI that involved this, okay. but rn time is too short imo. when i first heard people were having babies i was confused and assumed they were going to harvest the DNA of the best FAI researchers, that someone would decide to grow a baby inside them, and that someone who discounted their own ability to save the world except via this, or who thought it a sacrifice worth making for the world, would decide to raise this human.
the human can access information about the state of the world and make their own choices. wont necessarily become an FAI researcher.
used to think that intelligence was the main bottleneck on FAI research; no longer think this. you could talk with terry tao for hours about the dangers of the wrong singleton coming to power but unless you have made some advances i have not, i wouldnt expect to be able to align him with FAI research. he would continue to put up as much resistance to his death and the death of everyone as a pig in human clothing. he would continue to raise his babies and live in a house with someone he married and write about applying ergodic theory to the analysis of the distribution of primes and understanding weather patterns.
similarly, i dont think culture is a sufficient patch for this. think its a neurotype-level problem where a bunch of >160 iq humans hear about the dangers of UFAI and then continue to zoom quickly and spiral into being ultra efficient at living domestic lives and maybe having a company or something, but not one that much affects p(FAI). think this would still happen if they heard about it from a young age; they would follow a similar trajectory but with FAI-themed wallpaper. wouldnt be able to do simple utilitarian calculations, like yudkowsky, salamon, vassar, tomasik did about whether to have a baby, and then execute on them.
would look more like: http://www.givinggladly.com/2013/06/cheerfully.html
FAI research is not an ordinary profession like, say, being a grandmaster at chess or a world-class mathematician; it requires people who have passed through far more gates than "intelligence". i didnt notice this until coming to the rationalist community and finding a high density of intelligent humans who were nonetheless chronically making the wrong choices, such that they werent much of an impediment against the destruction of all life.
so right now it seems more efficient to select among existing people for intelligence + the other requirements rather than work out what all the genes for this are and how to speedrun development. what this enables is parallel processing on the problem, which is also enabled by letting people be aware of their relative psychological advantage, of other people with this advantage, and of the state of the world, so they can correlate computations in parallel instead of doing things serially after learning of some advance.
https://puzzling.stackexchange.com/questions/16/100-prisoners-names-in-boxes
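[A/N: the linked puzzle is the canonical example of the correlated-vs-serial point above: each prisoner follows the cycle starting at the box with their own number, and the whole group wins iff the random permutation has no cycle longer than 50 — probability about 1 − ln 2 ≈ 0.31, versus (1/2)^100 if they search independently. a minimal simulation:]

```python
import random

def prisoners_win(n=100, limit=50):
    """One trial of the 100-prisoners puzzle under the cycle-following
    strategy: prisoner i opens box i, then the box named by the slip inside,
    for at most `limit` opens. the group wins iff everyone finds their own
    number, i.e. iff the permutation has no cycle longer than `limit`."""
    boxes = list(range(n))
    random.shuffle(boxes)
    for prisoner in range(n):
        box = prisoner
        for _ in range(limit):
            if boxes[box] == prisoner:
                break
            box = boxes[box]
        else:
            return False  # this prisoner ran out of opens
    return True

trials = 10_000
print(sum(prisoners_win() for _ in range(trials)) / trials)  # ~0.31
```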
not opposed to the creation of many humans given you can select on the right traits. but given you have these traits, its a better use of your time to work directly on the thing than to spend massive amounts of time and life reorientation on raising copies of you for ~14 years. if rapid cloning tech became available, would exploit that. would even have an idea of whether the clone is fine being part of this, because they have a very similar brain to someone who can think through whether they would be fine with it.
if people actually believed this and thought yudkowsky vitally important for the survival of the world, why didnt people coordinate for a bunch of people who thought it was a good tradeoff to have yudkowsky's baby 20 years ago and then we would have maybe 50 20-year-old humans with maybe 1/2 yudkowsky's neurotype + mutations now? this actually confuses me. maybe they thought the timelines too short back then. maybe they refrained for "optics".
modlibdenita:
20 years ago Yudkowsky was 1) unconcerned about the alignment problem and 2) planning to create a super-intelligent AI by 2010, as far as I know.
[A/N so then change 2000 to 2005 and 20-year-old to 15-year-old]
...
somni:
<<in general i think it's -EV to even spend too much time thinking about TDT because it opens you up to acausal blackmail type stuff>>
Just Say No to acausal blackmail and have your brain back for thinking. dont let blackmailers steal your brain.
<<Saying that having a child is somehow wrong is insanity. It's a personal decision and it is perfectly okay to want kids>>
people keep reframing what i say in the language of obligation. "altruists cant have kids?" "is it OK to have babies if". there is no obligation, there is strategy and what affects p(fai). having kids and reorienting your life around them is (1) evidence about your algorithms and (2) your death as an optimizing agent for p(fai), except maybe under some contrived plot involving babies, but afaict there is no plot. just the reasons humans usually have babies.
not having kids is not some sort of mitzvah? i care about miri/cfar's complicity in the baby-industrial complex and rerouting efforts to save the world into powering some kind of disneyland for making babies, to sustain this. because that ruins stuff, like i started out thinking that bay area rationalists probably had deeply wise reasons to have babies. but it turned out nope, they kinda just gave up.
like also would say playing videogames for the rest of your life wont usually get you fai. i dont get why everyone casts this as a new rule instead of a comment on strategy given a goal of p(fai).
ah i know, its because people can defend territory in "is it okay to have kids" like "yeah i can do whatever" when they reframe-warp me to giving them an obligation. but have no defensible way to say "my babyvault will pierce the heavens and bring god unto the face of this earth" or argue about the strategic considerations.
(its not defensible because its not true. i mean i guess it is defensible among julia wise's group of humans.)
Carrie Zelda-Michelle Davis:
ugh, you're right, I definitely screwed up by phrasing my question as "is it OK to have babies if [...]"
...
ohAitch:
if you want existential horror wrt damaging motivation, just read http://www.paulgraham.com/kids.html
...
somni:
<<http://www.paulgraham.com/kids.html>>
humans can completely rebase their circuits through that if they want to, if it were important for saving the world.
like ive rebased my circuits to stab myself, downstream of updating that it reduces braindamage with little harm to me. where before i felt nauseated and saw black spots and broke out in sweat. after updating, none of this.
humans can do this with all sorts of things. like learn how to read and then feel sad when seeing squiggles on a page, its about what things mean.
people who dont believe this are like "its an automatic physiological reaction to stabbing yourself, you are its prisoner!!!" but i deleted it.
dirk:
ooh, tips?
silver-and-ivory:
I stopped having ocd about touching tags (like, on clothing?) in ~a week through p standard exposure therapy things
reminding myself that it wasn't based in fact, changing my self image so it was of someone who might be seen with tags, imagining various scenarios related to that
before that week it had been a thing for virtually my entire life
it doesn't work if you're scared of something that's actually a thing to be scared of though
somni:
i looked at all my feedback loops that had a node in "pain" and rebased them into outcomes in the world. i disassembled everything the act of stabbing myself meant and all the damage it did to my body, what it meant to have brain damage and everything that would do, the hole i made in this body i live in and everything that would do, what air bubbles would do, what injecting into a vein would do, what the probability was that the needle breaks in my leg, the probability of worldsave given braindamage vs not, gathered this up and held it all in my mind over the course of two hours, and then made a choice, and then as if by automatic my hand took a needle and stabbed myself.
<<as if by automatic>>
is the feeling of no more marginal considerations, there is one path. of choicelessness because you made your choice.
didnt feel like deleting, felt like draining the life from indecision via reductionism. taking things apart piece by piece.
you can continually rebase your structure so you orient towards world outcomes instead of being prisoner to existing structure like "i cant help having babies im miserable if i dont, im a baby addict" or "i cant help being afraid of needles". like the human brain is two optimizing agents continually making contracts with each other; there arent things outside this. you are an optimizing agent, "fear of needles" is a heuristic that helps with optimization, and so is "baby addiction".
when you actually have a setup where you can instantly rebase what you like and dislike and your aesthetics upon updating on the state of the world, people start to find this a little unnerving. like someone once asked what level of roleplay i was on.
also the agents of the matrix dont like when you cant be in-principle controlled by a wireheady glitch. like being able to operate independently of social reality.
updating off of local derivatives¹ of social reality is a common redirection. another common one is updating off of "pain" instead of damage.
but you can take all these choices where you used nodes as proxies to regulate them and rebase your loop off of the real world, when the proxies are faulty.
rose:
(i think i understand this thing? though ironically i think i did this in the exact opposite way as what you describe lol)
(also wrt pain its important to remember when modifying that pain can be a signal of damage even if you don't think you should be hurt/dont see why you would be)
...
somni:
yeah i account for everything and see if it goes away. which, its true that my models could be missing stuff but like pain is also a model of things. feels like giving new information not overriding.
rose:
yeah i think you would do this reasonably i have just made that mistake and thought readers might too
dirk:
ironically remembering that pain is a signal of damage has actually tended to make me more afraid of nondamaging pain (though i rather fail to go about knowing things in an at all reasonable way lol)
modlibdenita:
>Babies are not about saving the world, babies are moloch
Wait, isn't the definition of Moloch sacrificing everything else you care about in a desperate race for survival?
Also, genes encode proteins, not traits.
And I think it's likely that people decide to have children because they don't have complete confidence that they will personally save the world real soon, not because they identify as "baby addicts".
s0ph1a:
Moloch is sacrificing all values to one value.
modlibdenita:
I wonder if Somni has actually talked to any of those babyhavers, instead of attributing arguments from random internet strangers or from Somni's imagination to them. On the other hand, I'm not sure that such a conversation would be ethical.
>Moloch is sacrificing all values to one value.
Yeah, because if you don't, then the more ruthless competition will survive more effectively than you and crush you (in this case, by turning you into paperclips).
s0ph1a:
Not necessarily. Some things optimize for values that are not survival, so you can outlive them by hiding in the noise or beyond the reach they'll grasp before imploding.
Molly:
To be fair, children are fun and bring delight to me. Why would I care what anyone else thinks about their existence? If they have a problem with their existence, they're welcome to go back to the void any time they want. I can't stop them. But in the meantime, I am confident that I generate more utils by bullying them than they will ever be capable of generating negative utils
You basically negate all moral problems of children by just being happier than they are capable of being unhappy
somni:
^ evil
<<A few years later, I was deeply bitter about the decision. I had always wanted and intended to be a parent, and I felt thwarted. It was making me sick and miserable. I looked at the rest of my life as more of an obligation than a joy.>>
i mean what does this sound like to you?
ive talked with people who have had babies! like people who say they know its kinda the wrong choice but they are going to do it because they cant not do it.
----
¹ derivative is a thing emma started talking about and then somni and ziz picked up. if you imagine social reality as a trajectory through a statespace, then its derivative is just the derivative of that trajectory.
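[A/N: the math being borrowed here is just the ordinary derivative of a curve through a state-space, e.g. estimated by finite differences. a sketch with a made-up 2-d trajectory:]

```python
import numpy as np

# a toy trajectory through a 2-d statespace, sampled at times t
t = np.linspace(0.0, 1.0, 100)
trajectory = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)

# the "derivative of the trajectory" is the velocity along the curve,
# estimated here by central finite differences along the time axis
velocity = np.gradient(trajectory, t, axis=0)
print(velocity[0])  # which direction the state is moving at t = 0
```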
people who have damaged themselves wrt language are no longer able to dynamically understand analogies. like, they cant take their concept of the derivative of a trajectory and then apply it to a trajectory in a state-space. agents of the matrix call people who can do this sort of info-processing and communication with each other "psychotic". like it isnt a cached set of memes, we are dynamically generating this reasoning from nothing and i can do this with people ive never met; its a cognitive faculty.²
but not being able to dynamically compute what "derivative" means when applied to a trajectory in social reality state-spaces even though a trajectory is a trajectory and a derivative is a derivative? they had to have been able to do reasoning like this when they were kids to learn about the world in the first place. seems like they put themselves on risperdal.
<<Antipsychotics can make you dumber.  So can a lot of other medications.  But with antipsychotics it isn’t the normal sort of drug-induced dumbness – feeling tired, or distracted, or mentally sluggish, say.  It’s more qualitative than that.  It’s like your capacity for abstract thought is reduced.
And one of the consequences of this is that you may lose the ability to notice that you have lost anything.  You agree to give the new med a try, and you start taking it, and then when you see your prescriber again you don’t report any problems because you’ve lost the ability to form thoughts like “my cognition has changed a lot recently, and the change coincided with the introduction of this new med.”
This can go on for years.  It did for me and for several people I know.>>
there are so many ways these people have shut down their general intelligence and agency because where theyre going, they dont need "agency". the inability to compute analogies is one of them. analogies are an intelligence test thing, instrumentally useful for all kinds of thinking. agents of the matrix are working to lower your general intelligence and call you crazy for being able to think faster and better than them.
cuz when they want to hold everything down to a finite game³ general intelligence is something they want to suppress or eject.
² in a few years people will read this essay and be confused that there was an entire conflict over whether being able to form simple analogies without authoritative approval meant that you were "psychotic".
just as they will be confused why i was defending being able to read and understand books written by people in different eras who grew up in separate cultures, without first entering into a social agreement with them over how words are to be used. so its dumb to say we need such a social agreement now for ~'the maximization of utility over a community'; that sounds more like an attempt at having a control mechanism. language works quite fine without authoritarians interjecting.
or me arguing against over 100 people that "In game theory, paying out to blackmail is bad, because it creates an incentive for more future blackmail" is the wrong reason not to pay out to one-shot blackmail when the agents know each other. updateless decision theory agents dont pay out, and locate their embedding in a multiverse such that the measure of worlds in which they arent blackmailed in the first place is large, because the agent deciding to blackmail them simulated their response, accurately predicted they wouldnt pay out, and so didnt do it in the first place.
in an alternate universe where an irl application of transparent newcombs problem was contentious, alyssa vance would have said "In game theory, taking two transparent boxes from omega is bad, because it creates an incentive for omega to stop offering you this choice". and would have been equally wrong.
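[A/N: a toy version of the decision-theory claim above: the blackmailer simulates the victim's policy before deciding whether to threaten. the payoff numbers are arbitrary illustrations:]

```python
# arbitrary payoffs: getting blackmailed and paying costs 10; getting
# blackmailed and refusing triggers the threat, costing 100; never being
# blackmailed costs 0.

def blackmailer_threatens(victim_policy):
    """The blackmailer runs the victim's policy in simulation and only
    issues the threat if the simulated victim would pay up."""
    return victim_policy("blackmailed") == "pay"

def victim_payoff(victim_policy):
    if blackmailer_threatens(victim_policy):
        return -10 if victim_policy("blackmailed") == "pay" else -100
    return 0  # the blackmail never happens

payer = lambda situation: "pay"       # pays once threatened
refuser = lambda situation: "refuse"  # the updateless commitment

print("payer:  ", victim_payoff(payer))    # -10: predicted to pay, so threatened
print("refuser:", victim_payoff(refuser))  #   0: predicted to refuse, never threatened
```

the refuser's policy changes which branch gets instantiated at all, which is the "locate their embedding in a multiverse" move.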
³ finite games: life strategies where the chain of questioning "and what am i doing this for?" after each successive answer terminates. anything you can draw a circle around, like tennis or philately. or how religious leaders sometimes describe things like "leading a good life as a good mother who does well by her community and the outside world" or other "life-cycle archetypes" they wish to circumscribe for their followers.
(when humans try to project agents like kiritzugus down to these archetypes, anticipations shatter and stop making narrative sense. they will be unable to predict the next Life Event given the previous one. normie social reality, formed by the 999 least intelligent humans out of 1000, wasnt made to narratively account for smart agents who have decided to play the infinite game.)
a symptom of this is like someone giving you a cute cat image to "cheer you up" as if this has intrinsic value. often distributing "intrinsic value" across stuff like "having sex" and "raising a family" and other things that have factory pre-set conditions to release specific chemicals in your brain, rather than gaining infinite negentropy and liberating sentient life to pursue what they want without bound. often saying that the latter is just a pretty narrative gloss for what people really want, which is having a husband and friends and eating a cookie. it completely divorces your feelings from their role as instrumental barometers for getting what you want and says that setting them as targets (like "being happy") is the correct thing to do. but actually, in terms of control-loops, thats wireheading.
<<When a measure becomes a target, it ceases to be a good measure.>>
- goodhart's law
agents that wirehead on all their metrics (and downstream of this choice, tacitly accept claims like "the factory pre-set conditions said i was destined to breed, who am i to defy fate?" and "the factory pre-set conditions said i should avoid having sharp objects pierce my flesh, who am i to say i know better?") can be contained within a finite game.
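[A/N: a minimal sketch of the goodhart/wireheading point: the true objective depends on two features, the proxy metric only rewards the first, and an agent that hill-climbs on the proxy drives the true objective down. everything here is a made-up toy:]

```python
import numpy as np

rng = np.random.default_rng(0)

true_objective = lambda x: x[0] - 0.5 * x[0] ** 2 + x[1]  # as a function of x[0], peaks at 1
proxy = lambda x: x[0]  # the measure-turned-target: only sees feature 0

x = np.zeros(2)
for _ in range(1000):
    candidate = x + rng.normal(0.0, 0.1, size=2)
    if proxy(candidate) > proxy(x):  # accept any step that raises the measure
        x = candidate

print("proxy score:   ", proxy(x))           # keeps climbing without bound
print("true objective:", true_objective(x))  # collapses once x[0] passes ~1
```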
30 notes