#Peter is Swedish, he would probably have understood some of this mess
thegothicviking · 1 year ago
Text
I translated some of the weirdest Norwegian porn clips out there. It's SFW unless you/your boss know Norwegian words...
Context: I used to have a Filipino friend (we are no longer friends), and I had told him about, and shown him, this SFW clip of a horrible, weird Norwegian full-length porno movie. I hardly delete anything I write, so I still have the translation that I gave to him.
This clip of the movie is censored and SFW
(unless you know the Norwegian language, in which case some of what they are saying is offensive/not all SFW!)
Please get your eye bleach!
Are y'all ready for ....
"SKOGSTUR"/"FOREST TRIP"?
[embedded YouTube video]
The translation for the clip is as follows:
[two image attachments with the translation]
The brown haired man is the man with the hat/the man who is fishing
"To get some" and "to get a catch" at the end of the 1st part means that they now want to get ladies...not fish.
"If you walk far... pussy you'll get"/"Langt å gå, fitte å få" is a dirty spin on a quote from some Norwegian fairy tale, but I can't remember which one it is, and I don't remember what the original says instead of "pussy"/"fitte".
There are at least two random naked women who talk over the men, making it difficult to hear what the men are saying. (I have seen the entire NSFW movie. Please don't be a fool like me!)
9 notes
metastable1 · 5 years ago
Link
October 10, 2016.
Four years ago, on a daylong hike with friends north of San Francisco, Altman relinquished the notion that human beings are singular. As the group discussed advances in artificial intelligence, Altman recognized, he told me, that “there’s absolutely no reason to believe that in about thirteen years we won’t have hardware capable of replicating my brain. Yes, certain things still feel particularly human—creativity, flashes of inspiration from nowhere, the ability to feel happy and sad at the same time—but computers will have their own desires and goal systems. When I realized that intelligence can be simulated, I let the idea of our uniqueness go, and it wasn’t as traumatic as I thought.” He stared off. “There are certain advantages to being a machine. We humans are limited by our input-output rate—we learn only two bits a second, so a ton is lost. To a machine, we must seem like slowed-down whale songs.”
OpenAI, the nonprofit that Altman founded with Elon Musk, is a hedged bet on the end of human predominance—a kind of strategic-defense initiative to protect us from our own creations. OpenAI was born of Musk’s conviction that an A.I. could wipe us out by accident. The problem of managing powerful systems that lack human values is exemplified by “the paperclip maximizer,” a scenario that the Swedish philosopher Nick Bostrom raised in 2003. If you told an omnicompetent A.I. to manufacture as many paper clips as possible, and gave it no other directives, it could mine all of Earth’s resources to make paper clips, including the atoms in our bodies—assuming it didn’t just kill us outright, to make sure that we didn’t stop it from making more paper clips. OpenAI was particularly concerned that Google’s DeepMind Technologies division was seeking a supreme A.I. that could monitor the world for competitors. Musk told me, “If the A.I. that they develop goes awry, we risk having an immortal and superpowerful dictator forever.” He went on, “Murdering all competing A.I. researchers as its first move strikes me as a bit of a character flaw.”
It was clear what OpenAI feared, but less clear what it embraced. In May, Dario Amodei, a leading A.I. researcher then at Google Brain, came to visit the office, and told Altman and Greg Brockman, the C.T.O., that no one understood their mission. They’d raised a billion dollars and hired an impressive team of thirty researchers—but what for? “There are twenty to thirty people in the field, including Nick Bostrom and the Wikipedia article,” Amodei said, “who are saying that the goal of OpenAI is to build a friendly A.I. and then release its source code into the world.”
“We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”
“But what is the goal?” Amodei asked.
Brockman said, “Our goal right now . . . is to do the best thing there is to do. It’s a little vague.”
That’s disturbing, but it seems that OpenAI has got its act together since then.
A.I. technology hardly seems almighty yet. After Microsoft launched a chatbot, called Tay, bullying Twitter users quickly taught it to tweet such remarks as “gas the kikes race war now”; the recently released “Daddy’s Car,” the first pop song created by software, sounds like the Beatles, if the Beatles were cyborgs. But, Musk told me, “just because you don’t see killer robots marching down the street doesn’t mean we shouldn’t be concerned.” Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana serve millions as aides-de-camp, and simultaneous-translation and self-driving technologies are now taken for granted. Y Combinator has even begun using an A.I. bot, Hal9000, to help it sift admission applications: the bot’s neural net trains itself by assessing previous applications and those companies’ outcomes. “What’s it looking for?” I asked Altman. “I have no idea,” he replied. “That’s the unsettling thing about neural networks—you have no idea what they’re doing, and they can’t tell you.”
OpenAI’s immediate goals, announced in June, include a household robot able to set and clear a table. One longer-term goal is to build a general A.I. system that can pass the Turing test—can convince people, by the way it reasons and reacts, that it is human. Yet Altman believes that a true general A.I. should do more than deceive; it should create, discovering a property of quantum physics or devising a new art form simply to gratify its own itch to know and to make. While many A.I. researchers were correcting errors by telling their systems, “That’s a dog, not a cat,” OpenAI was focussed on having its system teach itself how things work. “Like a baby does?” I asked Altman. “The thing people forget about human babies is that they take years to learn anything interesting,” he said. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.” Altman felt that OpenAI’s mission was to babysit its wunderkind until it was ready to be adopted by the world. He’d been reading James Madison’s notes on the Constitutional Convention for guidance in managing the transition. “We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board,” he said. “Because if I weren’t in on this I’d be, like, Why do these fuckers get to decide what happens to me?” Under Altman, Y Combinator was becoming a kind of shadow United Nations, and increasingly he was making Secretary-General-level decisions. Perhaps it made sense to entrust humanity to someone who doesn’t seem all that interested in humans. “Sam’s program for the world is anchored by ideas, not people,” Peter Thiel said. 
“And that’s what makes it powerful—because it doesn’t immediately get derailed by questions of popularity.” Of course, that very combination of powerful intent and powerful unconcern is what inspired OpenAI: how can an unfathomable intelligence protect us if it doesn’t care what we think?
This spring, Altman met Ashton Carter, the Secretary of Defense, in a private room at a San Francisco trade show. Altman wore his only suit jacket, a bunchy gray number his assistant had tricked him into getting measured for on a trip to Hong Kong. Carter, in a pin-striped suit, got right to it. “Look, a lot of people out here think we’re big and clunky. And there’s the Snowden overhang thing, too,” he said, referring to the government’s treatment of Edward Snowden. “But we want to work with you in the Valley, tap the expertise.”
“Obviously, that would be great,” Altman said. “You’re probably the biggest customer in the world.” The Defense Department’s proposed research-and-development spending next year is more than double that of Apple, Google, and Intel combined. “But a lot of startups are frustrated that it takes a year to get a response from you.” Carter aimed his forefinger at his temple like a gun and pulled the trigger. Altman continued, “If you could set up a single point of contact, and make decisions on initiating pilot programs with YC companies within two weeks, that would help a lot.”
“Great,” Carter said, glancing at one of his seven aides, who scribbled a note. “What else?”
Altman thought for a while. “If you or one of your deputies could come speak to YC, that would go a long way.”
“I’ll do it myself,” Carter promised.
As everyone filed out, Chris Lynch, a former Microsoft executive who heads Carter’s digital division, told Altman, “It would have been good to talk about OpenAI.” Altman nodded noncommittally. The 2017 U.S. military budget allocates three billion dollars for human-machine collaborations known as Centaur Warfighting, and a long-range missile that will make autonomous targeting decisions is in the pipeline for the following year. Lynch later told me that an OpenAI system would be a natural fit.
Altman was of two minds about handing OpenAI products to Lynch and Carter. “I unabashedly love this country, which is the greatest country in the world,” he said. At Stanford, he worked on a DARPA project involving drone helicopters. “But some things we will never do with the Department of Defense.” He added, “A friend of mine says, ‘The thing that saves us from the Department of Defense is that, though they have a ton of money, they’re not very competent.’ But I feel conflicted, because they have the world’s best cyber command.” Altman, by instinct a cleaner-up of messes, wanted to help strengthen our military—and then to defend the world from its newfound strength.
[...]
On a trip to New York, Altman dropped by my apartment one Saturday to discuss how tech was transforming our view of who we are. Curled up on the sofa, knees to his chin, he said, “I remember thinking, when Deep Blue beat Garry Kasparov, in 1997, Why does anyone care about chess anymore? And now I’m very sad about us losing to DeepMind’s AlphaGo,” which recently beat a world-champion Go player. “I’m on Team Human. I don’t have a good logical reason why I’m sad, except that the class of things that humans are better at continues to narrow.” After a moment, he added, “ ‘Melancholy’ is a better word than ‘sad.’ ” Many people in Silicon Valley have become obsessed with the simulation hypothesis, the argument that what we experience as reality is in fact fabricated in a computer; two tech billionaires have gone so far as to secretly engage scientists to work on breaking us out of the simulation. To Altman, the danger stems not from our possible creators but from our own creations.
“These phones already control us,” he told me, frowning at his iPhone SE. “The merge has begun—and a merge is our best scenario. Any version without a merge will have conflict: we enslave the A.I. or it enslaves us. The full-on-crazy version of the merge is we get our brains uploaded into the cloud. I’d love that,” he said. “We need to level up humans, because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever. What a time to be alive!” Some futurists—da Vinci, Verne, von Braun—imagine technologies that are decades or centuries off. Altman assesses current initiatives and threats, then focusses on pragmatic actions to advance or impede them. Nothing came of Paul Graham’s plan for tech to stop Donald Trump, but Altman, after brooding about Trump for months, recently announced a nonpartisan project, called VotePlz, aimed at getting out the youth vote. Looking at the election as a tech problem—what’s the least code with the most payoff?—Altman and his three co-founders concentrated on helping young people in nine swing states to register, by providing them with registration forms and stamps. By Election Day, VotePlz’s app may even be configured to call an Uber to take you to the polls. Synthetic viruses? Altman is planning a synthetic-biology unit within YC Research that could thwart them. Aging and death? He hopes to fund a parabiosis company, to place the rejuvenative elixir of youthful blood into an injection. “If it works,” he says, “you will still die, but you could get to a hundred and twenty being pretty healthy, then fail quickly.” Human obsolescence? He is thinking about establishing a group to prepare for our eventual successor, whether it be an A.I. or an enhanced version of Homo sapiens. The idea would be to assemble thinkers in robotics, cybernetics, quantum computing, A.I., synthetic biology, genomics, and space travel, as well as philosophers, to discuss the technology and the ethics of human replacement. 
For now, leaders in those fields are meeting semi-regularly at Altman’s house; the group jokingly calls itself the Covenant.
As Altman gazes ahead, emotion occasionally clouds his otherwise spotless windscreen. He told me, “If you believe that all human lives are equally valuable, and you also believe that 99.5 per cent of lives will take place in the future, we should spend all our time thinking about the future.” His voice dropped. “But I do care much more about my family and friends.” He asked me how many strangers I would allow to die—or would kill with my own hands, which seemed to him more intellectually honest—in order to spare my loved ones. As I considered this, he said that he’d sacrifice a hundred thousand. I told him that my own tally would be even larger. “It’s a bug,” he declared, unconsoled.
He was happier viewing the consequences of innovation as a systems question. The immediate challenge is that computers could put most of us out of work. Altman’s fix is YC Research’s Basic Income project, a five-year study, scheduled to begin in 2017, of an old idea that’s suddenly in vogue: giving everyone enough money to live on. Expanding on earlier trials in places such as Manitoba and Uganda, YC will give as many as a thousand people in Oakland an annual sum, probably between twelve thousand and twenty-four thousand dollars. The problems with the idea seem as basic as the promise: Why should people who don’t need a stipend get one, too? Won’t free money encourage indolence? And the math is staggering: if you gave each American twenty-four thousand dollars, the annual tab would run to nearly eight trillion dollars—more than double the federal tax revenue.
However, Altman told me, “The thing most people get wrong is that if labor costs go to zero”—because smart robots have eaten all the jobs—“the cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food. People pay a lot for a great education now, but you can become expert level on most things by looking at your phone. So, if an American family of four now requires seventy thousand dollars to be happy, which is the number you most often hear, then in ten to twenty years it could be an order of magnitude cheaper, with an error factor of 2x. Excluding the cost of housing, thirty-five hundred to fourteen thousand dollars could be all a family needs to enjoy a really good life.”
In the best case, tech will be so transformative that Altman won’t have to choose between the few and the many. When A.I. reshapes the economy, he told me, “we’re going to have unlimited wealth and a huge amount of job displacement, so basic income really makes sense. Plus, the stipend will free up that one person in a million who can create the next Apple.”
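The back-of-the-envelope numbers in the quoted passage check out; a quick sketch, using rough circa-2016 figures for U.S. population and federal tax revenue (my approximations, not from the article):

```python
# Sanity check of the basic-income arithmetic quoted above.
# Population and revenue figures are rough 2016 approximations.
us_population = 325_000_000           # ~325 million Americans
stipend = 24_000                      # dollars per person per year
annual_tab = us_population * stipend  # total cost of the stipend
print(f"Annual tab: ${annual_tab / 1e12:.1f} trillion")  # ~$7.8 trillion

federal_tax_revenue = 3.3e12          # ~$3.3 trillion (FY 2016, approx.)
# "more than double the federal tax revenue":
print(f"Ratio to tax revenue: {annual_tab / federal_tax_revenue:.1f}x")

# Altman's cost-of-living claim: $70,000 today, "an order of magnitude
# cheaper" in 10-20 years, "with an error factor of 2x":
base = 70_000 / 10                    # order of magnitude cheaper
low, high = base / 2, base * 2
print(f"Range: ${low:,.0f} to ${high:,.0f}")  # $3,500 to $14,000
```

The 2x error factor applied to $7,000 reproduces the article's "thirty-five hundred to fourteen thousand dollars" range exactly.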
Regarding the last three paragraphs, it’s worth pointing out that human-level AI (and beyond) isn’t some new kind of Roomba or another gadget with blinking lights, so I am a little surprised by the musings about job displacement and UBI (when AGI is assumed). A post-AGI world will not look like the current world but with robotic arms in place of Amazon’s warehouse workers; by now, Altman seems to get it.
1 note