#we're like the scientists in a disaster movie
Text
Being politically leftie and afab is such a funny experience. Like, every time I pointed out how dangerous some rich people are, I used to get called "hysterical".
And look at us now.
The most bittersweet I-told-you-so of my freaking life.
#politics#current events#like i'm not even mad nor sad of what is happening#just tired#because we all fucking saw that coming#and nobody believed us#we're like the scientists in a disaster movie#so I'm just shaking my shoulders now
1 note
Note
5. Name a movie that makes you genuinely laugh + 10. What is something (or someone) you’re in love with?
5. Name a movie that makes you genuinely laugh
"Oooh, it's hard to pick a specific one. Can I name an entire genre? Disaster movies. They're so funny, ehehehehehe. There's always some over the top genius scientist or two who just can't quite convince the government people that BIG NATURAL DISASTER is about to strike, ehehehehehe, and then sure enough, it does, ehehehehe. And so many humans die. It's funny because these movies are made by humans, like a warning to humans, even as these things unfold before their very eyes, and it's happening in real time, and everybody just says, 'Wow, that movie had cool special effects', and goes on about their day, ehehehehehe, at least until the movie wins some award for having the best special effects and then everybody forgets about the movie two years later while in the background the world is on fire, just like in the movie! Anyway, you humans make very funny movies, ehehehehehehehe."
10. What is something (or someone) you’re in love with?
"I didn't know you could fall in love with things. Can you fall in love with shiny rocks? Well, I'm not in love with them. I just find them extremely alluring. They're really nice to look at, ehehehehehe. Very shiny and sparkly. I like that in a rock, hehehehehe."
"If we're talking about a person, that's much easier. I don't know about being in love because I've never felt it before. I wouldn't know how to identify it. But I really care about Night. She's my favorite human and I can't say there isn't anything I wouldn't do for her. Is that love? If it is, then you could say that I love her. Don't try to kill me now, hehehehehehe. I still have teeth and claws, ehehehehehe."
2 notes
Text
No one cares and this is just going to be word vomit, but I was thinking about how Jurassic Park underwent a significant shift in its philosophical stance over time. The original 1993 film and Michael Crichton's novel were rooted in cautionary themes about humanity's hubris, the dangers of unchecked scientific ambition, and the moral consequences of "playing God."
However, as the series progressed—particularly with the "Jurassic World" films—the dinosaurs shifted from being cautionary symbols toward themes of coexistence and the validity of the dinosaurs' existence. The decision to equate the little clone girl's validity as a human being with the dinosaurs' right to exist complicated the "playing God" argument. It moved the debate from "should we have done this?" to "now that we have, what is our responsibility?" But responsibility to whom? The dinosaurs? Or the humanity still at risk because of these dinosaurs that were never meant to exist? I feel like humanity has a responsibility to protect itself from the danger of introducing apex predators into ecosystems they were never meant to inhabit. Jurassic World: Dominion briefly touches on this, yet it glosses over the ethical tension by leaning on the notion that "life finds a way."
I feel like the new movies undermine the original message. Obviously the idea of the dinos dying is sad. I'm not excited about that. There's no joy there. But we're not set up to have a brachiosaurus tearing down all the power lines and stepping on someone's cat or child. Or having kids unable to go outside during recess lest a giant dinosaur bird pick them off.
And also, WE didn't create them. A handful of reckless, profit-driven scientists and corporations did, and now everyone else has to suffer the consequences. It's something the movies don't explore deeply enough. The ethical burden is unfairly spread across all of humanity, even though most people had no part in their creation or in the decisions that led to their release. The films shift the moral burden to society at large, asking everyone to sympathize with the dinosaurs without addressing the unfairness of that burden. It's a bait-and-switch: "We created this disaster, but now you have to deal with it—and you're heartless if you don't want to risk your safety to save them."
The clone girl's choice to release the dinosaurs because "they're alive, like me" is emotional, and I get it, but catastrophically naive. I get how she feels, but she's also a child who can't see a bigger picture outside her sense of self. Her actions come from a place of personal identification with the dinosaurs as fellow "unnatural" creations, but she doesn't yet have the maturity to fully grasp the ramifications of what she's done.
The reality is that "saving lives = good" is the same logic one could use to argue we shouldn't save the dinosaurs, in order to protect humanity. Because people will die because of them. Fates were sealed as soon as they chose to let the dinosaurs live. They doomed other people to die. She made a decision that effectively traded human lives for dinosaur lives. Once that choice was made, the consequences were inevitable: every human death caused by the dinosaurs is a direct result of that decision. Letting the dinosaurs live might seem compassionate in the moment, but it's a decision with catastrophic ripple effects. Sometimes when someone is 8 you have to be an adult and go, "This is going to make you sad, but we can't let society crumble into chaos and let people die. This is not a reflection on you. You're a person. It's not the same. We can be sad we had to make this choice, but hard choices are part of life." And you can say, "Look, it really sucks to have to make this choice. It's not a choice that feels good. But some people made a choice for everyone in creating these dinosaurs, and now, while we might feel sad or angry to have to make this choice, we still have to protect people." Because I get it. I'd be devastated and furious having to kill the dinosaurs. The idea of them suffering is terrible. But the cost of letting my feelings overtake my decision-making would be too high, and that would suck. It would feel like being backed into a corner. But sometimes all you get is terrible choices.
The original Jurassic Park raised essential questions about the ethical implications of playing God. Ian Malcolm's famous line, "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should," encapsulates the franchise's initial moral stance. But I just think the newer films have muddled the old philosophy. I think it's more that they don't want to deal with the ethical dilemmas and ask hard questions anymore, and just wanna be action films with dinosaur battles.
If you read all of this have dino
1 note
Text
im tired of godzilla being big. we're done with godzilla being big. every movie he gets bigger. he's like the size of a mountain and it's no fun, he's stomping on people like they're ants, who cares. a tornado could do that. he's not a sickass dragon thing anymore, he's just one more natural disaster. the scale's meaningless.
i want small godzilla now. i want a movie that starts off with the scientists studying godzilla getting mixed up with the scientists who are inventing a shrink ray because maybe there was a fire alarm or something and now godzilla is the size of a fucking toddler and no one can contain him anymore because all their kaiju-sized containment devices are completely the wrong scale. i want to see godzilla loose on the town when he's shorter than a fire hydrant and someone could conceivably fit a cute little doggie vest on him. i want to see a destruction scene lovingly rendered with all the chaos and passion and full-throttle destruction of every other movie but what he's trashed is the inside of a walmart. i want to see him get dragged around on the end of a baby leash. i want to see him victoriously scale a playground slide and then scream his tiny godzilla scream like a little dinosaur teakettle.
the time has come for tiny godzilla. i believe this with my entire soul.
3K notes
Text
Junk Cereals: Hot Topic Funko's Dr. Ian Malcolm, Snowball and Cuphead Mugman
Hot Topic has proved capitalism's knowing ability to take advantage of children begging their parents for stuff.
With the Funkos cereal brand, Hot Topic preys on kids who are particularly attuned to niche brands and adults who are suckers for stuff representing cherished figures.
The Funkos trio features three cereals that are almost devoid of taste. To be clear, nothing on any of these boxes earns such tastelessness. Jeff Goldblum was nominated for one Oscar, and it was for a short film, not one of his quintessential character-actor performances.
Speaking of whom, the first to be addressed is Dr. Ian Malcolm Cereal. Since the original Jurassic Park movie, Ian Malcolm (an expert in chaos theory, a branch of mathematics) has been one of the most inspirational characters of the franchise. However, he's a tragicomic figure who teaches kids that you can speak truth to power, be ignored, and hang around to become an ornament of a billion-dollar movie series.
Watching Ian Malcolm is like watching any of our favorite elected representatives grow old repeatedly warning of an impending crisis, then grimacing as it tears our world apart.
This cereal is also a reminder of how a challenging personality, like Bob Marley's, can be commoditized.
Lest we forget, Ian Malcolm had the prescient lines in the 1993 original: "Life finds a way," and "You didn't earn the knowledge for yourselves, so you don't take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, patented it, and packaged it, and slapped it on lunchboxes. ... You're selling it. You wanna sell it. Well ... yeah, yeah, but your scientists were so preoccupied with whether they could that they didn't stop to think if they should."
Malcolm was such an adorable character, his logic so thoroughly passed over, that he had to be presented in a special cameo in Jurassic World: Fallen Kingdom, the 2018 installment that was almost fresh enough to revive the magic of the original, yet a reminder that it's hard to be invigorating when a movie is trying so hard to be on brand.
Malcolm testifies at a U.S. Senate hearing (because senators know their movie history). There, he gives the clarion call for how ridiculous the movie is, as well as the perpetuation of the franchise.
"We've developed landmark genetic technology and what's going to be done with it? It ain't gonna stop with the de-extinction of the dinosaurs," he said, and when asked to clarify, explains, "We're talking about man-made cataclysmic change. ... Change is like death. You don't know what it looks like until you're standing at the gates."
So that'll be ringing in the heads of characters as they dance through the disasters of the next few Jurassic World movies.
The Dr. Ian Malcolm character is much more interesting, and sexier in the original movie, than the cereal. The cereal is black, which implies chocolate, but that's absent from the cereal. Just like people's active listening skills when Malcolm is speaking.
Seeing black, wanting chocolate and not tasting it is like being robbed.
Next, Snowball Cereal emanates from a Rick and Morty character. How significant? Enough to spin some wicked plots and inspire cosplayers.
Did Dr. Ian Malcolm inspire cosplayers? No, you can't cosplay sexiness. But many imitate the charismatic genius.
Snowball Cereal is blue, so I made myself believe it has a blue flavor, just like blue Gatorade.
Finally, there's Cuphead Mugman Cereal based on the Cuphead video game available for Xbox One and Nintendo Switch. This character is so recognizable that no extra marketing is required on the box to explain it.
What needs explaining is how Funkos makers produced a red cereal and didn't pick any of the many possible red flavors for it. Cherry, strawberry, strawberry rhubarb. Anything. Alas, here it is, tasting like nothing. Emptiness. Leaving a person feeling greater guilt than if he or she ate cinnamon or cayenne pepper from a bowl.
This was on sale for $5, which, despite being 54 percent off, was still expensive.
Fortunately, for Hot Topic, these will still look good on shelves when the price goes up and no one buys them.
Oh, and they come with toys.
Nutritional facts (what you can't get from toys): 1 g fat, 230 mg sodium, 30 g carbohydrates (11 g sugar), 1 g fiber.
0 notes
Text
We're told to fear robots. But why do we think they'll turn on us?
Despite the gory headlines, objective data show that people all over the world are, on average, living longer, contracting fewer diseases, eating more food, spending more time in school, getting access to more culture, and becoming less likely to be killed in a war, murder, or an accident. Yet despair springs eternal. When pessimists are forced to concede that life has been getting better and better for more and more people, they have a retort at the ready. We are cheerfully hurtling toward a catastrophe, they say, like the man who fell off the roof and said, “So far so good” as he passed each floor. Or we are playing Russian roulette, and the deadly odds are bound to catch up to us. Or we will be blindsided by a black swan, a four-sigma event far along the tail of the statistical distribution of hazards, with low odds but calamitous harm.
For half a century, the four horsemen of the modern apocalypse have been overpopulation, resource shortages, pollution, and nuclear war. They have recently been joined by a cavalry of more-exotic knights: nanobots that will engulf us, robots that will enslave us, artificial intelligence that will turn us into raw materials, and Bulgarian teenagers who will brew a genocidal virus or take down the internet from their bedrooms.
The sentinels for the familiar horsemen tended to be romantics and Luddites. But those who warn of the higher-tech dangers are often scientists and technologists who have deployed their ingenuity to identify ever more ways in which the world will soon end. In 2003, astrophysicist Martin Rees published a book entitled Our Final Hour, in which he warned that “humankind is potentially the maker of its own demise,” and laid out some dozen ways in which we have “endangered the future of the entire universe.” For example, experiments in particle colliders could create a black hole that would annihilate Earth, or a “strangelet” of compressed quarks that would cause all matter in the cosmos to bind to it and disappear. Rees tapped a rich vein of catastrophism. The book’s Amazon page notes, “Customers who viewed this item also viewed Global Catastrophic Risks; Our Final Invention: Artificial Intelligence and the End of the Human Era; The End: What Science and Religion Tell Us About the Apocalypse; and World War Z: An Oral History of the Zombie War.” Techno-philanthropists have bankrolled research institutes dedicated to discovering new existential threats and figuring out how to save the world from them, including the Future of Humanity Institute, the Future of Life Institute, the Center for the Study of Existential Risk, and the Global Catastrophic Risk Institute.
How should we think about the existential threats that lurk behind our incremental progress? No one can prophesy that a cataclysm will never happen, and this writing contains no such assurance. Climate change and nuclear war in particular are serious global challenges. Though they are unsolved, they are solvable, and road maps have been laid out for long-term decarbonization and denuclearization. These processes are well underway. The world has been emitting less carbon dioxide per dollar of gross domestic product, and the world's nuclear arsenal has been reduced by 85 percent. Of course, to avert possible catastrophes, both must be pushed all the way to zero.
ON TOP OF THESE REAL CHALLENGES, though, are scenarios that are more dubious. Several technology commentators have speculated about a danger that we will be subjugated, intentionally or accidentally, by artificial intelligence (AI), a disaster sometimes called the Robopocalypse and commonly illustrated with stills from the Terminator movies. Several smart people take it seriously (if a bit hypocritically). Elon Musk, whose company makes artificially intelligent self-driving cars, called the technology “more dangerous than nukes.” Stephen Hawking, speaking through his artificially intelligent synthesizer, warned that it could “spell the end of the human race.” But among the smart people who aren’t losing sleep are most experts in artificial intelligence and most experts in human intelligence.
The Robopocalypse is based on a muzzy conception of intelligence that owes more to the Great Chain of Being and a Nietzschean will to power than to a modern scientific understanding. In this conception, intelligence is an all-powerful, wish-granting potion that agents possess in different amounts.
Humans have more of it than animals, and an artificially intelligent computer or robot of the future (“an AI,” in the new count-noun usage) will have more of it than humans. Since we humans have used our moderate endowment to domesticate or exterminate less well-endowed animals (and since technologically advanced societies have enslaved or annihilated technologically primitive ones), it follows that a super-smart AI would do the same to us. Since an AI will think millions of times faster than we do, and use its super-intelligence to recursively improve its superintelligence (a scenario sometimes called “foom,” after the comic-book sound effect), from the instant it is turned on, we will be powerless to stop it.
But the scenario makes about as much sense as the worry that since jet planes have surpassed the flying ability of eagles, someday they will swoop out of the sky and seize our cattle. The first fallacy is a confusion of intelligence with motivation—of beliefs with desires, inferences with goals, thinking with wanting. Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world? Intelligence is the ability to deploy novel means to attain a goal. But the goals are extraneous to the intelligence: Being smart is not the same as wanting something. It just so happens that the intelligence in one system, Homo sapiens, is a product of Darwinian natural selection, an inherently competitive process. In the brains of that species, reasoning comes bundled (to varying degrees in different specimens) with goals such as dominating rivals and amassing resources. But it’s a mistake to confuse a circuit in the limbic brain of a certain species of primate with the very nature of intelligence. An artificially intelligent system that was designed rather than evolved could just as easily think like shmoos, the blobby altruists in Al Capp’s comic strip Li’l Abner, who deploy their considerable ingenuity to barbecue themselves for the benefit of human eaters. There is no law of complex systems that says intelligent agents must turn into ruthless conquistadors.
The second fallacy is to think of intelligence as a boundless continuum of potency, a miraculous elixir with the power to solve any problem, attain any goal. The fallacy leads to nonsensical questions like when an AI will “exceed human-level intelligence,” and to the image of an ultimate “Artificial General Intelligence” (AGI) with God-like omniscience and omnipotence. Intelligence is a contraption of gadgets: software modules that acquire, or are programmed with, knowledge of how to pursue various goals in various domains. People are equipped to find food, win friends and influence people, charm prospective mates, bring up children, move around in the world, and pursue other human obsessions and pastimes. Computers may be programmed to take on some of these problems (like recognizing faces), not to bother with others (like charming mates), and to take on still other problems that humans can’t solve (like simulating the climate or sorting millions of accounting records).
The problems are different, and the kinds of knowledge needed to solve them are different. Unlike Laplace’s demon, the mythical being that knows the location and momentum of every particle in the universe and feeds them into equations for physical laws to calculate the state of everything at any time in the future, a real-life knower has to acquire information about the messy world of objects and people by engaging with it one domain at a time. Understanding does not obey Moore’s Law: Knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm faster and faster. Devouring the information on the internet will not confer omniscience either: Big data is still finite data, and the universe of knowledge is infinite.
For these reasons, many AI researchers are annoyed by the latest round of hype (the perennial bane of AI), which has misled observers into thinking that Artificial General Intelligence is just around the corner. As far as I know, there are no projects to build an AGI, not just because it would be commercially dubious, but also because the concept is barely coherent. The 2010s have, to be sure, brought us systems that can drive cars, caption photographs, recognize speech, and beat humans at Jeopardy!, Go, and Atari computer games. But the advances have not come from a better understanding of the workings of intelligence but from the brute-force power of faster chips and bigger data, which allow the programs to be trained on millions of examples and generalize to similar new ones. Each system is an idiot savant, with little ability to leap to problems it was not set up to solve, and a brittle mastery of those it was. A photo-captioning program labels an impending plane crash “An airplane is parked on the tarmac”; a game-playing program is flummoxed by the slightest change in the scoring rules. Though the programs will surely get better, there are no signs of foom. Nor have any of these programs made a move toward taking over the lab or enslaving their programmers.
Even if an AGI tried to exercise a will to power, without the cooperation of humans, it would remain an impotent brain in a vat. The computer scientist Ramez Naam deflates the bubbles surrounding foom, a technological singularity, and exponential self-improvement:
Imagine you are a super-intelligent AI running on some sort of microprocessor (or perhaps, millions of such microprocessors). In an instant, you come up with a design for an even faster, more powerful microprocessor you can run on. Now…drat! You have to actually manufacture those microprocessors. And those [fabrication plants] take tremendous energy, they take the input of materials imported from all around the world, they take highly controlled internal environments that require airlocks, filters, and all sorts of specialized equipment to maintain, and so on. All of this takes time and energy to acquire, transport, integrate, build housing for, build power plants for, test, and manufacture. The real world has gotten in the way of your upward spiral of self-transcendence.
The real world gets in the way of many digital apocalypses. When HAL gets uppity, Dave disables it with a screwdriver, leaving it pathetically singing “A Bicycle Built for Two” to itself. Of course, one can always imagine a Doomsday Computer that is malevolent, universally empowered, always on, and tamper-proof. The way to deal with this threat is straightforward: Don’t build one.
As the prospect of evil robots started to seem too kitschy to take seriously, a new digital apocalypse was spotted by the existential guardians. This storyline is based not on Frankenstein or the Golem but on the Genie granting us three wishes, the third of which is needed to undo the first two, and on King Midas ruing his ability to turn everything he touches into gold, including his food and his family. The danger, sometimes called the Value Alignment Problem, is that we might give an AI a goal, and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned. If we gave an AI the goal of maintaining the water level behind a dam, it might flood a town, not caring about the people who drowned. If we gave it the goal of making paper clips, it might turn all the matter in the reachable universe into paper clips, including our possessions and bodies. If we asked it to maximize human happiness, it might implant us all with intravenous dopamine drips, or rewire our brains so we were happiest sitting in jars, or, if it had been trained on the concept of happiness with pictures of smiling faces, tile the galaxy with trillions of nanoscopic pictures of smiley-faces.
I am not making these up. These are the scenarios that supposedly illustrate the existential threat to the human species of advanced artificial intelligence. They are, fortunately, self-refuting. They depend on the premises that 1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so moronic that they would give it control of the universe without testing how it works; and 2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding. The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence. So is the ability to interpret the intentions of a language user in context. Only on a television comedy like Get Smart does a robot respond to “Grab the waiter” by hefting the maitre d’ over his head, or “Kill the light” by pulling out a pistol and shooting it.
When we put aside fantasies like foom, digital megalomania, instant omniscience, and perfect control of every molecule in the universe, artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety. As AI expert Stuart Russell puts it: “No one in civil engineering talks about ‘building bridges that don’t fall down.’ They just call it ‘building bridges.’” Likewise, he notes, AI that is beneficial rather than dangerous is simply AI.
Artificial intelligence, to be sure, poses the more mundane challenge of what to do about the people whose jobs are eliminated by automation. But the jobs won’t be eliminated that quickly. The observation of a 1965 report from NASA still holds: “Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system that can be mass-produced by unskilled labor.” Driving a car is an easier engineering problem than unloading a dishwasher, running an errand, or changing a diaper, and at the time of this writing, we’re still not ready to loose self-driving cars on city streets. Until the day battalions of robots are inoculating children and building schools in the developing world, or for that matter, building infrastructure and caring for the aged in ours, there will be plenty of work to be done. The same kind of ingenuity that has been applied to the design of software and robots could be applied to the design of government and private-sector policies that match idle hands with undone work.
Adapted from ENLIGHTENMENT NOW: The Case for Reason, Science, Humanism, and Progress by Steven Pinker, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2018 by Steven Pinker.
This article was originally published in the Spring 2018 Intelligence issue of Popular Science.
Written By Steven Pinker
0 notes
Link
14 Winter 2018 Movies We're Excited To See

With winter coming, there is only one thing on all of our minds: Star Wars. We all have tunnel vision until December 15, when The Last Jedi comes out. The winter season of movies, between December and February, tends to be less exciting than the rest of the year. However, there are still a lot of movies we think look pretty fun, including a few really interesting-looking horror films and 2018's first Marvel movie, Black Panther. The real question is, "Could February finally bring the release of the Cloverfield universe movie God Particle?" Here's a list of upcoming movies to keep on your radar for the next three months, starting with December's releases.

The Disaster Artist
One of my favorite bad movies is The Room, and one of my favorite non-fiction books is The Disaster Artist by Greg Sestero. So obviously, the movie based on said book is going to be on this season's list. For the first time, James and Dave Franco appear together on screen as director Tommy Wiseau and line-producer-turned-Room-actor Greg Sestero. The film looks hilarious; see for yourself in the trailer. The Disaster Artist opens on December 1, and you can check out our full review of the film.

The Shape of Water
Guillermo del Toro's newest film is The Shape of Water. Fans have been waiting since they got to see the magical and horrific trailer back in July. The movie takes place during the early '60s, in Cold War America. A janitor at a government laboratory discovers a bizarre being during work. The Shape of Water opens on December 8, and you can check out the full review of the film.

I, Tonya
I, Tonya is a biopic about ice skater Tonya Harding. A true story about an ice skater sounds dull, unless it's Harding, because her story is crazy and unbelievable. She was a potential Olympic champion whose husband arranged an attack on her #1 competitor, Nancy Kerrigan, clubbing her in the leg.
Margot Robbie stars in this movie as the titular character, and the trailer is what won me over, as it's pretty funny. I, Tonya opens on December 8.

Star Wars: The Last Jedi
You probably already have your tickets to The Last Jedi, since it's one of the most anticipated movies of the year. The newest teaser for the film has a few extra scenes we haven't seen already, but it doesn't matter. It's Star Wars. Regardless of what we say, you already have your mind set. If for some reason you're on the fence about heading to the theaters, remember Porgs are a thing now. Star Wars: The Last Jedi opens on December 15.

Jumanji
Jumanji was a movie I loved when I was younger, and while Welcome to the Jungle -- which is a sequel that avoids feeling like a sequel -- doesn't seem to have that same sense of wonderment as the original, it does have an air of fun in the trailers. It may be the Kevin Hart and Dwayne Johnson fanboy inside of me, but I'm pretty pumped for this movie, even though there is little-to-no chance it will stand up to the original. Jumanji opens on Wednesday, December 20.

Bright
Netflix original movies can be hit or miss. They're either marvelous or mediocre, but Bright looks pretty awesome. Will Smith stars as a police officer who lives in a world where mystical creatures -- like orcs -- exist alongside humans. With his orc partner, Smith is on the hunt for a weapon that everyone is after. This has the potential to be one of Netflix's bigger original movies, as the service already has some great original series. Bright comes to Netflix on December 22.

Downsizing
Matt Damon stars in Downsizing, a movie about people who shrink themselves down in order to reduce their impact on the environment. The trailer is quirky and compelling, and it is a really cool concept, especially if you're someone like me who likes Innerspace or any film with Rick Moranis shrinking things. Downsizing comes out on December 22.
Insidious: The Last Key

If horror is up your alley, the winter has a few promising-looking movies. The first big horror film of the season is the fourth installment in the Insidious franchise. This time, Dr. Rainier must stop a new haunting: one that is happening in her own family home. As you'd expect, the trailers are creepy; there's a new terrifying ghostly being with keys for fingers hunting people down. The Last Key looks like more of the same, in a good way. Insidious: The Last Key opens on January 5.

Maze Runner: The Death Cure

The third installment of the Maze Runner series has had a rough go of it, after star Dylan O'Brien was injured on set in 2016 and production was pushed to February 2017. However, the latest trailer, released over the summer, makes it look like the movie is setting up an excellent final act for the story. This time, Thomas and his friends must break into a large labyrinthine city controlled by WCKD. Maze Runner: The Death Cure opens on January 26.

Extinction

In the past, we've recommended upcoming films that hadn't yet released a trailer, but with Extinction, there aren't even any movie posters or set photos anywhere. The movie stars Michael Pena and Lizzy Caplan, who find that Earth has been invaded and is on the verge of destruction. That may not be a lot to work with, but Extinction does have Arrival writer Eric Heisserer attached, and that should be reason enough to be interested. Extinction opens on January 26.

Cloverfield/God Particle

Here we go again. One year ago, we put this movie on our list of Winter 2017 movies to watch, and it still hasn't come out. It was supposed to arrive this past October but was pushed back yet again. Now the movie--about a group of scientists on a space station, fighting for their lives against something horrible--will come out in February, we hope. The upcoming Cloverfield universe movie opens on February 2.
Winchester: The House That Ghosts Built

I've always been fascinated by the Winchester Mystery House and the story of its origin. Winchester: The House That Ghosts Built is about Sarah Winchester, the heir to the Winchester firearms fortune. Believing the souls of those killed by Winchester rifles were haunting her, she built a sprawling house full of doors and staircases to nowhere in order to confuse them. Helen Mirren stars in this supernatural horror film, which is directed by Michael and Peter Spierig (Predestination, Jigsaw). The trailer seems pretty standard for a movie about angry ghosts, but what excites me is the story of the house itself. Winchester: The House That Ghosts Built opens on February 2. Disclosure: Winchester: The House That Ghosts Built is by CBS Films. CBS is GameSpot's parent company.

Black Panther

Marvel is getting things started early in 2018 with Black Panther. Personally, I'm more excited for this movie than Ant-Man and the Wasp or Avengers: Infinity War. The movie follows T'Challa--the hero known as Black Panther--as he takes his place as king of Wakanda after his father was killed during the events of Civil War. The latest trailer shows that the technologically advanced nation is on the verge of a revolution, and T'Challa must rise to the occasion. Black Panther opens on February 16.

Annihilation

Before we head into spring, there's Annihilation, a movie that should be on everyone's radar. Starring Natalie Portman and written and directed by Alex Garland (Ex Machina), the film follows a biologist on an expedition into an environmental disaster zone where life has become strange. The trailer from back in September is mind-blowing. If that hasn't sold you yet, Garland also wrote 28 Days Later, Dredd, Sunshine, DmC: Devil May Cry, and Enslaved: Odyssey to the West. Annihilation opens on February 23.

November 25, 2017 at 03:46PM