Hi, if you are still taking prompts; A magically powerful Harry not noticing that his magic does things to make Draco happy. This can be pre-relationship or established relationship. Like it starts of with his tea being exactly as he likes and always the right temperature. Then evolves to rooms changing colour or weather changing or people being unable to invade Draco’s personal space due to an invisible barrier or something ridiculous. Btw Draco doesn’t notice as well.
anon.....you really killed me w this one. i’ve been so emo over this wyugeahrwiw might end up writing smth longer tbh bc this concept is literally the only thing that matters to me!!!!!!! i hope u enjoy i had so much fun with it ❤️❤️❤️
“Harry, you do it. Please.”
“No.”
“Please!”
“We’re fucking watching something, Draco!”
“So just pause it!”
Harry grabs the pillow on his lap and slams it onto the sofa next to him. Hermione can see dust rise in its wake. He pauses the telly.
“Are you doing it?” Draco asks hopefully. Harry scowls at him.
“Well you won’t shut up until I do, will you?”
“Definitely not.”
Harry disappears into the kitchen and Draco sits there looking smug.
“It’s kind of sick how you get off on bossing him around,” says Ron, his tone one of simple observation. His fingers are idly playing with Hermione’s hair, but she doesn’t think he notices he’s doing it.
“If I’m not mean to him a few times a week I break out in a rash, Weasley,” Draco says blithely. “Besides, he makes it perfectly. I don’t know how he does it, it’s always exactly the right temperature and sweetness and all that. I s’pose his years as a house-elf for those Muggles gave him plenty of time to perfect the art.”
“You’re a twat,” says Ron. “And my mum makes tea better than him.”
“Well you’re just a pitiful little mummy’s boy, aren’t you, Weasley? We can hardly trust your opinion.”
“Hark who the hell’s talking,” Ron scoffs. “Least I’m not twenty-three and still calling my mum ‘mummy’ like the world’s biggest bloody ponce.”
Draco splutters but before he can retort Harry’s coming back into the room hovering four cups of tea that float placidly to each of them. Draco looks exactly like a satisfied cat as he takes his and Harry drops back down onto the sofa next to him. Not too close, but certainly not too far, either.
“Literally exquisite,” Draco declares after he’s taken a sip. Ron rolls his eyes.
“It’s just tea, Draco,” says Harry, and he grabs for the remote to turn the film back on. “You’re such a demanding little brat. Merlin’s fucking tits.”
But Draco looks happy and Harry looks suspiciously content as well. Ron turns to her and makes a silent gagging face. Hermione snorts and puts a finger to her lips. They’ve decided not to say anything yet.
*
“Wasn’t this place a lot … uglier last time?”
“What?” Harry says absently. He’s not listening — he’s got all his attention zeroed in on a stack of parchment he’s holding. They’d only barely dragged him along to lunch; earlier the captain of the English National Team had apparently owled him a great number of brand-new Quidditch plays and required Harry’s extensive thoughts and notes before their next practice, which was tomorrow morning.
“Uglier,” Draco says emphatically, and Ron mutters something she doesn’t catch. “Remember? The walls were that tragic egg-yolk colour.” He shivers. Hermione thinks it might have been an honest-to-god shiver of revulsion. She also thinks she knows what’s happened, even though the extent of it surprises her.
“Maybe someone heard you whingeing and changed it,” Ron apparently can’t stop himself from saying with a snigger. Hermione elbows him hard and he shoots her a glare, mouthing, he doesn’t know!
Harry would usually be the one to take the lead and get them a table when all four of them go out to eat together but today he’s too wrapped up in his Quidditch plays, so Ron steps forward and does it, which makes Hermione’s chest flutter pleasantly. He’d blush down to his bones if she ever said it aloud but he’s quite capable of being a leader in Harry’s absences.
“Whatever happened,” says Draco pointedly as they’re led to their table, “it’s a great bloody blessing, I was genuinely unsure I’d have the mental fortitude to survive another assault like that on my delicate senses. And, I mean, this —” he gestures to the walls, which are now an admittedly pleasing dark teal above a white trim “— is stunning. It’s my favourite colour.”
“Is it? So weird they picked your favourite colour completely by coincidence,” Ron says, and Hermione elbows him again. Draco notices nothing and neither does Harry, although he does finally set the plays aside once they’re seated at the table.
“Are you complaining about the wall colour again?” he asks drily. They would both be extremely displeased to know they sound like an old married couple. Draco snatches haughtily at the paper napkin on the table and unfolds it to place over his lap. The first time he’d ever done this at a regular, decidedly not upscale restaurant Ron had taken it upon himself to spend the entire meal adopting a posh accent to match Draco’s and saying things to the waiter like “Don’t you have crystal?” while holding up a glass cup full of Pepsi and then commenting “These aren’t real silver, you know” after making a show of inspecting the titanium utensils.
“I can complain about hideous design choices if I want to,” Draco tells Harry with his nose in the air. “Thankfully they’ve rectified it this time.”
On the other side of the restaurant, Hermione sees two employees talking, one of them gesturing at the wall with utter bewilderment. She doesn’t point it out.
*
“Twelve o’clock,” says Ron, nodding past Draco’s shoulder. “Some bloke staring you down hard, Malfoy.”
Draco looks excitedly behind him, but what Hermione takes more notice of is the way Harry’s face falls a little. She can’t help but wonder if he even realises it’s happened. She’s almost certain he’s aware of his feelings for Draco even though he still hasn’t said anything to her (and she’s been waiting months now, the effort of holding her tongue growing only more difficult by the day, and she knows Ron’s always seconds away from shouting at him) but she doesn’t think he knows how obvious he is. Draco doesn’t seem to know either, but she thinks that’s because Draco feels exactly the same way. She’d have called them morons, but she remembers too well how long it had taken her and Ron.
“What the fuck, Weasley,” Draco hisses, turning back around with a scowl that makes Ron laugh and Harry perk up again a little bit. “He looks like he hasn’t washed his hair in weeks.”
“Now, now,” says Ron, “mustn’t judge books by their greasy covers.”
“Then you go shag him if you think he’s so fit.”
“Maybe I will,” Ron says airily, as if he really is considering it, and Hermione can’t help chuckling and kissing his cheek. Then his expression changes to one of wicked amusement, which makes all of them look round to see the bloke coming their way. Hermione glances at Harry to find that — oh yes, he looks flustered and vaguely upset.
“Hullo,” says the greasy bloke to Draco as he comes up beside him at their table. He’s really not terrible-looking, but if she’s learned anything about Draco in the last couple years it’s that his standards amount to models and Harry Potter, so this man has almost no chance.
“Hello,” Draco drawls, reminding her fiercely of his younger self at Hogwarts. “I’m not interested.”
“Right little narcissistic bugger, aren’t you?” the man says. And now, finally, he’s begun to look as revolting to Hermione as he’d done initially to Draco — a repellent personality can do that. “Maybe I just wanted to come and have a chat.”
“Then why aren’t you looking at any of the rest of us?” Ron asks, sounding halfway between amused still and a little put off.
“Can you leave, please?” Draco interjects, cringing away from the man encroaching slowly on his personal space. And suddenly, as he looks on the verge of antagonising Draco further, he shifts his feet and slips, landing right on his bum with a yell of surprise. All four of them get to their feet to see, but there doesn’t seem to be any liquid or even slimy food for him to have tripped on.
“The fuck ...?” the man says, getting back to his feet. But when he moves towards Draco, he only slips again, on absolutely nothing at all. Something clicks and Hermione looks at Harry: he seems as confused as anyone else (if obviously pleased).
She looks at Ron then, who catches her eye and lifts his brows like he’s thinking the same thing.
Draco’s suitor gets up once more and steadies himself, looking a bit dazed. Some deep animal instinct seems to tell him to stop trying, and with a wary glance at Draco he finally leaves.
“Well that was a bit of a fucking scene,” says Harry. Draco, coming out of his own startled daze, laughs.
“Yeah,” Ron says sarcastically, “wonder what could’ve possibly happened.”
*
“I really thought it was going to rain,” Draco mopes where he’s standing at the window. It’s grey outside but it definitely doesn’t look like rain and Draco appears so upset about it that Hermione actually feels badly, even though she’s quite glad for the clear weather.
“Just shut the curtains,” Ron suggests from his place on the floor. He’s sorting through Harry’s collection of VHS tapes, trying to decide on a good Halloween movie. Not that he’s ever seen any of them, and Hermione suspects he’ll end up choosing whichever cover he likes best.
“It’s not the same!” Draco wails. “The thunder and lightning is all part of it, you uncultured pillock! The atmosphere is all wrong.”
“It’ll be just as good when we shut off all the lights and draw the curtains,” she assures him, but it doesn’t remove the look of disappointment from his face. It’s a pouty sort of thing that echoes the brattiness of his youth; she imagines a five-or-six-year-old Draco giving his parents similar looks when he wasn’t getting what he wanted.
At that moment the front door opens and Harry walks in carrying two grocery bags, one of which contains alcohol, which Hermione can tell by the way the plastic is bulging around the cans.
“The fuck are you all doing here?” he says by way of greeting.
“You said eight o’clock, fuckhead,” Ron tells him without looking up. “But it’s fine, I’ve had time to pick a film and Malfoy’s had time to moan about the weather.”
“What’s wrong with the weather?”
“I wanted a storm!”
At that exact moment, a flash of lightning lights up the sky behind Harry where he hasn’t even closed the door yet. Seconds later a downpour begins, and then there’s a rolling crash of thunder.
Hermione’s eyes widen and once more she finds Ron’s gaze; he looks about as shocked as she feels. Draco, meanwhile, has his hands over his mouth and looks like a child on Christmas morning.
For the first time since his magic had begun picking up on Draco’s wishes and granting them of seemingly its own accord, Hermione sees Harry look suspicious. He peers behind him at the storm suddenly raging outside his house before slowly closing the door. When he turns back he looks directly at Hermione, who looks away quickly.
They set up the food Harry had gotten — all kinds of Halloween-themed sweets — and once everyone has their drinks (“Make mine,” Draco tells Harry, “you do it best”) and is comfortable on the two sofas in the room (Harry and Draco are, as usual, as close to each other as they can get without actually touching) they start the movie: The Thing, which Harry swears is one of the greatest horror films of all time.
Funny thing is, an hour and a half into it she looks over and, with a jolt, realises the two of them are kissing half-covered beneath a blanket. She elbows Ron, who positively beams when he notices.
“Fucking finally, dear sweet Merlin,” he whispers, the sound muffled by the continued rain and thunder. “I nearly hit him upside the head when he made it rain, are you fucking kidding me?”
“Shh!” Hermione hisses, though she’s smiling. “They’ll hear you. We’ll rag him about it tomorrow.”
A soft sound of laughter comes from the other sofa that Hermione identifies as Draco’s, and when she risks another peek after a moment she sees that Harry has a hand on Draco’s jaw, and that he’s smiling.
hyouka poly slowburn so its like: (post-anime)
satoshi and mayaka are dating all through highschool and houtarou and eru are maybe a little more invested than friends would normally be, and step in when they have problems to a degree that regular people would call a boundary issue.
houtarou and mayaka still give each other shit at every opportunity, but theyre much more outwardly friendly than before, and if she ever argues w satoshi houtarou is the first to say that theres no way its her fault. maybe one time it is and he confronts her abt it in a way that forces her to take him seriously. its the respect and codependency for me.
mayaka and satoshi have functioning eyes and houtarous feelings for eru are completely obvious to them. unfortunately shes a little harder to read, so they never really get further than being very close friends and committing to being together for the foreseeable future, as far as this road will take them - its the same dynamic for the entire friend group, but satoshi starts seriously considering a proposal in third year.
he and mayaka will be separated through college - she wants to go to art school, he thinks he might like to be a teacher (specifically of like, textiles or language) and he thinks long distance is too much to ask of her. after a serious conversation she agrees and they take a break through college, but they are tentatively engaged, will be keeping in touch, and want to pick things back up once theyre both in the same location again.
mayaka and eru flat in college, bringing up mayakas long buried feelings for her, and theyve always been so touchy, and she has a feeling that erus guilty about something? but she doesnt want to get her hopes up. she gets really frustrated and confides in the only person who knows both satoshi and mayaka and isnt involved - HOUTAROU, who attends a less prestigious college than eru but is taking similar business courses (he hasnt forgotten) and is commuting from home.
hes closer to them than satoshi, but theres still a little distance and they dont meet as often as theyd like, partially bc he doesnt often make the effort - the energy he does have is expended on his classes, bc he has a motivation to do well - if he does, maybe eru will consider him without him even putting himself out there. anyway she still calls him on the phone all the time tho.
he doesnt really have any advice when mayaka speaks to him, but hes quick to reassure her that satoshi wouldnt be bothered by her feelings - "because its eru". functioning adults refer to each other by their first names. it was a super embarrassing transition period but theyre used to it now.
so mayaka takes the leap and eru admits that while shes never really been one to dwell on romantic feelings, she reciprocates but is concerned abt satoshi - she loves him too, after all, and he and mayaka were/are/will be a great couple. she ends up confessing this to houtarou, filled w apologies and assurances that he neednt worry abt her personal matters, but he doesnt mind listening. anyway it stings (in a sad way, not a bitter one) that she apparently has interest in both mayaka and satoshi but not him, but he REALLY cant blame her. he tells her that he doesnt know how to advise her and she thanks him for listening, and then he does probably the most meddlesome thing hes ever done and calls satoshi and tells him everything.
satoshi is really cool abt it, and hits him w "lol if theyre dating what if i just take you out to lunch. fairs fair. what do you mean you dont know about my massive crush on you, mr observer didnt pick it up? oh wow okay youre really stupid when it comes to yourself. ill pick you up on friday" and then satoshi calls mayaka and gives her his blessing and assures that he loves them both and wishes them the best and wow they REALLY need to catch up soon. hell bring houtarou and they can compare date notes! and he hangs up.
satoshi is still kind of a petty guy and he probably only confessed to houtarou bc he was taken off guard, but hes not being inauthentic by any means. this is the new improved satoshi 2.0, who is becoming more comfortable w there being things he doesnt like abt himself and working on them and getting his feelings out constructively, rather than pushing them down and refusing to put himself in situations that might turn out badly. he gets his hopes up again, and is happier for it even when hes let down.
eru is shocked to hear abt houtarou and satoshi. mayaka isnt. they talk abt it, interspersed w making out, and are shocked to realise that they like both of them - mayaka is ESPECIALLY taken off guard, both by her own feelings and erus, which shed never noticed before. she almost tells eru abt houtarous 3+ years of pining, but stops herself lest things get messy. shes starting to get an idea, but needs to tread lightly. besides, its not like houtarou wld ever like her. theyre barely even friends. it doesnt all add up as evenly as shed like.
for houtarous part, hes genuinely in wtf mode irt satoshis feelings for him, and hes been in eru chitanda hell for so long that he never considered anything else, but now that he IS.... satoshi isnt so bad. he was always really cute w mayaka, when he wasnt being annoying for fun and profit. sure. okay. so they do some gay double dating through college, but the cross couple pining dont stop. satoshi is absolutely still obsessed w mayaka, but houtarou doesnt mind bc he cant take his eyes off eru whenever they meet up either.
she still calls him on the phone all the time, and when schoolwork picks up he often finds himself calling her w thoughts or questions. they do some more thought exercises, but they dont need to argue as an excuse, and she barely has to badger him anymore. one day he looks at himself and sees a functioning adult who spends a moderate amount of energy on things that arent necessarily necessary, and wants to sigh, but. hes happy.
college ends and they all find themselves back in kamiyama - satoshi is student teaching at their old middle school, eru is hard at work for her family, mayaka is working while she works on her manga debut, and houtarou is working while he figures out what he actually wants.
its clear to all of them that mayaka and satoshi need to have a talk, so they do, and they come up with... poly. its unconventional, but they really are happy, and they really do love each other, and mayaka would love to start wearing her ring again (satoshi never took his off, and she pretended not to notice but she had the biggest lady boner over it).
so now sometimes eru and houtarou are hanging out while their boyfriend and girlfriend are out on dates being engaged, making up for lost time and considering the practicality of marriage while they both have sidepieces, and houtarou and eru are pining BAD, but neither notices the other and he asks how her business is going and maybe kind of offers his assistance platonically.
so now THATS happening, and satoshi and mayaka get to talking one day abt how those two should date, shld we do smth? and if they did then the only pieces missing are mayaka/houtarou and satoshi/eru which is a beautiful dream but wld never happen, what do you mean he/she wld love to date you, wait really, oh my god, what, are we doing this,
and houtarou who has been working himself up to confessing for the past SEVEN YEARS, never gets to bc satoshi and mayaka interrupt while theyre at work and do it for him
#jhkgfjkfjg#hyouka#txt#god i put it in bullets to try and make it legible but im here for suggestions bc this is so much#myfic#sure thats a tag now i dont normally post much of anything but!! i want to be more active so take this#if literally anyone is out there and has poly feelings about these 4 please message me
Thoughts on Hasbro Universe after Revolution
I'm a big fan of G.I. Joe and Transformers. But when I heard that there were more than two franchises in one universe, it blew my mind. So I decided to check them out. One of them I'd heard of when I was a kid.
Revolution was big. For some it was epic, others think it was a mess. I understand why people love it and hate it. Personally, I love it: there's conflict, and the heroes unite against evil. It was the beginning of a massive universe. So, how did it turn out?
To be fair... not so good.
This is my own opinion. You can disagree with me. If you love the aftermath of Revolution, that's fine. I just want to talk about the conclusion of the Hasbro Comic Book Universe.
Optimus Prime.
I think the writer put a lot of his own view of life into it: disappointment in every religion. I really didn't like how he made Optimus Prime always wrong. Even when he listens and does what he was asked to do, people are still angry at him. “You should have listened to me!” and “You shouldn't have listened to me!”. I love that they put the Joes in, but here's the big issue: Mainframe is out of character, and Flint and his daughter look about the same age.
Remember when Transformers had the mystery of their religion and mythology? A mix of sci-fi and cosmic fantasy. Yeah, forget about that. It was all Shockwave's evil plan. Another big disappointment for me.
I like how they handled the ghost of Bumblebee, but Shockwave being one of the Thirteen Primes looks very... confusing to me.
Lost Light
Lost Light deserves to be called a weak sequel. Remember how MTMTE had magic, mystery, adventure, gore, and development of characters and relationships? Here I found nothing. The new characters aren't interesting to me. And yes, about them being “trans”. I'm not transphobic, and I'm sorry if my opinion hurts or offends you. I just don't see transgender people in Transformers. I don't see Transformers suffering from gender dysphoria. Hell, I doubt they suffer from homophobia, because they are totally fine with mlm and wlw. If you don't know, hetero relationships are for populating the Earth, and the Transformers have established that they can love each other, but their love is not like Earth's, because they don't have to have sex to create life. They have a strong emotional connection to each other.
Speaking of love. I love the Chromedome/Rewind love story because it was developed. We saw the birth of the connection, loss, pain, reunion, fear, and happiness. Same with Cyclonus and Tailgate. To be fair, I don't ship the last two as a romantic couple, but as a platonic one. For me they don't have the emotional connection that Chromedome/Rewind have, but they do care about each other. In Lost Light there's nothing. You just accept that a lot of characters are couples. Why and how? Just accept it. This is why I don't feel an emotional connection to Lug and Anode. To be fair, I thought they were friends, and Lug looks a lot like a boy. If they'd developed her better, I think I'd like her. The whole of Lost Light is just a comic about couples. I kept wondering when they were going to do an orgy like in Ancient Rome.
Also, here's another disappointment in the religion. Everything was a lie. As I said earlier, I didn't like that. I'd rather rewatch TFP, Bayverse, or G1, because here I felt emptiness. MTMTE is a masterpiece.
G.I. Joe
Where do I begin? It was written by a socialist who doesn't know anything about the military, it ruined Quick Kick, who was nice and gentle, made Scarlett an idiot, turned the charismatic Shipwreck into a fat vegan, and the new characters have no backstory or reasons for joining the Joes. Also: huge hypocrisy. Scarlett says that G.I. Joe is now an international team, but they refuse to work with the USA. I get that they tried to turn G.I. Joe into Overwatch, but OW worked with every country, including the USA, where they had one of their headquarters. The American G.I. Joe was more progressive, because they helped every country that had to deal with Cobra or any other threat. They even teamed up with Russian soldiers.
The huge disappointment was no explanation of Snake Eyes' rebirth (and no Snake/Scarlett love story) and Quick Kick being an ass. Just check out the G.I. Joe ARAH show. There, Quick Kick was nice. I miss that one....
The only good stuff was Rock n' Roll's nightmares and guilt over shooting Grand Slam, grumpy Grand Slam, and Doc being half-alien. That's all.
Revolutionaries
It was a bit better, because it's literally a crossover with conflict and backstories. Here they at least tried to make the story interesting. And they brought in a lot of interesting references, especially to the 90s: KLAW, Slaughter and even the original Action Force.
M.A.S.K.: Mobile Armored Strike Kommand
At first they tried, but then it all fell apart. I wouldn't call it horrible. You can check out the first issues. I can say that only the villains were interesting. As for the main heroes... here's the problem.
The original Matt Trakker was an engineer, a millionaire, helped people, and was white. Why is the last part important? Because in the reboot he became a boring black guy who seeks vengeance for his father's death, and the main bad guy is a white man. I'm not racist; I like how it was done in Spawn, where it wasn't so obvious who the bad guy who just wants to take over the world was. I get it, you hate Trump. He is a clown.
Also, the original Trakker raised his son alone, so he was widowed. That could have played into the reboot: a man who lost everything but tries to keep his son safe. So much potential for the drama of a lonely father. But we got what we got. I'll just go rewatch the Spawn animated series.
If they wanted "diversity", why didn't they use more poc characters from MASK? You know there are actual canon black and Indian men? Even a Native American man?
ROM
It was boring. The first issues were interesting and brutal because of the alien invasion. You wouldn't know who was the enemy and who was the friend. But the drama...
Rom's whole drama was about losing his humanity. At first we see him as a cold-hearted alien. Then they all forget about it. The original Rom from Marvel was losing his humanity until he met a brave girl, Brandy, who made him remember the loss of his home planet and the love of his life. He was afraid of being alone and of becoming a complete machine. And yes, in the reboot his old girlfriend is alive. But I felt nothing from this. I prefer to read the original comics, because there I felt sorry for Rom.
Micronauts: Wrath of Karza
It was boring. The only thing I can remember is Larissa being Baron Karza's daughter. I won't compare the reboot with the original series because I haven't read it yet. I liked the new one because of Baron Karza and his wife (and their fetish).
First Strike
Hoo boy. It was bad. Pretty bad. Not because the villains tried to destroy Cybertron. Not because the Transformers thought it was going to be a war between humans and Transformers. No, all of that was fine. The problem is that the main villain is Joe Colton, who wants to destroy Cybertron to save Earth, and that he was bad from the beginning. His motivation sounds like Miles Mayhem's from M.A.S.K. The shock effect of a surprise villain doesn't work here. It looks like disrespect to Joe fans. They also managed to ruin Scarlett's character, who, it turns out, got into G.I. Joe not because she was the best: she was in the Joes because she didn't do 50 push-ups. If you don't know, G.I. Joe is an elite unit that takes only the best men and women, because they do a lot of dangerous work. So the whole story arc is full of disrespect to Joe fans. I don't know about you, but I was offended by that.
Was there something good? The team-up of villains and the Visionaries easter egg.
Rom vs. Transformers: Shining Armor
I've almost forgotten the plot because it was boring. Rom was rude, like every commander (so much for someone "losing his humanity"). The new character was boring. So everything was boring. Even the Autobots didn't save the situation.
Rom & the Micronauts
Well, they at least tried with character development. I really liked how the characters interact with each other. But the whole story was "meh".
Scarlett's Strike Force
It was very short and got cancelled, because the writer, Sitterson, wrote an offensive tweet about 9/11. I get what he was trying to do: make a comic based on the G.I. Joe cartoon. This is why Quick Kick and Spirit fight Storm Shadow. Personally, I thought it was racist, because "only an Asian fights an Asian". And Storm Shadow has the worst redesign I've ever seen. There's nothing more to say about the comic, because it's unfinished and cancelled. So there's nothing.
Transformers vs. Visionaries
This comic had potential, but the ending ruined it. The story is about colonization to save a living race, even though it will kill another nation. It's an interesting theme. And how did they handle it? They didn't. For some reason everyone ends up safe and at peace. The ending is just weird. I think the writer didn't know how to end that conflict, so she wrote "everyone is safe and at peace; colonization is bad". And the ending isn't the only problem. The main characters, Leoric and Virulina, were redesigned very heavily. Leoric looks like a totally different character (why not create a new character? He looks good). And Virulina looks like an art-school student, not a villain. The redesigns I do like are Cryotek and Arzon. And the art was very good.
The last two I haven't finished yet. I can tell you this: TAAO doesn't look so bad, but I'm ready for a disappointing ending, like TF Unicron.
In conclusion:
I'm not saying it was done horribly. It just explains why IDW decided to reboot TF and G.I. Joe: low sales. I've noticed a lot of easter eggs in those comics for future story plots. I think they would have made it good if IDW had given them a chance.
If you love them, that's fine. I'll enjoy my own version of the Hasbro Universe.
#review#opinion#transformers#g.i. joe#gijoe#mask#m.a.s.k.#rom#rom the spaceknight#micronauts#visionaries#comics#idw#hasbro#hasbro universe
Machine Learning for Everyone - In simple words
This article in other languages: Russian (original). Special thanks for help: @sudodoki and my wife <3
Machine Learning is like sex in high school. Everyone is talking about it, a few know what to do, and only your teacher is doing it. If you ever tried to read articles about machine learning on the Internet, most likely you stumbled upon two types of them: thick academic trilogies filled with theorems (I couldn’t even get through half of one) or fishy fairytales about artificial intelligence, data-science magic, and jobs of the future.
I decided to write a post I'd been missing all that time: a simple introduction for those who always wanted to understand machine learning. Only real-world problems, practical solutions, simple language, and no high-level theorems. One post for everyone.
Let's roll.
Why do we want machines to learn?
This is Billy. Billy wants to buy a car. He tries to calculate how much he needs to save monthly for that. He went over dozens of ads on the internet and learned that new cars are around $20,000, used year-old ones are $19,000, 2-year-old ones are $18,000, and so on.
Billy, our brilliant analyst, starts seeing a pattern: the car price depends on its age and drops $1,000 every year, but won't go lower than $10,000.
In machine learning terms, Billy invented regression – he predicted a value (price) based on known historical data. People do it all the time, when trying to estimate a reasonable cost for a used iPhone on eBay or figure out how many ribs to buy for a BBQ party. 200 grams per person? 500?
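If you want to poke at this yourself, here's a minimal sketch of Billy's regression in Python, with made-up ad data and numpy doing the line-fitting (the numbers and the $10,000 floor are just illustrative assumptions):

```python
import numpy as np

# Hypothetical ad data: (car age in years, asking price in dollars)
ages = np.array([0, 1, 2, 3, 5, 8])
prices = np.array([20000, 19000, 18000, 17200, 15100, 12500])

# Fit a straight line: price ≈ slope * age + intercept
slope, intercept = np.polyfit(ages, prices, deg=1)

def predict_price(age):
    """Predict a price from the fitted line, with Billy's $10,000 floor."""
    return max(slope * age + intercept, 10_000)

print(round(predict_price(4)))   # rough estimate for a 4-year-old car
```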
Yeah, it would be nice to have a simple formula for every problem in the world. Especially for a BBQ party. Unfortunately, it's impossible.
Let's get back to cars. The problem is, they all have different manufacturing dates, dozens of options, technical condition, seasonal demand spikes, and god only knows how many more hidden factors. An average Billy can't keep all that data in his head while calculating the price. Neither can I.
People are dumb and lazy – we need robots to do the maths for us. So let's go the computational way here. Let's give the machine some data and ask it to find all the hidden patterns related to price.
Aaaand it worked. The most exciting thing is that the machine copes with this task much better than a real person does when carefully analyzing all the dependencies in their head.
That was the birth of machine learning.
Three components of machine learning
The only goal of machine learning is to predict results based on incoming data. That's it. All ML tasks can be represented this way, or it's not an ML problem to begin with.
The greater the variety in the samples you have, the easier it is to find relevant patterns and predict the result. Therefore, we need three components to teach the machine:
Data. Want to detect spam? Get samples of spam messages. Want to forecast stocks? Find the price history. Want to find out user preferences? Parse their activities on Facebook (no, Mark, stop it, enough!). The more diverse the data, the better the result. Tens of thousands of rows is the bare minimum for the desperate ones.
There are two main ways of collecting data — manual and automatic. Manually collected data contains far fewer errors but takes more time to collect — that makes it more expensive in general.
The automatic approach is cheaper — you only need to gather everything you can find on the Internet and hope for the best.
Some smart asses like Google use their own customers to label data for them for free. Remember ReCaptcha, which forces you to "Select all street signs"? That's exactly what they're doing. Free labor! Nice. In their place, I'd start showing captchas more and more. Oh, wait...
It's extremely tough to collect a good collection of data (aka a dataset). Datasets are so important that companies may even reveal their algorithms, but rarely their datasets.
Features. Also known as parameters or variables. These could be car mileage, the user's gender, stock price, or word frequency in a text. In other words, these are the factors the machine will look at.
When data is stored in tables it's simple — features are column names. But what are they if you have 100 GB of cat pics? We cannot consider each pixel as a feature. That's why selecting the right features usually takes way longer than all the other ML parts. That's also the main source of errors. Meatbags are always subjective. They choose only the features they like or find "more important". Please, avoid being human.
Algorithms. The most obvious part. Any problem can be solved differently. The method you choose affects the precision, performance, and size of the final model. There is one important nuance though: if the data is crappy, even the best algorithm won't help. Sometimes it's referred to as "garbage in – garbage out". So don't pay too much attention to the percentage of accuracy; try to acquire more data first.
Learning vs Intelligence
Once I saw an article titled "Will neural networks replace machine learning?" on some hipster media website. These media guys always call any shitty linear regression at least artificial intelligence, almost SkyNet. Here is a simple picture to deal with it once and for all.
Artificial intelligence is the name of a whole knowledge field, like biology or chemistry.
Machine Learning is a part of artificial intelligence. Important, but not the only one.
Neural networks are one of the types of machine learning. A popular one, but there are other good guys in the class.
Deep learning is a modern method of building, training, and using neural networks. Basically, it's a new architecture. Nowadays in practice, no one separates deep learning from "ordinary networks". We even use the same libraries for them. To not look like a dumbass, it's better to just name the type of network and avoid buzzwords.
The general rule is to compare things on the same level. That's why the phrase "will neural nets replace machine learning" sounds like "will the wheels replace cars". Dear media, it's compromising your reputation a lot.
Machine can: Forecast. Memorize. Reproduce. Choose the best item.
Machine cannot: Create something new. Get smart really fast. Go beyond its task. Kill all humans.
The map of machine learning world
If you are too lazy for long reads, take a look at the picture below to get some understanding.
It's important to understand — there is never a sole way to solve a problem in the machine learning world. There are always several algorithms that fit, and you have to choose which one fits better. Everything can be solved with a neural network, of course, but who will pay for all these GeForces?
Let's start with a basic overview. Nowadays there are four main directions in machine learning.
Part 1. Classical Machine Learning
The first methods came from pure statistics in the '50s. They solved formal math tasks, looking for patterns in numbers, evaluating the proximity of data points, and calculating vectors' directions.
Nowadays, half of the Internet works on these algorithms. When you see a list of articles to "read next" or your bank blocks your card at a random gas station in the middle of nowhere, most likely it's the work of one of those little guys.
Big tech companies are huge fans of neural networks. Obviously. For them, 2% more accuracy is an additional 2 billion in revenue. But when you are small, it doesn't make sense. I've heard stories of teams spending a year on a new recommendation algorithm for their e-commerce website, before discovering that 99% of their traffic came from search engines. Their algorithms were useless. Most users didn't even open the main page.
Despite their popularity, classical approaches are so natural that you can easily explain them to a toddler. They are like basic arithmetic — we use them every day without even thinking.
1.1 Supervised Learning
Classical machine learning is often divided into two categories – Supervised and Unsupervised Learning.
In the first case, the machine has a "supervisor" or a "teacher" who gives the machine all the answers, telling it whether the picture shows a cat or a dog. The teacher has already divided (labeled) the data into cats and dogs, and the machine is using these examples to learn. One by one. Dog by cat.
Unsupervised learning means the machine is left on its own with a pile of animal photos and a task to find out who's who. Data is not labeled, there's no teacher, the machine is trying to find any patterns on its own. We'll talk about these methods below.
Clearly, the machine will learn faster with a teacher, so it's more commonly used in real-life tasks. There are two types of such tasks: classification – prediction of an object's category, and regression – prediction of a specific point on a numeric axis.
Classification
"Splits objects based at one of the attributes known beforehand. Separate socks by based on color, documents based on language, music by genre"
Today used for: – Spam filtering – Language detection – A search of similar documents – Sentiment analysis – Recognition of handwritten characters and numbers – Fraud detection
Popular algorithms: Naive Bayes, Decision Tree, Logistic Regression, K-Nearest Neighbours, Support Vector Machine
Here and onward you can comment with additional information to these sections. Feel free to write your examples of tasks. Everything is written here based on my own subjective experience.
Machine learning is about classifying things, mostly. The machine here is like a baby learning to sort toys: here's a robot, here's a car, here's a robo-car... Oh, wait. Error! Error!
In classification, you always need a teacher. The data should be labeled with features so the machine could assign the classes based on them. Everything could be classified — users based on interests (as algorithmic feeds do), articles based on language and topic (that's important for search engines), music based on genre (Spotify playlists), and even your emails.
The Naive Bayes algorithm was widely used for spam filtering. The machine counted the number of "viagra" mentions in spam and in normal mail, then multiplied the two probabilities using the Bayes equation, summed the results and yay, we got machine learning.
Later, spammers learned how to deal with Bayesian filters by adding lots of "good" words at the end of the email. Ironically, the method was called Bayesian poisoning. It went down in history as the most elegant and first practically useful approach, though other algorithms are now used for spam filtering.
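As a rough sketch of that counting idea (not the historical implementation), here's what a Naive Bayes spam filter looks like with scikit-learn; the emails and labels are made up for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny hypothetical dataset: 1 = spam, 0 = normal mail
emails = [
    "cheap viagra now", "viagra best price", "meeting at noon",
    "lunch tomorrow?", "win money fast", "project report attached",
]
labels = [1, 1, 0, 0, 1, 0]

# Word counts are the features the Bayes formula works with
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

model = MultinomialNB().fit(X, labels)
print(model.predict(vectorizer.transform(["win cheap viagra"])))   # -> [1]
```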
Here's another practical example of classification. Let's say you need some credit money. How will the bank know whether you'll pay it back or not? There's no way to know for sure. But the bank has lots of profiles of people who took money before. It has data about age, education, occupation, salary and – most importantly – whether they paid the money back. Or not.
With that data, we can teach the machine to find the patterns and get the answer. Finding an answer isn't the issue. The issue is that the bank can't blindly trust the machine's answer. What if there's a system failure, a hacker attack, or a quick fix from a drunk senior?
To deal with it, we have Decision Trees. All the data is automatically divided into yes/no questions. They can sound a bit weird from a human perspective, e.g., does the borrower earn more than $128.12? Still, the machine comes up with whatever question splits the data best at each step.
That's how a tree is made. The higher the branch, the broader the question. Any analyst can take it and explain it afterward. He may not understand it, but he can explain it easily! (typical analyst)
Trees are widely used in high-responsibility spheres: diagnostics, medicine, and finance.
The two most popular algorithms for forming the trees are CART and C4.5.
Pure decision trees are rarely used now. However, they often set the basis for large systems, and their ensembles even work better than neural networks. We'll talk about that later.
When you google something, it's precisely a bunch of dumb trees that are searching for and ranking the answers for you. Search engines love them because they're fast.
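To make the credit example concrete, here's a minimal decision tree sketch with scikit-learn; the borrower profiles are invented, and a real bank would obviously use far more features:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical borrower profiles: [age, yearly salary in thousands of dollars]
profiles = [[25, 30], [40, 80], [35, 20], [50, 120], [23, 15], [45, 60]]
paid_back = [0, 1, 0, 1, 0, 1]   # 1 = repaid the loan, 0 = did not

tree = DecisionTreeClassifier(max_depth=2).fit(profiles, paid_back)

# The learned yes/no questions can be printed and handed to an analyst
print(export_text(tree, feature_names=["age", "salary"]))
print(tree.predict([[30, 70]]))   # will this applicant pay the loan back?
```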
Support Vector Machines (SVM) are rightfully the most popular method of classical classification. They have been used to classify everything in existence: plants by type, faces in photos, documents by category, etc.
The idea behind SVM is simple – it tries to draw two lines between the categories with the largest margin between them. It's more evident in a picture.
There's one very useful side of classification — anomaly detection. When a feature does not fit any of the classes, we highlight it. Now it's used in medicine: on MRI scans, computers highlight all the suspicious areas or deviations in the test. Stock markets use it to detect abnormal behavior of traders and to find insiders. When teaching the computer the right things, we automatically teach it what things are wrong.
Today, neural networks are more frequently used for classification. Well, that's what they were created for.
The rule of thumb is: the more complex the data, the more complex the algorithm. For text, numbers, and tables, I'd choose the classical approach. The models are smaller there, they learn faster and work more clearly. For pictures, video and all other complicated big-data things, I'd definitely look at neural networks.
Only five years ago you could find a face classifier built on SVM. Now you can choose from hundreds of pre-trained networks. Nothing has changed for spam filters, though. They are still written with SVM. And there's no good reason to switch from it anywhere.
Regression
"Draw a line through these dots. Yep, that's the machine learning"
Today this is used for:
Stock price forecast
Demand and sales volume analysis
Medical diagnosis
Any number-time correlations
Popular algorithms are Linear and Polynomial regressions.
Regression is basically classification where we forecast a number instead of a category. Examples are car price by its mileage, traffic by time of the day, demand volume by the growth of the company, etc. Regression is perfect when something depends on time.
Everyone who works with finance and analysis loves regression. It's even built into Excel. And it's super smooth inside — the machine simply tries to draw a line that indicates average correlation. Though, unlike a person with a pen and a whiteboard, the machine does it with mathematical accuracy, calculating the average interval to every dot.
When the line is straight — it's a linear regression, when it's curved – polynomial. These are two major types of regression. The other ones are more exotic. Logistic regression is a black sheep in the flock. Don't let it trick you, as it's a classification method, not regression.
It's okay to mix up regression and classification, though. Many classifiers turn into regression after some tuning. We can not only define the class of an object but also remember how close it is to that class. Here comes regression.
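Here's a tiny sketch of both flavors, using numpy to fit a straight line and a curve to some made-up traffic numbers (the data is purely illustrative):

```python
import numpy as np

# Hypothetical data: traffic volume by hour of the day
hours = np.array([0, 4, 8, 12, 16, 20, 23])
traffic = np.array([5, 3, 60, 40, 70, 30, 10])

# Straight line = linear regression, curved line = polynomial regression
linear = np.poly1d(np.polyfit(hours, traffic, deg=1))
curved = np.poly1d(np.polyfit(hours, traffic, deg=3))

print(linear(10), curved(10))   # both predict the traffic at 10 a.m.
```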
1.2 Unsupervised learning
Unsupervised learning was invented a bit later, in the '90s. It is used less often, but sometimes we simply have no choice.
Labeled data is a luxury. But what if I want to create, let's say, a bus classifier? Should I manually take photos of a million fucking buses on the street and label each of them? No way, that will take a lifetime, and I still have so many games not played on my Steam account.
There's a little hope for capitalism in this case. Thanks to social stratification, we have millions of cheap workers and services like Mechanical Turk who are ready to complete your task for $0.05. And that's how things usually get done here.
Or you can try to use unsupervised learning. But I can't remember any good practical application of it. It's usually useful for exploratory data analysis but not as the main algorithm. A specially trained meatbag with an Oxford degree feeds the machine a ton of garbage and watches it. Are there any clusters? No. Any visible relations? No. Well, continue then. You wanted to work in data science, right?
Clustering
"Divides objects based on unknown feature. Machine chooses the best way"
Nowadays used:
For market segmentation (types of customers, loyalty)
To merge close points on the map
For image compression
To analyze and label new data
To detect abnormal behavior
Popular algorithms: K-means clustering, Mean-Shift, DBSCAN
Clustering is classification with no predefined classes. It's like dividing socks by color when you don't remember all the colors you have. A clustering algorithm tries to find similar objects (by some features) and merge them into a cluster. Those which have lots of similar features are joined into one class. With some algorithms, you can even specify the exact number of clusters you want.
An excellent example of clustering is markers on web maps. When you're looking for all vegan restaurants around, the clustering engine groups them into blobs with a number. Otherwise, your browser would freeze trying to draw all three million vegan restaurants in that hipster downtown.
Apple Photos and Google Photos use more complex clustering. They're looking for faces in photos to create albums of your friends. The app doesn't know how many friends you have or how they look, but it's trying to find the common facial features. Typical clustering.
Another popular use is image compression. When saving an image to PNG, you can set the palette to, let's say, 32 colors. It means clustering will find all the "reddish" pixels, calculate the "average red" and set it for all the red pixels. Fewer colors — smaller file size — profit!
However, you may have a problem with colors like cyan. Is it green or blue? Here comes the K-Means algorithm.
It randomly sets 32 color dots in the palette. Those are the centroids. The remaining points are assigned to the nearest centroid. Thus, we get kind of galaxies around these 32 colors. Then we move each centroid to the center of its galaxy and repeat until the centroids stop moving.
All done. The clusters are defined, stable, and there are exactly 32 of them.
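A minimal sketch of that palette trick with scikit-learn's KMeans, assuming random pixels stand in for a real image:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical image: 10,000 pixels, each an (R, G, B) point
pixels = np.random.randint(0, 256, size=(10_000, 3))

# Find 32 centroids, i.e. 32 "average" colors, exactly as described above
kmeans = KMeans(n_clusters=32, n_init=10).fit(pixels)

# Replace every pixel with its nearest centroid: a 32-color palette
palette = kmeans.cluster_centers_.astype(np.uint8)
compressed = palette[kmeans.labels_]
print(compressed.shape)   # still 10,000 pixels, but only 32 distinct colors
```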
Searching for centroids is convenient. Though, in real life clusters are not always circles. Let's imagine you're a geologist and you need to find some similar minerals on a map. In that case, the clusters can be weirdly shaped and even nested. Also, you don't even know how many of them to expect. 10? 100?
K-means does not fit here, but DBSCAN can be helpful. Let's say our dots are people on a town square. Find any three people standing close to each other and ask them to hold hands. Then tell them to start grabbing the hands of those neighbors they can reach. And so on, and so on, until no one else can take anyone's hand. That's our first cluster. Repeat the process until everyone is clustered. Done.
A nice bonus: a person who has no one to hold hands with is an anomaly.
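The hand-holding story maps almost one-to-one onto scikit-learn's DBSCAN; here's a sketch with invented coordinates, where eps is the arm's reach and min_samples is the starting trio:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical people on a town square, as (x, y) coordinates in meters
people = np.array([[0, 0], [1, 0], [0, 1], [10, 10], [10, 11], [50, 50]])

# eps = how far you can reach to grab a hand, min_samples = the starting trio
labels = DBSCAN(eps=2.0, min_samples=3).fit_predict(people)

print(labels)   # points labelled -1 found no hands to hold: the anomalies
```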
It all looks cool in motion.
Just like classification, clustering can be used to detect anomalies. Does a user behave abnormally after signing up? Let the machine ban him temporarily and create a ticket for support to check it. Maybe it's a bot. We don't even need to know what "normal behavior" is; we just upload all user actions to our model and let the machine decide whether it's a "typical" user or not.
This approach works not that well compared to the classification one, but it never hurts to try.
Dimensionality Reduction (Generalization)
"Assembles specific features into more high-level ones"
Nowadays is used for:
Recommender systems (★)
Beautiful visualizations
Topic modeling and similar document search
Fake image analysis
Risk management
Popular algorithms: Principal Component Analysis (PCA), Singular Value Decomposition (SVD), Latent Dirichlet allocation (LDA), Latent Semantic Analysis (LSA, pLSA, GLSA), t-SNE (for visualization)
Previously these methods were used by hardcore data scientists, who had to find "something interesting" in huge piles of numbers. When Excel charts didn't help, they forced machines to find the patterns. That's how they got Dimensionality Reduction or Feature Learning methods.
Projecting 2D-data to a line (PCA)
It is always convenient for people to work with abstractions, not a bunch of fragmented features. For example, we can merge all dogs with triangular ears, long noses, and big tails into a nice abstraction — "shepherd". Yes, we're losing some information about the specific shepherds, but the new abstraction is much more useful for naming and explaining purposes. As a bonus, such "abstracted" models learn faster, overfit less and use a smaller number of features.
These algorithms became an amazing tool for Topic Modeling. We can abstract from specific words to their meanings. This is what Latent Semantic Analysis (LSA) does. It is based on how frequently you see a word on a given topic. There are more tech terms in tech articles, for sure. The names of politicians are mostly found in political news, etc.
Yes, we could just make clusters from all the words in the articles, but we would lose all the important connections (for example, the same meaning of battery and accumulator in different documents). LSA handles that properly; that's why it's called "latent semantic".
So we need to connect the words and documents into one feature to keep these latent connections (hence the name of the method). It turns out that Singular Value Decomposition (SVD) nails this task, revealing useful topic clusters from words that are seen together.
Recommender systems and collaborative filtering are another super-popular use of dimensionality reduction. It seems that if you use it to abstract user ratings, you get a great system to recommend movies, music, games and whatever you want.
It's barely possible to fully understand this machine abstraction, but it's possible to see some correlations on a closer look. Some of them correlate with a user's age — kids play Minecraft and watch cartoons more; others correlate with movie genre or user hobbies.
The machine gets these high-level concepts even without understanding them, based only on the knowledge of user ratings. Nicely done, Mr. Computer. Now we can write a thesis on why bearded lumberjacks love My Little Pony.
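As a rough illustration of the ratings trick, here's TruncatedSVD (a scikit-learn SVD variant) squeezing a tiny invented user-by-movie ratings table into two latent "taste" features:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Hypothetical user x movie ratings (0 = not rated)
ratings = np.array([
    [5, 4, 0, 1, 0],   # likes cartoons
    [4, 5, 1, 0, 0],
    [0, 1, 5, 4, 5],   # likes horror
    [1, 0, 4, 5, 4],
])

# Compress five movies into two latent "taste" features per user
svd = TruncatedSVD(n_components=2)
user_tastes = svd.fit_transform(ratings)   # shape (4, 2)
movie_tastes = svd.components_.T           # shape (5, 2)

# Users and movies that point the same way in this space fit together well
print(np.round(user_tastes @ movie_tastes.T, 1))   # rough rating estimates
```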
Association rule learning
"Look for patterns in the orders' stream"
Nowadays is used:
To forecast sales and discounts
To analyze goods bought together
To place the products on the shelves
To analyze web surfing patterns
Popular algorithms: Apriori, Eclat, FP-growth
This includes all the methods to analyze shopping carts, automate marketing strategy, and other event-related tasks. When you have a sequence of something and want to find patterns in it — try these thingys.
Say, a customer takes a six-pack of beer and goes to the checkout. Should we place peanuts on the way? How often do people buy them together? Yes, it probably works for beer and peanuts, but what other sequences can we predict? Can a small change in the arrangement of goods lead to a significant increase in profits?
Same goes for e-commerce. The task is even more interesting there — what customer is going to buy next time?
No idea why rule learning seems to be the least elaborated category of machine learning. Classical methods are based on a head-on look through all the bought goods using trees or sets. Algorithms can only search for patterns, but cannot generalize or reproduce them on new examples.
In the real world, every big retailer builds its own proprietary solution, so nooo revolutions here for you. The highest level of tech here is recommender systems. Though, I may not be aware of a breakthrough in the area. Let me know in the comments if you have something to share.
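Still, the first step of any Apriori-style method is easy to sketch by hand: count which items show up together often enough. The carts and the support threshold below are made up for illustration:

```python
from collections import Counter
from itertools import combinations

# Hypothetical shopping carts
carts = [
    {"beer", "peanuts", "diapers"},
    {"beer", "peanuts"},
    {"bread", "milk"},
    {"beer", "peanuts", "milk"},
]

# Count how often every pair of goods appears in the same cart
pair_counts = Counter()
for cart in carts:
    for pair in combinations(sorted(cart), 2):
        pair_counts[pair] += 1

# Keep pairs that appear in at least half of all carts (the "support" threshold)
min_support = len(carts) / 2
frequent = {pair: n for pair, n in pair_counts.items() if n >= min_support}
print(frequent)   # {('beer', 'peanuts'): 3}
```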
Part 2. Reinforcement Learning
"Throw a robot into a maze and let it find an exit"
Nowadays used for:
Self-driving cars
Robot vacuums
Games
Automating trading
Enterprise resource management
Popular algorithms: Q-Learning, SARSA, DQN, A3C, Genetic algorithm
Finally, we get to something that looks like real artificial intelligence. In lots of articles, reinforcement learning is placed somewhere in between supervised and unsupervised learning. They have nothing in common! Is this because of the name?
Reinforcement learning is used in cases when your problem is not related to data at all, but you have an environment to live in. Like a video game world or a city for a self-driving car.
Neural network plays Mario
Knowledge of all the road rules in the world will not teach the autopilot how to drive on the roads. Regardless of how much data we collect, we still can't foresee all the possible situations. This is why its goal is to minimize error, not to predict all the moves.
Surviving in an environment is the core idea of reinforcement learning. Throw a poor little robot into real life, punish it for errors and reward it for right deeds. The same way we teach our kids, right?
A more effective way here is to build a virtual city and let the self-driving car learn all its tricks there first. That's exactly how we train autopilots right now. Create a virtual city based on a real map, populate it with pedestrians and let the car learn to kill as few people as possible. When the robot is reasonably confident in this artificial GTA, it's freed to test on the real streets. Fun!
There are two different approaches — Model-Based and Model-Free.
Model-Based means that the car needs to memorize a map or its parts. That's a pretty outdated approach, since it's impossible for the poor self-driving car to memorize the whole planet.
In Model-Free learning, the car doesn't memorize every movement but tries to generalize situations and act rationally while obtaining a maximum reward.
Remember the news about an AI beating a top player at the game of Go? This happened even though, shortly before, it had been proved that the number of combinations in this game is greater than the number of atoms in the universe.
This means the machine could not remember all the combinations and thereby win at Go (as it did at chess). At each turn, it simply chose the best move for each situation, and it did well enough to outplay a human meatbag.
This approach is the core concept behind Q-learning and its derivatives (SARSA & DQN). 'Q' in the name stands for "Quality", as the robot learns to perform the most "qualitative" action in each situation, and all the situations are memorized as a simple Markovian process.
Such a machine can test billions of situations in a virtual environment, remembering which solutions led to greater reward. But how can it distinguish a previously seen situation from a completely new one? If a self-driving car is at a road crossing and the traffic light turns green, does that mean it can go now? What if there's an ambulance rushing along a street nearby?
The answer today is "no one knows". There's no easy answer. Researchers are constantly searching for one, but meanwhile they only find workarounds. Some would hardcode all the situations manually, which lets them solve exceptional cases like the trolley problem. Others would go deep and let neural networks do the job of figuring it out. This led us to the evolution of Q-learning called Deep Q-Network (DQN). But it isn't a silver bullet either.
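For a feel of what Q-learning actually does, here's a minimal tabular sketch; a DQN would swap the table for a neural network, and the states, actions and constants here are all hypothetical:

```python
import random
from collections import defaultdict

# Q-table: the learned "quality" of each (state, action) pair
Q = defaultdict(float)
actions = ["left", "right", "up", "down"]
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def choose_action(state):
    """Mostly pick the best known action, sometimes explore a random one."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Core Q-learning rule: nudge Q towards reward plus the best future value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```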
Reinforcement Learning for an average person would look like a real artificial intelligence. Because it makes you think wow, this machine is making decisions in real life situations! This topic is hyped right now, it's advancing with incredible pace and intersecting with a neural network to clean your floor more accurate. Amazing world of technologies!
Off-topic. When I was a student, genetic algorithms (the link has a cool visualization) were really popular. The idea is to throw a bunch of robots into a single environment and make them try to reach the goal until they die. Then we pick the best ones, cross them, mutate some genes and rerun the simulation. After a few billion years we'll get an intelligent creature. Probably. Evolution at its finest.
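Just to make the aside concrete, here's a toy sketch of that loop in Python: evolve a bit-string towards all ones. The population size, mutation rate and fitness function are all made up for illustration:

```python
import random

# Toy genetic algorithm: select the best, cross them, mutate, repeat.
GENOME_LEN, POP, GENERATIONS = 20, 50, 100

def fitness(genome):
    return sum(genome)                       # more ones = better

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                # pick the best ones
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP)]
```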
Genetic algorithms are considered part of reinforcement learning, and they have the most important feature, proven by decades of practice: no one gives a shit about them.
Humanity still hasn't come up with a task where they'd be more effective than other methods. But they are great for student experiments and let people get their university supervisors excited about "artificial intelligence" without too much labor. And YouTube loves them as well.
Part 3. Ensemble Methods
"Bunch of stupid trees learning to correct errors of each other"
Nowadays used for:
Everything that fits the classical algorithm approaches (but works better)
Search systems (★)
Computer vision
Object detection
Popular algorithms: Random Forest, Gradient Boosting
It's time for modern, grown-up methods. Ensembles and neural networks are the two main fighters paving our path to the singularity. Today they produce the most accurate results and are widely used in production.
However, neural networks get all the hype today, while words like "boosting" or "bagging" just scare the hipsters on TechCrunch.
Despite all their effectiveness, the idea behind them is overly simple: if you take a bunch of inefficient algorithms and force them to correct each other's mistakes, the overall quality of the system will be higher than that of even the best individual algorithm.
You'll get even better results if you take the most unstable algorithms, the ones that predict completely different results given small noise in the input data. Like regression and decision trees. These algorithms are so sensitive that even a single outlier in the input data can make the model go mad.
In fact, this is what we need.
We can use any algorithm we know to create an ensemble. Just throw in a bunch of classifiers, spice it up with regression, and don't forget to measure the accuracy. From my experience: don't even try Bayes or kNN here. Although "dumb", they are really stable. That's boring and predictable. Like your ex.
That said, there are three battle-tested methods to create ensembles.
Stacking. The output of several different, parallel models is passed as input to a last one, which makes the final decision. Like that girl who asks her girlfriends whether to go out with you in order to make the final decision herself.
Note the word "different": mixing the same algorithm on the same data would make no sense. The choice of algorithms is completely up to you. However, for the final decision-making model, regression is usually a good choice.
In my experience, stacking is less popular in practice, because the two other methods usually give better accuracy.
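If you still want to try it, a minimal sketch with scikit-learn might look like this. The dataset is synthetic and the choice of base models is arbitrary; the final decision-maker is a regression model, in the spirit of the advice above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Two different "girlfriends" give their opinions;
# a logistic regression makes the final call.
X, y = make_classification(n_samples=500, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier()),
        ("svm", SVC()),
    ],
    final_estimator=LogisticRegression(),   # the decision-making model
)
stack.fit(X, y)
```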
Bagging, aka Bootstrap AGGregatING. Use the same algorithm, but train it on different subsets of the original data. At the end, just average the answers.
Data points in the random subsets may repeat. For example, from a set like "1-2-3" we can get subsets like "2-2-3", "1-2-2", "3-1-2" and so on. We use these new datasets to train the same algorithm several times and then predict the final answer via simple majority voting.
The most famous example of bagging is the Random Forest algorithm, which is simply bagging applied to decision trees (illustrated above). When you open your phone's camera app and see it drawing boxes around people's faces, that's probably Random Forest at work. A neural network would be too slow to run in real time, yet bagging is ideal here, as it can compute trees in parallel on all the shaders of a video card or on those fancy new ML processors.
In some tasks, the ability of the Random Forest to run in parallel is more important than a small loss in accuracy compared to boosting, for example. Especially in real-time processing. There is always a trade-off.
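Here's a minimal sketch of both flavors with scikit-learn, again on synthetic data and with made-up ensemble sizes:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Bagging: the same decision tree trained on random bootstrap subsets,
# final answer decided by majority vote across the trees.
X, y = make_classification(n_samples=500, random_state=0)

bagged_trees = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100)
bagged_trees.fit(X, y)

# Random Forest: essentially the same idea, plus random feature
# subsets at each split, and trivially parallelizable.
forest = RandomForestClassifier(n_estimators=100).fit(X, y)
```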
Boosting. Algorithms are trained one by one, sequentially, with each next one paying the most attention to the data points that were mispredicted by the previous one. Repeat until you are happy.
Same as in bagging, we use subsets of our data, but this time they are not randomly generated. Now, each subsample includes a portion of the data the previous algorithm failed to process. Thus, we make each new algorithm learn to fix the errors of the previous one.
The main advantage here is a classification precision so high it's illegal in some countries, one that all the cool kids can envy. The con was already called out: it doesn't parallelize. But it's still faster than neural networks. It's like a race between a dump truck and a racing car. The truck can do more, but if you want to go fast, take the car.
If you want a real example of boosting, open Facebook or Google and start typing in a search query. Can you hear an army of trees roaring and smashing together to sort the results by relevance? That's it; they are using boosting.
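For completeness, a boosting sketch in the same scikit-learn style, on synthetic data with invented hyperparameters. Unlike the forest above, these trees are built one after another, each focusing on what the previous ones got wrong:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Sequential trees: each new tree corrects the residual errors of the ensemble so far.
boosted = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
boosted.fit(X, y)
print(boosted.score(X, y))   # training accuracy, just to see that it learned something
```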
Part 4. Neural Networks and Deep Learning
"We have a thousand-layer network, dozens of video cards, but still no idea where to use it. Let's generate cat pics!"
Used today for:
Replacement of all algorithms above
Object identification on photos and videos
Speech recognition and synthesis
Image processing, style transfer
Machine translation
Popular architectures: Perceptron, Convolutional Network (CNN), Recurrent Networks (RNN), Autoencoders
If no one has ever tried to explain neural networks to you using "human brain" analogies, you're a lucky person. Tell me your secret. But first, let me explain it the way I like.
Any neural network is basically a collection of neurons and connections between them. A neuron is a function with a bunch of inputs and one output. Its task is to take all the numbers from its inputs, apply a function to them and send the result to the output.
Here is an example of a simple but useful real-life neuron: sum up all the numbers from the inputs and, if the sum is bigger than N, output 1. Otherwise, output zero.
Connections are like channels between neurons. They connect the outputs of one neuron to the inputs of another so they can send numbers to each other. Each connection has only one parameter: its weight. It's like the strength of the connection for a signal. When the number 10 passes through a connection with a weight of 0.5, it turns into 5.
These weights tell the neuron to respond more to one input and less to another. The weights are adjusted during training; that's how the network learns. Basically, that's all there is to it.
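Here's that "sum and threshold" neuron as a few lines of Python, with a made-up threshold purely for illustration:

```python
# One neuron: weighted sum of inputs, fire 1 if it exceeds the threshold.
def neuron(inputs, weights, threshold=1.0):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

print(neuron([10, 2], [0.5, 0.1]))   # 10*0.5 + 2*0.1 = 5.2 -> fires 1
```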
To prevent the network from falling into anarchy, the neurons are linked in layers, not randomly. Within a layer, neurons are not connected to each other, but they are connected to the neurons of the next and previous layers. Data in the network moves strictly in one direction: from the inputs of the first layer to the outputs of the last.
If you throw in a sufficient number of layers and set the weights correctly, you get the following: apply, say, the image of a handwritten digit 4 to the input, the black pixels activate the associated neurons, those activate the next layers, and so on and on, until the very output neuron in charge of the four lights up. The result is achieved.
In real-life programming, nobody writes out neurons and connections. Instead, everything is represented as matrices and computed with matrix multiplication for better performance. Two of my favorite videos describe the whole process in an easily digestible way using the example of recognizing handwritten digits. Watch them if you want to figure this out.
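As a rough sketch of that matrix view, here's a toy forward pass in NumPy. The layer sizes are arbitrary; the 784 inputs only hint at a flattened 28x28 digit image, and the weights here are random, not trained:

```python
import numpy as np

# Toy forward pass: input vector -> hidden layer -> 10 output scores.
x = np.random.rand(784)                      # flattened input image
W1, b1 = np.random.randn(784, 16), np.zeros(16)
W2, b2 = np.random.randn(16, 10), np.zeros(10)

hidden = np.maximum(0, x @ W1 + b1)          # layer 1: matrix multiply + ReLU
scores = hidden @ W2 + b2                    # layer 2: one score per digit 0-9
predicted_digit = int(np.argmax(scores))
```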
A network with multiple layers, where every neuron is connected to every neuron in the next layer, is called a perceptron (MLP) and is considered the simplest architecture for a novice. I haven't seen it used for solving real tasks in production.
After we've constructed a network, our task is to set the weights so the neurons react correctly to incoming signals. Now is the time to remember that we have data: samples of 'inputs' and their proper 'outputs'. We'll show our network a drawing of the same digit 4 and tell it, 'adapt your weights so that whenever you see this input, your output emits 4'.
To start with, all weights are assigned randomly. We show the network a digit, it emits a random answer (the weights are not right yet), and we compare how much this result differs from the correct one. Then we traverse the network backward, from outputs to inputs, and tell every neuron: 'hey, you activated here but you did a terrible job and everything went south from here on, so let's pay less attention to this connection and more to that one, mkay?'.
After a few hundred thousand such 'infer-check-punish' cycles, there is hope that the weights get corrected and act as intended. The scientific name for this approach is Backpropagation, or the 'method of backpropagating an error'. The funny thing is that it took twenty years to come up with this method. Before that, we still trained neural networks somehow.
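Here's a bare-bones sketch of that 'infer-check-punish' cycle for a single sigmoid neuron learning logical OR; a real multi-layer network repeats the same idea layer by layer. The data, learning rate and iteration count are all toy values:

```python
import numpy as np

# Toy data: learn the logical OR of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0               # start with random weights
lr = 1.0

for _ in range(1000):                        # the "infer-check-punish" cycle
    out = 1 / (1 + np.exp(-(X @ w + b)))     # infer
    error = out - y                          # check how wrong we were
    grad = error * out * (1 - out)           # propagate the error back through the sigmoid
    w -= lr * (X.T @ grad) / len(X)          # punish: adjust the weights
    b -= lr * grad.mean()
```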
My second favorite video describes this process in depth while still being very accessible.
A well-trained neural network can fake the work of any of the algorithms described in this chapter (and frequently does so more precisely). This universality is what made them widely popular. 'Finally, we have an architecture of the human brain,' they said; 'we just need to assemble lots of layers and train them on any possible data,' they hoped. Then the first AI winter started, then a thaw, and then another wave of disappointment.
It turned out that networks with a large number of layers required computation power unimaginable at the time. Nowadays any gamer's PC with a GeForce outperforms a datacenter from that era. Back then people had no hope of acquiring computation power like that, and neural networks were a huge bummer.
And then, about ten years ago, deep learning rose.
In 2012, a convolutional neural network scored an overwhelming victory in the ImageNet competition, and the world suddenly remembered the deep learning methods described back in the ancient '90s. Now we have video cards!
The difference between deep learning and classical neural networks lay in new training methods that could handle bigger networks. Nowadays only theoreticians would try to draw the line between which learning counts as deep and which as not-so-deep. We practitioners use popular 'deep' libraries like Keras, TensorFlow & PyTorch even when building a mini-network with five layers, just because they're better suited than all the tools that came before. And we just call them neural networks.
I'll tell you about the two main kinds used nowadays.
Convolutional Neural Networks (CNN)
Convolutional neural networks are all the rage right now. They are used to search for objects in photos and videos, for face recognition, style transfer, generating and enhancing images, creating effects like slow-mo, and improving image quality. Nowadays CNNs are used in every case that involves pictures and videos. Even in your iPhone, several of these networks go through your nudes to detect objects in them. If there is something to detect, heh.
[Image: detection results produced by Detectron, recently open-sourced by Facebook]
The problem with images was always the difficulty of extracting features from them. You can split a text into sentences and look up words' attributes in specialized vocabularies. But images had to be labeled manually to teach the machine where the cat's ears or tail were in each specific image. This approach was called 'handcrafting features' and used to be what almost everyone did.
There are lots of issues with handcrafting.
First of all, if a cat has its ears down or is turned away from the camera, you're in trouble: the neural network won't see a thing.
Secondly, try naming on the spot 10 different features that distinguish cats from other animals. I for one couldn't do it. Yet when a black blob rushes past me at night, even if I only catch it in the corner of my eye, I can definitely tell a cat from a rat. That's because people don't look only at ear shape or leg count; they account for lots of different features they don't even think about, and thus cannot explain them to the machine.
So the machine needs to learn such features on its own, building on top of basic lines. We'll do the following: first, we divide the whole image into 8x8 pixel blocks and assign to each the type of its dominant line: horizontal [-], vertical [|], or one of the diagonals [/]. It can happen that several are strongly visible; this happens too, and we are not always absolutely confident.
The output is several tables of sticks that are, in fact, the simplest features representing the objects' edges in the image. They are images in their own right, just built out of sticks. So we can once again take a block of 8x8 and see how they fit together. And again and again…
This operation is called convolution, which gave the method its name. Convolution can be represented as a layer of a neural network, because a neuron can act as any function.
When we feed our neural network lots of photos of cats, it automatically assigns bigger weights to the combinations of sticks it saw most frequently. It doesn't matter whether it's the straight line of a cat's back or a geometrically complicated object like a cat's face; something will activate strongly.
As the output, we put a simple perceptron that looks at the most activated combinations and, based on that, differentiates cats from dogs.
The beauty of this idea is that we now have a neural net that searches for the most distinctive features of the objects on its own. We don't need to pick them manually. We can feed it any number of images of any object just by googling billions of images of it, and our net will create feature maps from sticks and learn to differentiate any object on its own.
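A minimal Keras sketch of such a network might look like the following. The image size, layer sizes and the cats-vs-dogs framing are assumptions for illustration, not a recipe from the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Tiny CNN for a "cat or not-cat" style binary classifier on 64x64 RGB images.
model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),   # learns the "sticks": edges and corners
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),   # combines sticks into larger patterns
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),       # the "simple perceptron" on top
    layers.Dense(1, activation="sigmoid"),     # cat or not-cat
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```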
For this I even have a handy unfunny joke:
Give your neural net a fish and it will be able to detect fish for the rest of its life. Give your neural net a fishing rod and it will be able to detect fishing rods for the rest of its life…
Recurrent Neural Networks (RNN)
The second most popular architecture today. Recurrent networks gave us useful things like neural machine translation (here is my post about it), speech recognition and voice synthesis in smart assistants. RNNs are the best for sequential data like voice, text or music.
Remember Microsoft Sam, the old-school speech synthesizer from Windows XP? That funny guy built words letter by letter, trying to glue them together. Now look at Amazon Alexa or the Google Assistant. They don't just say the words clearly, they even place the accents correctly!
[YouTube embed: Neural Net is trying to speak]
That's because modern voice assistants are trained to speak not letter by letter, but in whole phrases at once. We take a bunch of voiced texts and train a neural network to generate an audio sequence as close as possible to the original speech.
In other words, we use the text as input and its audio as the desired output. We ask a neural network to generate some audio for the given text, then compare it with the original, correct the errors and try to get as close as possible to the ideal.
Sounds like a classical learning process. Even a perceptron is suitable for this. But how should we define its outputs? Firing one particular output for each possible phrase is obviously not an option.
Here we're helped by the fact that text, speech and music are sequences. They consist of consecutive units, like syllables. Each sounds unique but depends on the previous ones. Lose this connection and you get dubstep.
We can train the perceptron to generate these unique sounds, but how will it remember previous answers? So the idea was to add memory to each neuron and use it as an additional input on the next run. A neuron could make a note for itself: hey, we had a vowel here, the next sound should be higher (it's a very simplified example).
That's how recurrent networks appeared.
This approach had one huge problem: when all the neurons remembered their past results, the number of connections in the network became so huge that it was technically impossible to adjust all the weights.
When a neural network can't forget, it can't learn new things (people have the same flaw).
The first solution was simple: limit the neuron's memory. Say, to no more than the 5 most recent results. But that broke the whole idea.
A much better approach came later: use special cells, similar to computer memory. Each cell can record a number, read it, or reset it. They were called long short-term memory (LSTM) cells.
Now, when a neuron needs to set a reminder, it puts a flag in that cell. Like "there was a consonant in the word, next time use different pronunciation rules". When the flag is no longer needed, the cells are reset, leaving only the "long-term" connections of the classical perceptron. In other words, the network is trained not only to learn weights but also to set these reminders.
Simple, but it works!
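A tiny Keras sketch of a recurrent model in the same spirit: predict the next character given the previous ones. The vocabulary size, sequence length and layer sizes are all invented for illustration:

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len = 50, 40   # made-up: 50 characters, sequences of 40

model = keras.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 32),                 # each character becomes a vector
    layers.LSTM(128),                                 # the memory cells described above
    layers.Dense(vocab_size, activation="softmax"),   # probability of the next character
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```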
[YouTube embed: CNN + RNN = Fake Obama]
You can take speech samples from anywhere. BuzzFeed, for example, took Obama's speeches and trained a neural network to imitate his voice. As you can see, audio synthesis is already a simple task. Video still has issues, but it's only a matter of time.
There are many more network architectures in the wild. I recommend a good article called Neural Network Zoo, where almost all types of neural networks are collected and briefly explained.
The End: when is the war with the machines?
The main problem is that the question "when will the machines become smarter than us and enslave everyone?" is wrong from the start. There are too many hidden assumptions in it.
We say "become smarter than us" like we mean that there is a certain unified scale of intelligence. The top of which is a man, dogs are a bit lower, and stupid pigeons are hanging around at the very bottom.
That's wrong.
If that were the case, every human would have to beat every animal at everything, but that's not true. The average squirrel can remember a thousand hiding places with nuts; I can't even remember where my keys are.
So is intelligence a set of different skills rather than a single measurable value? Or is remembering the locations of nut stashes just not included in intelligence?
An even more interesting question to me: why do we believe that the capabilities of the human brain are limited? There are many popular graphs on the Internet where technological progress is drawn as an exponential curve and human capability as a constant. But is it?
Ok, multiply 1680 by 950 in your head right now. I know you won't even try, lazy bastards. But give you a calculator and you'll do it in two seconds. Does this mean the calculator just expanded the capabilities of your brain?
If so, can I continue to expand them with other machines? Like, use the notes in my phone so I don't have to remember a shitload of data? Oh, it seems I'm already doing that. I'm expanding the capabilities of my brain with machines.
Think about it. Thanks for reading.