#caff's writing resources
thecaffeinebookwarrior · 5 years ago
Text
Places To Post Original Fiction
1.)  Commaful -- a friendly and supportive writing community, smaller but denser than Wattpad, and far more active and engaging.
2.)  FictionPress -- original fiction’s answer to FanFiction.net.  If you’re familiar with that format, you’ll be familiar with this.  
3.)  Smashwords -- an ebook publishing platform that also welcomes short stories, and collections thereof.
4.)  WritersCafe -- old-school but solid, with an active community and plenty of contests/challenges to get the creative juices pumping.
5.)  Medium -- a place where you can post, essentially, anything and everything.  Articles and non-fiction are its biggest market, but fiction is welcome as well.
6.)  Booksie -- less community-based, with fewer interactions and comments.  However, it still attracts great talent, and can be great for authors who are shy and don’t want to get bombarded with interaction.
7.)  RoyalRoad -- a rich community, with a strong emphasis on mutual support between authors.  Focuses on web novels, fanfiction, and original stories.
8.)  FanStory -- an oldie but a goody.  Don’t be fooled by the name -- it seems to be predominantly original fiction, and offers contests with cash prizes. 
9.)  Young Writers Society -- as the name suggests, oriented towards writers in their teens and twenties, but is by no means exclusive to authors of this age bracket. 
10.)  Wattpad -- provides users with the opportunity to post original fiction and gain a loyal following.  It’s not for everyone, but some people swear by it.
On that note, you can also post original fiction to AO3 and FanFiction, but as they are predominantly for fan works, I decided not to include them on this list.  What’s your favorite way to post original fiction?
Happy writing, everybody!
4K notes · View notes
rangerslayer-97 · 3 years ago
Text
Things That Will Never Be Said
I got hit with more inspiration and curse my brain that I can’t write anything that isn’t angst. For the time being.
Time is set where my Knight Guardian main Violcrik is the Outlander and Commander of the Alliance. Set post-Echoes of Oblivion, during the Knight-only Alliance Alert "The Padawan Returns". Mentions of previous game choices during Shadow of Revan, Knights of the Fallen Empire, Eternal Throne and Onslaught.
~~~~~
WARNINGS: Angst, Hurt, Unrequited love, One-sided attraction, Minor emotional manipulation (I could be wrong, but I'm covering my bases)
~~~~
Summary: Lana comes to terms with the fact that she does have buried feelings for Commander Violcrik, but learns someone else has won the Commander’s heart.
~~~~~
Dark Advisor and Alliance Second-in-Command Sith Lord Lana Beniko, the former Minister of Sith Intelligence, walked the War Room as she sifted through data on her datapad from various individuals and contacts. The Commander, former Jedi Knight and Battle Master of the Jedi Order, Violcrik Baliss was taking a much deserved break after the battle of the Meridian Shipyard Complex on Corellia. Lana worries Violcrik hasn’t given herself enough time to relax. The Commander has always been on her feet, sometimes pulling all-nighters, drinking a lot of caff or taking a concerning amount of stims. Lana felt that after Corellia, Violcrik deserved the downtime after running between Ossus, Onderon and Mek-Sha. It was one battle after another. Unbeknownst to the Republic, the Knight’s home faction, Violcrik has chosen to align with the Sith Empire. It came as a surprise to Lana, but a welcome one; even Empress Acina has been made aware of the Commander’s choice. She is pleased that the Empire now has a spy in the Republic ranks. Lana did send a subtle warning not to overstep their bounds with the Commander, and that the Alliance is still an independent third party. Empress Acina heard the warning loud and clear.
The Sith Lord understands the Commander’s reasonings. A lot of them stem from what occurred during the war against the Eternal Empire. Violcrik often confided in her about her gradually crumbling faith in the Republic. These words were normally spoken when Theron Shan, former Republic SIS Agent, was out of earshot. They knew Theron would try to remind the Jedi that the Republic is still good. Violcrik told her she felt her faction had all but abandoned her, the Jedi Order had forsaken her, though she always felt like an outcast in the Order. Lana listened when the Commander revealed the crimes she committed for the ‘good of the Republic’, two of which were definite counts of war crimes. The Sith knew Violcrik wasn’t like the other Jedi; she was emotional, passionate, and would do what needed to be done. As a Dark Sider, Lana did find those attractive qualities. These qualities often put her at odds with the High Jedi Council. The Commander admitted she locked horns with the Council members more than she cared to count.
It is enough to say that Lana became Violcrik's go-to for private, personal talks. The Sith advisor is good at reading people, and she knows the Commander has a lot on her mind that she hasn't been able to get off her chest. Lana did suggest therapy at one point, but Violcrik laughed it off, claiming she didn't need to see a therapist, and that seeing one would ruin her image and reputation. It was never brought up again. The former Jedi did open up to her about her past, one Lana would never wish on anyone. Alderaan is a planet of wealth and snobbery, and being born out of wedlock can ruin a family name. Violcrik and her sisters were all but wiped off the family tree, none entitled to an inheritance by their father. From the way Violcrik spoke about her father, there was malice, there was anger and there was hatred. Such intense emotions almost made Lana dizzy.
During the war against the Eternal Empire, even up to now, Lana had taken time out of her schedule to teach Violcrik how to control her darkness. She won't deny there is a danger in having a rogue Dark Jedi running around the galaxy. The Sith Lord herself has witnessed, several times, the Commander giving in to her darkness, and many times seen her eyes change from deep blue to a deep orange that nearly glowed. Lana won't lie: there are some days she is afraid of what Violcrik is capable of if she lost control. Violcrik did prove, during her short stint as Empress, that she will resort to using fear and terror. The last time she got concerned was over the Commander's reliance on Valkorion's powers when he resided inside her mind. Of course, they did get at odds with each other when Violcrik lied to her about Valkorion sharing her mind. It took some days to get one another's trust back. Yet… then she remembers the Commander went out of her way to save her twice and refused to leave her side during the breakout with… that's beside the point. Violcrik saved her twice, and when the Alliance was set up, the Commander intended to speak to either her or Theron (long before he got banished after defeating the Order of Zildrog). Lana had a feeling Violcrik wanted to speak to her alone, but when Koth soured the mood, the Battle Master told them to forget about it and walked away.
The Dark Advisor knows things were left unsaid between them. Lana needed to know what it was. When they made love on Yavin IV after they defeated Revan, there was something between them. It wasn't just a physical attraction. While to some it may have appeared a one-night stand or a fling, it must not have been; otherwise Violcrik wouldn't have flirted with her during the disaster on Ziost. Though the timing was quite poor on Violcrik's end, the spark between them was there, fresh, a crackle of electricity that was about to burst into flame. While the Commander hadn't approached her to talk about her feelings, Lana respected that. She wasn't going to impose. Lana can't hold it in now after six years of waiting: she is in love with Commander Violcrik Baliss. She tried to deny the feelings when Violcrik didn't come to her, so she held the emotions in. Now, they can't be held in any longer. Maybe the Commander was scared to come forward and admit her feelings. It is scary territory, to open yourself up and give your heart to someone. Perhaps... maybe the Commander was waiting.
Lana had tried to deny her feelings, but now… she no longer can. Dark Advisor Lana Beniko is in love with Commander Violcrik Baliss. She's going to confess her feelings now. No more waiting. The Sith Lord turned to Teeseven, asking where the Commander is now. The astromech told her he had last seen her heading for the Force Enclave, which means she's nearby. Lana thanked Teeseven, shut off her datapad and headed for the Force Enclave. She followed the Force signature she felt, small amounts of Light being drowned under the heavy blanket of Darkness.
Her heart was beating fast as Lana ran through several different ways to confess her feelings and not sound like an idiot. There was an unusual skip in the Sith Lord's step; both Republic and Imperial troops didn't dare question what made the stoic advisor so happy. Lana made it to the entrance of the Enclave. Sana-Rae was off somewhere. The advisor was about to call out to the Commander, only to see… it was Knight Carsen. She and the former Emperor's Wrath, Lord Scourge, joined the Alliance after finally being rid of Valkorion and his previous incarnations, Vitiate and Tenebrae. The Commander and Carsen appeared to be talking. Lana couldn't hear what was said, but judging by the body language, something made the Commander's former Padawan disgustingly giggly, like a young Jedi Initiate.
Then the two stepped into each other's personal space. Lana's heart dropped like a heavy weight. She watched as the two embraced each other and… they kissed. The Commander and Carsen… kissed. So is this why Violcrik never let her feelings be known to her? Had they always belonged… to her? Then what were they? Friends? Friends with benefits? A fling? Is the Commander stupid!? Getting with Carsen, who is undoubtedly loyal to the Republic? Who is quite clearly oblivious to the Commander's true loyalty!? The Commander who is happily turning against her own faction! Who severely weakened the Republic fleet en route to Corellia and destroyed their newly built shipyard that could have tipped the war during an ongoing resource crisis!?
Fair enough; let Carsen be the one broken when the Commander's betrayal comes to light. Violcrik will end up running back to her. No, she has to stop these thoughts. Lana is angry that the Commander made her feel she was nothing more than a fling. At the same time, considering the dark period the Commander went through after being awoken from carbonite, she can't do anything but respect the choice. Lana won't resort to pettiness; she will respect Violcrik and her choices, and who she gives her heart to. It appears now she must step back. After six years, the Commander deserves this. This… this happiness.
No, not deserve. Deserve is a crutch for the weak. Lana will take happiness where she can find it. She will not blame the Commander for finding hers.
The Dark Advisor silently slips away from the entrance of the Force Enclave, her head bowed down and a single tear slipping down her cheek.
9 notes · View notes
sciforce · 5 years ago
Text
How to Find a Perfect Deep Learning Framework
Many courses and tutorials offer to guide you through building a deep learning project. From an educational point of view, this is certainly worthwhile: try to implement a neural network from scratch, and you’ll understand a lot of things. However, such an approach does not prepare us for real life, where you can’t spare weeks waiting for your new model to be built. At this point, you can look for a deep learning framework to help you.
A deep learning framework, like a machine learning framework, is an interface, library or tool that lets you build deep learning models easily and quickly, without getting into the details of the underlying algorithms. Frameworks provide a clear and concise way of defining models with the help of a collection of pre-built and optimized components.
Briefly speaking, instead of writing hundreds of lines of code, you can choose a suitable framework that will do most of the work for you.
Most popular DL frameworks
The state-of-the-art frameworks are quite new; most of them were released after 2014. They are open-source and still undergoing active development. They vary in the number of examples available, the frequency of updates and the number of contributors. Besides, though you can build most types of networks in any deep learning framework, each still has a specialization, and they usually differ in the way they expose functionality through their APIs.
Here are the most popular frameworks:
TensorFlow
The framework that we mention all the time, TensorFlow, is a deep learning framework created in 2015 by the Google Brain team. It has a comprehensive and flexible ecosystem of tools, libraries and community resources. TensorFlow has pre-written code for most of the complex deep learning models you’ll come across, such as Recurrent Neural Networks and Convolutional Neural Networks.
The most popular use cases of TensorFlow are the following:
NLP applications, such as language detection, text summarization and other text processing tasks;
Image recognition, including image captioning, face recognition and object detection;
Sound recognition;
Time series analysis;
Video analysis, and much more.
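To make the "pre-built components" point concrete, here is a minimal sketch of a small classifier assembled with TensorFlow's high-level Keras API. The layer sizes and the random stand-in data are illustrative only, not taken from any of the use cases above:

```python
import numpy as np
import tensorflow as tf

# A tiny feed-forward classifier built from pre-written components:
# 10 input features, one hidden layer, 3 output classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random stand-in data, just to show the training call.
x = np.random.rand(100, 10).astype("float32")
y = np.random.randint(0, 3, size=(100,))
model.fit(x, y, epochs=1, verbose=0)

print(model.output_shape)  # (None, 3)
```

A handful of lines replaces what would otherwise be hundreds of lines of hand-written layer and gradient code.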
TensorFlow is extremely popular within the community because it supports multiple languages, such as Python, C++ and R, has extensive documentation and walkthroughs for guidance and updates regularly. Its flexible architecture also lets developers deploy deep learning models on one or more CPUs (as well as GPUs).
For inference, developers can either use TensorFlow-TensorRT integration to optimize models within TensorFlow, or export TensorFlow models, then use NVIDIA TensorRT’s built-in TensorFlow model importer to optimize in TensorRT.
Installing TensorFlow is also a pretty straightforward task.
For CPU-only:
pip install tensorflow
For CUDA-enabled GPU cards:
pip install tensorflow-gpu
Learn more:
An Introduction to Implementing Neural Networks using TensorFlow
TensorFlow tutorials
PyTorch
Facebook introduced PyTorch in 2017 as a successor to Torch, a popular deep learning framework released in 2011, based on the programming language Lua. In essence, PyTorch took Torch features and implemented them in Python. Its flexibility and coverage of multiple tasks have pushed PyTorch to the foreground, making it a competitor to TensorFlow.
PyTorch covers all sorts of deep learning tasks, including:
Images, including detection, classification, etc.;
NLP-related tasks;
Reinforcement learning.
Instead of predefined graphs with specific functionalities, PyTorch allows developers to build computational graphs on the go, and even change them during runtime. PyTorch provides tensor computations and uses dynamic computation graphs. PyTorch's autograd package, for instance, builds computation graphs from tensors and automatically computes gradients.
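A small sketch of that autograd behavior (the expression below is an arbitrary example): the graph is built dynamically as the computation runs, then walked backwards to produce gradients.

```python
import torch

# Build a computation dynamically; autograd records the graph as it runs.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x          # y = x^2 + 3x

y.backward()                # walk the recorded graph, compute dy/dx

# dy/dx = 2x + 3, which is 7 at x = 2
print(x.grad.item())  # 7.0
```

Because the graph is rebuilt on every forward pass, the same code can contain Python loops and conditionals that change the graph from one iteration to the next.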
For inference, developers can export to ONNX, then optimize and deploy with NVIDIA TensorRT.
The drawback of PyTorch is the dependence of its installation process on the operating system, the package you want to use to install PyTorch, the tool/language you’re working with, CUDA and others.
Learn more:
Learn How to Build Quick & Accurate Neural Networks using PyTorch — 4 Awesome Case Studies
PyTorch tutorials
Keras
Keras was created in 2014 by researcher François Chollet with an emphasis on ease of use through a unified and often abstracted API. It is an interface that can run on top of multiple frameworks such as MXNet, TensorFlow, Theano and Microsoft Cognitive Toolkit using a high-level Python API. Unlike TensorFlow, Keras is a high-level API that enables fast experimentation and quick results with minimum user actions.
Keras has multiple architectures for solving a wide variety of problems; the most popular are:
image recognition, including image classification, object detection and face recognition;
NLP tasks, including chatbot creation.
Keras models can be classified into two categories:
Sequential: The layers of the model are defined in a sequential manner, so when a deep learning model is trained, these layers are implemented sequentially.
Keras functional API: This is used for defining complex models, such as multi-output models or models with shared layers.
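As a sketch of the two styles, here is the same toy single-output network (sizes are arbitrary, not from the post) written both ways:

```python
from tensorflow import keras

# Sequential style: layers stacked strictly in order.
seq = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),
])

# Functional style: each layer is called on the previous output.
# Same topology here, but this style also allows branches,
# multiple outputs, and shared layers.
inputs = keras.Input(shape=(8,))
hidden = keras.layers.Dense(16, activation="relu")(inputs)
outputs = keras.layers.Dense(1)(hidden)
func = keras.Model(inputs, outputs)

print(seq.output_shape, func.output_shape)  # (None, 1) (None, 1)
```

For a plain stack of layers the Sequential form is shorter; the functional form pays off as soon as the model stops being a straight line.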
Keras is installed easily with just one line of code:
pip install keras
Learn more:
The Ultimate Beginner’s Guide to Deep Learning in Python
Keras Tutorial: Deep Learning in Python
Optimizing Neural Networks using Keras
Caffe
The Caffe deep learning framework was created by Yangqing Jia at the University of California, Berkeley in 2014, and has led to forks like NVCaffe and new frameworks like Facebook’s Caffe2 (which has since been merged into PyTorch). It is geared towards image processing and, unlike the previous frameworks, its support for recurrent networks and language modeling is not as strong. However, Caffe shows the highest speed of processing and learning from images.
The pre-trained networks, models and weights collected in the Caffe Model Zoo can be applied to deep learning problems such as:
Simple regression
Large-scale visual classification
Siamese networks for image similarity
Speech and robotics applications
Besides, Caffe provides solid support for interfaces like C, C++, Python, MATLAB as well as the traditional command line.
To optimize and deploy models for inference, developers can leverage NVIDIA TensorRT’s built-in Caffe model importer.
The installation process for Caffe is rather complicated and requires performing a number of steps and meeting requirements such as having CUDA, BLAS and Boost. The complete guide for installation of Caffe can be found here.
Learn more:
Caffe Tutorial
Choosing a deep learning framework
You can choose a framework based on many factors you find important: the task you are going to perform, the language of your project, or your confidence and skillset. However, there are a number of features any good deep learning framework should have:
Optimization for performance
Clarity and ease of understanding and coding
Good community support
Parallelization of processes to reduce computations
Automatic computation of gradients
Model migration between deep learning frameworks
In real life, it sometimes happens that you build and train a model using one framework, then re-train or deploy it for inference using a different framework. Enabling such interoperability makes it possible to get great ideas into production faster.
The Open Neural Network Exchange, or ONNX, is a format for deep learning models that allows developers to move models between frameworks. ONNX models are currently supported in Caffe2, Microsoft Cognitive Toolkit, MXNet, and PyTorch, and there are connectors for many other popular frameworks and libraries.
New deep learning frameworks are being created all the time, a reflection of the widespread adoption of neural networks by developers. It is always tempting to choose one of the most common ones (even we offer you those that we find the best and the most popular). However, to achieve the best results, it is important to choose what is best for your project, and to always stay curious and open to new frameworks.
10 notes · View notes
allmoddedapk · 4 years ago
Text
Idle Toilet Tycoon Mod 1.2.5 Apk (Unlimited Gold/Diamonds)
New Post has been published on https://www.allmoddedapk.com/idle-toilet-tycoon-mod-apk/
Idle Toilet Tycoon – Hotel tycoon? Park tycoon? Supermarket, gym, jail, even casino tycoon? Do you know the toilet tycoon? “Idle Toilet Tycoon” is a simulator business game with the theme of toilets!
Toilets will no longer be a luxury for some people. You will be the manager and operate a toilet business across the globe! Write a toilet legend and create your own toilet empire now! Are you ready to manage a chain of toilets? Start from the bustling city block, improve the quality of life for landlords and residents, then reach the peak of your career, and create a unique toilet experience with sincerity. Pay attention to the details: expand the toilet, upgrade the facilities, design and decorate the space, provide unique services, obtain customer satisfaction. Become an outstanding “toilet hero” or “toilet emperor”. Pay attention to the surrounding commercial development. Use your free time cleverly, reinvesting your income with smart business strategy.
Properly manage the human resources department: visit the recruitment market actively to search for talent, moderately expand recruitment, hire cleaners to clean toilets, hire maintenance workers to repair toilets and dredge sewers; hire baristas and waiters to operate the caff; hire chefs, doormen, bookers, or bartenders. Carefully formulate business strategies and form a strong work team in your toilet empire to help your toilets develop.
Take the lead in opening leisure activities for guests to entertain. Prop up parasols near the business site, open outdoor restaurants, playgrounds, cafes, provide cricket sports venues, convenient underground streets, and sightseeing routes.
Starting from the bustling block toilets, then airport toilets, nightclubs, zoos: endless possibilities. You can DIY and bring your own ideas. Adapt to local conditions, provide the necessary and most luxurious services for life, and gradually expand the blueprint of the toilet empire. Provide various toilet services; purchase the best quality toilets, toilet paper, trash cans, even air conditioners and artworks; prepare air fresheners, control the temperature, and expand the queue; imagine a jacuzzi, swimming pool, even a Legoland, so that guests will always remember this experience! Provide general toilets, maternity rooms, disabled rooms and other exclusive services for guests with special needs.
By the way, remember to find Wanghong when funds are urgent. Feel the help of a mysterious power (perhaps strengthened gravity) so that all guests go to the toilet twice! Cheers, drink plenty of water and go to the toilet! Remember to participate in the repair contest when facilities have problems across large areas! Help others, and you will get unexpected benefits!
Gems are important! When the subway passes by and the planes fly away one by one, remember to come back to collect operating profits after a day of rest.
Features:
- Easy to use, simple and leisurely; feel the infinite fun of toilet management
- Join various business challenges and tasks, do a good job of data analysis and financial forecasting
- Exquisite 3D graphics and cool animation effects (examples: peeing, washing hands)
- Make important management decisions and develop the toilet industry chain
- Start and close the game at any time, and earn income when you leave
If you love business games, “Idle Toilet Tycoon” won't let you put it down. A strategic toilet-management leisure game: simple operation, making strategic decisions, developing the hotel business, and earning huge profits. From a small toilet to the world’s best toilet, becoming a must-go for VIP customers!
1 note · View note
stereksummerexchange · 7 years ago
Text
The Accidental Alpha
@septima-sum | AO3 - Septima, I hope this fulfills your fluffy college romance wishes! Thank you for the excuse to write this idea I’ve been thinking about for ages!
by @poetry-protest-pornography
When Stiles goes to college, he meets a new group of supernatural creatures (because of course he does) and it turns out he’s pretty good at taking care of werewolves–and a witch! Derek and John are… wary.
Two and a half years of running with wolves had given Stiles the ability to recognize a supernatural being with relative ease, and going to a university with a very large student body gave him a fair amount of practice.
In his first semester English Lit class, there was a girl who spent all of the first class with a sour look on her face, leaning as far away from the professor as possible while still remaining in her seat in the middle of the auditorium. It wasn’t until Stiles went to get the syllabus from the prof that he got a whiff of the man’s oppressive cologne. The next time the class met, Stiles brought some herbal candy and a small tub of salve with him. He had sat next to her in her new place in the back row and placed the items on the table in front of her.
“The lozenges will help, and put a little of the salve under your nose, too. That should block the worst of it,” he’d said quietly, smiling with no teeth and as much sincerity as he could manage so he didn’t appear as a threat–or a crazy person. He preempted her denial by dropping his voice further, turning toward her as he stood to find a different seat and reassuring her with, “When my brother got turned, his senses went crazy, and these were a lifesaver.”
Her jaw had dropped slightly, and her brow had furrowed in a way that Stiles found startlingly endearing, but when her eyes snapped to meet his, there was only mild surprise and confusion there. She’d even smiled, though it seemed to be involuntary, and after he’d settled into his seat a few rows down, he heard the quiet crinkle of a wrapper open. When he’d looked up a moment later, as Dr. English Leather walked in carrying his cloud of chemicals and musk, she was wearing a small pleased smile and replacing the lid on the jar of salve.
It felt good.
After class, she had waited for him at the door, blurting out a “Thank you,” before he could say anything. “I’m Bianca,” she’d said, sticking out a hand and tilting her head to the side. Stiles had been startled by the display, but did his best to ignore it. He’d introduced himself and offered to bring her a bag of the candies and the recipes for both items, and by the end of the conversation he had a study partner for the semester. 
The guy at the campus coffee shop with the too quick reflexes and the uncanny habit of forgetting he had enhanced hearing might as well have just worn a shirt that said “I’m Not Human.”
Stiles had actually called Derek after his first encounter with Neil during orientation week and rambled about the total failure of supernatural education. “Der, you can’t tell me there isn’t like, Super Summer Camp or something! Why do none of you know how to people! You can’t go 2002 Spiderman-ing all over the place and stay a secret!” 
Derek had done a manful job of pretending to be unimpressed, but had eventually agreed that the barista needed to be a little less spectacular. 
Thankfully, Stiles’ nearly problematic dependence on caffeine meant that he didn’t end up having to wait too long to steal a minute with Neil. Unfortunately, creating the moment meant that he’d had to sacrifice his perfectly crafted cinnamon mocha. As he “accidentally” dropped the steaming cup of spicy chocolatey goodness, Neil predictably moved to save him from the burning hot backlash. When the kid had successfully saved him, Stiles had untangled himself from the still awkwardly long limbs of Neil The Were-Barista (mentally noting that the kid was going to be gigantic when he was done growing) and thanked him with a genuine smile. Neil had shrugged it off shyly and gone to grab a roll of paper towels to clean the mess.
He had looked startled when Stiles kneeled down next to him, a wad of napkins in hand to wipe at a puddle of cocoa-dusted whipped cream. When Stiles had said, calmly and quietly, “I appreciate the save, dude, but you need to start being a little less super, or you’re going to draw unwanted attention, bro,” his eyes had clouded over and his whole body tensed.
“Shit. That didn’t come out right. Don’t freak out.” Miraculously, Neil had relaxed a little, and Stiles was able to continue. “Let a few lattes get dropped now and then. Don’t start making someone’s super complicated half-caff, non-fat, double bullshit drink before the cashier calls it out to you, and maybe be a little more careful not to answer questions you shouldn’t have been able to hear being asked, okay? I know it’s overwhelming, but you have to keep yourself safe.” 
Neil’s stunned gratitude had made Stiles feel proud and warm. The extra-large replacement mocha was nice, too.
He had had his suspicions about his Folklore professor, Dr. Garrett, from day one. The woman was a little too knowledgeable and a little too passionate. And a little too spry for a human 58-year-old.
When Derek, Scott, and Kira had dropped in on him for a surprise “we all randomly had the same 24 hours free and decided we missed you” visit one weekend a few weeks into his first semester, it had been a much needed if whirlwind visit, and also confirmed that Dr. Garrett was most definitely a werewolf (though Stiles had been hoping for a were-cat of some sort, the woman’s grace and haughty humor screamed feline). Dr. Garrett had walked into the classroom with her usual casual determination, but once she reached her desk, she’d frozen and taken a deep breath, her head darting immediately to Stiles, and he had sworn her eyes flashed briefly at him as they narrowed in consideration.
Their conversation after class had been brief, but they continued to meet throughout the semester, sharing stories and resources. She had a fascinating life and an incredible collection of books, and Stiles was grateful to have someone on campus to talk to.
***** 
Going home for Thanksgiving break was strange. Stiles was looking forward to getting back to his pack, to his dad, but there was an odd feeling in the pit of his stomach, like he was forgetting something. Leaving something behind.
He had to physically shake himself to stop from turning around to head back to campus and double check all the knobs on the stove or something. Which was ridiculous, because in the mad paper-writing spree that was the last week before break, he had lived mostly on coffee and take-out food. If not for Bianca and Neil, he probably would’ve opted for just the coffee, but the two had become good friends since their respective first meetings. Stiles was grateful for their presence; it was hard being away from the Pack, and even though he spoke to Derek almost daily, Scott and Lydia at least once a week, and Malia and the junior wolves often enough that they were all up to date on each other’s lives, it was lonely.
The lack of constant life-threatening danger was pretty nice, though.
Despite the feeling of leaving something behind, pulling into the driveway at home was as much of a relief as it always was, the knot of tension in his shoulders relaxing itself at the prospect of a whole week to spend with his dad, Derek, Scott and Melissa, and the rest of his rag-tag crew.
His dad opened the front door before he could fumble his key into the lock, and before he could drop his duffel bag to the floor, he was wrapped up in a tight hug. For a moment, he was caught up in a rush of emotions that had him hugging his dad back a little tighter. The first year after Stiles discovered that werewolves were a real thing had strained his relationship with his dad to the point he wasn’t sure they would be able to recover. He wouldn’t ever stop being grateful he’d been  wrong.
“Good to see you, kiddo,” John said as he pulled away. “You look good, son, you eating something besides pizza and instant noodles?”
Stiles rolled his eyes and raised a brow. “Are you?” They shared a laugh, and Stiles was surprised when Derek joined them in the entryway.
“Like Jordan and Melissa would let him get away with takeout five days a week and face your wrath,” Derek deadpanned. Stiles laughed harder and John snorted, and then Derek was right there, so Stiles took half a step and Derek wrapped his arms around him. “Hey,” Derek said quietly into the side of Stiles’ head, and a different kind of rush went through him.
His relationship with Derek had changed so much, Stiles wasn’t always sure he believed that they had gotten to where they were now. From the beginning they’d been like magnets, pushing against each other and pulling each other in in turns. Now, though, there was almost nowhere he felt safer, felt more like himself, than when he was with Derek.
“Hey yourself.” He pulled away enough to look at Derek, vaguely noted that his dad had disappeared, and reached up to scratch lightly at Derek’s cheek. “Y’know, this is officially a beard now, Der. We are well past sexy-mysterious stubble, dude.”
Derek’s eyebrows quirked upwards and he smirked, his voice dropping teasingly low. “Is that a complaint?”
Stiles’ tongue darted across his upper lip as he shook his head. “Nope,” he said around a grin, relishing in Derek’s answering smile and the way Derek’s eyes traced over his face. So of course instead of doing something, he blurted out, “Are you wearing my shirt?”
Derek laughed, his eyes crinkling in a way that Stiles would never not be endeared by, and he couldn’t regret missing a chance to make a move.
“It’s comfy,” Derek said easily, shrugging and stepping a little further away, tweaking the collar of Stiles’ flannel as he did so. “Besides, it’s yours.”
The smile that Stiles felt curve his lips came with a warmth in his chest, and he and Derek were caught in a still moment, just watching each other and enjoying the warm, quiet space between them. 
A small clatter from the kitchen tore them both out of it, but Derek just turned, throwing his arm over Stiles’ shoulder. “C’mon, let’s go help with dinner.”
Read the rest on AO3 
savitayogaashram-blog · 5 years ago
Text
Why Yoga Teacher Training in India?
India is the birthplace of Yoga. There can be no better destination than India to find better resources and knowledge about Yoga. One might claim that Yoga is no longer confined to India, or that it is already a globally recognized term. It is also an indisputable fact that yoga centers can be found in every part of the world. However, the level of training that a yoga ashram in India can offer is simply impossible to match elsewhere.

Considering the growing interest in Yoga, the demand for more yoga teachers looks obvious. However, it is equally true that modern-day yoga enthusiasts are not simply astonished by seeing someone perform a yoga pose. They can easily find videos or photos explaining, step by step, how to follow those poses.

What they expect from a professional is a deep explanation of the relevance of those poses, and of the science involved behind them. The most authentic source that explains this yoga science is the Vedas. India being the birthplace of Vedic knowledge makes it obvious that only a certified yoga school in India can offer authentic Vedic yoga knowledge. Some might claim that Vedic texts are also available in other parts of the world.

Well, first of all, it is necessary to understand that the Vedas cannot be understood simply by reading them like a book. Taking them literally is extremely confusing. The Vedas and Vedic yoga knowledge should only be gathered through a reputed yoga ashram in India, under the guidance of an experienced Vedic expert. Moreover, one is bound to consult many sources as part of the study required to become a Vedic teacher.

The easiest way to have such reference material at hand is to join a reputed center for yoga teacher training in India. Staying in India also makes it easy to access various other sources. These are the reasons why someone with yoga teacher training in India is given more value than others. Yoga centers in India are known for organizing the best yoga teacher training courses in India.
Benefits of yoga teacher training in India

Lineage: Going to the Source
The Upanishads, a series of Hindu sacred treatises, literally mean “sitting close to,” as in to sit as close as possible to the master. Just like the classic game of telephone, things drift in translation; thus, traveling to India is the logical course to learn from the source of sacred yoga knowledge.
Diverse Tracts
It Has Mountains and Beaches. Ranging from the Himalayan Mountains to the ocean coastline, India is a stunning country with diverse terrain. Rishikesh has access to Mother Ganga and is considered the yoga capital of the world. Mysore is considered the place for Ashtanga practitioners to study. Dharamshala is tucked away in the Himalayan peaks and a great place to study if the mountains call to you. Moreover, Goa has exciting beach views.
Affordability
Lower cost for teacher training. Two months in India, complete with six weeks of 200-hour teacher training, round-trip flight, lodging, food, and travel after the training, cost the same for me as learning at any of my local studios. Additional food and drinks are affordable.
Full Immersion
If you’re traveling to India for teacher training, it’s likely not your home base. Therefore, there’s no time to push pause and return to your usual routine. Being fully immersed in your training and surrounded by like-minded people helps integrate your teachings into your life.
Culture Shock
Having not traveled before living in Rishikesh for six weeks, you might expect a similar experience throughout the country. In Delhi, however, you can experience disorientation when beer and chicken turn up at dinner. Traveling within India provides an opportunity to reflect on your own personal practice and philosophy after the teacher training is over.
Letting Go
There is something about the stark reality of living beside immense poverty that strips away the ego. Training in India not only teaches you the yamas and niyamas, it helps you live the Yoga philosophy. Consider aparigraha, non-greed or non-possessiveness: in India, it is easier to strip away the ego that craves expensive yoga clothes and competition for the perfect pose.
Ayurveda
Developed over 5,000 years ago in India, Ayurveda is a system of preventative medicine and health care. Ayurveda identifies a person’s ideal state of balance and offers interventions to restore that balance. While in India, learning more about Ayurveda from the source is a good complement to studying yoga.
The Power of Patience
Continuous yoga and meditation practice invites inner peace. In addition, riding in a car in India increases your ability to stay calm and patient while dodging carts left in the middle of a busy intersection and hearing honks from at least ten different drivers at the same time.
Gather Energy From Around the World
Not only will there be amazing teachers from India who can help you bring something new back home, you’ll likely have a cohort from all over the world who can help your practice flourish.
Fresh Chai
Lastly, if no other reason on the list draws your energy to train in India, let this one help you decide: fresh chai. For a few rupees, you can lounge outside and sip the most refreshing chai latte in the world. If the idea of exploring gorgeous Hindu temples and palaces, swimming in the Ganga, or connecting to the magnetic and powerful energy of India doesn’t tempt you, choose the chai; it lingers.
Benefits of yoga retreat centers in India
It’s not just the yoga teacher training and yoga ashrams in India that are the best; the yoga retreat centers in this part of the world are equally reputed. The kind of facilities one can enjoy at the Indian centers is impossible to find elsewhere.

Most importantly, the experts here know well how to teach both an entry-level practitioner and someone experienced. In fact, yoga teacher training courses in India teach students various techniques behind these teaching methods. Apart from this, the yoga retreat centers in India are well resourced too, with all high-end facilities for their guests.
Become a certified yoga teacher at Savitrayogashram
Savitra Yoga Ashram is a beautiful ashram located in the scenic town of Dharamsala. Dharamsala is known for its pristine mountains, lush green valleys, rivers and divine environment. The Ashram organizes Yoga Alliance Certified Yoga Teacher Training Courses in India for free. You just need to pay for your food and accommodation. After the successful completion of the course, you can teach anywhere in the world.
wildlyplanted · 6 years ago
Text
Part 2 of 3: “Bus Station? Dog Bar? What ever happened to Eagle-Eye Cherry?” – Budapest | Prague | Berlin Travels
Hello! Grab yourself a cup of coffee or tea and if you fancy, a treat too. I hope you enjoy this blog post (the second in a 3-part series) about my trip to Budapest, Prague and Berlin. In the series, I share my hosteling and general overall experience in all three cities, the challenges, randomness and realizations I came to along the way of this splendid journey.
Budapest to Prague
We booked our bus ride from Budapest to Prague on a third-party website called omio.com (formerly GoEuro.com; it was called GoEuro at the time of our booking), with RegioJet bus. The website was recommended by our hostel roommate, who is currently living and studying in Prague (she mentioned a different bus company, FlixBus, but it wasn’t among the choices).
Something that we could have done but didn’t do, was to go book directly with RegioJet (I was in total chill mode on this trip, and I wasn’t checking various sites or planning as hard as I normally would, if I’m being honest). Usually, when I find tickets on third party sites, whether it be for a plane, train or bus, I go and book directly with the airline, train or bus company. It’s usually the same price or sometimes cheaper, and if there are any issues, I feel better dealing directly with the company.
In this instance, it would have benefited us to book directly with RegioJet because GoEuro did not explain that the RegioJet pick-up was from the tram stop. The “station” listed on our ticket had the same name as the tram stop, so we expected that once we got off, there would be a bus station. We were further put under this impression because while purchasing our tram ticket, I stopped inside the customer service office to confirm we were taking the correct tram, and the associate said the tram stop and bus “station” share the same name. Later, I realized this may have been a language barrier and she meant that the tram stop and bus station are the same.
Once we got off the tram, there was some confusion. We were in the middle of a suburban neighborhood next to a school with no bus station in sight and no other travelers to give us an indication that we were in the right place. We asked a local passing by and she pointed us to a nearby bus/train station (one tram stop and also a short walk away) but we found that it wasn’t where we needed to be. Luckily with the help of another local who explained that RegioJet most likely stops on the street, not in the bus station, it clicked, and we dashed back on foot to our original stop. There we found the bus, which clearly arrived while we were searching for the “bus station,” parked a few feet from the tram stop.
*I later checked the RegioJet site and on the homepage, they explain that due to construction at the bus station in Budapest, they will pick up from that tram stop. So, I would recommend, whenever possible, book directly with the company or at least check the company website for details and information. This is something I will always remember to do even if I’m in full on chill mode.
Riding with RegioJet was a great experience. The driver and an attendant checked us in and handled our luggage. The bus was much nicer than I anticipated. The faux leather seats were spacious and reclined. The bus had Wi-Fi, USB charging ports, a tv monitor at each seat and there was complimentary bottled water and coffee service. The attendant was very nice, and even let us keep the headphones with the bus’s logo as a souvenir.
The 6-hour ride was scenic, comfortable and felt shorter. The bus made a stop in Bratislava, Slovakia, to pick-up more passengers, and a quick stop for officials to check passports before crossing the border into Czech Republic.
Before leaving Budapest, we looked up a few hostels on Hostelworld, but didn’t decide on one until we were on the bus (if I remember correctly). Our Budapest hostel, Hostel One, had a location in Prague but it was not as central as we wanted, or else we would have stayed at their sister location since we really enjoyed the Hostel One vibe.
48 Hours in Prague
We chose Rosemary hostel because it seemed perfectly located in the new town, and a very short walk to the old town; from the bus station it’s just 3 or 4 stops on the tram, or about a 15-20-minute walk. We did not make a reservation, and we also only paid for one night in case we wanted to switch and stay elsewhere. We did inquire at check-in if there were enough beds available for the following nights, in case we decided to stay, and there were. We did stay at Rosemary for the duration of our visit (2 nights). We were hoping to stay in Prague longer, but we couldn’t, and it’s ok because I know I’ll definitely return in the very near future.
I really loved the location of Rosemary hostel, and how clean it was. It’s off a main street, Jindrisska, where you can easily find grocery stores, currency exchange, restaurants and cafes. I became fond of Caffe Milani, which I stumbled upon within 30 minutes of arriving at the hostel, while taking a walk to the grocery store for water. They make fresh juice and have delicious coffee and croissants.
Our accommodation was in an apartment with two bedrooms. Once inside the apartment, the kitchen, bedrooms, toilet room and showers are each separately located off the hallway. There is one entrance into the two bedrooms: you have to walk through the main bedroom (large, with a sitting area) to get into the smaller one, which is where we slept. Each room had 6 beds (there were stairs to a loft in the larger room, so there may have been a couple of single beds up there). The apartment was mixed gender, although the smaller room remained all female during our stay.
The sinks and showers in the apartment are not private, as they are in the same room. There are two sinks and two separate shower stalls with doors. With that said, there was never an issue or feeling of discomfort, and only the same genders were in the shower room at the same time (not sure if this was mindful or just coincidence). Additionally, the door locks, so you have the option for privacy. I left the door unlocked while I showered since the stalls are spacious enough to undress and dress inside, however, I never had to get dressed inside the stall since no one else was in the shower room, and I locked the door for a brief moment while I moisturized and dressed.
I was extremely happy about how clean the bathrooms were, and the shower was so comfortable with great water pressure, so I honestly did not care that it wasn’t necessarily private. Rosemary hostel does have various set-ups – private rooms, with private bathrooms and kitchens, 4 bed female rooms, etc. Most hostels have private room options, and more times than not, with private bathrooms.
Upon arriving at the hostel, we met a couple of our roommates; one guy staying in the larger room, who was our instant resource about Prague since he had been there the longest, and a young lady staying in our smaller room who we shared deep and touching conversation with. She was from China, studying in Germany and visiting Prague and Budapest on her way back home for her semester break. Her concerns and worries were so familiar and like the ones I had while I was in college, or Uni, as it’s referred to in most parts of Europe. All I could do was offer words of comfort and advice, that I hope would spare her some of the unnecessary worry I went through. I told her what I wish I knew then that I know now.
From a young age, we are learning about ourselves, about the world and as we mature and grow, we are trying to form our own identities, and explore the desires of our hearts and express ourselves; but for some of us, somewhere along the way, we are told we need to be a certain way, to do certain things, not to listen to ourselves or what our hearts tell us. We’re discouraged from being ourselves, so we try to quiet the voice in our hearts. We try to dissolve who we naturally are to be someone acceptable to our parents, our family and friends, and society.
The thing is; the soul never forgets who it is. We are reminded of our authentic selves in so many ways –when we do things that bring us happiness, through the experiences that make us feel alive and resonate with us, in moments when our life makes sense; that is our soul reminding us of who we are. It takes so much courage to shape our own happiness, to write our own story; seeing it through, day in and day out. It takes perseverance to endure falling and picking ourselves up many times over in pursuit of joy. It takes strength of mind to stand and walk alone in pursuit of our calling. It takes unwavering faith to believe, when it seems like God is our only cheerleader in this world.
It can take years to undo the damage that one seed of doubt can plant in our mind. Sometimes being human seems so complicated, and it’s easy to let go of what we truly desire and grasp instead at what others tell us we should want.
For me, it’s a daily process of reminding myself that my life is sacred. I have a purpose, and the desires of my soul would not be there if they weren’t meant to be lived out. Being born is confirmation enough. You and I are made of stardust for a reason. God meant for me, for you to shine just as bright as the stars.
Speaking of stars, we made the best of our two nights in beautiful Prague. Our first night, we had dinner at a typical Czech restaurant called CafĂ© SvatĂ©ho VĂĄclava in Wenceslas Square. What a beautiful view, especially of the remarkably lit NĂĄrodnĂ­ muzeum (natural science and history). It’s definitely a tourist area, but it didn’t feel that way during that time of day. We spent the rest of the evening walking around the city before turning in for the night.
Our second day – we had breakfast at Caffe Milani before going to the old town square and joining a walking tour to hear some history about the astronomical clock tower, Charles University and other sites. Rather than continuing the tour, we explored the rest of the town square on our own and made our way to Charles bridge and Prague castle. We opted to walk and explore more of the area instead of touring inside the castle (next time). Just as in Budapest, I was fascinated by the architecture and charm of Prague. While strolling through one of the many beautiful streets, my travel buddy asked which city I liked more, I couldn’t decide. From my personal experience, Budapest is bold and trendy, while Prague is chic with a touch of romance, and I appreciated both.
We had the loveliest dinner at a restaurant whose name I can’t remember, and sadly, I mistakenly erased some of my notes. I do know the second half of the name is “Garten.” I tried searching on Google Maps since I remembered the general area of the restaurant but couldn’t find it. It was located halfway between our hostel and the bus station.
After dinner we walked to the station to purchase our tickets for the ride to Berlin (and ensure the bus left from the station, ha!). There were a number of bus company ticket booths, but none of them had a sign for RegioJet. We recognized FlixBus (the same hostel roommate in Budapest who told us about GoEuro mentioned that she uses this bus company), so that’s the company we went with.
On our final night in Prague, our hostel roommate (the guy who was in Prague the longest) suggested we go to the Dog Bar. Although he had never been, he said he heard it was a cool place.
It’s located on a fairly quiet street and behind a huge wooden door. No sign, noise or indication that there’s nightlife going on inside. Once you enter, you first must see the cashier and pre-load money onto a card to use for drink purchases. On your way out, you return the card, and whatever you don’t use is refunded. We only loaded the very minimum as we were not drinking much, and I wasn’t about to find out if they actually return the money or not. Once the transaction is complete, you make your way downstairs.
Picture an underground maze-like cave with numerous rooms. We settled into a room where the tables and seats are made from doors, and the seats are suspended from the ceiling like swings. Dog Bar must be where, particularly American, exchange students hang out because there were a good number of young Americans there. Members of the band also turned out to be American, and the last thing I could have ever guessed was that I’d be sitting in a bar in Prague, Czech Republic, listening to a band perform “Save Tonight” by Eagle-Eye Cherry. I think I was the only person in the room who knew the words (or so I thought at the time). We didn’t stay very long, but it was long enough to see why the place was called “Dog Bar” when a huge, shaggy dog slowly made its way into and around the room. In context, Dog Bar is a “cool” place to pass through and check out. I understand the interest, but I would not stay longer than the hour that we did. Also, I can’t with the bathrooms there.
As we were leaving the venue, we met a few other people who were also on their way out. We all recognized we were from the states, and one of them happened to be from my hometown, Philly and the others from Long Island. The one young lady was studying abroad in the city. She showed us around a bit more and took us to one of her local hang outs. While hanging out with our American friends, Eagle-Eye Cherry came up, and I was happy to find out that 2 out of 3 knew the artist and the words to “Save Tonight.” We all wondered, what happened to Eagle-Eye Cherry? He is a talented artist and we shared in the disbelief that his other music did not reach the commercial success of “Save Tonight.”
The city of Prague is a beauty. It’s easy to navigate and it’s walkable. I only used the tram a couple of times, (I didn’t use the underground so I’ll make it a point to do so next time), and it was easy to figure out. I’m excited to return in the future to experience much more.
Look for the final blog post in this series:
Part 3 of 3:“Leaving Prague, Berlin, I love you and Final Reflections & Tips” – Budapest | Prague | Berlin Travels
Where you can find me/how to contact me:
IG: wildlyplanted (check out photos)
YouTube: Wildly Planted (I uploaded short videos and photo reels)
inkspot-fox · 8 years ago
Note
Meet the muse for Sikai!
(In which I was sorely tempted to use my GW2 screenshots because the swtor character creator is terrible and Sikai looks nothing like he does in my head)
From this meme. 
➄ What is your character’s full name?: Sikai Surana (because he and @anecdotesandelderthings’s Neria Surana decided they were siblings at one point and yes she’s a Dragon Age Origins port). While that is 100% not his original last name, he also does not care what his original last name is. Neria is the only family that matters, so he took her name.
➄ When were they born?: 13 BTC-ish, but he doesn’t actually know for sure.
➄ What are their parents’ names?: He doesn’t know; they were dead before he got to know them.
➄ Do they have any brothers or sisters?: Biologically, no. But Neria is his sister and they are very, very close. It throws people off because he’s Pureblood and she’s a Zabrak.
➄ What kind of eyes do they have?: It’s hard to tell from the picture, but a very pale gold.
➄ What kind of hair do they have?: Very fine, very thick hair that’s such a dark, dark red that it looks black.
➄ What is their complexion like?: Venetian Red, mostly youthful but on the severe side with a few faded scars.
➄ What body type are they?: Tall and spindly. He’s about 6 feet tall, but the sort of thin that lacks muscle mass. He’s not even wiry-strong like Kat or Ashlan; he’s just a fucking nerd twig.
➄ What is listening to their voice like?: Ohhh man. I don’t have a voice claim for him, but hrm. Deep, rich, soft, smooth. Unless he’s shouting-levels of livid or snarling, in which case his voice sounds like broken glass being jammed into your eardrums.
➄ What do they hate most about themselves?: He hates his own cowardice. He hates the fact that he was too terrified to leave Korriban with Neria, he hates that all he can do is cling to the first scrap of power and security he’s ever known--even though he hates the Empire--because he’s too paralyzed with fear to abandon that stability. He hates that he’s not as brave or strong as his sister, and he wishes he could be more like her.
➄ Do they have a favorite quote?: “The most perfidious way of harming a cause consists of defending it deliberately with faulty arguments.” -- Friedrich Nietzsche, “Die fröhliche Wissenschaft”. Which, yeah, I know, doesn’t exist in star wars but idk, it’s what I got.
➄ What sort of music do they enjoy?: Sikai is horrible trash for Sith Operas, particularly ones that focus on non-romantic passions, because he can immerse more easily in those.
➄ Have/would they ever cheat(ed) on a partner?: Nope. Sikai is aro ace, and the closest he might get to a ‘partner’ would be his qpp relationship with Talos. It’s entirely possible that they someday get married for tax purposes. But like, both of those characters are definitely ace. Talos might be demi-romantic. But like. ‘Cheating’ wouldn’t even occur to either of them and also like, how?
➄ Have they been cheated on by a partner?: See above!
➄ Have they ever lost someone close to them?: Shit, probably. Growing up as a slave in the Empire is not fucking pretty, man. And for a while, he thinks he’s lost Neria, and oh god that hurts so bad. He doesn’t, they re-unite and they’re okay, but holy fuck.
➄ What is their favorite sound?: The sound of coffee percolating, Neria’s soft singing.
➄ Are they judgmental of others?: Pff OH YEAH. Of EVERYONE. Constantly. But he’s, you know, elegant about it. He’ll raise an eyebrow, maybe sniff, and go back to what he’s doing unless the offending people are actively making themselves his business.
➄ Have they ever been drunk?: Nope. Never.
➄ What are they like when they stay up all night?: ...Sikai does this more than he ever should and it’s embarrassing, because he gets all...giddy.
Because if he’s up all night, it’s because he’s doing SCIENCE and it’s all: “TALOS LOOK AT THIS look at what I FOUND I wonder how we could extrapolate from this and has it been tested on other varieties TALOS LET ME BORROW THAT SAMPLE I’ll get you another one if it explodes I promise are we out of caff we should probably get more caff because this will take maybe four hours and I must stay awake but in the MEANTIME perhaps I could start some more cultures growing TALOS HAVE YOU SEEN MY SONIC SCALPEL--oh there it is right behind my ear I knew I put it somewhere clever--”
Eventually he’ll pass out.
➄ Have they ever been arrested?: Yyyyyyyeeeeeeeep. That would be when Sikai fuckity murdered his (and Neria’s) master and the entire non-slave household. So fucking arrested. And then sent to the Academy because the use of Force Lightning kind of tipped the authorities off that he and Neria were Force Sensitive.
➄ What evokes strong memories for them?: The sound of heavy rain, the smell of hot desert sand.
➄ What do they do on rainy days?: If he doesn’t have to Do Official Darth Imperius Things, he puts on headphones to block out the sound of the rain and does Science.
➄ What religion are they?: Sith Code.
➄ What word do they overuse the most?: *disgusted noise* (It’s a word I swear)
➄ What do they wear to bed?: Loose-fitting pajama pants, probably a soft robe if it’s cold.
➄ Do they have any tattoos or piercings?: The ones you see in the screencap, but realistically he’d also have gold clips along the shell of his ear, too.
➄ What type of clothing are they most comfortable in?: He likes loose, comfortable, soft clothing, generally in reds or blacks. He’ll wear sturdier pants when out and doing things, if only because that’s just smart when you’re doing any kind of volatile Science or excavating dig sites. His shirts are loose, though, as is the open-front robe he wears.
➄ What is their most disliked food?: Anything bland. Spices exist for a fucking reason. Sikai has probably heard of leafroot stew and its very existence offends him.
➄ Do they have any enemies?: Pfff OH YEAH. He’s not... really a good person. He tries to be good. When Neria is looking. Or when Master Reynard is around because he doesn’t want Reynard to think that he’s trying to steal yet another padawan.
➄ What does their writing look like?: Sikai’s writing is impeccable. Painstakingly neat, narrow, and a bit angular.
➄ What disgusts them?: Willful ignorance, pointless power games, wasting resources, Jedi Proselytism.
Normally I’d also add ‘people wanting to fuck him’ but fortunately Sikai never notices.
(Characters are: Jedi Katsulas, Sith Katsulas, Ashlan, Sikai, Ashashar, Delgado Shepard, Delgado Cousland, Kat Lavellan, GW2 Katsulas, Vosh’sesse’oro, Qeno) 
jhxuk · 4 years ago
Text
Ray Gosling and the Inscape of Landscape
In his 1980 memoir Personal Copy, Ray Gosling recalls a lecture he gave to a group of Nottingham University architecture students sometime in the winter of 1964. Entitled ‘The Darkness of Virtue’, apparently in homage to Jean Genet, the lecture developed Gosling’s theory that all phenomena can be classified as either an orgy, a crucifixion or an abortion. By way of illustration, in the context of a discussion of his and D.H. Lawrence’s differing sentiments on the ugliness of England’s industrial north, he unleashes one of my favourite pieces of social commentary in the English language.

“The slum is awful,” Gosling writes, “but it has a purpose, it is a something, as a crucifixion is an event. There may be no meaning to it, but there is an order, a purpose, a pattern out of which, as in death, there is comfort. The magic of darkness.”
And so we (I) can turn our crucified towns, on wicked rainy days, like Bilston and Bacup, Blackwood and Ilkeston into a romance, tragic and solemn, merry and bright notwithstanding the ugliness. A crucifixion is a world that can be so black that when a break comes, a tear of goodness, it shows. When God pricks the black out, the light can be so bright, angelic, pure, white, innocent, wise and beautiful. Slums have been noted as milky ways of good company, of comradeship, life and laughter, with real people, helping hands in times of trouble.
A crucifixion is a grand anachronism: an event or a place so ugly that it gains a gripping fascination. It fills the eye and moves receptive organs. Life in it is bleak for, however many stars of goodness, no milky way ever shines like the sun. Stars only shine because of the darkness. A crucifixion is like the Church of England prayer book used to be – during a service, moving and emotional and they let you say “Amen” at the end of the vicar’s prayer.
A crucifixion is a graveyard service, the rough and tumble of working in a factory; death; black blood; Dracula; the formal dance; the narrow canyons of a manufacturing district or the City of London, with workrooms between the monumental tombstones – in memoriam; the sweatshops of Queen Victoria’s reign; the reasons for Karl Marx, like Scrooge, Dickensian – and if you cleaned its buildings would they not fall apart? The Ritz, a grand hotel, a very posh district, a Mayfair of rich folk, of villas in mock gothic glory, private roads and pomp, hypocrisy and poverty. The Tatler, The Field and The Gun. A Rutland flat hat. The foxhunt; a backyard knife fight; the formal restaurant meal. A mug in the greasy spoon transport cafĂ©; a pawnbroker. A boarding house breakfast at Blackpool; the stock exchange. Coronation Street; a gentleman’s smoking room club. The entire nineteenth century; the pub wedding party. Pit tips at Gedling; untidied heritage; Liverpool. The Trip to Jerusalem; traditional bookshops; backstreet life. Steptoe and Son – I would later have added Alf Garnett and Enoch Powell. Indoor public baths. The bell on an alarm clock.
All these things, Gosling writes, have style and purpose, and “manners” to their purpose. “You feel that if you hate them, they will hate you back. They make you feel something.” As time rolls on, and as these things gradually become memories, we come to feel a certain affection for them: “We prize our crucifixions: with all their faults, all the misery they caused, they were as much our history as the palaces of the Czars, preserved by the Soviet Russians.” Thank God Lawrence didn’t live to see the England that emerged in their place, Gosling muses: “our modern world of neutered non-events and plastic non-life”. Here, now, we have “exchanged the animal for the vegetable for the mineral. Exchanged the crucifixion for an abortion.”
An abortion is tasteless, has no smell because the room’s anaesthetised. You can’t hate it, you can’t love it
 It has no aesthetic, it is an anaesthetic. [
] It is giving a man a bath, a toilet, hot and cold, central heating, television and taking away his freedom without putting him in prison. Today we have the technological, scientific and human resources we need to make our lives more exciting, more free, full and happy than they have ever been. And no Big Daddy – no need for the rich man to overlord you from his castle
 We are all equal. Life can be smooth. We shall neuter industrial horrors and call work play.
[
] Our towns are becoming places where we meet only by design. There is little accident, little haphazard or chance encounter. We’re becoming so we don’t have to touch or smell each other. A city without Crucifixion or orgy: without either purpose or meaning of a kind that can be immediately understood. Castrated of any spontaneous excitement, any love by people across section lines. This is what I call an abortion: a non-life. It is as modern as the hour

An abortion is a clean kitchen. A new town. Police panda cars. Good television. Chinese restaurants. Suburbs. Coventry precinct. Industrial museums. Muzak in the crematoria. Preserved heritage. The milk bar. Bully Butlin. Net curtains. Hire purchase. A bypass. The lido. Love with a contraceptive sheath. The Soviet Union. The architecture of the Co-op. A motorway caff.
Maybe these notions are merely notions, says Gosling: “not a truth but a party game.” Certainly, the academic world would have little time for such poetic phraseology. But poetic modes of expression (poiesis – “to create”) have their place, particularly in any discipline which claims to study human experience – social or otherwise – whose subject is by definition fluid, protean, largely non-rational, and in a state of constant mutation. Such an analysis will never be “precise” in any formal sense, but the tools provided by poetry are capable of activating a great deal of content in very few words, in a way in which the turgid prose of traditional academic analyses cannot.
Gerard Manley Hopkins used the terms ‘inscape’ and ‘instress’ to refer respectively to “the unified complex of characteristics that give each thing its uniqueness and that differentiate it from other things”, and “the force of being which holds the inscape together, or the impulse from the inscape which carries it whole into the mind of the beholder”. These essential features of a phenomenon cannot be grasped without adopting an intellectual posture in harmony with the phenomenon itself. This is precisely what is gained when we eschew, as Gosling does, a mode of expression that strives for objectivity, detachment and precision, in which words are corralled into singularity of meaning, denuded of their power to move the reader and stir feeling and memory.
On the other hand, Gosling suggests, “[m]aybe all I said was life should be fun and let ourselves be open. Fill all your holes fabulously, said Mr Genet. There can be no code of rights, no bill of liberties. What life should be is an orgy.”
An orgy is the life of an English gentleman in the eighteenth century. Tom Jones and good Queen Anne. An orgy is infectious: everyone joins in. You do the hokeykokey and you shake about. The whole of the eighteenth century. The night Forest won the cup. (Until the police arrested H.H. – that was a crucifixion.) Disco dancing when it’s gone a little wild. The Wine Lodge on a Saturday when the trio plays. A pub crawl. The piano at the back bar of the Napier. Mr Jackson, the grocer’s shop on Piccadilly. A delicatessen: an old open market square: a coaching inn: Goose Fair: Christmas: Lyon’s tea shop: Woolworth’s normally: an Indian restaurant because you get some taste: a hot sensation: Petticoat Lane: Sneinton Market: fast trains with buffets: Skegness on Bank Holiday: The cremation of Nehru. [
]
An orgy is an event, a fiesta, something fabulous. (How awful those words were to become.) We have yet to learn, us English, that pleasure is to be taken. Every freedom we have is licensed, and we worry if to enjoy ourselves is in order. Licenses are abortions.
0 notes
thecaffeinebookwarrior · 5 years ago
Text
Writers’ Resources for Portraying Autism:
Biographies and nonfiction by autistic authors:
Look Me in the Eye, by John Elder Robinson
Being Seen, by Anlor Davin
The Secret Life of a Black Aspie, by Anand Prahlad
Nerdy, Shy, and Socially Inappropriate, by Cynthia Kim
Aspergirls, by Rudy Simone
What Every Autistic Girl Wishes Her Parents Knew, by various authors
Fiction by autistic authors:
Failure to Communicate, by Kaia SÞnderby
The Kiss Quotient, by Helen Hoang
Otherbound, by Corinne Duyvis
Queens of Geek, by Jen Wilde
The Someday Birds, by Sally J. Pla
Videos:
Things Not To Say To An Autistic Person
My Life with Autism
My Autistic X Factor
Autism:  A Quick Trip To My Home Planet
Invisible Diversity:  A Story Of Undiagnosed Autism
My Brain Works Differently:  Autism And Addiction
Young, Gifted, and Black With Autism
More resources to come!  In the meantime, stay away from autism warrior mommies, Autism Speaks, and basically any non-autistic person who claims authority on the experience.
Happy writing, everybody!
748 notes · View notes
josephkitchen0 · 6 years ago
Text
The Many Uses of the Chicory Plant
It’s getting to be that time of year when wildflowers appear a-plenty along roadsides everywhere. As I drive along, I like seeing the colors appear and trying to figure out what all those plants are. Lately, I noted a sea of pale blue and I wondered: what is that? A quick search and I found my answer: the Chicory plant.
I knew the Chicory plant was on the edible plants list, but I couldn’t remember what parts of the plant were edible or how to prepare them. It was time to do some research. I love learning new things, especially about foraging and understanding the plants growing around me, so I was excited to do some reading.
About the Chicory Plant
Often called a “blue dandelion,” the Chicory plant has a lot in common with its cousin, the dandelion. You can eat the flowers, leaves and root of both plants. They will both add bitterness to your salad mix, but can be blanched to lessen that effect. The dandelion flower is less intense than the chicory blossom. Some say you can add the pretty blue flowers to a salad; others say they are too bitter.
Like many weeds, this perennial blooms summer into early fall. It is hardy and often found growing places where you wouldn’t expect flowers to thrive. Chicory is commonly seen near roadsides, in highway medians, at the overgrown edges of fields and even in gravel filled areas where nothing else can make it. It’s everywhere! I took this photo waiting on the light on the highway off-ramp.
Interestingly, there are a number of weeds on the edible plants list growing along the state route where we live. Not only will you find chicory plant, but also the milkweed plant, Queen Anne’s lace, honeysuckle, thistle, staghorn sumac and wild grapes. If you know what you are looking for, you can find plants to make everything from wine to jam and lemonade to medicinal remedies.
Milkweed
Wild grapes
Queen Anne’s Lace
Honeysuckle vine
Thistle
Staghorn Sumac
Chicory Benefits
Chicory has been utilized for its therapeutic qualities as far back as the Ancient Egyptians. I recently learned about how Chicory plant has been used throughout time as a home remedy for headaches. It’s also been used to relieve water retention issues, to reduce inflammation and to assist with digestive problems. The leaves, in particular, are rich in vitamins; particularly iron, calcium and copper.
There is a very informative full nutritional profile available on the Food Facts website. This site points out the chicory plant, particularly its root, as a good source of Vitamin A (114 percent of your recommended daily value) and Vitamin C (40 percent of DV).
Harvesting the Chicory Plant
There is some debate over whether these plants should be harvested if they have been absorbing car fumes along the roadside, but that’s a choice you’ll have to make for yourself. Many roadsides are also sprayed with chemicals. I figure our country road, even though it’s a state route, gets a lot less traffic than the highway, and they don’t spray much of the road by us. So I’m going start my harvest there.
As I set out with my shovel and my bucket, though, I find that the plants along our stretch of the road aren’t very big.  Because my husband mows so regularly, they don’t get a whole lot of time to re-grow. I need a spot that’s kind of neglected and doesn’t get mowed often. Back to the drawing board 

I get in my car and drive further down the road. I don’t have to go far before I come upon a huge field that has a really lush stand of Chicory plants at its edge. I park my car and get out with my shovel and buckets. The farmer sees me and comes up, “What’s going on young lady?” he asks. I respond, “Would you mind if I dig out some of these weeds at the edge of your field?”  He looks at me incredulously. I go on, “I want some of this Chicory plant so I can try to make coffee with it.”  He smiles and says, “Go for it. You can have all the weeds you want. Have fun!”
He turns and walks away as I set to work. I want the whole plant so that I can try all the various parts: flowers, leaves and root. That means I need to loosen the soil all around the base of each plant so that I can pull out as much of the long taproot as possible. I pick a nice big plant to begin with and jump on my shovel to get it deep into the soil on all sides of the Chicory plant. Then I grab the stem near the base of the plant and yank.  It slides out nicely, with just a little resistance. Continuing in this way, it doesn’t take me long to collect two big buckets of plants. With everything loaded back into my car, I wave at the farmer and head home.
After arriving home, I start to pull my Chicory plants apart. I take the flowers off and put them in a bowl.
  I pick some of the freshest looking leaves off and put them in a different bowl.
  Then I take a serrated knife and cut the roots free from the plants.  This is a challenge!  The roots are tough.
  When I examine my harvest, I have one really nice big root and lots of much smaller ones.
Most of the resources I read said the roots should really be harvested in the fall so probably if you are patient and wait until the proper time, you’ll get more for your effort.  I’ll work with what I got though.
I put water in my buckets and use a good old scrub brush to clean the roots.  They are caked in wet dirt so this takes some time.
Once clean, I bring the roots, leaves and flowers inside. The latter two I stick in the fridge until I’m ready to make my salad.  The roots I leave out on the counter to dry a bit. Now the real fun begins!
A Chicory Salad
As I mentioned above, the leaves of the chicory plant are edible, though bitter.  They are supposed to be more tender and less intense in the early spring.  I’m a little late on that (it’s late June as I’m writing) but I’m going to give it a try anyway.
I have a lot of good salad items in my garden now, so I’ve decided to make a lunch salad with Chicory leaves and flowers added to it.
I gather my ingredients from the garden: lettuce, sweet peppers, a watermelon radish, a few small beets and a cucumber.
The beets I boil, peel and chop.
Everything else I just clean and cut up.  To this, I add the chicory greens and a few of the lavender chicory flowers, both rinsed well.
For a dressing, I make a batch of the Chive Balsamic Vinaigrette featured in my story on garlic infused white wine vinegar.
I won’t lie, those greens were bitter! The flowers weren’t so bad, but I would only add a few pieces of the greens, chopped up into small pieces if I had them again. Another option is to get them earlier in the spring, when their flavor is supposed to be more tolerable. Or, like dandelion greens, you can cook them, which helps lessen the bitterness. My dad recounts memories of his grandmother cooking dandelion greens and pokeweed leaves in bacon fat and how it was always so delicious.  You can see his mouth start to water when he talks about it.
If you have a lot of flowers, you can also try pickling them. I found an interesting recipe, but didn’t have enough flowers to try it.
A Coffee-Lover’s Adventure: Chicory Root Coffee
As a coffee lover, I was intrigued when I heard about this plant that could be used to make a coffee-like drink or could be mixed with coffee to enrich flavors. I had to try it!
To make chicory coffee, you have to roast the roots. So I gathered up the roots I had cleaned earlier, chopped them into smaller pieces and laid them out on a cookie sheet.
I set my oven to its lowest possible temperature: 170 degrees. I put the roots in the oven and went about my day. Occasionally I came in and turned the roots, but mostly it just took time.
The roots cooked for about seven hours and when they came out they were totally dried out and smelled amazing. I wish I could somehow make this a scratch-and-sniff story so that you could share in the mix of nutmeg and cocoa that filled the oven after I took out my cooked roots.
The last step is to grind it up into a powder. For this, I used a small coffee grinder. It didn’t grind it into a super fine powder, but it did a pretty good job for a simple little machine.  It’ll be good enough for the French Press.
I poured the ground, roasted roots into a jar for storage. Tomorrow I will give it a try mixed with some coffee.
I became a coffee drinker while living in Milan my junior year of college so the only coffee I drink is espresso. I have a little macchinetta di caffe that I got in Italy, which I use every morning. So my first experiment was chicory espresso. I prepared my coffee as usual but filled the strainer cup with half espresso and half chicory root. It tasted very similar to what I usually drink but maybe with a bit more spice to it.
Next I wanted to try straight chicory coffee using a French Press. I measured out 1/4 cup of my ground up roots and put them in the bottom of the French Press. Over that I added several cups of hot water that I prepared in the tea kettle.
I let it steep about eight minutes then pushed down the strainer.
It looked more like a tea than a coffee but that’s ok.
I poured some in one of my mother’s lovely little tea cups and gave it a try. Whew! It was strong 
 earthy 
 bitter! A little sweetener helped tremendously. It did have the same aftertaste as coffee does, that slightly bitter taste at the back of your mouth. I could see why people mix it with coffee or, in times past, used it as a replacement for coffee.
My final experiment was kind of like a chicory mocha. I added a teaspoon of cocoa to my cup and filled it back up from the French Press. I mixed it together well and drank a little. Now that I could drink regularly. The sweetness of the cocoa offset the bitterness of the chicory plant to make quite a nice drink. And you get the Vitamins A and C from the Chicory plant while enjoying the flavor of the chocolate.
I hope you learned something new from my little country culinary adventure.
What edibles do you have growing in your yard? Let us know in the comments below.
The Many Uses of the Chicory Plant was originally posted by All About Chickens
0 notes
iyarpage · 7 years ago
Text
Beginning Machine Learning with Keras & Core ML
Apple’s Core ML and Vision frameworks have launched developers into a brave new world of machine learning, with an explosion of exciting possibilities. Vision lets you detect and track faces, and Apple’s Machine Learning page provides ready-to-use models that detect objects and scenes, as well as NSLinguisticTagger for natural language processing. If you want to build your own model, try Apple’s new Turi Create to extend one of its pre-trained models with your data.
But what if what you want to do needs something even more customized? Then it’s time to dive into machine learning (ML), using one of the many frameworks from Google, Microsoft, Amazon or Berkeley. And, to make life even more exciting, you’ll need to pick up a new programming language and a new set of development tools.
In this Keras machine learning tutorial you’ll learn how to train a deep-learning convolutional neural network model, convert it to Core ML, and integrate it into an iOS app. You’ll learn some ML terminology, use some new tools, and pick up a bit of Python along the way.
The sample project uses ML’s Hello-World example — a model that classifies hand-written digits, trained on the MNIST dataset.
Let’s get started!
Why Use Keras?
An ML model involves a lot of complex code, manipulating arrays and matrices. But ML has been around for a long time, and researchers have created libraries that make it much easier for people like us to create ML models. Many of these are written in Python, although researchers also use R, SAS, MATLAB and other software. But you’ll probably find everything you need in the Python-based tools:
scikit-learn provides an easy way to run many classical ML algorithms, such as linear regression and support vector machines. Our Beginning Machine Learning with scikit-learn tutorial (coming soon!) shows you how to train these.
At the other end of the spectrum are PyTorch and Google’s TensorFlow, which give you greater control over the inner workings of your deep learning model.
Microsoft’s CNTK and Berkeley’s Caffe are similar deep learning frameworks, which have Python APIs to access their C++ engines.
So where does Keras fit in? It’s a wrapper around TensorFlow and CNTK, with Amazon’s MXNet coming soon. (It also works with Theano, but the University of Montreal stopped working on this in September 2017.) It provides an easy-to-use API for building models that you can train on one backend, and deploy on another.
Another reason to use Keras, rather than directly using TensorFlow, is that coremltools includes a Keras converter, but not a TensorFlow converter — although a TensorFlow to CoreML converter and a MXNet to CoreML converter exist. And while Keras supports CNTK as a backend, coremltools only works for Keras + TensorFlow.
Note: Do you need to learn Python before you can use these tools? Well, I didn’t ;] As you work through this tutorial, you’ll see that Python syntax is similar to Swift: a bit more streamlined, and indentation is an important part of the syntax. If you’re nervous, keep this open in a browser tab, for quick reference: Jason Brownlee’s Crash Course in Python for Machine Learning Developers.
Another Note: Researchers use both Python 2 and Python 3, but coremltools works better with Python 2.7.
Getting Started
Download and unzip the starter folder: it contains a starter iOS app, where you’ll add the ML model and code to use it. It also contains a docker-keras folder, which contains this tutorial’s Jupyter notebook.
Setting Up Docker
Docker is a container platform that lets you deploy apps in customized environments — sort of like a virtual machine, but different. Installing Docker gives you access to a large number of ML resources, mostly distributed as interactive Jupyter notebooks in Docker images.
Note: Installing Docker and building the image will take several minutes, so read the ML in a Nutshell section while you wait.
Download, install, and start Docker Community Edition for Mac. In Terminal, enter the following commands, one at a time:
cd <where you unzipped starter>/starter/docker-keras
docker build -t keras-mnist .
docker run --rm -it -p 8888:8888 -v $(pwd)/notebook:/workspace/notebook keras-mnist
This last command maps the Docker container’s notebook folder to the local notebook folder, so you’ll have access to files written by the notebook, even after you logout of the Docker server.
At the very end of the command output is a URL containing a token. It looks like this, but with a different token value:
http://0.0.0.0:8888/?token=7b189c8e200f49dcc33845d39101e8a0ab257db5f3b539a7
Paste this URL into a browser to login to the Docker container’s notebook server.
Open the notebook folder, then open keras_mnist.ipynb. Tap the Not Trusted button to change it to Trusted: this allows you to save changes you make to the notebook, as well as the model files, in the notebook folder.
ML in a Nutshell
Arthur Samuel defined machine learning as “the field of study that gives computers the ability to learn without being explicitly programmed”. You have data, which has some features that can be used to classify the data, or use it to make some prediction, but you don’t have an explicit formula for computing this, so you can’t write a program to do it. If you have “enough” data samples, you can train a computer model to recognize patterns in this data, then apply its learning to new data. It’s called supervised learning when you know the correct outcomes for all the training data: then the model just checks its predictions against the known outcomes, and adjusts itself to reduce error and increase accuracy. Unsupervised learning is beyond the scope of this tutorial.
Weights & Threshold
Say you want to choose a restaurant for dinner with a group of friends. Several factors influence your decision: dietary restrictions, access to public transport, price range, type of food, child-friendliness, etc. You assign a weight to each factor, to indicate its importance for your decision. Then, for each restaurant in your list of options, you assign a value for each factor, according to how well the restaurant satisfies that factor. You multiply each factor value by the factor’s weight, and add these up to get the weighted sum. The restaurant with the highest result is the best choice. Another way to use this model is to produce binary output: yes or no. You set a threshold value, and remove from your list any restaurant whose weighted sum falls below this threshold.
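The restaurant example can be sketched in a few lines of Python. The factor names, weights, scores and threshold below are all invented for illustration; the point is just the weighted sum and the cutoff:

```python
# Hypothetical decision factors and their importance weights (sum to 1 here,
# but they don't have to).
weights = {"price": 0.5, "transport": 0.3, "child_friendly": 0.2}

# Hypothetical scores for each restaurant, per factor, in [0, 1].
restaurants = {
    "Thai Palace": {"price": 0.8, "transport": 0.9, "child_friendly": 0.4},
    "Burger Barn": {"price": 0.6, "transport": 0.4, "child_friendly": 0.9},
}

def weighted_sum(scores):
    # Multiply each factor score by its weight and add them up.
    return sum(weights[f] * scores[f] for f in weights)

threshold = 0.6  # binary version: keep only restaurants at or above this
for name, scores in restaurants.items():
    total = weighted_sum(scores)
    print(name, round(total, 2), "keep" if total >= threshold else "drop")
```

Training an ML model is essentially the reverse of this: you know the totals (the outcomes) and solve for the weights.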
Training an ML Model
Coming up with the weights isn’t an easy job. But luckily you have a lot of data from previous dinners, including which restaurant was chosen, so you can train an ML model to compute weights that produce the same results, as closely as possible. Then you apply these computed weights to future decisions.
To train an ML model, you start with random weights, apply them to the training data, then compare the computed outputs with the known outputs to calculate the error. This is a multi-dimensional function that has a minimum value, and the goal of training is to determine the weights that get very close to this minimum. The weights also need to work on new data: if the error over a large set of validation data is higher than the error over the training data, then the model is overfitted — the weights work too well on the training data, indicating training has mistakenly detected some feature that doesn’t generalize to new data.
Stochastic Gradient Descent
To compute weights that reduce the error, you calculate the gradient of the error function at the current graph location, then adjust the weights to “step down” the slope. This is called gradient descent, and happens many times during a training session. For large datasets, using all the data to calculate the gradient takes a long time. Stochastic gradient descent (SGD) estimates the gradient from randomly selected mini-batches of training data — like taking a survey of voters ahead of election day: if your sample is representative of the whole dataset, then the survey results accurately predict the final results.
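Here is a minimal sketch of mini-batch SGD in plain NumPy, fitting a toy linear model whose true weights are known. The data, batch size and learning rate are invented for illustration; Keras does all of this for you behind the scenes:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(1000, 3)                 # 1000 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])   # the weights we hope to recover
y = X @ true_w                        # noise-free targets

w = np.zeros(3)                       # start from arbitrary weights
lr = 0.1                              # step size down the slope

for step in range(2000):
    # Estimate the gradient from a randomly selected mini-batch of 32 samples.
    idx = rng.choice(len(X), size=32, replace=False)
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)  # gradient of mean squared error
    w -= lr * grad                              # "step down" the slope

print(np.round(w, 2))  # should be close to [ 2. -1.  0.5]
```

Each mini-batch gives a noisy but cheap estimate of the full gradient, which is why the steps wander a little but still head toward the minimum.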
Optimizers
The error function is lumpy: you have to be careful not to step too far, or you might miss the minimum. Your step rate also needs to have enough momentum to push you out of any false minimum. ML researchers have put a lot of effort into devising optimization algorithms to do this. The current favorite is Adam (Adaptive Moment estimation), which combines the features of previous favorites RMSprop (Root Mean Square propagation) and AdaGrad (Adaptive Gradient algorithm).
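For intuition, here is a sketch of the Adam update rule applied to a one-dimensional toy function; the function and the starting point are invented for illustration, while beta1, beta2 and epsilon are the commonly used defaults. (In Keras itself you simply pass optimizer='adam' when compiling a model.)

```python
import math

def grad(w):
    # Gradient of the toy error function f(w) = (w - 3)^2, minimum at w = 3.
    return 2 * (w - 3.0)

w = 0.0
m = v = 0.0                                  # first and second moment estimates
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 1001):
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g          # momentum-like average (the RMSprop/AdaGrad blend)
    v = beta2 * v + (1 - beta2) * g * g      # running average of squared gradients
    m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (math.sqrt(v_hat) + eps)

print(round(w, 3))  # converges toward the minimum at w = 3
```

The division by the square root of the second moment gives each weight its own adaptive step size, and the momentum term carries the update through shallow false minima.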
Keras Code Time!
OK, the Docker container should be ready now: go back and follow the instructions to open the notebook. It’s time to write some Keras code!
Enter the following code in the keras_mnist.ipynb cell with the matching heading. When you finish entering the code in each cell, press Control-Enter to run it. An asterisk appears in the In [ ]: label while the code is running, then a number will appear, to show the order in which you ran the cells. Everything stays in memory while you’re logged in to the notebook. Every so often, tap the Save and Checkpoint button.
Note: Double-click in a markdown cell to add your own comments; press Control-Enter to render the markdown and run your Python code. You can also use the other notebook buttons to add or copy-paste cells, and move cells.
Import Utilities & Dependencies
Enter the following code, and run it to check the Keras version.
from __future__ import print_function
from matplotlib import pyplot as plt
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
import coremltools
# coremltools supports Keras version 2.0.6
print('keras version ', keras.__version__)
__future__ is the compatibility layer between Python 2 and Python 3: Python 2 has a print command (no parentheses), but Python 3 requires a print() function. Importing print_function allows you to use print() statements in Python 2 code.
Keras uses the NumPy mathematics library to manipulate arrays and matrices. Matplotlib is a plotting library for NumPy: you’ll use it to inspect a training data item.
Note: You might see a FutureWarning due to NumPy 1.14.
After importing keras, print its version: coremltools supports version 2.0.6, and will spew warnings if you use a higher version. Keras already has the MNIST dataset, so you import that. Then the next three lines import the model components. You import the NumPy utilities, and you give the backend a label with from keras import backend as K: you’ll use it to check image_data_format.
Finally, you import coremltools, which you’ll use at the end of this notebook.
Load & Pre-Process Data
Training & Validation Data Sets
First, get your data! Enter the code below, and run it: downloading the data takes a little while.
(x_train, y_train), (x_val, y_val) = mnist.load_data()
This downloads data from https://s3.amazonaws.com/img-datasets/mnist.npz, shuffles the data items, and splits them between a training dataset and a validation dataset. Validation data helps to detect the problem of overfitting the model to the training data. The training step uses the trained parameters to compute outputs for the validation data. You’ll set callbacks to monitor validation loss and accuracy, to save the model that performs best on the validation data, and possibly stop early, if validation loss or accuracy fail to improve for too many epochs (repetitions).
Inspect x & y Data
When the download finishes, enter the following code in the next cell, and run it to see what you got.
Note: You don’t have to enter the lines beginning with #. These are comments, and most of them are here to show you what the notebook should display when you run the cell.
# Inspect x data
print('x_train shape: ', x_train.shape)
# Displays (60000, 28, 28)
print(x_train.shape[0], 'training samples')
# Displays 60000 train samples
print('x_val shape: ', x_val.shape)
# Displays (10000, 28, 28)
print(x_val.shape[0], 'validation samples')
# Displays 10000 validation samples
print('First x sample\n', x_train[0])
# Displays an array of 28 arrays, each containing 28 gray-scale values between 0 and 255

# Plot first x sample
plt.imshow(x_train[0])
plt.show()

# Inspect y data
print('y_train shape: ', y_train.shape)
# Displays (60000,)
print('First 10 y_train elements:', y_train[:10])
# Displays [5 0 4 1 9 2 1 3 1 4]
You have 60,000 28×28-pixel training samples and 10,000 validation samples. The first training sample is an array of 28 arrays, each containing 28 gray-scale values between 0 and 255. Looking at the non-zero values, you can see a shape like the digit 5.
Sure enough, the plt code shows the first training sample is a handwritten 5:
The y data is a 60000-element array containing the correct classifications of the training samples: the first training sample is 5, the next is 0, and so on.
Set Input & Output Dimensions
Enter these two lines, and run the cell to set up the basic dimensions of the x inputs and y outputs.
img_rows, img_cols = x_train.shape[1], x_train.shape[2]
num_classes = 10
MNIST data items are 28×28-pixel images, and you want to classify each as a digit between 0 and 9.
You use x_train.shape values to set the number of image rows and columns. x_train.shape is an array of 3 elements:
number of data samples: 60000
number of rows of each data sample: 28
number of columns of each data sample: 28
Reshape x Data & Set Input Shape
The model needs the data in a slightly different “shape”. Enter the code below, and run it.
# Set input_shape for channels_first or channels_last
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_val = x_val.reshape(x_val.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_val = x_val.reshape(x_val.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
Convolutional neural networks think of images as having width, height and depth. The depth dimension is called channels, and contains color information. Gray-scale images have 1 channel; RGB images have 3 channels.
Keras backends like TensorFlow and CNTK, expect image data in either channels-last format (rows, columns, channels) or channels-first format (channels, rows, columns). The reshape function inserts the channels in the correct position.
You also set the initial input_shape with the channels at the correct end.
Inspect Reshaped x Data
Enter the code below, and run it to see how the shapes have changed.
print('x_train shape:', x_train.shape)
# x_train shape: (60000, 28, 28, 1)
print('x_val shape:', x_val.shape)
# x_val shape: (10000, 28, 28, 1)
print('input_shape:', input_shape)
# input_shape: (28, 28, 1)
TensorFlow image data format is channels-last, so x_train.shape and x_val.shape now have a new element, 1, at the end.
Convert Data Type & Normalize Values
The model needs the data values in a specific format. Enter the code below, and run it.
x_train = x_train.astype('float32')
x_val = x_val.astype('float32')
x_train /= 255
x_val /= 255
MNIST image data values are of type uint8, in the range [0, 255], but Keras needs values of type float32, in the range [0, 1].
Inspect Normalized x Data
Enter the code below, and run it to see the changes to the x data.
print('First x sample, normalized\n', x_train[0])
# An array of 28 arrays, each containing 28 arrays, each with one value between 0 and 1
Now each value is an array, the values are floats, and the non-zero values are between 0 and 1.
Reformat y Data
The y data is a 60000-element array containing the correct classifications of the training samples, but it’s not obvious that there are only 10 categories. Enter the code below, and run it once only to reformat the y data.
print('y_train shape: ', y_train.shape)
# (60000,)
print('First 10 y_train elements:', y_train[:10])
# [5 0 4 1 9 2 1 3 1 4]

# Convert 1-dimensional class arrays to 10-dimensional class matrices
y_train = np_utils.to_categorical(y_train, num_classes)
y_val = np_utils.to_categorical(y_val, num_classes)

print('New y_train shape: ', y_train.shape)
# (60000, 10)
y_train is a 1-dimensional array, but the model needs a 60000 x 10 matrix to represent the 10 categories. You must also make the same conversion for the 10000-element y_val array.
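What to_categorical does can be sketched in a few lines of NumPy (a minimal stand-in for illustration, not the Keras implementation):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Minimal stand-in for np_utils.to_categorical."""
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

# The first three MNIST training labels from the printout above.
y = to_one_hot([5, 0, 4], 10)
print(y.shape)  # (3, 10)
print(y[0])     # [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
```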
Inspect Reformatted y Data
Enter the code below, and run it to see how the y data has changed.
print('New y_train shape: ', y_train.shape)
# (60000, 10)
print('First 10 y_train elements, reshaped:\n', y_train[:10])
# An array of 10 arrays, each with 10 elements,
# all zeros except at index 5, 0, 4, 1, 9 etc.
y_train is now an array of 10-element arrays, each containing all zeros except at the index that the image matches.
Define Model Architecture
Model architecture is a form of alchemy, like secret family recipes for the perfect barbecue sauce or garam masala. You might start with a general-purpose architecture, then tweak it to exploit symmetries in your input data, or to produce a model with specific characteristics.
Here are models from two researchers: Sri Raghu Malireddi and François Chollet, the author of Keras. Chollet’s is general-purpose, and Malireddi’s is designed to produce a small model, suitable for mobile apps.
Enter the code below, and run it to see the model summaries.
Malireddi’s Architecture
model_m = Sequential()
model_m.add(Conv2D(32, (5, 5), input_shape=input_shape, activation='relu'))
model_m.add(MaxPooling2D(pool_size=(2, 2)))
model_m.add(Dropout(0.5))
model_m.add(Conv2D(64, (3, 3), activation='relu'))
model_m.add(MaxPooling2D(pool_size=(2, 2)))
model_m.add(Dropout(0.2))
model_m.add(Conv2D(128, (1, 1), activation='relu'))
model_m.add(MaxPooling2D(pool_size=(2, 2)))
model_m.add(Dropout(0.2))
model_m.add(Flatten())
model_m.add(Dense(128, activation='relu'))
model_m.add(Dense(num_classes, activation='softmax'))

# Inspect model's layers, output shapes, number of trainable parameters
print(model_m.summary())
Chollet’s Architecture
model_c = Sequential()
model_c.add(Conv2D(32, (3, 3), input_shape=input_shape, activation='relu'))
# Note: hwchong, elitedatascience use 32 for second Conv2D
model_c.add(Conv2D(64, (3, 3), activation='relu'))
model_c.add(MaxPooling2D(pool_size=(2, 2)))
model_c.add(Dropout(0.25))
model_c.add(Flatten())
model_c.add(Dense(128, activation='relu'))
model_c.add(Dropout(0.5))
model_c.add(Dense(num_classes, activation='softmax'))

# Inspect model's layers, output shapes, number of trainable parameters
print(model_c.summary())
Although Malireddi’s architecture has one more convolutional layer (Conv2D) than Chollet’s, it runs much faster, and the resulting model is much smaller.
Model Summaries
Take a quick look at the model summaries for these two models:
model_m:
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_6 (Conv2D)            (None, 24, 24, 32)        832
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 12, 12, 32)        0
_________________________________________________________________
dropout_6 (Dropout)          (None, 12, 12, 32)        0
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 10, 10, 64)        18496
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 5, 5, 64)          0
_________________________________________________________________
dropout_7 (Dropout)          (None, 5, 5, 64)          0
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 5, 5, 128)         8320
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 2, 2, 128)         0
_________________________________________________________________
dropout_8 (Dropout)          (None, 2, 2, 128)         0
_________________________________________________________________
flatten_3 (Flatten)          (None, 512)               0
_________________________________________________________________
dense_5 (Dense)              (None, 128)               65664
_________________________________________________________________
dense_6 (Dense)              (None, 10)                1290
=================================================================
Total params: 94,602
Trainable params: 94,602
Non-trainable params: 0
model_c:
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_4 (Conv2D)            (None, 26, 26, 32)        320
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 24, 24, 64)        18496
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 12, 12, 64)        0
_________________________________________________________________
dropout_4 (Dropout)          (None, 12, 12, 64)        0
_________________________________________________________________
flatten_2 (Flatten)          (None, 9216)              0
_________________________________________________________________
dense_3 (Dense)              (None, 128)               1179776
_________________________________________________________________
dropout_5 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_4 (Dense)              (None, 10)                1290
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
The bottom line, Total params, is the main reason for the size difference: Chollet’s 1,199,882 is roughly 12.7 times Malireddi’s 94,602. And that’s just about exactly the difference in model size: 4.8MB vs. 380KB.
Malireddi’s model has three Conv2D layers, each followed by a MaxPooling2D layer that halves the layer’s width and height. This makes the number of parameters for the first dense layer much smaller than in Chollet’s model, and explains why Malireddi’s model is both smaller and faster to train. Convolutional layer implementations are highly optimized, so the extra convolutional layer improves accuracy without adding much to training time, while the much smaller dense layer accounts for most of the speed difference.
I’ll tell you about layers, output shape and parameter numbers in the Explanations section, while you wait for the next step to finish running.
Train the Model
Define Callbacks List
callbacks is an optional argument for the fit function, so define callbacks_list first.
Enter the code below, and run it.
callbacks_list = [
    keras.callbacks.ModelCheckpoint(
        filepath='best_model.{epoch:02d}-{val_loss:.2f}.h5',
        monitor='val_loss', save_best_only=True),
    keras.callbacks.EarlyStopping(monitor='acc', patience=1)
]
An epoch is a complete pass through all the mini-batches in the dataset.
The ModelCheckpoint callback monitors the validation loss value, saving the model with the lowest-so-far value in a file with the epoch number and the validation loss in the filename.
The EarlyStopping callback monitors training accuracy: if it fails to improve for two consecutive epochs, training stops early. In my experiments, this never happened: if acc went down in one epoch, it always recovered in the next.
Compile & Fit Model
Unless you have access to a GPU, I recommend you use Malireddi’s model_m for this step, as it runs much faster than Chollet’s model_c: on my MacBook Pro, 76-106s/epoch vs. 246-309s/epoch, or about 15 minutes vs. 45 minutes.
Note: If an .h5 file doesn’t appear in the notebook folder after the first epoch finishes, click the stop button to interrupt the kernel, click the save button, and logout. In Terminal, press Control-C to stop the server, then re-run the docker run command. Paste the URL or token into the browser or login page, navigate to the notebook, and click the Not Trusted button. Select this cell, then select Cell\Run All Above from the menu.
Enter the code below, and run it. This will take quite a while, so read the Explanations section while you wait. But check Finder after a couple of minutes, to make sure the notebook is saving .h5 files.
Note: This cell shows the two types of indentation for multi-line function calls, depending on where you write the first argument. It’s a syntax error if it’s out by even one space.
model_m.compile(loss='categorical_crossentropy',
                optimizer='adam',
                metrics=['accuracy'])

# Hyper-parameters
batch_size = 200
epochs = 10

# Enable validation to use ModelCheckpoint and EarlyStopping callbacks.
model_m.fit(
    x_train, y_train,
    batch_size=batch_size,
    epochs=epochs,
    callbacks=callbacks_list,
    validation_data=(x_val, y_val),
    verbose=1)
Convolutional Neural Network: Explanations
You can use just about any ML approach to create an MNIST classifier, but this tutorial uses a convolutional neural network (CNN), because that’s a key strength of TensorFlow and Keras.
Convolutional neural networks assume inputs are images, and arrange neurons in three dimensions: width, height, depth. A CNN consists of convolutional layers, each detecting higher-level features of the training images: the first layer might train filters to detect short lines or arcs at various angles; the second layer trains filters to detect significant combinations of these lines; the final layer’s filters build on the previous layers to classify the image.
Each convolutional layer passes a small square kernel of weights — 1×1, 3×3 or 5×5 — over the input, computing the weighted sum of the input units under the kernel. This is the convolution process.
Each neuron is connected to only 1, 9, or 25 neurons in the previous layer, so there’s a danger of co-adapting — depending too much on a few inputs — and this can lead to overfitting. So CNNs include pooling and dropout layers to counteract co-adapting and overfitting. I explain these, below.
Sample Model
Here’s Malireddi’s model again:
model_m = Sequential()
model_m.add(Conv2D(32, (5, 5), input_shape=input_shape, activation='relu'))
model_m.add(MaxPooling2D(pool_size=(2, 2)))
model_m.add(Dropout(0.5))
model_m.add(Conv2D(64, (3, 3), activation='relu'))
model_m.add(MaxPooling2D(pool_size=(2, 2)))
model_m.add(Dropout(0.2))
model_m.add(Conv2D(128, (1, 1), activation='relu'))
model_m.add(MaxPooling2D(pool_size=(2, 2)))
model_m.add(Dropout(0.2))
model_m.add(Flatten())
model_m.add(Dense(128, activation='relu'))
model_m.add(Dense(num_classes, activation='softmax'))
Let’s work our way through this code.
Sequential
You first create an empty Sequential model, then add a linear stack of layers: the layers run in the sequence that they’re added to the model. The Keras documentation has several examples of Sequential models.
Note: Keras also has a functional API for defining complex models, such as multi-output models, directed acyclic graphs, or models with shared layers. Google’s Inception and Microsoft Research Asia’s Residual Networks are examples of complex models with nonlinear connectivity structures.
The first layer must have information about the input shape, which for MNIST is (28, 28, 1). The other layers infer their input shape from the output shape of the previous layer. Here’s the output shape part of the model summary:
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_6 (Conv2D)            (None, 24, 24, 32)        832
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 12, 12, 32)        0
_________________________________________________________________
dropout_6 (Dropout)          (None, 12, 12, 32)        0
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 10, 10, 64)        18496
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 5, 5, 64)          0
_________________________________________________________________
dropout_7 (Dropout)          (None, 5, 5, 64)          0
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 5, 5, 128)         8320
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 2, 2, 128)         0
_________________________________________________________________
dropout_8 (Dropout)          (None, 2, 2, 128)         0
_________________________________________________________________
flatten_3 (Flatten)          (None, 512)               0
_________________________________________________________________
dense_5 (Dense)              (None, 128)               65664
_________________________________________________________________
dense_6 (Dense)              (None, 10)                1290
Conv2D
This model has three Conv2D layers:
Conv2D(32, (5, 5), input_shape=input_shape, activation='relu')
Conv2D(64, (3, 3), activation='relu')
Conv2D(128, (1, 1), activation='relu')
The first parameter — 32, 64, 128 — is the number of filters, or features, you want to train this layer to detect. This is also the depth — the last dimension — of the output shape.
The second parameter — (5, 5), (3, 3), (1, 1) — is the kernel size: a tuple specifying the width and height of the convolution window that slides over the input space, computing weighted sums — dot products of the kernel weights and the input unit values.
The third parameter activation='relu' specifies the ReLU (Rectified Linear Unit) activation function. When the kernel is centered on an input unit, the unit is said to activate or fire if the weighted sum is greater than a threshold value: weighted_sum > threshold. The bias value is -threshold: the unit fires if weighted_sum + bias > 0. Training the model calculates the kernel weights and the bias value for each filter. ReLU is the most popular activation function for deep neural networks.
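ReLU itself is just a clamp at zero; a one-line NumPy sketch makes the firing behavior concrete:

```python
import numpy as np

def relu(x):
    """ReLU: a unit fires (passes its weighted sum through) only when
    weighted_sum + bias > 0; everything else is clamped to 0."""
    return np.maximum(x, 0)

sums_plus_bias = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(sums_plus_bias))  # only the positive values survive
```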
MaxPooling2D
MaxPooling2D(pool_size=(2, 2))
A pooling layer slides an n-rows by m-columns filter across the previous layer, replacing the n x m values with their maximum value. Pooling filters are usually square: n = m. The most commonly used filter is 2 x 2, which halves the width and height of the previous layer, thus reducing the number of parameters, which helps control overfitting.
Malireddi’s model has a pooling layer after each convolutional layer, which greatly reduces the final model size and training time.
Chollet’s model has two convolutional layers before pooling. This is recommended for larger networks, as it allows the convolutional layers to develop more complex features before pooling discards 75% of the values.
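A 2 x 2 max-pooling pass can be sketched in NumPy (on a toy 4 x 4 input, not actual layer data):

```python
import numpy as np

def max_pool_2x2(x):
    """Replace each non-overlapping 2x2 patch with its maximum,
    halving the width and height."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [4, 5, 6, 7]], dtype=float)
print(max_pool_2x2(x))
# [[4. 8.]
#  [9. 7.]]
```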
Conv2D and MaxPooling2D parameters determine each layer’s output shape and number of trainable parameters:
Output Shape = (input width – kernel width + 1, input height – kernel height + 1, number of filters)
You can’t center a 3×3 kernel over the first and last units in each row and column, so the output width and height are 2 pixels less than the input. A 5×5 kernel reduces output width and height by 4 pixels.
Conv2D(32, (5, 5), input_shape=(28, 28, 1)): (28-4, 28-4, 32) = (24, 24, 32)
MaxPooling2D halves the input width and height: (24/2, 24/2, 32) = (12, 12, 32)
Conv2D(64, (3, 3)): (12-2, 12-2, 64) = (10, 10, 64)
MaxPooling2D halves the input width and height: (10/2, 10/2, 64) = (5, 5, 64)
Conv2D(128, (1, 1)): (5-0, 5-0, 128) = (5, 5, 128)
Param # = number of filters x (kernel width x kernel height x input depth + 1 bias)
Conv2D(32, (5, 5), input_shape=(28, 28, 1)): 32 x (5x5x1 + 1) = 832
Conv2D(64, (3, 3)): 64 x (3x3x32 + 1) = 18,496
Conv2D(128, (1, 1)): 128 x (1x1x64 + 1) = 8320
Challenge: Calculate the output shapes and parameter numbers for Chollet’s architecture model_c.
Solution:
Output Shape = (input width – kernel width + 1, input height – kernel height + 1, number of filters)
Conv2D(32, (3, 3), input_shape=(28, 28, 1)): (28-2, 28-2, 32) = (26, 26, 32)
Conv2D(64, (3, 3)): (26-2, 26-2, 64) = (24, 24, 64)
MaxPooling2D halves the input width and height: (24/2, 24/2, 64) = (12, 12, 64)
Param # = number of filters x (kernel width x kernel height x input depth + 1 bias)
Conv2D(32, (3, 3), input_shape=(28, 28, 1)): 32 x (3x3x1 + 1) = 320
Conv2D(64, (3, 3)): 64 x (3x3x32 + 1) = 18,496
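Both formulas are easy to check in code; this small sketch (plain Python, assuming square kernels) reproduces the numbers from the two summaries:

```python
def conv_output_shape(in_w, in_h, kernel, filters):
    """Valid convolution: output shrinks by (kernel - 1) in each dimension."""
    return (in_w - kernel + 1, in_h - kernel + 1, filters)

def conv_params(filters, kernel, in_depth):
    """Per filter: one weight per kernel cell per input channel, plus one bias."""
    return filters * (kernel * kernel * in_depth + 1)

# Chollet's first two layers, matching the solution above:
print(conv_output_shape(28, 28, 3, 32))  # (26, 26, 32)
print(conv_params(32, 3, 1))             # 320
print(conv_params(64, 3, 32))            # 18496
```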
Dropout
Dropout(0.5)
Dropout(0.2)
A dropout layer is often paired with a pooling layer. It randomly sets a fraction of input units to 0. This is another method to control overfitting: neurons are less likely to be influenced too much by neighboring neurons, because any of them might drop out of the network at random. This makes the network less sensitive to small variations in the input, so more likely to generalize to new inputs.
Aurélien Géron, in Hands-on Machine Learning with Scikit-Learn & TensorFlow, compares this to a workplace where, on any given day, some percentage of the people might not come to work: everyone would have to be able to do critical tasks, and would have to cooperate with more co-workers. This would make the company more resilient, and less dependent on any single worker.
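Training-time dropout can be sketched in NumPy; note that Keras also rescales the surviving units by 1 / (1 - rate) so the expected activation is unchanged (the random seed here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(x, rate):
    """Zero out roughly `rate` of the units at random, then rescale
    the survivors so the expected activation stays the same."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

activations = np.ones(8)
print(dropout(activations, 0.5))  # a random mix of 0.0 and 2.0 values
```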
Flatten
The output of the convolutional layers must be made 1-dimensional — flattened — before passing it to the fully connected Dense layer.
model_m.add(Dropout(0.2))
model_m.add(Flatten())
model_m.add(Dense(128, activation='relu'))
The output shape of the previous layer is (2, 2, 128), so the output of Flatten() is an array with 2 x 2 x 128 = 512 elements.
Dense
Dense(128, activation='relu')
Dense(num_classes, activation='softmax')
Each neuron in a convolutional layer uses the values of only a few neurons in the previous layer. Each neuron in a fully connected layer uses the values of all the neurons in the previous layer. The Keras name for this type of layer is Dense.
Looking at the model summaries above, Malireddi’s first Dense layer receives 512 inputs from its Flatten layer, while Chollet’s receives 9216. Both Dense layers produce 128 outputs, but Chollet’s must compute about 18 times as many parameters as Malireddi’s (1,179,776 vs. 65,664). This is what uses most of the additional training time.
Most CNN architectures end with one or more Dense layers and then the output layer.
The first parameter is the output size of the layer. The final output layer has an output size of 10, corresponding to the 10 classes of digits.
The softmax activation function produces a probability distribution over the 10 output classes. It’s a generalization of the sigmoid function, which scales its input value into the range [0, 1]. For your MNIST classifier, softmax scales each of 10 values into [0, 1], such that they add up to 1.
You would use the sigmoid function for a single output class: for example, what’s the probability that this is a photo of a good dog?
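A NumPy sketch of softmax shows both properties, the [0, 1] scaling and the sum-to-1 constraint (the raw scores here are made up):

```python
import numpy as np

def softmax(z):
    """Scale raw scores into a probability distribution over the classes."""
    e = np.exp(z - z.max())  # subtracting the max avoids overflow
    return e / e.sum()

# Hypothetical raw scores for the 10 digit classes.
scores = np.array([1.0, 2.0, 5.0, 0.5, 0.1, 0.3, 0.2, 0.4, 3.0, 1.5])
probs = softmax(scores)
print(probs.argmax())  # 2 -- the predicted digit
```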
Compile
model_m.compile(loss='categorical_crossentropy',
                optimizer='adam',
                metrics=['accuracy'])
The categorical crossentropy loss function measures the distance between the probability distribution calculated by the CNN, and the true distribution of the labels.
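For a single sample, that distance is -sum(true * log(predicted)); a NumPy sketch with made-up 3-class predictions shows why confident correct predictions get a small loss:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    """Cross-entropy between a one-hot label and predicted probabilities."""
    return -np.sum(y_true * np.log(y_pred))

y_true = np.array([0.0, 0.0, 1.0])       # the true class is index 2
confident = np.array([0.05, 0.05, 0.9])  # good prediction
unsure = np.array([0.4, 0.3, 0.3])       # poor prediction

print(categorical_crossentropy(y_true, confident))  # ~0.105
print(categorical_crossentropy(y_true, unsure))     # ~1.204
```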
An optimizer is the flavor of stochastic gradient descent that tries to minimize the loss function by following the gradient down at just the right speed; adam is a popular choice that adapts the step size for each parameter.
Accuracy — the fraction of the images that were correctly classified — is the most common metric monitored during training and testing.
Fit
batch_size = 256
epochs = 10
model_m.fit(x_train, y_train,
            batch_size=batch_size,
            epochs=epochs,
            callbacks=callbacks_list,
            validation_data=(x_val, y_val),
            verbose=1)
Batch size is the number of data items to use for mini-batch stochastic gradient fitting. Choosing a batch size is a matter of trial and error, a roll of the dice. Smaller values make epochs take longer; larger values make better use of GPU parallelism, and reduce data transfer time, but too large might cause you to run out of memory.
The number of epochs is also a roll of the dice. Each epoch should improve loss and accuracy measurements. More epochs should produce a more accurate model, but training takes longer. Too many epochs can result in overfitting. You set up a callback to stop early, if the model stops improving before completing all the epochs. In the notebook, you can re-run the fit cell to keep improving the model.
When you loaded the data, 10000 items were set as validation data. Passing this argument enables validation while training, so you can monitor validation loss and accuracy. If these values are worse than the training loss and accuracy, this indicates that the model is overfitted.
Verbose
0 = silent, 1 = progress bar, 2 = one line per epoch.
Results
Here’s the result of one of my training runs:
Epoch 1/10
60000/60000 [==============================] - 106s - loss: 0.0284 - acc: 0.9909 - val_loss: 0.0216 - val_acc: 0.9940
Epoch 2/10
60000/60000 [==============================] - 100s - loss: 0.0271 - acc: 0.9911 - val_loss: 0.0199 - val_acc: 0.9942
Epoch 3/10
60000/60000 [==============================] - 102s - loss: 0.0260 - acc: 0.9914 - val_loss: 0.0228 - val_acc: 0.9931
Epoch 4/10
60000/60000 [==============================] - 101s - loss: 0.0257 - acc: 0.9913 - val_loss: 0.0211 - val_acc: 0.9935
Epoch 5/10
60000/60000 [==============================] - 101s - loss: 0.0256 - acc: 0.9916 - val_loss: 0.0222 - val_acc: 0.9928
Epoch 6/10
60000/60000 [==============================] - 100s - loss: 0.0263 - acc: 0.9913 - val_loss: 0.0178 - val_acc: 0.9950
Epoch 7/10
60000/60000 [==============================] - 87s - loss: 0.0231 - acc: 0.9920 - val_loss: 0.0212 - val_acc: 0.9932
Epoch 8/10
60000/60000 [==============================] - 76s - loss: 0.0240 - acc: 0.9922 - val_loss: 0.0212 - val_acc: 0.9935
Epoch 9/10
60000/60000 [==============================] - 76s - loss: 0.0261 - acc: 0.9916 - val_loss: 0.0220 - val_acc: 0.9934
Epoch 10/10
60000/60000 [==============================] - 76s - loss: 0.0231 - acc: 0.9925 - val_loss: 0.0203 - val_acc: 0.9935
With each epoch, loss values should decrease, and accuracy values should increase. The ModelCheckpoint callback saves epochs 1, 2 and 6, because validation loss values in epochs 3, 4 and 5 are higher than epoch 2’s, and there’s no improvement in validation loss after epoch 6. Training doesn’t stop early, because training accuracy never decreases for two consecutive epochs.
Note: Actually, these results are from 20 or 30 epochs: I ran the fit cell more than once, without resetting the model, so loss and accuracy values are already quite good, even in epoch 1. But you see some wavering in the measurements, for example, accuracy decreases in epochs 4, 6 and 9.
By now, your model has finished training, so back to coding!
Convert to Core ML Model
When the training step is complete, you should have a few models saved in the notebook folder. The one with the highest epoch number (and lowest validation loss) is the best model, so use that filename in the convert function.
Enter the following code, and run it.
output_labels = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
# For the first argument, use the filename of the newest .h5 file
# in the notebook folder.
coreml_mnist = coremltools.converters.keras.convert(
    'best_model.09-0.03.h5',
    input_names=['image'],
    output_names=['output'],
    class_labels=output_labels,
    image_input_names='image')
Here, you set the 10 output labels in an array, and pass this as the class_labels argument. If you train a model with a lot of output classes, put the labels in a text file, one label per line, and set the class_labels argument to the file name.
In the parameter list, you supply input and output names, and set image_input_names='image' so the Core ML model accepts an image as input, instead of a multi-array.
Inspect Core ML model
Enter this line, and run it to see the printout.
print(coreml_mnist)
Just check that the input type is imageType, not multi-array:
input {
  name: "image"
  shortDescription: "Digit image"
  type {
    imageType {
      width: 28
      height: 28
      colorSpace: GRAYSCALE
    }
  }
}
Add Metadata for Xcode
Now add the following, substituting your own name and license info for the first two items, and run it.
coreml_mnist.author = 'raywenderlich.com'
coreml_mnist.license = 'Razeware'
coreml_mnist.short_description = 'Image based digit recognition (MNIST)'
coreml_mnist.input_description['image'] = 'Digit image'
coreml_mnist.output_description['output'] = 'Probability of each digit'
coreml_mnist.output_description['classLabel'] = 'Labels of digits'
This information appears when you select the model in Xcode’s Project navigator.
Save the Core ML Model
Finally, add the following, and run it.
coreml_mnist.save('MNISTClassifier.mlmodel')
This saves the mlmodel file in the notebook folder.
Congratulations, you now have a Core ML model that classifies handwritten digits! It’s time to use it in the iOS app.
Use Model in iOS App
Now you just follow the procedure described in Core ML and Vision: Machine Learning in iOS 11 Tutorial. The steps are the same, but I’ve rearranged the code to match Apple’s sample app Image Classification with Vision and CoreML.
Step 1. Drag the model into the app:
Open the starter app in Xcode, and drag MNISTClassifier.mlmodel from Finder into the project’s Project navigator. Select it to see the metadata you added.
If instead of Automatically generated Swift model class it says to build the project to generate the model class, go ahead and do that.
Step 2. Import the CoreML and Vision frameworks:
Open ViewController.swift, and import the two frameworks, just below import UIKit:
import CoreML
import Vision
Step 3. Create VNCoreMLModel and VNCoreMLRequest objects:
Add the following code below the outlets:
lazy var classificationRequest: VNCoreMLRequest = {
  // Load the ML model through its generated class and create a Vision request for it.
  do {
    let model = try VNCoreMLModel(for: MNISTClassifier().model)
    return VNCoreMLRequest(model: model, completionHandler: handleClassification)
  } catch {
    fatalError("Can't load Vision ML model: \(error).")
  }
}()

func handleClassification(request: VNRequest, error: Error?) {
  guard let observations = request.results as? [VNClassificationObservation] else {
    fatalError("Unexpected result type from VNCoreMLRequest.")
  }
  guard let best = observations.first else {
    fatalError("Can't get best result.")
  }

  DispatchQueue.main.async {
    self.predictLabel.text = best.identifier
    self.predictLabel.isHidden = false
  }
}
The request object works for any image that the handler in Step 4 passes to it, so you only need to define it once, as a lazy var.
The request object’s completion handler receives request and error objects. You check that request.results is an array of VNClassificationObservation objects, which is what the Vision framework returns when the Core ML model is a classifier, rather than a predictor or image processor.
A VNClassificationObservation object has two properties: identifier — a String — and confidence — a number between 0 and 1 — the probability the classification is correct. You take the first result, which will have the highest confidence value, and dispatch back to the main queue to update predictLabel. Classification work happens off the main queue, because it can be slow.
Step 4. Create and run a VNImageRequestHandler:
Locate predictTapped(), and replace the print statement with the following code:
let ciImage = CIImage(cgImage: inputImage)
let handler = VNImageRequestHandler(ciImage: ciImage)
do {
  try handler.perform([classificationRequest])
} catch {
  print(error)
}
You create a CIImage from inputImage, then create the VNImageRequestHandler object for this ciImage, and run the handler on an array of VNCoreMLRequest objects — in this case, just the one request object you created in Step 3.
Build and run. Draw a digit in the center of the drawing area, then tap Predict. Tap Clear to try again.
Larger drawings tend to work better, but the model often has trouble with ‘7’ and ‘4’. Not surprising, since a PCA visualization of the MNIST data shows 7s and 4s clustered with 9s.
Note: Malireddi says the Vision framework uses 20% more CPU, so his app includes an extension to convert a UIImage object to CVPixelBuffer format.
If you don’t use Vision, include image_scale=1/255.0 as a parameter when you convert the Keras model to Core ML: the Keras model trains on images with gray scale values in the range [0, 1], and CVPixelBuffer values are in the range [0, 255].
Thanks to Sri Raghu M, Matthijs Hollemans and Hon Weng Chong for helpful discussions!
Where To Go From Here?
You can download the complete notebook and project for this tutorial here. If the model shows up as missing in the app, replace it with the one in the notebook folder.
You’re now well-equipped to train a deep learning model in Keras, and integrate it into your app. Here are some resources and further reading to deepen your own learning:
Resources
Keras Documentation
coremltools.converters.keras.convert
Matthijs Hollemans’s blog
Jason Brownlee’s blog
Further Reading
François Chollet, Deep Learning with Python, Manning Publications
Stanford CS231N Convolutional Networks
Comparing Top Deep Learning Frameworks
Preprocessing in Data Science (Part 1): Centering, Scaling, and KNN
Gentle Introduction to the Adam Optimization Algorithm for Deep Learning
I hope you enjoyed this introduction to machine learning and Keras. Please join the discussion below if you have any questions or comments.
The post Beginning Machine Learning with Keras & Core ML appeared first on Ray Wenderlich.
Beginning Machine Learning with Keras & Core ML published first on https://medium.com/@koresol
0 notes
ajaklaaaaaaaa · 7 years ago
Quote
Who is Nick Weaver coming all the way from Seattle over L.A. to Germany wearing an Eintracht Frankfurt Jersey in his Tour promo video? He is an artist, he is a performer, he is a producer and a guy like you and me. Check out our interview for information about his HipHop approach, his self-perception and the tour dates in Bad Nauheim on the 10th and 11th of November 2017. In 2016, 330724 Spotify listeners streamed 9.8 years of his music. The first release that I found online was „Forever Automatic„ in 2012, followed by his debut „Day One, None“ in 2013, by “Yardwork” 2015, „Prowler“ in 2016 and „Photographs Of Other People“ 2017. Besides that you can find side projects and freestyles on his Youtube account. You’ll find the tour date infos at the end of the article. NICK WEAVER Website – Facebook – Twitter – Instagram – Spotify – Soundcloud – Youtube – auf RUN FFM When asked about the style of his music he says it has a Kendrick Lamar, James Blake and Jamie XX touch and is for any fans of Hip-Hop, R&B and even Electronic Music. The list of artists he likes and is inspired by is long and diverse: From Depeche Mode, Mozart, N.W.A., The National, LCD Soundsystem, Hot Chip, Mogwai, Godspeed You Black Emperor!, Jay-Z, Nas, Biggie, Mobb Deep, to Eminem.* Let’s jump in with the interview. Can you tell me something about your family- and your educational-background? I grew up outside of the city of Seattle in the United States. I have two older brothers, a mother, and a father – they all live in Seattle area. My father was a scientist for the United States Geological Survey and my mother was a writer/technical editor. I studied marketing at a college in Seattle, which lead me to a business career before I got super serious about music when I moved to L.A. It seems like you had a productivity boost since 2016 judging from you youtube output. What happened? I became much more self-sustained as an artist. 
I taught myself how to produce instrumentals, so I could pretty much do it all. That process was (and still is) so inspiring and motivating to me. It re-ignited my creativity; the desire to try new stuff. What do you think people feel when they listen to your music? I hope they hear someone who puts a lot of themselves out there. Music is such an incredible thing to be able to do, so you really gotta go all in. I hope people hear someone who really loves the process of making music, and someone who is constantly working at creating their own style of sound. You seem to be a dedicated person thinking about you inspirational and motivational talks on Youtube. Why did you decide to share those thoughts with your audience? I love talking about the process of being creative. I love sitting down with fellow artists, business owners, even friends who have big dreams in other fields. It’s motivating, and it keeps me driven. I created the #WorkFlow series as a way to let people see that side, and to show them that “Nick Weaver” the artist is also very human. In Yardwork BTS: Ep 4 – The Show you say the following: “Everyday shit that I know a lot of people relate to. It’s just real life shit. Keeping that theme, maturation, coming of age, figuring out what is important in life and sort of casting away the rest of the bullshit.” Would you say that you are a normal guy? I would like to think so, yes. Everybody has their own quirks and eccentric parts to them, I have no shortage of my own. I like that about people though. It’s not hard to figure out that you are a sports fan. On your IG are a lot of pictures showing you in jerseys of many different teams. Are you also practicing sports? I used to play a lot of sports – soccer (properly called football in your country), and basketball. I am short, so I always play sports with a big chip on my shoulder! There is one thing that we are especially interested in: Where did you got this awesome Eintracht Frankfurt jersey from? 
Good choice! Haha, I knew RUNFFM would like that! So in the States you can actually order the Bundesliga kits without sponsor logos on them. Eintracht Frankfurt is my team for life! When I toured Germany in 2016 I promised myself that if I got to see a Bundesliga team play at home, that team was going to be my squad – and it just so happened to be Eintracht that I got to see. The fans were so amazing, it was like a spiritual sporting experience for me. Frankfurt has been my team ever since. I even got to see them play a friendly match in my hometown against the Seattle Sounders.

How did some of your videos end up on the German entertainment and gaming channel RELOADIAK?

Christian (RELOADIAK) is a homie. He has been such a great support to me. He initially found my music when it was featured in a Super Street Magazine car video a few years back. He really liked it and contacted me to ask about using it. He had so many amazing fans there who really liked my music, so we built a relationship where he was cool with posting some of my content on his channel.

What kind of relationship do you have with Germany? What made you tour here and not in another European country?

The fans I have in Germany are so supportive. They show me so much love on Spotify, YouTube, Instagram, FB, everywhere. I love them for that. They created the opportunity for me to go somewhere else in the world and play music. I can’t even describe how grateful I am for the fans! The tour had to be in Germany!

It seems like you’re doing more than just rapping. You contacted me, I saw storyboards on your IG, you are playing instruments, and you are producing? Is it a necessary evil, or do you need to be in control of all the work?

I think independent artists should be in control of as much as they think they can still be happy with. I certainly think there are things like videos and photography that you need to leave to the pros. But there are so many tools these days to help you create your content.
It’s all one YouTube tutorial away.

Even if you’re doing a lot on your own, you probably have a team or people that you trust and work with. Who are these people, and how do they help you?

My manager Austin Hurwitz is a lifesaver, extremely professional, and an incredibly hard worker. He keeps me honest about my process too. My good friend Ryan Skut shoots a ton of my stuff. He shot all the artwork for the Photographs Of Other People project. I still have a core group of my best childhood friends and my family who come to all of my shows in my hometown in the States; that support after all these years means so much.

Do you still have your full-time job, and what is it about?

I have been lucky enough to do music mostly full time at this point in my career. Every once in a while I do contract work on the side for corporate event planning.

The first video on your YouTube channel supports a foundation called “Growing Veterans”, you released an EP dedicated to the movie “The Rock”, and you can be seen shooting a rifle on your IG. What do you think about the army and weapons in general?

You really did your homework on me! Growing Veterans is a great organization in the States started by my childhood friend Christopher – a US Marine veteran who wanted to help other military veterans back home. The USA still does not give our military veterans the resources they need after service. They struggle with mental health, finding work, and many other problems when they come home. Chris created Growing Veterans to find work for former military folks in the agricultural and farming industries. Our military has endured a lot on behalf of our country’s decisions; it’s important for me to support groups like Growing Veterans. Despite that IG photo, I am very anti-gun. Somehow, someway, the United States is going to have to figure out how to decrease gun violence. Literally as I write this answer, I am seeing a news update about another public shooting in a church.
Gun violence is the saddest, most senseless shit, and the United States needs real leadership to change this issue.

NICK WEAVER GERMANY TOUR

“HipHop Jam VII”
Friday, 10.11.2017, doors: 7:30 p.m.
Jugendhaus Alte Feuerwache, Johannisstrasse 5, 61231 Bad Nauheim
Facebook event

“Fashion by relict with Nick Weaver DJ Set”
Saturday, 11.11.2017, doors: 9:00 p.m.
Fashion Caffe Bar, Reinhardstraße 10, 61231 Bad Nauheim
Facebook event

To get an even more tangible picture of your persona, I would like you to share something about yourself that you do not highlight on social media. Guilty pleasures, secrets, things that you want to improve?

I love old video games. I just bought a Super Nintendo yesterday, because I still have all my old SNES games from growing up. And I’m not talking about that little “Super Nintendo Classic” that comes pre-loaded with 20 games. I went and bought the OG system on eBay. I still miss my Honda Civic, which is immortalized on my Prowler, Yardwork, and Day One, None albums. I owned that car for 10 years, a whole decade! I am going to get a tattoo of it. I just recently watched the entire Friday Night Lights TV series for the first time. It’s the closest thing to a soap opera that I vibe with.

Thank you for your time and the interesting insights!

Thank you so much for this opportunity. I know it is on short notice as well, so I really appreciate you doing this!

*If you would like to know even more about Nick Weaver, you can check out his interviews with respectmyregion, distinctionmgmt, illuminati2g, and therealhip-hop.

Thanks to my awesome and beautiful Italian co-author Laura for cross-reading! I hope you enjoyed the article. For more global hip-hop related articles, check my previous posts. Feel free to send me suggestions or feedback to “yoscha at runffm.com”. Find me on Facebook/Instagram/YouTube/SoundCloud/tumblr. Cya!
To stay up to date on further events and concerts in and around Frankfurt, we recommend our Facebook group with event announcements. The post Have you met?! with Nick Weaver first appeared on RUNFFM.
http://runffm.com/2017/11/have-you-met-with-nick-weaver/
0 notes
thecaffeinebookwarrior · 5 years ago
Text
Writer’s Resources For Portraying Polyamory
By far one of THE most frequent questions I’ve received since I began this blog years ago, and one that I’m not especially equipped to answer, as I’m happiest when I’m single and have little experience with any kind of long-term relationship (monogamous or otherwise).
My most basic advice is that polyamory is just another form of relationship between human beings: it can lift participants to new heights, provide them with comfort and commitment, or prove toxic and unhealthy -- just like any other kind of relationship.  It all depends on the people, boundaries, and dynamics involved.
There is no single formula for any kind of relationship, and so it’s impossible to follow a single mold.  All that is necessary is that they are portrayed with humanity, authenticity, and respect.
Hopefully, these long-overdue resources will help you all understand the many forms polyamory can take, and apply them to your work.  
Happy writing, everybody!
Articles:
True Stories
More True Stories
When Polyamory Goes Wrong
Polyamory Breakup Stories
Relationship Advice From Polyamorous People
Books:
More Than Two
The Ethical Slut
The Polyamorists Next Door
The Smart Girl’s Guide to Polyamory
Videos:
Polyamorous people Answer Questions
Polyamory Ted Talk
Should You Be Polyamorous?
How to Handle Jealousy in Polyamory
Why Polyamory Can’t Be For Everyone
Polyamory is Natural
425 notes · View notes
thecaffeinebookwarrior · 5 years ago
Text
Resources For Writing Nonbinary and Genderfluid Characters
“How do I portray the nonbinary experience?”  This, in various combinations, is one of the most frequent asks I receive from readers.  And, I can say as an enby person, there is no concrete answer, because there is no single way to be nonbinary.  
Thus, I have compiled a long-overdue selection of resources for your convenience, with more to come!  Nonbinary, genderfluid, gender nonconforming, or questioning followers are also welcome to submit their own experiences for inclusion.
I hope this helps, and happy writing!
The Basics:
“List of Nonbinary Identities”
“Nonbinary (Genderqueer):  Definition, Terminology, and Identities”
Articles:
“What Is It Like to Be Non-Binary?  I’m Still Finding Out”
“Being Nonbinary Has Nothing To Do With Looking Nonbinary”
“Nine Young People Explain What Being Nonbinary Means To Them”
“Seven Non-Binary People on What It’s Like To Have An ‘X’ Gender Marker”
Fiction Literature: 
The Black Tides of Heaven, by J.Y. Yang (an intoxicating “silkpunk” world, in which gender is not assigned at birth and is a matter of personal choice.)
An Unkindness of Ghosts, by Rivers Solomon (a sci-fi endeavor that derives clear inspiration from Octavia Butler’s legacy of Afrofuturism, wherein the White upper class becomes more harshly binary, and the “inferior” people of color are more fluid with their identities.)
Symptoms of Being Human, by Jeff Garvin (a young, genderfluid teen secretly blogs about their inner life, and must confront the danger of being outed.)
Lizard Radio, by Pat Schmatz (a nonbinary protagonist grapples with a rigidly binary society, and the prospect that she may have descended from lizard people.  Hashtag queer culture.)
The Unintentional Time Traveler, by Everett Maroon (a time-traveler who swaps genders along with time periods, and must come to terms with which are right for them -- and whether both is an option.)  
Orlando, by Virginia Woolf (a groundbreaking depiction of gender-fluidity and same-gender love.  The titular Orlando uses the singular “they” at one point, and gets a happy ending.  Far ahead of its time.)
Non-Fiction Literature:
Nonbinary:  Memoirs of Gender and Identity (thirty authors discuss how they’ve navigated -- or broken away from -- the gender binary.)
586 notes · View notes
thecaffeinebookwarrior · 5 years ago
Note
Hello! Do you know of any mythological/historical figures who were, by chance, genderfluid? I have a character who's male, but has mild dysphoria and has a magical ability to present more feminine and switch between when comfortable, and I'm trying to find something real-world-wise to allude them to (all the characters for my story have real-world allusions). Thanks!
*Cracks knuckles*  This is difficult to define, as contemporary terminology for gender identity didn’t exist until recently, but I most certainly do know of many mythological figures whom I would classify as genderfluid or gender nonconforming.
This almost merits a post all its own, but off the top of my head:
Ishtar, the Mesopotamian goddess of love and war, is often depicted with a beard.
Hermaphroditus, the child of Hermes and Aphrodite, is both male and female, and described as a being of tremendous beauty.
Atum, the first god in the Ancient Egyptian creation myth, was both male and female.
Shikhandi, a warrior in Hindu mythology who was identified as female at birth and then identified as male later in life.
Orlando, the protagonist of Virginia Woolf’s novel of the same name, who was immortal and periodically switched genders.
I find it a little problematic to assign gender fluidity to historical figures who may have “just” been trans – it was impossible at the time for them to publicly specify, especially as many were living in secret – but I will direct you to 10 examples of nonbinary genders throughout history.
I hope this helps, and happy writing!
112 notes · View notes