#so i finally manage to log in and i try to migrate the server or whatever using my microsoft account. it tells me that my microsoft account
Conversation
Cassie when Frank first offered for her to go undercover: Wait a minute! This is a very big decision. It might affect the course of my entire life. I shall have to think about it.
Cassie: *pauses for 1 second*
Cassie: I’ll do it.
#okay i am going to complain abt something#i have been trying to remove files from a fifteen year old computer for like three days#specifically ancient sims and minecraft files#and the way nothing fucking works#with the sims stuff i know how to remove files so it should be whatever like i did it when my laptop broke with no issues#so i opened the files and like ??? half of them are gone??? because apparently my mother was just deleting things#so i've given up on those because i don't know what's gone and i'm afraid of breaking my game by putting fucked up files on my computer#then there's the stupid fucking minecraft ones oh my god#so like. i can't log into it on the computer because apparently at some point in the last like decade they migrated their servers? so i have#go through my phone because internet browsers don't function on the computer and i have the migrate it over that way#except it's my sister's account. not mine#and i don't know what the fucking password is#except naturally she doesn't either so i have to change the password. that finally happens and i go to log in#EXCEPT!!!! you have to answer the fucking security questions#so i have to try to get into the headspace of my sister when she was like 11 and try to work out what her favourite movie and author are#naturally she refuses to help with this and asks like it's ridiculous that i would even ask#eventually i just fucking give up and after like ten minutes of the website not functioning i managed to change the security questions to#stuff that will not change lmao#so i finally manage to log in and i try to migrate the server or whatever using my microsoft account. it tells me that my microsoft account#does not exist. i literally have it open in another tab#but whatever i'll just make a new one ig so i did that with my gmail and FINALLY everything got moved over#which fucking fantastic! i can log into it on the computer now!!! so like i do that and i dont know how this game works so i had looked up#how you remove the game files and it seemed really easy except none of the stuff that the internet says i should be looking at was there#but i was sorta confused about whether i was supposed to be in the launch window or like actually open the game so i decided to open the#game and see if i could find it there#except it loaded forever and ever and then i finally had to force quit it because it froze#and now nothing on the entire computer will open#like literally nothing#i dont know what the fuck to do#okay that's all sorry LMAO
6 notes · View notes
i-want-my-iwtv · 6 years ago
Note
How has the purge affected u?
[Apologies in advance for the Wall of Text™, I feel like longposting, sorry for the dash coverage, I didn’t think I had this much to say about this… And I probably shouldn’t do this, probably should have kept this to a flippant “It sucks!” with a VC meme, but I haven’t shared much publicly lately… now feels like a singularly poignant time to do so.]
NO CUTS WE LONGPOST LIKE MEN
It’s strange. I think running and participating in the @vcsecretgifts exchange (not finished yet!), and backing up that blog and this one for preservation (not finished yet!), helped take my mind off it! I’ve been busy with @wicked-felina coordinating substitute Santas, so I haven’t had much chance to indulge in it like a participant yet, but I did see that my recipient liked my gift, and that was heartwarming! I’ll reply properly when I have the peace of mind for it (yes I could be doing it now but this is the gear I want to be on right now), and I haven’t had a chance to read the gift from my own Santa, I’m saving that as a treat!
I did the #Log/ffProt/st, that helped. The purge is/was creatively stifling, somewhat, too, bc even though I don’t produce NS/FW stuff myself (I WANT TO, THO), I do reblog it, and support it, I see other artists and writers affected by it, and I felt and still feel helpless, unable to protect them. One of our VC fandom members who draws slash art has been shadowbanned, that I know of. It’s frustrating that the morality & purity police seem to have won this battle, but they haven’t won the war. We’ll take our garbage underground if we have to. 
How crushing to wake up to one’s blog(s) just canceled w/o explanation? We were given 2 weeks’ notice? To pack up our “nasty” stuff and leave? 
There’s nothing wrong with NS/FW stuff, adult ppl should be able to talk about it, fantasize about it, make art and write fiction about it, have kinks and explore them. I never bought the “if you like it in fiction you support it in reality!” argument, just like with all dangerous things we like in fiction but wouldn’t want in reality. 
“… Fiction is how we both study and de-fang our monsters. To lock violent fiction away, or to close our eyes to it, is to give our monsters and our fears undeserved power and richer hunting grounds.” - Warren Ellis [X]
But I’ve fought those battles and there’s no point in engaging in unwinnable debate with ppl who are committed to misunderstanding me and twisting my words into a strawman they can easily knock over.  
It’s baffling that it’s an unpopular opinion that minors should be allowed to learn about sex, as much as they learn about how to (eventually) drive a car, manage alcohol consumption, defend themselves against violence, handle medication or recreational drugs, all these things that are potentially and not inherently dangerous to them, that they’ll be faced with in the Real World. I remember there were religious rituals in my youth where children could taste alcohol a little bit, it was exposure to an adult thing in a safe space, among other adults. Is this really all about Protecting the Children? Really? Or is it about mental domination? What it looks like to me is a self-proclaimed Particular Authority who wants to keep minors (and adults) submissive and reliant on that Particular Authority, it’s so much easier to keep them submissive and reliant to that same Particular Authority as adults. It’s always been about power. 
And I’m seeing that the communities most affected by the purge are AFAB ppl and LGBTQIA+. It’s misogynistic, LGBTQIA+-phobic. The fact that tungle reportedly blocked archivists from saving blogs before the NS/FW purge is just pouring salt in the wound.
I’ve started following these refugee/evicted tumblr ppl where they’ve migrated to. I’m trying to keep track of them. I’m in the @fiction-is-not-reality2 discord server, keeping my eye out for the next alternative platform.
Leading up to the purge I considered blasting a bunch of smut as a last hurrah, and I did reblog some Controversial™ stuff, just in case my blog was going to be deleted, but then, I lost steam on that. Why put in extra effort and get deleted anyway? Why poke the bear, and deliberately get deleted for it? Most of my blog is SFW, anyway.
I preserved my blog, the gifts blog, and just for archival purposes I should have been doing that all along, so it was good for my own historical safekeeping… so much good commentary and fanworks here, in the past 5+ years! Collecting the scraps just like I’d done in 1994, when there were articles about the IWTV movie and I wanted all of them, I especially wanted the illustrations and caricatures in the magazines (which was really validating of my interest in some way, fanart that was published, essentially!). And I had my folder of Deviantart I liked, of course. So I packed up my blog here to preserve it, it’s on wordpress now, iwantmyiwtv.com, with a lame layout, but I’ve got the tags showing, where fanart that’s blocked here can still be seen on WP.
I’m rambling. 
The purge reminded me that all this, as we know it, could and will be gone someday. Purges have done that before, especially to our fandom, attacked by its own canon author. We’ve survived this before. 
I’ve been on tungle since July ‘13. I’ve made and lost some wonderful friends here, some have moved on to other fandoms, or we’ve had partings of the ways. The fanart in this fandom, my memes, have been spread all over, I see them on Pinterest, Facebook, Twitter. When this blog is deleted, either by content flagging or by tumblr finally keeling over, our stuff is going to outlive us all.  
Who even made this one? One of the vintage memes. Maybe their watermark was long ago cropped off, or maybe they hadn’t put it on:
[Image: a vintage fandom meme captioned “JUDGING YOU” in Impact font]
^It was used in a meme here, but I don’t think that was the OP, it’s gotta be more than 4 yrs old. Pretty sure the “JUDGING YOU” in Impact font was around Twilight time, which came out in 2008. This meme is still floating around, it’s still amusing to ppl all these years later. Someone’s stroke of inspiration, and we may never know who it was, but we enjoy it, it’s part of the worn fabric of the fandom.
Will ppl remember me when/if I’m gone? I don’t need to be remembered, it’s enough that I was here at one point, and encouraged ppl to make fanworks, that I helped bring ppl together. I don’t need them to know it was me, specifically, or know much about me, this blog was never meant to be about me. Those I brought together might remember how they met. There are those who have seen behind the curtain and I hope to hang onto them as long as possible.
If/When this all disappears, I want ppl to know how much I enjoyed interacting with ppl through asks, the chat feature. I’ve missed answering asks, and I’ve missed the feeling of seeing new ask alerts without having to brace myself for Discourse. I’ve missed seeing that anon icon as a friendly, but shy, human being, rather than a living person who’s in pain, somewhere else in the world, throwing bricks through my window. Someone who’s suffering bc they’re not getting the attention they need, truly, someone who deserves to be loved, someone who needs validation for their opinions on things, and wanted mine, but I couldn’t give it. I’m only human, too. I made this blog for 15 year old me, who couldn’t find enough VC fanworks, so I set out to collect, make, and encourage them, but all in the spirit of optimism, bc that’s what I got out of canon. 15 year old me drew self esteem from those books. That’s the only person I ever wanted to please with this thing and that girl is still my priority. 
We’ll survive this purge, we’ve done it before. Hold onto the ppl who you’ve made connections with. I’ll be here as long as I can. 
Most importantly, I’m not letting the morality & purity police tell me what I’m allowed to learn about, make fanworks about, or enjoy in published or fan fiction, etc.
24 notes · View notes
blogdial707 · 3 years ago
Text
Evernote Dropbox
Why Migrate from Dropbox to OneDrive?
Nowadays, cloud storage has become the most popular type of storage for many people, and a lot of users have more than one cloud account, whether with the same provider or different ones. The most common cloud services are Dropbox, OneDrive, Google Drive, Amazon Drive, MEGA and so on. Some cloud users plan to switch from Dropbox to OneDrive for the following reasons:
OneDrive provides more free cloud space (5 GB) than Dropbox (2 GB).
OneDrive offers cheaper and more reasonable storage plans (50 GB: $1.99/month; 1 TB: $6.99/month; 5 TB: $9.99/month) than Dropbox (1 TB: $9.99/month; 2 TB: $19.99/month).
They have purchased an Office 365 product that includes the OneDrive service.
They have graduated from school and have to move their schoolwork from a Dropbox for Business account to their personal OneDrive.
They have resigned from their last position and need to transfer their working documents from the public Dropbox account to their own OneDrive cloud.
The Dropbox account is running out of space while there is still plenty of storage in OneDrive.
Their friends recommend OneDrive to them, and after testing it they find that OneDrive suits them better.
……
How to Migrate Dropbox to OneDrive in Common Ways?
As we all know, neither the Dropbox app nor the Microsoft OneDrive app has a built-in function to migrate data to the other, so the question is how to transfer files from Dropbox to OneDrive directly. In the following parts, we will offer two traditional methods to achieve that goal.
Solution 1: Download and Upload
Step 1. Sign in to your Dropbox account.
Step 2. Create a new folder, select all files under your Dropbox account and move them to the new folder.
Step 3. Hover over the new folder, click the three-dot symbol, click the “Download” button and wait for the process to complete.
Step 4. Log in to your OneDrive account.
Step 5. Click the “Upload” button to upload the downloaded .zip file to your OneDrive account and wait for the process to complete.
Notes:
The new folder will become a .zip file after it’s downloaded to the local PC.
If you want to upload the folder to your OneDrive directly, you need to extract that .zip file first.
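If you are comfortable with a little code, Solution 1 can also be scripted against the official SDKs instead of clicking through the web interfaces. The sketch below is not one of the methods described in this article; it assumes you have already obtained OAuth access tokens for both services, the token variables, folder names and response shapes are assumptions, and they may differ between SDK versions.

// A rough Node.js sketch of the download-and-re-upload approach using the
// official Dropbox SDK and the Microsoft Graph client. Tokens, paths and the
// target folder name are placeholders for illustration only.
require('isomorphic-fetch'); // fetch polyfill needed by the Graph client in Node
const { Dropbox } = require('dropbox');
const { Client } = require('@microsoft/microsoft-graph-client');

const dbx = new Dropbox({ accessToken: process.env.DROPBOX_TOKEN });
const graph = Client.init({
  // Minimal auth provider that hands back a pre-acquired token
  authProvider: (done) => done(null, process.env.GRAPH_TOKEN),
});

async function copyFolder(path = '') {
  const listing = await dbx.filesListFolder({ path });   // list the Dropbox folder
  for (const entry of listing.result.entries) {
    if (entry['.tag'] !== 'file') continue;              // this sketch skips sub-folders
    const download = await dbx.filesDownload({ path: entry.path_lower });
    const contents = download.result.fileBinary;         // file contents as a Buffer in Node
    // Simple upload via Microsoft Graph; files over roughly 4 MB need an upload session instead
    await graph.api(`/me/drive/root:/FromDropbox/${entry.name}:/content`).put(contents);
    console.log(`Copied ${entry.name}`);
  }
}

copyFolder().catch(console.error);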
Solution 2: Migrate Dropbox to OneDrive with Windows Explorer
Step 1. Download both the Dropbox app and the OneDrive app and install them on your PC.
Step 2. After installing them, you will find both folders in Windows File Explorer.
Step 3. Move your Dropbox files to OneDrive with the “Cut” and “Paste” features in Windows File Explorer.
As you can see, either of the solutions above can help you copy Dropbox to OneDrive, but it takes time to complete because you need to perform the operations manually. With solution 1, you cannot close the page until the process is complete. With solution 2, you need to install both apps on your PC before you can complete the operations.
Actually, there is a more effective way to move data from Dropbox to OneDrive quickly without encountering the problems above. Please continue reading the following parts.
How to Migrate Dropbox to OneDrive in a More Effective Way?
If you want to quickly move files from Dropbox to OneDrive while ensuring data security, it is worth trying a free and professional cloud-to-cloud transfer service, MultCloud, which works without logging in to each cloud separately, switching from one cloud to another, or downloading and re-uploading. Now, follow the steps below to quickly and safely migrate files from Dropbox to OneDrive.
Step 1. Register MultCloud - Free
MultCloud is a free, web-based cloud file transfer manager. To use it, you first have to create an account.
Step 2. Add Dropbox and OneDrive Accounts to MultCloud
After creating an account, sign in to the platform. In the main panel, click the “Add Clouds” tab at the top and select the cloud brand you are going to add. Then, follow the simple guidance to finish adding the cloud.
Note: You can only add one cloud account at a time, so add the other clouds by repeating the process.
Step 3. Migrate Dropbox to OneDrive with “Cloud Transfer”
Now, go to the “Cloud Transfer” tab and specify Dropbox as the source and OneDrive as the destination. Finally, click “Transfer Now” and wait for the process to complete.
Tips:
The “Cloud Transfer” feature supports an entire cloud or individual folders as the source. If you only want to transfer some files from Dropbox to OneDrive, you could use the “Copy” and “Paste” features in “Cloud Explorer”.
If you need to delete the source files after migrating Dropbox to OneDrive, just tick “Delete all source files after transfer is complete” in the Options window.
Set OneDrive as the source and Dropbox as the destination if you want to migrate OneDrive to Dropbox.
If you have a very large amount of data to transfer and want a faster transfer speed, you can also upgrade to a premium account so that MultCloud uses 10 threads to transfer your files across clouds.
More about MultCloud
Following any of the methods above, you can easily migrate Dropbox to OneDrive. If you choose the MultCloud approach, you can also enjoy other advanced features. In addition to Dropbox and OneDrive, MultCloud supports more than 30 clouds at present, including G Suite, Google Photos, MEGA, Amazon S3, Flickr, Box, pCloud, etc.
Besides the “Cloud Transfer” feature, MultCloud can also do cloud-to-cloud sync, backup and copy with “Cloud Sync”. If you are going to migrate from one G Suite account to another because your domain has changed, you can make full use of this feature.
As a browser app, MultCloud requires no downloading or installation, so you save local disk space. It can be used on all operating systems, including Windows PC & Server, Linux, Mac, iOS, Android and Chrome OS, and on all devices, including desktops, laptops, notebooks, iPads, cellphones, etc.
0 notes
qwertsypage · 4 years ago
Text
Building a Real-Time Webapp with Node.js and Socket.io
In this blogpost we showcase a project we recently finished for National Democratic Institute, an NGO that supports democratic institutions and practices worldwide. NDI’s mission is to strengthen political and civic organizations, safeguard elections and promote citizen participation, openness and accountability in government.
Our assignment was to build an MVP of an application that supports the facilitators of a cybersecurity themed interactive simulation game. As this webapp needs to be used by several people on different machines at the same time, it needed real-time synchronization which we implemented using Socket.io.
In the following article you can learn more about how we approached the project, how we structured the data access layer and how we solved challenges around creating our websocket server, just to mention a few. The final code of the project is open-source, and you’re free to check it out on Github.
A Brief Overview of the CyberSim Project
Political parties are at extreme risk to hackers and other adversaries, however, they rarely understand the range of threats they face. When they do get cybersecurity training, it’s often in the form of dull, technically complicated lectures. To help parties and campaigns better understand the challenges they face, NDI developed a cybersecurity simulation (CyberSim) about a political campaign rocked by a range of security incidents. The goal of the CyberSim is to facilitate buy-in for and implementation of better security practices by helping political campaigns assess their own readiness and experience the potential consequences of unmitigated risks.
The CyberSim is broken down into three core segments: preparation, simulation, and an after action review. During the preparation phase, participants are introduced to a fictional (but realistic) game-play environment, their roles, and the rules of the game. They are also given an opportunity to select security-related mitigations from a limited budget, providing an opportunity to "secure their systems" to the best of their knowledge and ability before the simulation begins.
The simulation itself runs for 75 minutes, during which time the participants have the ability to take actions to raise funds, boost support for their candidate and, most importantly, respond to events that occur that may negatively impact their campaign's success. These events are meant to test the readiness, awareness and skills of the participants related to information security best practices. The simulation is designed to mirror the busyness and intensity of a typical campaign environment.
The after action review is in many ways the most critical element of the CyberSim exercise. During this segment, CyberSim facilitators and participants review what happened during the simulation, what events lead to which problems during the simulation, and what actions the participants took (or should have taken) to prevent security incidents from occurring. These lessons are closely aligned with the best practices presented in the Cybersecurity Campaigns Playbook, making the CyberSim an ideal opportunity to reinforce existing knowledge or introduce new best practices presented there.
Since data representation serves as the skeleton of each application, Norbert, who built part of the app, will first walk you through the data layer created using knex and Node.js. Then he will move on to the program's heart, the socket server that manages real-time communication.
This is going to be a series of articles, so in the next part, we will look at the frontend, which is built with React. Finally, in the third post, Norbert will present the muscle that is the project's infrastructure. We used Amazon's tools to create the CI/CD, host the webserver, the static frontend app, and the database.
Now that we're through with the intro, you can enjoy reading this Socket.io tutorial / Case Study from Norbert:
The Project's Structure
Before diving deep into the data access layer, let's take a look at the project's structure:
.
├── migrations
│   └── ...
├── seeds
│   └── ...
├── src
│   ├── config.js
│   ├── logger.js
│   ├── constants
│   │   └── ...
│   ├── models
│   │   └── ...
│   ├── util
│   │   └── ...
│   ├── app.js
│   └── socketio.js
└── index.js
As you can see, the structure is relatively straightforward, as we’re not really deviating from a standard Node.js project structure. To better understand the application, let’s start with the data model.
The Data Access Layer
Each game starts with a preprogrammed poll percentage and an available budget. Throughout the game, threats (called injections) occur at a predefined time (e.g., in the second minute) to which players have to respond. To spice things up, the staff has several systems required to make responses and take actions. These systems often go down as a result of injections. The game's final goal is simple: the players have to maximize their party's poll by answering each threat.
We used a PostgreSQL database to store the state of each game. Tables that make up the data model can be classified into two different groups: setup and state tables. Setup tables store data that are identical and constant for each game, such as:
injections - contains each threat player face during the game, e.g., Databreach
injection responses - a one-to-many table that shows the possible reactions for each injection
action - operations that have an immediate on-time effect, e.g., Campaign advertisement
systems - tangible and intangible IT assets, which are prerequisites of specific responses and actions, e.g., HQ Computers
mitigations - tangible and intangible assets that mitigate upcoming injections, e.g., Create a secure backup for the online party voter database
roles - different divisions of a campaign party, e.g., HQ IT Team
curveball events - one-time events controlled by the facilitators, e.g., Banking system crash
On the other hand, state tables define the state of a game and change during the simulation. These tables are the following:
game - properties of a game like budget, poll, etc.
game systems - stores the condition of each system (is it online or offline) throughout the game
game mitigations - shows if players have bought each mitigation
game injection - stores information about injections that have happened, e.g., was it prevented, responses made to it
game log
To help you visualize the database schema, have a look at the following diagram. Please note that the game_log table was intentionally left from the image since it adds unnecessary complexity to the picture and doesn’t really help understand the core functionality of the game:
[Diagram: the database schema, showing the setup tables and the state tables that reference them]
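As a rough illustration of how such tables could be created with knex, here is a sketch (not the project's actual migration files; the column types, and the choice of a string primary key for game, are assumptions inferred from the fields used elsewhere in this post):

// A sketch of a knex migration for two of the state tables; column names and
// types are assumptions based on the rest of the post.
exports.up = async (knex) => {
  await knex.schema.createTable('game', (table) => {
    table.string('id').primary();           // facilitators choose the game id themselves
    table.string('state');
    table.float('poll');
    table.integer('budget');
    table.timestamp('started_at');
    table.boolean('paused').defaultTo(true);
    table.bigInteger('millis_taken_before_started').defaultTo(0);
  });

  await knex.schema.createTable('game_system', (table) => {
    table.increments('id');
    table.string('game_id').references('id').inTable('game');
    table.integer('system_id').references('id').inTable('system');
    table.boolean('state').defaultTo(true); // online or offline
  });
};

exports.down = async (knex) => {
  await knex.schema.dropTable('game_system');
  await knex.schema.dropTable('game');
};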
To sum up, state tables always store any ongoing game's current state. Each modification done by a facilitator must be saved and then transported back to every coordinator. To do so, we defined a method in the data access layer to return the current state of the game by calling the following function after the state is updated:
// ./src/game.js const db = require('./db'); const getGame = (id) => db('game') .select( 'game.id', 'game.state', 'game.poll', 'game.budget', 'game.started_at', 'game.paused', 'game.millis_taken_before_started', 'i.injections', 'm.mitigations', 's.systems', 'l.logs', ) .where({ 'game.id': id }) .joinRaw( `LEFT JOIN (SELECT gm.game_id, array_agg(to_json(gm)) AS mitigations FROM game_mitigation gm GROUP BY gm.game_id) m ON m.game_id = game.id`, ) .joinRaw( `LEFT JOIN (SELECT gs.game_id, array_agg(to_json(gs)) AS systems FROM game_system gs GROUP BY gs.game_id) s ON s.game_id = game.id`, ) .joinRaw( `LEFT JOIN (SELECT gi.game_id, array_agg(to_json(gi)) AS injections FROM game_injection gi GROUP BY gi.game_id) i ON i.game_id = game.id`, ) .joinRaw( `LEFT JOIN (SELECT gl.game_id, array_agg(to_json(gl)) AS logs FROM game_log gl GROUP BY gl.game_id) l ON l.game_id = game.id`, ) .first();
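The db module imported at the top of this block isn't shown in the post; a minimal version of it might look roughly like the sketch below (the file name matches the require path, but the connection details are assumptions):

// ./src/db.js: a sketch only; the real module may read its settings differently
const knex = require('knex');
const config = require('./config');

module.exports = knex({
  client: 'pg',                    // the post mentions a PostgreSQL database
  connection: config.databaseUrl,  // assumed config property
});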
The const db = require('./db'); line returns a database connection established via knex, used for querying and updating the database. By calling the getGame function above, the current state of a game can be retrieved, including each mitigation already purchased and still available for sale, online and offline systems, injections that have happened, and the game's log. Here is an example of how this logic is applied after a facilitator triggers a curveball event:
// ./src/game.js const performCurveball = async ({ gameId, curveballId }) => { try { const game = await db('game') .select( 'budget', 'poll', 'started_at as startedAt', 'paused', 'millis_taken_before_started as millisTakenBeforeStarted', ) .where({ id: gameId }) .first(); const { budgetChange, pollChange, loseAllBudget } = await db('curveball') .select( 'lose_all_budget as loseAllBudget', 'budget_change as budgetChange', 'poll_change as pollChange', ) .where({ id: curveballId }) .first(); await db('game') .where({ id: gameId }) .update({ budget: loseAllBudget ? 0 : Math.max(0, game.budget + budgetChange), poll: Math.min(Math.max(game.poll + pollChange, 0), 100), }); await db('game_log').insert({ game_id: gameId, game_timer: getTimeTaken(game), type: 'Curveball Event', curveball_id: curveballId, }); } catch (error) { logger.error('performCurveball ERROR: %s', error); throw new Error('Server error on performing action'); } return getGame(gameId); };
As you can see, after the update on the game's state happens, which this time is a change in budget and poll, the program calls the getGame function and returns its result. By applying this logic, we can manage the state easily. We have to arrange the coordinators of the same game into groups, map each possible event to a corresponding function in the models folder, and broadcast the game to everyone after someone makes a change. Let's see how we achieved it by leveraging WebSockets.
Creating Our Real-Time Socket.io Server with Node.js
As the software we’ve created is a companion app to an actual tabletop game played at different locations, it is as real time as it gets. To handle such use cases, where the state of the UIs needs to be synchronized across multiple clients, WebSockets are the go-to solution. To implement the WebSocket server and client, we chose to use Socket.io. While Socket.io clearly comes with a huge performance overhead, it freed us from a lot of hassle that arises from the stateful nature of WebSocket connections. As the expected load was minuscule, the overhead Socket.io introduced was way overshadowed by the savings in development time it provided. One of the killer features of Socket.io that fit our use case very well was that operators who join the same game can be separated easily using socket.io rooms. This way, after a participant updates the game, we can broadcast the new state to the entire room (everyone who currently joined a particular game).
To create a socket server, all we need is a Server instance created by the createServer method of the default Node.js http module. For maintainability, we organized the socket.io logic into its separate module (see: .src/socketio.js). This module exports a factory function with one argument: an http Server object. Let's have a look at it:
// ./src/socketio.js
const socketio = require('socket.io');
const SocketEvents = require('./constants/SocketEvents');

module.exports = (http) => {
  const io = socketio(http);
  io.on(SocketEvents.CONNECT, (socket) => {
    socket.on('EVENT', (input) => {
      // DO something with the given input
    });
  });
};
// index.js
const { createServer } = require('http');

const app = require('./src/app'); // Express app
const createSocket = require('./src/socketio');
const logger = require('./src/logger');

const port = process.env.PORT || 3001;
const http = createServer(app);
createSocket(http);

const server = http.listen(port, () => {
  logger.info(`Server is running at port: ${port}`);
});
As you can see, the socket server logic is implemented inside the factory function. In the index.js file then this function is called with the http Server. We didn't have to implement authorization during this project, so there isn't any socket.io middleware that authenticates each client before establishing the connection. Inside the socket.io module, we created an event handler for each possible action a facilitator can perform, including the documentation of responses made to injections, buying mitigations, restoring systems, etc. Then we mapped our methods defined in the data access layer to these handlers.
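For reference only: had authentication been needed, a Socket.io middleware could have been registered before the connection handler. The sketch below is not part of the CyberSim codebase, and the token lookup and the isValidToken check are assumptions (depending on the Socket.io version, a token would arrive via socket.handshake.auth or socket.handshake.query).

// Hypothetical authentication middleware; not part of this project
io.use((socket, next) => {
  const token =
    (socket.handshake.auth && socket.handshake.auth.token) ||
    socket.handshake.query.token;
  if (!isValidToken(token)) {        // isValidToken is a placeholder check
    return next(new Error('Unauthorized'));
  }
  return next();
});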
Bringing together facilitators
I previously mentioned that rooms make it easy to distinguish facilitators by which game they currently joined in. A facilitator can enter a room by either creating a fresh new game or joining an existing one. By translating this to "WebSocket language", a client emits a createGame or joinGame event. Let's have a look at the corresponding implementation:
// ./src/socketio.js const socketio = require('socket.io'); const SocketEvents = require('./constants/SocketEvents'); const logger = require('./logger'); const { createGame, getGame, } = require('./models/game'); module.exports = (http) => { const io = socketio(http); io.on(SocketEvents.CONNECT, (socket) => { logger.info('Facilitator CONNECT'); let gameId = null; socket.on(SocketEvents.DISCONNECT, () => { logger.info('Facilitator DISCONNECT'); }); socket.on(SocketEvents.CREATEGAME, async (id, callback) => { logger.info('CREATEGAME: %s', id); try { const game = await createGame(id); if (gameId) { await socket.leave(gameId); } await socket.join(id); gameId = id; callback({ game }); } catch (_) { callback({ error: 'Game id already exists!' }); } }); socket.on(SocketEvents.JOINGAME, async (id, callback) => { logger.info('JOINGAME: %s', id); try { const game = await getGame(id); if (!game) { callback({ error: 'Game not found!' }); } if (gameId) { await socket.leave(gameId); } await socket.join(id); gameId = id; callback({ game }); } catch (error) { logger.error('JOINGAME ERROR: %s', error); callback({ error: 'Server error on join game!' }); } }); } }
If you examine the code snippet above, the gameId variable contains the id of the game the facilitator has currently joined. By utilizing JavaScript closures, we declared this variable inside the connect callback function, hence the gameId variable is in the scope of all the following handlers. If an organizer tries to create a game while already playing (which means that gameId is not null), the socket server first kicks the facilitator out of the previous game's room, then joins the facilitator to the new game room. This is managed by the leave and join methods. The process flow of the joinGame handler is almost identical. The only key difference is that this time the server doesn't create a new game. Instead, it queries the already existing one using the now-familiar getGame method of the data access layer.
What Makes Up Our Event Handlers?
After we successfully brought together our facilitators, we had to create a different handler for each possible event. For the sake of completeness, let's look at all the events that occur during a game:
createGame, joinGame: these events' single purpose is to put the organizer into the correct game room.
startSimulation, pauseSimulation, finishSimulation: these events are used to start the event's timer, pause the timer, and stop the game entirely. Once someone emits a finishGame event, it can't be restarted.
deliverInjection: using this event, facilitators trigger security threats, which should occur in a given time of the game.
respondToInjection, nonCorrectRespondToInjection: these events record the responses made to injections.
restoreSystem: this event is to restore any system which is offline due to an injection.
changeMitigation: this event is triggered when players buy mitigations to prevent injections.
performAction: when the playing staff performs an action, the client emits this event to the server.
performCurveball: this event occurs when a facilitator triggers unique injections.
These event handlers implement the following rules:
They take up to two arguments: an optional input, which is different for each event, and a predefined callback. The callback is an exciting feature of Socket.io called acknowledgments. It lets us pass a callback function from the client side, which the server can then call with either an error or a game object, and whatever the server passes is delivered back to that client. Without diving deep into how the front end works (since this is a topic for another day), this function pops up an alert with either an error or a success message. The message will only appear for the facilitator who initiated the event (see the client-side sketch after this list).
They update the state of the game by the given inputs according to the event's nature.
They broadcast the new state of the game to the entire room. Hence we can update the view of all organizers accordingly.
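To make the acknowledgment flow more concrete, here is a sketch of what the emitting side could look like with socket.io-client. This is not the project's actual React code: the literal event name, the port and the alert helpers are assumptions.

// Client-side sketch (socket.io-client); event name and UI helpers are placeholders
const io = require('socket.io-client');
const socket = io('http://localhost:3001');

socket.emit('PERFORMCURVEBALL', { curveballId: 3 }, ({ game, error }) => {
  // This acknowledgment callback runs only for the facilitator who emitted the event
  if (error) {
    showErrorAlert(error);                          // placeholder UI helper
  } else {
    showSuccessAlert('Curveball delivered!', game); // placeholder UI helper
  }
});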
First, let's build on our previous example and see how the handler implemented the curveball events.
// ./src/socketio.js const socketio = require('socket.io'); const SocketEvents = require('./constants/SocketEvents'); const logger = require('./logger'); const { performCurveball, } = require('./models/game'); module.exports = (http) => { const io = socketio(http); io.on(SocketEvents.CONNECT, (socket) => { logger.info('Facilitator CONNECT'); let gameId = null; socket.on( SocketEvents.PERFORMCURVEBALL, async ({ curveballId }, callback) => { logger.info( 'PERFORMCURVEBALL: %s', JSON.stringify({ gameId, curveballId }), ); try { const game = await performCurveball({ gameId, curveballId, }); io.in(gameId).emit(SocketEvents.GAMEUPDATED, game); callback({ game }); } catch (error) { callback({ error: error.message }); } }, ); } }
The curveball event handler takes one input, a curveballId and the callback as mentioned earlier. The performCurveball method then updates the game's poll and budget and returns the new game object. If the update is successful, the socket server emits a gameUpdated event to the game room with the latest state. Then it calls the callback function with the game object. If any error occurs, it is called with an error object.
After a facilitator creates a game, first, a preparation view is loaded for the players. In this stage, staff members can spend a portion of their budget to buy mitigations before the game starts. Once the game begins, it can be paused, restarted, or even stopped permanently. Let's have a look at the corresponding implementation:
// ./src/socketio.js const socketio = require('socket.io'); const SocketEvents = require('./constants/SocketEvents'); const logger = require('./logger'); const { startSimulation, pauseSimulation } = require('./models/game'); module.exports = (http) => { const io = socketio(http); io.on(SocketEvents.CONNECT, (socket) => { logger.info('Facilitator CONNECT'); let gameId = null; socket.on(SocketEvents.STARTSIMULATION, async (callback) => { logger.info('STARTSIMULATION: %s', gameId); try { const game = await startSimulation(gameId); io.in(gameId).emit(SocketEvents.GAMEUPDATED, game); callback({ game }); } catch (error) { callback({ error: error.message }); } }); socket.on(SocketEvents.PAUSESIMULATION, async (callback) => { logger.info('PAUSESIMULATION: %s', gameId); try { const game = await pauseSimulation({ gameId }); io.in(gameId).emit(SocketEvents.GAMEUPDATED, game); callback({ game }); } catch (error) { callback({ error: error.message }); } }); socket.on(SocketEvents.FINISHSIMULATION, async (callback) => { logger.info('FINISHSIMULATION: %s', gameId); try { const game = await pauseSimulation({ gameId, finishSimulation: true }); io.in(gameId).emit(SocketEvents.GAMEUPDATED, game); callback({ game }); } catch (error) { callback({ error: error.message }); } }); } }
The startSimulation kicks off the game's timer, and the pauseSimulation method pauses and stops the game. Trigger time is essential to determine which injections the facilitators can invoke. After organizers trigger a threat, they hand over all necessary assets to the players. Staff members can then choose how they respond to the injection by providing a custom response or choosing from the predefined options. Next to facing threats, staff members perform actions, restore systems, and buy mitigations. The events corresponding to these activities can be triggered anytime during the game. These event handlers follow the same pattern and implement the three fundamental rules. Please check the public GitHub repo if you would like to examine these callbacks.
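For illustration, one of those remaining handlers could look roughly like the sketch below. This is not the repository's exact code, and the restoreSystem model signature is an assumption; it only shows how the shared pattern applies.

// A sketch of the shared handler pattern (not the exact repository code)
socket.on(SocketEvents.RESTORESYSTEM, async ({ systemId }, callback) => {
  logger.info('RESTORESYSTEM: %s', JSON.stringify({ gameId, systemId }));
  try {
    const game = await restoreSystem({ gameId, systemId }); // update the game state
    io.in(gameId).emit(SocketEvents.GAMEUPDATED, game);     // broadcast the new state to the room
    callback({ game });                                     // acknowledge the initiating facilitator
  } catch (error) {
    callback({ error: error.message });
  }
});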
Serving The Setup Data
In the chapter explaining the data access layer, I classified tables into two different groups: setup and state tables. State tables contain the condition of ongoing games. This data is served and updated via the event-based socket server. On the other hand, setup data consists of the available systems, game mitigations, actions, and curveball events, injections that occur during the game, and each possible response to them. This data is exposed via a simple http server. After a facilitator joins a game, the React client requests this data and caches and uses it throughout the game. The HTTP server is implemented using the express library. Let's have a look at our app.js.
// .src/app.js const helmet = require('helmet'); const express = require('express'); const cors = require('cors'); const expressPino = require('express-pino-logger'); const logger = require('./logger'); const { getResponses } = require('./models/response'); const { getInjections } = require('./models/injection'); const { getActions } = require('./models/action'); const app = express(); app.use(helmet()); app.use(cors()); app.use( expressPino({ logger, }), ); // STATIC DB data is exposed via REST api app.get('/mitigations', async (req, res) => { const records = await db('mitigation'); res.json(records); }); app.get('/systems', async (req, res) => { const records = await db('system'); res.json(records); }); app.get('/injections', async (req, res) => { const records = await getInjections(); res.json(records); }); app.get('/responses', async (req, res) => { const records = await getResponses(); res.json(records); }); app.get('/actions', async (req, res) => { const records = await getActions(); res.json(records); }); app.get('/curveballs', async (req, res) => { const records = await db('curveball'); res.json(records); }); module.exports = app;
As you can see, everything is pretty standard here. We didn't need to implement any method other than GET since this data is inserted and changed using seeds.
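The seed files themselves aren't shown in the post; a knex seed for one of the setup tables might look roughly like this (a sketch only; the column names and the rows are assumptions based on the examples mentioned earlier):

// ./seeds/01_system.js: a sketch; real columns and rows are assumptions
exports.seed = async (knex) => {
  await knex('system').del(); // wipe the setup table before re-inserting
  await knex('system').insert([
    { id: 1, name: 'HQ Computers' },
    { id: 2, name: 'Online party voter database' },
  ]);
};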
Final Thoughts On Our Socket.io Game
Now we can put together how the backend works. State tables store the games' state, and the data access layer returns the new game state after each update. The socket server organizes the facilitators into rooms, so each time someone changes something, the new game is broadcasted to the entire room. Hence we can make sure that everyone has an up-to-date view of the game. In addition to dynamic game data, static tables are accessible via the http server.
Next time, we will look at how the React client manages all this, and after that I'll present the infrastructure behind the project. You can check out the code of this app in the public GitHub repo!
In case you're looking for experienced full-stack developers, feel free to reach out to us via [email protected], or via using the form below this article.
You can also check out our Node.js Development & Consulting service page for more info on our capabilities.
Building a Real-Time Webapp with Node.js and Socket.io published first on https://koresolpage.tumblr.com/
0 notes
suzanneshannon · 4 years ago
Text
Let’s Create Our Own Authentication API with Nodejs and GraphQL
Authentication is one of the most challenging tasks for developers just starting with GraphQL. There are a lot of technical considerations, including what ORM would be easy to set up, how to generate secure tokens and hash passwords, and even what HTTP library to use and how to use it. 
In this article, we’ll focus on local authentication. It’s perhaps the most popular way of handling authentication in modern websites and does so by requesting the user’s email and password (as opposed to, say, using Google auth.)
Moreover, this article uses Apollo Server 2, JSON Web Tokens (JWT), and Sequelize ORM to build an authentication API with Node.
Handling authentication
As in, a log in system:
Authentication identifies or verifies a user.
Authorization is validating the routes (or parts of the app) the authenticated user can have access to. 
The flow for implementing this is:
The user registers using password and email
The user’s credentials are stored in a database
The user is redirected to the login when registration is completed
The user is granted access to specific resources when authenticated
The user’s state is stored in any one of the browser storage mediums (e.g. localStorage, cookies, session) or JWT.
Pre-requisites
Before we dive into the implementation, here are a few things you’ll need to follow along.
Node 6 or higher
Yarn (recommended) or NPM
GraphQL Playground
Basic Knowledge of GraphQL and Node
…an inquisitive mind!
Dependencies 
This is a big list, so let’s get into it:
Apollo Server: An open-source GraphQL server that is compatible with any kind of GraphQL client. We won’t be using Express for our server in this project. Instead, we will use the power of Apollo Server to expose our GraphQL API.
bcryptjs: We want to hash the user passwords in our database. That’s why we will use bcrypt. It relies on Web Crypto API‘s getRandomValues interface to obtain secure random numbers.
dotenv: We will use dotenv to load environment variables from our .env file. 
jsonwebtoken: Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token. jsonwebtokenwill be used to generate a JWT which will be used to authenticate users.
nodemon: A tool that helps develop Node-based applications by automatically restarting the node application when changes in the directory are detected. We don’t want to be closing and starting the server every time there’s a change in our code. Nodemon inspects changes every time in our app and automatically restarts the server. 
mysql2: An SQL client for Node. We need it to connect to our SQL server so we can run migrations.
sequelize: Sequelize is a promise-based Node ORM for Postgres, MySQL, MariaDB, SQLite and Microsoft SQL Server. We will use Sequelize to automatically generate our migrations and models. 
sequelize cli: We will use Sequelize CLI to run Sequelize commands. Install it globally with yarn add --global sequelize-cli  in the terminal.
Setup directory structure and dev environment
Let’s create a brand new project. Create a new folder and run this inside of it:
yarn init -y
The -y flag indicates we are selecting yes to all the yarn init questions and using the defaults.
That gives us a package.json file in the folder, so let’s install the project dependencies:
yarn add apollo-server bcryptjs dotenv jsonwebtoken nodemon sequelize sqlite3
Next, let’s add Babel to our development environment:
yarn add babel-cli babel-preset-env babel-preset-stage-0 --dev
Now, let’s configure Babel. Run touch .babelrc in the terminal to create a Babel config file, then open it and add this:
{   "presets": ["env", "stage-0"] }
It would also be nice if our server starts up and migrates data as well. We can automate that by updating package.json with this:
"scripts": {   "migrate": " sequelize db:migrate",   "dev": "nodemon src/server --exec babel-node -e js",   "start": "node src/server",   "test": "echo \"Error: no test specified\" && exit 1" },
Here’s our package.json file in its entirety at this point:
{   "name": "graphql-auth",   "version": "1.0.0",   "main": "index.js",   "scripts": {     "migrate": " sequelize db:migrate",     "dev": "nodemon src/server --exec babel-node -e js",     "start": "node src/server",     "test": "echo \"Error: no test specified\" && exit 1"   },   "dependencies": {     "apollo-server": "^2.17.0",     "bcryptjs": "^2.4.3",     "dotenv": "^8.2.0",     "jsonwebtoken": "^8.5.1",     "nodemon": "^2.0.4",     "sequelize": "^6.3.5",     "sqlite3": "^5.0.0"   },   "devDependencies": {     "babel-cli": "^6.26.0",     "babel-preset-env": "^1.7.0",     "babel-preset-stage-0": "^6.24.1"   } }
Now that our development environment is set up, let’s turn to the database where we’ll be storing things.
Database setup
We will be using MySQL as our database and Sequelize ORM for our relationships. Run sequelize init (assuming you installed it globally earlier). The command should create three folders: /config /models and /migrations. At this point, our project directory structure is shaping up. 
Let’s configure our database. First, create a .env file in the project root directory and paste this:
NODE_ENV=development DB_HOST=localhost DB_USERNAME= DB_PASSWORD= DB_NAME=
Then go to the /config folder we just created and rename the config.json file in there to config.js. Then, drop this code in there:
require('dotenv').config() const dbDetails = {   username: process.env.DB_USERNAME,   password: process.env.DB_PASSWORD,   database: process.env.DB_NAME,   host: process.env.DB_HOST,   dialect: 'mysql' } module.exports = {   development: dbDetails,   production: dbDetails }
Here we are reading the database details we set in our .env file. process.env is a global variable injected by Node and it’s used to represent the current state of the system environment.
Let’s update our database details with the appropriate data. Open the SQL server and create a database called graphql_auth. I use Laragon as my local server and phpMyAdmin to manage database tables.
Whatever you use, we’ll want to update the .env file with the latest information:
NODE_ENV=development DB_HOST=localhost DB_USERNAME=<your_db_username_here> DB_PASSWORD= DB_NAME=graphql_auth
Let’s configure Sequelize. Create a .sequelizerc file in the project’s root and paste this:
const path = require('path')

module.exports = {
  config: path.resolve('config', 'config.js')
}
Now let’s integrate our config into the models. Go to the index.js in the /models folder and edit the config variable.
const config = require(__dirname + '/../../config/config.js')[env]
Finally, let’s write our model. For this project, we need a User model. Let’s use Sequelize to auto-generate the model. Here’s what we need to run in the terminal to set that up:
sequelize model:generate --name User --attributes username:string,email:string,password:string
Let’s edit the model it creates for us. Go to user.js in the /models folder and paste this:
'use strict'; module.exports = (sequelize, DataTypes) => {   const User = sequelize.define('User', {     username: {       type: DataTypes.STRING,     },     email: {       type: DataTypes.STRING,       },     password: {       type: DataTypes.STRING,     }   }, {});   return User; };
Here, we created attributes and fields for username, email and password. Let’s run a migration to keep track of changes in our schema:
yarn migrate
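For reference, the model:generate command from earlier also created a migration file under /migrations, which is what yarn migrate runs. It is generated code, so the sketch below is only roughly what it looks like:

// ./migrations/XXXXXXXXXXXXXX-create-user.js (roughly what sequelize-cli generates)
'use strict';
module.exports = {
  up: async (queryInterface, Sequelize) => {
    await queryInterface.createTable('Users', {
      id: { allowNull: false, autoIncrement: true, primaryKey: true, type: Sequelize.INTEGER },
      username: { type: Sequelize.STRING },
      email: { type: Sequelize.STRING },
      password: { type: Sequelize.STRING },
      createdAt: { allowNull: false, type: Sequelize.DATE },
      updatedAt: { allowNull: false, type: Sequelize.DATE }
    });
  },
  down: async (queryInterface) => {
    await queryInterface.dropTable('Users');
  }
};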
Let’s now write the schema and resolvers.
Integrate schema and resolvers with the GraphQL server 
In this section, we’ll define our schema, write resolver functions and expose them on our server.
The schema
In the src folder, create a new folder called /schema and create a file called schema.js. Paste in the following code:
const { gql } = require('apollo-server') const typeDefs = gql`   type User {     id: Int!     username: String     email: String!   }   type AuthPayload {     token: String!     user: User!   }   type Query {     user(id: Int!): User     allUsers: [User!]!     me: User   }   type Mutation {     registerUser(username: String, email: String!, password: String!): AuthPayload!     login (email: String!, password: String!): AuthPayload!   } ` module.exports = typeDefs
Here we’ve imported graphql-tag from apollo-server. Apollo Server requires wrapping our schema with gql. 
The resolvers
In the src folder, create a new folder called /resolvers and create a file in it called resolver.js. Paste in the following code:
const bcrypt = require('bcryptjs') const jsonwebtoken = require('jsonwebtoken') const models = require('../models') require('dotenv').config() const resolvers = {     Query: {       async me(_, args, { user }) {         if(!user) throw new Error('You are not authenticated')         return await models.User.findByPk(user.id)       },       async user(root, { id }, { user }) {         try {           if(!user) throw new Error('You are not authenticated!')           return models.User.findByPk(id)         } catch (error) {           throw new Error(error.message)         }       },       async allUsers(root, args, { user }) {         try {           if (!user) throw new Error('You are not authenticated!')           return models.User.findAll()         } catch (error) {           throw new Error(error.message)         }       }     },     Mutation: {       async registerUser(root, { username, email, password }) {         try {           const user = await models.User.create({             username,             email,             password: await bcrypt.hash(password, 10)           })           const token = jsonwebtoken.sign(             { id: user.id, email: user.email},             process.env.JWT_SECRET,             { expiresIn: '1y' }           )           return {             token, id: user.id, username: user.username, email: user.email, message: "Authentication succesfull"           }         } catch (error) {           throw new Error(error.message)         }       },       async login(_, { email, password }) {         try {           const user = await models.User.findOne({ where: { email }})           if (!user) {             throw new Error('No user with that email')           }           const isValid = await bcrypt.compare(password, user.password)           if (!isValid) {             throw new Error('Incorrect password')           }           // return jwt           const token = jsonwebtoken.sign(             { id: user.id, email: user.email},             process.env.JWT_SECRET,             { expiresIn: '1d'}           )           return {            token, user           }       } catch (error) {         throw new Error(error.message)       }     }   }, 
 } module.exports = resolvers
That’s a lot of code, so let’s see what’s happening in there.
First we imported our models, bcrypt and jsonwebtoken, and then initialized our environment variables.
Next are the resolver functions. In the query resolver, we have three functions (me, user and allUsers):
me fetches the details of the currently logged-in user. It receives the user object through the context argument; the context also provides access to our database, which is used to load the data for the user with that ID.
user fetches the details of a user based on their ID. It accepts id as a query argument and the user object through the context.
allUsers returns the details of all the users.
user is an object if someone is logged in and null if they are not. We create this user in our mutations.
In the mutation resolver, we have two functions (registerUser and loginUser):
registerUser accepts the username, email and password of the user and creates a new row with these fields in our database. It's important to note that we used the bcryptjs package to hash the user's password with bcrypt.hash(password, 10). jsonwebtoken.sign synchronously signs the given payload (in this case the user id and email) into a JSON Web Token string. Finally, registerUser returns the JWT string and user profile if successful, and returns an error message if something goes wrong.
login accepts email and password and checks whether they match a stored user. First, we check if the email value already exists somewhere in the user database.
models.User.findOne({ where: { email }}) if (!user) {   throw new Error('No user with that email') }
Then, we use bcrypt’s bcrypt.compare method to check if the password matches. 
const isValid = await bcrypt.compare(password, user.password) if (!isValid) {   throw new Error('Incorrect password') }
Then, just like we did previously in registerUser, we use jsonwebtoken.sign to generate a JWT string. The login mutation returns the token and user object.
Now let’s add the JWT_SECRET to our .env file.
JWT_SECRET=somereallylongsecret
The server
Finally, the server! Create a server.js in the project’s root folder and paste this:
const { ApolloServer } = require('apollo-server') const jwt =  require('jsonwebtoken') const typeDefs = require('./schema/schema') const resolvers = require('./resolvers/resolvers') require('dotenv').config() const { JWT_SECRET, PORT } = process.env const getUser = token => {   try {     if (token) {       return jwt.verify(token, JWT_SECRET)     }     return null   } catch (error) {     return null   } } const server = new ApolloServer({   typeDefs,   resolvers,   context: ({ req }) => {     const token = req.get('Authorization') || ''     return { user: getUser(token.replace('Bearer', ''))}   },   introspection: true,   playground: true }) server.listen({ port: process.env.PORT || 4000 }).then(({ url }) => {   console.log(`🚀 Server ready at ${url}`); });
Here, we import the schema, resolvers and jwt, and initialize our environment variables. First, we verify the JWT token with verify. jwt.verify accepts the token and the JWT secret as parameters.
Next, we create our server with an ApolloServer instance that accepts typeDefs and resolvers.
We have a server! Let’s start it up by running yarn dev in the terminal.
Testing the API
Let's now test the GraphQL API with GraphQL Playground. We should be able to register, log in, view all users, and fetch a single user by ID.
We'll start by opening up the GraphQL Playground app, or just open http://localhost:4000 in the browser to access it.
Mutation for register user
mutation {
  registerUser(username: "Wizzy", email: "[email protected]", password: "wizzyekpot") {
    token
  }
}
We should get something like this:
{
  "data": {
    "registerUser": {
      "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzAwLCJleHAiOjE2MzA3OTc5MDB9.gmeynGR9Zwng8cIJR75Qrob9bovnRQT242n6vfBt5PY"
    }
  }
}
Mutation for login 
Let’s now log in with the user details we just created:
mutation {
  login(email: "[email protected]", password: "wizzyekpot") {
    token
  }
}
We should get something like this:
{
  "data": {
    "login": {
      "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzcwLCJleHAiOjE1OTkzMjY3NzB9.PDiBKyq58nWxlgTOQYzbtKJ-HkzxemVppLA5nBdm4nc"
    }
  }
}
Awesome!
Query for a single user
To query a single user, we need to pass the user's token as the Authorization header. Go to the HTTP Headers tab.
…and paste this:
{
  "Authorization": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzcwLCJleHAiOjE1OTkzMjY3NzB9.PDiBKyq58nWxlgTOQYzbtKJ-HkzxemVppLA5nBdm4nc"
}
Here’s the query:
query myself {
  me {
    id
    email
    username
  }
}
And we should get something like this:
{
  "data": {
    "me": {
      "id": 15,
      "email": "[email protected]",
      "username": "Wizzy"
    }
  }
}
Great! Let’s now get a user by ID:
query singleUser {
  user(id: 15) {
    id
    email
    username
  }
}
And here’s the query to get all users:
{
  allUsers {
    id
    username
    email
  }
}
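All of these Playground requests can, of course, also be issued from code. As a rough sketch (assuming Node 18+ for the built-in fetch and the server from above running on port 4000), a call to the protected me query might look like this:

// Sketch of a programmatic client call to the protected `me` query.
// Assumes Node 18+ (global fetch) and the Apollo server above on port 4000.
async function fetchMe(token) {
  const res = await fetch('http://localhost:4000/', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // As in the Playground example, the raw token is sent as the header value.
      Authorization: token
    },
    body: JSON.stringify({ query: '{ me { id email username } }' })
  })

  const { data, errors } = await res.json()
  if (errors) throw new Error(errors[0].message)
  return data.me
}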
Summary
Authentication is one of the toughest tasks when building a website that requires it. GraphQL enabled us to build an entire authentication API with just one endpoint. Sequelize ORM makes creating relationships with our SQL database so easy, we barely had to worry about our models. It's also remarkable that we didn't need an HTTP server library (like Express) with Apollo GraphQL as middleware: Apollo Server 2 now enables us to create our own library-independent GraphQL servers!
Check out the source code for this tutorial on GitHub.
The post Let’s Create Our Own Authentication API with Nodejs and GraphQL appeared first on CSS-Tricks.
0 notes
xhostcom · 5 years ago
Text
WPvivid Backup Plugin For Wordpress or ClassicPress
Backing up your Wordpress or ClassicPress site is paramount to site owners and self builders, and an essential part of the maintenance routine. Having a backup solution on your site is one of the most basic ways to ensure WordPress security. The platform also offers many different solutions for it. A relatively new contender in this area is Migrate & Backup WordPress by WPvivid, which calls itself  “the only free all-in-one backup, restore and migration WordPress plugin”. To see what it can do for users and if it brings anything new to the table, in this review I will take an in-depth look at it. Below, I will examine its standout features and go through all of its key functionality so you can make a decision whether it’s the right thing for your needs.
Stats, Numbers, and Features
Migrate & Backup WordPress is a free plugin available in the WordPress directory. At the time of this writing, it has over 20,000 installs and a solid 4.9/5-star rating. Here’s what it promises it can do for you: Migrate your site to a new domain with a single click, e.g. from development to live production or vice versa Create automatic (remote) backups of your WordPress site including one-click restore or restore via upload There is also a pro version with additional features that we will talk about later. To really get an impression, let’s take it for a spin.
How to Use the WPvivid Backup Plugin
Here are the basics of using the plugin on your WordPress site. 1. Install WPvivid Backup As always, the first thing you want to do is start with the installation. For that, the quickest way is to go to Plugins > Add New and then searching for it by name (WPvivid is enough). Once you have located it in the list, hit the Install Now button and wait until the download has finished. Then, don’t forget to click on Activate. 2. Create a Manual Backup After the installation, you automatically land on the plugin’s main page.
Here, you have access to its most basic functionality – creating manual backups. For that, simply choose what to back up (database and files, only files, only database) and whether to save them locally or remotely (more on that soon). You can also check a box to determine that a backup should only be deleted manually and not automatically. When you are happy with your choices, hit the big Backup Now button and the plugin will go to work. You can watch the progress via a status bar at the top.
When you are done, the finished backup will appear at the bottom of the page.
This is where all of them show up. Here, you can also check the log, download the backup, start a restore (you will soon learn how), or delete an existing backup. So far, so good. 3. Create a Backup Schedule Of course, with backups, it’s better to have a set-it-and-forget-it solution. That’s where scheduling comes into play. You find the options for that under the Schedule tab.
Setting one up is really easy. Just set a tick mark at the top where it says Enable backup schedule to get started. Next, pick an interval (every 12 hours, daily, weekly, every fortnight, or every month). After that, choose what you want the WPvivid backup plugin to save in terms of files and database and, finally, whether to store it remotely or locally. Save the changes at the bottom when you are done. From now on, your site will automatically be backed up according to your settings. You can also see this on the dashboard where it shows your next backup along with other information.
4. Set up a Remote Storage If you store your backups in the same place as your site (i.e. on your server), in a worst-case scenario, they might get lost together with your site and you are left with nothing. For that reason, under Remote Storage, WPvivid backup allows you to save your stuff in many different places off-site: Google Drive Dropbox Microsoft OneDrive Amazon S3 DigitalOcean Spaces FTP/SFTP Pick your favorite, enter an alias (so your recognize which storage it is on your own site), configure settings, choose whether to use it as the default remote storage and then hit the blue button to authenticate or test your settings.
Depending on your choice, you might have to authenticate with the storage provider in the next step and also give the plugin permission/access. However, once you are done, you can also choose remote storage for your manual and automated backups. The new option will also show up in your list of available storage spaces.
In my test, this worked without a hitch. 5. How to Do a Restore Restoring from storage is pretty much as easy as backing up. Just hit the Restore button on your list of available backups. It will show you this page:
Click on Restore, confirm when asked if you want to continue. You will see the progress in the field below and get a message when it’s done If the space is remote, the plugin will ask you to download the backup to your server first. In addition, on the main page, you can also upload manual backups to your site via Upload.
When have done either, click the Scan uploaded backup or received backup button to see it in the list. You can restore them from here as described. 6. Auto-Migration Another option the WPvivid backup plugin offers is migrating your site from one server to another. This can be two live servers, live to staging, local to live, and many other combinations. You can do this automatically or manually. For automatic, you first need to go create a site key for authentication. For that, the same plugin needs to be installed on the site you want to migrate to. There, go to the Key tab. Choose how long you want your key to be valid for and then hit Generate. It will give you something like this:
Copy it, then log into the site you want to migrate and go to Auto-Migration.
Here, paste the key from earlier and click Save at the bottom. The plugin will check if everything is alright and then tell you whether it’s fine to move from one place to the other.
Choose what you want to transfer and hit Clone then Transfer at the bottom afterward. Note that the plugin makers recommend to disable redirect, firewall, security, and caching plugins for the time of the transfer. However, that’s it. Alternatively, you can also download a backup and then restore it manually in the target site as described above. 7. Plugin Settings The plugin offers a number of settings that you can find in the Settings tab (how fitting!).
They are divided into general and advanced settings. Here are the options under general: Determine the number of backups to keep (up to seven in the free version) Display options for access to the WPvivid backup plugin in the WordPress back end File merging options to save space Storage and naming options for backups Ability to remove older backup instances Setup options for email reporting when backing up Sizes of temporary files and logs and the ability to delete them Export and import of settings for use on other websites You also get a bunch of links to helpful resources. Here’s what to find in the advanced settings: Switch on optimization mode for shared hosting in case there are problems Controls for file compression, file exclusion, script execution limit, memory limits, chunk size, and times to try until time out 8. Other Options Here are the remaining tabs in the main dashboard of WPvivid backup: Debug — If you are having problems, you can send debug information to WPvivid or download them to send manually Logs — Saves everything going on with the plugin so you can see what happened during your backups MainWP — If you are using MainWP to manage your WordPress site, WPvivid backup has their own plugin solution for it that you can find here 9. Export & Import Besides its main functionality, the plugin also lets you transfer posts and pages between WordPress installations (including images) under WPvivid Backup > Export & Import.
Simply pick which one you want to transfer. Then, in the next step, choose your filters (categories, authors, date).
This will give you a list, from which can pick single posts and pages.
Add a comment if needed, otherwise hit Export and Download. This gives you a zip file. You can then upload it in another installation under Import or upload it to the import directory via FTP and scan from there.
Choose which author to assign the pages to during the process and that’s it.
WPvivid Backup & Restore – Pro Version
As mentioned, WPvivid backup also comes with a pro version, which you can find on their website. It gives you additional options and is currently in beta so you can use it for free. If you do, you can get 40% off of a lifetime license for the finished product. Here’s what it will include: Custom migrations — Choose what to take with you in terms of core files, database tables, themes, plugins, and more. Advanced remote options — Migration and restore becomes possible via remote storage and you can create custom folders in your remote storage. More advanced scheduling options — Time zones, custom start times, custom backup content, and storage locations. Staging and Multisite support — The ability to create staging environments and publish them to live in one click and use the plugin with WordPress Multisite although I do not recommend using Multisite if it is avoidable. Better reporting — Email reports sent to multiple addressees More backups — Store more than seven backups at a time Pricing will start at $49/year for annual plans and $99 for lifetime licenses. It gets more expensive depending on features and the number of websites you want to use the plugin on.
Evaluation of WPvivid
Overall, WPvivid backup is a solid plugin that is intuitive to work with. I didn’t run into any major problems while testing except that I couldn’t get it to work in my local XAMPP installation. I also think some of the user interface could use some polishing. Aside from that, my only criticism is that WPvivid is one of the plugin solutions where you need to have an existing website to migrate to with the plugin installed. For site migrations, I like a solution better that allows you to clone a website and deploy it on an empty server or as in most of our migrations, just done by hand via FTP and DB export..
Conclusion
Having a backup solution is one of the most basic ways to keep your site safe. No WordPress website should go without. With WPvivd Backup & Restore, users now have even more choices than before. Above, we have gone over the plugin’s main features and how it works. It comes with lots of functionality, is well made, and easy to use. While there are few things to criticize, if you are looking for a new backup solution for WordPress, I can wholeheartedly recommend it. Have you tried out the WPvivid backup plugin? What was your first impression? Please share in the comments below. If you enjoyed this post, why not check out this article on how to Enable GZIP Compression on WordPress or ClassicPress! Post by Xhostcom Wordpress & Digital Services, subscribe to newsletter for more! Read the full article
0 notes
nancydsmithus · 5 years ago
Text
Writing A Multiplayer Text Adventure Engine In Node.js: Game Engine Server Design (Part 2)
Writing A Multiplayer Text Adventure Engine In Node.js: Game Engine Server Design (Part 2)
Fernando Doglio
2019-10-23T14:00:59+02:002019-10-23T12:06:03+00:00
After some careful consideration and actual implementation of the module, some of the definitions I made during the design phase had to be changed. This should be a familiar scene for anyone who has ever worked with an eager client who dreams about an ideal product but needs to be restrained by the development team.
Once features have been implemented and tested, your team will start noticing that some characteristics might differ from the original plan, and that’s alright. Simply notify, adjust, and go on. So, without further ado, allow me to first explain what has changed from the original plan.
Battle Mechanics
This is probably the biggest change from the original plan. I know I said I was going to go with a D&D-esque implementation in which each PC and NPC involved would get an initiative value and after that, we would run a turn-based combat. It was a nice idea, but implementing it on a REST-based service is a bit complicated since you can’t initiate the communication from the server side, nor maintain status between calls.
So instead, I will take advantage of the simplified mechanics of REST and use that to simplify our battle mechanics. The implemented version will be player-based instead of party-based, and will allow players to attack NPCs (Non-Player Characters). If their attack succeeds, the NPCs will be killed or else they will attack back by either damaging or killing the player.
Whether an attack succeeds or fails will be determined by the type of weapon used and the weaknesses an NPC might have. So basically, if the monster you’re trying to kill is weak against your weapon, it dies. Otherwise, it’ll be unaffected and — most likely — very angry.
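To make that concrete, here is a rough sketch of how such a check could be written (illustrative only; the field names like weaknesses and damage are assumptions, not necessarily the engine's actual schema):

// Illustrative only: resolve a player's attack against an NPC.
// Assumes NPC definitions carry `weaknesses` (a list of weapon types) and `damage`.
function resolveAttack(player, npc, weapon) {
  const npcIsWeak = (npc.weaknesses || []).includes(weapon.type)

  if (npcIsWeak) {
    // The weapon matches a weakness: the NPC dies.
    return { player, npc: { ...npc, hp: 0, dead: true } }
  }

  // Otherwise the NPC strikes back, damaging (and possibly killing) the player.
  const hp = Math.max(0, player.hp - npc.damage)
  return { npc, player: { ...player, hp, dead: hp === 0 } }
}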
Triggers
If you paid close attention to the JSON game definition from my previous article, you might’ve noticed the trigger’s definition found on scene items. A particular one involved updating the game status (statusUpdate). During implementation, I realized having it working as a toggle provided limited freedom. You see, in the way it was implemented (from an idiomatic point of view), you were able to set a status but unsetting it wasn’t an option. So instead, I’ve replaced this trigger effect with two new ones: addStatus and removeStatus. These will allow you to define exactly when these effects can take place — if at all. I feel this is a lot easier to understand and reason about.
This means that the triggers now look like this:

"triggers": [
  {
    "action": "pickup",
    "effect": {
      "addStatus": "has light",
      "target": "game"
    }
  },
  {
    "action": "drop",
    "effect": {
      "removeStatus": "has light",
      "target": "game"
    }
  }
]
When picking up the item, we’re setting up a status, and when dropping it, we’re removing it. This way, having multiple game-level status indicators is completely possible and easy to manage.
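Applying those effects at runtime can stay equally small. The snippet below is just a sketch of the idea (the statuses field and function name are my own, not the engine's exact code):

// Sketch: apply an addStatus / removeStatus trigger effect to a status list.
// `target.statuses` is an assumed field holding the active status strings.
function applyTriggerEffect(effect, target) {
  const statuses = new Set(target.statuses || [])

  if (effect.addStatus) statuses.add(effect.addStatus)
  if (effect.removeStatus) statuses.delete(effect.removeStatus)

  return { ...target, statuses: [...statuses] }
}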
The Implementation
With those updates out of the way, we can start covering the actual implementation. From an architectural point of view, nothing changed; we’re still building a REST API that will contain the main game engine’s logic.
The Tech Stack
For this particular project, the modules I’m going to be using are the following:
Express.js: Obviously, I'll be using Express to be the base for the entire engine.
Winston: Everything in regards to logging will be handled by Winston.
Config: Every constant and environment-dependant variable will be handled by the config.js module, which greatly simplifies the task of accessing them.
Mongoose: This will be our ORM. I will model all resources using Mongoose Models and use that to interact directly with the database.
uuid: We'll need to generate some unique IDs — this module will help us with that task.
As for other technologies used aside from Node.js, we have MongoDB and Redis. I like to use Mongo due to the lack of schema required. That simple fact allows me to think about my code and the data formats, without having to worry about updating the structure of my tables, schema migrations or conflicting data types.
Regarding Redis, I tend to use it as a support system as much as I can in my projects and this case is no different. I will be using Redis for everything that can be considered volatile information, such as party member numbers, command requests, and other types of data that are small enough and volatile enough to not merit permanent storage.
I’m also going to be using Redis’ key expiration feature to auto manage some aspects of the flow (more on this shortly).
API Definition
Before moving into client-server interaction and data-flow definitions I want to go over the endpoints defined for this API. They aren’t that many, mostly we need to comply with the main features described in Part 1:
Join a game: A player will be able to join a game by specifying the game's ID.
Create a new game: A player can also create a new game instance. The engine should return an ID, so that others can use it to join.
Return scene: This feature should return the current scene where the party is located. Basically, it'll return the description, with all of the associated information (possible actions, objects in it, etc.).
Interact with scene: This is going to be one of the most complex ones, because it will take a command from the client and perform that action — things like move, push, take, look, read, to name just a few.
Check inventory: Although this is a way to interact with the game, it does not directly relate to the scene. So, checking the inventory for each player will be considered a different action.
Register client application: The above actions require a valid client to execute them. This endpoint will verify the client application and return a Client ID that will be used for authentication purposes on subsequent requests.
The above list translates into the following list of endpoints:
POST /clients: Client applications will require to get a Client ID key using this endpoint.
POST /games: New game instances are created using this endpoint by the client applications.
POST /games/:id: Once the game is created, this endpoint will enable party members to join it and start playing.
GET /games/:id/:playername: This endpoint will return the current game state for a particular player.
POST /games/:id/:playername/commands: Finally, with this endpoint, the client application will be able to submit commands (in other words, this endpoint will be used to play).
Let me go into a bit more detail about some of the concepts I described in the previous list.
Client Apps
The client applications will need to register into the system to start using it. All endpoints (except for the first one on the list) are secured and will require a valid application key to be sent with the request. In order to obtain that key, client apps need to simply request one. Once provided, they will last for as long as they are used, or will expire after a month of not being used. This behavior is controlled by storing the key in Redis and setting a one-month long TTL to it.
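With the node-redis client, that key-plus-TTL behaviour boils down to very little code. The sketch below shows the idea; the key prefix and function name are made up for illustration, and this is not the engine's exact implementation:

// Sketch: issue a client API key that Redis expires after a month of non-use.
// The `clients:` key prefix is an assumption made for this example.
const { createClient } = require('redis')
const { v4: uuidv4 } = require('uuid')

const ONE_MONTH_IN_SECONDS = 30 * 24 * 60 * 60

async function registerClient(appName) {
  const redis = createClient()
  await redis.connect()

  const apiKey = uuidv4()
  // EX sets the TTL in seconds; re-running SET on every use refreshes it.
  await redis.set(`clients:${apiKey}`, appName, { EX: ONE_MONTH_IN_SECONDS })

  await redis.quit()
  return apiKey
}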
Game Instance
Creating a new game basically means creating a new instance of a particular game. This new instance will contain a copy of all of the scenes and their content. Any modifications done to the game will only affect the party. This way, many groups can play the same game on their own individual way.
Player’s Game State
This is similar to the previous one, but unique to each player. While the game instance holds the game state for the entire party, the player’s game state holds the current status for one particular player. Mainly, this holds inventory, position, current scene and HP (health points).
Player Commands
Once everything is set up and the client application has registered and joined a game, it can start sending commands. The implemented commands in this version of the engine include: move, look, pickup and attack.
The move command will allow you to traverse the map. You'll be able to specify the direction you want to move towards and the engine will let you know the result. If you take a quick glimpse at Part 1, you can see the approach I took to handle maps. (In short, the map is represented as a graph, where each node represents a room or scene and is only connected to other nodes that represent adjacent rooms.) The distance between nodes is also present in the representation and, coupled with the standard speed a player has, it means going from room to room might not be as simple as stating your command: you'll also have to traverse the distance. In practice, going from one room to the other might require several move commands. The other interesting aspect of this command comes from the fact that this engine is meant to support multiplayer parties, and the party can't be split (at least not at this time). Therefore, the solution for this is similar to a voting system: every party member will send a move command request whenever they want. Once more than half of them have done so, the most requested direction will be used.
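A vote tally like that needs very little code. Here is a rough sketch of the counting part (not the engine's actual implementation; the shape of the requests array is assumed):

// Sketch: tally the directions requested so far and pick the most popular one.
// `requests` is assumed to be an array like ['north', 'north', 'east'].
function tallyMoveVotes(requests, partySize) {
  // Only act once more than half of the party has voted.
  if (requests.length <= partySize / 2) return null

  const counts = {}
  for (const direction of requests) {
    counts[direction] = (counts[direction] || 0) + 1
  }

  // Sort by vote count, highest first, and return the winning direction.
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0][0]
}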
look is quite different from move. It allows the player to specify a direction, an item or NPC they want to inspect. The key logic behind this command, comes into consideration when you think about status-dependant descriptions. For example, let’s say that you enter a new room, but it’s completely dark (you don’t see anything), and you move forward while ignoring it. A few rooms later, you pick up a lit torch from a wall. So now you can go back and re-inspect that dark room. Since you’ve picked up the torch, you now can see inside of it, and be able to interact with any of the items and NPCs you find in there. This is achieved by maintaining a game wide and player specific set of status attributes and allowing the game creator to specify several descriptions for our status-dependant elements in the JSON file. Every description is then equipped with a default text and a set of conditional ones, depending on the current status. The latter are optional; the only one that is mandatory is the default value. Additionally, this command has a short-hand version for look at room: look around; that is because players will be trying to inspect a room very often, so providing a short-hand (or alias) command that is easier to type makes a lot of sense.
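Picking the right description then becomes a simple lookup. The sketch below assumes each element carries a default text plus an optional list of status-conditional texts; the exact field names are illustrative, not the engine's schema:

// Sketch: choose a status-dependent description for a room, item or NPC.
// Assumed shape: { description: '...', conditionalDescriptions: [{ status, text }] }
function describe(element, activeStatuses) {
  const conditional = (element.conditionalDescriptions || [])
    .find(entry => activeStatuses.includes(entry.status))

  // Fall back to the mandatory default text when no status matches.
  return conditional ? conditional.text : element.description
}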
The pickup command plays a very important role for the gameplay. This command takes care of adding items into the players inventory or their hands (if they’re free). In order to understand where each item is meant to be stored, their definition has a “destination” property that specifies if it is meant for the inventory or the player’s hands. Anything that is successfully picked up from the scene is then removed from it, updating the game instance’s version of the game.
The use command will allow you to affect the environment using items in your inventory. For instance, picking up a key in a room will allow you to use it to open a locked door in another room.
There is a special command, one that is not gameplay-related, but instead a helper command meant to obtain particular information, such as the current game ID or the player’s name. This command is called get, and the players can use it to query the game engine. For example: get gameid.
Finally, the last command implemented for this version of the engine is the attack command. I already covered this one; basically, you’ll have to specify your target and the weapon you’re attacking it with. That way the system will be able to check the target’s weaknesses and determine the output of your attack.
Client-Engine Interaction
In order to understand how to use the above-listed endpoints, let me show you how any would-be-client can interact with our new API.
Register client: First things first, the client application needs to request an API key to be able to access all other endpoints. In order to get that key, it needs to register on our platform. The only parameter to provide is the name of the app, that's all.
Create a game: After the API key is obtained, the first thing to do (assuming this is a brand new interaction) is to create a brand new game instance. Think about it this way: the JSON file I created in my last post contains the game's definition, but we need to create an instance of it just for you and your party (think classes and objects, same deal). You can do with that instance whatever you want, and it will not affect other parties.
Join the game: After creating the game, you'll get a game ID back from the engine. You can then use that game ID to join the instance using your unique username. Unless you join the game, you can't play, because joining the game will also create a game state instance for you alone. This will be where your inventory, your position and your basic stats are saved in relation to the game you're playing. You could potentially be playing several games at the same time, and in each one have independent states.
Send commands: In other words: play the game. The final step is to start sending commands. The amount of commands available was already covered, and it can be easily extended (more on this in a bit). Everytime you send a command, the game will return the new game state for your client to update your view accordingly.
Let’s Get Our Hands Dirty
I’ve gone over as much design as I can, in the hopes that that information will help you understand the following part, so let’s get into the nuts and bolts of the game engine.
Note: I will not be showing you the full code in this article since it’s quite big and not all of it is interesting. Instead, I’ll show the more relevant parts and link to the full repository in case you want more details.
The Main File
First things first: this is an Express project and its base boilerplate code was generated using Express' own generator, so the app.js file should be familiar to you. I just want to go over two tweaks I like to do on that code to simplify my work.
First, I add the following snippet to automate the inclusion of new route files:
const requireDir = require("require-dir")
const routes = requireDir("./routes")

//...

Object.keys(routes).forEach( (file) => {
  let cnt = routes[file]
  app.use('/' + file, cnt)
})
It’s quite simple really, but it removes the need to manually require each route files you create in the future. By the way, require-dir is a simple module that takes care of auto-requiring every file inside a folder. That’s it.
The other change I like to do is to tweak my error handler just a little bit. I should really start using something more robust, but for the needs at hand, I feel like this gets the work done:
// error handler
app.use(function(err, req, res, next) {
  // render the error page
  if(typeof err === "string") {
    err = {
      status: 500,
      message: err
    }
  }

  res.status(err.status || 500);

  let errorObj = {
    error: true,
    msg: err.message,
    errCode: err.status || 500
  }

  if(err.trace) {
    errorObj.trace = err.trace
  }

  res.json(errorObj);
});
The above code takes care of the different types of error messages we might have to deal with — either full objects, actual error objects thrown by Javascript or simple error messages without any other context. This code will take it all and format it into a standard format.
Handling Commands
This is another one of those aspects of the engine that had to be easy to extend. In a project like this one, it makes total sense to assume new commands will pop up in the future. If there is something you want to avoid, then that would probably be avoid making changes on the base code when trying to add something new three or four months in the future.
No amount of code comments will make the task of modifying code you haven’t touched (or even thought about) in several months easy, so the priority is to avoid as many changes as possible. Lucky for us, there are a few patterns we can implement to solve this. In particular, I used a mixture of the Command and the Factory patterns.
I basically encapsulated the behavior of each command inside a single class which inherits from a BaseCommand class that contains the generic code to all commands. At the same time, I added a CommandParser module that grabs the string sent by the client and returns the actual command to execute.
The parser is very simple: since all implemented commands have the actual command as their first word (i.e. "move north", "pick up knife", and so on), it's a simple matter of splitting the string and getting the first part:
const requireDir = require("require-dir")
const validCommands = requireDir('./commands')

class CommandParser {

  constructor(command) {
    this.command = command
  }

  normalizeAction(strAct) {
    strAct = strAct.toLowerCase().split(" ")[0]
    return strAct
  }

  verifyCommand() {
    if(!this.command) return false
    if(!this.command.action) return false
    if(!this.command.context) return false

    let action = this.normalizeAction(this.command.action)
    if(validCommands[action]) {
      return validCommands[action]
    }
    return false
  }

  parse() {
    let validCommand = this.verifyCommand()
    if(validCommand) {
      let cmdObj = new validCommand(this.command)
      return cmdObj
    } else {
      return false
    }
  }
}
Note: I’m using the require-dir module once again to simplify the inclusion of any existing and new command classes. I simply add it to the folder and the entire system is able to pick it up and use it.
With that being said, there are many ways this can be improved; for instance, adding synonym support for our commands would be a great feature (so saying "move north", "go north" or even "walk north" would mean the same thing). That is something we could centralize in this class and have it affect all commands at the same time.
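As a sketch of that idea (again, not the actual engine code), normalizeAction could map synonyms onto the canonical command names before the lookup happens; the synonym table below is made up for illustration:

// Sketch: a drop-in replacement for CommandParser's normalizeAction that maps
// synonyms onto canonical command names. The table entries are illustrative.
const SYNONYMS = {
  go: 'move',
  walk: 'move',
  grab: 'pickup',
  take: 'pickup',
  inspect: 'look'
}

function normalizeAction(strAct) {
  const action = strAct.toLowerCase().split(' ')[0]
  return SYNONYMS[action] || action
}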
I won’t go into details on any of the commands because, again, that’s too much code to show here, but you can see in the following route code how I managed to generalize that handling of the existing (and any future) commands:
/** Interaction with a particular scene */
router.post('/:id/:playername/:scene', function(req, res, next) {
  let command = req.body
  command.context = {
    gameId: req.params.id,
    playername: req.params.playername,
  }

  let parser = new CommandParser(command)

  let commandObj = parser.parse() //return the command instance
  if(!commandObj) return next({ //error handling
    status: 400,
    errorCode: config.get("errorCodes.invalidCommand"),
    message: "Unknown command"
  })

  commandObj.run((err, result) => { //execute the command
    if(err) return next(err)
    res.json(result)
  })
})
All commands only require the run method — anything else is extra and meant for internal use.
I encourage you to go and review the entire source code (even download it and play with it if you like!). In the next part of this series, I'll show you the actual client implementation and interaction of this API.
Closing Thoughts
I may not have covered a lot of my code here, but I still hope that the article was helpful to show you how I tackle projects — even after the initial design phase. I feel like a lot of people try to start coding as their first response to a new idea, and that can sometimes end up being discouraging for a developer, since there is no real plan set nor any goals to achieve — other than having the final product ready (and that is too big of a milestone to tackle from day 1). So again, my hope with these articles is to share a different way to go about working solo (or as part of a small group) on big projects.
I hope you’ve enjoyed the read! Please feel free to leave a comment below with any type of suggestions or recommendations, I’d love to read what you think and if you’re eager to start testing the API with your own client-side code.
See you on the next one!
(dm, yk, il)
0 notes
phylophe · 7 years ago
Text
Only Human
The Mechanic observes his Magnum Opus. 
----- 
There’s something different about him - he’s used to that shit-eating grin of this asshole, and something about this one just doesn’t feel right.
“Where’s your doctor?” He asks as Four throws himself down onto his couch and stretches out, as if he owned the place.
“Aww, what if I just came over here because I missed you and your rough treatment?” The out-of-place smile is still there, and it feels more wrong by the moment. Still, the man’s committed to his acting, if nothing else, and keeps up the cocky composure even as he smears blood onto one of the new cushions. “Or maybe I just wanted to have a catch-up with my favourite mechanic?”
“Sure. In the small hours of the morning. Covered in dried blood. And… did you get shot?”
Four doesn’t retort immediately with some smart comment; only after a few seconds does he manage a feeble comeback. “I didn’t get shot. I got shot at, and they missed.”
“Nice try with the bullshit.” He doesn’t press further. He hasn’t seen Four so weak, so broken, so human, since the time he’d spent months putting the man’s body back together.
He doesn’t ask any more questions, and Four doesn’t tell any more lies. Two days later, he wakes up to find his couch empty.
Don’t do anything stupid, he thinks.
-----
But of course the bastard goes and does multiple stupid things.
Over the following weeks, the double-agent does an abysmal job of upholding that title, and ends up dirtying his couch three more times. The last of these times, his partner ended up having to actively hack into the government surveillance records and manually overwrite some files.
“Are you so full of crap that it’s finally filling up the space in your skull, shit-for-brains?” Four’s actions were reckless and selfish, so he figures he deserves a taste of his own medicine, if only in the form of a scolding. “Please tell me you’re fucking up on purpose, because if you’re getting us into trouble by actual stupidity, I’ll have to kick your ass myself.”
“Maybe I don’t have enough fibre in my diet?” Even while sedated, the shithead somehow musters up enough energy to pull a jerk-ass face, and he’d like nothing more than to put his fist in it, except his hands are currently occupied by clamps and a scalpel.
“I’m serious, Ilvait.” The emphasis on his real name does the trick - Four’s face grows stern and his eye sharpens with attention. “I couldn’t care less if you got your sorry ass handed to you and die in some rat-hole, but if you keep pulling crazy stunts and jeopardising the safety of the rest of us, I’ll go have a word with your superiors on both sides.”
Four doesn’t bother with a reply - that alone tells him that the agent’s streak of poor performance isn’t simply coincidental.
“What happened to your doctor?” His anger is diffusing a bit. He’s only human, after all.
Four supplies a single word: “Hrodna.”
The airstrike. They’d attacked not only the infantry, but also one of the field hospitals. “Did she die?”
“No.”
The gears turn in his head and the pieces click into place. “So are you going to do anything about that stick up your ass, before it migrates too high and gives you a heart attack or a stroke or something?”
“What do you suggest?” Four asks with his face turned away; he can’t say he likes it better when the asshole’s grinning, but he also can’t deny his pity for the guy. “Should I take a leaf out of your book: Insubordination For Dummies?”
“I thought you of all people would somehow find a way around it, what with all those nasty thoughts squirming around in there.” He cauterises a blood vessel with an electric scalpel. There’s some nerve damage and a number of small arteries need to be reconnected - procedures that are beyond his abilities. Four will have to get a proper doctor back at home. “Assuming you can even manage rational thought at this point.”
Four doesn’t respond. Oh my god, he’s actually listening.
“Look, if it’s bothering you to this extent - damned if you do and damned if you don’t - go take care of this personal shit before you fuck up everything else.” He puts down the scalpel and picks up a suturing needle, sighing as he turns back to his subject. “If it’s that difficult, I’ll do what I can. Marclai will help, too.”
He braces himself for a smug quip, perhaps preceded by a coy, overly-affectionate coo.
“Thanks.”
He didn’t expect that. “Don’t thank me.” He waves it off with a soft grunt. “I just don’t want me, or him, or anyone else getting caught up in this mess because you’re worried about your woman.”
Four is only human, after all. The reminder rings once again in his head. And nothing is more human than the weakness for love.
-----
He squints at the scanned document on the screen. It looks like a scientific paper, impeccably formatted with LaTeX, complete with figures, tables, and equations. The author had identified himself as ‘Ivan Mikael Fore’.
The text, however, consists of just one word: chicken, over and over. He glances over to the page count: Page 1/34. “What’s this?”
“Something that came to the attention of the general himself. Apparently one of his close associates - a civilian, to boot - got tipped off with this piece of intelligence.” His boss sounds exasperated through his headset. “It looks like a prank, but there’s a hidden message in this apparent nonsense.” A PDF file is opened, with a short message occupying a tiny fraction of the page:
on ap ril twen ty nin th at ze ro thr ee hund red ho urs th ere will be an acci den tal deto nat ion of a seri es of six und isco ver ed la ndmi nes five po int two ki lome tres sou the ast of the ca mp
He recalls a conversation he had with Marclai a couple of weeks ago. Apparently Four had requested access to the secured bunker where all the yet-to-be-defused explosives from previous wars are kept, and asked to have the records rewritten so the missing items couldn’t be traced. There was also something about drawing up a circuit involving a timer.
“Do you know anything about this, Haekel?” His silence probably answered that question already. Shit. “This has Ilvait written all over it, don’t you think?”
“I can’t say for certain, ma’am.” He fumbles with the mic of his headset. “What camp is this, if I may ask? Does the general have an idea? Any matches with anything in our records?”
“The general thinks it’s Dzisna.” Oh, damn it, Four. “The Naveau name has been popping up mysteriously lately - someone bumped the Hrodna-Dzisna case up the priority list, the password access to files of missing personnel has been removed by an unknown hacker, and rumours are gaining traction. The media loves it, of course, and wants to know what the military’s doing about it.”
“My apologies, but I don’t know anything about this chicken manuscript, ma’am.” He leans back in his chair, feeling both amazed and exasperated. “I can have my associate dig into the server’s logs if you wish - do some data-mining, see if anything turns up.”
“That’d be useful. I’ll send you the details after further discussions with the unit, then.” His boss sighs. “It just seems like too much of a coincidence, with Ilvait volunteering to be deployed to that month-long recon mission in Azerbaijan. Is he trying to create an alibi for himself?”
“I really don’t have an answer for that, but I’ll see what we can do, Major General.”
He waits until the electronic security scan is complete before turning to his partner. “You helped him, didn’t you, Ilya?”
He really pulled a leaf out of your book, after all, Marclai signs from across the room. Since it’s for a righteous reason for once, I agreed. It reminds me of old times.
“I’d be impressed if he can pull this off.” He pulls off his glasses and rubs at his eyes. “Truth be told, I kinda hope he does.”
He will, if I’m backing him up.
-----
Okay, he’s impressed.
The Special Reconnaissance Unit had decided, in conjunction with General Naveau and the rest of intelligence, that the tip was genuine, and too good of a wave not to ride. 
In the chaos provided by the ‘accidental detonation’, a small taskforce composed of volunteers stormed the place, and rescued the surviving prisoners - Four’s doctor among them. The base itself was heavily bombed to erase any evidence of the taskforce’s intrusion.
When Four returned from Azerbaijan, he was taken into custody almost straight off the plane, ferried back to headquarters, and questioned thoroughly and mercilessly, but there’s no solid evidence of his involvement, and his alibi was flawless.
He was even more pleased when the Major General decided to unofficially punish Four, anyway. He sure couldn’t say no to the offer.
“So… you’re my bitch for the next four weeks.” He pulls the most smug, snide, shit-eating smirk he can manage, and drops a stack of dusty binders on top of the pile of documents. It’s probably got nothing on Four’s face, but damn, it feels good. “Looking forward to all the old cases you’ll have the honour to look through?”
“I hate you so much,” Four grumbles, but there’s something behind his petulance - a hint of pride, and satisfaction. He’s back.
“Aww, is that the way to talk to your master?” He chuckles, and not entirely out of spite. “Aren’t you at least a little bit grateful you’re not in a worse situation right now?”
“I guess so.” Four shrugs, pouting as he turns back to his fort of files, and hunches over the computer. “I could be stuck with old case reports and not have air-conditioning.”
He laughs heartily at Four’s sign of defeat. He thinks things over, and after a few minutes of silence broken only by the white noise of fingers tapping away on a keyboard and shuffling through papers, he spins around in his swivel chair to address the man once more. “Did you see her?”
“Nope.” The typing and shuffling don’t pause for even a moment.
“Planning to?”
“Maybe when she gives her statement at the capital.” The man slaps a stained, crinkled stack of paper onto the end of the desk. “Probably not the right time for a catch-up over coffee, though.”
“Probably not.” He agrees. Still, it all feels so… sad - this secrecy, this distance, this unfulfilled longing. “Hey, Four?”
“Hmm?”
“I hope things work out for you.”
“Thanks.” The typing and shuffling stop. A sigh - miserable, weak, human. “I hope so, too.”
-----
He thought he was done dealing with his bullshit once he’s resigned from the unit, but in true Four fashion, the man has once again proved him wrong. 
“What the fuck, Four?” He’s concerned - the man is properly dressed, but his complexion is pale, and there’s this disturbingly absent look in his eyes, but that doesn’t negate how angry he is at the former-agent putting the safety of himself and everyone around him at risk. Again. “Don’t tell me you went around looking like that - in case you’ve forgotten, you’re meant to be dead now, dumb shit.” 
Four has the gall to look up at him - straight in the eyes, then simply shakes his head. “I covered my tracks.” 
He allows Four to shove past him into his workshop, and watches as the man sheds his coat, scarf and gloves in turn, tossing it over his stained couch.
His eyes scan over his body, and stop at his right hand, which is covered in soiled, carelessly-wound bandages. The blood on it looks old. 
He rolls his eyes and lets out a groan. “Sit your sorry ass down before you fall over and break something.” He digs under his desk for his medical kit, gnashing his teeth. He gets the feeling that this will be beyond his ability to fix. “Who and how did you fuck up, this time? You look like shit.” His stomach is flipping. Four isn’t an agent anymore. This wasn’t a mission - this was personal. 
He fucked up someone as a personal errand. 
Four still won’t talk to him, but at least he’s sat down on the couch. “I know you’ve been moving around.” He reaches for the bandaged hand, grabbing Four’s wrist rather roughly. “Ticking off that hit-list you’ve been compiling, right?” 
Four is silent. He takes that as a ‘yes’. 
“Did you catch and release?” He has to reduce his questions to yes-or-no ones; his friend looks damned near catatonic at this stage. 
“No.” Ah, he spoke. “Took care of the last one.” 
“And how long ago was that?” He peels off the bandages - blood and pus and iodine soak the dressing, sticking the layers together, and there’s no way he can be as gentle with it as he’d like. “Long enough for you to take piss-poor care of a simple cut and catch an infection.” He lets out an angry huff at the state of the wound - it’s probably once a neat gash across the palm, but infection has reduced it to a swollen, discoloured, feverish mess. He starts cleaning it with disinfectant. 
Four is muttering. “I was in Dzisna.” 
“…Fuck.” He can’t find a more suitable response. “You screwed up.” It wasn’t a question, because of course he did. 
He’s only human. That place is haunted for him. No way he’d have gotten out of that unscathed. 
He tries his best to get the details out of Four over the next hour as he worked on the wound, asking him short questions and prompting him to divulge. He learns enough to piece things together: Four has been committing to some vigilante work and tracked down those who’d wronged his doctor - his woman - in that camp. He’s appointed himself judge, jury, and executioner, and hit a roadblock when it came to his final victim. 
The sergeant in charge of the camp; the man who’s allowed for the vicious abuse of his woman during her imprisonment there. 
“No wonder you snapped. Damn it, Four.” He glances over at Marclai, making sure his patient is held still, before he tugs the piece of rusty, chipped scalpel out of Four’s palm. The man jolts, but the movement is much weaker than anticipated. “I know you have a lot of mechanical parts in you, but news flash: you’re still human. You have feelings. You’re not invincible.” He starts to suture the swollen, infected mess as best he can. “Don’t put yourself into stupid-ass situations like that, you extra son-of-a-bitch.”
Four doesn’t retort. He finds himself feeling too sorry for the man to scold him anymore, however much he deserves it. “Stay here for a while.” He suggests, and Marclai nods in agreement. “I have to order some shots for the technicolour mess that is your hand, and until you’re better, you’re staying here. I don’t want you passing out somewhere out there and risk exposing all of us.” 
“Until I’m better, huh?” Four lets out a pathetic little snort. 
I know; for people like us, things may never truly get better. Still– “Until you’re good enough to go back to your woman.” He tries to be firm. “No more stupid shit. Your woman doesn’t deserve to see you looking as fucked up as you do now.” 
“Okay.” Four’s response has an edge of his obnoxious sarcasm, but when he opens his mouth to reassert his message, he fancies he can see tears in the former-agent’s eyes. 
“Go lie down before you fall over.” He walks off; Marclai has long since disappeared. He understands it well - space and time are the only things that can make it better, now.
We’re all only human, after all. 
2 notes · View notes
sourabhdubey007 · 4 years ago
Text
How to Move Your WordPress Website from Localhost to Cloudways Using WordPress Duplicator Plugin
If you are a WordPress developer, you work with a dev environment that you have customized to your preferences. Every developer has their own configuration settings, based on their workflows and the tools they use for WordPress development.
Once the project is working as expected on the localhost, the next step is to move the project to an online host. Fortunately, WordPress offers WordPress duplicator solutions in the form of several plugins that simplify the entire process of migrating WordPress sites from localhost to an online host.
For the purpose of this article, I will demonstrate how you can move a WordPress website from a localhost to a Cloudways managed server running a WordPress application. While there are several plugins that work very well (I encourage you to experiment to find the right fit for your requirements), for the purpose of this tutorial, I will use the WordPress Duplicator plugin.
Let's begin!

Table of Contents
WordPress Migrator Plugin
Best Cloning Plugins Around
WordPress Duplicator: Local Server to Cloudways
Why Use WordPress Duplicator
Benefits of WordPress Duplicator
What You Need for WordPress Localhost to Live
WordPress Duplicator Plugin Installation
Step 1: Download and Install Plugin on a Local Site
Step 2: How to Export WordPress Site
Create a New Package
Step 3: How to Make WordPress Site Live
Move WordPress Website from a Localhost to Cloudways
How to Upload a Localhost WordPress Site to Live Server
Upload Installer and Archive File to Live Site
Install WordPress Website on Cloud Server
Step 1: Extract Archive
Step 2: Database Setup
Step 3: Run Installer
Step 4: Data Replacement
Step 5: Test Site
Testing the Live Site
Final Thoughts!
WordPress Migrator Plugins
Moving your WordPress website manually is quite stressful, very time-consuming, and prone to errors. You need to make sure that all steps are executed in the correct order with no issues. And if something goes wrong, you have to start all over again.
Fortunately, there are several excellent WordPress migration plugins that take care of all the steps of the process and ensure that your website gets migrated from the localhost to its new Cloudways WordPress web hosting server without any issues.
Best Cloning Plugins
Here is the list of the best WordPress migrator plugins that you can try out:
WordPress Duplicator (Freemium)
All-in-One WP Migration (Free)
BackupBuddy (Premium)
UpdraftPlus WordPress Backup Plugin (UpdraftPlus Migrator) (Freemium)
WP Migrate DB (Premium)
Migrate Guru (Premium)
VaultPress (Premium)
WP Clone (Free)
As mentioned earlier, I will use the Duplicator plugin for demonstrating the process of WordPress website migration.
WordPress Duplicator: Local Server to Cloudways
In this tutorial, I am going to describe how you can move your WordPress website from localhost to Cloudways using the WordPress Duplicator plugin. Duplicator has a great 5 out of 5 stars rating on the WordPress repository and has been downloaded and installed over one million times.
Why Use WordPress Duplicator Plugin
The Duplicator plugin provides WordPress administrators the ability to migrate, copy, or clone a WordPress site from one location to another.
Using this plugin, you can forget your worries about backing up the database, plugins, themes, and moving all these components (whether in full or in parts), because the WordPress Duplicator can do everything for you!
Even if you are a newbie with little to no knowledge, this plugin can help you migrate WordPress site from localhost to Cloudways server easily. However, you do need to know a bit about finding your database credentials and related information.
Benefits of WordPress Duplicator
Easily migrate WordPress websites from one host to another
Take manual backups of WordPress websites
Pull down a live site to localhost
Easy website duplication
Schedule backups at your convenience
Expert support available
Email notifications
Additional developer support
Connect to cPanel directly from installer
Database creation built into the installer
Integrated transfer to cloud storage services such as Dropbox, Google Drive, and Amazon S3
What You Need for Moving WordPress Sites to Live Servers
To move your WordPress website from localhost to another server, you need to have two elements. Firstly, you must have a local server setup on your computer (I assume that your website is up and running on a localhost server). Secondly, you should have a good web hosting plan that supports WordPress.
WordPress Duplicator Plugin Installation
Downloading and installing the WordPress Duplicator plugin is a simple matter of following the standard WordPress process.
Step 1: Download and Install Plugin on the Local Site
From your WordPress Dashboard, navigate to Plugins → Add New. Search for WordPress Duplicator plugin in the top-right search bar. Next, click the Install Now button. After successful installation, click the activate button.
The second method is to go to the WordPress Plugins Directory and directly download the Duplicator WordPress Migration Plugin from there. Next, add the plugin manually to the WordPress website. For this, simply go to Plugins → Add New, and then upload the plugin.
After activating the plugin, you will see the Duplicator menu on the left side of your WordPress Dashboard.
Step 2: Export the WordPress Site
Now in this step, I am going to describe how you can package the WordPress files on your local computer using the WordPress Duplicator plugin so that these files can be easily moved to the live server.
Create a New Package
After activating the plugin, you will see the Duplicator menu on the left side WordPress dashboard. Go to Duplicator → Packages and click the Create New button to build a new package.
This process has the following major steps:
1- Setup
Simply click the Next button to start the process.
2- Scan
Click the Build button.
3- Build
When you are done with the setup process, you will receive two files: Installer and Archive. Download both files to your desktop.
Next, let’s move the website files to the live site.
Step 3: How to Take the WordPress Site Live
Now it’s time to transfer the WordPress site from localhost to live server by using the WordPress duplicator plugin.
Move WordPress Website from a Localhost to Cloudways
First things first, log in to your Cloudways account. If you are new to Cloudways, you would need to sign up and then log into your account.
Once you are in, click the Servers tab where you can see all the servers that are active under your account. Go to the server you wish to migrate the WordPress website, go to Server → Server Management and get the FTP Master Credentials.
Upload the Localhost WordPress Site to the Live Server
In previous steps, you downloaded the Installer and Archive files to your computer. Now it’s time to upload those two files to the live server.
Upload Installer and Archive File to Live Site
For uploading these files, you can use any FTP client of your choice. I prefer FileZilla, and thus suggest that if you haven't tried it out yet, go to the FileZilla official website and download the latest build.
Next, provide the Host, Username, Password, and Port, and connect to your server via FTP.
Note: On Cloudways, you need to use port 22 to avoid issues.
Note: Before you go ahead with uploading archive files, make sure that you delete the wp-config file (located in the public_html folder).
Next, go to the applications folder and to your application’s folder. Navigate to the public_html folder and upload the Installer and Archive files from your desktop to this folder.
It will take a couple of minutes because the archive files are generally large in size.
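If you would rather script the upload than use the FileZilla GUI, the minimal sketch below shows one way to do it over SFTP with the third-party paramiko library. The host, credentials, file names, and application path are hypothetical placeholders; substitute your FTP Master Credentials and your application's actual public_html path.

import paramiko  # third-party SFTP/SSH library: pip install paramiko

HOST = "your-server-ip"                              # hypothetical: taken from the Master Credentials
USER = "master_username"                             # hypothetical placeholder
PASSWORD = "master_password"                         # hypothetical placeholder
REMOTE_DIR = "/applications/your_app/public_html"    # hypothetical: your application's folder

transport = paramiko.Transport((HOST, 22))           # port 22, per the note above
transport.connect(username=USER, password=PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    # Delete the existing wp-config.php first, as noted above.
    try:
        sftp.remove(f"{REMOTE_DIR}/wp-config.php")
    except IOError:
        pass  # file already removed
    # Upload the two Duplicator files downloaded to the desktop.
    sftp.put("installer.php", f"{REMOTE_DIR}/installer.php")
    sftp.put("archive.zip", f"{REMOTE_DIR}/archive.zip")  # your archive's real name will differ
finally:
    sftp.close()
    transport.close()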
Install WordPress Website on Cloud Server
Now, it’s time to run the installer setup on the live server. To do this, you need to go to your website address and add installer.php at the end. For instance, http://test.cloudways.com/installer.php.
With both files uploaded, the next step is the extraction of the Archive file.
Step 1: Extract the Archive
After opening the installer page, you will see the first step of the installer wizard.
Click the Next button to move on.
Step 2: Database Setup
Now, you need to add the database details. To get them, go back to your Cloudways dashboard, select your server, and then click your application.
Under the Application Management, you will see database details such as db name, username, and password.
If all goes well, you will see green lights beside two of the most important fields: Server Connected and Database Found.
Now, click Next to move on to the next step.
Step 3: Run the Installer
The Duplicator plugin works best with empty databases. Hence, before moving forward, you need to remove all previous data. For this, go to the Application Management screen and click Launch Database Manager.
The database manager window will open. Check the checkbox labeled Tables to select all the tables in the database, then click the Drop button to remove the selected tables.
Click the Yes button.
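If you prefer to clear the database from a script rather than the Database Manager window, the sketch below shows one way to do it with the third-party PyMySQL package, run on the server itself or over an SSH tunnel. The connection details are hypothetical placeholders for the credentials shown under Application Management.

import pymysql  # third-party driver: pip install pymysql

# Hypothetical placeholders for the Application Management credentials.
conn = pymysql.connect(host="localhost", user="db_username",
                       password="db_password", database="db_name")
try:
    with conn.cursor() as cur:
        cur.execute("SET FOREIGN_KEY_CHECKS = 0")      # allow dropping tables in any order
        cur.execute("SHOW TABLES")
        for (table_name,) in cur.fetchall():
            cur.execute(f"DROP TABLE IF EXISTS `{table_name}`")  # empty the database
        cur.execute("SET FOREIGN_KEY_CHECKS = 1")
    conn.commit()
finally:
    conn.close()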
Step 4: Data Replacement
Once you click the Yes button, the URL, Path, and Title fields will be filled in automatically.
Now, click the Next button.
Step 5: Test the Live Site
Once done, the final essential step is to test your live website. For that, the WordPress Duplicator plugin will ask you to follow several important steps.
Save Permalinks: Click on Save Permalinks button and you will be automatically redirected to your live site. Here, you can change the permalink settings as per your requirements.
Test Site: Click the Test Site button and it will open the frontend of your live site, where you can check that everything is working as expected (a quick scripted check is sketched after this list).
Security Cleanup: Lastly, Security Cleanup allows you to clean all the installation files and other files created by the Duplicator plugin during the transfer process. Before cleaning up, make sure that your website is properly copied and is working correctly.
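Beyond clicking through the browser, you can also run a quick scripted sanity check of the live site. The minimal sketch below uses only the Python standard library and a hypothetical URL; it simply confirms that the homepage responds and looks like a WordPress page.

import urllib.request

SITE_URL = "http://test.cloudways.com/"   # hypothetical: replace with your live site's address

with urllib.request.urlopen(SITE_URL, timeout=10) as response:
    body = response.read().decode("utf-8", errors="replace")
    print("HTTP status:", response.status)                 # expect 200
    print("Looks like WordPress:", "wp-content" in body)   # crude sanity check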
Now it’s time to move on to the live site.
Testing the Live Site
In most cases, all plugins are deactivated when you move a WordPress site from localhost to the Cloudways server. To reactivate the deactivated plugins, navigate to Plugins → Installed Plugins, select them all, and then click the Activate button.
If you have opted to use Breeze, the Cloudways WordPress cache plugin, now is the time to apply its recommended settings. This is an important step in finalizing a successful migration of your site from localhost to the Cloudways managed server, as the recommended settings optimize the website and enhance its speed.
Congratulations! You have successfully tested and moved your WordPress website from localhost to Cloudways – A Managed Cloud Hosting Platform.
Final Thoughts!
That’s it!
As you can see, it is very easy to move a WordPress website from localhost to a Cloudways managed server, thanks to the WordPress Duplicator plugin. Without a migration plugin, the process would be a huge hassle; with one, all website content is transferred from the localhost to a cloud server in just a few clicks.
The aim of this tutorial is to show users how to move their WordPress website from localhost to Cloudways. Keep in mind that the above method is not recommended for transferring content from other hosting providers to Cloudways; if you want to do that, I recommend using the WordPress Migration Plugin instead.
If you encounter any cloud migration problems while transferring your website content, don’t hold back! Feel free to ask us and leave your questions in the comment section provided below. I would love to help you out with your queries.
The post How to Move Your WordPress Website from Localhost to Cloudways Using WordPress Duplicator Plugin appeared first on The Coding Bus.
preciousmetals0 · 5 years ago
Blockchain’s Need for Speed Brings New Tools to the Crypto Industry
Another day, another mainnet launch. Or at least, that’s sometimes how it can feel in the blockchain space, as every project seems to be scrambling to be the latest and greatest in balancing the trade-offs between speed, scalability and security. Unfortunately, many of them end up languishing with little development activity and precious few users.
Therefore, when a new project comes along that appears to be stirring up genuine excitement among established players and investors in the space, it’s worth taking a second look. Despite being new on the scene and still in the process of developing its testnet, Solana is one such project. 
It’s currently associated with names such as Bison Trails and Chainlink, having previously garnered $20 million in investment from high-profile funds such as 500 Startups and Multichain Capital. It also recently sold out all of its tokens in a Dutch auction, despite the mid-March market carnage. So, what’s going on with Solana to generate such significant interest from the industry?
The background
Back in 2017, CEO Anatoly Yakovenko founded Solana with the ambitious goal of creating a blockchain platform that could scale for global adoption. Yakovenko had previously led the team developing operating systems at telecommunications manufacturer Qualcomm, where, as he told Cointelegraph: “I was always a performance geek. I spent 12 years at Qualcomm trying to squeeze out every last bit of performance from hardware.” He also engineered a distributed operating system at Mesosphere and worked on compression at Dropbox.
Upon founding Solana, he onboarded a team of similarly experienced professionals. The company’s chief technology officer and principal architect, Greg Fitzgerald, had also previously worked at Qualcomm across the full spectrum of embedded systems. Its chief operating officer, Raj Gokal, brought experience in product management and finance from his time as a venture investor at General Catalyst and from managing products at his own startup, Sano, and at Omada Health. The chief scientist, Eric Williams, is a particle physicist who studied at Berkeley and gained his Ph.D. while at the European Organization for Nuclear Research, commonly referred to as CERN, hunting for the Higgs boson particle. 
The Solana team has been able to attract some impressive investors and partners on its road to mainnet launch. Multichain Capital led a $20 million funding round that concluded in July 2019. More recently, the company ran a Dutch auction through Coinlist for the sale of 8 million Sol tokens, raising a further $1.76 million from 91 companies. In total, Solana has sold 186 million tokens and raised $25.6 million from token sales. 
Solana has also attracted several companies to participate in Tour de Sol, its incentivized testnet. The most high-profile of these is Bison Trails, which is also part of the Libra Association. Bison Trails serves as a validator node on the Solana testnet but has also integrated support for Solana to its infrastructure-as-a-service offering.
The issues at hand
Like many other blockchain projects, the Solana team has the scalability challenge in mind while developing the platform. However, Solana aims to achieve scalability without compromising on security or decentralization. Both have been an issue with other blockchains, particularly those using delegated proof-of-stake, which has proven itself prone to manipulation. 
Solana also aims to solve another problem inherent in blockchain consensus: agreement on time. In any ledger, the time that the entry is made is critical, as it forms the backbone of the ledger’s chronology. If a ledger is held on a centralized server, the system clock simply timestamps entries as they’re recorded. However, in a decentralized system, all nodes are working to their own clocks. Therefore, time is something that the network nodes must agree on as much as the nature of the transaction itself. 
Furthermore, in Bitcoin and other proof-of-work blockchains, the amount of time a miner takes to solve a cryptographic nonce is what governs the difficulty level. So, in the context of a blockchain, recording the passage of time is key. Different blockchains solve this challenge in different ways. However, achieving agreement on time ends up consuming a heavy load in messaging between network nodes.
For example, Hedera Hashgraph, a platform with similar goals to Solana, takes a timestamp from a supermajority of nodes on the network and calculates the median. This has allowed the Hashgraph network to quickly overtake Ethereum in transaction numbers. Christian Hasker, the chief marketing officer of Hedera Hashgraph, told Cointelegraph: 
“Since open access of our platform in September 2019 (roughly 6 months), Hedera has seen over 80 million transactions conducted on our network. In comparison, it took Ethereum a little over two and a half years to hit that same milestone.”
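As a toy illustration of that median-based approach (not Hedera's actual algorithm), the sketch below shows how taking the median of the clock readings reported by the voting nodes keeps a minority of skewed or malicious clocks from dragging the agreed time very far.

from statistics import median
from datetime import datetime, timezone

def consensus_timestamp(node_clocks_unix: list[float]) -> datetime:
    # Agree on the median of the reported clock readings: outliers at either
    # end (slow, fast, or dishonest nodes) cannot move the median much.
    return datetime.fromtimestamp(median(node_clocks_unix), tz=timezone.utc)

# Five nodes report slightly different Unix-time readings; one is wildly off.
readings = [1700000000.10, 1700000000.25, 1700000000.18, 1699999999.95, 1700000042.00]
print(consensus_timestamp(readings))  # the outlier does not shift the result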
Proof-of-history
To overcome the challenge of recording time, Solana uses a unique protocol called proof-of-history, otherwise known as PoH, that encodes the passage of time into the blockchain data itself without requiring specific inputs or messaging between network nodes. It uses a feature called a verifiable delay function, or VDF, which takes a known amount of time to compute and is limited to operating on a single central processing unit core, meaning processing can’t be expedited by using multiple processors. 
The Solana protocol encodes the results of each VDF into the block of its successor. In doing so, it provides an immutable log of the passage of time before consensus even takes place. By removing the load of time-based messaging, Solana claims to achieve transaction speeds of nearly 50,000 per second.
Yakovenko concisely explained the importance of reaching consensus regarding time within a blockchain environment, telling Cointelegraph: “Because we had PoH, we were able to make strong assumptions about time and reduce a lot of the complexity in the implementation.” Regarding the role of VDFs in future blockchain implementations, Yakovenko elaborated on the complexity of implementing them:
“VDFs are still fairly new, and their proposed implementations require a lot of verification hardware like ours, or new ASICS. […] Since our scaling approach depends on modern systems, our VDF works exceptionally well for our network. With our current infrastructure, we’ve been able to leapfrog the current state of the art and deliver throughput of 50,000 transactions per second with 400ms block times on the mainnet today.”
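To make the proof-of-history idea more concrete, here is a toy sequential hash chain in Python. This is only an illustration of the general concept (chaining hashes so that the length of the chain is evidence of elapsed computation, with events anchored at verifiable positions), not Solana's actual implementation.

import hashlib

def poh_chain(seed: bytes, num_ticks: int, events: dict[int, bytes]) -> list[bytes]:
    # Each tick's hash depends on the previous one, so the chain can only be
    # produced sequentially; its length is a record of elapsed work ("time").
    state = hashlib.sha256(seed).digest()
    history = []
    for tick in range(num_ticks):
        mixed = state + events.get(tick, b"")   # events mixed in at a tick are provably
        state = hashlib.sha256(mixed).digest()  # ordered relative to the whole sequence
        history.append(state)
    return history

# Two "transactions" anchored at different points in a 1,000-tick chain.
chain = poh_chain(b"genesis", 1000, {10: b"tx: alice->bob", 500: b"tx: bob->carol"})
print(chain[-1].hex())  # anyone can re-run the chain to verify the ordering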
Tower Byzantine fault tolerance and proof-of-stake
Solana uses a variation on the practical Byzantine fault tolerance model used by Hyperledger Fabric and others called tower Byzantine fault tolerance. This consensus model is designed to incentivize network participants to act in the interests of the network at all times. Nodes stake their tokens on the validity of the most recent proof-of-history hash in a similar way to how they’d stake tokens on block validity in other blockchains.
Similar to pBFT, the more hashes that are confirmed after any given vote, the longer it will take to roll back that vote. Validators cannot vote for a fork once they’ve voted on a particular hash without being penalized. Solana also uses proof-of-stake to determine who participates in the network as a validator. Token holders who don’t have the hardware to join as a validator can delegate a validator to participate in block production. 
To summarize, proof-of-history acts as a clock for the network, whereas tBFT incentivizes and penalizes validators to act in the network’s interests. PoS enables token holders to act as delegators, deciding who serves as a validator. 
Taking on scalability
The Solana team didn’t stop at inventing an entirely new consensus method to overcome the scalability challenge, and as Yakovenko told Cointelegraph, proof-of-history, tBFT and PoS are just for consensus. He added: “We had to innovate 8 more times to continue unblocking other scaling problems ranging from parallel transaction processing to real-time block streaming across the globe.”
Eight other innovations supposedly all play a role in speeding up processing time or generally making Solana run more efficiently. For example, Sealevel is a feature that enables the processing of multiple smart contracts in parallel. Turbine works in a way that’s comparable to BitTorrent, breaking data up into smaller packets to enable scalability between nodes, allowing Solana to support thousands of nodes running concurrently. 
Developers needed
Recently, Solana teamed up with oracle provider Chainlink to build a superfast oracle that updates every 400 milliseconds. Yakovenko told Cointelegraph that the move was in response to recent market failures due to network congestion. He expanded on the company’s plans to involve more developers and partners over time, telling Cointelegraph:
“We have a great accelerator program that has over 450 applicants already, so developers are going out of their way to find us. They want to build consumer-grade apps but that simply isn’t possible with the infrastructure at their disposal today. Given the pent up demand to build, we’re hopeful that developers will come to check out Solana and that a sizable percentage of those that do will migrate their dapps.”
Hasker said that Hedera Hashgraph similarly sees that there’s an unmet demand from developers, stating: 
“In addition to addressing the scalability and security required for applications, dApp developers prize ease-of-use and cost as major drivers of adoption. In addition, dApps want to know that the platform is stable and that it won’t fork so they don’t have to maintain multiple code bases. Finally, they want reassurance that the platform will be around for the long term, and that it’s governed by a trusted council that understands how business runs, and what businesses need.”
notsadrobotxyz · 5 years ago
Oracle DBA interview Question with Answer (All in One Doc)
1. General DB Maintenance
2. Backup and Recovery
3. Flashback Technology
4. Dataguard
5. Upgration/Migration/Patches
6. Performance Tuning
7. ASM
8. RAC (Cluster/ASM/Oracle Binaries) Installation Link
9. Linux Operating
10. PL/SQL
General DB Maintenance Question/Answer:
When we run a Trace and Tkprof on a query, for which three phases do we see timing information?
Parse -> Execute -> Fetch
Which parameter is used in a TNS connect identifier to specify the number of concurrent connection requests?
QUEUESIZE
What does the AFFIRM/NOAFFIRM parameter specify?
AFFIRM specifies that the redo transport service acknowledges after writing to the standby (SYNC), whereas NOAFFIRM specifies acknowledgement before writing to the standby (ASYNC).
After an upgrade task, which script is used to recompile invalid objects?
utlrp.sql, utlprp.sql
Too many cursors present in the library cache are causing waits; which parameters need to be increased?
OPEN_CURSORS, SHARED_POOL_SIZE
When do you use RECOVER DATABASE USING BACKUP CONTROLFILE?
To synchronize datafiles to the controlfile.
What is the use of the CONSISTENT=Y and DIRECT=Y parameters in export?
CONSISTENT=Y takes consistent values while exporting a table. Setting DIRECT=Y extracts data by reading it directly, bypassing the SGA and the SQL command-processing layer (evaluating buffer), so it should be faster. The default value is N.
What do the COMPRESS, SHOW and SQLFILE parameters do during export/import?
If you use COMPRESS, the import will put the entire data in a single extent. If you use SHOW=Y during import, it will read the entire dumpfile and confirm the backup's validity; even if you don't know the FROMUSER of the export, you can use SHOW=Y with import to check it. If you use the SQLFILE parameter (which writes out all the DDL commands Import would have executed) with the import utility, you can find out whether the dumpfile is corrupted, because the utility reads the entire export dumpfile and reports its status.
Can we import an 11g dumpfile into 10g using Data Pump? If so, is it also possible between 10g and 9i?
Yes, we can import from 11g to 10g using the VERSION option. This is not possible between 10g and 9i, as Data Pump is not available in 9i.
What do the KEEP_MASTER and METRICS parameters of Data Pump do?
KEEP_MASTER and METRICS are undocumented parameters of EXPDP/IMPDP. METRICS records the time it took to process the objects, and KEEP_MASTER prevents the Data Pump master table from being deleted after an export/import job completes.
What happens when we fire a SQL statement in Oracle?
First it checks the syntax and semantics in the library cache, after which it creates an execution plan. If the data is already in the buffer cache it is returned directly to the client (soft parse); otherwise the data is fetched from the datafiles and written to the database buffer cache (hard parse), then sent to the server process and finally to the client.
What are the differences between latches and locks?
1. Latch management is based on first in, first grab, whereas lock management depends on lock ordering (requests are queued). 2. Locks can create deadlocks, whereas latches never create deadlocks; they are handled by Oracle internally. Latches relate only to SGA internal buffers, whereas locks relate to the transaction level. 3. Latches have only two states, WAIT or NOWAIT, whereas locks have six different states: DML locks (table and row level - DBA_DML_LOCKS), DDL locks (schema and structure level - DBA_DDL_LOCKS), DBA_BLOCKERS, and further categories.
What are the differences between LMTS and DMTS?
Tablespaces that record extent allocation in the dictionary are called dictionary managed tablespaces, the dictionary tables are created on SYSTEM tablespace and tablespaces that record extent allocation in the tablespace header are called locally managed tablespaces.Difference of Regular and Index organized table?The traditional or regular table is based on heap structure where data are stored in un-ordered format where as in IOT is based on Binary tree structure and data are stored in order format with the help of primary key. The IOT is useful in the situation where accessing is commonly with the primary key use of where clause statement. If IOT is used in select statement without primary key the query performance degrades.What are Table portioning and their use and benefits?Partitioning the big table into different named storage section to improve the performance of query, as the query is accessing only the particular partitioned instead of whole range of big tables. The partitioned is based on partition key. The three partition types are: Range/Hash/List Partition.Apart from table an index can also partitioned using the above partition method either LOCAL or GLOBAL.Range partition:How to deal online redo log file corruption?1. Recover when only one redo log file corrupted?If your database is open and you lost or corrupted your logfile then first try to shutdown your database normally does not shutdown abort. If you lose or corrupted only one redo log file then you need only to open the database with resetlog option. Opening with resetlog option will re-create your online redo log file.RECOVER DATABASE UNTIL CANCEL;  then ALTER DATABASE OPEN RESETLOGS;2. Recover when all the online redo log file corrupted?When you lose all member of redo log group then the step of maintenance depends on group ‘STATUS’ and database status Archivelog/NoArchivelog.If the affected redo log group has a status of INACTIVE then it is no longer required crash recovery then issues either clear logfile or re-create the group manually.ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3; -- you are in archive mode and group still not archivedALTER DATABASE CLEAR LOGFILE GROUP 3; noarchive mode or group already archivedIf the affected redo log group has a status ACTIVE then it is still required for crash recovery. Issue the command ALTER SYSTEM CHECKPOINT, if successful then follow the step inactive if fails then you need to perform incomplete recovery up to the previous log file and open the database with resetlog option.If the affected redo log group is CURRENT then lgwr stops writing and you have to perform incomplete recovery up to the last logfile and open the database with resetlog option and if your database in noarchive then perform the complete recovery with last cold backup.Note: When the online redolog is UNUSED/STALE means it is never written it is newly created logfile.What is the function of shared pool in SGA?The shared pool is most important area of SGA. It control almost all sub area of SGA. The shortage of shared pool may result high library cache reloads and shared pool latch contention error. The two major component of shared pool is library cache and dictionary cache.The library cache contains current SQL execution plan information. 
It also holds PL/SQL procedure and trigger.The dictionary cache holds environmental information which includes referential integrity, table definition, indexing information and other metadata information.Backup & Recovery Question/Answer:Is target database can be catalog database?No recovery catalog cannot be the same as target database because whenever target database having restore and recovery process it must be in mount stage in that period we cannot access catalog information as database is not open.What is the use of large pool, which case you need to set the large pool?You need to set large pool if you are using: MTS (Multi thread server) and RMAN Backups. Large pool prevents RMAN & MTS from competing with other sub system for the same memory (specific allotment for this job). RMAN uses the large pool for backup & restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these parameters is enabled, then Oracle allocates backup buffers from local process memory rather than shared memory. Then there is no use of large pool.How to take User-managed backup in RMAN or How to make use of obsolete backup? By using catalog command: RMAN>CATALOG START WITH '/tmp/KEEP_UNTIL_30APRIL2010;It will search into all file matching the pattern on the destination asks for confirmation to catalog or you can directly change the backup set keep until time using rman command to make obsolete backup usable.RMAN> change backupset 3916 keep until time "to_date('01-MAY-2010','DD-MON-YYYY')" nologs;This is important in the situation where our backup become obsolete due to RMAN retention policy or we have already restored prior to that backup. What is difference between using recovery catalog and control file?When new incarnation happens, the old backup information in control file will be lost where as it will be preserved in recovery catalog .In recovery catalog, we can store scripts. Recovery catalog is central and can have information of many databases. This is the reason we must need to take a fresh backup after new incarnation of control file.What is the benefit of Block Media Recovery and How to do it?Without block media recovery if the single block is corrupted then you must take datafile offline and then restore all backup and archive log thus entire datafile is unavailable until the process is over but incase of block media recovery datafile will be online only the particular block will be unavailable which needs recovery. You can find the details of corrupted block in V$database_Block_Corruption view as well as in alert/trace file.Connect target database with RMAN in Mount phase:RMAN> Recover datafile 8 block 13;RMAN> Recover CORRUPTION_LIST;  --to recover all the corrupted block at a time.In respect of oracle 11g Active Dataguard features (physical standby) where real time query is possible corruption can be performed automatically. The primary database searches for good copies of block on the standby and if they found repair the block with no impact to the query which encounter the corrupt block.By default RMAN first searches the good block in real time physical standby database then flashback logs then full and incremental rman backup.What is Advantage of Datapump over Traditional Export?1. Data pump support parallel concept. It can write multiple dumps instead of single sequential dump.2. Data can be exported from remote database by using database link.3. Consistent export with Flashback_SCN, Flashback_Time supported in datapump.4. 
Has ability to attach/detach from job and able to monitor the job remotely.5. ESTIMATE_ONLY option can be used to estimate disk space requirement before perform the job.6. Explicit DB version can be specified so only supported object can be exported.7. Data can be imported from one DB to another DB without writing into dump file using NETWORK_LINK.8. During impdp we change the target file name, schema, tablespace using: REMAP_Why datapump is faster than traditional Export. What to do to increase datapump performace?Data Pump is block mode, exp is byte mode.Data Pump will do parallel execution.Data Pump uses direct path API and Network link features.Data pump export/import/access file on server rather than client by providing directory structure grant.Data pump is having self-tuning utilities, the tuning parameter BUFFER and RECORDLENGTH no need now.Following initialization parameter must be set to increase data pump performance:· DISK_ASYNCH_IO=TRUE· DB_BLOCK_CHECKING=FALSE· DB_BLOCK_CHECKSUM=FALSEFollowing initialization must be set high to increase datapump parallelism:· PROCESSES· SESSIONS· PARALLEL_MAX_SERVERS· SHARED_POOL_SIZE and UNDO_TABLESPACENote: you must set the reasonable amount of STREAMS_POOL_SIZE as per database size if SGA_MAXSIZE parameter is not set. If SGA_MAXSIZE is set it automatically pickup reasonable amount of size.Flashback Question/AnswerFlashback Archive Features in oracle 11gThe flashback archiving provides extended features of undo based recovery over a year or lifetime as per the retention period and destination size.Limitation or Restriction on flashback Drop features?1. The recyclebin features is only for non-system and locally managed tablespace. 2. When you drop any table all the associated objects related with that table will go to recyclebin and generally same reverse with flashback but sometimes due to space pressure associated index will finished with recyclebin. Flashback cannot able to reverse the referential constraints and Mviews log.3. The table having fine grained auditing active can be protected by recyclebin and partitioned index table are not protected by recyclebin.Limitation or Restriction on flashback Database features?1. Flashback cannot use to repair corrupt or shrink datafiles. If you try to flashback database over the period when drop datafiles happened then it will records only datafile entry into controlfile.2. If controlfile is restored or re-created then you cannot use flashback over the point in time when it is restored or re-created.3. You cannot flashback NOLOGGING operation. If you try to flashback over the point in time when NOLOGGING operation happens results block corruption after the flashback database. Thus it is extremely recommended after NOLOGGING operation perform backup.What are Advantages of flashback database over flashback Table?1. Flashback Database works through all DDL operations, whereas Flashback Table does not work with structural change such as adding/dropping a column, adding/dropping constraints, truncating table. During flashback Table operation A DML exclusive lock associated with that particular table while flashback operation is going on these lock preventing any operation in this table during this period only row is replaced with old row here. 2. Flashback Database moves the entire database back in time; constraints are not an issue, whereas they are with Flashback Table. 3. Flashback Table cannot be used on a standby database.How should I set the database to improve Flashback performance? 
Use a fast file system (ASM) for your flash recovery area, configure enough disk space for the file system that will hold the flash recovery area can enable to set maximum retention target. If the storage system used to hold the flash recovery area does not have non-volatile RAM (ASM), try to configure the file system on top of striped storage volumes, with a relatively small stripe size such as 128K. This will allow each write to the flashback logs to be spread across multiple spindles, improving performance. For large production databases set LOG_BUFFER to be at least 8MB. This makes sure the database allocates maximum memory (typically 16MB) for writing flashback database logs.Performance Tuning Question/Answer:If you are getting complain that database is slow. What should be your first steps to check the DB performance issues?In case of performance related issues as a DBA our first step to check all the session connected to the database to know exactly what the session is doing because sometimes unexpected hits leads to create object locking which slow down the DB performance.The database performance directly related with Network load, Data volume and Running SQL profiling.1.  So check the event which is waiting for long time. If you find object locking kill that session (DML locking only) will solve your issues.To check the user sessions and waiting events use the join query on views: V$session,v$session_wait2.  After locking other major things which affect the database performance is Disk I/O contention (When a session retrieves information from datafiles (on disk) to buffer cache, it has to wait until the disk send the data). This waiting time we need to minimize.We can check these waiting events for the session in terms of db file sequential read (single block read P3=1 usually the result of using index scan) and db file scattered read (multi block read P3 >=2 usually the results of for full table scan) using join query on the view v$system_eventSQL> SELECT a.average_wait "SEQ READ", b.average_wait "SCAT READ"  2    FROM sys.v_$system_event a, sys.v_$system_event b  3   WHERE a.event = 'db file sequential read'AND b.event = 'db file scattered read';  SEQ READ  SCAT READ---------- ----------       .74        1.6When you find the event is waiting for I/O to complete then you must need to reduce the waiting time to improve the DB performance. To reduce this waiting time you must need to perform SQL tuning to reduce the number of block retrieve by particular SQL statement.How to perform SQL Tuning?1. First of all you need to identify High load SQL statement. You can identify from AWR Report TOP 5 SQL statement (the query taking more CPU and having low execution ratio). Once you decided to tune the particular SQL statement then the first things you have to do to run the Tuning Optimizer. The Tuning optimize will decide: Accessing Method of query, Join Method of query and Join order.2. To examine the particular SQL statement you must need to check the particular query doing the full table scan (if index not applied use the proper index technique for the table) or if index already applied still doing full table scan then check may be table is having wrong indexing technique try to rebuild the index.  It will solve your issues somehow…… otherwise use next step of performance tuning.3. Enable the trace file before running your queries, then check the trace file using tkprof created output file. 
According to explain_plan check the elapsed time for each query, and then tune them respectively.To see the output of plan table you first need to create the plan_table from and create a public synonym for plan_table @$ORACLE_HOME/rdbms/admin/utlxplan.sql)SQL> create public synonym plan_table for sys.plan_table;4. Run SQL Tuning Advisor (@$ORACLE_HOME/rdbms/admin/sqltrpt.sql) by providing SQL_ID as you find in V$session view. You can provide rights to the particular schema for the use of SQL Tuning Advisor:         Grant Advisor to HR;         Grant Administer SQL Tuning set to HR;SQL Tuning Advisor will check your SQL structure and statistics. SQL Tuning Advisor suggests indexes that might be very useful. SQL Tuning Advisor suggests query rewrites. SQL Tuning Advisor suggests SQL profile. (Automatic reported each time)5. Now in oracle 11g SQL Access Advisor is used to suggests new index for materialized views. 6. More: Run TOP command in Linux to check CPU usage information and Run VMSTAT, SAR, PRSTAT command to get more information on CPU, memory usage and possible blocking.7. Optimizer Statistics are used by the query optimizer to choose the best execution plan for each SQL statement. Up-to-date optimizer statistics can greatly improve the performance of SQL statements.8. A SQL Profile contains object level statistics (auxiliary statistics) that help the optimizer to select the optimal execution plan of a particular SQL statement. It contains object level statistics by correcting the statistics level and giving the Tuning Advisor option for most relevant SQL plan generation.DBMS_SQLTUNE.ACCEPT_SQL_PROFILE – to accept the correct plan from SQLplusDBMS_SQLTUNE.ALTER_SQL_PROFILE – to modify/replace existing plan from SQLplus.DBMS_SQLTUNE.DROP_SQL_PROFILE – to drop existing plan.Profile Type: REGULAR-PROFILE, PX-PROFILE (with change to parallel exec)SELECT NAME, SQL_TEXT, CATEGORY, STATUS FROM   DBA_SQL_PROFILES; 9. SQL Plan Baselines are a new feature in Oracle Database 11g (previously used stored outlines, SQL Profiles) that helps to prevent repeatedly used SQL statements from regressing because a newly generated execution plan is less effective than what was originally in the library cache. Whenever optimizer generating a new plan it is going to the plan history table then after evolve or verified that plan and if the plan is better than previous plan then only that plan going to the plan table. You can manually check the plan history table and can accept the better plan manually using the ALTER_SQL_PLAN_BASELINE function of DBMS_SPM can be used to change the status of plans in the SQL History to Accepted, which in turn moves them into the SQL Baseline and the EVOLVE_SQL_PLAN_BASELINE function of the DBMS_SPM package can be used to see which plans have been evolved. Also there is a facility to fix a specific plan so that plan will not change automatically even if better execution plan is available. The plan base line view: DBA_SQL_PLAN_BASELINES.Why use SQL Plan Baseline, How to Generate new plan using Baseline 10. SQL Performance Analyzer allows you to test and to analyze the effects of changes on the execution performance of SQL contained in a SQL Tuning Set. Which factors are to be considered for creating index on Table? How to select column for index? 1. Creation of index on table depends on size of table, volume of data. If size of table is large and you need only few data What are Different Types of Index? Is creating index online possible? 
Function Based Index/Bitmap Index/Binary Tree Index/4. implicit or explicit index, 5. Domain Index You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on that table. You can perform DML operations while the index building is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.An index can be considered for re-building under any of these circumstances:We must first get an idea of the current state of the index by using the ANALYZE INDEX VALIDATE STRUCTURE, ANALYZE INDEX COMPUTE STATISTICS command* The % of deleted rows exceeds 30% of the total rows (depending on table length). * If the ‘HEIGHT’ is greater than 4, as the height of level 3 we can insert millions of rows. * If the number of rows in the index (‘LF_ROWS’) is significantly smaller than ‘LF_BLKS’ this can indicate a large number of deletes, indicating that the index should be rebuilt.Differentiate the use of Bitmap index and Binary Tree index? Bitmap indexes are preferred in Data warehousing environment when cardinality is low or usually we have repeated or duplicate column. A bitmap index can index null value Binary-tree indexes are preferred in OLTP environment when cardinality is high usually we have too many distinct column. Binary tree index cannot index null value.If you are getting high “Busy Buffer waits”, how can you find the reason behind it? Buffer busy wait means that the queries are waiting for the blocks to be read into the db cache. There could be the reason when the block may be busy in the cache and session is waiting for it. It could be undo/data block or segment header wait. Run the below two query to find out the P1, P2 and P3 of a session causing buffer busy wait then after another query by putting the above P1, P2 and P3 values. SQL> Select p1 "File #",p2 "Block #",p3 "Reason Code" from v$session_wait Where event = 'buffer busy waits'; SQL> Select owner, segment_name, segment_type from dba_extents Where file_id = &P1 and &P2 between block_id and block_id + blocks -1;What is STATSPACK and AWR Report? Is there any difference? As a DBA what you should look into STATSPACK and AWR report?STATSPACK and AWR is a tools for performance tuning. AWR is a new feature for oracle 10g onwards where as STATSPACK reports are commonly used in earlier version but you can still use it in oracle 10g too. The basic difference is that STATSPACK snapshot purged must be scheduled manually but AWR snapshots are purged automatically by MMON BG process every night. AWR contains view dba_hist_active_sess_history to store ASH statistics where as STASPACK does not storing ASH statistics.You can run $ORACLE_HOME/rdbms/admin/spauto.sql to gather the STATSPACK report (note that Job_queue_processes must be set > 0 ) and awrpt to gather AWR report  for standalone environment and awrgrpt for RAC environment.In general as a DBA following list of information you must check in STATSPACK/AWR report. 
¦ Top 5 wait events (db file seq read, CPU Time, db file scattered read, log file sync, log buffer spac)¦ Load profile (DB CPU(per sec) Instance efficiency hit ratios (%Non-Parse CPU nearer to 100%)¦ Top 5 Time Foreground events (wait class is ‘concurrency’ then problem if User IO, System IO then OK)¦ Top 5 SQL (check query having low execution and high elapsed time or taking high CPU and low execution)¦ Instance activity¦ File I/O and segment statistics¦ Memory allocation¦ Buffer waits¦ Latch waits 1. After getting AWR Report initially crosscheck CPU time, db time and elapsed time. CPU time means total time taken by the CPU including wait time also. Db time include both CPU time and the user call time whereas elapsed time is the time taken to execute the statement.2. Look the Load profile Report: Here DB CPU (per sec) must be . If it is not means there is a CPU bound need more CPU (check happening for fraction time or all the time) and then look on this report Parse and Hard Parse. If the ratio of hard parse is more than parse then look for cursor sharing and application level for bind variable etc.3. Look instance efficiency Report: In this statistics you have to look ‘%Non-Parse CPU’, if this value nearer to 100% means most of the CPU resource are used into operation other than parsing which is good for database health.4. Look TOP five Time foreground Event: Here we should look ‘wait class’ if the wait class is User I/O, system I/O then OK if it is ‘Concurrency’ then there is serious problem then look Time(s) and Avg Wait time(s) if the Time (s) is more and Avg Wait Time(s) is less then you can ignore if both are high then there is need to further investigate (may be log file switch or check point incomplete).5. Look Time Model Statistics Report: This is detailed report of system resource consumption order by Time(s) and % of DB Time.6. Operating system statistics Report7. SQL ordered by elapsed time: In this report look for the query having low execution and high elapsed time so you have to investigate this and also look for the query using highest CPU time but the lower the execution.What is the difference between DB file sequential read and DB File Scattered Read? DB file sequential read is associated with index read where as DB File Scattered Read has to do with full table scan. The DB file sequential read, reads block into contiguous (single block) memory and DB File scattered read gets from multiple block and scattered them into buffer cache.  Dataguard Question/AnswerWhat are Benefits of Data Guard?Using Data guard feature in your environment following benefit:High availability, Data protection, Offloading backup operation to standby, Automatic gap detection and resolution in standby database, Automatic role transitions using data guard broker.Oracle Dataguard classified into two types:1. Physical standby (Redo apply technology)2. Logical Standby (SQL Apply Technology)Physical standby are created as exact copy (matching the schema) of the primary database and keeping always in recoverable mode (mount stage not open mode). In physical standby database transactions happens in primary database synchronized by using Redo Apply method by continually applying redo data on standby database received from primary database. Physical standby database can be opened for read only transitions only that time when redo apply is not going on. 
But from 11g onward using active data guard option (extra purchase) you can simultaneously open the physical standby database for read only access and can apply redo log received from primary in the meantime.Logical standby does not matching the same schema level and using the SQL Apply method to synchronize the logical standby database with primary database. The main advantage of logical standby database over physical standby is you can use logical standby database for reporting purpose while you are apply SQL.What are different services available in oracle data guard?1. Redo Transport Service: Transmit the redo from primary to standby (SYNC/ASYNC method). It responsible to manage the gap of redo log due to network failure. It detects if any corrupted archive log on standby system and automatically perform replacement from primary. 2. Log Apply Service: It applies the archive redo log to the standby. The MRP process doing this task.3. Role Transition service: it control the changing of database role from primary to standby includes: switchover, switchback, failover.4. DG broker: control the creation and monitoring of data guard through GUI and command line.What is different protection mode available in oracle data guard? How can check and change it?1. Maximum performance: (default): It provides the high level of data protection that is possible without affecting the performance of a primary database. It allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log.2. Maximum protection: This protection mode ensures that no data loss will occur if the primary database fails. In this mode the redo data needed to recover a transaction must be written to both the online redo log and to at least one standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions.3. Maximum availability: This provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one standby database.Step to create physical standby database?On Primary site Modification:1. Enable force logging: Alter database force logging;2. Create redolog group for standby on primary server:Alter database add standby logfile (‘/u01/oradata/--/standby_redo01.log) size 100m;3. Setup the primary database pfile by changing required parameterLog_archive_dest_n – Primary database must be running in archive modeLog_archive_dest_state_nLog_archive_config  -- enble or disable the redo stream to the standby site.Log_file_name_convert , DB_file_name_convert  -- these parameter are used when you are using different directory structure in standby database. It is used for update the location of datafile in standby database.Standby_File_Management  -- by setting this AUTO so that when oracle file added or dropped from primary automatically changes made to the standby.              DB_Unique_Name,  Fal_server, Fal_client4. Create password file for primary5. Create controlfile for standby database on primary site:alter database create standby controlfile as ‘STAN.ctl;6. Configure the listner and tnsname on primary database.On Standby Modification:1. Copy primary site pfile and modify these pfile as per standby name and location:2. Copy password from primary and modify the name.3. 
Startup standby database in nomount using modified pfile and create spfile from it4. Use the created controlfile to mount the database.5. Now enable DG Broker to activate the primary or standby connection.6. Finally start redo log apply.How to enable/disable log apply service for standby?Alter database recover managed standby database disconnect; apply in backgroundAlter database recover managed standby database using current logfile; apply in real time.Alter database start logical standby apply immediate; to start SQL apply for logical standby database.What are different ways to manage long gap of standby database?Due to network issue sometimes gap is created between primary and standby database but once the network issue is resolved standby automatically starts applying redolog to fill the gap but in case when the gap is too long we can fill through rman incremental backup in three ways.1. Check the actual gap and perform incremental backup and use this backup to recover standby site.2. Create controlfile for standby on primary and restore the standby using newly created controlfile.3. Register the missing archive log.Use the v$archived_log view to find the gap (archived not applied yet) then find the Current_SCN and try to take rman incremental backup from physical site till that SCN and apply on standby site with recover database noredo option. Use the controlfile creation method only when fail to apply with normal backup method. Create new controlfile for standby on primary site using backup current controlfile for standby; Copy this controlfile on standby site then startup the standby in nomount using pfile and restore with the standby using this controlfile: restore standby controlfile from ‘/location of file’; and start MRP to test.If still alert.log showing log are transferred to the standby but still not applied then need to register these log with standby database with Alter database register logfile ‘/backup/temp/arc10.rc’;What is Active DATAGUARD feature in oracle 11g?In physical standby database prior to 11g you are not able to query on standby database while redo apply is going on but in 11g solve this issue by quering  current_scn from v$database view you are able to view the record while redo log applying. Thus active data guard feature s of 11g allows physical standby database to be open in read only mode while media recovery is going on through redo apply method and also you can open the logical standby in read/write mode while media recovery is going on through SQL apply method.How can you find out back log of standby?You can perform join query on v$archived_log, v$managed_standbyWhat is difference between normal Redo Apply and Real-time Apply?Normally once a log switch occurs on primary the archiver process transmit it to the standby destination and remote file server (RFS) on the standby writes these redo log data into archive. Finally MRP service, apply these archive to standby database. This is called Redo Apply service.In real time apply LGWR or Archiver on the primary directly writing redo data to standby there is no need to wait for current archive to be archived. Once a transaction is committed on primary the committed change will be available on the standby in real time even without switching the log.What are the Back ground processes for Data guard?On primary:Log Writer (LGWR): collects redo information and updates the online redolog . 
It can also create local archive redo log and transmit online redo log to standby.Archiver Process (ARCn): one or more archiver process makes copies of online redo log to standby locationFetch Archive Log (FAL_server): services request for archive log from the client running on different standby server.On standby:Fetch Archive Log (FAL_client): pulls archive from primary site and automatically initiates transfer of archive when it detects gap.Remote File Server (RFS): receives archives on standby redo log from primary database. Archiver (ARCn):  archived the standby redo log applied by managed recovery process.Managed Recovery Process (MRP): applies archives redo log to the standby server.Logical Standby Process (LSP): applies SQL to the standby server.ASM/RAC Question/AnswerWhat is the use of ASM (or) Why ASM preferred over filesystem?ASM provides striping and mirroring. You must put oracle CRD files, spfile on ASM. In 12c you can put oracle password file also in ASM. It facilitates online storage change and also rman recommended to backed up ASM based database.What are different types of striping in ASM & their differences?Fine-grained striping is smaller in size always writes data to 128 kb for each disk, Coarse-grained striping is bigger in size and it can write data as per ASM allocation unit defined by default it is 1MB.Default Memory Allocation for ASM? How will backup ASM metadata?Default Memory allocation for ASM in oracle 10g in 1GB in Oracle 11g 256M in 12c it is set back again 1GB.You can backup ASM metadata (ASM disk group configuration) using Md_Backup.How to find out connected databases with ASM or not connected disks list?ASMCMD> lsctSQL> select DB_NAME from V$ASM_CLIENT;ASMCMD> lsdgselect NAME,ALLOCATION_UNIT_SIZE from v$asm_diskgroup;What are required parameters for ASM instance Creation?INSTANCE_TYPE = ASM by default it is RDBMSDB_UNIQUE_NAME = +ASM1 by default it is +ASM but you need to alter to run multiple ASM instance.ASM_POWER_LIMIT = 11 It defines maximum power for a rebalancing operation on ASM by default it is 1 can be increased up to 11. The higher the limit the more resources are allocated resulting in faster rebalancing. It is a dynamic parameter which will be useful for rebalancing the data across disks.ASM_DISKSTRING = ‘/u01/dev/sda1/c*’it specify a value that can be used to limit the disks considered for discovery. Altering the default value may improve the speed disk group mount time and the speed of adding a disk to disk group.ASM_DISKGROUPS = DG_DATA, DG_FRA: List of disk group that will be mounted at instance startup where DG_DATA holds all the datafiles and FRA holds fast recovery area including online redo log and control files. 
Typically FRA disk group size will be twice of DATA disk group as it is holding all the backups.How to Creating spfile for ASM database?SQL> CREATE SPFILE FROM PFILE = ‘/tmp/init+ASM1.ora’;Start the instance with NOMOUNT option: Once an ASM instance is present disk group can be used for following parameter in database instance to allow ASM file creation:DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_n, DB_RECOVERY_FILE_DEST, CONTROL_FILESLOG_ARCHIVE_DEST_n,LOG_ARCHIVE_DEST,STANDBY_ARCHIVE_DESTWhat are DISKGROUP Redundancy Level?Normal Redundancy: Two ways mirroring with 2 FAILURE groups with 3 quorum (optionally to store vote files)High Redundancy: Three ways mirroring requiring three failure groupsExternal Redundancy: No mirroring for disk that are already protecting using RAID on OS level.CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY  FAILGROUP failure_group_1 DISK '/devices/diska1' NAME diska1,'/devices/diska2' NAME diska2  FAILGROUP failure_group_2 DISK '/devices/diskb1' NAME diskb1,'/devices/diskb2' NAME diskb2;We are going to migrate new storage. How we will move my ASM database from storage A to storage B? First need to prepare OS level to disk so that both the new and old storage accessible to ASM then simply add the new disks to the ASM disk group and drop the old disks. ASM will perform automatic rebalance whenever storage will change. There is no need to manual i/o tuning. ASM_SQL> alter diskgroup DATA drop disk data_legacy1, data_legacy2, data_legacy3 add disk ‘/dev/sddb1’, ‘/dev/sddc1’, ‘/dev/sddd1’;What are required component of Oracle RAC installation?:1. Oracle ASM shared disk to store OCR and voting disk files.2. OCFS2 for Linux Clustered database3. Certified Network File system (NFS)4. Public IP: Configuration: TCP/IP (To manage database storage system)5. Private IP:  To manager RAC cluster ware (cache fusion) internally.6. SCAN IP: (Listener): All connection to the oracle RAC database uses the SCAN in their client connection string with SCAN you do not have to change the client connection even if the configuration of cluster changes (node added or removed). Maximum 3 SCAN is running in oracle.7. Virtual IP: is alternate IP assigned to each node which is used to deliver the notification of node failure message to active node without being waiting for actual time out. Thus possibly switchover will happen automatically to another active node continue to process user request.Steps to configure RAC database:1. Install same OS level on each nodes or systems.2. Create required number of group and oracle user account.3. Create required directory structure or CRS and DB home.4. Configure kernel parameter (sysctl.config) as per installation doc set shell limit for oracle user account.5. Edit etc/host file and specify public/private/virtual ip for each node.6. Create required level of partition for OCR/Votdisk and ASM diskgroup.7. Install OCFSC2 and ASM RPM and configure with each node.8. Install clustware binaries then oracle binaries in first node.9. Invoke netca to configure listener. 10. Finally invoke DBCA to configure ASM to store database CRD files and create database.What is the structure change in oracle 11g r2?1. Grid and (ASM+Clustware) are on home. (oracle_binaries+ASM binaries in 10g)2. OCR and Voting disk on ASM.3. SAN listener4. By using srvctl can manage diskgroups, SAN listener, oracle home, ons, VIP, oc4g.5. 
GSDWhat are oracle RAC Services?Cache Fusion: Cache fusion is a technology that uses high speed Inter process communication (IPC) to provide cache to cache transfer of data block between different instances in cluster. This eliminates disk I/O which is very slow. For example instance A needs to access a data block which is being owned/locked by another instance B. In such case instance A request instance B for that data block and hence access the block through IPC this concept is known as Cache Fusion.Global Cache Service (GCS): This is the main heart of Cache fusion which maintains data integrity in RAC environment when more than one instances needed particular data block then GCS full fill this task:In respect of instance A request GCS track that information if it finds read/write contention (one instance is ready to read while other is busy with update the block) for that particular block with instance B then instance A creates a CR image for that block in its own buffer cache and ships this CR image to the requesting instance B via IPC but in case of write/write contention (when both the instance ready to update the particular block) then instance A creates a PI image for that block in its own buffer cache, and make the redo entries and ships the particular block to the requesting instance B. The dba_hist_seg_stats is used to check the latest object shipped.Global Enqueue Service (GES): The GES perform concurrency (more than one instance accessing the same resource) control on dictionary cache lock, library cache lock and transactions. It handles the different lock such as Transaction lock, Library cache lock, Dictionary cache lock, Table lock.Global Resource Directory (GRD): As we know to perform any operation on data block we need to know current state of the particular data block. The GCS (LMSN + LMD) + GES keep track of the resource s, location and their status of (each datafiles and each cache blocks ) and these information is recorded in Global resource directory (GRD). Each instance maintains their own GRD whenever a block transfer out of local cache its GRD is updated.Main Components of Oracle RAC Clusterware?OCR (Oracle Cluster Registry): OCR manages oracle clusterware (all node, CRS, CSD, GSD info) and oracle database configuration information (instance, services, database state info).OLR (Oracle Local Registry): OLR resides on every node in the cluster and manages oracle clusterware configuration information for each particular node. The purpose of OLR in presence of OCR is that to initiate the startup with the local node voting disk file as the OCR is available on GRID and ASM file can available only when the grid will start. The OLR make it possible to locate the voting disk having the information of other node also for communicate purpose.Voting disk: Voting disk manages information about node membership. Each voting disk must be accessible by all nodes in the cluster for node to be member of cluster. If incase a node fails or got separated from majority in forcibly rebooted and after rebooting it again added to the surviving node of cluster. Why voting disk place to the quorum disk or what is split-brain syndrome issue in database cluster?Voting disk placed to the quorum disk (optionally) to avoid the possibility of split-brain syndrome. Split-brain syndrome is a situation when one instance trying to update a block and at the same time another instance also trying to update the same block. In fact it can happen only when cache fusion is not working properly. 
Voting disks are always configured as an odd-numbered series, because the loss of more than half of your voting disks causes the entire cluster to fail; with an even number, node eviction cannot decide which node to remove on failure. You can store the OCR and voting disks on ASM, and if necessary you can dynamically add or replace voting disks after you complete the cluster installation process, without stopping the cluster.

ASM backup: you can use md_backup (and md_restore) to back up and restore the ASM disk group configuration in case the disk group storage is lost.

OCR and voting file backup: Oracle clusterware automatically creates an OCR backup (an auto backup managed by crsd) every four hours and retains at least 3 backups (backup00.ocr, day.ocr, week.ocr in the grid home), but you can also take an OCR backup manually at any time:
ocrconfig -manualbackup                      -- take a manual backup of the OCR
ocrconfig -showbackup                        -- list the available backups
ocrdump -backupfile <backup-file-location>   -- validate a backup before any restore
ocrconfig -backuploc <new-location>          -- change the configured OCR backup location
dd if='<vote disk name>' of='<backup file name>'   -- take a voting disk backup

To check the OCR and voting disk locations:
crsctl query css votedisk
cat /etc/oracle/ocr.loc (or use ocrcheck)
ocrcheck                   -- check the OCR corruption status (if any)
crsctl check crs/cluster   -- check CRS status on the local and remote nodes

Moving the OCR and voting disk:
Log in as the root user (the OCR is owned by root) and, for the voting disk, stop all CRS first.
ocrconfig -replace ocrmirror/ocr   -- add/remove the OCR mirror and OCR file
crsctl add/delete css votedisks    -- add and remove voting disks in the cluster
olsnodes -n -p -i                  -- list all nodes in your cluster (run as root) and check public/private/VIP info

How can you restore the OCR in a RAC environment?
1. Stop the clusterware on all nodes and restart one node in exclusive mode to perform the restore. The -nocrs option ensures the crsd process and OCR do not start with the other nodes.
# crsctl stop crs
# crsctl stop crs -f
# crsctl start crs -excl -nocrs
Check whether crsd is still running and, if so, stop it:
# crsctl stop resource ora.crsd -init
2. If you want to restore the OCR to an ASM disk group, you must check/activate/repair/create a disk group with the same name and mount it from the local node. If you are not able to mount that disk group locally, drop it and re-create it with the same name. Finally run the restore with the current backup:
# ocrconfig -restore file_name
3. Verify the integrity of the OCR and stop the exclusive-mode CRS:
# ocrcheck
# crsctl stop crs -f
4. Run the ocrconfig -repair -replace command on all the other nodes where you did not run the restore. For example, if you restored node 1 of a 4-node cluster, run it on the remaining nodes 2, 3 and 4:
# ocrconfig -repair -replace
5. Finally start CRS on all nodes and verify with the CVU command:
# crsctl start crs
# cluvfy comp ocr -n all -verbose
Note: using ocrconfig -export / ocrconfig -import also enables you to restore the OCR.

Why does Oracle recommend using an OCR auto/manual backup to restore the OCR instead of export/import?
1. An OCR auto/manual backup is a consistent snapshot of the OCR, whereas an export is not.
2. Backups are created while the system is online, but you must shut down all nodes in the clusterware to take a consistent export.
3. You can inspect a backup using the OCRDUMP utility, whereas you cannot inspect the contents of an export.
4. You can list the backups using ocrconfig -showbackup, whereas you must keep track of each export yourself.

How do you restore voting disks?
1. Shut down CRS on all nodes in the cluster:
crsctl stop crs
2. Locate the current location of the voting disks, then restore each voting disk from a previous good backup (taken with dd) using the same dd command:
crsctl query css votedisk
dd if='<backup file name>' of='<vote disk name>'
3. Finally start CRS on all nodes:
crsctl start crs

How do you add a node or instance in a RAC environment?
1. From the ORACLE_HOME/oui/bin location on node1, run the addNode.sh script:
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}"
2. Run the ORACLE_HOME/root.sh script on node3.
3. From an existing node run srvctl config db -d db_name, then create a new mount point.
4. mkdir -p ORACLE_HOME_NEW/<mount point name>
5. Finally run the cluster installer for the new node and update the clusterware inventory.
Alternatively, you can start DBCA and, from the instance management page, choose "add instance" and follow the next steps.

How do you identify the master node in RAC?
# /u1/app/../crsd> grep MASTER crsd.log | tail -1
(or) cssd> grep -i "master node" ocssd.log | tail -1
You can also use the V$GES_RESOURCE view to identify the master node.

What is the difference between crsctl and srvctl?
crsctl manages cluster-related operations, such as starting/enabling cluster services, whereas srvctl manages Oracle-related operations, such as starting/stopping Oracle instances. Also, in Oracle 11gR2, srvctl can be used to manage the network, VIPs, disks, etc.

What are ONS/TAF/FAN/FCF in RAC?
ONS is part of the clusterware and is used to transfer messages between the node and application tiers.
Fast Application Notification (FAN) allows the database to notify the client of any changes: a node going UP/DOWN, or the database going UP/DOWN.
Transparent Application Failover (TAF) is a feature of Oracle Net Services that moves a session to a backup connection whenever the session fails.
FCF (Fast Connection Failover) is a feature of the Oracle client that receives notifications from FAN and processes them accordingly. It cleans up connections when a down event is received and adds new connections when an up event is received from FAN.

How does OCSSD start if the voting disk and OCR reside on ASM?
Without access to the voting disk there is no CSS to join the cluster or accelerate the clusterware startup, and the voting disk is stored in ASM; yet, per the Oracle startup order, CSSD starts before ASM, so how can it start when the OCR and voting files live in ASM? This is possible because, in 11g R2, the ASM disk header has new metadata, kfdhdb.vfstart and kfdhdb.vfend, which tell CSS where to find the voting files. This does not require the ASM instance to be up: once CSS gets the voting files, it can join the cluster easily.
Note: Oracle Clusterware can access the OCR and the voting disks present in ASM even if the ASM instance is down. As a result, CSS can continue to maintain the Oracle cluster even if the ASM instance has failed.

Upgrade/Migration/Patches Questions and Answers

What are the database patch types and how do you apply them?
CPU (Critical Patch Update, or one-off patch): security fixes released each quarter. They are cumulative, meaning they include the fixes from previous Oracle security alerts.
To apply a CPU you must use the opatch utility.
- Shut down all instances and listeners associated with the ORACLE_HOME that you are updating.
- Set your current directory to the directory where the patch is located and then run the opatch utility.
- After applying the patch, start up all your services and listeners, start up all your databases, log in as sysdba and run the catcpu.sql script.
- Finally run utlrp.sql to recompile invalid objects.
To roll back a CPU patch:
- Shut down all instances and listeners.
- Go to the patch location and run: opatch rollback -id 677666
- Start all the databases and listeners and run the catcpu_rollback.sql script.
- Bounce the database and run the utlrp.sql script.

PSU (Patch Set Update): security fixes plus priority fixes. Once a PSU patch is applied, only PSUs can be applied in the near future, until the database is upgraded to a newer version.
You need two things to apply a PSU patch: the latest version of opatch, and the PSU patch that you want to apply.
1. Check and update the opatch version: go to ORACLE_HOME/OPatch and run opatch version.
To update to the latest opatch: take a backup of the OPatch directory, remove the current OPatch directory, and finally unzip the downloaded patch into the OPatch directory. Then check your opatch version again.
2. Apply the PSU patch:
unzip p13923374_11203_.zip
cd 13923374
opatch apply   -- in the case of RAC, the opatch utility will prompt for an OCM (Oracle Configuration Manager) response file; you have to provide the complete path of the OCM response file if you have already created one.
3. Post-apply steps: start up the database, connect as sys as sysdba and run:
SQL> @catbundle.sql psu apply
SQL> quit
opatch lsinventory                        -- check which PSU patch is installed
opatch rollback -id 13923374              -- roll back a patch you have applied
opatch nrollback -id 13923374, 13923384   -- roll back multiple patches you have applied

SPU (Security Patch Update): an SPU cannot be applied once a PSU has been applied, until the database is upgraded to a new base version.

Patchset (e.g. 10.2.0.1 to 10.2.0.3): applying a patchset usually requires the OUI. Shut down all database services and the listener, apply the patchset to the Oracle binaries, then start up the services and listener and run the post-patch scripts.

Bundle patches: these are for Windows and Exadata and include both the quarterly security patches and recommended fixes.

You have a collection of nearly 100 patches. How can you apply only one of them?
Use napply with the specific patch ID. You can apply one patch out of a collection of extracted patches with opatch util napply -id 9 -skip_subset -skip_duplicate, which will apply only patch 9 from the extracted patches.

What is a rolling upgrade?
It is a new ASM feature in Oracle 11g. It enables you to patch ASM nodes in a clustered environment without affecting database availability. During a rolling upgrade we can maintain one node while the other nodes run different software versions.

What happens when you use STARTUP UPGRADE?
STARTUP UPGRADE enables you to open a database created with an earlier version. It restricts logons to SYSDBA and disables system triggers. After STARTUP UPGRADE only specific view queries can be used; no other views can be used until catupgrd.sql has been executed.
iyarpage · 6 years ago
Server Side Swift with Vapor: Full Book Now Available!
Great news everyone: Our complete Server Side Swift with Vapor book is now available!
And best yet, you can still grab it at the special release price — for a limited time.
If you’re a beginner to web development, but have worked with Swift for some time, you’ll find it’s easy to create robust, fully featured web apps and web APIs with Vapor 3, and this book will teach you how to do it.
This release adds the last four chapters to round out the book:
Chapter 22: Google Authentication
Chapter 23: Database/API Versioning & Migration
Chapter 25: Middleware
Chapter 26: Deploying with Heroku
What’s Inside Server Side Swift with Vapor?
Here’s the full set of chapters you’ll get in this edition of the book:
Chapter 1: Introduction: Get a quick overview of the history of the Vapor project and how the book is structured.
Chapter 2: Hello Vapor: Beginning a project using a new technology can be daunting. Vapor makes it easy to get started. It even provides handy scripts to make sure that your computer is configured correctly. In this chapter, you’ll start by installing the Vapor Toolbox, then use it to build and run your first project. You’ll finish by learning about routing, accepting data and returning JSON.
Get started with Vapor — no previous web development experience required!
Chapter 3: HTTP Basics: Before you begin your journey with Vapor, you’ll first review the fundamentals of how the web and HTTP operate, including its methods and most common response codes. You’ll also learn how Vapor differs from other Swift frameworks, its benefits, and how it can augment your web development experience.
Chapter 4: Async: In this chapter, you’ll learn about asynchronous and non-blocking architectures. You’ll cover Vapor’s approach to these architectures and how to use them. Finally, the chapter will provide a foundational overview of SwiftNIO, a core technology used by Vapor.
Chapter 5: Fluent and Persisting Models: In Chapter 2, “Hello, Vapor!”, you learned the basics of creating a Vapor app, including how to create routes. Chapter 5 explains how to use Fluent to save data in Vapor apps. You’ll also learn how to deploy your app using Vapor Cloud.
Learn how to deploy your projects up to Vapor Cloud!
Chapter 6: Configuring a Database: Databases allow you to persist data in your apps. In this chapter, you’ll learn how to configure your Vapor app to integrate with the database of your choice. Finally, you’ll deploy your app to Vapor Cloud and learn how to set up the database there.
Chapter 7: CRUD Database Operations: Chapter 5, “Fluent and Persisting Models”, explained the concept of models and how to store them in a database using Fluent. Chapter 7 concentrates on how to interact with models in the database. You’ll learn about CRUD operations and how they relate to REST APIs. You’ll also see how to leverage Fluent to perform complex queries on your models. Finally, like all chapters in this section, you’ll deploy your code to Vapor Cloud.
Chapter 8: Controllers: In previous chapters, you wrote all the route handlers in one file. This isn’t sustainable for large projects as the file quickly becomes too big and cluttered. This chapter introduces the concept of controllers to help manage your routes and models, using both basic controllers and RESTful controllers. Finally, you’ll deploy your code to Vapor Cloud.
Chapter 9: Parent Child Relationships: Chapter 5, “Fluent and Persisting Models”, introduced the concept of models. This chapter will show you how to set up a parent child relationship between two models. You’ll learn the purpose of these relationships, how to model them in Vapor and how to use them with routes. You’ll complete the tutorial by deploying your code to Vapor Cloud.
Easily send test requests and try out your Vapor projects with RESTed!
Chapter 10: Sibling Relationships: In Chapter 9, “Parent Child Relationships”, you learned how to use Fluent to build parent child relationships between models. Chapter 10 will show you how to implement the other type of relationship: sibling relationships. You’ll learn how to model them in Vapor and how to use them in routes. Finally, you’ll deploy your code to Vapor Cloud.
Chapter 11: Testing: In this chapter, you’ll learn how to write tests for your Vapor applications. You’ll learn why testing is important and how it works with Swift Package Manager. Then, you’ll learn how to write tests for the TIL application from the previous chapters. Finally, you’ll see why testing matters on Linux and how to test your code on Linux using Docker.
Chapter 12: Creating a Simple iPhone App I: In the previous chapters, you created an API and interacted with it using RESTed. However, users expect something a bit nicer to use TIL! The next two chapters show you how to build a simple iOS app that interacts with the API. In this chapter, you’ll learn how to create different models and get models from the database.
Build a simple iPhone app to interact with your Vapor backend!
Chapter 13: Creating a Simple iPhone App II: In this chapter, you’ll expand the app to include viewing details about a single acronym. You’ll also learn how to perform the final CRUD operations: edit and delete. Finally, you’ll learn how to add acronyms to categories.
Chapter 14: Templating with Leaf: In a previous section of the book, you learned how to create an API using Vapor and Fluent. This section explains how to use Leaf to create dynamic websites in Vapor applications. Just like the previous section, you’ll deploy the website to Vapor Cloud.
Chapter 15: Beautifying Pages: In this chapter, you’ll learn how to use the Bootstrap framework to add styling to your pages. You’ll also learn how to embed templates so you only have to make changes in one place. Next, you’ll also see how to serve files with Vapor. Finally, like every chapter in this section, you’ll deploy the new website to Vapor Cloud.
Learn how to style your pages with the Bootstrap framework!
Chapter 16: Making a Simple Web App I: In the previous chapters, you learned how to display data in a website and how to make the pages look nice with Bootstrap. In this chapter, you’ll learn how to create different models and how to edit acronyms.
Chapter 17: Making a Simple Web App II: In this chapter, you’ll learn how to allow users to add categories to acronyms in a user-friendly way. Finally, you’ll deploy your completed web application to Vapor Cloud.
Chapter 18: API Authentication, Part I: In this chapter, you’ll learn how to protect your API with authentication. You’ll learn how to implement both HTTP basic authentication and token authentication in your API.
Chapter 19: API Authentication, Part II: Once you’ve implemented API authentication, neither your tests nor the iOS application work any longer. In this chapter, you’ll learn the techniques needed to account for the new authentication requirements, and you’ll also deploy the new code to Vapor Cloud.
Chapter 20: Cookies and Sessions: In this chapter, you’ll see how to implement authentication for the TIL website. You’ll see how authentication works on the web and how Vapor’s Authentication module provides all the necessary support. You’ll then see how to protect different routes on the website. Next, you’ll learn how to use cookies and sessions to your advantage. Finally, you’ll deploy your code to Vapor Cloud.
Chapter 21: Validation: In this chapter, you’ll learn how to use Vapor’s Validation library to verify some of the information users send the application. You’ll create a registration page on the website for users to sign up. You’ll validate the data from this form and display an error message if the data isn’t correct. Finally, you’ll deploy the code to Vapor Cloud.
Learn how to use OAuth 2.0 to authenticate your users!
Chapter 22: Google Authentication: Sometimes users don’t want to create extra accounts for an application and would prefer to use their existing accounts. In this chapter, you’ll learn how to use OAuth 2.0 to delegate authentication to Google, so users can log in with their Google accounts instead.
Chapter 23: Database/API Versioning & Migration: Once you’re in production, you can’t just delete your database and start over. Instead, you can use Vapor’s Migration protocol to cautiously introduce your modifications while still having a revert option should things not go as expected.
Chapter 24: Caching: Whether you’re creating a JSON API, building an iOS app, or even designing the circuitry of a CPU, you’ll eventually need a cache. In this chapter, you’ll learn the philosophy behind and uses of caching to make your app feel snappier and more responsive.
Learn how to create middleware for Vapor to view and modify requests!
Chapter 25: Middleware: In the course of building your application, you’ll often find it necessary to integrate your own steps into the request pipeline, via middleware. This allows you to do things like log incoming requests, catch errors and display messages, rate-limit traffic to particular routes and more.
Chapter 26: Deploying with Heroku: Heroku is a popular hosting solution that simplifies deployment of web and cloud applications. It supports a number of popular languages and database options. In this chapter, you’ll learn how to deploy a Vapor web app with a Postgres database on Heroku.
Chapter 27: WebSockets: WebSockets, like HTTP, define a protocol used for communication between two devices. Unlike HTTP, the WebSocket protocol is designed for realtime communication. Vapor provides a succinct API to create a WebSocket server or client. In this chapter, you’ll build a simple server/client application that allows users to share their current location with others, who can then view this on a map in realtime.
Where to Go From Here?
Here’s how you can get your full copy of Server Side Swift with Vapor:
If you’ve pre-ordered Server Side Swift with Vapor, you can log in to the store and download the complete Server Side Swift with Vapor book here.
If you haven’t yet bought Server Side Swift with Vapor, you can get it at the limited-time, release price of $44.99.
Don’t wait though — the release price is only good until the end of Friday, August 17. I’d hate for you to miss out!
Whether you’re looking to create a backend for your iOS app or want to create fully featured web apps, Vapor is the perfect platform for you.
This book starts with the basics of web development and introduces the basics of Vapor; it then walks you through creating APIs and web backends, shows you how to create and configure databases, explains how to deploy to Heroku, walks you through implementing OAuth 2.0, how to perform nearly effortless migrations, how to work with WebSockets, and more!
The Vapor book team and I truly hope that you enjoy Server Side Swift with Vapor!
The post Server Side Swift with Vapor: Full Book Now Available! appeared first on Ray Wenderlich.
news47ell · 6 years ago
Lightning Base Review: Managed WordPress Hosting
New Post has been published on https://www.news47ell.com/reviews/lightning-base-review-managed-wordpress-hosting/
If you’re looking for the best managed WordPress hosting provider then you’ve come to the right place.
Since Day one, back on March 26th, 2016, the day I signed up for Lightning Base, I knew that it was going to be the place I call home for News47ell.
As I begin this Lightning Base Review, I’m going to say it now: Sign up. If you are looking for the most secure, fastest and the best managed WordPress hosting provider, look no more, because Lightning Base is all of this and much more.
Keep reading my Lightning Base review to get an in-depth look at Lightning Base and its exclusive features that make it one of the best managed WordPress hosting providers out there.
Lightning Base review
Coming soon.
— Lightning Base (@LightningBase) December 13, 2011
Choosing a hosting provider for your WordPress site is like choosing a house for you and your family. It needs to be:
Secure & Private: You don’t want anyone to be able to access your home.
Great environment: You want your family to live in a great neighborhood.
Built by professionals: You don’t want the house to collapse.
Upgradeable: An unplanned baby? No problem, build an extra room.
24/7 support: When something breaks inevitably, there should be a team of experts who are ready with a fix.
All managed WordPress hosting providers promise to offer the features above, but not all follow through on their promises. Many fall short on the environment, for example by using old architecture; some fall short on support; and some on security and privacy, which is really scary since you're trusting them with your hard work. It's unacceptable.
Lightning Base, on the other hand, follows up on everything I mentioned above. Making News47ell, more secure, with a lightning-fast loading time and a 99.9% uptime. It’s unlike anything else I have ever seen before in my life.
Why I moved to a new host
Before Lightning Base, I migrated from one host to another and finally, I settled on one that I thought was reliable. I was completely wrong.
News47ell.com started to flatline multiple times a day. I did my best trying to figure out what the hell was causing this issue but I had no luck. And, with no fix on the horizon even after I contacted their support team, I decided it was time to take News47ell somewhere else.
Yes! The red highlight on line 1 is an 8-hour-and-40-minute downtime!
I wanted to host News47ell with a top-notch managed WordPress hosting provider where I don’t even need to contact the support team in case of an emergency because there will be no case of emergencies. A host of servers that are already configured properly and maintained by professionals, where I won’t face any major issues randomly during the day that takes News47ell offline multiple times a day.
I wanted a managed WordPress hosting provider that I could watch News47ell grow old with and offer a few specific things:
PHP 7
Staging Environment
Let’s Encrypt
Options to scale
Reasonable prices
After a few days of research, I came across Lightning Base and I decided to send the support team an email to see if they offer the things I listed above.
When I got a reply from Lightning Base, I was surprised to see that the person who replied to me was Chris Piepho, the Founder of Lightning Base.
We sent emails back and forth for a week, I asked him about every tiny detail I could think of before signing up. He was very detail oriented in his replies, answering every question I had with as much information as possible. Towards the end, he assured me I would find everything I was looking for AND much more with Lightning Base.
And so, on March 26th, 2016, at 8:43 PM, I signed up for Lightning Base and with that, News47ell started a new chapter and a new journey with a new host. It actually feels more like a home now.
Make sure to read my announcement article which I published when I moved my site to Lightning Base.
The 3 Promises:
Lightning Base describes themselves as a Fast, Secure and Managed WordPress hosting provider. After hosting my site with them for nearly 2 years (1 Year, 11 Months) I can say Lightning Base kept their word and delivered.
So let’s get technical and dive in deeper in my Lightning Base review to see how Lightning Base delivers on being a fast, secure and managed WordPress hosting provider with all the features it has to offer.
Fast WordPress Hosting:
The reason why Lightning Base is a Fast WordPress hosting provider comes from the fact that Lightning Base doesn’t use NGINX, Apache, Lighttpd, Facebook HHMV or Microsoft IIS. Instead, it uses LiteSpeed.
To be more specific, LiteSpeed Enterprise, an Apache compatible, proprietary web server and a server-level caching software, developed and maintained by LiteSpeed Technologies.
LiteSpeed Web Server is the 4th most popular web server with a market share of 3.3%. It includes the following features:
HTTP/2: LiteSpeed is the first commercial server to offer full HTTP/2 support. HTTP/2 features include a binary protocol, full multiplexing and header compression.
Gzip compression: Save bandwidth by compressing the files sent to the client.
Apache Compatibility: LiteSpeed Web Server has been designed to run off Apache’s httpd.conf and .htaccess files.
Apache modules: LiteSpeed Web Server is compatible with Apache core modules like mod_rewrite, mod_security, mod_include, and mod_cache.
.Htaccess caching: LiteSpeed Web Server uses .htaccess caching to make use of .htaccess files without the performance hit.
And many, many more features…
Along with that, Lightning Base servers use 100% SSD based storage. Arranged in a Raid 10 configuration. These SSDs provide redundancy, speed and combining them with Cloudflare CDN makes for the absolute fastest access times.
Secure WordPress Hosting
According to Lightning Base:
Our servers are protected by a comprehensive, dynamic firewall, followed by a web app firewall configured to prevent malicious code.
Lightning Base isolates filesystem for each and every user and uses a set of security features that come packed with LiteSpeed. For example:
Anti-DDoS connection
Per-IP Throttling
Anti SSL BEAST
SSL Renegotiation Protection
Strict HTTP request validation
Mod_security
And much more…
If all of this isn't enough, you can always let your traffic go through Cloudflare and use the well-known Wordfence Security WordPress plugin. And always use a complex, long password and keep it safe using 1Password.
For more info on how to keep intruders away from your site, read my tutorial: How to Protect WordPress Login Page.
Managed WordPress Hosting
Imagine this scenario: You’re busy at work and you get a phone call that a water pipe broke and water is spewing everywhere in your house. What would you do?
Call a technician to fix it?
Take time off of work, go home and try to fix it yourself? Keep in mind that you don’t know anything about fixing water pipes.
I would go with option A. Let professionals do their job.
That's what managed WordPress hosting is all about: letting the server administrator fix any issue the server might have and keep it fast, secure and up and running, while you focus on creating beautiful, read-worthy content.
Throughout my time with Lightning Base, I never had to deal with any issue. It’s all managed by Chris.
Lightning Base Features
The architecture of LiteSpeed
As I mentioned above, Lightning Base isn't your typical managed WordPress hosting provider. It uses LiteSpeed web servers, which are capable of handling thousands of concurrent clients with minimal memory consumption and CPU usage.
Each Lightning Base site comes pre-installed with LiteSpeed cache WordPress plugin.
This plugin is loaded with dozens of features that take your WordPress site speed to the next level.
Such as:
OPcode – Object Cache (Memcached/LSMCD/Redis)
Minify CSS, JavaScript, and HTML
Combine and load CSS/JS Asynchronously
Browser Cache Support
Smart preload crawler with support for SEO-friendly sitemap
Database Cleaner and Optimizer
HTTP/2 Push for CSS/JS (on web servers that support it)
DNS Prefetch
Cloudflare API
WebP image format support
Heartbeat control
The features of this awesome plugin don’t end here. There is a set of features that are exclusive to hosts that are LiteSpeed-powered, like Lightning Base, and those features include:
Automatic page caching to greatly improve site performance
Automatic purge of related pages based on certain events
Private cache for logged-in users
Caching of WordPress REST API calls
Ability to schedule purge for specified URLs
WooCommerce and bbPress support
WordPress CLI commands
HTTP/2 & QUIC support (QUIC not available in OpenLiteSpeed)
ESI (Edge Side Includes) support.
For a full list of all exclusive and non-exclusive features, check the plugin in the WordPress directory.
As for the architecture behind Lightning Base, each and every server uses the following software:
CentOS
LiteSpeed
PHP 7.2: You can switch between PHP 5.6, 7, 7.1, and 7.2 in cPanel
MariaDB: Community developed, free fork and drop-in replacement of MySQL
Let’s Encrypt
These work together to increase the performance and speed of your site to a whole new level.
As well as making it more secure and reliable, while maintaining very similar features that everyone knows and loves. Thanks to its compatibility with Apache, its ability to read and run off Apache’s httpd.conf and .htaccess files with no configuration required.
Servers locations
When signing up, you get to choose the location of your server between 3 options: US Central – Australia Melbourne – Netherlands Amsterdam.
Chi252
Chi351
Chi352
Chi353
Chi354
Chi355
Chi356
Iwa251
Ams251
Ams252
Mel251
Both the Chicago and Amsterdam servers are powered by SingleHop, a leading global provider of hosted IT infrastructure and cloud computing that brings enterprise-class technologies to deliver a customized cloud infrastructure experience for enterprises of all sizes.
As for Melbourne, that server is powered by IBM Softlayer. Now known as IBM Cloud. A global cloud infrastructure platform built for Internet scale with a modular architecture that provides unparalleled performance, control and a global network for secure, low-latency communications.
Each one of these servers is placed strategically and optimized in a way to deliver your content anywhere around the world in a lightning fast, reliable and secure way with the absolute minimum waiting time and minimum to no downtime.
Lightning Base Dashboard
Lightning Base has a pretty clean, straightforward and easy to understand dashboard which contains everything you need to manage your account.
Active Products/Service: Easily manage your site(s) from here, access cPanel and upgrade your server and your email, or the email’s cPanel to manage spam filter and email forwarding.
Support Tickets: Have a quick access to past and current tickets and open new ones right there from your dashboard.
Domains: Transfer and Purchase domains with any .TLD .gTLD .cctld etc.
Clicking on your site in the product and service widget, it will reveal extra info about it, such as:
Disk / Bandwidth usage
Quick access to cPanel
Few cPanel shortcuts
Extra info about your site
CDN
Lightning Base takes advantage of the amazing Cloudflare CDN which makes your site load even faster anywhere around the world. As well as adding an extra layer of security and give you a set of tools that will optimize and speed up your site.
News47ell currently runs on Cloudflare. It supports both HTTP2 and Railgun, and best of all, it's free. But if you want a paid CDN, check out KeyCDN or BunnyCDN.
Uptime
This was my main frustration with my previous host, a daily downtime of about 30 minutes with no solution in sight.
With Lightning Base, things are again very different, with an uptime of 99.9% on both Pingdom and Uptime Robot.
You can check out our public status page.
Email
With each Lightning Base account, you will get 5 email addresses, 5 GB of storage and unlimited forwarders. It’s great, free and easy to set up. You can either go with the RoundCube web client or set up apps like Mail, Spark or Thunderbird.
Free SSL
Nowadays everybody knows about Let’s Encrypt. The free, easy to deploy SSL certificate. Lightning Base offers an easy solution to deploy your SSL certificate onto your site through cPanel.
Have your own certificate? No problem! Lightning Base also offers the option to bring your own certificate and deploy it, free of charge.
Staging environment
Lightning Base does offer a staging environment. It’s a part of the cPanel software ecosystem and it works really well.
You can easily clone your production site with one click, do all the changes and development that you want. And once you’re ready, one-click is all that separates you from pushing the staging environment back to the production site.
Backups
When something is important to you, back it up, not just once, but multiple times and put the backup in multiple places. Online and offline.
That’s what Lightning Base does with their customer’s data. They perform a daily backup of all the data they have and then they copy these backups to an off-site location in case of a disaster.
Full backups are performed weekly and a database backup is performed daily. You can increase the frequency of the backups but note that it will take more of your plans bandwidth.
If you have your own storage, you can send these backups to yourself.
Customer Support
Support at Lightning Base is something out of this world. Not all managed WordPress hosting providers have a great customer support like the one provided by Lightning Base.
So why is it so good?
Most of the time when I contacted the support, it was Chris who replied to me. He’s the only one who knows exactly how his hosting environment runs.
When someone else replies, your ticket won’t travel between multiple support agents until it gets fixed. Only one is assigned to it and only that agent will reply to you. That’s good because then you wouldn’t have to re-explain your issue to every new agent that replies to a single ticket.
You will wait only a short amount of time before you get a reply from/to your tickets. It will be as detailed as you like which helps you understand exactly what caused this, how it was solved, and how to prevent it from happening again.
Isn’t that enough?
I think it is. Seeing that Chris handles most of the support tickets by himself, this helps create a great relationship between the customer and the founder. This isn’t easy to have with other managed WordPress hosting providers. This was one of the main reasons in the first place why I started writing my Lightning Base review.
Affiliate Program
This is pretty straightforward, like many affiliate programs out there. You will get a special link. Once someone visits this link, a cookie will be placed on the visitor’s computer and it will stay active for 90 days.
Once the visitor signs up, you will get 20% of the hosting plan revenue. When your account reaches $100, you can have the balance paid via Paypal.
Pricing:
Lightning Base offers a wide range of plans for everyone. Whether it’s your personal or business site. This fast and reliable Managed WordPress Hosting provider offers very generous plans to meet your demand.
Starting at $9.95/mo, this personal plan will get you 1 WordPress with 10,000 Monthly Pageviews, 1GB SSD Storage, and 10GB bandwidth.
Prices go all the way up to $99.95/mo, which allows up to 25 WordPress sites with 140,000 Monthly Pageviews, 14GB SSD Storage, and 140GB bandwidth.
All plans come with 24/7 support, backup, PHP 7 and Let’s Encrypt.
Let’s take News47ell for a spin
I did a few tests on News47ell because I wanted my Lightning Base review to contain test results taken from a live site, rather than a demo site. Because let’s face it, you want to host a real site, with millions of visitors. Not a dummy site.
After conducting multiple tests, News47ell got an impressive score on all the tests.
Bitcatcha – Test Result
Bitcatcha allows you to determine how fast your server is by testing it in 8 different locations around the world.
As you can see, News47ell got a pretty impressive score with a rating of A+
Pingdom – Test Result
Pingdom is a very popular tool that allows you to monitor the up/downtime of your website. They also provide a tool that allows you to analyze your site, check it’s load time and find any bottlenecks.
As you can see, News47ell’s performance grade is A 95 with a load time of 536 ms.
WebPageTest – Test Result
WebPageTest is another popular and much more detailed tool that tests the speed of your site using a wide range of mobile devices as well as desktop, on any browser of your choosing in locations all around the world.
As you can see, News47ell scored 1.539s on the first loading time and 0.231s on the first byte. And A grades across all other tests performed by WebPageTest
SSL Labs Powered by Qualys SSL Labs – Test Result
Last but not least there’s Qualys SSL Labs. It’s a test that determines whether your server SSL configuration is done the right way. Scott Helme describes it in a simple way in this article by saying:
‘It’s a great way to get a feel for whether or not you’re doing SSL right.’
As you can see, again, Lightning Base scores an impressive A+ rating.
Hashtag LightningBase
Here are some tweets that people sent out about their experience with Lightning Base.
Congrats to @asmallorange @KinstaHosting @LightningBase @pagely @getpantheon @pressidium @presslabs pic.twitter.com/kQ72QbVtiC
— Review Signal (@ReviewSignal) July 28, 2015
I've had a great experience migrating #WordPress sites to @LightningBase. Top notch performance and support. Wish I knew of them years ago.
— Scott Carter (@sc456a) June 30, 2016
Loving @LightningBase — awesome WP hosting. Highly recommended. Also, their support is awesome. Thanks Chris! #hosting #WordPress
— Hannah Wright (@hannahwrightAK) June 14, 2016
My new hosting company is so fast they respond to my tickets before I even finish typing them. Thanks @LightningBase
— Brittney Wilson, BSN (@TheNerdyNurse) January 10, 2015
Conclusion
Thank you for reading all the way to the end. Although we are at the end of my Lightning Base review, hopefully, this will be the start of your online journey with this amazing managed WordPress hosting provider.
The amount of love, dedication, and work that Chris put into Lightning Base is unlike anything I have ever seen. It’s truly remarkable.
Chris managed to build a state of the art managed WordPress hosting environment, unlike anything you’ve seen before. Using software that works all together in harmony to create the ultimate hosting experience for everyone using WordPress.
Don’t miss out on experiencing what it’s like to host your WordPress site with Lightning Base and maybe one day, you will write your own Lightning Base review.
Lightning Base
#LightningBase #WordPress #Hosting #LiteSpeed #WebHosting #WebPerf
suzanneshannon · 4 years ago
Let’s Create Our Own Authentication API with Nodejs and GraphQL
Authentication is one of the most challenging tasks for developers just starting with GraphQL. There are a lot of technical considerations, including what ORM would be easy to set up, how to generate secure tokens and hash passwords, and even what HTTP library to use and how to use it. 
In this article, we’ll focus on local authentication. It’s perhaps the most popular way of handling authentication in modern websites and does so by requesting the user’s email and password (as opposed to, say, using Google auth.)
Moreover, This article uses Apollo Server 2, JSON Web Tokens (JWT), and Sequelize ORM to build an authentication API with Node.
Handling authentication
As in, a log in system:
Authentication identifies or verifies a user.
Authorization is validating the routes (or parts of the app) the authenticated user can have access to. 
The flow for implementing this is:
The user registers using password and email
The user’s credentials are stored in a database
The user is redirected to the login when registration is completed
The user is granted access to specific resources when authenticated
The user's state is stored in any one of the browser storage mediums (e.g. localStorage, cookies, session) or a JWT (a minimal sign/verify sketch follows this list).
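To make that last step concrete before any GraphQL is involved, here is a minimal, illustrative sketch of the token round trip using the same jsonwebtoken package this tutorial installs later. The secret and payload here are made-up placeholder values, not anything from the project.

const jwt = require('jsonwebtoken')

// On registration/login, the server signs a token that encodes who the user is.
const token = jwt.sign({ id: 1, email: 'user@example.com' }, 'some-long-secret', { expiresIn: '1d' })

// On every later request, the server verifies the token to recover that identity.
const payload = jwt.verify(token, 'some-long-secret')
console.log(payload.id, payload.email) // 1 user@example.com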
Pre-requisites
Before we dive into the implementation, here are a few things you’ll need to follow along.
Node 6 or higher
Yarn (recommended) or NPM
GraphQL Playground
Basic Knowledge of GraphQL and Node
…an inquisitive mind!
Dependencies 
This is a big list, so let’s get into it:
Apollo Server: An open-source GraphQL server that is compatible with any kind of GraphQL client. We won’t be using Express for our server in this project. Instead, we will use the power of Apollo Server to expose our GraphQL API.
bcryptjs: We want to hash the user passwords in our database. That’s why we will use bcrypt. It relies on Web Crypto API‘s getRandomValues interface to obtain secure random numbers.
dotenv: We will use dotenv to load environment variables from our .env file. 
jsonwebtoken: Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token. jsonwebtokenwill be used to generate a JWT which will be used to authenticate users.
nodemon: A tool that helps develop Node-based applications by automatically restarting the node application when changes in the directory are detected. We don’t want to be closing and starting the server every time there’s a change in our code. Nodemon inspects changes every time in our app and automatically restarts the server. 
mysql2: An SQL client for Node. We need it to connect to our SQL server so we can run migrations.
sequelize: Sequelize is a promise-based Node ORM for Postgres, MySQL, MariaDB, SQLite and Microsoft SQL Server. We will use Sequelize to automatically generate our migrations and models. 
sequelize cli: We will use Sequelize CLI to run Sequelize commands. Install it globally with yarn add --global sequelize-cli  in the terminal.
Setup directory structure and dev environment
Let’s create a brand new project. Create a new folder and this inside of it:
yarn init -y
The -y flag indicates we are selecting yes to all the yarn init questions and using the defaults.
We should also put a package.json file in the folder, so let’s install the project dependencies:
yarn add apollo-server bcryptjs dotenv jsonwebtoken nodemon sequelize sqlite3
Next, let's add Babel to our development environment:
yarn add babel-cli babel-preset-env babel-preset-stage-0 --dev
Now, let’s configure Babel. Run touch .babelrc in the terminal. That creates and opens a Babel config file and, in it, we’ll add this:
{
  "presets": ["env", "stage-0"]
}
It would also be nice if our server starts up and migrates data as well. We can automate that by updating package.json with this:
"scripts": {
  "migrate": " sequelize db:migrate",
  "dev": "nodemon src/server --exec babel-node -e js",
  "start": "node src/server",
  "test": "echo \"Error: no test specified\" && exit 1"
},
Here’s our package.json file in its entirety at this point:
{
  "name": "graphql-auth",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "migrate": " sequelize db:migrate",
    "dev": "nodemon src/server --exec babel-node -e js",
    "start": "node src/server",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "apollo-server": "^2.17.0",
    "bcryptjs": "^2.4.3",
    "dotenv": "^8.2.0",
    "jsonwebtoken": "^8.5.1",
    "nodemon": "^2.0.4",
    "sequelize": "^6.3.5",
    "sqlite3": "^5.0.0"
  },
  "devDependencies": {
    "babel-cli": "^6.26.0",
    "babel-preset-env": "^1.7.0",
    "babel-preset-stage-0": "^6.24.1"
  }
}
Now that our development environment is set up, let’s turn to the database where we’ll be storing things.
Database setup
We will be using MySQL as our database and Sequelize ORM for our relationships. Run sequelize init (assuming you installed it globally earlier). The command should create three folders: /config /models and /migrations. At this point, our project directory structure is shaping up. 
Let’s configure our database. First, create a .env file in the project root directory and paste this:
NODE_ENV=development
DB_HOST=localhost
DB_USERNAME=
DB_PASSWORD=
DB_NAME=
Then go to the /config folder we just created and rename the config.json file in there to config.js. Then, drop this code in there:
require('dotenv').config()

const dbDetails = {
  username: process.env.DB_USERNAME,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  host: process.env.DB_HOST,
  dialect: 'mysql'
}

module.exports = {
  development: dbDetails,
  production: dbDetails
}
Here we are reading the database details we set in our .env file. process.env is a global variable injected by Node and it’s used to represent the current state of the system environment.
Let's update our database details with the appropriate data. Open the SQL server and create a database called graphql_auth. I use Laragon as my local server and phpMyAdmin to manage database tables.
Whatever you use, we'll want to update the .env file with the latest information:
NODE_ENV=development
DB_HOST=localhost
DB_USERNAME=<your_db_username_here>
DB_PASSWORD=
DB_NAME=graphql_auth
Let’s configure Sequelize. Create a .sequelizerc file in the project’s root and paste this:
const path = require('path')

module.exports = {
  config: path.resolve('config', 'config.js')
}
Now let’s integrate our config into the models. Go to the index.js in the /models folder and edit the config variable.
const config = require(__dirname + '/../../config/config.js')[env]
Finally, let’s write our model. For this project, we need a User model. Let’s use Sequelize to auto-generate the model. Here’s what we need to run in the terminal to set that up:
sequelize model:generate --name User --attributes username:string,email:string,password:string
Let’s edit the model that creates for us. Go to user.js in the /models folder and paste this:
'use strict';
module.exports = (sequelize, DataTypes) => {
  const User = sequelize.define('User', {
    username: {
      type: DataTypes.STRING,
    },
    email: {
      type: DataTypes.STRING,
    },
    password: {
      type: DataTypes.STRING,
    }
  }, {});
  return User;
};
Here, we created attributes and fields for username, email and password. Let’s run a migration to keep track of changes in our schema:
yarn migrate
Let’s now write the schema and resolvers.
Integrate schema and resolvers with the GraphQL server 
In this section, we’ll define our schema, write resolver functions and expose them on our server.
The schema
In the src folder, create a new folder called /schema and create a file called schema.js. Paste in the following code:
const { gql } = require('apollo-server')

const typeDefs = gql`
  type User {
    id: Int!
    username: String
    email: String!
  }

  type AuthPayload {
    token: String!
    user: User!
  }

  type Query {
    user(id: Int!): User
    allUsers: [User!]!
    me: User
  }

  type Mutation {
    registerUser(username: String, email: String!, password: String!): AuthPayload!
    login (email: String!, password: String!): AuthPayload!
  }
`

module.exports = typeDefs
Here we’ve imported graphql-tag from apollo-server. Apollo Server requires wrapping our schema with gql. 
The resolvers
In the src folder, create a new folder called /resolvers and create a file in it called resolvers.js. Paste in the following code:
const bcrypt = require('bcryptjs')
const jsonwebtoken = require('jsonwebtoken')
const models = require('../models')
require('dotenv').config()

const resolvers = {
  Query: {
    async me(_, args, { user }) {
      if (!user) throw new Error('You are not authenticated')
      return await models.User.findByPk(user.id)
    },
    async user(root, { id }, { user }) {
      try {
        if (!user) throw new Error('You are not authenticated!')
        return models.User.findByPk(id)
      } catch (error) {
        throw new Error(error.message)
      }
    },
    async allUsers(root, args, { user }) {
      try {
        if (!user) throw new Error('You are not authenticated!')
        return models.User.findAll()
      } catch (error) {
        throw new Error(error.message)
      }
    }
  },
  Mutation: {
    async registerUser(root, { username, email, password }) {
      try {
        const user = await models.User.create({
          username,
          email,
          password: await bcrypt.hash(password, 10)
        })
        const token = jsonwebtoken.sign(
          { id: user.id, email: user.email },
          process.env.JWT_SECRET,
          { expiresIn: '1y' }
        )
        return {
          token, id: user.id, username: user.username, email: user.email, message: "Authentication succesfull"
        }
      } catch (error) {
        throw new Error(error.message)
      }
    },
    async login(_, { email, password }) {
      try {
        const user = await models.User.findOne({ where: { email } })
        if (!user) {
          throw new Error('No user with that email')
        }
        const isValid = await bcrypt.compare(password, user.password)
        if (!isValid) {
          throw new Error('Incorrect password')
        }
        // return jwt
        const token = jsonwebtoken.sign(
          { id: user.id, email: user.email },
          process.env.JWT_SECRET,
          { expiresIn: '1d' }
        )
        return {
          token, user
        }
      } catch (error) {
        throw new Error(error.message)
      }
    }
  },
}

module.exports = resolvers
That’s a lot of code, so let’s see what’s happening in there.
First we imported our models, bcrypt and  jsonwebtoken, and then initialized our environmental variables. 
Next are the resolver functions. In the query resolver, we have three functions (me, user and allUsers):
me query fetches the details of the currently loggedIn user. It accepts a user object as the context argument. The context is used to provide access to our database which is used to load the data for a user by the ID provided as an argument in the query.
user query fetches the details of a user based on their ID. It accepts id as the context argument and a user object. 
alluser query returns the details of all the users.
user will be an object if the user is logged in and null if they are not. We create this user object later in the server's context.
In the mutation resolver, we have two functions (registerUser and loginUser):
registerUser accepts the username, email  and password of the user and creates a new row with these fields in our database. It’s important to note that we used the bcryptjs package to hash the users password with bcrypt.hash(password, 10). jsonwebtoken.sign synchronously signs the given payload into a JSON Web Token string (in this case the user id and email). Finally, registerUser returns the JWT string and user profile if successful and returns an error message if something goes wrong.
login accepts email and password , and checks if these details match with the one that was supplied. First, we check if the email value already exists somewhere in the user database.
models.User.findOne({ where: { email }})
if (!user) {
  throw new Error('No user with that email')
}
Then, we use bcrypt’s bcrypt.compare method to check if the password matches. 
const isValid = await bcrypt.compare(password, user.password)
if (!isValid) {
  throw new Error('Incorrect password')
}
Then, just like we did previously in registerUser, we use jsonwebtoken.sign to generate a JWT string. The login mutation returns the token and user object.
Now let’s add the JWT_SECRET to our .env file.
JWT_SECRET=somereallylongsecret
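Any sufficiently long random string will do for the secret. As a side note that isn't part of the original tutorial, one way to generate such a value is with Node's built-in crypto module, then paste the output into .env:

// Prints a 64-byte random hex string suitable for use as JWT_SECRET.
const crypto = require('crypto')
console.log(crypto.randomBytes(64).toString('hex'))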
The server
Finally, the server! Create a server.js file in the src folder (the dev script runs src/server) and paste this:
const { ApolloServer } = require('apollo-server')
const jwt = require('jsonwebtoken')
const typeDefs = require('./schema/schema')
const resolvers = require('./resolvers/resolvers')
require('dotenv').config()

const { JWT_SECRET, PORT } = process.env

const getUser = token => {
  try {
    if (token) {
      return jwt.verify(token, JWT_SECRET)
    }
    return null
  } catch (error) {
    return null
  }
}

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ req }) => {
    const token = req.get('Authorization') || ''
    return { user: getUser(token.replace('Bearer', '')) }
  },
  introspection: true,
  playground: true
})

server.listen({ port: process.env.PORT || 4000 }).then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
Here, we import the schema, resolvers and jwt, and initialize our environment variables. First, we verify the JWT token with verify. jwt.verify accepts the token and the JWT secret as parameters.
Next, we create our server with an ApolloServer instance that accepts typeDefs and resolvers.
We have a server! Let’s start it up by running yarn dev in the terminal.
Testing the API
Let’s now test the GraphQL API with GraphQL Playground. We should be able to register, login and view all users — including a single user — by ID.
We’ll start by opening up the GraphQL Playground app or just open localhost://4000 in the browser to access it.
Mutation for register user
mutation {
  registerUser(username: "Wizzy", email: "[email protected]", password: "wizzyekpot" ){
    token
  }
}
We should get something like this:
{
  "data": {
    "registerUser": {
      "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzAwLCJleHAiOjE2MzA3OTc5MDB9.gmeynGR9Zwng8cIJR75Qrob9bovnRQT242n6vfBt5PY"
    }
  }
}
Mutation for login 
Let’s now log in with the user details we just created:
mutation {
  login(email:"[email protected]" password:"wizzyekpot"){
    token
  }
}
We should get something like this:
{
  "data": {
    "login": {
      "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzcwLCJleHAiOjE1OTkzMjY3NzB9.PDiBKyq58nWxlgTOQYzbtKJ-HkzxemVppLA5nBdm4nc"
    }
  }
}
Awesome!
Query for a single user
For us to query a single user, we need to pass the user token as authorization header. Go to the HTTP Headers tab.
…and paste this:
{
  "Authorization": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzcwLCJleHAiOjE1OTkzMjY3NzB9.PDiBKyq58nWxlgTOQYzbtKJ-HkzxemVppLA5nBdm4nc"
}
Here’s the query:
query myself{
  me {
    id
    email
    username
  }
}
And we should get something like this:
{
  "data": {
    "me": {
      "id": 15,
      "email": "[email protected]",
      "username": "Wizzy"
    }
  }
}
Great! Let’s now get a user by ID:
query singleUser{
  user(id:15){
    id
    email
    username
  }
}
And here’s the query to get all users:
{
  allUsers{
    id
    username
    email
  }
}
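If you prefer exercising the API from a script rather than from GraphQL Playground, a plain HTTP POST works as well. The following is a hedged sketch and not part of the original tutorial: it assumes a runtime with a global fetch (Node 18+ or a browser), assumes the server is reachable at the standalone Apollo Server default endpoint http://localhost:4000/, and uses placeholder credentials that you should replace with the ones you registered. As in the Playground example above, the raw token is sent as the Authorization header value.

// Minimal GraphQL-over-HTTP helper: POST the query, variables and optional token.
async function gql(query, variables = {}, token = '') {
  const res = await fetch('http://localhost:4000/', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      ...(token ? { Authorization: token } : {})
    },
    body: JSON.stringify({ query, variables })
  })
  return res.json()
}

async function main() {
  // Log in with an existing user (replace the placeholders with real credentials).
  const login = await gql(
    `mutation Login($email: String!, $password: String!) {
      login(email: $email, password: $password) { token }
    }`,
    { email: '<email>', password: '<password>' }
  )
  const token = login.data.login.token

  // Call a protected query with the token we just received.
  const me = await gql('query { me { id username email } }', {}, token)
  console.log(me.data.me)
}

main().catch(console.error)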
Summary
Authentication is one of the toughest tasks when it comes to building websites that require it. GraphQL enabled us to build an entire authentication API with just one endpoint. Sequelize ORM makes creating relationships with our SQL database so easy, we barely had to worry about our models. It's also remarkable that we didn't require an HTTP server library (like Express) or need to run Apollo GraphQL as middleware. Apollo Server 2 now enables us to create our own library-independent GraphQL servers!
Check out the source code for this tutorial on GitHub.
The post Let’s Create Our Own Authentication API with Nodejs and GraphQL appeared first on CSS-Tricks.
You can support CSS-Tricks by being an MVP Supporter.
crackinstaller · 6 years ago
Turbo Studio 18.4.1080.5 Full Crack
Turbo Studio 18.4.1080.5 Full Crack is the best graphical editor for application virtualization. The software fully supports both 32-bit and 64-bit applications. Turbo Studio Crack allows you to create and edit virtualized applications, and to build portable applications that simply run on the Windows operating system. Turbo Studio 18 free download supports .NET Framework, Java, and AIR-based applications. After installing the software, you will also discover a wide range of features, presets, and improvements.
With everything in mind, we can say that Spoon Virtual App Studio commits and manages to change the way your application is deployed. It can be used to eliminate the need for any other third party requirements, or simply migrate the entire application and ongoing projects to a new computer. In addition, the application allows you to select the runtime environment that your product depends on and grab the system configuration so that it does not need them on the deployed PC.
Once completed, the installation package turbo software is created in a custom directory with a few clicks. Turbo Studio 18.4.1080.5 Full Crack is an easy-to-use graphics editing software for application virtualization, portable application creation and digital distribution. spoon virtual application studio (formerly Xenocode Virtual Application Studio or Spoon Studio) offers a powerful set of software products and services for creating and editing application virtualization.
Review Turbo Studio
This program allows you to create portable executable applications that run immediately on any Windows system. Deployed as a self-contained package, your application requires no installation and brings no dependencies or conflicts. Turbo Studio supports virtualization of 32- and 64-bit applications, including all versions of the .NET Framework, Java, and AIR-based applications.
Spoon Studio allows you to convert existing applications into portable ones. It emulates the operating system features an application requires: its files, settings, runtimes, system services, databases, and components. It can output a standalone EXE, an MSI package, a browser plug-in, and more.
Turbo Studio 18.4.1080.5 Full Crack is a very powerful application for virtualizing programs. For this purpose it uses a special virtual container, or, even simpler, a single executable file that requires no installation. Inside the created container is the virtualized environment for the application’s files and registry keys.
You can easily create stand-alone portable programs that run on any computer you like. The program has many advantages: with some competing tools, for example, it is often impossible to create a portable program because large files cannot be placed in the sandbox, and that is not an issue here.
In addition, this program compresses containers better, can import ThinApp projects directly, and has built-in AppLinks emulation. I did not find Russian-language support (although a third-party translation exists), but the interface is functional, convenient, and generally easy to understand. I hope the program comes in handy; thank you for your attention.
Turbo Studio 18.4.1080.5 is available as a final-release download. It is good software that lets you run applications without installing them: Turbo Studio 18 can deploy your application as a self-contained package, or through Turbo Server, without any other external tools, and it provides an intuitive user interface that makes it easy for beginners and professionals alike.
Turbo Studio 18.4.1080.5 Full Crack is a powerful multi-purpose program (previously Spoon Virtual App Studio) designed for quick and easy application virtualization. Unlike traditional methods of distributing applications, the portable applications it produces need no separate steps to install external components.
With this application you can group the files needed to run a program into a single package that can be used without deployment. It lets you create a virtual container and run the program in a virtual environment without installing anything or making changes to the registry and the rest of the system.
The virtualized application is completely isolated from other applications on the system.
Turbo Studio 18.4.1080.5 Crack With Latest Full Version is Here
Portable applications created in the studio are isolated from other applications on the system, which prevents DLL conflicts and other deployment nightmares. Portable programs launched through Turbo also include full support for running legacy applications on Windows 10, and Turbo.net allows legacy applications such as Internet Explorer 6 to keep running on Windows 7 and 8, decoupling them from operating system deployments and ensuring business continuity.
You can virtualize 32-bit and 64-bit applications, databases (such as SQL Server), IIS and other services, as well as DCOM and SxS components. Turbo Studio 18.4.1080.5 Full Crack is the new generation of Spoon Studio, with new support for Windows 10, new publishing options, and other features and improvements. It is powerful software for application virtualization and portable applications: it lets you easily package an application as a single executable, consolidating multiple files into one package that runs in its own virtual environment with no deployment step.
Turbo Studio also lets you virtualize an existing application instantly, without provisioning, so end users can run it from the web without hitting DLL or other compatibility issues. The full version can bundle the files an application needs so it can be used without installation.
Because the resulting portable applications do not require separate installation of external components, they survive reboots and stay completely isolated from other software on the system.
If you would rather not spend a lot of time assembling the necessary files by hand, you can run a wizard with several available options. The wizards let you quickly create a virtual package by scanning the software installed on your computer and selecting an application to capture; third-party programs are handled with snapshots, much like the wizard’s manual build process.
You can also work directly with the many functions exposed in the main window. Through the side panel you can navigate key areas such as the filesystem, registry, setup, components, and settings, and the workspace offers different options depending on the selected group.
New in version 18: templates and import wizards are updated on the fly with up-to-date content from a central hub, a new portable application (.exe) output mode generates a self-contained container plus the Turbo client in a single exe, and applications can be generated straight from setup installers (MSI or EXE) into containers. This capture process does not require a clean operating system, nor does it dirty the host’s file system or registry.
Key Features Of Turbo Studio 18.4.1080.5 Full Crack:
Easy-to-use graphical editor for application virtualization.
User-friendly workflow interface.
Supports portable application creation as well as digital distribution.
Easy access to local files and printers.
Provides a powerful set of products and services for creating and editing virtualized applications.
Easy to convert existing applications to portable applications.
Start multiple applications at the same time.
Create portable executable applications that can easily run on any Windows operating system.
Fully supports all Spoon Studio versions.
Supports virtualization of 32- and 64-bit applications.
Output a standalone EXE, an MSI package, or browser plugins.
Automatically synchronize files and settings.
Runs on multiple Windows operating systems.
Access local files and printers
Automatically sync files and settings
Create virtualized applications
Create a portable application (exe)
Create a virtual container application
Deploy as self-contained executables and MSIs
Start multiple applications at the same time
Painless application migration
Run in the sandbox and eliminate conflicts
Supports 32-bit and 64-bit applications
Supports .NET, Java, AIR, and SQL CE
Virtualize your custom application
Run legacy applications, and more.
What’s new:
Newly integrated Turbo Hub
New portable application creation
New Template and Import Wizard
Other bug fixes and improvements.
Minimum Requirements:
Windows XP/Vista/7/8/8.1/10 (32-bit or 64-bit – all versions)
A reasonably fast processor
600 MB free disk space
1024 x 768 screen resolution
How To Install
Disconnect from the Internet (most important)
Unzip and Install the Program (Run Setup)
Do not run, exit the program, if run
Copy crack file from crack to install dir #
Installer’s directory
Always block the program on your firewall!
Or just extract and run the portable version
First, click on the direct download link below.
Also, turn off your internet connection.
In addition, the decompression also installs software settings.
Also, do not run and also quit the program if it is running.
Also, copy the crack file from the crack to the installation directory.
In addition, block the program in the firewall.
In addition, just extracting also runs the portable version.
At last.
The post Turbo Studio 18.4.1080.5 Full Crack appeared first on My Blog.
0 notes
siliconwebx · 6 years ago
Text
A Handy Guide to Reseller Hosting
As a freelance web designer, one of the most common thoughts to run through my mind is: how can I generate more recurring revenue?
One effective and potentially pain-free way to do this is to become a hosting reseller for your clients.
Even if you’ve only seen the term as you were browsing hosting plans for your clients’ websites, you’ve probably noticed it as an option.
What is reseller hosting?
Put simply, this form of web hosting allows you to purchase hosting at a wholesale price from a provider like GoDaddy, Bluehost, etc. Just like a retail store gets its merchandise at a substantially reduced price, by purchasing a reseller hosting plan, you get a great price for the amount of disk space and bandwidth you receive from the hosting provider.
When you become a hosting reseller, you’ll be given tools to partition, sell, and manage the resources you purchase from a hosting provider.
If it sounds a little daunting – you’re not wrong: it can be. But, there are steps you can take to start slowly, get your bearings, and ensure you’re not getting in over your head. If I can do it, with a pretty limited understanding of how it all worked when I started, you can too!
Before we dive in, I want to describe the scope of the hosting service that I offer my clients and why I wanted to keep my hosting offering a little more limited.
Primarily, I wanted my business to remain a web design provider. I don’t advertise myself as a hosting provider at all. Rather, I offer hosting as a way to add value and close deals.
I do my best to limit my hosting option to my smaller clients who, for the most part, won’t require very much assistance after initial set-up. This may reduce the amount of money I bring in, but it helps me achieve balance by keeping in check the time I need to dedicate to that side of my business.
To achieve this more limited scope, I do my best to follow these simple rules:
I offer only two email addresses per account by default. And if my client will need more than 3-4 email addresses, I generally steer them toward a different hosting provider.
I don’t offer tiered plans and don’t limit clients’ disk space or bandwidth.
I only charge annually.
Pros of Offering Hosting to your Clients
There are a lot of really great reasons to sign up as a hosting reseller and offer this to your clients. Here are a few that stick out to me after several years of offering this service:
1. Recurring revenue: Any person or business with a website must purchase a hosting package. Somebody’s going to get paid, so why not take a cut? As of this writing, one year of the most basic hosting (including promos) on GoDaddy costs $57.44, and on Hostgator it’s $53.88. Both of those increase in price after the first year.
2. Convenience (for you): Reseller hosting allows you to have immediate access to your clients’ cPanel accounts without keeping track of individual usernames and passwords. Need to check which version of PHP your client is running? Easy! Need to create an FTP account? Easy! You have a master control panel (Web Host Manager, or WHM) which allows you this ability.
3. Convenience (for clients): This one may be a little more controversial, but for the clients I target for my hosting solution, being able to email me with an issue they’re having is much more convenient than calling a support technician.
4. Repeat business: Just like any business – keeping in contact with previous clients is a great way to ensure you’re the one they call when they’re ready to grow their business.
For instance, say a small mom & pop shop that you built a website for two years ago is ready to create a new site with an eCommerce store. Would they be more likely to reach out to you if they’ve had semi-regular contact for the last two years as their website host, or if you were a distant memory?
5. You’ll learn (a lot): This may not be true for everybody, but whether it’s fixing my car or fixing DNS settings – I learn best by being confronted with a problem and figuring it out. In the beginning, you’ll probably be on the phone with support for many issues you and your clients encounter. But as time goes on – and probably more quickly than you expect – you’ll learn to diagnose and fix many hosting issues yourself.
Cons of Offering Hosting to Your Clients
Don’t get me wrong. Just like any solution you offer to your clients, there are negatives.
1. You’ll be the first-level of tech support: Try as you might to strategically target low-maintenance clients, you will get emails and phone calls for support requests ranging from changing passwords to websites going down. The good news is, at least in my experience, the vast majority of issues you’ll experience are easy to resolve. No matter how experienced you are, however, there will also be times you’ll have to call the hosting provider to help resolve an issue.
2. You’ll be on call: If having the ability to turn off your phone, close your laptop, and leave the world behind for days at a time is a priority for you – being a hosting reseller will be tricky. While it’s not common, you do get emergency requests for help that come at all times of the day or night. I’ve learned to prioritize which issues are truly emergencies and which can wait until the morning.
I’ve had only one stretch in three years of offering reseller hosting in which I had to be completely out of reach for more than a day (it was 4 days, in my case). To head off issues, I let my clients know I’d be unreachable and arranged for a person I trusted to take care of emergencies in my absence. Not only did no emergencies arise, but no issues came up at all.
3. You’re at the mercy of your host: Choosing the right hosting provider is vital. If you choose poorly, it can make you look bad and may result in you having more support requests from your clients. What’s worse, if you decide to make a change, moving to a new hosting company as a reseller is a painful process and can cost you money.
Choosing the Right Hosting Provider
There are plenty of factors to take stock of when you’re browsing around for the right reseller hosting provider. You’ll find that many of the price points and features offered are very similar. For instance, most include vital tools like WHM, one-click installs, billing solutions, as well as up-time guarantees.
But there are also some pretty drastic variations between offerings, and I wanted to explore a few things that tend to be different:
1. Tech support: Choosing a hosting provider with great tech support will save you a ton of headaches down the road. You’ll want to be sure the company you choose offers 24/7 phone support, has knowledgeable and pleasant support staff, and won’t leave you on hold for 20+ minutes every time you call.
You should search for reviews of the hosting provider’s tech support, and even go so far as to call them several times at different times of the day (and night) to gauge how long you’ll have to wait on hold when you do have to ask for help.
2. Server response time: As search engines place a greater emphasis on site-speed for their rankings, having a web host that prioritizes short server response times can be much more important than you realize. For a fantastic breakdown of web host server speeds, check out this blog post.
3. Free SSL certificates: Beginning soon, Google will be making changes that will be pretty detrimental to websites that are not encrypted. While many hosting providers are licking their chops at all the money they’ll be making, others offer reseller packages that include free website encryption.
Being able to offer SSL certificates to your clients at no cost could be the value add that closes a deal. Or, charging them could be a way to make a little extra money with no up-front cost.
4. Free cPanel migration: Even if you have no clients lined up to host before you launch your reseller hosting offering, you probably have your own website and email account that needs to be migrated over. You shouldn’t have to do this yourself!
Some hosting providers offer limited, or even unlimited, migrations from a previous hosting provider to them. Be on the lookout for how many migrations a host will provide to a new reseller. If that number is limited or even zero, ask how much they charge per migration.
Pulling the Trigger
After you’ve done your research and signed up as a hosting reseller, you’ll want to take action on a few items before offering your new reseller hosting service to your clients.
1. Migrate Your Own Website & Email: A great first step into the world of reseller hosting is to use yourself as a guinea pig. Most reseller hosting accounts include at least a few account transfers at no additional cost. The processes for requesting a transfer vary depending on the host, but in most cases, you will find a form in the hosting console after logging into your reseller account.
In my case, I provide FTP information and cPanel logins, and my hosting provider does the rest – usually in less than an hour or two.
After you receive confirmation that the account has been migrated, you can log into WHM (more info on that below) to see the new account, change bandwidth and disk space quotas, and access the account’s cPanel.
The final step will be to log in to your Domain Registrar and change the Name Servers to your new hosting provider, to ensure the domain is pointing to the newly migrated website. Changing nameservers can take up to 24 hours to take effect, but it’s usually less than that. Here is a nice tool to check whether the nameservers have updated and the URL is pointing to the migrated account (I’ve also included a quick command-line sketch at the end of this step).
Another important note: things get a little more dicey if the account you’re migrating in is not a cPanel-based account. In that case, you may have to migrate WordPress manually and export/import your existing emails to the new hosting account yourself. This is a pretty rare occurrence, however. Only twice now, in the years I’ve been offering this, has this issue come up – and both times I was able to pay a fee to my web host to do the migration for me.
Doing a migration on your own website and email will give you a good primer on migrating future clients’ accounts.
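One more aside on the nameserver check mentioned a couple of paragraphs up: if you’re comfortable with the command line, a tiny Node.js script can do the same job as the web tool. This is just a sketch; the domain and the expected nameservers below are placeholders you’d swap for your client’s domain and your host’s values.
// check-ns.js: rough propagation check; domain and nameservers below are placeholders.
const dns = require('dns').promises;

const domain = 'example.com';
const expected = ['ns1.yourhostingprovider.com', 'ns2.yourhostingprovider.com'];

dns.resolveNs(domain)
  .then((records) => {
    console.log(`Current nameservers for ${domain}:`, records);
    const updated = expected.every((ns) =>
      records.some((record) => record.toLowerCase() === ns.toLowerCase())
    );
    console.log(updated ? 'Nameservers have propagated.' : 'Still waiting on propagation.');
  })
  .catch((err) => console.error('Lookup failed:', err.message));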
2. Decide on Pricing: This part doesn’t have to be as tricky as it sounds. For my web design business, offering hosting has always been about adding value for my clients and creating convenience for myself.
To achieve this, I offer hosting and 2 email addresses free for one year, and then charge $99/year after that. This accomplishes the two goals I stated above, and because there’s no up-front cost, the majority of my web design clients agree to host their websites with me.
3. Learn WHM (Web Host Manager): The vast majority of reseller hosting packages offer WHM as the way you’ll manage your clients’ hosting accounts. It may look intimidating when you first log in, but it’s not nearly as difficult as it may appear at first glance.
Check back tomorrow for a detailed dive into setting up your reseller account with WHM!
4. Selling your first hosting account: My best advice would be to start small. Wait for a client who you think will be low-maintenance, who won’t need more than one or two email accounts, and who you have a good relationship with. You may even consider offering them free hosting for an allotment of time, in exchange for them being your guinea pig.
As you get more comfortable with reseller hosting, offering your service to bigger clients can be a great way to generate even more revenue.
In Closing
Depending on your price point, it may take a few months to begin making money in your new service. But before long, you’ll have a reliable source of recurring revenue, you’ll learn more than you ever thought possible about how hosting works, and you’ll have a new way to remain top-of-mind with previous clients.
Happy Hosting!
Featured image via Macrovector / shutterstock.com
This post is a community submission. If you’d like to become a contributor the Elegant Themes blog too, see our blogging guidelines and follow the submission process instructions.
The post A Handy Guide to Reseller Hosting appeared first on Elegant Themes Blog.
0 notes