#302 is also the HTTP code for 'found'
Explore tagged Tumblr posts
chisie12 · 5 years ago
Text
Kisame Week Day 1: Swordsmen
Another mini series just cause it’s fun. A modern AU! @kisamesharkweek
Also I’m late for day 1 but doing everything on the phone is a nightmare. Fun, but a nightmare. 
In case it gets confusing, Misty Forest is Yamato here. Kakashi and the rest will eventually make their appearances too :D Akira/Clear Shadow is an OC.
Formatting on the phone also sucks, as usual.
AO3: https://archiveofourown.org/works/20036536/chapters/47446585
~☆~
{Set in the land of Eden, players will control their customizable characters to traverse the vast horizons of magic and wonders, while darkness looms in the distance. Gods, created by the faith of mortals and given consciousness birthed by centuries of belief, began to make their moves. Embroiled in their ploys, players are caught in the eye of the storm as Eden finds its peace.}
Eden had been live for approximately three months, topping the charts as the best-selling game and earning itself the title "Game of the Year 20XX" after an entire year of hyped anticipation. It was the newest creation of Mad Games, the same company that released the popular RPG "Age of Glory". Unlike "Age of Glory", which was playable across multiple platforms, Eden could only be played using the VR glasses jointly created by Mad Games and Technivia, the company behind the popular Game Station consoles. The public's initial unrest and opposition to the single-platform requirement soon turned into excitement. The VR glasses were easy to store and deceptively light, unlike their heavy, bulky helmet predecessor, and with their price oddly, relatively affordable, almost everyone who wanted to play the game owned a pair…
Clear Shadow clung onto a piece of driftwood, but even that crumbled in the face of nature's wrath. Raging waves slammed and pushed her below as the torrential river rushed her forward like a crowd excitedly passing the – unwanted – crowd surfer over their heads. She didn't struggle against the force, letting it push and pull her along as she maintained her calm in the face of danger. Bubbles of air escaped her nose, mouth clamped shut while she forced herself to keep her eyes open. Sharp rocks in the river slashed at her body and a string of damage values appeared above her head. Her head throbbed, her lungs constricted. Determination coursed through her veins. Noticing a short clearway in the river, she flipped over with a forceful kick of her legs to float on her back and greedily gulped in a large mouthful of oxygen.

"There she is! Get her!"
Out of the corner of her eye, she spotted a party of four men rushing along the riverbank, weapons drawn and names coloured red. With a twist of her shoulders, her body was submerged face down into the water once again.

"Hurry! Quick!" The Thief taking point cried out to his party members as he activated Sprint. His steps increased in speed, dashing up and over the boulders along the riverbank and reaching Clear Shadow's position within seconds.

Tall trees lined the river, their protruding roots large and strong along the banks, curling around the moss-covered rocks. The lone mage in the party suddenly halted his run, cloak billowing in the wind brought forth by the remaining two party members sprinting ahead. The brunet raised his wand, lips parting in a murmured chant. Surges of magic gathered around his wand and, with the last syllable, he pointed it forward as an earthy coloured energy shone within the trees of the riparian zone. The thick roots shuddered, lifting off the rocks like awakened snakes bending their wills to the lull of a snake charmer, and lashed out into the river where Clear Shadow's silhouette hid. Demented Earth, a level 25 channelling spell that changes according to the terrain.

Danger prickled her senses. In that split second, she decisively reached out her hand, the sharp jagged edges of a rock digging into her taut fingers, and pulled. Pulled as hard as she could in that one motion. Because in her next breath, a sharp pain jolted from her leg as she barely avoided the cone-shaped tendrils that speared the very spot she had been a moment before. -258! The small damage value floated above her head as her health pool finally dipped below half.

"You missed, Misty Forest! Fuck, are you even using your eyes!?" One of the remaining two Blade Masters swore at the Earth Elementalist behind them, only to be graced by a serene smile.

"Ah! I'm so sorry! I'm still fairly new at the game."

Clicking his tongue, the Blade Master dashed away after giving Misty Forest another dirty look. With the belittling eyes away from him, the Elementalist lost the calm in his upturned lips, soft brown eyes turning sharp, his steps striding forward in a rhythmic, unhurried manner. With the rapid waves, many of the party's physical attacks couldn't even reach their target, the force easily sweeping away the shurikens and rendering the Blade Masters' attacks inaccurate.

"She can't stay underwater forever!"

And that was the truth. When a character submerged below the water, an oxygen bar would appear below the mana pool and start to tick away, and hers was – just like her health – already below half. The Thief readied his shuriken, closely watching the shadowy silhouette while flanked by his pair of Blade Masters, swords at the ready.

There finally was a slight shift in her calm when she glanced at the minimap, relief and happiness relaxing her mind. She wasn't too far now. At the end of the river was an estuary and, with the currents, she would arrive at the sea in roughly half a minute. She held on strongly, body tilting and turning to minimise the damage from the river, but even she couldn't deny that much more of this and she'd be forcibly killed by the system. With a heave of her arms, her head broke above the surface, desperately gasping for oxygen. Her vision, blurred and dark, a sign that her character was about to drown, immediately cleared. Bright blue water and thinning trees flooded her sight, and she realised that the currents were slowing. A sharp whistling tore through the air.
She turned her head, noticing the lone Thief fixatedly watching his shurikens fly her way. She hurriedly gulped a lungful of air and ducked back into the river. -16! She ignored the shuriken slicing her cheek and quickly swam towards the estuary. Her arms stretched forward to propel her onward as her tongue peeked out slightly, tasting the salt mixing with the fresh water.

The Blade Masters sneered at the quickly swimming figure, swords poised at the ready as they waited patiently on either side of the estuary; the right one, Turnip Killer, was level 30 and the left, Stone King, was level 32. Turnip Killer leapt across the river as his sword lit up with a blinding white radiance. The sword drew a half circle of radiance in mid-air as he focused on the silhouette. The blade plunged into the water with a splash, salty droplets pelting his face as his Upwind Slash met resistance when it connected with Clear Shadow's body. With a huff, the Blade Master swung upwards, forcing her out of the water and knocking her up into the air. Although its effect was slightly reduced by the water's drag, the simple level 20 warrior skill's knock-up effect activated successfully as Clear Shadow's body was bent backwards in mid-air from the attack.

Stone King bent his knees and jumped, coming level with Clear Shadow as he pulled his arms back, both hands gripping his sword tightly. Equally covered in a white radiance, he cried out and activated Whirlwind Slash. Seeing the falling sword, Clear Shadow hurriedly lifted her own sword and activated Block just as the attack connected, negating the damage, but it wasn't over. Following his momentum, the Blade Master spun a full turn in a slightly vertical arc and once more brought his sword down heavily. With Block on cooldown, Clear Shadow tilted her body back, catching a brief glimpse of a green energy near the riverbank from the corner of her eye before she refocused on the enemy and activated Upwind Slash. Her upward strike met his second attack, the rebounding force powerful enough to send her splashing into the river again. -547! -25!

Clear Shadow quickly downed a small health potion, recovering her low health to just over half. Despite being higher levelled than the attacking party at level 35, her health and dexterity had taken a hit in stat points as she mainly focused on strength and intellect, with a minor focus on speed. The knockback had sent her deep enough that her feet touched the riverbed. She kicked off the ground, a cloud of soil browning the clear water, and shot towards the ocean, swimming with all her might until her muscles screamed and her shoulders ached. Her legs started to tire and refuse to kick, and she unwillingly resurfaced for oxygen.

A booming roar shook the skies, followed by a massive crash into the ocean. The health she had recovered was immediately stripped away by the attack. Large, towering waves surged across the water, crashing into Clear Shadow and forcing her back underwater just as Misty Forest finished chanting. Standing deep in the water, he was flanked by large rocks that protected him from the currents while also preventing him from being washed away. Steady on his feet, he waved his staff forward after uttering the last syllable. Clear Shadow, still submerged, watched as the algae growing near the estuary rapidly expanded, and she hurriedly swam out further when a large figure, bloodied and battered, appeared before her eyes.
Its width easily spanned over fifteen meters, its height possibly over fifty, and its half-submerged body, from what she could deduce, was wide at the top and narrowed near the feet. Long tentacles made up its feet, waving and keeping it afloat, while more sharp tentacle tendrils thrashed about in an enraged manner. Her eyes flew open at the sound of something rushing towards her from behind and she twisted her body to the side, thanking the heavens that sound travelled a lot faster underwater than it did in the air. The overgrown algae speared through the bubbles, missing her entirely, and she watched as it continued its attack on a path towards the monster. Blood was drawn as the attack landed a critical hit, gaining a damage boost from the elemental advantage. The monster roared, shaking the seas and earth with its fury, and Clear Shadow was thrown out with a wave of its tentacles, painfully landing on the spit that stretched out near the estuary. -2376!

Inwardly swearing at her low health, she scooted a distance away – for fear of stray attacks and splash damage, for fear of dying while her health potion was still on cooldown – and silently observed the situation unfold. The level 45 boss monster Sea Monk had but a sliver of health remaining. Its eyes were a glaring red as it spun a full circle, tentacle arms sweeping out in an area-of-effect attack. With the increased strength and speed from its berserk state, its attack gained a wider range as powerful waves crashed onto the pair of Blade Masters and its initial attacker.

"The fuck are you guys doing!" A husky voice shouted after he was slammed onto the ground. The snarl was almost animalistic, feral like a beast in the wild.

'And he certainly looks the part.' Clear Shadow was slightly taken aback at the sight. Blue, a deep blue like the dark ocean depths, was the skin that peeked out from his armour. Fingers reached towards the fallen broadsword and gripped it tight as he got to his feet upon the same spit she was on. Her eyes slowly widened as they followed his straightening figure, back straight and shoulders square, fury rolling off him in angry waves. The blue-skinned Blade Master had been thrown too far away, landing just outside the boss's aggro range. He waved his sword around, stretching out his arms, thick and defined muscles rippling with his movements. Generally, a player's appearance was modelled after their real-life self with a beautified touch, but players still retained the option to customise their characters. Yet, although curious at his choice of colour, she chose to remain silent as she inspected his player details. Level 38 Blade Master, Tailless Beast.

Tailless Beast looked at the Sea Monk getting further away from him and snarled at Clear Shadow, mouth full of pointy teeth bared like a predator's. She looked back inquisitively, an eyebrow raised in slight defiance and slight surprise. Small black eyes narrowed, but wordlessly, he turned back to the berserk boss and activated Sprint. At the end of the spit, he jumped, his sword gleaming white, and activated Blade Rush. 'Huh… He's using an offensive skill as a movement skill,' Clear Shadow noted in surprise. Steady battle cries to the side caught her attention when the Sea Monk shifted its aggro, she realised, towards the brunet Elementalist. Turnip Killer and Stone King waited patiently at the estuary as Misty Forest chanted another spell with the Thief positioned at the ready behind him.
They knew a battle in the water would mean certain death. Not only would their skills be weakened, but their movements would be slowed and restricted as well, not to mention they had zero experience fighting in such a scenario. The Sea Monk charged forward instinctively, roaring and lashing its tentacles at everything around it. Planting their feet firmly on the ground, they endured the damage from the waves as the Sea Monk drew closer. Fifteen meters. The Sea Monk screeched, activating a sound wave attack that hit the party of four, the tunnel-like sound waves sending water swirling everywhere. Ten meters. The pair of Blade Masters rushed forward to intercept.

The Sea Monk raised its tentacle arms, each one as thick as a barrel, high above its bleeding head. It snarled at the Blade Masters, round mouth full of many rows of sharp pointy teeth. With a screech, it brought its arms down while its prey hurriedly activated Block, but the skill's negating effect was cancelled when facing a berserk boss, reduced instead to an effect that cut the damage by 50%. A tremor shook the earth and skies as its arms slammed upon the swords, causing the ground beneath their feet to cave in. -3296!

Seeing their health immediately plummet, they screamed at Misty Forest. "Can't you help - !"

A gust zipped past the Blade Masters' cheeks, the object so fast that all they saw was a blurry shadow. Their eyes followed the attack's trajectory and watched as thick roots speared through the Sea Monk's open mouth and exited through the skull. -5476! A critical hit! Misty Forest lowered his wand, now standing back on land. A bright light enveloped him while a notification chimed, indicating that he had levelled up.

Just as the party of four thought they were out of harm's way as they collected the dropped loot, an enraged roar bellowed. Charging from across the water, Tailless Beast landed at the estuary with a glare. "The boss was mine!"

"It attacked us!" Stone King retorted and picked up the dropped weapon. "We acted in self-defence."

Anger rocked in the pit of Tailless Beast's stomach, swirling and crashing like waves in a storm, and his snarl curved into a feral grin when they stepped back in fear. He took a step forward, the pressure bearing down on them. The Sea Monk wouldn't have attacked them if its aggro hadn't been pulled away, if he hadn't been sent flying. He had been here first, training solo and far away from the public, and it had been peaceful enough until this ragtag bunch appeared. And besides, he was never one for words.

"You probably would have died to the boss anyway," Stone King continued. "You wouldn't have lasted another – !" He shut his eyes against the sudden gust of wind. A blinding flash of white, so familiar, appeared, and momentarily he wanted to ignore it, only for his eyes to fly open at the pain sprouting from his gut. Tailless Beast had followed up his Upwind Slash with Blade Rush, his sword slicing through Stone King's side as he travelled a distance forward. Turning at the waist, he swept his arm out and executed a basic slash towards the falling neck, sending a fountain of blood spurting in mid-air and shaving away the last bits of Stone King's health, not giving him even the slightest chance to recover. He glanced from the corner of his eye and firmly planted his feet on the ground. His body tilted to the side, the Piercing Thrust missing him by an inch, and he returned a tooth for a tooth.
Calmly, his arm straightened and he sent his sword thrusting straight out towards Turnip Killer. With no way to dodge or block, Turnip Killer could only receive Tailless Beast's attack head-on. Even though they were the same attacks, being eight levels higher did have its advantages after all. Blood was drawn as Tailless Beast's sword pierced the other Blade Master's shoulder. Both parties instantly distanced themselves as their attacks ended.

"Why are you attacking us?" Turnip Killer panted and quickly downed a health potion.

"You stole my boss first."

"We're sorry!" The Blade Master hurriedly jumped back when Tailless Beast swung his sword. "We'll give you the loot!" The sword swung again and he ducked in panic. "And some compensation!"

As they argued, with more sword swinging than words on Tailless Beast's part, there was a surge of magic and the chilly temperature rose. Sweat started to bead across their foreheads and their armour started to feel warm; only Misty Forest fared better in his cotton robes. In the next instant, a fireball was cast, shooting towards the Thief sneaking up on Tailless Beast. Staggering in his steps, the Thief was forced out of stealth. Shock and disbelief coloured his face. "How did you – !"

The moment the Thief spoke, frost had covered the cracked ground in a linear path, rapidly snaking towards him. His words caught in his throat, his movements slowing to a stop as icy blue frost crawled up his legs to fully encase his body. A shadow dashed past Tailless Beast with Blade Rush and she activated Whirlwind Slash when the Thief entered its range. The frozen shell cracked, shattering into pieces like a broken mirror at the first slash, which sliced into his body mercilessly before giving way to the second slash across his throat. The damage from Frost Spread had been negligible, unlike the fireball he had eaten head-on, but the Whirlwind Slash was enough to fully deplete the health of a level 28 Thief.

Seeing half his party killed, Turnip Killer activated his Return Scroll in a fluster. Frightened, he watched Clear Shadow turn on her heels to face him with the biggest smirk and a victorious glint in those navy eyes. His breathing grew ragged, his mind in a flurry, and he prayed, so hard, for the scroll's three-second channelling to hurry up. His heartbeat thumped with the seconds. One. Hurry up!! Two. A bit more! Relief; he could almost taste the sweetness of escape. If he died now, he would drop a level and a piece of equipment for having died with a red name, an effect from attacking other players first, and he had spent too much time building this character. Escaping now would save him from the dull grinding; escaping now would mean saving his equipment. The channelling bar was almost full. Just a little… bit… more!

At this point, Turnip Killer turned to smirk at Clear Shadow, puzzled that she wasn't making any last-ditch attempt at attacking him – it would have interrupted his return scroll – but he wasn't going to complain. 97% completion. A broadsword filled his vision, his eyes widening in fear at the growing sight. 99% completion. No!! Tailless Beast ruthlessly stabbed his sword through the torso, blade poking out from the back as the light from the return scroll dimmed, just like the lifeless eyes of the corpse. Blade Rush had just come off cooldown when he activated it again, closing the distance between them in the blink of an eye and killing the escaping Blade Master.
He pulled out his sword and the body fell with a dull thump. "Idiot. Should have used a health potion instead," he scowled while walking away, having seen the figure of Misty Forest disappear into the light, successfully returning to the city.

Clear Shadow walked up to him, stopping a couple of meters away. "Thanks for saving me."

He leered down at her, his nearly two-meter-tall stature towering over her barely one and a half meters of height. "I wasn't saving you. Just returning the favour for stealing the boss." He hefted the large broadsword onto his shoulder and began walking away when a notification popped up. [Clear Shadow sent you a friend request. Accept? Reject?] With the snarl still present, he immediately rejected it and sprinted away.

Clear Shadow watched with a smile, unaffected by his rejection. Once his large figure had disappeared into the water, she looked at the time and decided to exit the game.
~*~*~
Akira opened her eyes, revealing a pair of soft brown orbs instead of the navy hidden beneath the lids. She removed the helmet from her head and the device automatically folded itself back to its original state with a mechanical whirr, the ends of the helmet folding inwards, turning back into the glasses' arms. She placed the sleek, black-framed glasses onto the bedside table and stood up from the bed. Their lenses were wide, covering the entire area around her eyes and curving around her temples. She then stepped into her kitchen, flicked the lights on, picked her orange cup from the dishrack and pulled open a drawer for her favourite hot chocolate.

Stirring the hot chocolate gently with a spoon, Akira walked towards the living room and drew the curtains open, their crimson colour a clear contrast to her white walls. Small white crystals dotted the moonless sky. The lights in the room across from hers, in the apartment building fifteen meters away, lit up, catching her attention. A tall, muscular figure appeared by the window, an arm lifted to drink from his own cup at the same coincidental moment. Soft laughter rumbled under her breath as she sipped her own hot chocolate. She double-tapped her sleeping phone and the screen lit up. 3:02 am. Unlocking the phone, she sent off a quick text as a report: [Target found. Contact established.]

Her mission had just begun.
12 notes · View notes
archerptdy981 · 4 years ago
Text
HTTP response status codes indicate whether a specific HTTP request has been successfully completed
HTTP response status codes indicate whether a specific HTTP request has been successfully completed. Responses are grouped in five classes:
Informational responses (100–199)
Successful responses (200–299)
Redirection messages (300–399)
Client error responses (400–499)
Server error responses (500–599)
If you receive a response that is not in this list, it is a non-standard response, possibly custom to the server's software.
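As a quick illustration of these five classes, here is a minimal Python sketch (standard library only) that buckets any code by its first digit; the sample codes at the bottom are just examples:

```python
# Minimal sketch: derive the response class from the first digit
# of the status code, mirroring the five groups listed above.
from http import HTTPStatus

CLASSES = {
    1: "Informational",
    2: "Successful",
    3: "Redirection",
    4: "Client error",
    5: "Server error",
}

def describe(code: int) -> str:
    cls = CLASSES.get(code // 100, "Non-standard")
    try:
        name = HTTPStatus(code).phrase  # e.g. 302 -> "Found"
    except ValueError:
        name = "Unknown"                # custom / non-standard code
    return f"{code} {name} ({cls})"

print(describe(302))  # 302 Found (Redirection)
print(describe(404))  # 404 Not Found (Client error)
```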
Informational responses

100 Continue: This interim response indicates that everything so far is OK and that the client should continue the request, or ignore the response if the request is already finished.
101 Switching Protocols: This code is sent in response to an Upgrade request header from the client and indicates the protocol the server is switching to.
102 Processing (WebDAV): This code indicates that the server has received and is processing the request, but no response is available yet.
103 Early Hints: This status code is primarily intended to be used with the Link header, letting the user agent start preloading resources while the server prepares a response.

Successful responses

200 OK: The request has succeeded. The meaning of the success depends on the HTTP method:
GET: The resource has been fetched and is transmitted in the message body.
PUT or POST: The resource describing the result of the action is transmitted in the message body.
201 Created: The request has succeeded and a new resource has been created as a result. This is typically the response sent after POST requests, or some PUT requests.
202 Accepted: The request has been received but not yet acted upon. It is noncommittal, since there is no way in HTTP to later send an asynchronous response indicating the outcome of the request. It is intended for cases where another process or server handles the request, or for batch processing.
203 Non-Authoritative Information: This response code means the returned meta-information is not exactly the same as is available from the origin server, but is collected from a local or a third-party copy. This is mostly used for mirrors or backups of another resource. Except for that specific case, the "200 OK" response is preferred to this status.
204 No Content: There is no content to send for this request, but the headers may be useful.
205 Reset Content: Tells the user-agent to reset the document which sent this request.
206 Partial Content: This response code is used when the Range header is sent by the client to request only part of a resource.
207 Multi-Status (WebDAV): Conveys information about multiple resources, for situations where multiple status codes might be appropriate.
208 Already Reported (WebDAV): Used inside a response element to avoid repeatedly enumerating the internal members of multiple bindings to the same collection.
226 IM Used (HTTP Delta encoding): The server has fulfilled a GET request for the resource, and the response is a representation of the result of one or more instance-manipulations applied to the current instance.
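To make the 2xx distinctions concrete, here is a hedged sketch using the third-party requests library against a hypothetical API endpoint (the URL and JSON body are invented for illustration):

```python
# Sketch of handling 201 Created vs 202 Accepted after a POST,
# assuming the `requests` library and an invented endpoint.
import requests

resp = requests.post("https://api.example.com/articles",
                     json={"title": "Hello"})

if resp.status_code == 201:
    # A 201 conventionally carries the new resource's URL in Location.
    print("Created at:", resp.headers.get("Location"))
elif resp.status_code == 202:
    # Accepted: the server will process the request asynchronously.
    print("Queued for processing")
```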
Redirection messages
300 Multiple Choice: The request has more than one possible response. The user-agent or user should choose one of them. (There is no standardized way of choosing one of the responses, but HTML links to the possibilities are recommended so the user can pick.)
301 Moved Permanently: The URL of the requested resource has been changed permanently. The new URL is given in the response.
302 Found: This response code means that the URI of the requested resource has been changed temporarily. Further changes in the URI might be made in the future. Therefore, this same URI should be used by the client in future requests.
303 See Other: The server sent this response to direct the client to get the requested resource at another URI with a GET request.
304 Not Modified: This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response.
305 Use Proxy: Defined in a previous version of the HTTP specification to indicate that a requested response must be retrieved by a proxy. It has been deprecated due to security concerns regarding in-band configuration of a proxy.
306 unused: This response code is no longer used; it is just reserved. It was used in a previous version of the HTTP/1.1 specification.
307 Temporary Redirect: The server sends this response to direct the client to get the requested resource at another URI with the same method that was used in the prior request. This has the same semantics as the 302 Found HTTP response code, with the exception that the user agent must not change the HTTP method used: if a POST was used in the first request, a POST must be used in the second request.
308 Permanent Redirect: This means that the resource is now permanently located at another URI, specified by the Location: HTTP Response header. This has the same semantics as the 301 Moved Permanently HTTP response code, with the exception that the user agent must not change the HTTP method used: if a POST was used in the first request, a POST must be used in the second request.
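Fitting for this tag, a short sketch of working with 302 directly: the requests library can disable automatic redirect-following so you can read the Location header yourself (the URL is a placeholder):

```python
# Sketch: inspect a redirect instead of following it, assuming the
# `requests` library and a placeholder URL.
import requests

resp = requests.get("http://example.com/old-page", allow_redirects=False)

if resp.status_code in (301, 302, 303, 307, 308):
    print("Redirected to:", resp.headers["Location"])
    # 301/308 are permanent (update any stored URL);
    # 302/303/307 are temporary (keep using the original URI).
```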
Client error responses
400 Bad Request: The server could not understand the request due to invalid syntax.
401 Unauthorized: Although the HTTP standard specifies "unauthorized", semantically this response means "unauthenticated". That is, the client must authenticate itself to get the requested response.
402 Payment Required: This response code is reserved for future use. The initial aim for creating this code was using it for digital payment systems, however this status code is used very rarely and no standard convention exists.
403 Forbidden: The client does not have access rights to the content; that is, it is unauthorized, so the server is refusing to give the requested resource. Unlike 401, the client's identity is known to the server.
404 Not Found: The server can not find the requested resource. In the browser, this means the URL is not recognized. In an API, this can also mean that the endpoint is valid but the resource itself does not exist. Servers may also send this response instead of 403 to hide the existence of a resource from an unauthorized client. This response code is probably the most famous one due to its frequent occurrence on the web.
405 Method Not Allowed: The request method is known by the server but has been disabled and cannot be used. For example, an API may forbid DELETE-ing a resource. The two mandatory methods, GET and HEAD, must never be disabled and should not return this error code.
406 Not Acceptable: This response is sent when the web server, after performing server-driven content negotiation, does not find any content that conforms to the criteria given by the user agent.
407 Proxy Authentication Required: This is similar to 401 but authentication needs to be done by a proxy.
408 Request Timeout: This response is sent on an idle connection by some servers, even without any previous request by the client. It means that the server would like to shut down this unused connection. This response is used much more since some browsers, like Chrome, Firefox 27+, or IE9, use HTTP pre-connection mechanisms to speed up surfing. Also note that some servers merely shut down the connection without sending this message.
409 Conflict: This response is sent when a request conflicts with the current state of the server.
410 Gone: This response is sent when the requested content has been permanently deleted from the server, with no forwarding address. Clients are expected to remove their caches and links to the resource. The HTTP specification intends this status code to be used for "limited-time, promotional services". APIs should not feel compelled to indicate resources that have been deleted with this status code.
411 Length Required: Server rejected the request because the Content-Length header field is not defined and the server requires it.
412 Precondition Failed: The client has indicated preconditions in its headers which the server does not meet.
413 Payload Too Large: Request entity is larger than limits defined by the server; the server might close the connection or return a Retry-After header field.
415 Unsupported Media Type: The media format of the requested data is not supported by the server, so the server is rejecting the request.
416 Range Not Satisfiable: The range specified by the Range header field in the request cannot be fulfilled; it is possible that the range is outside the size of the target URI's data.
417 Expectation Failed: This response code means the expectation indicated by the Expect request header field cannot be met by the server.
421 Misdirected Request: The request was directed at a server that is not able to produce a response. This can be sent by a server that is not configured to produce responses for the combination of scheme and authority that are included in the request URI.
422 Unprocessable Entity (WebDAV): The request was well-formed but was unable to be followed due to semantic errors.
423 Locked (WebDAV): The resource that is being accessed is locked.
425 Too Early: Indicates that the server is unwilling to risk processing a request that might be replayed.
426 Upgrade Required: The server refuses to perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol. The server sends an Upgrade header in a 426 response to indicate the required protocol(s).
428 Precondition Required: The origin server requires the request to be conditional. This response is intended to prevent the 'lost update' problem, where a client GETs a resource's state, modifies it, and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict.
429 Too Many Requests: The user has sent too many requests in a given amount of time ("rate limiting").
431 Request Header Fields Too Large: The server is unwilling to process the request because its header fields are too large.
451 Unavailable For Legal Reasons: The user-agent requested a resource that cannot legally be provided, such as a web page censored by a government.
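A sketch of handling the two client errors you meet most often, 404 and 429; it assumes the requests library, an invented endpoint, and that Retry-After carries a seconds value (it may also be a date):

```python
# Sketch: reacting to 404 vs 429, assuming `requests` and a
# hypothetical API endpoint.
import time
import requests

resp = requests.get("https://api.example.com/items/42")

if resp.status_code == 404:
    print("No such item")              # resource (or endpoint) missing
elif resp.status_code == 429:
    wait = int(resp.headers.get("Retry-After", "1"))
    time.sleep(wait)                   # honour the rate limit, then retry
    resp = requests.get("https://api.example.com/items/42")
```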
Server error responses
500 Internal Server Error: The server has encountered a situation it does not know how to handle.
501 Not Implemented: The request method is not supported by the server and cannot be handled. The only methods that servers are required to support (and therefore that must not return this code) are GET and HEAD.
502 Bad Gateway: This error response means that the server, while working as a gateway to get a response needed to handle the request, got an invalid response.
503 Service Unavailable: The server is not ready to handle the request. Common causes are a server that is down for maintenance or that is overloaded. Note that together with this response, a user-friendly page explaining the problem should be sent. This response should be used for temporary conditions and the Retry-After: HTTP header should, if possible, contain the estimated time before the recovery of the service. The webmaster must also take care about the caching-related headers that are sent along with this response, as these temporary condition responses should usually not be cached.
504 Gateway Timeout: This error response is given when the server is acting as a gateway and cannot get a response in time.
508 Loop Detected (WebDAV): The server detected an infinite loop while processing the request.
510 Not Extended: Further extensions to the request are required for the server to fulfil it.
511 Network Authentication Required: The 511 status code indicates that the client needs to authenticate to gain network access.
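Since 503 explicitly invites a retry, here is a sketch of a retry loop with exponential backoff that honours Retry-After when present; the helper name and URL are assumptions:

```python
# Sketch: retry on 5xx with exponential backoff, honouring Retry-After
# on 503 as described above. Assumes `requests` and a placeholder URL.
import time
import requests

def get_with_retries(url: str, attempts: int = 4) -> requests.Response:
    delay = 1.0
    for _ in range(attempts):
        resp = requests.get(url)
        if resp.status_code < 500:
            return resp                       # success or client error
        if resp.status_code == 503 and "Retry-After" in resp.headers:
            delay = float(resp.headers["Retry-After"])
        time.sleep(delay)
        delay *= 2                            # back off between attempts
    return resp

resp = get_with_retries("https://api.example.com/health")
```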
2 notes · View notes
troydrea415 · 4 years ago
Text
HTTP response status codes indicate whether a particular HTTP request has been successfully completed
HTTP response status codes indicate whether a specific HTTP request has been successfully completed. Responses are grouped in five classes:

Informational responses (100–199)
Successful responses (200–299)
Redirection messages (300–399)
Client error responses (400–499)
Server error responses (500–599)
If you receive a response that is not in this list, it is a non-standard response, possibly custom to the server's software.
Informational responses
100 Continue: This interim response indicates that everything so far is OK and that the client should continue the request, or ignore the response if the request is already finished.
101 Switching Protocols: This code is sent in response to an Upgrade request header from the client, and indicates the protocol the server is switching to.
103 Early Hints: This status code is primarily intended to be used with the Link header, letting the user agent start preloading resources while the server prepares a response.
Successful responses
200 OK: The request has succeeded. The meaning of the success depends on the HTTP method:
GET: The resource has been fetched and is transmitted in the message body.
PUT or POST: The resource describing the result of the action is transmitted in the message body.
201 Created: The request has succeeded and a brand new resource has been created as a result. This is typically the response sent after POST requests, or some PUT requests.
202 Accepted: The request has been received but not yet acted upon. It is noncommittal, since there is no way in HTTP to later send an asynchronous response indicating the outcome of the request. It is intended for cases where another process or server handles the request, or for batch processing.
203 Non-Authoritative Information: This response code means the returned meta-information is not exactly the same as is available from the origin server, but is collected from a local or a third-party backup. This is chiefly used for mirrors or backups of a different resource. Except for that specific case, the "200 OK" response is preferred to this status.
204 No Content: There is no content to send for this request, but the headers may be useful.
205 Reset Content: Tells the user-agent to reset the document which sent this request.
206 Partial Content: This response code is used when the Range header is sent by the client to request only part of a resource.
207 Multi-Status (WebDAV): Conveys information about multiple resources, for situations where multiple status codes might be appropriate.
208 Already Reported (WebDAV): Used inside a response element to avoid repeatedly enumerating the internal members of multiple bindings to the same collection.
226 IM Used (HTTP Delta encoding): The server has fulfilled a GET request for the resource, and the response is a representation of the result of one or more instance-manipulations applied to the current instance.
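For contrast, here is what issuing some of these codes looks like from the server's side; a minimal sketch using only Python's standard library, with an invented /ping route:

```python
# Minimal stdlib server illustrating the codes above: 200 with a body
# for GET /, 204 with no body for /ping, 404 otherwise.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            body = b"hello"
            self.send_response(200)            # OK: body follows
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        elif self.path == "/ping":
            self.send_response(204)            # No Content: headers only
            self.end_headers()
        else:
            self.send_error(404)               # Not Found

HTTPServer(("localhost", 8000), Handler).serve_forever()
```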
Redirection messages
300 Multiple Choice: The request has more than one possible response. The user-agent or user should choose one of them. (There is no standardized way of choosing one of the responses, but HTML links to the possibilities are recommended so the user can pick.)
301 Moved Permanently: The URL of the requested resource has been changed permanently. The new URL is given in the response.
302 Found: This response code means that the URI of the requested resource has been changed temporarily. Further changes in the URI might be made in the future. Therefore, this same URI should be used by the client in future requests.
303 See Other: The server sent this response to direct the client to get the requested resource at another URI with a GET request.
304 Not Modified: This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response.
305 Use Proxy: Defined in a previous version of the HTTP specification to indicate that a requested response must be retrieved by a proxy. It has been deprecated due to security concerns regarding in-band configuration of a proxy.
306 unused: This response code is no longer used; it is just reserved. It was used in a previous version of the HTTP/1.1 specification.
307 Temporary Redirect: The server sends this response to direct the client to get the requested resource at another URI with the same method that was used in the prior request. This has the same semantics as the 302 Found HTTP response code, with the exception that the user agent must not change the HTTP method used: if a POST was used in the first request, a POST must be used in the second request.
308 Permanent Redirect: This means that the resource is now permanently located at another URI, specified by the Location: HTTP Response header. This has the same semantics as the 301 Moved Permanently HTTP response code, with the exception that the user agent must not change the HTTP method used: if a POST was used in the first request, a POST must be used in the second request.
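The 304 entry above describes a caching handshake that looks roughly like this sketch with the requests library (the URL is a placeholder): the client echoes the ETag back via If-None-Match and reuses its cache on 304:

```python
# Sketch of the conditional-GET handshake behind 304 Not Modified.
import requests

first = requests.get("https://example.com/data.json")
etag = first.headers.get("ETag")

headers = {"If-None-Match": etag} if etag else {}
second = requests.get("https://example.com/data.json", headers=headers)

if second.status_code == 304:
    data = first.content      # unchanged: reuse the cached body
else:
    data = second.content     # changed (or no validator): fresh body
```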
Client error responses
400 Bad Request: The server could not understand the request due to invalid syntax.
401 Unauthorized: Although the HTTP standard specifies "unauthorized", semantically this response means "unauthenticated". That is, the client must authenticate itself to get the requested response.
402 Payment Required: This response code is reserved for future use. The initial aim for creating this code was using it for digital payment systems, but this status code is used very infrequently and no standard convention exists.
403 Forbidden: The client does not have access rights to the content; that is, it is unauthorized, so the server is refusing to give the requested resource. Unlike 401, the client's identity is known to the server.
404 Not Found: The server cannot locate the requested resource. In the browser, this usually means the URL is not recognized. In an API, this can also indicate that the endpoint is legitimate but the resource itself does not exist. Servers may also send this response instead of 403 to hide the existence of a resource from an unauthorized client. This response code is probably the most famous one due to its frequent occurrence on the web.
405 Method Not Allowed: The request method is understood by the server but has been disabled and cannot be used. For instance, an API might forbid DELETE-ing a resource. The two mandatory methods, GET and HEAD, should never be disabled and should not return this error code.
406 Not Acceptable: This response is sent when the web server, after performing server-driven content negotiation, does not find any content that conforms to the criteria given by the user agent.
407 Proxy Authentication Required: This resembles 401; however, authentication needs to be done by a proxy.
408 Request Timeout: This response is sent on an idle connection by some servers, even without any prior request by the client. It usually means the server would like to shut down this unused connection. This response is used much more since some browsers, like Chrome, Firefox 27+, or IE9, use HTTP pre-connection mechanisms to speed up browsing. Also note that some servers merely shut down the connection without sending this message.
409 Conflict: This response is sent when a request conflicts with the current state of the server.
410 Gone: This response is sent when the requested content has been permanently deleted from the server, with no forwarding address. Clients are expected to remove their caches and links to the resource. The HTTP specification intends this status code to be used for "limited-time, promotional services". APIs should not feel compelled to indicate resources that have been deleted with this status code.
411 Length Required: Server rejected the request because the Content-Length header field is not defined and the server requires it.
412 Precondition Failed: The client has indicated preconditions in its headers that the server does not meet.
413 Payload Too Large: Request entity is bigger than limits defined by the server; the server might close the connection or return a Retry-After header field.
414 URI Too Long: The URI requested by the client is longer than the server is prepared to interpret.
415 Unsupported Media Type: The media format of the requested data is not supported by the server, so the server is rejecting the request.
416 Range Not Satisfiable: The range specified by the Range header field in the request cannot be fulfilled; it is possible that the range is outside the size of the target URI's data.
417 Expectation Failed: This response code means the expectation indicated by the Expect request header field cannot be met by the server.
421 Misdirected Request: The request was directed at a server that is not able to produce a response. This can be sent by a server that is not configured to produce responses for the combination of scheme and authority that are included in the request URI.
422 Unprocessable Entity (WebDAV): The request was well-formed but was unable to be followed due to semantic errors.
423 Locked (WebDAV): The resource that is being accessed is locked.
424 Failed Dependency (WebDAV): The request failed due to failure of a previous request.
425 Too Early: Indicates that the server is unwilling to risk processing a request that might be replayed.
426 Upgrade Required: The server will not perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol. The server sends an Upgrade header in a 426 response to indicate the required protocol(s).
428 Precondition Required: The origin server requires the request to be conditional. This response is intended to prevent the 'lost update' problem, where a client GETs a resource's state, modifies it, and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict.
429 Too Many Requests: The user has sent too many requests in a given amount of time ("rate limiting").
431 Request Header Fields Too Large: The server is unwilling to process the request because its header fields are too large. The request may be resubmitted after reducing the size of the request header fields.
451 Unavailable For Legal Reasons: The user-agent requested a resource that cannot legally be provided, such as a web page censored by a government.
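One common way to consume the whole 4xx/5xx space at once, sketched with the requests library and an invented endpoint: raise_for_status() converts error codes into exceptions, and the handler branches on the class of the failure:

```python
# Sketch: raise_for_status() turns any 4xx/5xx into an HTTPError,
# so one handler can branch on the error class.
import requests

try:
    resp = requests.get("https://api.example.com/items/42")
    resp.raise_for_status()
except requests.HTTPError as err:
    code = err.response.status_code
    if 400 <= code < 500:
        print("Client error, fix the request:", code)
    else:
        print("Server error, maybe retry later:", code)
```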
Server error responses
500 Internal Server Error: The server has encountered a situation it does not know how to handle.
501 Not Implemented: The request method is not supported by the server and cannot be handled. The only methods that servers are required to support (and hence that must not return this code) are GET and HEAD.
502 Bad Gateway: This error response means that the server, while working as a gateway to get a response needed to handle the request, got an invalid response.
503 Service Unavailable: The server is not ready to handle the request. Common causes are a server that is down for maintenance or that is overloaded. Note that together with this response, a user-friendly page explaining the problem should be sent. This response should be used for temporary conditions and the Retry-After: HTTP header should, if possible, contain the estimated time before the recovery of the service. The webmaster should also take care about the caching-related headers that are sent along with this response, as these temporary condition responses should not be cached.
504 Gateway Timeout: This error response is given when the server is acting as a gateway and cannot get a response in time.
506 Variant Also Negotiates: The server has an internal configuration error: the chosen variant resource is configured to engage in transparent content negotiation itself, and is therefore not a proper end point in the negotiation process.
507 Insufficient Storage (WebDAV): The method could not be performed on the resource because the server is unable to store the representation needed to successfully complete the request.
508 Loop Detected (WebDAV): The server detected an infinite loop while processing the request.
1 note · View note
rowanpbon745 · 4 years ago
Text
HTTP response status codes indicate whether a particular HTTP request has been successfully completed
HTTP response status codes indicate whether a specific HTTP request has been successfully completed. Responses are grouped in five classes:

Informational responses (100–199)
Successful responses (200–299)
Redirection messages (300–399)
Client error responses (400–499)
Server error responses (500–599)
If you receive a response that is not in this list, it is a non-standard response, probably custom to the server's software.
Informational responses
100 Continue: This interim response indicates that everything so far is OK and that the client should continue the request, or ignore the response if the request is already finished.
101 Switching Protocols: This code is sent in response to an Upgrade request header from the client, and indicates the protocol the server is switching to.
103 Early Hints: This status code is mainly intended to be used with the Link header, letting the user agent start preloading resources while the server prepares a response.
Successful responses
200 OK: The request has succeeded. The meaning of the success depends on the HTTP method:
GET: The resource was fetched and is transmitted in the message body.
PUT or POST: The resource describing the result of the action is transmitted in the message body.
201 Created: The request has succeeded and a new resource was created as a result. This is typically the response sent after POST requests, or some PUT requests.
202 Accepted: The request has been received but not yet acted upon. It is noncommittal, since there is no way in HTTP to later send an asynchronous response indicating the outcome of the request. It is intended for cases where a different process or server handles the request, or for batch processing.
203 Non-Authoritative Information: This response code means the returned meta-information is not exactly the same as is available from the origin server, but is collected from a local or a third-party backup. This is mostly used for mirrors or backups of another resource. Except for that specific case, the "200 OK" response is preferred to this status.
204 No Content: There is no content to send for this request, but the headers may be useful. The user-agent may update its cached headers for this resource with the new ones.
205 Reset Content: Tells the user-agent to reset the document which sent this request.
206 Partial Content: This response code is used when the Range header is sent by the client to request only part of a resource.
208 Already Reported (WebDAV): Used inside a response element to avoid repeatedly enumerating the internal members of multiple bindings to the same collection.
226 IM Used (HTTP Delta encoding): The server has fulfilled a GET request for the resource, and the response is a representation of the result of one or more instance-manipulations applied to the current instance.
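The 202 Accepted entry above implies a polling workflow; here is a hedged sketch of it with the requests library, where the endpoint and the Location-based status URL are conventions assumed for illustration, not fixed by the standard:

```python
# Sketch of the asynchronous pattern behind 202 Accepted: the server
# queues the work and the client polls a status URL.
import time
import requests

resp = requests.post("https://api.example.com/reports", json={"year": 2021})

if resp.status_code == 202:
    status_url = resp.headers["Location"]      # polling URL, by convention
    for _ in range(30):                        # bounded poll, ~1 minute
        poll = requests.get(status_url)
        if poll.status_code == 200:            # done: body holds the result
            print(poll.json())
            break
        time.sleep(2)                          # still processing
```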
Redirection messages
300 Multiple Choice: The request has more than one possible response. The user-agent or user should choose one of them. (There is no standardized way of choosing one of the responses, but HTML links to the possibilities are recommended so the user can pick.)
301 Moved Permanently: The URL of the requested resource has been changed permanently. The new URL is given in the response.
302 Found: This response code means that the URI of the requested resource has been changed temporarily. Further changes in the URI might be made in the future. Therefore, this same URI should be used by the client in future requests.
303 See Other: The server sent this response to direct the client to get the requested resource at another URI with a GET request.
304 Not Modified: This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response.
305 Use Proxy: Defined in a previous version of the HTTP specification to indicate that a requested response must be retrieved by a proxy. It has been deprecated due to security concerns regarding in-band configuration of a proxy.
306 unused: This response code is no longer used; it is just reserved. It was used in a prior version of the HTTP/1.1 specification.
307 Temporary Redirect: The server sends this response to direct the client to get the requested resource at another URI with the same method that was used in the prior request. This has the same semantics as the 302 Found HTTP response code, with the exception that the user agent must not change the HTTP method used: if a POST was used in the first request, a POST must be used in the second request.
308 Permanent Redirect: This means that the resource is now permanently located at another URI, specified by the Location: HTTP Response header. This has the same semantics as the 301 Moved Permanently HTTP response code, with the exception that the user agent must not change the HTTP method used: if a POST was used in the first request, a POST must be used in the second request.
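When redirects are followed automatically, the chain described above is still visible afterwards; a small sketch with the requests library and a placeholder URL:

```python
# Sketch: after following redirects, `requests` keeps the intermediate
# responses in resp.history, making the 301/302/307 chain visible.
import requests

resp = requests.get("http://example.com/moved")   # placeholder URL

for hop in resp.history:                          # intermediate 3xx hops
    print(hop.status_code, "->", hop.headers.get("Location"))
print("final:", resp.status_code, resp.url)
```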
Client error responses
400 Bad RequestThe server couldn't realize the ask for due to invalid syntax. That may be, the customer need to authenticate by itself to get the requested answer. The Preliminary goal for producing this code has been working with it for electronic payment solutions, but this status code is often used really infrequently and no conventional Conference exists. 403 ForbiddenThe client doesn't have entry rights to this substance; that is, it can be unauthorized, so the server is refusing to deliver the requested useful resource. Compared with 401, the consumer's identification is known to the host. 404 Not FoundThe device cannot locate the requested source. In the browser, this normally usually means the URL is not regarded. In an API, this can also reveal that the endpoint is genuine nevertheless the useful resource alone isn't going to exist. Servers may also ship this response as an alternative to 403 to conceal the event of the supply from an unauthorized consumer. This reaction code is almost certainly by far the most famed one as a consequence of the frequent event online. 405 Method Not AllowedThe petition method is regarded by the server but has been disabled and can't be employed. By the use of occasion, an API may possibly forbid DELETE-ing a useful resource. The two mandatory procedures, GET and HEAD, need to never ever be disabled and should not return this error code. 406 Not AcceptableThis solution http://discorddownn.moonfruit.com/?preview=Y is sent when the server, right after accomplishing server-driven content negotiation, won't uncover any material which adheres on the criteria supplied from the consumer consultant. 407 Proxy Authentication RequiredThis resembles 401 but authentication is essential for being performed by a proxy. 408 Request TimeoutThis reaction is shipped on an idle connection by a few servers, even without prior ask for from the shopper. It commonly implies the server would like to close this down new link. This response may be utilized noticeably more since some browsers, like Chrome, Firefox 27+, or IE9, make the most of HTTP pre-link mechanisms to quicken browsing. Also Be aware that many servers simply shut down the connection with out sending this information. 409 ConflictThis reaction is sent any time a ask for conflicts While using the current state of the server. 410 GoneThis reaction is delivered once the requested information has become forever deleted from server, with out a forwarding address. Consumers are predicted to clear away their caches and hyperlinks on the useful resource. The HTTP specification ideas this position code to be used for"minimal-time, promotional expert services". APIs shouldn't feel pressured to indicate sources that are deleted with this standing code. 411 Length RequiredServer turned down the ask for since the Material-Length header industry just isn't outlined as well as the server needs it. 412 Precondition FailedThe consumer has indicated preconditions in its own headers that the server doesn't satisfy. 413 Payload Too Big Ask for entity is greater than limitations defined by host; the server may well shut the website link or return the Retry-After header area. 414 URI Too LongThe URI requested through the purchaser is longer compared to the server is ready to interpret. 415 Unsupported Media TypeThe media format of the info which is asked for just isn't supported with the host, Therefore the server is rejecting the petition. 
416 Range Not Satisfiable: The range specified by the Range header field in the request can't be fulfilled; it's possible that the range is outside the size of the target URI's data.
417 Expectation Failed: This response code means the expectation indicated by the Expect request header field can't be met by the server.
418 I'm a teapot: The server refuses the attempt to brew coffee with a teapot.
421 Misdirected Request: The request was directed at a server that is not able to produce a response. This can be sent by a server that is not configured to produce responses for the combination of scheme and authority that are included in the request URI.
422 Unprocessable Entity (WebDAV): The request was well-formed but was unable to be followed due to semantic errors.
423 Locked (WebDAV): The resource that is being accessed is locked.
424 Failed Dependency (WebDAV): The request failed due to failure of a previous request.
425 Too Early: Indicates that the server is unwilling to risk processing a request that might be replayed.
426 Upgrade Required: The server refuses to perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol. The server sends an Upgrade header in a 426 response to indicate the required protocol(s).
428 Precondition Required: The origin server requires the request to be conditional. This response is intended to prevent the 'lost update' problem, where a client GETs a resource's state, modifies it, and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict.
429 Too Many Requests: The user has sent too many requests in a given amount of time ("rate limiting").
431 Request Header Fields Too Large: The server is unwilling to process the request because its header fields are too large. The request may be resubmitted after reducing the size of the request header fields.
451 Unavailable For Legal Reasons: The user-agent requested a resource that cannot legally be provided, such as a web page censored by a government.
Server error responses
500 Internal Server Error: The server has encountered a situation it doesn't know how to handle.
501 Not Implemented: The request method is not supported by the server and cannot be handled. The only methods that servers are required to support (and therefore that must not return this code) are GET and HEAD.
502 Bad Gateway: This error response means that the server, while working as a gateway to get a response needed to handle the request, got an invalid response.
503 Service Unavailable: The server is not ready to handle the request. Common causes are a server that is down for maintenance or that is overloaded. Note that together with this response, a user-friendly page explaining the problem should be sent. This response should be used for temporary conditions, and the Retry-After: HTTP header should, if possible, contain the estimated time before the recovery of the service. The webmaster must also take care about the caching-related headers that are sent along with this response, as these temporary condition responses should usually not be cached.
504 Gateway Timeout: This error response is given when the server is acting as a gateway and cannot get a response in time.
505 HTTP Version Not Supported: The HTTP version used in the request is not supported by the server.
508 Loop Detected (WebDAV): The server detected an infinite loop while processing the request.
510 Not Extended: Further extensions to the request are required for the server to fulfil it.
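Because a 503 is explicitly temporary and may carry a Retry-After header, a well-behaved client can wait and retry. A hedged sketch in Python (it assumes Retry-After holds a number of seconds, though the header may also hold an HTTP date):

import time
import requests

def fetch_with_retry(url, attempts=3):
    # Retry on 503, honouring the server's Retry-After hint when present
    for _ in range(attempts):
        resp = requests.get(url)
        if resp.status_code != 503:
            return resp
        retry_after = resp.headers.get("Retry-After", "5")
        # Fall back to 5 seconds if the header is absent or is a date string
        delay = int(retry_after) if retry_after.isdigit() else 5
        time.sleep(delay)
    return resp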
nummero123 · 2 years ago
Text
Every SEO Should Aim For This Link-Earning Stack
When it comes to SEO and trying to improve the optimization of a website, server header status codes shouldn't be overlooked. They can both improve and severely damage your onsite SEO. Therefore it's vital that search engine optimizers understand how they work and how they will be interpreted by search engines like Google and Bing.
How Header Server Status Codes are Served Up
When a user requests a URL from your website, the server on which your website is hosted will return a header status code. Ideally, the server should return a '200 OK' status code to tell the browser that everything is okay with the page and that the page exists in the requested location. This status code also comes with additional information, which includes the HTML code that the user's browser uses to present the page content, images, and video as the website owner has defined it.
The above status code is typically only served up when there are no server-side issues with a particular page. Other codes may be served up which give information on the availability of a particular page and whether it even exists or not. Below we outline the desirable status codes and those that are more detrimental to your SEO efforts and website rankings.
Desirable Server Status Codes
Status Code: 200 OK –
The 200 OK status code confirms that the webpage exists and is able to be served up without problems for the request. This is the most desirable status code you'll see when analyzing a website for SEO. The 200 status code is positively interpreted by the search engines, informing them that the page exists in the requested location and there are no issues with resources being unavailable for the page.
Status Code: 301 Moved Permanently –
This is usually used to show that a page is no longer at the requested location and has permanently moved to a different location. 301s are the most assured way of informing both users and search engines that page content has moved to a different URL permanently. The permanency of this kind of URL change means search engines like Google will transfer any rankings, link weight, and link authority permanently to the new URL. It also helps the search engines know to remove the old URL from their indexes and replace it with the new URL.
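If you manage redirects in application code rather than server configuration, issuing a 301 is a one-liner. A minimal sketch in Python using Flask; Flask is an assumption here, and the paths are placeholders:

from flask import Flask, redirect

app = Flask(__name__)

@app.route("/old-blog-url/")
def moved_permanently():
    # 301: rankings, link weight, and link authority should transfer
    # to the new URL; use code=302 instead for a temporary move
    return redirect("/new-blog-url/", code=301)

if __name__ == "__main__":
    app.run()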
Detrimental Server Status Codes
Status Code: 500 Internal Server Error –
This status code is a general server error that indicates to both visitors and search engines that the website's web server has a problem. If this code occurs regularly, then this not only appears negative to visitors and makes your website experience poor, but it also conveys the same message to search engines, and any ranking you have or may have had will be greatly reduced in the search engine rankings.
Status Code: 302 Found –
This code is usually used for temporary redirection of URLs. It is meant to define where a URL is temporarily redirecting to a different location, but will probably change in the future or be switched to a 301 permanent redirect. Often 302 temporary redirects are employed by mistake, rather than 301 redirects. This can mean that page content is given less preference, because the search engines think the URL or content could change and isn't as fixed for users as a page that has been permanently redirected. Traditionally, this status code also doesn't pass link authority and may cause links to be de-indexed in time. In general, it is advised not to use this sort of redirect unless a website is fresh and has little link authority anyway, or in very specific special cases where it may make sense to only temporarily redirect URLs.
Status Code: 404 Not Found –
This server status code means the requested URL has not been found, and there's usually a message on the page saying "The page or file you're trying to access doesn't exist". The problem with 404s is that if they're appearing for URLs that previously did exist, then search engines will interpret them as meaning the page has moved or been removed. As a result, the pages will quickly be de-indexed, as they serve little content and any link authority remains on the Not Found URL.
The simplest solution if you're experiencing many 404s is to review them and try to redirect any relevant URLs to corresponding matching or similar URLs. Google Webmaster Tools often produces a report showing 404s that Google's bots are finding, allowing users to map out 301 permanent redirections to the most related URLs and thus pass on any link weight and rankings that were previously held.
Conclusion
Webmasters and SEOs must use 301 redirects to resolve any URLs which are throwing up 500, 302, or 404 server status codes. Search engines won't rank URLs that don't permanently resolve to a relevant URL, so it's worth taking the time to review and resolve your URLs. You can use data and tools like Google Webmaster Tools and Screaming Frog's SEO Spider Tool to find erroneous status codes and resolve them.
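Before reaching for those tools, you can also run a quick homemade check. A sketch assuming Python's requests library and a urls.txt file with one URL per line:

import requests

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    try:
        # Don't follow redirects, so 301s and 302s are reported as-is;
        # note some servers handle HEAD requests differently from GET
        resp = requests.head(url, allow_redirects=False, timeout=10)
        print(resp.status_code, url)
    except requests.RequestException as exc:
        print("ERROR", url, exc)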
For the best internet marketing services, get in touch with nummero; we are the best digital marketing company in Bangalore.
wickedbananas · 6 years ago
Text
An 8-Point Checklist for Debugging Strange Technical SEO Problems
Posted by Dom-Woodman
Occasionally, a problem will land on your desk that's a little out of the ordinary. Something where you don't have an easy answer. You go to your brain and your brain returns nothing.
These problems can’t be solved with a little bit of keyword research and basic technical configuration. These are the types of technical SEO problems where the rabbit hole goes deep.
The very nature of these situations defies a checklist, but it's useful to have one for the same reason we have them on planes: even the best of us can and will forget things, and a checklist will provide you with places to dig.
Fancy some examples of strange SEO problems? Here are four examples to mull over while you read. We’ll answer them at the end.
1. Why wasn’t Google showing 5-star markup on product pages?
The pages had server-rendered product markup and they also had Feefo product markup, including ratings being attached client-side.
The Feefo ratings snippet was successfully rendered in Fetch & Render, plus the mobile-friendly tool.
When you put the rendered DOM into the structured data testing tool, both pieces of structured data appeared without errors.
2. Why wouldn’t Bing display 5-star markup on review pages, when Google would?
The review pages of client & competitors all had rating rich snippets on Google.
All the competitors had rating rich snippets on Bing; however, the client did not.
The review pages had correctly validating ratings schema on Google’s structured data testing tool, but did not on Bing.
3. Why were pages getting indexed with a no-index tag?
Pages with a server-side-rendered no-index tag in the head were being indexed by Google across a large template for a client.
4. Why did any page on a website return a 302 about 20–50% of the time, but only for crawlers?
A website was randomly throwing 302 errors.
This never happened in the browser and only in crawlers.
User agent made no difference; location or cookies also made no difference.
Finally, a quick note. It’s entirely possible that some of this checklist won’t apply to every scenario. That’s totally fine. It’s meant to be a process for everything you could check, not everything you should check.
The pre-checklist check
Does it actually matter?
Does this problem only affect a tiny amount of traffic? Is it only on a handful of pages and you already have a big list of other actions that will help the website? You probably need to just drop it.
I know, I hate it too. I also want to be right and dig these things out. But in six months' time, when you've solved twenty complex SEO rabbit holes and your website has stayed flat because you didn't re-write the title tags, you're still going to get fired.
But hopefully that's not the case, in which case, onwards!
Where are you seeing the problem?
We don’t want to waste a lot of time. Have you heard this wonderful saying?: “If you hear hooves, it’s probably not a zebra.”
The process we’re about to go through is fairly involved and it’s entirely up to your discretion if you want to go ahead. Just make sure you’re not overlooking something obvious that would solve your problem. Here are some common problems I’ve come across that were mostly horses.
You’re underperforming from where you should be.
When a site is under-performing, people love looking for excuses. Weird Google nonsense can be quite a handy thing to blame. In reality, it’s typically some combination of a poor site, higher competition, and a failing brand. Horse.
You’ve suffered a sudden traffic drop.
Something has certainly happened, but this is probably not the checklist for you. There are plenty of common-sense checklists for this. I’ve written about diagnosing traffic drops recently — check that out first.
The wrong page is ranking for the wrong query.
In my experience (which should probably preface this entire post), this is usually a basic problem where a site has poor targeting or a lot of cannibalization. Probably a horse.
Factors which make it more likely that you've got a more complex problem which requires you to don your debugging shoes:
A website that has a lot of client-side JavaScript.
Bigger, older websites with more legacy.
Your problem is related to a new Google property or feature where there is less community knowledge.
1. Start by picking some example pages.
Pick a couple of example pages to work with — ones that exhibit whatever problem you're seeing. No, this won't be representative, but we'll come back to that in a bit.
Of course, if it only affects a tiny number of pages then it might actually be representative, in which case we're good. It definitely matters, right? You didn't just skip the step above? OK, cool, let's move on.
2. Can Google crawl the page once?
First we’re checking whether Googlebot has access to the page, which we’ll define as a 200 status code.
We’ll check in four different ways to expose any common issues:
Robots.txt: Open up Search Console and check in the robots.txt validator.
User agent: Open Dev Tools and verify that you can open the URL with both Googlebot and Googlebot Mobile.
To get the user agent switcher, open Dev Tools.
Check the console drawer is open (the toggle is the Escape key)
Hit the … and open "Network conditions"
Here, select your user agent!
IP Address: Verify that you can access the page with the mobile testing tool. (This will come from one of the IPs used by Google; any checks you do from your computer won't.)
Country: The mobile testing tool will visit from US IPs, from what I've seen, so we get two birds with one stone. But Googlebot will occasionally crawl from non-American IPs, so it’s also worth using a VPN to double-check whether you can access the site from any other relevant countries.
I’ve used HideMyAss for this before, but whatever VPN you have will work fine.
We should now have an idea whether or not Googlebot is struggling to fetch the page once.
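If you prefer to script the user-agent check, a short sketch using Python's requests library (an assumed dependency; the URL is a placeholder) fetches the page with a Googlebot user-agent string and compares status codes. Bear in mind this still comes from your own IP, not Google's:

import requests

URL = "https://www.example.com/some-page"  # placeholder
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

as_browser = requests.get(URL)
as_googlebot = requests.get(URL, headers={"User-Agent": GOOGLEBOT_UA})

# If these differ, something is treating bots differently
print("browser:", as_browser.status_code)
print("googlebot UA:", as_googlebot.status_code)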
Have we found any problems yet?
If we can re-create a failed crawl with a simple check above, then it's likely Googlebot is failing consistently to fetch our page and it's typically one of those basic reasons.
But it might not be. Many problems are inconsistent because of the nature of technology. ;)
3. Are we telling Google two different things?
Next up: Google can find the page, but are we confusing it by telling it two different things?
This is most commonly seen, in my experience, because someone has messed up the indexing directives.
By "indexing directives," I’m referring to any tag that defines the correct index status or page in the index which should rank. Here’s a non-exhaustive list:
No-index
Canonical
Mobile alternate tags
AMP alternate tags
An example of providing mixed messages would be:
No-indexing page A
Page B canonicals to page A
Or:
Page A has a canonical in a header to A with a parameter
Page A has a canonical in the body to A without a parameter
If we’re providing mixed messages, then it’s not clear how Google will respond. It’s a great way to start seeing strange results.
Good places to check for the indexing directives listed above are:
Sitemap
Example: Mobile alternate tags can sit in a sitemap
HTTP headers
Example: Canonical and meta robots can be set in headers.
HTML head
This is where you’re probably looking, you’ll need this one for a comparison.
JavaScript-rendered vs hard-coded directives
You might be setting one thing in the page source and then rendering another with JavaScript, i.e. you would see something different in the HTML source from the rendered DOM.
Google Search Console settings
There are Search Console settings for ignoring parameters and country localization that can clash with indexing tags on the page.
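To check several of those places in one pass, you can pull both the HTTP headers and the source HTML for a URL and print every directive found. A sketch assuming the requests and BeautifulSoup (bs4) libraries; note it only sees the source HTML, not the rendered DOM:

import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/some-page"  # placeholder
resp = requests.get(url)

# Directives can live in HTTP headers...
print("X-Robots-Tag:", resp.headers.get("X-Robots-Tag"))
print("Link header:", resp.headers.get("Link"))  # may carry rel="canonical"

# ...and in the HTML head of the page source
soup = BeautifulSoup(resp.text, "html.parser")
for tag in soup.find_all("meta", attrs={"name": "robots"}):
    print("meta robots:", tag.get("content"))
for tag in soup.find_all("link", rel="canonical"):
    print("canonical:", tag.get("href"))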
A quick aside on rendered DOM
This page has a lot of mentions of the rendered DOM on it (18, if you’re curious). Since we’ve just had our first, here’s a quick recap about what that is.
When you load a webpage, the first request is the HTML. This is what you see in the HTML source (right-click on a webpage and click View Source).
This is before JavaScript has done anything to the page. This didn't use to be such a big deal, but now so many websites rely heavily on JavaScript that most people quite reasonably won't trust the initial HTML.
Rendered DOM is the technical term for a page, when all the JavaScript has been rendered and all the page alterations made. You can see this in Dev Tools.
In Chrome you can get that by right clicking and hitting inspect element (or Ctrl + Shift + I). The Elements tab will show the DOM as it’s being rendered. When it stops flickering and changing, then you’ve got the rendered DOM!
4. Can Google crawl the page consistently?
To see what Google is seeing, we're going to need to get log files. At this point, we can check to see how it is accessing the page.
Aside: Working with logs is an entire post in and of itself. I’ve written a guide to log analysis with BigQuery, I’d also really recommend trying out Screaming Frog Log Analyzer, which has done a great job of handling a lot of the complexity around logs.
When we’re looking at crawling there are three useful checks we can do:
Status codes: Plot the status codes over time. Is Google seeing different status codes than you when you check URLs?
Resources: Is Google downloading all the resources of the page?
Is it downloading all your site-specific JavaScript and CSS files that it would need to generate the page?
Page size follow-up: Take the max and min of all your pages and resources and diff them. If you see a difference, then Google might be failing to fully download all the resources or pages. (Hat tip to @ohgm, where I first heard this neat tip).
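For the status-code check, even a few lines of scripting over a raw access log will show what Googlebot is being served. A sketch that assumes a common/combined-format access.log; field positions vary with your server configuration:

from collections import Counter

codes = Counter()
with open("access.log") as f:
    for line in f:
        if "Googlebot" not in line:
            continue
        parts = line.split()
        # In the common log format, the status code is the field
        # immediately after the quoted request line
        try:
            codes[parts[8]] += 1
        except IndexError:
            continue  # skip malformed lines

print(codes.most_common())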
Have we found any problems yet?
If Google isn't getting 200s consistently in our log files, but we can access the page fine when we try, then there are clearly still some differences between Googlebot and ourselves. What might those differences be?
It will crawl more than us
It is obviously a bot, rather than a human pretending to be a bot
It will crawl at different times of day
This means that:
If our website is doing clever bot blocking, it might be able to differentiate between us and Googlebot.
Because Googlebot will put more stress on our web servers, it might behave differently. When websites have a lot of bots or visitors visiting at once, they might take certain actions to help keep the website online. They might turn on more computers to power the website (this is called scaling), they might also attempt to rate-limit users who are requesting lots of pages, or serve reduced versions of pages.
Servers run tasks periodically; for example, a listings website might run a daily task at 01:00 to clean up all its old listings, which might affect server performance.
Working out what’s happening with these periodic effects is going to be fiddly; you’re probably going to need to talk to a back-end developer.
Depending on your skill level, you might not know exactly where to lead the discussion. A useful structure for a discussion is often to talk about how a request passes through your technology stack and then look at the edge cases we discussed above.
What happens to the servers under heavy load?
When do important scheduled tasks happen?
Two useful pieces of information to enter this conversation with:
Depending on the regularity of the problem in the logs, it is often worth trying to re-create the problem by attempting to crawl the website with a crawler at the same speed/intensity that Google is using to see if you can find/cause the same issues. This won’t always be possible depending on the size of the site, but for some sites it will be. Being able to consistently re-create a problem is the best way to get it solved.
If you can't, however, then try to provide the exact periods of time where Googlebot was seeing the problems. This will give the developer the best chance of tying the issue to other logs to let them debug what was happening.
If Google can crawl the page consistently, then we move onto our next step.
5. Does Google see what I can see on a one-off basis?
We know Google is crawling the page correctly. The next step is to try and work out what Google is seeing on the page. If you’ve got a JavaScript-heavy website you’ve probably banged your head against this problem before, but even if you don’t this can still sometimes be an issue.
We follow the same pattern as before. First, we try to re-create it once. The following tools will let us do that:
Fetch & Render
Shows: Rendered DOM in an image, but only returns the page source HTML for you to read.
Mobile-friendly test
Shows: Rendered DOM and returns rendered DOM for you to read.
Not only does this show you rendered DOM, but it will also track any console errors.
Is there a difference between Fetch & Render, the mobile-friendly testing tool, and Googlebot? Not really, with the exception of timeouts (which is why we have our later steps!). Here’s the full analysis of the difference between them, if you’re interested.
Once we have the output from these, we compare them to what we ordinarily see in our browser. I’d recommend using a tool like Diff Checker to compare the two.
Have we found any problems yet?
If we encounter meaningful differences at this point, then in my experience it's typically either from JavaScript or cookies.
Why?
Googlebot crawls with cookies cleared between page requests
Googlebot renders with Chrome 41, which doesn’t support all modern JavaScript.
We can isolate each of these by:
Loading the page with no cookies. This can be done simply by loading the page with a fresh incognito session and comparing the rendered DOM here against the rendered DOM in our ordinary browser.
Use the mobile testing tool to see the page with Chrome 41 and compare against the rendered DOM we normally see with Inspect Element.
Yet again we can compare them using something like Diff Checker, which will allow us to spot any differences. You might want to use an HTML formatter to help line them up better.
We can also see the JavaScript errors thrown using the Mobile-Friendly Testing Tool, which may prove particularly useful if you’re confident in your JavaScript.
If, using this knowledge and these tools, we can recreate the bug, then we have something that can be replicated and it’s easier for us to hand off to a developer as a bug that will get fixed.
If we’re seeing everything is correct here, we move on to the next step.
6. What is Google actually seeing?
It's possible that what Google is seeing is different from what we recreate using the tools in the previous step. Why? A couple of main reasons:
Overloaded servers can have all sorts of strange behaviors. For example, they might be returning 200 codes, but perhaps with a default page.
JavaScript is rendered separately from pages being crawled and Googlebot may spend less time rendering JavaScript than a testing tool.
There is often a lot of caching in the creation of web pages and this can cause issues.
We’ve gotten this far without talking about time! Pages don’t get crawled instantly, and crawled pages don’t get indexed instantly.
Quick sidebar: What is caching?
Caching is often a problem if you get to this stage. Unlike JS, it’s not talked about as much in our community, so it’s worth some more explanation in case you’re not familiar. Caching is storing something so it’s available more quickly next time.
When you request a webpage, a lot of calculations happen to generate that page. If you then refreshed the page when it was done, it would be incredibly wasteful to just re-run all those same calculations. Instead, servers will often save the output and serve you the output without re-running them. Saving the output is called caching.
Why do we need to know this? Well, we’re already well out into the weeds at this point and so it’s possible that a cache is misconfigured and the wrong information is being returned to users.
There aren’t many good beginner resources on caching which go into more depth. However, I found this article on caching basics to be one of the more friendly ones. It covers some of the basic types of caching quite well.
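If it helps to see the mechanics, here is a toy sketch of a response cache in Python; it is purely illustrative and not how any particular server implements caching (render_page is a stand-in for the expensive work):

cache = {}

def render_page(url):
    # Placeholder for database queries, templating, and so on
    return "<html>...</html>"

def get_page(url):
    # Serve the stored output if we've already built this page
    if url in cache:
        return cache[url]
    page = render_page(url)
    cache[url] = page  # saving the output is the "caching" step
    return page

print(get_page("/about/"))  # computed on the first request
print(get_page("/about/"))  # served from the cache thereafter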
How can we see what Google is actually working with?
Google’s cache
Shows: Source code
While this won’t show you the rendered DOM, it is showing you the raw HTML Googlebot actually saw when visiting the page. You’ll need to check this with JS disabled; otherwise, on opening it, your browser will run all the JS on the cached version.
Site searches for specific content
Shows: A tiny snippet of rendered content.
By searching for a specific phrase on a page, e.g. inurl:example.com/url “only JS rendered text”, you can see if Google has managed to index a specific snippet of content. Of course, it only works for visible text and misses a lot of the content, but it's better than nothing!
Better yet, do the same thing with a rank tracker, to see if it changes over time.
Storing the actual rendered DOM
Shows: Rendered DOM
Alex from DeepCrawl has written about saving the rendered DOM from Googlebot. The TL;DR version: Google will render JS and post to endpoints, so we can get it to submit the JS-rendered version of a page that it sees. We can then save that, examine it, and see what went wrong.
Have we found any problems yet?
Again, once we’ve found the problem, it’s time to go and talk to a developer. The advice for this conversation is identical to the last one — everything I said there still applies.
The other knowledge you should go into this conversation armed with: how Google works and where it can struggle. While your developer will know the technical ins and outs of your website and how it’s built, they might not know much about how Google works. Together, this can help you reach the answer more quickly.
The obvious source for this are resources or presentations given by Google themselves. Of the various resources that have come out, I’ve found these two to be some of the more useful ones for giving insight into first principles:
This excellent talk, How does Google work - Paul Haahr, is a must-listen.
At their recent IO conference, John Mueller & Tom Greenway gave a useful presentation on how Google renders JavaScript.
But there is often a difference between statements Google will make and what the SEO community sees in practice. All the SEO experiments people tirelessly perform in our industry can also help shed some insight. There are far too many to list here, but here are two good examples:
Google does respect JS canonicals - For example, Eoghan Henn does some nice digging here, which shows Google respecting JS canonicals.
How does Google index different JS frameworks? - Another great example of a widely read experiment by Bartosz Góralewicz last year to investigate how Google treated different frameworks.
7. Could Google be aggregating your website across others?
If we’ve reached this point, we’re pretty happy that our website is running smoothly. But not all problems can be solved just on your website; sometimes you’ve got to look to the wider landscape and the SERPs around it.
Most commonly, what I’m looking for here is:
Similar/duplicate content to the pages that have the problem.
This could be intentional duplicate content (e.g. syndicating content) or unintentional (competitors' scraping or accidentally indexed sites).
Either way, they’re nearly always found by doing exact searches in Google. I.e. taking a relatively specific piece of content from your page and searching for it in quotes.
Have you found any problems yet?
If you find a number of other exact copies, then it’s possible they might be causing issues.
The best description I’ve come up with for “have you found a problem here?” is: do you think Google is aggregating together similar pages and only showing one? And if it is, is it picking the wrong page?
This doesn’t just have to be on traditional Google search. You might find a version of it on Google Jobs, Google News, etc.
To give an example, if you are a reseller, you might find content isn’t ranking because there's another, more authoritative reseller who consistently posts the same listings first.
Sometimes you’ll see this consistently and straightaway, while other times the aggregation might be changing over time. In that case, you’ll need a rank tracker for whatever Google property you’re working on to see it.
Jon Earnshaw from Pi Datametrics gave an excellent talk on the latter (around suspicious SERP flux) which is well worth watching.
Once you’ve found the problem, you’ll probably need to experiment to find out how to get around it, but the easiest factors to play with are usually:
De-duplication of content
Speed of discovery (you can often improve by putting up a 24-hour RSS feed of all the new content that appears)
Lowering syndication
8. A roundup of some other likely suspects
If you’ve gotten this far, then we’re sure that:
Google can consistently crawl our pages as intended.
We’re sending Google consistent signals about the status of our page.
Google is consistently rendering our pages as we expect.
Google is picking the correct page out of any duplicates that might exist on the web.
And your problem still isn’t solved?
And it is important?
Well, shoot.
Feel free to hire us…?
As much as I’d love for this article to list every SEO problem ever, that’s not really practical, so to finish off this article let’s go through two more common gotchas and principles that didn’t really fit in elsewhere before the answers to those four problems we listed at the beginning.
Invalid/poorly constructed HTML
You and Googlebot might be seeing the same HTML, but it might be invalid or wrong. Googlebot (and any crawler, for that matter) has to provide workarounds when the HTML specification isn't followed, and those can sometimes cause strange behavior.
The easiest way to spot it is either by eye-balling the rendered DOM tools or using an HTML validator.
The W3C validator is very useful, but will throw up a lot of errors/warnings you won't care about. The closest I can give to a one-line summary of which ones are useful is:
Look for errors
Ignore anything to do with attributes (won’t always apply, but is often true).
The classic example of this is breaking the head.
An iframe isn't allowed in the head code, so Chrome will end the head and start the body. Unfortunately, it takes the title and canonical with it, because they fall after it — so Google can't read them. The head code should have ended in a different place.
Oliver Mason wrote a good post that explains an even more subtle version of this in breaking the head quietly.
When in doubt, diff
Never underestimate the power of trying to compare two things line by line with a diff from something like Diff Checker. It won’t apply to everything, but when it does it’s powerful.
For example, if Google has suddenly stopped showing your featured markup, try to diff your page against a historical version either in your QA environment or from the Wayback Machine.
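Python's standard library can run the same line-by-line comparison if you'd rather script it; a hedged sketch, assuming you've saved the two versions of the page to local files:

import difflib

with open("page_historical.html") as f:
    historical = f.readlines()
with open("page_current.html") as f:
    current = f.readlines()

# Print only the lines that changed between the two versions
for line in difflib.unified_diff(historical, current,
                                 fromfile="historical", tofile="current"):
    print(line, end="")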
Answers to our original 4 questions
Time to answer those questions. These are all problems we’ve had clients bring to us at Distilled.
1. Why wasn’t Google showing 5-star markup on product pages?
Google was seeing both the server-rendered markup and the client-side-rendered markup; however, the server-rendered side was taking precedence.
Removing the server-rendered markup meant the 5-star markup began appearing.
2. Why wouldn’t Bing display 5-star markup on review pages, when Google would?
The problem came from the references to schema.org.
<div itemscope itemtype="https://schema.org/Movie">
  <h1 itemprop="name">Avatar</h1>
  <span>Director: <span itemprop="director">James Cameron</span> (born August 16, 1954)</span>
  <span itemprop="genre">Science fiction</span>
  <a href="../movies/avatar-theatrical-trailer.html" itemprop="trailer">Trailer</a>
</div>
We diffed our markup against our competitors and the only difference was we’d referenced the HTTPS version of schema.org in our itemtype, which caused Bing to not support it.
C’mon, Bing.
3. Why were pages getting indexed with a no-index tag?
The answer for this was in this post. This was a case of breaking the head.
The developers had installed some ad-tech in the head and inserted a non-standard tag, i.e. not:
<title>
<style>
<base>
<link>
<meta>
<script>
<noscript>
This caused the head to end prematurely and the no-index tag was left in the body where it wasn’t read.
4. Why did any page on a website return a 302 about 20–50% of the time, but only for crawlers?
This took some time to figure out. The client had an old legacy website that has two servers, one for the blog and one for the rest of the site. This issue started occurring shortly after a migration of the blog from a subdomain (blog.client.com) to a subdirectory (client.com/blog/…).
At surface level everything was fine; if a user requested any individual page, it all looked good. A crawl of all the blog URLs to check they’d redirected was fine.
But we noticed a sharp increase of errors being flagged in Search Console, and during a routine site-wide crawl, many pages that were fine when checked manually were causing redirect loops.
We checked using Fetch and Render, but once again, the pages were fine. Eventually, it turned out that when a non-blog page was requested very quickly after a blog page (which, realistically, only a crawler is fast enough to achieve), the request for the non-blog page would be sent to the blog server.
These would then be caught by a long-forgotten redirect rule, which 302-redirected deleted blog posts (or other duff URLs) to the root. This, in turn, was caught by a blanket HTTP to HTTPS 301 redirect rule, which would be requested from the blog server again, perpetuating the loop.
For example, requesting https://www.client.com/blog/ followed quickly enough by https://www.client.com/category/ would result in:
302 to http://www.client.com - This was the rule that redirected deleted blog posts to the root
301 to https://www.client.com - This was the blanket HTTPS redirect
302 to http://www.client.com - The blog server doesn’t know about the HTTPS non-blog homepage and it redirects back to the HTTP version. Rinse and repeat.
This caused the periodic 302 errors and it meant we could work with their devs to fix the problem.
What are the best brainteasers you've had?
Let’s hear them, people. What problems have you run into? Let us know in the comments.
Also credit to @RobinLord8, @TomAnthonySEO, @THCapper, @samnemzer, and @sergeystefoglo_ for help with this piece.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
muhamedshafaan · 4 years ago
Text
Common HTTP Errors
Status Codes
Behind every error page you see on the web there is an HTTP status code sent by the web server. Status codes come in the format of 3-digit numbers. The first digit marks the class of the status code:
1XX status codes have informational purposes
2XX indicates success
3XX is for redirection
None of these three classes results in an HTML error page, as in these cases the client knows what to do and goes on with the task without hesitation. What we usually see are the 4XX and 5XX kinds:
4XX represent client-side errors
5XXs indicate problems on the server side
HTML error pages are displayed in these cases because the client has no idea about how to move on.
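Since the class is carried entirely by the first digit, a tiny helper can classify any code; a minimal sketch in Python:

def status_class(code):
    # The first digit of the 3-digit code marks its class
    return {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",   # the browser shows an error page
        5: "server error",   # likewise
    }.get(code // 100, "unknown")

print(status_class(404))  # client error
print(status_class(503))  # server error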
Client-side errors (4xx)
1) 400 - Bad Request
Whenever the client sends a request the server is unable to understand, the 400 Bad Request error page shows up. It usually happens when the data sent by the browser doesn’t respect the rules of the HTTP protocol, so the web server is clueless about how to process a request containing a malformed syntax.
When you see a 400 error page, the reason is most likely that there's something unstable on the client side: a not sufficiently protected operating system, an unstable internet connection, a defective browser, or a caching problem. So it's always a good idea to test your own PC a bit before you contact the owner of the website.
Open the same webpage in a different browser, clear the cache, and check whether you are due for security updates. If you regularly meet the 400 error on different sites, your PC or Mac is awaiting a thorough security checkup.
2) 401 - Authorization Required
When there’s a password-protected web page behind the client’s request, the server responds with a 401 Authorization Required code. 401 doesn’t return a classical error message at once, but a popup that asks the user to provide a login-password combination. If you have the credentials, everything is all right, and you can go on without any problem and get access to the protected site. Otherwise you are redirected to the Authorization Required error page.
3) 403 - Forbidden
You can encounter the 403 Forbidden error page when the server understands the client's request clearly, but for some reason refuses to fulfill it. This is neither a malformation nor an authorization problem. By returning the 403 status code, the server basically rejects the client with a big loud "No" without any explanation.
The most common reason is that the website owner doesn’t permit visitors to browse the file directory structure of the site. When this kind of protection is enabled you can’t access folders directly on the website. The other frequent reason is that the specific file the client requested doesn’t have the permission to be viewed from the web.
You can set 403 protection for security reasons on your own site. It can be useful to harden your site against being hacked by hiding the directory structure or files that contain vulnerable information.
4) 404 - Not Found
404 is the most well-known HTTP status code out there, and you have surely read many great posts about how to customize 404 pages. The browser returns a 404 HTML page when the server doesn’t find anything on the requested location.
There are two main scenarios that can result in a 404 Not Found page. Either the visitor mistyped the URL, or the permalink structure of the site has been changed and the incoming links point to pages that were moved to different locations. 404 error pages sometimes can appear on top level URLs too. It usually happens when a site has recently moved to another web server and the DNS still points to the old location. This kind of problem usually disappears after a short time.
You may want to reduce the number of your 404s because they increase the bounce rate (people who leave immediately) of your site. The most common solution for this is using 301 redirects for permanently removed pages, and 302s for those that are temporarily unavailable.
5) 408 - Request Time-Out
When the request of the client takes too long, the server times out, closes the connection, and the browser displays a 408 Request Time-Out error message. The time-out happens because the server didn’t receive a complete request from the client within the timeframe it was prepared to wait. Persistent 408 errors can occur because of the heavy workload on either the server or on the client’s system.
Bigger websites tend to customize 408 error pages just like most of you do in the case of 404s. 408 errors can usually be fixed by reloading the page with the help of the F5 button.
6) 410 - Gone
The 410 Gone error page is very close to the well-known 404. Both mean that the server doesn’t find the requested file, but while 404 suggests that the target file may be available somewhere on the server, 410 indicates a permanent condition.
410 shows the client that the resource was made intentionally unavailable, and the website owner wants incoming links to be removed from the Web. 404 is used when the server is unsure if the unavailability of the file is permanent, but 410 always indicates a complete certainty.
Server errors (5xx)
7) 500 - Internal Server Error
Internal Server Error is the most well-known server error, as it's used whenever the server encounters an unexpected condition that prevents it from fulfilling the client's request. The 500 error code is a generic one; it's returned when no other server-side 5XX error codes make any sense.
Although in this case the problem is not on your end, you can do some things to resolve it such as reload the page (as the error may be temporary), clear your browser’s cache (as the issue may occur with the cached version of the site), and delete your browser’s cookies and restart the browser.
8) 502 - Bad Gateway
The 502 error message represents a communication problem between two servers. It occurs when the client connects to a server acting as a gateway or a proxy that needs to access an upstream server that provides additional service to it. The other server is located higher in the server hierarchy. It can be for example an Apache web server that’s accessed by a proxy server, or the name server of a large internet service provider that’s accessed by a local name server.
When you encounter the Bad Gateway error page, the server has received an invalid response from an upstream server.
9) 503 - Service Temporarily Unavailable
You see the Service Temporarily Unavailable (sometimes Out of Resources) message any time there’s a temporary overload on the server, or when it’s going through a scheduled maintenance. The 503 error code means that the web server is currently not available. This is usually a temporary condition that will be resolved after some delay.
10) 504 - Gateway Timeout
There is a server-server communication problem behind the Gateway Time-Out error message, just like behind the 502 Bad Gateway error code. When the 504 status code is returned there’s also a higher-level server in the background that is supposed to send data to the server that is connected to our client. In this case the lower-level server doesn’t receive a timely response from the upstream server it accessed.
This is the same time-out problem that occurs in case of the 408 Request Time-Out status code, but here it doesn’t happen between the client and the server but between two servers in the back end. The Gateway Time-Out error page usually indicates slow communication between the two servers, and it can also happen that the higher-level server is completely down.
https://transorze.com/
xhostcom · 5 years ago
Text
Page Redirects in WordPress or ClassicPress
Page redirects in WordPress or ClassicPress are not the most straightforward topic if you are dealing with them for the first time. Many people have heard of page redirects before but aren't always sure when you need to use them or how to implement them. These are sometimes needed when maintaining a WordPress or WooCommerce site. In the following blog post, you will learn everything you need to know about page redirects (in WordPress and otherwise). This includes what they are and why they matter, when to use what type of redirect, where to apply them, and different ways of correctly implementing page redirects on your WordPress site. So to start, let's take a look at what they are.
What Are Page Redirects and Why Do You Need Them?
Page redirects are basically like a send-on notice for the post office. When you move, you can get one of those, and any mail that was sent to your old house will automatically be delivered to your new mailing address. Redirects are the same thing but for web pages, only that, instead of letters and parcels, they send visitors and search spiders to another web address.
Implementing page redirects can be necessary for many reasons:
A mistake in your title and URL that you want to correct
Attempting to add/target a different keyword with your page
The entire permalink structure of your site has changed
Some external link is pointing to the wrong address and you want visitors to find the right page
You want to change parts of your URL, like remove www or switch to HTTPS (or both)
You have moved to an entirely new domain (or merged another site with yours) and want the traffic and SEO value of the old URL to land on the new one
Why Do They Matter?
From the above list, it's probably already obvious why page redirects are a good idea. Of course, if your entire site moves, you don't want to start from scratch but instead benefit from the traffic and links you have already built. However, even if you only change one page, implementing a redirect makes sense.
That's because having non-existent pages on your site is bad for both visitors and search engine optimization. When someone tries to visit them, they will see a 404 error page. This is not a pleasant experience and usually very annoying (as entertaining as 404 pages can be). Because of that, search engines are also not a big fan of this kind of error and might punish you for it. Also, you want them to understand your site structure and index it correctly, don't you? Therefore, it's a good idea to leave a "this page no longer exists, please have a look over here" message whenever necessary.
Different Redirect Codes and What They Mean
When talking about redirects, you need to know that there are several different types. These are categorized by the HTTP codes that they have been assigned, similar to the aforementioned 404 error code for a missing page. However, redirects are all in the 300 category:
301: This is the most common kind. It means that a page has moved permanently and the new version can from now on be found at another location. This page redirect passes on 90-99 percent of SEO value.
302: This means a page has moved temporarily. The original URL is currently not available but will come back, and you can use the new address in the meantime. It passes no link value.
303: Only used for form submissions to stop users from re-submitting when someone uses the browser back button. This is probably not relevant to you unless you are a developer.
307: The same as a 302 but for HTTP 1.1. It means something has been temporarily moved.
308: The permanent version of the 307.
When to Use What?
Of course, the biggest question is, when to use which type of page redirect? While there are several options, you usually only need two of them: 301 and 302. Of those, probably more than 90 percent of the time you will use a 301. That's because for the rest (except 303), it's not always clear how search engines handle them, so you basically stick to those two options.
As for when to use which, much of it you can already understand from what the code tells the browser or search spider. However, here's a detailed description:
301: Use this when you are planning on deleting a page and want to point visitors to another relevant URL, or when you want to change a page's permalink (including the domain).
302: Use this, for example, when making changes to a page that visitors are not supposed to see, or when you redirect them to a temporary sales page that will soon turn back into the original. That way, search engines won't de-index the existing page.
Redirects and Page Speed
While page redirects are great tools for webmasters and marketers, the downside is that they can have an effect on page speed. As you can imagine, they represent an extra step in the page loading process. While that's not much, in a world where visitors expect page load times of mere seconds, it matters. In addition, page redirects use up crawl budget from search engines, so you can potentially keep them from discovering your whole site by having too many of them. Therefore, here are some important rules for their usage:
Avoid redirect chains: This means several hops from an old to a new page. This is especially important when you redirect http to https and www to non-www. These should all resolve to the same domain directly (https://domain.com), not ping-pong from one to the next.
Don't redirect links that are in your control: This means, if there is a faulty link inside a menu, inline, or similar, change it manually. Don't be lazy.
Try to correct external links: If the fault is with an incoming link, consider reaching out to the originator and asking them to correct it on their end.
In essence, keep page redirects to a minimum. To see if you have multiple redirects in place, you can use the Redirect Mapper.
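If you want a quick scripted alternative to the Redirect Mapper, a few lines of Python (using the requests library; the URL is a placeholder) will print every hop in a chain:

import requests

url = "http://www.example.com"  # placeholder
resp = requests.get(url)

# Each entry in history is one hop; more than one means a redirect chain
for hop in resp.history:
    print(hop.status_code, "->", hop.headers.get("Location"))
print(resp.status_code, resp.url)  # where the chain finally lands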
How to Find Pages to Redirect and Prepare the Right URLs
So, besides when you do a site or page move, how do you find pages to redirect? A good place to start is the 404 errors/crawl errors in Google Search Console. You find them under Coverage. Note that Search Console now only shows 404 errors that keep your pages from being indexed, not all of them. Therefore, to track down non-existent pages, you can also use a crawler like Screaming Frog. Some of the WordPress plugins below also help you with that. Additionally, you can take a look at SEMrush, an SEO management tool which is very popular and used by many experts and beginners alike; you can get a free trial via the link above.
Then, to prepare your page redirects:
Get the correct to and from URL: This means to stay consistent in the format. For example, if you are using a trailing slash, do it for both URLs. Also, always redirect to the same website version, meaning your preferred domain including www/non-www, http/https, etc.
Get the slug, not the URL: This means /your-page-slug instead of http://yoursite.com/your-page-slug. This way, you make your redirects immune to any changes to the top-level domain such as switching from www to non-www or from http to https.
Redirect to relevant pages: Meaning similar in topic and intent. Don't just use the homepage or something else; try to anticipate search intent and how you can further serve it.
How to Correctly Implement Page Redirects in WordPress
You have different methods of implementing page redirects in WordPress. Basically, you can either use a plugin or do it (somewhat) manually via .htaccess. Both come with pros and cons:
Plugin: Easy to use and nontechnical; however, potentially slower because many of them use wp_redirect, which can cause performance issues.
.htaccess: This is a server file and very powerful. For example, you can include directives for using gzip compression in it. Using this is faster because page redirects are set up at the server level, not somewhere above it. However, making a mistake can mess up and/or disable your entire site.
Let's go over both options:
1. Using a Plugin
You have different plugin options for redirects in WordPress. Among them are:
Redirection: This is the most popular solution in the WordPress directory. It can redirect via Core, .htaccess, and Nginx server redirects.
Simple 301 Redirects: Easy to use, few options, does just what you need and nothing more.
Safe Redirect Manager: With this plugin, you can choose which redirect code you want to use (remember what we talked about earlier!). It also only redirects to white-listed hosts for additional security.
Easy Redirect Manager: Suitable for 301 and 302 redirects. The plugin is well designed and comes with many options.
All of the plugins work in a very similar way. They provide you with an interface where you can enter a URL to redirect and where it should lead instead.
Some of them, like the Redirection plugin, also have additional functionality. For example, this plugin also tracks whenever a visitor lands on a page that doesn't exist, so you can set up appropriate page redirects.
2. Using .htaccess
.htaccess usually resides on your server inside your WordPress installation. You can access it by dialing in via FTP.
Be aware though that it is hidden by default, so you might have to switch on the option to show hidden files in your FTP client of choice.
The first thing you want to do is download and save it in a safe place so you have a copy of your old file in case something goes wrong. After that, you can edit the existing file (or another local copy) with any text or code editor.
A simple redirect from one page on your site to another can be set up like this:
RewriteEngine On
Redirect 301 /old-blog-url/ /new-blog-url/
If the brackets already exist (as they should when you are using WordPress), all you need is this:
Redirect 301 /old-blog-url/ /new-blog-url/
Just be sure to include it right before the closing bracket. You can also use wildcards in redirects. For example, the code below is used to redirect all visitors from the www to the non-www version of a website.
RewriteCond %{HTTP_HOST} ^www.mydomain.com$
RewriteRule (.*) http://mydomain.com/$1
To explore more options, and if you don't want to write them out manually, there is this useful tool that creates redirect directives for you. When you are done, save/re-upload and you should be good to go. Be sure to test thoroughly!
Conclusion
Page redirects in WordPress can be very useful and have a very important function. They keep visitors and search engines from landing on non-existent pages and are, therefore, a matter of both usability and SEO. Above, you have learned all you need to know about their usage and how to implement them. You are now officially ready to start sending visitors and search spiders wherever you want.
Note that these aren't the only ways to implement page redirects. However, they are the most common and recommended. If you want to know less common ways, check this article on CSS Tricks.
What do you use to implement page redirects in WordPress? Any more tools or tips? Share them in the comments section below, and if you enjoyed this post, why not check out this article on WordPress Building Trends For 2020!
Post by Xhostcom Wordpress & Digital Services, subscribe to the newsletter for more!
infoskaterstech-blog · 5 years ago
Text
Why SEO Is Important
Digital Marketing Company Marathahalli
SEO stands for Search Engine Optimization. Search engine optimization is focused on acquiring website traffic and high search result rankings organically (without paying for web traffic). SEO practices and strategies are constantly changing to remain effective as internet search engines continuously update their algorithms. Infoskaters is the best Digital Marketing Company in Marathahalli, Bangalore.
There are various types of SEO
Technical SEO :
Technical SEO takes care of the technical parameters that affect the visibility of a website in search engines. The main job of technical SEO is to ensure that search engine crawlers can crawl and index a website without any problems.
On-Page SEO:
On-Page SEO is the practice of optimizing elements on the website to improve search engine ranking and visibility.
Fundamentals of On-Page SEO
On-Page Ranking Factors: On-page ranking factors make a huge impact on your page's ability to rank if optimised properly. The biggest on-page factor that affects page ranking is the content of the page.
Title Tag: A title tag is an HTML element that specifies the title of the web page. Title tags are displayed on search engine results pages as a clickable headline for a given result.
Meta Description: A meta description is an HTML attribute that provides a brief description of a web page.
Alt Text: Alt text is an HTML attribute used to describe the appearance and function of an image on a page.
Duplicate Content: Duplicate content is content that appears on the Internet more than one time.
Robots.txt: Robots.txt is a file webmasters create to instruct web robots how to crawl pages on their website.
Robots Meta Directives: Robots meta directives are pieces of code that provide crawlers with instructions for how to crawl and index a page.
Schema.org Markup: Schema.org is a semantic vocabulary of tags that you can add to your HTML to improve the way search engines read and represent your page in SERPs.
Page Speed: Page speed is often confused with site speed, which is actually the page speed for a sample of page views on a site. Page speed itself is usually described as page load time.
Domain: A domain name is a unique, human-readable Internet address for a website.
There are 3 types of domain:
Top-level domain,
The domain name itself (second-level domain), and
Subdomain.
URL (Uniform Resource Locator): A URL specifies the location of a resource on the web along with the protocol used to retrieve it, such as HTTP or HTTPS.
Redirects: Redirection is the process of forwarding one URL to a different URL. A redirect is a way to send both users and search engines to a different URL from the one originally requested (a quick way to check a URL's redirect type follows the list below).
There are 3 types of redirects
301, Moved permanently
302, Found or Moved temporarily
Meta refresh
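If you want to check which of these redirect types a given URL returns, a small script can help. Below is a minimal sketch using Python's requests library (the URL is just a placeholder):

import requests

# Ask for the URL without following redirects so the raw status code is visible
response = requests.get("https://example.com/old-page/", allow_redirects=False)

print(response.status_code)               # e.g. 301, 302 or 200
print(response.headers.get("Location"))  # the redirect target, if any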
Content SEO:
Content SEO refers to creating content that helps your web page to rank high in the search engines.
Off-Page SEO:
Off-Page SEO refers to actions taken outside of your website to impact your rankings within the search engine results page.
Fundamentals of Off-Page SEO:
1. Internal Links: An internal link is one that points to another page on the same website.
2. External Links: If another website links to you, this is considered an external link to your site.
3. Anchor Text: Anchor text is the visible, clickable text in a hyperlink. It is often blue and underlined.
4. Backlinks: Backlinks, also called inbound links or incoming links, are created when one website links to another.
5. Link Equity: Link equity is a search engine ranking factor based on the idea that certain links pass value and authority from one web page to another.
Why Off-Page SEO is important?
While search algorithms and ranking factors are constantly changing, the consensus within the SEO community is that the relevance and authority that effective off-page SEO affords a website still play a major role in a page's ability to rank.
Local SEO:
Local Search Engine Optimization, also referred to as local search engine marketing, is an effective way to market a local business online, as it helps you promote your products and services to local customers.
E-commerce SEO:
E-commerce SEO is the process of making your online store more visible in the search engine result pages.
seo1code-blog · 7 years ago
Text
What mistakes does the Hreflang Testing Tool look for?
There are many mistakes you could make when implementing Hreflang tags on your website. Via the online Hreflang testing tool, we try to catch as many of them as we can. Here’s a list, complete with how to fix each type of problem.
Page-level errors
Some errors can be noticed simply by looking at an individual page. These are:
Broken pages: It is not uncommon that we crawl the pages in a sitemap and find some that are broken e.g. 404 page not found errors, or 301/302 redirects, or even pages that are completely blank or do not have HTML markup.
An Hreflang tag with broken markup. e.g. a <link> tag with missing href attribute
Incorrect Hreflang: Acceptable values for the language code used in Hreflang attributes must be in ISO 639-1 format, optionally followed by a region code in ISO 3166-1 Alpha 2 format. e.g. while “en-US” is correct, “en-UK” is incorrect. The correct value for the UK is actually “en-GB” (a quick validation sketch follows this list).
Page not linking to itself: When you implement Hreflang tags on a page (say Page A), you obviously want to include <link>s to the version of page A in other languages. So you link to pages B, C and D. But search engine guidelines specify that page A must also link to itself (specifying the language used on that page, of course).
Missing x-default: Another guideline from Google is that an “x-default” must be included as the default page to be shown to users whose language is not among the languages that you have pages for. Usually this is English, and usually it’s the page that is in the XML sitemap.
Same page, multiple languages: Sometimes when Hreflang tags are incorrectly implemented, all (or multiple) language versions point to the same page. (see example here). You will see this error if two different languages — say en and fr — point to the same page. However, if the two hreflang attributes both use the same high-level language — say en-US and en-GB — then they can point to the same page and it will not throw an error.
Duplicate (or multiple different) Hreflang tags for the same language
HTML lang attribute does not match hreflang: The “lang” attribute of the <html> tag on the page is different from the “hreflang” attribute for that page in the <link> tag. This error is usually because of a CMS (content management system) template problem. The <html> tag has an optional lang attribute to specify what language this page is in. This tag is generated by the back-end CMS and most marketers don’t pay any attention to it because it’s not an important SEO meta tag like robots, description or hreflang. Since all pages served by the CMS tend to use the same hard-coding for the lang attribute, we find that pages in German, French etc. — even if they have the correct hreflang attribute — continue to use <html lang="en">
Hreflang in HTML and HTTP headers: This is rare but some sites specify Hreflang tags in both their HTML and the HTTP headers returned by the URL. Use only one and keep it simple for yourself and for search engines.
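To make the "Incorrect Hreflang" rule above concrete, here is a minimal validation sketch; the code sets are small illustrative subsets rather than the full ISO lists, and real hreflang values can also include script codes that this sketch ignores:

import re

# Illustrative subsets only; a real checker would load the full ISO lists
ISO_639_1 = {"en", "fr", "de", "es"}
ISO_3166_1 = {"US", "GB", "FR", "DE"}

def is_valid_hreflang(value):
    """Accept 'x-default', a bare language code, or language-REGION."""
    if value == "x-default":
        return True
    match = re.fullmatch(r"([a-z]{2})(?:-([A-Za-z]{2}))?", value)
    if not match:
        return False
    lang, region = match.group(1), match.group(2)
    if lang not in ISO_639_1:
        return False
    return region is None or region.upper() in ISO_3166_1

print(is_valid_hreflang("en-GB"))  # True
print(is_valid_hreflang("en-UK"))  # False: UK is not an ISO 3166-1 code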
Errors related to a set of pages
Other types of errors require you to take a look at a set of pages that all have the same content. All pages in the same set have the same content, just in different languages. That is why they are grouped into a set and the set is examined collectively. All such pages should point to each other (and to themselves). What’s more, they should point to the canonical version of each other. The errors we look for are:
Pages not linking to each other (aka no return tags or missing return tags): All pages in a set must link to each other (and to themselves). Sometimes we see the default page (say Page A) linking to pages B, C and D, but each of those only links back to page A. That is a mistake. The correct way to implement it is to have the exact same Hreflang tags on all pages in a set; remembering this will greatly simplify your implementation (see the sketch after this list). This error is explained in detail in this blog post.
Not linking to the canonical version: You have a set of pages all linking to each other. Wonderful! But sometimes when we crawl these pages, we discover that a page specifies that its canonical version is different from the URL that was in the Hreflang tags (or in the sitemap you are testing). This is a mistake because when you are dealing with search engines, you only want to specify the canonical version of a URL, both in your sitemap and in any hreflang tags. All other versions of that page (that point to the canonical version) are discovered by the search engine crawler when it is spidering the web (on your website or from outside). But you do not want to include non-canonical versions of a page in your sitemap, or any structured data that you provide to search engines (like hreflang tags).
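As flagged above, here is a rough sketch of the return-tag check, assuming the hreflang pairs have already been extracted from each page; the toy data is illustrative:

def check_return_tags(tag_sets):
    """tag_sets maps each URL in a set to the (language, href) pairs found on it.
    All pages in a set should carry the exact same annotations."""
    reference = next(iter(tag_sets.values()))
    for url, tags in tag_sets.items():
        if tags != reference:
            print(f"Missing or extra return tags on {url}")

# Toy example: the French page is missing its own French annotation
check_return_tags({
    "https://example.com/page-a.html": {("en", "/page-a.html"), ("fr", "/fr/page-a.html")},
    "https://example.com/fr/page-a.html": {("en", "/page-a.html")},
})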
Other Errors
Other errors we check for are a byproduct of crawling the pages supplied. These are not related specifically to Hreflang:
Invalid (mal-formed) canonical URL
More than 1 canonical URL specified for a given page
Further Reading
SEO consultant Aleyda Solis also has a write-up about the most common Hreflang mistakes she encounters.
sagar-jaybhay · 5 years ago
Text
How to handle 404 Not Found Error in Asp.Net core?
New Post has been published on https://is.gd/vGHPar
How to handle 404 Not Found Error in Asp.Net core?
Different Ways to handle status code errors in asp.net core by sagar jaybhay
If a resource is not found for a specific value or ID.
In this case, we need to redirect the user to a custom error page where they can see an error message saying that no information was found for the corresponding value; in our scenario, that no student was found for the given student ID. That is, we need to display the error message in a way that the end user understands that whatever they are looking for is not on the server.
In our Home controller's Details method, we check whether any student exists in the database for the corresponding student ID; if the result is null, we redirect the user to a custom error page.
Code in controller
public ViewResult Details(int id)
{
    HmDetailsVM hmDetailsVM = new HmDetailsVM();
    hmDetailsVM.student = _repository.GetStudents(id);
    if (hmDetailsVM.student == null)
    {
        return View("ErrorPage", id);
    }
    hmDetailsVM.DivisonOfStudent = "9-A";
    ViewBag.TitleNew = "Student Info";
    return View(hmDetailsVM);
}
Code in Html View:
@model int
@{
    ViewData["Title"] = "ErrorPage";
}
<h1>Custom Error Page</h1>
<div class="row alert-danger">
    The student you are looking for was not found on the server. StudentID: @Model
</div>
The URL does not match any route, meaning whatever URL the user entered is not handled by any controller or route mapping. For example, if someone enters a URL like https://domain.com/something/unexpected that is not present on your domain, we need to handle it.
In this case, we are going to handle the 404 Page Not Found error in a centralized way. Below are the 3 middlewares used to handle status code errors.
We know that the Configure method in the Startup class handles our HTTP request processing pipeline.
UseStatusCodePages
This middleware provides a default response handler for status codes 400 to 599 when the response has no body. We rarely use it in real-world applications because it produces only a simple, text-only response. In production scenarios we need to display a custom error page, so UseStatusCodePagesWithRedirects and UseStatusCodePagesWithReExecute are used instead.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    DeveloperExceptionPageOptions pageOptions = new DeveloperExceptionPageOptions
    {
        SourceCodeLineCount = 10
    };

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage(pageOptions);
    }
    else
    {
        app.UseStatusCodePages();
    }

    app.UseStaticFiles();
    app.UseHsts();
    app.UseMvc(routes =>
    {
        routes.MapRoute(name: "default", template: "{controller=Home}/{action=Index}/{id?}");
    });
}
Output of UseStatusCodePages
UseStatusCodePagesWithRedirects
When a request does not produce a success response, this middleware redirects the user to a custom error page, whose URL we pass as the method parameter (typically a controller name and action name). So if any URL doesn't match, the request is redirected to the custom error page.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    DeveloperExceptionPageOptions pageOptions = new DeveloperExceptionPageOptions
    {
        SourceCodeLineCount = 10
    };

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage(pageOptions);
    }
    else
    {
        app.UseStatusCodePagesWithRedirects("/Error/StatusCodeHandle");
    }

    app.UseStaticFiles();
    app.UseHsts();
    app.UseMvc(routes =>
    {
        routes.MapRoute(name: "default", template: "{controller=Home}/{action=Index}/{id?}");
    });
}
Now, this is simple, but if we want to show different messages based on the status code, we need to modify the above method a bit, as follows.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    DeveloperExceptionPageOptions pageOptions = new DeveloperExceptionPageOptions
    {
        SourceCodeLineCount = 10
    };

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage(pageOptions);
    }
    else
    {
        app.UseStatusCodePagesWithRedirects("/Error/{0}");
    }

    app.UseStaticFiles();
    app.UseHsts();
    app.UseMvc(routes =>
    {
        routes.MapRoute(name: "default", template: "{controller=Home}/{action=Index}/{id?}");
        //routes.MapRoute(name: "default", template: "sagar/{controller=Home}/{action=Index}/{id?}");
    });
}
And the controller code looks like this:
[Route("Error/StatusCode")] public IActionResult StatusCodeHandle(int statusCode) switch (statusCode) case 404: ViewBag.ErrorMessasge = $"I am having statusCode" +" Error code Message"; break; return View(statusCode);
Steps to handle 404 Error
1) Include the status code pages middleware in the Startup class's Configure method.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    DeveloperExceptionPageOptions pageOptions = new DeveloperExceptionPageOptions
    {
        SourceCodeLineCount = 10
    };

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage(pageOptions);
    }
    else
    {
        app.UseStatusCodePagesWithRedirects("/Error/{0}");
    }

    app.UseStaticFiles();
    app.UseHsts();
    app.UseMvc(routes =>
    {
        routes.MapRoute(name: "default", template: "{controller=Home}/{action=Index}/{id?}");
        //routes.MapRoute(name: "default", template: "sagar/{controller=Home}/{action=Index}/{id?}");
    });
}
2) Implement an Error controller; that is, add an ErrorController to your project with the handler method.
3) Add a corresponding view for that Error controller's method.
What is the difference between UseStatusCodePagesWithRedirects and UseStatusCodePagesWithReExecute?
UseStatusCodePagesWithRedirects and UseStatusCodePagesWithReExecute are both static extension methods used to serve a custom error page with different messages based on the status code.
The key difference between them is that UseStatusCodePagesWithRedirects first sends a 302 response to the browser and then returns a 200 response for the error page. From the end user's perspective the output looks correct, but the way it is produced is semantically wrong: the server reports success even though an error occurred.
It also changes the URL we enter in our application: I hit the URL https://localhost:44387/foo/boo, but the resulting URL is completely different. See the image below and the responses present in the network tab.
As you can see in the image above, it issues a redirect, so the URL in the address bar changes. It also returns a success status code when an error actually occurred, which isn't semantically correct.
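You can observe this redirect-then-success behaviour from outside the app as well; for example, with a quick check using Python's requests library (the localhost URL is the one from the example above, and verify=False just skips the local development certificate):

import requests

r = requests.get("https://localhost:44387/foo/boo", verify=False)
print([resp.status_code for resp in r.history])  # e.g. [302] for the redirect
print(r.status_code)                             # 200 for the error page itself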
With UseStatusCodePagesWithReExecute the rendered response is the same, but when you inspect the responses from the server, it sends only a 404 response, and our URL is not changed.
UseStatusCodePagesWithReExecute re-executes the pipeline and returns the original 404 status code. Because it re-executes the pipeline rather than issuing a redirect request, it also preserves the original URL in the address bar.
nancydsmithus · 5 years ago
Text
Designing And Building A Progressive Web Application Without A Framework (Part 3)
Ben Frain
2019-07-30T14:00:00+02:00
Back in the first part of this series, we explained why this project came to be. Namely a desire to learn how a small web application could be made in vanilla JavaScript and to get a non-designing developer working his design chops a little.
In part two we took some basic initial designs and got things up and running with some tooling and technology choices. We covered how and why parts of the design changed and the ramifications of those changes.
In this final part, we will cover turning a basic web application into a Progressive Web Application (PWA) and ‘shipping’ the application before looking at the most valuable lessons learned by making the simple web application In/Out:
The enormous value of JavaScript array methods;
Debugging;
When you are the only developer, you are the other developer;
Design is development;
Ongoing maintenance and security issues;
Working on side projects without losing your mind, motivation or both;
Shipping some product beats shipping no product.
So, before looking at lessons learned, let’s look at how you turn a basic web application written in HTML, CSS, and JavaScript into a Progressive Web Application (PWA).
In terms of total time spent on making this little web-application, I’d guestimate it was likely around two to three weeks. However, as it was done in snatched 30-60 minute chunks in the evenings it actually took around a year from the first commit to when I uploaded what I consider the ‘1.0’ version in August 2018. As I’d got the app ‘feature complete’, or more simply speaking, at a stage I was happy with, I anticipated a large final push. You see, I had done nothing towards making the application into a Progressive Web Application. Turns out, this was actually the easiest part of the whole process.
Making A Progressive Web Application
The good news is that when it comes to turning a little JavaScript-powered app into a ‘Progressive Web App’ there are heaps of tools to make life easy. If you cast your mind back to part one of this series, you’ll remember that to be a Progressive Web App means meeting a set of criteria.
To get a handle on how your web-application measures up, your first stop should probably be the Lighthouse tools of Google Chrome. You can find the Progressive Web App audit under the ‘Audits’ tab.
This is what Lighthouse told me when I first ran In/Out through it.
Initial scores for Progressive Web App weren’t great. (Large preview)
At the outset In/Out was only getting a score of 55/100 for a Progressive Web App. However, I took it from there to 100/100 in well under an hour!
The expediency in improving that score was little to do with my ability. It was simply because Lighthouse told me exactly what was needed to be done!
Some examples of requisite steps: include a manifest.json file (essentially a JSON file providing metadata about the app), add a whole slew of meta tags in the head, switch out images that were inlined in the CSS for standard URL referenced images, and add a bunch of home screen images.
Making a number of home screen images, creating a manifest file and adding a bunch of meta tags might seem like a lot to do in under an hour but there are wonderful web applications to help you build web applications. How nice is that! I used https://app-manifest.firebaseapp.com. Feed it some data about your application and your logo, hit submit and it furnishes you with a zip file containing everything you need! From there on, it’s just copy-and-paste time.
Things I'd put off for some time due to lack of knowledge, like a Service Worker, were also added fairly easily thanks to numerous blog posts and sites dedicated to service workers like https://serviceworke.rs. With a service worker in place it meant the app could work offline, a requisite feature of a Progressive Web Application.
Whilst not strictly related to making the application a PWA, the 'coverage' tab of the Chrome Dev Tools was also very useful. After so much sporadic iteration on the design and code over months, it was useful to get a clear indication of where there was redundant code. I found a few old functions littering the codebase that I'd simply forgotten about!
In short order, having worked through the Lighthouse audit recommendations I felt like the teacher’s pet:
Lighthouse makes it easy to get good scores by telling you exactly what to change. (Large preview)
The reality is that taking the application and making it a Progressive Web Application was actually incredibly straightforward.
With that final piece of development concluded I uploaded the little application to a sub-domain of my website and that was it.
Retrospective
Months have passed since I parked development of my little web application.
I’ve used the application casually in the months since. The reality is much of the team sports organization I do still happens via text message. The application is, however, definitely easier than writing down who is in and out on a scrap of paper every game night.
So, the truth is that it’s hardly an indispensable service. Nor does it set any bars for development or design. I couldn’t tell you I’m 100% happy with it either. I just got to a point I was happy to abandon it.
But that was never the point of the exercise. I took a lot from the experience. What follows are what I consider the most important takeaways.
Design Is Development
At the outset, I didn’t value design enough. I started this project believing that my time spent sketching with a pad and pen or in the Sketch application, was time that could be better spent with coding. However, it turns out that when I went straight to code, I was often just being a busy fool. Exploring concepts first at the lowest possible fidelity, saved far more time in the long run.
There were numerous occasions at the beginning where hours were spent getting something working in code only to realize that it was fundamentally flawed from a user experience point of view.
My opinion now is that paper and pencil are the finest planning, design and coding tools. Every significant problem faced was principally solved with paper and a pencil; the text editor merely a means of executing the solution. Without something making sense on paper, it stands no chance of working in code.
The next thing I learned to appreciate, and I don’t know why it took so long to figure out, is that design is iterative. I’d sub-consciously bought into the myth of a Designer with a capital “D”. Someone flouncing around, holding their mechanical pencil up at straight edges, waxing lyrical about typefaces and sipping on a flat white (with soya milk, obviously) before casually birthing fully formed visual perfection into the world.
This, not unlike the notion of the ‘genius’ programmer, is a myth. If you’re new to design but trying your hand, I’d suggest you don’t get hung up on the first idea that piques your excitement. It’s so cheap to try variations so embrace that possibility. None of the things I like about the design of In/Out were there in the first designs.
I believe it was the novelist, Michael Crichton, who coined the maxim, “Books are not written — they’re rewritten”. Accept that every creative process is essentially the same. Be aware that trusting the process lessens the anxiety and practice will refine your aesthetic understanding and judgment.
You Are The Other Dev On Your Project
I’m not sure if this is particular to projects that only get worked on sporadically but I made the following foolhardy assumption:
“I don’t need to document any of this because it’s just me, and obviously I will understand it because I wrote it.”
Nothing could be further from the truth!
There were evenings when, for the 30 minutes I had to work on the project, I did nothing more than try to understand a function I had written six months ago. The main reason code re-orientation took so long was a lack of quality comments and poorly named variables and function arguments.
I’m very diligent in commenting code in my day job, always conscientious that someone else might need to make sense of what I’m writing. However, in this instance, I was that someone else. Do you really think you will remember how the block of code you wrote works in six months’ time? You won’t. Trust me on this, take some time out and comment that thing up!
I’ve since read a blog post entitled, Your syntax highlighter is wrong on the subject of the importance of comments. The basic premise being that syntax highlighters shouldn’t fade out the comments, they should be the most important thing. I’m inclined to agree and if I don’t find a code editor theme soon that scratches that itch I may have to adapt one to that end myself!
Debugging
When you hit bugs and you have written all the code, it’s not unfair to suggest the error is likely originating between the keyboard and chair. However, before assuming that, I would suggest you test even your most basic assumptions. For example, I remember taking in excess of two hours to fix a problem I had assumed was due to my code; in iOS I just couldn’t get my input box to accept text entry. I don’t remember why it hadn’t stopped me before but I do remember my frustration with the issue.
Turns out it was due to a, still yet to be fixed, bug in Safari. Turns out that in Safari if you have:
* { user-select: none; }
In your style sheet, input boxes won’t take any input. You can work around this with:
* { user-select: none; } input[type] { user-select: text; }
Which is the approach I take in my “App Reset” CSS reset. However, the really frustrating part of this was I had learned this already and subsequently forgotten it. When I finally got around to checking the WebKit bug tracking whilst troubleshooting the issue, I found I had written a workaround in the bug report thread more than a year ago complete with reduction!
Want To Build With Data? Learn JavaScript Array Methods
Perhaps the single biggest advance my JavaScript skills took by undergoing this app-building exercise was getting familiar with JavaScript Array methods. I now use them daily for all my iteration and data manipulation needs. I cannot emphasize enough how useful methods like map(), filter(), every(), findIndex(), find() and reduce() are. You can solve virtually any data problem with them. If you don’t already have them in your arsenal, bookmark https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array now and dig in as soon as you are able. My own run-down of my favored array methods is documented here.
ES6 has introduced other time savers for manipulating arrays, such as Set, Rest and Spread. Indulge me while I share one example; there used to be a bunch of faff if you wanted to remove duplicates from even a simple flat array. Not anymore.
Consider this simple example of an Array with the duplicate entry, “Mr Pink”:
let myArray = [ "Mr Orange", "Mr Pink", "Mr Brown", "Mr White", "Mr Blue", "Mr Pink" ];
To get rid of the duplicates with ES6 JavaScript you can now just do:
let deDuped = [...new Set(myArray)]; // deDuped logs ["Mr Orange", "Mr Pink", "Mr Brown", "Mr White", "Mr Blue"]
Something that used to require hand-rolling a solution or reaching for a library is now baked into the language. Admittedly, on such a short Array that may not sound like such a big deal but imagine how much time that saves when looking at arrays with hundreds of entries and duplicates.
Maintenance And Security
Anything you build that makes any use of NPM, even if just for build tools, carries the possibility of being vulnerable to security issues. GitHub does a good job of keeping you aware of potential problems but there is still some burden of maintenance.
For something that is a mere side-project, this can be a bit of a pain in the months and years that follow active development.
The reality is that every time you update dependencies to fix a security issue, you introduce the possibility of breaking your build.
For months, my package.json looked like this:
{ "dependencies": { "gulp": "^3.9.1", "postcss": "^6.0.22", "postcss-assets": "^5.0.0" }, "name": "In Out", "version": "1.0.0", "description": "simple utility to see who’s in and who’s out", "main": "index.js", "author": "Ben Frain", "license": "MIT", "devDependencies": { "autoprefixer": "^8.5.1", "browser-sync": "^2.24.6", "cssnano": "^4.0.4", "del": "^3.0.0", "gulp-htmlmin": "^4.0.0", "gulp-postcss": "^7.0.1", "gulp-sourcemaps": "^2.6.4", "gulp-typescript": "^4.0.2", "gulp-uglify": "^3.0.1", "postcss-color-function": "^4.0.1", "postcss-import": "^11.1.0", "postcss-mixins": "^6.2.0", "postcss-nested": "^3.0.0", "postcss-simple-vars": "^4.1.0", "typescript": "^2.8.3" } }
And by June 2019, I was getting these warnings from GitHub:
Keeping dependencies listed on GitHub means infrequent security warnings. (Large preview)
None were related to plugins I was using directly, they were all sub-dependencies of the build tools I had used. Such is the double-edged sword of JavaScript packages. In terms of the app itself, there was no problem with In/Out; that wasn’t using any of the project dependencies. But as the code was on GitHub, I felt duty-bound to try and fix things up.
It’s possible to update packages manually, with a few choice changes to the package.json. However, both Yarn and NPM have their own update commands. I opted to run yarn upgrade-interactive which gives you a simple means to update things from the terminal.
Yarn makes upgrading project dependencies a little more predictable. (Large preview)
Seems easy enough, there’s even a little colored key to tell you which updates are most important.
You can add the --latest flag to update to the very latest major version of the dependencies, rather than just the latest patched version. In for a penny…
Trouble is, things move fast in the JavaScript package world, so updating a few packages to the latest version and then attempting a build resulted in this:
Gulp build error (Large preview)
As such, I rolled back my package.json file and tried again, this time without the --latest flag. That solved my security issues. Not the most fun I’ve had on a Monday evening, though, I’ll be honest.
That touches on an important part of any side project. Being realistic with your expectations.
Side Projects
I don’t know if you are the same but I’ve found that a giddy optimism and excitement makes me start projects and if anything does, embarrassment and guilt makes me finish them.
It would be a lie to say the experience of making this tiny application in my spare time was fun-filled. There were occasions I wish I’d never opened my mouth about it to anyone. But now it is done I am 100% convinced it was worth the time invested.
That said, it’s possible to mitigate frustration with such a side project by being realistic about how long it will take to understand and solve the problems you face. Only have 30 mins a night, a few nights a week? You can still complete a side project; just don’t be disgruntled if your pace feels glacial. If things can’t enjoy your full attention be prepared for a slower and steadier pace than you are perhaps used to. That’s true, whether it’s coding, completing a course, learning to juggle or writing a series of articles of why it took so long to write a small web application!
Simple Goal Setting
You don’t need a fancy process for goal setting. But it might help to break things down into small/short tasks. Things as simple as ‘write CSS for drop-down menu’ are perfectly achievable in a limited space of time. Whereas ‘research and implement a design pattern for state management’ is probably not. Break things down. Then, just like Lego, the tiny pieces go together.
Thinking about this process as chipping away at the larger problem, I’m reminded of the famous Bill Gates quote:
“Most people overestimate what they can do in one year and underestimate what they can do in ten years.”
This from a man that’s helping to eradicate Polio. Bill knows his stuff. Listen to Bill y’all.
Shipping Something Is Better Than Shipping Nothing
Before ‘shipping’ this web application, I reviewed the code and was thoroughly disheartened.
Although I had set out on this journey from a point of complete naivety and inexperience, I had made some decent choices when it came to how I might architect (if you’ll forgive so grand a term) the code. I’d researched and implemented a design pattern and enjoyed everything that pattern had to offer. Sadly, as I got more desperate to conclude the project, I failed to maintain discipline. The code as it stands is a real hodge-podge of approaches and rife with inefficiencies.
In the months since I’ve come to realize that those shortcomings don’t really matter. Not really.
I’m a fan of this quote from Helmuth von Moltke.
“No plan of operations extends with any certainty beyond the first contact with the main hostile force.”
That’s been paraphrased as:
“No plan survives first contact with the enemy”.
Perhaps we can boil it down further and simply go with “shit happens”?
I can summarise my coming to terms with what shipped via the following analogy.
If a friend announced they were going to try and run their first marathon, them getting over the finish line would be all that mattered — I wouldn’t be berating them on their finishing time.
I didn’t set out to write the best web application. The remit I set myself was simply to design and make one.
More specifically, from a development perspective, I wanted to learn the fundamentals of how a web application was constructed. From a design point of view, I wanted to try and work through some (albeit simple) design problems for myself. Making this little application met those challenges and then some. The JavaScript for the entire application was just 5KB (gzipped). A small file size I would struggle to get to with any framework. Except maybe Svelte.
If you are setting yourself a challenge of this nature, and expect at some point to ‘ship’ something, write down at the outset why you are doing it. Keep those reasons at the forefront of your mind and be guided by them. Everything is ultimately some sort of compromise. Don’t let lofty ideals paralyze you from finishing what you set out to do.
Summary
Overall, as it comes up to a year since I have worked on In/Out, my feelings fall broadly into three areas: things I regretted, things I would like to improve/fix and future possibilities.
Things I Regretted
As already alluded to, I was disappointed I hadn’t stuck to what I considered a more elegant method of changing state for the application and rendering it to the DOM. The observer pattern, as discussed in the second part of this series, which solved so many problems in a predictable manner was ultimately cast aside as ‘shipping’ the project became a priority.
I was embarrassed by my code at first but in the following months, I have grown more philosophical. If I hadn’t used more pedestrian techniques later on, there is a very real possibility the project would never have concluded. Getting something out into the world that needs improving still feels better than it never being birthed into the world at all.
Improving In/Out
Beyond choosing semantic markup, I’d made no affordances for accessibility. When I built In/Out I was confident with standard web page accessibility but not sufficiently knowledgeable to tackle an application. I’ve done far more work/research in that area now, so I’d enjoy taking the time to do a decent job of making this application more accessible.
The implementation of the revised design of ‘Add Person’ functionality was rushed. It’s not a disaster, just a bit rougher than I would like. It would be nice to make that slicker.
I also made no consideration for larger screens. It would be interesting to consider the design challenges of making it work at larger sizes, beyond simply making it a tube of content.
Possibilities
Using localStorage worked for my simplistic needs but it would be nice to have a ‘proper’ data store so it wasn’t necessary to worry about backing up the data. Adding log-in capability would also open up the possibility of sharing the game organization with another individual. Or maybe every player could just mark whether they were playing themselves? It’s amazing how many avenues to explore you can envisage from such simple and humble beginnings.
SwiftUI for iOS app development is also intriguing. For someone who has only ever worked with web languages, at first glance, SwiftUI looks like something I’m now emboldened to try. I’d likely try rebuilding In/Out with SwiftUI — just to have something specific to build and compare the development experience and results.
And so, it’s time to wrap things up and give you the TL;DR version of all this.
If you want to learn how something works on the web, I’d suggest skipping the abstractions. Ditch the frameworks, whether that’s CSS or JavaScript, until you really understand what they are doing for you.
Design is iterative, embrace that process.
Solve problems in the lowest fidelity medium at your disposal. Don’t go to code if you can test the idea in Sketch. Don’t draw it in Sketch if you can use pen and paper. Write out logic first. Then write it in code.
Be realistic but never despondent. Developing a habit of chipping away at something for as little as 30 minutes a day can get results. That fact is true whatever form your quest takes.
(dm, il, ra)
jerryj1a · 6 years ago
Text
Using Python to recover SEO site traffic (Part two)
Automating the process of narrowing down site traffic issues with Python gives you the opportunity to help your clients recover fast. This is the second part of a three-part series. In part one, I introduced our approach to nail down the pages losing traffic. We call it the “winners vs losers” analysis. If you have a big site, reviewing individual pages losing traffic as we did on part one might not give you a good sense of what the problem is. So, in part two we will create manual page groups using regular expressions. If you stick around to read part three, I will show you how to group pages automatically using machine learning. You can find the code used in part one, two and three in this Google Colab notebook. Let’s walk over part two and learn some Python.
Incorporating redirects
As the site we're analyzing moved from one platform to another, the URLs changed, and a decent number of redirects were put in place. In order to track winners and losers more accurately, we want to follow the redirects from the first set of pages. We were not really comparing apples to apples in part one. If we want to get a fully accurate look at the winners and losers, we'll have to try to discover where the source pages are redirecting to, then repeat the comparison.

1. Python requests

We'll use the requests library, which simplifies web scraping, to send an HTTP HEAD request to each URL in our Google Analytics data set, and if it returns a 3xx redirect, we'll record the ultimate destination and re-run our winners and losers analysis with the correct, final URLs. HTTP HEAD requests speed up the process and save bandwidth, as the web server returns only headers, not full HTML responses. Below are the two functions we'll use to do this (a minimal sketch of them follows this walkthrough). The first function takes in a single URL and returns the status code and any resulting redirect location (or None if there isn't a redirect). The second function takes in a list of URLs and runs the first function on each of them, saving all the results in a list.

View the code on Gist.

This process might take a while (depending on the number of URLs). Please note that we introduce a delay between requests because we don't want to overload the server and potentially cause it to crash. We also only check for the valid redirect status codes 301, 302 and 307. It is not wise to check the full 3xx range, as, for example, 304 means the page didn't change. Once we have the redirects, however, we can repeat the winners and losers analysis exactly as before.

2. Using combine_first

In part one we learned about different join types. We first need to do a left merge/join to append the redirect information to our original Google Analytics data frame while keeping the data for rows with no URLs in common. To make sure that we use either the original URL or the redirect URL if it exists, we use another data frame method called combine_first() to create a true_url column. For more information on exactly how this method works, see the combine_first documentation. We also extract the path from the URLs and format the dates to Python DateTime objects.

View the code on Gist.

3. Computing totals before and after the switch

View the code on Gist.

4. Recalculating winners vs losers

View the code on Gist.

5. Sanity check

View the code on Gist.

This is what the output looks like.
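As mentioned in step 1, the full functions live in the Gist; a minimal sketch of that approach might look like the snippet below (the function names and the one-second delay are illustrative choices, not the exact ones used):

import time
import requests

REDIRECT_CODES = {301, 302, 307}

def get_redirect(url):
    """Send a HEAD request and return (status_code, redirect_location or None)."""
    response = requests.head(url, allow_redirects=False)
    location = None
    if response.status_code in REDIRECT_CODES:
        location = response.headers.get("Location")
    return response.status_code, location

def get_redirects(urls, delay=1):
    """Run get_redirect over a list of URLs, pausing between requests."""
    results = []
    for url in urls:
        status, location = get_redirect(url)
        results.append((url, status, location))
        time.sleep(delay)  # avoid overloading the server
    return results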
Using regular expressions to group pages
Many websites have well-structured URLs that make their page types easy to parse. For example, a page with any one of the following paths is pretty clearly a paginated category page:

/category/toys?page=1
/c/childrens-toys/3/

Meanwhile, a path structure like the paths given below might be a product page:

/category/toys/basketball-product-1.html
/category/toys/p/action-figure.html

We need a way to categorize these pages based on the structure of the text contained in the URL. Luckily this type of problem (that is, examining structured text) can be tackled very easily with a “Domain Specific Language” known as Regular Expressions, or “regex”. Regex expressions can be extremely complicated, or extremely simple. For example, the following regex query (written in Python) would allow you to find the exact phrase “find me” in a string of text:

regex = r"find me"

Let's try it out real quick:

import re

text = "If you can find me in this string of text, you win! But if you can't find me, you lose"
regex = r"find me"
print("Match index", "\tMatch text")
for match in re.finditer(regex, text):
    print(match.start(), "\t\t", match.group())

The output should be:

Match index     Match text
11          find me
69          find me

Grouping by URL

Now we make use of a slightly more advanced regex expression that contains a negative lookahead. Fully understanding the following regex expressions is left as an exercise for the reader, but suffice it to say we're looking for “Collection” (aka “category”) pages and “Product” pages. We create a new column called “group” where we label any rows whose true_url matches our regex string accordingly (a simplified sketch of this labelling step follows below). Finally, we simply re-run our winners and losers analysis, but instead of grouping by individual URLs as we did before, we group by the page type we found using regex.

View the code on Gist.

The output looks like this:
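The actual patterns and grouping code are in the Gist; as a simplified sketch of the labelling idea, assuming a pandas DataFrame df with a true_url column and using stand-in patterns rather than the real ones:

import pandas as pd

df = pd.DataFrame({"true_url": [
    "/category/toys?page=1",
    "/category/toys/basketball-product-1.html",
    "/about-us",
]})

# Stand-in patterns for collection and product pages (not the real ones)
collection_pattern = r"/category/[^/]+(\?page=\d+)?$"
product_pattern = r"/category/.+\.html$"

df["group"] = "other"
df.loc[df["true_url"].str.match(collection_pattern), "group"] = "collection"
df.loc[df["true_url"].str.match(product_pattern), "group"] = "product"

print(df)
# From here, a groupby("group") lets us total sessions per page type.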
Plotting the results
Finally, we'll plot the results of our regex-based analysis to get a feel for which groups are doing better or worse. We're going to use an open-source plotting library called Plotly to do so.

In our first set of charts, we'll define 3 bar charts that will go on the same plot, corresponding to the traffic differences, the data from before, and the data from after our cutoff point, respectively. We then tell Plotly to save an HTML file containing our interactive plot, and then we'll display the HTML within the notebook environment. Notice that Plotly has grouped together our bar charts based on the “group” variable that we passed to all the bar charts on the x-axis, so now we can see that the “collections” group very clearly has had the biggest difference between our two time periods.

View the code on Gist.

We get this nice plot, which you can interact with in the Jupyter notebook!
Next up we'll plot a line graph showing the traffic over time for all of our groups. Similar to the one above, we'll create three separate lines that will go on the same chart. This time, however, we do it dynamically with a “for loop”. After we create the line graph, we can add some annotations using the Layout parameter when creating the Plotly figure.

View the code on Gist.

This produces the following chart: painful to look at, but valuable.
Results
From the bar chart and our line graph, we can see that two separate events occurred with the “Collections” type pages which caused a loss in traffic. Unlike the uncategorized pages or the product pages, something has gone wrong with collection pages in particular. From here we can take off our programmer hats, put on our SEO hats and go digging for the cause of this traffic loss, now that we know that it's the “Collections” pages which were affected the most.

During further work with this client, we narrowed down the issue to a massive consolidation of category pages during the move. We helped them recreate the pages from the old site and linked them from a new HTML sitemap with all the pages, as they didn't want these old pages in the main navigation.

Manually grouping pages is a valuable technique, but a lot of work if you need to work with many brands. In part three, the final part of the series, I will discuss a clever technique to group pages automatically using machine learning.

Hamlet Batista is the CEO and founder of RankSense, an agile SEO platform for online retailers and manufacturers. He can be found on Twitter @hamletbatista. The post Using Python to recover SEO site traffic (Part two) appeared first on Search Engine Watch.
anthonykrierion · 6 years ago
Text
10 Quick Wins We Can Make Using ODN as a Meta CMS
The Distilled Optimization Delivery Network (ODN) is most famous for SEO A/B testing and more recently full-funnel testing. But fewer people are familiar with one of the other main features; the ability to act as a meta-CMS and change pretty much anything you want in the HTML of your site, without help from your development team or writing tickets. DistilledODN is platform independent, sitting between your website servers and website visitors, similar to a Content Delivery Network (CDN), as shown in the below diagram.
This use case for ODN has been popular for many of our enterprise clients who have restrictions on their ability to make on-the-fly changes to their websites for a variety of reasons. A picture (or a gif) is worth a thousand words, so here are 10 common website changes you can make using ODN that you may not be aware of.
We’ve used a variety of websites and brands that use different platforms and technologies to show anyone can make use of this software regardless of your CMS or technology stack.
Before we get started, there is some jargon you will want to understand:
Site section: A site section is the group of pages that we want to make a specific change to
Global rules: These are rules that you want to apply to all pages within a site section as opposed to only a percentage of pages (like you would with an experiment). An example might be something like “Insert self-referencing canonical”. Rules are made up of individual steps.
Steps: These are nested within global rules, and are the steps you have to take to get to the end goal. Some global rules will only have one step, others can have much more.
In the example global rule above, the steps could be something like, “Remove existing canonical”, “Replace with self-referencing canonical”
On-page values: On-page values are constant values that we extract from the pages in the site section. You can use these in steps. So for the above rule, we'd have to create two on-page values: the “existing canonical” and the “path” of the URL we want to add the self-referencing canonical to. An example site where we've done this is included below.
The image below shows how these different components interact with each other.
If you’d like a more detailed explanation about any of this stuff, a good place to start is this blog post; what is SEO split-testing.
Now that you’re familiar with the terminology, here are our 10 common website changes made with ODN, with GIFs:
1. Forever 21 – Trailing slash redirect
Having URLs that return a 200 status code for both the trailing slash and non-trailing slash versions can lead to index bloat and duplicate content issues. On Forever21’s homepage, you can see both “/uk/shop” and “/uk/shop/” are 200 pages.
To fix this using ODN, we create a site section that has the homepage entered as the page we want our global rule to apply to.
Then we need to create an on-page value for the page without a trailing slash. In this example, we’ve extracted this value using regex. Having this value defined means that this fix would be easy to apply to a bulk set of URLs on the website if necessary.
Next, we create our global rule. This rule only has one step: it redirects the URL in our site section to the one created using the on-page value.
2. SmartWater Technology – Duplicate home page redirects
Often, websites will have multiple versions of their homepage that return 200 status codes, like when they have both an http:// version and an https:// version, or a www version and a non-www version. This is a problem because it means the authority of your strongest page is split across two URLs. It also means you may have a non-desirable version ranking in search results.
We can see this on SmartWater Technology’s homepage. We can fix this problem by deploying ODN on the non-www version of their site, and creating a site section for the homepage. We only have one page we want to work on in this example, so we don’t need to create any additional on-page values.
We then set up a global rule to redirect the non-www version of the homepage to the www version, which has one step. In the step we select to redirect the URL in our path list (the homepage), to the new destination we’ve entered, https://www.smartwater.com/.
3. Bentley – Adding self-referencing canonicals
As mentioned in the introduction, we can use ODN to insert self-referencing canonicals on a list of pages. We’ve done this with Bentley Motors as an example, which doesn’t have a canonical on their homepage (or any other pages).
We can fix this by setting a global rule with one step to insert this block of HTML after the <title> element:
<link rel="canonical" href="https://www.bentleymotors.com">
We didn't have to create an on-page value here, since it was created by entering the homepage in our path list. This rule will add a self-referencing canonical to any page that we include in our site section.
If we wanted to, we can also use ODN to apply canonicals that aren’t self-referencing by mapping out the pages we want to add canonicals to, with their canonical page as a value created with a csv upload.
4. Patagonia – Fixing soft 404s
Patagonia uses this landing page, that returns a 200 status code, for 404s, rather than a page that returns a genuine 404 status code. The problem with using soft-404s such as the one Patagonia uses is it won’t send the 404 signal to crawlers, even if the content on the page has the 404 message. This means search engines will see this as a real page, preventing the URL you intended to delete from being removed from the index.
To fix this using ODN, I’ve created a site section with the page path /404/. If you have multiple pages that are soft 404s, you can use other methods to define the pages in the site section. For example, you could match on any page that has “Page Not Found” in the title, or for Patagonia, we could use regex to match on any url that contains “/404/” in it.
Once we’ve defined what pages we want in our site section, we create a global rule with one step that changes the status code from 200 to 404.
5. Amazon Jobs – Changing 302s to 301s
When a redirect is truly temporary, using a 302 status code instead of a 301 makes sense; but if you’re not planning on reverting back to the original URL, using a 302 instead of a 301 redirect means you aren’t passing link equity from one URL to the next.
Once again, this fix is simple to deploy using ODN. We have done it with Amazon Jobs in the GIF below. First, we've created a site section with the path of the URL we want to change the status code of. I have also changed the response code to match 302 rather than 200, which is the default for ODN.
Again, no need to create an on-page value in this instance. All that’s required is a global rule with one step, to change the status code on those URLs that match what we have in our path list from 302 to 301.
6. Etsy – Changing sitewide links that 30x/404
When you have a sitewide link that has a 30x or 404 status code, it not only might be a frustrating experience for users, it can also have a negative impact on your SEO. If a heavily linked to page on your site has a 301 redirect, for example, you are preventing it from being passed all the link equity available to it.
To fix this with ODN, we can replace the 301 link with the destination 200 link. We have done this on Etsy’s homepage in the GIF below.
First, we create a site section for the homepage, then a global rule with a step to replace the old blog URL. This step replaces the content of the element we’ve selected using a CSS selector with the HTML in the box.
In this case the css selector we have used is “a[href="https://www.distilled.net/blog/uk/?ref=ftr"]”. Using the test feature, we can see this selector grabs the element “<a class="text-gray-darker pt-xs-1 pb-xs-2 pb-md-1 display-block width-full" href="https://www.distilled.net/blog/uk/?ref=ftr"> <span>Etsy blog</span> </a>”. That’s what we are looking to replace.
We then set it to replace the above element with “<a class="text-gray-darker pt-xs-1 pb-xs-2 pb-md-1 display-block width-full" href="https://blog.etsy.com/uk/?ref=ftr"> <span>Etsy blog</span> </a>”, which has the link to the 200 version of Etsy’s blog. Now the footer link goes to the blog.etsy URL rather than the 301 /blog/uk/?ref=ftr URL.  
7. Pixel Eyewear – Adding title tags
Changing title tags is often a desire for content creators, as metadata is one of the strongest signals you can send to Google on what your page is about and what keywords you want to target.
Say you worked at Pixel Eyewear, and after some keyword research decided you wanted to target the keyword “computer screen glasses”, rather than simply “computer glasses”. We can use ODN to make that update, and again this rule can easily be set to target a bulk set of pages.
In the path list, we include all the URLs we want this change to apply to. Then we create a global rule to add “Screen” to our page titles. This has one step, where we use the CSS selector to select the title element of the page. We then enter the HTML we want instead.
8. Pixel Eyewear – Adding content to product pages
This is an example of when a site section has multiple rules. Say that you worked at Pixel Eyewear, and you also wanted to update the descriptions on your product pages, in addition to adding “Screen” to your page titles, and you want to do this on the same pages included in the previous section.  
To do this with ODN, we create a second global rule to edit the product description. This uses a different CSS selector, “div[class="pb-3"]”. You just want the main description to be more descriptive, so you replace the first paragraph of the element “Meet the most advanced eyewear engineered for the digital world.” to “Our most popular product, the Capra will have you looking stylish while wearing the most advanced eyewear engineered for the digital world.”
Since there are two global rules in this section, the order you place them in will matter. ODN works from top to bottom, as shown in the diagram in the intro, so it will apply the first global rule and its steps first before moving to the second. If one of your global rules depends on something created in another, you want to be sure that global rule is listed first.
9. Liberty London – Adding meta descriptions
Meta descriptions are an important meta property to entice users to click through to your webpage from the SERP, but it’s common for website owners to not have them at all, or on important pages on their site, as seen with Liberty London on their UK featured page.
We can edit the meta description content with ODN, and insert a description. First, we include the path of the target page in our path list, then create a global rule with a single step that grabs the meta description with a CSS selector. This time we set it to “Set or update the attribute of an element.” The attribute we want to replace is the content, and we want to replace it with the content entered.
This can also be used to add in meta descriptions when they’re missing entirely, or when you want to insert new ones. If you want to apply in bulk, you can upload a CSV that has the desired meta descriptions for each target URL as a value.
10. CamelBak – Removing duplicate content
E-commerce and other websites frequently wind up with duplicate content on their websites, which can lead to drops in traffic and rankings. Faceted navigation is a common culprit. We can see this in action on Camelbak’s website, where parametered URLs like https://international.camelbak.com/en/bottles/bottle-accessories?sortValue=af41b41832b34f02975423ad5ad46b1e return 200 status codes and have no canonical tags.
We’ve fixed this in ODN by adding canonical tags to the non-parameterized URL. First, we add the relevant URL paths to our path list. Then we need to create an on-page value for the non-parameterized version of the URL. This rule uses regex to extract the content of the URL that comes before the “?” character.
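As an illustration of that kind of extraction outside of ODN's interface, a regex that captures everything before the first "?" can be sketched in a couple of lines (the URL below is just the example path from above):

import re

url = "/en/bottles/bottle-accessories?sortValue=af41b41832b34f02975423ad5ad46b1e"

# Capture everything up to (but not including) the first "?"
match = re.match(r"^([^?]*)", url)
print(match.group(1))  # /en/bottles/bottle-accessories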
Once we have this on-page value, we can use it in our global rule. Since there are no canonicals already, this global rule has one step. If there were already canonicals on these pages, self-referencing ones, for example, that still referred to the parameterized URL, then we’d have to remove that canonical before we could add in a new one.
The step to add in the canonical inserts a block of HTML after the <title> element. Then we enter the HTML that we want to be inserted. You can see that this uses the on-page value we created, giving us this string:
<link rel="canonical" href="https://international.camelbak.com"/>
Because we’ve used an on-page value, we can put a whole list of parameterized URL paths in our path list, and ODN will insert a canonical pointing to each one’s non-parameterized parent.
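So for the bottle-accessories URL above, assuming the extracted path is appended straight after the domain, the inserted tag would end up as:

<link rel="canonical" href="https://international.camelbak.com/en/bottles/bottle-accessories"/>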
This tactic can be adjusted to account for pagination with rel="prev" and rel="next" tags and many other variations. Another way to address duplicate content issues with ODN is to redirect the unwanted URLs entirely.
Summary
These examples are only a selection of the types of fixes ODN can deploy on your website. There are many more, on top of its ability to run SEO A/B testing and full-funnel testing. The ability to create custom values and use CSS selectors means there’s a lot of room for any of these fixes to be customized to meet the needs of your website.
If you work on a website where it’s difficult to get these kinds of changes made (you’re not the only one), then get in touch for a free demo of our platform in action on your website.
10 Quick Wins We Can Make Using ODN as a Meta CMS was originally posted by Video And Blog Marketing (source: https://www.distilled.net/resources/odn-as-a-meta-cms/)