nobody-wants-ice-cream · 5 years ago
Text
Everything Wrong With The Umbrella Academy. Episode 2, Run Boy Run.
Link to the first episode!
Same disclaimer as last episode: This is all in good fun! I wanted to do a really nitpicky re-watch of the series and found some really cool and interesting things I didn’t notice before. This is meant to have a Cinema Sins-esque tone. However, I did take off a lot more sins than Cinema Sins would have because I do genuinely like the series and the people that made it possible. So all of the good things got one sin off and all the bad things got one sin added. This is a really long post, so grab some popcorn. If there’s anything that I missed, feel free to add it!
Run Boy Run 
Grace started the Herr Carlson record before the kids even arrived. How are they supposed to learn if they miss the first few seconds of it? What is the point of the record if they’re not even around to hear all of it? +1
The kids all have their hands on the chairs except for Five, showing that he will do something out of the ordinary. -1
Diego is causing property damage to Reggie’s chairs and Reggie allows this. Be consistent, show! Is Reggie lenient or strict? You could make the argument that Reggie doesn’t care about the chair because he’s rich. In that case, sinning for capitalism. +1
Klaus is already into drugs at the age of 13. We can see him rolling a blunt, and doing it quite well, presumably. +1
Ben is straight up allowed to read at the table. So then what is the point of the record if the kids don’t have to pay attention to it? +1
The kids’ expressions when Five stabs the table. The ones that we see are pure gold. Especially Klaus’s. Well done, Dante Albidone. -1
Diego’s side eye when Five starts arguing with Reggie. This is the perfect expression for “my sibling is about to get in trouble”, so props to Blake Talabis. -1
Vanya’s side eye is also good. TJ McGibbon did well. -1
We see Five jump faster than a bullet, but he’s significantly slower when jumping across the table. +1
Reggie is a dick to Five, who just wants to explore his powers. We know that it’s dangerous because we see Five getting stuck, but Five doesn’t think that that is really a possibility. Reggie only talks in confusing ice and acorn metaphors. +1
Five’s face when Reggie presents the ice and acorn metaphor. -1
Vanya and Allison both give Five a look in this scene. This is what makes Five hesitate. Two of his siblings tell him it’s a bad idea, but he does it anyway because he’s a stubborn bastard. +1
Grace’s face drops when Five starts running out the door. Allison and Vanya also look absolutely horrified. -1
“Run Boy Run” is a little on the nose. Especially once you remember that The Boy is Five’s hero name in the comics. +1
No one cares that a 13-year-old popped into existence out of nowhere when Five starts traveling into the future. +1
Easter egg! There is an ice cream cart outside the academy. If you’ve read Dallas, you know why I think that’s significant. Also, it happens to be my icon. -1
Five’s look of complete disbelief and horror when he is faced with the apocalypse for the first time. -1
“Vanya! Ben!” This has created a lot of curiosity in the fandom. In the comics he left before they were named, but in the show it looks like he chose to keep Number Five. Why? +1
The apocalypse looks very believable. -1
Title screen umbrella! -1
The awesome scene with Ellen Page and Aidan Gallagher continues in the next episode. -1
Where would Five have heard that rumor about Twinkies having an endless shelf life? It’s not like he was very exposed to pop culture as a kid. +1
Vanya doesn’t keep her violin in the case. She leaves it propped on a chair, which is basically begging gravity to come and fuck up your instrument. +1
Five plays the pronoun game and doesn’t tell Vanya about Dolores. +1
The last thing Five heard for 40 years was Reggie’s stupid metaphor. That’s a sin for the metaphor and a sin for Five’s pain and suffering. +2
Vanya gives someone with a thirteen-year-old’s liver a few shots worth of hard liquor in a tall glass. +1
“You think I didn’t try everything to get back to my family?” This quote is Five at his core. It shows his exact motivation. Aidan Gallagher really could have screwed up with this line because it’s so raw, but the delivery doesn’t suck. Well done. -1
Is that liquor real? Aidan Gallagher’s face suggests that it is, and he only takes two sips of it. Also, Five takes a sip when there’s just a bit in the glass, pours more, then takes another sip, and doesn’t drink any more of it. Sin for the showmakers possibly giving a kid real alcohol and sin for Five only taking a sip after pouring out so much. +1
However, if the alcohol is fake, which I really hope it is, sin off for Aidan Gallagher’s acting. -1
Five expects Vanya to believe his crazy apocalypse story. I had a hard time believing it when we were shown flashbacks as the audience. It wasn’t until they brought in the Commission that I actually believed it. If Five had explained the Commission, just like he did to Luther, then Vanya would have had an easier time believing him. +1
Vanya calls Five crazy and then expects him not to be hurt and to still want to stay in her apartment. +1
Vanya takes the pills after an emotionally charged scene. Pills-foreshadowing. -1
Five’s hands are shaking when he’s looking at the eyeball. This shows both his uncertainty, with this being his only clue, and shows that he is unwilling to leave his sister again even after she called him insane. -1
Mary J. Blige. -1
The Lunar Motor Lodge has rates by the week, day, and hour. The Commission is super sleazy for putting Hazel and Cha Cha in a place that also rents by the hour. +1
Hazel and Cha Cha are an underrated duo. The “It smells like cat piss” dialogue is honestly really funny. -1
Obvious villains are obvious. I know they’re meant to be obvious, but it doesn’t change the fact that a show with a lot of subtlety just kind of thrust Hazel and Cha Cha in there with no subtlety at all. +1
Hazel stores the briefcase away and throws a screw, foreshadowing that this will be an important detail later. -1
No one, including the police, notices the blinking, beeping, neon-green tracker. +1
Patch is sort of right. Five made a jump in the middle of two of the local hires, which caused them to shoot each other. -1
“The guy had an eclair and the kid had coffee”. Patch’s side eye says that she thinks Agnes is getting her story mixed up. If we didn’t see what happened, then the audience wouldn’t believe Agnes either. Great acting, Ashley Madekwe. -1
Agnes doesn’t stay in the back room. She crawls out so her head can dramatically pop up over the counter after Five leaves. This is a stupid decision on Agnes’s part. +1
Agnes is seen handling American money. Somehow we as a fandom didn’t notice this. Klaus also uses American money to buy drugs later in this episode. Sinning the showmakers for not specifying which state at the very least, but reluctantly, because I know that’s a reference to the comics. +1
“What other detective?” Camera cuts to Diego exiting Griddy’s. -1
Diego is a vigilante. What he is doing impedes the law. In this instance, we want him to stop Patch’s investigation because we know that the answer leads back to Five, which would be bad for the plot. However, Patch’s annoyance suggests Diego has done this to her before. How many murderers have gone free because Diego intervenes in Patch’s cases? +1
Diego did not consent to being searched and having his personal belongings taken. +1
eBay exists, but there is no internet or smartphones. What? +1
Diego thinks that this looks like a botched robbery. No way in hell does this look like a robbery of a doughnut shop in any universe. A bank robbery, yeah sure, but not a doughnut shop. What kind of doughnut shop has the kind of money that requires multiple guys with very large weapons, Diego? +1
The way Patch is described to Five by Diego in a later episode does not match the personality she actually has. +1
A whole crowd of people had nothing better to do than to watch the cops investigate a murder scene in a densely populated city. +1
Is Luther hitting his head after he wakes up a character choice? He does it again with the model airplane. After the low ceilings on the moon for four years, you would think that he would learn to duck. +1
Emmy Raver-Lampman gives an amazing performance when talking to Luther about Claire. -1
Allison has multiple posters of herself in her room. I am sinning for her younger self’s narcissism. +1
However, this narcissism goes hand in hand with Allison as a character. Props to the set designers for making these posters and hanging them up. It adds detail to Allison’s room and really shows who she was as a character. -1
“When Claire was little I used to read her books about the moon. I’d tell her her uncle was living up there.” In the first episode, Allison forgets that Luther was on the moon and therefore shouldn’t know about her divorce, yet she says this line in the second episode. +1
Luther looks so genuinely happy at being Claire’s personal superhero. -1
The ghosts torturing Klaus. +1
That fucking animal print thing Klaus is wearing. +1
Robert Sheehan is very, very attractive. This makes up for the monstrosity Klaus is wearing. -1
“You know you talk in your sleep.” “Oh there’s no point. You’re out of drugs” I love Ben as a character so much. -1
“Shut your piehole, Ben. Said with love” smooch. I love this line. -1
“I’ve got a crazy idea. Why not try starting your day with… a glass of orange juice or some eggs”. Justin Min’s delivery of this line kills me every time. -1
Pogo is really vague about why the papers in Reggie’s box are important. If he said something about the papers detailing the Academy’s powers in explicit detail, Klaus would have tried harder to get them back. +1
We don’t see Klaus pull out the Red Journal in episode one. +1
“Liar” “Drop dead” “Low blow”. This is an iconic interaction for a reason. -1
Pogo knows that Klaus can talk to ghosts, but remains offended when Klaus tells a ghost to shut up. +1
“Really awful, terrible, depressing times” Reggie is a dick to his children. +7
Vanya sleeps with the door to her bedroom open, even though we saw her close it. So she must have gotten up to open the door and didn’t notice Five was gone. +1
Where did Five go all night? Did he sleep back in the Academy? It couldn’t have taken him this long to get to the MeriTech building, so what happened to him? He changed into a clean uniform, so presumably he went to the Academy, but why was the show so vague about this? Did he walk into a department store and buy/steal a clean shirt? +1
Only the plot-relevant person notices Five. The front desk girl doesn’t question why he’s there. And that is literally her job. I would know; I run the front desk at a medical office. If you don’t greet the patients, then you’re not doing your job, front desk girl. +1
“Must have just [click] popped out.” Iconic. -1
Five decides that violence is the best course of action to get the information he needs, directly contradicting “I know how to do everything” +1
The 1938 fingerprints may be Five’s. However, police usually discard this kind of evidence because there is a very reasonable doubt. Not to mention that anyone could have touched the knife. It’s a public place. Forensic evidence is not as reliable as it is portrayed in the media. +1
Diego is an asshole to everyone, but especially to Patch. She’s right, Diego is obstructing justice. How many murderers have gone free because Diego interfered in an investigation? +1
Diego’s boiler room is way too big to be a boiler room. +1
Luther’s reflection in Diego’s mask shows that Luther wants to know what it would be like to be number two instead of number one. Luther can’t lead for shit and subconsciously wishes that he didn’t have to. -1
With an aerial shot of the Academy from the outside, we can see that Reggie either never bothered to take the laundromat sign off the mansion or sold ad space on the mansion’s exterior. +1
Reggie is a dick to animals. See: the animal skeletons and the taxidermy. +1
Part of the mansion is painted an ugly neon green for no reason. +1
“Sorry I left without saying goodbye”. The “both times” is unspoken. -1
Vanya apologises for calling him crazy and being dismissive, but still suggests he needs mental help. He does, but maybe suggest it later when he isn’t convinced you think he’s insane? +1
Five lies to Vanya about something stupid. If he said that he was having Klaus help him with the apocalypse, I don’t think she would have minded. +1
Why does Five have so many toys in his room? Including a baseball? +1
Klaus comes out of the wardrobe as loudly as possible. The mansion does not have sound proofing (see: I Think We’re Alone Now dance party). There is no way in hell Vanya didn’t hear him. +1
This is the last time Vanya and Five interact. +1
Five’s room is more childish than a thirteen-year-old’s room should be. It honestly looks like he was the favorite because his room has so many toys in it. Like Reggie wanted to win his favor or something. Sinning for the weird set design choice and for Reggie being an asshole. +1 
The fake circumstances in which Five was born in their cover story gives me immense joy. -1
In one camera angle, if you look carefully, you can tell they cut between two takes of “what a disturbing glimpse into that thing you call a brain”. In the one where we can’t see his face properly, Aidan Gallagher is openly smiling. Corpsing. +1
Robert Sheehan is funny. -1
Syd the tow truck guy doesn’t look enough like Sean Sullivan (the actor who plays adult Five) for Cha Cha, a trained assassin, to mistake him for their mark. +1
Hazel eating a sandwich in this scene. Also the “Italian for dinner” line. -1
And Cha Cha sees the differences between Syd and Five later! +1
“Time travel’s a bitch.” “Especially without a briefcase.” So there are time travel methods other than a briefcase or being Five? Elaborate. +1
Patrick is a dick to Allison. We understand why later, but really Patrick, you’re going to be an asshole when her father just died? Don’t get me wrong, Reggie abused the hell out of her, but still! Patrick should have let Allison talk to Claire. +1
Vanya tries to comfort Allison even though she knows nothing about the situation other than that it happened. She’s never even met Patrick! +1
Allison is clearly trying to get away from this conversation with Vanya, but Vanya presses on. +1
“Well if I wanted advice, Vanya, no offence, it wouldn’t be from you”. This is why Vanya doesn’t take Allison’s advice about Leonard. Also, Allison is a dick to Vanya. +1
This scene with Allison and Vanya is interesting. Allison is projecting her pain and taking it out on Vanya, who really should have seen and heard enough of what happened to know to leave her alone. Both of them are the bad guy here, regardless of how you slice it. I am sinning the show for this moment because they really tried to villainize Allison in this scene, even though she has some well-thought-out points and is in an emotionally compromised state. Or in other words, the fight between Allison and Vanya is stupid. +1
Grant/Lance/whatever gave Klaus and Five valuable office time. Doctors do not have time for this sort of crap. Shouldn’t this guy have patients? +1
Aidan Gallagher looks to the actor playing Grant/Lance/whatever as if he’s waiting for him to say his line. I see this all the time with younger kids in theatre, but they can get away with it if their character has a reason to look at that character. That being said, Five would have no reason to do this. +1
The sound effect that plays when Klaus slaps Five is really out of place. +1
Seeing Robert Sheehan slap Aidan Gallagher. -1
Klaus pauses as if he’s listening to Ben before he picks up the snowglobe. -1
The snowglobe. Robert Sheehan pretending to be Klaus pretending to be Five’s crazy dad. Acting. -1
Five looks like a proud grandfather when Klaus gets Lance to show them the records. -1
Five doesn’t pay Klaus for that brilliant acting. Also, how was Five planning to give Klaus $20? He doesn’t have any money, nor do we ever see him with any. Five is a cheapskate. +1
Klaus calls Five “old man”. I thought that was just a fandom thing lmao. -1
“You must be horny as hell”. Great Klaus line, but super weird that he’s saying it to someone that looks thirteen. +1
Klaus is wearing the shirt that goes with his nicest outfit underneath Reggie’s pinstripe suit. -1
“Goodbye Dolores”, a song from the soundtrack, starts playing when Five starts talking about Dolores. This is good placement of that song because we later learn that he left her in the apocalypse when he left to work for the Commission. -1
Five is a dick to Klaus. Klaus is really trying to connect with his long lost brother, but Five jumps away. +1
That taxi driver doesn’t freak out and cause a car accident when a random kid appears in his car. +1
Also, how did Five pay for that taxi? Did he jump out of the moving vehicle too? +1
Leonard is so obvious from the start. So charming that he’s slimy. +1
Vanya can’t see this and is actually attracted to him. This may go back to that conversation with Allison when she asks if Vanya has ever been in a relationship. For all we know, the answer is no. +1
Leonard took three years of German in prison. I don't think American jails are that nice. +1
Leonard picks up another person’s instrument without their consent. As a musician, this is very, very painful. +2
Diego is paranoid, but also observant as fuck. -1
But how did he get his weapons back from the police? Are knives open carry in whatever state this is? There are some states where Diego’s harness would be legal, so it’s possible. I’ll have to look into this. Sinning the show for being vague as fuck. +1
Luther didn’t notice the boiler room door open. +1
Diego throws weapons at his siblings. +1
Reginald Hargreeves died March 21st. The funeral is on March 24th. This is way too soon. It should have been a week or two, not three days, between the date of death and the funeral, especially considering Luther suspects Reggie was murdered. And if you say that Reggie, Pogo, or Grace bribed them, then I’m sinning for bribery. +1
Diego eats a raw egg. Salmonella headass. +2
David Castaneda eats a raw egg. Why did you make him do this? It adds nothing to the character other than making Diego look dumb as hell. +1
Vanya interrupts her student while he’s playing and doing well. Whenever my teacher does that I get a minor heart attack. +1
Leonard is already lying to Vanya. He manipulates her by saying his Dad was into music and that's why he’s taking violin lessons. +1
An actual place named “Bricktown” in a place called “The City.” Sigh. +1
It is four o’clock when Leonard takes his lesson, but after the lesson we cut to night time. What happened in those couple of hours, show? Are you really saying that these characters did nothing interesting for all that time? +1
Emmy Raver-Lampman clearly isn’t actually smoking. Which is fine, because she’s a Broadway actress and needs her voice/lungs for that part of her career. It’s just weird because it makes it obvious that Allison isn’t really smoking either. +1
Pogo scolds Allison for her language. Allison is an adult, Pogo. +1
Klaus made a drink at a young age and Reggie didn’t stop him. Or talk to him. He recorded Klaus drinking, but didn’t care. +1
The showmakers show us Allison’s face for dramatic tension instead of showing us the tape. This was a good choice and I feel it helped the narrative. -1
They show a “Gimbel Brothers Seniors Tuesdays 10% Off” sign after Five walks by. -1
The most awkward and dopey smile in existence when Five finds Dolores. -1
They play “Goodbye Dolores” after he finds her. That could have worked if they transposed it to the major key. Hello Dolores. +1
“Goodbye Dolores” transitioning into “Don’t Stop Me Now” by Queen. -1
This action sequence is great. -1
Hazel’s wrist splint. -1
Five cuts Cha Cha with a trowel. -1
The dual screen thing is cool. -1
Five literally jumps over a stand and somehow doesn’t get shot. Hazel and Cha Cha have Stormtrooper aim. +1
How did Hazel and Cha Cha leave? You would think the police would notice someone leaving through the back. +1
Similarly, how did Five and Dolores get out of this? Did he wait until he could jump and teleport outside the store? Can he teleport that far? +1
How did Diego get another police scanner so quickly? Unless that’s the scanner Patch confiscated? +1
“I gotta show you something” +1
Once again, Five should be a lot sweatier. What are these magic, sweat absorbing things you can buy in a department store and where can I buy them? +1
Five sees an eyeball and immediately picks it up for no reason. He doesn’t even know that’s Luther’s body yet. He just picked up an eye for no reason. +1
Five as a thirteen-year-old boy saw his siblings' dead bodies. Sinning for trauma. +1
Aidan Gallagher portrays this trauma well. -1
Overall Review: 
I love this episode and had a hard time finding things wrong with it. I genuinely think it could have stood alone as the pilot. 
Some acting things I noticed: David Castaneda, John Magaro (Leonard), and Ashley Madekwe were the standouts this episode. All three brought something interesting to the table and I look forward to re-watching their scenes. I wish Madekwe and Magaro all the best, as I know that they probably won’t be returning for season two. 
The plot thickens! Hazel and Cha Cha were introduced in a very obvious way compared to the subtle way they introduced Leonard. There is a reason I adore this episode, and it’s not just for Klaus slapping Five (though that is part of it). 
Total: 52
Sentence: We saw Diego eat a raw egg. That’s punishment enough for this episode. 
91 notes
nadziejastar · 5 years ago
Note
Did you finish reading that “KH3: A Conclusion Without a Story” article? If so, what did you think of it?
I loved reading it. It was a fantastic take on KH3. There was pretty much nothing that I disagreed with.
By the time you leave Olympus, Sora hasn’t learnt how to restore his powers; and the frustrating part is that he never explicitly does.
I completely agreed with this. Sora’s journey in KH3 should have been about learning the power of waking. But even in the scene where he finally does learn it, there’s no real reason why. He didn’t seem to learn anything on his journey.
Even the villains are given no progress – a subplot about Pete and Maleficent looking for a mysterious black box goes nowhere, and Organisation XIII (the primary antagonists) only put in a brief appearance, spouting their usual brand of vaguely ominous dialogue. To compound these issues, the protagonists are ultimately left not knowing where to go or what to do next. Only two hours into the game, and the plot has no sense of momentum or direction.  
Yep. The black box thing annoyed me so much. The Organization was also a huge letdown. We don’t get to learn the real reason why Marluxia, Larxene, Demyx, and Luxord joined until KH4!? Something went very, VERY wrong in the Dark Seeker Saga for that to happen. 
By comparison, Kingdom Hearts II’s opening was significantly slower paced – to the point that it was a detriment to some players. However, so much more was achieved in a similar space of time; II’s initial hours establish the game’s tone and major themes, as well as introduce a large cast of brand new characters (while simultaneously reintroducing old ones in new contexts).
Yep. I liked KH2′s opening, slow as it was. The prologue of KH2 felt like it had more plot than almost all of KH3.
And this is one of the core problems with Kingdom Hearts III; even if you look past a threadbare narrative for Sora and company while they adventure through the self-contained Disney worlds, there is nothing going on outside of that either. In Kingdom Hearts II, both Riku and Mickey were operating behind the scenes, aiding Sora from the shadows and setting key events in motion. In III, however, these same characters spend most of their time expositing plot points and passively waiting for the big battle at the end of the game – and that can be said for almost all of our heroes.
I also agree. This problem would have been mitigated if every character got their own time to shine using the power of waking. Riku and Mickey could have had a subplot together, showing how Riku got his new Keyblade. They should have saved each other from the darkness. 
If there’s a job to do, it’s up to Sora to do it. With a couple of key exceptions, every character apart from Sora, Donald, and Goofy is presented as almost comically useless – yet our protagonist remains the butt of every joke.
Yep. Everyone other than Sora was useless. Aqua needed to save Ven, but all she did was get knocked out in the battle with Vanitas. Ven needed to save Terra, but he didn’t really do anything. Sora did all the work. Lea needed to save Isa, but he did nothing in his fight. He got shoved to the side while Roxas and Xion took over. Kairi saving Sora should have gotten more focus. 
The villains reveal that the only way Sora can release Roxas is by giving into the darkness, and sacrificing his own heart. Self-sacrifice is nothing new for Sora (he did the same thing in Kingdom Hearts I to save his love interest Kairi), but this had the potential to be an interesting plot point, as it gives him a selfless reason to be tempted by, and potentially give into, the darkness. But it’s never brought up again. 
Yep. Early scenes in KH3 make it seem like the game did originally have an actual plot at one point. Xigbar was luring Sora into a trap, so he’d fall to darkness. But it’s never brought up again, LOL. It’s crazy.
In fact, ‘saving Roxas’ is scarcely discussed until the end of the game (King Mickey telling Sora to “let the rest of us worry about Roxas and Naminé for now”, essentially dropping the subject after only the second Disney world). Ultimately, Roxas’ heart just leaves Sora’s body of its own volition in the final act, making the player’s time here, once again, feel largely pointless.  
And yes, saving Roxas was handled very badly. This is because, IMO, saving Roxas and saving Ventus was supposed to be one and the same. There shouldn’t have been a separate “saving Roxas” subplot.
In interviews, Nomura discussed the struggle of dealing with so many characters – even citing the cast size as one of the main reasons that Final Fantasy cameos were omitted[2]. The real problem, though, is that nothing is done to mitigate this challenge.
Yes, exactly. And treating Roxas and Ventus as separate characters only exacerbated this problem.
Upon leaving Twilight Town, the player finally begins their true journey – travelling to various worlds based on Disney properties and beating back the forces of darkness. But there’s no real set up for this; no distinct reason *why* we’re visiting these worlds. 
Mm-hm. I think the issue was that we were supposed to learn more about Ansem the Wise’s data in KH0.5. That was supposed to give Sora a quest in KH3: search for the “Key to Return Hearts”. Once that game got cancelled, Nomura had no idea how to write KH3′s story any longer.
So around 3-4 hours into Kingdom Hearts III, the story still lacks a clear sense of direction and purpose, and hasn’t yet established any clear themes or deeper meaning.
Yeah, it’s sad because there was an underlying theme in the Disney worlds: the power of love and its ability to restore what was lost.
Kingdom Hearts III cleverly tries to frame its story through the lens of a chess match between two Keyblade Masters, Eraqus and Xehanort, when they were young. The game even opens on this scene, highlighting its importance. But chess has rules; logic; a clear sense of direction. Kingdom Hearts III’s narrative is akin to two people who don’t know how to play chess. They understand that they have to defeat their opponent’s king, but the rules of how to move their pieces, how to actually reach that coveted checkmate, are completely unknown to them. The characters in this game feel like pieces on a chess board with no rules; aimlessly moving back and forth across a limited space, until both players finally decide enough is enough and agree to bring their match to an end.
LOL. Yep. The fact that Xehanort had “reserve members” showed he had no idea what he was doing.
Stick to your guns – don’t be afraid to explore a good idea, or to develop the plot outside of your main protagonist. When so many previously proactive characters are in play, the story shouldn’t feel so static, or entirely dependent on the protagonist’s actions. The way your protagonist reacts to events and changing circumstances is just as important as the ones they play an active role in creating.
That’s why I liked the spin-offs. KH3 suffered from forcing you into only Sora’s perspective. Even Nomura said that the Keyblade Graveyard should have had everyone fighting their own battles.
Simply put, the Disney worlds in Kingdom Hearts III have no tangible impact on the game’s core narrative.
Sad, but true.
“In the end, although I had a hand in it as well, the flow of the dialogue and the stories of each world were largely handled by the level design team.” While I very much appreciate this standpoint of ‘gameplay first’, as well as the act of involving multiple teams in the execution of the story, these statements do prove my point. Set-pieces and events are one thing, but if there was a specific story to tell – with outlined themes to be explored, character conflicts to evolve, and goals to be achieved; all developed evenly throughout the entire game (Disney worlds included) - you would imagine the scenario would be built around balancing those narrative elements with the individual tales of each level.
Very interesting. The story in the Disney worlds was largely decided by the level design team? Wow.
Despite major villains such as Young Xehanort, Vanitas, and Marluxia making multiple appearances in their respective worlds, they generally just spout off trite exposition and then either disappear or summon a boss fight. Some villains don’t even know why they’re there, while others introduce plot points (such as the Black Box or the new Princesses of Heart) that are never utilised or expanded upon. As the game features at least thirteen main antagonists, these early appearances should have been integral in establishing their personalities, motivations, and the threat they pose to the player (as well as our heroes). In execution, though, they seem like little more than after-thoughts that offer hints of personality, but never go beyond the superficial – and certainly contribute nothing to the main narrative. This, I believe, is because Kingdom Hearts III doesn’t have a story to tell, but was instead content with treading water until its grand conclusion.
Yep. I had no idea why Marluxia, Larxene, and Luxord were running around in the worlds. Why are they back? Other characters, like Saix, were given flimsy “motivation”. All in all, the Organization members were supposed to be vessels by the time you fight them in the KG: hollowed-out containers for Xehanort’s heart, victims of mind control who you’re supposed to pity. But they never felt like it.
Kingdom Hearts III’s meandering and vapid progression during ‘the Disney loop’ supports my argument that the game lacks a complete narrative and was merely concerned with reaching its final act. I believe this is most evident by the way in which the player is made to jump from world to world without any direction or purpose. Consequently, the majority of Kingdom Hearts III feels content to aimlessly ‘go through the motions’, setting a repetitive, humdrum pace and ultimately lacking the sense of narrative depth and genuine value that is integral to a great RPG.  
Yeah, I believe there was–at one point–an actual plot for KH3. But after BBSV2 was cancelled, a huge portion of KH3′s plot was pretty much scrapped along with it and rewritten.
Everyone’s heard of the three-act structure; a model that forms the foundation of popular culture’s favourite stories. Act 1 features the setup and exposition; an ��inciting incident’ to get the narrative moving. Act 2 is the confrontation; a midpoint which challenges the protagonist, pushing them to their limits. And finally, Act 3 is the resolution; concluding the plot, along with any character arcs introduced in the previous acts. While this structure doesn’t necessarily need to be adhered to, I believe it possesses something that Kingdom Hearts III sorely lacked – a midpoint.
Yep. KH3 had no mid-point. Scala ad Caelum could have worked as the mid-point. And it could have been another hub world like Radiant Garden. KH3 probably originally had this, but it was scrapped.
This is especially a shame, as Aqua’s fall into darkness – resulting in a twisted form that externalises all of her loneliest thoughts – is one of the most dramatically compelling aspects of the game. And that’s despite lasting for all of 10 minutes (a decade of solitude and suffering are seemingly erased by a few whacks from Sora’s Keyblade).
This is true for all of the characters who needed to be saved. Nobody really used the power of waking on anyone. It was just whack, whack, okay, you’re saved.
And this is ultimately the problem with the lack of a true Act 2 – the characters aren’t explored or challenged when they need to be. The narrative refuses to escalate until its final act, at which point it feels like going from zero to sixty in a matter of moments. But during the heat of battle – at such a late stage, and with so many heroes and villains in play (more than twenty) – it’s hard to develop your characters in a way that feels natural. Kingdom Hearts III’s solution is bizarre soliloquies that are completely disconnected from the events around them. Is Sora in the middle of a boss fight with three villains? Well, the other two will disappear while you spend several minutes casually chatting with the third. And while this is partly due to the challenge of giving such a large cast an appropriate send-off, it’s also a direct consequence of the lack of time given to exploring characters and their relationships in the previous 20-25 hours of playtime.
So true. So many characters who had so much development over the series. That’s why they needed another game before KH3. It was probably too much to ask for KH3 to be the epic conclusion as well as dive into everyone’s backstory.
On that note, having some sort of hub – a place, like Traverse Town or Hollow Bastion in the first two Kingdom Hearts games, that the player regularly returns to – can be an effective way to centre your story. It provides a home base, and a recurring cast of characters that can be revisited at any time. This kind of location helps players to feel a deeper and more personal attachment to your world.
Yeah, the game would have been so much better if you could visit RG and interact with the plot-important NPCs.
Put in Kingdom Hearts terms, we might say that the body and soul are here; it’s just missing its heart.
I’ve had the exact same thought.
This essay began with the assertion that Kingdom Hearts III is a conclusion in search of a story; a game without a tale of its own to tell. So far, we’ve examined the material impact; the effect this has on the game’s pacing, its sense of player progression, engagement, and character development. So in this topic, I want to consider the conceptual side of things; the motivations that drive our heroes and villains, the purpose of the events that take place, and finally the meaning intended to be conveyed by the story. Put simply, does the narrative of Kingdom Hearts III have something to say?
Sadly, no. I can tell it was supposed to, though. KH3′s story was supposed to be about the power of love. It was really that simple.
By the time of Kingdom Hearts III, Riku has overcome all of these challenges and been granted the title of Keyblade Master, so it was important to present him as a more mature, capable character, having regained his confidence and developed a clear identity. But ultimately, he just feels bland and stoic in this game. He has no new narrative arc, relatively few interactions with Sora, predominantly serves as a mouthpiece for exposition, and is more devoid of a distinct personality than ever. And for a game which serves as a conclusion to the story so far, it’s essential that our core group of characters, such as Riku and Kairi, reach a satisfying crescendo. The narrative should organically involve them in significant ways, and the challenges they face should provide natural opportunities for growth and exploration.
Sad, since Riku seemed like he did originally have a narrative arc. He got a new Keyblade! But the way he got it was laughably random and meaningless and contributed nothing to his overall growth or development.
As much as I’ve tried to understand it, I cannot summarise Master Xehanort’s motivation in that same, concise way. His initial speech in Kingdom Hearts III implies idle curiosity; he speculates that “If ruin brings about creation, what, then, would another Keyblade War bring?” followed by statements that he wants to re-enact the conflict and simply see what happens. He also wonders if they will “…be found worthy of the precious light the legend speaks of”, implying that his goal is to test humanity; or at least the current generation of Keyblade wielders. But that’s a pretty flimsy motivation, and it’s lacking any context or logic.
Yep. Xehanort was supposed to have another game to explore his motivations. When you get rid of that, his character just doesn’t work anymore.
And it’s not just the heroes that have this problem. During their death scenes, several of the Organisation’s members (Luxord, Marluxia, Larxene, Xigbar, Xion, Saix, and Ansem) either encourage Sora or imply that they didn’t care about the outcome; or didn’t even want to battle in the first place. Some have their reasons, but if even one of them had chosen not to fight, Xehanort’s re-enactment could have failed. Much like I described earlier, it doesn’t feel satisfying to overcome a foe who didn’t want to fight, and a war with the potential to destroy the universe should be motivated by much more powerful convictions.
I don’t disagree. But I honestly think this is because none of these characters actually wanted to fight in the Keyblade War. They were supposed to be possessed puppets. Mind-controlled vessels with no will of their own. 
Let’s use Saix as an example. What makes a more engaging battle? In canon, Saix had flimsy motivations to be fighting anyway. He wanted to atone, so he was acting as a double agent in order to procure some Replicas. And he wanted to look for Subject X. That’s why he joined Xehanort. That’s all the reason he had to fight. 
Compare that to a potential backstory with him as a vessel, lacking free will. Isa was a human test subject who was possessed as a teen. His best friend Lea has to fight him unwillingly. Saix is berserk and nearly kills Lea without even being aware of it. But all Lea wants is to save his best friend. I know which one I find more engaging. 
Ever since that first game, I’ve been trying to identify what it is that unified these two styles of storytelling – the Disney fairytale with the SquareSoft RPG. And in writing this essay, I finally realised; the secret ingredient, the unifying thread that both franchises had in common, was love. Romance is at the core of almost every classic Disney film, and every Final Fantasy from IV to X was in some way a love story. Seemingly the developers of the original Kingdom Hearts realised this too.
I’m pretty neutral about the Sora/Kairi romance. I mainly wanted Kairi to not feel like a damsel-in-distress yet again. And KH3 definitely screwed that up.
In a way, my problem was the same as that of Kingdom Hearts III’s story. We both spent so much time looking to the horizon, imagining what the future may hold, that we missed out on what was already right in front of us. I will always love and support this series, and its creativity and charm will no doubt continue to inspire my own stories for the rest of my life. But despite not being the conclusion I hoped for, Kingdom Hearts III has freed me from my own obsession with the series’ future. I no longer feel like I’m waiting for something that may never come. Of course, I hope the series gets its story back on track, and rises to new heights greater than ever before! But it turns out that I already got my ending in 2006; and now that I’ve finally realised that, I can finally, honestly say that, as a Kingdom Hearts fan, I am satisfied.
It’s sad that KH2’s ending felt more satisfying, because KH3’s ending should have been even better. KH2 had a happy ending, but in KH3, everyone was there on the beach. Terra, Aqua, and Ven were saved. In KH2, Axel was dead; he had a sad ending. But in KH3, he was human again and even had his childhood best friend back, too. Even Hayner, Pence, and Olette were there. Sora should have been there, too. By all accounts, I should have liked KH3’s ending the best out of any game. But they ruined it with the horrible character development and the cheap cliffhanger.
4 notes
safflowerseason · 5 years ago
Note
We all know Dan’s a piece of shit, what do you think was Amy’s draw to him?
Oh, Anon, who among us has not fallen for a gorgeous asshole?
Originally, I think we can presume that Dan did not act like a shit on those three dates, and…who wouldn’t be attracted to Dan at first glance, or during that first conversation when he’s trying to impress you? Amy doesn’t suffer fools, and to stumble on someone as intelligent and ambitious as Dan who also looks the way he does…they really would be the ultimate combination of physical, emotional, and intellectual chemistry, and that ideal is very powerful. She’s not wrong to have this vision in her head of the two of them as this DC power couple. That ideal is what Amy can’t let go of in S7, even when it goes far past plausibility that she still clings to it. And, as has been discussed in much detail already, the only way the writers escape from the narrative power of that vision is to literally turn Dan into a dumb sex psychopath. 
But to move away from the ideal, and to what Dan really represents to Amy, I think what she ultimately discovers is that she can be herself around him. She doesn’t have to pretend to be particularly feminine or emotionally sensitive, or that she cares about having hobbies that aren’t work-related. She can be obsessive and intense about her job and be cutthroat and use her dad’s illness as a way to escape Furlong. Dan likes all those things about her. He might tell her she needs to relax more, but ultimately, he doesn’t want her to change. That would be very powerful to Amy. The show hints that she’s never met with her family’s approval, for one thing, and she clearly struggles with these ideas of what she’s “supposed” to want and how she should act in the world. With Dan, she just gets to be Amy and go for late-night drinks and fantasize with him about the White House and how they’re going to get there. 
Plus, in seasons two through four, Amy discovers that Dan can get shit done when he wants to, and that they work really, really well together (usually when she’s in charge). I think that would be very important to someone as power-driven as Amy. She would be very attracted to someone who could help her get where she wants to go. 
Paradoxically, the ease she finds with Dan eventually propels her to tell him that she’ll be alone in her hotel room in Nevada. And if that had worked out the way she wanted, I think ultimately she would have felt able to relax more and feel more confident in her body and the way she presents to the world. For the story that I’m working on, I’ve thought a lot about how Amy would act if she were in a serious relationship with Dan, and I do think she’s “mellowed” just a teensy bit. Still recognizable as Amy, still unable to separate her emotions from anything else, but just with a slightly less intense grip on her phone. Her style has evolved significantly since season six in my story too. It’s not quite the “tv sex-kitten” look we get in S7, but she wears her hair loose and dresses more stylishly, in brighter colors and better tailored suits and dresses. (It’s noteworthy that in the finale, we see her back in an “old” Amy outfit—a dark skirt suit and her old blonde bob. There are multiple interpretations to take away from that, though.) 
31 notes
sanderssidecanons · 7 years ago
Text
Title: A deal
Words: 1906
Pairings: None
Warnings: Violence, gore, vomiting. If I missed anything, please tell me!
Additional Info: Insane! Sides
Logan was walking through the hallways, thinking about the previous days he had experienced in the mind-palace. Everything was pure chaos; every single side was acting unusual. Virgil hadn’t come out of his room once, and Logan even thought that the anxious trait had already starved in there. That would be yet another method of suicide, one completely new to the logical trait, and guaranteed to be a death Virgil had never chosen before, not least because of the long wait and suffering that went along with it. 
Logan sighed, shaking his head slightly and looking around the hallway before continuing on his way, taking deep breaths as he tried to ground himself. He had to remember, to keep the conversation with Virgil in his memories. Virgil was right: it wasn’t like him to be enslaved by insanity, and yet here he was, corrupted to the core with no way of saving him. He would never be pure again. Logan honestly didn’t even know why Virgil had come to him, of all sides, for help. 
Logan bit his lip, his fangs piercing the skin but drawing no blood, as he thought about everything Virgil had said, tilting his head in thought. The anxious trait had been calmer than usual; he certainly wasn’t afraid to confront the logical trait, even though Logan had killed Virgil just as many times as Patton and Roman had. Maybe even more times, and still the anxious trait trusted him. Logan simply couldn’t wrap his head around it, the whole situation being completely and utterly illogical to him. It just didn’t make any sense for Virgil to come to HIM for any help, since he had been the first side to get corrupted AND had drunk his essence on multiple occasions.
 Logan shook his head again, suddenly stopping as he heard a familiar voice speaking in a foreign language.
 „Guten Tag.“
 Logan narrowed his eyes, turning around and facing Deceit, who was hanging upside down from the ceiling, perched on it like a frog. It should have been impossible for the villainous trait to stay up there, but he held his position anyway; how, nobody knew. There was something about his hands and feet, something the other sides didn’t have. Logan clicked his tongue, smirking slightly as he answered: „Your German won’t cause any mercy to bloom inside of me. Points for effort, though.“ Deceit grinned, showing his viper teeth, while he tilted his head so far that it was no longer upside down but oriented the way a head usually is when you’re not hanging from the ceiling.
 „Come on. Thhhrow me a bone here. I didn’t do anythhing wrong.“ Logan rolled his eyes slightly as he countered: „That’s a lie.“ Logan didn’t bat an eye, even as all of Deceit’s dozen eyes glared directly at him, white pupils almost shining with the poison Deceit wanted to throw at him, while the villainous trait himself still maintained his smile. „Anyway, assss I wasss sssaying. I have a little deal for you... And I won’t be very happy if you don’t acccept it.“ Logan scrunched his eyebrows together, continuing to glare at Deceit, even though the logical trait was never exactly sure which eye to glare at.
 „What do you want?!“ snarled Logan, sparks almost shooting out of his glasses. He was certainly not happy with the villainous trait, but his wrath would have been even greater had he known that Deceit had eavesdropped on him and Virgil talking to each other in the logical trait’s laboratory. „I sssuggest you ssstay away from Virgil for thhhe time being. You are cooperating withhh ssssome rathhher... unwanted busssssinesss.“ Logan bared his teeth as he countered: „I don’t remember making my business your responsibility, Deceit. And now shut your lying mouth and crawl away; I won’t talk to you anymore.“
 This was certainly not what Deceit wanted to hear, and Logan froze at the next words: „You got ten ssssecondsss.“ The villainous trait began counting slowly, leaving Logan enough time to run a little distance, plans already forming in his head, but he was unable to start any of them as he suddenly heard: „FIVE, I LIED!“ Followed by maniacal laughter, Logan looked back, surprised to see Deceit crawling along the ceiling at an insane speed, jumping from said ceiling and landing boots-first on Logan’s back, pinning him to the ground. But the logical trait was having none of that as he turned on the floor, leaving Deceit pinned down instead. Logan reeled back and punched the villainous trait right under the eye, Deceit hissing in pain as he snarled:
 „Watch for thhhhe eye, you hooligan!“ Deceit opened his mouth as wide as a snake’s, revealing sharp little fangs, and Logan could only scream as the villainous trait practically broke his own back to bite into Logan’s shoulder, the logical trait completely forgetting that pinning Deceit down was almost useless. Deceit took advantage of Logan’s stunned shock as he turned yet again to pin the logical trait to the ground, only for Logan to ram his knee into Deceit’s abdomen. Deceit groaned in pain, giving Logan enough time to kick him in the face and send him flying back a few feet, the villainous trait landing painfully on his back, the air knocked out of him.
 Logan snarled, his insane powers finally paying off as he dashed towards Deceit, made a high jump, and landed with his feet on Deceit’s chest, a loud crack audible beneath the loud scream of the villainous trait, who howled in pain, trying to wriggle away, the feeling of being trapped under Logan’s shoe settling in. Logan screeched in a tone too high for a normal human being as he used his claws to slash at Deceit, the villainous trait screaming even louder as Logan slashed through multiple eyes on his face, including his original right eye. All of them were leaking black liquid that wasn’t insanity, the gash on his face too big and too painful for him to open his eyes. That was it. This took the cake. 
Attacking Deceit was one thing, but rendering his eyes useless was just too much! He was almost eighty percent eyes; he needed every single one of them. Deceit growled and screeched in anger, two new arms pushing Logan away. Logan huffed in surprise, quickly landing on his feet to see what had just stopped him from killing the villainous trait. He couldn’t believe what he saw. Deceit had used his insanity to grow two more arms, the new limbs leaking with insanity, clearly not meant to stay for long. 
Logan wanted to start another attack, but the villainous trait sprinted towards him, ramming all four of his fists into Logan’s face, his nose cracking under the pressure and blood flowing down like a waterfall, the logical trait howling in pain as he flew backwards. He tried to slow his fall by clawing at the walls, but he was moving too fast and tumbled down the stairs, crashing into the wall right next to the door, eyes drooping and unconsciousness spreading through his body, black slowly creeping into his eyesight. Logan smirked slightly, spitting some blood and insanity on the ground as he saw Deceit crawling down the stairs, significantly slower than usual, clearly in pain. 
He waited until Deceit was standing right in front of him before he spoke: „You won that one... but one wrong look towards Virgil and you’re dead meat.“ Deceit smiled widely, his unhurt eyes widening as he answered: „It’sssss a deal.“ Logan closed his eyes after that, finally going to sleep after torturing his mind for so long. He had lost, but he certainly wouldn’t leave it at that.
Virgil was glad he had taken that little trip inside the library where all of Thomas’ memories were stored, the peace and quiet and happy memories soothing the anxious trait in a way he hadn’t experienced in a long time. He especially watched the early childhood days of Thomas, where everyone was just a little worm living their merry life, and the most recent memories before the corruption, when they had made all of those videos together, the anxious trait tearing up once again at the mere thought crossing his mind. 
He sniffled slightly, wiping his eyes and smiling at all the happy memories. If his plan worked, maybe things could return to the old days; it would all be the same, and Insanity would be nothing but a nightmare, just a bad memory they tried to forget. Oh, if only it were that easy. Virgil opened his door, surprised to find it unlocked, remembering clearly that he had in fact locked it. His door was locked all the time so Patton wouldn’t come in and drag him to some of the murderous games he didn’t want to play. He peeked suspiciously inside, spotting a big chest on his purple rug, raising an eyebrow at the sight.
 The anxious trait didn’t know how long this chest had been standing there, but it had certainly been at least a few days, considering that Virgil had taken a few snacks and spent days in the library, just hiding and remembering everything. He read the little note attached to the chest and frowned deeply at the scribbly handwriting: ‘A gift from me to you. -Deceit’ Virgil was afraid to open it, not sure if he really should, but curiosity got the better of him. He grabbed it and slowly opened the chest, the smell almost knocking all the air out of him as he coughed, overwhelmed by the strong scent radiating from inside.
 Virgil peeked inside, his heart stopping at the sight. It was Logan; well, what remained of Logan. He had been stuffed into the chest, which was much too small for him, and left there for at least a few days until he suffocated inside it. His face was blue from the lack of oxygen, his mouth open but dried out, his flesh already rotting, cockroaches crawling in and out of his body, digging through organs and skin.
 They had already chewed all the skin off his left arm, the bone clearly visible next to the purple and blue flesh. Some flies flew towards Virgil, who quickly swatted them away, eyes tearing up, not out of melancholy but pure grief, suddenly realizing what he had done by spending days in the library without returning. Logan must have been in his room for days. Maybe Virgil could have saved him. A cockroach crawled out of Logan’s mouth and Virgil slammed a hand over his own, the feeling in his stomach too much as he threw up right next to the chest, everything spinning, robbing him of every last sense.
 „I’m sorry, Logan. I’m so sorry,“ whimpered Virgil as he closed the chest, crying for a few hours over Logan’s rotting body, not caring whether he would return, but out of simple shock at losing him like that. And one of the worst parts was that he didn’t know whether Logan would come back the way he was, or already corrupted to the core. If it was the second possibility, he would have to get him back once again. But he would do it. For Logan. „I’m so sorry, Logan...“  
16 notes
wickedbananas · 7 years ago
Text
The Website Migration Guide: SEO Strategy & Process
Posted by Modestos
What is a site migration?
A site migration is a term broadly used by SEO professionals to describe any event whereby a website undergoes substantial changes in areas that can significantly affect search engine visibility — typically substantial changes to the site structure, content, coding, site performance, or UX.
Google’s documentation on site migrations doesn’t cover them in great depth and downplays the fact that so often they result in significant traffic and revenue loss, which can last from a few weeks to several months — depending on the extent search engine ranking signals have been affected, as well as how long it may take the affected business to rollout a successful recovery plan.
Quick access links
Site migration examples
Site migration types
Common site migration pitfalls
Site migration process
  1. Scope & planning
  2. Pre-launch preparation
  3. Pre-launch testing
  4. Launch day actions
  5. Post-launch testing
  6. Performance review
Appendix: Useful tools
Site migration examples
The following section discusses how both successful and unsuccessful site migrations look and explains why it is 100% possible to come out of a site migration without suffering significant losses.
Debunking the “expected traffic drop” myth
Anyone who has been involved with a site migration has probably heard the widespread theory that it will result in de facto traffic and revenue loss. Even though this assertion holds some truth for some very specific cases (i.e. moving from an established domain to a brand new one) it shouldn’t be treated as gospel. It is entirely possible to migrate without losing any traffic or revenue; you can even enjoy significant growth right after launching a revamped website. However, this can only be achieved if every single step has been well-planned and executed.
Examples of unsuccessful site migrations
The following graph illustrates a big UK retailer’s botched site migration where the website lost 35% of its visibility two weeks after switching from HTTP to HTTPS. It took them about six months to fully recover, which must have had a significant impact on revenue from organic search. This is a typical example of a poor site migration, possibly caused by poor planning or implementation.
Example of a poor site migration — recovery took 6 months!
But recovery may not always be possible. The below visibility graph is from another big UK retailer, where the HTTP to HTTPS switchover resulted in a permanent 20% visibility loss.
Another example of a poor site migration — no signs of recovery 6 months on!
In fact, it is entirely possible to migrate from HTTP to HTTPS without losing that much traffic for such a long period, aside from the first few weeks where there is high volatility as Google discovers the new URLs and updates search results.
Examples of successful site migrations
What does a successful site migration look like? This largely depends on the site migration type, the objectives, and the KPIs (more details later). But in most cases, a successful site migration shows at least one of the following characteristics:
Minimal visibility loss during the first few weeks (short-term goal)
Visibility growth thereafter — depending on the type of migration (long-term goal)
The following visibility report is taken from an HTTP to HTTPS site migration, which was also accompanied by significant improvements to the site’s page loading times.
The following visibility report is from a complete site overhaul, which I was fortunate to be involved with several months in advance and supported during the strategy, planning, and testing phases, all of which were equally important.
As commonly occurs on site migration projects, the launch date had to be pushed back a few times due to the risks of launching the new site prematurely and before major technical obstacles were fully addressed. But as you can see on the below visibility graph, the wait was well worth it. Organic visibility not only didn’t drop (as most would normally expect) but in fact started growing from the first week.
Visibility growth one month after the migration reached 60%, whilst organic traffic growth two months post-launch exceeded 80%.
Example of a very successful site migration — instant growth following new site launch!
This was a rather complex migration as the new website was re-designed and built from scratch on a new platform with an improved site taxonomy that included new landing pages, an updated URL structure, lots of redirects to preserve link equity, plus a switchover from HTTP to HTTPS.
In general, introducing too many changes at the same time can be tricky because if something goes wrong, you’ll struggle to figure out what exactly is at fault. But at the same time, leaving major changes for a later time isn’t ideal either as it will require more resources. If you know what you’re doing, making multiple positive changes at once can be very cost-effective.
Before getting into the nitty-gritty of how you can turn a complex site migration project into a success, it’s important to run through the main site migration types as well as explain the main reason so many site migrations fail.
Site migration types
There are many site migration types. It all depends on the nature of the changes that take place to the legacy website.
Google’s documentation mostly covers migrations with site location changes, which are categorised as follows:
Site moves with URL changes
Site moves without URL changes
Site move migrations
These typically occur when a site moves to a different URL due to any of the below:
Protocol change
A classic example is when migrating from HTTP to HTTPS.
Subdomain or subfolder change
Very common in international SEO where a business decides to move one or more ccTLDs into subdomains or subfolders. Another common example is where a mobile site that sits on a separate subdomain or subfolder becomes responsive and both desktop and mobile URLs are uniformed.
Domain name change
Commonly occurs when a business is rebranding and must move from one domain to another.
Top-level domain change
This is common when a business decides to launch international websites and needs to move from a ccTLD (country code top-level domain) to a gTLD (generic top-level domain) or vice versa, e.g. moving from .co.uk to .com, or moving from .com to .co.uk and so on.
Site structure changes
These are changes to the site architecture that usually affect the site’s internal linking and URL structure.
Other types of migrations
There are other types of migration which are triggered by changes to the site’s content, structure, design, or platform.
Replatforming
This is the case when a website is moved from one platform/CMS to another, e.g. migrating from WordPress to Magento or just upgrading to the latest platform version. Replatforming can, in some cases, also result in design and URL changes because of technical limitations that often occur when changing platforms. This is why replatforming migrations rarely result in a website that looks exactly the same as the previous one.
Content migrations
Major content changes such as content rewrites, content consolidation, or content pruning can have a big impact on a site’s organic search visibility, depending on the scale. These changes can often affect the site’s taxonomy, navigation, and internal linking.
Mobile setup changes
With so many options available for a site's mobile setup, changes such as enabling app indexing, building an AMP site, or building a PWA can also be considered partial site migrations, especially when an existing mobile site is being replaced by an app, AMP, or PWA.
Structural changes
These are often caused by major changes to the site's taxonomy that impact the site navigation, internal linking, and user journeys.
Site redesigns
These can vary from major design changes in the look and feel to a complete website revamp that may also include significant media, code, and copy changes.
Hybrid migrations
In addition to the above, there are several hybrid migration types that can be combined in practically any way possible. The more changes that get introduced at the same time the higher the complexity and the risks. Even though making too many changes at the same time increases the risks of something going wrong, it can be more cost-effective from a resources perspective if the migration is very well-planned and executed.
Common site migration pitfalls
Even though every site migration is different, there are a few common themes behind the most typical site migration disasters, the biggest being the following:
Poor strategy
Some site migrations are doomed to failure way before the new site is launched. A strategy that is built upon unclear and unrealistic objectives is much less likely to bring success.
Establishing measurable objectives is essential in order to measure the impact of the migration post-launch. For most site migrations, the primary objective should be the retention of the site’s current traffic and revenue levels. In certain cases the bar could be raised higher, but in general anticipating or forecasting growth should be a secondary objective. This will help avoid creating unrealistic expectations.
Poor planning
Coming up with a detailed project plan as early as possible will help avoid delays along the way. Factor in additional time and resources to cope with any unforeseen circumstances that may arise. No matter how well thought out and detailed your plan is, it’s highly unlikely everything will go as expected. Be flexible with your plan and accept the fact that there will almost certainly be delays. Map out all dependencies and make all stakeholders aware of them.
Avoid planning to launch the site near your seasonal peaks, because if anything goes wrong you won’t have enough time to rectify the issues. For instance, retailers should avoid launching a site close to September/October to avoid putting the busy pre-Christmas period at risk. In this case, it would be much wiser launching during the quieter summer months.
Lack of resources
Before committing to a site migration project, estimate the time and effort required to make it a success. If your budget is limited, make a call as to whether it is worth going ahead with a migration that is likely to fail in meeting its established objectives and cause revenue loss.
As a rule of thumb, try to include a buffer of at least 20% more resources than you initially think the project will require. This additional buffer will later allow you to quickly address any issues as soon as they arise, without jeopardizing success. If your resources are too tight or you start cutting corners at this early stage, the site migration will be at risk.
Lack of SEO/UX consultation
When changes are taking place on a website, every single decision needs to be weighed from both a UX and SEO standpoint. For instance, removing great amounts of content or links to improve UX may damage the site's ability to target business-critical keywords or result in crawling and indexing issues. In either case, such changes could damage the site's organic search visibility. On the other hand, having too much text copy and few images may have a negative impact on user engagement and damage the site's conversions.
To avoid risks, appoint experienced SEO and UX consultants so they can discuss the potential consequences of every single change with key business stakeholders who understand the business intricacies better than anyone else. The pros and cons of each option need to be weighed before making any decision.
Late involvement
Site migrations can span several months and require careful planning and enough time for testing. Seeking professional support late is very risky because crucial steps may have been missed.
Lack of testing
In addition to a great strategy and thoughtful plan, dedicate some time and effort to thorough testing before launching the site. It's far preferable to delay the launch if testing has identified critical issues rather than rushing a sketchy implementation into production. It goes without saying that you should not launch a website if it hasn't been tested by both expert SEO and UX teams.
Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.
Slow response to bug fixing
There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.
Underestimating scale
Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let's launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.
It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.
Site migration process
The site migration process can be split into six essential phases. They are all equally important, and skipping any of the below tasks could hinder the migration's success to varying extents.
Phase 1: Scope & Planning
Work out the project scope
Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.
A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.
However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.
Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.
You should now be left with a prioritized list of activities which are expected to have a positive ROI, if implemented. These should then be communicated and discussed with all stakeholders, so you set realistic targets, agree on the project scope, and set the right expectations from the outset.
Prepare the project plan
Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.
The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.
A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.
Phase 2: Pre-launch preparation
This phase includes any activities that need to be carried out while the new site is still under development. By this point, the new site's SEO requirements should have been captured already. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes well before the new site becomes available on a staging environment.
Wireframes review
Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.
Preparing the technical SEO specifications
Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.
The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.
Make sure to include specific requirements that cover at least the following areas:
URL structure
Meta data (including dynamically generated default values)
Structured data
Canonicals and meta robots directives
Copy & headings
Main & secondary navigation
Internal linking (in any form)
Pagination
XML sitemap(s)
HTML sitemap
Hreflang (if there are international sites)
Mobile setup (including the app, AMP, or PWA site)
Redirects
Custom 404 page
JavaScript, CSS, and image files
Page loading times (for desktop & mobile)
The specification should also include areas of the CMS functionality that allows users to:
Specify custom URLs and override default ones
Update page titles
Update meta descriptions
Update any h1–h6 headings
Add or amend the default canonical tag
Set the meta robots attributes to index/noindex/follow/nofollow
Add or edit the alt text of each image
Include Open Graph fields for description, URL, image, type, sitename
Include Twitter Open Graph fields for card, URL, title, description, image
Bulk upload or amend redirects
Update the robots.txt file
It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).
Identifying priority pages
One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.
In order to do this, you need to:
Crawl the legacy site
Identify all indexable pages
Identify top performing pages
How to crawl the legacy site
Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links etc. Regardless of the crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:
Ignore robots.txt (in case any vital parts are accidentally blocked)
Follow internal “nofollow” links (so the crawler reaches more pages)
Crawl all subdomains (depending on scope)
Crawl outside start folder (depending on scope)
Change the user agent to Googlebot (desktop)
Change the user agent to Googlebot (smartphone)
Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.
How to identify the indexable pages
Once the crawl is complete, work on identifying the legacy site’s indexed pages. These are any HTML pages with the following characteristics:
Return a 200 server response
Either do not have a canonical tag or have a self-referring canonical URL
Do not have a meta robots noindex
Aren’t excluded from the robots.txt file
Are internally linked from other pages (non-orphan pages)
The indexable pages are the only pages that have the potential to drive traffic to the site and therefore need to be prioritized for the purposes of your site migration. These are the pages worth optimizing (if they will exist on the new site) or redirecting (if they won’t exist on the new site).
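If your crawler exports to CSV, this filtering can be scripted rather than done by hand. A minimal pandas sketch, assuming hypothetical column names (url, status_code, canonical_url, meta_robots, inlinks) that you would adjust to match your crawler's export; robots.txt exclusions still need to be checked separately:

import pandas as pd

df = pd.read_csv("crawl.csv")  # hypothetical export from your crawler of choice

indexable = df[
    (df["status_code"] == 200)
    # No canonical tag at all, or a self-referring one
    & (df["canonical_url"].isna() | (df["canonical_url"] == df["url"]))
    # No meta robots noindex
    & (~df["meta_robots"].fillna("").str.contains("noindex"))
    # At least one internal link pointing to the page (non-orphan)
    & (df["inlinks"] > 0)
]
indexable.to_csv("indexable_pages.csv", index=False)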
How to identify the top performing pages
Once you’ve identified all indexable pages, you may have to carry out more work, especially if the legacy site consists of a large number of pages and optimizing or redirecting all of them is impossible due to time, resource, or technical constraints.
If this is the case, you should identify the legacy site’s top performing pages. This will help with the prioritization of the pages to focus on during the later stages.
It’s recommended to prepare a spreadsheet that includes the below fields:
Legacy URL (include only the indexable ones from the crawl data)
Organic visits during the last 12 months (Analytics)
Revenue, conversions, and conversion rate during the last 12 months (Analytics)
Pageviews during the last 12 months (Analytics)
Number of clicks from the last 90 days (Search Console)
Top linked pages (Majestic SEO/Ahrefs)
With the above information in one place, it’s now much easier to identify your most important pages: the ones that generate organic visits, convert well, contribute to revenue, have a good number of referring domains linking to them, etc. These are the pages that you must focus on for a successful site migration.
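Pulling these exports together is a straightforward merge. A sketch along the same lines, again with hypothetical file and column names, that produces a single prioritized sheet:

import pandas as pd

crawl = pd.read_csv("indexable_pages.csv")     # indexable URLs from the crawl step
analytics = pd.read_csv("analytics_12m.csv")   # organic visits, revenue, conversions per URL
gsc = pd.read_csv("search_console_90d.csv")    # clicks per URL
links = pd.read_csv("linked_pages.csv")        # referring domains per URL

report = (
    crawl.merge(analytics, on="url", how="left")
         .merge(gsc, on="url", how="left")
         .merge(links, on="url", how="left")
         .fillna(0)
         .sort_values(["organic_visits", "referring_domains"], ascending=False)
)
report.to_csv("priority_pages.csv", index=False)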
The top performing pages should ideally also exist on the new site. If for any reason they don't, they should be redirected to the most relevant page so that users requesting them do not land on 404 pages and the link equity they previously had remains on the site. If any of these pages cease to exist and aren't properly redirected, your site's rankings and traffic will be negatively affected.
Benchmarking
Once the launch of the new website is getting close, you should benchmark the legacy site’s performance. Benchmarking is essential, not only to compare the new site’s performance with the previous one but also to help diagnose which areas underperform on the new site and to quickly address them.
Keywords rank tracking
If you don’t track the site’s rankings frequently, you should do so just before the new site goes live. Otherwise, you will later struggle figuring out whether the migration has gone smoothly or where exactly things went wrong. Don’t leave this to the last minute in case something goes awry — a week in advance would be the ideal time.
Spend some time working out which keywords are most representative of the site's organic search visibility and track them across desktop and mobile. Because monitoring thousands of head, mid-, and long-tail keyword combinations is usually unrealistic, the bare minimum you should monitor are keywords that are driving traffic to the site (keywords ranking on page one) and have decent search volume (head/mid-tail focus).
If you do get traffic from both brand and non-brand keywords, you should also decide which type of keywords to focus on more from a tracking POV. In general, non-brand keywords tend to be more competitive and volatile. For most sites it would make sense to focus mostly on these.
Don’t forget to track rankings across desktop and mobile. This will make it much easier to diagnose problems post-launch should there be performance issues on one device type. If you receive a high volume of traffic from more than one country, consider rank tracking keywords in other markets, too, because visibility and rankings can vary significantly from country to country.
Site performance
The new site’s page loading times can have a big impact on both traffic and sales. Several studies have shown that the longer a page takes to load, the higher the bounce rate. Unless the old site’s page loading times and site performance scores have been recorded, it will be very difficult to attribute any traffic or revenue loss to site performance related issues once the new site has gone live.
It’s recommended that you review all major page types using Google’s PageSpeed Insights and Lighthouse tools. You could use summary tables like the ones below to benchmark some of the most important performance metrics, which will be useful for comparisons once the new site goes live.
MOBILE

| Page type        | Speed   | FCP  | DCL  | Optimization | Optimization score |
|------------------|---------|------|------|--------------|--------------------|
| Homepage         | Fast    | 0.7s | 1.4s | Good         | 81/100             |
| Category page    | Slow    | 1.8s | 5.1s | Medium       | 78/100             |
| Subcategory page | Average | 0.9s | 2.4s | Medium       | 69/100             |
| Product page     | Slow    | 1.9s | 5.5s | Good         | 83/100             |

DESKTOP

| Page type        | Speed | FCP  | DCL  | Optimization | Optimization score |
|------------------|-------|------|------|--------------|--------------------|
| Homepage         | Good  | 0.7s | 1.4s | Average      | 81/100             |
| Category page    | Fast  | 0.6s | 1.2s | Medium       | 78/100             |
| Subcategory page | Fast  | 0.6s | 1.3s | Medium       | 78/100             |
| Product page     | Good  | 0.8s | 1.3s | Good         | 83/100             |
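If you'd rather not collect these numbers by hand, the public PageSpeed Insights API can be scripted. A minimal sketch with placeholder URLs; the endpoint and response fields reflect the v5 API at the time of writing, and an API key (passed as a key parameter) is advisable beyond light use:

import requests

PSI_API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
pages = ["https://www.website.com/", "https://www.website.com/category/"]  # placeholders

for url in pages:
    for strategy in ("mobile", "desktop"):
        # Query the API for this URL/device combination
        data = requests.get(PSI_API, params={"url": url, "strategy": strategy}).json()
        audits = data["lighthouseResult"]["audits"]
        print(url, strategy,
              "FCP:", audits["first-contentful-paint"]["displayValue"],
              "Performance:", data["lighthouseResult"]["categories"]["performance"]["score"])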
Old site crawl data
A few days before the new site replaces the old one, run a final crawl of the old site. Doing so could later prove invaluable, should there be any optimization issues on the new site. A final crawl will allow you to save vital information about the old site’s page titles, meta descriptions, h1–h6 headings, server status, canonical tags, noindex/nofollow pages, inlinks/outlinks, level, etc. Having all this information available could save you a lot of trouble if, say, the new site isn’t well optimized or suffers from technical misconfiguration issues. Try also to save a copy of the old site’s robots.txt and XML sitemaps in case you need these later.
Search Console data
Also consider exporting as much of the old site’s Search Console data as possible. These are only available for 90 days, and chances are that once the new site goes live the old site’s Search Console data will disappear sooner or later. Data worth exporting includes:
Search analytics queries & pages
Crawl errors
Blocked resources
Mobile usability issues
URL parameters
Structured data errors
Links to your site
Internal links
Index status
Redirects preparation
The redirects implementation is one of the most crucial activities during a site migration. If the legacy site’s URLs cease to exist and aren’t correctly redirected, the website’s rankings and visibility will simply tank.
Why are redirects important in site migrations?
Redirects are extremely important because they help both search engines and users find pages that may no longer exist, have been renamed, or moved to another location. From an SEO point of view, redirects help search engines discover and index a site’s new URLs quicker but also understand how the old site’s pages are associated with the new site’s pages. This association will allow for ranking signals to pass from the old pages to the new ones, so rankings are retained without being negatively affected.
What happens when redirects aren’t correctly implemented?
When redirects are poorly implemented, the consequences can be catastrophic. Users will either land on Not Found pages (404s) or irrelevant pages that do not meet the user intent. In either case, the site’s bounce and conversion rates will be negatively affected. The consequences for search engines can be equally catastrophic: they’ll be unable to associate the old site’s pages with those on the new site if the URLs aren’t identical. Ranking signals won’t be passed over from the old to the new site, which will result in ranking drops and organic search visibility loss. In addition, it will take search engines longer to discover and index the new site’s pages.
301, 302, JavaScript redirects, or meta refresh?
When the URLs between the old and new version of the site are different, use 301 (permanent) redirects. These will tell search engines to index the new URLs as well as forward any ranking signals from the old URLs to the new ones. Therefore, you must use 301 redirects if your site moves to/from another domain/subdomain, if you switch from HTTP to HTTPS, or if the site or parts of it have been restructured. Despite some of Google’s claims that 302 redirects pass PageRank, indexing the new URLs would be slower and ranking signals could take much longer to be passed on from the old to the new page.
302 (temporary) redirects should only be used in situations where a redirect does not need to live permanently and therefore indexing the new URL isn’t a priority. With 302 redirects, search engines will initially be reluctant to index the content of the redirect destination URL and pass any ranking signals to it. However, if the temporary redirects remain for a long period of time without being removed or updated, they could end up behaving similarly to permanent (301) redirects. Use 302 redirects when a redirect is likely to require updating or removal in the near future, as well as for any country-, language-, or device-specific redirects.
Meta refresh and JavaScript redirects should be avoided. Even though Google is getting better and better at crawling JavaScript, there are no guarantees these will get discovered or pass ranking signals to the new pages.
If you’d like to find out more about how Google deals with the different types of redirects, please refer to John Mueller’s post.
Redirect mapping process
If you are lucky enough to work on a migration that doesn’t involve URL changes, you could skip this section. Otherwise, read on to find out why any legacy pages that won’t be available on the same URL after the migration should be redirected.
The redirect mapping file is a spreadsheet that includes the following two columns:
Legacy site URL –> a page’s URL on the old site.
New site URL –> a page’s URL on the new site.
When mapping (redirecting) a page from the old to the new site, always try mapping it to the most relevant corresponding page. In cases where a relevant page doesn’t exist, avoid redirecting the page to the homepage. First and foremost, redirecting users to irrelevant pages results in a very poor user experience. Google has stated that redirecting pages “en masse” to irrelevant pages will be treated as soft 404s and because of this won’t be passing any SEO value. If you can’t find an equivalent page on the new site, try mapping it to its parent category page.
Once the mapping is complete, the file will need to be sent to the development team to create the redirects, so that these can be tested before launching the new site. The implementation of redirects is another part in the site migration cycle where things can often go wrong.
Increasing efficiencies during the redirect mapping process
Redirect mapping requires great attention to detail and needs to be carried out by experienced SEOs. The URL mapping on small sites could in theory be done by manually mapping each URL of the legacy site to a URL on the new site. But on large sites that consist of thousands or even hundreds of thousands of pages, manually mapping every single URL is practically impossible and automation needs to be introduced. Relying on certain common attributes between the legacy and new site can be a massive time-saver. Such attributes may include the page titles, H1 headings, or other unique page identifiers such as product codes, SKUs etc. Make sure the attributes you rely on for the redirect mapping are unique and not repeated across several pages; otherwise, you will end up with incorrect mapping.
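As an illustration of attribute-based mapping, here is a minimal pandas sketch that pairs legacy and new URLs by matching H1 headings across the two crawls. File and column names are hypothetical, and any H1 that matches more than one page is set aside for manual review:

import pandas as pd

old = pd.read_csv("legacy_crawl.csv")[["url", "h1"]]
new = pd.read_csv("staging_crawl.csv")[["url", "h1"]]

# Join the two crawls on the shared attribute (here: the H1 heading)
mapping = old.merge(new, on="h1", suffixes=("_legacy", "_new"))

# Any H1 that matched multiple pages is ambiguous and needs manual review
ambiguous = mapping[mapping.duplicated("h1", keep=False)]
mapping.drop(ambiguous.index).to_csv("redirect_mapping.csv", index=False)
ambiguous.to_csv("needs_manual_review.csv", index=False)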
Pro tip: Make sure the URL structure of the new site is 100% finalized on staging before you start working on the redirect mapping. There’s nothing riskier than mapping URLs that will be updated before the new site goes live. When URLs are updated after the redirect mapping is completed, you may have to deal with undesired situations upon launch, such as broken redirects, redirect chains, and redirect loops. A content-freeze should be placed on the old site well in advance of the migration date, so there is a cut-off point for new content being published on the old site. This will make sure that no pages will be missed from the redirect mapping and guarantee that all pages on the old site get redirected.
Don’t forget the legacy redirects!
You should get hold of the old site’s existing redirects to ensure they’re considered when preparing the redirect mapping for the new site. Unless you do this, it’s likely that the site’s current redirect file will get overwritten by the new one on the launch date. If this happens, all legacy redirects that were previously in place will cease to exist and the site may lose a decent amount of link equity, the extent of which will largely depend on the site’s volume of legacy redirects. For instance, a site that has undergone a few migrations in the past should have a good number of legacy redirects in place that you don’t want getting lost.
Ideally, preserve as many of the legacy redirects as possible, making sure these won't cause any issues when combined with the new site's redirects. It's strongly recommended to eliminate any potential redirect chains at this early stage, which can easily be done by checking whether the same URL appears both as a "Legacy URL" and a "New site URL" in the redirect mapping spreadsheet. If this is the case, you will need to update the "New site URL" accordingly.
Example:
URL A redirects to URL B (legacy redirect)
URL B redirects to URL C (new redirect)
Which results in the following redirect chain:
URL A –> URL B –> URL C
To eliminate this, amend the existing legacy redirect and create a new one so that:
URL A redirects to URL C (amended legacy redirect)
URL B redirects to URL C (new redirect)
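This check is easy to automate. A small sketch that collapses chains, and flags loops, in a redirect mapping held as a Python dict:

def flatten(redirects):
    """Resolve every source URL to its final destination, collapsing chains."""
    flat = {}
    for src in redirects:
        seen = {src}
        dst = redirects[src]
        while dst in redirects:   # the destination is itself being redirected
            if dst in seen:       # loop detected (e.g. A -> B -> A)
                dst = None
                break
            seen.add(dst)
            dst = redirects[dst]
        flat[src] = dst           # None marks a loop that needs fixing
    return flat

print(flatten({"/url-a": "/url-b", "/url-b": "/url-c"}))
# {'/url-a': '/url-c', '/url-b': '/url-c'}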
Pro tip: Check your redirect mapping spreadsheet for redirect loops. These occur when the “Legacy URL” is identical to the “new site URL.” Redirect loops need to be removed because they result in infinitely loading pages that are inaccessible to users and search engines. Redirect loops must be eliminated because they are instant traffic, conversion, and ranking killers!
Implement blanket redirect rules to avoid duplicate content
It’s strongly recommended to try working out redirect rules that cover as many URL requests as possible. Implementing redirect rules on a web server is much more efficient than relying on numerous one-to-one redirects. If your redirect mapping document consists of a very large number of redirects that need to be implemented as one-to-one redirect rules, site performance could be negatively affected. In any case, double check with the development team the maximum number of redirects the web server can handle without issues.
In any case, there are some standard redirect rules that should be in place to avoid generating duplicate content issues:
URL case: All URLs containing upper-case characters should be 301 redirected to all lower-case URLs, e.g. https://www.website.com/Page/ should be automatically redirecting to https://www.website.com/page/
Host: For instance, all non-www URLs should be 301 redirected to their www equivalent, e.g. https://website.com/page/ should be redirected to https://www.website.com/page/
Protocol: On a secure website, requests for HTTP URLs should be redirected to the equivalent HTTPS URL, e.g. http://www.website.com/page/ should automatically redirect to https://www.website.com/page/
Trailing slash: For instance, any URLs not containing a trailing slash should redirect to a version with a trailing slash, e.g. http://www.website.com/page should redirect to http://www.website.com/page/
Even if some of these standard redirect rules exist on the legacy website, do not assume they’ll necessarily exist on the new site unless they’re explicitly requested.
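To make the four rules concrete, here is a sketch of the normalization logic the web server should enforce; any requested URL that differs from its normalized form would be 301 redirected to it:

from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Apply the blanket rules: HTTPS, www host, lower-case path, trailing slash."""
    scheme, host, path, query, fragment = urlsplit(url)
    scheme = "https"
    if not host.startswith("www."):
        host = "www." + host
    path = path.lower()
    if not path.endswith("/"):
        path += "/"
    return urlunsplit((scheme, host, path, query, fragment))

print(normalize("http://website.com/Page"))  # https://www.website.com/page/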
Avoid internal redirects
Try updating the site’s internal links so they don’t trigger internal redirects. Even though search engines can follow internal redirects, these are not recommended because they add additional latency to page loading times and could also have a negative impact on search engine crawl time.
Don’t forget your image files
If the site’s images have moved to a new location, Google recommends redirecting the old image URLs to the new image URLs to help Google discover and index the new images quicker. If it’s not easy to redirect all images, aim to redirect at least those image URLs that have accrued backlinks.
Phase 3: Pre-launch testing
The earlier you can start testing, the better. Certain things need to be fully implemented to be tested, but others don't. For example, user journey issues could be identified from as early as the prototypes or wireframes design. Content-related issues between the old and new site, or content inconsistencies (e.g. between the desktop and mobile site), could also be identified at an early stage. But the more technical components, such as redirects, canonical tags, or XML sitemaps, should only be tested once fully implemented. The earlier issues get identified, the more likely it is that they'll be addressed before launching the new site. Identifying certain types of issues at a later stage isn't cost effective, would require more resources, and would cause significant delays. Poor testing, and not allowing the time required to thoroughly test all the building blocks that can affect SEO and UX performance, can have disastrous consequences soon after the new site has gone live.
Making sure search engines cannot access the staging/test site
Before making the new site available on a staging/testing environment, take precautions so that search engines do not index it. There are a few different ways to do this, each with different pros and cons.
Site available to specific IPs (most recommended)
Making the test site available only to specific (whitelisted) IP addresses is a very effective way to prevent search engines from crawling it. Anyone trying to access the test site’s URL won’t be able to see any content unless their IP has been whitelisted. The main advantage is that whitelisted users could easily access and crawl the site without any issues. The only downside is that third-party web-based tools (such as Google’s tools) cannot be used because of the IP restrictions.
Password protection
Password protecting the staging/test site is another way to keep search engine crawlers away, but this solution has two main downsides. Depending on the implementation, it may not be possible to crawl and test a password-protected website if the crawler application doesn’t make it past the login screen. The other downside: password-protected websites that use forms for authentication can be crawled using third-party applications, but there is a risk of causing severe and unexpected issues. This is because the crawler clicks on every link on a page (when you’re logged in) and could easily end up clicking on links that create or remove pages, install/uninstall plugins, etc.
Robots.txt blocking
Adding the following lines of code to the test site’s robots.txt file will prevent search engines from crawling the test site’s pages.
User-agent: *
Disallow: /
One downside of this method is that even though the content that appears on the test server won’t get indexed, the disallowed URLs may appear on Google’s search results. Another downside is that if the above robots.txt file moves into the live site, it will cause severe de-indexing issues. This is something I’ve encountered numerous times and for this reason I wouldn’t recommend using this method to block search engines.
User journey review
If the site has been redesigned or restructured, chances are that the user journeys will be affected to some extent. Reviewing the user journeys as early as possible and well before the new site launches is difficult due to the lack of user data. However, an experienced UX professional will be able to flag any concerns that could have a negative impact on the site’s conversion rate. Because A/B testing at this stage is hardly ever possible, it might be worth carrying out some user testing and try to get some feedback from real users. Unfortunately, user experience issues can be some of the harder ones to address because they may require sitewide changes that take a lot of time and effort.
On full site overhauls, not all UX decisions can always be backed up by data and many decisions will have to be based on best practice, past experience, and “gut feeling,” hence getting UX/CRO experts involved as early as possible could pay dividends later.
Site architecture review
A site migration is often a great opportunity to improve the site architecture. In other words, you have a great chance to reorganize your keyword targeted content and maximize its search traffic potential. Carrying out extensive keyword research will help identify the best possible category and subcategory pages so that users and search engines can get to any page on the site within a few clicks — the fewer the better, so you don’t end up with a very deep taxonomy.
Identifying new keywords with decent traffic potential and mapping them into new landing pages can make a big difference to the site's organic traffic levels. On the other hand, enhancing the site architecture needs to be done thoughtfully. It could cause problems if, say, important pages move deeper into the new site architecture or there are too many similar pages optimized for the same keywords. Some of the most successful site migrations are the ones that allocate significant resources to enhance the site architecture.
Meta data & copy review
Make sure that the site’s page titles, meta descriptions, headings, and copy have been transferred from the old to the new site without issues. If you’ve created any new pages, make sure these are optimized and don’t target keywords that have already been targeted by other pages. If you’re re-platforming, be aware that the new platform may have different default values when new pages are being created. Launching the new site without properly optimized page titles or any kind of missing copy will have an immediate negative impact on your site’s rankings and traffic. Do not forget to review whether any user-generated content (i.e. user reviews, comments) has also been uploaded.
Internal linking review
Internal links are the backbone of a website. No matter how well optimized and structured the site’s copy is, it won’t be sufficient to succeed unless it’s supported by a flawless internal linking scheme. Internal links must be reviewed throughout the entire site, including links found in:
Main & secondary navigation
Header & footer links
Body content links
Pagination links
Horizontal links (related articles, similar products, etc)
Vertical links (e.g. breadcrumb navigation)
Cross-site links (e.g. links across international sites)
Technical checks
A series of technical checks must be carried out to make sure the new site’s technical setup is sound and to avoid coming across major technical glitches after the new site has gone live.
Robots.txt file review
Prepare the new site’s robots.txt file on the staging environment. This way you can test it for errors or omissions and avoid experiencing search engine crawl issues when the new site goes live. A classic mistake in site migrations is when the robots.txt file prevents search engine access using the following directive:
Disallow: /
If this gets accidentally carried over into the live site (and it often does), it will prevent search engines from crawling the site. And when search engines cannot crawl an indexed page, the keywords associated with the page will get demoted in the search results and eventually the page will get de-indexed.
But if the robots.txt file on staging is populated with the new site’s robots.txt directives, this mishap could be avoided.
When preparing the new site’s robots.txt file, make sure that:
It doesn’t block search engine access to pages that are intended to get indexed.
It doesn’t block any JavaScript or CSS resources search engines require to render page content.
The legacy site’s robots.txt file content has been reviewed and carried over if necessary.
It references the new XML sitemap(s) rather than any legacy ones that no longer exist.
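For illustration, a minimal robots.txt that satisfies the above checks might look like the below; the disallowed paths and sitemap URL are placeholders:

User-agent: *
Disallow: /basket/
Disallow: /checkout/
Sitemap: https://www.website.com/sitemap.xml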
Canonical tags review
Review the site's canonical tags. Look for pages that either do not have a canonical tag or have a canonical tag that is pointing to another URL, and question whether this is intended. Don't forget to crawl the canonical tags to find out whether they return a 200 server response. If they don't, you will need to update them to eliminate any 3xx, 4xx, or 5xx server responses. You should also look for pages that have a canonical tag pointing to another URL combined with a noindex directive, because these two are conflicting signals and you'll need to eliminate one of them.
Meta robots review
Once you’ve crawled the staging site, look for pages with the meta robots properties set to “noindex” or “nofollow.” If this is the case, review each one of them to make sure this is intentional and remove the “noindex” or “nofollow” directive if it isn’t.
XML sitemaps review
Prepare two different types of sitemaps: one that contains all the new site’s indexable pages, and another that includes all the old site’s indexable pages. The former will help make Google aware of the new site’s indexable URLs. The latter will help Google become aware of the redirects that are in place and the fact that some of the indexed URLs have moved to new locations, so that it can discover them and update search results quicker.
You should check each XML sitemap to make sure that:
It validates without issues
It is encoded as UTF-8
It does not contain more than 50,000 rows
Its size does not exceed 50MBs when uncompressed
If there are more than 50K rows or the file size exceeds 50MB, you must break the sitemap down into smaller ones. This prevents the server from becoming overloaded if Google requests the sitemap too frequently.
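Splitting is simple to script. A sketch using only Python's standard library that writes a list of indexable URLs into sitemap files of at most 50,000 URLs each (the example URLs are placeholders):

from xml.etree.ElementTree import Element, SubElement, ElementTree

def write_sitemaps(urls, max_urls=50000):
    """Write urls into sitemap-1.xml, sitemap-2.xml, ... capped at max_urls each."""
    for i in range(0, len(urls), max_urls):
        urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
        for url in urls[i:i + max_urls]:
            SubElement(SubElement(urlset, "url"), "loc").text = url
        ElementTree(urlset).write(f"sitemap-{i // max_urls + 1}.xml",
                                  encoding="UTF-8", xml_declaration=True)

# 120,000 placeholder pages -> 3 sitemap files
write_sitemaps([f"https://www.website.com/page-{n}/" for n in range(120000)])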
In addition, you must crawl each XML sitemap to make sure it only includes indexable URLs. Any non-indexable URLs should be excluded from the XML sitemaps, such as:
3xx, 4xx, and 5xx pages (e.g. redirected, not found pages, bad requests, etc)
Soft 404s. These are pages with no content that return a 200 server response, instead of a 404.
Canonicalized pages (apart from self-referring canonical URLs)
Pages with a meta robots noindex directive
<!DOCTYPE html>
<html>
<head>
<meta name="robots" content="noindex" />
(…)
</head>
<body>(…)</body>
</html>
Pages with a noindex X-Robots-Tag in the HTTP header
HTTP/1.1 200 OK
Date: Tue, 10 Nov 2017 17:12:43 GMT
(…)
X-Robots-Tag: noindex
(…)
Pages blocked from the robots.txt file
Building clean XML sitemaps can help monitor the true indexing levels of the new site once it goes live. If you don’t, it will be very difficult to spot any indexing issues.
Pro tip: Download and open each XML sitemap in Excel to get a detailed overview of any additional attributes, such as hreflang or image attributes.
HTML sitemap review
Depending on the size and type of site that is being migrated, having an HTML sitemap can in certain cases be beneficial. An HTML sitemap that consists of URLs that aren’t linked from the site’s main navigation can significantly boost page discovery and indexing. However, avoid generating an HTML sitemap that includes too many URLs. If you do need to include thousands of URLs, consider building a segmented HTML sitemap.
The number of nested sitemaps as well as the maximum number of URLs you should include in each sitemap depends on the site’s authority. The more authoritative a website, the higher the number of nested sitemaps and URLs it could get away with.
For example, the NYTimes.com HTML sitemap consists of three levels, where each one includes over 1,000 URLs per sitemap. These nested HTML sitemaps aid search engine crawlers in discovering articles published since 1851 that otherwise would be difficult to discover and index, as not all of them would have been internally linked.
The NYTimes HTML sitemap (level 1)
The NYTimes HTML sitemap (level 2)
Structured data review
Errors in the structured data markup need to be identified early so there’s time to fix them before the new site goes live. Ideally, you should test every single page template (rather than every single page) using Google’s Structured Data Testing tool.
Be sure to check the markup on both the desktop and mobile pages, especially if the mobile website isn’t responsive.
The tool will only report any existing errors but not omissions. For example, if your product page template does not include the Product structured data schema, the tool won’t report any errors. So, in addition to checking for errors you should also make sure that each page template includes the appropriate structured data markup for its content type.
Please refer to Google’s documentation for the most up to date details on the structured data implementation and supported content types.
JavaScript crawling review
You must test every single page template of the new site to make sure Google will be able to crawl content that requires JavaScript parsing. If you're able to use Google's Fetch and Render tool on your staging site, you should definitely do so. Otherwise, carry out some manual tests, following Justin Briggs' advice.
As Bartosz Góralewicz's tests proved, even if Google is able to crawl and index JavaScript-generated content, it does not mean that it is able to crawl JavaScript content across all major JavaScript frameworks. Bartosz's findings showed that some JavaScript frameworks are not SEO-friendly, with AngularJS currently being the most problematic of all.
Bartosz also found that other search engines (such as Bing, Yandex, and Baidu) really struggle with indexing JavaScript-generated content, which is important to know if your site’s traffic relies on any of these search engines.
Hopefully, this is something that will improve over time, but with the increasing popularity of JavaScript frameworks in web development, this must be high up on your checklist.
Finally, you should check whether any external resources are being blocked. Unfortunately, this isn’t something you can control 100% because many resources (such as JavaScript and CSS files) are hosted by third-party websites which may be blocking them via their own robots.txt files!
Again, the Fetch and Render tool can help diagnose this type of issue that, if left unresolved, could have a significant negative impact.
Mobile site SEO review
Assets blocking review
First, make sure that the robots.txt file isn’t accidentally blocking any JavaScript, CSS, or image files that are essential for the mobile site’s content to render. This could have a negative impact on how search engines render and index the mobile site’s page content, which in turn could negatively affect the mobile site’s search visibility and performance.
Mobile-first index review
In order to avoid any issues associated with Google's mobile-first index, thoroughly review the mobile website and make sure there aren't any inconsistencies between the desktop and mobile sites in the following areas:
Page titles
Meta descriptions
Headings
Copy
Canonical tags
Meta robots attributes (i.e. noindex, nofollow)
Internal links
Structured data
A responsive website should serve the same content, links, and markup across devices, and the above SEO attributes should be identical across the desktop and mobile websites.
In addition to the above, you must carry out a few further technical checks depending on the mobile site’s set up.
Responsive site review
A responsive website must serve all devices the same HTML code, which is adjusted (via the use of CSS) depending on the screen size.
Googlebot is able to automatically detect this mobile setup as long as it’s allowed to crawl the page and its assets. It’s therefore extremely important to make sure that Googlebot can access all essential assets, such as images, JavaScript, and CSS files.
To signal to browsers that a page is responsive, a meta viewport tag should be in place within the <head> of each HTML page.
<meta name="viewport" content="width=device-width, initial-scale=1.0">
If the meta viewport tag is missing, font sizes may appear in an inconsistent manner, which may cause Google to treat the page as not mobile-friendly.
Separate mobile URLs review
If the mobile website uses separate URLs from desktop, make sure that:
Each desktop page has a rel="alternate" tag pointing to the corresponding mobile URL.
Each mobile page has a rel="canonical" tag pointing to the corresponding desktop URL.
When desktop URLs are requested on mobile devices, they’re redirected to the respective mobile URL.
Redirects work across all mobile devices, including Android, iPhone, and Windows phones.
There aren't any irrelevant cross-links between the desktop and mobile pages. This means that internal links found on a desktop page should only link to desktop pages, and those found on a mobile page should only link to other mobile pages.
The mobile URLs return a 200 server response.
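For reference, the standard annotations look like the below (URLs are placeholders): the desktop page declares its mobile counterpart via rel="alternate", and the mobile page canonicalizes back to the desktop URL.

On the desktop page:
<link rel="alternate" media="only screen and (max-width: 640px)" href="https://m.website.com/page/">

On the mobile page:
<link rel="canonical" href="https://www.website.com/page/">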
Dynamic serving review
Dynamic serving websites serve different code to each device, but on the same URL.
On dynamic serving websites, review whether the Vary HTTP header has been correctly set up. This is necessary because dynamic serving websites alter the HTML for mobile user agents and the Vary HTTP header helps Googlebot discover the mobile content.
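For example, a dynamically served page would typically return a response header along these lines (illustrative response):

HTTP/1.1 200 OK
Content-Type: text/html
Vary: User-Agent
(…)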
Mobile-friendliness review
Regardless of the mobile site set-up (responsive, separate URLs or dynamic serving), review the pages using a mobile user-agent and make sure that:
The viewport has been set correctly. Using a fixed width viewport across devices will cause mobile usability issues.
The font size isn’t too small.
Touch elements (i.e. buttons, links) aren’t too close.
There aren't any intrusive interstitials, such as ads, mailing list sign-up forms, or app download pop-ups. To avoid any issues, use a small HTML or image banner instead.
Mobile pages aren’t too slow to load (see next section).
Google’s mobile-friendly test tool can help diagnose most of the above issues:
Google’s mobile-friendly test tool in action
AMP site review
If there is an AMP website and a desktop version of the site is available, make sure that:
Each non-AMP page (i.e. desktop, mobile) has a rel="amphtml" tag pointing to the corresponding AMP URL.
Each AMP page has a rel="canonical" tag pointing to the corresponding desktop page.
Any AMP page that does not have a corresponding desktop URL has a self-referring canonical tag.
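For reference, the two annotations look like the below (URLs are placeholders):

On the non-AMP page:
<link rel="amphtml" href="https://www.website.com/page/amp/">

On the AMP page:
<link rel="canonical" href="https://www.website.com/page/">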
You should also make sure that the AMPs are valid. This can be tested using Google’s AMP Test Tool.
Mixed content errors
With Google pushing hard for sites to be fully secure and Chrome becoming the first browser to flag HTTP pages as not secure, aim to launch the new site on HTTPS, making sure all resources such as images, CSS, and JavaScript files are requested over secure HTTPS connections. This is essential in order to avoid mixed content issues.
Mixed content occurs when a page that’s loaded over a secure HTTPS connection requests assets over insecure HTTP connections. Most browsers either block dangerous HTTP requests or just display warnings that hinder the user experience.
Mixed content errors in Chrome’s JavaScript Console
There are many ways to identify mixed content errors, including the use of crawler applications, Google’s Lighthouse, etc.
Image assets review
Google crawls images less frequently than HTML pages. If migrating a site's images from one location to another (e.g. from your domain to a CDN), there are ways to aid Google in discovering the migrated images quicker. Building an image XML sitemap will help, but you also need to make sure that Googlebot can reach the site's images when crawling the site. The tricky part with image indexing is that both the web page where an image appears and the image file itself have to get indexed.
Site performance review
Last but not least, measure the old site’s page loading times and see how these compare with the new site’s when this becomes available on staging. At this stage, focus on the network-independent aspects of performance such as the use of external resources (images, JavaScript, and CSS), the HTML code, and the web server’s configuration. More information about how to do this is available further down.
Analytics tracking review
Make sure that analytics tracking is properly set up. This review should ideally be carried out by specialist analytics consultants who will look beyond the implementation of the tracking code. Make sure that Goals and Events are properly set up, e-commerce tracking is implemented, enhanced e-commerce tracking is enabled, etc. There’s nothing more frustrating than having no analytics data after your new site is launched.
Redirects testing
Testing the redirects before the new site goes live is critical and can save you a lot of trouble later. There are many ways to check the redirects on a staging/test server, but the bottom line is that you should not launch the new website without having tested the redirects.
Once the redirects become available on the staging/testing environment, crawl the entire list of redirects and check for the following issues:
Redirect loops (a URL that infinitely redirects to itself)
Redirects with a 4xx or 5xx server response.
Redirect chains (a URL that redirects to another URL, which in turn redirects to another URL, etc).
Canonical URLs that return a 4xx or 5xx server response.
Canonical loops (page A has a canonical pointing to page B, which has a canonical pointing to page A).
Canonical chains (a canonical that points to another page that has a canonical pointing to another page, etc).
Protocol/host inconsistencies e.g. URLs are redirected to both HTTP and HTTPS URLs or www and non-www URLs.
Leading/trailing whitespace characters. Use TRIM() in Excel to eliminate them.
Invalid characters in URLs.
Pro tip: Make sure each of the old site’s URLs redirects to the correct URL on the new site. At this stage, because the new site doesn’t exist yet, you can only test whether the redirect destination URL is the intended one, but it’s definitely worth it. The fact that a URL redirects does not mean it redirects to the right page.
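Many of the redirect checks above are easy to script. A minimal sketch that walks each redirect hop by hop and flags loops, chains, and wrong destinations (the redirect map is a placeholder; a real test would load the full mapping file):

```python
import requests

# Placeholder redirect map: old URL -> expected destination on the new site.
REDIRECT_MAP = {
    "https://www.example.com/old-page/": "https://www.example.com/new-page/",
}

MAX_HOPS = 5  # anything longer than one hop is worth flagging as a chain

for old_url, expected in REDIRECT_MAP.items():
    seen = [old_url]
    url = old_url
    for _ in range(MAX_HOPS):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        if resp.status_code not in (301, 302, 303, 307, 308):
            break
        url = requests.compat.urljoin(url, resp.headers["Location"])
        if url in seen:
            print(f"LOOP: {old_url} redirects back to {url}")
            break
        seen.append(url)
    hops = seen[1:]
    if not hops:
        print(f"NO REDIRECT: {old_url} returned {resp.status_code}")
    elif len(hops) > 1:
        print(f"CHAIN ({len(hops)} hops): {' -> '.join(seen)}")
    elif hops[-1] != expected:
        print(f"WRONG TARGET: {old_url} ends at {hops[-1]}, expected {expected}")
```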
Phase 4: Launch day activities
When the site is down...
While the new site is replacing the old one, chances are that the live site is going to be temporarily down. The downtime should be kept to a minimum, but while this happens the web server should respond to any URL request with a 503 (service unavailable) server response. This will tell search engine crawlers that the site is temporarily down for maintenance so they come back to crawl the site later.
If the site is down for too long without serving a 503 server response and search engines crawl the website, organic search visibility will be negatively affected and recovery won’t be instant once the site is back up. In addition, while the website is temporarily down it should also serve an informative holding page notifying users that the website is temporarily down for maintenance.
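In practice the 503 would normally be configured at the web server or CDN level, but as an illustration of the response shape, here is a minimal sketch using Python’s standard library that serves every request with a 503 and a Retry-After hint:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

HOLDING_PAGE = (b"<html><body><h1>Down for maintenance</h1>"
                b"<p>We'll be back shortly.</p></body></html>")

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 503 tells crawlers the outage is temporary; Retry-After hints
        # at when they should come back (here: one hour).
        self.send_response(503)
        self.send_header("Retry-After", "3600")
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(HOLDING_PAGE)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MaintenanceHandler).serve_forever()
```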
Technical spot checks
As soon as the new site has gone live, take a quick look at:
The robots.txt file to make sure search engines are not blocked from crawling
Top pages redirects (e.g. do requests for the old site’s top pages redirect correctly?)
Top pages canonical tags
Top pages server responses
Noindex/nofollow directives, in case they are unintentional
The spot checks need to be carried out across both the mobile and desktop sites, unless the site is fully responsive.
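These spot checks are quick to automate. A rough sketch (the site URL and page paths are placeholders, and the noindex detection is a crude string match rather than a proper parse):

```python
import requests

SITE = "https://www.example.com"  # placeholder
TOP_PAGES = ["/", "/category/", "/product-x/"]  # placeholder top pages

# 1. robots.txt shouldn't block the whole site.
robots = requests.get(f"{SITE}/robots.txt", timeout=10).text
if any(line.strip().lower() == "disallow: /" for line in robots.splitlines()):
    print("WARNING: robots.txt appears to block the entire site")

# 2. Top pages: server response and a crude noindex check.
for path in TOP_PAGES:
    resp = requests.get(SITE + path, timeout=10)
    noindex = "noindex" in resp.text.lower()  # crude string check
    print(f"{path}: HTTP {resp.status_code}{' (noindex found!)' if noindex else ''}")
```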
Search Console actions
The following activities should take place as soon as the new website has gone live:
Test & upload the XML sitemap(s)
Set the Preferred location of the domain (www or non-www)
Set the International targeting (if applicable)
Configure the URL parameters to tackle any potential duplicate content issues early.
Upload the Disavow file (if applicable)
Use the Change of Address tool (if switching domains)
Pro tip: Use the “Fetch as Google” feature for each different type of page (e.g. the homepage, a category, a subcategory, a product page) to make sure Googlebot can render the pages without any issues. Review any reported blocked resources and do not forget to use Fetch and Render for desktop and mobile, especially if the mobile website isn’t responsive.
Blocked resources prevent Googlebot from rendering the content of the page
Phase 5: Post-launch review
Once the new site has gone live, a new round of in-depth checks should be carried out. These are largely the same ones as those mentioned in the “Phase 3: Pre-launch Testing” section.
However, the main difference during this phase is that you now have access to a lot more data and tools. Don’t underestimate the amount of effort you’ll need to put in during this phase, because any issues you encounter now directly impact the site’s performance in the SERPs. On the other hand, the sooner an issue gets identified, the quicker it will get resolved.
In addition to repeating the same testing tasks that were outlined in the Phase 3 section, in certain areas things can be tested more thoroughly, accurately, and in greater detail. You can now take full advantage of the Search Console features.
Check crawl stats and server logs
Keep an eye on the crawl stats available in Search Console to make sure Google is crawling the new site’s pages. In general, when Googlebot comes across new pages it tends to increase the average number of pages it crawls per day. If you can’t spot a spike around the launch date, something may be negatively affecting Googlebot’s ability to crawl the site.
Crawl stats on Google’s Search Console
Reviewing the server log files is by far the most effective way to spot any crawl issues or inefficiencies. Tools like Botify and OnCrawl can be extremely useful because they combine crawls with server log data and can highlight pages search engines do not crawl, pages that are not linked to internally (orphan pages), low-value pages that are heavily internally linked, and a lot more.
Review crawl errors regularly
Keep an eye on the reported crawl errors, ideally daily during the first few weeks. Downloading these errors daily, crawling the reported URLs, and taking the necessary actions (e.g. implementing additional 301 redirects, fixing soft 404 errors) will aid a quicker recovery. It’s highly unlikely you will need to redirect every single 404 that is reported, but you should add redirects for the most important ones.
Pro tip: In Google Analytics you can easily find out which are the most commonly requested 404 URLs and fix these first!
Other useful Search Console features
Other Search Console features worth checking include the Blocked Resources, Structured Data errors, Mobile Usability errors, HTML Improvements, and International Targeting (to check for hreflang reported errors).
Pro tip: Keep a close eye on the URL parameters in case they’re causing duplicate content issues. If this is the case, consider taking some urgent remedial action.
Measuring site speed
Once the new site is live, measure site speed to make sure the site’s pages are loading fast enough on both desktop and mobile devices. With site speed being a ranking signal across devices, and because slow pages lose users and customers, comparing the new site’s speed with the old site’s is extremely important. If the new site’s page loading times appear to be higher, take immediate action; otherwise your site’s traffic and conversions will almost certainly take a hit.
Evaluating speed using Google’s tools
Two tools that can help with this are Google’s Lighthouse and Pagespeed Insights.
The PageSpeed Insights tool measures page performance on both mobile and desktop devices and shows real-world page speed data based on user data Google collects from Chrome. It also checks whether a page has applied common performance best practices and provides an optimization score. The tool includes the following main categories:
Speed score: Categorizes a page as Fast, Average, or Slow using two metrics: The First Contentful Paint (FCP) and DOM Content Loaded (DCL). A page is considered fast if both metrics are in the top one-third of their category.
Optimization score: Categorizes a page as Good, Medium, or Low based on performance headroom.
Page load distributions: Categorizes a page as Fast (fastest third), Average (middle third), or Slow (bottom third) by comparing against all FCP and DCL events in the Chrome User Experience Report.
Page stats: Can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
Optimization suggestions: A list of best practices that could be applied to a page.
Google’s PageSpeed Insights in action
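The tool can also be queried programmatically. A minimal sketch against what is, at the time of writing, the public PageSpeed Insights API endpoint; treat the endpoint and response fields as assumptions to verify against Google’s current API documentation:

```python
import requests

# Endpoint and response fields are assumptions based on the public
# PageSpeed Insights API (v5); verify against Google's current docs.
API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {
    "url": "https://www.example.com/",  # placeholder page to test
    "strategy": "mobile",               # or "desktop"
}

data = requests.get(API, params=params, timeout=60).json()

# The Lighthouse performance score (0 to 1) sits under lighthouseResult.
score = data["lighthouseResult"]["categories"]["performance"]["score"]
print("Mobile performance score:", round(score * 100))
```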
Google’s Lighthouse is very handy for mobile performance, accessibility, and Progressive Web Apps audits. It provides various useful metrics that can be used to measure page performance on mobile devices, such as:
First Meaningful Paint, which measures when the primary content of a page is visible.
Time to Interactive, the point at which the page is ready for a user to interact with.
Speed Index, which shows how quickly a page is visibly populated.
Both tools provide recommendations to help improve any reported site performance issues.
Google’s Lighthouse in action
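Lighthouse can also be run from the command line via its npm package, which makes it easy to script performance checks. A sketch assuming the CLI is installed; the audit keys are assumptions that may vary between Lighthouse versions:

```python
import json
import subprocess

URL = "https://www.example.com/"  # placeholder page to audit

# Assumes the Lighthouse CLI is installed: npm install -g lighthouse
subprocess.run(
    ["lighthouse", URL, "--output=json", "--output-path=report.json",
     "--chrome-flags=--headless"],
    check=True,
)

with open("report.json", encoding="utf-8") as f:
    report = json.load(f)

# Audit keys vary between Lighthouse versions; these are assumptions.
for audit_id in ("first-meaningful-paint", "interactive", "speed-index"):
    audit = report.get("audits", {}).get(audit_id, {})
    print(audit_id, "->", audit.get("displayValue", "n/a"))
```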
You can also use this Google tool to get a rough estimate of the percentage of users you may be losing from your mobile site’s pages due to slow page loading times.
The same tool also provides an industry comparison so you get an idea of how far you are from the top performing sites in your industry.
Measuring speed from real users
Once the site has gone live, you can start evaluating site speed based on the users visiting your site. If you have Google Analytics, you can easily compare the new site’s average load time with the previous one.
In addition, if you have access to a Real User Monitoring tool such as Pingdom, you can evaluate site speed based on the users visiting your website. The below map illustrates how different visitors experience very different loading times depending on their geographic location. In the below example, the page loading times appear to be satisfactory to visitors from the UK, US, and Germany, but to users residing in other countries they are much higher.
Phase 6: Measuring site migration performance
When to measure
Has the site migration been successful? This is the million-dollar question everyone involved would like to know the answer to as soon as the new site goes live. In reality, the longer you wait the clearer the answer becomes, as visibility during the first few weeks or even months can be very volatile depending on the size and authority of your site. For smaller sites, a 4–6 week period should be sufficient before comparing the new site’s visibility with the old site’s. For large websites you may have to wait for at least 2–3 months before measuring.
In addition, if the new site is significantly different from the previous one, users will need some time to get used to the new look and feel and acclimatize themselves to the new taxonomy, user journeys, etc. Such changes can initially have a significant negative impact on the site’s conversion rate, which should improve after a few weeks as returning visitors get used to the new site. In any case, making data-driven conclusions about the new site’s UX can be risky.
But these are just general rules of thumb and need to be taken into consideration along with other factors. For instance, if a few days or weeks after the new site launch significant additional changes were made (e.g. to address a technical issue), the migration’s evaluation should be pushed further back.
How to measure
Performance measurement is very important; even though business stakeholders may only be interested in hearing about the revenue and traffic impact, there are a whole lot of other metrics you should pay attention to. For example, there can be several reasons for revenue going down following a site migration, including seasonal trends, lower brand interest, UX issues that have significantly lowered the site’s conversion rate, poor mobile performance, poor page loading times, etc. So, in addition to the organic traffic and revenue figures, also pay attention to the following:
Desktop & mobile visibility (from SearchMetrics, SEMrush, Sistrix)
Desktop and mobile rankings (from any reliable rank tracking tool)
User engagement (bounce rate, average time on page)
Sessions per page type (i.e. are the category pages driving as many sessions as before?)
Conversion rate per page type (i.e. are the product pages converting the same way as before?)
Conversion rate by device (i.e. has the desktop/mobile conversion rate increased/decreased since launching the new site?)
Reviewing the below could also be very handy, especially from a technical troubleshooting perspective:
Number of indexed pages (Search Console)
Submitted vs indexed pages in XML sitemaps (Search Console)
Pages receiving at least one visit (analytics)
Site speed (PageSpeed Insights, Lighthouse, Google Analytics)
It’s only after you’ve looked into all the above areas that you can safely conclude whether your migration has been successful or not.
Good luck and if you need any consultation or assistance with your site migration, please get in touch!
Appendix: Useful tools
Crawlers
Screaming Frog: The SEO Swiss army knife, ideal for crawling small- and medium-sized websites.
Sitebulb: Very intuitive crawler application with a neat user interface, nicely organized reports, and many useful data visualizations.
Deep Crawl: Cloud-based crawler with the ability to crawl staging sites and make crawl comparisons. Allows for comparisons between different crawls and copes well with large websites.
Botify: Another powerful cloud-based crawler supported by exceptional server log file analysis capabilities that can be very insightful in terms of understanding how search engines crawl the site.
OnCrawl: Crawler and server log analyzer for enterprise SEO audits with many handy features to identify crawl budget, content quality, and performance issues.
Handy Chrome add-ons
Web developer: A collection of developer tools including easy ways to enable/disable JavaScript, CSS, images, etc.
User agent switcher: Switch between different user agents including Googlebot, mobile, and other agents.
Ayima Redirect Path: A great header and redirect checker.
SEO Meta in 1 click: An on-page meta attributes, headers, and links inspector.
Scraper: An easy way to scrape website data into a spreadsheet.
Site monitoring tools
Uptime Robot: Free website uptime monitoring.
Robotto: Free robots.txt monitoring tool.
Pingdom tools: Monitors site uptime and page speed from real users (RUM service)
SEO Radar: Monitors all critical SEO elements and fires alerts when these change.
Site performance tools
PageSpeed Insights: Measures page performance for mobile and desktop devices. It checks to see if a page has applied common performance best practices and provides a score, which ranges from 0 to 100 points.
Lighthouse: Handy Chrome extension for performance, accessibility, and Progressive Web Apps audits. Can also be run from the command line, or as a Node module.
Webpagetest.org: Very detailed page tests from various locations, connections, and devices, including detailed waterfall charts.
Structured data testing tools
Google’s structured data testing tool & Google’s structured data testing tool Chrome extension
Bing’s markup validator
Yandex structured data testing tool
Google’s rich results testing tool
Mobile testing tools
Google’s mobile-friendly testing tool
Google’s AMP testing tool
AMP validator tool
Backlink data sources
Ahrefs
Majestic SEO
The Website Migration Guide: SEO Strategy & Process
Posted by Modestos
What is a site migration?
A site migration is a term broadly used by SEO professionals to describe any event whereby a website undergoes substantial changes in areas that can significantly affect search engine visibility — typically substantial changes to the site structure, content, coding, site performance, or UX.
Google’s documentation on site migrations doesn’t cover them in great depth and downplays the fact that so often they result in significant traffic and revenue loss, which can last from a few weeks to several months — depending on the extent search engine ranking signals have been affected, as well as how long it may take the affected business to rollout a successful recovery plan.
Quick access links
Site migration examples
Site migration types
Common site migration pitfalls
Site migration process
1. Scope & planning
2. Pre-launch preparation
3. Pre-launch testing
4. Launch day actions
5. Post-launch testing
6. Performance review
Appendix: Useful tools
Site migration examples
The following section discusses how both successful and unsuccessful site migrations look and explains why it is 100% possible to come out of a site migration without suffering significant losses.
Debunking the “expected traffic drop” myth
Anyone who has been involved with a site migration has probably heard the widespread theory that it will result in de facto traffic and revenue loss. Even though this assertion holds some truth for some very specific cases (i.e. moving from an established domain to a brand new one) it shouldn’t be treated as gospel. It is entirely possible to migrate without losing any traffic or revenue; you can even enjoy significant growth right after launching a revamped website. However, this can only be achieved if every single step has been well-planned and executed.
Examples of unsuccessful site migrations
The following graph illustrates a big UK retailer’s botched site migration where the website lost 35% of its visibility two weeks after switching from HTTP to HTTPS. It took them about six months to fully recover, which must have had a significant impact on revenue from organic search. This is a typical example of a poor site migration, possibly caused by poor planning or implementation.
Example of a poor site migration — recovery took 6 months!
But recovery may not always be possible. The below visibility graph is from another big UK retailer, where the HTTP to HTTPS switchover resulted in a permanent 20% visibility loss.
Another example of a poor site migration — no signs of recovery 6 months on!
In fact, it is entirely possible to migrate from HTTP to HTTPS without losing that much traffic for such a long period, aside from the first few weeks, where there is high volatility as Google discovers the new URLs and updates search results.
Examples of successful site migrations
What does a successful site migration look like? This largely depends on the site migration type, the objectives, and the KPIs (more details later). But in most cases, a successful site migration shows at least one of the following characteristics:
Minimal visibility loss during the first few weeks (short-term goal)
Visibility growth thereafter — depending on the type of migration (long-term goal)
The following visibility report is taken from an HTTP to HTTPS site migration, which was also accompanied by significant improvements to the site’s page loading times.
The following visibility report is from a complete site overhaul, which I was fortunate to be involved with several months in advance and supported during the strategy, planning, and testing phases, all of which were equally important.
As commonly occurs on site migration projects, the launch date had to be pushed back a few times due to the risks of launching the new site prematurely and before major technical obstacles were fully addressed. But as you can see on the below visibility graph, the wait was well worth it. Organic visibility not only didn’t drop (as most would normally expect) but in fact started growing from the first week.
Visibility growth one month after the migration reached 60%, whilst organic traffic growth two months post-launch exceeded 80%.
Example of a very successful site migration — instant growth following new site launch!
This was a rather complex migration as the new website was re-designed and built from scratch on a new platform with an improved site taxonomy that included new landing pages, an updated URL structure, lots of redirects to preserve link equity, plus a switchover from HTTP to HTTPS.
In general, introducing too many changes at the same time can be tricky because if something goes wrong, you’ll struggle to figure out what exactly is at fault. But at the same time, leaving major changes for a later time isn’t ideal either as it will require more resources. If you know what you’re doing, making multiple positive changes at once can be very cost-effective.
Before getting into the nitty-gritty of how you can turn a complex site migration project into a success, it’s important to run through the main site migration types as well as explain the main reason so many site migrations fail.
Site migration types
There are many site migration types. It all depends on the nature of the changes that take place to the legacy website.
Google’s documentation mostly covers migrations with site location changes, which are categorised as follows:
Site moves with URL changes
Site moves without URL changes
Site move migrations
These typically occur when a site moves to a different URL due to any of the below:
Protocol change
A classic example is when migrating from HTTP to HTTPS.
Subdomain or subfolder change
Very common in international SEO where a business decides to move one or more ccTLDs into subdomains or subfolders. Another common example is where a mobile site that sits on a separate subdomain or subfolder becomes responsive and the desktop and mobile URLs are unified.
Domain name change
Commonly occurs when a business is rebranding and must move from one domain to another.
Top-level domain change
This is common when a business decides to launch international websites and needs to move from a ccTLD (country code top-level domain) to a gTLD (generic top-level domain) or vice versa, e.g. moving from .co.uk to .com, or moving from .com to .co.uk and so on.
Site structure changes
These are changes to the site architecture that usually affect the site’s internal linking and URL structure.
Other types of migrations
There are other types of migration which are triggered by changes to the site’s content, structure, design, or platform.
Replatforming
This is the case when a website is moved from one platform/CMS to another, e.g. migrating from WordPress to Magento or just upgrading to the latest platform version. Replatforming can, in some cases, also result in design and URL changes because of technical limitations that often occur when changing platforms. This is why replatforming migrations rarely result in a website that looks exactly the same as the previous one.
Content migrations
Major content changes such as content rewrites, content consolidation, or content pruning can have a big impact on a site’s organic search visibility, depending on the scale. These changes can often affect the site’s taxonomy, navigation, and internal linking.
Mobile setup changes
With so many options available for a site’s mobile setup, moving from one to another, enabling app indexing, building an AMP site, or building a PWA website can also be considered partial site migrations, especially when an existing mobile site is being replaced by an app, AMP, or PWA.
Structural changes
These are often caused by major changes to the site’s taxonomy that impact the site navigation, internal linking, and user journeys.
Site redesigns
These can vary from major design changes in the look and feel to a complete website revamp that may also include significant media, code, and copy changes.
Hybrid migrations
In addition to the above, there are several hybrid migration types that can be combined in practically any way possible. The more changes that get introduced at the same time the higher the complexity and the risks. Even though making too many changes at the same time increases the risks of something going wrong, it can be more cost-effective from a resources perspective if the migration is very well-planned and executed.
Common site migration pitfalls
Even though every site migration is different there are a few common themes behind the most typical site migration disasters, with the biggest being the following:
Poor strategy
Some site migrations are doomed to failure way before the new site is launched. A strategy that is built upon unclear and unrealistic objectives is much less likely to bring success.
Establishing measurable objectives is essential in order to measure the impact of the migration post-launch. For most site migrations, the primary objective should be the retention of the site’s current traffic and revenue levels. In certain cases the bar could be raised higher, but in general anticipating or forecasting growth should be a secondary objective. This will help avoid creating unrealistic expectations.
Poor planning
Coming up with a detailed project plan as early as possible will help avoid delays along the way. Factor in additional time and resources to cope with any unforeseen circumstances that may arise. No matter how well thought out and detailed your plan is, it’s highly unlikely everything will go as expected. Be flexible with your plan and accept the fact that there will almost certainly be delays. Map out all dependencies and make all stakeholders aware of them.
Avoid planning to launch the site near your seasonal peaks, because if anything goes wrong you won’t have enough time to rectify the issues. For instance, retailers should avoid launching a site close to September/October to avoid putting the busy pre-Christmas period at risk. In this case, it would be much wiser launching during the quieter summer months.
Lack of resources
Before committing to a site migration project, estimate the time and effort required to make it a success. If your budget is limited, make a call as to whether it is worth going ahead with a migration that is likely to fail in meeting its established objectives and cause revenue loss.
As a rule of thumb, try to include a buffer of at least 20% in additional resource than you initially think the project will require. This additional buffer will later allow you to quickly address any issues as soon as they arise, without jeopardizing success. If your resources are too tight or you start cutting corners at this early stage, the site migration will be at risk.
Lack of SEO/UX consultation
When changes are taking place on a website, every single decision needs to be weighed from both a UX and SEO standpoint. For instance, removing great amounts of content or links to improve UX may damage the site’s ability to target business-critical keywords or result in crawling and indexing issues. In either case, such changes could damage the site’s organic search visibility. On the other hand, having too much text copy and few images may have a negative impact on user engagement and damage the site’s conversions.
To avoid risks, appoint experienced SEO and UX consultants so they can discuss the potential consequences of every single change with key business stakeholders who understand the business intricacies better than anyone else. The pros and cons of each option need to be weighed before making any decision.
Late involvement
Site migrations can span several months, require great planning and enough time for testing. Seeking professional support late is very risky because crucial steps may have been missed.
Lack of testing
In addition to a great strategy and thoughtful plan, dedicate some time and effort for thorough testing before launching the site. It’s much more preferable to delay the launch if testing has identified critical issues rather than rushing a sketchy implementation into production. It goes without saying that you should not launch a website if it hasn’t been tested by both expert SEO and UX teams.
Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.
Slow response to bug fixing
There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.
Underestimating scale
Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let's launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.
It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.
Site migration process
The site migration process can be split into six main essential phases. They are all equally important and skipping any of the below tasks could hinder the migration’s success to varying extents.
Phase 1: Scope & Planning
Work out the project scope
Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.
A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.
However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.
Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.
You should now be left with a prioritized list of activities which are expected to have a positive ROI, if implemented. These should then be communicated and discussed with all stakeholders, so you set realistic targets, agree on the project scope, and set the right expectations from the outset.
Prepare the project plan
Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.
The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.
A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.
Phase 2: Pre-launch preparation
These include any activities that need to be carried out while the new site is still under development. By this point, the new site’s SEO requirements should have been captured already. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes well before the new site becomes available on a staging environment.
Wireframes review
Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.
Preparing the technical SEO specifications
Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.
The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.
Make sure to include specific requirements that cover at least the following areas:
URL structure
Meta data (including dynamically generated default values)
Structured data
Canonicals and meta robots directives
Copy & headings
Main & secondary navigation
Internal linking (in any form)
Pagination
XML sitemap(s)
HTML sitemap
Hreflang (if there are international sites)
Mobile setup (including the app, AMP, or PWA site)
Redirects
Custom 404 page
JavaScript, CSS, and image files
Page loading times (for desktop & mobile)
The specification should also include areas of the CMS functionality that allows users to:
Specify custom URLs and override default ones
Update page titles
Update meta descriptions
Update any h1–h6 headings
Add or amend the default canonical tag
Set the meta robots attributes to index/noindex/follow/nofollow
Add or edit the alt text of each image
Include Open Graph fields for description, URL, image, type, sitename
Include Twitter Card fields for card type, URL, title, description, and image
Bulk upload or amend redirects
Update the robots.txt file
It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).
Identifying priority pages
One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.
In order to do this, you need to:
Crawl the legacy site
Identify all indexable pages
Identify top performing pages
How to crawl the legacy site
Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links etc. Regardless of the crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:
Ignore robots.txt (in case any vital parts are accidentally blocked)
Follow internal “nofollow” links (so the crawler reaches more pages)
Crawl all subdomains (depending on scope)
Crawl outside start folder (depending on scope)
Change the user agent to Googlebot (desktop)
Change the user agent to Googlebot (smartphone)
Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.
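For illustration only, here is what those settings boil down to in a bare-bones Python crawler, assuming the requests and beautifulsoup4 packages and a hypothetical start URL; a dedicated crawler application remains the practical choice:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

START = "https://www.website.com/"  # hypothetical legacy site
# Spoofed user agent, mirroring the "crawl as Googlebot" setting
UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

seen, queue, crawl_data = set(), [START], []
while queue and len(seen) < 500:  # cap the sketch at 500 URLs
    url = queue.pop(0)
    if url in seen:
        continue
    seen.add(url)
    # Deliberately no robots.txt check, mirroring the "ignore robots.txt" setting
    resp = requests.get(url, headers={"User-Agent": UA}, timeout=10)
    row = {"url": url, "status": resp.status_code, "title": ""}
    if "text/html" in resp.headers.get("Content-Type", ""):
        soup = BeautifulSoup(resp.text, "html.parser")
        if soup.title:
            row["title"] = soup.title.get_text(strip=True)
        # Queue every same-host link, including rel="nofollow" ones
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == urlparse(START).netloc:
                queue.append(link)
    crawl_data.append(row)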
How to identify the indexable pages
Once the crawl is complete, work on identifying the legacy site’s indexable pages. These are any HTML pages with the following characteristics:
Return a 200 server response
Either do not have a canonical tag or have a self-referring canonical URL
Do not have a meta robots noindex
Aren’t excluded from the robots.txt file
Are internally linked from other pages (non-orphan pages)
The indexable pages are the only pages that have the potential to drive traffic to the site and therefore need to be prioritized for the purposes of your site migration. These are the pages worth optimizing (if they will exist on the new site) or redirecting (if they won’t exist on the new site).
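The criteria above translate directly into a filter over your crawl export. A sketch, assuming each crawled URL has been exported as a dictionary with hypothetical field names:

def is_indexable(page):
    # A page is indexable only if it meets all of the criteria listed above
    return (
        page["status"] == 200
        and page["canonical"] in ("", page["url"])  # no canonical, or self-referring
        and "noindex" not in page["meta_robots"]
        and not page["blocked_by_robots_txt"]
        and page["inlinks"] > 0  # not an orphan page
    )

pages = [
    {"url": "https://www.website.com/page/", "status": 200, "canonical": "",
     "meta_robots": "index,follow", "blocked_by_robots_txt": False, "inlinks": 12},
    {"url": "https://www.website.com/old/", "status": 301, "canonical": "",
     "meta_robots": "", "blocked_by_robots_txt": False, "inlinks": 3},
]
indexable = [p["url"] for p in pages if is_indexable(p)]  # only the first URL qualifies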
How to identify the top performing pages
Once you’ve identified all indexable pages, you may have to carry out more work, especially if the legacy site consists of a large number of pages and optimizing or redirecting all of them is impossible due to time, resource, or technical constraints.
If this is the case, you should identify the legacy site’s top performing pages. This will help with the prioritization of the pages to focus on during the later stages.
It’s recommended to prepare a spreadsheet that includes the below fields:
Legacy URL (include only the indexable ones from the crawl data)
Organic visits during the last 12 months (Analytics)
Revenue, conversions, and conversion rate during the last 12 months (Analytics)
Pageviews during the last 12 months (Analytics)
Number of clicks from the last 90 days (Search Console)
Top linked pages (Majestic SEO/Ahrefs)
With the above information in one place, it’s now much easier to identify your most important pages: the ones that generate organic visits, convert well, contribute to revenue, have a good number of referring domains linking to them, etc. These are the pages that you must focus on for a successful site migration.
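One way to pull these fields together is with pandas, sketched here against hypothetical CSV exports from each tool:

import pandas as pd

crawl = pd.read_csv("indexable_urls.csv")           # column: url
analytics = pd.read_csv("analytics_12_months.csv")  # url, organic_visits, revenue, pageviews
gsc = pd.read_csv("search_console_90_days.csv")     # url, clicks
links = pd.read_csv("top_linked_pages.csv")         # url, referring_domains

df = (crawl.merge(analytics, on="url", how="left")
           .merge(gsc, on="url", how="left")
           .merge(links, on="url", how="left")
           .fillna(0))

# Rank by whichever metrics matter most; here, organic visits then revenue
top_pages = df.sort_values(["organic_visits", "revenue"], ascending=False).head(1000)
top_pages.to_csv("priority_pages.csv", index=False)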
The top performing pages should ideally also exist on the new site. If for any reason they don’t, they should be redirected to the most relevant page so that users requesting them do not land on 404 pages and the link equity they previously had remains on the site. If any of these pages cease to exist and aren’t properly redirected, your site’s rankings and traffic will negatively be affected.
Benchmarking
Once the launch of the new website is getting close, you should benchmark the legacy site’s performance. Benchmarking is essential, not only to compare the new site’s performance with the previous one but also to help diagnose which areas underperform on the new site and to quickly address them.
Keywords rank tracking
If you don’t track the site’s rankings frequently, you should do so just before the new site goes live. Otherwise, you will later struggle figuring out whether the migration has gone smoothly or where exactly things went wrong. Don’t leave this to the last minute in case something goes awry — a week in advance would be the ideal time.
Spend some time working out which keywords are most representative of the site’s organic search visibility and track them across desktop and mobile. Because monitoring thousands of head, mid-, and long-tail keyword combinations is usually unrealistic, the bare minimum you should monitor are keywords that drive traffic to the site (keywords ranking on page one) and have decent search volume (head/mid-tail focus).
If you do get traffic from both brand and non-brand keywords, you should also decide which type of keywords to focus on more from a tracking POV. In general, non-brand keywords tend to be more competitive and volatile. For most sites it would make sense to focus mostly on these.
Don’t forget to track rankings across desktop and mobile. This will make it much easier to diagnose problems post-launch should there be performance issues on one device type. If you receive a high volume of traffic from more than one country, consider rank tracking keywords in other markets, too, because visibility and rankings can vary significantly from country to country.
Site performance
The new site’s page loading times can have a big impact on both traffic and sales. Several studies have shown that the longer a page takes to load, the higher the bounce rate. Unless the old site’s page loading times and site performance scores have been recorded, it will be very difficult to attribute any traffic or revenue loss to site performance related issues once the new site has gone live.
It’s recommended that you review all major page types using Google’s PageSpeed Insights and Lighthouse tools. You could use summary tables like the ones below to benchmark some of the most important performance metrics, which will be useful for comparisons once the new site goes live.
MOBILE

Page type        | Speed   | FCP  | DCL  | Optimization | Optimization score
Homepage         | Fast    | 0.7s | 1.4s | Good         | 81/100
Category page    | Slow    | 1.8s | 5.1s | Medium       | 78/100
Subcategory page | Average | 0.9s | 2.4s | Medium       | 69/100
Product page     | Slow    | 1.9s | 5.5s | Good         | 83/100

DESKTOP

Page type        | Speed | FCP  | DCL  | Optimization | Optimization score
Homepage         | Good  | 0.7s | 1.4s | Average      | 81/100
Category page    | Fast  | 0.6s | 1.2s | Medium       | 78/100
Subcategory page | Fast  | 0.6s | 1.3s | Medium       | 78/100
Product page     | Good  | 0.8s | 1.3s | Good         | 83/100
Old site crawl data
A few days before the new site replaces the old one, run a final crawl of the old site. Doing so could later prove invaluable, should there be any optimization issues on the new site. A final crawl will allow you to save vital information about the old site’s page titles, meta descriptions, h1–h6 headings, server status, canonical tags, noindex/nofollow pages, inlinks/outlinks, level, etc. Having all this information available could save you a lot of trouble if, say, the new site isn’t well optimized or suffers from technical misconfiguration issues. Try also to save a copy of the old site’s robots.txt and XML sitemaps in case you need these later.
Search Console data
Also consider exporting as much of the old site’s Search Console data as possible. Much of it is only available for 90 days, and chances are that once the new site goes live, the old site’s Search Console data will disappear sooner or later. Data worth exporting includes:
Search analytics queries & pages
Crawl errors
Blocked resources
Mobile usability issues
URL parameters
Structured data errors
Links to your site
Internal links
Index status
Redirects preparation
The redirects implementation is one of the most crucial activities during a site migration. If the legacy site’s URLs cease to exist and aren’t correctly redirected, the website’s rankings and visibility will simply tank.
Why are redirects important in site migrations?
Redirects are extremely important because they help both search engines and users find pages that may no longer exist, have been renamed, or moved to another location. From an SEO point of view, redirects help search engines discover and index a site’s new URLs quicker but also understand how the old site’s pages are associated with the new site’s pages. This association will allow for ranking signals to pass from the old pages to the new ones, so rankings are retained without being negatively affected.
What happens when redirects aren’t correctly implemented?
When redirects are poorly implemented, the consequences can be catastrophic. Users will either land on Not Found pages (404s) or irrelevant pages that do not meet the user intent. In either case, the site’s bounce and conversion rates will be negatively affected. The consequences for search engines can be equally catastrophic: they’ll be unable to associate the old site’s pages with those on the new site if the URLs aren’t identical. Ranking signals won’t be passed over from the old to the new site, which will result in ranking drops and organic search visibility loss. In addition, it will take search engines longer to discover and index the new site’s pages.
301, 302, JavaScript redirects, or meta refresh?
When the URLs between the old and new version of the site are different, use 301 (permanent) redirects. These will tell search engines to index the new URLs as well as forward any ranking signals from the old URLs to the new ones. Therefore, you must use 301 redirects if your site moves to/from another domain/subdomain, if you switch from HTTP to HTTPS, or if the site or parts of it have been restructured. Despite some of Google’s claims that 302 redirects pass PageRank, indexing the new URLs would be slower and ranking signals could take much longer to be passed on from the old to the new page.
302 (temporary) redirects should only be used in situations where a redirect does not need to live permanently and therefore indexing the new URL isn’t a priority. With 302 redirects, search engines will initially be reluctant to index the content of the redirect destination URL and pass any ranking signals to it. However, if the temporary redirects remain for a long period of time without being removed or updated, they could end up behaving similarly to permanent (301) redirects. Use 302 redirects when a redirect is likely to require updating or removal in the near future, as well as for any country-, language-, or device-specific redirects.
Meta refresh and JavaScript redirects should be avoided. Even though Google is getting better and better at crawling JavaScript, there are no guarantees these will get discovered or pass ranking signals to the new pages.
If you’d like to find out more about how Google deals with the different types of redirects, please refer to John Mueller’s post.
Redirect mapping process
If you are lucky enough to work on a migration that doesn’t involve URL changes, you could skip this section. Otherwise, read on to find out why any legacy pages that won’t be available on the same URL after the migration should be redirected.
The redirect mapping file is a spreadsheet that includes the following two columns:
Legacy site URL –> a page’s URL on the old site.
New site URL –> a page’s URL on the new site.
When mapping (redirecting) a page from the old to the new site, always try mapping it to the most relevant corresponding page. In cases where a relevant page doesn’t exist, avoid redirecting the page to the homepage. First and foremost, redirecting users to irrelevant pages results in a very poor user experience. Google has stated that redirecting pages “en masse” to irrelevant pages will be treated as soft 404s and because of this won’t pass any SEO value. If you can’t find an equivalent page on the new site, try mapping it to its parent category page.
Once the mapping is complete, the file will need to be sent to the development team to create the redirects, so that these can be tested before launching the new site. The implementation of redirects is another part in the site migration cycle where things can often go wrong.
Increasing efficiencies during the redirect mapping process
Redirect mapping requires great attention to detail and needs to be carried out by experienced SEOs. The URL mapping on small sites could in theory be done by manually mapping each URL of the legacy site to a URL on the new site. But on large sites that consist of thousands or even hundreds of thousands of pages, manually mapping every single URL is practically impossible and automation needs to be introduced. Relying on certain common attributes between the legacy and new site can be a massive time-saver. Such attributes may include the page titles, H1 headings, or other unique page identifiers such as product codes, SKUs etc. Make sure the attributes you rely on for the redirect mapping are unique and not repeated across several pages; otherwise, you will end up with incorrect mapping.
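A simplified sketch of attribute-based mapping, assuming both crawls have been exported to CSV with a hypothetical unique SKU column:

import pandas as pd

legacy = pd.read_csv("legacy_crawl.csv")  # columns: url, sku (hypothetical)
new = pd.read_csv("new_crawl.csv")        # columns: url, sku (hypothetical)

# Only map on SKUs that are genuinely unique on both sides
legacy = legacy.drop_duplicates(subset="sku", keep=False)
new = new.drop_duplicates(subset="sku", keep=False)

mapping = legacy.merge(new, on="sku", suffixes=("_legacy", "_new"))
mapping[["url_legacy", "url_new"]].to_csv("redirect_mapping.csv", index=False)

# Anything left unmatched still needs manual mapping to the most relevant page
unmatched = legacy[~legacy["sku"].isin(mapping["sku"])]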
Pro tip: Make sure the URL structure of the new site is 100% finalized on staging before you start working on the redirect mapping. There’s nothing riskier than mapping URLs that will be updated before the new site goes live. When URLs are updated after the redirect mapping is completed, you may have to deal with undesired situations upon launch, such as broken redirects, redirect chains, and redirect loops. A content-freeze should be placed on the old site well in advance of the migration date, so there is a cut-off point for new content being published on the old site. This will make sure that no pages will be missed from the redirect mapping and guarantee that all pages on the old site get redirected.
Don’t forget the legacy redirects!
You should get hold of the old site’s existing redirects to ensure they’re considered when preparing the redirect mapping for the new site. Unless you do this, it’s likely that the site’s current redirect file will get overwritten by the new one on the launch date. If this happens, all legacy redirects that were previously in place will cease to exist and the site may lose a decent amount of link equity, the extent of which will largely depend on the site’s volume of legacy redirects. For instance, a site that has undergone a few migrations in the past should have a good number of legacy redirects in place that you don’t want getting lost.
Ideally, preserve as many of the legacy redirects as possible, making sure these won’t cause any issues when combined with the new site’s redirects. It’s strongly recommended to eliminate any potential redirect chains at this early stage, which can easily be done by checking whether the same URL appears both as a “Legacy URL” and “New site URL” in the redirect mapping spreadsheet. If this is the case, you will need to update the “New site URL” accordingly.
Example:
URL A redirects to URL B (legacy redirect)
URL B redirects to URL C (new redirect)
Which results in the following redirect chain:
URL A –> URL B –> URL C
To eliminate this, amend the existing legacy redirect and create a new one so that:
URL A redirects to URL C (amended legacy redirect)
URL B redirects to URL C (new redirect)
Pro tip: Check your redirect mapping spreadsheet for redirect loops. These occur when the “Legacy URL” is identical to the “New site URL.” Redirect loops must be eliminated because they result in infinitely loading pages that are inaccessible to users and search engines, making them instant traffic, conversion, and ranking killers!
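Both the chain and loop checks can be scripted. A sketch that follows a {legacy URL: new URL} mapping to its final destination, flattening chains and flagging loops along the way:

redirects = {
    "https://www.website.com/a/": "https://www.website.com/b/",  # legacy redirect
    "https://www.website.com/b/": "https://www.website.com/c/",  # new redirect
}

def final_destination(url, redirects, max_hops=10):
    # Follow the mapping until we reach a URL that isn't itself redirected
    seen = set()
    while url in redirects:
        if url in seen or len(seen) >= max_hops:
            raise ValueError(f"Redirect loop detected at {url}")
        seen.add(url)
        url = redirects[url]
    return url

# Flatten every entry so each legacy URL points straight to its destination
flattened = {src: final_destination(dst, redirects) for src, dst in redirects.items()}
# Both /a/ and /b/ now map directly to /c/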
Implement blanket redirect rules to avoid duplicate content
It’s strongly recommended to try working out redirect rules that cover as many URL requests as possible. Implementing redirect rules on a web server is much more efficient than relying on numerous one-to-one redirects. If your redirect mapping document consists of a very large number of redirects that need to be implemented as one-to-one redirect rules, site performance could be negatively affected. In any case, double check with the development team the maximum number of redirects the web server can handle without issues.
There are also some standard redirect rules that should be in place to avoid generating duplicate content issues:
URL case: All URLs containing upper-case characters should be 301 redirected to their all-lowercase equivalents, e.g. https://www.website.com/Page/ should be automatically redirecting to https://www.website.com/page/
Host: For instance, all non-www URLs should be 301 redirected to their www equivalent, e.g. https://website.com/page/ should be redirected to https://www.website.com/page/
Protocol: On a secure website, requests for HTTP URLs should be redirected to the equivalent HTTPS URL, e.g. http://www.website.com/page/ should automatically redirect to https://www.website.com/page/
Trailing slash: For instance, any URLs not containing a trailing slash should redirect to a version with a trailing slash, e.g. http://www.website.com/page should redirect to http://www.website.com/page/
Even if some of these standard redirect rules exist on the legacy website, do not assume they’ll necessarily exist on the new site unless they’re explicitly requested.
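The exact syntax for these rules depends on the web server, but the underlying logic can be sketched language-agnostically; here in Python, assuming a secure www site where all URLs take a trailing slash:

from urllib.parse import urlsplit, urlunsplit

def canonical_url(url):
    # Apply the blanket rules: HTTPS protocol, www host, lowercase path, trailing slash
    scheme, host, path, query, fragment = urlsplit(url)
    scheme = "https"                       # protocol rule
    host = host.lower()
    if not host.startswith("www."):
        host = "www." + host               # host rule
    path = path.lower()                    # URL case rule
    if not path.endswith("/"):
        path = path + "/"                  # trailing slash rule
    return urlunsplit((scheme, host, path, query, fragment))

# A single 301 should point each variant at the canonical form
assert canonical_url("http://website.com/Page") == "https://www.website.com/page/"

Each non-canonical request should then be answered with a single 301 to the output of a rule like this, rather than a chain of hops (e.g. HTTP to HTTPS, then non-www to www).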
Avoid internal redirects
Try updating the site’s internal links so they don’t trigger internal redirects. Even though search engines can follow internal redirects, these are not recommended because they add additional latency to page loading times and could also have a negative impact on search engine crawl time.
Don’t forget your image files
If the site’s images have moved to a new location, Google recommends redirecting the old image URLs to the new image URLs to help Google discover and index the new images quicker. If it’s not easy to redirect all images, aim to redirect at least those image URLs that have accrued backlinks.
Phase 3: Pre-launch testing
The earlier you can start testing, the better. Certain things need to be fully implemented to be tested, but others don’t. For example, user journey issues can be identified as early as the prototype or wireframe designs. Content-related issues between the old and new site, or content inconsistencies (e.g. between the desktop and mobile site), can also be identified at an early stage. But the more technical components should only be tested once fully implemented: things like redirects, canonical tags, or XML sitemaps. The earlier issues get identified, the more likely it is that they’ll be addressed before launching the new site. Identifying certain types of issues at a later stage isn’t cost-effective: it requires more resources and causes significant delays. Poor testing, and not allowing the time required to thoroughly test all the building blocks that can affect SEO and UX performance, can have disastrous consequences soon after the new site has gone live.
Making sure search engines cannot access the staging/test site
Before making the new site available on a staging/testing environment, take precautions to ensure search engines do not index it. There are a few different ways to do this, each with different pros and cons.
Site available to specific IPs (most recommended)
Making the test site available only to specific (whitelisted) IP addresses is a very effective way to prevent search engines from crawling it. Anyone trying to access the test site’s URL won’t be able to see any content unless their IP has been whitelisted. The main advantage is that whitelisted users could easily access and crawl the site without any issues. The only downside is that third-party web-based tools (such as Google’s tools) cannot be used because of the IP restrictions.
Password protection
Password protecting the staging/test site is another way to keep search engine crawlers away, but this solution has two main downsides. Depending on the implementation, it may not be possible to crawl and test a password-protected website if the crawler application doesn’t make it past the login screen. The other downside: password-protected websites that use forms for authentication can be crawled using third-party applications, but there is a risk of causing severe and unexpected issues. This is because the crawler clicks on every link on a page (when you’re logged in) and could easily end up clicking on links that create or remove pages, install/uninstall plugins, etc.
Robots.txt blocking
Adding the following lines of code to the test site’s robots.txt file will prevent search engines from crawling the test site’s pages.
User-agent: *
Disallow: /
One downside of this method is that even though the content that appears on the test server won’t get indexed, the disallowed URLs may appear on Google’s search results. Another downside is that if the above robots.txt file moves into the live site, it will cause severe de-indexing issues. This is something I’ve encountered numerous times and for this reason I wouldn’t recommend using this method to block search engines.
User journey review
If the site has been redesigned or restructured, chances are that the user journeys will be affected to some extent. Reviewing the user journeys as early as possible and well before the new site launches is difficult due to the lack of user data. However, an experienced UX professional will be able to flag any concerns that could have a negative impact on the site’s conversion rate. Because A/B testing at this stage is hardly ever possible, it might be worth carrying out some user testing and trying to get feedback from real users. Unfortunately, user experience issues can be some of the harder ones to address because they may require sitewide changes that take a lot of time and effort.
On full site overhauls, not all UX decisions can always be backed up by data and many decisions will have to be based on best practice, past experience, and “gut feeling,” hence getting UX/CRO experts involved as early as possible could pay dividends later.
Site architecture review
A site migration is often a great opportunity to improve the site architecture. In other words, you have a great chance to reorganize your keyword targeted content and maximize its search traffic potential. Carrying out extensive keyword research will help identify the best possible category and subcategory pages so that users and search engines can get to any page on the site within a few clicks — the fewer the better, so you don’t end up with a very deep taxonomy.
Identifying new keywords with decent traffic potential and mapping them into new landing pages can make a big difference to the site’s organic traffic levels. On the other hand, enhancing the site architecture needs to be done thoughtfully. It could cause problems if, say, important pages move deeper into the new site architecture or there are too many similar pages optimized for the same keywords. Some of the most successful site migrations are the ones that allocate significant resources to enhance the site architecture.
Meta data & copy review
Make sure that the site’s page titles, meta descriptions, headings, and copy have been transferred from the old to the new site without issues. If you’ve created any new pages, make sure these are optimized and don’t target keywords that have already been targeted by other pages. If you’re re-platforming, be aware that the new platform may have different default values when new pages are being created. Launching the new site without properly optimized page titles or any kind of missing copy will have an immediate negative impact on your site’s rankings and traffic. Do not forget to review whether any user-generated content (i.e. user reviews, comments) has also been uploaded.
Internal linking review
Internal links are the backbone of a website. No matter how well optimized and structured the site’s copy is, it won’t be sufficient to succeed unless it’s supported by a flawless internal linking scheme. Internal links must be reviewed throughout the entire site, including links found in:
Main & secondary navigation
Header & footer links
Body content links
Pagination links
Horizontal links (related articles, similar products, etc)
Vertical links (e.g. breadcrumb navigation)
Cross-site links (e.g. links across international sites)
Technical checks
A series of technical checks must be carried out to make sure the new site’s technical setup is sound and to avoid coming across major technical glitches after the new site has gone live.
Robots.txt file review
Prepare the new site’s robots.txt file on the staging environment. This way you can test it for errors or omissions and avoid experiencing search engine crawl issues when the new site goes live. A classic mistake in site migrations is when the robots.txt file prevents search engine access using the following directive:
Disallow: /
If this gets accidentally carried over into the live site (and it often does), it will prevent search engines from crawling the site. And when search engines cannot crawl an indexed page, the keywords associated with the page will get demoted in the search results and eventually the page will get de-indexed.
But if the robots.txt file on staging is populated with the new site’s robots.txt directives, this mishap could be avoided.
When preparing the new site’s robots.txt file, make sure that (see the verification sketch after this list):
It doesn’t block search engine access to pages that are intended to get indexed.
It doesn’t block any JavaScript or CSS resources search engines require to render page content.
The legacy site’s robots.txt file content has been reviewed and carried over if necessary.
It references the new XML sitemap(s) rather than any legacy ones that no longer exist.
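Python’s standard library can help verify the directives before launch; a quick sketch against a hypothetical staging robots.txt:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://staging.website.com/robots.txt")  # hypothetical URL
rp.read()

must_be_crawlable = [
    "https://staging.website.com/",
    "https://staging.website.com/category/product/",
    "https://staging.website.com/assets/main.js",   # JS/CSS must not be blocked
    "https://staging.website.com/assets/main.css",
]
for url in must_be_crawlable:
    if not rp.can_fetch("Googlebot", url):
        print(f"BLOCKED: {url}")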
Canonical tags review
Review the site’s canonical tags. Look for pages that either do not have a canonical tag or have a canonical tag pointing to another URL, and question whether this is intended. Don’t forget to crawl the canonical tags to find out whether they return a 200 server response. If they don’t, you will need to update them to eliminate any 3xx, 4xx, or 5xx server responses. You should also look for pages that have a canonical tag pointing to another URL combined with a noindex directive, because these two are conflicting signals and you’ll need to eliminate one of them.
Meta robots review
Once you’ve crawled the staging site, look for pages with the meta robots properties set to “noindex” or “nofollow.” If this is the case, review each one of them to make sure this is intentional and remove the “noindex” or “nofollow” directive if it isn’t.
XML sitemaps review
Prepare two different types of sitemaps: one that contains all the new site’s indexable pages, and another that includes all the old site’s indexable pages. The former will help make Google aware of the new site’s indexable URLs. The latter will help Google become aware of the redirects that are in place and the fact that some of the indexed URLs have moved to new locations, so that it can discover them and update search results quicker.
You should check each XML sitemap to make sure that:
It validates without issues
It is encoded as UTF-8
It does not contain more than 50,000 rows
Its size does not exceed 50MB when uncompressed
If there are more than 50,000 rows or the file size exceeds 50MB, you must break the sitemap down into smaller ones; these limits are part of the sitemaps protocol and help ensure your web server isn’t overloaded serving very large files that Google may request frequently.
In addition, you must crawl each XML sitemap to make sure it only includes indexable URLs. Any non-indexable URLs should be excluded from the XML sitemaps, such as:
3xx, 4xx, and 5xx pages (e.g. redirected, not found pages, bad requests, etc)
Soft 404s. These are pages with no content that return a 200 server response, instead of a 404.
Canonicalized pages (apart from self-referring canonical URLs)
Pages with a meta robots noindex directive
<!DOCTYPE html>
<html>
<head>
<meta name="robots" content="noindex" />
(…)
</head>
<body>(…)</body>
</html>
Pages with a noindex X-Robots-Tag in the HTTP header
HTTP/1.1 200 OK
Date: Tue, 10 Nov 2017 17:12:43 GMT
(…)
X-Robots-Tag: noindex
(…)
Pages blocked from the robots.txt file
Building clean XML sitemaps can help monitor the true indexing levels of the new site once it goes live. If you don’t, it will be very difficult to spot any indexing issues.
Pro tip: Download and open each XML sitemap in Excel to get a detailed overview of any additional attributes, such as hreflang or image attributes.
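The row-count and indexability checks above also lend themselves to scripting; a sketch using Python’s standard XML parser plus requests, against a locally saved sitemap:

import requests
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
tree = ET.parse("sitemap.xml")  # a locally saved copy of the sitemap
urls = [loc.text for loc in tree.findall(".//sm:loc", NS)]

assert len(urls) <= 50000, "Split the sitemap: more than 50,000 URLs"

# Spot-check that listed URLs return 200 without being redirected
for url in urls[:100]:  # sample only, for the sketch
    resp = requests.get(url, allow_redirects=False, timeout=10)
    if resp.status_code != 200:
        print(f"{resp.status_code}: {url}")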
HTML sitemap review
Depending on the size and type of site that is being migrated, having an HTML sitemap can in certain cases be beneficial. An HTML sitemap that consists of URLs that aren’t linked from the site’s main navigation can significantly boost page discovery and indexing. However, avoid generating an HTML sitemap that includes too many URLs. If you do need to include thousands of URLs, consider building a segmented HTML sitemap.
The number of nested sitemaps as well as the maximum number of URLs you should include in each sitemap depends on the site’s authority. The more authoritative a website, the higher the number of nested sitemaps and URLs it could get away with.
For example, the NYTimes.com HTML sitemap consists of three levels, where each one includes over 1,000 URLs per sitemap. These nested HTML sitemaps aid search engine crawlers in discovering articles published since 1851 that otherwise would be difficult to discover and index, as not all of them would have been internally linked.
The NYTimes HTML sitemap (level 1)
The NYTimes HTML sitemap (level 2)
Structured data review
Errors in the structured data markup need to be identified early so there’s time to fix them before the new site goes live. Ideally, you should test every single page template (rather than every single page) using Google’s Structured Data Testing tool.
Be sure to check the markup on both the desktop and mobile pages, especially if the mobile website isn’t responsive.
The tool will only report any existing errors but not omissions. For example, if your product page template does not include the Product structured data schema, the tool won’t report any errors. So, in addition to checking for errors you should also make sure that each page template includes the appropriate structured data markup for its content type.
Please refer to Google’s documentation for the most up to date details on the structured data implementation and supported content types.
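Omissions can be caught with a simple presence check per template. A sketch assuming JSON-LD markup and hypothetical template URLs and expected types:

import json
import requests
from bs4 import BeautifulSoup

templates = {  # hypothetical template URLs and the schema each should carry
    "product": ("https://staging.website.com/category/product/", "Product"),
    "category": ("https://staging.website.com/category/", "BreadcrumbList"),
}

for name, (url, expected_type) in templates.items():
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    types = set()
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            print(f"Invalid JSON-LD on {url}")
            continue
        items = data if isinstance(data, list) else [data]
        types.update(i.get("@type", "") for i in items if isinstance(i, dict))
    if expected_type not in types:
        print(f"Missing {expected_type} schema on the {name} template")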
JavaScript crawling review
You must test every single page template of the new site to make sure Google will be able to crawl content that requires JavaScript parsing. If you’re able to use Google’s Fetch and Render tool on your staging site, you should definitely do so. Otherwise, carry out some manual tests, following Justin Briggs’ advice.
As Bartosz Góralewicz’s tests proved, even if Google is able to crawl and index JavaScript-generated content, it does not mean that it is able to crawl JavaScript content across all major JavaScript frameworks. Bartosz’s findings showed that some JavaScript frameworks are not SEO-friendly, with AngularJS currently being the most problematic of all.
Bartosz also found that other search engines (such as Bing, Yandex, and Baidu) really struggle with indexing JavaScript-generated content, which is important to know if your site’s traffic relies on any of these search engines.
Hopefully, this is something that will improve over time, but with the increasing popularity of JavaScript frameworks in web development, this must be high up on your checklist.
Finally, you should check whether any external resources are being blocked. Unfortunately, this isn’t something you can control 100% because many resources (such as JavaScript and CSS files) are hosted by third-party websites which may be blocking them via their own robots.txt files!
Again, the Fetch and Render tool can help diagnose this type of issue that, if left unresolved, could have a significant negative impact.
Mobile site SEO review
Assets blocking review
First, make sure that the robots.txt file isn’t accidentally blocking any JavaScript, CSS, or image files that are essential for the mobile site’s content to render. This could have a negative impact on how search engines render and index the mobile site’s page content, which in turn could negatively affect the mobile site’s search visibility and performance.
Mobile-first index review
In order to avoid any issues associated with Google’s mobile-first index, thoroughly review the mobile website and make sure there aren’t any inconsistencies between the desktop and mobile sites in the following areas:
Page titles
Meta descriptions
Headings
Copy
Canonical tags
Meta robots attributes (i.e. noindex, nofollow)
Internal links
Structured data
A responsive website should serve the same content, links, and markup across devices, and the above SEO attributes should be identical across the desktop and mobile websites.
In addition to the above, you must carry out a few further technical checks depending on the mobile site’s set up.
Responsive site review
A responsive website must serve all devices the same HTML code, which is adjusted (via the use of CSS) depending on the screen size.
Googlebot is able to automatically detect this mobile setup as long as it’s allowed to crawl the page and its assets. It’s therefore extremely important to make sure that Googlebot can access all essential assets, such as images, JavaScript, and CSS files.
To signal to browsers that a page is responsive, a meta viewport tag should be in place within the <head> of each HTML page:
<meta name="viewport" content="width=device-width, initial-scale=1.0">
If the meta viewport tag is missing, font sizes may appear in an inconsistent manner, which may cause Google to treat the page as not mobile-friendly.
Separate mobile URLs review
If the mobile website uses separate URLs from desktop, make sure that (see the verification sketch after this list):
Each desktop page has a rel="alternate" tag pointing to the corresponding mobile URL.
Each mobile page has a rel="canonical" tag pointing to the corresponding desktop URL.
When desktop URLs are requested on mobile devices, they’re redirected to the respective mobile URL.
Redirects work across all mobile devices, including Android, iPhone, and Windows phones.
There aren’t any irrelevant cross-links between the desktop and mobile pages. This means that internal links found on a desktop page should only link to desktop pages, and those found on a mobile page should only link to other mobile pages.
The mobile URLs return a 200 server response.
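A sketch of verifying the bidirectional annotations with requests and BeautifulSoup, against a hypothetical desktop/mobile URL pair:

import requests
from bs4 import BeautifulSoup

pairs = [  # hypothetical desktop/mobile URL pairs
    ("https://www.website.com/page/", "https://m.website.com/page/"),
]

for desktop, mobile in pairs:
    d = BeautifulSoup(requests.get(desktop, timeout=10).text, "html.parser")
    m_resp = requests.get(mobile, timeout=10)
    m = BeautifulSoup(m_resp.text, "html.parser")

    if not d.find("link", rel="alternate", href=mobile):
        print(f"Missing rel=alternate to {mobile} on {desktop}")
    if not m.find("link", rel="canonical", href=desktop):
        print(f"Missing rel=canonical to {desktop} on {mobile}")
    if m_resp.status_code != 200:
        print(f"{mobile} returned {m_resp.status_code}")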
Dynamic serving review
Dynamic serving websites serve different code to each device, but on the same URL.
On dynamic serving websites, review whether the vary HTTP header has been correctly set up. This is necessary because dynamic serving websites alter the HTML for mobile user agents and the vary HTTP header helps Googlebot discover the mobile content.
Mobile-friendliness review
Regardless of the mobile site set-up (responsive, separate URLs or dynamic serving), review the pages using a mobile user-agent and make sure that:
The viewport has been set correctly. Using a fixed width viewport across devices will cause mobile usability issues.
The font size isn’t too small.
Touch elements (i.e. buttons, links) aren’t too close.
There aren’t any intrusive interstitials, such as ads, mailing list sign-up forms, or app download pop-ups. To avoid any issues, use either a small HTML banner or an image banner.
Mobile pages aren’t too slow to load (see next section).
Google’s mobile-friendly test tool can help diagnose most of the above issues:
Google’s mobile-friendly test tool in action
AMP site review
If there is an AMP website and a desktop version of the site is available, make sure that:
Each non-AMP page (i.e. desktop, mobile) has a rel="amphtml" tag pointing to the corresponding AMP URL.
Each AMP page has a rel="canonical" tag pointing to the corresponding desktop page.
Any AMP page that does not have a corresponding desktop URL has a self-referring canonical tag.
You should also make sure that the AMPs are valid. This can be tested using Google’s AMP Test Tool.
Mixed content errors
With Google pushing hard for sites to be fully secure and Chrome becoming the first browser to flag HTTP pages as not secure, aim to launch the new site on HTTPS, making sure all resources such as images, CSS, and JavaScript files are requested over secure HTTPS connections. This is essential in order to avoid mixed content issues.
Mixed content occurs when a page that’s loaded over a secure HTTPS connection requests assets over insecure HTTP connections. Most browsers either block dangerous HTTP requests or just display warnings that hinder the user experience.
Mixed content errors in Chrome’s JavaScript Console
There are many ways to identify mixed content errors, including the use of crawler applications, Google’s Lighthouse, etc.
Image assets review
Google crawls images less frequently than HTML pages. If you’re migrating a site’s images from one location to another (e.g. from your domain to a CDN), there are ways to help Google discover the migrated images quicker. Building an image XML sitemap will help, but you also need to make sure that Googlebot can reach the site’s images when crawling the site. The tricky part with image indexing is that both the web page on which an image appears and the image file itself have to get indexed.
Site performance review
Last but not least, measure the old site’s page loading times and see how these compare with the new site’s when this becomes available on staging. At this stage, focus on the network-independent aspects of performance such as the use of external resources (images, JavaScript, and CSS), the HTML code, and the web server’s configuration. More information about how to do this is available further down.
Analytics tracking review
Make sure that analytics tracking is properly set up. This review should ideally be carried out by specialist analytics consultants who will look beyond the implementation of the tracking code. Make sure that Goals and Events are properly set up, e-commerce tracking is implemented, enhanced e-commerce tracking is enabled, etc. There’s nothing more frustrating than having no analytics data after your new site is launched.
Redirects testing
Testing the redirects before the new site goes live is critical and can save you a lot of trouble later. There are many ways to check the redirects on a staging/test server, but the bottom line is that you should not launch the new website without having tested the redirects.
Once the redirects become available on the staging/testing environment, crawl the entire list of redirects and check for the following issues:
Redirect loops (a URL that infinitely redirects to itself)
Redirects with a 4xx or 5xx server response.
Redirect chains (a URL that redirects to another URL, which in turn redirects to another URL, etc).
Canonical URLs that return a 4xx or 5xx server response.
Canonical loops (page A has a canonical pointing to page B, which has a canonical pointing to page A).
Canonical chains (a canonical that points to another page that has a canonical pointing to another page, etc).
Protocol/host inconsistencies e.g. URLs are redirected to both HTTP and HTTPS URLs or www and non-www URLs.
Leading/trailing whitespace characters. Use trim() in Excel to eliminate them.
Invalid characters in URLs.
Pro tip: Make sure each of the old site’s URLs redirects to the correct URL on the new site. At this stage, because the new site doesn’t exist yet, you can only test whether the redirect destination URL is the intended one, but it’s definitely worth it. The fact that a URL redirects does not mean it redirects to the right page.
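The core of that check can be scripted with requests, keeping redirects disabled so every hop in a chain is visible; a sketch assuming the mapping spreadsheet has been exported to CSV with hypothetical column names:

import csv
import requests
from urllib.parse import urljoin

with open("redirect_mapping.csv") as f:
    mapping = list(csv.DictReader(f))  # columns: legacy_url, new_url (hypothetical)

for row in mapping:
    url, hops = row["legacy_url"], []
    while len(hops) < 5:  # bail out of suspiciously long chains and loops
        resp = requests.get(url, allow_redirects=False, timeout=10)
        if resp.status_code not in (301, 302):
            break
        url = urljoin(url, resp.headers["Location"])
        hops.append(url)
    if not hops:
        print(f"No redirect in place for {row['legacy_url']}")
    elif len(hops) > 1:
        print(f"Redirect chain ({len(hops)} hops): {row['legacy_url']} -> {url}")
    elif url != row["new_url"]:
        print(f"Wrong destination: {row['legacy_url']} -> {url}")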
Phase 4: Launch day activities
When the site is down...
While the new site is replacing the old one, chances are that the live site is going to be temporarily down. The downtime should be kept to a minimum, but while this happens the web server should respond to any URL request with a 503 (service unavailable) server response. This will tell search engine crawlers that the site is temporarily down for maintenance so they come back to crawl the site later.
If the site is down for too long without serving a 503 server response and search engines crawl the website, organic search visibility will be negatively affected and recovery won’t be instant once the site is back up. In addition, while the website is temporarily down it should also serve an informative holding page notifying users that the website is temporarily down for maintenance.
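As a minimal illustration of the correct behavior (in practice this belongs in the web server or load balancer configuration, not in application code), a Python standard library sketch:

from http.server import BaseHTTPRequestHandler, HTTPServer

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 503 tells crawlers the downtime is temporary;
        # Retry-After hints when they should come back
        self.send_response(503)
        self.send_header("Retry-After", "3600")  # seconds
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Down for maintenance. Back shortly.</h1>")

HTTPServer(("", 8080), MaintenanceHandler).serve_forever()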
Technical spot checks
As soon as the new site has gone live, take a quick look at:
The robots.txt file to make sure search engines are not blocked from crawling
Top pages redirects (e.g. do requests for the old site’s top pages redirect correctly?)
Top pages canonical tags
Top pages server responses
Noindex/nofollow directives, in case they are unintentional
The spot checks need to be carried out across both the mobile and desktop sites, unless the site is fully responsive.
Search Console actions
The following activities should take place as soon as the new website has gone live:
Test & upload the XML sitemap(s)
Set the Preferred location of the domain (www or non-www)
Set the International targeting (if applicable)
Configure the URL parameters to tackle any potential duplicate content issues early.
Upload the Disavow file (if applicable)
Use the Change of Address tool (if switching domains)
Pro tip: Use the “Fetch as Google” feature for each different type of page (e.g. the homepage, a category, a subcategory, a product page) to make sure Googlebot can render the pages without any issues. Review any reported blocked resources and do not forget to use Fetch and Render for desktop and mobile, especially if the mobile website isn’t responsive.
Blocked resources prevent Googlebot from rendering the content of the page
Phase 5: Post-launch review
Once the new site has gone live, a new round of in-depth checks should be carried out. These are largely the same ones as those mentioned in the “Phase 3: Pre-launch Testing” section.
However, the main difference during this phase is that you now have access to a lot more data and tools. Don’t underestimate the amount of effort you’ll need to put in during this phase, because any issues you encounter now directly impacts the site’s performance in the SERPs. On the other hand, the sooner an issue gets identified, the quicker it will get resolved.
In addition to repeating the same testing tasks that were outlined in the Phase 3 section, in certain areas things can be tested more thoroughly, accurately, and in greater detail. You can now take full advantage of the Search Console features.
Check crawl stats and server logs
Keep an eye on the crawl stats available in the Search Console, to make sure Google is crawling the new site’s pages. In general, when Googlebot comes across new pages it tends to increase the average number of pages it crawls per day. But if you can’t spot a spike around the launch date, something may be negatively affecting Googlebot’s ability to crawl the site.
Crawl stats on Google’s Search Console
Reviewing the server log files is by far the most effective way to spot any crawl issues or inefficiencies. Tools like Botify and OnCrawl can be extremely useful because they combine crawls with server log data and can highlight pages search engines do not crawl, pages that are not linked to internally (orphan pages), low-value pages that are heavily internally linked, and a lot more.
Review crawl errors regularly
Keep an eye on the reported crawl errors, ideally daily during the first few weeks. Downloading these errors daily, crawling the reported URLs, and taking the necessary actions (i.e. implement additional 301 redirects, fix soft 404 errors) will aid a quicker recovery. It’s highly unlikely you will need to redirect every single 404 that is reported, but you should add redirects for the most important ones.
Pro tip: In Google Analytics you can easily find out which are the most commonly requested 404 URLs and fix these first!
Other useful Search Console features
Other Search Console features worth checking include the Blocked Resources, Structured Data errors, Mobile Usability errors, HTML Improvements, and International Targeting (to check for hreflang reported errors).
Pro tip: Keep a close eye on the URL parameters in case they’re causing duplicate content issues. If this is the case, consider taking some urgent remedial action.
Measuring site speed
Once the new site is live, measure site speed to make sure the site’s pages load fast enough on both desktop and mobile devices. With site speed being a ranking signal across devices, and because slow pages lose users and customers, comparing the new site’s speed with the old site’s is extremely important. If the new site’s page loading times appear to be higher, you should take immediate action; otherwise your site’s traffic and conversions will almost certainly take a hit.
Evaluating speed using Google’s tools
Two tools that can help with this are Google’s Lighthouse and Pagespeed Insights.
The PageSpeed Insights tool measures page performance on both mobile and desktop devices and shows real-world page speed data based on user data Google collects from Chrome. It also checks whether a page has applied common performance best practices and provides an optimization score. The tool includes the following main categories:
Speed score: Categorizes a page as Fast, Average, or Slow using two metrics: The First Contentful Paint (FCP) and DOM Content Loaded (DCL). A page is considered fast if both metrics are in the top one-third of their category.
Optimization score: Categorizes a page as Good, Medium, or Low based on performance headroom.
Page load distributions: Categorizes a page as Fast (fastest third), Average (middle third), or Slow (bottom third) by comparing against all FCP and DCL events in the Chrome User Experience Report.
Page stats: Can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
Optimization suggestions: A list of best practices that could be applied to a page.
Google’s PageSpeed Insights in action
Google’s Lighthouse is very handy for mobile performance, accessibility, and Progressive Web Apps audits. It provides various useful metrics that can be used to measure page performance on mobile devices, such as:
First Meaningful Paint: measures when the primary content of a page is visible.
Time to Interactive: the point at which the page is ready for a user to interact with it.
Speed Index: measures how quickly a page is visibly populated.
Both tools provide recommendations to help improve any reported site performance issues.
Google’s Lighthouse in action
You can also use this Google tool to get a rough estimate on the percentage of users you may be losing from your mobile site’s pages due to slow page loading times.
The same tool also provides an industry comparison so you get an idea of how far you are from the top performing sites in your industry.
Measuring speed from real users
Once the site has gone live, you can start evaluating site speed based on the users visiting your site. If you have Google Analytics, you can easily compare the new site’s average load time with the previous one.
In addition, if you have access to a Real User Monitoring tool such as Pingdom, you can evaluate site speed based on the users visiting your website. The below map illustrates how different visitors experience very different loading times depending on their geographic location. In the below example, the page loading times appear to be satisfactory to visitors from the UK, US, and Germany, but to users residing in other countries they are much higher.
Phase 6: Measuring site migration performance
When to measure
Has the site migration been successful? This is the million-dollar question everyone involved would like to know the answer to as soon as the new site goes live. In reality, the longer you wait the clearer the answer becomes, as visibility during the first few weeks or even months can be very volatile depending on the size and authority of your site. For smaller sites, a 4–6 week period should be sufficient before comparing the new site’s visibility with the old site’s. For large websites you may have to wait for at least 2–3 months before measuring.
In addition, if the new site is significantly different from the previous one, users will need some time to get used to the new look and feel and to acclimatize to the new taxonomy, user journeys, etc. Such changes initially have a significant negative impact on the site’s conversion rate, which should improve after a few weeks as returning visitors get more and more used to the new site. In any case, drawing data-driven conclusions about the new site’s UX can be risky.
But these are just general rules of thumb and need to be taken into consideration along with other factors. For instance, if a few days or weeks after the new site launch significant additional changes were made (e.g. to address a technical issue), the migration’s evaluation should be pushed further back.
How to measure
Performance measurement is very important; even though business stakeholders may only be interested in the revenue and traffic impact, there are a whole lot of other metrics you should pay attention to. For example, there can be several reasons for revenue going down following a site migration, including seasonal trends, lower brand interest, UX issues that have significantly lowered the site’s conversion rate, poor mobile performance, poor page loading times, etc. So, in addition to the organic traffic and revenue figures, also pay attention to the following:
Desktop & mobile visibility (from SearchMetrics, SEMrush, Sistrix)
Desktop and mobile rankings (from any reliable rank tracking tool)
User engagement (bounce rate, average time on page)
Sessions per page type (i.e. are the category pages driving as many sessions as before?)
Conversion rate per page type (i.e. are the product pages converting the same way as before?)
Conversion rate by device (i.e. has the desktop/mobile conversion rate increased/decreased since launching the new site?)
Reviewing the below could also be very handy, especially from a technical troubleshooting perspective:
Number of indexed pages (Search Console)
Submitted vs indexed pages in XML sitemaps (Search Console)
Pages receiving at least one visit (analytics)
Site speed (PageSpeed Insights, Lighthouse, Google Analytics)
It’s only after you’ve looked into all the above areas that you could safely conclude whether your migration has been successful or not.
Good luck and if you need any consultation or assistance with your site migration, please get in touch!
Appendix: Useful tools
Crawlers
Screaming Frog: The SEO Swiss army knife, ideal for crawling small- and medium-sized websites.
Sitebulb: Very intuitive crawler application with a neat user interface, nicely organized reports, and many useful data visualizations.
Deep Crawl: Cloud-based crawler with the ability to crawl staging sites and compare different crawls; copes well with large websites.
Botify: Another powerful cloud-based crawler supported by exceptional server log file analysis capabilities that can be very insightful in terms of understanding how search engines crawl the site.
OnCrawl: Crawler and server log analyzer for enterprise SEO audits with many handy features to identify crawl budget, content quality, and performance issues.
Handy Chrome add-ons
Web developer: A collection of developer tools including easy ways to enable/disable JavaScript, CSS, images, etc.
User agent switcher: Switch between different user agents including Googlebot, mobile, and other agents.
Ayima Redirect Path: A great header and redirect checker.
SEO Meta in 1 click: An on-page meta attributes, headers, and links inspector.
Scraper: An easy way to scrape website data into a spreadsheet.
Site monitoring tools
Uptime Robot: Free website uptime monitoring.
Robotto: Free robots.txt monitoring tool.
Pingdom tools: Monitors site uptime and page speed from real users (RUM service)
SEO Radar: Monitors all critical SEO elements and fires alerts when these change.
Site performance tools
PageSpeed Insights: Measures page performance for mobile and desktop devices. It checks to see if a page has applied common performance best practices and provides a score, which ranges from 0 to 100 points.
Lighthouse: Handy Chrome extension for performance, accessibility, Progressive Web Apps audits. Can also be run from the command line, or as a Node module.
Webpagetest.org: Very detailed page tests from various locations, connections, and devices, including detailed waterfall charts.
Structured data testing tools
Google’s structured data testing tool & Google’s structured data testing tool Chrome extension
Bing’s markup validator
Yandex structured data testing tool
Google’s rich results testing tool
Mobile testing tools
Google’s mobile-friendly testing tool
Google’s AMP testing tool
AMP validator tool
Backlink data sources
Ahrefs
Majestic SEO
https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không 
dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX xem thêm tại: https://ift.tt/2mb4VST để biết thêm về địa chỉ bán tai nghe không dây giá rẻ The Website Migration Guide: SEO Strategy & Process https://ift.tt/2tnkdHX Bạn có thể xem thêm địa chỉ mua tai nghe không dây tại đây https://ift.tt/2mb4VST
0 notes
isearchgoood · 7 years ago
Text
The Website Migration Guide: SEO Strategy & Process
Posted by Modestos
What is a site migration?
A site migration is a term broadly used by SEO professionals to describe any event whereby a website undergoes substantial changes in areas that can significantly affect search engine visibility — typically substantial changes to the site structure, content, coding, site performance, or UX.
Google’s documentation on site migrations doesn’t cover them in great depth and downplays the fact that they so often result in significant traffic and revenue loss, which can last from a few weeks to several months — depending on the extent to which search engine ranking signals have been affected, as well as how long it may take the affected business to roll out a successful recovery plan.
Quick access links
Site migration examples
Site migration types
Common site migration pitfalls
Site migration process
1. Scope & planning
2. Pre-launch preparation
3. Pre-launch testing
4. Launch day actions
5. Post-launch testing
6. Performance review
Appendix: Useful tools
Site migration examples
The following section discusses how both successful and unsuccessful site migrations look and explains why it is 100% possible to come out of a site migration without suffering significant losses.
Debunking the “expected traffic drop” myth
Anyone who has been involved with a site migration has probably heard the widespread theory that it will result in de facto traffic and revenue loss. Even though this assertion holds some truth for some very specific cases (e.g. moving from an established domain to a brand-new one), it shouldn’t be treated as gospel. It is entirely possible to migrate without losing any traffic or revenue; you can even enjoy significant growth right after launching a revamped website. However, this can only be achieved if every single step has been well-planned and executed.
Examples of unsuccessful site migrations
The following graph illustrates a big UK retailer’s botched site migration where the website lost 35% of its visibility two weeks after switching from HTTP to HTTPS. It took them about six months to fully recover, which must have had a significant impact on revenue from organic search. This is a typical example of a poor site migration, possibly caused by poor planning or implementation.
Example of a poor site migration — recovery took 6 months!
But recovery may not always be possible. The below visibility graph is from another big UK retailer, where the HTTP to HTTPS switchover resulted in a permanent 20% visibility loss.
Another example of a poor site migration — no signs of recovery 6 months on!
In fact, it is entirely possible to migrate from HTTP to HTTPS without losing that much traffic for such a long period, aside from the first few weeks, where there is high volatility as Google discovers the new URLs and updates search results.
Examples of successful site migrations
What does a successful site migration look like? This largely depends on the site migration type, the objectives, and the KPIs (more details later). But in most cases, a successful site migration shows at least one of the following characteristics:
Minimal visibility loss during the first few weeks (short-term goal)
Visibility growth thereafter — depending on the type of migration (long-term goal)
The following visibility report is taken from an HTTP to HTTPS site migration, which was also accompanied by significant improvements to the site’s page loading times.
The following visibility report is from a complete site overhaul, which I was fortunate to be involved with several months in advance and supported during the strategy, planning, and testing phases, all of which were equally important.
As commonly occurs on site migration projects, the launch date had to be pushed back a few times due to the risks of launching the new site prematurely and before major technical obstacles were fully addressed. But as you can see on the below visibility graph, the wait was well worth it. Organic visibility not only didn’t drop (as most would normally expect) but in fact started growing from the first week.
Visibility growth one month after the migration reached 60%, whilst organic traffic growth two months post-launch exceeded 80%.
Example of a very successful site migration — instant growth following new site launch!
This was a rather complex migration as the new website was re-designed and built from scratch on a new platform with an improved site taxonomy that included new landing pages, an updated URL structure, lots of redirects to preserve link equity, plus a switchover from HTTP to HTTPS.
In general, introducing too many changes at the same time can be tricky because if something goes wrong, you’ll struggle to figure out what exactly is at fault. But at the same time, leaving major changes for a later time isn’t ideal either as it will require more resources. If you know what you’re doing, making multiple positive changes at once can be very cost-effective.
Before getting into the nitty-gritty of how you can turn a complex site migration project into a success, it’s important to run through the main site migration types as well as explain the main reason so many site migrations fail.
Site migration types
There are many site migration types. It all depends on the nature of the changes that take place to the legacy website.
Google’s documentation mostly covers migrations with site location changes, which are categorised as follows:
Site moves with URL changes
Site moves without URL changes
Site move migrations
These typically occur when a site moves to a different URL due to any of the below:
Protocol change
A classic example is when migrating from HTTP to HTTPS.
Subdomain or subfolder change
Very common in international SEO, where a business decides to move one or more ccTLDs into subdomains or subfolders. Another common example is where a mobile site that sits on a separate subdomain or subfolder becomes responsive and the desktop and mobile URLs are unified.
Domain name change
Commonly occurs when a business is rebranding and must move from one domain to another.
Top-level domain change
This is common when a business decides to launch international websites and needs to move from a ccTLD (country code top-level domain) to a gTLD (generic top-level domain) or vice versa, e.g. moving from .co.uk to .com, or moving from .com to .co.uk and so on.
Site structure changes
These are changes to the site architecture that usually affect the site’s internal linking and URL structure.
Other types of migrations
There are other types of migration which are triggered by changes to the site’s content, structure, design, or platform.
Replatforming
This is the case when a website is moved from one platform/CMS to another, e.g. migrating from WordPress to Magento or just upgrading to the latest platform version. Replatforming can, in some cases, also result in design and URL changes because of technical limitations that often occur when changing platforms. This is why replatforming migrations rarely result in a website that looks exactly the same as the previous one.
Content migrations
Major content changes such as content rewrites, content consolidation, or content pruning can have a big impact on a site’s organic search visibility, depending on the scale. These changes can often affect the site’s taxonomy, navigation, and internal linking.
Mobile setup changes
With so many options available for a site’s mobile setup, changes such as enabling app indexing, building an AMP site, or building a PWA can also be considered partial site migrations, especially when an existing mobile site is being replaced by an app, AMP, or PWA.
Structural changes
These are often caused by major changes to the site’s taxonomy that impact the site’s navigation, internal linking, and user journeys.
Site redesigns
These can vary from major design changes in the look and feel to a complete website revamp that may also include significant media, code, and copy changes.
Hybrid migrations
In addition to the above, there are several hybrid migration types that can be combined in practically any way possible. The more changes that get introduced at the same time the higher the complexity and the risks. Even though making too many changes at the same time increases the risks of something going wrong, it can be more cost-effective from a resources perspective if the migration is very well-planned and executed.
Common site migration pitfalls
Even though every site migration is different there are a few common themes behind the most typical site migration disasters, with the biggest being the following:
Poor strategy
Some site migrations are doomed to failure way before the new site is launched. A strategy that is built upon unclear and unrealistic objectives is much less likely to bring success.
Establishing measurable objectives is essential in order to measure the impact of the migration post-launch. For most site migrations, the primary objective should be the retention of the site’s current traffic and revenue levels. In certain cases the bar could be raised higher, but in general anticipating or forecasting growth should be a secondary objective. This will help avoid creating unrealistic expectations.
Poor planning
Coming up with a detailed project plan as early as possible will help avoid delays along the way. Factor in additional time and resources to cope with any unforeseen circumstances that may arise. No matter how well thought out and detailed your plan is, it’s highly unlikely everything will go as expected. Be flexible with your plan and accept the fact that there will almost certainly be delays. Map out all dependencies and make all stakeholders aware of them.
Avoid planning to launch the site near your seasonal peaks, because if anything goes wrong you won’t have enough time to rectify the issues. For instance, retailers should avoid launching a site close to September/October to avoid putting the busy pre-Christmas period at risk. In this case, it would be much wiser launching during the quieter summer months.
Lack of resources
Before committing to a site migration project, estimate the time and effort required to make it a success. If your budget is limited, make a call as to whether it is worth going ahead with a migration that is likely to fail in meeting its established objectives and cause revenue loss.
As a rule of thumb, try to include a buffer of at least 20% more resource than you initially think the project will require. This additional buffer will later allow you to quickly address any issues as soon as they arise, without jeopardizing success. If your resources are too tight or you start cutting corners at this early stage, the site migration will be at risk.
Lack of SEO/UX consultation
When changes are taking place on a website, every single decision needs to be weighed from both a UX and SEO standpoint. For instance, removing large amounts of content or links to improve UX may damage the site’s ability to target business-critical keywords or result in crawling and indexing issues. In either case, such changes could damage the site’s organic search visibility. On the other hand, having too much text copy and few images may have a negative impact on user engagement and damage the site’s conversions.
To avoid risks, appoint experienced SEO and UX consultants so they can discuss the potential consequences of every single change with key business stakeholders who understand the business intricacies better than anyone else. The pros and cons of each option need to be weighed before making any decision.
Late involvement
Site migrations can span several months, require great planning and enough time for testing. Seeking professional support late is very risky because crucial steps may have been missed.
Lack of testing
In addition to a great strategy and thoughtful plan, dedicate some time and effort for thorough testing before launching the site. It’s much more preferable to delay the launch if testing has identified critical issues rather than rushing a sketchy implementation into production. It goes without saying that you should not launch a website if it hasn’t been tested by both expert SEO and UX teams.
Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.
Slow response to bug fixing
There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.
Underestimating scale
Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let's launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.
It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.
Site migration process
The site migration process can be split into six main essential phases. They are all equally important and skipping any of the below tasks could hinder the migration’s success to varying extents.
Phase 1: Scope & Planning
Work out the project scope
Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.
A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.
However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.
Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.
You should now be left with a prioritized list of activities which are expected to have a positive ROI, if implemented. These should then be communicated and discussed with all stakeholders, so you can set realistic targets, agree on the project scope, and set the right expectations from the outset.
Prepare the project plan
Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.
The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.
A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.
Phase 2: Pre-launch preparation
These include any activities that need to be carried out while the new site is still under development. By this point, the new site’s SEO requirements should have been captured already. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes well before the new site becomes available on a staging environment.
Wireframes review
Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.
Preparing the technical SEO specifications
Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.
The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.
Make sure to include specific requirements that cover at least the following areas:
URL structure
Meta data (including dynamically generated default values)
Structured data
Canonicals and meta robots directives
Copy & headings
Main & secondary navigation
Internal linking (in any form)
Pagination
XML sitemap(s)
HTML sitemap
Hreflang (if there are international sites)
Mobile setup (including the app, AMP, or PWA site)
Redirects
Custom 404 page
JavaScript, CSS, and image files
Page loading times (for desktop & mobile)
The specification should also include areas of the CMS functionality that allows users to:
Specify custom URLs and override default ones
Update page titles
Update meta descriptions
Update any h1–h6 headings
Add or amend the default canonical tag
Set the meta robots attributes to index/noindex/follow/nofollow
Add or edit the alt text of each image
Include Open Graph fields for description, URL, image, type, sitename
Include Twitter Open Graph fields for card, URL, title, description, image
Bulk upload or amend redirects
Update the robots.txt file
It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).
Identifying priority pages
One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.
In order to do this, you need to:
Crawl the legacy site
Identify all indexable pages
Identify top performing pages
How to crawl the legacy site
Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links etc. Regardless of the crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:
Ignore robots.txt (in case any vital parts are accidentally blocked)
Follow internal “nofollow” links (so the crawler reaches more pages)
Crawl all subdomains (depending on scope)
Crawl outside start folder (depending on scope)
Change the user agent to Googlebot (desktop)
Change the user agent to Googlebot (smartphone)
Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.
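If you’d also like a lightweight, scriptable snapshot alongside your main crawler’s export, something like the following Python sketch can archive the key on-page signals. It assumes the requests and beautifulsoup4 packages, and the URL list is a hypothetical placeholder:

import csv

import requests
from bs4 import BeautifulSoup

# Hypothetical starting point: a flat list of legacy URLs to snapshot.
LEGACY_URLS = [
    "https://www.example.com/",
    "https://www.example.com/category/",
]

def snapshot(url):
    """Fetch a URL and return the on-page signals worth archiving."""
    response = requests.get(url, timeout=10, allow_redirects=False)
    row = {"url": url, "status": response.status_code,
           "title": "", "meta_description": "", "canonical": ""}
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, "html.parser")
        if soup.title and soup.title.string:
            row["title"] = soup.title.string.strip()
        description = soup.find("meta", attrs={"name": "description"})
        if description:
            row["meta_description"] = description.get("content", "")
        canonical = soup.find("link", rel="canonical")
        if canonical:
            row["canonical"] = canonical.get("href", "")
    return row

with open("legacy_crawl.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=["url", "status", "title",
                                                "meta_description", "canonical"])
    writer.writeheader()
    for url in LEGACY_URLS:
        writer.writerow(snapshot(url))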
How to identify the indexable pages
Once the crawl is complete, work on identifying the legacy site’s indexable pages. These are any HTML pages with the following characteristics:
Return a 200 server response
Either do not have a canonical tag or have a self-referring canonical URL
Do not have a meta robots noindex
Aren’t excluded from the robots.txt file
Are internally linked from other pages (non-orphan pages)
The indexable pages are the only pages that have the potential to drive traffic to the site and therefore need to be prioritized for the purposes of your site migration. These are the pages worth optimizing (if they will exist on the new site) or redirecting (if they won’t exist on the new site).
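As a rough illustration, the above checks can be applied to a crawl export with a few lines of Python. This is only a sketch: it assumes the crawl was exported to a CSV with url, status, canonical, meta_robots, and inlinks columns (adjust the names to whatever your crawler produces), and the robots.txt exclusion check is easier to run separately:

import csv

def is_indexable(row):
    """Apply the indexability checks from the list above to one crawl row."""
    if row["status"] != "200":
        return False
    canonical = row["canonical"].strip()
    if canonical and canonical != row["url"]:    # canonicalized to another URL
        return False
    if "noindex" in row["meta_robots"].lower():  # meta robots noindex
        return False
    if int(row["inlinks"] or 0) == 0:            # orphan page
        return False
    return True

with open("legacy_crawl.csv") as source, open("indexable_pages.csv", "w", newline="") as target:
    reader = csv.DictReader(source)
    writer = csv.DictWriter(target, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if is_indexable(row):
            writer.writerow(row)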
How to identify the top performing pages
Once you’ve identified all indexable pages, you may have to carry out more work, especially if the legacy site consists of a large number of pages and optimizing or redirecting all of them is impossible due to time, resource, or technical constraints.
If this is the case, you should identify the legacy site’s top performing pages. This will help with the prioritization of the pages to focus on during the later stages.
It’s recommended to prepare a spreadsheet that includes the below fields:
Legacy URL (include only the indexable ones from the crawl data)
Organic visits during the last 12 months (Analytics)
Revenue, conversions, and conversion rate during the last 12 months (Analytics)
Pageviews during the last 12 months (Analytics)
Number of clicks from the last 90 days (Search Console)
Top linked pages (Majestic SEO/Ahrefs)
With the above information in one place, it’s now much easier to identify your most important pages: the ones that generate organic visits, convert well, contribute to revenue, have a good number of referring domains linking to them, etc. These are the pages that you must focus on for a successful site migration.
The top performing pages should ideally also exist on the new site. If for any reason they don’t, they should be redirected to the most relevant page so that users requesting them do not land on 404 pages and the link equity they previously had remains on the site. If any of these pages cease to exist and aren’t properly redirected, your site’s rankings and traffic will negatively be affected.
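If the above data points live in separate exports, a short pandas sketch can pull them into a single prioritization spreadsheet. File names, column names, and the equal scoring weights below are all hypothetical examples:

import pandas as pd

# Hypothetical exports: adjust file and column names to your own data.
indexable = pd.read_csv("indexable_pages.csv")           # from the crawl
analytics = pd.read_csv("analytics_12_months.csv")       # url, organic_visits, revenue
search_console = pd.read_csv("gsc_clicks_90_days.csv")   # url, clicks
backlinks = pd.read_csv("referring_domains.csv")         # url, referring_domains

merged = (indexable
          .merge(analytics, on="url", how="left")
          .merge(search_console, on="url", how="left")
          .merge(backlinks, on="url", how="left")
          .fillna(0))

# Combine the signals into a single score; the equal weighting is arbitrary.
merged["priority"] = (merged["organic_visits"].rank(pct=True)
                      + merged["revenue"].rank(pct=True)
                      + merged["clicks"].rank(pct=True)
                      + merged["referring_domains"].rank(pct=True))

merged.sort_values("priority", ascending=False).to_csv("priority_pages.csv", index=False)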
Benchmarking
Once the launch of the new website is getting close, you should benchmark the legacy site’s performance. Benchmarking is essential, not only to compare the new site’s performance with the previous one but also to help diagnose which areas underperform on the new site and to quickly address them.
Keywords rank tracking
If you don’t track the site’s rankings frequently, you should do so just before the new site goes live. Otherwise, you will later struggle figuring out whether the migration has gone smoothly or where exactly things went wrong. Don’t leave this to the last minute in case something goes awry — a week in advance would be the ideal time.
Spend some time working out which keywords are most representative of the site’s organic search visibility and track them across desktop and mobile. Because monitoring thousands of head, mid-, and long-tail keyword combinations is usually unrealistic, the bare minimum you should monitor are keywords that are driving traffic to the site (keywords ranking on page one) and have decent search volume (head/mid-tail focus).
If you do get traffic from both brand and non-brand keywords, you should also decide which type of keywords to focus on more from a tracking POV. In general, non-brand keywords tend to be more competitive and volatile. For most sites it would make sense to focus mostly on these.
Don’t forget to track rankings across desktop and mobile. This will make it much easier to diagnose problems post-launch should there be performance issues on one device type. If you receive a high volume of traffic from more than one country, consider rank tracking keywords in other markets, too, because visibility and rankings can vary significantly from country to country.
Site performance
The new site’s page loading times can have a big impact on both traffic and sales. Several studies have shown that the longer a page takes to load, the higher the bounce rate. Unless the old site’s page loading times and site performance scores have been recorded, it will be very difficult to attribute any traffic or revenue loss to site performance related issues once the new site has gone live.
It’s recommended that you review all major page types using Google’s PageSpeed Insights and Lighthouse tools. You could use summary tables like the ones below to benchmark some of the most important performance metrics, which will be useful for comparisons once the new site goes live.
MOBILE

Page type          Speed     FCP     DCL     Optimization    Optimization score
Homepage           Fast      0.7s    1.4s    Good            81/100
Category page      Slow      1.8s    5.1s    Medium          78/100
Subcategory page   Average   0.9s    2.4s    Medium          69/100
Product page       Slow      1.9s    5.5s    Good            83/100

DESKTOP

Page type          Speed     FCP     DCL     Optimization    Optimization score
Homepage           Good      0.7s    1.4s    Average         81/100
Category page      Fast      0.6s    1.2s    Medium          78/100
Subcategory page   Fast      0.6s    1.3s    Medium          78/100
Product page       Good      0.8s    1.3s    Good            83/100

(FCP: First Contentful Paint; DCL: DOM Content Loaded)
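Collecting these numbers by hand across many templates is tedious, so it may be worth querying the PageSpeed Insights v5 API instead. A minimal sketch follows; the page URLs are hypothetical placeholders, and an API key is only needed for heavier usage:

import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

# Hypothetical template pages to benchmark; replace with your own.
PAGES = {
    "Homepage": "https://www.example.com/",
    "Category page": "https://www.example.com/category/",
    "Product page": "https://www.example.com/category/product/",
}

for strategy in ("mobile", "desktop"):
    for label, url in PAGES.items():
        response = requests.get(PSI_ENDPOINT, params={"url": url, "strategy": strategy}, timeout=60)
        result = response.json()["lighthouseResult"]
        fcp = result["audits"]["first-contentful-paint"]["displayValue"]
        score = result["categories"]["performance"]["score"]
        print(f"{strategy:7} | {label:15} | FCP {fcp} | performance score {score:.0%}")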
Old site crawl data
A few days before the new site replaces the old one, run a final crawl of the old site. Doing so could later prove invaluable, should there be any optimization issues on the new site. A final crawl will allow you to save vital information about the old site’s page titles, meta descriptions, h1–h6 headings, server status, canonical tags, noindex/nofollow pages, inlinks/outlinks, level, etc. Having all this information available could save you a lot of trouble if, say, the new site isn’t well optimized or suffers from technical misconfiguration issues. Try also to save a copy of the old site’s robots.txt and XML sitemaps in case you need these later.
Search Console data
Also consider exporting as much of the old site’s Search Console data as possible. These are only available for 90 days, and chances are that once the new site goes live the old site’s Search Console data will disappear sooner or later. Data worth exporting includes:
Search analytics queries & pages
Crawl errors
Blocked resources
Mobile usability issues
URL parameters
Structured data errors
Links to your site
Internal links
Index status
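Because the data expires, it may also be worth scripting the export rather than downloading reports manually. Here is a hedged sketch using the Search Console (webmasters v3) API via google-api-python-client; it assumes you have already created service account credentials with access to the property, and the site URL and dates are placeholders:

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumes a service account JSON key that has been granted access
# to the Search Console property.
credentials = service_account.Credentials.from_service_account_file(
    "service_account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("webmasters", "v3", credentials=credentials)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",   # hypothetical property
    body={
        "startDate": "2018-01-01",        # adjust to your own 90-day window
        "endDate": "2018-03-31",
        "dimensions": ["page", "query"],
        "rowLimit": 5000,
    },
).execute()

for row in response.get("rows", []):
    page, query = row["keys"]
    print(page, query, row["clicks"], row["impressions"])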
Redirects preparation
The redirects implementation is one of the most crucial activities during a site migration. If the legacy site’s URLs cease to exist and aren’t correctly redirected, the website’s rankings and visibility will simply tank.
Why are redirects important in site migrations?
Redirects are extremely important because they help both search engines and users find pages that may no longer exist, have been renamed, or moved to another location. From an SEO point of view, redirects help search engines discover and index a site’s new URLs quicker but also understand how the old site’s pages are associated with the new site’s pages. This association will allow for ranking signals to pass from the old pages to the new ones, so rankings are retained without being negatively affected.
What happens when redirects aren’t correctly implemented?
When redirects are poorly implemented, the consequences can be catastrophic. Users will either land on Not Found pages (404s) or irrelevant pages that do not meet the user intent. In either case, the site’s bounce and conversion rates will be negatively affected. The consequences for search engines can be equally catastrophic: they’ll be unable to associate the old site’s pages with those on the new site if the URLs aren’t identical. Ranking signals won’t be passed over from the old to the new site, which will result in ranking drops and organic search visibility loss. In addition, it will take search engines longer to discover and index the new site’s pages.
301, 302, JavaScript redirects, or meta refresh?
When the URLs between the old and new version of the site are different, use 301 (permanent) redirects. These will tell search engines to index the new URLs as well as forward any ranking signals from the old URLs to the new ones. Therefore, you must use 301 redirects if your site moves to/from another domain/subdomain, if you switch from HTTP to HTTPS, or if the site or parts of it have been restructured. Despite some of Google’s claims that 302 redirects pass PageRank, indexing the new URLs would be slower and ranking signals could take much longer to be passed on from the old to the new page.
302 (temporary) redirects should only be used in situations where a redirect does not need to live permanently and therefore indexing the new URL isn’t a priority. With 302 redirects, search engines will initially be reluctant to index the content of the redirect destination URL and pass any ranking signals to it. However, if the temporary redirects remain for a long period of time without being removed or updated, they could end up behaving similarly to permanent (301) redirects. Use 302 redirects when a redirect is likely to require updating or removal in the near future, as well as for any country-, language-, or device-specific redirects.
Meta refresh and JavaScript redirects should be avoided. Even though Google is getting better and better at crawling JavaScript, there are no guarantees these will get discovered or pass ranking signals to the new pages.
If you’d like to find out more about how Google deals with the different types of redirects, please refer to John Mueller’s post.
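A quick way to verify which type of redirect a URL actually returns is to walk the redirect chain and print each hop’s status code. A minimal sketch using requests (the URL is a placeholder):

import requests

def inspect_redirects(url):
    """Follow a URL's redirect chain and report each hop's status code."""
    response = requests.get(url, timeout=10)
    for hop in response.history:          # each intermediate (redirect) response
        print(f"{hop.status_code}  {hop.url}")
    print(f"{response.status_code}  {response.url}  (final destination)")

inspect_redirects("http://example.com/page")   # hypothetical URL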
Redirect mapping process
If you are lucky enough to work on a migration that doesn’t involve URL changes, you could skip this section. Otherwise, read on to find out why any legacy pages that won’t be available on the same URL after the migration should be redirected.
The redirect mapping file is a spreadsheet that includes the following two columns:
Legacy site URL –> a page’s URL on the old site.
New site URL –> a page’s URL on the new site.
When mapping (redirecting) a page from the old to the new site, always try mapping it to the most relevant corresponding page. In cases where a relevant page doesn’t exist, avoid redirecting the page to the homepage. First and foremost, redirecting users to irrelevant pages results in a very poor user experience. Google has stated that redirecting pages “en masse” to irrelevant pages will be treated as soft 404s and because of this won’t pass any SEO value. If you can’t find an equivalent page on the new site, try mapping it to its parent category page.
Once the mapping is complete, the file will need to be sent to the development team to create the redirects, so that these can be tested before launching the new site. The implementation of redirects is another part in the site migration cycle where things can often go wrong.
Increasing efficiencies during the redirect mapping process
Redirect mapping requires great attention to detail and needs to be carried out by experienced SEOs. The URL mapping on small sites could in theory be done by manually mapping each URL of the legacy site to a URL on the new site. But on large sites that consist of thousands or even hundreds of thousands of pages, manually mapping every single URL is practically impossible and automation needs to be introduced. Relying on certain common attributes between the legacy and new site can be a massive time-saver. Such attributes may include the page titles, H1 headings, or other unique page identifiers such as product codes, SKUs etc. Make sure the attributes you rely on for the redirect mapping are unique and not repeated across several pages; otherwise, you will end up with incorrect mapping.
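As an illustration of this kind of automation, the pandas sketch below matches legacy and staging crawls on the page title; the same approach works with H1s, SKUs, or product codes. File and column names are hypothetical, and note how duplicate titles are discarded first to avoid incorrect mappings:

import pandas as pd

# Hypothetical crawl exports with url and title columns.
legacy = pd.read_csv("legacy_crawl.csv")
staging = pd.read_csv("staging_crawl.csv")

# Discard titles that appear more than once on either site; otherwise
# the join would produce incorrect one-to-many mappings.
legacy = legacy.drop_duplicates(subset="title", keep=False)
staging = staging.drop_duplicates(subset="title", keep=False)

mapping = legacy.merge(staging, on="title", suffixes=("_legacy", "_new"))
mapping[["url_legacy", "url_new"]].to_csv("redirect_mapping.csv", index=False)

# Whatever is left unmatched still needs to be mapped manually.
unmatched = legacy[~legacy["title"].isin(mapping["title"])]
print(f"{len(unmatched)} legacy URLs need manual mapping")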
Pro tip: Make sure the URL structure of the new site is 100% finalized on staging before you start working on the redirect mapping. There’s nothing riskier than mapping URLs that will be updated before the new site goes live. When URLs are updated after the redirect mapping is completed, you may have to deal with undesired situations upon launch, such as broken redirects, redirect chains, and redirect loops. A content-freeze should be placed on the old site well in advance of the migration date, so there is a cut-off point for new content being published on the old site. This will make sure that no pages will be missed from the redirect mapping and guarantee that all pages on the old site get redirected.
Don’t forget the legacy redirects!
You should get hold of the old site’s existing redirects to ensure they’re considered when preparing the redirect mapping for the new site. Unless you do this, it’s likely that the site’s current redirect file will get overwritten by the new one on the launch date. If this happens, all legacy redirects that were previously in place will cease to exist and the site may lose a decent amount of link equity, the extent of which will largely depend on the site’s volume of legacy redirects. For instance, a site that has undergone a few migrations in the past should have a good number of legacy redirects in place that you don’t want getting lost.
Ideally, preserve as many of the legacy redirects as possible, making sure these won’t cause any issues when combined with the new site’s redirects. It’s strongly recommended to eliminate any potential redirect chains at this early stage, which can easily be done by checking whether the same URL appears both as a “Legacy URL” and “New site URL” in the redirect mapping spreadsheet. If this is the case, you will need to update the “New site URL” accordingly.
Example:
URL A redirects to URL B (legacy redirect)
URL B redirects to URL C (new redirect)
Which results in the following redirect chain:
URL A –> URL B –> URL C
To eliminate this, amend the existing legacy redirect and create a new one so that:
URL A redirects to URL C (amended legacy redirect)
URL B redirects to URL C (new redirect)
Pro tip: Check your redirect mapping spreadsheet for redirect loops. These occur when the “Legacy URL” is identical to the “new site URL.” Redirect loops must be eliminated because they result in infinitely loading pages that are inaccessible to users and search engines, making them instant traffic, conversion, and ranking killers!
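Both chains and loops can be caught programmatically before launch. Here is a small, self-contained Python sketch that resolves every legacy URL in the mapping to its final destination and flags loops along the way (the URLs are placeholders):

def flatten(redirects):
    """Resolve each legacy URL to its final destination, flagging loops."""
    flattened = {}
    for source in redirects:
        seen, target = {source}, redirects[source]
        while target in redirects:        # keep following the chain
            if target in seen:
                raise ValueError(f"Redirect loop detected at {target!r}")
            seen.add(target)
            target = redirects[target]
        flattened[source] = target
    return flattened

# The chain URL A -> URL B -> URL C collapses into two direct redirects:
print(flatten({"/url-a": "/url-b", "/url-b": "/url-c"}))
# {'/url-a': '/url-c', '/url-b': '/url-c'}

# A redirect loop raises immediately:
# flatten({"/url-d": "/url-d"})  -> ValueError: Redirect loop detected at '/url-d'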
Implement blanket redirect rules to avoid duplicate content
It’s strongly recommended to try working out redirect rules that cover as many URL requests as possible. Implementing redirect rules on a web server is much more efficient than relying on numerous one-to-one redirects. If your redirect mapping document consists of a very large number of redirects that need to be implemented as one-to-one redirect rules, site performance could be negatively affected. In any case, double check with the development team the maximum number of redirects the web server can handle without issues.
Regardless, there are some standard redirect rules that should be in place to avoid generating duplicate content issues:
URL case: All URLs containing upper-case characters should be 301 redirected to all lower-case URLs, e.g. https://www.website.com/Page/ should be automatically redirecting to https://www.website.com/page/
Host: For instance, all non-www URLs should be 301 redirected to their www equivalent, e.g. https://website.com/page/ should be redirected to https://www.website.com/page/
Protocol: On a secure website, requests for HTTP URLs should be redirected to the equivalent HTTPS URL, e.g. http://www.website.com/page/ should automatically redirect to https://www.website.com/page/
Trailing slash: For instance, any URLs not containing a trailing slash should redirect to a version with a trailing slash, e.g. http://www.website.com/page should redirect to http://www.website.com/page/
Even if some of these standard redirect rules exist on the legacy website, do not assume they’ll necessarily exist on the new site unless they’re explicitly requested.
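One way to sanity-check these rules once implemented is to compute the expected canonical form of any requested URL and compare it against where the server actually sends you. A sketch of the four normalizations in Python (the preferred www host is an assumption; adjust to your own setup):

from urllib.parse import urlsplit, urlunsplit

PREFERRED_HOST = "www.website.com"   # assumption: the www host is the canonical one

def canonical_form(url):
    """Apply the case, host, protocol, and trailing slash rules to a URL."""
    parts = urlsplit(url)
    path = parts.path.lower() or "/"  # lower-case the path (query left intact)
    if not path.endswith("/"):
        path += "/"                   # enforce the trailing slash
    return urlunsplit(("https", PREFERRED_HOST, path, parts.query, ""))

assert canonical_form("http://website.com/Page") == "https://www.website.com/page/"
assert canonical_form("https://www.website.com/page/") == "https://www.website.com/page/"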
Avoid internal redirects
Try updating the site’s internal links so they don’t trigger internal redirects. Even though search engines can follow internal redirects, these are not recommended because they add additional latency to page loading times and could also have a negative impact on search engine crawl time.
Don’t forget your image files
If the site’s images have moved to a new location, Google recommends redirecting the old image URLs to the new image URLs to help Google discover and index the new images quicker. If it’s not easy to redirect all images, aim to redirect at least those image URLs that have accrued backlinks.
Phase 3: Pre-launch testing
The earlier you can start testing, the better. Certain things need to be fully implemented to be tested, but others don’t. For example, user journey issues could be identified from as early as the prototypes or wireframes design. Content-related issues between the old and new site or content inconsistencies (e.g. between the desktop and mobile site) could also be identified at an early stage. But the more technical components should only be tested once fully implemented — things like redirects, canonical tags, or XML sitemaps. The earlier issues get identified, the more likely it is that they’ll be addressed before launching the new site. Identifying certain types of issues at a later stage isn’t cost effective, would require more resources, and cause significant delays. Poor testing and not allowing the time required to thoroughly test all building blocks that can affect SEO and UX performance can have disastrous consequences soon after the new site has gone live.
Making sure search engines cannot access the staging/test site
Before making the new site available on a staging/testing environment, take some precautions that search engines do not index it. There are a few different ways to do this, each with different pros and cons.
Site available to specific IPs (most recommended)
Making the test site available only to specific (whitelisted) IP addresses is a very effective way to prevent search engines from crawling it. Anyone trying to access the test site’s URL won’t be able to see any content unless their IP has been whitelisted. The main advantage is that whitelisted users could easily access and crawl the site without any issues. The only downside is that third-party web-based tools (such as Google’s tools) cannot be used because of the IP restrictions.
Password protection
Password protecting the staging/test site is another way to keep search engine crawlers away, but this solution has two main downsides. Depending on the implementation, it may not be possible to crawl and test a password-protected website if the crawler application doesn’t make it past the login screen. The other downside: password-protected websites that use forms for authentication can be crawled using third-party applications, but there is a risk of causing severe and unexpected issues. This is because the crawler clicks on every link on a page (when you’re logged in) and could easily end up clicking on links that create or remove pages, install/uninstall plugins, etc.
Robots.txt blocking
Adding the following lines of code to the test site’s robots.txt file will prevent search engines from crawling the test site’s pages.
User-agent: *
Disallow: /
One downside of this method is that even though the content that appears on the test server won’t get indexed, the disallowed URLs may appear on Google’s search results. Another downside is that if the above robots.txt file moves into the live site, it will cause severe de-indexing issues. This is something I’ve encountered numerous times and for this reason I wouldn’t recommend using this method to block search engines.
User journey review
If the site has been redesigned or restructured, chances are that the user journeys will be affected to some extent. Reviewing the user journeys as early as possible and well before the new site launches is difficult due to the lack of user data. However, an experienced UX professional will be able to flag any concerns that could have a negative impact on the site’s conversion rate. Because A/B testing at this stage is hardly ever possible, it might be worth carrying out some user testing and try to get some feedback from real users. Unfortunately, user experience issues can be some of the harder ones to address because they may require sitewide changes that take a lot of time and effort.
On full site overhauls, not all UX decisions can always be backed up by data and many decisions will have to be based on best practice, past experience, and “gut feeling,” hence getting UX/CRO experts involved as early as possible could pay dividends later.
Site architecture review
A site migration is often a great opportunity to improve the site architecture. In other words, you have a great chance to reorganize your keyword targeted content and maximize its search traffic potential. Carrying out extensive keyword research will help identify the best possible category and subcategory pages so that users and search engines can get to any page on the site within a few clicks — the fewer the better, so you don’t end up with a very deep taxonomy.
Identifying new keywords with decent traffic potential and mapping them into new landing pages can make a big difference to the site’s organic traffic levels. On the other hand, enhancing the site architecture needs to be done thoughtfully. It could cause problems if, say, important pages move deeper into the new site architecture or there are too many similar pages optimized for the same keywords. Some of the most successful site migrations are the ones that allocate significant resources to enhance the site architecture.
Meta data & copy review
Make sure that the site’s page titles, meta descriptions, headings, and copy have been transferred from the old to the new site without issues. If you’ve created any new pages, make sure these are optimized and don’t target keywords that have already been targeted by other pages. If you’re re-platforming, be aware that the new platform may have different default values when new pages are being created. Launching the new site without properly optimized page titles or any kind of missing copy will have an immediate negative impact on your site’s rankings and traffic. Do not forget to review whether any user-generated content (i.e. user reviews, comments) has also been uploaded.
Internal linking review
Internal links are the backbone of a website. No matter how well optimized and structured the site’s copy is, it won’t be sufficient to succeed unless it’s supported by a flawless internal linking scheme. Internal links must be reviewed throughout the entire site, including links found in:
Main & secondary navigation
Header & footer links
Body content links
Pagination links
Horizontal links (related articles, similar products, etc)
Vertical links (e.g. breadcrumb navigation)
Cross-site links (e.g. links across international sites)
Technical checks
A series of technical checks must be carried out to make sure the new site’s technical setup is sound and to avoid encountering major technical glitches after the new site has gone live.
Robots.txt file review
Prepare the new site’s robots.txt file on the staging environment. This way you can test it for errors or omissions and avoid experiencing search engine crawl issues when the new site goes live. A classic mistake in site migrations is when the robots.txt file prevents search engine access using the following directive:
Disallow: /
If this gets accidentally carried over into the live site (and it often does), it will prevent search engines from crawling the site. And when search engines cannot crawl an indexed page, the keywords associated with the page will get demoted in the search results and eventually the page will get de-indexed.
But if the robots.txt file on staging is populated with the new site’s robots.txt directives, this mishap could be avoided.
When preparing the new site’s robots.txt file, make sure that:
It doesn’t block search engine access to pages that are intended to get indexed.
It doesn’t block any JavaScript or CSS resources search engines require to render page content.
The legacy site’s robots.txt file content has been reviewed and carried over if necessary.
It references the new XML sitemap(s) rather than any legacy ones that no longer exist (see the example after this list).
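For illustration, a minimal staging-ready robots.txt covering the above might look like the following; the disallowed paths and sitemap URL are placeholders, not recommendations for any particular site:

User-agent: *
Disallow: /checkout/
Disallow: /basket/

Sitemap: https://www.example.com/sitemap.xml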
Canonical tags review
Review the site’s canonical tags. Look for pages that either do not have a canonical tag or have a canonical tag pointing to another URL, and question whether this is intended. Don’t forget to crawl the canonical tags to find out whether they return a 200 server response. If they don’t, you will need to update them to eliminate any 3xx, 4xx, or 5xx server responses. You should also look for pages that have a canonical tag pointing to another URL combined with a noindex directive, because these two are conflicting signals and you’ll need to eliminate one of them.
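To illustrate, here is a minimal sketch of such a check, assuming Python with the requests and beautifulsoup4 packages installed; the staging URL is a hypothetical placeholder and, in practice, the list would come from a full crawl:

import requests
from bs4 import BeautifulSoup

# Hypothetical list of staging URLs; in practice this comes from a site crawl.
pages_to_check = ["https://staging.example.com/category/widgets"]

for page in pages_to_check:
    html = requests.get(page, timeout=10).text
    tag = BeautifulSoup(html, "html.parser").find("link", rel="canonical")
    if tag is None or not tag.get("href"):
        print(f"{page}: missing canonical tag")
        continue
    canonical = tag["href"]
    # Don't follow redirects, so 3xx responses are reported rather than resolved.
    status = requests.get(canonical, allow_redirects=False, timeout=10).status_code
    if status != 200:
        print(f"{page}: canonical {canonical} returned {status}")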
Meta robots review
Once you’ve crawled the staging site, look for pages with the meta robots properties set to “noindex” or “nofollow.” If this is the case, review each one of them to make sure this is intentional and remove the “noindex” or “nofollow” directive if it isn’t.
XML sitemaps review
Prepare two different types of sitemaps: one that contains all the new site’s indexable pages, and another that includes all the old site’s indexable pages. The former will help make Google aware of the new site’s indexable URLs. The latter will help Google become aware of the redirects that are in place and the fact that some of the indexed URLs have moved to new locations, so that it can discover them and update search results quicker.
You should check each XML sitemap to make sure that:
It validates without issues
It is encoded as UTF-8
It does not contain more than 50,000 rows
Its size does not exceed 50 MB when uncompressed
If there are more than 50K rows or the file size exceeds 50MB, you must break the sitemap down into smaller ones. This prevents the server from becoming overloaded if Google requests the sitemap too frequently.
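If you do need to split a sitemap, the standard approach is a sitemap index file that references the smaller sitemaps. A minimal example following the sitemaps.org protocol (the URLs are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-products-1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-products-2.xml</loc>
  </sitemap>
</sitemapindex>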
In addition, you must crawl each XML sitemap to make sure it only includes indexable URLs. Any non-indexable URLs should be excluded from the XML sitemaps, such as:
3xx, 4xx, and 5xx pages (e.g. redirected, not found pages, bad requests, etc)
Soft 404s. These are pages with no content that return a 200 server response, instead of a 404.
Canonicalized pages (apart from self-referring canonical URLs)
Pages with a meta robots noindex directive
<!DOCTYPE html>
<html>
<head>
<meta name="robots" content="noindex" />
(…)
</head>
<body>(…)</body>
</html>
Pages with a noindex X-Robots-Tag in the HTTP header
HTTP/1.1 200 OK
Date: Tue, 10 Nov 2017 17:12:43 GMT
(…)
X-Robots-Tag: noindex
(…)
Pages blocked from the robots.txt file
Building clean XML sitemaps can help you monitor the true indexing levels of the new site once it goes live; without them, it will be very difficult to spot any indexing issues.
Pro tip: Download and open each XML sitemap in Excel to get a detailed overview of any additional attributes, such as hreflang or image attributes.
HTML sitemap review
Depending on the size and type of site that is being migrated, having an HTML sitemap can in certain cases be beneficial. An HTML sitemap that consists of URLs that aren’t linked from the site’s main navigation can significantly boost page discovery and indexing. However, avoid generating an HTML sitemap that includes too many URLs. If you do need to include thousands of URLs, consider building a segmented HTML sitemap.
The number of nested sitemaps as well as the maximum number of URLs you should include in each sitemap depends on the site’s authority. The more authoritative a website, the higher the number of nested sitemaps and URLs it could get away with.
For example, the NYTimes.com HTML sitemap consists of three levels, where each one includes over 1,000 URLs per sitemap. These nested HTML sitemaps aid search engine crawlers in discovering articles published since 1851 that otherwise would be difficult to discover and index, as not all of them would have been internally linked.
The NYTimes HTML sitemap (level 1)
The NYTimes HTML sitemap (level 2)
Structured data review
Errors in the structured data markup need to be identified early so there’s time to fix them before the new site goes live. Ideally, you should test every single page template (rather than every single page) using Google’s Structured Data Testing tool.
Be sure to check the markup on both the desktop and mobile pages, especially if the mobile website isn’t responsive.
The tool only reports existing errors, not omissions. For example, if your product page template does not include the Product structured data schema, the tool won’t report any errors. So, in addition to checking for errors, you should also make sure that each page template includes the appropriate structured data markup for its content type (see the example below).
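For instance, a product page template would typically be expected to carry Product markup along the following lines. This is a minimal JSON-LD sketch with placeholder values, not a complete implementation:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "image": "https://www.example.com/images/widget.jpg",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>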
Please refer to Google’s documentation for the most up to date details on the structured data implementation and supported content types.
JavaScript crawling review
You must test every single page template of the new site to make sure Google will be able to crawl content that requires JavaScript parsing. If you’re able to use Google’s Fetch and Render tool on your staging site, you should definitely do so. Otherwise, carry out some manual tests, following Justin Briggs’ advice.
As Bartosz Góralewicz’s tests proved, even if Google is able to crawl and index JavaScript-generated content, it does not mean that it is able to crawl JavaScript content across all major JavaScript frameworks. Bartosz’s findings showed that some JavaScript frameworks are not SEO-friendly, with AngularJS currently being the most problematic of all.
Bartosz also found that other search engines (such as Bing, Yandex, and Baidu) really struggle with indexing JavaScript-generated content, which is important to know if your site’s traffic relies on any of these search engines.
Hopefully, this is something that will improve over time, but with the increasing popularity of JavaScript frameworks in web development, this must be high up on your checklist.
Finally, you should check whether any external resources are being blocked. Unfortunately, this isn’t something you can control 100% because many resources (such as JavaScript and CSS files) are hosted by third-party websites which may be blocking them via their own robots.txt files!
Again, the Fetch and Render tool can help diagnose this type of issue that, if left unresolved, could have a significant negative impact.
Mobile site SEO review
Assets blocking review
First, make sure that the robots.txt file isn’t accidentally blocking any JavaScript, CSS, or image files that are essential for the mobile site’s content to render. This could have a negative impact on how search engines render and index the mobile site’s page content, which in turn could negatively affect the mobile site’s search visibility and performance.
Mobile-first index review
In order to avoid any issues associated with Google’s mobile-first index, thoroughly review the mobile website and make sure there aren’t any inconsistencies between the desktop and mobile sites in the following areas:
Page titles
Meta descriptions
Headings
Copy
Canonical tags
Meta robots attributes (i.e. noindex, nofollow)
Internal links
Structured data
A responsive website should serve the same content, links, and markup across devices, and the above SEO attributes should be identical across the desktop and mobile websites.
In addition to the above, you must carry out a few further technical checks depending on the mobile site’s set up.
Responsive site review
A responsive website must serve all devices the same HTML code, which is adjusted (via the use of CSS) depending on the screen size.
Googlebot is able to automatically detect this mobile setup as long as it’s allowed to crawl the page and its assets. It’s therefore extremely important to make sure that Googlebot can access all essential assets, such as images, JavaScript, and CSS files.
To signal to browsers that a page is responsive, a meta viewport tag should be in place within the <head> of each HTML page.
<meta name="viewport" content="width=device-width, initial-scale=1.0">
If the meta viewport tag is missing, font sizes may appear in an inconsistent manner, which may cause Google to treat the page as not mobile-friendly.
Separate mobile URLs review
If the mobile website uses separate URLs from desktop, make sure that:
Each desktop page has a rel="alternate" tag pointing to the corresponding mobile URL (see the example after this list).
Each mobile page has a rel="canonical" tag pointing to the corresponding desktop URL.
When desktop URLs are requested on mobile devices, they’re redirected to the respective mobile URL.
Redirects work across all mobile devices, including Android, iPhone, and Windows phones.
There aren’t any irrelevant cross-links between the desktop and mobile pages. This means that internal links found on a desktop page should only link to other desktop pages, and those found on a mobile page should only link to other mobile pages.
The mobile URLs return a 200 server response.
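As a reference, the standard annotation pair looks like the following (the example.com URLs are placeholders):

<!-- On the desktop page: -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="https://m.example.com/page">

<!-- On the corresponding mobile page: -->
<link rel="canonical" href="https://www.example.com/page">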
Dynamic serving review
Dynamic serving websites serve different code to each device, but on the same URL.
On dynamic serving websites, review whether the Vary HTTP header has been correctly set up. This is necessary because dynamic serving websites alter the HTML for mobile user agents, and the Vary HTTP header helps Googlebot discover the mobile content.
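The header itself is a single line in the HTTP response:

Vary: User-Agent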
Mobile-friendliness review
Regardless of the mobile site set-up (responsive, separate URLs or dynamic serving), review the pages using a mobile user-agent and make sure that:
The viewport has been set correctly. Using a fixed width viewport across devices will cause mobile usability issues.
The font size isn’t too small.
Touch elements (i.e. buttons, links) aren’t too close.
There aren’t any intrusive interstitials, such as Ads, mailing list sign-up forms, App Download pop-ups etc. To avoid any issues, you should use either use a small HTML or image banner.
Mobile pages aren’t too slow to load (see next section).
Google’s mobile-friendly test tool can help diagnose most of the above issues:
Google’s mobile-friendly test tool in action
AMP site review
If there is an AMP website and a desktop version of the site is available, make sure that:
Each non-AMP page (i.e. desktop, mobile) has a rel="amphtml" tag pointing to the corresponding AMP URL (see the example after this list).
Each AMP page has a rel="canonical" tag pointing to the corresponding desktop page.
Any AMP page that does not have a corresponding desktop URL has a self-referring canonical tag.
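For reference, the annotations look like this (the URLs are placeholders):

<!-- On the non-AMP (desktop or mobile) page: -->
<link rel="amphtml" href="https://www.example.com/page/amp/">

<!-- On the AMP page: -->
<link rel="canonical" href="https://www.example.com/page/">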
You should also make sure that the AMPs are valid. This can be tested using Google’s AMP Test Tool.
Mixed content errors
With Google pushing hard for sites to be fully secure and Chrome becoming the first browser to flag HTTP pages as not secure, aim to launch the new site on HTTPS, making sure all resources such as images, CSS, and JavaScript files are requested over secure HTTPS connections. This is essential in order to avoid mixed content issues.
Mixed content occurs when a page that’s loaded over a secure HTTPS connection requests assets over insecure HTTP connections. Most browsers either block dangerous HTTP requests or just display warnings that hinder the user experience.
Mixed content errors in Chrome’s JavaScript Console
There are many ways to identify mixed content errors, including the use of crawler applications, Google’s Lighthouse, etc.
Image assets review
Google crawls images less frequently than HTML pages. If migrating a site’s images from one location to another (e.g. from your domain to a CDN), there are ways to help Google discover the migrated images quicker. Building an image XML sitemap will help, but you also need to make sure that Googlebot can reach the site’s images when crawling the site. The tricky part with image indexing is that both the web page on which an image appears and the image file itself have to get indexed.
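A minimal image XML sitemap entry, using Google’s image sitemap extension, could look like this (the URLs are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://www.example.com/product-page</loc>
    <image:image>
      <image:loc>https://cdn.example.com/images/product.jpg</image:loc>
    </image:image>
  </url>
</urlset>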
Site performance review
Last but not least, measure the old site’s page loading times and see how these compare with the new site’s when this becomes available on staging. At this stage, focus on the network-independent aspects of performance such as the use of external resources (images, JavaScript, and CSS), the HTML code, and the web server’s configuration. More information about how to do this is available further down.
Analytics tracking review
Make sure that analytics tracking is properly set up. This review should ideally be carried out by specialist analytics consultants who will look beyond the implementation of the tracking code. Make sure that Goals and Events are properly set up, e-commerce tracking is implemented, enhanced e-commerce tracking is enabled, etc. There’s nothing more frustrating than having no analytics data after your new site is launched.
Redirects testing
Testing the redirects before the new site goes live is critical and can save you a lot of trouble later. There are many ways to check the redirects on a staging/test server, but the bottom line is that you should not launch the new website without having tested the redirects.
Once the redirects become available on the staging/testing environment, crawl the entire list of redirects and check for the following issues:
Redirect loops (a URL that infinitely redirects to itself)
Redirects with a 4xx or 5xx server response.
Redirect chains (a URL that redirects to another URL, which in turn redirects to another URL, etc).
Canonical URLs that return a 4xx or 5xx server response.
Canonical loops (page A has a canonical pointing to page B, which has a canonical pointing to page A).
Canonical chains (a canonical that points to another page that has a canonical pointing to another page, etc).
Protocol/host inconsistencies e.g. URLs are redirected to both HTTP and HTTPS URLs or www and non-www URLs.
Leading/trailing whitespace characters. Use trim() in Excel to eliminate them.
Invalid characters in URLs.
Pro tip: Make sure each of the old site’s URLs redirects to the correct URL on the new site. At this stage, because the new site doesn’t exist yet, you can only test whether the redirect destination URL is the intended one, but it’s definitely worth it. The fact that a URL redirects does not mean it redirects to the right page.
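As a rough illustration, a sketch along these lines (assuming Python with the requests package; the mapping is a hypothetical example) can flag chains, loops, and wrong destinations in one pass:

import requests

# Hypothetical mapping of legacy URLs to their intended destinations.
redirect_map = {
    "https://staging.example.com/old-page": "https://staging.example.com/new-page",
}

for old_url, expected in redirect_map.items():
    try:
        response = requests.get(old_url, allow_redirects=True, timeout=10)
    except requests.exceptions.TooManyRedirects:
        print(f"{old_url}: redirect loop")
        continue
    hops = [r.url for r in response.history]  # every intermediate 3xx response
    if len(hops) > 1:
        print(f"{old_url}: redirect chain of {len(hops)} hops: {hops}")
    if response.status_code != 200:
        print(f"{old_url}: final destination returned {response.status_code}")
    elif response.url != expected:
        print(f"{old_url}: landed on {response.url}, expected {expected}")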
Phase 4: Launch day activities
When the site is down...
While the new site is replacing the old one, chances are that the live site is going to be temporarily down. The downtime should be kept to a minimum, but while this happens the web server should respond to any URL request with a 503 (service unavailable) server response. This will tell search engine crawlers that the site is temporarily down for maintenance so they come back to crawl the site later.
If the site is down for too long without serving a 503 server response and search engines crawl the website, organic search visibility will be negatively affected and recovery won’t be instant once the site is back up. In addition, while the website is temporarily down it should also serve an informative holding page notifying users that the website is temporarily down for maintenance.
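For reference, the maintenance response could look like the following, where the Retry-After value (in seconds; one hour here, purely illustrative) hints to crawlers when to come back:

HTTP/1.1 503 Service Unavailable
Retry-After: 3600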
Technical spot checks
As soon as the new site has gone live, take a quick look at:
The robots.txt file to make sure search engines are not blocked from crawling
Top pages redirects (e.g. do requests for the old site’s top pages redirect correctly?)
Top pages canonical tags
Top pages server responses
Noindex/nofollow directives, in case they are unintentional
The spot checks need to be carried out across both the mobile and desktop sites, unless the site is fully responsive.
Search Console actions
The following activities should take place as soon as the new website has gone live:
Test & upload the XML sitemap(s)
Set the Preferred location of the domain (www or non-www)
Set the International targeting (if applicable)
Configure the URL parameters to tackle any potential duplicate content issues early on.
Upload the Disavow file (if applicable)
Use the Change of Address tool (if switching domains)
Pro tip: Use the “Fetch as Google” feature for each different type of page (e.g. the homepage, a category, a subcategory, a product page) to make sure Googlebot can render the pages without any issues. Review any reported blocked resources and do not forget to use Fetch and Render for desktop and mobile, especially if the mobile website isn’t responsive.
Blocked resources prevent Googlebot from rendering the content of the page
Phase 5: Post-launch review
Once the new site has gone live, a new round of in-depth checks should be carried out. These are largely the same ones as those mentioned in the “Phase 3: Pre-launch Testing” section.
However, the main difference during this phase is that you now have access to a lot more data and tools. Don’t underestimate the amount of effort you’ll need to put in during this phase, because any issues you encounter now directly impact the site’s performance in the SERPs. On the other hand, the sooner an issue gets identified, the quicker it will get resolved.
In addition to repeating the same testing tasks that were outlined in the Phase 3 section, in certain areas things can be tested more thoroughly, accurately, and in greater detail. You can now take full advantage of the Search Console features.
Check crawl stats and server logs
Keep an eye on the crawl stats available in the Search Console to make sure Google is crawling the new site’s pages. In general, when Googlebot comes across new pages it tends to increase the average number of pages it crawls per day. But if you can’t spot a spike around the launch date, something may be negatively affecting Googlebot’s ability to crawl the site.
Crawl stats on Google’s Search Console
Reviewing the server log files is by far the most effective way to spot any crawl issues or inefficiencies. Tools like Botify and On Crawl can be extremely useful because they combine crawls with server log data and can highlight pages search engines do not crawl, pages that are not linked to internally (orphan pages), low-value pages that are heavily internally linked, and a lot more.
Review crawl errors regularly
Keep an eye on the reported crawl errors, ideally daily during the first few weeks. Downloading these errors daily, crawling the reported URLs, and taking the necessary actions (i.e. implement additional 301 redirects, fix soft 404 errors) will aid a quicker recovery. It’s highly unlikely you will need to redirect every single 404 that is reported, but you should add redirects for the most important ones.
Pro tip: In Google Analytics you can easily find out which are the most commonly requested 404 URLs and fix these first!
Other useful Search Console features
Other Search Console features worth checking include the Blocked Resources, Structured Data errors, Mobile Usability errors, HTML Improvements, and International Targeting (to check for hreflang reported errors).
Pro tip: Keep a close eye on the URL parameters in case they’re causing duplicate content issues. If this is the case, consider taking some urgent remedial action.
Measuring site speed
Once the new site is live, measure site speed to make sure the site’s pages load fast enough on both desktop and mobile devices. With site speed being a ranking signal across devices, and because slow pages lose users and customers, comparing the new site’s speed with the old site’s is extremely important. If the new site’s page loading times appear to be longer, take some immediate action; otherwise your site’s traffic and conversions will almost certainly take a hit.
Evaluating speed using Google’s tools
Two tools that can help with this are Google’s Lighthouse and Pagespeed Insights.
The PageSpeed Insights tool measures page performance on both mobile and desktop devices and shows real-world page speed data based on user data Google collects from Chrome. It also checks whether a page has applied common performance best practices and provides an optimization score. The tool includes the following main categories:
Speed score: Categorizes a page as Fast, Average, or Slow using two metrics: The First Contentful Paint (FCP) and DOM Content Loaded (DCL). A page is considered fast if both metrics are in the top one-third of their category.
Optimization score: Categorizes a page as Good, Medium, or Low based on performance headroom.
Page load distributions: Categorizes a page as Fast (fastest third), Average (middle third), or Slow (bottom third) by comparing against all FCP and DCL events in the Chrome User Experience Report.
Page stats: Can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
Optimization suggestions: A list of best practices that could be applied to a page.
Google’s PageSpeed Insights in action
Google’s Lighthouse is very handy for mobile performance, accessibility, and Progressive Web Apps audits. It provides various useful metrics that can be used to measure page performance on mobile devices, such as:
First Meaningful Paint, which measures when the primary content of a page is visible.
Time to Interactive, which is the point at which the page is ready for a user to interact with it.
Speed Index, which shows how quickly a page is visibly populated.
Both tools provide recommendations to help improve any reported site performance issues.
Google’s Lighthouse in action
You can also use this Google tool to get a rough estimate on the percentage of users you may be losing from your mobile site’s pages due to slow page loading times.
The same tool also provides an industry comparison so you get an idea of how far you are from the top performing sites in your industry.
Measuring speed from real users
Once the site has gone live, you can start evaluating site speed based on the users visiting your site. If you have Google Analytics, you can easily compare the new site’s average load time with the previous one.
In addition, if you have access to a Real User Monitoring tool such as Pingdom, you can evaluate site speed based on the users visiting your website. The below map illustrates how different visitors experience very different loading times depending on their geographic location. In the below example, the page loading times appear to be satisfactory to visitors from the UK, US, and Germany, but to users residing in other countries they are much higher.
Phase 6: Measuring site migration performance
When to measure
Has the site migration been successful? This is the million-dollar question everyone involved would like to know the answer to as soon as the new site goes live. In reality, the longer you wait the clearer the answer becomes, as visibility during the first few weeks or even months can be very volatile depending on the size and authority of your site. For smaller sites, a 4–6 week period should be sufficient before comparing the new site’s visibility with the old site’s. For large websites you may have to wait for at least 2–3 months before measuring.
In addition, if the new site is significantly different from the previous one, users will need some time to get used to the new look and feel and to acclimatize to the new taxonomy, user journeys, etc. Such changes initially have a significant negative impact on the site’s conversion rate, which should improve after a few weeks as returning visitors get used to the new site. In any case, drawing data-driven conclusions about the new site’s UX too early can be risky.
But these are just general rules of thumb and need to be taken into consideration along with other factors. For instance, if a few days or weeks after the new site launch significant additional changes were made (e.g. to address a technical issue), the migration’s evaluation should be pushed further back.
How to measure
Performance measurement is very important and even though business stakeholders would only be interested to hear about the revenue and traffic impact, there are a whole lot of other metrics you should pay attention to. For example, there can be several reasons for revenue going down following a site migration, including seasonal trends, lower brand interest, UX issues that have significantly lowered the site’s conversion rate, poor mobile performance, poor page loading times, etc. So, in addition to the organic traffic and revenue figures, also pay attention to the following:
Desktop & mobile visibility (from SearchMetrics, SEMrush, Sistrix)
Desktop and mobile rankings (from any reliable rank tracking tool)
User engagement (bounce rate, average time on page)
Sessions per page type (i.e. are the category pages driving as many sessions as before?)
Conversion rate per page type (i.e. are the product pages converting the same way as before?)
Conversion rate by device (i.e. has the desktop/mobile conversion rate increased/decreased since launching the new site?)
Reviewing the below could also be very handy, especially from a technical troubleshooting perspective:
Number of indexed pages (Search Console)
Submitted vs indexed pages in XML sitemaps (Search Console)
Pages receiving at least one visit (analytics)
Site speed (PageSpeed Insights, Lighthouse, Google Analytics)
It’s only after you’ve looked into all the above areas that you could safely conclude whether your migration has been successful or not.
Good luck and if you need any consultation or assistance with your site migration, please get in touch!
Appendix: Useful tools
Crawlers
Screaming Frog: The SEO Swiss army knife, ideal for crawling small- and medium-sized websites.
Sitebulb: Very intuitive crawler application with a neat user interface, nicely organized reports, and many useful data visualizations.
Deep Crawl: Cloud-based crawler with the ability to crawl staging sites, compare different crawls, and cope well with large websites.
Botify: Another powerful cloud-based crawler supported by exceptional server log file analysis capabilities that can be very insightful in terms of understanding how search engines crawl the site.
On-Crawl: Crawler and server log analyzer for enterprise SEO audits with many handy features to identify crawl budget, content quality, and performance issues.
Handy Chrome add-ons
Web developer: A collection of developer tools including easy ways to enable/disable JavaScript, CSS, images, etc.
User agent switcher: Switch between different user agents including Googlebot, mobile, and other agents.
Ayima Redirect Path: A great header and redirect checker.
SEO Meta in 1 click: An on-page meta attributes, headers, and links inspector.
Scraper: An easy way to scrape website data into a spreadsheet.
Site monitoring tools
Uptime Robot: Free website uptime monitoring.
Robotto: Free robots.txt monitoring tool.
Pingdom tools: Monitors site uptime and page speed from real users (RUM service)
SEO Radar: Monitors all critical SEO elements and fires alerts when these change.
Site performance tools
PageSpeed Insights: Measures page performance for mobile and desktop devices. It checks to see if a page has applied common performance best practices and provides a score, which ranges from 0 to 100 points.
Lighthouse: Handy Chrome extension for performance, accessibility, Progressive Web Apps audits. Can also be run from the command line, or as a Node module.
Webpagetest.org: Very detailed page tests from various locations, connections, and devices, including detailed waterfall charts.
Structured data testing tools
Google’s structured data testing tool & Google’s structured data testing tool Chrome extension
Bing’s markup validator
Yandex structured data testing tool
Google’s rich results testing tool
Mobile testing tools
Google’s mobile-friendly testing tool
Google’s AMP testing tool
AMP validator tool
Backlink data sources
Ahrefs
Majestic SEO
tracisimpson · 7 years ago
Text
The Website Migration Guide: SEO Strategy & Process
Posted by Modestos
What is a site migration?
A site migration is a term broadly used by SEO professionals to describe any event whereby a website undergoes substantial changes in areas that can significantly affect search engine visibility — typically substantial changes to the site structure, content, coding, site performance, or UX.
Google’s documentation on site migrations doesn’t cover them in great depth and downplays the fact that so often they result in significant traffic and revenue loss, which can last from a few weeks to several months — depending on the extent search engine ranking signals have been affected, as well as how long it may take the affected business to rollout a successful recovery plan.
Quick access links
Site migration examples
Site migration types
Common site migration pitfalls
Site migration process
1. Scope & planning
2. Pre-launch preparation
3. Pre-launch testing
4. Launch day actions
5. Post-launch testing
6. Performance review
Appendix: Useful tools
Site migration examples
The following section discusses how both successful and unsuccessful site migrations look and explains why it is 100% possible to come out of a site migration without suffering significant losses.
Debunking the “expected traffic drop” myth
Anyone who has been involved with a site migration has probably heard the widespread theory that it will result in de facto traffic and revenue loss. Even though this assertion holds some truth for some very specific cases (i.e. moving from an established domain to a brand new one) it shouldn’t be treated as gospel. It is entirely possible to migrate without losing any traffic or revenue; you can even enjoy significant growth right after launching a revamped website. However, this can only be achieved if every single step has been well-planned and executed.
Examples of unsuccessful site migrations
The following graph illustrates a big UK retailer’s botched site migration where the website lost 35% of its visibility two weeks after switching from HTTP to HTTPS. It took them about six months to fully recover, which must have had a significant impact on revenue from organic search. This is a typical example of a poor site migration, possibly caused by poor planning or implementation.
Example of a poor site migration — recovery took 6 months!
But recovery may not always be possible. The below visibility graph is from another big UK retailer, where the HTTP to HTTPS switchover resulted in a permanent 20% visibility loss.
Another example of a poor site migration — no signs of recovery 6 months on!
In fact, it is entirely possible to migrate from HTTP to HTTPS without losing that much traffic for such a long period, aside from the first few weeks, where there is high volatility as Google discovers the new URLs and updates search results.
Examples of successful site migrations
What does a successful site migration look like? This largely depends on the site migration type, the objectives, and the KPIs (more details later). But in most cases, a successful site migration shows at least one of the following characteristics:
Minimal visibility loss during the first few weeks (short-term goal)
Visibility growth thereafter — depending on the type of migration (long-term goal)
The following visibility report is taken from an HTTP to HTTPS site migration, which was also accompanied by significant improvements to the site’s page loading times.
The following visibility report is from a complete site overhaul, which I was fortunate to be involved with several months in advance and supported during the strategy, planning, and testing phases, all of which were equally important.
As commonly occurs on site migration projects, the launch date had to be pushed back a few times due to the risks of launching the new site prematurely and before major technical obstacles were fully addressed. But as you can see on the below visibility graph, the wait was well worth it. Organic visibility not only didn’t drop (as most would normally expect) but in fact started growing from the first week.
Visibility growth one month after the migration reached 60%, whilst organic traffic growth two months post-launch exceeded 80%.
Example of a very successful site migration — instant growth following new site launch!
This was a rather complex migration as the new website was re-designed and built from scratch on a new platform with an improved site taxonomy that included new landing pages, an updated URL structure, lots of redirects to preserve link equity, plus a switchover from HTTP to HTTPS.
In general, introducing too many changes at the same time can be tricky because if something goes wrong, you’ll struggle to figure out what exactly is at fault. But at the same time, leaving major changes for a later time isn’t ideal either as it will require more resources. If you know what you’re doing, making multiple positive changes at once can be very cost-effective.
Before getting into the nitty-gritty of how you can turn a complex site migration project into a success, it’s important to run through the main site migration types as well as explain the main reason so many site migrations fail.
Site migration types
There are many site migration types. It all depends on the nature of the changes that take place to the legacy website.
Google’s documentation mostly covers migrations with site location changes, which are categorised as follows:
Site moves with URL changes
Site moves without URL changes
Site move migrations
These typically occur when a site moves to a different URL due to any of the below:
Protocol change
A classic example is when migrating from HTTP to HTTPS.
Subdomain or subfolder change
Very common in international SEO where a business decides to move one or more ccTLDs into subdomains or subfolders. Another common example is where a mobile site that sits on a separate subdomain or subfolder becomes responsive and both desktop and mobile URLs are unified.
Domain name change
Commonly occurs when a business is rebranding and must move from one domain to another.
Top-level domain change
This is common when a business decides to launch international websites and needs to move from a ccTLD (country code top-level domain) to a gTLD (generic top-level domain) or vice versa, e.g. moving from .co.uk to .com, or moving from .com to .co.uk and so on.
Site structure changes
These are changes to the site architecture that usually affect the site’s internal linking and URL structure.
Other types of migrations
There are other types of migration which are triggered by changes to the site’s content, structure, design, or platform.
Replatforming
This is the case when a website is moved from one platform/CMS to another, e.g. migrating from WordPress to Magento or just upgrading to the latest platform version. Replatforming can, in some cases, also result in design and URL changes because of technical limitations that often occur when changing platforms. This is why replatforming migrations rarely result in a website that looks exactly the same as the previous one.
Content migrations
Major content changes such as content rewrites, content consolidation, or content pruning can have a big impact on a site’s organic search visibility, depending on the scale. These changes can often affect the site’s taxonomy, navigation, and internal linking.
Mobile setup changes
With so many options now available for a site’s mobile setup, enabling app indexing, building an AMP site, or building a PWA website can also be considered a partial site migration, especially when an existing mobile site is being replaced by an app, AMP, or PWA.
Structural changes
These are often caused by major changes to the site’s taxonomy that impact the site navigation, internal linking, and user journeys.
Site redesigns
These can vary from major design changes in the look and feel to a complete website revamp that may also include significant media, code, and copy changes.
Hybrid migrations
In addition to the above, there are several hybrid migration types that can be combined in practically any way possible. The more changes that get introduced at the same time the higher the complexity and the risks. Even though making too many changes at the same time increases the risks of something going wrong, it can be more cost-effective from a resources perspective if the migration is very well-planned and executed.
Common site migration pitfalls
Even though every site migration is different there are a few common themes behind the most typical site migration disasters, with the biggest being the following:
Poor strategy
Some site migrations are doomed to failure way before the new site is launched. A strategy that is built upon unclear and unrealistic objectives is much less likely to bring success.
Establishing measurable objectives is essential in order to measure the impact of the migration post-launch. For most site migrations, the primary objective should be the retention of the site’s current traffic and revenue levels. In certain cases the bar could be raised higher, but in general anticipating or forecasting growth should be a secondary objective. This will help avoid creating unrealistic expectations.
Poor planning
Coming up with a detailed project plan as early as possible will help avoid delays along the way. Factor in additional time and resources to cope with any unforeseen circumstances that may arise. No matter how well thought out and detailed your plan is, it’s highly unlikely everything will go as expected. Be flexible with your plan and accept the fact that there will almost certainly be delays. Map out all dependencies and make all stakeholders aware of them.
Avoid planning to launch the site near your seasonal peaks, because if anything goes wrong you won’t have enough time to rectify the issues. For instance, retailers should avoid launching a site close to September/October to avoid putting the busy pre-Christmas period at risk. In this case, it would be much wiser launching during the quieter summer months.
Lack of resources
Before committing to a site migration project, estimate the time and effort required to make it a success. If your budget is limited, make a call as to whether it is worth going ahead with a migration that is likely to fail in meeting its established objectives and cause revenue loss.
As a rule of thumb, try to include a buffer of at least 20% more resource than you initially think the project will require. This additional buffer will later allow you to quickly address any issues as soon as they arise, without jeopardizing success. If your resources are too tight or you start cutting corners at this early stage, the site migration will be at risk.
Lack of SEO/UX consultation
When changes are taking place on a website, every single decision needs to be weighted from both a UX and SEO standpoint. For instance, removing great amounts of content or links to improve UX may damage the site’s ability to target business-critical keywords or result in crawling and indexing issues. In either case, such changes could damage the site’s organic search visibility. On the other hand, having too much text copy and few images may have a negative impact on user engagement and damage the site’s conversions.
To avoid risks, appoint experienced SEO and UX consultants so they can discuss the potential consequences of every single change with key business stakeholders who understand the business intricacies better than anyone else. The pros and cons of each option need to be weighed before making any decision.
Late involvement
Site migrations can span several months, require great planning and enough time for testing. Seeking professional support late is very risky because crucial steps may have been missed.
Lack of testing
In addition to a great strategy and thoughtful plan, dedicate some time and effort for thorough testing before launching the site. It’s much more preferable to delay the launch if testing has identified critical issues rather than rushing a sketchy implementation into production. It goes without saying that you should not launch a website if it hasn’t been tested by both expert SEO and UX teams.
Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.
Slow response to bug fixing
There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.
Underestimating scale
Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let's launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.
It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.
Site migration process
The site migration process can be split into six main essential phases. They are all equally important and skipping any of the below tasks could hinder the migration’s success to varying extents.
Phase 1: Scope & Planning
Work out the project scope
Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.
A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.
However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.
Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.
You should now be left with a prioritized list of activities which are expected to have a positive ROI, if implemented. These should then be communicated and discussed with all stakeholders, so you can set realistic targets, agree on the project scope, and set the right expectations from the outset.
Prepare the project plan
Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.
The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.
A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.
Phase 2: Pre-launch preparation
These include any activities that need to be carried out while the new site is still under development. By this point, the new site’s SEO requirements should have been captured already. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes well before the new site becomes available on a staging environment.
Wireframes review
Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.
Preparing the technical SEO specifications
Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.
The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.
Make sure to include specific requirements that cover at least the following areas:
URL structure
Meta data (including dynamically generated default values)
Structured data
Canonicals and meta robots directives
Copy & headings
Main & secondary navigation
Internal linking (in any form)
Pagination
XML sitemap(s)
HTML sitemap
Hreflang (if there are international sites)
Mobile setup (including the app, AMP, or PWA site)
Redirects
Custom 404 page
JavaScript, CSS, and image files
Page loading times (for desktop & mobile)
The specification should also include areas of the CMS functionality that allows users to:
Specify custom URLs and override default ones
Update page titles
Update meta descriptions
Update any h1–h6 headings
Add or amend the default canonical tag
Set the meta robots attributes to index/noindex/follow/nofollow
Add or edit the alt text of each image
Include Open Graph fields for description, URL, image, type, sitename
Include Twitter Card fields for card, URL, title, description, image
Bulk upload or amend redirects
Update the robots.txt file
It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).
Identifying priority pages
One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.
In order to do this, you need to:
Crawl the legacy site
Identify all indexable pages
Identify top performing pages
How to crawl the legacy site
Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links etc. Regardless of the crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:
Ignore robots.txt (in case any vital parts are accidentally blocked)
Follow internal “nofollow” links (so the crawler reaches more pages)
Crawl all subdomains (depending on scope)
Crawl outside start folder (depending on scope)
Change the user agent to Googlebot (desktop)
Change the user agent to Googlebot (smartphone)
Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.
How to identify the indexable pages
Once the crawl is complete, work on identifying the legacy site’s indexed pages. These are any HTML pages with the following characteristics:
Return a 200 server response
Either do not have a canonical tag or have a self-referring canonical URL
Do not have a meta robots noindex
Aren’t excluded from the robots.txt file
Are internally linked from other pages (non-orphan pages)
The indexable pages are the only pages that have the potential to drive traffic to the site and therefore need to be prioritized for the purposes of your site migration. These are the pages worth optimizing (if they will exist on the new site) or redirecting (if they won’t exist on the new site).
How to identify the top performing pages
Once you’ve identified all indexable pages, you may have to carry out more work, especially if the legacy site consists of a large number of pages and optimizing or redirecting all of them is impossible due to time, resource, or technical constraints.
If this is the case, you should identify the legacy site’s top performing pages. This will help with the prioritization of the pages to focus on during the later stages.
It’s recommended to prepare a spreadsheet that includes the below fields:
Legacy URL (include only the indexable ones from the crawl data)
Organic visits during the last 12 months (Analytics)
Revenue, conversions, and conversion rate during the last 12 months (Analytics)
Pageviews during the last 12 months (Analytics)
Number of clicks from the last 90 days (Search Console)
Top linked pages (Majestic SEO/Ahrefs)
With the above information in one place, it’s now much easier to identify your most important pages: the ones that generate organic visits, convert well, contribute to revenue, have a good number of referring domains linking to them, etc. These are the pages that you must focus on for a successful site migration.
The top performing pages should ideally also exist on the new site. If for any reason they don’t, they should be redirected to the most relevant page so that users requesting them do not land on 404 pages and the link equity they previously had remains on the site. If any of these pages cease to exist and aren’t properly redirected, your site’s rankings and traffic will be negatively affected.
Benchmarking
Once the launch of the new website is getting close, you should benchmark the legacy site’s performance. Benchmarking is essential, not only to compare the new site’s performance with the previous one but also to help diagnose which areas underperform on the new site and to quickly address them.
Keywords rank tracking
If you don’t track the site’s rankings frequently, you should do so just before the new site goes live. Otherwise, you will later struggle figuring out whether the migration has gone smoothly or where exactly things went wrong. Don’t leave this to the last minute in case something goes awry — a week in advance would be the ideal time.
Spend some time working out which keywords are most representative of the site’s organic search visibility and track them across desktop and mobile. Because monitoring thousands of head, mid-, and long-tail keyword combinations is usually unrealistic, the bare minimum you should monitor are keywords that are driving traffic to the site (keywords ranking on page one) and have decent search volume (head/mid-tail focus).
If you do get traffic from both brand and non-brand keywords, you should also decide which type of keywords to focus on more from a tracking POV. In general, non-brand keywords tend to be more competitive and volatile. For most sites it would make sense to focus mostly on these.
Don’t forget to track rankings across desktop and mobile. This will make it much easier to diagnose problems post-launch should there be performance issues on one device type. If you receive a high volume of traffic from more than one country, consider rank tracking keywords in other markets, too, because visibility and rankings can vary significantly from country to country.
Site performance
The new site’s page loading times can have a big impact on both traffic and sales. Several studies have shown that the longer a page takes to load, the higher the bounce rate. Unless the old site’s page loading times and site performance scores have been recorded, it will be very difficult to attribute any traffic or revenue loss to site performance related issues once the new site has gone live.
It’s recommended that you review all major page types using Google’s PageSpeed Insights and Lighthouse tools. You could use summary tables like the ones below to benchmark some of the most important performance metrics, which will be useful for comparisons once the new site goes live.
MOBILE

| Page type | Speed | FCP | DCL | Optimization | Optimization score |
| --- | --- | --- | --- | --- | --- |
| Homepage | Fast | 0.7s | 1.4s | Good | 81/100 |
| Category page | Slow | 1.8s | 5.1s | Medium | 78/100 |
| Subcategory page | Average | 0.9s | 2.4s | Medium | 69/100 |
| Product page | Slow | 1.9s | 5.5s | Good | 83/100 |

DESKTOP

| Page type | Speed | FCP | DCL | Optimization | Optimization score |
| --- | --- | --- | --- | --- | --- |
| Homepage | Good | 0.7s | 1.4s | Average | 81/100 |
| Category page | Fast | 0.6s | 1.2s | Medium | 78/100 |
| Subcategory page | Fast | 0.6s | 1.3s | Medium | 78/100 |
| Product page | Good | 0.8s | 1.3s | Good | 83/100 |
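If you’d rather collect benchmarks like these programmatically, the PageSpeed Insights API can be queried directly. The sketch below is illustrative only: the page labels and URLs are placeholders, and the response fields reflect the v5 API (which reports Lighthouse metrics and a 0–100 performance score rather than the exact Speed/Optimization categories shown above).

import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

# Placeholder URLs for each major page template
pages = {
    "Homepage": "https://www.website.com/",
    "Category page": "https://www.website.com/category/",
    "Product page": "https://www.website.com/category/product/",
}

for label, url in pages.items():
    for strategy in ("mobile", "desktop"):
        # An API key can be added via a "key" parameter for higher quotas
        data = requests.get(PSI_ENDPOINT, params={"url": url, "strategy": strategy}).json()
        lh = data["lighthouseResult"]
        fcp = lh["audits"]["first-contentful-paint"]["displayValue"]
        score = round(lh["categories"]["performance"]["score"] * 100)
        print(f"{label} ({strategy}): FCP {fcp}, performance score {score}/100")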
Old site crawl data
A few days before the new site replaces the old one, run a final crawl of the old site. Doing so could later prove invaluable, should there be any optimization issues on the new site. A final crawl will allow you to save vital information about the old site’s page titles, meta descriptions, h1–h6 headings, server status, canonical tags, noindex/nofollow pages, inlinks/outlinks, level, etc. Having all this information available could save you a lot of trouble if, say, the new site isn’t well optimized or suffers from technical misconfiguration issues. Try also to save a copy of the old site’s robots.txt and XML sitemaps in case you need these later.
Search Console data
Also consider exporting as much of the old site’s Search Console data as possible. These are only available for 90 days, and chances are that once the new site goes live the old site’s Search Console data will disappear sooner or later. Data worth exporting includes:
Search analytics queries & pages
Crawl errors
Blocked resources
Mobile usability issues
URL parameters
Structured data errors
Links to your site
Internal links
Index status
Redirects preparation
The redirects implementation is one of the most crucial activities during a site migration. If the legacy site’s URLs cease to exist and aren’t correctly redirected, the website’s rankings and visibility will simply tank.
Why are redirects important in site migrations?
Redirects are extremely important because they help both search engines and users find pages that may no longer exist, have been renamed, or moved to another location. From an SEO point of view, redirects help search engines discover and index a site’s new URLs quicker but also understand how the old site’s pages are associated with the new site’s pages. This association will allow for ranking signals to pass from the old pages to the new ones, so rankings are retained without being negatively affected.
What happens when redirects aren’t correctly implemented?
When redirects are poorly implemented, the consequences can be catastrophic. Users will either land on Not Found pages (404s) or irrelevant pages that do not meet the user intent. In either case, the site’s bounce and conversion rates will be negatively affected. The consequences for search engines can be equally catastrophic: they’ll be unable to associate the old site’s pages with those on the new site if the URLs aren’t identical. Ranking signals won’t be passed over from the old to the new site, which will result in ranking drops and organic search visibility loss. In addition, it will take search engines longer to discover and index the new site’s pages.
301, 302, JavaScript redirects, or meta refresh?
When the URLs between the old and new version of the site are different, use 301 (permanent) redirects. These will tell search engines to index the new URLs as well as forward any ranking signals from the old URLs to the new ones. Therefore, you must use 301 redirects if your site moves to/from another domain/subdomain, if you switch from HTTP to HTTPS, or if the site or parts of it have been restructured. Despite some of Google’s claims that 302 redirects pass PageRank, indexing the new URLs would be slower and ranking signals could take much longer to be passed on from the old to the new page.
302 (temporary) redirects should only be used in situations where a redirect does not need to live permanently and therefore indexing the new URL isn’t a priority. With 302 redirects, search engines will initially be reluctant to index the content of the redirect destination URL and pass any ranking signals to it. However, if the temporary redirects remain for a long period of time without being removed or updated, they could end up behaving similarly to permanent (301) redirects. Use 302 redirects when a redirect is likely to require updating or removal in the near future, as well as for any country-, language-, or device-specific redirects.
Meta refresh and JavaScript redirects should be avoided. Even though Google is getting better and better at crawling JavaScript, there are no guarantees these will get discovered or pass ranking signals to the new pages.
If you’d like to find out more about how Google deals with the different types of redirects, please refer to John Mueller’s post.
Redirect mapping process
If you are lucky enough to work on a migration that doesn’t involve URL changes, you could skip this section. Otherwise, read on to find out why any legacy pages that won’t be available on the same URL after the migration should be redirected.
The redirect mapping file is a spreadsheet that includes the following two columns:
Legacy site URL –> a page’s URL on the old site.
New site URL –> a page’s URL on the new site.
When mapping (redirecting) a page from the old to the new site, always try mapping it to the most relevant corresponding page. In cases where a relevant page doesn’t exist, avoid redirecting the page to the homepage. First and foremost, redirecting users to irrelevant pages results in a very poor user experience. Google has stated that redirecting pages “en masse” to irrelevant pages will be treated as soft 404s and because of this won’t be passing any SEO value. If you can’t find an equivalent page on the new site, try mapping it to its parent category page.
Once the mapping is complete, the file will need to be sent to the development team to create the redirects, so that these can be tested before launching the new site. The implementation of redirects is another part in the site migration cycle where things can often go wrong.
Increasing efficiencies during the redirect mapping process
Redirect mapping requires great attention to detail and needs to be carried out by experienced SEOs. The URL mapping on small sites could in theory be done by manually mapping each URL of the legacy site to a URL on the new site. But on large sites that consist of thousands or even hundreds of thousands of pages, manually mapping every single URL is practically impossible and automation needs to be introduced. Relying on certain common attributes between the legacy and new site can be a massive time-saver. Such attributes may include the page titles, H1 headings, or other unique page identifiers such as product codes, SKUs etc. Make sure the attributes you rely on for the redirect mapping are unique and not repeated across several pages; otherwise, you will end up with incorrect mapping.
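As a rough illustration of attribute-based mapping, the Python/pandas sketch below matches legacy and staging URLs on a shared SKU column. The file and column names are hypothetical, and the de-duplication step enforces the uniqueness requirement described above.

import pandas as pd

legacy = pd.read_csv("legacy_crawl.csv")[["Address", "SKU"]]
new = pd.read_csv("staging_crawl.csv")[["Address", "SKU"]]

# Drop rows without a SKU and any SKU that appears on more than one page;
# non-unique keys would produce incorrect one-to-many mappings
legacy = legacy.dropna(subset=["SKU"]).drop_duplicates(subset="SKU", keep=False)
new = new.dropna(subset=["SKU"]).drop_duplicates(subset="SKU", keep=False)

mapping = legacy.merge(new, on="SKU", suffixes=("_legacy", "_new")).rename(
    columns={"Address_legacy": "Legacy site URL", "Address_new": "New site URL"}
)
mapping[["Legacy site URL", "New site URL"]].to_csv("redirect_map.csv", index=False)

# Anything left unmatched still needs to be mapped manually
legacy[~legacy["SKU"].isin(mapping["SKU"])].to_csv("unmatched_legacy_urls.csv", index=False)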
Pro tip: Make sure the URL structure of the new site is 100% finalized on staging before you start working on the redirect mapping. There’s nothing riskier than mapping URLs that will be updated before the new site goes live. When URLs are updated after the redirect mapping is completed, you may have to deal with undesired situations upon launch, such as broken redirects, redirect chains, and redirect loops. A content-freeze should be placed on the old site well in advance of the migration date, so there is a cut-off point for new content being published on the old site. This will make sure that no pages will be missed from the redirect mapping and guarantee that all pages on the old site get redirected.
Don’t forget the legacy redirects!
You should get hold of the old site’s existing redirects to ensure they’re considered when preparing the redirect mapping for the new site. Unless you do this, it’s likely that the site’s current redirect file will get overwritten by the new one on the launch date. If this happens, all legacy redirects that were previously in place will cease to exist and the site may lose a decent amount of link equity, the extent of which will largely depend on the site’s volume of legacy redirects. For instance, a site that has undergone a few migrations in the past should have a good number of legacy redirects in place that you don’t want getting lost.
Ideally, preserve as many of the legacy redirects as possible, making sure these won’t cause any issues when combined with the new site’s redirects. It’s strongly recommended to eliminate any potential redirect chains at this early stage, which can easily be done by checking whether the same URL appears both as a “Legacy URL” and “New site URL” in the redirect mapping spreadsheet. If this is the case, you will need to update the “New site URL” accordingly.
Example:
URL A redirects to URL B (legacy redirect)
URL B redirects to URL C (new redirect)
Which results in the following redirect chain:
URL A –> URL B –> URL C
To eliminate this, amend the existing legacy redirect and create a new one so that:
URL A redirects to URL C (amended legacy redirect)
URL B redirects to URL C (new redirect)
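Flattening chains like this by hand doesn’t scale on larger mappings. Here’s a minimal Python sketch that follows every mapping through to its final destination and flags loops along the way, assuming the combined legacy and new redirects live in a two-column “redirect_map.csv”.

import csv

with open("redirect_map.csv") as f:
    redirects = {row["Legacy site URL"]: row["New site URL"] for row in csv.DictReader(f)}

def final_destination(url, max_hops=10):
    """Follow the mapping until the destination is no longer itself redirected."""
    seen = set()
    while url in redirects:
        if url in seen or len(seen) >= max_hops:
            raise ValueError(f"Redirect loop detected involving {url}")
        seen.add(url)
        url = redirects[url]
    return url

# Re-point every source directly at its final destination (A -> C, not A -> B -> C)
flattened = {src: final_destination(dst) for src, dst in redirects.items()}

with open("redirect_map_flat.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Legacy site URL", "New site URL"])
    writer.writerows(flattened.items())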
Pro tip: Check your redirect mapping spreadsheet for redirect loops. These occur when the “Legacy URL” is identical to the “New site URL.” Redirect loops must be eliminated because they result in infinitely loading pages that are inaccessible to users and search engines; they are instant traffic, conversion, and ranking killers!
Implement blanket redirect rules to avoid duplicate content
It’s strongly recommended to try working out redirect rules that cover as many URL requests as possible. Implementing redirect rules on a web server is much more efficient than relying on numerous one-to-one redirects. If your redirect mapping document consists of a very large number of redirects that need to be implemented as one-to-one redirect rules, site performance could be negatively affected. In any case, double check with the development team the maximum number of redirects the web server can handle without issues.
In any case, there are some standard redirect rules that should be in place to avoid generating duplicate content issues:
URL case: All URLs containing upper-case characters should be 301 redirected to all lower-case URLs, e.g. https://www.website.com/Page/ should be automatically redirecting to https://www.website.com/page/
Host: For instance, all non-www URLs should be 301 redirected to their www equivalent, e.g. https://website.com/page/ should be redirected to https://www.website.com/page/
Protocol: On a secure website, requests for HTTP URLs should be redirected to the equivalent HTTPS URL, e.g. http://www.website.com/page/ should automatically redirect to https://www.website.com/page/
Trailing slash: For instance, any URLs not containing a trailing slash should redirect to a version with a trailing slash, e.g. http://www.website.com/page should redirect to http://www.website.com/page/
Even if some of these standard redirect rules exist on the legacy website, do not assume they’ll necessarily exist on the new site unless they’re explicitly requested.
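How these rules get implemented depends entirely on the web server, so treat the following Apache mod_rewrite snippet as a hedged sketch rather than production configuration; the hostname is a placeholder, and your development team should own and test the real rules.

RewriteEngine On

# Protocol + host: force HTTPS and www in a single hop to avoid chains
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.website.com/$1 [R=301,L]

# Trailing slash: append when missing (skip real files and URLs with extensions)
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !\.[a-zA-Z0-9]+$
RewriteRule ^(.*[^/])$ /$1/ [R=301,L]

# URL case: requires "RewriteMap lc int:tolower" in the server/virtual host config
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule ^/?(.*)$ /${lc:$1} [R=301,L]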
Avoid internal redirects
Try updating the site’s internal links so they don’t trigger internal redirects. Even though search engines can follow internal redirects, these are not recommended because they add additional latency to page loading times and could also have a negative impact on search engine crawl time.
Don’t forget your image files
If the site’s images have moved to a new location, Google recommends redirecting the old image URLs to the new image URLs to help Google discover and index the new images quicker. If it’s not easy to redirect all images, aim to redirect at least those image URLs that have accrued backlinks.
Phase 3: Pre-launch testing
The earlier you can start testing, the better. Certain things need to be fully implemented to be tested, but others don’t. For example, user journey issues could be identified from as early as the prototypes or wireframes design. Content-related issues between the old and new site or content inconsistencies (e.g. between the desktop and mobile site) could also be identified at an early stage. But the more technical components should only be tested once fully implemented — things like redirects, canonical tags, or XML sitemaps. The earlier issues get identified, the more likely it is that they’ll be addressed before launching the new site. Identifying certain types of issues at a later stage isn’t cost-effective, requires more resources, and causes significant delays. Poor testing and not allowing the time required to thoroughly test all building blocks that can affect SEO and UX performance can have disastrous consequences soon after the new site has gone live.
Making sure search engines cannot access the staging/test site
Before making the new site available on a staging/testing environment, take precautions to ensure that search engines do not index it. There are a few different ways to do this, each with different pros and cons.
Site available to specific IPs (most recommended)
Making the test site available only to specific (whitelisted) IP addresses is a very effective way to prevent search engines from crawling it. Anyone trying to access the test site’s URL won’t be able to see any content unless their IP has been whitelisted. The main advantage is that whitelisted users could easily access and crawl the site without any issues. The only downside is that third-party web-based tools (such as Google’s tools) cannot be used because of the IP restrictions.
Password protection
Password protecting the staging/test site is another way to keep search engine crawlers away, but this solution has two main downsides. Depending on the implementation, it may not be possible to crawl and test a password-protected website if the crawler application doesn’t make it past the login screen. The other downside: password-protected websites that use forms for authentication can be crawled using third-party applications, but there is a risk of causing severe and unexpected issues. This is because the crawler clicks on every link on a page (when you’re logged in) and could easily end up clicking on links that create or remove pages, install/uninstall plugins, etc.
Robots.txt blocking
Adding the following lines of code to the test site’s robots.txt file will prevent search engines from crawling the test site’s pages.
User-agent: *
Disallow: /
One downside of this method is that even though the content that appears on the test server won’t get indexed, the disallowed URLs may appear on Google’s search results. Another downside is that if the above robots.txt file moves into the live site, it will cause severe de-indexing issues. This is something I’ve encountered numerous times and for this reason I wouldn’t recommend using this method to block search engines.
User journey review
If the site has been redesigned or restructured, chances are that the user journeys will be affected to some extent. Reviewing the user journeys as early as possible and well before the new site launches is difficult due to the lack of user data. However, an experienced UX professional will be able to flag any concerns that could have a negative impact on the site’s conversion rate. Because A/B testing at this stage is hardly ever possible, it might be worth carrying out some user testing and trying to get feedback from real users. Unfortunately, user experience issues can be some of the harder ones to address because they may require sitewide changes that take a lot of time and effort.
On full site overhauls, not all UX decisions can always be backed up by data and many decisions will have to be based on best practice, past experience, and “gut feeling,” hence getting UX/CRO experts involved as early as possible could pay dividends later.
Site architecture review
A site migration is often a great opportunity to improve the site architecture. In other words, you have a great chance to reorganize your keyword targeted content and maximize its search traffic potential. Carrying out extensive keyword research will help identify the best possible category and subcategory pages so that users and search engines can get to any page on the site within a few clicks — the fewer the better, so you don’t end up with a very deep taxonomy.
Identifying new keywords with decent traffic potential and mapping them into new landing pages can make a big difference to the site’s organic traffic levels. On the other hand, enhancing the site architecture needs to be done thoughtfully. It could cause problems if, say, important pages move deeper into the new site architecture or there are too many similar pages optimized for the same keywords. Some of the most successful site migrations are the ones that allocate significant resources to enhance the site architecture.
Meta data & copy review
Make sure that the site’s page titles, meta descriptions, headings, and copy have been transferred from the old to the new site without issues. If you’ve created any new pages, make sure these are optimized and don’t target keywords that have already been targeted by other pages. If you’re re-platforming, be aware that the new platform may have different default values when new pages are being created. Launching the new site without properly optimized page titles or any kind of missing copy will have an immediate negative impact on your site’s rankings and traffic. Do not forget to review whether any user-generated content (i.e. user reviews, comments) has also been uploaded.
Internal linking review
Internal links are the backbone of a website. No matter how well optimized and structured the site’s copy is, it won’t be sufficient to succeed unless it’s supported by a flawless internal linking scheme. Internal links must be reviewed throughout the entire site, including links found in:
Main & secondary navigation
Header & footer links
Body content links
Pagination links
Horizontal links (related articles, similar products, etc)
Vertical links (e.g. breadcrumb navigation)
Cross-site links (e.g. links across international sites)
Technical checks
A series of technical checks must be carried out to make sure the new site’s technical setup is sound and to avoid coming across major technical glitches after the new site has gone live.
Robots.txt file review
Prepare the new site’s robots.txt file on the staging environment. This way you can test it for errors or omissions and avoid experiencing search engine crawl issues when the new site goes live. A classic mistake in site migrations is when the robots.txt file prevents search engine access using the following directive:
Disallow: /
If this gets accidentally carried over into the live site (and it often does), it will prevent search engines from crawling the site. And when search engines cannot crawl an indexed page, the keywords associated with the page will get demoted in the search results and eventually the page will get de-indexed.
But if the robots.txt file on staging is populated with the new site’s robots.txt directives, this mishap could be avoided.
When preparing the new site’s robots.txt file, make sure that:
It doesn’t block search engine access to pages that are intended to get indexed.
It doesn’t block any JavaScript or CSS resources search engines require to render page content.
The legacy site’s robots.txt file content has been reviewed and carried over if necessary.
It references the new XML sitemap(s) rather than any legacy ones that no longer exist.
Canonical tags review
Review the site’s canonical tags. Look for pages that either do not have a canonical tag or have a canonical tag pointing to another URL, and question whether this is intended. Don’t forget to crawl the canonical tags to find out whether they return a 200 server response. If they don’t, you will need to update them to eliminate any 3xx, 4xx, or 5xx server responses. You should also look for pages that have a canonical tag pointing to another URL combined with a noindex directive, because these two are conflicting signals and you’ll need to eliminate one of them.
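Checking canonical destinations one by one is impractical on larger sites, so a script can do the first pass. The following Python sketch (standard library plus the requests package) extracts the canonical tag from a page and checks the server response of the URL it points to; the staging URL in the usage line is a placeholder.

import requests
from urllib.parse import urljoin
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag found."""
    def __init__(self):
        super().__init__()
        self.canonical = None
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical" and self.canonical is None:
            self.canonical = attrs.get("href")

def check_canonical(page_url):
    parser = CanonicalParser()
    parser.feed(requests.get(page_url, timeout=10).text)
    if not parser.canonical:
        return f"{page_url}: no canonical tag"
    canonical = urljoin(page_url, parser.canonical)
    status = requests.head(canonical, timeout=10, allow_redirects=False).status_code
    return f"{page_url}: canonical {canonical} returns {status}"

print(check_canonical("https://staging.website.com/page/"))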
Meta robots review
Once you’ve crawled the staging site, look for pages with the meta robots properties set to “noindex” or “nofollow.” If this is the case, review each one of them to make sure this is intentional and remove the “noindex” or “nofollow” directive if it isn’t.
XML sitemaps review
Prepare two different types of sitemaps: one that contains all the new site’s indexable pages, and another that includes all the old site’s indexable pages. The former will help make Google aware of the new site’s indexable URLs. The latter will help Google become aware of the redirects that are in place and the fact that some of the indexed URLs have moved to new locations, so that it can discover them and update search results quicker.
You should check each XML sitemap to make sure that:
It validates without issues
It is encoded as UTF-8
It does not contain more than 50,000 rows
Its size does not exceed 50MB when uncompressed
If there are more than 50K rows or the file size exceeds 50MB, you must break the sitemap down into smaller ones. This prevents the server from becoming overloaded if Google requests the sitemap too frequently.
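A quick way to sanity-check these limits is to parse each sitemap with a few lines of Python. The sketch below validates the XML, counts the URL entries, and compares the uncompressed file size against the limits; the file name is a placeholder.

import os
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def check_sitemap(path, max_urls=50_000, max_bytes=50 * 1024 * 1024):
    tree = ET.parse(path)  # raises a ParseError if the XML doesn't validate
    urls = tree.getroot().findall("sm:url", NS)
    size = os.path.getsize(path)  # uncompressed size on disk
    print(f"{path}: {len(urls)} URLs, {size / 1024 / 1024:.1f} MB")
    if len(urls) > max_urls or size > max_bytes:
        print("Over the limit: split this sitemap into smaller ones")

check_sitemap("sitemap.xml")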
In addition, you must crawl each XML sitemap to make sure it only includes indexable URLs. Any non-indexable URLs should be excluded from the XML sitemaps, such as:
3xx, 4xx, and 5xx pages (e.g. redirected, not found pages, bad requests, etc)
Soft 404s. These are pages with no content that return a 200 server response, instead of a 404.
Canonicalized pages (apart from self-referring canonical URLs)
Pages with a meta robots noindex directive
<!DOCTYPE html>
<html>
<head>
<meta name="robots" content="noindex" />
(…)
</head>
<body>(…)</body>
</html>
Pages with a noindex X-Robots-Tag in the HTTP header
HTTP/1.1 200 OK
Date: Tue, 10 Nov 2017 17:12:43 GMT
(…)
X-Robots-Tag: noindex
(…)
Pages blocked from the robots.txt file
Building clean XML sitemaps can help monitor the true indexing levels of the new site once it goes live. If you don’t keep them clean, it will be very difficult to spot any indexing issues.
Pro tip: Download and open each XML sitemap in Excel to get a detailed overview of any additional attributes, such as hreflang or image attributes.
HTML sitemap review
Depending on the size and type of site that is being migrated, having an HTML sitemap can in certain cases be beneficial. An HTML sitemap that consists of URLs that aren’t linked from the site’s main navigation can significantly boost page discovery and indexing. However, avoid generating an HTML sitemap that includes too many URLs. If you do need to include thousands of URLs, consider building a segmented HTML sitemap.
The number of nested sitemaps as well as the maximum number of URLs you should include in each sitemap depends on the site’s authority. The more authoritative a website, the higher the number of nested sitemaps and URLs it could get away with.
For example, the NYTimes.com HTML sitemap consists of three levels, where each one includes over 1,000 URLs per sitemap. These nested HTML sitemaps aid search engine crawlers in discovering articles published since 1851 that otherwise would be difficult to discover and index, as not all of them would have been internally linked.
The NYTimes HTML sitemap (level 1)
The NYTimes HTML sitemap (level 2)
Structured data review
Errors in the structured data markup need to be identified early so there’s time to fix them before the new site goes live. Ideally, you should test every single page template (rather than every single page) using Google’s Structured Data Testing tool.
Be sure to check the markup on both the desktop and mobile pages, especially if the mobile website isn’t responsive.
The tool will only report any existing errors but not omissions. For example, if your product page template does not include the Product structured data schema, the tool won’t report any errors. So, in addition to checking for errors you should also make sure that each page template includes the appropriate structured data markup for its content type.
Please refer to Google’s documentation for the most up to date details on the structured data implementation and supported content types.
JavaScript crawling review
You must test every single page template of the new site to make sure Google will be able to crawl content that requires JavaScript parsing. If you’re able to use Google’s Fetch and Render tool on your staging site, you should definitely do so. Otherwise, carry out some manual tests, following Justin Briggs’ advice.
As Bartosz Góralewicz’s tests proved, even if Google is able to crawl and index JavaScript-generated content, it does not handle content equally well across all major JavaScript frameworks. His findings showed that some JavaScript frameworks are not SEO-friendly, with AngularJS at the time being the most problematic of all.
Bartosz also found that other search engines (such as Bing, Yandex, and Baidu) really struggle with indexing JavaScript-generated content, which is important to know if your site’s traffic relies on any of these search engines.
Hopefully, this is something that will improve over time, but with the increasing popularity of JavaScript frameworks in web development, this must be high up on your checklist.
Finally, you should check whether any external resources are being blocked. Unfortunately, this isn’t something you can control 100% because many resources (such as JavaScript and CSS files) are hosted by third-party websites which may be blocking them via their own robots.txt files!
Again, the Fetch and Render tool can help diagnose this type of issue that, if left unresolved, could have a significant negative impact.
Mobile site SEO review
Assets blocking review
First, make sure that the robots.txt file isn’t accidentally blocking any JavaScript, CSS, or image files that are essential for the mobile site’s content to render. This could have a negative impact on how search engines render and index the mobile site’s page content, which in turn could negatively affect the mobile site’s search visibility and performance.
Mobile-first index review
In order to avoid any issues associated with Google’s mobile-first index, thoroughly review the mobile website and make sure there aren’t any inconsistencies between the desktop and mobile sites in the following areas:
Page titles
Meta descriptions
Headings
Copy
Canonical tags
Meta robots attributes (i.e. noindex, nofollow)
Internal links
Structured data
A responsive website should serve the same content, links, and markup across devices, and the above SEO attributes should be identical across the desktop and mobile websites.
In addition to the above, you must carry out a few further technical checks depending on the mobile site’s set up.
Responsive site review
A responsive website must serve all devices the same HTML code, which is adjusted (via the use of CSS) depending on the screen size.
Googlebot is able to automatically detect this mobile setup as long as it’s allowed to crawl the page and its assets. It’s therefore extremely important to make sure that Googlebot can access all essential assets, such as images, JavaScript, and CSS files.
To signal to browsers that a page is responsive, a viewport meta tag should be in place within the <head> of each HTML page.
<meta name="viewport" content="width=device-width, initial-scale=1.0">
If the meta viewport tag is missing, font sizes may appear in an inconsistent manner, which may cause Google to treat the page as not mobile-friendly.
Separate mobile URLs review
If the mobile website uses separate URLs from desktop, make sure that:
Each desktop page has a rel="alternate" tag pointing to the corresponding mobile URL.
Each mobile page has a rel="canonical" tag pointing to the corresponding desktop URL.
When desktop URLs are requested on mobile devices, they’re redirected to the respective mobile URL.
Redirects work across all mobile devices, including Android, iPhone, and Windows phones.
There aren’t any irrelevant cross-links between the desktop and mobile pages. This means that internal links found on a desktop page should only link to desktop pages, and those found on a mobile page should only link to other mobile pages.
The mobile URLs return a 200 server response.
Dynamic serving review
Dynamic serving websites serve different code to each device, but on the same URL.
On dynamic serving websites, review whether the Vary HTTP header has been correctly set up. This is necessary because dynamic serving websites alter the HTML for mobile user agents, and the Vary: User-Agent header helps Googlebot discover the mobile content.
Mobile-friendliness review
Regardless of the mobile site set-up (responsive, separate URLs or dynamic serving), review the pages using a mobile user-agent and make sure that:
The viewport has been set correctly. Using a fixed width viewport across devices will cause mobile usability issues.
The font size isn’t too small.
Touch elements (i.e. buttons, links) aren’t too close together.
There aren’t any intrusive interstitials, such as ads, mailing list sign-up forms, or app download pop-ups. To avoid any issues, use a small HTML or image banner instead.
Mobile pages aren’t too slow to load (see next section).
Google’s mobile-friendly test tool can help diagnose most of the above issues:
Google’s mobile-friendly test tool in action
AMP site review
If there is an AMP website and a desktop version of the site is available, make sure that:
Each non-AMP page (i.e. desktop, mobile) has a rel="amphtml" tag pointing to the corresponding AMP URL.
Each AMP page has a rel="canonical" tag pointing to the corresponding desktop page.
Any AMP page that does not have a corresponding desktop URL has a self-referring canonical tag.
You should also make sure that the AMPs are valid. This can be tested using Google’s AMP Test Tool.
Mixed content errors
With Google pushing hard for sites to be fully secure and Chrome becoming the first browser to flag HTTP pages as not secure, aim to launch the new site on HTTPS, making sure all resources such as images, CSS, and JavaScript files are requested over secure HTTPS connections. This is essential in order to avoid mixed content issues.
Mixed content occurs when a page that’s loaded over a secure HTTPS connection requests assets over insecure HTTP connections. Most browsers either block dangerous HTTP requests or just display warnings that hinder the user experience.
Mixed content errors in Chrome’s JavaScript Console
There are many ways to identify mixed content errors, including the use of crawler applications, Google’s Lighthouse, etc.
Image assets review
Google crawls images less frequently than HTML pages. If migrating a site’s images from one location to another (e.g. from your domain to a CDN), there are ways to aid Google in discovering the migrated images quicker. Building an image XML sitemap will help, but you also need to make sure that Googlebot can reach the site’s images when crawling the site. The tricky part with image indexing is that both the web page an image appears on and the image file itself have to get indexed.
Site performance review
Last but not least, measure the old site’s page loading times and see how these compare with the new site’s when this becomes available on staging. At this stage, focus on the network-independent aspects of performance such as the use of external resources (images, JavaScript, and CSS), the HTML code, and the web server’s configuration. More information about how to do this is available further down.
Analytics tracking review
Make sure that analytics tracking is properly set up. This review should ideally be carried out by specialist analytics consultants who will look beyond the implementation of the tracking code. Make sure that Goals and Events are properly set up, e-commerce tracking is implemented, enhanced e-commerce tracking is enabled, etc. There’s nothing more frustrating than having no analytics data after your new site is launched.
Redirects testing
Testing the redirects before the new site goes live is critical and can save you a lot of trouble later. There are many ways to check the redirects on a staging/test server, but the bottom line is that you should not launch the new website without having tested the redirects.
Once the redirects become available on the staging/testing environment, crawl the entire list of redirects and check for the following issues (a simple checker sketch follows the list):
Redirect loops (a URL that infinitely redirects to itself)
Redirects with a 4xx or 5xx server response.
Redirect chains (a URL that redirects to another URL, which in turn redirects to another URL, etc).
Canonical URLs that return a 4xx or 5xx server response.
Canonical loops (page A has a canonical pointing to page B, which has a canonical pointing to page A).
Canonical chains (a canonical that points to another page that has a canonical pointing to another page, etc).
Protocol/host inconsistencies e.g. URLs are redirected to both HTTP and HTTPS URLs or www and non-www URLs.
Leading/trailing whitespace characters. Use trim() in Excel to eliminate them.
Invalid characters in URLs.
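As a starting point, a simple script can cover the first three checks in the list above. The sketch below assumes the mapping lives in “redirect_map.csv” and that the redirects can be exercised against a staging hostname; the hostname swap is a placeholder you’d adapt to your own setup.

import csv
import requests

with open("redirect_map.csv") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    # Placeholder hostname swap so old URLs resolve against the staging server
    test_url = row["Legacy site URL"].replace("www.website.com", "staging.website.com")
    try:
        resp = requests.get(test_url, allow_redirects=True, timeout=10)
    except requests.TooManyRedirects:
        print(f"{test_url} -> redirect loop")
        continue
    if resp.status_code != 200:
        print(f"{test_url} -> final status {resp.status_code}")
    elif len(resp.history) > 1:
        print(f"{test_url} -> redirect chain of {len(resp.history)} hops")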
Pro tip: Make sure each of the old site’s URLs redirects to the correct URL on the new site. At this stage, because the new site doesn’t exist yet, you can only test whether the redirect destination URL is the intended one, but it’s definitely worth it. The fact that a URL redirects does not mean it redirects to the right page.
Phase 4: Launch day activities
When the site is down...
While the new site is replacing the old one, chances are that the live site is going to be temporarily down. The downtime should be kept to a minimum, but while this happens the web server should respond to any URL request with a 503 (service unavailable) server response. This will tell search engine crawlers that the site is temporarily down for maintenance so they come back to crawl the site later.
If the site is down for too long without serving a 503 server response and search engines crawl the website, organic search visibility will be negatively affected and recovery won’t be instant once the site is back up. In addition, while the website is temporarily down it should also serve an informative holding page notifying users that the website is temporarily down for maintenance.
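On Apache, for instance, a holding configuration along these lines could serve the 503 and the maintenance page. Treat it as a sketch only: it assumes mod_rewrite and mod_headers are enabled and that the holding page lives at a hypothetical /maintenance.html.

ErrorDocument 503 /maintenance.html
Header always set Retry-After "3600"
RewriteEngine On
# Serve the holding page itself normally; everything else gets a 503
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
RewriteRule ^ - [R=503,L]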
Technical spot checks
As soon as the new site has gone live, take a quick look at:
The robots.txt file to make sure search engines are not blocked from crawling
Top pages redirects (e.g. do requests for the old site’s top pages redirect correctly?)
Top pages canonical tags
Top pages server responses
Noindex/nofollow directives, in case they are unintentional
The spot checks need to be carried out across both the mobile and desktop sites, unless the site is fully responsive.
Search Console actions
The following activities should take place as soon as the new website has gone live:
Test & upload the XML sitemap(s)
Set the Preferred location of the domain (www or non-www)
Set the International targeting (if applicable)
Configure the URL parameters to tackle any potential duplicate content issues early.
Upload the Disavow file (if applicable)
Use the Change of Address tool (if switching domains)
Pro tip: Use the “Fetch as Google” feature for each different type of page (e.g. the homepage, a category, a subcategory, a product page) to make sure Googlebot can render the pages without any issues. Review any reported blocked resources and do not forget to use Fetch and Render for desktop and mobile, especially if the mobile website isn’t responsive.
Blocked resources prevent Googlebot from rendering the content of the page
Phase 5: Post-launch review
Once the new site has gone live, a new round of in-depth checks should be carried out. These are largely the same ones as those mentioned in the “Phase 3: Pre-launch Testing” section.
However, the main difference during this phase is that you now have access to a lot more data and tools. Don’t underestimate the amount of effort you’ll need to put in during this phase, because any issues you encounter now directly impact the site’s performance in the SERPs. On the other hand, the sooner an issue gets identified, the quicker it will get resolved.
In addition to repeating the same testing tasks that were outlined in the Phase 3 section, in certain areas things can be tested more thoroughly, accurately, and in greater detail. You can now take full advantage of the Search Console features.
Check crawl stats and server logs
Keep an eye on the crawl stats available in the Search Console to make sure Google is crawling the new site’s pages. In general, when Googlebot comes across new pages it tends to increase the average number of pages it crawls per day. But if you can’t spot a spike around the launch date, something may be negatively affecting Googlebot’s ability to crawl the site.
Crawl stats on Google’s Search Console
Reviewing the server log files is by far the most effective way to spot any crawl issues or inefficiencies. Tools like Botify and On Crawl can be extremely useful because they combine crawls with server log data and can highlight pages search engines do not crawl, pages that are not linked to internally (orphan pages), low-value pages that are heavily internally linked, and a lot more.
Review crawl errors regularly
Keep an eye on the reported crawl errors, ideally daily during the first few weeks. Downloading these errors daily, crawling the reported URLs, and taking the necessary actions (i.e. implement additional 301 redirects, fix soft 404 errors) will aid a quicker recovery. It’s highly unlikely you will need to redirect every single 404 that is reported, but you should add redirects for the most important ones.
Pro tip: In Google Analytics you can easily find out which are the most commonly requested 404 URLs and fix these first!
Other useful Search Console features
Other Search Console features worth checking include the Blocked Resources, Structured Data errors, Mobile Usability errors, HTML Improvements, and International Targeting (to check for hreflang reported errors).
Pro tip: Keep a close eye on the URL parameters in case they’re causing duplicate content issues. If this is the case, consider taking some urgent remedial action.
Measuring site speed
Once the new site is live, measure site speed to make sure the site’s pages are loading fast enough on both desktop and mobile devices. With site speed being a ranking signal across devices, and because slow pages lose users and customers, comparing the new site’s speed with the old site’s is extremely important. If the new site’s page loading times appear to be higher, you should take some immediate action; otherwise your site’s traffic and conversions will almost certainly take a hit.
Evaluating speed using Google’s tools
Two tools that can help with this are Google’s Lighthouse and Pagespeed Insights.
The PageSpeed Insights tool measures page performance on both mobile and desktop devices and shows real-world page speed data based on user data Google collects from Chrome. It also checks to see if a page has applied common performance best practices and provides an optimization score. The tool includes the following main categories:
Speed score: Categorizes a page as Fast, Average, or Slow using two metrics: The First Contentful Paint (FCP) and DOM Content Loaded (DCL). A page is considered fast if both metrics are in the top one-third of their category.
Optimization score: Categorizes a page as Good, Medium, or Low based on performance headroom.
Page load distributions: Categorizes a page as Fast (fastest third), Average (middle third), or Slow (bottom third) by comparing against all FCP and DCL events in the Chrome User Experience Report.
Page stats: Can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
Optimization suggestions: A list of best practices that could be applied to a page.
Google’s PageSpeed Insights in action
Google’s Lighthouse is very handy for mobile performance, accessibility, and Progressive Web Apps audits. It provides various useful metrics that can be used to measure page performance on mobile devices, such as:
First Meaningful Paint, which measures when the primary content of a page is visible.
Time to Interactive, the point at which the page is ready for a user to interact with.
Speed Index, which shows how quickly a page is visibly populated.
Both tools provide recommendations to help improve any reported site performance issues.
Google’s Lighthouse in action
You can also use this Google tool to get a rough estimate on the percentage of users you may be losing from your mobile site’s pages due to slow page loading times.
The same tool also provides an industry comparison so you get an idea of how far you are from the top performing sites in your industry.
Measuring speed from real users
Once the site has gone live, you can start evaluating site speed based on the users visiting your site. If you have Google Analytics, you can easily compare the new site’s average load time with the previous one.
In addition, if you have access to a Real User Monitoring tool such as Pingdom, you can evaluate site speed based on the users visiting your website. The below map illustrates how different visitors experience very different loading times depending on their geographic location. In the below example, the page loading times appear to be satisfactory to visitors from the UK, US, and Germany, but to users residing in other countries they are much higher.
Phase 6: Measuring site migration performance
When to measure
Has the site migration been successful? This is the million-dollar question everyone involved would like to know the answer to as soon as the new site goes live. In reality, the longer you wait the clearer the answer becomes, as visibility during the first few weeks or even months can be very volatile depending on the size and authority of your site. For smaller sites, a 4–6 week period should be sufficient before comparing the new site’s visibility with the old site’s. For large websites you may have to wait for at least 2–3 months before measuring.
In addition, if the new site is significantly different from the previous one, users will need some time to get used to the new look and feel and to familiarize themselves with the new taxonomy, user journeys, etc. Such changes initially have a significant negative impact on the site’s conversion rate, which should improve after a few weeks as returning visitors become more used to the new site. In any case, making data-driven conclusions about the new site’s UX can be risky.
But these are just general rules of thumb and need to be taken into consideration along with other factors. For instance, if a few days or weeks after the new site launch significant additional changes were made (e.g. to address a technical issue), the migration’s evaluation should be pushed further back.
How to measure
Performance measurement is very important and even though business stakeholders would only be interested to hear about the revenue and traffic impact, there are a whole lot of other metrics you should pay attention to. For example, there can be several reasons for revenue going down following a site migration, including seasonal trends, lower brand interest, UX issues that have significantly lowered the site’s conversion rate, poor mobile performance, poor page loading times, etc. So, in addition to the organic traffic and revenue figures, also pay attention to the following:
Desktop & mobile visibility (from SearchMetrics, SEMrush, Sistrix)
Desktop and mobile rankings (from any reliable rank tracking tool)
User engagement (bounce rate, average time on page)
Sessions per page type (i.e. are the category pages driving as many sessions as before?)
Conversion rate per page type (i.e. are the product pages converting the same way as before?)
Conversion rate by device (i.e. has the desktop/mobile conversion rate increased/decreased since launching the new site?)
Reviewing the below could also be very handy, especially from a technical troubleshooting perspective:
Number of indexed pages (Search Console)
Submitted vs indexed pages in XML sitemaps (Search Console)
Pages receiving at least one visit (analytics)
Site speed (PageSpeed Insights, Lighthouse, Google Analytics)
It’s only after you’ve looked into all the above areas that you could safely conclude whether your migration has been successful or not.
Good luck and if you need any consultation or assistance with your site migration, please get in touch!
Appendix: Useful tools
Crawlers
Screaming Frog: The SEO Swiss army knife, ideal for crawling small- and medium-sized websites.
Sitebulb: Very intuitive crawler application with a neat user interface, nicely organized reports, and many useful data visualizations.
Deep Crawl: Cloud-based crawler with the ability to crawl staging sites and compare different crawls; copes well with large websites.
Botify: Another powerful cloud-based crawler supported by exceptional server log file analysis capabilities that can be very insightful in terms of understanding how search engines crawl the site.
On-Crawl: Crawler and server log analyzer for enterprise SEO audits with many handy features to identify crawl budget, content quality, and performance issues.
Handy Chrome add-ons
Web developer: A collection of developer tools including easy ways to enable/disable JavaScript, CSS, images, etc.
User agent switcher: Switch between different user agents including Googlebot, mobile, and other agents.
Ayima Redirect Path: A great header and redirect checker.
SEO Meta in 1 click: An on-page meta attributes, headers, and links inspector.
Scraper: An easy way to scrape website data into a spreadsheet.
Site monitoring tools
Uptime Robot: Free website uptime monitoring.
Robotto: Free robots.txt monitoring tool.
Pingdom tools: Monitors site uptime and page speed from real users (RUM service)
SEO Radar: Monitors all critical SEO elements and fires alerts when these change.
Site performance tools
PageSpeed Insights: Measures page performance for mobile and desktop devices. It checks to see if a page has applied common performance best practices and provides a score, which ranges from 0 to 100 points.
Lighthouse: Handy Chrome extension for performance, accessibility, Progressive Web Apps audits. Can also be run from the command line, or as a Node module.
Webpagetest.org: Very detailed page tests from various locations, connections, and devices, including detailed waterfall charts.
Structured data testing tools
Google’s structured data testing tool & Google’s structured data testing tool Chrome extension
Bing’s markup validator
Yandex structured data testing tool
Google’s rich results testing tool
Mobile testing tools
Google’s mobile-friendly testing tool
Google’s AMP testing tool
AMP validator tool
Backlink data sources
Ahrefs
Majestic SEO
Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.
Slow response to bug fixing
There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.
Underestimating scale
Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let's launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.
It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.
Site migration process
The site migration process can be split into six main essential phases. They are all equally important and skipping any of the below tasks could hinder the migration’s success to varying extents.
Phase 1: Scope & Planning Work out the project scope
Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.
A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.
However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.
Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.
You should now be left with a prioritized list of activities which are expected to have a positive ROI, if implemented. These should then be communicated and discussed with all stakeholders, so you set realistic targets, agree on the project, scope and set the right expectations from the outset.
Prepare the project plan
Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.
The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.
A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.
Phase 2: Pre-launch preparation
These include any activities that need to be carried out while the new site is still under development. By this point, the new site’s SEO requirements should have been captured already. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes well before the new site becomes available on a staging environment.
Wireframes review
Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.
Preparing the technical SEO specifications
Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.
The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.
Make sure to include specific requirements that cover at least the following areas:
URL structure
Meta data (including dynamically generated default values)
Structured data
Canonicals and meta robots directives
Copy & headings
Main & secondary navigation
Internal linking (in any form)
Pagination
XML sitemap(s)
HTML sitemap
Hreflang (if there are international sites)
Mobile setup (including the app, AMP, or PWA site)
Redirects
Custom 404 page
JavaScript, CSS, and image files
Page loading times (for desktop & mobile)
The specification should also include areas of the CMS functionality that allows users to:
Specify custom URLs and override default ones
Update page titles
Update meta descriptions
Update any h1–h6 headings
Add or amend the default canonical tag
Set the meta robots attributes to index/noindex/follow/nofollow
Add or edit the alt text of each image
Include Open Graph fields for description, URL, image, type, sitename
Include Twitter Open Graph fields for card, URL, title, description, image
Bulk upload or amend redirects
Update the robots.txt file
It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).
Identifying priority pages
One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.
In order to do this, you need to:
Crawl the legacy site
Identify all indexable pages
Identify top performing pages
How to crawl the legacy site
Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links etc. Regardless of the crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:
Ignore robots.txt (in case any vital parts are accidentally blocked)
Follow internal “nofollow” links (so the crawler reaches more pages)
Crawl all subdomains (depending on scope)
Crawl outside start folder (depending on scope)
Change the user agent to Googlebot (desktop)
Change the user agent to Googlebot (smartphone)
Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.
How to identify the indexable pages
Once the crawl is complete, work on identifying the legacy site’s indexed pages. These are any HTML pages with the following characteristics:
Return a 200 server response
Either do not have a canonical tag or have a self-referring canonical URL
Do not have a meta robots noindex
Aren’t excluded from the robots.txt file
Are internally linked from other pages (non-orphan pages)
The indexable pages are the only pages that have..
http://ift.tt/2Hd5yRD
0 notes
lawrenceseitz22 · 7 years ago
Text
The Website Migration Guide: SEO Strategy & Process
Posted by Modestos
What is a site migration?
A site migration is a term broadly used by SEO professionals to describe any event whereby a website undergoes substantial changes in areas that can significantly affect search engine visibility — typically substantial changes to the site structure, content, coding, site performance, or UX.
Google’s documentation on site migrations doesn’t cover them in great depth and downplays the fact that they so often result in significant traffic and revenue loss, which can last from a few weeks to several months, depending on the extent to which search engine ranking signals have been affected, as well as how long it may take the affected business to roll out a successful recovery plan.
Quick access links
Site migration examples
Site migration types
Common site migration pitfalls
Site migration process
1. Scope & planning
2. Pre-launch preparation
3. Pre-launch testing
4. Launch day actions
5. Post-launch testing
6. Performance review
Appendix: Useful tools
Site migration examples
The following section discusses how both successful and unsuccessful site migrations look and explains why it is 100% possible to come out of a site migration without suffering significant losses.
Debunking the “expected traffic drop” myth
Anyone who has been involved with a site migration has probably heard the widespread theory that it will result in de facto traffic and revenue loss. Even though this assertion holds some truth for some very specific cases (i.e. moving from an established domain to a brand new one) it shouldn’t be treated as gospel. It is entirely possible to migrate without losing any traffic or revenue; you can even enjoy significant growth right after launching a revamped website. However, this can only be achieved if every single step has been well-planned and executed.
Examples of unsuccessful site migrations
The following graph illustrates a big UK retailer’s botched site migration where the website lost 35% of its visibility two weeks after switching from HTTP to HTTPS. It took them about six months to fully recover, which must have had a significant impact on revenue from organic search. This is a typical example of a poor site migration, possibly caused by poor planning or implementation.
Example of a poor site migration — recovery took 6 months!
But recovery may not always be possible. The below visibility graph is from another big UK retailer, where the HTTP to HTTPS switchover resulted in a permanent 20% visibility loss.
Another example of a poor site migration — no signs of recovery 6 months on!
In fact, it is entirely possible to migrate from HTTP to HTTPS without losing that much traffic for that long, aside from the first few weeks, where there is high volatility as Google discovers the new URLs and updates search results.
Examples of successful site migrations
What does a successful site migration look like? This largely depends on the site migration type, the objectives, and the KPIs (more details later). But in most cases, a successful site migration shows at least one of the following characteristics:
Minimal visibility loss during the first few weeks (short-term goal)
Visibility growth thereafter — depending on the type of migration (long-term goal)
The following visibility report is taken from an HTTP to HTTPS site migration, which was also accompanied by significant improvements to the site’s page loading times.
The following visibility report is from a complete site overhaul, which I was fortunate to be involved with several months in advance and supported during the strategy, planning, and testing phases, all of which were equally important.
As commonly occurs on site migration projects, the launch date had to be pushed back a few times due to the risks of launching the new site prematurely and before major technical obstacles were fully addressed. But as you can see on the below visibility graph, the wait was well worth it. Organic visibility not only didn’t drop (as most would normally expect) but in fact started growing from the first week.
Visibility growth one month after the migration reached 60%, whilst organic traffic growth two months post-launch exceeded 80%.
Example of a very successful site migration — instant growth following new site launch!
This was a rather complex migration as the new website was re-designed and built from scratch on a new platform with an improved site taxonomy that included new landing pages, an updated URL structure, lots of redirects to preserve link equity, plus a switchover from HTTP to HTTPS.
In general, introducing too many changes at the same time can be tricky because if something goes wrong, you’ll struggle to figure out what exactly is at fault. But at the same time, leaving major changes for a later time isn’t ideal either as it will require more resources. If you know what you’re doing, making multiple positive changes at once can be very cost-effective.
Before getting into the nitty-gritty of how you can turn a complex site migration project into a success, it’s important to run through the main site migration types as well as explain the main reason so many site migrations fail.
Site migration types
There are many site migration types. It all depends on the nature of the changes that take place to the legacy website.
Google’s documentation mostly covers migrations with site location changes, which are categorised as follows:
Site moves with URL changes
Site moves without URL changes
Site move migrations
These typically occur when a site moves to a different URL due to any of the below:
Protocol change
A classic example is when migrating from HTTP to HTTPS.
Subdomain or subfolder change
Very common in international SEO where a business decides to move one or more ccTLDs into subdomains or subfolders. Another common example is where a mobile site that sits on a separate subdomain or subfolder becomes responsive and both desktop and mobile URLs are uniformed.
Domain name change
Commonly occurs when a business is rebranding and must move from one domain to another.
Top-level domain change
This is common when a business decides to launch international websites and needs to move from a ccTLD (country code top-level domain) to a gTLD (generic top-level domain) or vice versa, e.g. moving from .co.uk to .com, or moving from .com to .co.uk and so on.
Site structure changes
These are changes to the site architecture that usually affect the site’s internal linking and URL structure.
Other types of migrations
There are other types of migration which are triggered by changes to the site’s content, structure, design, or platform.
Replatforming
This is the case when a website is moved from one platform/CMS to another, e.g. migrating from WordPress to Magento or just upgrading to the latest platform version. Replatforming can, in some cases, also result in design and URL changes because of technical limitations that often occur when changing platforms. This is why replatforming migrations rarely result in a website that looks exactly the same as the previous one.
Content migrations
Major content changes such as content rewrites, content consolidation, or content pruning can have a big impact on a site’s organic search visibility, depending on the scale. These changes can often affect the site’s taxonomy, navigation, and internal linking.
Mobile setup changes
With so many options available for a site’s mobile setup, changes such as enabling app indexing, building an AMP site, or building a PWA can also be considered partial site migrations, especially when an existing mobile site is being replaced by an app, AMP, or PWA.
Structural changes
These are often caused by major changes to the site’s taxonomy that impact the site navigation, internal linking, and user journeys.
Site redesigns
These can vary from major design changes in the look and feel to a complete website revamp that may also include significant media, code, and copy changes.
Hybrid migrations
In addition to the above, the migration types discussed can be combined in practically any way, producing hybrid migrations. The more changes that get introduced at the same time, the higher the complexity and the risks. Even though making too many changes at the same time increases the risk of something going wrong, it can be more cost-effective from a resources perspective if the migration is very well planned and executed.
Common site migration pitfalls
Even though every site migration is different there are a few common themes behind the most typical site migration disasters, with the biggest being the following:
Poor strategy
Some site migrations are doomed to failure way before the new site is launched. A strategy that is built upon unclear and unrealistic objectives is much less likely to bring success.
Establishing measurable objectives is essential in order to measure the impact of the migration post-launch. For most site migrations, the primary objective should be the retention of the site’s current traffic and revenue levels. In certain cases the bar could be raised higher, but in general anticipating or forecasting growth should be a secondary objective. This will help avoid creating unrealistic expectations.
Poor planning
Coming up with a detailed project plan as early as possible will help avoid delays along the way. Factor in additional time and resources to cope with any unforeseen circumstances that may arise. No matter how well thought out and detailed your plan is, it’s highly unlikely everything will go as expected. Be flexible with your plan and accept the fact that there will almost certainly be delays. Map out all dependencies and make all stakeholders aware of them.
Avoid planning to launch the site near your seasonal peaks, because if anything goes wrong you won’t have enough time to rectify the issues. For instance, retailers should avoid launching a site close to September/October to avoid putting the busy pre-Christmas period at risk. In this case, it would be much wiser launching during the quieter summer months.
Lack of resources
Before committing to a site migration project, estimate the time and effort required to make it a success. If your budget is limited, make a call as to whether it is worth going ahead with a migration that is likely to fail in meeting its established objectives and cause revenue loss.
As a rule of thumb, try to include a buffer of at least 20% in additional resource than you initially think the project will require. This additional buffer will later allow you to quickly address any issues as soon as they arise, without jeopardizing success. If your resources are too tight or you start cutting corners at this early stage, the site migration will be at risk.
Lack of SEO/UX consultation
When changes are taking place on a website, every single decision needs to be weighed from both a UX and SEO standpoint. For instance, removing large amounts of content or links to improve UX may damage the site’s ability to target business-critical keywords or result in crawling and indexing issues. In either case, such changes could damage the site’s organic search visibility. On the other hand, having too much text copy and few images may have a negative impact on user engagement and damage the site’s conversions.
To avoid risks, appoint experienced SEO and UX consultants so they can discuss the potential consequences of every single change with key business stakeholders who understand the business intricacies better than anyone else. The pros and cons of each option need to be weighed before making any decision.
Late involvement
Site migrations can span several months, require great planning and enough time for testing. Seeking professional support late is very risky because crucial steps may have been missed.
Lack of testing
In addition to a great strategy and thoughtful plan, dedicate some time and effort to thorough testing before launching the site. It’s far better to delay the launch because testing has identified critical issues than to rush a sketchy implementation into production. It goes without saying that you should not launch a website if it hasn’t been tested by both expert SEO and UX teams.
Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.
Slow response to bug fixing
There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.
Underestimating scale
Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let's launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.
It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.
Site migration process
The site migration process can be split into six essential phases. They are all equally important, and skipping any of the tasks below could hinder the migration’s success to varying extents.
Phase 1: Scope & Planning
Work out the project scope
Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.
A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.
However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.
Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.
You should now be left with a prioritized list of activities which are expected to have a positive ROI, if implemented. These should then be communicated and discussed with all stakeholders, so you set realistic targets, agree on the project scope, and set the right expectations from the outset.
Prepare the project plan
Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.
The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.
A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.
Phase 2: Pre-launch preparation
This phase covers any activities that need to be carried out while the new site is still under development. By this point, the new site’s SEO requirements should have been captured already. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes well before the new site becomes available on a staging environment.
Wireframes review
Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.
Preparing the technical SEO specifications
Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.
The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.
Make sure to include specific requirements that cover at least the following areas:
URL structure
Meta data (including dynamically generated default values)
Structured data
Canonicals and meta robots directives
Copy & headings
Main & secondary navigation
Internal linking (in any form)
Pagination
XML sitemap(s)
HTML sitemap
Hreflang (if there are international sites)
Mobile setup (including the app, AMP, or PWA site)
Redirects
Custom 404 page
JavaScript, CSS, and image files
Page loading times (for desktop & mobile)
The specification should also cover the areas of CMS functionality that allow users to:
Specify custom URLs and override default ones
Update page titles
Update meta descriptions
Update any h1–h6 headings
Add or amend the default canonical tag
Set the meta robots attributes to index/noindex/follow/nofollow
Add or edit the alt text of each image
Include Open Graph fields for description, URL, image, type, sitename
Include Twitter Open Graph fields for card, URL, title, description, image
Bulk upload or amend redirects
Update the robots.txt file
It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).
Identifying priority pages
One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.
In order to do this, you need to:
Crawl the legacy site
Identify all indexable pages
Identify top performing pages
How to crawl the legacy site
Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links etc. Regardless of the crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:
Ignore robots.txt (in case any vital parts are accidentally blocked)
Follow internal “nofollow” links (so the crawler reaches more pages)
Crawl all subdomains (depending on scope)
Crawl outside start folder (depending on scope)
Change the user agent to Googlebot (desktop)
Change the user agent to Googlebot (smartphone)
Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.
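Dedicated crawlers are usually the right tool here, but if you want a feel for what the crawl needs to capture, a minimal Python sketch along these lines collects the basics (URL, server status, page title, canonical tag, meta robots) into a CSV. The start URL is a placeholder, and a production crawl would need politeness delays, retries, and robots.txt awareness that this sketch deliberately omits.

```python
# Minimal single-host crawl sketch: captures URL, status, title, canonical,
# and meta robots into a CSV. The start URL is a placeholder.
import csv
from urllib.parse import urljoin, urldefrag, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://www.legacy-site.example/"
HOST = urlparse(START_URL).netloc
MAX_PAGES = 500  # keep the sketch bounded

seen, queue, rows = set(), [START_URL], []
while queue and len(seen) < MAX_PAGES:
    url = queue.pop(0)
    if url in seen:
        continue
    seen.add(url)
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        continue  # a real crawler would log and retry
    row = {"url": url, "status": resp.status_code,
           "title": "", "canonical": "", "meta_robots": ""}
    if "text/html" in resp.headers.get("Content-Type", ""):
        soup = BeautifulSoup(resp.text, "html.parser")
        if soup.title and soup.title.string:
            row["title"] = soup.title.string.strip()
        canonical = soup.find("link", rel="canonical")
        if canonical and canonical.get("href"):
            row["canonical"] = canonical["href"]
        robots = soup.find("meta", attrs={"name": "robots"})
        if robots and robots.get("content"):
            row["meta_robots"] = robots["content"]
        # Queue same-host links only; drop #fragments to avoid duplicates.
        for a in soup.find_all("a", href=True):
            link = urldefrag(urljoin(url, a["href"]))[0]
            if urlparse(link).netloc == HOST and link not in seen:
                queue.append(link)
    rows.append(row)

with open("legacy_crawl.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```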
How to identify the indexable pages
Once the crawl is complete, work on identifying the legacy site’s indexable pages. These are any HTML pages with the following characteristics:
Return a 200 server response
Either do not have a canonical tag or have a self-referring canonical URL
Do not have a meta robots noindex
Aren’t excluded from the robots.txt file
Are internally linked from other pages (non-orphan pages)
The indexable pages are the only pages that have the potential to drive traffic to the site and therefore need to be prioritized for the purposes of your site migration. These are the pages worth optimizing (if they will exist on the new site) or redirecting (if they won’t exist on the new site).
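Applying the above criteria can be scripted against your crawl export. The sketch below is a minimal illustration, assuming the legacy_crawl.csv file and column names from the earlier crawl sketch; real crawl exports will need the column names adjusted, and the canonical comparison may need URL normalization (e.g. trailing slashes).

```python
# Filter crawl data down to indexable pages using the criteria listed above.
import csv
from urllib import robotparser

rp = robotparser.RobotFileParser("https://www.legacy-site.example/robots.txt")
rp.read()

indexable = []
with open("legacy_crawl.csv") as f:
    for row in csv.DictReader(f):
        if int(row["status"]) != 200:
            continue  # only 200 responses qualify
        if row["canonical"] and row["canonical"] != row["url"]:
            continue  # canonicalised to another URL
        if "noindex" in row["meta_robots"].lower():
            continue  # explicitly excluded from the index
        if not rp.can_fetch("*", row["url"]):
            continue  # blocked by robots.txt
        indexable.append(row["url"])

# Pages found by following links are internally linked by definition, so
# non-orphan status is already implied by their presence in the crawl file.
print(f"{len(indexable)} indexable pages found")
```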
How to identify the top performing pages
Once you’ve identified all indexable pages, you may have to carry out more work, especially if the legacy site consists of a large number of pages and optimizing or redirecting all of them is impossible due to time, resource, or technical constraints.
If this is the case, you should identify the legacy site’s top performing pages. This will help with the prioritization of the pages to focus on during the later stages.
It’s recommended to prepare a spreadsheet that includes the below fields:
Legacy URL (include only the indexable ones from the crawl data)
Organic visits during the last 12 months (Analytics)
Revenue, conversions, and conversion rate during the last 12 months (Analytics)
Pageviews during the last 12 months (Analytics)
Number of clicks from the last 90 days (Search Console)
Top linked pages (Majestic SEO/Ahrefs)
With the above information in one place, it’s now much easier to identify your most important pages: the ones that generate organic visits, convert well, contribute to revenue, have a good number of referring domains linking to them, etc. These are the pages that you must focus on for a successful site migration.
The top performing pages should ideally also exist on the new site. If for any reason they don’t, they should be redirected to the most relevant page so that users requesting them do not land on 404 pages and the link equity they previously had remains on the site. If any of these pages cease to exist and aren’t properly redirected, your site’s rankings and traffic will negatively be affected.
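Combining these exports is straightforward with a spreadsheet or a few lines of pandas. The sketch below is one way to do it; all file names and column headers are placeholders for whatever your analytics, Search Console, and link tool exports actually contain, and the composite score is deliberately crude.

```python
# Merge the various exports into one prioritisation sheet.
import pandas as pd

crawl = pd.read_csv("indexable_urls.csv")           # column: url
analytics = pd.read_csv("analytics_12m.csv")        # url, organic_visits, revenue, pageviews
search_console = pd.read_csv("gsc_clicks_90d.csv")  # url, clicks
links = pd.read_csv("top_linked_pages.csv")         # url, referring_domains

merged = (
    crawl.merge(analytics, on="url", how="left")
         .merge(search_console, on="url", how="left")
         .merge(links, on="url", how="left")
         .fillna(0)
)

# A crude composite score: percentile-rank each metric, then average.
metrics = ["organic_visits", "revenue", "clicks", "referring_domains"]
merged["priority"] = merged[metrics].rank(pct=True).mean(axis=1)
merged.sort_values("priority", ascending=False).to_csv("priority_pages.csv", index=False)
```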
Benchmarking
Once the launch of the new website is getting close, you should benchmark the legacy site’s performance. Benchmarking is essential, not only to compare the new site’s performance with the previous one but also to help diagnose which areas underperform on the new site and to quickly address them.
Keywords rank tracking
If you don’t track the site’s rankings frequently, you should do so just before the new site goes live. Otherwise, you will later struggle figuring out whether the migration has gone smoothly or where exactly things went wrong. Don’t leave this to the last minute in case something goes awry — a week in advance would be the ideal time.
Spend some time working out which keywords are most representative of the site’s organic search visibility and track them across desktop and mobile. Because monitoring thousands of head, mid-, and long-tail keyword combinations is usually unrealistic, at a bare minimum you should monitor the keywords that are driving traffic to the site (keywords ranking on page one) and that have decent search volume (with a head/mid-tail focus).
If you do get traffic from both brand and non-brand keywords, you should also decide which type of keywords to focus on more from a tracking POV. In general, non-brand keywords tend to be more competitive and volatile. For most sites it would make sense to focus mostly on these.
Don’t forget to track rankings across desktop and mobile. This will make it much easier to diagnose problems post-launch should there be performance issues on one device type. If you receive a high volume of traffic from more than one country, consider rank tracking keywords in other markets, too, because visibility and rankings can vary significantly from country to country.
Site performance
The new site’s page loading times can have a big impact on both traffic and sales. Several studies have shown that the longer a page takes to load, the higher the bounce rate. Unless the old site’s page loading times and site performance scores have been recorded, it will be very difficult to attribute any traffic or revenue loss to site performance related issues once the new site has gone live.
It’s recommended that you review all major page types using Google’s PageSpeed Insights and Lighthouse tools. You could use summary tables like the ones below to benchmark some of the most important performance metrics, which will be useful for comparisons once the new site goes live.
MOBILE

Page type        | Speed   | FCP  | DCL  | Optimization | Optimization score
Homepage         | Fast    | 0.7s | 1.4s | Good         | 81/100
Category page    | Slow    | 1.8s | 5.1s | Medium       | 78/100
Subcategory page | Average | 0.9s | 2.4s | Medium       | 69/100
Product page     | Slow    | 1.9s | 5.5s | Good         | 83/100

DESKTOP

Page type        | Speed | FCP  | DCL  | Optimization | Optimization score
Homepage         | Good  | 0.7s | 1.4s | Average      | 81/100
Category page    | Fast  | 0.6s | 1.2s | Medium       | 78/100
Subcategory page | Fast  | 0.6s | 1.3s | Medium       | 78/100
Product page     | Good  | 0.8s | 1.3s | Good         | 83/100

(FCP = First Contentful Paint; DCL = DOM Content Loaded)
Old site crawl data
A few days before the new site replaces the old one, run a final crawl of the old site. Doing so could later prove invaluable, should there be any optimization issues on the new site. A final crawl will allow you to save vital information about the old site’s page titles, meta descriptions, h1–h6 headings, server status, canonical tags, noindex/nofollow pages, inlinks/outlinks, level, etc. Having all this information available could save you a lot of trouble if, say, the new site isn’t well optimized or suffers from technical misconfiguration issues. Try also to save a copy of the old site’s robots.txt and XML sitemaps in case you need these later.
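Archiving these files takes seconds to script. A minimal sketch, with placeholder URLs, that snapshots robots.txt and the XML sitemap with a date stamp:

```python
# Snapshot the legacy robots.txt and sitemap(s) before the switchover.
import datetime
import requests

SNAPSHOTS = [
    "https://www.legacy-site.example/robots.txt",
    "https://www.legacy-site.example/sitemap.xml",
]
stamp = datetime.date.today().isoformat()
for url in SNAPSHOTS:
    resp = requests.get(url, timeout=10)
    filename = f"{stamp}_{url.rsplit('/', 1)[-1]}"
    with open(filename, "wb") as f:
        f.write(resp.content)
    print(f"Saved {url} -> {filename} ({resp.status_code})")
```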
Search Console data
Also consider exporting as much of the old site’s Search Console data as possible. These are only available for 90 days, and chances are that once the new site goes live the old site’s Search Console data will disappear sooner or later. Data worth exporting includes:
Search analytics queries & pages
Crawl errors
Blocked resources
Mobile usability issues
URL parameters
Structured data errors
Links to your site
Internal links
Index status
Redirects preparation
The redirects implementation is one of the most crucial activities during a site migration. If the legacy site’s URLs cease to exist and aren’t correctly redirected, the website’s rankings and visibility will simply tank.
Why are redirects important in site migrations?
Redirects are extremely important because they help both search engines and users find pages that may no longer exist, have been renamed, or moved to another location. From an SEO point of view, redirects help search engines discover and index a site’s new URLs quicker but also understand how the old site’s pages are associated with the new site’s pages. This association will allow for ranking signals to pass from the old pages to the new ones, so rankings are retained without being negatively affected.
What happens when redirects aren’t correctly implemented?
When redirects are poorly implemented, the consequences can be catastrophic. Users will either land on Not Found pages (404s) or irrelevant pages that do not meet the user intent. In either case, the site’s bounce and conversion rates will be negatively affected. The consequences for search engines can be equally catastrophic: they’ll be unable to associate the old site’s pages with those on the new site if the URLs aren’t identical. Ranking signals won’t be passed over from the old to the new site, which will result in ranking drops and organic search visibility loss. In addition, it will take search engines longer to discover and index the new site’s pages.
301, 302, JavaScript redirects, or meta refresh?
When the URLs between the old and new version of the site are different, use 301 (permanent) redirects. These will tell search engines to index the new URLs as well as forward any ranking signals from the old URLs to the new ones. Therefore, you must use 301 redirects if your site moves to/from another domain/subdomain, if you switch from HTTP to HTTPS, or if the site or parts of it have been restructured. Despite some of Google’s claims that 302 redirects pass PageRank, indexing the new URLs would be slower and ranking signals could take much longer to be passed on from the old to the new page.
302 (temporary) redirects should only be used in situations where a redirect does not need to live permanently and therefore indexing the new URL isn’t a priority. With 302 redirects, search engines will initially be reluctant to index the content of the redirect destination URL and pass any ranking signals to it. However, if the temporary redirects remain for a long period of time without being removed or updated, they could end up behaving similarly to permanent (301) redirects. Use 302 redirects when a redirect is likely to require updating or removal in the near future, as well as for any country-, language-, or device-specific redirects.
Meta refresh and JavaScript redirects should be avoided. Even though Google is getting better and better at crawling JavaScript, there are no guarantees these will get discovered or pass ranking signals to the new pages.
If you’d like to find out more about how Google deals with the different types of redirects, please refer to John Mueller’s post.
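It’s also worth spot-checking what your server actually returns for redirected URLs, since a 302 can easily slip in where a 301 was intended. A minimal sketch with placeholder URLs (some servers handle HEAD requests poorly, in which case swap in a GET):

```python
# Report the redirect type each legacy URL returns.
import requests

legacy_urls = [
    "http://www.website.com/old-page/",      # placeholder URLs
    "http://www.website.com/old-category/",
]
for url in legacy_urls:
    resp = requests.head(url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "-")
    print(f"{resp.status_code}  {url} -> {location}")
    if resp.status_code == 302:
        print("  warning: temporary redirect where a 301 is probably intended")
```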
Redirect mapping process
If you are lucky enough to work on a migration that doesn’t involve URL changes, you could skip this section. Otherwise, read on to find out why any legacy pages that won’t be available on the same URL after the migration should be redirected.
The redirect mapping file is a spreadsheet that includes the following two columns:
Legacy site URL –> a page’s URL on the old site.
New site URL –> a page’s URL on the new site.
When mapping (redirecting) a page from the old to the new site, always try mapping it to the most relevant corresponding page. In cases where a relevant page doesn’t exist, avoid redirecting the page to the homepage. First and foremost, redirecting users to irrelevant pages results in a very poor user experience. Google has stated that redirecting pages “en masse” to irrelevant pages will be treated as soft 404s and because of this won’t be passing any SEO value. If you can’t find an equivalent page on the new site, try mapping it to its parent category page.
Once the mapping is complete, the file will need to be sent to the development team to create the redirects, so that these can be tested before launching the new site. The implementation of redirects is another part in the site migration cycle where things can often go wrong.
Increasing efficiencies during the redirect mapping process
Redirect mapping requires great attention to detail and needs to be carried out by experienced SEOs. The URL mapping on small sites could in theory be done by manually mapping each URL of the legacy site to a URL on the new site. But on large sites that consist of thousands or even hundreds of thousands of pages, manually mapping every single URL is practically impossible and automation needs to be introduced. Relying on certain common attributes between the legacy and new site can be a massive time-saver. Such attributes may include the page titles, H1 headings, or other unique page identifiers such as product codes, SKUs etc. Make sure the attributes you rely on for the redirect mapping are unique and not repeated across several pages; otherwise, you will end up with incorrect mapping.
Pro tip: Make sure the URL structure of the new site is 100% finalized on staging before you start working on the redirect mapping. There’s nothing riskier than mapping URLs that will be updated before the new site goes live. When URLs are updated after the redirect mapping is completed, you may have to deal with undesired situations upon launch, such as broken redirects, redirect chains, and redirect loops. A content-freeze should be placed on the old site well in advance of the migration date, so there is a cut-off point for new content being published on the old site. This will make sure that no pages will be missed from the redirect mapping and guarantee that all pages on the old site get redirected.
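As one illustration of attribute-based mapping, the sketch below joins a legacy crawl and a staging crawl on a shared product SKU and fails loudly on the non-unique keys warned about above. The file names, column names, and the SKU attribute itself are all assumptions; substitute whatever unique identifier your site actually exposes.

```python
# Attribute-based redirect mapping: join the two crawls on a unique key.
import pandas as pd

legacy = pd.read_csv("legacy_crawl_with_skus.csv")  # columns: url, sku
new = pd.read_csv("staging_crawl_with_skus.csv")    # columns: url, sku

# Non-unique keys would produce incorrect mappings, so stop if any exist.
for name, df in (("legacy", legacy), ("new", new)):
    dupes = df[df["sku"].duplicated(keep=False)]
    if not dupes.empty:
        raise ValueError(f"Non-unique SKUs in the {name} crawl:\n{dupes}")

mapping = legacy.merge(new, on="sku", suffixes=("_legacy", "_new"))
mapping[["url_legacy", "url_new"]].to_csv("redirect_mapping.csv", index=False)

# Anything unmatched still needs manual mapping to the most relevant page.
legacy[~legacy["sku"].isin(new["sku"])].to_csv("unmatched_legacy_urls.csv", index=False)
```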
Don’t forget the legacy redirects!
You should get hold of the old site’s existing redirects to ensure they’re considered when preparing the redirect mapping for the new site. Unless you do this, it’s likely that the site’s current redirect file will get overwritten by the new one on the launch date. If this happens, all legacy redirects that were previously in place will cease to exist and the site may lose a decent amount of link equity, the extent of which will largely depend on the site’s volume of legacy redirects. For instance, a site that has undergone a few migrations in the past should have a good number of legacy redirects in place that you don’t want getting lost.
Ideally, preserve as many of the legacy redirects as possible, making sure these won’t cause any issues when combined with the new site’s redirects. It’s strongly recommended to eliminate any potential redirect chains at this early stage, which can easily be done by checking whether the same URL appears both as a “Legacy URL” and “New site URL” in the redirect mapping spreadsheet. If this is the case, you will need to update the “New site URL” accordingly.
Example:
URL A redirects to URL B (legacy redirect)
URL B redirects to URL C (new redirect)
Which results in the following redirect chain:
URL A –> URL B –> URL C
To eliminate this, amend the existing legacy redirect and create a new one so that:
URL A redirects to URL C (amended legacy redirect)
URL B redirects to URL C (new redirect)
Pro tip: Check your redirect mapping spreadsheet for redirect loops. These occur when the “Legacy URL” is identical to the “New site URL.” Redirect loops must be eliminated because they result in infinitely loading pages that are inaccessible to users and search engines, making them instant traffic, conversion, and ranking killers!
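Chain and loop elimination can also be automated once the combined mapping exists. The sketch below works on a simple dict of source-to-destination paths, collapses the URL A -> URL B -> URL C chain from the example above, and raises on loops:

```python
# Collapse redirect chains and detect loops in a redirect mapping.
redirects = {
    "/url-a/": "/url-b/",  # legacy redirect
    "/url-b/": "/url-c/",  # new redirect -> creates the A -> B -> C chain
}

def final_destination(start: str, redirects: dict) -> str:
    """Follow the mapping to its end, raising on redirect loops."""
    seen, url = {start}, start
    while url in redirects:
        url = redirects[url]
        if url in seen:
            raise ValueError(f"Redirect loop detected starting at {start}")
        seen.add(url)
    return url

# Rewrite every entry to point straight at its final destination,
# turning A -> B -> C into A -> C plus B -> C.
flattened = {src: final_destination(src, redirects) for src in redirects}
print(flattened)  # {'/url-a/': '/url-c/', '/url-b/': '/url-c/'}
```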
Implement blanket redirect rules to avoid duplicate content
It’s strongly recommended to try working out redirect rules that cover as many URL requests as possible. Implementing redirect rules on a web server is much more efficient than relying on numerous one-to-one redirects. If your redirect mapping document consists of a very large number of redirects that need to be implemented as one-to-one redirect rules, site performance could be negatively affected. In any case, double check with the development team the maximum number of redirects the web server can handle without issues.
In any case, there are some standard redirect rules that should be in place to avoid generating duplicate content issues:
URL case: All URLs containing upper-case characters should be 301 redirected to all lower-case URLs, e.g. https://www.website.com/Page/ should be automatically redirecting to https://www.website.com/page/
Host: For instance, all non-www URLs should be 301 redirected to their www equivalent, e.g. https://website.com/page/ should be redirected to https://www.website.com/page/
Protocol: On a secure website, requests for HTTP URLs should be redirected to the equivalent HTTPS URL, e.g. http://www.website.com/page/ should automatically redirect to https://www.website.com/page/
Trailing slash: For instance, any URLs not containing a trailing slash should redirect to a version with a trailing slash, e.g. http://www.website.com/page should redirect to http://www.website.com/page/
Even if some of these standard redirect rules exist on the legacy website, do not assume they’ll necessarily exist on the new site unless they’re explicitly requested.
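The blanket rules themselves live in the web server configuration, but it can help to encode the expected behaviour in a small test harness. The sketch below expresses the four rules as a Python normalisation function you could run against sample URLs; real server rules typically carve out exceptions (e.g. file extensions shouldn’t gain trailing slashes) that this sketch ignores.

```python
# Express the four blanket redirect rules as a URL normalisation function.
from urllib.parse import urlsplit, urlunsplit

def expected_destination(url: str) -> str:
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if not host.startswith("www."):
        host = "www." + host          # host rule: force www
    path = parts.path.lower()         # URL case rule: force lower-case
    if not path.endswith("/"):
        path += "/"                   # trailing slash rule
    # protocol rule: force HTTPS
    return urlunsplit(("https", host, path, parts.query, parts.fragment))

assert expected_destination("http://website.com/Page") == "https://www.website.com/page/"
```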
Avoid internal redirects
Try updating the site’s internal links so they don’t trigger internal redirects. Even though search engines can follow internal redirects, these are not recommended because they add additional latency to page loading times and could also have a negative impact on search engine crawl time.
Don’t forget your image files
If the site’s images have moved to a new location, Google recommends redirecting the old image URLs to the new image URLs to help Google discover and index the new images quicker. If it’s not easy to redirect all images, aim to redirect at least those image URLs that have accrued backlinks.
Phase 3: Pre-launch testing
The earlier you can start testing, the better. Certain things need to be fully implemented to be tested, but others don’t. For example, user journey issues could be identified as early as the prototype or wireframe designs. Content-related issues between the old and new site, or content inconsistencies (e.g. between the desktop and mobile site), could also be identified at an early stage. But the more technical components, such as redirects, canonical tags, or XML sitemaps, should only be tested once fully implemented. The earlier issues get identified, the more likely it is that they’ll be addressed before launching the new site. Identifying certain types of issues at a later stage isn’t cost-effective, requires more resources, and causes significant delays. Poor testing, and not allowing the time required to thoroughly test all the building blocks that can affect SEO and UX performance, can have disastrous consequences soon after the new site has gone live.
Making sure search engines cannot access the staging/test site
Before making the new site available on a staging/testing environment, take precautions to ensure that search engines do not index it. There are a few different ways to do this, each with different pros and cons.
Site available to specific IPs (most recommended)
Making the test site available only to specific (whitelisted) IP addresses is a very effective way to prevent search engines from crawling it. Anyone trying to access the test site’s URL won’t be able to see any content unless their IP has been whitelisted. The main advantage is that whitelisted users could easily access and crawl the site without any issues. The only downside is that third-party web-based tools (such as Google’s tools) cannot be used because of the IP restrictions.
Password protection
Password protecting the staging/test site is another way to keep search engine crawlers away, but this solution has two main downsides. Depending on the implementation, it may not be possible to crawl and test a password-protected website if the crawler application doesn’t make it past the login screen. The other downside: password-protected websites that use forms for authentication can be crawled using third-party applications, but there is a risk of causing severe and unexpected issues. This is because the crawler clicks on every link on a page (when you’re logged in) and could easily end up clicking on links that create or remove pages, install/uninstall plugins, etc.
Robots.txt blocking
Adding the following lines of code to the test site’s robots.txt file will prevent search engines from crawling the test site’s pages.
User-agent: *
Disallow: /
One downside of this method is that even though the content that appears on the test server won’t get indexed, the disallowed URLs may appear on Google’s search results. Another downside is that if the above robots.txt file moves into the live site, it will cause severe de-indexing issues. This is something I’ve encountered numerous times and for this reason I wouldn’t recommend using this method to block search engines.
User journey review
If the site has been redesigned or restructured, chances are that the user journeys will be affected to some extent. Reviewing the user journeys as early as possible and well before the new site launches is difficult due to the lack of user data. However, an experienced UX professional will be able to flag any concerns that could have a negative impact on the site’s conversion rate. Because A/B testing at this stage is hardly ever possible, it might be worth carrying out some user testing and try to get some feedback from real users. Unfortunately, user experience issues can be some of the harder ones to address because they may require sitewide changes that take a lot of time and effort.
On full site overhauls, not all UX decisions can always be backed up by data and many decisions will have to be based on best practice, past experience, and “gut feeling,” hence getting UX/CRO experts involved as early as possible could pay dividends later.
Site architecture review
A site migration is often a great opportunity to improve the site architecture. In other words, you have a great chance to reorganize your keyword targeted content and maximize its search traffic potential. Carrying out extensive keyword research will help identify the best possible category and subcategory pages so that users and search engines can get to any page on the site within a few clicks — the fewer the better, so you don’t end up with a very deep taxonomy.
Identifying new keywords with decent traffic potential and mapping them into new landing pages can make a big difference to the site’s organic traffic levels. On the other hand, enhancing the site architecture needs to be done thoughtfully. It could cause problems if, say, important pages move deeper into the new site architecture or there are too many similar pages optimized for the same keywords. Some of the most successful site migrations are the ones that allocate significant resources to enhance the site architecture.
Meta data & copy review
Make sure that the site’s page titles, meta descriptions, headings, and copy have been transferred from the old to the new site without issues. If you’ve created any new pages, make sure these are optimized and don’t target keywords that have already been targeted by other pages. If you’re re-platforming, be aware that the new platform may have different default values when new pages are being created. Launching the new site without properly optimized page titles or any kind of missing copy will have an immediate negative impact on your site’s rankings and traffic. Do not forget to review whether any user-generated content (i.e. user reviews, comments) has also been uploaded.
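To make this review less manual, a simple script can diff the key on-page elements between the old and new crawls. Below is a minimal sketch, assuming two crawl exports in CSV format with "Address", "Title 1", and "Meta Description 1" columns (typical of a Screaming Frog export; adjust the names to your crawler’s output):

# A minimal sketch comparing page titles and meta descriptions between two
# crawl exports. The file and column names are assumptions based on a
# typical crawler export - adjust to match your own data.
import csv
from urllib.parse import urlparse

def load_crawl(path):
    """Map each URL path to its title and meta description."""
    pages = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            key = urlparse(row["Address"]).path
            pages[key] = (row.get("Title 1", ""), row.get("Meta Description 1", ""))
    return pages

old_pages = load_crawl("old_site_crawl.csv")
new_pages = load_crawl("new_site_crawl.csv")

for path, (old_title, old_desc) in old_pages.items():
    if path not in new_pages:
        print(f"MISSING on new site: {path}")
        continue
    new_title, new_desc = new_pages[path]
    if old_title != new_title:
        print(f"TITLE CHANGED: {path}: {old_title!r} -> {new_title!r}")
    if old_desc != new_desc:
        print(f"DESCRIPTION CHANGED: {path}: {old_desc!r} -> {new_desc!r}")

Matching on the URL path rather than the full URL keeps the comparison meaningful even when the domain or protocol changes between the two environments.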
Internal linking review
Internal links are the backbone of a website. No matter how well optimized and structured the site’s copy is, it won’t be sufficient to succeed unless it’s supported by a flawless internal linking scheme. Internal links must be reviewed throughout the entire site, including links found in:
Main & secondary navigation
Header & footer links
Body content links
Pagination links
Horizontal links (related articles, similar products, etc)
Vertical links (e.g. breadcrumb navigation)
Cross-site links (e.g. links across international sites)
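For a first pass over all of the above link types, a small crawler script can surface broken internal links before a full audit. Below is a minimal breadth-first sketch, assuming a hypothetical staging URL and the third-party "requests" library; a dedicated crawler (see the appendix) remains the right tool for the complete job:

# A minimal breadth-first sketch that follows internal links from a start
# page and flags broken ones (4xx/5xx). The link extraction is a naive
# regex and the crawl is capped at 200 pages.
import re
from collections import deque
from urllib.parse import urljoin, urlparse

import requests

start = "https://staging.example.com/"  # hypothetical staging URL
host = urlparse(start).netloc
queue, seen = deque([start]), {start}

while queue and len(seen) <= 200:
    page = queue.popleft()
    resp = requests.get(page, timeout=10)
    if resp.status_code >= 400:
        print(f"BROKEN ({resp.status_code}): {page}")
        continue
    for href in re.findall(r'href=["\']([^"\'#]+)', resp.text):
        url = urljoin(page, href)
        # Stay on the staging host and avoid revisiting pages
        if urlparse(url).netloc == host and url not in seen:
            seen.add(url)
            queue.append(url)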
Technical checks
A series of technical checks must be carried out to make sure the new site’s technical setup is sound and to avoid coming across major technical glitches after the new site has gone live.
Robots.txt file review
Prepare the new site’s robots.txt file on the staging environment. This way you can test it for errors or omissions and avoid experiencing search engine crawl issues when the new site goes live. A classic mistake in site migrations is when the robots.txt file prevents search engine access using the following directive:
Disallow: /
If this gets accidentally carried over into the live site (and it often does), it will prevent search engines from crawling the site. And when search engines cannot crawl an indexed page, the keywords associated with the page will get demoted in the search results and eventually the page will get de-indexed.
But if the robots.txt file on staging is populated with the new site’s robots.txt directives, this mishap could be avoided.
When preparing the new site’s robots.txt file, make sure that:
It doesn’t block search engine access to pages that are intended to get indexed.
It doesn’t block any JavaScript or CSS resources search engines require to render page content.
The legacy site’s robots.txt file content has been reviewed and carried over if necessary.
It references the new XML sitemap(s) rather than any legacy ones that no longer exist.
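A quick way to sanity-check the first two points in the list above is to run the staging robots.txt through Python’s built-in parser. This is a minimal sketch, assuming a hypothetical staging hostname:

# A minimal sketch that checks a sample of must-crawl URLs against the new
# robots.txt rules, using Python's standard library robots.txt parser.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://staging.example.com/robots.txt")  # hypothetical staging host
rp.read()

# URLs that MUST remain crawlable once the site is live
must_allow = [
    "https://staging.example.com/",
    "https://staging.example.com/category/product-page",
    "https://staging.example.com/assets/main.css",  # CSS/JS must not be blocked
    "https://staging.example.com/assets/app.js",
]

for url in must_allow:
    if not rp.can_fetch("Googlebot", url):
        print(f"BLOCKED for Googlebot: {url}")

Running the same check against the live robots.txt immediately after launch also guards against the blanket Disallow mishap described earlier.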
Canonical tags review
Review the site’s canonical tags. Look for pages that either do not have a canonical tag or have a canonical tag pointing to another URL, and question whether this is intended. Don’t forget to crawl the canonical tags to find out whether they return a 200 server response. If they don’t, you’ll need to update them to eliminate any 3xx, 4xx, or 5xx server responses. You should also look for pages that have a canonical tag pointing to another URL combined with a noindex directive, because these two are conflicting signals and you’ll need to eliminate one of them.
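A rough automated pass over these checks might look like the sketch below. It assumes hypothetical staging URLs, the third-party "requests" library, and a naive regex that expects rel before href in the canonical tag; your crawler’s canonical report is the more robust source for a full audit:

# A minimal sketch that extracts each page's canonical URL and verifies
# that it returns a 200 server response.
import re
import requests

pages_to_check = [
    "https://staging.example.com/",           # hypothetical staging URLs
    "https://staging.example.com/category/",
]

canonical_re = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']', re.I)

for page in pages_to_check:
    html = requests.get(page, timeout=10).text
    match = canonical_re.search(html)
    if not match:
        print(f"NO CANONICAL: {page}")
        continue
    canonical = match.group(1)
    status = requests.get(canonical, timeout=10).status_code
    if status != 200:
        print(f"CANONICAL RETURNS {status}: {page} -> {canonical}")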
Meta robots review
Once you’ve crawled the staging site, look for pages with the meta robots properties set to “noindex” or “nofollow.” If this is the case, review each one of them to make sure this is intentional and remove the “noindex” or “nofollow” directive if it isn’t.
XML sitemaps review
Prepare two different types of sitemaps: one that contains all the new site’s indexable pages, and another that includes all the old site’s indexable pages. The former will help make Google aware of the new site’s indexable URLs. The latter will help Google become aware of the redirects that are in place and the fact that some of the indexed URLs have moved to new locations, so that it can discover them and update search results quicker.
You should check each XML sitemap to make sure that:
It validates without issues
It is encoded as UTF-8
It does not contain more than 50,000 rows
Its size does not exceed 50MB when uncompressed
If there are more than 50K rows or the file size exceeds 50MB, you must break the sitemap down into smaller ones. This prevents the server from becoming overloaded if Google requests the sitemap too frequently.
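If you generate the sitemaps yourself, the splitting logic is straightforward. Here is a minimal sketch (URL and file names are illustrative) that chunks a URL list into files of at most 50,000 entries:

# A minimal sketch that splits a list of indexable URLs into sitemap files
# of at most 50,000 URLs each. Each generated file would then be referenced
# from a sitemap index file.
from xml.sax.saxutils import escape

MAX_URLS = 50000

def write_sitemaps(urls, prefix="sitemap"):
    files = []
    for i in range(0, len(urls), MAX_URLS):
        name = f"{prefix}-{i // MAX_URLS + 1}.xml"
        with open(name, "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for url in urls[i:i + MAX_URLS]:
                f.write(f"  <url><loc>{escape(url)}</loc></url>\n")
            f.write("</urlset>\n")
        files.append(name)
    return files

files = write_sitemaps(["https://www.example.com/page-1"])  # illustrative call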
In addition, you must crawl each XML sitemap to make sure it only includes indexable URLs. Any non-indexable URLs should be excluded from the XML sitemaps, such as:
3xx, 4xx, and 5xx pages (e.g. redirected, not found pages, bad requests, etc)
Soft 404s. These are pages with no content that return a 200 server response, instead of a 404.
Canonicalized pages (apart from self-referring canonical URLs)
Pages with a meta robots noindex directive
<!DOCTYPE html>
<html>
<head>
<meta name="robots" content="noindex" />
(…)
</head>
<body>(…)</body>
</html>
Pages with a noindex X-Robots-Tag in the HTTP header
HTTP/1.1 200 OK
Date: Tue, 10 Nov 2017 17:12:43 GMT
(…)
X-Robots-Tag: noindex
(…)
Pages blocked from the robots.txt file
Building clean XML sitemaps can help you monitor the true indexing levels of the new site once it goes live. Without them, it will be very difficult to spot any indexing issues.
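Crawling the sitemap URLs for the non-indexable cases listed above can be scripted as a first pass. Below is a minimal sketch, assuming a hypothetical sitemap URL, the third-party "requests" library, and a naive regex for the meta robots tag:

# A minimal sketch that checks every URL in an XML sitemap and flags
# non-indexable entries: non-200 responses, noindex X-Robots-Tag headers,
# and noindex meta robots tags.
import re
import requests
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
sitemap_xml = requests.get("https://staging.example.com/sitemap.xml", timeout=10).text
urls = [loc.text for loc in ET.fromstring(sitemap_xml).findall(".//sm:loc", NS)]

noindex_meta = re.compile(r'<meta[^>]+name=["\']robots["\'][^>]+noindex', re.I)

for url in urls:
    # allow_redirects=False so 3xx entries are caught as non-200
    resp = requests.get(url, timeout=10, allow_redirects=False)
    if resp.status_code != 200:
        print(f"{resp.status_code}: {url}")
    elif "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
        print(f"X-Robots-Tag noindex: {url}")
    elif noindex_meta.search(resp.text):
        print(f"meta robots noindex: {url}")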
Pro tip: Download and open each XML sitemap in Excel to get a detailed overview of any additional attributes, such as hreflang or image attributes.
HTML sitemap review
Depending on the size and type of site that is being migrated, having an HTML sitemap can in certain cases be beneficial. An HTML sitemap that consists of URLs that aren’t linked from the site’s main navigation can significantly boost page discovery and indexing. However, avoid generating an HTML sitemap that includes too many URLs. If you do need to include thousands of URLs, consider building a segmented HTML sitemap.
The number of nested sitemaps as well as the maximum number of URLs you should include in each sitemap depends on the site’s authority. The more authoritative a website, the higher the number of nested sitemaps and URLs it could get away with.
For example, the NYTimes.com HTML sitemap consists of three levels, where each one includes over 1,000 URLs per sitemap. These nested HTML sitemaps aid search engine crawlers in discovering articles published since 1851 that otherwise would be difficult to discover and index, as not all of them would have been internally linked.
The NYTimes HTML sitemap (level 1)
The NYTimes HTML sitemap (level 2)
Structured data review
Errors in the structured data markup need to be identified early so there’s time to fix them before the new site goes live. Ideally, you should test every single page template (rather than every single page) using Google’s Structured Data Testing tool.
Be sure to check the markup on both the desktop and mobile pages, especially if the mobile website isn’t responsive.
Bear in mind that the tool only reports errors in existing markup; it won’t flag omissions. For example, if your product page template does not include the Product structured data schema, the tool won’t report any errors. So, in addition to checking for errors, you should also make sure that each page template includes the appropriate structured data markup for its content type.
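One way to check for omissions at scale is to extract the structured data types present on each page template. The sketch below assumes JSON-LD markup, a hypothetical product URL, and the third-party "requests" library; Microdata or RDFa would need a dedicated parser:

# A minimal sketch that extracts JSON-LD blocks from a page template and
# lists the schema.org types found, helping spot templates with missing
# markup (e.g. a product template with no Product schema).
import json
import re
import requests

url = "https://staging.example.com/product/example"  # hypothetical URL
html = requests.get(url, timeout=10).text

blocks = re.findall(
    r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    html, re.I | re.S)
types = []
for block in blocks:
    try:
        data = json.loads(block)
    except ValueError:
        print("INVALID JSON-LD block found")
        continue
    items = data if isinstance(data, list) else [data]
    types += [item.get("@type") for item in items if isinstance(item, dict)]
print(f"Schema types on {url}: {types or 'NONE'}")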
Please refer to Google’s documentation for the most up to date details on the structured data implementation and supported content types.
JavaScript crawling review
You must test every single page template of the new site to make sure Google will be able to crawl content that requires JavaScript parsing. If you’re able to use Google’s Fetch and Render tool on your staging site, you should definitely do so. Otherwise, carry out some manual tests, following Justin Briggs’ advice.
As Bartosz Góralewicz’s tests proved, even though Google is able to crawl and index JavaScript-generated content, that doesn’t mean it can crawl JavaScript content across all major JavaScript frameworks. The following table summarizes Bartosz’s findings, showing that some JavaScript frameworks are not SEO-friendly, with AngularJS currently being the most problematic of all.
Bartosz also found that other search engines (such as Bing, Yandex, and Baidu) really struggle with indexing JavaScript-generated content, which is important to know if your site’s traffic relies on any of these search engines.
Hopefully, this is something that will improve over time, but with the increasing popularity of JavaScript frameworks in web development, this must be high up on your checklist.
Finally, you should check whether any external resources are being blocked. Unfortunately, this isn’t something you can control 100% because many resources (such as JavaScript and CSS files) are hosted by third-party websites which may be blocking them via their own robots.txt files!
Again, the Fetch and Render tool can help diagnose this type of issue that, if left unresolved, could have a significant negative impact.
Mobile site SEO review
Assets blocking review
First, make sure that the robots.txt file isn’t accidentally blocking any JavaScript, CSS, or image files that are essential for the mobile site’s content to render. This could have a negative impact on how search engines render and index the mobile site’s page content, which in turn could negatively affect the mobile site’s search visibility and performance.
Mobile-first index review
In order to avoid any issues associated with Google’s mobile-first index, thoroughly review the mobile website and make sure there aren’t any inconsistencies between the desktop and mobile sites in the following areas:
Page titles
Meta descriptions
Headings
Copy
Canonical tags
Meta robots attributes (i.e. noindex, nofollow)
Internal links
Structured data
A responsive website should serve the same content, links, and markup across devices, and the above SEO attributes should be identical across the desktop and mobile websites.
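A lightweight way to spot-check this parity is to request the same URL with a desktop and a mobile user-agent and diff a few of the attributes listed above. This is a minimal sketch with naive regex extraction and a hypothetical URL; replace the abbreviated user-agent strings with the current Googlebot ones:

# A minimal sketch that fetches the same URL as desktop and mobile
# Googlebot and compares the title, h1, and canonical URL.
import re
import requests

DESKTOP_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
MOBILE_UA = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
             "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 "
             "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
             "+http://www.google.com/bot.html)")

def extract(html, pattern):
    m = re.search(pattern, html, re.I | re.S)
    return m.group(1).strip() if m else None

def snapshot(url, ua):
    html = requests.get(url, headers={"User-Agent": ua}, timeout=10).text
    return {
        "title": extract(html, r"<title[^>]*>(.*?)</title>"),
        "h1": extract(html, r"<h1[^>]*>(.*?)</h1>"),
        "canonical": extract(html, r'rel=["\']canonical["\'][^>]*href=["\']([^"\']+)'),
    }

url = "https://staging.example.com/category/"  # hypothetical URL
desktop, mobile = snapshot(url, DESKTOP_UA), snapshot(url, MOBILE_UA)
for field in desktop:
    if desktop[field] != mobile[field]:
        print(f"MISMATCH {field}: desktop={desktop[field]!r} mobile={mobile[field]!r}")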
In addition to the above, you must carry out a few further technical checks depending on the mobile site’s set up.
Responsive site review
A responsive website must serve all devices the same HTML code, which is adjusted (via the use of CSS) depending on the screen size.
Googlebot is able to automatically detect this mobile setup as long as it’s allowed to crawl the page and its assets. It’s therefore extremely important to make sure that Googlebot can access all essential assets, such as images, JavaScript, and CSS files.
To signal to browsers that a page is responsive, a meta viewport tag should be in place within the <head> of each HTML page.
<meta name="viewport" content="width=device-width, initial-scale=1.0">
If the meta viewport tag is missing, font sizes may appear in an inconsistent manner, which may cause Google to treat the page as not mobile-friendly.
Separate mobile URLs review
If the mobile website uses separate URLs from desktop, make sure that:
Each desktop page has a rel="alternate" tag pointing to the corresponding mobile URL.
Each mobile page has a rel="canonical" tag pointing to the corresponding desktop URL.
When desktop URLs are requested on mobile devices, they’re redirected to the respective mobile URL.
Redirects work across all mobile devices, including Android, iPhone, and Windows phones.
There aren’t any irrelevant cross-links between the desktop and mobile pages. This means that internal links found on a desktop page should only link to desktop pages, and those found on a mobile page should only link to other mobile pages.
The mobile URLs return a 200 server response.
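These checks can be partially automated per URL pair. Below is a minimal sketch, assuming a hypothetical desktop-to-mobile URL mapping and the third-party "requests" library:

# A minimal sketch for a separate-URL (m.) mobile setup: checks that mobile
# user-agents get redirected to the mobile URL, that it returns a 200, and
# that the mobile page canonicalizes back to the desktop URL.
import re
import requests

MOBILE_UA = "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X) AppleWebKit/537.36 Mobile"
mapping = {  # hypothetical desktop -> mobile pairs
    "https://www.example.com/page": "https://m.example.com/page",
}

for desktop_url, mobile_url in mapping.items():
    resp = requests.get(desktop_url, headers={"User-Agent": MOBILE_UA},
                        timeout=10, allow_redirects=True)
    if resp.url != mobile_url:
        print(f"NO MOBILE REDIRECT: {desktop_url} landed on {resp.url}")
    if resp.status_code != 200:
        print(f"MOBILE URL RETURNS {resp.status_code}: {mobile_url}")
    canonical = re.search(r'rel=["\']canonical["\'][^>]*href=["\']([^"\']+)',
                          resp.text, re.I)
    if not canonical or canonical.group(1) != desktop_url:
        print(f"CANONICAL DOES NOT POINT TO DESKTOP: {mobile_url}")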
Dynamic serving review
Dynamic serving websites serve different code to each device, but on the same URL.
On dynamic serving websites, review whether the vary HTTP header has been correctly set up. This is necessary because dynamic serving websites alter the HTML for mobile user agents and the vary HTTP header helps Googlebot discover the mobile content.
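Checking for the header only takes a few lines. This is a minimal sketch, assuming hypothetical URLs and the third-party "requests" library:

# A minimal sketch that verifies dynamically served pages return a
# "Vary: User-Agent" HTTP response header.
import requests

for url in ["https://www.example.com/", "https://www.example.com/category/"]:
    vary = requests.head(url, timeout=10, allow_redirects=True).headers.get("Vary", "")
    if "user-agent" not in vary.lower():
        print(f"Missing Vary: User-Agent header on {url} (Vary: {vary!r})")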
Mobile-friendliness review
Regardless of the mobile site set-up (responsive, separate URLs or dynamic serving), review the pages using a mobile user-agent and make sure that:
The viewport has been set correctly. Using a fixed width viewport across devices will cause mobile usability issues.
The font size isn’t too small.
Touch elements (i.e. buttons, links) aren’t too close.
There aren’t any intrusive interstitials, such as ads, mailing list sign-up forms, app download pop-ups, etc. To avoid any issues, use either a small HTML or image banner.
Mobile pages aren’t too slow to load (see next section).
Google’s mobile-friendly test tool can help diagnose most of the above issues:
Google’s mobile-friendly test tool in action
AMP site review
If there is an AMP website and a desktop version of the site is available, make sure that:
Each non-AMP page (i.e. desktop, mobile) has a rel="amphtml" tag pointing to the corresponding AMP URL.
Each AMP page has a rel="canonical" tag pointing to the corresponding desktop page.
Any AMP page that does not have a corresponding desktop URL has a self-referring canonical tag.
You should also make sure that the AMPs are valid. This can be tested using Google’s AMP Test Tool.
Mixed content errors
With Google pushing hard for sites to be fully secure and Chrome becoming the first browser to flag HTTP pages as not secure, aim to launch the new site on HTTPS, making sure all resources such as images, CSS, and JavaScript files are requested over secure HTTPS connections. This is essential in order to avoid mixed content issues.
Mixed content occurs when a page that’s loaded over a secure HTTPS connection requests assets over insecure HTTP connections. Most browsers either block dangerous HTTP requests or just display warnings that hinder the user experience.
Mixed content errors in Chrome’s JavaScript Console
There are many ways to identify mixed content errors, including the use of crawler applications, Google’s Lighthouse, etc.
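A basic scripted pass can also catch the most obvious cases by scanning the raw HTML for http:// references in src and link attributes. This is a minimal sketch with a hypothetical URL; it won’t catch assets injected by JavaScript at runtime, which is why browser-based tools are still needed:

# A minimal sketch that scans an HTTPS page's HTML for subresources
# referenced over insecure HTTP.
import re
import requests

page = "https://www.example.com/"  # hypothetical URL
html = requests.get(page, timeout=10).text

patterns = [
    r'\ssrc=["\'](http://[^"\']+)',          # scripts, images, iframes
    r'<link[^>]+href=["\'](http://[^"\']+)',  # stylesheets and other link tags
]
insecure = set()
for pattern in patterns:
    insecure.update(re.findall(pattern, html, re.I))
for asset in sorted(insecure):
    print(f"Insecure asset reference: {asset}")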
Image assets review
Google crawls images less frequently than HTML pages. If migrating a site’s images from one location to another (e.g. from your domain to a CDN), there are ways to aid Google in discovering the migrated images quicker. Building an image XML sitemap will help, but you also need to make sure that Googlebot can reach the site’s images when crawling the site. The tricky part with image indexing is that both the web page an image appears on and the image file itself have to get indexed.
Site performance review
Last but not least, measure the old site’s page loading times and see how these compare with the new site’s when this becomes available on staging. At this stage, focus on the network-independent aspects of performance such as the use of external resources (images, JavaScript, and CSS), the HTML code, and the web server’s configuration. More information about how to do this is available further down.
Analytics tracking review
Make sure that analytics tracking is properly set up. This review should ideally be carried out by specialist analytics consultants who will look beyond the implementation of the tracking code. Make sure that Goals and Events are properly set up, e-commerce tracking is implemented, enhanced e-commerce tracking is enabled, etc. There’s nothing more frustrating than having no analytics data after your new site is launched.
Redirects testing
Testing the redirects before the new site goes live is critical and can save you a lot of trouble later. There are many ways to check the redirects on a staging/test server, but the bottom line is that you should not launch the new website without having tested the redirects.
Once the redirects become available on the staging/testing environment, crawl the entire list of redirects and check for the following issues:
Redirect loops (a URL that infinitely redirects to itself)
Redirects with a 4xx or 5xx server response.
Redirect chains (a URL that redirects to another URL, which in turn redirects to another URL, etc).
Canonical URLs that return a 4xx or 5xx server response.
Canonical loops (page A has a canonical pointing to page B, which has a canonical pointing to page A).
Canonical chains (a canonical that points to another page that has a canonical pointing to another page, etc).
Protocol/host inconsistencies e.g. URLs are redirected to both HTTP and HTTPS URLs or www and non-www URLs.
Leading/trailing whitespace characters. Use TRIM() in Excel to eliminate them.
Invalid characters in URLs.
Pro tip: Make sure each of the old site’s URLs redirects to the correct URL on the new site. At this stage, because the new site doesn’t exist yet, you can only test whether the redirect destination URL is the intended one, but it’s definitely worth it. The fact that a URL redirects does not mean it redirects to the right page.
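Much of the list above can be verified with a short script once the redirects are live on staging. Below is a minimal sketch, assuming a hypothetical CSV mapping with "old_url" and "expected_destination" columns and the third-party "requests" library:

# A minimal sketch that tests a redirect mapping: flags loops, wrong
# destinations, chains longer than one hop, and 4xx/5xx responses.
import csv
import requests

with open("redirect_mapping.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        old, expected = row["old_url"].strip(), row["expected_destination"].strip()
        try:
            resp = requests.get(old, timeout=10, allow_redirects=True)
        except requests.TooManyRedirects:
            print(f"REDIRECT LOOP: {old}")
            continue
        hops = len(resp.history)  # one entry per redirect followed
        if resp.status_code >= 400:
            print(f"{resp.status_code}: {old}")
        elif resp.url != expected:
            print(f"WRONG DESTINATION: {old} -> {resp.url} (expected {expected})")
        elif hops > 1:
            print(f"REDIRECT CHAIN ({hops} hops): {old}")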
Phase 4: Launch day activities
When the site is down...
While the new site is replacing the old one, chances are that the live site is going to be temporarily down. The downtime should be kept to a minimum, but while this happens the web server should respond to any URL request with a 503 (service unavailable) server response. This will tell search engine crawlers that the site is temporarily down for maintenance so they come back to crawl the site later.
If the site is down for too long without serving a 503 server response and search engines crawl the website, organic search visibility will be negatively affected and recovery won’t be instant once the site is back up. In addition, while the website is temporarily down it should also serve an informative holding page notifying users that the website is temporarily down for maintenance.
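In practice the 503 response would be configured at the web server level (Apache, Nginx, or your load balancer). Purely as an illustration of what the response should look like, here is a minimal sketch using Python’s standard library:

# A minimal sketch of a maintenance responder: every request gets a 503
# with a Retry-After header and an informative holding page.
from http.server import BaseHTTPRequestHandler, HTTPServer

HOLDING_PAGE = b"<html><body><h1>Down for maintenance. Back soon!</h1></body></html>"

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(503)                  # service unavailable
        self.send_header("Retry-After", "3600")  # suggest retrying in an hour
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(HOLDING_PAGE)

HTTPServer(("", 8080), MaintenanceHandler).serve_forever()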
Technical spot checks
As soon as the new site has gone live, take a quick look at:
The robots.txt file to make sure search engines are not blocked from crawling
Top pages redirects (e.g. do requests for the old site’s top pages redirect correctly?)
Top pages canonical tags
Top pages server responses
Noindex/nofollow directives, in case they are unintentional
The spot checks need to be carried out across both the mobile and desktop sites, unless the site is fully responsive.
Search Console actions
The following activities should take place as soon as the new website has gone live:
Test & upload the XML sitemap(s)
Set the Preferred location of the domain (www or non-www)
Set the International targeting (if applicable)
Configure the URL parameters to tackle any potential duplicate content issues early.
Upload the Disavow file (if applicable)
Use the Change of Address tool (if switching domains)
Pro tip: Use the “Fetch as Google” feature for each different type of page (e.g. the homepage, a category, a subcategory, a product page) to make sure Googlebot can render the pages without any issues. Review any reported blocked resources and do not forget to use Fetch and Render for desktop and mobile, especially if the mobile website isn’t responsive.
Blocked resources prevent Googlebot from rendering the content of the page
Phase 5: Post-launch review
Once the new site has gone live, a new round of in-depth checks should be carried out. These are largely the same ones as those mentioned in the “Phase 3: Pre-launch Testing” section.
However, the main difference during this phase is that you now have access to a lot more data and tools. Don’t underestimate the amount of effort you’ll need to put in during this phase, because any issues you encounter now directly impact the site’s performance in the SERPs. On the other hand, the sooner an issue gets identified, the quicker it will get resolved.
In addition to repeating the same testing tasks that were outlined in the Phase 3 section, in certain areas things can be tested more thoroughly, accurately, and in greater detail. You can now take full advantage of the Search Console features.
Check crawl stats and server logs
Keep an eye on the crawl stats available in the Search Console, to make sure Google is crawling the new site’s pages. In general, when Googlebot comes across new pages it tends to increase the average number of pages it crawls per day. But if you can’t spot a spike around the time of the launch date, something may be negatively affecting Googlebot’s ability to crawl the site.
Crawl stats on Google’s Search Console
Reviewing the server log files is by far the most effective way to spot any crawl issues or inefficiencies. Tools like Botify and On Crawl can be extremely useful because they combine crawls with server log data and can highlight pages search engines do not crawl, pages that are not linked to internally (orphan pages), low-value pages that are heavily internally linked, and a lot more.
Review crawl errors regularly
Keep an eye on the reported crawl errors, ideally daily during the first few weeks. Downloading these errors daily, crawling the reported URLs, and taking the necessary actions (i.e. implement additional 301 redirects, fix soft 404 errors) will aid a quicker recovery. It’s highly unlikely you will need to redirect every single 404 that is reported, but you should add redirects for the most important ones.
Pro tip: In Google Analytics you can easily find out which are the most commonly requested 404 URLs and fix these first!
Other useful Search Console features
Other Search Console features worth checking include the Blocked Resources, Structured Data errors, Mobile Usability errors, HTML Improvements, and International Targeting (to check for hreflang reported errors).
Pro tip: Keep a close eye on the URL parameters in case they’re causing duplicate content issues. If this is the case, consider taking some urgent remedial action.
Measuring site speed
Once the new site is live, measure site speed to make sure the site’s pages are loading fast enough on both desktop and mobile devices. With site speed being a ranking signal across devices, and because slow pages lose users and customers, comparing the new site’s speed with the old site’s is extremely important. If the new site’s page loading times appear to be higher you should take some immediate action, otherwise your site’s traffic and conversions will almost certainly take a hit.
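For a quick, rough comparison before reaching for the dedicated tools below, you can time the same paths on both hosts. This is a minimal sketch, assuming hypothetical hostnames and the third-party "requests" library; the numbers are network-dependent and only indicative:

# A minimal sketch comparing response times for the same paths on the old
# and new hosts. requests' elapsed covers the full response time, so treat
# the output as a rough indicator, not a performance audit.
import requests

OLD_HOST = "https://old.example.com"  # hypothetical hosts
NEW_HOST = "https://www.example.com"

for path in ["/", "/category/", "/category/product"]:
    old_t = requests.get(OLD_HOST + path, timeout=30).elapsed.total_seconds()
    new_t = requests.get(NEW_HOST + path, timeout=30).elapsed.total_seconds()
    flag = "  <-- slower" if new_t > old_t else ""
    print(f"{path}: old {old_t:.2f}s vs new {new_t:.2f}s{flag}")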
Evaluating speed using Google’s tools
Two tools that can help with this are Google’s Lighthouse and Pagespeed Insights.
The PageSpeed Insights tool measures page performance on both mobile and desktop devices and shows real-world page speed data based on user data Google collects from Chrome. It also checks to see if a page has applied common performance best practices and provides an optimization score. The tool includes the following main categories:
Speed score: Categorizes a page as Fast, Average, or Slow using two metrics: The First Contentful Paint (FCP) and DOM Content Loaded (DCL). A page is considered fast if both metrics are in the top one-third of their category.
Optimization score: Categorizes a page as Good, Medium, or Low based on performance headroom.
Page load distributions: Categorizes a page as Fast (fastest third), Average (middle third), or Slow (bottom third) by comparing against all FCP and DCL events in the Chrome User Experience Report.
Page stats: Can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
Optimization suggestions: A list of best practices that could be applied to a page.
Google’s PageSpeed Insights in action
Google’s Lighthouse is very handy for mobile performance, accessibility, and Progressive Web Apps audits. It provides various useful metrics that can be used to measure page performance on mobile devices, such as:
First Meaningful Paint, which measures when the primary content of a page is visible.
Time to Interactive, the point at which the page is ready for a user to interact with.
Speed Index, which shows how quickly a page is visibly populated.
Both tools provide recommendations to help improve any reported site performance issues.
Google’s Lighthouse in action
You can also use this Google tool to get a rough estimate on the percentage of users you may be losing from your mobile site’s pages due to slow page loading times.
The same tool also provides an industry comparison so you get an idea of how far you are from the top performing sites in your industry.
Measuring speed from real users
Once the site has gone live, you can start evaluating site speed based on the users visiting your site. If you have Google Analytics, you can easily compare the new site’s average load time with the previous one.
In addition, a Real User Monitoring tool such as Pingdom can break these measurements down further. The below map illustrates how different visitors experience very different loading times depending on their geographic location. In the below example, the page loading times appear to be satisfactory to visitors from the UK, US, and Germany, but much higher to users residing in other countries.
Phase 6: Measuring site migration performance
When to measure
Has the site migration been successful? This is the million-dollar question everyone involved would like to know the answer to as soon as the new site goes live. In reality, the longer you wait the clearer the answer becomes, as visibility during the first few weeks or even months can be very volatile depending on the size and authority of your site. For smaller sites, a 4–6 week period should be sufficient before comparing the new site’s visibility with the old site’s. For large websites you may have to wait for at least 2–3 months before measuring.
In addition, if the new site is significantly different from the previous one, users will need some time to get used to the new look and feel and to acclimatize to the new taxonomy, user journeys, etc. Such changes initially have a significant negative impact on the site’s conversion rate, which should improve after a few weeks as returning visitors get used to the new site. In any case, making data-driven conclusions about the new site’s UX can be risky.
But these are just general rules of thumb and need to be taken into consideration along with other factors. For instance, if a few days or weeks after the new site launch significant additional changes were made (e.g. to address a technical issue), the migration’s evaluation should be pushed further back.
How to measure
Performance measurement is very important, and even though business stakeholders may only be interested in hearing about the revenue and traffic impact, there are many other metrics you should pay attention to. For example, there can be several reasons for revenue going down following a site migration, including seasonal trends, lower brand interest, UX issues that have significantly lowered the site’s conversion rate, poor mobile performance, poor page loading times, etc. So, in addition to the organic traffic and revenue figures, also pay attention to the following:
Desktop & mobile visibility (from SearchMetrics, SEMrush, Sistrix)
Desktop and mobile rankings (from any reliable rank tracking tool)
User engagement (bounce rate, average time on page)
Sessions per page type (i.e. are the category pages driving as many sessions as before?)
Conversion rate per page type (i.e. are the product pages converting the same way as before?)
Conversion rate by device (i.e. has the desktop/mobile conversion rate increased/decreased since launching the new site?)
Reviewing the below could also be very handy, especially from a technical troubleshooting perspective:
Number of indexed pages (Search Console)
Submitted vs indexed pages in XML sitemaps (Search Console)
Pages receiving at least one visit (analytics)
Site speed (PageSpeed Insights, Lighthouse, Google Analytics)
It’s only after you’ve looked into all of the above areas that you can safely conclude whether your migration has been successful or not.
Good luck and if you need any consultation or assistance with your site migration, please get in touch!
Appendix: Useful tools
Crawlers
Screaming Frog: The SEO Swiss army knife, ideal for crawling small- and medium-sized websites.
Sitebulb: Very intuitive crawler application with a neat user interface, nicely organized reports, and many useful data visualizations.
Deep Crawl: Cloud-based crawler with the ability to crawl staging sites and compare different crawls; copes well with large websites.
Botify: Another powerful cloud-based crawler supported by exceptional server log file analysis capabilities that can be very insightful in terms of understanding how search engines crawl the site.
On-Crawl: Crawler and server log analyzer for enterprise SEO audits with many handy features to identify crawl budget, content quality, and performance issues.
Handy Chrome add-ons
Web developer: A collection of developer tools including easy ways to enable/disable JavaScript, CSS, images, etc.
User agent switcher: Switch between different user agents including Googlebot, mobile, and other agents.
Ayima Redirect Path: A great header and redirect checker.
SEO Meta in 1 click: An on-page meta attributes, headers, and links inspector.
Scraper: An easy way to scrape website data into a spreadsheet.
Site monitoring tools
Uptime Robot: Free website uptime monitoring.
Robotto: Free robots.txt monitoring tool.
Pingdom tools: Monitors site uptime and page speed from real users (RUM service)
SEO Radar: Monitors all critical SEO elements and fires alerts when these change.
Site performance tools
PageSpeed Insights: Measures page performance for mobile and desktop devices. It checks to see if a page has applied common performance best practices and provides a score, which ranges from 0 to 100 points.
Lighthouse: Handy Chrome extension for performance, accessibility, Progressive Web Apps audits. Can also be run from the command line, or as a Node module.
Webpagetest.org: Very detailed page tests from various locations, connections, and devices, including detailed waterfall charts.
Structured data testing tools
Google’s structured data testing tool & Google’s structured data testing tool Chrome extension
Bing’s markup validator
Yandex structured data testing tool
Google’s rich results testing tool
Mobile testing tools
Google’s mobile-friendly testing tool
Google’s AMP testing tool
AMP validator tool
Backlink data sources
Ahrefs
Majestic SEO
Structural changes
These are often caused by major changes to the site’s taxonomy that impact on the site navigation, internal linking and user journeys.
Site redesigns
These can vary from major design changes in the look and feel to a complete website revamp that may also include significant media, code, and copy changes.
Hybrid migrations
In addition to the above, there are several hybrid migration types that can be combined in practically any way possible. The more changes that get introduced at the same time the higher the complexity and the risks. Even though making too many changes at the same time increases the risks of something going wrong, it can be more cost-effective from a resources perspective if the migration is very well-planned and executed.
Common site migration pitfalls
Even though every site migration is different there are a few common themes behind the most typical site migration disasters, with the biggest being the following:
Poor strategy
Some site migrations are doomed to failure way before the new site is launched. A strategy that is built upon unclear and unrealistic objectives is much less likely to bring success.
Establishing measurable objectives is essential in order to measure the impact of the migration post-launch. For most site migrations, the primary objective should be the retention of the site’s current traffic and revenue levels. In certain cases the bar could be raised higher, but in general anticipating or forecasting growth should be a secondary objective. This will help avoid creating unrealistic expectations.
Poor planning
Coming up with a detailed project plan as early as possible will help avoid delays along the way. Factor in additional time and resources to cope with any unforeseen circumstances that may arise. No matter how well thought out and detailed your plan is, it’s highly unlikely everything will go as expected. Be flexible with your plan and accept the fact that there will almost certainly be delays. Map out all dependencies and make all stakeholders aware of them.
Avoid planning to launch the site near your seasonal peaks, because if anything goes wrong you won’t have enough time to rectify the issues. For instance, retailers should avoid launching a site close to September/October to avoid putting the busy pre-Christmas period at risk. In this case, it would be much wiser launching during the quieter summer months.
Lack of resources
Before committing to a site migration project, estimate the time and effort required to make it a success. If your budget is limited, make a call as to whether it is worth going ahead with a migration that is likely to fail in meeting its established objectives and cause revenue loss.
As a rule of thumb, try to include a buffer of at least 20% in additional resource than you initially think the project will require. This additional buffer will later allow you to quickly address any issues as soon as they arise, without jeopardizing success. If your resources are too tight or you start cutting corners at this early stage, the site migration will be at risk.
Lack of SEO/UX consultation
When changes are taking place on a website, every single decision needs to be weighted from both a UX and SEO standpoint. For instance, removing great amounts of content or links to improve UX may damage the site’s ability to target business-critical keywords or result in crawling and indexing issues. In either case, such changes could damage the site’s organic search visibility. On the other hand, having too much text copy and few images may have a negative impact on user engagement and damage the site’s conversions.
To avoid risks, appoint experienced SEO and UX consultants so they can discuss the potential consequences of every single change with key business stakeholders who understand the business intricacies better than anyone else. The pros and cons of each option need to be weighed before making any decision.
Late involvement
Site migrations can span several months, require great planning and enough time for testing. Seeking professional support late is very risky because crucial steps may have been missed.
Lack of testing
In addition to a great strategy and thoughtful plan, dedicate some time and effort for thorough testing before launching the site. It’s much more preferable to delay the launch if testing has identified critical issues rather than rushing a sketchy implementation into production. It goes without saying that you should not launch a website if it hasn’t been tested by both expert SEO and UX teams.
Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.
Slow response to bug fixing
There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.
Underestimating scale
Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let's launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.
It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.
Site migration process
The site migration process can be split into six main essential phases. They are all equally important and skipping any of the below tasks could hinder the migration’s success to varying extents.
Phase 1: Scope & Planning Work out the project scope
Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.
A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.
However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.
Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.
You should now be left with a prioritized list of activities which are expected to have a positive ROI, if implemented. These should then be communicated and discussed with all stakeholders, so you set realistic targets, agree on the project, scope and set the right expectations from the outset.
Prepare the project plan
Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.
The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.
A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.
Phase 2: Pre-launch preparation
These include any activities that need to be carried out while the new site is still under development. By this point, the new site’s SEO requirements should have been captured already. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes well before the new site becomes available on a staging environment.
Wireframes review
Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.
Preparing the technical SEO specifications
Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.
The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.
Make sure to include specific requirements that cover at least the following areas:
URL structure
Meta data (including dynamically generated default values)
Structured data
Canonicals and meta robots directives
Copy & headings
Main & secondary navigation
Internal linking (in any form)
Pagination
XML sitemap(s)
HTML sitemap
Hreflang (if there are international sites)
Mobile setup (including the app, AMP, or PWA site)
Redirects
Custom 404 page
JavaScript, CSS, and image files
Page loading times (for desktop & mobile)
The specification should also include areas of the CMS functionality that allows users to:
Specify custom URLs and override default ones
Update page titles
Update meta descriptions
Update any h1–h6 headings
Add or amend the default canonical tag
Set the meta robots attributes to index/noindex/follow/nofollow
Add or edit the alt text of each image
Include Open Graph fields for description, URL, image, type, sitename
Include Twitter Open Graph fields for card, URL, title, description, image
Bulk upload or amend redirects
Update the robots.txt file
It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).
Identifying priority pages
One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.
In order to do this, you need to:
Crawl the legacy site
Identify all indexable pages
Identify top performing pages
How to crawl the legacy site
Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links etc. Regardless of the crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:
Ignore robots.txt (in case any vital parts are accidentally blocked)
Follow internal “nofollow” links (so the crawler reaches more pages)
Crawl all subdomains (depending on scope)
Crawl outside start folder (depending on scope)
Change the user agent to Googlebot (desktop)
Change the user agent to Googlebot (smartphone)
Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.
How to identify the indexable pages
Once the crawl is complete, work on identifying the legacy site’s indexed pages. These are any HTML pages with the following characteristics:
Return a 200 server response
Either do not have a canonical tag or have a self-referring canonical URL
Do not have a meta robots noindex
Aren’t excluded from the robots.txt file
Are internally linked from other pages (non-orphan pages)
The indexable pages are the only pages that have..
http://ift.tt/2Hd5yRD
0 notes
swunlimitednj · 7 years ago
Text
The Website Migration Guide: SEO Strategy & Process
Posted by Modestos
What is a site migration?
A site migration is a term broadly used by SEO professionals to describe any event whereby a website undergoes substantial changes in areas that can significantly affect search engine visibility — typically substantial changes to the site structure, content, coding, site performance, or UX.
Google’s documentation on site migrations doesn’t cover them in great depth, and it downplays the fact that they so often result in significant traffic and revenue loss, which can last from a few weeks to several months, depending on the extent to which search engine ranking signals have been affected and how long it takes the affected business to roll out a successful recovery plan.
Quick access links
Site migration examples
Site migration types
Common site migration pitfalls
Site migration process
1. Scope & planning
2. Pre-launch preparation
3. Pre-launch testing
4. Launch day actions
5. Post-launch testing
6. Performance review
Appendix: Useful tools
Site migration examples
The following section discusses how both successful and unsuccessful site migrations look and explains why it is 100% possible to come out of a site migration without suffering significant losses.
Debunking the “expected traffic drop” myth
Anyone who has been involved with a site migration has probably heard the widespread theory that it will inevitably result in traffic and revenue loss. Even though this assertion holds some truth for some very specific cases (e.g. moving from an established domain to a brand new one), it shouldn’t be treated as gospel. It is entirely possible to migrate without losing any traffic or revenue; you can even enjoy significant growth right after launching a revamped website. However, this can only be achieved if every single step has been well-planned and executed.
Examples of unsuccessful site migrations
The following graph illustrates a big UK retailer’s botched site migration where the website lost 35% of its visibility two weeks after switching from HTTP to HTTPS. It took them about six months to fully recover, which must have had a significant impact on revenue from organic search. This is a typical example of a poor site migration, possibly caused by poor planning or implementation.
Example of a poor site migration — recovery took 6 months!
But recovery may not always be possible. The below visibility graph is from another big UK retailer, where the HTTP to HTTPS switchover resulted in a permanent 20% visibility loss.
Another example of a poor site migration — no signs of recovery 6 months on!
In fact, it is entirely possible to migrate from HTTP to HTTPS without losing that much traffic for such a long period, aside from the first few weeks, where there is high volatility as Google discovers the new URLs and updates search results.
Examples of successful site migrations
What does a successful site migration look like? This largely depends on the site migration type, the objectives, and the KPIs (more details later). But in most cases, a successful site migration shows at least one of the following characteristics:
Minimal visibility loss during the first few weeks (short-term goal)
Visibility growth thereafter — depending on the type of migration (long-term goal)
The following visibility report is taken from an HTTP to HTTPS site migration, which was also accompanied by significant improvements to the site’s page loading times.
The following visibility report is from a complete site overhaul, which I was fortunate to be involved with several months in advance and supported during the strategy, planning, and testing phases, all of which were equally important.
As commonly occurs on site migration projects, the launch date had to be pushed back a few times due to the risks of launching the new site prematurely and before major technical obstacles were fully addressed. But as you can see on the below visibility graph, the wait was well worth it. Organic visibility not only didn’t drop (as most would normally expect) but in fact started growing from the first week.
Visibility growth one month after the migration reached 60%, whilst organic traffic growth two months post-launch exceeded 80%.
Example of a very successful site migration — instant growth following new site launch!
This was a rather complex migration as the new website was re-designed and built from scratch on a new platform with an improved site taxonomy that included new landing pages, an updated URL structure, lots of redirects to preserve link equity, plus a switchover from HTTP to HTTPS.
In general, introducing too many changes at the same time can be tricky because if something goes wrong, you’ll struggle to figure out what exactly is at fault. But at the same time, leaving major changes for a later time isn’t ideal either as it will require more resources. If you know what you’re doing, making multiple positive changes at once can be very cost-effective.
Before getting into the nitty-gritty of how you can turn a complex site migration project into a success, it’s important to run through the main site migration types as well as explain the main reason so many site migrations fail.
Site migration types
There are many site migration types. It all depends on the nature of the changes that take place to the legacy website.
Google’s documentation mostly covers migrations with site location changes, which are categorised as follows:
Site moves with URL changes
Site moves without URL changes
Site move migrations
These typically occur when a site moves to a different URL due to any of the below:
Protocol change
A classic example is when migrating from HTTP to HTTPS.
Subdomain or subfolder change
Very common in international SEO, where a business decides to move one or more ccTLDs into subdomains or subfolders. Another common example is where a mobile site that sits on a separate subdomain or subfolder becomes responsive and the desktop and mobile URLs are unified.
Domain name change
Commonly occurs when a business is rebranding and must move from one domain to another.
Top-level domain change
This is common when a business decides to launch international websites and needs to move from a ccTLD (country code top-level domain) to a gTLD (generic top-level domain) or vice versa, e.g. moving from .co.uk to .com, or moving from .com to .co.uk and so on.
Site structure changes
These are changes to the site architecture that usually affect the site’s internal linking and URL structure.
Other types of migrations
There are other types of migration which are triggered by changes to the site’s content, structure, design, or platform.
Replatforming
This is the case when a website is moved from one platform/CMS to another, e.g. migrating from WordPress to Magento or just upgrading to the latest platform version. Replatforming can, in some cases, also result in design and URL changes because of technical limitations that often occur when changing platforms. This is why replatforming migrations rarely result in a website that looks exactly the same as the previous one.
Content migrations
Major content changes such as content rewrites, content consolidation, or content pruning can have a big impact on a site’s organic search visibility, depending on the scale. These changes can often affect the site’s taxonomy, navigation, and internal linking.
Mobile setup changes
With so many options now available for a site’s mobile setup (enabling app indexing, building an AMP site, or building a PWA), moving to any of them can also be considered a partial site migration, especially when an existing mobile site is being replaced by an app, AMP, or PWA.
Structural changes
These are often caused by major changes to the site’s taxonomy that affect the site navigation, internal linking, and user journeys.
Site redesigns
These can vary from major design changes in the look and feel to a complete website revamp that may also include significant media, code, and copy changes.
Hybrid migrations
In addition to the above, there are several hybrid migration types that can be combined in practically any way possible. The more changes that get introduced at the same time, the higher the complexity and the risks. Even though making too many changes at the same time increases the risk of something going wrong, it can be more cost-effective from a resources perspective if the migration is very well planned and executed.
Common site migration pitfalls
Even though every site migration is different there are a few common themes behind the most typical site migration disasters, with the biggest being the following:
Poor strategy
Some site migrations are doomed to failure way before the new site is launched. A strategy that is built upon unclear and unrealistic objectives is much less likely to bring success.
Establishing measurable objectives is essential in order to measure the impact of the migration post-launch. For most site migrations, the primary objective should be the retention of the site’s current traffic and revenue levels. In certain cases the bar could be raised higher, but in general anticipating or forecasting growth should be a secondary objective. This will help avoid creating unrealistic expectations.
Poor planning
Coming up with a detailed project plan as early as possible will help avoid delays along the way. Factor in additional time and resources to cope with any unforeseen circumstances that may arise. No matter how well thought out and detailed your plan is, it’s highly unlikely everything will go as expected. Be flexible with your plan and accept the fact that there will almost certainly be delays. Map out all dependencies and make all stakeholders aware of them.
Avoid planning to launch the site near your seasonal peaks, because if anything goes wrong you won’t have enough time to rectify the issues. For instance, retailers should avoid launching a site close to September/October to avoid putting the busy pre-Christmas period at risk. In this case, it would be much wiser launching during the quieter summer months.
Lack of resources
Before committing to a site migration project, estimate the time and effort required to make it a success. If your budget is limited, make a call as to whether it is worth going ahead with a migration that is likely to fail in meeting its established objectives and cause revenue loss.
As a rule of thumb, include a buffer of at least 20% additional resource on top of what you initially think the project will require. This buffer will later allow you to quickly address any issues as soon as they arise, without jeopardizing success. If your resources are too tight or you start cutting corners at this early stage, the site migration will be at risk.
Lack of SEO/UX consultation
When changes are taking place on a website, every single decision needs to be weighed from both a UX and an SEO standpoint. For instance, removing large amounts of content or links to improve UX may damage the site’s ability to target business-critical keywords or result in crawling and indexing issues. In either case, such changes could damage the site’s organic search visibility. On the other hand, having too much text copy and few images may have a negative impact on user engagement and damage the site’s conversions.
To avoid risks, appoint experienced SEO and UX consultants so they can discuss the potential consequences of every single change with key business stakeholders who understand the business intricacies better than anyone else. The pros and cons of each option need to be weighed before making any decision.
Late involvement
Site migrations can span several months and require careful planning and enough time for testing. Seeking professional support late is very risky because crucial steps may already have been missed.
Lack of testing
In addition to a great strategy and a thoughtful plan, dedicate time and effort to thorough testing before launching the site. It’s much better to delay the launch if testing has identified critical issues than to rush a sketchy implementation into production. It goes without saying that you should not launch a website if it hasn’t been tested by both expert SEO and UX teams.
Attention to detail is also very important. Make sure that the developers are fully aware of the risks associated with poor implementation. Educating the developers about the direct impact of their work on a site’s traffic (and therefore revenue) can make a big difference.
Slow response to bug fixing
There will always be bugs to fix once the new site goes live. However, some bugs are more important than others and may need immediate attention. For instance, launching a new site only to find that search engine spiders have trouble crawling and indexing the site’s content would require an immediate fix. A slow response to major technical obstacles can sometimes be catastrophic and take a long time to recover from.
Underestimating scale
Business stakeholders often do not anticipate site migrations to be so time-consuming and resource-heavy. It’s not uncommon for senior stakeholders to demand that the new site launch on the planned-for day, regardless of whether it’s 100% ready or not. The motto “let's launch ASAP and fix later” is a classic mistake. What most stakeholders are unaware of is that it can take just a few days for organic search visibility to tank, but recovery can take several months.
It is the responsibility of the consultant and project manager to educate clients, run them through all the different phases and scenarios, and explain what each one entails. Business stakeholders are then able to make more informed decisions and their expectations should be easier to manage.
Site migration process
The site migration process can be split into six essential phases. They are all equally important, and skipping any of the below tasks could hinder the migration’s success to varying extents.
Phase 1: Scope & Planning
Work out the project scope
Regardless of the reasons behind a site migration project, you need to be crystal clear about the objectives right from the beginning because these will help to set and manage expectations. Moving a site from HTTP to HTTPS is very different from going through a complete site overhaul, hence the two should have different objectives. In the first instance, the objective should be to retain the site’s traffic levels, whereas in the second you could potentially aim for growth.
A site migration is a great opportunity to address legacy issues. Including as many of these as possible in the project scope should be very cost-effective because addressing these issues post-launch will require significantly more resources.
However, in every case, identify the most critical aspects for the project to be successful. Identify all risks that could have a negative impact on the site’s visibility and consider which precautions to take. Ideally, prepare a few forecasting scenarios based on the different risks and growth opportunities. It goes without saying that the forecasting scenarios should be prepared by experienced site migration consultants.
Including as many stakeholders as possible at this early stage will help you acquire a deeper understanding of the biggest challenges and opportunities across divisions. Ask for feedback from your content, SEO, UX, and Analytics teams and put together a list of the biggest issues and opportunities. You then need to work out what the potential ROI of addressing each one of these would be. Finally, choose one of the available options based on your objectives and available resources, which will form your site migration strategy.
You should now be left with a prioritized list of activities which are expected to have a positive ROI if implemented. These should then be communicated and discussed with all stakeholders, so you set realistic targets, agree on the project scope, and set the right expectations from the outset.
Prepare the project plan
Planning is equally important because site migrations can often be very complex projects that can easily span several months. During the planning phase, each task needs an owner (i.e. SEO consultant, UX consultant, content editor, web developer) and an expected delivery date. Any dependencies should be identified and included in the project plan so everyone is aware of any activities that cannot be fulfilled due to being dependent on others. For instance, the redirects cannot be tested unless the redirect mapping has been completed and the redirects have been implemented on staging.
The project plan should be shared with everyone involved as early as possible so there is enough time for discussions and clarifications. Each activity needs to be described in great detail, so that stakeholders are aware of what each task would entail. It goes without saying that flawless project management is necessary in order to organize and carry out the required activities according to the schedule.
A crucial part of the project plan is getting the anticipated launch date right. Ideally, the new site should be launched during a time when traffic is low. Again, avoid launching ahead of or during a peak period because the consequences could be devastating if things don’t go as expected. One thing to bear in mind is that as site migrations never go entirely to plan, a certain degree of flexibility will be required.
Phase 2: Pre-launch preparation
These include any activities that need to be carried out while the new site is still under development. By this point, the new site’s SEO requirements should have been captured already. You should be liaising with the designers and information architects, providing feedback on prototypes and wireframes well before the new site becomes available on a staging environment.
Wireframes review
Review the new site’s prototypes or wireframes before development commences. Reviewing the new site’s main templates can help identify both SEO and UX issues at an early stage. For example, you may find that large portions of content have been removed from the category pages, which should be instantly flagged. Or you may discover that some high traffic-driving pages no longer appear in the main navigation. Any radical changes in the design or copy of the pages should be thoroughly reviewed for potential SEO issues.
Preparing the technical SEO specifications
Once the prototypes and wireframes have been reviewed, prepare a detailed technical SEO specification. The objective of this vital document is to capture all the essential SEO requirements developers need to be aware of before working out the project’s scope in terms of work and costs. It’s during this stage that budgets are signed off on; if the SEO requirements aren’t included, it may be impossible to include them later down the line.
The technical SEO specification needs to be very detailed, yet written in such a way that developers can easily turn the requirements into actions. This isn’t a document to explain why something needs to be implemented, but how it should be implemented.
Make sure to include specific requirements that cover at least the following areas:
URL structure
Meta data (including dynamically generated default values)
Structured data
Canonicals and meta robots directives
Copy & headings
Main & secondary navigation
Internal linking (in any form)
Pagination
XML sitemap(s)
HTML sitemap
Hreflang (if there are international sites)
Mobile setup (including the app, AMP, or PWA site)
Redirects
Custom 404 page
JavaScript, CSS, and image files
Page loading times (for desktop & mobile)
The specification should also cover the areas of CMS functionality that allow users to:
Specify custom URLs and override default ones
Update page titles
Update meta descriptions
Update any h1–h6 headings
Add or amend the default canonical tag
Set the meta robots attributes to index/noindex/follow/nofollow
Add or edit the alt text of each image
Include Open Graph fields for description, URL, image, type, sitename
Include Twitter Open Graph fields for card, URL, title, description, image
Bulk upload or amend redirects
Update the robots.txt file
It is also important to make sure that when updating a particular attribute (e.g. an h1), other elements are not affected (i.e. the page title or any navigation menus).
Identifying priority pages
One of the biggest challenges with site migrations is that the success will largely depend on the quantity and quality of pages that have been migrated. Therefore, it’s very important to make sure that you focus on the pages that really matter. These are the pages that have been driving traffic to the legacy site, pages that have accrued links, pages that convert well, etc.
In order to do this, you need to:
Crawl the legacy site
Identify all indexable pages
Identify top performing pages
How to crawl the legacy site
Crawl the old website so that you have a copy of all URLs, page titles, meta data, headers, redirects, broken links etc. Regardless of the crawler application of choice (see Appendix), make sure that the crawl isn’t too restrictive. Pay close attention to the crawler’s settings before crawling the legacy site and consider whether you should:
Ignore robots.txt (in case any vital parts are accidentally blocked)
Follow internal “nofollow” links (so the crawler reaches more pages)
Crawl all subdomains (depending on scope)
Crawl outside start folder (depending on scope)
Change the user agent to Googlebot (desktop)
Change the user agent to Googlebot (smartphone)
Pro tip: Keep a copy of the old site’s crawl data (in a file or on the cloud) for several months after the migration has been completed, just in case you ever need any of the old site’s data once the new site has gone live.
How to identify the indexable pages
Once the crawl is complete, work on identifying the legacy site’s indexable pages. These are any HTML pages with the following characteristics:
Return a 200 server response
Either do not have a canonical tag or have a self-referring canonical URL
Do not have a meta robots noindex
Aren’t excluded from the robots.txt file
Are internally linked from other pages (non-orphan pages)
The indexable pages are the only pages that have the potential to drive traffic to the site and therefore need to be prioritized for the purposes of your site migration. These are the pages worth optimizing (if they will exist on the new site) or redirecting (if they won’t exist on the new site).
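To make this filter concrete, below is a minimal Python sketch of the indexability checks listed above, assuming the crawl has been exported to CSV. Every column name used here ("url", "status_code", "canonical", and so on) is a hypothetical placeholder; adjust them to whatever your crawler’s export actually contains.

import csv

def is_indexable(row):
    # The five indexability checks described above.
    return (
        row["status_code"] == "200"                          # returns a 200 server response
        and row["canonical"] in ("", row["url"])             # no canonical tag, or self-referring
        and "noindex" not in row["meta_robots"].lower()      # no meta robots noindex
        and row["blocked_by_robots"].lower() != "true"       # not excluded via robots.txt
        and int(row["inlinks"] or 0) > 0                     # internally linked (non-orphan)
    )

with open("legacy_crawl.csv", newline="", encoding="utf-8") as f:
    indexable = [row["url"] for row in csv.DictReader(f) if is_indexable(row)]

print(len(indexable), "indexable pages found")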
How to identify the top performing pages
Once you’ve identified all indexable pages, you may have to carry out more work, especially if the legacy site consists of a large number of pages and optimizing or redirecting all of them is impossible due to time, resource, or technical constraints.
If this is the case, you should identify the legacy site’s top performing pages. This will help with the prioritization of the pages to focus on during the later stages.
It’s recommended to prepare a spreadsheet that includes the below fields:
Legacy URL (include only the indexable ones from the crawl data)
Organic visits during the last 12 months (Analytics)
Revenue, conversions, and conversion rate during the last 12 months (Analytics)
Pageviews during the last 12 months (Analytics)
Number of clicks from the last 90 days (Search Console)
Top linked pages (Majestic SEO/Ahrefs)
With the above information in one place, it’s now much easier to identify your most important pages: the ones that generate organic visits, convert well, contribute to revenue, have a good number of referring domains linking to them, etc. These are the pages that you must focus on for a successful site migration.
The top performing pages should ideally also exist on the new site. If for any reason they don’t, they should be redirected to the most relevant page so that users requesting them do not land on 404 pages and the link equity they previously had remains on the site. If any of these pages cease to exist and aren’t properly redirected, your site’s rankings and traffic will be negatively affected.
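If the exports above are available as CSV files, pulling them into a single prioritization sheet can be scripted. The following is a rough Python/pandas sketch; all file and column names are assumptions to be replaced with whatever your Analytics, Search Console, and link tool exports actually contain, and the equal weighting of the metrics is arbitrary.

import pandas as pd

crawl = pd.read_csv("indexable_urls.csv")        # column: url
analytics = pd.read_csv("analytics_12m.csv")     # columns: url, organic_visits, revenue, pageviews
gsc = pd.read_csv("gsc_clicks_90d.csv")          # columns: url, clicks
links = pd.read_csv("top_linked_pages.csv")      # columns: url, referring_domains

pages = (
    crawl.merge(analytics, on="url", how="left")
         .merge(gsc, on="url", how="left")
         .merge(links, on="url", how="left")
         .fillna(0)
)

# Normalize each metric to a 0-1 range and sum them into a simple priority score.
metrics = ["organic_visits", "revenue", "clicks", "referring_domains"]
for col in metrics:
    top = pages[col].max()
    pages[col + "_norm"] = pages[col] / top if top else 0

pages["priority"] = pages[[c + "_norm" for c in metrics]].sum(axis=1)
pages.sort_values("priority", ascending=False).to_csv("priority_pages.csv", index=False)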
Benchmarking
Once the launch of the new website is getting close, you should benchmark the legacy site’s performance. Benchmarking is essential, not only to compare the new site’s performance with the previous one but also to help diagnose which areas underperform on the new site and to quickly address them.
Keywords rank tracking
If you don’t track the site’s rankings frequently, you should do so just before the new site goes live. Otherwise, you will later struggle to figure out whether the migration has gone smoothly or where exactly things went wrong. Don’t leave this to the last minute in case something goes awry; a week in advance would be the ideal time.
Spend some time working out which keywords are most representative of the site’s organic search visibility and track them across desktop and mobile. Because monitoring thousands of head, mid-, and long-tail keyword combinations is usually unrealistic, the bare minimum you should monitor are keywords that are driving traffic to the site (keywords ranking on page one) and have decent search volume (head/mid-tail focus).
If you do get traffic from both brand and non-brand keywords, you should also decide which type of keywords to focus on more from a tracking POV. In general, non-brand keywords tend to be more competitive and volatile. For most sites it would make sense to focus mostly on these.
Don’t forget to track rankings across desktop and mobile. This will make it much easier to diagnose problems post-launch should there be performance issues on one device type. If you receive a high volume of traffic from more than one country, consider rank tracking keywords in other markets, too, because visibility and rankings can vary significantly from country to country.
Site performance
The new site’s page loading times can have a big impact on both traffic and sales. Several studies have shown that the longer a page takes to load, the higher the bounce rate. Unless the old site’s page loading times and site performance scores have been recorded, it will be very difficult to attribute any traffic or revenue loss to site performance related issues once the new site has gone live.
It’s recommended that you review all major page types using Google’s PageSpeed Insights and Lighthouse tools. You could use summary tables like the ones below to benchmark some of the most important performance metrics, which will be useful for comparisons once the new site goes live; a scripted way to collect these numbers follows the tables.
MOBILE

| Page type        | Speed   | FCP  | DCL  | Optimization | Optimization score |
|------------------|---------|------|------|--------------|--------------------|
| Homepage         | Fast    | 0.7s | 1.4s | Good         | 81/100             |
| Category page    | Slow    | 1.8s | 5.1s | Medium       | 78/100             |
| Subcategory page | Average | 0.9s | 2.4s | Medium       | 69/100             |
| Product page     | Slow    | 1.9s | 5.5s | Good         | 83/100             |

DESKTOP

| Page type        | Speed | FCP  | DCL  | Optimization | Optimization score |
|------------------|-------|------|------|--------------|--------------------|
| Homepage         | Good  | 0.7s | 1.4s | Average      | 81/100             |
| Category page    | Fast  | 0.6s | 1.2s | Medium       | 78/100             |
| Subcategory page | Fast  | 0.6s | 1.3s | Medium       | 78/100             |
| Product page     | Good  | 0.8s | 1.3s | Good         | 83/100             |
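Collecting these metrics by hand for every template gets tedious, so the benchmarking can be scripted against the public PageSpeed Insights v5 API. The sketch below is a minimal example: the endpoint is Google’s, but the page URLs are placeholders, the response fields parsed here should be verified against an actual response, and anything beyond light usage would need an API key appended to the request.

import json
import urllib.request

PSI = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
PAGES = {
    "Homepage": "https://www.example.com/",             # placeholder URLs: use one
    "Category page": "https://www.example.com/shoes/",  # representative URL per template
}

for label, url in PAGES.items():
    for strategy in ("mobile", "desktop"):
        with urllib.request.urlopen(PSI + "?url=" + url + "&strategy=" + strategy) as resp:
            data = json.load(resp)
        # Lighthouse performance score (0-1) and the first contentful paint audit.
        score = data["lighthouseResult"]["categories"]["performance"]["score"]
        fcp = data["lighthouseResult"]["audits"]["first-contentful-paint"]["displayValue"]
        print(label, strategy, score, fcp)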
Old site crawl data
A few days before the new site replaces the old one, run a final crawl of the old site. Doing so could later prove invaluable, should there be any optimization issues on the new site. A final crawl will allow you to save vital information about the old site’s page titles, meta descriptions, h1–h6 headings, server status, canonical tags, noindex/nofollow pages, inlinks/outlinks, level, etc. Having all this information available could save you a lot of trouble if, say, the new site isn’t well optimized or suffers from technical misconfiguration issues. Try also to save a copy of the old site’s robots.txt and XML sitemaps in case you need these later.
Search Console data
Also consider exporting as much of the old site’s Search Console data as possible. These are only available for 90 days, and chances are that once the new site goes live, the old site’s Search Console data will disappear sooner or later. Data worth exporting includes the following (a scripted export sketch follows the list):
Search analytics queries & pages
Crawl errors
Blocked resources
Mobile usability issues
URL parameters
Structured data errors
Links to your site
Internal links
Index status
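The search analytics data in particular can be exported programmatically via the Search Console API, which avoids the UI’s export limits. Below is a minimal Python sketch; it assumes the google-api-python-client and google-auth libraries, an already-authorized OAuth token saved in token.json, and a hypothetical property URL, so treat it as a starting point rather than a finished tool.

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumes an OAuth flow has already been completed and saved to token.json.
creds = Credentials.from_authorized_user_file("token.json")
service = build("searchconsole", "v1", credentials=creds)

report = service.searchanalytics().query(
    siteUrl="https://www.example.com/",   # hypothetical property
    body={
        "startDate": "2018-01-01",
        "endDate": "2018-03-31",
        "dimensions": ["query", "page"],
        "rowLimit": 25000,                # paginate with startRow for larger sites
    },
).execute()

for row in report.get("rows", []):
    print(row["keys"], row["clicks"], row["impressions"], row["position"])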
Redirects preparation
The redirects implementation is one of the most crucial activities during a site migration. If the legacy site’s URLs cease to exist and aren’t correctly redirected, the website’s rankings and visibility will simply tank.
Why are redirects important in site migrations?
Redirects are extremely important because they help both search engines and users find pages that may no longer exist, have been renamed, or moved to another location. From an SEO point of view, redirects help search engines discover and index a site’s new URLs quicker but also understand how the old site’s pages are associated with the new site’s pages. This association will allow for ranking signals to pass from the old pages to the new ones, so rankings are retained without being negatively affected.
What happens when redirects aren’t correctly implemented?
When redirects are poorly implemented, the consequences can be catastrophic. Users will either land on Not Found pages (404s) or irrelevant pages that do not meet the user intent. In either case, the site’s bounce and conversion rates will be negatively affected. The consequences for search engines can be equally catastrophic: they’ll be unable to associate the old site’s pages with those on the new site if the URLs aren’t identical. Ranking signals won’t be passed over from the old to the new site, which will result in ranking drops and organic search visibility loss. In addition, it will take search engines longer to discover and index the new site’s pages.
301, 302, JavaScript redirects, or meta refresh?
When the URLs between the old and new version of the site are different, use 301 (permanent) redirects. These will tell search engines to index the new URLs as well as forward any ranking signals from the old URLs to the new ones. Therefore, you must use 301 redirects if your site moves to/from another domain/subdomain, if you switch from HTTP to HTTPS, or if the site or parts of it have been restructured. Despite some of Google’s claims that 302 redirects pass PageRank, indexing the new URLs would be slower and ranking signals could take much longer to be passed on from the old to the new page.
302 (temporary) redirects should only be used in situations where a redirect does not need to live permanently and therefore indexing the new URL isn’t a priority. With 302 redirects, search engines will initially be reluctant to index the content of the redirect destination URL and pass any ranking signals to it. However, if the temporary redirects remain for a long period of time without being removed or updated, they could end up behaving similarly to permanent (301) redirects. Use 302 redirects when a redirect is likely to require updating or removal in the near future, as well as for any country-, language-, or device-specific redirects.
Meta refresh and JavaScript redirects should be avoided. Even though Google is getting better and better at crawling JavaScript, there are no guarantees these will get discovered or pass ranking signals to the new pages.
If you’d like to find out more about how Google deals with the different types of redirects, please refer to John Mueller’s post.
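As a quick sanity check while auditing, you can see exactly which status codes a URL returns along its redirect path with a few lines of Python using the requests library (the audited URL below is a placeholder). Note that this only surfaces server-side redirects; meta refresh and JavaScript redirects won’t appear in the response history.

import requests

def audit_redirect(url):
    # Follow the redirect chain and record every hop's status code and URL.
    resp = requests.get(url, allow_redirects=True, timeout=10)
    return [(r.status_code, r.url) for r in resp.history] + [(resp.status_code, resp.url)]

for status, url in audit_redirect("http://www.example.com/old-page/"):
    print(status, url)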
Redirect mapping process
If you are lucky enough to work on a migration that doesn’t involve URL changes, you could skip this section. Otherwise, read on to find out why any legacy pages that won’t be available on the same URL after the migration should be redirected.
The redirect mapping file is a spreadsheet that includes the following two columns:
Legacy site URL –> a page’s URL on the old site.
New site URL –> a page’s URL on the new site.
When mapping (redirecting) a page from the old to the new site, always try mapping it to the most relevant corresponding page. In cases where a relevant page doesn’t exist, avoid redirecting the page to the homepage. First and foremost, redirecting users to irrelevant pages results in a very poor user experience. Google has stated that redirecting pages “en masse” to irrelevant pages will be treated as soft 404s and because of this won’t be passing any SEO value. If you can’t find an equivalent page on the new site, try mapping it to its parent category page.
Once the mapping is complete, the file will need to be sent to the development team to create the redirects, so that these can be tested before launching the new site. The implementation of redirects is another part in the site migration cycle where things can often go wrong.
Increasing efficiencies during the redirect mapping process
Redirect mapping requires great attention to detail and needs to be carried out by experienced SEOs. The URL mapping on small sites could in theory be done by manually mapping each URL of the legacy site to a URL on the new site. But on large sites that consist of thousands or even hundreds of thousands of pages, manually mapping every single URL is practically impossible and automation needs to be introduced. Relying on certain common attributes between the legacy and new site can be a massive time-saver. Such attributes may include the page titles, H1 headings, or other unique page identifiers such as product codes, SKUs etc. Make sure the attributes you rely on for the redirect mapping are unique and not repeated across several pages; otherwise, you will end up with incorrect mapping.
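As an illustration of attribute-based mapping, here is a simple Python sketch that matches legacy URLs to new URLs on a shared H1 heading, assuming both crawls are exported to CSV with hypothetical "url" and "h1" columns. Product codes or SKUs would work the same way; the key point, as noted above, is to only match on attributes that are unique on the new site.

import csv

def load(path):
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

legacy, new = load("legacy_crawl.csv"), load("staging_crawl.csv")

# Index new pages by H1, then keep only the H1s that appear exactly once,
# since a repeated attribute would make the mapping ambiguous.
by_h1 = {}
for row in new:
    by_h1.setdefault(row["h1"], []).append(row["url"])
unique = {h1: urls[0] for h1, urls in by_h1.items() if len(urls) == 1}

with open("redirect_mapping.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["legacy_url", "new_url"])
    for row in legacy:
        if row["h1"] in unique:
            writer.writerow([row["url"], unique[row["h1"]]])
        # Anything unmatched is left for manual mapping.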
Pro tip: Make sure the URL structure of the new site is 100% finalized on staging before you start working on the redirect mapping. There’s nothing riskier than mapping URLs that will be updated before the new site goes live. When URLs are updated after the redirect mapping is completed, you may have to deal with undesired situations upon launch, such as broken redirects, redirect chains, and redirect loops. A content-freeze should be placed on the old site well in advance of the migration date, so there is a cut-off point for new content being published on the old site. This will make sure that no pages will be missed from the redirect mapping and guarantee that all pages on the old site get redirected.
Don’t forget the legacy redirects!
You should get hold of the old site’s existing redirects to ensure they’re considered when preparing the redirect mapping for the new site. Unless you do this, it’s likely that the site’s current redirect file will get overwritten by the new one on the launch date. If this happens, all legacy redirects that were previously in place will cease to exist and the site may lose a decent amount of link equity, the extent of which will largely depend on the site’s volume of legacy redirects. For instance, a site that has undergone a few migrations in the past should have a good number of legacy redirects in place that you don’t want getting lost.
Ideally, preserve as many of the legacy redirects as possible, making sure these won’t cause any issues when combined with the new site’s redirects. It’s strongly recommended to eliminate any potential redirect chains at this early stage, which can easily be done by checking whether the same URL appears both as a “Legacy URL” and “New site URL” in the redirect mapping spreadsheet. If this is the case, you will need to update the “New site URL” accordingly.
Example:
URL A redirects to URL B (legacy redirect)
URL B redirects to URL C (new redirect)
Which results in the following redirect chain:
URL A –> URL B –> URL C
To eliminate this, amend the existing legacy redirect and create a new one so that:
URL A redirects to URL C (amended legacy redirect)
URL B redirects to URL C (new redirect)
Pro tip: Check your redirect mapping spreadsheet for redirect loops. These occur when the “Legacy URL” is identical to the “new site URL.” Redirect loops need to be removed because they result in infinitely loading pages that are inaccessible to users and search engines. Redirect loops must be eliminated because they are instant traffic, conversion, and ranking killers!
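Both the chain and loop checks can be automated once the mapping lives in a simple legacy-to-new dictionary. The sketch below is a minimal illustration, not a production tool: it follows each chain to its final destination and fails loudly when it detects a loop.

def flatten(mapping):
    # mapping: {legacy_url: new_url}. Follow chains to their final
    # destination and raise if a loop is found.
    flat = {}
    for src in mapping:
        seen, dst = {src}, mapping[src]
        while dst in mapping:        # the destination is itself redirected
            if dst in seen:
                raise ValueError("Redirect loop involving " + dst)
            seen.add(dst)
            dst = mapping[dst]
        flat[src] = dst
    return flat

# The chain example from above: A -> B and B -> C become A -> C and B -> C.
print(flatten({"/url-a/": "/url-b/", "/url-b/": "/url-c/"}))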
Implement blanket redirect rules to avoid duplicate content
It’s strongly recommended to try working out redirect rules that cover as many URL requests as possible. Implementing redirect rules on a web server is much more efficient than relying on numerous one-to-one redirects. If your redirect mapping document consists of a very large number of redirects that need to be implemented as one-to-one redirect rules, site performance could be negatively affected. In any case, double check with the development team the maximum number of redirects the web server can handle without issues.
Regardless, there are some standard redirect rules that should be in place to avoid generating duplicate content issues:
URL case: All URLs containing upper-case characters should be 301 redirected to all lower-case URLs, e.g. https://www.website.com/Page/ should be automatically redirecting to https://www.website.com/page/
Host: For instance, all non-www URLs should be 301 redirected to their www equivalent, e.g. https://website.com/page/ should be redirected to https://www.website.com/page/
Protocol: On a secure website, requests for HTTP URLs should be redirected to the equivalent HTTPS URL, e.g. http://www.website.com/page/ should automatically redirect to https://www.website.com/page/
Trailing slash: For instance, any URLs not containing a trailing slash should redirect to a version with a trailing slash, e.g. http://www.website.com/page should redirect to http://www.website.com/page/
Even if some of these standard redirect rules exist on the legacy website, do not assume they’ll necessarily exist on the new site unless they’re explicitly requested.
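In production these blanket rules belong on the web server (or CDN) as rewrite rules, but expressing them as a small normalization function makes them easy to reason about and to test the server’s behaviour against. Here is a rough Python sketch of the four rules above; a real implementation would also exempt URLs with file extensions from the trailing-slash rule.

from urllib.parse import urlsplit, urlunsplit

def canonical(url):
    scheme, host, path, query, fragment = urlsplit(url)
    scheme = "https"                      # protocol rule
    if not host.startswith("www."):
        host = "www." + host              # host rule
    path = path.lower()                   # URL-case rule
    if not path.endswith("/"):
        path += "/"                       # trailing-slash rule
    return urlunsplit((scheme, host, path, query, fragment))

# Any request whose URL differs from canonical(url) should be 301 redirected to it.
assert canonical("http://website.com/Page") == "https://www.website.com/page/"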
Avoid internal redirects
Try updating the site’s internal links so they don’t trigger internal redirects. Even though search engines can follow internal redirects, these are not recommended because they add additional latency to page loading times and could also have a negative impact on search engine crawl time.
Don’t forget your image files
If the site’s images have moved to a new location, Google recommends redirecting the old image URLs to the new image URLs to help Google discover and index the new images quicker. If it’s not easy to redirect all images, aim to redirect at least those image URLs that have accrued backlinks.
Phase 3: Pre-launch testing
The earlier you can start testing, the better. Certain things need to be fully implemented to be tested, but others don’t. For example, user journey issues could be identified from as early as the prototypes or wireframes design. Content-related issues between the old and new site, or content inconsistencies (e.g. between the desktop and mobile site), could also be identified at an early stage. But the more technical components should only be tested once fully implemented: things like redirects, canonical tags, or XML sitemaps. The earlier issues get identified, the more likely it is that they’ll be addressed before launching the new site. Identifying certain types of issues at a later stage isn’t cost-effective, requires more resources, and causes significant delays. Poor testing, and not allowing the time required to thoroughly test all the building blocks that can affect SEO and UX performance, can have disastrous consequences soon after the new site has gone live.
Making sure search engines cannot access the staging/test site
Before making the new site available on a staging/testing environment, take precautions so that search engines cannot index it. There are a few different ways to do this, each with its own pros and cons.
Site available to specific IPs (most recommended)
Making the test site available only to specific (whitelisted) IP addresses is a very effective way to prevent search engines from crawling it. Anyone trying to access the test site’s URL won’t be able to see any content unless their IP has been whitelisted. The main advantage is that whitelisted users could easily access and crawl the site without any issues. The only downside is that third-party web-based tools (such as Google’s tools) cannot be used because of the IP restrictions.
Password protection
Password protecting the staging/test site is another way to keep search engine crawlers away, but this solution has two main downsides. Depending on the implementation, it may not be possible to crawl and test a password-protected website if the crawler application doesn’t make it past the login screen. The other downside: password-protected websites that use forms for authentication can be crawled using third-party applications, but there is a risk of causing severe and unexpected issues. This is because the crawler clicks on every link on a page (when you’re logged in) and could easily end up clicking on links that create or remove pages, install/uninstall plugins, etc.
Robots.txt blocking
Adding the following lines of code to the test site’s robots.txt file will prevent search engines from crawling the test site’s pages.
User-agent: *
Disallow: /
One downside of this method is that even though the content that appears on the test server won’t get indexed, the disallowed URLs may appear on Google’s search results. Another downside is that if the above robots.txt file moves into the live site, it will cause severe de-indexing issues. This is something I’ve encountered numerous times and for this reason I wouldn’t recommend using this method to block search engines.
User journey review
If the site has been redesigned or restructured, chances are that the user journeys will be affected to some extent. Reviewing the user journeys as early as possible, well before the new site launches, is difficult due to the lack of user data; however, an experienced UX professional will be able to flag any concerns that could have a negative impact on the site’s conversion rate. Because A/B testing at this stage is hardly ever possible, it might be worth carrying out some user testing and trying to get feedback from real users. Unfortunately, user experience issues can be some of the harder ones to address because they may require sitewide changes that take a lot of time and effort.
On full site overhauls, not all UX decisions can always be backed up by data and many decisions will have to be based on best practice, past experience, and “gut feeling,” hence getting UX/CRO experts involved as early as possible could pay dividends later.
Site architecture review
A site migration is often a great opportunity to improve the site architecture. In other words, you have a great chance to reorganize your keyword-targeted content and maximize its search traffic potential. Carrying out extensive keyword research will help identify the best possible category and subcategory pages so that users and search engines can get to any page on the site within a few clicks — the fewer the better, so you don’t end up with a very deep taxonomy.
Identifying new keywords with decent traffic potential and mapping them into new landing pages can make a big difference to the site’s organic traffic levels. On the other hand, enhancing the site architecture needs to be done thoughtfully: it could cause problems if, say, important pages move deeper into the new site architecture or there are too many similar pages optimized for the same keywords. Some of the most successful site migrations are the ones that allocate significant resources to enhancing the site architecture.
Meta data & copy review
Make sure that the site’s page titles, meta descriptions, headings, and copy have been transferred from the old to the new site without issues. If you’ve created any new pages, make sure these are optimized and don’t target keywords that have already been targeted by other pages. If you’re re-platforming, be aware that the new platform may have different default values when new pages are being created. Launching the new site without properly optimized page titles or any kind of missing copy will have an immediate negative impact on your site’s rankings and traffic. Do not forget to review whether any user-generated content (i.e. user reviews, comments) has also been uploaded.
Internal linking review
Internal links are the backbone of a website. No matter how well optimized and structured the site’s copy is, it won’t be sufficient to succeed unless it’s supported by a flawless internal linking scheme. Internal links must be reviewed throughout the entire site, including links found in:
Main & secondary navigation
Header & footer links
Body content links
Pagination links
Horizontal links (related articles, similar products, etc)
Vertical links (e.g. breadcrumb navigation)
Cross-site links (e.g. links across international sites)
Technical checks
A series of technical checks must be carried out to make sure the new site’s technical setup is sound and to avoid coming across major technical glitches after the new site has gone live.
Robots.txt file review
Prepare the new site’s robots.txt file on the staging environment. This way you can test it for errors or omissions and avoid experiencing search engine crawl issues when the new site goes live. A classic mistake in site migrations is when the robots.txt file prevents search engine access using the following directive:
Disallow: /
If this gets accidentally carried over into the live site (and it often does), it will prevent search engines from crawling the site. And when search engines cannot crawl an indexed page, the keywords associated with the page will get demoted in the search results and eventually the page will get de-indexed.
But if the robots.txt file on staging is populated with the new site’s robots.txt directives, this mishap could be avoided.
When preparing the new site’s robots.txt file, make sure that:
It doesn’t block search engine access to pages that are intended to get indexed.
It doesn’t block any JavaScript or CSS resources search engines require to render page content.
The legacy site’s robots.txt file content has been reviewed and carried over if necessary.
It references the new XML sitemap(s) rather than any legacy ones that no longer exist.
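To sanity-check the first point on staging, here is a minimal sketch using Python’s standard-library robotparser to confirm that key URLs remain crawlable (the staging hostname and sample URLs are hypothetical):

from urllib import robotparser

# Verify the staging robots.txt doesn't block pages meant to be indexed.
rp = robotparser.RobotFileParser("https://staging.website.com/robots.txt")
rp.read()
for url in ["https://staging.website.com/page/",
            "https://staging.website.com/category/product/"]:
    if not rp.can_fetch("Googlebot", url):
        print("Blocked for Googlebot:", url)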
Canonical tags review
Review the site’s canonical tags. Look for pages that either do not have a canonical tag or have a canonical tag pointing to another URL, and question whether this is intended. Don’t forget to crawl the canonical tags to find out whether they return a 200 server response. If they don’t, you will need to update them to eliminate any 3xx, 4xx, or 5xx server responses. You should also look for pages that have a canonical tag pointing to another URL combined with a noindex directive, because these two are conflicting signals and you’ll need to eliminate one of them.
Meta robots review
Once you’ve crawled the staging site, look for pages with the meta robots properties set to “noindex” or “nofollow.” If this is the case, review each one of them to make sure this is intentional and remove the “noindex” or “nofollow” directive if it isn’t.
XML sitemaps review
Prepare two different types of sitemaps: one that contains all the new site’s indexable pages, and another that includes all the old site’s indexable pages. The former will help make Google aware of the new site’s indexable URLs. The latter will help Google become aware of the redirects that are in place and the fact that some of the indexed URLs have moved to new locations, so that it can discover them and update search results quicker.
You should check each XML sitemap to make sure that:
It validates without issues
It is encoded as UTF-8
It does not contain more than 50,000 rows
Its size does not exceed 50MB when uncompressed
If there are more than 50K rows or the file size exceeds 50MB, you must break the sitemap down into smaller ones; these limits are part of the sitemaps protocol, and files exceeding them risk not being processed. The smaller sitemaps should then be referenced from a sitemap index file.
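Splitting is trivial to script. Here’s a minimal Python sketch that chunks a flat list of URLs into valid sitemap files of at most 50,000 URLs each (file naming is hypothetical):

from xml.sax.saxutils import escape

def write_sitemaps(urls, prefix="sitemap"):
    # Write one <urlset> file per chunk of 50,000 URLs.
    for i in range(0, len(urls), 50000):
        chunk = urls[i:i + 50000]
        with open(f"{prefix}-{i // 50000 + 1}.xml", "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for url in chunk:
                f.write("  <url><loc>%s</loc></url>\n" % escape(url))
            f.write("</urlset>\n")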
In addition, you must crawl each XML sitemap to make sure it only includes indexable URLs. Any non-indexable URLs should be excluded from the XML sitemaps, such as:
3xx, 4xx, and 5xx pages (e.g. redirected, not found pages, bad requests, etc)
Soft 404s. These are pages with no content that return a 200 server response, instead of a 404.
Canonicalized pages (apart from self-referring canonical URLs)
Pages with a meta robots noindex directive
<!DOCTYPE html>
<html>
<head>
<meta name="robots" content="noindex" />
(…)
</head>
<body>(…)</body>
</html>
Pages with a noindex X-Robots-Tag in the HTTP header
HTTP/1.1 200 OK
Date: Tue, 10 Nov 2017 17:12:43 GMT
(…)
X-Robots-Tag: noindex
(…)
Pages blocked from the robots.txt file
Building clean XML sitemaps can help monitor the true indexing levels of the new site once it goes live. If you don’t, it will be very difficult to spot any indexing issues.
Pro tip: Download and open each XML sitemap in Excel to get a detailed overview of any additional attributes, such as hreflang or image attributes.
HTML sitemap review
Depending on the size and type of site that is being migrated, having an HTML sitemap can in certain cases be beneficial. An HTML sitemap that consists of URLs that aren’t linked from the site’s main navigation can significantly boost page discovery and indexing. However, avoid generating an HTML sitemap that includes too many URLs. If you do need to include thousands of URLs, consider building a segmented HTML sitemap.
The number of nested sitemaps as well as the maximum number of URLs you should include in each sitemap depends on the site’s authority. The more authoritative a website, the higher the number of nested sitemaps and URLs it could get away with.
For example, the NYTimes.com HTML sitemap consists of three levels, where each one includes over 1,000 URLs per sitemap. These nested HTML sitemaps aid search engine crawlers in discovering articles published since 1851 that otherwise would be difficult to discover and index, as not all of them would have been internally linked.
The NYTimes HTML sitemap (level 1)
The NYTimes HTML sitemap (level 2)
Structured data review
Errors in the structured data markup need to be identified early so there’s time to fix them before the new site goes live. Ideally, you should test every single page template (rather than every single page) using Google’s Structured Data Testing tool.
Be sure to check the markup on both the desktop and mobile pages, especially if the mobile website isn’t responsive.
The tool will only report existing errors, not omissions. For example, if your product page template does not include the Product structured data schema, the tool won’t report any errors. So, in addition to checking for errors, you should also make sure that each page template includes the appropriate structured data markup for its content type.
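For reference, a minimal (hypothetical) JSON-LD Product markup of the kind a product page template would typically carry looks like this; the exact properties your pages need depend on the content type and Google’s current guidelines:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example product",
  "image": "https://www.website.com/images/product.jpg",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD"
  }
}
</script>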
Please refer to Google’s documentation for the most up to date details on the structured data implementation and supported content types.
JavaScript crawling review
You must test every single page template of the new site to make sure Google will be able to crawl content that requires JavaScript parsing. If you’re able to use Google’s Fetch and Render tool on your staging site, you should definitely do so. Otherwise, carry out some manual tests following Justin Briggs’ advice.
As Bartosz Góralewicz’s tests proved, even if Google is able to crawl and index JavaScript-generated content, that doesn’t hold across all major JavaScript frameworks. Bartosz’s findings show that some JavaScript frameworks are not SEO-friendly, with AngularJS currently being the most problematic of all.
Bartosz also found that other search engines (such as Bing, Yandex, and Baidu) really struggle with indexing JavaScript-generated content, which is important to know if your site’s traffic relies on any of these search engines.
Hopefully, this is something that will improve over time, but with the increasing popularity of JavaScript frameworks in web development, this must be high up on your checklist.
Finally, you should check whether any external resources are being blocked. Unfortunately, this isn’t something you can control 100% because many resources (such as JavaScript and CSS files) are hosted by third-party websites which may be blocking them via their own robots.txt files!
Again, the Fetch and Render tool can help diagnose this type of issue that, if left unresolved, could have a significant negative impact.
Mobile site SEO review
Assets blocking review
First, make sure that the robots.txt file isn’t accidentally blocking any JavaScript, CSS, or image files that are essential for the mobile site’s content to render. This could have a negative impact on how search engines render and index the mobile site’s page content, which in turn could negatively affect the mobile site’s search visibility and performance.
Mobile-first index review
In order to avoid any issues associated with Google’s mobile-first index, thoroughly review the mobile website and make sure there aren’t any inconsistencies between the desktop and mobile sites in the following areas:
Page titles
Meta descriptions
Headings
Copy
Canonical tags
Meta robots attributes (i.e. noindex, nofollow)
Internal links
Structured data
A responsive website should serve the same content, links, and markup across devices, and the above SEO attributes should be identical across the desktop and mobile websites.
In addition to the above, you must carry out a few further technical checks depending on the mobile site’s set up.
Responsive site review
A responsive website must serve all devices the same HTML code, which is adjusted (via the use of CSS) depending on the screen size.
Googlebot is able to automatically detect this mobile setup as long as it’s allowed to crawl the page and its assets. It’s therefore extremely important to make sure that Googlebot can access all essential assets, such as images, JavaScript, and CSS files.
To signal to browsers that a page is responsive, a meta viewport tag should be in place within the <head> of each HTML page.
<meta name="viewport" content="width=device-width, initial-scale=1.0">
If the meta viewport tag is missing, font sizes may appear in an inconsistent manner, which may cause Google to treat the page as not mobile-friendly.
Separate mobile URLs review
If the mobile website uses separate URLs from desktop, make sure that:
Each desktop page has a rel="alternate" tag pointing to the corresponding mobile URL (see the example after this list).
Each mobile page has a rel="canonical" tag pointing to the corresponding desktop URL.
When desktop URLs are requested on mobile devices, they’re redirected to the respective mobile URL.
Redirects work across all mobile devices, including Android, iPhone, and Windows phones.
There aren’t any irrelevant cross-links between the desktop and mobile pages. This means that internal links found on a desktop page should only link to desktop pages, and those found on a mobile page should only link to other mobile pages.
The mobile URLs return a 200 server response.
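The alternate/canonical pairing looks like this (hypothetical URLs; the media value should match your mobile breakpoint):

<!-- On the desktop page (https://www.website.com/page/): -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.website.com/page/">

<!-- On the mobile page (https://m.website.com/page/): -->
<link rel="canonical" href="https://www.website.com/page/">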
Dynamic serving review
Dynamic serving websites serve different code to each device, but on the same URL.
On dynamic serving websites, review whether the Vary HTTP header has been correctly set up. This is necessary because dynamic serving websites alter the HTML for mobile user agents, and the Vary header helps Googlebot discover the mobile content.
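In practice, the response should include the following header (shown here in a trimmed-down response, mirroring the earlier examples):

HTTP/1.1 200 OK
Content-Type: text/html
Vary: User-Agent
(…)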
Mobile-friendliness review
Regardless of the mobile site set-up (responsive, separate URLs or dynamic serving), review the pages using a mobile user-agent and make sure that:
The viewport has been set correctly. Using a fixed width viewport across devices will cause mobile usability issues.
The font size isn’t too small.
Touch elements (i.e. buttons, links) aren’t too close.
There aren’t any intrusive interstitials, such as ads, mailing list sign-up forms, or app download pop-ups. To avoid any issues, you should either use a small HTML banner or an image banner.
Mobile pages aren’t too slow to load (see next section).
Google’s mobile-friendly test tool can help diagnose most of the above issues:
Google’s mobile-friendly test tool in action
AMP site review
If an AMP version of the website is available alongside the desktop site, make sure that:
Each non-AMP page (i.e. desktop, mobile) has a rel="amphtml" tag pointing to the corresponding AMP URL.
Each AMP page has a rel="canonical" tag pointing to the corresponding desktop page.
Any AMP page that does not have a corresponding desktop URL has a self-referring canonical tag.
You should also make sure that the AMPs are valid. This can be tested using Google’s AMP Test Tool.
Mixed content errors
With Google pushing hard for sites to be fully secure and Chrome becoming the first browser to flag HTTP pages as not secure, aim to launch the new site on HTTPS, making sure all resources such as images, CSS, and JavaScript files are requested over secure HTTPS connections. This is essential in order to avoid mixed content issues.
Mixed content occurs when a page that’s loaded over a secure HTTPS connection requests assets over insecure HTTP connections. Most browsers either block dangerous HTTP requests or just display warnings that hinder the user experience.
Mixed content errors in Chrome’s JavaScript Console
There are many ways to identify mixed content errors, including the use of crawler applications, Google’s Lighthouse, etc.
Image assets review
Google crawls images less frequently than HTML pages. If migrating a site’s images from one location to another (e.g. from your domain to a CDN), there are ways to help Google discover the migrated images quicker. Building an image XML sitemap will help, but you also need to make sure that Googlebot can reach the site’s images when crawling the site. The tricky part with image indexing is that both the web page on which an image appears and the image file itself have to get indexed.
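An image XML sitemap uses the standard sitemap format plus Google’s image extension namespace. A minimal sketch (with hypothetical page and CDN URLs) looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://www.website.com/page/</loc>
    <image:image>
      <image:loc>https://cdn.website.com/images/photo.jpg</image:loc>
    </image:image>
  </url>
</urlset>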
Site performance review
Last but not least, measure the old site’s page loading times and see how these compare with the new site’s when this becomes available on staging. At this stage, focus on the network-independent aspects of performance such as the use of external resources (images, JavaScript, and CSS), the HTML code, and the web server’s configuration. More information about how to do this is available further down.
Analytics tracking review
Make sure that analytics tracking is properly set up. This review should ideally be carried out by specialist analytics consultants who will look beyond the implementation of the tracking code. Make sure that Goals and Events are properly set up, e-commerce tracking is implemented, enhanced e-commerce tracking is enabled, etc. There’s nothing more frustrating than having no analytics data after your new site is launched.
Redirects testing
Testing the redirects before the new site goes live is critical and can save you a lot of trouble later. There are many ways to check the redirects on a staging/test server, but the bottom line is that you should not launch the new website without having tested the redirects.
Once the redirects become available on the staging/testing environment, crawl the entire list of redirects and check for the following issues:
Redirect loops (a URL that infinitely redirects to itself)
Redirects with a 4xx or 5xx server response.
Redirect chains (a URL that redirects to another URL, which in turn redirects to another URL, etc).
Canonical URLs that return a 4xx or 5xx server response.
Canonical loops (page A has a canonical pointing to page B, which has a canonical pointing to page A).
Canonical chains (a canonical that points to another page that has a canonical pointing to another page, etc).
Protocol/host inconsistencies e.g. URLs are redirected to both HTTP and HTTPS URLs or www and non-www URLs.
Leading/trailing whitespace characters. Use trim() in Excel to eliminate them.
Invalid characters in URLs.
Pro tip: Make sure each of the old site’s URLs redirects to the correct URL on the new site. At this stage, because the new site doesn’t exist yet, you can only test whether the redirect destination URL is the intended one, but it’s definitely worth it. The fact that a URL redirects does not mean it redirects to the right page.
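Most crawlers will surface these issues, but a small script can double-check specific URLs on demand. Here’s a sketch (assuming the third-party requests library) that follows redirects hop by hop so chains and loops become visible:

import requests
from urllib.parse import urljoin

def trace_redirects(url, max_hops=10):
    # Follow redirects manually so every hop is recorded.
    hops = [url]
    while len(hops) <= max_hops:
        resp = requests.head(hops[-1], allow_redirects=False, timeout=10)
        if resp.status_code not in (301, 302, 303, 307, 308):
            return hops, resp.status_code
        target = urljoin(hops[-1], resp.headers["Location"])
        if target in hops:
            return hops + [target], "LOOP"
        hops.append(target)
    return hops, "CHAIN TOO LONG"

hops, outcome = trace_redirects("https://www.website.com/old-page/")
if len(hops) > 2 or outcome in ("LOOP", "CHAIN TOO LONG"):
    print(outcome, " -> ".join(hops))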
Phase 4: Launch day activities
When the site is down...
While the new site is replacing the old one, chances are that the live site is going to be temporarily down. The downtime should be kept to a minimum, but while this happens the web server should respond to any URL request with a 503 (service unavailable) server response. This will tell search engine crawlers that the site is temporarily down for maintenance so they come back to crawl the site later.
If the site is down for too long without serving a 503 server response and search engines crawl the website, organic search visibility will be negatively affected and recovery won’t be instant once the site is back up. In addition, while the website is temporarily down it should also serve an informative holding page notifying users that the website is temporarily down for maintenance.
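Serving both the 503 and a holding page is straightforward. Below is a minimal sketch using only Python’s standard library; a real deployment would do this at the web server or load balancer level, but the idea is the same:

from wsgiref.simple_server import make_server

def maintenance_app(environ, start_response):
    # Every URL gets a 503 plus a Retry-After hint and a holding page.
    start_response("503 Service Unavailable",
                   [("Content-Type", "text/html; charset=utf-8"),
                    ("Retry-After", "3600")])
    return [b"<h1>Down for maintenance</h1><p>We'll be back shortly.</p>"]

make_server("", 8000, maintenance_app).serve_forever()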
Technical spot checks
As soon as the new site has gone live, take a quick look at:
The robots.txt file to make sure search engines are not blocked from crawling
Top pages redirects (e.g. do requests for the old site’s top pages redirect correctly?)
Top pages canonical tags
Top pages server responses
Noindex/nofollow directives, in case they are unintentional
The spot checks need to be carried out across both the mobile and desktop sites, unless the site is fully responsive.
Search Console actions
The following activities should take place as soon as the new website has gone live:
Test & upload the XML sitemap(s)
Set the Preferred location of the domain (www or non-www)
Set the International targeting (if applicable)
Configure the URL parameters to tackle early any potential duplicate content issues.
Upload the Disavow file (if applicable)
Use the Change of Address tool (if switching domains)
Pro tip: Use the “Fetch as Google” feature for each different type of page (e.g. the homepage, a category, a subcategory, a product page) to make sure Googlebot can render the pages without any issues. Review any reported blocked resources and do not forget to use Fetch and Render for desktop and mobile, especially if the mobile website isn’t responsive.
Blocked resources prevent Googlebot from rendering the content of the page
Phase 5: Post-launch review
Once the new site has gone live, a new round of in-depth checks should be carried out. These are largely the same ones as those mentioned in the “Phase 3: Pre-launch Testing” section.
However, the main difference during this phase is that you now have access to a lot more data and tools. Don’t underestimate the amount of effort you’ll need to put in during this phase, because any issues you encounter now directly impact the site’s performance in the SERPs. On the other hand, the sooner an issue gets identified, the quicker it will get resolved.
In addition to repeating the same testing tasks that were outlined in the Phase 3 section, in certain areas things can be tested more thoroughly, accurately, and in greater detail. You can now take full advantage of the Search Console features.
Check crawl stats and server logs
Keep an eye on the crawl stats available in the Search Console, to make sure Google is crawling the new site’s pages. In general, when Googlebot comes across new pages it tends to accelerate the average number of pages it crawls per day. But if you can’t spot a spike around the time of the launch date, something may be negatively affecting Googlebot’s ability to crawl the site.
Crawl stats on Google’s Search Console
Reviewing the server log files is by far the most effective way to spot any crawl issues or inefficiencies. Tools like Botify and OnCrawl can be extremely useful because they combine crawls with server log data and can highlight pages search engines do not crawl, pages that are not linked to internally (orphan pages), low-value pages that are heavily internally linked, and a lot more.
Review crawl errors regularly
Keep an eye on the reported crawl errors, ideally daily during the first few weeks. Downloading these errors daily, crawling the reported URLs, and taking the necessary actions (i.e. implement additional 301 redirects, fix soft 404 errors) will aid a quicker recovery. It’s highly unlikely you will need to redirect every single 404 that is reported, but you should add redirects for the most important ones.
Pro tip: In Google Analytics you can easily find out which are the most commonly requested 404 URLs and fix these first!
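If you export those URLs, a quick script can re-check them after each batch of redirect fixes. A sketch (assuming the requests library and a hypothetical one-URL-per-line export named crawl_errors.txt):

import requests

# Re-check reported 404 URLs to see which still error after the latest fixes.
with open("crawl_errors.txt") as f:
    for url in (line.strip() for line in f if line.strip()):
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
        print(status, url)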
Other useful Search Console features
Other Search Console features worth checking include the Blocked Resources, Structured Data errors, Mobile Usability errors, HTML Improvements, and International Targeting (to check for hreflang reported errors).
Pro tip: Keep a close eye on the URL parameters in case they’re causing duplicate content issues. If this is the case, consider taking some urgent remedial action.
Measuring site speed
Once the new site is live, measure site speed to make sure the site’s pages are loading fast enough on both desktop and mobile devices. With site speed being a ranking signal across devices, and because slow pages lose users and customers, comparing the new site’s speed with the old site’s is extremely important. If the new site’s page loading times appear to be higher, you should take some immediate action; otherwise your site’s traffic and conversions will almost certainly take a hit.
Evaluating speed using Google’s tools
Two tools that can help with this are Google’s Lighthouse and Pagespeed Insights.
The PageSpeed Insights tool measures page performance on both mobile and desktop devices and shows real-world page speed data based on user data Google collects from Chrome. It also checks whether a page has applied common performance best practices and provides an optimization score. The tool includes the following main categories:
Speed score: Categorizes a page as Fast, Average, or Slow using two metrics: The First Contentful Paint (FCP) and DOM Content Loaded (DCL). A page is considered fast if both metrics are in the top one-third of their category.
Optimization score: Categorizes a page as Good, Medium, or Low based on performance headroom.
Page load distributions: Categorizes a page as Fast (fastest third), Average (middle third), or Slow (bottom third) by comparing against all FCP and DCL events in the Chrome User Experience Report.
Page stats: Can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
Optimization suggestions: A list of best practices that could be applied to a page.
Google’s PageSpeed Insights in action
Google’s Lighthouse is very handy for mobile performance, accessibility, and Progressive Web Apps audits. It provides various useful metrics that can be used to measure page performance on mobile devices, such as:
First Meaningful Paint, which measures when the primary content of a page is visible.
Time to Interactive, the point at which the page is ready for a user to interact with.
Speed Index, which shows how quickly a page is visibly populated.
Both tools provide recommendations to help improve any reported site performance issues.
Google’s Lighthouse in action
You can also use this Google tool to get a rough estimate on the percentage of users you may be losing from your mobile site’s pages due to slow page loading times.
The same tool also provides an industry comparison so you get an idea of how far you are from the top performing sites in your industry.
Measuring speed from real users
Once the site has gone live, you can start evaluating site speed based on the users visiting your site. If you have Google Analytics, you can easily compare the new site’s average load time with the previous one.
In addition, if you have access to a Real User Monitoring tool such as Pingdom, you can evaluate site speed based on real visitor data. The below map illustrates how different visitors experience very different loading times depending on their geographic location. In the below example, the page loading times appear to be satisfactory to visitors from the UK, US, and Germany, but much higher for users residing in other countries.
Phase 6: Measuring site migration performance
When to measure
Has the site migration been successful? This is the million-dollar question everyone involved would like to know the answer to as soon as the new site goes live. In reality, the longer you wait the clearer the answer becomes, as visibility during the first few weeks or even months can be very volatile depending on the size and authority of your site. For smaller sites, a 4–6 week period should be sufficient before comparing the new site’s visibility with the old site’s. For large websites you may have to wait for at least 2–3 months before measuring.
In addition, if the new site is significantly different from the previous one, users will need some time to get used to the new look and feel and acclimatize themselves to the new taxonomy, user journeys, etc. Such changes initially have a significant negative impact on the site’s conversion rate, which should improve after a few weeks as returning visitors get used to the new site. In any case, drawing data-driven conclusions about the new site’s UX this early can be risky.
But these are just general rules of thumb and need to be taken into consideration along with other factors. For instance, if a few days or weeks after the new site launch significant additional changes were made (e.g. to address a technical issue), the migration’s evaluation should be pushed further back.
How to measure
Performance measurement is very important, and even though business stakeholders may only be interested in hearing about the revenue and traffic impact, there is a whole set of other metrics you should pay attention to. For example, there can be several reasons for revenue going down following a site migration, including seasonal trends, lower brand interest, UX issues that have significantly lowered the site’s conversion rate, poor mobile performance, poor page loading times, etc. So, in addition to the organic traffic and revenue figures, also pay attention to the following:
Desktop & mobile visibility (from SearchMetrics, SEMrush, Sistrix)
Desktop and mobile rankings (from any reliable rank tracking tool)
User engagement (bounce rate, average time on page)
Sessions per page type (i.e. are the category pages driving as many sessions as before?)
Conversion rate per page type (i.e. are the product pages converting the same way as before?)
Conversion rate by device (i.e. has the desktop/mobile conversion rate increased/decreased since launching the new site?)
Reviewing the below could also be very handy, especially from a technical troubleshooting perspective:
Number of indexed pages (Search Console)
Submitted vs indexed pages in XML sitemaps (Search Console)
Pages receiving at least one visit (analytics)
Site speed (PageSpeed Insights, Lighthouse, Google Analytics)
It’s only after you’ve looked into all the above areas that you could safely conclude whether your migration has been successful or not.
Good luck and if you need any consultation or assistance with your site migration, please get in touch!
Appendix: Useful tools
Crawlers
Screaming Frog: The SEO Swiss army knife, ideal for crawling small- and medium-sized websites.
Sitebulb: Very intuitive crawler application with a neat user interface, nicely organized reports, and many useful data visualizations.
Deep Crawl: Cloud-based crawler with the ability to crawl staging sites, compare different crawls, and cope well with large websites.
Botify: Another powerful cloud-based crawler supported by exceptional server log file analysis capabilities that can be very insightful in terms of understanding how search engines crawl the site.
OnCrawl: Crawler and server log analyzer for enterprise SEO audits with many handy features to identify crawl budget, content quality, and performance issues.
Handy Chrome add-ons
Web developer: A collection of developer tools including easy ways to enable/disable JavaScript, CSS, images, etc.
User agent switcher: Switch between different user agents including Googlebot, mobile, and other agents.
Ayima Redirect Path: A great header and redirect checker.
SEO Meta in 1 click: An on-page meta attributes, headers, and links inspector.
Scraper: An easy way to scrape website data into a spreadsheet.
Site monitoring tools
Uptime Robot: Free website uptime monitoring.
Robotto: Free robots.txt monitoring tool.
Pingdom tools: Monitors site uptime and page speed from real users (RUM service)
SEO Radar: Monitors all critical SEO elements and fires alerts when these change.
Site performance tools
PageSpeed Insights: Measures page performance for mobile and desktop devices. It checks to see if a page has applied common performance best practices and provides a score, which ranges from 0 to 100 points.
Lighthouse: Handy Chrome extension for performance, accessibility, Progressive Web Apps audits. Can also be run from the command line, or as a Node module.
Webpagetest.org: Very detailed page tests from various locations, connections, and devices, including detailed waterfall charts.
Structured data testing tools
Google’s structured data testing tool & Google’s structured data testing tool Chrome extension
Bing’s markup validator
Yandex structured data testing tool
Google’s rich results testing tool
Mobile testing tools
Google’s mobile-friendly testing tool
Google’s AMP testing tool
AMP validator tool
Backlink data sources
Ahrefs
Majestic SEO