goldencuffs · 11 months ago
OMG YOU ARE BACK????????!! I literally haven't used tumblr in years but every once in a while I check back in here for the off chance that you might be back and now you are! L I S T E N!!!! Literally whenever I think about captive prince (and I still think about it a lot) I also always have to think about all my words because I am absolutely obsessed, it's literally all I ever dream of in lamen au fanfics kdosnskdlsl and you're telling us that you're working on it again???!! best news of the year dead on I am so fucking happy right now oh my gods
ILYYYYYYYYYYYYYYYYYYYYYYYYYY 💖💖💖💖💖
thank you so much i love you this is so nice!!!! im so, so, so unbelievably happy my dumb horny little stories mean so much to you!!! 😭😭😭
also!!! im most probably going to upload the next all my words chapter this weekend so hopefully i can finish up the entire story soon hehe 😋
jira-chii · 5 years ago
I tried rewriting the Afterlost/Shoumetsu Toshi anime
Background: Madhouse made an anime from a mobile game with the best story out of any gacha game I have ever played, and fucking botched it. So I tried to see if I could do it better.
You don’t need to have seen Shoumetsu Toshi to read this post. However, I do recommend reading this article alongside my episode summaries on twitter.
So, the Shoumetsu Toshi anime was announced during the game’s fifth anniversary, and aired in April 2019. It was both exciting and terrifying because Shoumetsu Toshi is a seriously underrated game with a mindblowing story, but given the track record of game-to-anime adaptations, everyone was sceptical about it. And that scepticism ended up being justified. 
In summary (my opinion): the anime itself started out half-decent, shot itself in the foot three episodes in, staggered back up for another two or three episodes, fell flat on its face directly after, and then just kept on falling from episode 8 onwards.
While watching said anime, I realised there were very minor things I could do to improve the flow, such as changing the order of episodes, or introducing certain characters later. After a while I came up with a new alternate anime structure (see tldr on twitter). This post will be the accompanying commentary explaining the decisions I made.
My aim for this exercise was to stick as close as possible to the main message and tone of the original anime. Honestly though, if I were rewriting the anime from scratch, I would have taken a different approach entirely (because I don’t agree with the message or the tone of what we got, but that’s a story for another post).
I have tried to make sure you don’t need to watch the anime to understand this post (the last thing I’d want is to force anyone to watch it). That said, I may make reference to the (far superior) source material, the Shoumetsu Toshi game. I have tried to keep the general commentary in the main body, with more specific detail in the side notes at the end of each episode breakdown.
Ok, here goes.
Episode 1: Lost/Lost
The role of the first episode is to establish the characters and setting. The main idea of this episode should be to introduce Lost as mysterious and frightening, and why the main characters need to go there.
As a whole, I think the original anime does do this. It starts off with an overview of the situation (courtesy of Yuki’s monologue), and then gives us some action. Characters are introduced, the mystery starts to reveal itself, and then we end on a cliffhanger. I personally don’t feel a need to change that structurally.
I do however feel the need to switch up the order and amount of introduced characters in this episode. Our brains can only hold so much information in one episode and this episode probably exceeded that capacity and then some. 
Instead of Eiji and Kikyou saving them, I suggest having Takuya take Yuki to Geek’s first. Geek can provide a simple man’s explanation of the tragedy of Lost, using his favourite idol group SPR5 as an illustrative example. 
This means we will not be met with completely new information when the much heavier exposition from Kikyou and Eiji comes later. Because instead of brand new information, most of it will be filling in the gaps from what Geek told us. And that is much less headache-inducing than the chaos of the original episode 1.
This also means the cliffhanger of the first episode becomes Yuki finding out her father is possibly alive (instead of us wondering if Takuya is still alive…). This is admittedly not as dramatic nor as shocking as the cliffhanger of the original anime. However I went with this because I don’t think an audience going in blind would have been able to handle any more new information right after what Kikyou and Eiji did to us. Also, I don’t think many people would fall for the main character dying in the very first episode...
Side notes:
The original episode 1 started with Yuki’s monologue. I liked that it invoked interest while telling a story succinctly. So I would keep it. I would also keep the scene where Takuya busts Yuki out by throwing a fire extinguisher and jumping out the window because ngl, that was badass.
I think the original anime built up too much goodwill in Takuya in the first two episodes. By episode 2, Yuki basically completely trusts him, but I think it would be more fitting if Takuya had to ‘earn’ that goodwill gradually. Note that in my version of episode 1, none of the conflicts between Takuya and Yuki are resolved. For example, in the original anime, Takuya tells Yuki even if she doesn’t trust him, she has to trust her dad. What a good-guy thing to say. That doesn’t happen in my version. The aim of that is to create a constant feeling of unease and tension. Because this early on, none of us should be sure if Yuki can really trust anyone around her. She’s just forced to follow them to find her father.
You may have noticed I’ve removed Rou (flying monk dude) entirely from my version of the anime. I’m not hating on Rou, but I really don’t think he was useful. In fact, he just made people more confused. I would much prefer Suzumebachi to appear in both episodes, with the fight not even happening until the second. I made this decision with three reasons in mind: 1. It sets up the expectation in the audience for more action in later episodes, encouraging them to keep watching to see the resolution; 2. It makes Suzumebachi seem like a more threatening opponent, thereby upping the tension; and 3. There is one less character we have to introduce (Akira), meaning he’ll really shine and leave an impact when we do the big reveal.
Honestly, I would also remove fortune teller Kazuko. That’s a cameo nobody asked for, and it felt kind of forced. Though there’s also no harm in keeping her in. 
On the other hand I have deliberately chosen to delay introducing Yumiko and Kouta. Like I said, there are already too many characters in this first episode. The result of this, though, is that my ending deviates quite a lot from the original episode 1 ending. But I think that’s fine. This is only episode 1, we don’t have to end with drama. I think it’s more important to get the main themes across (the episode is titled ‘Lost’, so let’s make that the focus until the very end ok).
Episode 2: Sacrifice/Sacrifice
Episode 1 should be somewhat dramatic and action-packed to hook our audience in. But then episode 2 should provide them with more information about the setting. I think the anime made a bad move by pulling a plot twist at the end of episode 1 and then immediately rectifying it in episode 2. It feels cheap and undermining. And a waste of time.
So I chose to avoid the whole thing with Yumiko and Kouta saving Takuya. Instead I get Eiji and Kikyou to send them straight into hiding. I think it is more fitting if, directly following from episode 1, this episode starts off focusing on Yuki’s feelings about her family. This provides some much-needed context, and makes her values clear from the beginning, as she starts thinking about whether it is worth going to Lost to find her father.
Now we can start introducing some new characters. Honestly though, in the grand scheme of things, Yumiko and Kouta serve the exact same purpose as Eiji and Kikyou in the narrative. Which is a shame, because I think just their connection to the organisation opens up a whole heap of possibilities. Therefore, if I am going to include them I should take advantage of that connection. In my version of the anime I make them privy to certain information about Lost. This would make sense because the organisation they used to work for was all about getting Taiyou to Lost to complete the Noah Plan. By making that special information only Yumiko and Kouta know, I make them important to the plot. 
I can further leverage their connection to the enemy to arouse suspicion in them, because Takuya and Yuki wound up in an ambush with Suzumebachi following their instructions. But that’s a minor conflict that can easily be resolved and is honestly not that important.
What is important, is that being ambushed by Suzumebachi means I can finally show off  tamashii Akira! The fight will basically play out like the original episode 1, only with Suzumebachi instead of Rou. After the fight, I envision the scene to play out similar to the end of the original episode 3, with Souma appearing before them.
Side notes:
In the original anime episode 2, Yuki talks to Takuya when he shares bread with her on the roof. They talk about Akira, which is fair enough. However in my version they don’t even know about Akira at this point. They have nothing in common to bond over. Also, Yuki is still distrustful of him. This more closely mirrors the game, where both of them were more closed off. But I think it makes sense to be wary of a stranger you just met, let alone one who just called you a fucking package. 
That said, we can hint at some development through symbolism. Yuki starts off thinking about her family all by herself. But then Takuya comes. And even if they get nowhere in their conversation, physically at least, Yuki is no longer alone. 
The true reason Yumiko betrays the organisation is simply that she likes Takuya. But she can’t just tell him that directly. That’s one of the reasons the anime uses Suzuna in episode 2. I have Kouta take that role here. I also wanted Yumiko’s attraction to Takuya to be more subtle than her straight up telling Suzuna she likes him. There’s a bit of dramatic irony there because Takuya would still be suspicious of her, but the audience should be able to tell her true reason for helping is actually quite pure.
Akira is actually a symbol of hope for Yuki. He is a remnant of her old life. But as is the case in the tragic storyline of Shoumetsu Toshi, whenever we see hope, we must destroy it in the most ironic way possible. Here, it is the appearance of the other person from her old life, her brother Souma.
Episode 3: Memory/Suspicion
The original anime pulled a really, really bad move here, by throwing in a monster-of-the-week-style episode while its setting was still developing. This is confusing and distracting. But the biggest reason I chose to delay the SPR5 episode to episode 4 is that Yuki’s character development in the original episode 3 was so great as to almost make no sense.
Therefore, I use episode 3 instead as a stepping stone to get Yuki to learn to trust the people around her. Once she can resolve her own insecurities, then it makes more sense for her to be able to fix other people’s problems.
My version of episode 3 is more similar to the anime’s original episode 4, which introduced Ryouko. After being betrayed by her remaining family member, a traumatised Yuki is finally introduced to somebody she thinks she can actually trust. But Eiji and the others warn Yuki she must not tell Ryouko anything. That said, Ryouko proves to be a good person, who even helps with some of Yuki’s troubles.
Introducing Ryouko also introduces the investigative element of the anime, which emphasises the Lost mystery, but is also honestly a breath of fresh air from the doom and gloom on Yuki’s side. A benefit of seeing the police’s perspective is that they are just as clueless as the audience. So it is much easier to follow their train of thought (compared to the other group led by Eiji and Kikyou). 
So, with three episodes done, the audience should now have a good idea of what they are getting themselves into. We’ve got the characters, the setting, and a couple of mysteries set up. All that’s left is to deliver on them (which is easier said than done).
Side notes:
Honestly, writing up this first encounter with Suzuna and Souma was harder than I expected. The problem is that Souma is too powerful to be deterred by anyone, especially not the police. I have to come up with some reason to force him out of the picture. I decided to use Suzuna for a couple of reasons. It gives the audience a good sense of her foresight powers. But also, maybe it could indicate Suzuna has another motive? The furthest I got was using Taiyou as Suzuna’s excuse: Souma did not inform Taiyou he was going off on his own. If he does anything unnecessary he could get into trouble. This also effortlessly allows me to foreshadow the final boss.
Yuki is a very introspective character, meaning she thinks to herself a lot but rarely shows it. Which is why having a character like Ryouko spell out things for the audience can be helpful. 
Importantly, Yuki does not reveal everything to Ryouko just yet, even though she really wants to. This ups the tension, but importantly, it also means Ryouko can stick around a little longer! So we (and Yuki) get some time to get attached to her.
Ryouko’s thought process right now would be: she thinks the group is suspicious, but needs more evidence to back her gut feeling. So she does the background check, leading to Kaibara where we can hint at Takuya’s connection to Yoshiaki and start setting up the Kaitodan arc.
Episode 4: Suspicion/Memory
With the main plot finally set up, I am now comfortable to slip in the SPR5 episode. Surprisingly, I actually did not mind the episode as much as I thought I would. 
My biggest recommendation for this episode would be to drop the entire Seiji/Shoumetsu subplot altogether. It is just too confusing with all the other information in the episode. And with Seiji dying eventually, in the grand scheme of things it means nothing. The main aim of this episode should be to focus on what tamashii really are and what they can do. Dropping Seiji also means Takuya doesn’t have to disappear during Yuki’s character development (?) in her moment with Yua. I would really like to use this episode to hint more at Takuya’s regret, and do some more solid foreshadowing of the orphanage arc. This then leads in nicely to the sudden phone call from Yumiko I have inserted at the end of this episode to set up the next.
Episode 5: Affection/Affection
Just like the anime, my versions of episodes 5 and 6 focus on the Kaitodan. However, I make some pretty big changes. While I did really enjoy the execution of the original episode 5 (which was my favourite episode of the whole series), the whole ‘heist’ made no sense to me. Kaitodan didn’t even enter the building past the roof. The only real merit of the original setting is the clock tower, which is an Easter egg referencing Tsubasa and Yoshiaki’s reunion in the actual game. 
So I decided to switch up the tone entirely, by having Takuya and Yuki meet Yoshiaki in a family restaurant (family being the key word). Again, admittedly not as dramatic as the original, but this saves a lot more time.
Something I think the anime really should have done was foreshadow Tsubasa better. He has deep ties to Takuya, but episode 5 is the first time we hear about him. I opted to foreshadow him in episode 3 (through Yumiko’s memories), and episode 2 (being used by Souma). This should make it easier for the audience to piece the puzzle together with Yuki.
My drama for this scene comes in the form of an actual kidnapping. By separating Takuya and Yuki, we get a glimpse into how each thinks of the other. It’s also great that Rui canonically flirts with Yuki so I want to see that. I also really want to show the family dynamic of the Kaitodan, for three reasons. 1. As a contrast with Yuki to emphasise the loss she feels in losing her own family; 2. The irony that Yoshiaki will join this family as a result of losing his own, and to know he will be in good hands; and 3. Kaitodan’s quirky personalities are the most lovable thing about them and we need to see more of it.
Side notes:
Yoshiaki’s power reveal having a purpose to the plot would be so much cooler than him just randomly making an owl feather disappear. I decided to show said power in the midst of a dramatic car chase scene. Because why not.
I put quite a lot of thought into how Kaitodan could secretly track down Takuya and Yuki, and it goes like this: Kana bugged Yumiko, because Kaitodan know the organisation had something to do with Tsubasa disappearing. I envision it happening the night Yumiko left the organisation. Once Yumiko escapes far enough to be safe, she would feel a sense of relief. But her guard would also lower. Kana has been tracking Yumiko ever since, waiting for a clue. This means the Kaitodan heard Yumiko’s call with Kaibara and know Tsubasa is involved. But they couldn’t make a move on Yoshiaki directly because they were aware he was being surveilled. So they followed Takuya and Yuki via Yumiko, and waited for their chance. Also, because it was Yumiko they bugged, they don’t get to know the contents of Takuya and Yuki’s meeting with Yoshiaki. This is all implied in my version of the anime, but none of it is important. It is just a check I did to make sure everything made logical sense.
I went as far as making up a scene to show some Kaitodan interaction. But including that in the episode summary is probably too much detail, so I will indulge myself in these side notes:
The Kaitodan notice Takuya chasing them. Jack tells Yuki that her “lover boy” is here. Yuki gets flustered and says he’s only coming after her because of their contract.
Rui doesn’t understand what Yuki sees in an uncouth guy like Takuya. He talks about a time Takuya used some pretty underhanded methods to catch them. They managed to escape by the skin of their teeth thanks to Jack throwing a bomb at close range. Kana laughs remembering how the bomb totally ruined Rui’s clothes. Rui then gets angry at Jack because that was a new suit. Sumire tries to break (i.e. cut) them apart with her chainsaw, causing chaos as Kana continues laughing hysterically.
Yuki looks on this with a small smile, thinking the Kaitodan aren’t such a bad bunch.
Episode 6: Parting Ways/Choice
The anime concludes the Kaitodan arc in this episode, but I’d like to keep them for a little longer. I reserve the fighting for a later episode, and instead, I make this episode quite information heavy. The aim of all this is to lead into the orphanage arc. I did not like that Takuya went off to resolve his own regrets without Yuki in the original anime. However it makes no sense to have him make Yuki tag along with him to something unrelated to their contract. My solution to this is to give Yuki more agency, and the mind to make her own decision.
I intend for a couple of truth bombs to come out here: the organisation does human experimentation; Daichi works for the organisation; the organisation has a base in Lacuna; and Yuki and Takuya will die if they go to Lost. The anime left a lot of these reveals until the end, but I don’t think they are important enough to warrant that. Having Yuki know these things now, and make a decision despite that, helps build her character more (so we can pull all her confidence down later).
Side notes:
Having Eiji reveal the truth puts both suspicion and trust in him. If he was working for Daichi, that means he was also working for the organisation. However, if what he says is the truth, the reason he reveals this information is because of his guilty conscience. To make sure he comes off as sincere, his guilt should be foreshadowed in earlier episodes.
I wonder, if Lacuna really did end up being the final boss, would that mean Yuki would not need to go to Lost anymore? If Yuki were able to save Souma and return him to ‘normal’ would she be satisfied with never seeing her father again? Or was she hoping Souma could join them on their journey to Lost as well? Honestly, not even I know what Yuki was thinking. All I know is that the anime built up Lacuna as a fake ‘last boss’ (spoiler: it’s not), so I will too.
Episode 7: Regret/Regret
Like the anime again, I make episode 7 the orphanage arc. Unlike the anime, I also combine it with Tsubasa’s arc. The key to making this work is to progress both, but put the focus on only one of the storylines. For episode 7, the orphanage will be the focus, while I leave Tsubasa for episode 8.
The anime originally put the focus of this episode on solving the mystery and finding the link between Lacuna and the orphanage. However, because of how I set things up in episode 6, our characters’ main goal is actually to find clues to the link between Souma and Tsubasa. I want the focus on the orphanage side to be entirely about Takuya’s character development, and how Yuki watches over that development. The real plot actually happens on the Kaitodan side. Naturally, Takuya and Yuki will eventually join the fray but I think, for the purpose of the final message of the anime, it is important to show that things can still happen without Takuya and Yuki.
Side notes:
I want Ayano to have more screentime, so I am making her lead them to Hinako. Ayano thus acts as a facilitator, rather than an observer. Also, I plan to make her shuumeigiku an AF (because AF were built up as some really important thing in the original but were not even used in the final battle).
Having the episode end on a cliffhanger means the main characters (and hopefully the audience too) won’t have time to ponder the shuumeigiku, nor the birdhouse for now. This is intentional.
Episode 8: Choice/Parting Ways
In the original anime, episode 8 is the Lacuna break-in episode. But it’s still too early for that. My episode 8 wraps up the Kaitodan arc.
We've had some time to get to know the Kaitodan members so I want this episode to be their moment to shine. First off, putting the Kaitodan flashback at the start of the episode, instead of in the middle of a fight (like what episode 6 of the original did) makes it feel less hasty. I also use this opportunity to insert some more Kaitodan shenanigans through flashback. This episode focuses on the bonds they have with Tsubasa, and the pain of losing that bond. So to make that emotional impact really hit, I have to utilise the flashback effectively.
Other than that, I guess the fight will progress similarly to episode 6 of the original, just in a different setting.
Side notes:
Having Yoshiaki and Tsubasa’s confrontation take place in what was once the lab where both were experimented on as children has symbolic significance, especially because AF are the key to the battle.
I avoided the whole flashback around Tsubasa slashing Tsuki/the organisation hacking Yoshiaki’s twitter to lure Tsubasa into a trap. Because I think this episode already has more than enough flashbacks. I want to try to avoid overuse of flashbacks because they tend to ruin the immersion and pacing of an episode. If I really had to insert it somewhere though, it would be during an exposition by Souma when he calls Tsubasa weak.
I tried to make Sumire and Kana’s roles more relevant. I am not sure if it worked.
You think the title of this episode refers to Yoshiaki and Tsubasa parting ways, but did you expect Ryouko would also actually die? This is (I hope) an unexpected death, and I want the audience to feel almost as devastated as Yuki does. Ryouko’s stuck around for a while now, so killing her off should have some sort of impact.
The ending of my version of this episode is a direct copy of the Tsuki Taiyou scene from episode 10 of the original. Placing it here is a set up for what I have planned in my version of the Lacuna break-in. But alternatively, it could also be placed at the end of my version of episode 6, because that is the episode we learn more about the heads of the organisation.
Episode 9: Fate/Fate
The original anime made this entire episode a flashback. And I hate that. Because I hate flashbacks. And using one for an entire episode just seems like lazy writing. I will address the whole thing with Daichi in my own way in a later episode, but for now I’ve got to do the break-in.
So, in a nutshell, I was very unsatisfied with how the anime handled the break-in. It was anticlimactic, and the real conflict (with Souma) happened nowhere near Lacuna. I understand the need for futility, but having nothing accomplished at all in the grand scheme of things is also doing a disservice to your audience. Because it’s wasting time if it’s not progressing the story or characters.
Essentially, I want my version of this episode to deliver the same information as the original anime episode 9 (flashback), but with about 70% more action and 99% less flashback. 
As I mentioned before, Lacuna was built up like a climactic boss battle so I am going to treat it like one. This is also a chance to foreshadow Taiyou’s powers that exhibit symptoms of Lost, and a chance for Suzuna to show off more of how formidable her powers can be.
I’ve put Tsuki as the miniboss because mate, she’s perfect. Why wouldn’t I? The anime seriously missed a golden opportunity. Tsuki is actually a really good character to fight against because she stands for a lot. She is obviously on Taiyou’s side, but in a tragic way. Despite being betrayed by him, she still stands by him, and will sacrifice herself even after he turned her into a monster. This is exactly what makes her fascinating. She brings moral greyness to an otherwise black and white fight against Taiyou. It’s also a chance to show that Akira is becoming more powerful, and Yuki feels more confident in herself. 
That said, the end of this episode is a direct test of Yuki’s character development, because she’s been put in the exact same situation she was in at the start of the anime. This is her chance to fix that regret that’s been haunting her the whole time. And now she has Akira by her side, she can be brave enough to move forward. Little does she know, what lies beyond the door is really going to test her limits…
Side notes:
Yuki being a product of two worlds was surprising, but I feel the impact is stronger if she gets told that directly by Taiyou, rather than the audience seeing it through a flashback. It adds more tension, especially as Taiyou gets away.
In my version of the anime, it is entirely possible for Takuya and Yuki to not even be aware Tsuki was once human (but the audience does know). I can play up the dramatic irony, by having Tsuki voice her thoughts. Yuki might get the feeling: this monster is completely loyal to Taiyou. Is she his pet?
On the other hand, if I choose to have Tsuki present herself as human-turned-monster, this gives an opportunity for her to explain there is good in following the Noah plan: it will save a lot of people. Going with this version should involve suggesting that Tsuki at least genuinely does want to save the people on her side. Maybe she has a child?
The Noah plan is important for understanding Taiyou’s motives. But honestly it is not that important for the audience to know all the details. Which is why I want Takuya to be the one to see it. He’ll trivialise it, because he knows what’s really important is what’s happening for him now. And really, Noah is just trivia/fan service for the audience in this anime, seeing as there is barely any moral ambiguity in Taiyou anyway.
Episode 10: Decision/Decision
This is the Souma episode. Despite the very different build up, the events in my version aren’t that different from the anime. 
I thought the lab would be an appropriate final confrontation setting. It symbolises Souma’s desire for power which ultimately led to his current downfall. But he wanted that power to protect Yuki. Which should resonate with Akira. And that is why he is also there.
Akira needed more screen time in the anime. He is not just a weapon; tamashii are not tools. He is Yuki’s protector, and his very presence as a tamashii implies his regret and desire around that.
This is simultaneously the grieving episode (original episode 10), which I’ve merged into the seaside scene. To really heighten the emotion, Yuki should have an outburst. Think about it, she’s been keeping everything bottled up inside her: her lab PTSD, the tragic fate of the orphans, and Ryouko. Losing her brother was the last straw. Showing vulnerability to Takuya is a sign she trusts him.
At the same moment Yuki loses one of the most important people in her life, Takuya gives her hope for the future. A bittersweet moment like this is really what the message of the anime should be about. Additionally the timing of this ray of hope means it is so much more important to Yuki to hold onto that image of happiness (and more impactful for the audience).
Takuya showing willingness to give up his contract is the ultimate sign this man cares for Yuki.
Side notes:
I feel like I packed a lot into this episode, but that’s because Souma’s death scene actually goes by very quickly, since neither side is putting up much of a fight? Both the original anime and my version have to rely on flashbacks to pad it out.
I foreshadowed Souma’s fate using Tsuki (both morphed into monsters).
Akira should also feel emotionally impacted by Souma’s death. Why does the anime make it seem like both Daichi and Akira show favouritism towards Yuki? Souma is also Daichi’s child, and he also saw Akira as a big brother. 
I also think the flashbacks should show more of the three interacting together, instead of solely Yuki and Souma.
Souma gets some final character development by apologising to Akira before he dies. 
At the seaside, Akira is also grieving. But he is full of guilt. He does not have the right to comfort Yuki.
Putting ourselves in Yuki’s shoes for a moment: the only two people left as a reminder of her previous ordinary life were her brother and her father. Now that Souma is gone, she is left with no choice but to go to Lost. Even if we ignore all the saving the world rhetoric, Yuki would still go just for personal reasons alone. Because what Yuki wants more than ever right now is the comfort of family. It’s an incredibly tragic situation. That’s why it is so important that Takuya shows a willingness to break his contract. He’s not just comforting her because he wants her to get over it and go to Lost so he can do his job. Knowing this allows Yuki to trust him, and show her vulnerable side.
This is the moment Yuki wishes things were different. If Takuya wasn’t here with her, she really might have reset the world like Daichi wanted her to.
Episode 11: Trust/Trust
My version of episode 11 is quite similar to the anime’s episode 11. Except I cut some NPC moments and massively extend the parallel world moment.
Suzuna and Taiyou’s scene plays out like in the anime. If possible, I’d like them to have some conversation in the sedan hinting that Taiyou has lost sight of his original altruistic goal.
I make the crux of this episode the parallel worlds within Lost. I thought that part was very cool in the original anime, but it just wasn’t impactful enough. 
I want to use this as an opportunity for Yuki to see Daichi’s memory. Essentially, I want to condense the flashback episode (episode 9) in the original anime to its bare essentials. This is probably going to be challenging, but with a combination of visuals paired with efficient narration by Daichi, I think it is possible to make things move quicker.
Basically, I want Yuki to see the whole picture, to understand her father’s perspective, and despite that make her final decision. Yuki realises her father is from a parallel world and is trying to save a lot of people from another world. But even so she is conflicted about whether what she is doing is right.
I like that this allows a clear link to happiness at the end. Yuki resonates with Daichi’s own desire for happiness, causing her to remember the happy future she painted together with Takuya. This is a much clearer way of explaining how Takuya and Yuki managed to free themselves from the effects of Lost, while also emphasising their bond.
Side note:
I’d like to believe that even with a clear image of happiness in mind, a normal human like Takuya should not be able to escape Lost unscathed. I’d like to hint that the fact he is even able to function in this space is because of his bond with Yuki, a girl intricately connected to parallel worlds and thus more immune to their effects?
Episode 12: Future/Future
The last episode’s boss battle with Taiyou plays out largely like the original anime. But I think there is scope to have a lot more satisfying pay-offs.
Lost is how the tamashii came to be, so first off I would recommend there be more of them. Even Tsubasa, who disappeared, could possibly exist as a parallel world version of himself. That’s what Lost is about, after all.
It would be fitting if Taiyou also gets taken down by these other tamashii, including the orphans. This would be an appropriate approach to highlight the contrast between Taiyou (who is alone), and Takuya and Yuki (who are not). Additionally, I bring back the shuumeigiku, which means ‘to endure’, in order to emphasise the justice that was eventually served.
I wanted there to be less spotlight on Suzuna and more on Akira. After all, he is the one with the real meaningful connection to protecting Yuki. Therefore I have him deal the final blow after evolving. On top of being epic fanservice, the fact he is the only tamashii who evolves in the entire anime means there is narrative significance to it. It is the culmination of his bond with Yuki, and it is fitting that he can finally wield that power to protect her properly.
After the fight in the original anime, Daichi shows Yuki an alternate world without tragedy, an ideal world. However, importantly, this is a world where Yuki never interacts with Takuya, Geek, or any of the other people she’s met on her journey, and I think the anime could have done better to show that. 
So my alternative is to have Yuki see those happy versions of Ryouko, Yoshiaki etc in the previous episode, while showing her own alternative life in this one. I really want to juxtapose the happiness she feels living an ordinary life with her family, against the missed opportunities to meet people like Ryouko and Takuya. This provides a lot more ambiguity to her choice, and we get to see the conflict playing out in her head in real time. By actually making it possible to follow her train of thought, the audience will be more likely to understand why she makes the decision she makes.
Finally, I choose to end my version of the anime with a monologue from Daichi as a homage to the game. But also, it seems appropriate because he is an observer, just like us. And as observers, who are we to judge the choices people make? I contrast Daichi’s message of hope to the somewhat less than ideal futures everyone leads in order to show the ambiguous ending I think the original anime was going for. Regardless of whether you end up better or worse, life moves on.
Side notes:
Yuki needs to show more emotion when she sees Daichi. Seriously.
I referenced the ending of the original anime (Yuki’s ideal house) in the ‘ideal’ world Daichi shows Yuki. I think the message the anime wanted to convey with that scene is that what we perceive to be true happiness may not really be that. And that’s kind of the vibe I’m going for in my ideal world scene, which is why I put it there instead of at the end.
Daichi should thank Akira. Holy fuck the poor guy’s been through so much with absolutely zero gratitude. He kept your daughter alive gdi
Ultimately the problem of Lost was never properly resolved? But the anime left it equally open-ended so I’m cool with that.
Finally, some overall points 
I removed Keigo and Shunpei entirely from my version of the anime. Because they weren't needed. Keigo I can put back in (because he and Yuuji serve the same narrative purpose), but I don't want Shunpei. He was an original character created purely to betray Ryouko and then die. I have no need for a character like that. He could be Ryouko's assassin, I guess, but I would rather the sniper be a tamashii cameo like Wolf or something. Or nobody at all. Because the sniper as a character is not important at all.
I removed all the deliberate food scenes because there was no scope to include them. It is one thing to claim you want to put more emphasis on food, but that should be second to creating an actually viable product.
And with that, this project is finally complete. 
I admit I had high hopes for the anime, and it frustrates me that it turned out the way it did when there were so many simple things they could have done to make it less confusing. 
That said, fixing something that has already been done is much easier than creating something entirely from scratch. And while everything may work on paper, translating that into practice with the actual production is a very different story. Therefore, despite how everything turned out, I still commend the production team for being able to make the anime a reality at all. 
As I've mentioned before, it is not easy to make an engaging story fit in the span of just twelve episodes, let alone one adapted from an epic game featuring time travel and parallel worlds. But I wish they could have tried just a little bit harder, to be just a little bit more risky, to deliver a product we could actually enjoy.
Some of you may like my version of the anime better, some of you may not. A lot of my personal biases definitely showed through, and it was more challenging than I thought translating my ideas into writing. If I’ve confused you about anything, feel free to drop me a message.  
If this post wasn’t long enough for you, you can read more of my analyses on the Shoumetsu Toshi anime below: 
My thoughts on the anime before it aired
My thoughts on the anime after it aired
a deeper analysis of episodes 5 and 6
judgeanon · 8 years ago
A SHORT HISTORY OF FEMALE JUDGES IN JUDGE DREDD FROM 2004 TO 2007
With a sense of newfound stability and confidence brought about by finally being owned by a company that genuinely cared for its characters and stories, 2000AD carried onwards into the new millennium, with John Wagner leading Dredd into a new epic and setting him off on the road to a storyline that would redefine both character and setting forever. One particular staple of this era is the solidification of the strip as a very character-driven, procedural crime drama, building even further on the lessons learned from “The Pit” but also adding a deeper layer of examination of the strip’s protagonist and his relationships with his supporting cast.
Unfortunately, said supporting cast is still running a bit low in the female judges department, although that doesn’t really put a dent in female protagonism in general, as Dredd’s niece Vienna takes a much more central role. And although Chief Judge Hershey likewise remains a regular fixture, it’ll still take a few more years for Wagner to introduce a new female judge with any real lasting power. In the meantime, however, a new generation of writers and artists will begin introducing several new female judges in a variety of roles, from background extras to one-thrill wonders and maybe even villains...
(Previous posts: 1979 to 1982 - 1982 to 1986 - 1986 to 1990 - 1990 to 1993 - 1993 to 1995 - 1995 to 1998 - 1998 to 2001 - 2001 to 2004. All stories written by John Wagner unless noted otherwise. Cover art by Henry Flint)
Our first stop is “Terror”, painted by Colin MacNeil and published in progs 1392-1399 (June-July 2004). A prologue to the upcoming mini-epic of the year, it heavily features a Judge Stuyvesant as part of a small task force of judges investigating the extremist democratic terrorist group Total War. Sporting her own variant on the black bobcut, Stuyvesant runs surveillance on a suspected Total War operative as he falls hopelessly in love with a citizen, acting as a secret bridge of sorts between them and Dredd, and saving the latter from having to spend hours looking at monitors.
Right after the last episode of “Terror” comes “Big Deal at Drekk City”, drawn by Cam Kennedy (progs 1400-1404, August ’04), where Dredd and a Judge Vance take a handful of cadets, including a very aptly-named Cadet Laws on a rather troubled Cursed Earth familiarization trip. Vance proves to be an experienced judge, not just in combat but also at testing the cadets’ attitude, but at the story’s climax she takes a spear to the chest and comes extremely close to dying. Luckily for her, the cadets overturn Dredd’s orders to stay back and return to save both of them, with Laws taking care of her injuries. So a decent outing, all things considered.
And so we get to the first big thrill of this post, the 12-episodes long “Total War”, drawn by Henry Flint and published in progs 1408 to 1419 (September-December ’04). This is also Chief Judge Hershey’s first city-wide crisis in office, as both “Helter Skelter” and the Aliens invasion were events mostly isolated to one or two sectors. The threat itself comes from an alleged two hundred nuclear explosives secretly placed around the city by Total War, the afore-introduced terrorist group. Their demands are simple: all judges must turn in their badges and surrender their power to the public, or they will begin detonating the bombs at regular intervals until they accept or the city has been reduced to glowing dust.
With a clear (albeit hidden) enemy and the tense, gripping pace of a good Tom Clancy novel, “Total War” has little room for character development, and most of it is taken by a subplot involving Dredd, Vienna and a genetically-altered clone. The impersonal nature of the threat also means there’s not much in the way of gunplay or fight scenes, with most of the action being a race against the clock for Dredd and a team of investigators to locate and dispose of the nukes. One thing we do get to see, however, is Hershey at her best as Chief Judge: unfettered, collected and focused, but also willing to resort to certain tactics that many of her predecessors would’ve found difficult to stomach. The most obvious case being the opening page of episode 7, where in a citywide broadcast she concedes to Total War’s demands and orders the immediate disbanding of the judges and a return to a civilian government.
Naturally, it’s all a ruse designed to buy more time as Dredd and company desperately chase every possible lead and exhaust every resource to find the bombs, and to Hershey’s credit it works like a spell. Chief of Undercover Division Judge Hollister also makes an appearance, as does Judge Stuyvesant from a few months back, and it’s actually pretty interesting to see how Flint manages to give the latter’s design a few unique qualities to differentiate her from her chief.
A few female undercover judges, an unnamed bespectacled control judge and a fairly striking PSU judge also make small appearances, and once the crisis is over we get a short and somewhat odd scene where Dredd tries to hand over his badge for racing to save an endangered Vienna from the devastation of a nuclear blast instead of protecting the citizens, but Hershey downplays his perceived dereliction of duty and reminds him that he’s still human. It’s rather strange and almost out of place at the end of a story where her stoicism reached almost robotic heights, but it does a good job of showing the personal bond that still exists between them. And at any rate, it’s worth it to see her turn Dredd’s catchphrase against him.
By way of intermission, the special prog 2005′s “Christmas with the Blints” (Andrew Currie, January ‘05) has Dredd travelling to Brit-Cit hot on the trail of a married couple of serial killers, which nets us a couple of background female brit judges in a few panels. Then it’s back to the Big Meg for a handful of epilogue stories dealing with the fallout of Total War’s terrorist attack. “After the Bombs” (Jason Brashill, 1420-1422, idem) has yet another appearance by Stuyvesant; “Horror in Emergency Camp 4″ (D’Israeli, 1425-1428, February ‘05) has a quite staggering amount of possible background female street judges, including two named ones called Rush and Woo and a very librarian-like PSU judge; and “Missing in Action” (written by Gordon Rennie and drawn by Ian Gibson, 1429-1431, March ‘05) has not only a young Judge Herriman as a small plot point, but also a very odd female judge with dual straight shoulderpads, platinum blonde hair and a Justice Dept. branded hairband who may be a psi (Anderson, even?), although it’s hard to tell because the badge looks like a regular street judge’s.
(The one liberty Justice Dept. hasn’t crushed: artistic liberties!)
EDIT: via Facebook, Gibson himself has confirmed the judge pictured here is indeed a regular street judge. He also had a few comments about the design:
"Returning to the ‘uniform’ topic, for some reason that now escapes me, I decided to give the female judge from the Missing in action adventure a Justice department head scarf. I think it suits her and makes her less scary for the little girl they rescue.”
Speaking of Psis, a slightly redesigned Judge Karyn reappears in progs 1432-1436’s “Descent” (by Rennie and Boo Cook, April ‘05) with a new pink hairstyle and a psi-flash that leads her and Dredd into the Undercity to rescue some survivors of a hovership crash through the hole left by one of Total War’s bombs. Unfortunately, what they find there is a supernatural entity known as the Shadow King, which Dredd and Karyn had already fought in the Megazine (volume 4, issue 5). In the ensuing firefight, Dredd is possessed by the Shadow King’s spirit and turns into a hulking monstrosity, and Karyn takes her whole “Anderson wannabe” character trait to its logical conclusion by knocking Dredd out and absorbing the spirit into herself. Unfortunately, the Shadow King turns out to be too powerful for Karyn, and ends up destroying her mind and taking over her body. She’s eventually subdued by Dredd and judge reinforcements, and the creature once known as Judge Karyn is locked inside a holding cell deep inside Psi Division’s headquarters, never to escape.
It’s a move somewhat reminiscent of Garth Ennis’ treatment of Judge Perrier or Dekker: bring an obscure character back from the depths of oblivion, use them as supporting cast for a couple of years, then kill ‘em off at a later date. And it can definitely be read as a fairly manipulative attempt at getting some emotion out of disposable characters by offing recognizable names rather than complete nobodies. However, the difference between them and Karyn to me lies in Rennie’s very meta-textual idea of her chasing after Anderson’s star. Karyn, much like Janus, was created as a reserve Anderson and swiftly put aside once she returned. They both have their fans, sure, but the general consensus is that try as they might, they just couldn’t match up to the original. Which is exactly what happens to Karyn in this story. She tries to be Anderson but ultimately isn’t as strong as her, and ends up literally erased as a result. It’s still a very heroic sacrifice, as she dies saving Dredd, but it also has a deeper narrative core that was missing from pretty much every other revival of old, forgotten judges.
Rennie sticks around to write the much longer “Blood Trails” (art by Currie, progs 1440-1449, May-July ‘05), which is his own take on the Wagnerian procedural cop show-style mini-epic. As such it features a nice couple of female background judges (only one gets a name: Weisak), including a surprise cameo by Judge Morinta, the inventor med-judge from “Gulag”, although for some reason she’s now blonde instead of brunette. And of course, there’s the by now mandatory final page visit to Hershey’s office, this time to orchestrate a covert retaliatory orbital strike against Anatoli Kazan, War Marshall Kazan’s clone (also introduced in “Gulag”), for siccing a bunch of assassins on Dredd’s niece. Needless to say, it gets carried out quite swiftly.
A few weeks later, Carlos Ezquerra draws another background female judge in “Matters of Life and Death”, also by Rennie (1452, August ‘05). Wagner returns with artist Kev Walker in tow to deliver “Mandroid” (1453-1464, August-November ‘05), and although the main female character in that story is not really a judge, there are still a few proper ones scattered throughout, including a Judge Kowalski who gets a tender little background moment in what’s otherwise one of the absolute bleakest stories of the decade.
Right afterwards, we get a Tek Judge James in prog 1465′s “Everything In The Garden” (Arthur Ranson, November ‘05). Then Rennie and Flint return for a small epilogue to “Blood Trails”, as Anatoli Kazan, now hunted by his own government, arrives in Mega-City One requesting political asylum in “Change of Loyalties” (1466, November ‘05). Of course, Dredd’s having none of it, but since Anatoli could potentially be an invaluable tactical resource and a goldmine of intel, Hershey asks him to at least talk to the creep before calling for his execution. So what we’ve got here is maybe the first example of the main conflict between Dredd and Hershey, one that continues literally to this day in stories like “Harvey”.
(Also of note: this Brendan McCarthy-esque coloring job by Flint. Talk about seeing red!)
In this corner we have the immovable object: Judge Dredd, with a mindset as narrow as his helmet’s visor, never one to second-guess himself or back down, apt to follow both his sharp gut instincts and his Everestian mountains of experience, and usually very, very right. In the other, we have the irresistible force: Chief Judge Hershey, focused on the big picture, willing to take a chance on a potentially risky idea that could also bring about huge benefits to her city, confident that they’ll be able to handle whatever pitfalls may appear later on, but willing to listen to all parties involved. That last part is important because with any other chief this is the kind of conflict that would lead to some serious fallout, but Hershey is smarter than that, and more importantly, has seen first-hand what happens to Chiefs who don’t listen to Dredd. Her default way, then, of bridging the gap between the two forces is to plainly ask Dredd his opinion and promise to act according to it, in this case by letting him decide her vote on the council.
Overall it’s a decent way to avoid some very cliche drama, but there’s a couple of problems with it, not least of all that the sheer number of stories like this has turned it into a cliche in and of itself. However, the biggest problem, for me, is that it reduces Hershey to a bit of an echo chamber or political proxy for Dredd, allowing him to make decisions and direct the course of the city without actually leaving his position as a street judge. It lets him play in both arenas at once but doesn’t chain him to anything. From an in-story perspective it makes sense, since despite everything they still have a history together and Dredd is rarely wrong, but it’s annoying from a character standpoint because it stifles Hershey quite a bit. At its worst, she comes off as just a puppet of Dredd’s, although her directly asking for his counsel and seemingly agreeing with it on some level helps stave that off. And in a sneaky bit of storytelling, when the council does make its decision on Anatoli’s fate we get to see the result but we don’t get to see who actually voted for what. So whether Hershey actually went through with her promise is left, much like the voting hands, up in the air.
Rounding up 2005 we have a forensic judge at the start of “Nobody”, by Robbie Morrison and Richard Elson (1467, November). The new year then kicks off with a nice handful of background judges through “Your Beating Heart”, by Wagner and Patrick Goddard (1469-1474, January-February ‘06) and the return of Judge Lola in Ian Edginton and D’Israeli’s “Time and Again” (1475, February ‘06), which makes sense considering it’s a sequel to “Tempus Fugitive”. The same story also features an elderly, unnamed scarred female judge as head of a parole board. 
Things get a little more exciting in our next stop, with newcomer writer Simon Spurrier, artist Laurence Campbell and (awesomely-named) inker Kris Justice’s “Dominoes” (1482, April ‘06), a story that plants some seeds that would take six years to blossom.
(This is also the first time an artist draws the oversized chains holding Hershey’s badge, as far as I've been able to glean, which means I owe Campbell a drink because it’s my absolute favorite detail of her uniform)
Ostensibly, the story is all about Dredd going on a diplomatic mission to Neocuba to handle a prisoner exchange with their president. Two pages in, however, we learn that his ship’s pilot is actually a fanatical black ops agent with a mission of her own: assassinating said president in such a way that it looks like either an accident or a covert sov op. Which she does beautifully and without Dredd ever realizing it. And in a final bit of very sharp writing, Spurrier all but screams she did it all on Hershey’s orders.
Overall, a fair lot to unpack for a six page story. We’ve known for a while now that Hershey favours more subtle, underhanded ways of securing Mega-City One’s interests than blunt force of arms, but something about this one feels like pushing it. Maybe it’s keeping Dredd in the dark about it, or directly targeting a foreign head of state, but if it’s not crossing a line, at the very least it’s toeing it. In a way it echoes "The Chief Judge’s Man”, but having the target be an implicitly corrupt foreign leader makes it slightly less damning than murdering rebellious but innocent MC-1 citizens. And, more importantly, Spurrier decides to end the story without confirming nor denying Hershey’s involvement in it, although at the time it seemed like a sure thing.
From a six-pager to a six-episode-er as Gordon Rennie, Ian Richardson and PJ Holden bring us “House of Pain” (1485-1490, May ‘06), and what a treat it is. Right off the bat we open with Judge Alice, a street judge driving a catch wagon on the graveyard shift, being harassed by a couple of punks. Her scene is mostly set-up for Judge Guthrie’s return, but things get a lot better as the story goes along. First, with a small guest appearance by Wally Chief Judge Hollister going undercover with two others as kneepad models in a pretty funny, albeit pretty skeevy scene; then with Judge Corson, a bomb defusal specialist tek who first helps Dredd deal with a perp’s suicide box implant and later with the main villain’s offshore platform’s self-destruct; and there’s still enough space for Hershey to deliver a short lesson on law economics:
By contrast, “Jumped”, by John Smith and Simon Fraser (1491-1494, June ‘06), only has a lone unhelmeted female judge in a panel, although she is seen with a similarly unhelmeted male judge, so maybe all those fan letters about lady judges being allergic to helmets are finally having an effect. “Neoweirdies”, by Simon Spurrier and Paul Marshall, (1496-1498, July ‘06) has a couple of background sightings but also co-stars a Judge Garris as part of a team investigating murders in a pretentious weirdos competition, including a cheeky little panel of her checking out a naked contestant. Avert your eyes, lest the SJS pluck ‘em out. Spurrier also writes the Pete Doherty-drawn “Versus” (1499, August ‘06), a mostly silent tale featuring a female control judge.
Now we reach what’s probably the most important and certainly the longest story of the decade: “Origins”, by Wagner and Ezquerra (progs 1505 to 1535 -with interludes- September ‘06 to May ‘07). The premise is a heavy one: Justice Department receives a note demanding a ransom for the corpse of Judge Fargo, creator of the judge system and father of justice. With the note is a tissue sample that was taken from a living organism, so if it is indeed Fargo, he’s also alive. Chief Judge Hershey quickly sends Dredd and a team of hand-picked judges into the Cursed Earth with one billion credits and a wagon to follow the trail of the kidnappers. The team includes two female judges: the returning Judge Sanchez from “Incubus” and a Judge Waters. Of the two, Waters is definitely the most impressive, a hardened street judge who acts as Dredd’s second in command early on, leading a defensive action against an army of Mad Max extras with a very calm, matter-of-fact badass attitude that feels quite refreshing after years of more troubled, insecure judges. It also helps that Ezquerra draws her as noticeably older than Sanchez, so it’s clear from the get-go that Waters is a veteran. And near the end of the story, Waters gets another chance to shine as she orchestrates and executes the rescue of Dredd and Fargo from the renegade army of the damned. Overall, she proves herself more than worthy of being in a Brian Bolland cover.
Tumblr media
Of course, as the name suggests, “Origins” is mostly concerned with showing the birth of the Justice Department and the events leading to Dredd’s creation, so there’s not much else there that concerns this post, save for one small but crucial page: when the young Fargo is outlining to the United States Senate his plans for the creation of a corps of judges armed with the power to dispense instant justice, he specifically mentions the need for choosing “good men and women.” And on the same page we get a glimpse of an early class of judge cadets in training, which also includes a couple of women. So here is confirmation that from day one the judges counted several women in their ranks, although admittedly by now this probably wasn’t terribly in doubt.
The final episode also has a predictable appearance by Hershey, who gets the unexpected privilege of being the second-to-last person to talk to Fargo before he finally expires. The last one, naturally, is Dredd himself, although both of them lie through their teeth about what Fargo’s final words to them actually were. More on that later.
Tumblr media
Now, about those interludes. Prog 1521’s “The Sexmek Slasher” (January ‘07) by Wagner and Vince Locke features a Judge Wyler in a supporting role. And Gordon Rennie teams up with Ian Gibson to bring us “Judgement” (1523-1528, February-March ‘07), a supernatural revenge story guest-starring Judge Anderson in one of her rare non-Grant/Wagner-scripted appearances.
Tumblr media
(Also featuring the return of Teddy Dredd!)
The story is a tightly-written tale about a ghost judge murdering members of a crime cartel known for using psykers to mask their activities. The revenant also murders a judge but spares his partner, a Judge Bunns (could it be the same Bunns from 1983′s “Rumble in the Jungle”?), demonstrating some kind of psychic ability to tell the guilty from the innocent, although how innocent a Mega-City One judge can really be is anyone’s guess. Dredd and Anderson’s investigation reveals that the ghost is actually the enraged soul of a long-dead judge, killed by Rico Dredd before he was sent to Titan. But it’s Anderson who connects the final dot and reaches the real source of the apparition: Judge Edek, a veteran psi-judge ambushed and all but murdered by the aforementioned cartel. Crippled beyond repair and frozen in cryo-stasis in the hopes that some day technology would advance enough to heal her, some part of Edek’s consciousness remained, well, conscious, and reached out in anger to another betrayed judge in order to turn them into a revenant and get her revenge. Ultimately, Anderson reaches Edek’s chamber after fighting her way through a gorgeously drawn horde of ectoplasmic monsters and pulls the plug, while Dredd handles what remains of Judgement on his own.
Tumblr media
Overall, “Judgement” is a very strong showing for Anderson and a great psychic/supernatural twist on the old “judge turned vigilante” premise of stories like “The Executioner” or “Raider.” Also on display is Gibson’s slight redesign of the psi judge uniform, as Anderson sports twin straight shoulderpads all the way through. According to Gibson himself, the pads helped him make her more “dynamic.” He also had a spot of trouble rendering Anderson’s new shorter haircut, first shown in Alan Grant and Arthur Ranson’s ongoing Psi Division strip in the Megazine. In his own words:
"Then, when I was back doing a Dredd for 2000ad, on a story called ‘Judgement’ ( I think ), Andy was again in the script. But someone had cut off all her lovely flowing blonde tresses, for reasons of their own. So I had to render her thus.”
Rounding out this post we have a trio of short done-in-ones: “Fifty-Year Man” (Wagner and Patrick Goddard, prog 1536, May ‘07) has a female judge, perhaps a public relations administrator, as head of a team putting together a retrospective of Dredd’s fifty years on the streets, with some predictably disastrous results. In a similar vein, a journalist is arrested by a Judge Socks (maybe; her badge is hard to read) after going insane trying to write a biography of Dredd in “The Biographer”, Rob Williams’ first appearance in this project (with Boo Cook, 1537, idem). And we end on a note of tragedy with “The Incident”, by Robbie Morrison and Richard Elson (1538, idem). The story kicks off when undercover Judge Ferrara has her cover blown and is kidnapped by assassins who use a nano-virus to destroy the high-tech implant wire on her brain that she’d been using to spy on their boss. Unfortunately for her, both PSU and Dredd are too slow to reach her, and the virus erases her mind. One seriously bleak story, although we do get to see Elson draw some seriously unique uniforms for the Control judges:
Tumblr media
In our next episode: the hardest part to write for me. Also, the most important female judge of the new millennium has her prog debut.
21 notes · View notes
michaelandy101-blog · 5 years ago
Text
Fifteen Years Is a Long Time in SEO
Posted by willcritchlow
I’ve been in an introspective mood lately.
Earlier this year (15 years after starting Distilled in 2005), we spun out a new company called SearchPilot to focus on our SEO A/B testing and meta-CMS technology (previously known as Distilled ODN), and merged the consulting and conferences part of the business with Brainlabs.
I’m now CEO of SearchPilot (which is primarily owned by the shareholders of Distilled), and am also SEO Partner at Brainlabs, so… I’m sorry everyone, but I’m very much staying in the SEO industry.
As such, it feels a bit like the end of a chapter for me rather than the end of the book, but it has still had me looking back over what’s changed and what hasn’t over the last 15 years I’ve been in the industry.
I can’t lay claim to being one of the first generation of SEO experts, but having been building websites since around 1996 and having seen the growth of Google from the beginning, I feel like maybe I’m second generation, and maybe I have some interesting stories to share with those who are newer to the game.
I’ve racked my brain to try and remember what felt significant at the time, and also looked back over the big trends through my time in the industry, to put together what I think makes an interesting reading list that most people working on the web today would do well to know about.
The big eras of search
I joked at the beginning of a presentation I gave in 2018 that the big eras of search oscillated between directives from the search engines and search engines rapidly backing away from those directives when they saw what webmasters actually did:
While that slide was a bit tongue-in-cheek, I do think that there’s something to thinking about the eras like:
Build websites: Do you have a website? Would you like a website? It’s hard to believe now, but in the early days of the web, a lot of folks needed to be persuaded to get their business online at all.
Keywords: Basic information retrieval became adversarial information retrieval as webmasters realized that they could game the system with keyword stuffing, hidden text, and more (there’s a tiny illustration of just how gameable that was right after this list).
Links: As the scale of the web grew beyond user-curated directories, link-based algorithms for search began to dominate.
Not those links: Link-based algorithms began to give way to adversarial link-based algorithms as webmasters swapped, bought, and manipulated links across the web graph.
Content for the long tail: Alongside this era, the length of the long tail began to be better-understood by both webmasters and by Google themselves — and it was in the interest of both parties to create massive amounts of (often obscure) content and get it indexed for when it was needed.
Not that content: Perhaps predictably (see the trend here?), the average quality of content returned in search results dropped dramatically, and so we see the first machine learning ranking factors in the form of attempts to assess “quality” (alongside relevance and website authority).
Machine learning: Arguably everything from that point onwards has been an adventure into machine learning and artificial intelligence, and has also taken place during the careers of most marketers working in SEO today. So, while I love writing about that stuff, I’ll return to it another day.
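To show what made that keywords era so gameable, here is a toy sketch in Python. It is entirely made up for this post and looks nothing like any real engine’s code: if relevance is just how often the query term appears on the page, stuffing wins every time.

# A deliberately naive relevance score: count exact matches of the query term.
def naive_score(page_text: str, query: str) -> int:
    return page_text.lower().split().count(query.lower())

honest_page = "A short practical guide to choosing running shoes and finding the right fit"
stuffed_page = "buy shoes cheap shoes best shoes " * 20  # classic keyword stuffing

print(naive_score(honest_page, "shoes"))   # 1
print(naive_score(stuffed_page, "shoes"))  # 60, so the stuffed page "wins" easily

Each era in the list above is essentially a patch for the failure mode of the one before it.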
History of SEO: crucial moments
Although I’m sure that there are interesting stories to be told about the pre-Google era of SEO, I’m not the right person to tell them (if you have a great resource, please do drop it in the comments), so let’s start early in the Google journey:
Google’s foundational technology
Even if you’re coming into SEO in 2020, in a world of machine-learned ranking factors, I’d still recommend going back and reading the surprisingly accessible early academic work:
The Anatomy of a Large-Scale Hypertextual Web Search Engine by Sergey Brin and Lawrence Page [PDF]
Link Analysis in Web Information Retrieval [PDF]
Reasonable surfer (and the updated version)
If you weren’t using the web back then, it’s probably hard to imagine what a step-change improvement Google’s PageRank-based algorithm was over the “state-of-the-art” at the time (and it’s hard to remember, even for those of us that were):
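To make that step-change a little more concrete, here is a minimal power-iteration sketch of the PageRank idea on a toy link graph. To be clear, this is my own simplification for illustration, not Google’s implementation, and it ignores everything about crawling, anchor text and spam.

# Toy PageRank via power iteration. Pages "vote" for the pages they link to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],  # nothing links back to "d"
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start everyone equal
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its vote evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share  # votes flow along links
        rank = new_rank
    return rank

print(pagerank(links))  # "c" comes out on top: it has the most inbound links

The important part isn’t the arithmetic, it’s that the score comes from the structure of the web rather than from anything a page says about itself, which is exactly why the next arms race moved to links.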
Google’s IPO
In more “things that are hard to remember clearly,” at the time of Google’s IPO in 2004, very few people expected Google to become one of the most profitable companies ever. In the early days, the founders had talked of their disdain for advertising, and had experimented with keyword-based adverts somewhat reluctantly. Because of this attitude, even within the company, most employees didn’t know what a rocket ship they were building.
From this era, I’d recommend reading the founders’ IPO letter (see this great article from Danny Sullivan — who’s ironically now @SearchLiaison at Google):
“Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating.”
“Because we do not charge merchants for inclusion in Froogle [now Google shopping], our users can browse product categories or conduct product searches with confidence that the results we provide are relevant and unbiased.” — S1 Filing
In addition, In the Plex is an enjoyable book published in 2011 by Steven Levy. It tells the story of what then-CEO Eric Schmidt called (around the time of the IPO) “the hiding strategy”:
“Those who knew the secret … were instructed quite firmly to keep their mouths shut about it.”
“What Google was hiding was how it had cracked the code to making money on the Internet.”
Luckily for Google, for users, and even for organic search marketers, it turned out that this wasn’t actually incompatible with their pure ideals from the pre-IPO days because, as Levy recounts, “in repeated tests, searchers were happier with pages with ads than those where they were suppressed”. Phew!
Index everything
In April 2003, Google acquired a company called Applied Semantics and set in motion a series of events that I think might be the most underrated part of Google’s history.
Applied Semantics technology was integrated with their own contextual ad technology to form what became AdSense. Although the revenue from AdSense has always been dwarfed by AdWords (now just “Google Ads”), its importance in the history of SEO is hard to overstate.
By democratizing the monetization of content on the web and enabling everyone to get paid for producing obscure content, it funded the creation of absurd amounts of that content.
Most of this content would have never been seen if it weren’t for the existence of a search engine that excelled in its ability to deliver great results for long tail searches, even if those searches were incredibly infrequent or had never been seen before.
In this way, Google’s search engine (and search advertising business) formed a powerful flywheel with its AdSense business, enabling the funding of the content creation it needed to differentiate itself with the largest and most complete index of the web.
As with so many chapters in the story, though, it also created a monster in the form of low quality or even auto-generated content that would ultimately lead to PR crises and massive efforts to fix.
If you’re interested in the index everything era, you can read more of my thoughts about it in slide 47+ of From the Horse’s Mouth.
Web spam
The first forms of spam on the internet were various forms of messages, which hit the mainstream as email spam. During the early 2000s, Google started talking about the problem they’d ultimately term “web spam” (the earliest mention I’ve seen of link spam is in an Amit Singhal presentation from 2005 entitled Challenges in running a Commercial Web Search Engine [PDF]).
I suspect that even people who start in SEO today might’ve heard of Matt Cutts — the first head of webspam — as he’s still referenced often despite not having worked at Google since 2014. I enjoyed this 2015 presentation that talks about his career trajectory at Google.
Search quality era
Over time, as a result of the opposing nature of webmasters trying to make money versus Google (and others) trying to make the best search engine they could, pure web spam wasn’t the only quality problem Google was facing. The cat-and-mouse game of spotting manipulation (particularly of on-page content, external links, and anchor text) would be a defining feature of the next decade-plus of search.
It was after Singhal’s presentation above that Eric Schmidt (then Google’s CEO) said, “Brands are the solution, not the problem… Brands are how you sort out the cesspool”.
Those who are newer to the industry will likely have experienced some Google updates (such as recent “core updates”) first-hand, and have quite likely heard of a few specific older updates. But “Vince”, which came after “Florida” (the first major confirmed Google update), and rolled out shortly after Schmidt’s pronouncements on brand, was a particularly notable one for favoring big brands. If you haven’t followed all the history, you can read up on key past updates here:
A real reputational threat
As I mentioned above in the AdSense section, there were strong incentives for webmasters to create tons of content, thus targeting the blossoming long tail of search. If you had a strong enough domain, Google would crawl and index immense numbers of pages, and for obscure enough queries, any matching content would potentially rank. This triggered the rapid growth of so-called “content farms” that mined keyword data from anywhere they could, and spun out low-quality keyword-matching content. At the same time, websites were succeeding by allowing large databases of content to get indexed even as very thin pages, or by allowing huge numbers of pages of user-generated content to get indexed.
This was a real reputational threat to Google, and broke out of the search and SEO echo chamber. It had become such a bugbear of communities like Hacker News and StackOverflow, that Matt Cutts submitted a personal update to the Hacker News community when Google launched an update targeted at fixing one specific symptom — namely that scraper websites were routinely outranking the original content they were copying.
Shortly afterwards, Google rolled out the update initially named the “farmer update”. After it launched, we learned it had been made possible because of a breakthrough by an engineer called Panda, hence it was called the “big Panda” update internally at Google, and since then the SEO community has mainly called it the Panda update.
Although we speculated that the internal working of the update was one of the first real uses of machine learning in the core of the organic search algorithm at Google, the features it was modelling were more easily understood as human-centric quality factors, and so we began recommending SEO-targeted changes to our clients based on the results of human quality surveys.
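Purely as an illustration of what “changes based on the results of human quality surveys” can look like in practice, here is a made-up sketch: every page feature, survey rating and number below is invented for this example, and it needs Python 3.10+ for statistics.correlation.

# Toy analysis of human quality ratings against simple page features.
# Each row: (word_count, ads_above_fold, has_author_byline, avg_survey_rating 1-5)
from statistics import correlation

pages = [
    (1800, 1, 1, 4.4),
    (350,  4, 0, 1.9),
    (900,  2, 1, 3.6),
    (420,  5, 0, 1.5),
    (1300, 1, 1, 4.1),
    (600,  3, 0, 2.4),
]

features = ["word_count", "ads_above_fold", "has_author_byline"]
ratings = [row[3] for row in pages]

for i, name in enumerate(features):
    values = [row[i] for row in pages]
    print(f"{name:>18}: correlation with rating = {correlation(values, ratings):+.2f}")
# In this invented data, ad-heavy thin pages line up with low trust ratings,
# which is the kind of finding that turns into concrete recommendations.

Crude, but it turns “improve quality” into specific, testable changes.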
Everything goes mobile-first
I gave a presentation at SearchLove London in 2014 where I talked about the unbelievable growth and scale of mobile and about how late we were to realizing quite how seriously Google was taking this. I highlighted the surprise many felt hearing that Google was designing mobile first:
“Towards the end of last year we launched some pretty big design improvements for search on mobile and tablet devices. Today we’ve carried over several of those changes to the desktop experience.” — Jon Wiley (lead engineer for Google Search speaking on Google+, which means there’s nowhere to link to as a perfect reference for the quote but it’s referenced here as well as in my presentation).
This surprise came despite the fact that, by the time I gave this presentation in 2014, we knew that mobile search had begun to cannibalize desktop search (and we’d seen the first drop in desktop search volumes):
And it came even though people were starting to say that the first year of Google making the majority of its revenue on mobile was less than two years away:
Writing this in 2020, it feels as though we have fully internalized how big a deal mobile is, but it’s interesting to remember that it took a while for it to sink in.
Machine learning becomes the norm
Since the Panda update, machine learning was mentioned more and more in the official communications from Google about algorithm updates, and it was implicated in even more. We know that, historically, there had been resistance from some quarters (including from Singhal) towards using machine learning in the core algorithm due to the way it prevented human engineers from explaining the results. In 2015, Sundar Pichai took over as CEO, moved Singhal aside (though this may have been for other reasons), and installed AI / ML fans in key roles.
It goes full-circle
Back before the Florida update (in fact, until Google rolled out an update they called Fritz in the summer of 2003), search results used to shuffle regularly in a process nicknamed the Google Dance:
Most things have been moving more real-time ever since, but recent “Core Updates” appear to have brought back this kind of dynamic where changes happen on Google’s schedule rather than based on the timelines of website changes. I’ve speculated that this is because “core updates” are really Google retraining a massive deep learning model that is very customized to the shape of the web at the time. Whatever the cause, our experience working with a wide range of clients is consistent with the official line from Google that:
Broad core updates tend to happen every few months. Content that was impacted by one might not recover — assuming improvements have been made — until the next broad core update is released.
Tying recent trends and discoveries like this back to ancient history like the Google Dance is just one of the ways in which knowing the history of SEO is “useful”.
If you’re interested in all this
I hope this journey through my memories has been interesting. For those of you who also worked in the industry through these years, what did I miss? What are the really big milestones you remember? Drop them in the comments below or hit me up on Twitter.
If you liked this walk down memory lane, you might also like my presentation From the Horse’s Mouth, where I attempt to use official and unofficial Google statements to unpack what is really going on behind the scenes, and try to give some tips for doing the same yourself:
SearchLove San Diego 2018 | Will Critchlow | From the Horse’s Mouth: What We Can Learn from Google’s Own Words from Distilled
To help us serve you better, please consider taking the 2020 Moz Blog Reader Survey, which asks about who you are, what challenges you face, and what you’d like to see more of on the Moz Blog.
Take the Survey
0 notes
isearchgoood · 5 years ago
Text
Fifteen Years Is a Long Time in SEO
Posted by willcritchlow
I’ve been in an introspective mood lately.
Earlier this year (15 years after starting Distilled in 2005), we spun out a new company called SearchPilot to focus on our SEO A/B testing and meta-CMS technology (previously known as Distilled ODN), and merged the consulting and conferences part of the business with Brainlabs.
I’m now CEO of SearchPilot (which is primarily owned by the shareholders of Distilled), and am also SEO Partner at Brainlabs, so… I’m sorry everyone, but I’m very much staying in the SEO industry.
As such, it feels a bit like the end of a chapter for me rather than the end of the book, but it has still had me looking back over what’s changed and what hasn’t over the last 15 years I’ve been in the industry.
I can’t lay claim to being one of the first generation of SEO experts, but having been building websites since around 1996 and having seen the growth of Google from the beginning, I feel like maybe I’m second generation, and maybe I have some interesting stories to share with those who are newer to the game.
I’ve racked my brain to try and remember what felt significant at the time, and also looked back over the big trends through my time in the industry, to put together what I think makes an interesting reading list that most people working on the web today would do well to know about.
The big eras of search
I joked at the beginning of a presentation I gave in 2018 that the big eras of search oscillated between directives from the search engines and search engines rapidly backing away from those directives when they saw what webmasters actually did:
While that slide was a bit tongue-in-cheek, I do think that there’s something to thinking about the eras like:
Build websites: Do you have a website? Would you like a website? It’s hard to believe now, but in the early days of the web, a lot of folks needed to be persuaded to get their business online at all.
Keywords: Basic information retrieval became adversarial information retrieval as webmasters realized that they could game the system with keyword stuffing, hidden text, and more.
Links: As the scale of the web grew beyond user-curated directories, link-based algorithms for search began to dominate.
Not those links: Link-based algorithms began to give way to adversarial link-based algorithms as webmasters swapped, bought, and manipulated links across the web graph.
Content for the long tail: Alongside this era, the length of the long tail began to be better-understood by both webmasters and by Google themselves — and it was in the interest of both parties to create massive amounts of (often obscure) content and get it indexed for when it was needed.
Not that content: Perhaps predictably (see the trend here?), the average quality of content returned in search results dropped dramatically, and so we see the first machine learning ranking factors in the form of attempts to assess “quality” (alongside relevance and website authority).
Machine learning: Arguably everything from that point onwards has been an adventure into machine learning and artificial intelligence, and has also taken place during the careers of most marketers working in SEO today. So, while I love writing about that stuff, I’ll return to it another day.
History of SEO: crucial moments
Although I’m sure that there are interesting stories to be told about the pre-Google era of SEO, I’m not the right person to tell them (if you have a great resource, please do drop it in the comments), so let’s start early in the Google journey:
Google’s foundational technology
Even if you’re coming into SEO in 2020, in a world of machine-learned ranking factors, I’d still recommend going back and reading the surprisingly accessible early academic work:
The Anatomy of a Large-Scale Hypertextual Web Search Engine by Sergey Brin and Lawrence Page [PDF]
Link Analysis in Web Information Retrieval [PDF]
Reasonable surfer (and the updated version)
If you weren’t using the web back then, it’s probably hard to imagine what a step-change improvement Google’s PageRank-based algorithm was over the “state-of-the-art” at the time (and it’s hard to remember, even for those of us that were):
Google’s IPO
In more “things that are hard to remember clearly,” at the time of Google’s IPO in 2004, very few people expected Google to become one of the most profitable companies ever. In the early days, the founders had talked of their disdain for advertising, and had experimented with keyword-based adverts somewhat reluctantly. Because of this attitude, even within the company, most employees didn’t know what a rocket ship they were building.
From this era, I’d recommend reading the founders’ IPO letter (see this great article from Danny Sullivan — who’s ironically now @SearchLiaison at Google):
“Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating.”
“Because we do not charge merchants for inclusion in Froogle [now Google shopping], our users can browse product categories or conduct product searches with confidence that the results we provide are relevant and unbiased.” — S1 Filing
In addition, In the Plex is an enjoyable book published in 2011 by Steven Levy. It tells the story of what then-CEO Eric Schmidt called (around the time of the IPO) “the hiding strategy”:
“Those who knew the secret … were instructed quite firmly to keep their mouths shut about it.”
“What Google was hiding was how it had cracked the code to making money on the Internet.”
Luckily for Google, for users, and even for organic search marketers, it turned out that this wasn’t actually incompatible with their pure ideals from the pre-IPO days because, as Levy recounts, “in repeated tests, searchers were happier with pages with ads than those where they were suppressed”. Phew!
Index everything
In April 2003, Google acquired a company called Applied Semantics and set in motion a series of events that I think might be the most underrated part of Google’s history.
Applied Semantics technology was integrated with their own contextual ad technology to form what became AdSense. Although the revenue from AdSense has always been dwarfed by AdWords (now just “Google Ads”), its importance in the history of SEO is hard to understate.
By democratizing the monetization of content on the web and enabling everyone to get paid for producing obscure content, it funded the creation of absurd amounts of that content.
Most of this content would have never been seen if it weren’t for the existence of a search engine that excelled in its ability to deliver great results for long tail searches, even if those searches were incredibly infrequent or had never been seen before.
In this way, Google’s search engine (and search advertising business) formed a powerful flywheel with its AdSense business, enabling the funding of the content creation it needed to differentiate itself with the largest and most complete index of the web.
As with so many chapters in the story, though, it also created a monster in the form of low quality or even auto-generated content that would ultimately lead to PR crises and massive efforts to fix.
If you’re interested in the index everything era, you can read more of my thoughts about it in slide 47+ of From the Horse’s Mouth.
Web spam
The first forms of spam on the internet were various forms of messages, which hit the mainstream as email spam. During the early 2000s, Google started talking about the problem they’d ultimately term “web spam” (the earliest mention I’ve seen of link spam is in an Amit Singhal presentation from 2005 entitled Challenges in running a Commercial Web Search Engine [PDF]).
I suspect that even people who start in SEO today might’ve heard of Matt Cutts — the first head of webspam — as he’s still referenced often despite not having worked at Google since 2014. I enjoyed this 2015 presentation that talks about his career trajectory at Google.
Search quality era
Over time, as a result of the opposing nature of webmasters trying to make money versus Google (and others) trying to make the best search engine they could, pure web spam wasn’t the only quality problem Google was facing. The cat-and-mouse game of spotting manipulation — particularly of on-page content, external links, and anchor text) — would be a defining feature of the next decade-plus of search.
It was after Singhal’s presentation above that Eric Schmidt (then Google’s CEO) said, “Brands are the solution, not the problem… Brands are how you sort out the cesspool”.
Those who are newer to the industry will likely have experienced some Google updates (such as recent “core updates”) first-hand, and have quite likely heard of a few specific older updates. But “Vince”, which came after “Florida” (the first major confirmed Google update), and rolled out shortly after Schmidt’s pronouncements on brand, was a particularly notable one for favoring big brands. If you haven’t followed all the history, you can read up on key past updates here:
A real reputational threat
As I mentioned above in the AdSense section, there were strong incentives for webmasters to create tons of content, thus targeting the blossoming long tail of search. If you had a strong enough domain, Google would crawl and index immense numbers of pages, and for obscure enough queries, any matching content would potentially rank. This triggered the rapid growth of so-called “content farms” that mined keyword data from anywhere they could, and spun out low-quality keyword-matching content. At the same time, websites were succeeding by allowing large databases of content to get indexed even as very thin pages, or by allowing huge numbers of pages of user-generated content to get indexed.
This was a real reputational threat to Google, and broke out of the search and SEO echo chamber. It had become such a bugbear of communities like Hacker News and StackOverflow, that Matt Cutts submitted a personal update to the Hacker News community when Google launched an update targeted at fixing one specific symptom — namely that scraper websites were routinely outranking the original content they were copying.
Shortly afterwards, Google rolled out the update initially named the “farmer update”. After it launched, we learned it had been made possible because of a breakthrough by an engineer called Panda, hence it was called the “big Panda” update internally at Google, and since then the SEO community has mainly called it the Panda update.
Although we speculated that the internal working of the update was one of the first real uses of machine learning in the core of the organic search algorithm at Google, the features it was modelling were more easily understood as human-centric quality factors, and so we began recommending SEO-targeted changes to our clients based on the results of human quality surveys.
Everything goes mobile-first
I gave a presentation at SearchLove London in 2014 where I talked about the unbelievable growth and scale of mobile and about how late we were to realizing quite how seriously Google was taking this. I highlighted the surprise many felt hearing that Google was designing mobile first:
“Towards the end of last year we launched some pretty big design improvements for search on mobile and tablet devices. Today we’ve carried over several of those changes to the desktop experience.” — Jon Wiley (lead engineer for Google Search speaking on Google+, which means there’s nowhere to link to as a perfect reference for the quote but it’s referenced here as well as in my presentation).
This surprise came despite the fact that, by the time I gave this presentation in 2014, we knew that mobile search had begun to cannibalize desktop search (and we’d seen the first drop in desktop search volumes):
And it came even though people were starting to say that the first year of Google making the majority of its revenue on mobile was less than two years away:
Writing this in 2020, it feels as though we have fully internalized how big a deal mobile is, but it’s interesting to remember that it took a while for it to sink in.
Machine learning becomes the norm
Since the Panda update, machine learning was mentioned more and more in the official communications from Google about algorithm updates, and it was implicated in even more. We know that, historically, there had been resistance from some quarters (including from Singhal) towards using machine learning in the core algorithm due to the way it prevented human engineers from explaining the results. In 2015, Sundar Pichai took over as CEO, moved Singhal aside (though this may have been for other reasons), and installed AI / ML fans in key roles.
It goes full-circle
Back before the Florida update (in fact, until Google rolled out an update they called Fritz in the summer of 2003), search results used to shuffle regularly in a process nicknamed the Google Dance:
Most things have been moving more real-time ever since, but recent “Core Updates” appear to have brought back this kind of dynamic where changes happen on Google’s schedule rather than based on the timelines of website changes. I’ve speculated that this is because “core updates” are really Google retraining a massive deep learning model that is very customized to the shape of the web at the time. Whatever the cause, our experience working with a wide range of clients is consistent with the official line from Google that:
Broad core updates tend to happen every few months. Content that was impacted by one might not recover — assuming improvements have been made — until the next broad core update is released.
Tying recent trends and discoveries like this back to ancient history like the Google Dance is just one of the ways in which knowing the history of SEO is “useful”.
If you’re interested in all this
I hope this journey through my memories has been interesting. For those of you who also worked in the industry through these years, what did I miss? What are the really big milestones you remember? Drop them in the comments below or hit me up on Twitter.
If you liked this walk down memory lane, you might also like my presentation From the Horse’s Mouth, where I attempt to use official and unofficial Google statements to unpack what is really going on behind the scenes, and try to give some tips for doing the same yourself:

SearchLove San Diego 2018 | Will Critchlow | From the Horse’s Mouth: What We Can Learn from Google’s Own Words from Distilled
To help us serve you better, please consider taking the 2020 Moz Blog Reader Survey, which asks about who you are, what challenges you face, and what you'd like to see more of on the Moz Blog.
Take the Survey
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
via Blogger https://ift.tt/31WNCqt #blogger #bloggingtips #bloggerlife #bloggersgetsocial #ontheblog #writersofinstagram #writingprompt #instapoetry #writerscommunity #writersofig #writersblock #writerlife #writtenword #instawriters #spilledink #wordgasm #creativewriting #poetsofinstagram #blackoutpoetry #poetsofig
0 notes
evempierson · 5 years ago
Text
Fifteen Years Is a Long Time in SEO
Posted by willcritchlow
I’ve been in an introspective mood lately.
Earlier this year (15 years after starting Distilled in 2005), we spun out a new company called SearchPilot to focus on our SEO A/B testing and meta-CMS technology (previously known as Distilled ODN), and merged the consulting and conferences part of the business with Brainlabs.
I’m now CEO of SearchPilot (which is primarily owned by the shareholders of Distilled), and am also SEO Partner at Brainlabs, so… I’m sorry everyone, but I’m very much staying in the SEO industry.
As such, it feels a bit like the end of a chapter for me rather than the end of the book, but it has still had me looking back over what’s changed and what hasn’t over the last 15 years I’ve been in the industry.
I can’t lay claim to being one of the first generation of SEO experts, but having been building websites since around 1996 and having seen the growth of Google from the beginning, I feel like maybe I’m second generation, and maybe I have some interesting stories to share with those who are newer to the game.
I’ve racked my brain to try and remember what felt significant at the time, and also looked back over the big trends through my time in the industry, to put together what I think makes an interesting reading list that most people working on the web today would do well to know about.
The big eras of search
I joked at the beginning of a presentation I gave in 2018 that the big eras of search oscillated between directives from the search engines and search engines rapidly backing away from those directives when they saw what webmasters actually did:
While that slide was a bit tongue-in-cheek, I do think that there’s something to thinking about the eras like:
Build websites: Do you have a website? Would you like a website? It’s hard to believe now, but in the early days of the web, a lot of folks needed to be persuaded to get their business online at all.
Keywords: Basic information retrieval became adversarial information retrieval as webmasters realized that they could game the system with keyword stuffing, hidden text, and more.
Links: As the scale of the web grew beyond user-curated directories, link-based algorithms for search began to dominate.
Not those links: Link-based algorithms began to give way to adversarial link-based algorithms as webmasters swapped, bought, and manipulated links across the web graph.
Content for the long tail: Alongside this era, the length of the long tail began to be better-understood by both webmasters and by Google themselves — and it was in the interest of both parties to create massive amounts of (often obscure) content and get it indexed for when it was needed.
Not that content: Perhaps predictably (see the trend here?), the average quality of content returned in search results dropped dramatically, and so we see the first machine learning ranking factors in the form of attempts to assess “quality” (alongside relevance and website authority).
Machine learning: Arguably everything from that point onwards has been an adventure into machine learning and artificial intelligence, and has also taken place during the careers of most marketers working in SEO today. So, while I love writing about that stuff, I’ll return to it another day.
History of SEO: crucial moments
Although I’m sure that there are interesting stories to be told about the pre-Google era of SEO, I’m not the right person to tell them (if you have a great resource, please do drop it in the comments), so let’s start early in the Google journey:
Google’s foundational technology
Even if you’re coming into SEO in 2020, in a world of machine-learned ranking factors, I’d still recommend going back and reading the surprisingly accessible early academic work:
The Anatomy of a Large-Scale Hypertextual Web Search Engine by Sergey Brin and Lawrence Page [PDF]
Link Analysis in Web Information Retrieval [PDF]
Reasonable surfer (and the updated version)
If you weren’t using the web back then, it’s probably hard to imagine what a step-change improvement Google’s PageRank-based algorithm was over the “state-of-the-art” at the time (and it’s hard to remember, even for those of us that were):
Google’s IPO
In more “things that are hard to remember clearly,” at the time of Google’s IPO in 2004, very few people expected Google to become one of the most profitable companies ever. In the early days, the founders had talked of their disdain for advertising, and had experimented with keyword-based adverts somewhat reluctantly. Because of this attitude, even within the company, most employees didn’t know what a rocket ship they were building.
From this era, I’d recommend reading the founders’ IPO letter (see this great article from Danny Sullivan — who’s ironically now @SearchLiaison at Google):
“Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating.”
“Because we do not charge merchants for inclusion in Froogle [now Google shopping], our users can browse product categories or conduct product searches with confidence that the results we provide are relevant and unbiased.” — S1 Filing
In addition, In the Plex is an enjoyable book published in 2011 by Steven Levy. It tells the story of what then-CEO Eric Schmidt called (around the time of the IPO) “the hiding strategy”:
“Those who knew the secret … were instructed quite firmly to keep their mouths shut about it.”
“What Google was hiding was how it had cracked the code to making money on the Internet.”
Luckily for Google, for users, and even for organic search marketers, it turned out that this wasn’t actually incompatible with their pure ideals from the pre-IPO days because, as Levy recounts, “in repeated tests, searchers were happier with pages with ads than those where they were suppressed”. Phew!
Index everything
In April 2003, Google acquired a company called Applied Semantics and set in motion a series of events that I think might be the most underrated part of Google’s history.
Applied Semantics technology was integrated with their own contextual ad technology to form what became AdSense. Although the revenue from AdSense has always been dwarfed by AdWords (now just “Google Ads”), its importance in the history of SEO is hard to understate.
By democratizing the monetization of content on the web and enabling everyone to get paid for producing obscure content, it funded the creation of absurd amounts of that content.
Most of this content would have never been seen if it weren’t for the existence of a search engine that excelled in its ability to deliver great results for long tail searches, even if those searches were incredibly infrequent or had never been seen before.
In this way, Google’s search engine (and search advertising business) formed a powerful flywheel with its AdSense business, enabling the funding of the content creation it needed to differentiate itself with the largest and most complete index of the web.
As with so many chapters in the story, though, it also created a monster in the form of low quality or even auto-generated content that would ultimately lead to PR crises and massive efforts to fix.
If you’re interested in the index everything era, you can read more of my thoughts about it in slide 47+ of From the Horse’s Mouth.
Web spam
The first forms of spam on the internet were various forms of messages, which hit the mainstream as email spam. During the early 2000s, Google started talking about the problem they’d ultimately term “web spam” (the earliest mention I’ve seen of link spam is in an Amit Singhal presentation from 2005 entitled Challenges in running a Commercial Web Search Engine [PDF]).
I suspect that even people who start in SEO today might’ve heard of Matt Cutts — the first head of webspam — as he’s still referenced often despite not having worked at Google since 2014. I enjoyed this 2015 presentation that talks about his career trajectory at Google.
Search quality era
Over time, as a result of the opposing nature of webmasters trying to make money versus Google (and others) trying to make the best search engine they could, pure web spam wasn’t the only quality problem Google was facing. The cat-and-mouse game of spotting manipulation — particularly of on-page content, external links, and anchor text) — would be a defining feature of the next decade-plus of search.
It was after Singhal’s presentation above that Eric Schmidt (then Google’s CEO) said, “Brands are the solution, not the problem… Brands are how you sort out the cesspool”.
Those who are newer to the industry will likely have experienced some Google updates (such as recent “core updates”) first-hand, and have quite likely heard of a few specific older updates. But “Vince”, which came after “Florida” (the first major confirmed Google update), and rolled out shortly after Schmidt’s pronouncements on brand, was a particularly notable one for favoring big brands. If you haven’t followed all the history, you can read up on key past updates here:
A real reputational threat
As I mentioned above in the AdSense section, there were strong incentives for webmasters to create tons of content, thus targeting the blossoming long tail of search. If you had a strong enough domain, Google would crawl and index immense numbers of pages, and for obscure enough queries, any matching content would potentially rank. This triggered the rapid growth of so-called “content farms” that mined keyword data from anywhere they could, and spun out low-quality keyword-matching content. At the same time, websites were succeeding by allowing large databases of content to get indexed even as very thin pages, or by allowing huge numbers of pages of user-generated content to get indexed.
This was a real reputational threat to Google, and broke out of the search and SEO echo chamber. It had become such a bugbear of communities like Hacker News and StackOverflow, that Matt Cutts submitted a personal update to the Hacker News community when Google launched an update targeted at fixing one specific symptom — namely that scraper websites were routinely outranking the original content they were copying.
Shortly afterwards, Google rolled out the update initially named the “farmer update”. After it launched, we learned it had been made possible because of a breakthrough by an engineer called Panda, hence it was called the “big Panda” update internally at Google, and since then the SEO community has mainly called it the Panda update.
Although we speculated that the internal working of the update was one of the first real uses of machine learning in the core of the organic search algorithm at Google, the features it was modelling were more easily understood as human-centric quality factors, and so we began recommending SEO-targeted changes to our clients based on the results of human quality surveys.
Everything goes mobile-first
I gave a presentation at SearchLove London in 2014 where I talked about the unbelievable growth and scale of mobile and about how late we were to realizing quite how seriously Google was taking this. I highlighted the surprise many felt hearing that Google was designing mobile first:
“Towards the end of last year we launched some pretty big design improvements for search on mobile and tablet devices. Today we’ve carried over several of those changes to the desktop experience.” — Jon Wiley (lead engineer for Google Search speaking on Google+, which means there’s nowhere to link to as a perfect reference for the quote but it’s referenced here as well as in my presentation).
This surprise came despite the fact that, by the time I gave this presentation in 2014, we knew that mobile search had begun to cannibalize desktop search (and we’d seen the first drop in desktop search volumes):
And it came even though people were starting to say that the first year of Google making the majority of its revenue on mobile was less than two years away:
Writing this in 2020, it feels as though we have fully internalized how big a deal mobile is, but it’s interesting to remember that it took a while for it to sink in.
Machine learning becomes the norm
Since the Panda update, machine learning was mentioned more and more in the official communications from Google about algorithm updates, and it was implicated in even more. We know that, historically, there had been resistance from some quarters (including from Singhal) towards using machine learning in the core algorithm due to the way it prevented human engineers from explaining the results. In 2015, Sundar Pichai took over as CEO, moved Singhal aside (though this may have been for other reasons), and installed AI / ML fans in key roles.
It goes full-circle
Back before the Florida update (in fact, until Google rolled out an update they called Fritz in the summer of 2003), search results used to shuffle regularly in a process nicknamed the Google Dance:
Most things have been moving more real-time ever since, but recent “Core Updates” appear to have brought back this kind of dynamic where changes happen on Google’s schedule rather than based on the timelines of website changes. I’ve speculated that this is because “core updates” are really Google retraining a massive deep learning model that is very customized to the shape of the web at the time. Whatever the cause, our experience working with a wide range of clients is consistent with the official line from Google that:
Broad core updates tend to happen every few months. Content that was impacted by one might not recover — assuming improvements have been made — until the next broad core update is released.
Tying recent trends and discoveries like this back to ancient history like the Google Dance is just one of the ways in which knowing the history of SEO is “useful”.
If you’re interested in all this
I hope this journey through my memories has been interesting. For those of you who also worked in the industry through these years, what did I miss? What are the really big milestones you remember? Drop them in the comments below or hit me up on Twitter.
If you liked this walk down memory lane, you might also like my presentation From the Horse’s Mouth, where I attempt to use official and unofficial Google statements to unpack what is really going on behind the scenes, and try to give some tips for doing the same yourself:

SearchLove San Diego 2018 | Will Critchlow | From the Horse’s Mouth: What We Can Learn from Google’s Own Words from Distilled
To help us serve you better, please consider taking the 2020 Moz Blog Reader Survey, which asks about who you are, what challenges you face, and what you'd like to see more of on the Moz Blog.
Take the Survey
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
thanhtuandoan89 · 5 years ago
Text
Fifteen Years Is a Long Time in SEO
Posted by willcritchlow
I’ve been in an introspective mood lately.
Earlier this year (15 years after starting Distilled in 2005), we spun out a new company called SearchPilot to focus on our SEO A/B testing and meta-CMS technology (previously known as Distilled ODN), and merged the consulting and conferences part of the business with Brainlabs.
I’m now CEO of SearchPilot (which is primarily owned by the shareholders of Distilled), and am also SEO Partner at Brainlabs, so… I’m sorry everyone, but I’m very much staying in the SEO industry.
As such, it feels a bit like the end of a chapter for me rather than the end of the book, but it has still had me looking back over what’s changed and what hasn’t over the last 15 years I’ve been in the industry.
I can’t lay claim to being one of the first generation of SEO experts, but having been building websites since around 1996 and having seen the growth of Google from the beginning, I feel like maybe I’m second generation, and maybe I have some interesting stories to share with those who are newer to the game.
I’ve racked my brain to try and remember what felt significant at the time, and also looked back over the big trends through my time in the industry, to put together what I think makes an interesting reading list that most people working on the web today would do well to know about.
The big eras of search
I joked at the beginning of a presentation I gave in 2018 that the big eras of search oscillated between directives from the search engines and search engines rapidly backing away from those directives when they saw what webmasters actually did:
While that slide was a bit tongue-in-cheek, I do think that there’s something to thinking about the eras like:
Build websites: Do you have a website? Would you like a website? It’s hard to believe now, but in the early days of the web, a lot of folks needed to be persuaded to get their business online at all.
Keywords: Basic information retrieval became adversarial information retrieval as webmasters realized that they could game the system with keyword stuffing, hidden text, and more.
Links: As the scale of the web grew beyond user-curated directories, link-based algorithms for search began to dominate.
Not those links: Link-based algorithms began to give way to adversarial link-based algorithms as webmasters swapped, bought, and manipulated links across the web graph.
Content for the long tail: Alongside this era, the length of the long tail began to be better-understood by both webmasters and by Google themselves — and it was in the interest of both parties to create massive amounts of (often obscure) content and get it indexed for when it was needed.
Not that content: Perhaps predictably (see the trend here?), the average quality of content returned in search results dropped dramatically, and so we see the first machine learning ranking factors in the form of attempts to assess “quality” (alongside relevance and website authority).
Machine learning: Arguably everything from that point onwards has been an adventure into machine learning and artificial intelligence, and has also taken place during the careers of most marketers working in SEO today. So, while I love writing about that stuff, I’ll return to it another day.
History of SEO: crucial moments
Although I’m sure that there are interesting stories to be told about the pre-Google era of SEO, I’m not the right person to tell them (if you have a great resource, please do drop it in the comments), so let’s start early in the Google journey:
Google’s foundational technology
Even if you’re coming into SEO in 2020, in a world of machine-learned ranking factors, I’d still recommend going back and reading the surprisingly accessible early academic work:
The Anatomy of a Large-Scale Hypertextual Web Search Engine by Sergey Brin and Lawrence Page [PDF]
Link Analysis in Web Information Retrieval [PDF]
Reasonable surfer (and the updated version)
If you weren’t using the web back then, it’s probably hard to imagine what a step-change improvement Google’s PageRank-based algorithm was over the “state-of-the-art” at the time (and it’s hard to remember, even for those of us that were):
Google’s IPO
In more “things that are hard to remember clearly,” at the time of Google’s IPO in 2004, very few people expected Google to become one of the most profitable companies ever. In the early days, the founders had talked of their disdain for advertising, and had experimented with keyword-based adverts somewhat reluctantly. Because of this attitude, even within the company, most employees didn’t know what a rocket ship they were building.
From this era, I’d recommend reading the founders’ IPO letter (see this great article from Danny Sullivan — who’s ironically now @SearchLiaison at Google):
“Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating.”
“Because we do not charge merchants for inclusion in Froogle [now Google shopping], our users can browse product categories or conduct product searches with confidence that the results we provide are relevant and unbiased.” — S1 Filing
In addition, In the Plex is an enjoyable book published in 2011 by Steven Levy. It tells the story of what then-CEO Eric Schmidt called (around the time of the IPO) “the hiding strategy”:
“Those who knew the secret … were instructed quite firmly to keep their mouths shut about it.”
“What Google was hiding was how it had cracked the code to making money on the Internet.”
Luckily for Google, for users, and even for organic search marketers, it turned out that this wasn’t actually incompatible with their pure ideals from the pre-IPO days because, as Levy recounts, “in repeated tests, searchers were happier with pages with ads than those where they were suppressed”. Phew!
Index everything
In April 2003, Google acquired a company called Applied Semantics and set in motion a series of events that I think might be the most underrated part of Google’s history.
Applied Semantics technology was integrated with their own contextual ad technology to form what became AdSense. Although the revenue from AdSense has always been dwarfed by AdWords (now just “Google Ads”), its importance in the history of SEO is hard to understate.
By democratizing the monetization of content on the web and enabling everyone to get paid for producing obscure content, it funded the creation of absurd amounts of that content.
Most of this content would have never been seen if it weren’t for the existence of a search engine that excelled in its ability to deliver great results for long tail searches, even if those searches were incredibly infrequent or had never been seen before.
In this way, Google’s search engine (and search advertising business) formed a powerful flywheel with its AdSense business, enabling the funding of the content creation it needed to differentiate itself with the largest and most complete index of the web.
As with so many chapters in the story, though, it also created a monster in the form of low quality or even auto-generated content that would ultimately lead to PR crises and massive efforts to fix.
If you’re interested in the index everything era, you can read more of my thoughts about it in slide 47+ of From the Horse’s Mouth.
Web spam
The first forms of spam on the internet were unsolicited messages of various kinds, which hit the mainstream as email spam. During the early 2000s, Google started talking about the problem they’d ultimately term “web spam” (the earliest mention I’ve seen of link spam is in an Amit Singhal presentation from 2005 entitled Challenges in running a Commercial Web Search Engine [PDF]).
I suspect that even people who start in SEO today might’ve heard of Matt Cutts — the first head of webspam — as he’s still referenced often despite not having worked at Google since 2014. I enjoyed this 2015 presentation that talks about his career trajectory at Google.
Search quality era
Over time, as a result of the opposing nature of webmasters trying to make money versus Google (and others) trying to make the best search engine they could, pure web spam wasn’t the only quality problem Google was facing. The cat-and-mouse game of spotting manipulation (particularly of on-page content, external links, and anchor text) would be a defining feature of the next decade-plus of search.
It was after Singhal’s presentation above that Eric Schmidt (then Google’s CEO) said, “Brands are the solution, not the problem… Brands are how you sort out the cesspool”.
Those who are newer to the industry will likely have experienced some Google updates (such as recent “core updates”) first-hand, and have quite likely heard of a few specific older updates. But “Vince”, which came after “Florida” (the first major confirmed Google update) and rolled out shortly after Schmidt’s pronouncements on brand, was particularly notable for favoring big brands. If you haven’t followed all the history, you can read up on key past updates here.
A real reputational threat
As I mentioned above in the AdSense section, there were strong incentives for webmasters to create tons of content, thus targeting the blossoming long tail of search. If you had a strong enough domain, Google would crawl and index immense numbers of pages, and for obscure enough queries, any matching content would potentially rank. This triggered the rapid growth of so-called “content farms” that mined keyword data from anywhere they could, and spun out low-quality keyword-matching content. At the same time, websites were succeeding by allowing large databases of content to get indexed even as very thin pages, or by allowing huge numbers of pages of user-generated content to get indexed.
This was a real reputational threat to Google, and it broke out of the search and SEO echo chamber. It had become such a bugbear of communities like Hacker News and StackOverflow that Matt Cutts submitted a personal update to the Hacker News community when Google launched an update targeted at fixing one specific symptom: scraper websites routinely outranking the original content they were copying.
Shortly afterwards, Google rolled out the update initially named the “farmer update”. After it launched, we learned it had been made possible because of a breakthrough by an engineer called Panda, hence it was called the “big Panda” update internally at Google, and since then the SEO community has mainly called it the Panda update.
Although we speculated that the internal workings of the update represented one of the first real uses of machine learning in the core of Google’s organic search algorithm, the features it was modelling were more easily understood as human-centric quality factors, and so we began recommending SEO-targeted changes to our clients based on the results of human quality surveys.
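To make that concrete, here is the kind of exercise I mean, reduced to a toy (the feature names, numbers, and model below are entirely my own illustration, not Panda and not real survey data): collect human ratings for a set of pages, line them up against crude page-level signals, and see which signals move with the ratings.

```python
import numpy as np

# Made-up example: rows are pages, columns are crude page-level signals
# (ad density, word count, reading ease, has an author byline), and
# `ratings` holds the average 1-5 score human reviewers gave each page.
features = np.array([
    [0.60,  300, 45.0, 0],
    [0.10, 1800, 62.0, 1],
    [0.45,  500, 50.0, 0],
    [0.05, 2400, 70.0, 1],
    [0.30,  900, 55.0, 1],
    [0.20, 1200, 58.0, 0],
])
ratings = np.array([1.8, 4.3, 2.5, 4.7, 3.6, 3.2])

# Standardize the signals, then fit an ordinary least-squares model.
X = (features - features.mean(axis=0)) / features.std(axis=0)
X = np.column_stack([np.ones(len(X)), X])  # intercept column
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)

names = ["intercept", "ad_density", "word_count", "reading_ease", "has_byline"]
for name, coef in zip(names, coefs):
    print(f"{name:>12}: {coef:+.2f}")
```

The real surveys (and whatever Google built internally) were far richer than a six-row regression, but the shape is the same: human judgments as the target, page characteristics as the inputs, and the output pointing you at which characteristics to prioritize.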
Everything goes mobile-first
I gave a presentation at SearchLove London in 2014 where I talked about the unbelievable growth and scale of mobile, and about how late we were to realize quite how seriously Google was taking it. I highlighted the surprise many felt hearing that Google was designing mobile first:
“Towards the end of last year we launched some pretty big design improvements for search on mobile and tablet devices. Today we’ve carried over several of those changes to the desktop experience.” — Jon Wiley (lead engineer for Google Search speaking on Google+, which means there’s nowhere to link to as a perfect reference for the quote but it’s referenced here as well as in my presentation).
This surprise came despite the fact that, by the time I gave this presentation in 2014, we knew that mobile search had begun to cannibalize desktop search (and we’d seen the first drop in desktop search volumes).
And it came even though people were starting to say that the first year of Google making the majority of its revenue on mobile was less than two years away.
Writing this in 2020, it feels as though we have fully internalized how big a deal mobile is, but it’s interesting to remember that it took a while for it to sink in.
Machine learning becomes the norm
Since the Panda update, machine learning has been mentioned more and more in Google’s official communications about algorithm updates, and it has been implicated in even more. We know that, historically, there had been resistance from some quarters (including from Singhal) towards using machine learning in the core algorithm due to the way it prevented human engineers from explaining the results. In 2015, Sundar Pichai took over as CEO, moved Singhal aside (though this may have been for other reasons), and installed AI / ML fans in key roles.
It goes full-circle
Back before the Florida update (in fact, until Google rolled out an update they called Fritz in the summer of 2003), search results used to shuffle regularly in a process nicknamed the Google Dance.
Most things have been moving more real-time ever since, but recent “Core Updates” appear to have brought back this kind of dynamic where changes happen on Google’s schedule rather than based on the timelines of website changes. I’ve speculated that this is because “core updates” are really Google retraining a massive deep learning model that is very customized to the shape of the web at the time. Whatever the cause, our experience working with a wide range of clients is consistent with the official line from Google that:
Broad core updates tend to happen every few months. Content that was impacted by one might not recover — assuming improvements have been made — until the next broad core update is released.
Tying recent trends and discoveries like this back to ancient history like the Google Dance is just one of the ways in which knowing the history of SEO is “useful”.
If you’re interested in all this
I hope this journey through my memories has been interesting. For those of you who also worked in the industry through these years, what did I miss? What are the really big milestones you remember? Drop them in the comments below or hit me up on Twitter.
If you liked this walk down memory lane, you might also like my presentation From the Horse’s Mouth, where I attempt to use official and unofficial Google statements to unpack what is really going on behind the scenes, and try to give some tips for doing the same yourself:

SearchLove San Diego 2018 | Will Critchlow | From the Horse’s Mouth: What We Can Learn from Google’s Own Words from Distilled
To help us serve you better, please consider taking the 2020 Moz Blog Reader Survey, which asks about who you are, what challenges you face, and what you'd like to see more of on the Moz Blog.
Take the Survey
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
epackingvietnam · 5 years ago
Text
Fifteen Years Is a Long Time in SEO
Posted by willcritchlow
I’ve been in an introspective mood lately.
Earlier this year (15 years after starting Distilled in 2005), we spun out a new company called SearchPilot to focus on our SEO A/B testing and meta-CMS technology (previously known as Distilled ODN), and merged the consulting and conferences part of the business with Brainlabs.
I’m now CEO of SearchPilot (which is primarily owned by the shareholders of Distilled), and am also SEO Partner at Brainlabs, so… I’m sorry everyone, but I’m very much staying in the SEO industry.
As such, it feels a bit like the end of a chapter for me rather than the end of the book, but it has still had me looking back over what’s changed and what hasn’t over the last 15 years I’ve been in the industry.
I can’t lay claim to being one of the first generation of SEO experts, but having been building websites since around 1996 and having seen the growth of Google from the beginning, I feel like maybe I’m second generation, and maybe I have some interesting stories to share with those who are newer to the game.
I’ve racked my brain to try and remember what felt significant at the time, and also looked back over the big trends through my time in the industry, to put together what I think makes an interesting reading list that most people working on the web today would do well to know about.
The big eras of search
I joked at the beginning of a presentation I gave in 2018 that the big eras of search oscillated between directives from the search engines and search engines rapidly backing away from those directives when they saw what webmasters actually did:
While that slide was a bit tongue-in-cheek, I do think that there’s something to thinking about the eras like:
Build websites: Do you have a website? Would you like a website? It’s hard to believe now, but in the early days of the web, a lot of folks needed to be persuaded to get their business online at all.
Keywords: Basic information retrieval became adversarial information retrieval as webmasters realized that they could game the system with keyword stuffing, hidden text, and more.
Links: As the scale of the web grew beyond user-curated directories, link-based algorithms for search began to dominate.
Not those links: Link-based algorithms began to give way to adversarial link-based algorithms as webmasters swapped, bought, and manipulated links across the web graph.
Content for the long tail: Alongside this era, the length of the long tail began to be better-understood by both webmasters and by Google themselves — and it was in the interest of both parties to create massive amounts of (often obscure) content and get it indexed for when it was needed.
Not that content: Perhaps predictably (see the trend here?), the average quality of content returned in search results dropped dramatically, and so we see the first machine learning ranking factors in the form of attempts to assess “quality” (alongside relevance and website authority).
Machine learning: Arguably everything from that point onwards has been an adventure into machine learning and artificial intelligence, and has also taken place during the careers of most marketers working in SEO today. So, while I love writing about that stuff, I’ll return to it another day.
History of SEO: crucial moments
Although I’m sure that there are interesting stories to be told about the pre-Google era of SEO, I’m not the right person to tell them (if you have a great resource, please do drop it in the comments), so let’s start early in the Google journey:
Google’s foundational technology
Even if you’re coming into SEO in 2020, in a world of machine-learned ranking factors, I’d still recommend going back and reading the surprisingly accessible early academic work:
The Anatomy of a Large-Scale Hypertextual Web Search Engine by Sergey Brin and Lawrence Page [PDF]
Link Analysis in Web Information Retrieval [PDF]
Reasonable surfer (and the updated version)
If you weren’t using the web back then, it’s probably hard to imagine what a step-change improvement Google’s PageRank-based algorithm was over the “state-of-the-art” at the time (and it’s hard to remember, even for those of us that were).
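To make the idea in those papers concrete for anyone who has only ever known the machine-learning era, here is a minimal sketch of PageRank as a simple power iteration over a toy link graph. The graph, damping factor, and iteration count are illustrative assumptions for the demo, not anything taken from Google’s production systems.

```python
# Minimal PageRank sketch: iteratively redistribute rank along links.
# Toy graph and parameters are purely illustrative.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    ranks = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_ranks = {}
        for page in pages:
            # Rank flowing in from every page that links here, split evenly
            # across that page's outbound links.
            incoming = sum(
                ranks[source] / len(targets)
                for source, targets in links.items()
                if page in targets
            )
            new_ranks[page] = (1 - damping) / n + damping * incoming
        ranks = new_ranks
    return ranks

toy_graph = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
print(pagerank(toy_graph))
```

Even on a three-page graph the intuition shows through: the page that attracts links from other well-linked pages wins, which is what made link analysis such a leap over keyword matching alone.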
Google’s IPO
In more “things that are hard to remember clearly,” at the time of Google’s IPO in 2004, very few people expected Google to become one of the most profitable companies ever. In the early days, the founders had talked of their disdain for advertising, and had experimented with keyword-based adverts somewhat reluctantly. Because of this attitude, even within the company, most employees didn’t know what a rocket ship they were building.
From this era, I’d recommend reading the founders’ IPO letter (see this great article from Danny Sullivan — who’s ironically now @SearchLiaison at Google):
“Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating.”
“Because we do not charge merchants for inclusion in Froogle [now Google shopping], our users can browse product categories or conduct product searches with confidence that the results we provide are relevant and unbiased.” — S1 Filing
In addition, In the Plex is an enjoyable book published in 2011 by Steven Levy. It tells the story of what then-CEO Eric Schmidt called (around the time of the IPO) “the hiding strategy”:
“Those who knew the secret … were instructed quite firmly to keep their mouths shut about it.”
“What Google was hiding was how it had cracked the code to making money on the Internet.”
Luckily for Google, for users, and even for organic search marketers, it turned out that this wasn’t actually incompatible with their pure ideals from the pre-IPO days because, as Levy recounts, “in repeated tests, searchers were happier with pages with ads than those where they were suppressed”. Phew!
Index everything
In April 2003, Google acquired a company called Applied Semantics and set in motion a series of events that I think might be the most underrated part of Google’s history.
Applied Semantics’ technology was integrated with Google’s own contextual ad technology to form what became AdSense. Although the revenue from AdSense has always been dwarfed by that of AdWords (now just “Google Ads”), its importance in the history of SEO is hard to overstate.
By democratizing the monetization of content on the web and enabling everyone to get paid for producing obscure content, it funded the creation of absurd amounts of that content.
Most of this content would have never been seen if it weren’t for the existence of a search engine that excelled in its ability to deliver great results for long tail searches, even if those searches were incredibly infrequent or had never been seen before.
In this way, Google’s search engine (and search advertising business) formed a powerful flywheel with its AdSense business, enabling the funding of the content creation it needed to differentiate itself with the largest and most complete index of the web.
As with so many chapters in the story, though, it also created a monster in the form of low quality or even auto-generated content that would ultimately lead to PR crises and massive efforts to fix.
If you’re interested in the index everything era, you can read more of my thoughts about it in slide 47+ of From the Horse’s Mouth.
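To put a rough number on why that long tail was worth chasing, here is a toy calculation that assumes query frequencies follow a Zipf-like distribution. The vocabulary size and exponent are invented for illustration, not drawn from real search data.

```python
# Toy long-tail illustration: under a Zipf-like distribution, the head terms
# take a surprisingly small share of total volume. Parameters are invented.

def head_vs_tail(num_queries=1_000_000, exponent=1.0, head_size=100):
    weights = [1.0 / rank ** exponent for rank in range(1, num_queries + 1)]
    total = sum(weights)
    head_share = sum(weights[:head_size]) / total
    return head_share, 1.0 - head_share

head, tail = head_vs_tail()
print(f"Top 100 queries: {head:.0%} of volume; the long tail: {tail:.0%}")
```

Under these made-up assumptions, the 100 most popular queries account for only around a third of total volume, which is the economics that made a comprehensive index, and the AdSense-funded content to fill it, so valuable.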
Web spam
The first forms of spam on the internet were unsolicited messages, which hit the mainstream as email spam. During the early 2000s, Google started talking about the problem they’d ultimately term “web spam” (the earliest mention I’ve seen of link spam is in an Amit Singhal presentation from 2005 entitled Challenges in running a Commercial Web Search Engine [PDF]).
I suspect that even people who start in SEO today might’ve heard of Matt Cutts — the first head of webspam — as he’s still referenced often despite not having worked at Google since 2014. I enjoyed this 2015 presentation that talks about his career trajectory at Google.
Search quality era
Over time, as a result of the opposing incentives of webmasters trying to make money and Google (and others) trying to build the best search engine they could, pure web spam wasn’t the only quality problem Google was facing. The cat-and-mouse game of spotting manipulation — particularly of on-page content, external links, and anchor text — would be a defining feature of the next decade-plus of search.
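Purely to illustrate the kind of signal that cat-and-mouse game revolved around, here is a toy heuristic that flags a page whose inbound anchor text is dominated by a single exact-match commercial phrase. It is a sketch of the concept only, not any search engine’s actual detection logic.

```python
# Toy over-optimization check: what fraction of inbound anchors repeat the
# same exact-match phrase? Data and threshold are illustrative assumptions.

from collections import Counter

def anchor_concentration(anchors):
    counts = Counter(a.lower().strip() for a in anchors)
    phrase, hits = counts.most_common(1)[0]
    return phrase, hits / len(anchors)

inbound_anchors = [
    "cheap blue widgets", "cheap blue widgets", "cheap blue widgets",
    "cheap blue widgets", "Acme Widgets", "https://example.com", "click here",
]

phrase, share = anchor_concentration(inbound_anchors)
if share > 0.5:  # arbitrary illustrative threshold
    print(f"Suspiciously concentrated anchor text: {share:.0%} say '{phrase}'")
```

Real systems were, and are, vastly more sophisticated, but the underlying tension between natural and manufactured link profiles is the same.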
It was after Singhal’s presentation above that Eric Schmidt (then Google’s CEO) said, “Brands are the solution, not the problem… Brands are how you sort out the cesspool”.
Those who are newer to the industry will likely have experienced some Google updates (such as recent “core updates”) first-hand, and have quite likely heard of a few specific older updates. But “Vince”, which came after “Florida” (the first major confirmed Google update), and rolled out shortly after Schmidt’s pronouncements on brand, was a particularly notable one for favoring big brands. If you haven’t followed all the history, you can read up on key past updates here:
A real reputational threat
As I mentioned above in the AdSense section, there were strong incentives for webmasters to create tons of content, thus targeting the blossoming long tail of search. If you had a strong enough domain, Google would crawl and index immense numbers of pages, and for obscure enough queries, any matching content would potentially rank. This triggered the rapid growth of so-called “content farms” that mined keyword data from anywhere they could, and spun out low-quality keyword-matching content. At the same time, websites were succeeding by allowing large databases of content to get indexed even as very thin pages, or by allowing huge numbers of pages of user-generated content to get indexed.
This was a real reputational threat to Google, and it broke out of the search and SEO echo chamber. It had become such a bugbear of communities like Hacker News and StackOverflow that Matt Cutts submitted a personal update to the Hacker News community when Google launched an update targeted at fixing one specific symptom — namely that scraper websites were routinely outranking the original content they were copying.
Shortly afterwards, Google rolled out the update initially named the “farmer update”. After it launched, we learned it had been made possible because of a breakthrough by an engineer called Panda, hence it was called the “big Panda” update internally at Google, and since then the SEO community has mainly called it the Panda update.
Although we speculated that the internal workings of the update represented one of the first real uses of machine learning in the core of Google’s organic search algorithm, the features it was modelling were more easily understood as human-centric quality factors, and so we began recommending SEO-targeted changes to our clients based on the results of human quality surveys.
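As a rough sketch of what acting on human quality surveys can look like in practice, the snippet below averages ratings into a per-page score. The questions, responses, and threshold are hypothetical illustrations, not Google’s published guidance or Distilled’s actual survey instrument.

```python
# Toy aggregation of human quality-survey ratings (1-5 scale) into a score.
# Questions, responses, and the threshold are all hypothetical.

from statistics import mean

responses = {
    "Would you trust the information on this page?": [2, 3, 2, 1, 2],
    "Is this content written by someone who knows the topic well?": [3, 2, 2, 2, 3],
    "Would you be comfortable giving this site your credit card details?": [1, 2, 1, 1, 2],
}

page_score = mean(mean(ratings) for ratings in responses.values())
needs_work = page_score < 3.5  # arbitrary illustrative threshold

print(f"Average quality score: {page_score:.2f} (flag for improvement: {needs_work})")
```

Running questions like these against real users, comparing scores across a site’s templates, and prioritizing the worst-scoring sections is one practical way to act on an update whose inner workings could only be guessed at.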
Everything goes mobile-first
I gave a presentation at SearchLove London in 2014 where I talked about the unbelievable growth and scale of mobile, and about how slow we were to realize quite how seriously Google was taking this. I highlighted the surprise many felt hearing that Google was designing mobile first:
“Towards the end of last year we launched some pretty big design improvements for search on mobile and tablet devices. Today we’ve carried over several of those changes to the desktop experience.” — Jon Wiley, lead engineer for Google Search, speaking on Google+ (which means there’s no canonical page to link to for the quote, though it’s referenced elsewhere as well as in my presentation).
This surprise came despite the fact that, by the time I gave this presentation in 2014, we knew that mobile search had begun to cannibalize desktop search (and we’d seen the first drop in desktop search volumes):
And it came even though people were starting to say that the first year of Google making the majority of its revenue on mobile was less than two years away:
Writing this in 2020, it feels as though we have fully internalized how big a deal mobile is, but it’s interesting to remember that it took a while for it to sink in.
Machine learning becomes the norm
Since the Panda update, machine learning was mentioned more and more in the official communications from Google about algorithm updates, and it was implicated in even more. We know that, historically, there had been resistance from some quarters (including from Singhal) towards using machine learning in the core algorithm due to the way it prevented human engineers from explaining the results. In 2015, Sundar Pichai took over as CEO, moved Singhal aside (though this may have been for other reasons), and installed AI / ML fans in key roles.
It goes full-circle
Back before the Florida update (in fact, until Google rolled out an update they called Fritz in the summer of 2003), search results used to shuffle regularly in a process nicknamed the Google Dance:
Most things have been moving more real-time ever since, but recent “Core Updates” appear to have brought back this kind of dynamic where changes happen on Google’s schedule rather than based on the timelines of website changes. I’ve speculated that this is because “core updates” are really Google retraining a massive deep learning model that is very customized to the shape of the web at the time. Whatever the cause, our experience working with a wide range of clients is consistent with the official line from Google that:
Broad core updates tend to happen every few months. Content that was impacted by one might not recover — assuming improvements have been made — until the next broad core update is released.
Tying recent trends and discoveries like this back to ancient history like the Google Dance is just one of the ways in which knowing the history of SEO is “useful”.
If you’re interested in all this
I hope this journey through my memories has been interesting. For those of you who also worked in the industry through these years, what did I miss? What are the really big milestones you remember? Drop them in the comments below or hit me up on Twitter.
If you liked this walk down memory lane, you might also like my presentation From the Horse’s Mouth, where I attempt to use official and unofficial Google statements to unpack what is really going on behind the scenes, and try to give some tips for doing the same yourself:

SearchLove San Diego 2018 | Will Critchlow | From the Horse’s Mouth: What We Can Learn from Google’s Own Words from Distilled
#túi_giấy_epacking_việt_nam #túi_giấy_epacking #in_túi_giấy_giá_rẻ #in_túi_giấy #epackingvietnam #tuigiayepacking
0 notes