#but for them there is still a line that they dont cros
knox-enden · 3 years ago
ah I wasn't expecting my sleep deprived ranting in the tags to get noticed so hi uhhh what do you think Schlatt and Wilbur were like as teens? how different are they from their future selves?
As teens, I like to think that they're just dumbass edgy teenagers who use humour as both a coping mechanism and as a tool to repress a lotta stuff. The two didn't have the most ideal upbringings, especially on Schlatt's side, so they come together to perform bouts of dumbassery.
Honestly, they kinda act like Tommy and Tubbo, although they commit more crimes and don't really care too much about others.
They aren't all that different from their future selves, but there is one major difference: teen Schlatt and Wilbur still both have hope for the future.
They're teenagers, and yeah, Schlatt is still very observant and Wilbur is still a smooth talker, but they still have good morals. I mean, as of the moment, the two best friends are happy.
Wilbur has a wonderful girlfriend, Sally, who he's planning on marrying and perhaps starting a life with, maybe even having a kid. And sure, his father doesn't come home often (he ain't a bad dad in this AU but he does travel a lot), but when he does, nothing goes south. Wilbur is happy.
Schlatt is with his best friend. He's away from the homelife he grew up with, he's able to do what he wants. He has plans of growing a powerful business, while still having his best friend around. Schlatt is happy.
The two are just happy as teenagers, and it really throws Tommy and Tubbo for a loop.
30 notes
pinktatertots99 · 4 years ago
🔥 Feelings on the canon ships of Homestuck?
Send Me a 🔥+ a Topic, and I’ll Tell You My Honest Opinion About It
god am i gonna need to go with the sequels too? just og or all ships that're considered canon by the end? whatever i'll just go in order in what might be the 'canon' ships from all three of these categories. this is gonna be fucking long so anything and everything is under the cut. also i'm SO not gonna add hiveswap, that can be its own separate ask. so:
roxygen: it's a cute ship...but the sequel vers is garbage considering how inconsiderate roxy is to john's feelings and his house burning down like damn rox this is the guy who sat with ya as ya mourned doom rose's death, give the guy some fuckin time to himself.
rosemary: also cute ship...sequel versions are fuckin godawful tho. they're barely a thing in meat from wha i can gather and then there's candy...oh CANDY kanaya deserves better, fuck this sense of her sayin she's over it, idc if it was off screen, even then half the cast ate stupid pills during that time so why must i be surprised that this is wha happens?
dave/kat: i dun like it. in either universe. meat is just perfect gay bois who have occasional deep talks and literally do nothing else, while in candy they split up thanks to jade which, geez ya guys must've been shit to tell her to fuck off like come on. og hs wise i barely consider them canon if we only get pictures and them just being on equal footing on quadrant talk. not to mention dave was implied to be crushing on jade and JOHN not karkat, idk where this couple even came from other than the love triangle situation with terezi but like, that's barely much of a reason to become canon. i'd go with em bein pale/moirails more.
jadedave: i'm guessing candy and meat, i thought meat implied they were dating but may as well. so....it sucks but thats because candy and meat suck, meat dave's basically cheating on her, i didnt hear any implication they broke up and she's like...chill??? and then there's candy where she literally forced him and kar to break up and dated him after dirk apparently died, i do not like the implications of this whole thing. course candy dave is dead and a robot now so...anyways canon wise dave had a crush on her, and if jade does like him i'd hope it's not cause of davesprite cause despite both being daves they were different. it's cute otherwise.
davepetajade: it's...cute? i guess? idk i kinda found davepeta a bit...idk overwhelmingly overrated? like i know where their popularity came from but readin the series now after all that hype i dont really see it, anyways tho it's basically davespritejade with nepeta in the mix. and idk, nothin implied much of nepeta liking jade, or talkin to her much. and davepetasprite is a mesh of both so idk. it's a ship with cute fanon works of em hanging out in outfits but that's bout it.
janejake: i hate it. legit. this is disgusting and completely throws out jane's character. like even in the fixed timeline the talk she had with dirk probably still happened on the god beds, and how she acknowledged wha she thought was wrong on wantin jake's kids and so on, trickster still happened, and how she also realized she might've overhyped jake. but lets throw it ALL out the window to force jake into an unhappy marriage in both universes and possibly force him to stay in candy due to having tavvy, if i'm reading the implications right. even then jake isn't good for jane either, both got their own needs, this ship would've been sunk in canon and WAS, but the sequels are beyond it so maybe that explains it, but it disgusts me.
roseterezi: guess in meat specifically. yeah i kinda dont...care for it, like i still cant tell if rose and kanaya broke up or if she just fucked off without breaking up, either one is fucked up towards kan. even then i just dont care for their kismesis, it got brought up once and that was it.
jaderose: candy wise i guess even tho it was a fling. it disgusts me still, mostly because of kan bein fucked over and both goin through a 'surrogation' process without her notice. like fuck this shit, the jaderose fans deserve better.
roxycallie: idk if this one's canon but it's heavily implied callie lives with roxy, least in candy. it's cute, cant deny it, even in og it was pretty cute, dont really care for the candy vers tho but then again maybe they're not a couple in it, idk what's canon couples anymore.
johnterezi: literally fucked in the meat universe and john has kismesis feelings for her in canon. it's...interesting, idk tho i feel like it's one sided on john's side.
ms paint/spade slick: i cant deny it’s cute, he’d least know how to treat a lady but god i’d hope it wouldnt be his only defining trait with her. also want ms paint to call his bullshit out pls and thanks.
dirkjake: honestly i cant tell if they're STILL canon in og or not, god forbid the sequels. in general though...i dont. i honestly dont really like em together much. they seem like the type to at least stay friends but idk if another relationship would be a good idea for em. maybe later down the line but otherwise canon wise they need a break.
and now for the canon one-timer ships, this involves any ships implied, unconfirmed, ex-relationships, crushes, etc:
arasol: it's cute, best ship. their quadrant was never confirmed but regardless they're cute. sol tho in the sequels deserved better than to get abandoned by aradia goddamn.
fefsol: also cute, i live for both of em bein asses together.
erisol: oh boi this one...this was...yeah i cant even deny they wouldnt be too healthy, i like lookin at fanon takes for em tho. canon wise tho yeaaaah no these guys definitely wouldnt work.
gamtav: it’s...cute but boi gamzee needs some help i think.
gamsol: -sollux did imply he either wanted a kismesis or matesprit with him in one of the flashes- again same as gamtav.
aradia/equius: BIG NOPE nope nope nope equi that's weird wha ya did, never do it again, thank fuck aradia hasnt been around him since.
karterezi: they're actually kinda cute, looking back on em they could've worked. stupid doomed timeline bullshit.
daverezi: also kinda cute, idk tho if i got flushed for em, i get more pale vibes but it was semi a thing.
kanvris: it’s baaaaad kanaya deserves much better and vriska never seemed much the type for cementing into a relationship.
vristav: even worse, like i'd like to thank fuck tav one-upped her in the end cause fuck wha he had to go through.
karmeenah: it...could be cute? maybe? only iffy part is the ages, i thought the dancestors were like sixteen tho since the kids said they were teenagers even tho they were at the time about fourteen? idk tho if eighteen is considered an adult on alternia or not tho it's kinda implied to be? anyways tho it's just maybe a bit off puttin.
meenahvris: it's kinda cute, it was at least, idk lookin back it does feel more unhealthy.
rufidama: baaaaaad, i love rufi but he's got some bullshit he needs worked out and damara deserves someone better.
rufihorr: just as bad as above, both deserve someone better, or at least horuss does with some therapy on it, rufioh i think should just chill on relationships, but it's so obvious they're not meant to be.
mitula: it's cuuuuuute i cant deny it, ...okay the fanon vers is, canon is barely anything and tula does give more pale implications for tuna, but with how protective she was over damara near him it's sweet, but god do i wish canon tuna gave more feelings for tula.
kantula: it's...creepy. like it's so obvious the vantas bois cant communicate well but kankri's crush feels almost pressuring on tula when he kept goin on about them and goin "oh but we're totally friends and i'm celibate so it's okay its whatevs" like kan, go to a corner, give tula some air to breathe.
crotuna: BIG NOPE cronus needs to learn fuckin boundaries thirsty fish bastard.
should i even add cro//eri due to the fact he literally asked eridan out? regardless gross, ew, no, i'll take the fanon ampora brothers any day, canon i didnt fuckin need that thx.
porrnea: it was implied to be more of a fling. idk considerin aranea’s track record i cant really say i’d trust her in many flushed quads. and porrim seems the type to have hers open and not a closed off thing so idk they got different cases.
aranea/jake: i cant deny it's fuckin cute, i would've loved if they tried to do somethin but aranea was definitely uhhh not a good choice for jake. least she backed off when he didnt wanna be kissed but man yeah, it was cute while it lasted.
kurmeu: i cant deny the idea kur forced himself quiet due to hurting meu hurts me in a sweet way, but as of rn them bein 'pale' and him mind controllin her when we dunno if she's alright with this or not is...disturbing.
vristerezi: i am HIGHLY doubtful this is canon considering everything but i guess i gotta cement this. i dont see em as canon in og or sequel wise since vris is still gone in both, even then i dont like, see it, i see it but idk man, i like em more pale than pail.
erifef: honestly no. both are much too different for a relationship, kinda glad they uh...got cut short cause honestly even their moirailship wasnt healthy, what's to say a matespritship would be? on BOTH sides mind you.
rosejohn: thank karkat's shipping board. anyways, i think they're cute cause fuck it, rose is a bi-con to me, canon wise it probably wouldnt work but i'll take fanon.
vriseri: kinda glad they got cut short of their kismesis cause boi eridan deserves a better one with how shit vriska was in breaking up with him.
johnvris: it was cute, i cant deny i'm soft over how the two talked about vriska's life and john's, it's just kinda cute. it's obvious tho canon wise with wha john went through it wont work out. would've loved if they became moirails tho but o well, canon is god i guess.
spadePM: i dont like much of their implications, would be an unhealthy relationship regardless considerin spade’s flushed and PM’s pitch, they deserve some therapy and other people.
dadbert/momlonde: they're cute, i like the implications of em, sad they died though, it was cute while it lasted.
meowrails: may as well count moirails in this shipping mess. anyways they're cute, they gimmie sibling vibes, course equius early into it was so...not a good moirail.
kurtuna: i guess it might be cute moirails? idk tho with kurloz’s implications it concerns me.
gamkar: as moirails...karkat was fuckin shit at his job, i cant sugarcoat it. i get where it's from, he's not gam's lusus and shouldn't be forced to check on him during his time of gettin high and such, i get they were kids, but god gam kinda deserved a better moirail. and then later on in the series it gets more fucked up, between kar gettin stabbed by him and both bein in a pretty unhealthy moirailationship, to the fixed timeline where gamzee is just shut into a fridge and kar doesnt fuckin care, like dude, wow. gamzee was bad yeah but damn, a tad harsh.
terezigam: as a kismesis it's almost disgustingly unhealthy to me and honestly terezi deserved better, and gamzee maaaaybe shouldn't get a kismesis, ever, unless he can sort his shit out -the sequels tho wont do that lol-
minorly gonna count johndave in this: idk if i can see john reciprocating for dave so dave's crush on him almost kinda hurts, especially since fixed timeline dave's john is, well, dead and our john is probably still different from his john, has angst but man i kinda dont mind it as a one-sided crush, it's nice confirmation of dave bein bi at least.
nepetajasper/jasprose: i cant see it, it’s disturbing i guess. i like em more as friends but jasprose is probably more creepy bout it.
signless/disciple: i think considerin the implications they were fuckin adorable and deserved the best.
summoner/mindfang: it's kinda sad considerin it's implied mindfang's love for him might've been one-sided, they could've been cute tho.
orphaner/mindfang: probably sounded like the best kismeses until he murdered dolorosa.
dolorosa/mindfang: BIG NOPE i dun like the implications.
condesce/orphaner: since it's implied orphaner had a crush on her, gonna say tho big nope considerin the condesce is a bitch.
condesce/lord english: it's hard to decipher their relationship in canon, but to cover all my bases it's a big nope to me, somethin bout it makes me uncomfy despite both bein bastards.
9 notes
yoitscro · 5 years ago
First thought: Homestuck^2 should've just been called Beyond Canon, and more people should call it that. 
The 2 was put on for chuckles; HS trending the day it was announced, with it being a sequel, spoke enough about how such a thing shan't be underestimated, and why Homestuck is ABSOLUTELY more than just our small twitter crowd (and the scrap of us still on tumblr). I say that because remembering the Beyond Canon part slightly reassures me that this is a fanwork that will do some weird shit, and things I don't agree with, but isn't something that I have to subscribe to enjoying all the way with how I engage with Homestuck.
Homestuck 2 is not the canon continuation. Homestuck 2: Beyond Canon is an OFFICIAL continuation.
Not putting it on such an important stool, as the only content we're all allowed to digest, should come from both people who obsessively dislike it and people who defensively support it. If a character says they kick babies then I can say, hey that's weird, maybe not great writing, but I can pretend they don't in my content, and I don't have to send threats or call people cishet white men for it! And it's an absolutely great thing that we were all encouraged to create our own ideas without anyone who's influenced us to do so squinting their eyes when we actually go through with it. Glad I don't have to hold this story up to the expectations of being a sequel to an 11-year, worldwide IP that's shooketh the internet landscape, since it's merely optional, Death of the Author persists, and ideas aren't just dominated by and revolved around the perspective of a 1% of this entire fanbase.
That said.
As an OFFICIAL continuation versus a canon one, HS2 is ok. It certainly has that fanfiction vibe, and a story it wants to tell. I can't really tell what that story is since we have like, 10 subplots rn though. There's no real clear indicator of where the focus of the main conflict is that connects all these stories together.
I thought that the prose in replacement of Vriska's battle was jarring, but not teeerribly surprising for the format HS2 is going for. It's more so using drawings to complement text, versus Homestuck's usual of panels being side by side with visual importance, or even being the complement itself. It sorta feels weird tho that it brought old fans back in with art just for them to get sneered at when they get a bit upset that there won't be the main staples of art known to progress the story forward.
Also, people who mock people for "having to read homestuck", knowing there's language barriers and struggling focus from those who've been used to something that was never so dense, are ridiculous.
Personally this could be solved by knowing how old flashes worked, having way more artists on the team, maybe even an art director if there isn't one already, and noting that we're not asking for the next Cascade. Rome wasn't built in a day, but Rose Ride sure was, and Homestuck's animation is absolutely not the same as a 12-24 framed 12 minute cartoon. That, or just snuff the illustrative art as a whole since it's very clear where the focus is.
I’m sure you’re not here trying to see my opinions on how the outer workings are though, versus plot.
Uuuuh, let's see. Yiffy's still a name I don't care to use until I eventually get tired of my art not showing up in tags. This is fine and not as offensive as people are saying it is. Minors who want to cosplay this character don't have to call themselves this character. Not wanting to be one letter away from accidentally entering a very NSFW space of twitter is fine. Also, a lot of people call Tavros Tavvy.
I hope Kanaya's anger at being cucked is actually seen versus being implied through fan guesses and another character having to say she was.
Roxy needs to be more of an involved character. Where are they during all this?
Jane should have a mention of her relations to HIC being a main/bad influence on her current parallels to Alternian dictatorship.
The PRE-RETCON GROUP should have a fun one-shot update for fans who like them, since they oughta be around if they fell through the ghost hole. Most of them. The sprites that aren't Jasprosesprite should show up too, since they're around.
Aaaaaand I think we should be extra careful going into the future when it comes to the alien rebellion. It's weird that a lot of the writers are white and toy around with concepts that can be a not so great parallel to racism. Currently not great timing rn! If the characters are going to remain aracial, but still do little to reference other non-white earth cultures or get new haircuts that have different textures (looking at you, Rose), we shan't make the species with actual biological benefits a racism commentary. The xeno joke at least had a play on words. If any writer has happened upon this then a, please don't get mad at me again haha, and b, consider having more black writers or directional assistance on your squad. You know who they are.
In the future, I casually want the ghosts from the Dream Bubbles to be shown, since it's a big elephant in the room to not have a single one of them in the bg despite a load of them appearing from the ghost hole. Don't gotta give them speaking lines, especially the dancestors. I personally don't know if I want that right now.
I also hope in the future that we don't get HS content that is only going to revolve around HS2, if it's optional enough to engage with without being the only option. That's why PQ could have ended a bit better for me, and why I hope it's not the main thing that's keeping Hiveswap on the backburner. I don't think it's farfetched to consider that multiple HS works could come from more than just one team; to relieve workload, but to also strengthen the idea that Homestuck can be any number of perspectives when it comes to the ideas fans have. The most dedicated fans leading the direction of the story shouldn't just be a handful of them. If anything, at least acknowledge the massive ass fan projects going on once in a while to showcase the different avenues.
"Hey Cro, you sure have bitched about this alot. Do you have anything good to say? Why don't you stop reading if you hate it so much!"
Not every comment needs to be golden, love. Again, some of these decisions I eck at, but ultimately they're just words on a computer that I'm not holding anyone at gunpoint over, and I'm curious to see how the story handles itself going forward, since again, it's just a fanwork. Sometimes I wish not only to see where the plot goes, but to see a writer's craft in action.
Good Things:
The Art. Again, please have more artists. It'd help so much, especially since the main one is also double-timing for VE. That said, HS2 sticks out to me because of the way the color composition is used. Aside from hair and other tiny things, I haven't seen black used a lot, which makes colors pop. It's really nice to look at. I hope we get more of the sharper character styles in the future, since it builds on nostalgia and makes the trolls feel much less like they're from Repiton, but I can deal with it for the most part. I also like that one panel where the omega kids and vriska are talking in the dark room, and based on where they're standing, the text aligns. Tasty as hell.
Meat and Candy still do hold neat logic in the direction the stories go. Candy, while it might be more tasteless in some areas, is chaotic and too much of a good thing. Meat has something a little more straightforward, though I'm not sure quite yet where it's going. I always found Candy to be the part of the epilogue that actually entertained me the most, from how much it felt like a surreal Robot Chicken skit at 3am. Sometimes the jokes slapped real nice and made me wonder, going in, how is this monkey's paw gonna play out and, hopefully, make people laugh or smirk like they got a good roast at themself?
The slightly episodic feel of each update is what I wanted from the Epilogues, so it's interesting to see that play out when it comes to switching different perspectives.
The bonus updates get points for featuring characters that a lot of us have been wanting to see for ages.
Hopefully this isn't unpopular, but I think the tension of Yiffy's introduction was nicely composed and written (ignoring some of the things I wish for Jane). It leaves you with enough want to see what'll happen next time. You could also say that despite her growling and making a lot of noise, it's not actually bad writing: I see it as the audience being forced to see her in the same perspective that Jane sees her; a dog. With no context we're seeing the same thing while knowing things are obviously off, and once we see this character in a new environment where their personality shines, it'll have a bigger impact when her own character is humanized. So I like that.
Okay, I think that's all I got. I improv wrote most of this; hopefully I won't be taken out of context, since I don't think that HS2's writing should ultimately be a judgement of the writers as people, nor treated as if they should hold to the same unhealthy work environment that Andrew forced himself into when writing the og comic. And I'm still like, donating to the patreon and everything, lol.
[runs away]
edit: i was going to put the cw as another positive thing for the comic...but...yeaaaah.
44 notes
lokbobpop · 4 years ago
Cross
Cross, the principal symbol of the Christian religion, recalling the Crucifixion of Jesus Christ and the redeeming benefits of his Passion and death. The cross is thus a sign both of Christ himself and of the faith of Christians.
Cross c Ross cro ss cr oss
Writing the word cross
Being cross angry i see i have blame towards myself for spending so many wasted years cross angry and not seeing and realizing this wishing id woken up to what i was doing but now seeing it start from here how i am at any given moment i dont have to be cross i dont have to show im cross i can change i see what im allowing i can walk through the judgment and justification of being cross i can let it go i dont need to be cross anymore i can see it for what it is even my ego within writing this now says yes Caroline you're amazing you can do this and seeing others see me as amazing and doing it lol
Reading the word cross
Green cross code where Jimmy Savile would be on the tv and say how to cross the road then this guy turned out to be one of the country's top pedophiles and i think who can you trust it's disgusting how this could even happen a man given the responsibility of children who is a pedo people must've known he was weird i feel the system fails people and people are scared to talk and say what they think in fear of being ridiculed
Sean cross a friend from HK friendly very opinionated judgmental loves animals
The cross as in crucifixion seeing people on the cross blood on their hands and feet thinking about the pain they must've gone through dying on the cross.
Crosses at school when you got your work back from being marked and seeing all the red crosses where you were wrong which was mostly all of them thinking how can this be it felt right im sure it was right they must be wrong i can't be this wrong all the time no way.
Red Cross charity how i feel they have taken so much money from people and the top people who have lined their pockets with the gold of the hard working person who has believed they are putting their money into a good cause but only giving it to the few at the top while the needy are still needing.
I remember when i was really deaf my friend's father who was very religious said oh what a cross to bear i thought about it and thought like is he saying something else did this to me or did i do it.
The cross as in kisses how i use them often and how most people do xxxxxxxxx you’ve got to love them.
Saying out loud
Jesus cross to be cross
To cross it out like what i do when ive walked a word i cross it out like im getting somewhere where i can see whats going on to feel better that the amount i have to do today is getting smaller on my 6th word
To cross paths with someone you don’t want to
To cross the road why did the chicken cross the road
Sf
Does this definition support me no pedos in children's care myself being cross and self judgment Red Cross thief of good people's money and people on crosses dying its all here within this word just a simple word i would never have looked at
Cross cost
Cross
A mark of the X a kiss
To cross a road
To cross out things done or accomplished
How will i live this? By seeing i have chosen not to be cross i choose me with self love self forgiveness self power
0 notes
mattmartelli · 8 years ago
hi everyone im lina !! im fifteen n i live in the cst timezone so im usually on around this time of night !!! also my pronouns are she/her ! im super excited to be rping w yall ??? u should all hmu bc i wanna be friends w all of you.
DYLAN SPRAYBERRY - Did you hear MATTHEW MARTELLI joined the drama club now? Yeah, apparently they’re a SIXTEEN year old JUNIOR at Washington High. I heard they’re playing KURT KELLY in HEATHERS. Let’s hope they don’t fuck that up. I’ve seen HIM before in the FOOTBALL PLAYER table at lunch. At that table, they’re known as the GOLDEN BOY, because they’re +PERSPICACIOUS, but also -POLLYANNAISH. I wish them good luck with their slushy facial, though.
theres a stats page here and i’ll make a plots page but i already have two of them here !!!! message me to plot or like this and i’ll come to you once i get over my whole shyness thing
matt was born matthew alexander hemingway-martelli on august 29th, 2000 ( he's the youngest junior in the school heh ), but his family no longer uses their father's surname ( hemingway ), in order to honor their mother after her death. now they all just go by martelli, though they are still legally hemingway-martelli.
he was born in manhattan, ny into an upper class family but moved to rivercreek when he was three, the town his parents met and grew up in.
when he was in his mother's womb, matt got sick. this left him with fine hearing in his right ear, but very very limited hearing in his left ( he can really only hear loud noises really close to his ear, and everything sounds mumbled and blurred ). therefore, hes fluent in asl and learned it alongside english growing up. he uses a cros system at the moment but is always afraid he'll be bullied bc of it, so he keeps it hidden and only those close to him know he's even hard of hearing. u dont wanna know how many times hes had to lie and say he was attracted to someone who caught him staring at their lips ( aka lip reading )
his family has five members. he has his dad william, his mother olivia, his older brother maxwell (21), and his older sister ________ [to be added by another rper] (17-18). hes the youngest of them all and his siblings never miss a chance to tease him about it.
his mom was a surgeon, and matt never really liked going to work w her bc when she was at the hospital she was always so serious yet at home she was the most fun person ever ?? she always encouraged him to be healthy and do well in school and was basically his biggest role model. when he was thirteen, she died after being hit by a drunk driver, leading matt to make a vow never to drink or do drugs. ever.
the entire family was devastated by the loss of their mother. thank goodness they turned to each other instead of bad coping mechanisms like drinking or drugs, and ultimately it made their family bond even stronger.
his father was/is a very successful businessman, and used to be very very involved in his large chain of pizzerias that were nationwide, but its hq was in manhattan. when he had kids, he started to focus more on family and when matt was three, he decided to step back a bit and do most of his work from home in order to take care of the kids bc their mother was v busy. hes also a rlly good chef ( hence the pizzerias ) and they often have family dinners around the table ( in fact, when their dad is home its mandatory every night theyre free ). now that the kids are old enough to take care of themselves, hes started to go back to nyc and get back in the swing of things. for the past two years, it isnt uncommon for him to spend a few days there every few weeks.
matt is the kind of person youd call a 'gifted child.' ever since he was a kid, hes always been that kind of cookie cutter kid pulled out of a 40s comic strip - the all american boy next door whos nice to everyone and good at sports and has a 4.0 gpa thats friends w everybody ( yall know what character im talking about ) - but having that burden placed on him at such a young age has begun to stress him out big time
his after school activities include various volunteer stuff, football, track and field, gymnastics, peer tutoring, national honor society, and now theater. he also is taking seven classes ( not including pe ) - four honors and three aps
when matt grows up he really wants to be an engineer but also wants to follow in the footsteps of his mother, so hes torn between becoming a surgeon or a biomed engineer.
hes lowkey obsessed with maintaining his perfect image. he spends hours upon hours studying each night, works out and practices football and gymnastics almost every day, and on top of that, now he has this freaking musical to top it off ! ( yes, that says freaking. he doesnt swear. )
speaking of the musical, lets just say matt is not pleased. his sister encouraged it based on how much matt enjoyed choir when he was in junior high ( which unfortunately he had no time in his schedule to continue ) and urged him to audition and play a member of the ensemble as a way to destress at the end of the day and do something fun. but no, they just had to give him kurt kelly. while its not a leading role, its still enough that now hes stressing about lines and his singing and dancing and not knowing if what hes doing is realistic because hes never been drunk or a douche how is he supposed to know?!?! i mean he still enjoys it tho but its just kinda like the icing on top of the cake
idk. i’ll update this but plot w me for now !!! yeet
also im not gonna be able to be on or opening tomorrow bc i have rehearsal for my play til 7pm cst so whoops
5 notes
jimmybechtel · 4 years ago
How Long Does SEO Take to Work? Plan Ahead!
Believe it or not, we've hit July! Now is the perfect time to plan ahead and get thinking about Q4, Black Friday, and Christmas.
How Long Does SEO Take?
4 to 6 months is what most will say. Even Google says to give any organic SEO change at least 3 months to have an impact.
The Timeline and Lag for Owned and Earned Media
The organic SEO landscape is far more competitive and complex now than it was 10 years ago, so you need to be at your very best to succeed. A page cannot rank if it's not crawled and indexed, so we need to give search engines like Google time to crawl and index any changes before those changes can possibly influence the search engine results pages (SERPs).
Keyword Landscape
Don’t obsess over a couple of highly competitive vanity terms. Look at the bigger picture and don’t just ask “how long will it take me to rank #1 for this [insert highly desirable term]?”.
Search now is all about overall global keyword visibility and ranking for a whole host of related keywords. This includes natural longer tail language patterns with the emergence of voice search.
Static Content & Longer Tail
Every page represents a new opportunity. You should be targeting a range of your main money terms across your product and service pages, but also plugging any keyword gaps with always-on hygiene content via a blogging and content strategy to capture that ‘unaware’ traffic in the early research phase of the buying cycle.
Our A.I. & Clever Predictive Purchase Process
We leverage A.I. and our predictive purchase process flows across every channel we manage.
In this case, it’s all about creating fantastic content to reach your potential customers and to intercept them when they’re in the unaware stage of the funnel, to gain awareness and to build trust with users that match your target personas and have a high propensity to convert later down the funnel.
Remember, rankings are meaningless without highly converting, profitable traffic.
Focus on Outcomes
Leading on from above, that’s why you should work with SEO agencies who focus on outcomes, rather than outputs. We’re all about performance and ROI.
Site History
If your site has been around 10 years and has a domain authority (DA) of 50, you might expect a loftier, more ambitious brief to be appropriate than for, say, a site that's brand new with no previous SEO (we all get that enquiry from someone wanting to be #1 for "loans" within 6 months…).
A Timeline & Indicative Roadmap
So, what does it depend on? It depends on so many factors including but not limited to how long your website has been around, how much SEO work has been done on it previously, how mobile-friendly it is, what’s the design and UX like, how much content is on it, its link profile as above, and hundreds of other SEO factors. That said the first six months may include:
Month 1: This will typically be strategy focused, including keyword strategy and planning, a website & technical SEO audit, a mobile-first audit and a link profile analysis
Month 2: Following the initial strategy work we can start making modifications, including an on-page SEO checklist as well as a content audit for content optimisation. We can also start the link clean-up and disavow process where needed
Month 3: From month 3, once the initial on-page and technical SEO is underway, we can introduce content strategy and planning for always-on and hero content (depending on the package this can also be started in month 1, but some clients' budgets mean a phased roadmap). We can also create your digital press office and set your digital PR strategy. We'd complete a quarterly review after the first 3 months to present progress on your account; by month 4 we'd expect to be able to present some excellent progress showing the significant value we're adding
Month 4: We now commence the monthly technical and on-page SEO checks. We also commence content creation in line with the plan and start digital PR work (again, earlier if the client's budget allows) to gain links, references, mentions, citations and media coverage to build the site's DA and link profile. Around months 3-4 we also look to complete a local SEO review (earlier if local SEO is a key focus for that business), and we can identify any local link targets. This month we should start to see some fantastic progress
Month 5: We continue the on-page and technical SEO, as well as continue to work to the content creation schedule. Digital PR continues in line with the plans set for media outreach. You should be seeing more and more visibility, traffic coming in from SEO, and sales starting to increase
Month 6: We continue everything outlined above in month 5, and also look into components such as conversion rate optimisation (CRO) layout recommendations that can be implemented to improve conversion rate. Plans are never set in stone, and it's also important that we revisit the keyword landscape with a bi-annual keyword and audience analysis. We'd also complete a 6-month quarterly review to present progress on your account, and would expect the upward trajectory across all KPIs to be trending very nicely at this point, with some fantastic ROI
Protect Your Investment
It’s a lot of hard work to achieve real-world organic SEO performance, with many companies all competing for the same real estate. A lot of people underestimate how much time and money it takes to be successful with SEO.
Regardless of budget, you can't simply buy your way to the top with SEO. You need to create a sustainable, consistent strategy and maintain gradual progress. Success rarely comes within the first 3 months, so don't make any early judgment calls; go into the project with realistic expectations, budgeting for at least 6 to 12 months of SEO to give the campaign a chance to gain momentum.
Paying for just a few months of SEO does not work and is simply like throwing your money away, as SEO is a long-term investment that will grow and drive sales for years to come.
Also, don’t get swayed by any short-term tactics that can negatively impact your long-term success. Play by the rules, implement white-hat gold standard SEO practices, and work to the search engines best practice guidelines.
Summary
As above, we mentioned it takes 4 to 6 months to start seeing results. Always remember that this is a general rule and always speak to us for a customised view.
To illustrate this, if you're a brand-new business wanting to be #1 for "loans", you could be 12-18 months in and still not see this result, as it's probably not a realistic goal.
However, we can help you forecast and predict performance. See this post from a couple of months ago https://www.koozai.com/blog/search-marketing/predicting-digital-marketing-performance-the-dos-and-donts/.
The 4-6 months for most is generally accurate, but bear in mind this is when you start seeing results, and SEO results grow and compound over time. Things just continue to improve, and what you see at 6 months will be considerably less than what you’re getting at 12 months, and so on. Then in 12 months’ time when you re-forecast to look at the potential to grow within your marketplace, you’re forecasting from that new higher starting benchmark, so the compounding effect continues. SEO really is an investment.
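To make the compounding effect concrete, here's a minimal sketch in Python. The starting traffic, the 8% monthly growth rate and the month-4 kick-in are invented assumptions for illustration, not figures from this post:

```python
# Toy model of compounding organic growth. All numbers here are
# illustrative assumptions, not measured results.
sessions = 10_000   # assumed starting monthly organic sessions
growth_rate = 0.08  # assumed steady month-over-month growth once results land

for month in range(1, 13):
    if month >= 4:  # results typically start to show around months 4-6
        sessions *= 1 + growth_rate
    print(f"Month {month:>2}: {sessions:,.0f} organic sessions")
```

Because each month's gain builds on the previous month's total, the jump from month 6 to month 12 is bigger than the jump from month 1 to month 6, which is exactly why you re-forecast from the new, higher benchmark.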
So, make the proper investment, plan on being in it for the long haul, and you’ll be rewarded with long term success.
If you need any help or support with a forecast or view on the potential for growth for your business, don’t hesitate to get in touch.
The post How Long Does SEO Take to Work? Plan Ahead! appeared first on Koozai.com
0 notes
kerahlekung · 5 years ago
These two weeks, the Bani Melayu will struggle to make a living...
Today is the first day of the "movement control order" in our country. In simple terms: "please stay at home" and keep your distance from other people. These two weeks will be quiet.. everyone holed up at home.. All because the backdoor government triggered a panic. Traders, factory workers.. even the guy paid to climb petai trees is panicking along with everyone else.. even though all he does is climb trees.. After two weeks.. after a month or two.. the government clinics will be full of mothers going for pregnancy check-ups; unable to work outside, they'll have been hard at "work" at home for two weeks...
Bumiputera traders mostly run small businesses.... some feed their wives and kids selling nasi lemak.. selling tray-fried mee goreng at a rented stall.. some raise their children on the proceeds of roti canai.. Suddenly a ban comes out that triggers panic.. even those still allowed to trade become hesitant.. not for a day or two.. but for two weeks.. Even the "infidel" DAP government of Penang at least set aside 20 million to help affected traders.. what about everywhere else??? What are the PAS-cleric states giving? Blessed water and raisins?
If the Chinese traders shut their shops outright for these two weeks.. just watch.. those foul-mouthed clerics who love playing up racial issues.. might find their wives don't even have cooking oil at home to fry cucur. The ones strutting around in Mercs and Alphards.. their cars will be stranded beside the house because there are no spare parts to be had if the car acts up.. then they'll gain some sense... Where have those clowns who campaigned "Buy Malay First" gone?? In the end they were the first to storm Tesco and the supermarkets owned by Chinese towkays...
Sitting in the kampung, chatting in front of the cows and goats.. chanting takbir.. takbir.. as if only they are religious.. only they are clever.. yet when their own cleric becomes a minister and gets asked about the environment.. all he talks about is Musang King durians. Poodahh... We have a slim chance of breaking the chain of COVID-19 infection. Help the KKM by playing your part, because every individual is responsible for taking every precaution for the safety of themselves and their families. Failure is not an option here. Otherwise, we may face a third wave of this virus, which will be even bigger, like a tsunami, especially if we keep up this "couldn't care less" attitude... - Aman Shah 11
School, MRSM and university students given the two-week break should rightly be allowed to return home... at a time like this they are better off in their own home surroundings than at university.. Their meals and their time are better managed with family than at university.. unless they have symptoms of fever, cough or flu.. in the meantime, disinfection and cleaning can suitably be carried out at the universities.. My view differs when it comes to workers and heads of households who take leave and then head back to the kampung with the kids for a family holiday.. That really isn't right.. because students' movements on campus don't expose them to the outside environment the way some workers are exposed..
I can't imagine it: when the universities order their campuses emptied.. all these students are trapped.. with the surrounding shops and businesses closed.. how are they supposed to eat? Even I struggle to find food in my own area.. let alone university students.. It would be different if the Ministry of Education, the universities and NADMA provided food aid packages for the duration of their quarantine... I sympathise with the fate of these university students.. imagine if they were your own siblings and family.. It seems this backdoor, power-grab government has failed from every angle... It truly deserves to be called a government that broke in through the roof.. - Ipohmali
A miserable day for the rakyat: to leave their district or state they had to queue for ages at police stations to fill in forms.. Crowded like this, the stations themselves could become transmission points for the COVID-19 virus.. To stop people gathering, even congregational and Friday prayers were disallowed, yet now people are packed into police stations just to fill in a form.. None of that is the police's fault; they are only carrying out the orders handed down by the government.. If Friday prayers are forbidden, why isn't a scene like this? It would actually be trivially simple to spare people the queues.. Just build a website for declaring travel out of a district/area and have it filled in online.. No queuing needed.. You didn't even invite the 5 opposition states to attend the meeting.. Perhaps one of the 5 uninvited representatives could have offered a sound idea.. Suit yourselves, then, if the rakyat are so keen on a backdoor government.. May the police always be patient, steadfast and calm in facing this difficult and trying situation.. The police's service in safeguarding the nation's security is immense.. - f/bk
And what has been the reaction of Singapore's ministers to this lockdown crisis... "For companies affected by Malaysia's restricted movement order, the Singapore Government will provide S$50 for accommodation of every worker per night for 14 nights if it is not feasible for them to stay with relatives, friends or colleagues," says Josephine Teo, Minister of Manpower and Second Minister for Home Affairs. Now that is what the role of a government should look like.. Workers affected by the restrictions are each given S$50 a day in aid.. That, it seems, is how you manage a problem while taking the economy of the nation and its people seriously... - f/bk
Din's half-baked national directive for social distancing...
First, don't get us wrong. We are always prepared to give a fledgling new government a chance. But if it fumbles big-time in its first public directive, indicative of a lack of thought and planning over a policy matter, it does take quite a bit of convincing that it is actually a functional unit. Social distancing was meant to combat the spread of Covid-19 through close over-crowding, it seems. But tonight the over-crowding was at major police stations, ironically created by the need for social distancing. Within hours of the controlled movement directive by Muhyiddin, after his golf game that delayed the telecast, came the IGP's interpretation of the approvals needed from the police in order to cross state boundaries. You have to fill in a form. What? In these times of online everything! We bet the form is not even a legally registered document either. And then wait for police approval. How it is approved is a mystery; perhaps if you need medical necessities, etc.? Somebody forgot about the university students who are not from the localities. The universities told them to get out of their hostels, and yet the police have just told them to get written permission first.
Crowds swarm the police stations
They were forced to queue and crowd the police stations all over, for hours. If the Covid-19 virus was there, it was having a party with the crowds at the police stations, courtesy of the IGP!!! Tempers flared. And a policeman was recorded giving the weary and tired undergraduates a lecture that the purpose of the whole exercise was to reduce close socialising, that they should not go back to their kampungs and spread stuff. Stay where they were, he urged. He was spot-on, but he was ignorant of the fact that the kids would soon have no place to stay, having been asked to go. These are, in fact, the fortunate ones compared to the ones without travelling expenses for such emergency trips. Aha!! A Malay government not familiar with Malay poverty! What poor coordination between government agencies that caused the students to suffer. Even the Minister of Higher Learning had to publicly appeal not to be cruel to the students. Finally, late at night, the order to fill in the forms was rescinded, to the students' relief. New orders will be made after a meeting later. What? You had not known or discussed the consequences of your orders? Wasn't there a plan? What a bunch of nincompoops! Anyway, we can bet that a thousand votes were lost from "saliran satu" tonight!
Crowds after crossing Causeway to Singapore...
Then, up to the time of writing, there must be a crowd of tens of thousands of Malaysians stranded on the Causeway, trying to get there before the midnight deadline, and to be at work tomorrow in order not to lose their cherished jobs. Partying with Covid-19 alright! The government is still silent as to any inter-governmental understanding about their being away from their jobs for two weeks. Will those jobs still be there when they come back? Wisely, they dumped their government and decided to be in Singapore tonight, squatting at their friends' places or, as one father of six said, "I will sleep in the five-foot way for the two weeks if I have to!!" And in all that fiasco there was silence from the government. No cabinet minister even bothered to say anything. Like this was happening in another country. We'll let it go this time. But the next fumble will invite hell because it's the support-the-semburit team again! - Umar Mukhtar.
Our Politicians Have Failed Us...
I can recall vividly the time I stood in line at the designated polling station in Bercham, waiting anxiously for my turn to cast my vote. It was the morning of Wednesday, May 9, 2018, and the occasion was General Election 14, better known by its acronym GE14. My wife and I arrived early to be among the first to do our part for the country. We had been diligently voting in successive elections and our polling station was either at the Bercham Chinese school or the national school in Tasek. This was contingent on our listed address in our identity cards. And as responsible citizens, we had never failed in our duty. However, can we say the same of our politicians? As long as I can remember, our politicians have been involved in one political manoeuvre after another. Some were embroiled in political scandals and shameful misdeeds. And they have no qualms in doing so openly and under the glare of publicity. Examples are plenty, so there is no necessity to name one or an occasion. Events following the ouster of Tun Mahathir, the sitting Prime Minister, and the appointment of Tan Sri Muhyiddin Yassin as the 8th Prime Minister on Sunday, March 1, have taken a toll on the people to the point that trust in politicians is fast disappearing. This trust deficit is not evident in me alone but in many whom I have spoken to. And that includes Raj, my barber in Bercham. In fact, he has been the most vocal among my many acquaintances. Malaysians are growing tired of politics and politicians because of the protracted crisis which has dogged us. The manner in which the new PM got appointed has left many questions unanswered. "Did he really have the numbers?" asked Raj. "Or was it trickery by another name?" Malaysians are beginning to doubt the functions of representative democracy. Others vowed not to vote in the coming elections. "I think trust in politicians is at an all-time low. Politics is probably the most hated profession at the moment," said another friend.
The Pakatan Harapan government collapsed early this month, following Tun Mahathir's resignation as prime minister, after a group of MPs broke ranks to form Perikatan Nasional, comprising PPBM, Barisan Nasional, PAS and PKR MPs aligned to their former deputy president, Mohamed Azmin Ali. Critics have described the new ruling coalition as a backdoor government, as it comprises BN, which was ousted in the last general election, and others such as the dodgy Islamist party, PAS. It is considered morally inappropriate as it is not mandated by the rakyat. Let us look into the past to make sense of things. Even during the 1997-9 political upheaval, a government was there for better or for worse. The people's mandate was sought soon after Anwar Ibrahim, the deputy prime minister, was sacked. The rakyat decided to keep the status quo despite lots of misgivings about how Anwar was ousted. This time around, political shenanigans have been elevated to a new level. This is unprecedented in our political history. Many Malaysians, especially those who voted for Pakatan Harapan (PH) in the last elections, are dismayed. The political drama has confused the rakyat. Some netizens became keyboard warriors to voice their dissatisfaction. There are plenty of reasons why people are upset. Among them is PH's failure to maintain solidarity within the coalition. Their opponents, BN and PAS, saw an opportunity when PH collapsed. What frustrated people, especially those who had a hand in PH's GE14 victory, was that the crisis occurred at the highest level, among the political elites. They could not do anything about it but watch helplessly from the sidelines. And this resulted in Muhyiddin Yassin getting appointed as prime minister. It was something unexpected. How he got into the King's good books is a million-dollar question many have been asking. But since the monarch has decided, it is a done deal.
In politics, there are no permanent friends and enemies. We are aware of that. But there must be some dignity and decorum left in the hearts of even the most ambitious and malicious of politicians. At least respect the mandate of the rakyat in the May 2018 general election. They booted out a six-decade-old government short on ideas but high on corruption and welcomed a new, untested coalition. It was people's power at its best. Malaysians gambled their future on Pakatan Harapan. Thousands living abroad returned home to vote. The ink on their fingers had barely dried when the true colours of the new government manifested themselves. Promises were largely ignored. Race relations got worse. The cost of living soared and is still soaring. The failures are not those of the voters but of the very people they voted in. What is amazing is that the very people who liked to demonise 'the other side' suddenly find it acceptable to work with 'the devils'. We should learn from the Belgians. Their country was without a government for 541 days from 2010 to 2011 when the ruling coalition collapsed. It happened again last year when they had only a caretaker prime minister. In an era when trust and faith in politicians are growing thin, the Belgian experience is a good example. Perhaps the time has come for us to experiment with one. Our civil service is reasonably competent to begin with. They have not really shown what they are truly capable of because of interference from politicians. Tommy Thomas and Latheefa Koya are two fine examples of exemplary public servants. Too bad their careers were short-lived due to the current political imbroglio. What we need is an interim prime minister and a lame-duck parliament incapable of passing any laws. In such an instance civil servants would have no choice but to come to the fore. Let the Chief Secretary, the KSN (Ketua Setiausaha Negara), call the shots. - Fathol Zaman Bukhari, Ipoh Echo
Is it only VIPs who get scanned?
Do the rest have a guarantee letter saying they're free of COVID-19?
cheers.
0 notes
we-johnnygonzalez-blog · 6 years ago
What Happens When SEO and CRO Conflict?
Posted by willcritchlow
Much has been written and spoken about the interplay of SEO and CRO, and there are a lot of reasons why, in theory, both ought to be working towards a shared goal. Whether it's the simple pragmatism of the business benefit of increasing the total number of conversions, or higher-minded pursuits such as the ideal of Google seeking to reward the best user experiences, we have many things that should bring us together.
In practice, though, it’s rarely that simple or that unified. How much effort do the practitioners of each put in to ensure that they are working towards the true shared common goal of the greatest number of conversions?
In asking around, I've found that many SEOs do worry about their changes hurting conversion rates, but few actively mitigate that risk. Interestingly, my conversations with CRO experts show that they also often worry about SEOs’ work impacting negatively on conversion rates.
Neither side weighs as heavily the risk that conversion-oriented changes could hurt organic search performance, but our experience shows that both risks are real.
So how should we mitigate these risks? How should we work together?
But first, some evidence
There are certainly some SEO-centric changes that have a very low risk of having a negative impact on conversion rates for visitors from other channels. If you think about changing meta information, for example, much of that is invisible to users on the page; maybe that is pure SEO:
And then on the flip side, there are clearly CRO changes that don’t have any impact on your organic search performance. Anything you do on non-indexed pages, for example, can’t change your rankings. Think about work done within a checkout process or within a login area. Google simply isn’t seeing those changes:
But everything else has a potential impact on both, and our experience has been showing us that the theoretical risk is absolutely real. We have definitely seen SEO changes that have changed conversion rates, and have experience of major CRO-centered changes that have had dramatic impacts on search performance (but more on that later). The point is, there’s a ton of stuff in the intersection of both SEO and CRO:
So throughout this post, I’ve talked about our experiences, and work we have done that has shown various impacts in different directions, from conversion rate-centric changes that change search performance and vice versa. How are we seeing all this?
Well, testing has been a central part of conversion rate work essentially since the field began, and we've been doing a lot of work in recent years on SEO A/B testing as well. At our recent London conference, we announced that we have been building out new features in our testing platform to enable what we are calling full funnel testing which looks simultaneously at the impact of a single change on conversion rates, and on search performance:
If you’re interested in the technical details of how we do the testing, you can read more about the setup of a full funnel test here. (Thanks to my colleagues Craig Bradford and Tom Anthony for concepts and diagrams that appear throughout this post).
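To make the intersection concrete, here is a minimal sketch of the arithmetic behind a full funnel readout. If revenue is roughly organic sessions times conversion rate (holding order value fixed), the two deltas multiply, so a change can win one metric and still lose overall. The function and figures below are illustrative assumptions, not output from any real testing platform.

```python
def net_revenue_effect(traffic_delta: float, cr_delta: float) -> float:
    """Combined revenue effect of one change, modelling revenue as
    organic sessions x conversion rate (average order value held fixed).
    Deltas are fractions: +0.06 means a 6% uplift."""
    return (1 + traffic_delta) * (1 + cr_delta) - 1

# A change that lifts conversion rate 6% but costs 4% of organic
# sessions is only a ~1.8% net win; at -8% traffic it is a net loss.
print(f"{net_revenue_effect(-0.04, +0.06):+.1%}")  # +1.8%
print(f"{net_revenue_effect(-0.08, +0.06):+.1%}")  # -2.5%
```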
But what I really want to talk about today is the mixed objectives of CRO and SEO, and what happens if you fail to look closely at the impact of both together. First: some pure CRO.
An example CRO scenario: The business impact of conversion rate testing
In the example that follows, we look at the impact on an example business of a series of conversion rate tests conducted throughout a year, and see the revenue uplift we might expect as a result of rolling out winning tests, and turning off null and negative ones. We compare the revenue we might achieve with the revenue we would have expected without testing. The example is a little simplified but it serves to prove our point.
We start on a high with a winning test in our first month:
After starting on a high, our example continues through a bad stretch — a null test (no confident result in either direction) followed by three losers. We turn off each of these four so none of them have an actual impact on future months’ revenue:
Let’s continue something similar out through the end of the year. Over the course of this example year, we see 3 months with winning tests, and of course we only roll out those ones that come with uplifts:
By the end of this year, even though more tests have failed than have succeeded, you have proved some serious value to this small business, and have moved monthly revenue up significantly, taking annual revenue for the year up to over £1.1m (from a £900k starting point):
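For readers who want to reproduce the shape of this chart, a toy simulation follows. The monthly uplifts are invented stand-ins for the article's test results (which live in the images), but the mechanics are the same: winning tests get rolled out and compound, while null and negative tests are switched off and leave revenue unchanged.

```python
# Toy version of the CRO-only year: invented uplifts, real mechanics.
monthly_revenue = 75_000.0                     # £900k/year baseline
cro_uplifts = [+0.10, 0.00, -0.05, -0.03, -0.04, +0.08,
               0.00, -0.02, +0.07, 0.00, -0.06, 0.00]

annual = 0.0
for uplift in cro_uplifts:
    if uplift > 0:                 # winners are rolled out and compound...
        monthly_revenue *= 1 + uplift
    annual += monthly_revenue      # ...losers and nulls are switched off
print(f"Annual revenue: £{annual:,.0f}")       # comfortably above £900k
```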
Is this the full picture, though?
What happens when we add in the impact on organic search performance of these changes we are rolling out? Well, let’s look at the same example financials with a couple more lines showing the SEO impact. That first positive CRO test? Negative for search performance:
If you weren’t testing the SEO impact, and only focused on the conversion uplift, you’d have rolled this one out. Carrying on, we see that the next (null) conversion rate test should have been rolled out because it was a win for search performance:
Continuing on through the rest of the year, we see that the actual picture (if we decide whether or not to roll out changes based on the CRO testing alone) looks like this when we add in all the impacts:
So you remember how we thought we had turned an expected £900k of revenue into over £1.1m? Well, it turns out we've added less than £18k in reality and the revenue chart looks like the red line:
Let’s make some more sensible decisions, considering the SEO impact
Back to the beginning of the year once more, but this time, imagine that we actually tested both the conversion rate and search performance impact and rolled out our tests when they were net winners. This time we see that while a conversion-focused team would have rolled out the first test:
We would not:
Conversely, we would have rolled out the second test because it was a net positive even though the pure CRO view had it neutral / inconclusive:
When we zoom out on that approach to the full year, we see a very different picture to either of the previous views. By rolling out only the changes that are net positive considering their impact on search and conversion rate, we avoid some significant drops in performance, and get the chance to roll out a couple of additional uplifts that would have been missed by conversion rate changes alone:
The upshot is a +45% uplift for the year, ending with monthly revenue up 73%, avoiding the false hope of the pure conversion-centric view, and delivering real business impact:
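The decision rule this section describes is easy to state in code: ship a change only when the combined effect of its two readouts is positive. A hedged sketch, again with invented test results:

```python
def net(cr_delta: float, seo_delta: float) -> float:
    """Net revenue effect when traffic and conversion deltas multiply."""
    return (1 + cr_delta) * (1 + seo_delta) - 1

# (conversion delta, organic traffic delta) -- invented examples
tests = [(+0.10, -0.12),   # CRO win, larger SEO loss
         ( 0.00, +0.06),   # CRO null, SEO win
         (-0.02, +0.01)]   # CRO loss outweighs a small SEO win

for cr, seo in tests:
    decision = "roll out" if net(cr, seo) > 0 else "roll back"
    print(f"CRO {cr:+.0%}, SEO {seo:+.0%} -> net {net(cr, seo):+.1%}: {decision}")
```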
Now of course these are simplified examples, and in the real world we would need to look at impacts per channel, and might consider rolling out tests that merely appeared non-negative rather than waiting for them to reach statistical significance as positives. I asked CRO expert Stephen Pavlovich from conversion.com for his view on this and he said:
Most of the time, we want to see if making a change will improve performance. If we change our product page layout, will the order conversion rate increase? If we show more relevant product recommendations, will the Average Order Value go up?

But it's also possible that we will run an AB test not to improve performance, but instead to minimize risk. Before we launch our website redesign, will it lower the order conversion rate? Before we put our prices up, what will the impact be on sales?

In either case, there may be a desire to deploy the new variation — even if the AB test wasn't significant. If the business supports the website redesign, it can still be launched even without a significant impact on orders — it may have had significant financial and emotional investment from the business, be a better fit for the brand, or get better traction with partners (even if it doesn't move the needle in on-site conversion rate). Likewise, if the price increase didn't have a positive/negative effect on sales, it can still be launched.
Most importantly, we wouldn’t just throw away a winning SEO test that reduced conversion rate or a winning conversion rate test that negatively impacted search performance. Both of these tests would have come from underlying hypotheses, and by reaching significance, would have taught us something. We would take that knowledge and take it back as input into the next test in order to try to capture the good part without the associated downside.
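On the significance point: one standard way to call a conversion-rate test (not necessarily what any particular platform uses) is a two-proportion z-test, sketched below with made-up numbers.

```python
from math import sqrt, erf

def two_proportion_p(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: both arms share one conversion rate."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))   # standard normal CDF
    return 2 * (1 - phi)

# 2.0% vs 2.4% conversion on 20,000 visitors per arm:
print(two_proportion_p(400, 20_000, 480, 20_000))  # ~0.006, significant
```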
All of those details, though, don’t change the underlying calculus that this is an important process, and one that I believe we are going to need to do more and more.
The future for effective, accountable SEO
There are two big reasons that I believe that the kind of approach I have outlined above is going to be increasingly important for the future of effective, accountable SEO:
1. We’re going to need to do more testing generally
I talked in a recent Whiteboard Friday about the surprising results we are seeing from testing, and the increasing need to test against the Google black box:
I don’t see this trend reversing any time soon. The more ML there is in the algorithm, and the more non-linear it all becomes, the less effective best practices will be, and the more common it will be to see surprising effects. My colleague Dom Woodman talked about this at our recent SearchLove London conference in his talk A Year of SEO Split Testing Changed How I Thought SEO Worked:
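For context on what SEO split testing means mechanically: you can't show Googlebot two versions of the same URL, so these tests split pages rather than users into comparable groups. The hash-based assignment below is a deliberate simplification of that idea, not how any production platform actually stratifies its buckets.

```python
import hashlib

def bucket(url: str) -> str:
    """Deterministically assign a page to control or variant."""
    digest = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return "variant" if digest % 2 else "control"

for page in (f"/product/{i}" for i in range(4)):
    print(page, "->", bucket(page))
# The change ships only on "variant" pages; organic sessions for the two
# groups are then compared against their shared pre-test trend.
```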
2. User signals are going to grow in importance
The trend towards Google using more and more real and implied user satisfaction and task completion metrics means that conversion-centric tests and hypotheses are going to have an increasing impact on search performance (if you haven’t yet read this fascinating CNBC article that goes behind the scenes on the search quality process at Google, I highly recommend it). Hopefully there will be an additional opportunity in the fact that theoretically the winning tests will sync up more and more — what’s good for users will actually be what’s good for search — but the methodology I’ve outlined above is the only way I can come up with to tell for sure.
I love talking about all of this, so if you have any questions, feel free to drop into the comments.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
we-johnnygonzalez-blog · 6 years ago
Text
What Happens When SEO and CRO Conflict?
Posted by willcritchlow
Much has been written and spoken about the interplay of SEO and CRO, and there are a lot of reasons why, in theory, both ought to be working towards a shared goal. Whether it's simple pragmatism of the business benefit of increasing total number of conversions, or higher-minded pursuits such as the ideal of Google seeking to reward the best user experiences, we have many things that should bring us together.
In practice, though, it’s rarely that simple or that unified. How much effort do the practitioners of each put in to ensure that they are working towards the true shared common goal of the greatest number of conversions?
In asking around, I've found that many SEOs do worry about their changes hurting conversion rates, but few actively mitigate that risk. Interestingly, my conversations with CRO experts show that they also often worry about SEOs’ work impacting negatively on conversion rates.
Neither side weights as highly the risks that conversion-oriented changes could hurt organic search performance, but our experiences show that both are real risks.
So how should we mitigate these risks? How should we work together?
But first, some evidence
There are certainly some SEO-centric changes that have a very low risk of having a negative impact on conversion rates for visitors from other channels. If you think about changing meta information, for example, much of that is invisible to users on the page—- maybe that is pure SEO:
And then on the flip side, there are clearly CRO changes that don’t have any impact on your organic search performance. Anything you do on non-indexed pages, for example, can’t change your rankings. Think about work done within a checkout process or within a login area. Google simply isn’t seeing those changes:
But everything else has a potential impact on both, and our experience has been showing us that the theoretical risk is absolutely real. We have definitely seen SEO changes that have changed conversion rates, and have experience of major CRO-centered changes that have had dramatic impacts on search performance (but more on that later). The point is, there’s a ton of stuff in the intersection of both SEO and CRO:
So throughout this post, I’ve talked about our experiences, and work we have done that has shown various impacts in different directions, from conversion rate-centric changes that change search performance and vice versa. How are we seeing all this?
Well, testing has been a central part of conversion rate work essentially since the field began, and we've been doing a lot of work in recent years on SEO A/B testing as well. At our recent London conference, we announced that we have been building out new features in our testing platform to enable what we are calling full funnel testing which looks simultaneously at the impact of a single change on conversion rates, and on search performance:
If you’re interested in the technical details of how we do the testing, you can read more about the setup of a full funnel test here. (Thanks to my colleagues Craig Bradford and Tom Anthony for concepts and diagrams that appear throughout this post).
But what I really want to talk about today is the mixed objectives of CRO and SEO, and what happens if you fail to look closely at the impact of both together. First: some pure CRO.
An example CRO scenario: The business impact of conversion rate testing
In the example that follows, we look at the impact on an example business of a series of conversion rate tests conducted throughout a year, and see the revenue uplift we might expect as a result of rolling out winning tests, and turning off null and negative ones. We compare the revenue we might achieve with the revenue we would have expected without testing. The example is a little simplified but it serves to prove our point.
We start on a high with a winning test in our first month:
After starting on a high, our example continues through a bad strong — a null test (no confident result in either direction) followed by three losers. We turn off each of these four so none of them have an actual impact on future months’ revenue:
Let’s continue something similar out through the end of the year. Over the course of this example year, we see 3 months with winning tests, and of course we only roll out those ones that come with uplifts:
By the end of this year, even though more tests have failed than have succeeded, you have proved some serious value to this small business, and have moved monthly revenue up significantly, taking annual revenue for the year up to over £1.1m (from a £900k starting point):
Is this the full picture, though?
What happens when we add in the impact on organic search performance of these changes we are rolling out, though? Well, let’s look at the same example financials with a couple more lines showing the SEO impact. That first positive CRO test? Negative for search performance:
If you weren’t testing the SEO impact, and only focused on the conversion uplift, you’d have rolled this one out. Carrying on, we see that the next (null) conversion rate test should have been rolled out because it was a win for search performance:
Continuing on through the rest of the year, we see that the actual picture (if we make decisions of whether or not to roll out changes based on the CRO testing) looks like this when we add in all the impacts:
So you remember how we thought we had turned an expected £900k of revenue into over £1.1m? Well, it turns out we've added less than £18k in reality and the revenue chart looks like the red line:
Let’s make some more sensible decisions, considering the SEO impact
Back to the beginning of the year once more, but this time, imagine that we actually tested both the conversion rate and search performance impact and rolled out our tests when they were net winners. This time we see that while a conversion-focused team would have rolled out the first test:
We would not:
Conversely, we would have rolled out the second test because it was a net positive even though the pure CRO view had it neutral / inconclusive:
When we zoom out on that approach to the full year, we see a very different picture to either of the previous views. By rolling out only the changes that are net positive considering their impact on search and conversion rate, we avoid some significant drops in performance, and get the chance to roll out a couple of additional uplifts that would have been missed by conversion rate changes alone:
The upshot being a +45% uplift for the year, ending the year with monthly revenue up 73%, avoiding the false hope of the pure conversion-centric view, and real business impact:
Now of course these are simplified examples, and in the real world we would need to look at impacts per channel and might consider rolling out tests that appeared not to be negative rather than waiting for statistical significance as positive. I asked CRO expert Stephen Pavlovich from conversion.com for his view on this and he said:
Most of the time, we want to see if making a change will improve performance. If we change our product page layout, will the order conversion rate increase? If we show more relevant product recommendations, will the Average Order Value go up? But it's also possible that we will run an AB test not to improve performance, but instead to minimize risk. Before we launch our website redesign, will it lower the order conversion rate? Before we put our prices up, what will the impact be on sales? In either case, there may be a desire to deploy the new variation — even if the AB test wasn't significant. If the business supports the website redesign, it can still be launched even without a significant impact on orders — it may have had significant financial and emotional investment from the business, be a better fit for the brand, or get better traction with partners (even if it doesn't move the needle in on-site conversion rate). Likewise, if the price increase didn't have a positive/negative effect on sales, it can still be launched.
Most importantly, we wouldn’t just throw away a winning SEO test that reduced conversion rate or a winning conversion rate test that negatively impacted search performance. Both of these tests would have come from underlying hypotheses, and by reaching significance, would have taught us something. We would take that knowledge and take it back as input into the next test in order to try to capture the good part without the associated downside.
All of those details, though, don’t change the underlying calculus that this is an important process, and one that I believe we are going to need to do more and more.
The future for effective, accountable SEO
There are two big reasons that I believe that the kind of approach I have outlined above is going to be increasingly important for the future of effective, accountable SEO:
1. We’re going to need to do more testing generally
I talked in a recent Whiteboard Friday about the surprising results we are seeing from testing, and the increasing need to test against the Google black box:
I don’t see this trend reversing any time soon. The more ML there is in the algorithm, and the more non-linear it all becomes, the less effective best practices will be, and the more common it will be to see surprising effects. My colleague Dom Woodman talked about this at our recent SearchLove London conference in his talk A Year of SEO Split Testing Changed How I Thought SEO Worked:
2. User signals are going to grow in importance
The trend towards Google using more and more real and implied user satisfaction and task completion metrics means that conversion-centric tests and hypotheses are going to have an increasing impact on search performance (if you haven’t yet read this fascinating CNBC article that goes behind the scenes on the search quality process at Google, I highly recommend it). Hopefully there will be an additional opportunity in the fact that theoretically the winning tests will sync up more and more — what’s good for users will actually be what’s good for search — but the methodology I’ve outlined above is the only way I can come up with to tell for sure.
I love talking about all of this, so if you have any questions, feel free to drop into the comments.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
we-johnnygonzalez-blog · 6 years ago
Text
What Happens When SEO and CRO Conflict?
Posted by willcritchlow
Much has been written and spoken about the interplay of SEO and CRO, and there are a lot of reasons why, in theory, both ought to be working towards a shared goal. Whether it's simple pragmatism of the business benefit of increasing total number of conversions, or higher-minded pursuits such as the ideal of Google seeking to reward the best user experiences, we have many things that should bring us together.
In practice, though, it’s rarely that simple or that unified. How much effort do the practitioners of each put in to ensure that they are working towards the true shared common goal of the greatest number of conversions?
In asking around, I've found that many SEOs do worry about their changes hurting conversion rates, but few actively mitigate that risk. Interestingly, my conversations with CRO experts show that they also often worry about SEOs’ work impacting negatively on conversion rates.
Neither side weights as highly the risks that conversion-oriented changes could hurt organic search performance, but our experiences show that both are real risks.
So how should we mitigate these risks? How should we work together?
But first, some evidence
There are certainly some SEO-centric changes that have a very low risk of having a negative impact on conversion rates for visitors from other channels. If you think about changing meta information, for example, much of that is invisible to users on the page—- maybe that is pure SEO:
And then on the flip side, there are clearly CRO changes that don’t have any impact on your organic search performance. Anything you do on non-indexed pages, for example, can’t change your rankings. Think about work done within a checkout process or within a login area. Google simply isn’t seeing those changes:
But everything else has a potential impact on both, and our experience has been showing us that the theoretical risk is absolutely real. We have definitely seen SEO changes that have changed conversion rates, and have experience of major CRO-centered changes that have had dramatic impacts on search performance (but more on that later). The point is, there’s a ton of stuff in the intersection of both SEO and CRO:
So throughout this post, I’ve talked about our experiences, and work we have done that has shown various impacts in different directions, from conversion rate-centric changes that change search performance and vice versa. How are we seeing all this?
Well, testing has been a central part of conversion rate work essentially since the field began, and we've been doing a lot of work in recent years on SEO A/B testing as well. At our recent London conference, we announced that we have been building out new features in our testing platform to enable what we are calling full funnel testing which looks simultaneously at the impact of a single change on conversion rates, and on search performance:
If you’re interested in the technical details of how we do the testing, you can read more about the setup of a full funnel test here. (Thanks to my colleagues Craig Bradford and Tom Anthony for concepts and diagrams that appear throughout this post).
But what I really want to talk about today is the mixed objectives of CRO and SEO, and what happens if you fail to look closely at the impact of both together. First: some pure CRO.
An example CRO scenario: The business impact of conversion rate testing
In the example that follows, we look at the impact on an example business of a series of conversion rate tests conducted throughout a year, and see the revenue uplift we might expect as a result of rolling out winning tests, and turning off null and negative ones. We compare the revenue we might achieve with the revenue we would have expected without testing. The example is a little simplified but it serves to prove our point.
We start on a high with a winning test in our first month:
After starting on a high, our example continues through a bad strong — a null test (no confident result in either direction) followed by three losers. We turn off each of these four so none of them have an actual impact on future months’ revenue:
Let’s continue something similar out through the end of the year. Over the course of this example year, we see 3 months with winning tests, and of course we only roll out those ones that come with uplifts:
By the end of this year, even though more tests have failed than have succeeded, you have proved some serious value to this small business, and have moved monthly revenue up significantly, taking annual revenue for the year up to over £1.1m (from a £900k starting point):
Is this the full picture, though?
What happens when we add in the impact on organic search performance of these changes we are rolling out, though? Well, let’s look at the same example financials with a couple more lines showing the SEO impact. That first positive CRO test? Negative for search performance:
If you weren’t testing the SEO impact, and only focused on the conversion uplift, you’d have rolled this one out. Carrying on, we see that the next (null) conversion rate test should have been rolled out because it was a win for search performance:
Continuing on through the rest of the year, we see that the actual picture (if we make decisions of whether or not to roll out changes based on the CRO testing) looks like this when we add in all the impacts:
So you remember how we thought we had turned an expected £900k of revenue into over £1.1m? Well, it turns out we've added less than £18k in reality and the revenue chart looks like the red line:
Let’s make some more sensible decisions, considering the SEO impact
Back to the beginning of the year once more, but this time, imagine that we actually tested both the conversion rate and search performance impact and rolled out our tests when they were net winners. This time we see that while a conversion-focused team would have rolled out the first test:
We would not:
Conversely, we would have rolled out the second test because it was a net positive even though the pure CRO view had it neutral / inconclusive:
When we zoom out on that approach to the full year, we see a very different picture to either of the previous views. By rolling out only the changes that are net positive considering their impact on search and conversion rate, we avoid some significant drops in performance, and get the chance to roll out a couple of additional uplifts that would have been missed by conversion rate changes alone:
The upshot being a +45% uplift for the year, ending the year with monthly revenue up 73%, avoiding the false hope of the pure conversion-centric view, and real business impact:
Now of course these are simplified examples, and in the real world we would need to look at impacts per channel and might consider rolling out tests that appeared not to be negative rather than waiting for statistical significance as positive. I asked CRO expert Stephen Pavlovich from conversion.com for his view on this and he said:
Most of the time, we want to see if making a change will improve performance. If we change our product page layout, will the order conversion rate increase? If we show more relevant product recommendations, will the Average Order Value go up? But it's also possible that we will run an AB test not to improve performance, but instead to minimize risk. Before we launch our website redesign, will it lower the order conversion rate? Before we put our prices up, what will the impact be on sales? In either case, there may be a desire to deploy the new variation — even if the AB test wasn't significant. If the business supports the website redesign, it can still be launched even without a significant impact on orders — it may have had significant financial and emotional investment from the business, be a better fit for the brand, or get better traction with partners (even if it doesn't move the needle in on-site conversion rate). Likewise, if the price increase didn't have a positive/negative effect on sales, it can still be launched.
Most importantly, we wouldn’t just throw away a winning SEO test that reduced conversion rate or a winning conversion rate test that negatively impacted search performance. Both of these tests would have come from underlying hypotheses, and by reaching significance, would have taught us something. We would take that knowledge and take it back as input into the next test in order to try to capture the good part without the associated downside.
All of those details, though, don’t change the underlying calculus that this is an important process, and one that I believe we are going to need to do more and more.
The future for effective, accountable SEO
There are two big reasons that I believe that the kind of approach I have outlined above is going to be increasingly important for the future of effective, accountable SEO:
1. We’re going to need to do more testing generally
I talked in a recent Whiteboard Friday about the surprising results we are seeing from testing, and the increasing need to test against the Google black box:
I don’t see this trend reversing any time soon. The more ML there is in the algorithm, and the more non-linear it all becomes, the less effective best practices will be, and the more common it will be to see surprising effects. My colleague Dom Woodman talked about this at our recent SearchLove London conference in his talk A Year of SEO Split Testing Changed How I Thought SEO Worked:
2. User signals are going to grow in importance
The trend towards Google using more and more real and implied user satisfaction and task completion metrics means that conversion-centric tests and hypotheses are going to have an increasing impact on search performance (if you haven’t yet read this fascinating CNBC article that goes behind the scenes on the search quality process at Google, I highly recommend it). Hopefully there will be an additional opportunity in the fact that theoretically the winning tests will sync up more and more — what’s good for users will actually be what’s good for search — but the methodology I’ve outlined above is the only way I can come up with to tell for sure.
I love talking about all of this, so if you have any questions, feel free to drop into the comments.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
we-johnnygonzalez-blog · 6 years ago
Text
What Happens When SEO and CRO Conflict?
Posted by willcritchlow
Much has been written and spoken about the interplay of SEO and CRO, and there are a lot of reasons why, in theory, both ought to be working towards a shared goal. Whether it's simple pragmatism of the business benefit of increasing total number of conversions, or higher-minded pursuits such as the ideal of Google seeking to reward the best user experiences, we have many things that should bring us together.
In practice, though, it’s rarely that simple or that unified. How much effort do the practitioners of each put in to ensure that they are working towards the true shared common goal of the greatest number of conversions?
In asking around, I've found that many SEOs do worry about their changes hurting conversion rates, but few actively mitigate that risk. Interestingly, my conversations with CRO experts show that they also often worry about SEOs’ work impacting negatively on conversion rates.
Neither side weights as highly the risks that conversion-oriented changes could hurt organic search performance, but our experiences show that both are real risks.
So how should we mitigate these risks? How should we work together?
But first, some evidence
There are certainly some SEO-centric changes that have a very low risk of having a negative impact on conversion rates for visitors from other channels. If you think about changing meta information, for example, much of that is invisible to users on the page—- maybe that is pure SEO:
And then on the flip side, there are clearly CRO changes that don’t have any impact on your organic search performance. Anything you do on non-indexed pages, for example, can’t change your rankings. Think about work done within a checkout process or within a login area. Google simply isn’t seeing those changes:
But everything else has a potential impact on both, and our experience has been showing us that the theoretical risk is absolutely real. We have definitely seen SEO changes that have changed conversion rates, and have experience of major CRO-centered changes that have had dramatic impacts on search performance (but more on that later). The point is, there’s a ton of stuff in the intersection of both SEO and CRO:
So throughout this post, I’ve talked about our experiences, and work we have done that has shown various impacts in different directions, from conversion rate-centric changes that change search performance and vice versa. How are we seeing all this?
Well, testing has been a central part of conversion rate work essentially since the field began, and we've been doing a lot of work in recent years on SEO A/B testing as well. At our recent London conference, we announced that we have been building out new features in our testing platform to enable what we are calling full funnel testing which looks simultaneously at the impact of a single change on conversion rates, and on search performance:
If you’re interested in the technical details of how we do the testing, you can read more about the setup of a full funnel test here. (Thanks to my colleagues Craig Bradford and Tom Anthony for concepts and diagrams that appear throughout this post).
But what I really want to talk about today is the mixed objectives of CRO and SEO, and what happens if you fail to look closely at the impact of both together. First: some pure CRO.
An example CRO scenario: The business impact of conversion rate testing
In the example that follows, we look at the impact on an example business of a series of conversion rate tests conducted throughout a year, and see the revenue uplift we might expect as a result of rolling out winning tests, and turning off null and negative ones. We compare the revenue we might achieve with the revenue we would have expected without testing. The example is a little simplified but it serves to prove our point.
We start on a high with a winning test in our first month:
After starting on a high, our example continues through a bad strong — a null test (no confident result in either direction) followed by three losers. We turn off each of these four so none of them have an actual impact on future months’ revenue:
Let’s continue something similar out through the end of the year. Over the course of this example year, we see 3 months with winning tests, and of course we only roll out those ones that come with uplifts:
By the end of this year, even though more tests have failed than have succeeded, you have proved some serious value to this small business, and have moved monthly revenue up significantly, taking annual revenue for the year up to over £1.1m (from a £900k starting point):
Is this the full picture, though?
What happens when we add in the impact on organic search performance of these changes we are rolling out, though? Well, let’s look at the same example financials with a couple more lines showing the SEO impact. That first positive CRO test? Negative for search performance:
If you weren’t testing the SEO impact, and only focused on the conversion uplift, you’d have rolled this one out. Carrying on, we see that the next (null) conversion rate test should have been rolled out because it was a win for search performance:
Continuing on through the rest of the year, we see that the actual picture (if we make decisions of whether or not to roll out changes based on the CRO testing) looks like this when we add in all the impacts:
So you remember how we thought we had turned an expected £900k of revenue into over £1.1m? Well, it turns out we've added less than £18k in reality and the revenue chart looks like the red line:
Let’s make some more sensible decisions, considering the SEO impact
Back to the beginning of the year once more, but this time, imagine that we actually tested both the conversion rate and search performance impact and rolled out our tests when they were net winners. This time we see that while a conversion-focused team would have rolled out the first test:
We would not:
Conversely, we would have rolled out the second test because it was a net positive even though the pure CRO view had it neutral / inconclusive:
When we zoom out on that approach to the full year, we see a very different picture to either of the previous views. By rolling out only the changes that are net positive considering their impact on search and conversion rate, we avoid some significant drops in performance, and get the chance to roll out a couple of additional uplifts that would have been missed by conversion rate changes alone:
The upshot being a +45% uplift for the year, ending the year with monthly revenue up 73%, avoiding the false hope of the pure conversion-centric view, and real business impact:
Now of course these are simplified examples, and in the real world we would need to look at impacts per channel and might consider rolling out tests that appeared not to be negative rather than waiting for statistical significance as positive. I asked CRO expert Stephen Pavlovich from conversion.com for his view on this and he said:
Most of the time, we want to see if making a change will improve performance. If we change our product page layout, will the order conversion rate increase? If we show more relevant product recommendations, will the Average Order Value go up? But it's also possible that we will run an AB test not to improve performance, but instead to minimize risk. Before we launch our website redesign, will it lower the order conversion rate? Before we put our prices up, what will the impact be on sales? In either case, there may be a desire to deploy the new variation — even if the AB test wasn't significant. If the business supports the website redesign, it can still be launched even without a significant impact on orders — it may have had significant financial and emotional investment from the business, be a better fit for the brand, or get better traction with partners (even if it doesn't move the needle in on-site conversion rate). Likewise, if the price increase didn't have a positive/negative effect on sales, it can still be launched.
Most importantly, we wouldn’t just throw away a winning SEO test that reduced conversion rate or a winning conversion rate test that negatively impacted search performance. Both of these tests would have come from underlying hypotheses, and by reaching significance, would have taught us something. We would take that knowledge and take it back as input into the next test in order to try to capture the good part without the associated downside.
All of those details, though, don’t change the underlying calculus that this is an important process, and one that I believe we are going to need to do more and more.
The future for effective, accountable SEO
There are two big reasons that I believe that the kind of approach I have outlined above is going to be increasingly important for the future of effective, accountable SEO:
1. We’re going to need to do more testing generally
I talked in a recent Whiteboard Friday about the surprising results we are seeing from testing, and the increasing need to test against the Google black box:
I don’t see this trend reversing any time soon. The more ML there is in the algorithm, and the more non-linear it all becomes, the less effective best practices will be, and the more common it will be to see surprising effects. My colleague Dom Woodman talked about this at our recent SearchLove London conference in his talk A Year of SEO Split Testing Changed How I Thought SEO Worked:
2. User signals are going to grow in importance
The trend towards Google using more and more real and implied user satisfaction and task completion metrics means that conversion-centric tests and hypotheses are going to have an increasing impact on search performance (if you haven’t yet read this fascinating CNBC article that goes behind the scenes on the search quality process at Google, I highly recommend it). Hopefully there will be an additional opportunity in the fact that theoretically the winning tests will sync up more and more — what’s good for users will actually be what’s good for search — but the methodology I’ve outlined above is the only way I can come up with to tell for sure.
I love talking about all of this, so if you have any questions, feel free to drop into the comments.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
we-johnnygonzalez-blog · 6 years ago
Text
What Happens When SEO and CRO Conflict?
Posted by willcritchlow
Much has been written and spoken about the interplay of SEO and CRO, and there are a lot of reasons why, in theory, both ought to be working towards a shared goal. Whether it's simple pragmatism of the business benefit of increasing total number of conversions, or higher-minded pursuits such as the ideal of Google seeking to reward the best user experiences, we have many things that should bring us together.
In practice, though, it’s rarely that simple or that unified. How much effort do the practitioners of each put in to ensure that they are working towards the true shared common goal of the greatest number of conversions?
In asking around, I've found that many SEOs do worry about their changes hurting conversion rates, but few actively mitigate that risk. Interestingly, my conversations with CRO experts show that they also often worry about SEOs’ work impacting negatively on conversion rates.
Neither side weights as highly the risks that conversion-oriented changes could hurt organic search performance, but our experiences show that both are real risks.
So how should we mitigate these risks? How should we work together?
But first, some evidence
There are certainly some SEO-centric changes that have a very low risk of having a negative impact on conversion rates for visitors from other channels. If you think about changing meta information, for example, much of that is invisible to users on the page—- maybe that is pure SEO:
And then on the flip side, there are clearly CRO changes that don’t have any impact on your organic search performance. Anything you do on non-indexed pages, for example, can’t change your rankings. Think about work done within a checkout process or within a login area. Google simply isn’t seeing those changes:
But everything else has a potential impact on both, and our experience has been showing us that the theoretical risk is absolutely real. We have definitely seen SEO changes that have changed conversion rates, and have experience of major CRO-centered changes that have had dramatic impacts on search performance (but more on that later). The point is, there’s a ton of stuff in the intersection of both SEO and CRO:
So throughout this post, I’ve talked about our experiences, and work we have done that has shown various impacts in different directions, from conversion rate-centric changes that change search performance and vice versa. How are we seeing all this?
Well, testing has been a central part of conversion rate work essentially since the field began, and we've been doing a lot of work in recent years on SEO A/B testing as well. At our recent London conference, we announced that we have been building out new features in our testing platform to enable what we are calling full funnel testing which looks simultaneously at the impact of a single change on conversion rates, and on search performance:
If you’re interested in the technical details of how we do the testing, you can read more about the setup of a full funnel test here. (Thanks to my colleagues Craig Bradford and Tom Anthony for concepts and diagrams that appear throughout this post).
But what I really want to talk about today is the mixed objectives of CRO and SEO, and what happens if you fail to look closely at the impact of both together. First: some pure CRO.
An example CRO scenario: The business impact of conversion rate testing
In the example that follows, we look at the impact on an example business of a series of conversion rate tests conducted throughout a year, and see the revenue uplift we might expect as a result of rolling out winning tests, and turning off null and negative ones. We compare the revenue we might achieve with the revenue we would have expected without testing. The example is a little simplified but it serves to prove our point.
We start on a high with a winning test in our first month:
After starting on a high, our example continues through a bad strong — a null test (no confident result in either direction) followed by three losers. We turn off each of these four so none of them have an actual impact on future months’ revenue:
Let’s continue something similar out through the end of the year. Over the course of this example year, we see 3 months with winning tests, and of course we only roll out those ones that come with uplifts:
By the end of this year, even though more tests have failed than have succeeded, you have proved some serious value to this small business, and have moved monthly revenue up significantly, taking annual revenue for the year up to over £1.1m (from a £900k starting point):
Is this the full picture, though?
What happens when we add in the impact on organic search performance of these changes we are rolling out, though? Well, let’s look at the same example financials with a couple more lines showing the SEO impact. That first positive CRO test? Negative for search performance:
If you weren’t testing the SEO impact, and only focused on the conversion uplift, you’d have rolled this one out. Carrying on, we see that the next (null) conversion rate test should have been rolled out because it was a win for search performance:
Continuing on through the rest of the year, we see that the actual picture (if we make decisions of whether or not to roll out changes based on the CRO testing) looks like this when we add in all the impacts:
So you remember how we thought we had turned an expected £900k of revenue into over £1.1m? Well, it turns out we've added less than £18k in reality and the revenue chart looks like the red line:
Let’s make some more sensible decisions, considering the SEO impact
Back to the beginning of the year once more, but this time, imagine that we actually tested both the conversion rate and search performance impact and rolled out our tests when they were net winners. This time we see that while a conversion-focused team would have rolled out the first test:
We would not:
Conversely, we would have rolled out the second test because it was a net positive even though the pure CRO view had it neutral / inconclusive:
When we zoom out on that approach to the full year, we see a very different picture to either of the previous views. By rolling out only the changes that are net positive considering their impact on search and conversion rate, we avoid some significant drops in performance, and get the chance to roll out a couple of additional uplifts that would have been missed by conversion rate changes alone:
The upshot being a +45% uplift for the year, ending the year with monthly revenue up 73%, avoiding the false hope of the pure conversion-centric view, and real business impact:
Now of course these are simplified examples, and in the real world we would need to look at impacts per channel and might consider rolling out tests that appeared not to be negative rather than waiting for statistical significance as positive. I asked CRO expert Stephen Pavlovich from conversion.com for his view on this and he said:
Most of the time, we want to see if making a change will improve performance. If we change our product page layout, will the order conversion rate increase? If we show more relevant product recommendations, will the Average Order Value go up? But it's also possible that we will run an AB test not to improve performance, but instead to minimize risk. Before we launch our website redesign, will it lower the order conversion rate? Before we put our prices up, what will the impact be on sales? In either case, there may be a desire to deploy the new variation — even if the AB test wasn't significant. If the business supports the website redesign, it can still be launched even without a significant impact on orders — it may have had significant financial and emotional investment from the business, be a better fit for the brand, or get better traction with partners (even if it doesn't move the needle in on-site conversion rate). Likewise, if the price increase didn't have a positive/negative effect on sales, it can still be launched.
Most importantly, we wouldn’t just throw away a winning SEO test that reduced conversion rate or a winning conversion rate test that negatively impacted search performance. Both of these tests would have come from underlying hypotheses, and by reaching significance, would have taught us something. We would take that knowledge and take it back as input into the next test in order to try to capture the good part without the associated downside.
All of those details, though, don’t change the underlying calculus that this is an important process, and one that I believe we are going to need to do more and more.
The future for effective, accountable SEO
There are two big reasons that I believe that the kind of approach I have outlined above is going to be increasingly important for the future of effective, accountable SEO:
1. We’re going to need to do more testing generally
I talked in a recent Whiteboard Friday about the surprising results we are seeing from testing, and the increasing need to test against the Google black box:
I don’t see this trend reversing any time soon. The more ML there is in the algorithm, and the more non-linear it all becomes, the less effective best practices will be, and the more common it will be to see surprising effects. My colleague Dom Woodman talked about this at our recent SearchLove London conference in his talk A Year of SEO Split Testing Changed How I Thought SEO Worked:
2. User signals are going to grow in importance
The trend towards Google using more and more real and implied user satisfaction and task completion metrics means that conversion-centric tests and hypotheses are going to have an increasing impact on search performance (if you haven’t yet read this fascinating CNBC article that goes behind the scenes on the search quality process at Google, I highly recommend it). Hopefully there will be an additional opportunity in the fact that theoretically the winning tests will sync up more and more — what’s good for users will actually be what’s good for search — but the methodology I’ve outlined above is the only way I can come up with to tell for sure.
I love talking about all of this, so if you have any questions, feel free to drop into the comments.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
we-johnnygonzalez-blog · 6 years ago
Text
What Happens When SEO and CRO Conflict?
Posted by willcritchlow
Much has been written and spoken about the interplay of SEO and CRO, and there are a lot of reasons why, in theory, both ought to be working towards a shared goal. Whether it's simple pragmatism of the business benefit of increasing total number of conversions, or higher-minded pursuits such as the ideal of Google seeking to reward the best user experiences, we have many things that should bring us together.
In practice, though, it’s rarely that simple or that unified. How much effort do the practitioners of each put in to ensure that they are working towards the true shared common goal of the greatest number of conversions?
In asking around, I've found that many SEOs do worry about their changes hurting conversion rates, but few actively mitigate that risk. Interestingly, my conversations with CRO experts show that they also often worry about SEOs’ work impacting negatively on conversion rates.
Neither side weights as highly the risks that conversion-oriented changes could hurt organic search performance, but our experiences show that both are real risks.
So how should we mitigate these risks? How should we work together?
But first, some evidence
There are certainly some SEO-centric changes that have a very low risk of having a negative impact on conversion rates for visitors from other channels. If you think about changing meta information, for example, much of that is invisible to users on the page—- maybe that is pure SEO:
And then on the flip side, there are clearly CRO changes that don’t have any impact on your organic search performance. Anything you do on non-indexed pages, for example, can’t change your rankings. Think about work done within a checkout process or within a login area. Google simply isn’t seeing those changes:
But everything else has a potential impact on both, and our experience has been showing us that the theoretical risk is absolutely real. We have definitely seen SEO changes that have changed conversion rates, and have experience of major CRO-centered changes that have had dramatic impacts on search performance (but more on that later). The point is, there’s a ton of stuff in the intersection of both SEO and CRO:
So throughout this post, I’ve talked about our experiences, and work we have done that has shown various impacts in different directions, from conversion rate-centric changes that change search performance and vice versa. How are we seeing all this?
Well, testing has been a central part of conversion rate work essentially since the field began, and we've been doing a lot of work in recent years on SEO A/B testing as well. At our recent London conference, we announced that we have been building out new features in our testing platform to enable what we are calling full funnel testing which looks simultaneously at the impact of a single change on conversion rates, and on search performance:
If you’re interested in the technical details of how we do the testing, you can read more about the setup of a full funnel test here. (Thanks to my colleagues Craig Bradford and Tom Anthony for concepts and diagrams that appear throughout this post).
But what I really want to talk about today is the mixed objectives of CRO and SEO, and what happens if you fail to look closely at the impact of both together. First: some pure CRO.
An example CRO scenario: The business impact of conversion rate testing
In the example that follows, we look at the impact on an example business of a series of conversion rate tests conducted throughout a year, and see the revenue uplift we might expect as a result of rolling out winning tests, and turning off null and negative ones. We compare the revenue we might achieve with the revenue we would have expected without testing. The example is a little simplified but it serves to prove our point.
We start on a high with a winning test in our first month:
After starting on a high, our example continues through a bad strong — a null test (no confident result in either direction) followed by three losers. We turn off each of these four so none of them have an actual impact on future months’ revenue:
Let’s continue something similar out through the end of the year. Over the course of this example year, we see 3 months with winning tests, and of course we only roll out those ones that come with uplifts:
By the end of this year, even though more tests have failed than have succeeded, you have proved some serious value to this small business, and have moved monthly revenue up significantly, taking annual revenue for the year up to over £1.1m (from a £900k starting point):
Is this the full picture, though?
What happens when we add in the impact on organic search performance of these changes we are rolling out, though? Well, let’s look at the same example financials with a couple more lines showing the SEO impact. That first positive CRO test? Negative for search performance:
If you weren’t testing the SEO impact, and only focused on the conversion uplift, you’d have rolled this one out. Carrying on, we see that the next (null) conversion rate test should have been rolled out because it was a win for search performance:
Continuing on through the rest of the year, we see that the actual picture (if we make decisions of whether or not to roll out changes based on the CRO testing) looks like this when we add in all the impacts:
So you remember how we thought we had turned an expected £900k of revenue into over £1.1m? Well, it turns out we've added less than £18k in reality and the revenue chart looks like the red line:
Let’s make some more sensible decisions, considering the SEO impact
Back to the beginning of the year once more, but this time, imagine that we actually tested both the conversion rate and search performance impact and rolled out our tests when they were net winners. This time we see that while a conversion-focused team would have rolled out the first test:
We would not:
Conversely, we would have rolled out the second test because it was a net positive even though the pure CRO view had it neutral / inconclusive:
When we zoom out on that approach to the full year, we see a very different picture to either of the previous views. By rolling out only the changes that are net positive considering their impact on search and conversion rate, we avoid some significant drops in performance, and get the chance to roll out a couple of additional uplifts that would have been missed by conversion rate changes alone:
The upshot is a +45% uplift for the year, ending with monthly revenue up 73%, avoiding the false hope of the pure conversion-centric view and delivering real business impact:
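Expressed as a rollout policy, the difference is a single gate: ship a change only when its combined effect is positive, not when the conversion-rate read alone is. A sketch, again with hypothetical uplift numbers mirroring the two example tests:

```python
# Gate each rollout on the combined SEO + CRO effect on organic revenue.
# The uplift pairs are invented to mirror the example tests above.

def net_effect(cro_uplift, seo_uplift):
    """Combined multiplier on organic revenue, minus one."""
    return (1 + seo_uplift) * (1 + cro_uplift) - 1

tests = [
    {"name": "test 1", "cro": +0.10, "seo": -0.15},  # CRO win, SEO loss
    {"name": "test 2", "cro": 0.00, "seo": +0.06},   # CRO null, SEO win
]

for t in tests:
    net = net_effect(t["cro"], t["seo"])
    decision = "roll out" if net > 0 else "turn off"
    print(f"{t['name']}: net {net:+.1%} -> {decision}")

# test 1: net -6.5% -> turn off  (a CRO-only team would have shipped it)
# test 2: net +6.0% -> roll out  (a CRO-only team would have binned it)
```

In practice the gate would also weight organic traffic against other channels and carry a significance test on each metric, but the shape of the decision is the same.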
Now, of course, these are simplified examples; in the real world we would need to look at impacts per channel, and might consider rolling out tests that merely appeared not to be negative rather than waiting for them to reach statistical significance as positives. I asked CRO expert Stephen Pavlovich from conversion.com for his view on this, and he said:
Most of the time, we want to see if making a change will improve performance. If we change our product page layout, will the order conversion rate increase? If we show more relevant product recommendations, will the Average Order Value go up? But it's also possible that we will run an AB test not to improve performance, but instead to minimize risk. Before we launch our website redesign, will it lower the order conversion rate? Before we put our prices up, what will the impact be on sales? In either case, there may be a desire to deploy the new variation — even if the AB test wasn't significant. If the business supports the website redesign, it can still be launched even without a significant impact on orders — it may have had significant financial and emotional investment from the business, be a better fit for the brand, or get better traction with partners (even if it doesn't move the needle in on-site conversion rate). Likewise, if the price increase didn't have a positive/negative effect on sales, it can still be launched.
Most importantly, we wouldn’t just throw away a winning SEO test that reduced conversion rate, or a winning conversion rate test that negatively impacted search performance. Both of these tests would have come from underlying hypotheses and, by reaching significance, would have taught us something. We would feed that knowledge back as input into the next test, to try to capture the good part without the associated downside.
All of those details, though, don’t change the underlying calculus that this is an important process, and one that I believe we are going to need to do more and more.
The future for effective, accountable SEO
There are two big reasons that I believe that the kind of approach I have outlined above is going to be increasingly important for the future of effective, accountable SEO:
1. We’re going to need to do more testing generally
I talked in a recent Whiteboard Friday about the surprising results we are seeing from testing, and the increasing need to test against the Google black box:
I don’t see this trend reversing any time soon. The more ML there is in the algorithm, and the more non-linear it all becomes, the less effective best practices will be, and the more common it will be to see surprising effects. My colleague Dom Woodman talked about this at our recent SearchLove London conference in his talk A Year of SEO Split Testing Changed How I Thought SEO Worked:
2. User signals are going to grow in importance
The trend towards Google using more and more real and implied user satisfaction and task completion metrics means that conversion-centric tests and hypotheses are going to have an increasing impact on search performance (if you haven’t yet read this fascinating CNBC article that goes behind the scenes on the search quality process at Google, I highly recommend it). Hopefully there will be an additional opportunity in the fact that theoretically the winning tests will sync up more and more — what’s good for users will actually be what’s good for search — but the methodology I’ve outlined above is the only way I can come up with to tell for sure.
I love talking about all of this, so if you have any questions, feel free to drop into the comments.
0 notes