#i changed the url of my main slightly and forgot to update the link here whoops
identifying-parrots · 3 months ago
Note
Hey just wanted to let you know that the link to your main blog in ur bio is dead
fixed!
wickedbananas · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as the more technical aspects of a site: problems that the average marketer wouldn’t identify and that take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
The target audience for this post is beginner-to-intermediate SEOs and site owners who haven’t inspected their technical SEO for a while, or who are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see roughly how many pages on your site are indexed.
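You can narrow that same query to specific sections or subdomains when you go deeper. A few example patterns, using standard Google search operators with example.com standing in for your own domain:
site:example.com/blog (just the blog section)
site:example.com inurl:product (URLs containing "product")
site:blog.example.com (a specific subdomain)
site:example.com -inurl:blog (everything except the blog)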
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest culprits behind ruined organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem.
What to do: Go to yoursitename.com/robots.txt and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
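If you want to script this check (for example, to compare staging and production), here is a minimal sketch using Python’s standard-library robots.txt parser. The site URL is a placeholder, and this only catches the blanket-Disallow case, not every robots.txt subtlety.
```python
# Minimal sketch: flag a robots.txt that blocks general crawlers from the homepage.
# Standard library only; the site URL is a placeholder.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"

parser = RobotFileParser()
parser.set_url(SITE + "/robots.txt")
parser.read()

# can_fetch("*", ...) asks: may a generic crawler fetch this URL under the current rules?
if not parser.can_fetch("*", SITE + "/"):
    print("WARNING: robots.txt is blocking general crawlers from the homepage")
else:
    print("Homepage is crawlable according to robots.txt")
```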
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page and looking for a meta robots tag containing a directive like “NOINDEX, NOFOLLOW” or “NOINDEX, FOLLOW” (a scripted version of this check is sketched after this list)
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see a NOINDEX directive, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
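Here is what that spot-check might look like as a script: a minimal standard-library sketch with placeholder URLs. The regex assumes the common name-before-content attribute order, so a real crawler like Screaming Frog (which also handles X-Robots-Tag headers) is still the thorough option.
```python
# Minimal sketch: flag pages whose meta robots tag contains NOINDEX.
# Standard library only; URLs are placeholders; the pattern is deliberately simple
# and won't catch every markup variant.
import re
from urllib.request import urlopen

PAGES_TO_CHECK = [
    "https://www.example.com/",
    "https://www.example.com/products/widget",
]

META_ROBOTS = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

for url in PAGES_TO_CHECK:
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    match = META_ROBOTS.search(html)
    directive = match.group(1) if match else "(no meta robots tag found)"
    flag = "  <-- take action" if "noindex" in directive.lower() else ""
    print(f"{url}: {directive}{flag}")
```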
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
example.com/home.html
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
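A quick way to check the variants in bulk: a minimal sketch that assumes the third-party requests library is installed and uses example.com as a placeholder. Every variant should end up at one canonical URL via a single 301 hop.
```python
# Minimal sketch: follow each homepage variant and print its redirect chain.
# Assumes `pip install requests`; URLs are placeholders.
import requests

VARIANTS = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
    "https://www.example.com/home.html",
]

for start in VARIANTS:
    resp = requests.get(start, allow_redirects=True, timeout=10)
    print(start)
    for hop in resp.history:
        print(f"    {hop.status_code} -> {hop.headers.get('Location', '?')}")
    print(f"    final: {resp.status_code} {resp.url}")
```
If the final URLs differ, or any hop is a 302 rather than a 301, that is something to raise with your developer.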
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to https://www.bollandbranch.com/collections/baby-blankets/products/cable-knit-baby-blanket.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link rel="canonical" href="https://www.bollandbranch.com/products/cable-knit-baby-blanket" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
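To spot-check canonicals in the same scripted way, something like this works: a minimal standard-library sketch with placeholder URLs, and the same simple-regex caveat as the meta robots check above.
```python
# Minimal sketch: print each page's rel=canonical target and note when it points
# somewhere other than the URL you requested (as in the Shopify collection-URL
# example above). Standard library only; URLs are placeholders.
import re
from urllib.request import urlopen

PAGES = [
    "https://www.example.com/collections/sale/products/widget",
    "https://www.example.com/products/widget",
]

CANONICAL = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

for url in PAGES:
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    match = CANONICAL.search(html)
    canonical = match.group(1) if match else "(no canonical tag)"
    note = "" if canonical == url else "  <-- canonical differs from requested URL"
    print(f"{url}\n    canonical: {canonical}{note}")
```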
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text in images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text out of images.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to display them prominently as real text.
This matters most for large sites with many, many pages, such as massive ecommerce sites, because those sites can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors; the report prioritizes the top errors by broken backlinks.
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for links that are broken because the linking site typed in your URL wrong or messed up the link code on their end; this is another rich source of link opportunities
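Once you’ve exported the linked-to URLs from Search Console or your backlink tool, a small script can confirm which ones actually 404 before you map redirects. A minimal sketch, standard library only, assuming a one-column CSV of URLs named backlinked-urls.csv (a hypothetical export file):
```python
# Minimal sketch: read a CSV of URLs that have backlinks and report their
# current status codes, so you know which ones need a 301 redirect.
# Standard library only; "backlinked-urls.csv" is a hypothetical export file.
import csv
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

with open("backlinked-urls.csv", newline="") as f:
    urls = [row[0] for row in csv.reader(f) if row]

for url in urls:
    try:
        status = urlopen(url, timeout=10).status
    except HTTPError as e:
        status = e.code
    except URLError as e:
        status = f"error: {e.reason}"
    marker = "  <-- candidate for a 301 redirect" if status == 404 else ""
    print(f"{status}  {url}{marker}")
```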
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm of possibility that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that “not secure” warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays warnings for hacked sites, so there’s a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to do with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s not advised to do so. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains:
What to do:
Do a full review of all the URLs on your site and look at their redirect behavior at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them for pages receiving links or traffic only to minimize your redirects list
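To see whether a given redirect is a 301 or a 302 without any browser plugins, you can ask the server directly. A minimal sketch assuming the requests library and placeholder URLs:
```python
# Minimal sketch: show the raw redirect status for each URL (without following it),
# so 302s that should be permanent 301s stand out. Assumes `pip install requests`.
import requests

REDIRECTED_URLS = [
    "http://www.example.com/old-page",
    "http://www.example.com/old-category/old-product",
]

for url in REDIRECTED_URLS:
    resp = requests.head(url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "(no Location header)")
    note = "  <-- temporary redirect; should this be a 301?" if resp.status_code == 302 else ""
    print(f"{resp.status_code}  {url} -> {location}{note}")
```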
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
10. Meta refresh
I thought meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look something like this tag in the page’s head:
<meta http-equiv="refresh" content="0; url=https://example.com/" />
Source: Wikipedia
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or not well linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
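For a quick health check of the sitemap itself, a short parser is enough. A minimal standard-library sketch, with the sitemap URL as a placeholder:
```python
# Minimal sketch: fetch a sitemap, report whether it's a sitemap index or a plain
# URL set, and flag files approaching the per-sitemap URL limit.
# Standard library only; the sitemap URL is a placeholder.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

SITEMAP_URL = "https://www.example.com/sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

raw = urlopen(SITEMAP_URL).read()
root = ET.fromstring(raw)

if root.tag.endswith("sitemapindex"):
    children = root.findall("sm:sitemap/sm:loc", NS)
    print(f"Sitemap index containing {len(children)} child sitemaps")
else:
    urls = root.findall("sm:url/sm:loc", NS)
    print(f"{len(urls)} URLs listed, {len(raw)} bytes")
    if len(urls) > 50000:
        print("Over the 50,000-URL limit: split this into a sitemap index")
```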
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: most pages on the site didn’t have more than a few hundred words, but a scan of the site using Screaming Frog showed nearly every page having 6,000–9,000 words.
It made no sense. But upon viewing the source code, I saw that there was a block of Terms and Conditions text that was meant to be displayed on only a single page, but was embedded on every page of the site with a “display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline JavaScript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
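If you don’t have a crawler handy, even a rough script can surface the mismatch between page size and visible words. A minimal standard-library sketch with placeholder URLs; the tag stripping and the display:none count are crude signals, not a rendering engine.
```python
# Minimal sketch: compare raw page size, a rough word count, and how often
# "display: none" appears in the markup. Standard library only; URLs are
# placeholders; treat the output as a heuristic, not a full audit.
import re
from urllib.request import urlopen

PAGES = [
    "https://www.example.com/",
    "https://www.example.com/blog/some-post",
]

for url in PAGES:
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    text_only = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html, flags=re.S | re.I)
    text_only = re.sub(r"<[^>]+>", " ", text_only)
    words = len(text_only.split())
    hidden_rules = len(re.findall(r"display\s*:\s*none", html, re.I))
    print(f"{url}: {len(html):,} bytes, ~{words:,} words, {hidden_rules} 'display: none' occurrences")
```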
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
In his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
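Full page-speed audits belong in dedicated tools (PageSpeed Insights, your crawler’s reports), but you can at least trend server response time with a few lines. A minimal sketch assuming the requests library and placeholder URLs; it measures only the HTTP round trip, not rendering or asset loading.
```python
# Minimal sketch: record server response time for key pages. This captures only
# time-to-response, so use it as a trend indicator, not a full speed audit.
# Assumes `pip install requests`; URLs are placeholders.
import requests

KEY_PAGES = [
    "https://www.example.com/",
    "https://www.example.com/category/widgets",
    "https://www.example.com/products/widget",
]

for url in KEY_PAGES:
    resp = requests.get(url, timeout=30)
    print(f"{resp.elapsed.total_seconds():.2f}s  {resp.status_code}  {url}")
```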
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
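As a toy illustration of organizing pages by internal link count, here is a sketch that samples a handful of pages and tallies how often each internal URL is linked from them. Standard library only, placeholder site and paths, and no substitute for a full crawl with a tool like Screaming Frog.
```python
# Minimal sketch: tally incoming internal links across a small sample of pages.
# Standard library only; SITE and PAGES are placeholders; a real audit would
# crawl the whole site rather than a fixed sample.
import re
from collections import Counter
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

SITE = "https://www.example.com"
PAGES = ["/", "/blog/", "/products/widget"]

host = urlparse(SITE).netloc
incoming = Counter()

for path in PAGES:
    page_url = urljoin(SITE, path)
    html = urlopen(page_url).read().decode("utf-8", errors="ignore")
    for href in re.findall(r'<a[^>]+href=["\']([^"\']+)["\']', html, re.I):
        target = urljoin(page_url, href).split("#")[0]
        if urlparse(target).netloc == host:
            incoming[target] += 1

for target, count in incoming.most_common(20):
    print(f"{count:3d}  {target}")
```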
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
Conclusion
Here’s a newsflash for site owners: It’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about traffic to your site or fixing your SEO issues. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations when not properly managed with SEO in mind. I’m compelled to highlight the disasters that can go wrong if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong is a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
ds4design · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the most common ways a site’s organic traffic gets ruined is a well-meaning developer forgetting to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have blocked crawlers from everything because of this one problem.
What to do: Go to http://ift.tt/2i8q3af (your own site’s /robots.txt file) and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
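If you’d rather script this spot-check than eyeball it, here’s a minimal sketch; it assumes Python 3 with the requests library installed, uses example.com as a placeholder domain, and deliberately simplifies the real robots.txt grouping rules:

import requests

def robots_blocks_everything(domain):
    # Fetch the live robots.txt for the domain
    resp = requests.get(f"https://{domain}/robots.txt", timeout=10)
    if resp.status_code != 200:
        return False  # no reachable robots.txt means it isn't blocking anything
    applies_to_all = False
    blocks_all = False
    for raw_line in resp.text.splitlines():
        line = raw_line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line.lower().startswith("user-agent:"):
            # Remember whether the rules that follow apply to every crawler
            applies_to_all = line.split(":", 1)[1].strip() == "*"
        elif line.lower().startswith("disallow:") and applies_to_all:
            if line.split(":", 1)[1].strip() == "/":
                blocks_all = True  # the blanket "Disallow: /" rule
    return blocks_all

print(robots_blocks_everything("example.com"))  # placeholder domain

A True here is your cue to open the conversation with your developer before doing anything else.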
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects run behind schedule and get pushed live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page and looking for a meta robots tag that contains NOINDEX, such as "NOINDEX, FOLLOW" or "NOINDEX, NOFOLLOW".
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
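For a rough automated version of that spot-check, here’s a small sketch; it assumes Python 3 with requests and beautifulsoup4 installed, the URL list is a placeholder, and it only looks at the meta tag (it ignores the X-Robots-Tag HTTP header, which can also carry a noindex):

import requests
from bs4 import BeautifulSoup

pages_to_check = ["https://example.com/", "https://example.com/products/widget"]  # placeholders

for url in pages_to_check:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Find <meta name="robots" ...> regardless of the attribute's letter case
    tag = soup.find("meta", attrs={"name": lambda v: v and v.lower() == "robots"})
    content = (tag.get("content") or "").lower() if tag else ""
    if "noindex" in content:
        print(f"NOINDEX found on {url}: {content}")
    else:
        print(f"OK: {url}")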
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
http://ift.tt/1mN11dx
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
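Here’s a minimal way to sanity-check the home page variants by script rather than by browser; Python 3 with requests is assumed, and example.com is a placeholder:

import requests

variants = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
    "https://www.example.com/index.html",
]

final_urls = set()
for url in variants:
    resp = requests.get(url, timeout=10, allow_redirects=True)
    print(f"{url} -> {resp.url} ({resp.status_code})")
    final_urls.add(resp.url)

# Ideally every variant collapses to one canonical URL
print("Distinct final URLs:", len(final_urls))

If the distinct count is anything other than 1, that’s the list of 301s to hand your developer.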
5. Rel=canonical
Although the rel=canonical tag is closely related to the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage product URLs as they relate to categories. When a product belongs to multiple categories, there are as many URLs for it as there are categories it belongs to.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that, from the navigation menu, the user is taken to http://ift.tt/2i8Ba35.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link rel="canonical" href="http://ift.tt/2iXk05u" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform has its own default for how it handles and implements the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
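If you want to spot-check canonicals without opening the source by hand, here’s a small sketch; Python 3 with requests and beautifulsoup4 is assumed, and the product URL is a placeholder:

import requests
from bs4 import BeautifulSoup

def get_canonical(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    for link in soup.find_all("link"):
        # rel is a multi-valued attribute, so BeautifulSoup returns it as a list
        rels = [r.lower() for r in (link.get("rel") or [])]
        if "canonical" in rels:
            return link.get("href")
    return None

url = "https://example.com/collections/blankets/products/cable-knit-blanket"  # placeholder
print("Page URL:      ", url)
print("rel=canonical: ", get_canonical(url) or "none found")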
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text out of images entirely.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to display them prominently as real, crawlable text.
This matters most for large sites with many, many pages, such as massive ecommerce sites, because those sites can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
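A quick scripted check can flag pages where the headline probably lives inside an image: pages with no H1 in the HTML and a thin visible word count are the usual suspects. This is a rough sketch only (Python 3 with requests and beautifulsoup4 assumed; placeholder URLs; the word count also picks up script and style text, so treat it as an indication, not a measurement):

import requests
from bs4 import BeautifulSoup

pages = ["https://example.com/category/blankets"]  # placeholder URLs

for url in pages:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    h1s = [h.get_text(strip=True) for h in soup.find_all("h1")]
    word_count = len(soup.get_text(" ", strip=True).split())
    print(url)
    print("  H1 tags:", h1s or "none found")
    print("  Approximate visible word count:", word_count)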
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors; the report prioritizes the worst offenders by the broken backlinks pointing at them:
What to do:
After identifying dead pages that still have backlinks pointing at them, 301 redirect each one to the most relevant live page
Also look for backlinks that are broken because the linking site typed your URL wrong or messed up the link code on their end; this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
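Once you’ve exported the linked-to URLs from Search Console or your backlink tool, a few lines of script will tell you which ones actually 404. This sketch assumes Python 3 with requests and uses placeholder URLs:

import requests

# Dead-page candidates exported from Search Console or a backlink tool (placeholders)
linked_urls = [
    "https://example.com/old-product-page",
    "https://example.com/blog/renamed-post",
]

for url in linked_urls:
    status = requests.get(url, timeout=10, allow_redirects=True).status_code
    flag = "needs a 301" if status == 404 else "ok"
    print(f"{status}  {flag}  {url}")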
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm of possibility that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that "not secure" warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
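If HTTPS is already live, a variant of the earlier canonicalization check confirms that plain HTTP requests actually land on the HTTPS version. Again, this is only a sketch (Python 3 with requests assumed; placeholder URLs):

import requests

pages = ["http://example.com/", "http://example.com/category/blankets"]  # placeholder HTTP URLs

for url in pages:
    resp = requests.get(url, timeout=10, allow_redirects=True)
    first_hop = resp.history[0].status_code if resp.history else "no redirect"
    print(f"{url} -> {resp.url}  (first hop: {first_hop}, HTTPS: {resp.url.startswith('https://')})")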
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to do with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s still not advisable to rely on 302s for permanent moves. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains:
What to do:
Do a full review of all the URLs on your site and the redirects they return, starting at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them only for pages receiving links or traffic, to keep your redirect list manageable
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
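To see exactly which status codes a URL passes through, you can walk the redirect chain in a few lines; 302s (and long chains) show up immediately. A minimal sketch, assuming Python 3 with requests and a placeholder URL:

import requests

def show_redirect_chain(url):
    resp = requests.get(url, timeout=10, allow_redirects=True)
    for hop in resp.history:  # each intermediate redirect response
        print(f"{hop.status_code}  {hop.url}  ->  {hop.headers.get('Location')}")
    print(f"{resp.status_code}  {resp.url}  (final)")

show_redirect_chain("http://example.com/old-page")  # placeholder URL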
10. Meta refresh
I thought meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look like this:
Source: Wikipedia
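For reference, the standard form is a single tag in the page's <head>; the destination URL below is only a placeholder:

<meta http-equiv="refresh" content="0; url=https://example.com/new-page/">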
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or not well linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
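A quick scripted look at the sitemap tells you whether it’s a plain URL set or a sitemap index, and how many entries it lists, which you can then compare against Search Console’s submitted and indexed counts. A rough sketch, assuming Python 3 with requests and a placeholder sitemap location:

import requests
import xml.etree.ElementTree as ET

def inspect_sitemap(sitemap_url):
    root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
    # Strip the XML namespace from each child tag name
    children = [child.tag.split("}")[-1] for child in root]
    kind = "sitemap index" if "sitemap" in children else "url set"
    return kind, len(children)

kind, count = inspect_sitemap("https://example.com/sitemap.xml")  # placeholder
print(f"{kind} with {count} entries")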
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: Most pages on the site didn’t have more than a few hundred words, but in a scan of the site using Screaming Frog, it showed nearly every page having 6,000–9,000 words:
It made no sense. But upon viewing the source code, I saw that Terms and Conditions text meant to be displayed on only a single page was embedded on every page of the site with a "display: none;" CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline JavaScript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
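Here’s a rough way to compare raw page size against visible word count and flag display: none blocks for manual review. It’s only a sketch (Python 3 with requests and beautifulsoup4 assumed; placeholder URL; many display: none occurrences are legitimate UI, so review rather than panic):

import re
import requests
from bs4 import BeautifulSoup

url = "https://example.com/some-page"  # placeholder
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

page_size_kb = len(html.encode("utf-8")) / 1024
visible_words = len(soup.get_text(" ", strip=True).split())  # rough count, includes script/style text
hidden_blocks = len(re.findall(r"display\s*:\s*none", html, flags=re.IGNORECASE))

print(f"Page size: {page_size_kb:.0f} KB")
print(f"Approximate word count: {visible_words}")
print(f"'display: none' occurrences to review: {hidden_blocks}")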
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
In his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
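Real speed work belongs in PageSpeed-style tooling and your developer’s hands, but a crude timing check will at least surface glaring slowness on key templates. A sketch only (Python 3 with requests; placeholder URLs; requests’ elapsed time roughly covers the server response, not full rendering):

import requests

urls = ["https://example.com/", "https://example.com/category/blankets"]  # placeholders

for url in urls:
    resp = requests.get(url, timeout=30)
    # elapsed measures time until the response headers arrived, a rough server-speed proxy
    print(f"{resp.elapsed.total_seconds():.2f}s  {len(resp.content) / 1024:.0f} KB  {url}")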
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
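If you want a quick read on how well a single page links out internally before firing up a full crawler, a few lines will do it. A sketch, assuming Python 3 with requests and beautifulsoup4, with a placeholder URL:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

page = "https://example.com/blog/some-post"  # placeholder
domain = urlparse(page).netloc

soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")
internal = set()
for a in soup.find_all("a", href=True):
    target = urljoin(page, a["href"])  # resolve relative links against the page URL
    if urlparse(target).netloc == domain:
        internal.add(target.split("#")[0])

print(f"{len(internal)} unique internal link targets on {page}")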
Conclusion
Here’s a newsflash for site owners: It’s very likely that your developer is not monitoring and fixing your technical SEO problems, and isn’t losing sleep over your organic traffic either. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations that weren’t properly managed with SEO in mind, and I’m compelled to highlight the disasters that can unfold when this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong are a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
lawrenceseitz22 · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem
What to do: Go to http://ift.tt/2i8q3af and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page, and looking for one of these:
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
http://ift.tt/1mN11dx
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to http://ift.tt/2i8Ba35.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link href="http://ift.tt/2iXslWS" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text not embedded in an image.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to prominently display.
This is actually most important for large sites with many, many pages such as massive ecommerce sites. It’s most important for these sites because they can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors and it will prioritize the top errors by broken backlinks:
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for broken links because the linking site typed in your URL wrong or messed up the link code on their end, this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that not secure site warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s not advised to do so. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains:
What to do:
Do a full review of all the URLs on your site and look at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them for pages receiving links or traffic only to minimize your redirects list
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
10. Meta refresh
I though meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look like this:
Source: Wikipedia
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or well not linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: Most pages on the site didn’t have more than a few hundred words, but in a scan of the site using Screaming Frog, it showed nearly every page having 6,000–9,000 words:
It made no sense. But upon viewing the source code, I saw that there were some Terms and Conditions text that was meant to be displayed on only a single page, but embedded on every page of the site with a “Display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline Javascript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
On his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
Conclusion
Here’s a newsflash for site owners: It’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about traffic to your site or fixing your SEO issues. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations when not properly managed with SEO in mind. I’m compelled to highlight the disasters that can go wrong if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong is a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from Blogger http://ift.tt/2jybHQC via IFTTT
0 notes
swunlimitednj · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem
What to do: Go to http://ift.tt/2i8q3af and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page, and looking for one of these:
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
http://ift.tt/1mN11dx
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to http://ift.tt/2i8Ba35.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link href="http://ift.tt/2iXslWS" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text not embedded in an image.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to display them prominently as real text.
This matters most for large sites with many, many pages, such as massive ecommerce sites, because those sites can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
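To spot-check a handful of pages without a full crawler, a minimal Python sketch like this one (assuming requests and beautifulsoup4; the URL list and the 150-word threshold are placeholder assumptions) flags pages with no H1 or very little visible text, which is often a sign the headline lives inside an image:

import requests
from bs4 import BeautifulSoup

PAGES = ["https://www.example.com/category/blankets"]  # placeholder URLs

for url in PAGES:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    h1 = soup.find("h1")
    # Rough visible word count pulled from the page's text nodes.
    words = len(soup.get_text(separator=" ").split())
    flag = "" if h1 and words > 150 else "  <-- review: thin or image-only text?"
    print(f"{url}: h1={'yes' if h1 else 'MISSING'}, ~{words} words{flag}")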
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors, and the report will prioritize the top errors by broken backlinks.
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for links that are broken because the linking site typed your URL wrong or messed up the link code on their end; this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
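If you export the linked-to URLs from Search Console or your backlink tool, a minimal Python sketch along these lines (assuming requests; the URLs are placeholders) will surface the ones that now return 404 and are candidates for a 301:

import requests

LINKED_URLS = [
    "https://www.example.com/old-guide",
    "https://www.example.com/blog/renamed-post",
]

for url in LINKED_URLS:
    # HEAD keeps the check light; swap to GET if a server rejects HEAD requests.
    status = requests.head(url, allow_redirects=True, timeout=10).status_code
    marker = "  <-- 404, needs a 301 to the best matching page" if status == 404 else ""
    print(f"{status}: {url}{marker}")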
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that “not secure” warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays warnings like this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
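Here’s a minimal Python sketch, assuming requests and placeholder URLs, for confirming that HTTP pages 301 directly to their HTTPS counterparts instead of chaining through 302s or not redirecting at all:

import requests

for url in ["http://www.example.com/", "http://www.example.com/products/"]:
    # Don't follow the redirect; inspect only the first response.
    resp = requests.get(url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    ok = resp.status_code == 301 and location.startswith("https://")
    verdict = "OK" if ok else "FIX"
    print(f"{verdict}: {url} -> {resp.status_code} {location or 'no redirect'}")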
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to do with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some recent statements that 302 redirects are as efficient at passing authority as 301s, it’s not advised to rely on 302s for permanent moves. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains.
What to do:
Do a full review of all the URLs on your site and the redirects they trigger, starting with a high-level look
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them for pages receiving links or traffic only to minimize your redirects list
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
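To see which status codes your old URLs actually return, a minimal Python sketch like this one (assuming requests; the URL is a placeholder) walks each redirect hop and flags 302s that may deserve a second look:

import requests

OLD_URLS = ["http://www.example.com/old-category/old-product"]  # placeholder

for url in OLD_URLS:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    for hop in resp.history:  # each intermediate redirect response, in order
        note = "  <-- temporary; confirm it shouldn't be a 301" if hop.status_code == 302 else ""
        print(f"{hop.status_code}: {hop.url} -> {hop.headers.get('Location')}{note}")
    print(f"Final: {resp.status_code} {resp.url}")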
10. Meta refresh
I thought meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it takes the form of a meta tag in the page’s <head>, along the lines of <meta http-equiv="refresh" content="0; url=https://www.example.com/"> (source: Wikipedia).
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
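A crawler will catch these too, but for a quick scripted spot-check, here’s a minimal Python sketch (assuming requests and beautifulsoup4; the URL is a placeholder) that looks for a meta refresh tag in a page’s markup:

import requests
from bs4 import BeautifulSoup

for url in ["https://www.example.com/old-page"]:  # placeholder URLs
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    # Match <meta http-equiv="refresh" ...> regardless of attribute casing.
    refresh = soup.find("meta", attrs={"http-equiv": lambda v: v and v.lower() == "refresh"})
    if refresh:
        print(f"{url}: meta refresh found -> {refresh.get('content')}")
    else:
        print(f"{url}: no meta refresh")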
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or not well linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
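For a quick health check of an existing sitemap, a minimal Python sketch along these lines (assuming requests, a placeholder sitemap location, and a standard urlset file rather than a sitemap index) fetches the sitemap and samples the listed URLs for non-200 responses:

import xml.etree.ElementTree as ET
import requests

SITEMAP = "https://www.example.com/sitemap.xml"  # placeholder location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP, timeout=10).content)
urls = [loc.text for loc in root.findall(".//sm:loc", namespaces=NS)]
print(f"{len(urls)} URLs listed in the sitemap")

for url in urls[:50]:  # sample the first 50 to keep the check quick
    status = requests.head(url, allow_redirects=False, timeout=10).status_code
    if status != 200:
        print(f"{status}: {url}")  # anything printed here shouldn't be in the sitemap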
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: most pages on the site didn’t have more than a few hundred words, but a scan of the site with Screaming Frog showed nearly every page having 6,000–9,000 words.
It made no sense. But upon viewing the source code, I saw that there was a block of Terms and Conditions text that was meant to be displayed on only a single page, but it was embedded on every page of the site with a “display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline Javascript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
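A minimal Python sketch like the following (assuming requests and beautifulsoup4; the URL list is a placeholder) gives a rough view of raw page size, visible word count, and inline display:none blocks so you can spot the same kind of bloat:

import requests
from bs4 import BeautifulSoup

for url in ["https://www.example.com/"]:  # placeholder URLs
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    words = len(soup.get_text(separator=" ").split())
    # Count elements hidden with an inline display:none style.
    hidden = soup.find_all(style=lambda s: s and "display:none" in s.replace(" ", "").lower())
    print(f"{url}: {len(resp.content) / 1024:.0f} KB, ~{words} words, "
          f"{len(hidden)} inline display:none block(s)")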
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
In his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
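This is no substitute for proper speed tooling, but as a rough first pass, a minimal Python sketch (assuming requests; URLs are placeholders) can at least track server response time and raw HTML weight across your key pages:

import requests

for url in ["https://www.example.com/", "https://www.example.com/blog/"]:
    resp = requests.get(url, timeout=30)
    # elapsed measures time to the server's response, not full page render time.
    print(f"{url}: {resp.elapsed.total_seconds():.2f}s to respond, "
          f"{len(resp.content) / 1024:.0f} KB of HTML")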
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability by search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
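As a small illustration of the idea, here’s a minimal Python sketch (assuming requests and beautifulsoup4; the domain and crawl list are placeholders) that counts how many internal links point at each page across a handful of crawled URLs:

from collections import Counter
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

DOMAIN = "www.example.com"  # placeholder
PAGES_TO_CRAWL = [
    "https://www.example.com/",
    "https://www.example.com/blog/",
]

inbound = Counter()
for page in PAGES_TO_CRAWL:
    soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")
    for a in soup.find_all("a", href=True):
        target = urljoin(page, a["href"])  # resolve relative links
        if urlparse(target).netloc == DOMAIN:
            inbound[target] += 1

for url, count in inbound.most_common():
    print(f"{count:>4} internal links -> {url}")

Important pages sitting near the bottom of that list with few or no internal links are your first candidates for new in-content links.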
Conclusion
Here’s a newsflash for site owners: It’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about traffic to your site or fixing your SEO issues. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations that weren’t properly managed with SEO in mind, and I’m compelled to highlight the disasters that can unfold if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong are a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from Blogger http://ift.tt/2joQIeT via SW Unlimited
0 notes
tracisimpson · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem
What to do: Go to yoursitename.com/robots.txt and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page, and looking for one of these:
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
example.com/home.html
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to https://www.bollandbranch.com/collections/baby-blankets/products/cable-knit-baby-blanket.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link href="https://www.bollandbranch.com/products/cable-knit-baby-blanket" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text not embedded in an image.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to prominently display.
This is actually most important for large sites with many, many pages such as massive ecommerce sites. It’s most important for these sites because they can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors and it will prioritize the top errors by broken backlinks:
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for broken links because the linking site typed in your URL wrong or messed up the link code on their end, this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that not secure site warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s not advised to do so. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains:
What to do:
Do a full review of all the URLs on your site and look at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them for pages receiving links or traffic only to minimize your redirects list
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
10. Meta refresh
I though meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look like this:
Source: Wikipedia
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or well not linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: Most pages on the site didn’t have more than a few hundred words, but in a scan of the site using Screaming Frog, it showed nearly every page having 6,000–9,000 words:
It made no sense. But upon viewing the source code, I saw that there were some Terms and Conditions text that was meant to be displayed on only a single page, but embedded on every page of the site with a “Display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline Javascript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
On his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
Conclusion
Here’s a newsflash for site owners: It’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about traffic to your site or fixing your SEO issues. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations when not properly managed with SEO in mind. I’m compelled to highlight the disasters that can go wrong if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong is a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
whitelabelseoreseller · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem
What to do: Go to yoursitename.com/robots.txt and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page, and looking for one of these:
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
example.com/home.html
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to https://www.bollandbranch.com/collections/baby-blankets/products/cable-knit-baby-blanket.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link href="https://www.bollandbranch.com/products/cable-knit-baby-blanket" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text not embedded in an image.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to prominently display.
This is actually most important for large sites with many, many pages such as massive ecommerce sites. It’s most important for these sites because they can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors and it will prioritize the top errors by broken backlinks:
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for broken links because the linking site typed in your URL wrong or messed up the link code on their end, this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that not secure site warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to do with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some recent statements that 302 redirects pass authority as effectively as 301s, it’s not advisable to rely on them for permanent moves. Recent studies have tested this and shown that 301s are still the gold standard, and Mike King’s striking example makes the case that the power of 301s over 302s remains.
What to do:
Do a full crawl of all the URLs on your site and review the redirect status codes at a high level
If 302 redirects are being used incorrectly for permanent moves, change them to 301s (the sketch after this list shows how to check which type a URL returns)
Don’t go redirect-crazy on all 404 errors; use them only for pages receiving links or traffic, to keep your redirects list manageable
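To see which redirect type a URL actually returns, and the full chain it passes through, a small script helps. This is only a sketch: it assumes the requests package, and the URL is a placeholder.

import requests

def show_redirect_chain(url):
    # Follow redirects and print the status code of every hop
    r = requests.get(url, allow_redirects=True, timeout=10)
    for hop in r.history:
        kind = "permanent" if hop.status_code == 301 else "temporary/other"
        print(hop.status_code, f"({kind}):", hop.url, "->", hop.headers.get("Location"))
    print(r.status_code, "final:", r.url)

show_redirect_chain("http://www.example.com/old-page/")  # placeholder URL

Long chains are worth flattening too; ideally each old URL hops once, straight to its final destination.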
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
10. Meta refresh
I thought meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look like this:
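A standard example of the tag, with a placeholder destination URL:
<meta http-equiv="refresh" content="0; url=https://www.example.com/new-page/">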
Source: Wikipedia
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler (or with a simple script like the one sketched below)
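For a handful of pages, a tiny script can also flag the tag directly. This is only a sketch: it assumes the requests package, and the URLs are placeholders.

import re
import requests

# Placeholder URLs to spot-check
urls = ["https://www.example.com/", "https://www.example.com/old-page/"]

# Matches http-equiv="refresh" with either quote style
pattern = re.compile(r'http-equiv\s*=\s*["\']refresh["\']', re.IGNORECASE)

for url in urls:
    html = requests.get(url, timeout=10).text
    if pattern.search(html):
        print("META REFRESH FOUND:", url)
    else:
        print("clean:", url)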
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria:
- Your site is really large.
- Your site has a large archive of content pages that are isolated or not well linked to each other.
- Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to check that your site isn’t suffering from any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs (the sketch below shows a quick local check of what your sitemap actually lists)
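For a quick local sanity check of what your sitemap actually lists, a small script can count the URLs and spot-check a few of them. This is only a sketch: it assumes the requests package, the sitemap location is a placeholder, and it doesn’t handle sitemap index files.

import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder location
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
urls = [loc.text.strip() for loc in root.iter(NS + "loc")]
print(len(urls), "URLs listed in the sitemap")

# Spot-check the first few entries; ideally every listed URL returns 200
for url in urls[:10]:
    status = requests.head(url, allow_redirects=False, timeout=10).status_code
    print(status, url)

If the count here is wildly different from what Search Console reports as submitted, you likely have stale or duplicate sitemaps floating around.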
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: most pages on the site didn’t have more than a few hundred words, but a scan with Screaming Frog showed nearly every page having 6,000–9,000 words.
It made no sense. But upon viewing the source code, I saw that Terms and Conditions text that was meant to be displayed on only a single page was embedded on every page of the site, hidden with a “display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
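A rough way to check for this pattern is to compare a page’s total text with the text sitting inside elements that carry an inline display:none style. This is only a sketch: it assumes the requests and beautifulsoup4 packages, it only catches inline styles (not classes hidden via a stylesheet), and the URL is a placeholder.

import re
import requests
from bs4 import BeautifulSoup

def hidden_text_report(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    total_words = len(soup.get_text(separator=" ").split())
    # Elements hidden with an inline style attribute
    hidden = soup.find_all(style=re.compile(r"display\s*:\s*none", re.IGNORECASE))
    hidden_words = sum(len(el.get_text(separator=" ").split()) for el in hidden)
    print(url)
    print(f"  ~{total_words} words in total, ~{hidden_words} of them inside display:none elements")

hidden_text_report("https://www.example.com/")  # placeholder URL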
In addition to word count, there can be other code bloat on the page, such as inline JavaScript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
Discuss with your developers whether there’s a good reason for the hidden text in the source code; even if there is, it can cause speed and other SEO issues if left unmanaged
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
In his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
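As a very rough first signal (not a substitute for a full page speed audit), timing the raw HTML response for a few key templates can flag obvious problems. This is only a sketch: it assumes the requests package, and the URLs are placeholders.

import time
import requests

# Placeholder URLs for a few important page templates
urls = [
    "https://www.example.com/",
    "https://www.example.com/category/",
    "https://www.example.com/product/",
]

for url in urls:
    start = time.perf_counter()
    r = requests.get(url, timeout=30)
    elapsed = time.perf_counter() - start
    size_kb = len(r.content) / 1024
    print(f"{elapsed:.2f}s  {size_kb:.0f} KB  {r.status_code}  {url}")

Keep in mind this only measures the HTML response, not render time, images, or scripts, which is where most of the real weight usually lives.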
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
14. Internal linking structure
Your internal linking structure can have a huge impact on how easily search spiders can crawl your site.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
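Before reaching for a full site auditor, a tiny crawl can give a first look at how internal links are distributed across a handful of pages. This is only a sketch: it assumes the requests and beautifulsoup4 packages, a placeholder start URL, and a small page cap to keep the crawl polite.

from collections import Counter
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://www.example.com/"  # placeholder start page
MAX_PAGES = 50                          # keep the crawl small and polite
domain = urlparse(START_URL).netloc

inbound = Counter()                     # internal links pointing at each URL
to_visit, seen = [START_URL], {START_URL}

while to_visit:
    page = to_visit.pop(0)
    try:
        html = requests.get(page, timeout=10).text
    except requests.RequestException:
        continue
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        target = urljoin(page, a["href"]).split("#")[0]
        if urlparse(target).netloc != domain:
            continue                    # ignore external links
        inbound[target] += 1
        if target not in seen and len(seen) < MAX_PAGES:
            seen.add(target)
            to_visit.append(target)

# Pages with the fewest inbound internal links are the ones to look at first
for url, count in sorted(inbound.items(), key=lambda kv: kv[1])[:20]:
    print(count, "internal links ->", url)

If a page that matters to your business shows up near the top of that list, it probably deserves more in-content links from related pages.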
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
Conclusion
Here’s a newsflash for site owners: it’s very likely that your developer is not monitoring and fixing your technical SEO problems, and isn’t really thinking about traffic to your site. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate, and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations that weren’t properly managed with SEO in mind, and I’m compelled to highlight the disasters that can unfold if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong are a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
seocompanysurrey · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem
What to do: Go to yoursitename.com/robots.txt and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page, and looking for one of these:
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
example.com/home.html
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to https://www.bollandbranch.com/collections/baby-blankets/products/cable-knit-baby-blanket.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link href="https://www.bollandbranch.com/products/cable-knit-baby-blanket" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text not embedded in an image.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to prominently display.
This is actually most important for large sites with many, many pages such as massive ecommerce sites. It’s most important for these sites because they can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors and it will prioritize the top errors by broken backlinks:
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for broken links because the linking site typed in your URL wrong or messed up the link code on their end, this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that not secure site warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s not advised to do so. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains:
What to do:
Do a full review of all the URLs on your site and look at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them for pages receiving links or traffic only to minimize your redirects list
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
10. Meta refresh
I though meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look like this:
Source: Wikipedia
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or well not linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: Most pages on the site didn’t have more than a few hundred words, but in a scan of the site using Screaming Frog, it showed nearly every page having 6,000–9,000 words:
It made no sense. But upon viewing the source code, I saw that there were some Terms and Conditions text that was meant to be displayed on only a single page, but embedded on every page of the site with a “Display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline Javascript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
On his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
Conclusion
Here’s a newsflash for site owners: It’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about traffic to your site or fixing your SEO issues. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations when not properly managed with SEO in mind. I’m compelled to highlight the disasters that can go wrong if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong is a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from The Moz Blog http://tracking.feedpress.it/link/9375/5112335
0 notes
theinjectlikes2 · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem
What to do: Go to http://ift.tt/2i8q3af and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page, and looking for one of these:
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
http://ift.tt/1mN11dx
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to http://ift.tt/2i8Ba35.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link href="http://ift.tt/2iXslWS" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text not embedded in an image.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to prominently display.
This is actually most important for large sites with many, many pages such as massive ecommerce sites. It’s most important for these sites because they can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors and it will prioritize the top errors by broken backlinks:
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for broken links because the linking site typed in your URL wrong or messed up the link code on their end, this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that not secure site warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s not advised to do so. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains:
What to do:
Do a full review of all the URLs on your site and look at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them for pages receiving links or traffic only to minimize your redirects list
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
10. Meta refresh
I though meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look like this:
Source: Wikipedia
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or well not linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs; a first-pass check of the sitemap file itself is sketched after these lists
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google caps each individual sitemap file at 50,000 URLs (plus a file-size limit), so larger sites need several sitemaps tied together with a sitemap index
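As a lightweight complement to the Search Console report, a script along these lines can pull a sitemap, count the URLs it lists, and flag entries that don’t return a clean 200. It’s a sketch that assumes a plain urlset sitemap (not a sitemap index) at a placeholder location and uses Python’s requests library.

import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

# Parse from raw bytes so the XML encoding declaration is handled cleanly.
root = ET.fromstring(requests.get(SITEMAP_URL, timeout=10).content)
urls = [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]
print(len(urls), "URLs listed in the sitemap")

# Spot-check a sample: redirects, 404s, and blocked pages waste the crawler's attention.
for url in urls[:25]:
    status = requests.head(url, allow_redirects=False, timeout=10).status_code
    if status != 200:
        print("CHECK", status, url)

Anything in the sitemap that redirects or errors is worth cleaning up, since the whole point of the file is to hand Google a tidy list of canonical, indexable URLs.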
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: most pages didn’t have more than a few hundred words, yet a Screaming Frog scan reported nearly every page as having 6,000–9,000 words.
It made no sense. But upon viewing the source code, I saw that Terms and Conditions text meant to be displayed on only a single page had been embedded on every page of the site with a “display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to inflated word counts, there can be other code bloat on the page, such as inline JavaScript and CSS. Although fixing these problems falls under the purview of the development team, you shouldn’t rely on developers to proactively identify these types of issues; a quick spot-check is sketched at the end of this section.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
Ask your developers whether there’s a genuine reason for any hidden text in the source code; even legitimate uses can cause speed and other SEO issues if left unchecked
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
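For a quick manual spot-check before a full crawl, something like this compares raw page size, an approximate visible word count, and the number of display:none rules per page. It’s a rough sketch with placeholder URLs; plenty of display:none usage is perfectly legitimate (menus, tabs, accordions), so treat the counts as a prompt to read the source, not as proof of a problem.

import re
import requests

# Placeholder URLs -- pick pages whose real word count you already know.
URLS = [
    "https://www.example.com/",
    "https://www.example.com/some-product",
]

for url in URLS:
    html = requests.get(url, timeout=10).text
    size_kb = len(html.encode("utf-8")) / 1024
    # Drop scripts, styles, and tags to approximate the text a visitor can actually read.
    text = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)
    text = re.sub(r"<[^>]+>", " ", text)
    words = len(text.split())
    hidden = len(re.findall(r"display\s*:\s*none", html, re.IGNORECASE))
    print(f"{url}: {size_kb:.0f} KB, ~{words} words, {hidden} display:none rules")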
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
In his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools; a rough homegrown check is sketched after this list
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
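If you want a crude trend line between proper audits, even a tiny script helps. This sketch uses placeholder URLs and only times the HTML response itself (roughly server response time), not images, CSS, JavaScript, or rendering, so pair it with PageSpeed Insights or a full waterfall tool for the real picture.

import requests

URLS = [
    "https://www.example.com/",
    "https://www.example.com/blog/",
]

for url in URLS:
    response = requests.get(url, timeout=30)
    # .elapsed stops once response headers are parsed, so treat it as a floor, not a full load time.
    seconds = response.elapsed.total_seconds()
    html_kb = len(response.content) / 1024
    print(f"{url}: {seconds:.2f}s to first response, {html_kb:.0f} KB of HTML")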
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count; a bare-bones version of that count is sketched at the end of this section. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
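As a bare-bones illustration of counting internal links, here’s a sketch. It samples a few known pages (placeholders below), pulls their anchor tags with the standard library’s HTMLParser, and tallies how many internal links point at each URL; a real crawler discovers pages on its own and handles far more edge cases.

import requests
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

SITE = "https://www.example.com"  # placeholder domain

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

# A handful of known pages to sample; swap in your own key templates and posts.
PAGES = [SITE + "/", SITE + "/blog/", SITE + "/products/"]
inbound = Counter()

for page in PAGES:
    parser = LinkCollector()
    parser.feed(requests.get(page, timeout=10).text)
    for href in parser.links:
        absolute = urljoin(page, href).split("#")[0]
        if urlparse(absolute).netloc == urlparse(SITE).netloc:
            inbound[absolute] += 1

# Important pages that barely show up here are the ones to shore up with editorial links.
for url, count in inbound.most_common():
    print(count, url)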
Conclusion
Here’s a newsflash for site owners: It’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about traffic to your site or fixing your SEO issues. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations that weren’t properly managed with SEO in mind, and I’m compelled to highlight the disasters that can unfold if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong are a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
from The Moz Blog http://ift.tt/2idtH0V via IFTTT
0 notes
ericsburden-blog · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem
What to do: Go to http://ift.tt/2i8q3af and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page, and looking for one of these:
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
http://ift.tt/1mN11dx
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to http://ift.tt/2i8Ba35.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link href="http://ift.tt/2iXslWS" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text not embedded in an image.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to prominently display.
This is actually most important for large sites with many, many pages such as massive ecommerce sites. It’s most important for these sites because they can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors and it will prioritize the top errors by broken backlinks:
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for broken links because the linking site typed in your URL wrong or messed up the link code on their end, this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that not secure site warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s not advised to do so. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains:
What to do:
Do a full review of all the URLs on your site and look at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them for pages receiving links or traffic only to minimize your redirects list
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
10. Meta refresh
I though meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look like this:
Source: Wikipedia
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or well not linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: Most pages on the site didn’t have more than a few hundred words, but in a scan of the site using Screaming Frog, it showed nearly every page having 6,000–9,000 words:
It made no sense. But upon viewing the source code, I saw that there were some Terms and Conditions text that was meant to be displayed on only a single page, but embedded on every page of the site with a “Display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline Javascript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
On his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
Conclusion
Here’s a newsflash for site owners: It’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about traffic to your site or fixing your SEO issues. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations when not properly managed with SEO in mind. I’m compelled to highlight the disasters that can go wrong if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong is a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
0 notes
holmescorya · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem
What to do: Go to http://ift.tt/2i8q3af and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page, and looking for one of these:
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
http://ift.tt/1mN11dx
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to http://ift.tt/2i8Ba35.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link href="http://ift.tt/2iXslWS" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text not embedded in an image.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to prominently display.
This is actually most important for large sites with many, many pages such as massive ecommerce sites. It’s most important for these sites because they can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors and it will prioritize the top errors by broken backlinks:
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for broken links because the linking site typed in your URL wrong or messed up the link code on their end, this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that not secure site warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s not advised to do so. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains:
What to do:
Do a full review of all the URLs on your site and look at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them for pages receiving links or traffic only to minimize your redirects list
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
10. Meta refresh
I though meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look like this:
Source: Wikipedia
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or well not linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: Most pages on the site didn’t have more than a few hundred words, but in a scan of the site using Screaming Frog, it showed nearly every page having 6,000–9,000 words:
It made no sense. But upon viewing the source code, I saw that there were some Terms and Conditions text that was meant to be displayed on only a single page, but embedded on every page of the site with a “Display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline Javascript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
On his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
Conclusion
Here’s a newsflash for site owners: it’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about your site’s traffic. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling them. They have enough on their plate, and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations that weren’t properly managed with SEO in mind, and I’m compelled to highlight the disasters that can unfold when this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong are a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
neilmberry · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem
What to do: Go to http://ift.tt/2i8q3af and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page, and looking for one of these:
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
http://ift.tt/1mN11dx
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to http://ift.tt/2i8Ba35.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link href="http://ift.tt/2iXslWS" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text not embedded in an image.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to prominently display.
This is actually most important for large sites with many, many pages such as massive ecommerce sites. It’s most important for these sites because they can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors and it will prioritize the top errors by broken backlinks:
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for broken links because the linking site typed in your URL wrong or messed up the link code on their end, this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that not secure site warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s not advised to do so. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains:
What to do:
Do a full review of all the URLs on your site and look at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them for pages receiving links or traffic only to minimize your redirects list
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
10. Meta refresh
I though meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look like this:
Source: Wikipedia
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or well not linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: Most pages on the site didn’t have more than a few hundred words, but in a scan of the site using Screaming Frog, it showed nearly every page having 6,000–9,000 words:
It made no sense. But upon viewing the source code, I saw that there were some Terms and Conditions text that was meant to be displayed on only a single page, but embedded on every page of the site with a “Display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline Javascript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
On his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
Conclusion
Here’s a newsflash for site owners: It’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about traffic to your site or fixing your SEO issues. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations when not properly managed with SEO in mind. I’m compelled to highlight the disasters that can go wrong if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong is a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now published first on http://elitelimobog.blogspot.com
0 notes
jiahaothakur · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be solved by now, but I’m still repeatedly running into random sites that have their entire site blocked because of this one problem
What to do: Go to http://ift.tt/2i8q3af and make sure it doesn’t show “User-agent: * Disallow: /”.
Here’s a fancy screenshot:
And this is what it looks like in Google’s index:
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up when a website is in its development phase. Since so many web development projects are running behind schedule and pushed to live at the last hour, this is where the mistake can happen.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page, and looking for one of these:
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see one of the above, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
http://ift.tt/1mN11dx
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to http://ift.tt/2i8Ba35.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link href="http://ift.tt/2iXslWS" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text on images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text not embedded in an image.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and there's evidence of some stunning accuracy from Google’s technology:
Source: Cognitive SEO
Yet, the conclusion from the test is that image-to-text extraction technology is not being used for ranking search queries:
Source: Cognitive SEO
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to prominently display.
This is actually most important for large sites with many, many pages such as massive ecommerce sites. It’s most important for these sites because they can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can spew out countless broken backlinks from other websites. This is a golden opportunity for recovering link equity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors and it will prioritize the top errors by broken backlinks:
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for broken links because the linking site typed in your URL wrong or messed up the link code on their end, this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that not secure site warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays this for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s not advised to do so. Recent studies have tested this and shown that 301s are the gold standard. Mike King’s striking example shows that the power of 301s over 302s remains:
What to do:
Do a full review of all the URLs on your site and look at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors — use them for pages receiving links or traffic only to minimize your redirects list
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
10. Meta refresh
I though meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it would look like this:
Source: Wikipedia
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or well not linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: Most pages on the site didn’t have more than a few hundred words, but in a scan of the site using Screaming Frog, it showed nearly every page having 6,000–9,000 words:
It made no sense. But upon viewing the source code, I saw that there were some Terms and Conditions text that was meant to be displayed on only a single page, but embedded on every page of the site with a “Display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline Javascript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
On his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out this map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and internal links:
Source: Green Flag Digital
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
Conclusion
Here’s a newsflash for site owners: It’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about traffic to your site or fixing your SEO issues. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations when not properly managed with SEO in mind. I’m compelled to highlight the disasters that can go wrong if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong is a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes
jerrycairns601 · 8 years ago
Text
How to Find and Fix 14 Technical SEO Problems That Can Be Damaging Your Site Now
Posted by Joe.Robison
Who doesn’t love working on low-hanging fruit SEO problems that can dramatically improve your site?
Across all businesses and industries, the low-effort, high-reward projects should jump to the top of the list of things to implement. And it’s nowhere more relevant than tackling technical SEO issues on your site.
Let’s focus on easy-to-identify, straightforward-to-fix problems. Most of these issues can be uncovered in an afternoon, and it’s possible they can solve months' worth of traffic problems. While there may not be groundbreaking, complex issues that will fix SEO once and for all, there are easy things to check right now. If your site already checks out for all of these, then you can go home today and start decrypting RankBrain tomorrow.
Source
Real quick: The definition of technical SEO is a bit fuzzy. Does it include everything that happens on a site except for content production? Or is it just limited to code and really technical items?
I’ll define technical SEO here as aspects of a site comprising more technical problems that the average marketer wouldn’t identify and take a bit of experience to uncover. Technical SEO problems are also generally, but not always, site-wide problems rather than specific page issues. Their fixes can help improve your site as a whole, rather than just isolated pages.
You’d think that, with all the information out there on the web, many of these would be common knowledge. I’m sure my car mechanic thought the same thing when I busted my engine because I forgot to put oil in it for months. Simple oversights can destroy your machine.
Source
The target audience for this post is beginning to intermediate SEOs and site owners that haven’t inspected their technical SEO for a while, or are doing it for the first time. If just one of these 14 technical SEO problems below is harming your site, I think you’d consider this a valuable read.
This is not a complete technical SEO audit checklist, but a summary of some of the most common and damaging technical SEO problems that you can fix now. I highlighted these based on my own real-world experience analyzing dozens of client and internal websites. Some of these issues I thought I’d never run into... until I did.
This is not a replacement for a full audit, but looking at these right now can actually save you thousands of dollars in lost sales, or worse.
1. Check indexation immediately
Have you ever heard (or asked) the question: “Why aren’t we ranking for our brand name?”
To the website owner, it’s a head-scratcher. To the seasoned SEO, it’s an eye-roll.
Can you get organic traffic to your site if it doesn’t show up in Google search? No.
I love it when complex problems are simplified at a higher level. Sergey Stefoglo at Distilled wrote an article that broke down the complex process of a technical SEO audit into two buckets: indexing and ranking.
The concept is that, instead of going crazy with a 239-point checklist with varying priorities, you sit back and ask the first question: Are the pages on our site indexing?
You can get those answers pretty quickly with a quick site search directly in Google.
What to do: Type site:{yoursitename.com} into Google search and you’ll immediately see how many pages on your site are ranking.
What to ask:
Is that approximately the amount of pages that we’d expect to be indexing?
Are we seeing pages in the index that we don’t want?
Are we missing pages in the index that we want to rank?
What to do next:
Go deeper and check different buckets of pages on your site, such as product pages and blog posts
Check subdomains to make sure they’re indexing (or not)
Check old versions of your site to see if they're mistakenly being indexed instead of redirected
Look out for spam in case your site was hacked, going deep into the search result to look for anything uncommon (like pharmaceutical or gambling SEO site-hacking spam)
Figure out exactly what’s causing indexing problems.
2. Robots.txt
Perhaps the single most damaging character in all of SEO is a simple “/” improperly placed in the robots.txt file.
Everybody knows to check the robots.txt, right? Unfortunately not.
One of the biggest offenders of ruining your site’s organic traffic is a well-meaning developer who forgot to change the robots.txt file after redeveloping your website.
You would think this would be a solved problem by now, but I’m still regularly running into sites that have blocked crawlers from their entire site because of this one character.
What to do: Go to {yoursitename.com}/robots.txt and make sure it doesn’t show “User-agent: * Disallow: /”.
What to do next:
If you see “Disallow: /”, immediately talk to your developer. There could be a good reason it’s set up that way, or it may be an oversight.
If you have a complex robots.txt file, like many ecommerce sites, you should review it line-by-line with your developer to make sure it’s correct.
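This check is also easy to automate between releases. Here is a minimal sketch using Python’s standard library; the domain below is a hypothetical placeholder.

from urllib.robotparser import RobotFileParser

# Hypothetical placeholder -- use your own domain.
SITE = "https://www.example.com"

parser = RobotFileParser(SITE + "/robots.txt")
parser.read()

# If a generic crawler can't fetch the site root, the whole site is
# effectively blocked -- the classic "Disallow: /" mistake.
if parser.can_fetch("*", SITE + "/"):
    print("Site root is crawlable according to robots.txt")
else:
    print("WARNING: robots.txt blocks all user agents from the site root")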
3. Meta robots NOINDEX
NOINDEX can be even more damaging than a misconfigured robots.txt at times. A mistakenly configured robots.txt won’t pull your pages out of Google’s index if they’re already there, but a NOINDEX directive will remove all pages with this configuration.
Most commonly, the NOINDEX is set up while a website is in its development phase. Since so many web development projects run behind schedule and get pushed live at the last hour, this is where the mistake happens.
A good developer will make sure this is removed from your live site, but you must verify that’s the case.
What to do:
Manually do a spot-check by viewing the source code of your page and looking for a meta robots tag set to “NOINDEX, FOLLOW” or “NOINDEX, NOFOLLOW”
90% of the time you’ll want it to be either “INDEX, FOLLOW” or nothing at all. If you see a NOINDEX directive, you need to take action.
It’s best to use a tool like Screaming Frog to scan all the pages on your site at once
What to do next:
If your site is constantly being updated and improved by your development team, set a reminder to check this weekly or after every new site upgrade
Even better, schedule site audits with an SEO auditor software tool, like the Moz Pro Site Crawl
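For a quick scripted spot-check in between full crawls, a sketch like the one below flags pages carrying a NOINDEX directive in either the meta robots tag or the X-Robots-Tag header. The URLs are hypothetical, the regex is a rough spot-check rather than a real HTML parser, and the requests library is assumed to be installed.

import re
import requests

# Hypothetical sample of important URLs -- use your own.
URLS = [
    "https://www.example.com/",
    "https://www.example.com/products/widget",
]

# Rough regex spot-check; a real crawler parses the HTML properly.
META_ROBOTS = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

for url in URLS:
    response = requests.get(url, timeout=10)
    directives = META_ROBOTS.findall(response.text)
    directives.append(response.headers.get("X-Robots-Tag", ""))
    if any("noindex" in d.lower() for d in directives):
        print(f"NOINDEX found: {url}")
    else:
        print(f"OK: {url}")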
4. One version per URL: URL Canonicalization
The average user doesn't really care if your home page shows up as all of these separately:
www.example.com
example.com
www.example.com/home.html
https://www.example.com
But the search engines do, and this configuration can dilute link equity and make your work harder.
Google will generally decide which version to index, but they may index a mixed assortment of your URL versions, which can cause confusion and complexity.
Moz’s canonicalization guide sums it up perfectly:
“For SEOs, canonicalization refers to individual web pages that can be loaded from multiple URLs. This is a problem because when multiple pages have the same content but different URLs, links that are intended to go to the same page get split up among multiple URLs. This means that the popularity of the pages gets split up.”
It’s likely that no one but an SEO would flag this as something to fix, but it can be an easy fix that has a huge impact on your site.
What to do:
Manually enter in multiple versions of your home page in the browser to see if they all resolve to the same URL
Look also for HTTP vs HTTPS versions of your URLs — only one should exist
If they don’t, you’ll want to work with your developer to set up 301 redirects to fix this
Use the “site:” operator in Google search to find out which versions of your pages are actually indexing
What to do next:
Scan your whole site at once with a scalable tool like Screaming Frog to find all pages faster
Set up a schedule to monitor your URL canonicalization on a weekly or monthly basis
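Here’s a minimal way to script the manual check before reaching for a full crawler, again with hypothetical URLs and assuming the requests library is installed: request the common variants of your home page and confirm they all resolve to a single final URL.

import requests

# Hypothetical home page variants -- adjust for your own domain.
VARIANTS = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
]

final_urls = set()
for url in VARIANTS:
    response = requests.get(url, allow_redirects=True, timeout=10)
    print(f"{url} -> {response.url} ({response.status_code})")
    final_urls.add(response.url)

if len(final_urls) == 1:
    print("All variants resolve to one canonical URL")
else:
    print(f"WARNING: {len(final_urls)} different final URLs -- fix with 301 redirects")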
5. Rel=canonical
Although the rel=canonical tag is closely related with the canonicalization mentioned above, it should be noted differently because it’s used for more than resolving the same version of a slightly different URL.
It’s also useful for preventing page duplication when you have similar content across different pages — often an issue with ecommerce sites and managing categories and filters.
I think the best example of using this properly is how Shopify’s platform uses rel=canonical URLs to manage their product URLs as they relate to categories. When a product is a part of multiple categories, there are as many URLs as there are categories that product is a part of.
For example, Boll & Branch is on the Shopify platform, and on their Cable Knit Blanket product page we see that from the navigation menu, the user is taken to http://ift.tt/2i8Ba35.
But looking at the rel=canonical, we see it’s configured to point to the main URL:
<link rel="canonical" href="http://ift.tt/2iXslWS" />
And this is the default across all Shopify sites.
Every ecommerce and CMS platform comes with a different default setting on how they handle and implement the rel=canonical tag, so definitely look at the specifics for your platform.
What to do:
Spot-check important pages to see if they're using the rel=canonical tag
Use a site scanning software to list out all the URLs on your site and determine if there are duplicate page problems that can be solved with a rel=canonical tag
Read more on the different use cases for canonical tags and when best to use them
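To spot-check the tag at a small scale, a sketch along these lines pulls the rel=canonical from a page and compares it with the URL you requested. The URL below is hypothetical, the regex is a crude spot-check (attribute order can vary in real markup), and the requests library is assumed.

import re
import requests

# Hypothetical page to check -- use one of your own product or category URLs.
URL = "https://www.example.com/collections/blankets/products/cable-knit-blanket"

CANONICAL = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

response = requests.get(URL, timeout=10)
match = CANONICAL.search(response.text)

if match is None:
    print("No rel=canonical tag found")
elif match.group(1).rstrip("/") == URL.rstrip("/"):
    print("Canonical tag points at this URL")
else:
    print(f"Canonical points elsewhere: {match.group(1)}")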
6. Text in images
Text in images — it’s such a simple concept, but out in the wild many, many sites are hiding important content behind images.
Yes, Google can somewhat understand text in images, but it’s not nearly as sophisticated as we would hope in 2017. The best practice for SEO is to keep important text out of images and on the page as real text.
Google’s Gary Illyes confirmed that it’s unlikely Google’s crawler can recognize text well:
@Web4Raw I say no — Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 15, 2016
CognitiveSEO ran a great test on Google’s ability to extract text from images, and their screenshots (source: CognitiveSEO) show evidence of some stunning accuracy from Google’s technology. Yet the conclusion from the test is that image-to-text extraction is not being used for ranking search queries.
The conclusion from CognitiveSEO is that “this search was proof that the search engine does not, in fact, extract text from images to use it in its search queries. At least not as a general rule.”
And although H1 tags are not as important as they once were, it’s still an on-site SEO best practice to display them prominently as real, crawlable text.
This matters most for large sites with many, many pages, such as massive ecommerce sites, because those sites can realistically rank their product or category pages with just a simple keyword-targeted main headline and a string of text.
What to do:
Manually inspect the most important pages on your site, checking if you’re hiding important text in your images
At scale, use an SEO site crawler to scan all the pages on your site. Look for whether H1 and H2 tags are being found on pages across your site. Also look for the word count as an indication.
What to do next:
Create a guide for content managers and developers so that they know the best practice in your organization is to not hide text behind images
Collaborate with your design and development team to get the same design look that you had with text embedded in images, but using CSS instead for image overlays
7. Broken backlinks
If not properly overseen by a professional SEO, a website migration or relaunch project can leave countless backlinks from other websites pointing at dead pages. Recovering that link equity is a golden opportunity.
Some of the top pages on your site may have become 404 pages after a migration, so the backlinks pointing back to these 404 pages are effectively broken.
Two types of tools are great for finding broken backlinks — Google Search Console, and a backlink checker such as Moz, Majestic, or Ahrefs.
In Search Console, you’ll want to review your top 404 errors; the report prioritizes them by the number of broken backlinks pointing at each page.
What to do:
After identifying your top pages with backlinks that are dead, 301 redirect these to the best pages
Also look for backlinks that are broken because the linking site typed your URL wrong or messed up the link code on their end; this is another rich source of link opportunities
What to do next:
Use other tools such as Mention or Google Alerts to keep an eye on unlinked mentions that you can reach out to for an extra link
Set up a recurring site crawl or manual check to look out for new broken links
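Once you’ve exported a list of linked-to URLs from Search Console or your backlink tool, a short script can confirm which of them actually return a 404 and still need a 301. The URLs below are hypothetical stand-ins for that export, and the requests library is assumed.

import requests

# Hypothetical export -- in practice these come from Search Console
# or a backlink tool such as Moz, Majestic, or Ahrefs.
LINKED_URLS = [
    "https://www.example.com/old-product-page",
    "https://www.example.com/blog/renamed-post",
]

for url in LINKED_URLS:
    response = requests.get(url, allow_redirects=True, timeout=10)
    if response.status_code == 404:
        print(f"404 -- needs a 301 to the best matching page: {url}")
    else:
        print(f"{response.status_code}: {url}")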
8. HTTPS is less optional
What was once only necessary for ecommerce sites is now becoming more of a necessity for all sites.
Google just recently announced that they would start marking any non-HTTPS site as non-secure if the site accepts passwords or credit cards:
“To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.”
What’s even more shocking is Google’s plan to label all HTTP URLs as non-secure:
“Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.”
Going even further, it’s not out of the realm to imagine that Google will start giving HTTPS sites even more of an algorithmic ranking benefit over HTTP.
It’s also not unfathomable that “not secure” warnings will start showing up for sites directly in the search results, before a user even clicks through to the site. Google currently displays warnings for hacked sites, so there's a precedent set.
This goes beyond just SEO, as this overlaps heavily with web development, IT, and conversion rate optimization.
What to do:
If your site currently has HTTPS deployed, run your site through Screaming Frog to see how the pages are resolving
Ensure that all pages are resolving to the HTTPS version of the site (same as URL canonicalization mentioned earlier)
What to do next:
If your site is not on HTTPS, start mapping out the transition, as Google has made it clear how important it is to them
Properly manage a transition to HTTPS by enlisting an SEO migration strategy so as not to lose rankings
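If HTTPS is already deployed, a quick sketch like this (hypothetical domain and paths, requests assumed) confirms that the HTTP versions of key pages permanently redirect to HTTPS.

import requests

# Hypothetical sample of important pages -- use your own.
PAGES = ["/", "/products/widget", "/blog/"]
DOMAIN = "www.example.com"

for path in PAGES:
    response = requests.get(f"http://{DOMAIN}{path}", allow_redirects=True, timeout=10)
    first_hop = response.history[0].status_code if response.history else None
    if response.url.startswith("https://") and first_hop == 301:
        print(f"OK: http://{DOMAIN}{path} 301s to {response.url}")
    else:
        print(f"CHECK: http://{DOMAIN}{path} -> {response.url} (first hop: {first_hop})")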
9. 301 & 302 redirects
Redirects are an amazing tool in an SEO’s arsenal for managing and controlling dead pages, for consolidating multiple pages, and for making website migrations work without a hitch.
301 redirects are permanent and 302 redirects are temporary. The best practice is to always use 301 redirects when permanently redirecting a page.
301 redirects can be confusing for those new to SEO trying to properly use them:
Should you use them for all 404 errors? (Not always.)
Should you use them instead of the rel=canonical tag? (Sometimes, not always.)
Should you redirect all the old URLs from your previous site to the home page? (Almost never, it’s a terrible idea.)
They’re a lifesaver when used properly, but a pain when you have no idea what to do with them.
With great power comes great responsibility, and it’s vitally important to have someone on your team who really understands how to properly strategize the usage and implementation of 301 redirects across your whole site. I’ve seen sites lose up to 60% of their revenue for months, just because these were not properly implemented during a site relaunch.
Despite some statements released recently about 302 redirects being as efficient at passing authority as 301s, it’s still not advised to rely on 302s for permanent moves. Recent studies have tested this and shown that 301s are the gold standard, and Mike King’s striking example shows that the power of 301s over 302s remains.
What to do:
Do a full review of all the URLs on your site and look at their redirect status codes at a high level
If using 302 redirects incorrectly for permanent redirects, change these to 301 redirects
Don’t go redirect-crazy on all 404 errors; use them only for pages receiving links or traffic, to keep your redirects list manageable
What to do next:
If using 302 redirects, discuss with your development team why your site is using them
Build out a guide for your organization on the importance of using 301s over 302s
Review the redirects implementation from your last major site redesign or migration; there are often tons of errors
Never redirect all the pages from an old site to the home page unless there’s a really good reason
Include redirect checking in your monthly or weekly site scan process
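To see exactly which status codes your redirects return, a sketch like this one (hypothetical URL, requests assumed) walks the redirect chain hop by hop so that stray 302s stand out.

import requests

# Hypothetical old URL that should permanently redirect somewhere.
URL = "http://www.example.com/old-page"

response = requests.get(URL, allow_redirects=True, timeout=10)

# response.history holds each intermediate redirect response, in order.
for hop in response.history:
    marker = "" if hop.status_code == 301 else "  <-- not a 301"
    print(f"{hop.status_code}  {hop.url}{marker}")
print(f"{response.status_code}  {response.url}  (final)")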
10. Meta refresh
I thought meta refreshes were gone for good and would never be a problem, until they were. I ran into a client using them on their brand-new, modern site when migrating from an old platform, and I quickly recommended that we turn these off and use 301 redirects instead.
The meta refresh is a client-side (as opposed to server-side) redirect and is not recommended by Google or professional SEOs.
If implemented, it looks something like this (example adapted from Wikipedia): <meta http-equiv="refresh" content="0; url=http://www.example.com/">
It’s a fairly simple one to check — either you have it or you don’t, and by and large there’s no debate that you shouldn’t be using these.
Google’s John Mu said:
“I would strongly recommend not using meta refresh-type or JavaScript redirects like that if you have changed your URLs. Instead of using those kinds of redirects, try to have your server do a normal 301 redirect. Search engines might recognize the JavaScript or meta refresh-type redirects, but that's not something I would count on — a clear 301 redirect is always much better.”
And Moz’s own redirection guide states:
“They are most commonly associated with a five-second countdown with the text 'If you are not redirected in five seconds, click here.' Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to poor usability and the loss of link juice passed.”
What to do:
Manually spot-check individual pages using the Redirect Path Checker Chrome Extension
Check at scale with Screaming Frog or another site crawler
What to do next:
Communicate to your developers the importance of using 301 redirects as a standard and never using meta refreshes unless there’s a really good reason
Schedule a monthly check to monitor redirect type usage
11. XML sitemaps
XML sitemaps help Google and other search engine spiders crawl and understand your site. Most often they have the biggest impact for large and complex sites that need to give extra direction to the crawlers.
Google’s Search Console Help Guide is quite clear on the purpose and helpfulness of XML sitemaps:
“If your site’s pages are properly linked, our web crawlers can usually discover most of your site. Even so, a sitemap can improve the crawling of your site, particularly if your site meets one of the following criteria: - Your site is really large. - Your site has a large archive of content pages that are isolated or not well linked to each other. - Your site is new and has few external links to it.”
A few of the biggest problems I’ve seen with XML sitemaps while working on clients’ sites:
Not creating it in the first place
Not including the location of the sitemap in the robots.txt
Allowing multiple versions of the sitemap to exist
Allowing old versions of the sitemap to exist
Not keeping Search Console updated with the freshest copy
Not using sitemap indexes for large sites
What to do:
Use the above list to review that you’re not violating any of these problems
Check the number of URLs submitted and indexed from your sitemap within Search Console to get an idea of the quality of your sitemap and URLs
What to do next:
Monitor indexation of URLs submitted in XML sitemap frequently from within Search Console
If your site grows more complex, investigate ways to use XML sitemaps and sitemap indexes to your advantage, as Google limits each sitemap to 10MB and 50,000 URLs
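A couple of the problems above, a missing Sitemap: line in robots.txt or a sitemap URL that no longer resolves, are easy to script. A minimal sketch, assuming a hypothetical domain and the requests library:

import requests

# Hypothetical placeholder -- use your own domain.
SITE = "https://www.example.com"

robots = requests.get(SITE + "/robots.txt", timeout=10).text
sitemap_lines = [
    line.split(":", 1)[1].strip()
    for line in robots.splitlines()
    if line.lower().startswith("sitemap:")
]

if not sitemap_lines:
    print("No Sitemap: line found in robots.txt")

for sitemap_url in sitemap_lines:
    status = requests.get(sitemap_url, timeout=10).status_code
    print(f"{status}  {sitemap_url}")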
12. Unnatural word count & page size
I recently ran into this issue while reviewing a site: most pages didn’t have more than a few hundred words, but a scan with Screaming Frog showed nearly every page as having 6,000–9,000 words.
It made no sense. But upon viewing the source code, I saw that there was some Terms and Conditions text that was meant to be displayed on only a single page, yet was embedded on every page of the site with a “display: none;” CSS style.
This can slow down the load speed of your page and could possibly trigger some penalty issues if seen as intentional cloaking.
In addition to word count, there can be other code bloat on the page, such as inline Javascript and CSS. Although fixing these problems would fall under the purview of the development team, you shouldn’t rely on the developers to be proactive in identifying these types of issues.
What to do:
Scan your site and compare calculated word count and page size with what you expect
Review the source code of your pages and recommend areas to reduce bloat
Ensure that there’s no hidden text that can trip algorithmic penalties
What to do next:
There could be a good reason for hidden text in the source code from a developer’s perspective, but it can cause speed and other SEO issues if not fixed.
Review page size and word count across all URLs on your site periodically to keep tabs on any issues
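A rough way to keep tabs on this between full crawls is to fetch a sample of pages, report the raw HTML size, and count the words left after stripping tags. The sketch below uses hypothetical URLs, a crude tag-stripping regex, and assumes the requests library is installed.

import re
import requests

# Hypothetical sample -- use a spread of template types from your own site.
URLS = [
    "https://www.example.com/",
    "https://www.example.com/products/widget",
]

TAG = re.compile(r"<[^>]+>")

for url in URLS:
    html = requests.get(url, timeout=10).text
    visible_text = TAG.sub(" ", html)  # crude: also counts script/style text
    words = len(visible_text.split())
    size_kb = len(html.encode("utf-8")) / 1024
    # A huge word count on a thin page is a hint that hidden text
    # (e.g. display: none; blocks) is bloating the source.
    print(f"{url}: {size_kb:.0f} KB of HTML, roughly {words} words")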
13. Speed
You’ve heard it a million times, but speed is key — and definitely falls under the purview of technical SEO.
Google has clearly stated that speed is a small part of the algorithm:
“Like us, our users place a lot of value in speed — that's why we've decided to take site speed into account in our search rankings. We use a variety of sources to determine the speed of a site relative to other sites.”
Even with this clear SEO directive, and obvious UX and CRO benefits, speed is at the bottom of the priority list for many site managers. With mobile search clearly cemented as just as important as desktop search, speed is even more important and can no longer be ignored.
On his awesome Technical SEO Renaissance post, Mike King said speed is the most important thing to focus on in 2017 for SEO:
“I feel like Google believes they are in a good place with links and content so they will continue to push for speed and mobile-friendliness. So the best technical SEO tactic right now is making your site faster.”
Moz’s page speed guide is a great resource for identifying and fixing speed issues on your site.
What to do:
Audit your site speed and page speed using SEO auditing tools
Unless you’re operating a smaller site, you’ll want to work closely with your developer on this one. Make your site as fast as possible.
Continuously push for resources to focus on site speed across your organization.
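Dedicated tools like PageSpeed Insights or Lighthouse give the full picture, but a crude timing sketch (hypothetical URLs, requests assumed) is enough to spot pages that are obviously slow and worth escalating.

import time
import requests

# Hypothetical sample of key pages -- use your own.
URLS = [
    "https://www.example.com/",
    "https://www.example.com/category/blankets",
]

for url in URLS:
    start = time.perf_counter()
    response = requests.get(url, timeout=30)
    total = time.perf_counter() - start
    size_kb = len(response.content) / 1024
    # This measures server response + download only, not rendering,
    # so treat it as a rough first signal rather than a full audit.
    print(f"{url}: {total:.2f}s for {size_kb:.0f} KB")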
14. Internal linking structure
Your internal linking structure can have a huge impact on your site’s crawlability from search spiders.
Where does it fall on your list of priorities? It depends. If you’re optimizing a massive site with isolated pages that don’t fall within a clean site architecture a few clicks from the home page, you’ll need to put a lot of effort into it. If you’re managing a simple site on a standard platform like WordPress, it’s not going to be at the top of your list.
You want to think about these things when building out your internal linking plan:
Scalable internal linking with plugins
Using optimized anchor text without over-optimizing
How internal linking relates to your main site navigation
I built out a map of a fictional site to demonstrate how different pages on a site can connect to each other through both navigational site links and in-content internal links (diagram source: Green Flag Digital).
Even with a rock-solid site architecture, putting a focus on internal links can push some sites higher up the search rankings.
What to do:
Test out manually how you can move around your site by clicking on in-content, editorial-type links on your blog posts, product pages, and important site pages. Note where you see opportunity.
Use site auditor tools to find and organize the pages on your site by internal link count. Are your most important pages receiving sufficient internal links?
What to do next:
Even if you build out the perfect site architecture, there’s more opportunity for internal link flow — so always keep internal linking in mind when producing new pages
Train content creators and page publishers on the importance of internal linking and how to implement links effectively.
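To get a first-pass view of internal link counts without a full crawler, a sketch like this one tallies how often each internal URL is linked from a seed set of pages. The seed URLs are hypothetical, the href regex is deliberately crude (it also matches <link> hrefs), and the requests library is assumed.

import re
from collections import Counter
from urllib.parse import urljoin, urlparse

import requests

# Hypothetical seed pages -- blog posts and key pages from your own site.
SEED_PAGES = [
    "https://www.example.com/",
    "https://www.example.com/blog/some-post",
]
DOMAIN = "www.example.com"

HREF = re.compile(r'href=["\']([^"\'#]+)["\']', re.IGNORECASE)
counts = Counter()

for page in SEED_PAGES:
    html = requests.get(page, timeout=10).text
    for href in HREF.findall(html):
        target = urljoin(page, href)
        if urlparse(target).netloc == DOMAIN:
            counts[target] += 1

# Pages that never show up here may be starved of internal links.
for url, n in counts.most_common(20):
    print(f"{n:3d}  {url}")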
Conclusion
Here’s a newsflash for site owners: it’s very likely that your developer is not monitoring and fixing your technical SEO problems, and doesn’t really care about traffic to your site. So if you don’t have an SEO helping you with technical issues, don’t assume your developer is handling it. They have enough on their plate and they’re not incentivized to fix SEO problems.
I’ve run into many technical SEO issues during and after website migrations that weren’t properly managed with SEO in mind, and I’m compelled to highlight the disasters that can unfold if this isn’t looked after closely by an expert. Case studies of site migrations gone terribly wrong are a topic for another day, but I implore you to take technical SEO seriously for the benefit of your company.
Hopefully this post has helped clarify some of the most important technical SEO issues that may be harming your site today and how to start fixing them. For those who have never taken a look at the technical side of things, some of these really are easy fixes and can have a hugely positive impact on your site.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!
0 notes