10 Common Technical SEO Issues and How to Fix Them

In my work as a technical SEO consultant and website auditor, I come across a lot of different technical SEO issues every day. Some of these issues are simply more common and more serious than others.


Your knowledge of these technical SEO issues will not only make you a better SEO but can also be a game-changer for the site that you are auditing.





The list below is neither complete nor final, so feel free to share other examples of technical SEO issues in the comments.


Let’s get started!


How do you check if a website has these technical SEO issues?

To be able to detect whether these issues are present on the site, you will need the following SEO tools:


  • Google PageSpeed Insights
  • Google Search Console
  • Sitebulb and/or Screaming Frog SEO Spider

Make sure to check my list of SEO auditing tools to discover even more tools that can help you detect technical SEO issues on sites.


10 Technical SEO Issues

These technical SEO issues are not listed in any particular order. Some of them clearly have a higher priority than others, but it’s simply worth knowing all of them.


☝️ Make sure to check the list of SEO best practices from Google and my list of SEO tips.


Technical SEO issue #1

Core Web Vitals not passed in the field but passed in the lab





If the page passes Core Web Vitals in the lab (Google Lighthouse) but shows orange or red for the field data (Chrome User Experience Report), then the site is considered to be failing the CWV assessment.


It is not unusual to see a page pass Core Web Vitals in the lab but not in the field. Good Google Lighthouse scores can give less experienced SEOs a false impression that the site is doing OK when, in fact, it is not.


Note that it can also be the other way around, i.e. the site has poor lab scores but good scores in the field.


Field and lab data in Google PageSpeed Insights

This is an example of a page that has poor lab scores but passes Core Web Vitals in the field.


Core Web Vitals are one of the four Google page experience signals (which are ranking factors). These include HTTPS, mobile-friendliness, and no intrusive interstitials, in addition to Core Web Vitals.

Field data and lab data are different. Google – when assessing websites in terms of Core Web Vitals – only takes into account the field data, i.e. the real-world data coming from actual users of the site.

That’s why, when optimizing for Core Web Vitals, you need to focus on the field data (the CrUX data), which is available in Google PageSpeed Insights and the Google Search Console Core Web Vitals report (if the site receives a significant amount of traffic).

Core Web Vitals report in Google Search Console

This is the GSC Core Web Vitals report showing field data.

The PageSpeed Insights tool is great for checking how one specific page is doing, while the GSC Core Web Vitals report lets you identify groups of pages with similar issues.

Field data in Google PageSpeed Insights

How do you fix this technical SEO issue?

The reasons for the problem can vary a lot, so it’s not possible to provide one simple fix. However, here are a few things you may try:


  1. Analyze each issue in the Google Search Console Core Web Vitals report and identify groups of pages with similar issues.
  2. Check sample pages from each group in Google PageSpeed Insights to get detailed suggestions on what’s worth optimizing.
  3. Identify which Core Web Vitals metric is problematic for a given page group and review the best optimization practices for that specific metric.

To learn more, make sure to check my guides on Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).
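If you want to check pages in bulk, you can also pull both types of data programmatically. Below is a minimal sketch of a PageSpeed Insights API request (the URL and API key are hypothetical placeholders); in the JSON response, the loadingExperience object holds the CrUX field data and the lighthouseResult object holds the lab data.


GET https://pagespeedonline.googleapis.com/pagespeedonline/v5/runPagespeed?url=https://example.com/&strategy=mobile&key=YOUR_API_KEY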


Technical SEO issue #2

Robots.txt disallows website resources





If the site’s resources, such as images, JS files, and/or CSS files, are disallowed in robots.txt, then the crawler may not be able to render the page correctly.


Googlebot and other search engine crawlers not only crawl but also render the pages they visit in order to see all of their content, even if the site relies heavily on JavaScript.


Here is an example of such an incorrect implementation:


User-agent: *

Disallow: /assets/

Disallow: /images/


However, by disallowing specific website resources in robots.txt, you make it impossible for bots to crawl these assets and render the page correctly. This can lead to undesired consequences, such as decreased rankings or indexing problems.


How do you fix this technical SEO issue?

The solution is pretty easy here. All you need to do is remove all the disallow directives that block the crawling of website resources.
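For example, assuming the only goal is to unblock the resources from the robots.txt shown above, the fixed file could look like this (an empty Disallow value means nothing is blocked):


User-agent: *

Disallow: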


Most content management systems allow you to edit robots.txt or set up the rules for how this file is generated.

You can also modify the robots.txt file by simply connecting to the server via SFTP and uploading the edited file.

If you are using WordPress, you may check my article on how to access and edit robots.txt in WordPress.


Technical SEO issue #3

XML sitemap contains incorrect entries





An XML sitemap should only contain the canonical indexable URLs that you want to be indexed and ranked in Google. Having a lot of incorrect URLs in the XML sitemap is a waste of crawl budget.


Here are examples of incorrect XML sitemap entries (a sketch of a clean entry follows the list):


  • URLs that return error codes like 5XX or 4XX,
  • URLs with a noindex tag,
  • canonicalized URLs,
  • redirected URLs,
  • URLs disallowed in robots.txt.
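For contrast, here is a minimal sketch of a clean XML sitemap entry (the URL and date are hypothetical); every <loc> should be a canonical, indexable URL that returns status 200 (OK):


<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/technical-seo-audit/</loc>
    <lastmod>2023-01-15</lastmod>
  </url>
</urlset>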


How do you fix this technical SEO issue?


In most cases, the sitemap is generated automatically, so you only need to adjust the rules that are used for XML sitemap generation.

In WordPress, it’s very easy to modify the settings of the XML sitemap with a plugin, such as Rank Math.

Any website crawler will tell you if the XML sitemap contains incorrect URLs. If you don’t know where the sitemap of the website is, check my guide on how to find the sitemap of a site.


Technical SEO issue #4

Incorrect, malformed, and/or conflicting canonical URLs





Unfortunately, there are many ways to implement canonical URLs incorrectly. The best-case scenario with incorrect canonical URLs is that the search engine will simply ignore them and choose the canonical version of a given page on its own.


Here are examples of what can go wrong with the implementation of canonical URLs:


  • The canonical link element is specified outside the head (e.g. in the body section)
  • The canonical link element is empty or invalid
  • The canonical URL points to the HTTP version of the URL
  • The canonical URL points to a URL with a noindex tag
  • The canonical URL points to a URL that returns a 4XX or 5XX error code
  • The canonical URL points to a URL that is disallowed in robots.txt
  • The canonical URL is not present in the source code but only in the rendered HTML

This is the overview of canonicals in Screaming Frog.


How do you fix this technical SEO issue?

Fixing this is relatively easy. All you need to do is update the canonical link elements so that they point to the actual canonical URLs.
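For example, a correct implementation is a single canonical link element placed in the head and pointing to the absolute, HTTPS, indexable version of the page (the URL below is a hypothetical example):


<head>
<link rel="canonical" href="https://example.com/technical-seo-audit/" />
</head>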


If you crawl the site with Sitebulb or Screaming Frog, you will be able to see exactly which pages need optimization in this regard.


Technical SEO issue #5

Conflicting nofollow and/or noindex directives in HTML and/or the HTTP header





If the website has multiple conflicting nofollow and/or noindex directives in the HTTP header and/or HTML, then Google will most likely choose the most restrictive directive.


This can be a serious problem if the more restrictive directive, such as “noindex”, was added accidentally. This applies to multiple nofollow/noindex directives in either HTML or the HTTP header, or both.


Here is an example of such an incorrect implementation:


The content of the HTTP header says that the page should be indexed and followed.

HTTP/... 200 OK


X-Robots-Tag: index, follow

The content of the meta robots tag in the head says the page should not be indexed.


<title>SEO</title>

<meta name="robots" content="noindex,follow">


The noindex/nofollow directives should be specified just once, in either HTML or the HTTP header.
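For example, assuming the intention was for the page from the snippets above to be indexed, the fix is simply to remove the conflicting noindex meta tag so that only one consistent directive remains (a sketch):


HTTP/... 200 OK

X-Robots-Tag: index, follow


<title>SEO</title>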


How do you fix this technical SEO issue?

Fixing this issue is relatively easy. All you need to do is remove the extra directive and leave only the one that you want Google and other search engines to follow.


Just like above, you need to use a crawler to extract the problematic URLs.

If the issue relates to a relatively small number of pages, you can update them manually.

If, however, it concerns thousands or millions of web pages, then you need to modify the rule that is adding these multiple nofollow and/or noindex tags.


Technical SEO issue #6

Nofollowed and/or disallowed internal links





❗Nofollowing and/or disallowing an internal URL may prevent it from ranking, because Google will not be able to read the content of the URL (if it is disallowed in robots.txt) or no link equity will be passed to the URL (if it is internally nofollowed).


If you don’t want Google to index a specific URL, simply add a noindex tag to it.

Disallowing the URL in robots.txt will not prevent it from being indexed.
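For example, here is a minimal sketch of the correct way to keep a hypothetical page out of the index – a noindex meta tag in the head instead of a robots.txt disallow:


<meta name="robots" content="noindex">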

Unless you have a very good reason, nofollowing internal links is generally not a good idea in terms of SEO.

SEOs used to nofollow internal links pointing to terms & conditions or privacy policy pages. However, Google has confirmed many times that this is not necessary.


How do you fix this technical SEO issue?

All you need to do to fix this issue is remove the disallowed URLs from robots.txt and remove the “nofollow” attribute from internal links.
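For example, removing the nofollow attribute from an internal link looks like this (a sketch with a hypothetical privacy policy URL):


Before: <a href="/privacy-policy/" rel="nofollow">Privacy Policy</a>

After: <a href="/privacy-policy/">Privacy Policy</a>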


Use a website crawler, such as Sitebulb, to show you all the nofollowed internal URLs and where they are placed.

You can also use the NoFollow Chrome extension, which will mark nofollowed links on any page you visit. This is especially helpful if you are doing a manual review of the site.

NoFollow Chrome Extension

This is how the NoFollow Chrome extension highlights nofollow links on a page.


Technical SEO issue #7

Low-value internal followed links





Low-value internal links carry no SEO information about the URLs to which they point. This is a huge waste of SEO potential.


Internal links are one of the strongest signals providing information about the linked URLs. That’s why SEOs should always try to make the most of internal linking.


Here are examples of low-value links:


Text links with generic anchor text, such as “Read more”, “Click here”, “Learn more”, and so on.

Image links with no ALT attribute. The ALT attribute in an image link acts as the anchor text does in a text link.

While it is a serious problem if all or the majority of your internal links have the “Read more” anchor text, it is much less of an issue if there are two internal links pointing to a specific page and one link has relevant anchor text while the other is of the “Learn more” type.


Technical SEO issues: low-value links

Here the problem is not very serious, as there is both a relevant text link and a low-value “Read more” link.

How do you fix this technical SEO issue?

In a perfect SEO world, you would only have high-value text and image internal links.


The best way to fix this is to simply remove all the “Read more” links and replace them with text links with relevant anchor texts.

If you can’t remove the “Read more” links, at least make sure to add another high-value text link pointing to the same URL.

For example, in the case of a blog post title, you may have two links, one with the “Read more” text and the other with descriptive anchor text like “technical SEO audit guide”.
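Here is a minimal sketch of that pattern (the URL is hypothetical) – the descriptive anchor text carries the SEO information, while the “Read more” link is kept only for usability:


<a href="https://example.com/technical-seo-audit/">technical SEO audit guide</a>

<a href="https://example.com/technical-seo-audit/">Read more</a>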


Technical SEO issue #8

No outgoing and/or incoming internal links





If a given URL does not have any outgoing and/or incoming internal links, then it is not passing/receiving any link equity to/from other web pages.


If the URL in question does not “aspire” to rank in Google and/or is just a landing page, then it’s not an issue. In that case, it’s usually best to simply add a “noindex” tag to such a URL.


However, if the URL is an important web page that you want to bring organic traffic and rank highly, then the page may have a problem getting indexed and/or ranked in Google.


How do you fix this technical SEO issue?

To fix this issue, you should add text links (with relevant anchor texts) both from and to that URL.


The incoming links should ideally come from other thematically related pages.

The outgoing links – similarly – should point to other related web pages.

For example, my technical SEO audit page should link to and be linked from a similar page like my Core Web Vitals audit.


Technical SEO issue #9

Internal and/or external redirects with issues





Both internal and external redirects can lead to a bad user experience and can be misleading for search engine robots (especially if these redirects do not work correctly).


Similar to canonical URLs, there are a lot of things that can go wrong with redirects on a website.


Here are some of the most common problems in this regard:


  • An internal redirected URL returns error status codes like 4XX or 5XX.
  • An external redirected URL returns 4XX or 5XX.
  • A URL redirects back to itself (a redirect loop).

All of the above example issues – if they affect a large number of URLs on the site – can have a negative impact on the site’s crawlability and user experience. Both users and search engine robots may abandon the site if they encounter a broken redirect.


How do you fix this technical SEO issue?

Fortunately, any crawler will show you exactly which URLs have this issue. Here is how you fix it:

  1. In the case of internal URLs, you simply need to update the target URLs so that they return status 200 (OK).
  2. For external redirected URLs, you should remove the links pointing to these redirected URLs or replace them with other URLs returning status code 200 (OK).
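For example, at the HTTP level a broken internal redirect looks like this – the redirect target itself returns an error instead of 200 (OK) (a sketch with hypothetical URLs):


GET /old-page/

HTTP/... 301 Moved Permanently

Location: https://example.com/new-page/


GET /new-page/

HTTP/... 404 Not Found


The fix is to point the redirect (and, ideally, the internal links themselves) at a URL that actually returns status 200 (OK).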


Technical SEO issue #10

Internal links to redirected URLs





If the site has URLs that are redirected to other internal URLs, then it should not link to the redirected URLs but to the target URLs.


While you can’t control whether the external URLs that you link to become redirected at some point, you have full control over your internal URLs.


That’s why you should make sure that your site does not link to internally redirected URLs. Instead, all internal links should point to the target URLs.


Example technical SEO issue

Here Sitebulb shows the internally redirected URLs and the pages on which these URLs are placed.

For example, if A is redirected to B, you should not place internal links to the A URL but rather to the B URL. This is not a fatal mistake, but pointing links at the final URL is a very good SEO practice for the crawlability of the site.
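In HTML terms, the fix looks like this (a sketch with hypothetical URLs, where /a/ is 301-redirected to /b/):


Before: <a href="https://example.com/a/">descriptive anchor text</a> – resolves only after a 301 redirect

After: <a href="https://example.com/b/">descriptive anchor text</a> – resolves directly with status 200 (OK)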

