Now that you know what SEO is, why it is important, and what the optimal positioning of a web page looks like, it is time to learn how to rank your website in the top positions of Google’s search results (SERPs).
In this article, we are going to guide you through the most effective methods to configure and optimize the different areas of SEO, as well as the main problems that arise when optimizing a website and their solutions.
We will divide this article into four learning blocks:
- Accessibility
- Indexability
- Content
- Meta-tags
Accessibility of your website in 2021
The first step to optimizing SEO on a web page is to facilitate the work of search engines like Google so that they can understand the web page more easily. In layman’s terms, you have to check the factors that affect accessibility, because misconfigurations can hide the website from search engines, or expose a page that we do not want to be visible in searches, such as the administration login page.
You will have to take into account several essential aspects to improve accessibility:
- Sitemap
- Robots.txt file
- Web structure, JS and CSS
- Web speed
- HTTP status codes
Sitemap
- The Sitemap has to be uploaded to Google Webmaster Tools.
- The Sitemap needs to be updated; every time you make changes to your web page, check that the new pages are in the Sitemap file.
- The Sitemap has to follow the protocols, otherwise, Google will not process it properly (a minimal example is shown after the steps below).
If Google does not have your website’s Sitemap, you must create one by following the steps below:
- Create an Excel document with all the pages that you want Google to index. You can reuse the same Excel file you create when checking the HTTP response codes (covered later in this article).
- Create the Sitemap. For this, we recommend using the website called Sitemap Generators (it is easy to use).
- Compare the pages that are in your Excel document (the one you created in the first step) with those in the Sitemap that you generated with Sitemap Generators. Then remove from the Excel document any pages that you do not want to be indexed.
- Upload the Sitemap through Google Webmaster Tools.
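As a reference, a minimal Sitemap that follows the protocol (the sitemaps.org XML format) would look something like this; the URLs and the date are placeholders:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.yourwebsitedomain.com/</loc>
    <lastmod>2021-01-15</lastmod>
  </url>
  <url>
    <loc>http://www.yourwebsitedomain.com/blog/</loc>
  </url>
</urlset>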
Robots.txt file
Think of the robots.txt file as the gatekeeper responsible for preventing search engines like Google from indexing certain parts of a website. It is useful for keeping pages that we do not want the public to see out of Google, such as the WordPress administration panel. Use this file carefully to control what should and shouldn’t be indexed.
You should check that the robots file is working correctly and that it is not blocking any important part of your website. You can do this by visiting www.yourwebsitedomain.com/robots.txt or through Google Webmaster Tools.
The robots file is also very useful because it allows you to indicate where your website’s Sitemap is located by adding the location at the end of the document. A complete robots file would look something like this:
User-agent: *
Disallow: /wp-admin
Sitemap: http://www.yourwebsitedomain.com/sitemap.xml
If you want to go into more detail about what this file is, we recommend reading the official documentation of the robots.txt standard.
Web structure, JS and CSS
If your web page is very large, Google may find it difficult to reach every page. In general, it is recommended that the structure of a web page be no more than three levels deep (not counting the home page). The Googlebot has a limited time to crawl all the content of a web page; therefore, the more time it spends traversing each level, the less time it will have to analyze the deeper pages of your website.
Example of a vertical structure on a website:
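A rough sketch (the original diagram is not reproduced here), with each level hanging off the previous one:
Home > Category > Subcategory > Product page
In this sketch the product page sits three levels below the home page, so the Googlebot spends part of its limited crawl time just reaching it.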


Our advice at Volume99 is to create a scheme that defines the horizontal structure your website will have, based on the most important sections your website needs, while adapting to your niche. For example, at Volume99 we divide our website into four essential sections (not counting the home page), so it looks something like this:
- Services
  - Paid Ads
  - SEO
  - Copywriting
  - Social Media Management
  - Digital Public Relations
  - Conversion Rate Optimisation
- About Us
- Blog
- Contact Us
As you can see, our pages do not go beyond the second level, yet the site remains extensive enough for Google’s tools to analyze (for example, the blog and its articles, which count as second-level pages).
The reality is that Google has, over the years, become better at reading JavaScript and CSS. Even so, we must be careful, as both can affect our content: JavaScript can hide part of it, and CSS can change the order in which it is displayed to search engines like Google.
There are two ways to check how Google reads your pages:
- Plugins
- Command “cache:”
Plugins
Plugins, like Disable-HTML or Web Developer, help us see how a search engine like Google “crawls” the web. To do this, you need to open one of these tools and disable the JavaScript on your website. We do this because all drop-down menus, links and texts must be easily readable by Google.
Next, deactivate the CSS of your web page to see the real order of the content; the CSS can change this completely.
Command “cache:”
The easiest way to see how Google sees your web page is through the command “cache:”. This would be done as follows:
cache:www.yourwebsitedomain.com
This way, if you select the “text-only version”, Google will show you a copy of your web page where you can see how Google reads it and when it last crawled it.
Bear in mind that for the “cache:” command to work correctly, the page must already be indexed by Google.
Once Google indexes a web page for the first time, it determines how often it will revisit it for updates or new content. This will not only depend on the authority and relevance of the domain to which that page belongs, but also on the frequency with which it is updated.
The loading speed of your website
Google’s robots, or “spiders”, have a limited amount of time to analyze each website, and they run every day of the year. Thus, the less time your web page takes to load, the more time they will have to analyze your pages and go deeper into your website.
This is also an important factor when Google decides which position each web page will be in; a slow-loading speed is translated into a bad user experience. This is something that Google punishes frequently by causing the page to lose positions in the search results.
There are different tools to help you determine how fast your web page is. We strongly recommend using GTmetrix, although you can also use Google PageSpeed Insights if you want to check which problems are affecting your website’s loading speed; this way you can find and fix them.
HTTP status codes
If a URL or a web page returns an error status code (404, 502, etc.) as a response, users and search engines will not be able to access it. To identify whether any of the indexed pages of your website return this type of code, we recommend using Screaming Frog, as it quickly shows the status of all the URLs on your website.
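If you only need to check a handful of URLs, you can also request the response headers from the command line with curl; the path below is just a placeholder. The first line of the output shows the status code (for example, HTTP/1.1 200 OK or HTTP/1.1 404 Not Found):
curl -I http://www.yourwebsitedomain.com/some-page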
Indexability of your website
Once the Google robots enter a page, the next step is to index it in the search results. These pages will be added to an index in which their content, authority and relevance are all taken into account to make it easier, faster and more practical for Google and users to access each of these pages.
How to check whether Google has indexed your web page, and whether it has done so correctly:
Firstly, to check whether Google has indexed your website correctly, carry out a search with the command “site:”. Google will then show you the approximate number of pages of your website that it has indexed:
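For example, typed directly into the Google search box (the domain is a placeholder):
site:www.yourwebsitedomain.com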

If you have Google Webmaster Tools linked to your website, you can also check the actual number of pages indexed by Google by going to Google Index > Indexing Status.
Knowing the (more or less) exact number of pages that your website has will help you compare it with the number of pages that Google has indexed. Following this, three scenarios can occur:
1. The number of pages that appear indexed in Google and the number of actual pages on your website are similar. This means that everything is fine because it is displaying the pages correctly.
2. The number of pages that appear indexed in Google is less than the number of actual pages on your website. This happens because the Googlebot did not or cannot access all the pages on your website. To solve this, check the accessibility section of this article.
3. The number of pages that appear indexed in Google is greater than the number of pages on your website. This means that your website has duplicate content or that Google is indexing pages that you do not want to be indexed (for example, the WordPress administration login).
Duplicate content
Having duplicate content means that multiple URLs on your website have the same content. This is one of the most common problems when building a new website. This happens unintentionally and can also have very negative effects on SEO positioning in Google.
These are the three most common reasons why there is duplicate content:
- Canonicalization of the page.
- Parameters in URLs.
- Pagination.
Canonicalization of the page
This is the most common reason for duplicate content on a website. This occurs when your home page has more than one URL. For example:
volume99.com
www.volume99.com
volume99.com/index.html
www.volume99.com/index.html
Each of these examples leads to the same page; however, if Google is not told which one should be indexed, it could index or position the wrong one.
This can be solved in three different ways:
1. Set up a redirect on the server to make sure that only one version of the page is shown to users. If someone enters the wrong one, they will be automatically redirected to the correct one.
2. Define which subdomain you want to be the main one on your website (“www” or “non-www”) in Google Webmaster Tools.
3. Add a rel="canonical" tag to each version, pointing to the one you choose as the correct URL for your website (see the example below).
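As a minimal sketch (assuming you pick www.volume99.com as the main version), each of the other versions would include this tag in its <head>:
<link rel="canonical" href="http://www.volume99.com/" />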
URL parameters
There are many types of parameters, especially in e-commerce web pages, such as product filters (color, quantity, score, etc.) and ordering (lower price, by relevance, higher price, grid view, etc.). The problem is that many of these parameters do not change the content of the page, which generates many URLs for the same content and obviously has negative consequences. For example:
www.ecommercewebsite.com/computers?color=black&price-from=1000&price-to=3000
In this example, we find three parameters: the color, the minimum price and the maximum price of the products.
How can this be solved?
If you add a rel="canonical" tag pointing to the original page, you will avoid any confusion on Google’s side about which URL is the original.
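As a minimal sketch (assuming the unfiltered category page at www.ecommercewebsite.com/computers is the original you want indexed), the filtered URL above would include this tag in its <head>:
<link rel="canonical" href="http://www.ecommercewebsite.com/computers" />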
Pagination
When an article, product list, or tag and category page spans more than one page (due to its large amount of content), duplicate content problems can occur, even though the pages have different content, because they are all focused on the same topic. This is a huge problem for e-commerce pages where there are hundreds of articles in the same category.
How can this pagination problem be solved?
The solution comes from the rel="next" and rel="prev" tags, which let search engines know which pages belong to the same category or publication. Thus, it is possible to focus all of the positioning potential on the first page.
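As a minimal sketch (the URLs are placeholders), page 2 of a paginated category would include these tags in its <head>:
<link rel="prev" href="http://www.ecommercewebsite.com/computers?page=1" />
<link rel="next" href="http://www.ecommercewebsite.com/computers?page=3" />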
Content of your website in 2021
For years it has been known that content is king on Google. Therefore, you have to give it a good throne.
Content is the most important factor in the SEO positioning of a web page. Even if a page is well optimized technically, if you do not have content that is interesting or relevant to the users doing the searches, it will be very difficult for you to appear in the top positions of the search results.
There are various tools to help you create effective content for your web page, but in the end, the most useful thing is to view the page with JavaScript and CSS disabled, as we explained earlier in this article. In this way, you will see what content Google is actually reading and in what order it presents it to interested people.
When analyzing the content of the pages, you should ask yourself several questions that will guide you in the process of making articles more useful for your audience and the general public:
1. Is the content relevant and interesting? It should be useful for the reader and interesting to read – ask yourself if you would read it and be honest with yourself.
2. Do you have important keywords in the first paragraphs? In addition to these, you must use related terms because Google is very effective at relating terms. However, you must do this in a natural way because doing this in excess will make Google see this as a very negative point.
3. Does the page have enough content? Is your article good enough? There is no standard measure for how much “enough” is, but it should be at least 300 words long and the article should address very specific topics for a good ranking.
4. Do you have spelling mistakes? If so, it is better to correct them as no one will like to read a poorly written article; this creates feelings of mistrust in the information you are providing.
5. Do you have keyword stuffing? If the content of your website has an excess of keywords, you will do the opposite of a good thing. There is nothing that defines a good “keyword density”; Google recommends being as natural as possible.
6. Is your article easy to read? If reading it is not tedious, it will be fine. The paragraphs should not be very long since this is frowned upon by Google. In addition, the typeface should not be very small and it is recommended to have audiovisual content.
7. Is the content recent? The more up-to-date the content on your website is, the higher the frequency of Google crawling and the better the user experience.
Meta-tags of your website
Meta-tags are used to give search engines information about what a specific page is about, which they use when sorting and showing their results in the SERPs. These are the most important tags to take into account:
Title tag:
The title tag is the most important element within meta-tags; it is the first thing that appears in the results in Google.
When optimizing the title you have to consider that:
- The tag must be in the <head> section of your website’s code.
- It should not exceed 70 characters, otherwise, it will appear cut off in Google results.
- It must be descriptive with respect to the content of the web page.
- It must contain the keyword for which we are optimizing the page.
- Each page must have a unique title.
You should never stuff keywords into the title of a web page, as this will cost you your users’ trust and make Google think that you are trying to deceive it.
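Putting these guidelines together, here is a minimal sketch of a title tag, assuming a page optimized for the made-up keyword “SEO audit” (it sits in the <head> and stays under 70 characters):
<title>SEO Audit: How to Review Your Website Step by Step</title>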
Meta-description:
Although it is not a critical factor in the positioning of a web page, it considerably affects the click-through rate. This is because the meta-description makes a web page look better or worse in Google search results.
For the meta-description, you must follow the same principles as with the title, except that its length should not exceed 155 characters. For both the title tags and the meta-descriptions, we must avoid duplication. We can check this in Google Webmaster Tools under Search Appearance > HTML Improvements.
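A minimal sketch (the wording is a placeholder that stays under 155 characters), also placed in the <head>:
<meta name="description" content="Learn how to optimize your website's SEO step by step: accessibility, indexability, content and meta-tags explained with practical examples.">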
H1, H2, H3 … tags
The H1, H2, etc. tags are very important for a good information structure in the content of your web page. Additionally, they create a good user experience, as they help define the hierarchy and order of the content, which improves SEO. We must give the most importance to the H1 because it is usually in the highest part of the content, and the higher up a keyword appears, the more importance Google will give it when positioning the page.
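As a minimal sketch of a heading hierarchy (the topic and wording are placeholders):
<h1>How to Optimize Your Website's SEO</h1>
<h2>Accessibility</h2>
<h3>Robots.txt file</h3>
<h3>Sitemap</h3>
<h2>Indexability</h2>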
The “alt” tag in images
This tag should describe the image and the content within it. It is what Google reads when crawling the image and one of the factors it uses to position it in Google Images.
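A minimal sketch (the file name and the text are placeholders):
<img src="black-gaming-laptop.jpg" alt="Black gaming laptop with a backlit keyboard on a white desk">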
Conclusion
At this point in the article, if you have read everything and followed the steps correctly (correcting all the errors on your website and providing quality content), you will have increased your chances of ranking in the best positions of the search results.
However, perhaps now you are wondering: what are the keywords that will best position my website? We don’t know exactly what those keywords are, but we can help you find them in our next article, “How to do Keyword Research?”
