As a digital marketer, it’s probable that you’ll be tasked with making technical recommendations for a client’s website, most likely at the start of the campaign. The cornerstone of this process will be the SEO Technical Audit. Identifying and drawing insights from the many technical elements will inform next steps, namely what will make the valuable pages of a website more visible in search results and more accessible to users.
Below, we will unpack the fundamental technical aspects that are integral in 2017, the nuances of each element, and their influence on the functionality of a website.
Meta Data
Title Tags and Meta Descriptions are snippets of HTML code that serve to inform users and search engines of the page subject. If optimized correctly, the information provided should relate to a user’s search query, which should subsequently improve click-through rates.
To establish proper optimization, there are just a few guidelines for each element that should be accounted for when auditing a website (a sample snippet follows both lists below):
Title Tags
• Lead with relevant keywords at the front of the Title Tag
• To leverage brand awareness, incorporate the brand name at the end, maintaining a uniform layout site-wide
• A Title Tag should not exceed 50-60 characters (or 512 pixels). Search engines may cut off additional characters if this length is exceeded
• Overall, they should be unique and compelling to enhance visibility and user experience
Meta Descriptions
• 1-2 focus keywords should be included and consistent with those used in Title Tag, Header Tags, and in most cases the URL
• The text should align with the page subject
• The optimal length is 150-160 characters
• Meta Descriptions should be unique
• A Meta Description should be written as a compelling piece of ad copy that inspires curiosity and invokes a user’s desire to click through
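To ground these guidelines, here is a minimal sketch of how both elements sit in a page’s head section; the page topic, brand name and wording are hypothetical:

    <head>
      <!-- Title Tag: relevant keywords first, brand name last, kept within the character limit -->
      <title>Technical SEO Audit Checklist | Example Agency</title>
      <!-- Meta Description: unique, keyword-consistent ad copy kept under ~160 characters -->
      <meta name="description" content="Learn how to run a technical SEO audit, from Meta Data and crawlability to site speed, with a practical checklist for your website.">
    </head>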
Crawlability & Indexation
Search engine bots crawl webpages and then store copies of these pages in their indices. The crawlability of a page is one of the many driving forces that determine which pages are returned following a Google search. If a search bot can’t crawl a page, then the search engine certainly won’t be able to show that page to a user who might find the information relevant to their query.
To avoid such problems, let’s take a look at three components you should be looking for that will help facilitate the crawlability and indexation of a webpage:
XML Sitemap:
This document provides an easily digestible menu of the value-driven pages, per your discretion, that you’ve chosen to tell search engines about, which will streamline the crawling and indexation process. Furthermore, it’s a quick way to tell Google when fresh content has been published to a website and which content pieces are the originals, which will aid ranking improvements in time.
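As a point of reference, a bare-bones sitemap.xml follows the structure sketched below; the URLs and dates are placeholders:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/services/technical-seo/</loc>
        <lastmod>2017-05-01</lastmod>
      </url>
      <url>
        <loc>https://www.example.com/blog/seo-audit-checklist/</loc>
        <lastmod>2017-04-18</lastmod>
      </url>
    </urlset>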
HTML Sitemap:
Unlike the XML Sitemap, an HTML Sitemap is written for human consumption, not search engines. If a user can’t find particular content through the site navigation, this sitemap will help improve that user experience.
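For illustration, an HTML Sitemap can be as simple as a linked outline of the site; the sections below are hypothetical:

    <!-- A simple HTML sitemap page: plain links grouped for human visitors -->
    <h1>Sitemap</h1>
    <ul>
      <li><a href="/services/">Services</a>
        <ul>
          <li><a href="/services/technical-seo/">Technical SEO</a></li>
          <li><a href="/services/content-marketing/">Content Marketing</a></li>
        </ul>
      </li>
      <li><a href="/blog/">Blog</a></li>
      <li><a href="/contact/">Contact</a></li>
    </ul>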
Robots.txt:
This file lives in the root directory of your site and tells search engine bots which pages to access and index and which pages they should avoid. The functionality of this file is essential not only for the SEO aspect, but also for the privacy of areas of a site.
Upon inspection of a robots.txt file, you should ensure that the directives are aligned with what crawlers should and should not be accessing on the site. For example, a ‘disallow’ directive may be used for page, category or site-level content, as well as resource and image files, as needed. Additionally, the robots.txt file should always point search bots to the XML sitemap.
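A simple robots.txt might look like the sketch below; the disallowed paths are placeholders and should reflect the areas of the actual site that need to be kept out of the crawl:

    # robots.txt, served from https://www.example.com/robots.txt
    User-agent: *
    # Keep private or thin areas out of the crawl
    Disallow: /checkout/
    Disallow: /internal-search/
    # Point search bots to the XML sitemap
    Sitemap: https://www.example.com/sitemap.xml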
URL Structure
The structure of a URL will help describe a page to visitors as well as search engines, much like the Title Tag of a page. Dependent on where a page lives within a site, the paths and taxonomy of the URL should align with the site navigation and page subject. Some additional guidelines to adhere to are as follows:
• URLs should be kept as succinct as possible, avoiding excessive use of folders
• Keywords that best represent a page are still useful
• Words in URLs should be separated by hyphens
• Unnecessary punctuation characters should be removed
• Use of certain prepositions or ‘stop’ words should be avoided (of; or; and; a; etc.)
• Dynamic parameters may be excluded, if advisable
If any areas of a website are misrepresented in their URL formatting, it’s important that recommendations are made as the user experience can be compromised and search bots may be confused during the crawl & indexation phase.
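As a hypothetical before-and-after, applying the guidelines above would turn a parameter-heavy URL into a short, hyphenated, keyword-driven path:

    Before: https://www.example.com/index.php?cat=12&prod=4587&sessionid=ab12
    After:  https://www.example.com/mens-shoes/trail-running/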
Secure Protocol
Since Google launched the HTTPS everywhere campaign back in 2014, the push has only intensified. In 2016, Google representatives expressed that making websites secure should definitely be on the radar for webmasters in 2017. That simply means switching your website from HTTP to HTTPS.
HTTP (Hypertext Transfer Protocol) is the structure for transferring and receiving data throughout the Internet. It is an application-layer protocol that delivers information to a user regardless of the channel it takes to do so.
Through HTTPS (Secure Hypertext Transfer Protocol), the exchange of authorizations and transactions is protected through an additional layer known as SSL (Secure Sockets Layer), which encrypts sensitive data in transit.
Since Google is trying to promote a more secure web experience for its users, it has announced that HTTPS is indeed a ranking signal, albeit a small one. Moreover, Google search bots have begun to prioritize secure pages over unsecure pages. Encrypting a website helps prove its integrity and authenticity.
When compiling a technical audit, ensure that this element is addressed and next step recommendations to obtain a security certificate are included.
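The migration itself typically involves installing the security certificate and then forcing a single redirect from HTTP to HTTPS. A minimal sketch, assuming an Apache server with mod_rewrite enabled, might look like this in the site’s .htaccess file:

    # Force HTTPS with a single 301 redirect (assumes Apache with mod_rewrite)
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]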
Canonicalization & Redirects
When conducting a technical audit, crawling the website will be one of the first fundamental steps to help you forge ahead and make valuable assessments. A popular tool of choice among SEOs is Screaming Frog, which will provide a comprehensive breakdown of the file type, Meta Data, status code, Canonical URL, etc. for each crawled page of a site.
So, let’s get to the point.
Canonicalization
If a search engine is able to crawl multiple pages that house virtually the same content, it’s going to have a hard time determining which page version should be selected to appear in the search results for a given query, and thus might not choose any of them.
To make matters worse, if different sources are linking to these multiple page versions, the authority of the primary page version will be diluted, and the trust and credibility of that page reduced.
Ensure that if a website contains multiple page variations, they all point to one common URL, which is assigned via the Canonical tag. Duplicate page version examples are listed below, followed by an example Canonical tag:
1. Pages with filter parameters
2. Paginated pages
3. Secure, https pages
4. Non-secure, http pages
5. www pages
6. Non-www pages
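For any of the variations above, a single Canonical tag placed in the head of each duplicate tells search engines which URL is the preferred version; the URL below is a placeholder:

    <!-- Placed in the <head> of every duplicate variation, pointing to the preferred URL -->
    <link rel="canonical" href="https://www.example.com/mens-shoes/trail-running/">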
Redirect Chains
A redirect chain is a series of redirects that point from one URL to the next, forcing users and search engines to wait for these unnecessary steps before they reach their intended URL. A commonly cited rule of thumb is that roughly 10% of Page Authority is lost with each redirect in the chain.
A feature available in Screaming Frog makes it quite easy to identify any redirect chains from a simple report. From there, flag the chains that need to be eliminated so that, once updated by your development team, the site will garner an improvement in user experience, indexation and page load time.
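To illustrate with hypothetical URLs, the goal is to collapse every chain into a single hop:

    Chain:  http://example.com/old-page
            -> 301 -> http://www.example.com/old-page
            -> 301 -> https://www.example.com/old-page
            -> 301 -> https://www.example.com/new-page
    Fixed:  http://example.com/old-page -> 301 -> https://www.example.com/new-page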
Site Speed Optimization
Beyond first glance, there are several factors that contribute to a website’s speed and performance. In a digital age, how our devices serve us is paramount to our productivity at work, school and home, and we need them to respond as fast as possible.
Since page load time dictates user behavior, Google also wants the pages of our websites to load fast and has stated that site speed warrants a small ranking advantage.
One very useful tool that will provide insights on website performance is GTmetrix. Following a quick analysis of your website, you’ll be able to discern how fast (or slow) your website loads and prioritize areas of concern, such as reducing image file sizes, leveraging browser caching and deferring the parsing of JavaScript files.
Assessments and recommendations can easily be made as the report also provides a granular look at the specific resources that require attention.
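For example, the “defer parsing of JavaScript” recommendation is often resolved with a one-line markup change; the script path below is hypothetical:

    <!-- The defer attribute lets the script download in parallel and execute
         only after the HTML document has been parsed -->
    <script src="/js/main.js" defer></script>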
Rich Snippets/Schema Structured Data
Structured Data is a type of HTML markup applied to pertinent pieces of content on a website that, when read by search engines, helps them quickly interpret the information. The result, although not guaranteed, is the population of rich snippets in the search results, which enhances the searcher’s experience, leading to greater click-through rates.
Some of the most common types of Schema are as follows:
• Organization
• Corporation
• Local Business
• Government Organization
• Sports Organization
• Educational Organization
A full resource listing the areas of a website that can be included for markup can be found here.
To determine whether or not a webpage has implemented Schema structured data, or whether it’s been properly configured, Google’s Structured Data Testing tool is a comprehensive resource.
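As a sketch, Organization markup written in the JSON-LD format that Google accepts might look like the following; the company details are placeholders:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Organization",
      "name": "Example Agency",
      "url": "https://www.example.com/",
      "logo": "https://www.example.com/images/logo.png",
      "contactPoint": {
        "@type": "ContactPoint",
        "telephone": "+1-555-555-5555",
        "contactType": "customer service"
      }
    }
    </script>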
Now that you’re equipped with the nitty-gritty of the most fundamental technical components, you can dive deep into the auditing phase! Remember that the SEO technical audit is only as valuable as the dedicated research that goes into it. No component can be overlooked, regardless of how laborious this deliverable may be. Once your recommendations are implemented, over time you should begin to see the positive impact of the above changes and the result that they have on a website’s user experience and search engine performance.