Important Components of Technical SEO Audit
Conducting a deep technical audit is a big undertaking. As an SEO consultant, you must be thorough in your work, knowing where to start and which steps to follow in succession. Here is a list of components your technical SEO audit must feature.

Tools you need for technical SEO audit
The tools you will need for a technical SEO audit include Screaming Frog, DeepCrawl, Copyscape, and either Xenu Sleuth (on PC) or Integrity (on Mac). If you are given access, you will also need to work with Google Analytics, Google Search Console, and Bing Webmaster Tools.

Aspects to check using DeepCrawl
Depending on how big the site is, the crawl might take a day or two to return results. Once it finishes, check the results for the following issues one by one and follow the steps described to fix them.

Duplicate Pages
If you find any duplicate pages, ask the client to rewrite them and, in the meantime, add a noindex, nofollow meta robots tag to the duplicate pages.

Update the parameter settings in Google Search Console to exclude any parameters that do not generate unique content. To improve the crawl budget, add a Disallow rule for the incorrect URLs in robots.txt, as in the sketch below.
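As an illustration only, assuming the duplicates come from a hypothetical sessionid parameter, the interim noindex tag and the robots.txt rule might look like this:

  <!-- On each duplicate page, until it is rewritten -->
  <meta name="robots" content="noindex, nofollow">

  # robots.txt - block crawling of URLs generated by the session ID parameter
  User-agent: *
  Disallow: /*?sessionid=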

Pagination
With regard to pagination errors, check two reports: ‘First Pages’ and ‘Unlinked Pagination Pages’.

Review the ‘First Pages’ report to see which pages are using pagination, then manually review those pages on the site to check that pagination is implemented correctly. To confirm pagination is working as expected, open the ‘Unlinked Pagination Pages’ report and see whether the rel=”next” and rel=”prev” attributes link to the corresponding pages.

If you find a “view all” or a “load more” page, add a rel=”canonical” tag pointing to it from the paginated pages.

If every paginated page exists as a separate page, add rel=”next” and rel=”prev” markup.

If you use infinite scrolling, add the equivalent paginated page URL in the JavaScript.
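For example, on a hypothetical three-page category, the markup in the head of page 2 might look like the sketch below, with an optional canonical pointing to a “view all” version if the site has one:

  <!-- Hypothetical URLs for page 2 of a paginated category -->
  <link rel="prev" href="https://www.example.com/category/page/1/">
  <link rel="next" href="https://www.example.com/category/page/3/">
  <!-- Optional: canonical to the "view all" version, if one exists -->
  <link rel="canonical" href="https://www.example.com/category/view-all/">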

Max Redirections
Check the “Max Redirections” report to find all pages that are redirected more than four times.

301: The majority of codes you will see during your research are 301. If there is only one 301 redirect without any redirect loop, you need not worry about them.

302: It is fine to see 302 redirects in the short term, but if they remain in place for more than three months, change them to 301s to make them permanent.

400: Users won’t be able to access the page.
403: Users are not authorized to access the page.
404: The page is not found; this usually happens when the client has accidentally deleted the page without applying a 301 redirect.

500: An internal server error; report it to the web development professionals to figure out the cause.

To fix these errors, remove all the internal links that point to old 404 pages and update them to point to the redirected, live pages. You can also undo redirect chains by removing the middle redirects. You can do this in Screaming Frog as well.
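If the site runs on Apache (an assumption; the exact approach depends on the client's stack), a single 301 redirect for a deleted page might look like this in .htaccess, with hypothetical paths:

  # Permanently redirect the deleted page straight to its live replacement
  Redirect 301 /old-page/ https://www.example.com/new-page/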

Aspects to check using Screaming Frog
If you are crawling bigger sites with Screaming Frog, you can use the settings to crawl only specified areas of the site at a given time.

Google Analytics Code
Screaming Frog will let you find the pages that do not have the Google Analytics code (e.g., UA-1234568-9) by searching the page source for the tracking snippet.

To fix this error, contact the site developers and have them add the code to the pages that are missing it.
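For reference, one common form of the tracking snippet (the gtag.js version, shown here with the placeholder property ID from above) that would go in the head of each affected page looks roughly like this:

  <!-- Google Analytics (gtag.js) - replace UA-1234568-9 with the client's property ID -->
  <script async src="https://www.googletagmanager.com/gtag/js?id=UA-1234568-9"></script>
  <script>
    window.dataLayer = window.dataLayer || [];
    function gtag(){dataLayer.push(arguments);}
    gtag('js', new Date());
    gtag('config', 'UA-1234568-9');
  </script>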

Google Tag Manager
Screaming Frog will let you find those pages that do not have the Google Tag Manager snippet.

To fix this issue, go to Google Tag Manager, check the container snippet for errors, and update it. Then send the code to the developer to add back to the site.

Schema
Schema, also described as structured data, helps search engines understand what each page on the website is about. Screaming Frog will also let you find how many pages have been indexed for your client.
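As an illustration, a minimal JSON-LD block using schema.org's Organization type (with placeholder values) added to a page's head might look like this:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Company",
    "url": "https://www.example.com/"
  }
  </script>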

In the case of new sites, it is possible that Google has not indexed them yet. Review the robots.txt file to ensure you have not disallowed anything you want Google to crawl, and make sure the sitemap has been submitted to Google Search Console and Bing Webmaster Tools.

Flash
Since 2016, Google has been blocking Flash because of its slow page loads. While conducting a technical SEO audit, it is important to check whether the client site is using Flash. Screaming Frog will let you find the pages that are using it.

To fix this, you can embed the required videos from YouTube, or use HTML5 standards when adding a video, as in the sketch below.
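A minimal HTML5 replacement for a Flash video, assuming an MP4 file at a hypothetical path, could look like this:

  <video controls width="640">
    <source src="/media/product-demo.mp4" type="video/mp4">
    Your browser does not support HTML5 video.
  </video>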

JavaScript
Google announced in 2015 that it can crawl and render JavaScript, provided nothing it needs is blocked by your robots.txt.

To fix this issue, check your JavaScript to make sure robots.txt does not block it, and check that the JavaScript runs on the server. If pages you want the search engines to crawl are listed under Disallow, remove them from robots.txt. For sites with multiple sub-domains, you need a separate robots.txt for each one. Also make sure the sitemap is listed in the robots.txt.
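As a sketch, a robots.txt that keeps JavaScript crawlable and lists the sitemap (paths are hypothetical) might look like this:

  User-agent: *
  # Do not disallow /js/ or /css/ - Google needs these files to render the pages
  Disallow: /admin/
  Sitemap: https://www.example.com/sitemap.xml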

Crawl Errors
You may use DeepCrawl, Screaming Frog, and Google and Bing webmaster tools to check crawl errors. Once the crawl is over, review the Bulk Reports.

Most of the 404 errors can be fixed on the site’s backend with a 301 redirect.

Redirect Chains
Redirect chains hurt user experience and slow down page loading.
In Screaming Frog, go to Reports > Redirect Chains to review the crawl paths of the redirects. Ensure that 301 redirects remain 301 redirects throughout the chain, and if you spot a 404 error in a chain, clean it up.
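For instance, a chain such as /old/ to /interim/ to /final/ should be flattened so that both legacy URLs point straight at the destination. On an Apache site (an assumption, with hypothetical paths), that could look like:

  # Both legacy URLs now 301 directly to the final destination - no intermediate hop
  Redirect 301 /old/ https://www.example.com/final/
  Redirect 301 /interim/ https://www.example.com/final/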

Broken Links
To fix the internal and external broken links, use Integrity for Mac or use Xenu Sleuth for PC. These tools can give you the list of broken URLs. Update them manually or ask the development team to do it.

URLs
You must check whether URLs contain characters like ?, =, or +. Dynamic URLs of this kind can result in duplicate content if they are not optimized.

In Screaming Frog, check the list of parameters that have resulted in duplicate content. Add a canonical tag to the main URL page, as in the sketch below. Then go to Google Search Console and update your parameters under Crawl > URL Parameters. Finally, do not forget to disallow the duplicate URLs in robots.txt.
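For example, a parameterized URL could declare its main page as canonical like this (hypothetical URLs):

  <!-- On https://www.example.com/shoes/?sort=price&color=red -->
  <link rel="canonical" href="https://www.example.com/shoes/">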