As an SEO specialist, you don’t really need to tap into all of the intricacies of website development. But you do need to know the basics, since the way a website is coded has a great impact on its performance and therefore its SEO potential. In the post on HTML tags, we’ve gone through the HTML basics you need to understand to do website SEO efficiently. This time, I invite you to dig into the other coding languages developers use to make a website look good and interactive.
What is CSS
CSS stands for Cascading Style Sheets, and as the name suggests, it lets you build websites with style. CSS is always used alongside HTML—it’s the wrapping paper that gives a gift box its merry look. Without CSS, a web page would be nothing but plain, unstyled HTML text.
The thing is, today CSS is used on virtually every website, even on rather dull-looking pages like the RFC series of technical notes on how the Internet works.
The HTML markup sets the structure of a web page and defines its elements in a way Google can understand. CSS, in turn, styles the website header, footer, and navigation, making them visually appealing and user-friendly. With CSS, you can do a bunch of cool things:
- Set the color, font, and size of the text,
- Define spacing between elements,
- Control the way the elements are laid out on the page,
- Add background images or background colors.
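To give you an idea of what this looks like in practice, here’s a minimal CSS sketch (the class names and values are made up for illustration):

/* set the font, size, color, and spacing of the main heading */
.page-title {
  font-family: Georgia, serif;
  font-size: 32px;
  color: #333333;
  margin-bottom: 24px;
}

/* lay out the content and the sidebar side by side and add a background color */
.content-wrapper {
  display: flex;
  gap: 16px;
  background-color: #f7f7f7;
}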
CSS can be implemented in 3 ways:
- Inline. The style attribute is added individually to every HTML element you want to style. This method is rarely used as it is too time-consuming.
- Internally. To set styles for the whole web page, the <style> element is added to the <head> section of the page. This method is used when you need to give some landing pages a unique look.
- Externally. The most common way of implementing CSS is via an external stylesheet in the .css format. The file is linked to from the <head> section of the page. This method is the most popular one because it lets you define the style of the whole website in a single document. The common practice, though, is to use separate CSS files for different types of pages (e.g. category pages, blog, about us) as it improves the page loading speed.
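Here’s a rough sketch of what the three methods look like in an HTML page (the styles.css file name and the style values are just examples):

<head>
  <!-- internal CSS: a <style> element that styles this whole page -->
  <style>
    p { color: #333333; }
  </style>
  <!-- external CSS: a separate stylesheet that can style the whole website -->
  <link rel="stylesheet" href="/styles.css">
</head>
<body>
  <!-- inline CSS: a style attribute that styles this one element only -->
  <p style="color: #333333;">Some text</p>
</body>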
How Google handles CSS
Google’s Page Layout algorithm also relies on CSS. The algorithm is meant to determine whether users can easily find the content they came for, and CSS helps Google understand how the page is laid out both on desktop and mobile, and where exactly every piece of content resides: front and center, in the sidebar, or at the bottom of the page way below the fold.
With regular HTML pages, it is all plain and simple. Googlebot crawls a page and parses its content, including internal links. Then the content gets indexed while the discovered URLs are added to the crawl queue, and the process starts all over again.
JS-powered pages, however, add an extra step: Google first has to render the page to see the content JavaScript generates, and only then can that content be indexed. Now, are these complications something you should worry about SEO-wise? Just one year ago they were.
Content revealed on click/scroll
The problem here is that Googlebot doesn’t click or scroll the way users do in their browsers.
It does have its own workarounds, but they may or may not work depending on the technology you use. And if you implement things in a way Google can’t figure out, part of your JS-powered content simply won’t get indexed.
Now, let’s say your initial HTML contains the page’s content in full, and you then use CSS properties to hide some parts of that content and JS to let users reveal the hidden parts. In this case, you’re all good: the content is still there within the HTML code and is only hidden from users, so Google can still see it even though it’s hidden with CSS.
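Here’s a minimal sketch of this pattern (the class names and markup are hypothetical):

<style>
  /* the content is present in the HTML but visually hidden by default */
  .extra-details { display: none; }
  .extra-details.is-open { display: block; }
</style>

<button onclick="document.querySelector('.extra-details').classList.toggle('is-open')">
  Read more
</button>

<!-- Google can see this text in the initial HTML even though users have to click to reveal it -->
<div class="extra-details">
  Full product specifications go here.
</div>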
The way you code your JS links matters
For Google to be able to follow a link, it has to be coded as a proper <a> tag with a resolvable URL in the href attribute, like this:
<a href="/page" onclick="goTo('page')">your anchor text</a>
Meanwhile, all other variations—links with a missing <a> tag or href attribute, or without a proper URL—will work for users, but Google won’t be able to follow them.
Surely, if you use a sitemap, Google should still discover your website pages even if it can’t follow the internal links leading to them. However, the search engine still prefers links over a sitemap, as they help it understand your website structure and the way pages are related to each other. Besides, internal linking allows you to spread link juice across your website.
Let me just share a small piece of advice with you: always test your code using Google’s tools, such as the Mobile-Friendly Test or the URL Inspection tool in Google Search Console, to see how Google renders your pages.
Preferably, testing should be done at the early development stage, when things are easier to fix. While the URL Inspection tool can only be used after the feature is live on the website, the Mobile-Friendly Test can help you catch bugs early on—just ask your developers to expose their localhost server through a public URL using a tunneling tool (e.g. ngrok).
Another option would be to use Chrome DevTools, with its built-in Lighthouse audits, for debugging. DevTools is built into the Chrome browser—press Command+Option+J (Mac) or Control+Shift+J (Windows, Linux, Chrome OS) to open it.
In the Sources tab, you can find your JS files and inspect the code they inject. You can then pause JS execution at the point where you believe something goes wrong using one of the Event Listener Breakpoints and inspect that piece of code further. Once you think you’ve found the bug, you can edit the code live to see in real time whether your fix solves the issue.
In the most drastic cases, Google and browsers can’t fetch your CSS and JS files at all, so Google can’t render your pages properly and users don’t get the styling and functionality those files provide. In less drastic cases, the files can be fetched, but they load too slowly, which hurts the user experience and can also slow down website indexing.
Now, let’s take a look at the error types that can result in the issues described above.
Google can’t crawl CSS and JS files
The most common culprit here is the robots.txt file: if your CSS and JS files are disallowed there, Googlebot can’t crawl them and therefore can’t render your pages the way users see them, so make sure these resources aren’t blocked from crawling.
Google and browsers can’t load CSS and JS files
You remember from the first section of this post how a page with no CSS applied looks. And if JS is used to load content onto a page (e.g. live stock exchange rates), all of that dynamically rendered content will be missing if the code isn’t running properly.
To fix such errors, your developers will first have to figure out what’s causing them, and the reasons will vary depending on the technologies used.
In the worst-case scenario, the error occurs because your whole website is down. This happens when your server cannot cope with the amount of traffic. The abrupt increase in traffic may be natural, but in most cases it is caused by aggressive parsing software or a malicious bot flooding your server with the specific purpose of taking it down (a DDoS attack).
To prevent this, you can configure your server to be a bit more patient—that is, not to drop incoming requests too quickly. But making the server wait too long is not recommended either.
The thing is, loading a huge JS bundle takes a lot of server resources and if all your server resources are used for loading the file, it won’t be able to fulfill other requests. As a result, your whole website is put on hold until the file loads.
The faster a browser can load page resources, the better experience users get, and if the files are loading slowly, users have to wait for a while to have the page rendered in their browser.
Similarly to a 4XX response code, a 3XX status code means you’re not using the proper URL to tell Googlebot and browsers where your file resides. It’s just that in this case you’re not using a wrong address, but an old one—a 3XX status code indicates that you’ve moved your CSS/JS file to a different address but failed to update the URL in the website code.
Googlebot and browsers will still fetch the files, since your server will redirect them to the proper address—they’ll just have to make an additional HTTP request to reach the destination URL, and that’s no good for loading time. The performance impact shouldn’t be drastic if we’re talking about a single URL or a couple of files, but at a larger scale it can significantly slow down page loading.
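For example, if a stylesheet has moved, pointing your HTML straight at the new location (the paths below are made up) removes that extra redirect hop:

<!-- before: /assets/old/styles.css responds with a 301 redirect to the new location -->
<link rel="stylesheet" href="/assets/old/styles.css">

<!-- after: the link points directly at the file’s current URL, so no redirect is needed -->
<link rel="stylesheet" href="/assets/v2/styles.css">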
Caching is not enabled
A great way to minimize the number of HTTP requests to your server is to allow caching in a response header. You surely have heard about caching—you’d often get a suggestion to clear your browser cache when information on a website is not displayed properly.
What caching actually does is save a copy of your website resources in the browser when a user visits your site. The next time the user comes back, the browser no longer has to fetch those resources—it serves the saved copies instead. Caching is essential for website performance as it reduces both latency and network load.
The Cache-Control HTTP header is used to specify the caching rules browsers should follow: it indicates whether a particular resource can be cached, who can cache it, and how long the cached copy can live. It is highly recommended to allow caching of CSS and JS files: otherwise browsers have to download these files every time users visit a page, so having them stored in the cache can significantly boost page loading time.
Here’s an example of setting caching for CSS and JS files to one day with public access (in an Apache .htaccess file).
<filesMatch "\.(css|js)$">
  Header set Cache-Control "max-age=86400, public"
</filesMatch>
It is worth noting, though, that Googlebot normally ignores the Cache-Control HTTP header, because following the directives websites set would put too much load on its crawling and rendering infrastructure.
Therefore, whenever you update your CSS and JS files and want Google to take notice, it is recommended to rename the file and serve it from a different URL. That way, Google will refetch the file because it treats it as a totally new resource it hasn’t encountered before.
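In practice, this usually means adding a version number or hash to the file name, something like this (the file names below are hypothetical):

<!-- old reference: browsers and Google may keep serving the cached copy -->
<link rel="stylesheet" href="/css/main.css">

<!-- updated references: the new URLs are treated as brand-new resources and refetched -->
<link rel="stylesheet" href="/css/main.v2.css">
<script src="/js/app.8f3d2a.js"></script>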
The number of files matters
File size matters as well
Huge CSS and JS files take longer to download and process, which in itself is a good reason to split a single massive bundle into smaller files.
Another reason for splitting one huge JS/CSS bundle is caching. If you have it all in one file, every time you change something in your JS/CSS code, browsers and Google will have to recache the whole bundle. This is not great for either indexing or the user experience.
In terms of indexing, it can go two ways depending on the caching technologies used: either you force Googlebot to constantly recache your entire JS/CSS bundle, or Google fails to notice in time that its cached copy is no longer valid and ends up seeing outdated content.
Speaking of user experience, whenever you update some JS code within the bundle, browsers can no longer serve cached copies to any of your users. So even if you only change the JS code for your blog, all your users—including those who never visit your blog—will have to wait for the browser to download the whole JS bundle again before they can access any page on your website.
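A rough sketch of this approach: instead of one site-wide bundle, each page template loads only the scripts it actually needs (the file names are invented for illustration):

<!-- blog template: shared code plus blog-specific code -->
<script src="/js/common.js"></script>
<script src="/js/blog.js"></script>

<!-- shop template: shared code plus shop-specific code -->
<script src="/js/common.js"></script>
<script src="/js/shop.js"></script>
<!-- updating blog.js no longer invalidates the cached copies of common.js or shop.js -->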
Compression is the process of replacing repetitive strings within the source code with pointers to the first instance of that string. Since any code has lots of repetitive parts (think of how many times the same keywords, property names, and function names repeat across your CSS and JS), and pointers take up less space than the strings they replace, compression can reduce a file’s size by up to 70%. Browsers cannot read the compressed code as is, but as long as the browser supports the compression method, it will decompress the file before rendering.
The great thing about compression is that developers don’t need to do it manually. All of the heavy lifting is done by the server, provided it was configured to compress resources. For example, on Apache servers, compression is enabled by adding a few lines of code to the .htaccess file.
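As a rough sketch, enabling gzip compression for CSS and JS on Apache via the mod_deflate module might look like this (your hosting setup may differ):

<IfModule mod_deflate.c>
  # compress CSS and JS responses before sending them to the browser
  AddOutputFilterByType DEFLATE text/css application/javascript
</IfModule>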
Minification is the process of removing white space, unneeded semicolons, unnecessary lines, and comments from the source code. As a result, you get code that is no longer quite human-readable but is still valid. Browsers can process such code perfectly well, and they’ll even parse and load it faster than raw code. Web developers have to take care of minification on their own, but with plenty of dedicated tools available, it shouldn’t be a problem.
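To illustrate, here’s the same made-up CSS rule before and after minification:

/* before minification */
.page-title {
  font-size: 32px; /* heading size */
  color: #333333;
}

/* after minification: the same rule with no spaces, line breaks, or comments */
.page-title{font-size:32px;color:#333333}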
Speaking of reducing file size, minification won’t give you the staggering 70%. If you already have compression enabled on your server, further minifying your resources can shave off an additional few percent—up to around 16%, depending on how your resources are coded. For this reason, some web developers believe minification is obsolete. However, the smaller your CSS and JS files are, the better, so a good practice is to combine both methods.
CSS and JS files are hosted on third-party websites
Using CSS and JS files hosted on servers you don’t control comes with a number of risks. First and foremost, we’re talking about security risks. If a website that hosts the files you use gets hacked, you may end up running malicious code injected into the external JS file, and hackers may steal your users’ private data, including passwords and credit card details.
Performance-wise, think of all the errors discussed above. If you have no access to the server where the CSS and JS files are hosted, you won’t be able to set up caching, compression, or debug 5XX errors.
If the website hosting the files removes one of them at some point and you fail to notice it in a timely manner, your website will stop working properly, and you won’t be able to quickly replace the now-404 JS or CSS file with a valid one.
Finally, if the website hosting the JS or CSS files sets up a 3XX redirect to a (slightly) different file, your web page may not look and work exactly as expected.
If you do use third-party CSS and JS files, my advice is to keep a close eye on them. Still, an even better solution is not to use external CSS and JS files at all.