
Spying On Google: 5 Ways to Use Log File Analysis To Reveal Invaluable SEO Insights

Posted by faisal-anderson

Log File Analysis should be a part of every SEO pro’s tool belt, but most SEOs have never conducted one, which means most SEOs are missing out on unique and invaluable insights that regular crawling tools just can’t produce.

Let’s demystify Log File Analysis so it’s not so intimidating. If you’re interested in the wonderful world of log files and what they can bring to your site audits, this guide is definitely for you. 

What are Log Files?

Log Files are files containing detailed logs on who and what is making requests to your website server. Every time a bot makes a request to your site, data (such as the time, date, IP address, user agent, etc.) is stored in this log. This valuable data allows any SEO to find out what Googlebot and other crawlers are doing on your site. Unlike regular crawls, such as with the Screaming Frog SEO Spider, this is real-world data — not an estimation of how your site is being crawled, but an exact record of it.

Having this accurate data can help you identify areas of crawl budget waste, easily find access errors, understand how your SEO efforts are affecting crawling and much, much more. The best part is that, in most cases, you can do this with simple spreadsheet software. 

In this guide, we will be focusing on Excel to perform Log File Analysis, but I’ll also discuss other tools such as Screaming Frog’s less well-known Log File Analyser, which can make the job a bit easier and faster by helping you manage larger data sets.

Note: owning any software other than Excel is not a requirement to follow this guide or get your hands dirty with Log Files.

How to Open Log Files

Rename .log to .csv

When you get a log file with a .log extension, it really is as easy as renaming the file extension to .csv and opening the file in spreadsheet software. Remember to set your operating system to show file extensions if you want to edit these.

How to open split log files

Log files can come in either one big log or multiple files, depending on the server configuration of your site. Some servers will use server load balancing to distribute traffic across a pool or farm of servers, causing log files to be split up. The good news is that it’s really easy to combine, and you can use one of these three methods to combine them and then open them as normal:

  1. Use the command line in Windows by Shift + right-clicking in the folder containing your log files and selecting “Open PowerShell window here”

Then run the following command (note that PowerShell’s copy alias doesn’t concatenate files the way the old Command Prompt copy did, so pipe the file contents instead):

Get-Content *.log | Set-Content mylogfiles.csv

You can now open mylogfiles.csv and it will contain all your log data.

Or if you are a Mac user, first use the cd command to go to the directory of your log files:

cd Documents/MyLogFiles/

Then, use the cat or concatenate command to join up your files:

cat *.log > mylogfiles.csv

  2. Use the free tool Log File Merge to combine all the log files, then edit the file extension to .csv and open as normal.

  3. Open the log files with the Screaming Frog Log File Analyser, which is as simple as dragging and dropping the log files:
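If you’d rather script the merge (or automate it across sites), a minimal Python sketch can do the same job on any OS. The folder path in the usage note is a placeholder:

```python
from pathlib import Path

def merge_logs(folder, pattern="*.log", out="mylogfiles.csv"):
    """Concatenate every matching log file in `folder` into one file."""
    out_path = Path(folder) / out
    with open(out_path, "w", encoding="utf-8") as dest:
        # Sort so the merged file keeps a stable, predictable order.
        for log in sorted(Path(folder).glob(pattern)):
            dest.write(log.read_text(encoding="utf-8"))
    return out_path
```

Call it with the folder containing your logs, e.g. merge_logs("Documents/MyLogFiles"), then open the resulting CSV as normal.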

Splitting Strings

(Please note: This step isn’t required if you are using Screaming Frog’s Log File Analyser)

Once you have your log file open, you’re going to need to split the cumbersome text in each cell into columns for easier sorting later.

Excel’s Text to Columns function comes in handy here, and is as easy as selecting all the filled cells (Ctrl / Cmd + A), going to Data > Text to Columns, selecting the “Delimited” option, and setting the delimiter to a Space character.

Once you’ve separated this out, you may also want to sort by time and date — you can do so by further splitting the time and date stamp column, commonly with the “:” colon delimiter.

Your file should look similar to the one below:

As mentioned before, don’t worry if your log file doesn’t look exactly the same — different log files have different formats. As long as you have the basic data there (time and date, URL, user-agent, etc.) you’re good to go!
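If you ever want to do this split outside of a spreadsheet, here’s a minimal Python sketch of the same idea. The field order is an assumption based on the W3C-style example used in this guide, so check your own file’s #Fields: header first:

```python
# Assumed column order -- W3C extended logs list theirs in a "#Fields:" header line.
FIELDS = ["date", "time", "c-ip", "cs-method", "cs-uri-stem", "sc-status", "cs(User-Agent)"]

def parse_line(line):
    """Split one space-delimited log line into a dict keyed by field name.
    (W3C logs encode spaces inside values, so a naive split is usually safe.)"""
    return dict(zip(FIELDS, line.strip().split(" ")))
```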

Understanding Log Files

Now that your log files are ready for analysis, we can dive in and start to understand our data. There are many formats that log files can take with multiple different data points, but they generally include the following:

  1. Server IP
  2. Date and time
  3. Server request method (e.g. GET / POST)
  4. Requested URL
  5. HTTP status code
  6. User-agent

More details on the common formats can be found below if you’re interested in the nitty gritty details:

  • W3C
  • Apache and NGINX
  • Amazon Elastic Load Balancing
  • HA Proxy
  • JSON

How to quickly reveal crawl budget waste

As a quick recap, Crawl Budget is the number of pages a search engine crawls each time it visits your site. Numerous factors affect crawl budget, including link equity or domain authority, site speed, and more. With Log File Analysis, we will be able to see what sort of crawl budget your website has and where there are problems causing crawl budget to be wasted.

Ideally, we want to give crawlers the most efficient crawling experience possible. Crawling shouldn’t be wasted on low-value pages and URLs, and priority pages (product pages, for example) shouldn’t have slower indexation and crawl rates because a website has so many dead weight pages. The name of the game is crawl budget conservation, and with good crawl budget conservation comes better organic search performance.

See crawled URLs by user agent

Seeing how frequently URLs of the site are being crawled can quickly reveal where search engines are putting their time into crawling.

If you’re interested in seeing the behavior of a single user agent, it’s as easy as filtering the relevant column in Excel. In this case, with a W3C format log file, I’m filtering the cs(User-Agent) column by Googlebot:

And then filtering the URI column to show the number of times Googlebot crawled the home page of this example site:

This is a fast way of seeing if there are any problem areas by URI stem for a singular user-agent. You can take this a step further by looking at the filtering options for the URI stem column, which in this case is cs-uri-stem:

From this basic menu, we can see what URLs, including resource files, are being crawled to quickly identify any problem URLs (parameterized URLs that shouldn’t be being crawled for example).

You can also do broader analyses with Pivot tables. To get the number of times a particular user agent has crawled a specific URL, select the whole table (Ctrl/cmd + A), go to Insert > Pivot Table and then use the following options:

All we’re doing is filtering by User Agent, with the URL stems as rows, and then counting the number of times each User-agent occurs.

With my example log file, I got the following:

Then, to filter by specific User-Agent, I clicked the drop-down icon on the cell containing “(All),” and selected Googlebot:

Understanding what different bots are crawling, how mobile bots are crawling differently to desktop, and where the most crawling is occurring can help you see immediately where there is crawl budget waste and what areas of the site need improvement.
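The same pivot logic can be sketched in a few lines of Python, assuming your rows have been parsed into dicts keyed by the W3C column names used above:

```python
from collections import Counter

def crawl_counts(rows, agent_substring="Googlebot"):
    """Count requests per URL stem for any user-agent containing `agent_substring`."""
    counts = Counter()
    for row in rows:
        if agent_substring in row.get("cs(User-Agent)", ""):
            counts[row["cs-uri-stem"]] += 1
    return counts
```

counts.most_common() then gives you the most-crawled URLs first, just like sorting the pivot table.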

Find low-value add URLs

Crawl budget should not be wasted on low value-add URLs, which are normally caused by session IDs, infinite crawl spaces, and faceted navigation.

To do this, go back to your log file and filter for URLs that contain a “?” (question mark) in the URL column (containing the URL stem). To do this in Excel, remember to use “~?” (tilde question mark), as shown below:

A single “?”, as stated in the auto filter window, represents any single character, so the tilde acts as an escape character and ensures the filter matches the literal question mark symbol itself.

Isn’t that easy?
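For comparison, the same filter in Python needs no escaping at all, since “?” is only a wildcard inside Excel’s filter box:

```python
def parameterized_urls(uri_stems):
    """Return URL stems containing a literal '?' (i.e. a query string)."""
    return [stem for stem in uri_stems if "?" in stem]
```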

Find duplicate URLs

Duplicate URLs can be a crawl budget waste and a big SEO issue, but finding them can be a pain. URLs can sometimes have slight variants (such as a trailing slash vs a non-trailing slash version of a URL).

Ultimately, the best way to find duplicate URLs is also the least fun way to do so — you have to sort by site URL stem alphabetically and manually eyeball it.

One way you can find trailing and non-trailing slash versions of the same URL is to use the SUBSTITUTE function in another column and use it to remove all forward slashes:

=SUBSTITUTE(C2, "/", "")

In my case, the target cell is C2 as the stem data is on the third column.

Then, use conditional formatting to identify duplicate values and highlight them.

However, eyeballing is, unfortunately, the best method for now.
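If eyeballing gets old, the SUBSTITUTE trick translates directly to a short Python sketch that groups trailing and non-trailing slash versions of a URL together:

```python
from collections import defaultdict

def slash_duplicates(urls):
    """Group URLs that differ only by a trailing slash; return only real duplicates."""
    groups = defaultdict(set)
    for url in urls:
        groups[url.rstrip("/")].add(url)
    return {key: variants for key, variants in groups.items() if len(variants) > 1}
```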

See the crawl frequency of subdirectories

Finding out which subdirectories are getting crawled the most is another quick way to reveal crawl budget waste. Although keep in mind, just because a client’s blog has never earned a single backlink and only gets three views a year from the business owner’s grandma doesn’t mean you should consider it crawl budget waste — internal linking structure should be consistently good throughout the site and there might be a strong reason for that content from the client’s perspective.

To find out crawl frequency by subdirectory level, you will need to mostly eyeball it but the following formula can help:

=IF(RIGHT(C2,1)="/",SUM(LEN(C2)-LEN(SUBSTITUTE(C2,"/","")))/LEN("/")+SUM(LEN(C2)-LEN(SUBSTITUTE(C2,"=","")))/LEN("=")-2, SUM(LEN(C2)-LEN(SUBSTITUTE(C2,"/","")))/LEN("/")+SUM(LEN(C2)-LEN(SUBSTITUTE(C2,"=","")))/LEN("=")-1) 

The above formula looks like a bit of a doozy, but all it does is check whether there is a trailing slash and, depending on the answer, count the number of slashes (and “=” signs) in the URL and subtract either 2 or 1. The formula could be shortened if you removed all trailing slashes from your URL list using the RIGHT formula — but who has the time? What you’re left with is the subdirectory count (starting from 0 as the first subdirectory).

Replace C2 with your first URL stem / URL cell, copy the formula down your entire list, and then sort the new subdirectory count column from smallest to largest to get a good list of folders in a logical order, or easily filter by subdirectory level. For example, as shown in the below screenshots:

The above image is subdirectories sorted by level.

The above image is subdirectories sorted by depth.

If you’re not dealing with a lot of URLs, you could simply sort the URLs by alphabetical order but then you won’t get the subdirectory count filtering which can be a lot faster for larger sites.

See crawl frequency by content type

Finding out what content is getting crawled, or if there are any content types that are hogging crawl budget, is a great check to spot crawl budget waste. Frequent crawling on unnecessary or low priority CSS and JS files, or how crawling is occurring on images if you are trying to optimize for image search, can easily be spotted with this tactic.

In Excel, seeing crawl frequency by content type is as easy as filtering by URL or URI stem using the Ends With filtering option.

Quick Tip: You can also use the “Does Not End With” filter with a .html extension to see how non-HTML page files are being crawled — always worth checking in case of crawl budget waste on unnecessary JS or CSS files, or even images and image variations (looking at you, WordPress). Also, remember, if you have a site with trailing and non-trailing slash URLs, to take that into account with the “or” operator when filtering.
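The same content-type check can be scripted by tallying file extensions, where an empty extension usually means a regular, extensionless HTML page URL:

```python
from collections import Counter
from posixpath import splitext

def counts_by_extension(uri_stems):
    """Tally requests by file extension; '' covers extensionless (usually HTML) URLs."""
    return Counter(splitext(stem)[1].lower() for stem in uri_stems)
```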

Spying on bots: Understand site crawl behavior

Log File Analysis allows us to understand how bots behave by giving us an idea of how they prioritize. How do different bots behave in different situations? With this knowledge, you can not only deepen your understanding of SEO and crawling, but also take a huge leap in understanding the effectiveness of your site architecture.

See most and least crawled URLs

This strategy has been touched on previously with seeing crawled URLs by user-agent, but this way is even faster.

In Excel, select a cell in your table and then click Insert > Pivot Table, make sure the selection contains the necessary columns (in this case, the URL or URI stem and the user-agent) and click OK.

Once you have your pivot table created, set the rows to the URL or URI stem, and the value to a count of the user-agent column.

From there, you can right-click in the user-agent column and sort the URLs from largest to smallest by crawl count:

Now you’ll have a great table to make charts from or quickly review and look for any problematic areas:

A question to ask yourself when reviewing this data is: Are the pages you or the client want crawled actually being crawled? How often? Frequent crawling doesn’t necessarily mean better results, but it can be an indication of what Google and other search engine user-agents prioritize most.

Crawl frequency per day, week, or month

Checking the crawling activity to identify issues where there has been a loss of visibility around a period of time, such as after a Google update or in an emergency, can inform you where the problem might be. This is as simple as selecting the “date” column, making sure the column is in the “date” format type, and then using the date filtering options on the date column. If you’re looking to analyze a whole week, just select the corresponding days with the filtering options available.

Crawl frequency by directive

Understanding what directives are being followed (for instance, if you are using a disallow or even a no-index directive in robots.txt) by Google is essential to any SEO audit or campaign. If a site is using disallows with faceted navigation URLs, for example, you’ll want to make sure these are being obeyed. If they aren’t, recommend a better solution such as on-page directives like meta robots tags.

To see crawl frequency by directive, you’ll need to combine a crawl report with your log file analysis.

(Warning: We’re going to be using VLOOKUP, but it’s really not as complicated as people make it out to be)

To get the combined data, do the following:

  1. Get the crawl from your site using your favorite crawling software. I might be biased, but I’m a big fan of the Screaming Frog SEO Spider, so I’m going to use that.

    If you’re also using the spider, follow the steps verbatim, but otherwise, make your own call to get the same results.

  2. Export the Internal HTML report from the SEO Spider (Internal Tab > “Filter: HTML”) and open up the “internal_all.xlsx” file.

    From there, you can filter the “Indexability Status” column and remove all blank cells. To do this, use the “does not contain” filter and just leave it blank. You can also add the “and” operator and filter out redirected URLs by setting the filter value to “does not contain” → “Redirected”, as shown below:

    This will show you canonicalized and noindexed (by meta robots) URLs.

  3. Copy this new table out (with just the Address and Indexability Status columns) and paste it in another sheet of your log file analysis export.
  4. Now for some VLOOKUP magic. First, we need to make sure the URI or URL column data is in the same format as the crawl data.

    Log Files don’t generally have the root domain or protocol in the URL, so we either need to remove the head of the URL using “Find and Replace” in our newly made sheet, or make a new column in your log file analysis sheet to append the protocol and root domain to the URI stem. I prefer this method because then you can quickly copy and paste a URL that you are seeing problems with and take a look. However, if you have a massive log file, it is probably a lot less CPU intensive with the “Find and Replace” method.

    To get your full URLs, use the following formula but with the URL field changed to whatever site you are analyzing (and make sure the protocol is correct as well). You’ll also want to change D2 to the first cell of your URL column.

    ="https://www.example.com"&D2

    Drag the formula down to the end of your log file table to get a nice list of full URLs:

  5. Now, create another column and call it “Indexability Status”. In the first cell, use a VLOOKUP similar to the following: =VLOOKUP(E2,CrawlSheet!A$1:B$1128,2,FALSE). Replace E2 with the first cell of your “Full URL” column, then point the lookup table at your new crawl sheet. Remember to use the dollar signs so that the lookup table doesn’t shift as you apply the formula to further rows. Then, select the correct column (1 would be the first column of the lookup table, so number 2 is the one we are after). Use the FALSE range lookup mode for exact matching. Now you have a nice tidy list of URLs and their indexability status matched with crawl data:
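The whole VLOOKUP step can also be sketched as a plain dictionary lookup in Python; the column names and ROOT value below are assumptions from this walkthrough:

```python
ROOT = "https://www.example.com"  # swap in the site you are analyzing

def add_indexability(log_rows, crawl_rows):
    """Attach each crawl row's Indexability Status to the matching log row,
    exact-matching on the full URL (like VLOOKUP with FALSE)."""
    status_by_url = {row["Address"]: row["Indexability Status"] for row in crawl_rows}
    for row in log_rows:
        full_url = ROOT + row["cs-uri-stem"]
        row["Indexability Status"] = status_by_url.get(full_url, "")
    return log_rows
```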

Crawl frequency by depth and internal links

This analysis allows us to see how a site’s architecture is performing in terms of crawl budget and crawlability. The main aim is to see if you have far more URLs than you do requests — and if you do, then you have a problem. Bots shouldn’t be “giving up” on crawling your entire site and not discovering important content, or wasting crawl budget on content that is not important.

Tip: It is also worth using a crawl visualization tool alongside this analysis to see the overall architecture of the site and see where there are “off-shoots” or pages with poor internal linking.

To get this all-important data, do the following:

  1. Crawl your site with your preferred crawling tool and export whichever report has both the click depth and number of internal links with each URL.

    In my case, I’m using the Screaming Frog SEO Spider and exporting the Internal report:

  2. Use a VLOOKUP to match your URL with the Crawl Depth column and the number of Inlinks, which will give you something like this:
  3. Depending on the type of data you want to see, you might want to filter out only URLs returning a 200 response code at this point, or make them filterable options in the pivot table we create later. If you’re checking an e-commerce site, you might want to focus solely on product URLs, or if you’re optimizing crawling of images, you can filter by file type, using the “Content-Type” column of your crawl export against the URI column of your log file, and make it a filterable option in a pivot table. As with all of these checks, you have plenty of options!
  4. Using a pivot table, you can now analyze crawl rate by crawl depth (filtering by the particular bot in this case) with the following options:

To get something like the following:

Better data than Search Console? Identifying crawl issues

Search Console might be a go-to for every SEO, but it certainly has flaws. Historical data is harder to get, and there are limits on the number of rows you can view (at the time of writing, it is 1,000). But with Log File Analysis, the sky’s the limit. With the following checks, we’re going to discover crawl and response errors to give your site a full health check.

Discover Crawl Errors

An obvious and quick check to add to your arsenal: all you have to do is filter the status column of your log file (in my case “sc-status” with a W3C log file type) for 4xx and 5xx errors:
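As a scripted sanity check, the same filter is a one-liner in Python (status field name assumed from the W3C example):

```python
def crawl_errors(rows, status_field="sc-status"):
    """Return rows whose HTTP status code is a 4xx or 5xx error."""
    return [row for row in rows if str(row[status_field]).startswith(("4", "5"))]
```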

Find inconsistent server responses

A particular URL may have varying server responses over time, which can either be normal behavior (such as when a broken link has been fixed) or a sign of a serious server issue, such as when heavy traffic to your site causes a lot more internal server errors and affects your site’s crawlability.

Analyzing server responses is as easy as filtering by URL and by Date:

Alternatively, if you want to quickly see how a URL is varying in response code, you can use a pivot table with the rows set to the URL, the columns set to the response codes, counting the number of times a URL has produced each response code. To achieve this setup, create a pivot table with the following settings:

This will produce the following:

In the table above, you can clearly see “/inconcistent.html” (highlighted in the red box) has varying response codes.
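The pivot idea translates neatly to Python: map each URL to the set of status codes it has returned, and anything with more than one code is worth a look:

```python
from collections import defaultdict

def inconsistent_responses(rows):
    """Return URLs that have produced more than one distinct status code."""
    seen = defaultdict(set)
    for row in rows:
        seen[row["cs-uri-stem"]].add(row["sc-status"])
    return {url: codes for url, codes in seen.items() if len(codes) > 1}
```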

View Errors by Subdirectory

To find which subdirectories are producing the most problems, we just need to do some simple URL filtering. Filter the URI column (in my case “cs-uri-stem”) and use the “contains” filtering option to select a particular subdirectory and any pages within that subdirectory (with the wildcard *):

For me, I checked out the blog subdirectory, and this produced the following:

View Errors by User Agent

Finding which bots are struggling can be useful for numerous reasons, including seeing the differences in website performance for mobile and desktop bots, or which search engines are best able to crawl more of your site.

You might want to see which particular URLs are causing issues with a particular bot. The easiest way to do this is with a pivot table that allows for filtering the number of times a particular response code occurs per URI. To achieve this, make a pivot table with the following settings:

From there, you can filter by your chosen bot and response code type, such as in the image below, where I’m filtering for Googlebot desktop to seek out 404 errors:

Alternatively, you can also use a pivot table to see how many times a specific bot produces different response codes as a whole by creating a pivot table that filters by bot, counts by URI occurrence, and uses response codes as rows. To achieve this, use the settings below:

For example, in the pivot table (below), I’m looking at how many of each response code Googlebot is receiving:

Diagnose on-page problems

Websites need to be designed not just for humans, but for bots. Pages shouldn’t be slow to load or a huge download, and with log file analysis, you can see both of these metrics per URL from a bot’s perspective.

Find slow & large pages

While you can sort your log file by the “time taken” or “loading time” column from largest to smallest to find the slowest loading pages, it’s better to look at the average load time per URL, as there could be other factors that might have contributed to a slow request other than the web page’s actual speed.

To do this, create a pivot table with the rows set to the URI stem or URL and the summed value set to the time taken to load or load time:

Then, using the drop-down arrow (in this case, where it says “Sum of time-taken”), go to “Value Field Settings”:

In the new window, select “Average” and you’re all set:

Now you should have something similar to the following when you sort the URI stems by largest to smallest average time taken:
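The averaging step looks like this in Python, assuming a “time-taken” field as in the W3C format:

```python
from collections import defaultdict

def average_time_taken(rows):
    """Average the time-taken values per URL, like the pivot's 'Average' setting."""
    samples = defaultdict(list)
    for row in rows:
        samples[row["cs-uri-stem"]].append(float(row["time-taken"]))
    return {url: sum(times) / len(times) for url, times in samples.items()}
```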

Find large pages

You can now add the download size column (in my case “sc-bytes”) using the settings shown below. Remember to set the size to the average or sum depending on what you would like to see. For me, I’ve done the average:

And you should get something similar to the following:

Bot behavior: Verifying and analyzing bots

The best and easiest way to understand bot and crawl behavior is with log file analysis, as you are again getting real-world data, and it’s a lot less hassle than other methods.

Find un-crawled URLs

Simply take the crawl of your website with your tool of choice, and then take your log file and compare the URLs to find unique paths. You can do this with the “Remove Duplicates” feature of Excel or conditional formatting, although the former is a lot less CPU intensive, especially for larger log files. Easy!
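In code, this comparison is just a set difference between your crawl’s URLs and the URLs seen in the logs:

```python
def uncrawled_urls(site_urls, logged_urls):
    """URLs found by your site crawl that never appear in the log file."""
    return sorted(set(site_urls) - set(logged_urls))
```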

Identify spam bots

Unnecessary server strain from spam and spoof bots is easily identified with log files and some basic command line operators. Most requests will also have an IP associated with them, so using your IP column (in my case, it is titled “c-ip” in a W3C format log), remove all duplicates to find each individual requesting IP.

From there, you should follow the process outlined in Google’s documentation for verifying IPs (note: for Windows users, use the nslookup command):

https://support.google.com/webmasters/answer/80553?hl=en

Or, if you’re verifying a Bingbot, use their handy tool:

https://www.bing.com/toolbox/verify-bingbot
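Google’s documented check is a reverse DNS lookup followed by a forward confirmation; here’s a Python sketch of that process (the forward step needs network access, so treat it as illustrative):

```python
import socket

def is_google_host(host):
    """Pure check: does a reverse-DNS hostname belong to Google's crawler domains?"""
    return host.endswith((".googlebot.com", ".google.com"))

def verify_googlebot(ip):
    """Reverse-DNS the IP, then forward-resolve the host to confirm it maps back."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return False
    return is_google_host(host) and ip in socket.gethostbyname_ex(host)[2]
```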

Conclusion: Log File Analysis — not as scary as it sounds

With some simple tools at your disposal, you can dive deep into how Googlebot behaves. When you understand how a website handles crawling, you can diagnose more problems than you’d expect — but the real power of Log File Analysis lies in being able to test your theories about Googlebot and extend the above techniques to gather your own insights and revelations.

What theories would you test using log file analysis? What insights could you gather from log files other than the ones listed above? Let me know in the comments below.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Originally posted on The Moz Blog: http://bit.ly/2M3TU17

Aren’t 301s, 302s, and Canonicals All Basically the Same? – Best of Whiteboard Friday

Posted by Dr-Pete

They say history repeats itself. In the case of the great 301 vs 302 vs rel=canonical debate, it repeats itself about every three months. And in the case of this Whiteboard Friday, it repeats once every two years as we revisit a still-relevant topic in SEO and re-release an episode that’s highly popular to this day. Join Dr. Pete as he explains how bots and humans experience pages differently depending on which solution you use, why it matters, and how each choice may be treated by Google.

Aren't 301s, 302s, and canonicals all basically the same?

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Hey, Moz fans, it’s Dr. Pete, your friendly neighborhood marketing scientist here at Moz, and I want to talk today about an issue that comes up probably about every three months since the beginning of SEO history. It’s a question that looks something like this: Aren’t 301s, 302s, and canonicals all basically the same?

So if you’re busy and you need the short answer, it’s, “No, they’re not.” But you may want the more nuanced approach. This popped up again about a week [month] ago, because John Mueller on the Webmaster Team at Google had posted about redirection for secure sites, and in it someone had said, “Oh, wait, 302s don’t pass PageRank.”

John said, “No. That’s a myth. It’s incorrect that 302s don’t pass PR,” which is a very short answer to a very long, technical question. So SEOs, of course, jumped on that, and it turned into, “301s and 302s are the same, cats are dogs, cakes are pie, up is down.” We all did our freakout that happens four times a year.

So I want to get into why this is a difficult question, why these things are important, why they are different, and why they’re different not just from a technical SEO perspective, but from the intent and why that matters.

I’ve talked to John a little bit. I’m not going to put words in his mouth, but I think 95% of this will be approved, and if you want to ask him, that’s okay afterwards too.

Why is this such a difficult question?

So let’s talk a little bit about classic 301, 302. So a 301 redirect situation is what we call a permanent redirect. What we’re trying to accomplish is something like this. We have an old URL, URL A, and let’s say for example a couple years ago Moz moved our entire site from seomoz.org to moz.com. That was a permanent change, and so we wanted to tell Google two things and all bots and browsers:

  1. First of all, send the people to the new URL, and, second,
  2. pass all the signals. All these equity, PR, ranking signals, whatever you want to call them, authority, that should go to the new page as well.

So people and bots should both end up on this new page.

A classic 302 situation is something like a one-day sale. So what we’re saying is for some reason we have this main page with the product. We can’t put the sale information on that page. We need a new URL. Maybe it’s our CMS, maybe it’s a political thing, doesn’t matter. So we want to do a 302, a temporary redirect that says, “Hey, you know what? All the signals, all the ranking signals, the PR, for Google’s sake keep the old page. That’s the main one. But send people to this other page just for a couple of days, and then we’re going to take that away.”

So these do two different things. One of these tells the bots, “Hey, this is the new home,” and the other one tells it, “Hey, stick around here. This is going to come back, but we want people to see the new thing.”

So I think sometimes Google interprets our meaning and can change things around, and we get frustrated because we go, “Why are they doing that? Why don’t they just listen to our signals?”

Why are these differentiations important?

The problem is this. In the real world, we end up with things like this, we have page W that 301s to page T that 302s to page F and page F rel=canonicals back to page W, and Google reads this and says, “W, T, F.” What do we do?

We sent bad signals. We’ve done something that just doesn’t make sense, and Google is forced to interpret us, and that’s a very difficult thing. We do a lot of strange things. We’ll set up 302s because that’s what’s in our CMS, that’s what’s easy in an Apache rewrite file. We forget to change it to a 301. Our devs don’t know the difference, and so we end up with a lot of ambiguous situations, a lot of mixed signals, and Google is trying to help us. Sometimes they don’t help us very well, but they just run into these problems a lot.

In this case, the bots have no idea where to go. The people are going to end up on that last page, but the bots are going to have to choose, and they’re probably going to choose badly because our intent isn’t clear.

    How are 301s, 302s, and rel=canonical different?

    So there are a couple situations I want to cover, because I think they’re fairly common and I want to show that this is complex. Google can interpret, but there are some reasons and there’s some rhyme or reason.

    1. Long-term 302s may be treated as 301s.

    So the first one is that long-term 302s are probably going to be treated as 301s. They don’t make any sense. If you set up a 302 and you leave it for six months, Google is going to look at that and say, “You know what? I think you meant this to be permanent and you made a mistake. We’re going to pass ranking signals, and we’re going to send people to page B.” I think that generally makes sense.

    Some types of 302s just don’t make sense at all. So if you’re migrating from non-secure to secure, from HTTP to HTTPS and you set up a 302, that’s a signal that doesn’t quite make sense. Why would you temporarily migrate? This is probably a permanent choice, and so in that case, and this is actually what John was addressing in this post originally, in that case Google is probably going to look at that and say, “You know what? I think you meant 301s here,” and they’re going to pass signals to the secure version. We know they prefer that anyway, so they’re going to make that choice for you.

    If you’re confused about where the signals are going, then look at the page that’s ranking, because in most cases the page that Google chooses to rank is the one that’s getting the ranking signals. It’s the one that’s getting the PR and the authority.

    So if you have a case like this, a 302, and you leave it up permanently and you start to see that Page B is the one that’s being indexed and ranking, then Page B is probably the one that’s getting the ranking signals. So Google has interpreted this as a 301. If you leave a 302 up for six months and you see that Google is still taking people to Page A, then Page A is probably where the ranking signals are going.

    So that can give you an indicator of what their decision is. It’s a little hard to reverse that. But if you’ve left a 302 in place for six months, then I think you have to ask yourself, “What was my intent? What am I trying to accomplish here?”

    Part of the problem with this is that when we ask this question, “Aren’t 302s, 301s, canonicals all basically the same?” what we’re really implying is, “Aren’t they the same for SEO?” I think this is a legitimate but very dangerous question, because, yes, we need to know how the signals are passed and, yes, Google may pass ranking signals through any of these things. But for people they’re very different, and this is important.

    2. Rel=canonical is for bots, not people.

    So I want to talk about rel=canonical briefly because rel=canonical is a bit different. We have Page A and Page B again, and we’re going to canonical from Page A to Page B. What we’re basically saying with this is, “Look, I want you, the bots, to consider Page B to be the main page. You know, for some reason I have to have these near duplicates. I have to have these other copies. But this is the main one. This is what I want to rank. But I want people to stay on Page A.”

    So this is entirely different from a 301 where I want people and bots to go to Page B. That’s different from a 302, where I’m going to try to keep the bots where they are, but send people over here.
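    To make the distinction concrete, here is how the three signals differ at the HTTP level. This is a minimal Node-style sketch: the status codes and the `Link` header format are the real mechanisms, but the helper function and the URLs are hypothetical examples.

    ```javascript
    // Sketch: the three signals discussed above, as HTTP responses.
    // "permanent" = 301 (people AND bots move to the target),
    // "temporary" = 302 (people move now; signals may stay on the old URL),
    // "canonical" = 200 + rel=canonical (people stay; bots prefer the target).
    function redirectResponse(kind, targetUrl) {
      switch (kind) {
        case "permanent":
          return { status: 301, headers: { Location: targetUrl } };
        case "temporary":
          return { status: 302, headers: { Location: targetUrl } };
        case "canonical":
          // rel=canonical can be sent as an HTTP Link header (or as a
          // <link rel="canonical"> tag in the page's <head>).
          return {
            status: 200,
            headers: { Link: `<${targetUrl}>; rel="canonical"` },
          };
        default:
          throw new Error(`unknown kind: ${kind}`);
      }
    }
    ```

    Notice that the 301 and 302 differ only by status code, which is exactly why a forgotten 302 is such an easy mistake to leave in place.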

    So take it from a user perspective. People ask me in Q&A all the time, “Well, I’ve heard that rel=canonical passes ranking signals. Which should I choose? Should I choose that or 301? What’s better for SEO?”

    That’s true. We do think it generally passes ranking signals, but “what’s better for SEO” is the wrong question, because these are completely different user experiences, and either you’re going to want people to stay on Page A or you’re going to want people to go to Page B.

    Why this matters, both for bots and for people

    So I just want you to keep in mind, when you look at these three things, it’s true that 302s can pass PR. But if you’re in a situation where you want a permanent redirect, you want people to go to Page B, you want bots to go to Page B, you want Page B to rank, use the right signal. Don’t confuse Google. They may make bad choices. Some of your 302s may be treated as 301s. It doesn’t make them the same, and a rel=canonical is a very, very different situation that essentially leaves people behind and sends bots ahead.

    So keep in mind what your use case actually is, keep in mind what your goals are, and don’t get over-focused on the ranking signals themselves or the SEO uses, because all of these three things have different purposes.

    So I hope that makes sense. If you have any questions or comments or you’ve seen anything weird actually happen on Google, please let us know and I’ll be happy to address that. And until then, we’ll see you next week.

    Video transcription by Speechpad.com

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Originally posted on The Moz Blog http://bit.ly/30Fm9aC
    If you’re involved with SEO, you’ll already know about the Moz SEO blog. Great info and practical guides on SEO — check it out.

    MozCon 2019: Everything You Need to Know About Day Three

    Posted by KameronJenkins

    If the last day of MozCon felt like it went too fast, or if you forgot everything that happened today (we wouldn’t judge — there were so many insights), don’t fret. We captured all of day three’s takeaways so you can relive the magic. 

    Don’t forget to check out all the photos with Roger from the photobooth! They’re available here in the MozCon Facebook group. Plus: You asked and we delivered: the 2019 MozCon speaker walk-on playlist is now live and available here for your streaming pleasure. 

    Cindy Krum — Fraggles, Mobile-First Indexing, & the SERP of the Future 

    If you were hit with an instant wave of nostalgia after hearing Cindy’s walk out music, then you are in good company and you probably were not disappointed in the slightest by Cindy’s talk on Fraggles.


    • “Fraggles” are fragments + handles. A fragment is a piece of info on a page. A handle is something like a bookmark, jump link, or named anchor — they help people navigate through long pages to get what they’re looking for faster.
    • Ranking pages is an inefficient way to answer questions. One page can answer innumerable questions, so Google can now pull a single answer from multiple parts of your page, skipping sections they don’t think are as useful for a particular answer.
    • The implications for voice are huge! It means you don’t have to listen to your voice device spout off a page’s worth of text before your question is answered.
    • Google wants to index more than just websites. They want to organize the world’s information, not websites. Fraggles are a demonstration of that.

    Luke Carthy — Killer Ecommerce CRO and UX Wins Using an SEO Crawler 

    Luke Carthy did warn us in his talk description that we should all flex our notetaking muscles for all the takeaways we would furiously jot down — and he wasn’t wrong.


    • Traffic doesn’t always mean sales and sales don’t always mean traffic!
    • Custom extraction is a great tool for finding missed CRO opportunities. For example, Luke found huge opportunity on Best Buy’s website — thousands of people’s site searches were leading them to an unoptimized “no results found” page.
    • You can also use custom extraction to find what product recommendations you or your customers are using at scale! Did you know that 35 percent of what customers buy on Amazon and 75 percent of what people watch on Netflix are the results of these recommendations?
    • For example, are you showing near-exact products or are you showing complementary products? (hint: try the latter and you’ll likely increase your sales!)
    • Custom extraction from Screaming Frog allows you to scrape any data from the HTML of the web pages while crawling them.
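    Screaming Frog’s custom extraction does this with XPath or CSS selectors against the crawled HTML. As a rough illustration of the same idea, here is a plain-JavaScript sketch — the `rec-title` class and the sample markup are made-up examples, not any real site’s HTML:

    ```javascript
    // Toy stand-in for custom extraction: pull the product-recommendation
    // headings out of a page's raw HTML with a pattern match.
    function extractRecommendations(html) {
      const matches = [...html.matchAll(/<h3 class="rec-title">([^<]+)<\/h3>/g)];
      return matches.map((m) => m[1].trim());
    }

    const page = `
      <h3 class="rec-title">Complementary case</h3>
      <h3 class="rec-title">Matching charger</h3>`;

    console.log(extractRecommendations(page));
    // → ["Complementary case", "Matching charger"]
    ```

    At crawl scale, running an extraction like this across every product page quickly shows whether your recommendations are near-exact duplicates or complementary items.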

    Andy Crestodina — Content, Rankings, and Lead Generation: A Breakdown of the 1% Content Strategy 

    Next up, Andy of Orbit Media took the stage with a comprehensive breakdown of the most effective tactics for turning content into a high-powered content strategy. He also brought the fire with this sound advice that we can apply in both our work life and personal life.


    • Blog visitors often don’t have commercial intent. One of the greatest ways to leverage blog posts for leads is by using the equity we generate from links to our helpful posts and passing that onto our product and service pages.
    • If you want links and shares, invest in original research! Not sure what to research? Look for unanswered questions or unproven statements in your industry and provide the data.
    • Original research may take longer than a standard post, but it’s much more effective! When you think about it this way, do you really have time to put out more, mediocre posts?
    • Give what you want to get. Want links? Link to people. Want comments? Comment on other people’s work.
    • To optimize content for social engagement, it should feature real people, their faces, and their quotes.
    • Collaborating with other content creators on your content not only gives it built-in amplification, but it also leads to great connections and is just generally more fun.

    Rob Ousbey — Running Your Own SEO Tests: Why It Matters & How to Do It Right 

    Google’s algorithms have changed a heck of a lot in recent years — what’s an SEO to do? Follow the advice of Rob — on both fashion and SEO — who says the answer lies in testing.


    • “This is the way we’ve always done it” isn’t sufficient justification for SEO tactics in today’s search landscape.
    • In the earlier days of the algorithm, it was much easier to demote spam than it was to promote what’s truly good.
    • Rob and his team had a theory that Google was beginning to rely more heavily on user experience and satisfaction than some of the more traditional ranking factors like links.
    • Through SEO A/B testing, they found that:
      • Google relies less heavily on link signals when it comes to the top half of the results on page 1.
      • Google relies more heavily on user experience for head terms (terms with high search volume), likely because they have more user data to draw from.
    • In the process of A/B testing, they also found that the same test often produces different results on different sites. The best way to succeed in today’s SEO landscape is to cultivate a culture of testing!

    Greg Gifford — Dark Helmet’s Guide to Local Domination with Google Posts and Q&A 

    If you’re a movie buff, you probably really appreciated Greg’s talk — he schooled us all in movie references and brought the fire with his insights on Google Posts and Q&A.


    The man behind #shoesofmozcon taught us that Google is the new home page for local businesses, so we should be leveraging the tools Google has given us to make our Google My Business profiles great. For example…

    Google Posts

    • Images should be 1200×900 on Google Posts
    • Images are cropped slightly higher than the center and it’s not consistent every time
    • The image size of the thumbnail is different on desktop than it is on mobile
    • Use Greg’s free tool at bit.ly/posts-image-guide to make sizing your Google Post images easier
    • You can also upload videos. The limits are 100 MB in file size and/or 30 seconds in length
    • Add a call-to-action button to make your Posts worth it! Just know that the button often means you get less real estate for text in your Posts
    • Don’t share social fluff. Attract with an offer that makes you stand out
    • Make sure you use UTM tracking so you can understand how your Posts are performing in Google Analytics. Otherwise, it’ll be attributed as direct traffic.
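    A quick sketch of that last tip: the `utm_source`/`utm_medium`/`utm_campaign` parameter names are the standard Google Analytics ones, while the example values below are just one possible naming convention, not a Google requirement.

    ```javascript
    // Tag a Google Post link with UTM parameters so clicks are attributed
    // to a campaign in Google Analytics instead of showing up as direct traffic.
    function addUtm(url, { source, medium, campaign }) {
      const u = new URL(url);
      u.searchParams.set("utm_source", source);
      u.searchParams.set("utm_medium", medium);
      u.searchParams.set("utm_campaign", campaign);
      return u.toString();
    }

    addUtm("https://example.com/offer", {
      source: "google",
      medium: "organic",
      campaign: "gmb-post",
    });
    // → "https://example.com/offer?utm_source=google&utm_medium=organic&utm_campaign=gmb-post"
    ```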

    Google Q&A

    • Anyone can ask and answer questions — so why not the business owner? Control the conversation and treat this feature like it’s your new FAQ page.
    • This feature works on an upvote system. The answer with the most upvotes will show first.
    • Don’t include a URL or phone number in these because it’ll get filtered out.
    • A lot of these questions are potential customers! Out of 640 car dealerships’ Q&As Greg evaluated, 40 percent were leads! Of that 40 percent, only 2 questions were answered by the dealership.

     Emily Triplett Lentz — How to Audit for Inclusive Content 

    Emily of Help Scout dropped major knowledge on the importance of spotting and eliminating biases that frequently find their way into online copy. She also hung out backstage after her talk to cheer on her fellow speakers. #GOAT. #notallheroeswearcapes.


    • As content creators, we’d all do well to keep ableism in mind: discrimination in favor of able-bodied people. However, we’re often guilty of this without even knowing it.
    • One example of ableism that often makes its way into our copy is comparing dire or subideal situations with the physical state of another human (ex: “crippling”).
    • While we should work on making our casual conversation more inclusive too, this is particularly important for brands.
    • Create a list of ableist words, crawl your site for them, and then replace them. However, you’ll likely find that there is no one-size-fits-all replacement for these words. We often use words like “crazy” as filler words. By removing or replacing with a more appropriate word, we make our content better and more descriptive in the process.
    • At the end of the day, brands should remember that their desire for freedom of word choice isn’t more important than people’s right not to feel excluded and hurt. When there’s really no downside to more inclusive content, why wouldn’t we do it?

    Visit http://content.helpscout.net/mozcon-2019 to learn how to audit your site for inclusive content!

    Joelle Irvine — Image & Visual Search Optimization Opportunities 

    Curious about image optimization and visual search? Joelle has the goods for you — and was blowing people’s minds with her tips for visual optimization and how to leverage Google Lens, Pinterest, and AR for visual search.


    • Visual search is not the same thing as searching for images. We’re talking about the process of using an image to search for other content.
    • Visual search like Google Lens makes it easier to search when you don’t know what you’re looking for.
    • Pinterest has made a lot of progress in this area. They have a hybrid search that allows you to find complementary items to the one you searched. It’s like finding a rug that matches a chair you like rather than finding more of the same type of chair.
    • 62 percent of millennials surveyed said they would like to be able to search visually, so while this is mostly being used by clothing retailers and home decor right now, visual search is only going to get better, so think about the ways you can leverage it for your brand!

    Joy Hawkins — Factors that Affect the Local Algorithm that Don’t Impact Organic 

    Proximity varies greatly when comparing local and organic results — just ask Joy of Sterling Sky, who gets real about fake listings while walking through the findings of a recent study.


    Here are the seven areas in which the local algorithm diverges from the organic algorithm:

    • Proximity (AKA: how close is the biz to the searcher?)
      • Proximity is the #1 local ranking factor, but the #27 ranking factor on organic.
      • Studies show that having a business that’s close in proximity to the searcher is more beneficial for ranking in the local pack than in traditional organic results.
    • Rank tracking
      • Because there is so much variance by latitude/longitude, as well as hourly variances, Joy recommends not sending your local business clients ranking reports.
      • Use rank tracking internally, but send clients the leads/sales. This causes less confusion and gets them focused on the main goal.
      • Visit bit.ly/mozcon3 for insights on how to track leads from GMB
    • GMB landing pages (AKA: the website URL you link to from your GMB account)
      • Joy tested linking to the home page (which had more authority/prominence) vs. linking to the local landing page (which had more relevance) and found that traffic went way up when linking to the home page.
      • Before you go switching all your GMB links though, test this for yourself!
    • Reviews
      • Joy wanted to know how much reviews actually impacted ranking, and what it was exactly about reviews that would help or hurt.
      • She decided to see what would happen to rankings when reviews were removed. This happened to a business that was review gating (a violation of Google’s guidelines), but Joy found that reviews flagged for violations aren’t actually removed, only hidden, which explains why “removed” reviews don’t negatively impact local rankings.
    • Possum filter
      • Organic results can get filtered because of duplicate content, whereas local results can get filtered because they’re too close to another business in the same category. This is called the Possum filter.
    • Keywords in a business name
      • This is against Google’s guidelines, but sadly, it works
      • For example, Joy tested adding the word “salad bar” to a listing that didn’t even have a salad bar and their local rankings for that keyword shot up.
      • Although it works, don’t do it! Google can remove your listing for this type of violation, and they’ve been removing more listings for this reason lately.
    • Fake listings
      • New listings can rank even if they have no website, authority, citations, etc. simply because they keyword stuffed their business name. These types of rankings can happen overnight, whereas it can take a year or more to achieve certain organic rankings.
      • Spend time reporting spam listings in your clients’ niches because it can improve your clients’ local rankings.

    Britney Muller — Featured Snippets: Essentials to Know & How to Target 

    Closing out day three of MozCon was our very own Britney, Sr. SEO scientist extraordinaire, on everyone’s favorite SEO topic: Featured snippets!


    We’re seeing more featured snippets than ever before, and they’re not likely going away. It’s time to start capitalizing on this SERP feature so we can start earning brand awareness and traffic for our clients!

    Here’s how:

    • Know what keywords trigger featured snippets that you rank on page 1 for
    • Know the searcher’s intent
    • Provide succinct answers
    • Add summaries to popular posts
    • Identify commonly asked questions
    • Leverage Google’s NLP API
    • Monitor featured snippets
    • If all else fails, leverage third-party sites that already rank. Maybe your own site has low authority and isn’t ranking well; try publishing on LinkedIn or Medium instead to get the snippet!
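    On the NLP API point: here is a sketch of the request body Google Cloud’s Natural Language API expects for entity analysis. The endpoint and field names come from Google’s public documentation; authentication (an API key or OAuth token) is omitted, and the sample text is just an example.

    ```javascript
    // Google Cloud Natural Language API, documents:analyzeEntities.
    // Running your post through it shows which entities Google extracts,
    // which helps when tightening a summary for a featured snippet.
    const NLP_ENDPOINT =
      "https://language.googleapis.com/v1/documents:analyzeEntities";

    function analyzeEntitiesRequest(text) {
      return {
        document: { type: "PLAIN_TEXT", content: text },
        encodingType: "UTF8",
      };
    }

    // The body would be POSTed to NLP_ENDPOINT with your credentials attached.
    analyzeEntitiesRequest("Featured snippets appear above organic results.");
    ```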

    There’s lots of debate over whether featured snippets send you more traffic or take it away due to zero-click results, but consider the benefits featured snippets can bring even without the click. Whether featured snippets bring you traffic, increased brand visibility in the SERPs, or both, they’re an opportunity worth chasing.

    Aaaand, that’s a wrap!

    Thanks for joining us at this year’s MozCon! And a HUGE thank you to everyone (Mozzers, partners, and crew) who helped make this year’s MozCon possible — we couldn’t have done it without all of you. 

    What was your favorite moment of the entire conference? Tell us below in the comments! And don’t forget to grab the speaker slides here

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Originally posted on The Moz Blog http://bit.ly/2XR6Tuk

    The Real Impact of Mobile-First Indexing & The Importance of Fraggles

    Posted by Suzzicks

    While SEOs have been doubling down on content and quality signals for their websites, Google was building the foundation of a new reality for crawling, indexing, and ranking. Though many believe deep in their hearts that “Content is King,” the reality is that Mobile-First Indexing enables a new kind of search result — one that focuses on surfacing and re-publishing content in ways that feed Google’s cross-device monetization opportunities better than simple websites ever could.

    For two years, Google honed and changed their messaging about Mobile-First Indexing, mostly de-emphasizing the risk that good, well-optimized, Responsive-Design sites would face. Instead, the search engine giant focused more on the use of the Smartphone bot for indexing, which led to an emphasis on the importance of matching SEO-relevant site assets between desktop and mobile versions (or renderings) of a page. Things got a bit tricky when Google had to explain that the Mobile-First Indexing process would not necessarily be bad for desktop-oriented content, but all of Google’s shifting and positioning eventually validated my long-stated belief: That Mobile-First Indexing is not really about mobile phones, per se, but mobile content.

    I would like to propose an alternative to the predominant view, a speculative theory, about what has been going on with Google in the past two years, and it is the thesis of my 2019 MozCon talk — something we are calling Fraggles and Fraggle-based Indexing.

    I’ll go through Fraggles and Fraggle-based indexing, and how this new method of indexing has made web content more ‘liftable’ for Google. I’ll also outline how Fraggles impact the search engine results pages (SERPs), and why it fits with Google’s promotion of Progressive Web Apps. Next, I will provide information about how astute SEOs can adapt their understanding of SEO and leverage Fraggles and Fraggle-based Indexing to meet the needs of their clients and companies. Finally, I’ll go over the implications that this new method of indexing will have on Google’s monetization and technology strategy as a whole.

    Ready? Let’s dive in.

    Fraggles & Fraggle-based indexing

    The SERP has changed in many ways. These changes can be thought of and discussed separately, but I believe that they are all part of a larger shift at Google. This shift includes “Entity-First Indexing” of crawled information around the existing structure of Google’s Knowledge Graph, and the concept of “Portable-prioritized Organization of Information,” which favors information that is easy to lift and re-present in Google’s properties — Google describes these two things together as “Mobile-First Indexing.”

    As SEOs, we need to remember that the web is getting bigger and bigger, which means that it’s getting harder to crawl. Users now expect Google to index and surface content instantly. But while webmasters and SEOs were building out more and more content in flat, crawlable HTML pages, the best parts of the web were moving towards more dynamic websites and web-apps. These new assets were driven by databases of information on a server, populating their information into websites with JavaScript, XML or C++, rather than flat, easily crawlable HTML. 

    For many years, this was a major problem for Google, and thus, it was a problem for SEOs and webmasters. Ultimately though, it was the more complex code that forced Google to shift to this more advanced, entity-based system of indexing — something we at MobileMoxie call Fraggles and Fraggle-Based Indexing, and the credit goes to JavaScript’s “Fragments.”

    Fraggles represent individual parts (fragments) of a page for which Google overlayed a “handle” or “jump-link” (aka named-anchor, bookmark, etc.) so that a click on the result takes the users directly to the part of the page where the relevant fragment of text is located. These Fraggles are then organized around the relevant nodes on the Knowledge Graph, so that the mapping of the relationships between different topics can be vetted, built-out, and maintained over time, but also so that the structure can be used and reused, internationally — even if different content is ranking. 

    More than one Fraggle can rank for a page, and the format can vary from a text-link with a “Jump to” label, an unlabeled text link, a site-link carousel, a site-link carousel with pictures, or occasionally horizontal or vertical expansion boxes for the different items on a page.

    The most notable thing about Fraggles is the automatic scrolling behavior from the SERP. While Fraggles are often linked to content that has HTML or JavaScript jump-links, sometimes the jump-links appear to be added by Google without being present in the code at all. This behavior is also prominently featured in AMP Featured Snippets, for which Google has the same scrolling behavior but also includes Google’s colored highlighting — superimposed on the page — to show the part of the page that was displayed in the Featured Snippet, allowing the searcher to see it in context. I write about this more in the article: What the Heck are Fraggles.
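    A quick illustration of the jump-link mechanics: the fragment identifier after the `#` is what a Fraggle-style result targets. A minimal sketch follows, with a hypothetical URL and section id; the browser-only scrolling step is shown as a comment because it can’t run outside a page.

    ```javascript
    // Extract the fragment (named anchor / jump-link target) from a URL.
    function fragmentOf(url) {
      return new URL(url).hash.replace(/^#/, "");
    }

    // In a browser, the jump a Fraggle result performs amounts to:
    //   document.getElementById(fragmentOf(location.href))?.scrollIntoView();

    fragmentOf("https://example.com/vegetables#lettuce");
    // → "lettuce"
    ```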

    How Fraggles & Fraggle-based indexing works with JavaScript

    Google’s desire to index Native Apps and Web Apps, including single-page apps, has necessitated Google’s switch to indexing based on Fragments and Fraggles, rather than pages. In JavaScript, as well as in Native Apps, a “Fragment” is a piece of content or information that is not necessarily a full page. 

    The easiest way for an SEO to think about a Fragment is within the example of an AJAX expansion box: The piece of text or information that is fetched from the server to populate the AJAX expander when clicked could be described as a Fragment. Alternatively, if it is indexed for Mobile-First Indexing, it is a Fraggle. 

    It is no coincidence that Google announced the launch of Deferred JavaScript Rendering at roughly the same time as the public roll-out of Mobile-First Indexing without drawing-out the connection, but here it is: When Google can index fragments of information from web pages, web apps and native apps, all organized around the Knowledge Graph, the data itself becomes “portable” or “mobile-first.”

    We have also recently discovered that Google has begun to index URLs with a # jump-link, after years of not doing so, and is reporting on them separately from the primary URL in Search Console. As you can see below from our data, they aren’t getting a lot of clicks, but they are getting impressions. This is likely because of the low average position. 

    Before Fraggles and Fraggle-Based Indexing, indexing # URLs would have just resulted in a massive duplicate content problem and extra work indexing for Google. Now that Fraggle-based Indexing is in-place, it makes sense to index and report on # URLs in Search Console — especially for breaking up long, drawn-out JavaScript experiences like PWA’s and Single-Page-Apps that don’t have separate URLs, databases, or in the long-run, possibly even for indexing native apps without Deep Links. 

    Why index fragments & Fraggles?

    If you’re used to thinking of rankings with the smallest increment being a URL, this idea can be hard to wrap your brain around. To help, consider this thought experiment: How useful would it be for Google to rank a page that gave detailed information about all different kinds of fruits and vegetables? It would be easy for a query like “fruits and vegetables,” that’s for sure. But if the query is changed to “lettuce” or “types of lettuce,” then the page would struggle to rank, even if it had the best, most authoritative information. 

    This is because the “lettuce” keywords would be diluted by all the other fruit and vegetable content. It would be more useful for Google to rank the part of the page that is about lettuce for queries related to lettuce, and the part of the page about radishes well for queries about radishes. But since users don’t want to scroll through the entire page of fruits and vegetables to find the information about the particular vegetable they searched for, Google prioritizes pages with keyword focus and density, as they relate to the query. Google will rarely rank long pages that covered multiple topics, even if they were more authoritative.

    With featured snippets, AMP featured snippets, and Fraggles, it’s clear that Google can already find the important parts of a page that answers a specific question — they’ve actually been able to do this for a while. So, if Google can organize and index content like that, what would the benefit be in maintaining an index that was based only on per-pages statistics and ranking? Why would Google want to rank entire pages when they could rank just the best parts of pages that are most related to the query?

    To address these concerns, SEOs have historically worked to break individual topics out into separate pages, with one page focused on each topic or keyword cluster. So, with our vegetable example, this would ensure that the lettuce page could rank for lettuce queries and the radish page could rank for radish queries. With each website creating a new page for every possible topic it would like to rank for, there’s a lot of redundant and repetitive work for webmasters. It also likely adds a lot of low-quality, unnecessary pages to the index. Realistically, how many individual pages on lettuce does the internet really need, and how would Google determine which one is the best? The fact is, Google wanted to shift to an algorithm that focused less on links and more on topical authority to surface only the best content, and the scrolling feature in Fraggles lets Google sidestep the need for all of those near-duplicate pages.

    Even though the effort to switch to Fraggle-based indexing, and organize the information around the Knowledge Graph, was massive, the long-term benefits of the switch far outpace the costs to Google, because they make Google’s system more flexible, monetizable, and sustainable, especially as the amount of information and the number of connected devices expands exponentially. It also helps Google identify, serve, and monetize new cross-device search opportunities as they continue to expand. This includes search results on TVs, connected screens, and spoken results from connected speakers. A few relevant costs and benefits are outlined below for you to contemplate, keeping Google’s long-term perspective in mind.

    Why Fraggles and Fraggle-based indexing are important for PWAs

    What also makes the shift to Fraggle-based Indexing relevant to SEOs is how it fits in with Google’s championing of Progressive Web Apps or AMP Progressive Web Apps, (aka PWAs and PWA-AMP websites/web apps). These types of sites have become the core focus of Google’s Chrome Developer summits and other smaller Google conferences.

    From the perspective of traditional crawling and indexing, Google’s focus on PWAs is confusing. PWAs often feature heavy JavaScript and are still frequently built as Single-Page Apps (SPA’s), with only one or only a few URLs. Both of these ideas would make PWAs especially difficult and resource-intensive for Google to index in a traditional way — so, why would Google be so enthusiastic about PWAs? 

    The answer is that PWAs require ServiceWorkers, which, combined with Fraggles and Fraggle-based indexing, take the burden of crawling and indexing complex web content off Google.

    In case you need a quick refresher: ServiceWorker is a JavaScript file — it instructs a device (mobile or computer) to create a local cache of content to be used just for the operation of the PWA. It is meant to make the loading of content much faster (because the content is stored locally) instead of just left on a server or CDN somewhere on the internet and it does so by saving copies of text and images associated with certain screens in the PWA. Once a user accesses content in a PWA, the content doesn’t need to be fetched again from the server. It’s a bit like browser caching, but faster — the ServiceWorker stores the information about when content expires, rather than storing it on the web. This is what makes PWAs seem to work offline, but it is also why content that has not been visited yet is not stored in the ServiceWorker.
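    To make that caching logic concrete, here is a toy model of a ServiceWorker’s cache-first strategy in plain JavaScript. A real ServiceWorker lives in its own `sw.js` file and uses the browser’s Cache API and fetch events; this sketch only models the expiry behavior described above, with the fetch function and TTL supplied as assumptions.

    ```javascript
    // Toy model of ServiceWorker caching: serve stored content until it
    // expires, and only fetch from the network again after that.
    function makeCache(fetchFn, ttlMs, now = Date.now) {
      const store = new Map();
      return function get(url) {
        const hit = store.get(url);
        if (hit && now() - hit.at < ttlMs) return hit.body; // cache hit: no network
        const body = fetchFn(url); // miss or expired: refetch and re-store
        store.set(url, { body, at: now() });
        return body;
      };
    }
    ```

    This is why content a user has already visited loads instantly (and works offline), while unvisited content still requires a trip to the server — and why a crawler that understood the ServiceWorker’s expiry rules would know exactly when re-crawling is worthwhile.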

    ServiceWorkers and SEO

    Most SEOs who understand PWAs understand that a ServiceWorker is for caching and load time, but they may not understand that it is likely also for indexing. If you think about it, ServiceWorkers mostly store the text and images of a site, which is exactly what the crawler wants. A crawler that uses Deferred JavaScript Rendering could go through a PWA and simulate clicking on all the links and store static content using the framework set forth in the ServiceWorker. And it could do this without always having to crawl all the JavaScript on the site, as long as it understood how the site was organized, and that organization stayed consistent. 

    Google would also know exactly how often to re-crawl, and could limit re-crawling to items that were about to expire in the ServiceWorker cache. This saves Google a lot of time and effort, allowing it to get through, or possibly skip, complex code and JavaScript.
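    If that theory is right, the crawler's scheduling decision reduces to a timestamp comparison per cached asset. The sketch below is purely illustrative; the field names are ours, not a real Google or ServiceWorker API.

```javascript
// Is this cached copy past its ServiceWorker-style lifetime?
// cachedAtMs: when it was stored; maxAgeMs: how long it stays fresh.
function needsRecrawl(cachedAtMs, maxAgeMs, nowMs) {
  return nowMs - cachedAtMs >= maxAgeMs;
}

// Filter a cache manifest down to only the items worth re-fetching.
function dueForRecrawl(manifest, nowMs) {
  return manifest.filter((item) =>
    needsRecrawl(item.cachedAtMs, item.maxAgeMs, nowMs)
  );
}
```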

    For a PWA to be indexed, Google requires webmasters to ‘register their app in Firebase’; they used to require webmasters to ‘register their ServiceWorker.’ Firebase is the Google platform that allows webmasters to set up and manage indexing and deep linking for their native apps, chat-bots and, now, PWAs.

    Direct communication with a PWA specialist at Google a few years ago revealed that Google didn’t crawl the ServiceWorker itself, but crawled the API to the ServiceWorker. It’s likely that when webmasters register their ServiceWorker with Google, Google is actually creating an API to the ServiceWorker, so that the content can be quickly and easily indexed and cached on Google’s servers. Since Google has already launched an Indexing API and appears to now favor APIs over traditional crawling, we believe Google will begin pushing the use of ServiceWorkers to improve page speed (since they can be used on non-PWA sites too), but the real aim will be to ease Google’s own burden of crawling and indexing content manually.

    Flat HTML may still be the fastest way to get web information crawled and indexed with Google. For now, JavaScript still has to be deferred for rendering, but it is important to recognize that this could change and crawling and indexing is not the only way to get your information to Google. Google’s Indexing API, which was launched for indexing time-sensitive information like job postings and live-streaming video, will likely be expanded to include different types of content. 

    It’s important to remember that AMP, Schema, and many other powerful SEO features started exactly this way, with a limited launch; beyond that, some great SEOs have already tested submitting other types of content to the API and seen success. Submitting to APIs skips Google’s process of blindly crawling the web for new content and allows webmasters to feed information to Google directly.

    It is possible that the new Indexing API follows a similar structure or process to PWA indexing. Submitted URLs can already get some kinds of content indexed or removed from Google’s index, usually in about an hour, and while it is currently only officially available for those two kinds of content, we expect it to be expanded broadly.
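    For reference, the live Indexing API takes one small JSON notification per URL, POSTed to a publish endpoint. The endpoint and body shape below match Google's documented v3 API; OAuth 2.0 authentication and the actual HTTP call are omitted and left to you.

```javascript
// Build an Indexing API notification. 'URL_UPDATED' asks Google to
// (re)crawl the URL; 'URL_DELETED' asks for removal from the index.
const INDEXING_ENDPOINT =
  'https://indexing.googleapis.com/v3/urlNotifications:publish';

function buildNotification(url, type = 'URL_UPDATED') {
  if (type !== 'URL_UPDATED' && type !== 'URL_DELETED') {
    throw new Error('Unknown notification type: ' + type);
  }
  return { endpoint: INDEXING_ENDPOINT, body: JSON.stringify({ url, type }) };
}
```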

    How will this impact SEO strategy?

    Of course, every SEO wants to know how to leverage this (admittedly speculative) theory: how can we turn these changes at Google to our benefit? 

    The first thing to do is take a good, long, honest look at a mobile search result. Position #1 in the organic rankings is just not what it used to be. There’s a ton of engaging content that is often pushing it down, but not counting as an organic ranking position in Search Console. This means that you may be maintaining all your organic rankings while also losing a massive amount of traffic to SERP features like Knowledge Graph results, Featured Snippets, Google My Business, maps, apps, Found on the Web, and other similar items that rank outside of the normal organic results. 

    These results, as well as Pay-per-Click results (PPC), are more impactful on mobile because they are stacked above organic rankings. Rather than being off to the side, as they might be in a desktop view of the search, they push organic rankings further down the results page. There has been some great reporting recently about the statistical and large-scale impact of changes to the SERP and how these changes have resulted in changes to user behavior in search, especially from Dr. Pete Meyers, Rand Fishkin, and Jumpshot.

    Dr. Pete has focused on the increasing number of changes to the Google Algorithm recorded in his MozCast, which heated up at the end of 2016 when Google started working on Mobile-First Indexing, and again after it launched the Medic update in 2018. 

    Rand, on the other hand, focused on how the new types of rankings are pushing traditional organic results down, resulting in less traffic to websites, especially on mobile. All this great data from these two really set the stage for a fundamental shift in SEO strategy as it relates to Mobile-First Indexing.

    The research shows that Google re-organized its index to suit a different presentation of information — especially if they are able to index that information around an entity-concept in the Knowledge Graph. Fraggle-based Indexing makes all of the information that Google crawls even more portable because it is intelligently nested among related Knowledge Graph nodes, which can be surfaced in a variety of different ways. Since Fraggle-based Indexing focuses more on the meaningful organization of data than it does on pages and URLs, the results are a more “windowed” presentation of the information in the SERP. SEOs need to understand that search results are now based on entities and use-cases (think micro-moments), instead of pages and domains.

    Google’s Knowledge Graph

    To really grasp how this new method of indexing will impact your SEO strategy, you first have to understand how Google’s Knowledge Graph works. 

    Since it is an actual “graph,” all Knowledge Graph entries (nodes) include both vertical and lateral relationships. For instance, an entry for “bread” can include lateral relationships to related topics like cheese, butter, and cake, but may also include vertical relationships like “standard ingredients in bread” or “types of bread.” 

    Lateral relationships can be thought of as related nodes on the Knowledge Graph and hint at “Related Topics,” whereas vertical relationships point to a broadening or narrowing of the topic, hinting at the most likely filters within it. In the case of bread, a vertical relationship up would be topics like “baking,” and down would include topics like “flour” and other ingredients used to make bread, or “sourdough” and other specific types of bread.
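    A toy model makes the distinction concrete. The node and edge names below mirror the bread example and are purely illustrative, not any real Knowledge Graph API:

```javascript
// A tiny Knowledge-Graph-style node store: lateral edges are related
// topics; 'up' broadens the topic and 'down' narrows it.
const graph = {
  bread: {
    lateral: ['cheese', 'butter', 'cake'],
    up: ['baking'],
    down: ['flour', 'sourdough'],
  },
};

// Everything a SERP might surface around a node: related topics plus
// broader and narrower filters.
function relatedTopics(node) {
  const { lateral, up, down } = graph[node];
  return [...lateral, ...up, ...down];
}
```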

    SEOs should note that Knowledge Graph entries can now include an increasingly wide variety of filters and tabs that narrow the topic information to benefit different types of searcher intent. This includes things like helping searchers find videos, books, images, quotes, and locations, but in the case of filters it can be topic-specific and unpredictable (informed by active machine learning). This is the crux of Google’s goal with Fraggle-based Indexing: to organize the information of the web based on Knowledge Graph entries or nodes, otherwise discussed in SEO circles as “entities.” 

    Since the relationships of one entity to another remain the same, regardless of the language a person is speaking or searching in, the Knowledge Graph information is language-agnostic, and thus easily used for aggregation and machine learning in all languages at the same time. Using the Knowledge Graph as a cornerstone for indexing is, therefore, a much more useful and efficient means for Google to access and serve information in multiple languages for consumption and ranking around the world. In the long-term, it’s far superior to the previous method of indexing.

    Examples of Fraggle-based indexing in the SERPs 

    Knowledge Graph

    Google has dramatically increased the number of Knowledge Graph entries and the categories and relationships within them. The build-out is especially prominent for topics for which Google has a high amount of structured data and information already. This includes topics like:

    • TV and movies — from Google Play
    • Food and recipes — from Recipe Schema, recipe AMP pages, and external food and nutrition databases
    • Science and medicine — from trusted sources (like WebMD)
    • Businesses — from Google My Business

    Google is adding more and more nodes and relationships to the graph, and existing entries are also being built out with more tabs and carousels to break a single topic into smaller, more granular topics or types of information.

    As you can see below, the build-out of the Knowledge Graph has also added to the number of filters and drill-down options within many queries, even outside of the Knowledge Graph. This increase can be seen throughout all of the Google properties, including Google My Business and Shopping, both of which we believe are now sections of the Knowledge Graph:

    Google Search for ‘Blazers’ with Visual Filters at the Top for Shopping Oriented Queries
    Google My Business (Business Knowledge Graph) with Filters for Information about Googleplex

    Other similar examples include the additional filters and “Related Topics” results in Google Images, which we also believe to represent nodes on the Knowledge Graph:

    Google Images Increase in Filters & Inclusion of Related Topics Means that These Are Also Nodes on the Knowledge Graph

    The Knowledge Graph is also being presented in a variety of different ways. Sometimes there’s a sticky navigation that persists at the top of the SERP, as seen in many media-oriented queries, and sometimes it’s broken up to show different information throughout the SERP, as you may have noticed in many local business-oriented search results, both shown below.

    Media Knowledge Graph with Sticky Top Nav (Query for ‘Ferris Bueller’s Day Off’)

    Local Business Knowledge Graph (GMB) With Information Split-up Throughout the SERP

    Since the launch of Fraggle-based indexing is essentially a major Knowledge Graph build-out, Knowledge Graph results have also begun including more engaging content which makes it even less likely that users will click through to a website. Assets like playable video and audio, live sports scores, and location-specific information such as transportation information and TV time-tables can all be accessed directly in the search results. There’s more to the story, though. 

    Increasingly, Google is also building out its own proprietary content by re-mixing existing information it has indexed to create unique, engaging content like animated ‘AMP Stories,’ which webmasters are also encouraged to build out on their own. Google has also started building a zoo of AR animals that can show as part of a Knowledge Graph result, all while encouraging developers to use its AR kit to build their own AR assets that will, no doubt, eventually be selectively incorporated into the Knowledge Graph too.

    Google AR Animals in Knowledge Graph
    Google AMP Stories Now Called ‘Life in Images’

    SEO Strategy for Knowledge Graphs

    Companies that want to leverage the Knowledge Graph should take every opportunity to create their own assets, like AR models and AMP Stories, so that Google has no reason to do it for them. Beyond that, companies should submit accurate information directly to Google whenever they can. The easiest way to do this is through Google My Business (GMB). Whatever types of information are requested in GMB should be added or uploaded. If Google Posts are available in your business category, you should be posting regularly, and making sure the posts link back to your site with a call to action. If you have videos or photos that are relevant for your company, upload them to GMB. Start to think of GMB as a social network or newsletter — any assets that are shared on Facebook or Twitter can also be shared on Google Posts, or at least uploaded to the GMB account.

    You should also investigate the current Knowledge Graph entries that are related to your industry, and work to become associated with recognized companies or entities in that industry. This could be from links or citations on the entity websites, but it can also include being linked by third-party lists that give industry-specific advice and recommendations, such as being listed among the top competitors in your industry (“Best Plumbers in Denver,” “Best Shoe Deals on the Web,” or “Top 15 Best Reality TV Shows”). Links from these posts also help but are not required — especially if you can get your company name on enough lists with the other top players. Verify that any links or citations from authoritative third-party sites like Wikipedia, Better Business Bureau, industry directories, and lists are all pointing to live, active, relevant pages on the site, and not going through a 301 redirect.

    While this is just speculation and not a proven SEO strategy, you might also want to make sure that your domain is correctly classified in Google’s records by checking the industries that it is associated with. You can do so in Google’s MarketFinder tool. Make updates or recommend new categories as necessary. Then, look into the filters and relationships that are given as part of Knowledge Graph entries and make sure you are using the topic and filter words as keywords on your site.

    Featured snippets 

    Featured Snippets or “Answers” first surfaced in 2014 and have also expanded quite a bit, as shown in the graph below. It is useful to think of Featured Snippets as rogue facts, ideas or concepts that don’t have a full Knowledge Graph result, though they might actually be associated with certain existing nodes on the Knowledge Graph (or they could be in the vetting process for eventual Knowledge Graph build-out). 

    Featured Snippets seem to surface when the information comes from a source that Google does not have an incredibly high level of trust in, as it does for Wikipedia, and often they come from third-party sites that may or may not have a monetary interest in the topic — something that makes Google want to vet the information more thoroughly, and may prevent Google from using it if a less biased option is available.

    Like the Knowledge Graph, Featured Snippets results have grown very rapidly in the past year or so, and have also begun to include carousels — something that Rob Bucci writes about extensively here. We believe that these carousels represent potentially related topics that Google knows about from the Knowledge Graph. Featured Snippets now look even more like mini-Knowledge Graph entries: Carousels appear to include both lateral and vertically related topics, and their appearance and maintenance seem to be driven by click volume and subsequent searches. However, this may also be influenced by aggregated engagement data for People Also Ask and Related Search data.

    The build-out of Featured Snippets has been so aggressive that sometimes the answers that Google lifts are obviously wrong, as you can see in the example image below. It is also important to understand that Featured Snippet results can change from location to location and are not language-agnostic, and thus, are not translated to match the Search Language or the Phone Language settings. Google also does not hold themselves to any standard of consistency, so one Featured Snippet for one query might present an answer one way, and a similar query for the same fact could present a Featured Snippet with slightly different information. For instance, a query for “how long to boil an egg” could result in an answer that says “5 minutes” and a different query for “how to make a hard-boiled egg” could result in an answer that says “boil for 1 minute, and leave the egg in the water until it is back to room temperature.”

    Featured Snippet with Carousel

    Featured Snippet that is Wrong

    The data below was collected by Moz and represents an average of roughly 10,000 that skews slightly towards ‘head’ terms.

    SEO strategy for featured snippets

    All of the standard recommendations for driving Featured Snippets apply here. This includes making sure that you keep the information that you are trying to get ranked in a Featured Snippet clear, direct, and within the recommended character count. It also includes using simple tables, ordered lists, and bullets to make the data easier to consume, as well as modeling your content after existing Featured Snippet results in your industry.
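    As a rough editorial pre-flight, those guidelines can be linted automatically. The ~300-character ceiling below is a community rule of thumb, not an official Google limit:

```javascript
// Flag common reasons a candidate answer paragraph won't fit a
// Featured Snippet: empty, too long, or spanning multiple paragraphs.
const MAX_SNIPPET_CHARS = 300; // rule of thumb, not a Google spec

function snippetIssues(text) {
  const issues = [];
  if (text.trim().length === 0) issues.push('empty');
  if (text.length > MAX_SNIPPET_CHARS) issues.push('too long');
  if (text.includes('\n')) issues.push('multiple paragraphs');
  return issues;
}
```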

    This is still speculative, but it seems likely that including Speakable Schema markup, along with markup like “HowTo,” “FAQ,” and “Q&A,” may also drive Featured Snippets. These kinds of results are specially designated as content that works well in voice search. Since Google has been adamant that there is only one index, and is heavily focused on improving voice results from Google Assistant devices, anything that would make a good Google Assistant result and ranks well might also have a stronger chance at ranking in a Featured Snippet.
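    For illustration, a minimal FAQPage structured-data object looks like this. The schema.org types are real; the question and answer text are placeholders, and whether this markup actually drives Featured Snippets remains, as noted, speculation.

```javascript
// Illustrative FAQPage JSON-LD, normally embedded in the page inside a
// <script type="application/ld+json"> tag.
const faqJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'FAQPage',
  mainEntity: [
    {
      '@type': 'Question',
      name: 'How long should I boil an egg?',
      acceptedAnswer: {
        '@type': 'Answer',
        text: 'Boil for about five minutes for a soft yolk.',
      },
    },
  ],
};
```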

    People Also Ask & Related Searches

    Finally, the increased occurrence of “Related Searches,” as well as the inclusion of People Also Ask (PAA) questions just below most Knowledge Graph and Featured Snippet results, is undeniable. The Earl Tea screenshot shows that PAAs, along with Interesting Finds, are both part of the Knowledge Graph too.

    The graph below shows the steady increase in PAAs. PAA results appear to be an expansion of Featured Snippets because, once expanded, the answer to the question is displayed with the citation below it. Similarly, some Related Search results also now include a result that looks like a Featured Snippet, instead of simply linking over to a different search result. You can now find ‘Related Searches’ throughout the SERP: often as part of a Knowledge Graph result, sometimes in a carousel in the middle of the SERP, and always at the bottom of the SERP — sometimes with images and expansion buttons that surface Featured Snippets within the Related Search results directly in the existing SERP.

    Boxes with Related Searches are now also included with Image Search results. It’s interesting to note that Related Search results in Google Images started surfacing at the same time that Google began translating image Title Tags and Alt Tags. That timing fits the concept of Entity-First Indexing: that entities and the Knowledge Graph are language-agnostic, and that Related Searches are somehow tied to the Knowledge Graph.

    This data was collected by Moz and represents an average of roughly 10,000 that skews slightly towards ‘head’ terms.

    People Also Ask

    Related Searches

    SEO strategy for PAA and related searches

    Since PAAs and some Related Searches now appear to simply include Featured Snippets, driving Featured Snippet results for your site is also a strong strategy here. It often appears that PAA results include at least two versions of the same question, re-stated in different language, before including questions that are more related to lateral and vertical nodes on the Knowledge Graph. If you include information on your site that Google thinks is related to the topic, based on Related Searches and PAA questions, it could help make your site appear relevant and authoritative.

    Finally, it is crucial to remember that you don’t need a website to rank in Google now, and SEOs should consider non-website rankings part of their job too. 

    If a business doesn’t have a website, or if you just want to cover all the bases, you can let Google host your content directly — in as many places as possible. We have seen that Google-hosted content generally seems to get preferential treatment in Google search results and Google Discover, especially when compared to the decreasing traffic from traditional organic results. Google is now heavily focused on surfacing multimedia content, so anything that you might have previously created a new page on your website for should now be considered for a video.

    Google My Business (GMB) is great for companies that don’t have websites, or that want to host their websites directly with Google. YouTube is great for videos, TV, video-podcasts, clips, animations, and tutorials. If you have an app, a book, an audio-book, a podcast, a movie, a TV show, a class, music, or a PWA, you can submit it directly to Google Play (much of the video content in Google Play is now cross-populated in YouTube and YouTube TV, but this is not necessarily true of the other assets). This strategy can also include books in Google Books, flights in Google Flights, hotels in Google Hotels listings, and attractions in Google Explore. It also includes having valid AMP code, since Google hosts AMP content, and Google News if your site is an approved news provider.

    Changes to SEO tracking for Fraggle-based indexing

    The biggest problem for SEOs is not only the missing organic traffic; it’s also that current methods of tracking organic results generally don’t show whether things like Knowledge Graph, Featured Snippets, PAAs, Found on the Web, or other types of results are appearing at the top of the query or somewhere above your organic result. Position one in organic results is not what it used to be, nor is anything below it, so you can’t expect those rankings to drive the same traffic. If Google is going to be lifting and re-presenting everyone’s content, the traffic will never arrive at the site, and SEOs won’t know if their efforts are still returning the same monetary value. The problem is especially acute for publishers, who have only been able to sell advertising based on the traffic their websites are expected to drive.

    The other thing to remember is that results differ, especially on mobile, from device to device (generally based on screen size) and based on the phone’s OS. They can also change significantly based on the location or language settings of the phone, and they definitely do not always match desktop results for the same query. Most SEOs don’t know much about the reality of their mobile search results because most SEO reporting tools still focus heavily on desktop results, even though Google has switched to Mobile-First. 

    As well, SEO tools generally only report on rankings from one location — the location of their servers — rather than being able to test from different locations. 

    The only thing that good SEOs can do to address this problem is to use tools like the MobileMoxie SERP Test to check what rankings look like on top keywords from all the locations where their users may be searching. While the free tool only provides results for one location at a time, subscribers can test search results in multiple locations, based on a service-area radius or an uploaded CSV of addresses. The tool has integrations with Google Sheets and a connector with Data Studio to help with SEO reporting, but APIs are also available for deeper integrations in content-editing tools, dashboards, and other SEO tools.

    Conclusion

    At MozCon 2017, I expressed my belief that the impact of Mobile-First Indexing requires a re-interpretation of the words “Mobile,” “First,” and “Indexing.” Re-defined in the context of Mobile-First Indexing, the words should be understood to mean “portable,” “preferred,” and “organization of information.” The potential of a shift to Fraggle-based indexing and the recent changes to the SERPs, especially in the past year, certainly seems to prove the accuracy of this theory. And though they have been in the works for more than two years, the changes to the SERP now seem to be rolling-out faster and are making the SERP unrecognizable from what it was only three or four years ago.

    In this post, we described Fraggles and Fraggle-based indexing as a theory about the true nature of the change to Mobile-First Indexing: how the index itself — and the units of indexing — may have changed to accommodate faster and more nuanced organization of information based on the Knowledge Graph, rather than simply links and URLs. We covered how Fraggles and Fraggle-based Indexing work, how they relate to JavaScript and PWAs, what strategies SEOs can take to leverage them for additional exposure in the search results, and how to update success tracking to account for all the variables that impact mobile search results.

    SEOs need to consider the opportunities and change the way we view our overall indexing strategy, and our jobs as a whole. If Google is organizing the index around the Knowledge Graph, that makes it much easier for Google to constantly mention near-by nodes of the Knowledge Graph in “Related Searches” carousels, links from the Knowledge Graph, and topics in PAAs. It might also make it easier to believe that featured snippets are simply pieces of information being vetted (via Google’s click-crowdsourcing) for inclusion or reference in the Knowledge Graph.

    Fraggles and Fraggle-based indexing re-frame the switch to Mobile-First Indexing, which means that SEOs and SEO tool companies need to start thinking mobile-first — i.e. about the portability of their information. While it is likely that pages and domains still carry strong ranking signals, the changes in the SERP all seem to focus less on entire pages, and more on pieces of pages, similar to the ones surfaced in Featured Snippets, PAAs, and some Related Searches. If Google focuses more on windowing content and being an “answer engine” instead of a “search engine,” then this fits well with their stated identity, and their desire to build a more efficient, sustainable, international engine.

    SEOs also need to find ways to serve their users better, by focusing more on the reality of the mobile SERP, and how much it can vary for real users. While Google may not call the smallest rankable units Fraggles, it is what we call them, and we think they are critical to the future of SEO.

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Originally posted on The Moz Blog: http://bit.ly/2XR2nfs

    MozCon 2019: Day Two Learnings

    Posted by KameronJenkins

    We had another amazing day here at MozCon — our speakers delivered some incredible expertise on day two. But there were plenty of moments in between that were just as spectacular. 

    In no particular order, today also consisted of: 

    • Areej parading 180 slides-worth of knowledge in 14 minutes — like a boss!
    • 1,000+ attendees singing Marie “Happy Birthday”
    • Dr. Pete bringing the “wizard” in SEO wizard to his talk (and now everyone wants to know which House everyone belongs to)
    • Dogs DO like birthday cake, thank you for coming to our TED talk
    • Yogurt parfaits
    • This tender moment between Wil and Stacy, our live event captioner
    • Cat puns

    And much, much more. Let’s get to it! Read on for our top takeaways from day two of MozCon.

    Heather Physioc — Building a Discoverability Powerhouse: Lessons From Merging an Organic, Paid, & Content Practice

    Heather kicked off day two by making a strong case for un-siloing our search teams. When paid, organic, and content teams join forces, they can reach maximum effectiveness.

    By using her own team’s experience as an example, Heather helped us see what it takes to build a powerful, cross-functional team:

    • Start with a mantra to guide your team. Theirs is “Connected brands start with connected teams.”
    • Rip the bandaid off. Get people involved in the mission and brainstorming as soon as possible.
    • While you want to start collaborating as soon as possible, make the actual changes in small, incremental steps. Develop committees dedicated to making certain aspects of the change easier.
    • “No process is precious” means establishing clear, living processes (they use Confluence to document these) that can adapt over time. Check in regularly and ditch what isn’t serving you.
    • Commit to cross-team training not so you can do each other’s jobs, but to promote empathy and to start thinking about how your work will affect other people.
    • Just like we should avoid siloing our departments, we should avoid siloing our reporting. Bring data from the channels together to tell a cohesive story.
    • Create a culture of feedback so that feedback feels less personal and more about improving the work.
    • Even if you’re not able to change the org chart, you can still work on un-siloing by collaborating with your counterparts on other teams.

    Visit https://mozcon.vmlyrconnect.com/ for even more wisdom from Heather!

    Mary Bowling — Brand Is King: How to Rule in the New Era of Local Search 

    Mary took the stage next to shed some light on why brand is so critical to success in this latest era of local search.

    • With so much talk about Google taking clicks away from our websites, Mary posited that Google’s actually giving local businesses a ton of opportunity to increase conversions on the SERP itself.
    • According to research from Mike Blumenthal, 70% of local business conversions happen on the SERP, with the smaller percentage happening on websites. While both are important, Mary says that local businesses really need to concentrate on owning their branded SERPs.
    • Google loves brands, and one way we can tell Google we’re a good one is to take control of what other websites say about us.
    • Want to understand Google’s recent attention on local? They’re moving from a company that helps you find answers to a company that helps you get things done.
    • Control whatever you can on your branded SERPs, whether that’s managing reviews, making sure your GMB is up to date and accurate, or investing in PR to influence news and other mentions that show up on your branded SERP.
    • Google is giving small businesses a lot of ways to attract customers. Use them to your advantage!

    Casie Gillette — Making Memories: Creating Content People Remember 

    Casie told us that only 20% of people remember what they read, which means you might not remember this. We’ll try not to take it personally. In the meantime, how do you create something that people will actually remember and come back for again and again?

    Here’s some of the advice she offered:

    • People care about brands that care about them. Make your audience feel seen and you’ll win.
    • Pay attention to your audience demographics and psychographics! Make your content resonate with your audience by knowing your audience.
    • Keep your content clear and simple to give your audience the answer to their question as quickly as possible.
    • Add movement to your images when possible. It grabs attention in a sea of static images.
    • Choose colors wisely. Color can drastically impact conversions and how people respond in general.
    • Messages delivered in stories can be 22 percent more effective than pure info alone.
    • Whatever you do, commit to not being forgettable!

    Wil Reynolds — 20 Years in Search & I Don’t Trust My Gut or Google  

    Wil Reynolds brought the honesty in a continuation of his talk from last year’s MozCon. Massive opportunity is at our fingertips. We just need to leverage the data.

    Here are some of the best nuggets from his presentation!

    • There’s power in looking at big data. You can usually find a ton of waste and save a bunch of money that helps fund your other initiatives.
    • Every client deserves a money-saving analysis. Use big data to help you do this at scale.
    • Looking at data generically can lead you to the wrong conclusions. Instead of blindly following best practices lists and correlation studies, look at data from your own websites to see what actually moves the needle.
    • Always stay in hypothesis mode.
    • Humans are naturally inclined to bring our own bias into decision-making, which is why data is so important. You can’t know everything. Let the data tell you what to do.

    Bonus! Go to bit.ly/savingben if you want to stop losing money.

    Dr. Marie Haynes — Super-Practical Tips for Improving Your Site’s E-A-T

    Dr. Marie Haynes serves up incredible tips for how to practically improve your site’s E-A-T — something every SEO and marketer needs.

    Those tips included things like:

    • Using Help a Reporter Out (HARO) to get authoritative mentions in publications
    • Publishing data — people love to cite original research!
    • Create articles that answer previously unanswered questions (find those on forums!)
    • Create original tools that solve common problems
    • Run a test and publish your results

    Sounds a lot like link building, right? That’s intentional! Links to your site from authoritative sources are a huge factor when it comes to E-A-T.

    Areej AbuAli — Fixing the Indexability Challenge: A Data-Based Framework 

    How do you turn an unwieldy 2.5 million-URL website into a manageable and indexable site of just 20,000 pages? Answer: you catch Areej’s talk. 

    • When doing an audit, it’s a good idea to include not only what the problem is, but what effect it’s causing and the proposed solution.
    • The site Areej was working on had no rules in place to direct robots, creating unlimited URLs to crawl. Crawl budget was being wasted and Google was missing what was actually important on their site. Fundamentals like these needed to be fixed first!
    • She used search volume data to determine what content was important and should be indexed. If a keyword had low search volume but was still needed for usability purposes, it was no-indexed.
    • Another barrier to Google indexing their important content was the lack of a sitemap. Areej recommended creating and submitting separate sitemaps for the different main sections of their website.
    • The site also had no core content and its only links were coming from three referring domains.
    • Despite all of Areej’s recommendations, the client failed to implement many of them and implemented some of them incorrectly. She decided to have a face-to-face meeting to clear things up.

    If she were to do this all over again, here’s what she would do differently:

    • Realize that you can’t force a client to implement your recommendations
    • Take a targeted approach to the SEO audit and focus on tackling one issue at a time.
    • At the end of the day, technical problems are people problems. It doesn’t matter how good your SEO audit is if it’s never followed.

    Go to bit.ly/mozcon-areej for her full methodology and helpful graphics!

    Christi Olson — What Voice Means for Search Marketers: Top Findings from the 2019 Report 

    Microsoft’s Christi Olson gave us the down-low on everything you need to know about voice search now and into the future based on findings from a study they ran at Microsoft.

    • 69 percent of respondents said they have used a digital assistant
    • 75 percent of households will have at least one smart speaker by 2020
    • Over half of consumers expect their voice assistant to help them make retail purchases within five years
    • Search is moving from answers to actions — not smart actions like “Turn on the light” but “I want to know/go/do” actions
    • Smartphones, PCs, and smart speakers are the main ways people engage with voice
    • 40 percent of spoken responses come from featured snippets. This is how you win at voice search.
    • To rank in featured snippets:
      1. Find queries where you’re already ranking on page one.
      2. Ask what questions are related to your query and answer them on your site (hint: even without voice search data, it’s safe to assume that many of the longer and more conversational keywords in your tools were probably spoken queries!).
      3. Structure your answer appropriately (paragraph, table, or bullets), keeping in mind that voice devices don’t usually read tables.
      4. Make sure your answers are straightforward and clear.
      5. Don’t forget SEO best practices so it’s easy for search engines to find and understand your content!
    • Although speakable schema markup says it’s only available for news articles, she’s seen it used (and working!) on non-news sites.
    • 25 percent of people currently are using voice to make purchases

    Main takeaways? Voice is here, use schema that helps voice, and bots/actions will help enable v-commerce (voice shopping) in the future.

    Visit aka.ms/moz19 to view the full report Christi based this talk on.

    Paul Shapiro — Redefining Technical SEO 

    Take your textbook definition of technical SEO and throw it out the window because there’s more to it than crawling, indexing, and rendering. And Paul definitely proves it.

    • We’re used to thinking of SEO sitting at the center of a Venn diagram where content, links, and website architecture converge. That idea is an oversimplification and doesn’t really capture the full spirit of technical SEO.
    • If technical SEO is: “Any sufficiently technical action undertaken with the intent of improving search results” then it broadens the scope beyond just those actions that impact crawl/render/index.
    • There are four main types of technical SEO: checklist, general, blurred responsibility, and advanced-applied:
      • Checklist-style tech SEO is essentially an itemized list of technical problems you could answer yes-or-no to.
      • General technical SEO is similar to a checklist with some additional logic applied.
      • Blurred-responsibility technical SEO covers tasks that lie in uncertain territory, such as items an SEO checks but a developer would need to implement.
      • Advanced-applied SEO involves things like SEO testing, adopting new technology, data science for SEO purposes, Natural Language Processing to enhance content development, using Machine Learning for search data, and creating automation. It involves using technology to do better SEO.
    • Advanced-applied SEO means that all SEO can be technical SEO, including:
      • Redirect mapping
      • Meta descriptions
      • Content ideation
      • Link building
      • Keyword research
      • A/B testing and experimentation

    Visit searchwilderness.com/mozcon-2019 for some of Paul’s python scripts he uses to make “traditional” SEO tasks technical.

    Dr. Pete Meyers — How Many Words Is a Question Worth? 

    Rounding out day 2 was Dr. Pete, asking the important questions: how do we find the best questions, craft content around them, and evaluate success?

    • The prevalence of People Also Ask (PAA) features has exploded within the past year! Last year they were on 30 percent of all SERPs Moz tracked and now they’re on 90 percent.
    • Google is likely using PAA clicks to feed their machine learning and help them better understand query intent.
    • Since Google is using them so often, how can we take advantage?
    • Once you know what questions people are asking around your topic, you can vet which opportunities you’ll go after on the basis of credibility (am I credible enough to answer this intelligently?), competition (is this something I can realistically compete on?), and cannibalization (am I already ranking for this with some other piece on my site?).
    • When you target questions, you’ll often get much more than you bargained for… in a good way! Don’t get discouraged if your keyword research tool shows a low search volume for a query target. Chances are, ranking for that keyword also means you’ll rank well for lots of related queries too.

    Dr. Pete also announced that Moz is looking into the possibility of a People Also Ask tool! For now, he’s testing the model with a manual process you can check out today. Just go to moz.com/20q and he’ll send you a personalized list of the top 20 questions for your domain or topic.

    Day two — done!

    Only one more day left for this year’s MozCon! What stood out the most for you on day two? Tell us in the comments below!

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Originally posted on The Moz Blog: http://bit.ly/2JNJqQN
    If you’re involved with SEO, you’ll already know about the Moz SEO blog. Great info and practical guides on SEO; check it out.

    MozCon 2019: The Top Takeaways From Day One

    Posted by KameronJenkins

    Rand, Russ, Ruth, Rob, and Ross. Dana and Darren. Shannon and Sarah. We didn’t mean to (we swear we didn’t) but the first day of MozCon was littered with alliteration, takeaways, and oodles of insights from our speakers. Topics ranged from local SEO and link building to Google tools, and there was no shortage of “Aha!” moments. And while the content was diverse, the theme was clear: search is constantly changing.

    If you’re a Moz community member, you can access the slides from Day One. Not a community member yet? Sign up — it’s free!

    Get the speaker slides!

    Ready? Let’s make like Roger in his SERP submarine and dive right in!

    Sarah’s welcome

    Our fearless leader took the stage to ready our attendees for their deep sea dive over the next three days. Our guiding theme to help set the tone? The deep sea of data that we find ourselves immersed in every day.

    People are searching more than ever before on more types of devices than ever before… we truly are living in the golden age of search. As Sarah explained though, not all search is created equal. Because Google wants to answer searchers’ questions as quickly as possible, they’ve moved from being the gateway to information to being the destination for information in many cases. SEOs need to be able to work smarter and identify the best opportunities in this new landscape. 

    Rand Fishkin — Web Search 2019: The Essential Data Marketers Need

    Next up was Rand of SparkToro who dropped a ton of data about the state of search in 2019.

    To set the stage, Rand gave us a quick review of the evolution of media: “This new thing is going to kill this old thing!” has been the theme of panicked marketers for decades. TV was supposed to kill radio. Computers were supposed to kill TV. Mobile was supposed to kill desktop. Voice search was supposed to kill text search. But as Rand showed us, these new technologies often don’t kill the old ones — they just take up all our free time. We need to make sure we’re not turning away from mediums just because they’re “old” and, instead, make sure our investments follow real behavior.

    Rand’s deck was also chock-full of data from Jumpshot about how much traffic Google is really sending to websites these days, how much of that comes from paid search, and how that’s changed over the years.

    In 2019, Google sent ~20 percent fewer organic clicks via browser searches than in 2016.

    In 2016, there were 26 organic clicks for every paid click. In 2019, that ratio is 11:1.

    Google still owns the lion’s share of the search market and still sends a significant amount of traffic to websites, but in light of this data, SEOs should be thinking about how their brands can benefit even without the click.

    And finally, Rand left us with some wisdom from the world of social — getting engagement on social media can get you the type of attention it takes to earn quality links and mentions in a way that’s much easier than manual, cold outreach.

    Ruth Burr Reedy — Human > Machine > Human: Understanding Human-Readable Quality Signals and Their Machine-Readable Equivalents

    It’s 2019. And though we all thought by this year we’d have flying cars and robots to do our bidding, machine learning has come a very long way. Almost frustratingly so — the push and pull of making decisions for searchers versus search engines is an ever-present SEO conundrum.

    Ruth argued that in our pursuit of an audience, we can’t get too caught up in the middleman (Google), and in our pursuit of Google, we can’t forget the end user.

    Optimizing for humans-only is inefficient. Those who do are likely missing out on a massive opportunity. Optimizing for search engines-only is reactive. Those who do will likely fall behind.

    She also left us with the very best kind of homework… homework that’ll make us all better SEOs and marketers!

    • Read the Quality Rater Guidelines
    • Ask what your site is currently benefiting from that Google might eliminate or change in the future
    • Write better (clearer, simpler) content
    • Examine your SERPs with the goal of understanding search intent so you can meet it
    • Lean on subject matter experts to make your brand more trustworthy
    • Conduct a reputation audit — what’s on the internet about your company that people can find?

    And last, but certainly not least, stop fighting about this stuff. It’s boring.

    Thank you, Ruth!

    Dana DiTomaso — Improved Reporting & Analytics Within Google Tools

    Freshly fueled with cinnamon buns and glowing with the energy of a thousand jolts of caffeine, we were ready to dive back into it — this time with Dana from Kick Point.

    This year’s talk was a continuation of Dana’s earlier talk on goal charters. If you haven’t checked that out yet or you need a refresher, you can view it here.

    Dana emphasized the importance of data hygiene. Messy analytics, missing tracking codes, poorly labeled events… we’ve all been there. Dana is a big advocate of documenting every component of your analytics.

    She also blew us away with a ton of great insight on making our reports accessible — from getting rid of jargon and using the client’s language to using colors that are compatible with printing.

    And just when we thought it couldn’t get any more actionable, Dana drops some free Google Data Studio resources on us! You can check them out here.

    (Also, close your tabs!)

    Rob Bucci — Local Market Analytics: The Challenges and Opportunities

    The first thing you need to know is that Rob finally did it — he finally got a cat.

    Very bold of Rob to assume he would have our collective attention after dropping something adorable like that on us. Luckily, we were all able to regroup and focus on his talk — how there are challenges aplenty in the local search landscape, but there are even more opportunities if you overcome them.

    Rob came equipped with a ton of stats about localized SERPs that have massive implications for rank tracking.

    • 73 percent of the 1.2 million SERPs he analyzed contained some kind of localized feature.
    • 25 percent of the sites he was tracking had some degree of variability between markets.
    • 85 percent was the maximum variability he saw across zip codes in a single market.

    That’s right… rankings can vary by zip code, even for queries you don’t automatically associate as local intent. Whether you’re a national brand without physical storefronts or you’re a single-location retail store, localization has a huge impact on how you show up to your audience.

    With this in mind, Rob announced a huge initiative that Moz has been working on… Local Market Analytics — complete with local search volume! Eep! See how you perform on hyper-local SERPs with precision and ease — whether you’re an online or location-based business.

    It launched today as an invitation-only limited release. Want an invite? Request it here.

    Ross Simmonds — Keywords Aren’t Enough: How to Uncover Content Ideas Worth Chasing

    Ross Simmonds was up next, and he dug into how you might be creating content wrong if you’re building it strictly around keyword research.

    The methodology we marketers need to remember is Research – Rethink – Remix.

    Research:

    • Find the channel your audience spends time on. What performs well? How can you serve this audience?

    Rethink:

    • Find the content that your audience wants most. What topics resonate? What stories connect?

    Remix:

    • Measure how your audience responds to the content. Can this be remixed further? How can we remix at scale?

    If you use this method and you still aren’t sure if you should pursue a content opportunity, ask yourself the following questions:

    • Will it give us a positive ROI?
    • Does it fall within our circle of competence?
    • Does the benefit outweigh the cost of creation?
    • Will it give us shares and links and engagement?

    Thanks, Ross, for such an actionable session!

    Shannon McGuirk — How to Supercharge Link Building with a Digital PR Newsroom

    Shannon of Aira Digital took the floor with real-life examples of how her team does link building at scale with what she calls the “digital PR newsroom.”

    The truth is, most of us are still link building like it’s 1948 with “planned editorial” content. When we do this, we’re missing out on a ton of opportunity (about 66%!) that can come from reactive editorial and planned reactive editorial.

    Shannon encouraged us to try tactics that have worked for her team such as:

    • Having morning scrum meetings to go over trending topics and find reactive opportunities
    • Staffing your team with both storytellers and story makers
    • Holding quarterly reviews to see which content types performed best and using that to inform future work

    Her talk was so good that she even changed Cyrus’s mind about link building!

    For free resources on how you can set up your own digital PR newsroom, visit: aira.net/mozcon19.

    Darren Shaw — From Zero to Local Ranking Hero

    Next up, Darren of Whitespark chronicled his 8-month long journey to growing a client’s local footprint.

    Here’s what he learned and encouraged us to implement in response:

    • Track from multiple zip codes around the city
    • Make sure your citations are indexed
    • The service area section in GMB won’t help you rank in those areas. It’s for display purposes only
    • Invest in a Google reviews strategy
    • The first few links earned really have a positive impact, but it reaches a point of diminishing returns
    • Any individual strategy will probably hit a point of diminishing returns
    • A full website is better than a single-page GMB website when it comes to local rankings

    As SEOs, we’d all do well to remember that it’s not one specific activity, but the aggregate, that will move the needle!

    Russ Jones — Esse Quam Videri: When Faking it is Harder than Making It

    Rounding out day one of MozCon was our very own Russ Jones on Esse Quam Videri — “To be, rather than to seem.”

    By Russ’s own admission, he’s a pretty good liar, and so too are many SEOs. In a poll Russ ran on Twitter, he found that 64 percent of SEOs state that they have promoted sites they believe are not the best answer to the query. We can be so “rank-centric” that we engage in tactics that make our websites look like we care about the users, when in reality, what we really care about is that Google sees it.

    Russ encouraged SEOs to help guide the businesses we work for to “be real companies” rather than trying to look like real companies purely for SEO benefit.

    Thanks to Russ for reminding us to stop sacrificing the long run for the short run!

    Phew — what a day!

    And it ain’t over yet! There are two more days to make the most of MozCon, connect with fellow attendees, and pick the brains of our speakers. 

    In the meantime, tell me in the comments below — if you had to pick just one thing, what was your favorite part about day one?

    Originally posted on The Moz Blog: http://bit.ly/2XR2mrU

    How to Target Featured Snippet Opportunities — Best of Whiteboard Friday

    Posted by BritneyMuller

    Once you’ve identified where the opportunity to nab a featured snippet lies, how do you go about targeting it? Part One of our “Featured Snippet Opportunities” series focused on how to discover places where you may be able to win a snippet, but today we’re focusing on how to actually make changes that’ll help you do that. 

    Joining us at MozCon next week? This video is a great lead up to Britney’s talk: Featured Snippets: Essentials to Know & How to Target.

    Give a warm, Mozzy welcome to Britney as she shares pro tips and examples of how we’ve been able to snag our own snippets using her methodology.

    Target featured snippet opportunities

    Click on the whiteboard image above to open a high-resolution version in a new tab!

    Video Transcription

    Today, we are going over targeting featured snippets, Part 2 of our featured snippets series. Super excited to dive into this.

    What’s a featured snippet?

    For those of you that need a little brush-up, what’s a featured snippet? Let’s say you do a search for something like, “Are pigs smarter than dogs?” You’re going to see an answer box that says, “Pigs outperform three-year-old human children on cognitive tests and are smarter than any domestic animal. Animal experts consider them more trainable than cats or dogs.” How cool is that? But you’ll likely see these answer boxes for all sorts of things. So something to sort of keep an eye on. How do you become a part of that featured snippet box? How do you target those opportunities?

    Last time, we talked about finding keywords that you rank on page one for that also have a featured snippet. There are a couple ways to do that. We talk about it in the first video. Something I do want to mention, in doing some of that the last couple weeks, is that Ahrefs can help you discover your featured snippet opportunities. I had no idea that was possible. Really cool, go check them out. If you don’t have Ahrefs and maybe you have Moz or SEMrush, don’t worry, you can do the same sort of thing with a VLOOKUP.

    So I know this looks a little crazy for those of you that aren’t familiar. Super easy. It basically allows you to combine two sets of data to show you where some of those opportunities are. So happy to link to some of those resources down below or make a follow-up video on how to do just that.
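    The VLOOKUP idea above is just a join: one export of the keywords you rank on page one for, one export of which keywords show a featured snippet, matched on the keyword column. Here’s a minimal pandas sketch of that join; the keywords, positions, and column names below are made up for illustration, not real export formats from any particular tool:

```python
import pandas as pd

# Hypothetical export from your rank tracker: keyword and your position.
rankings = pd.DataFrame({
    "keyword": ["find backlinks", "mozrank", "seo audit"],
    "position": [4, 6, 12],
})

# Hypothetical export of SERP features: keywords that show a snippet.
serp_features = pd.DataFrame({
    "keyword": ["find backlinks", "mozrank", "link building"],
    "has_featured_snippet": [True, True, True],
})

# Same idea as a VLOOKUP: match the two sheets on the keyword column,
# then keep keywords where you rank on page one (positions 1-10)
# and a featured snippet is present.
merged = rankings.merge(serp_features, on="keyword", how="inner")
opportunities = merged[(merged["position"] <= 10) & merged["has_featured_snippet"]]
print(opportunities["keyword"].tolist())  # ['find backlinks', 'mozrank']
```

“seo audit” drops out because it ranks on page two, and “link building” drops out because you don’t rank for it at all; what’s left is your target list.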

    1. Identify

    All right. So step one is identifying these opportunities. You want to find the keywords that you’re on page one for that also have this answer box. You want to weigh the competitive search volume against qualified traffic. Initially, you might want to just go after search volume. I highly suggest you sort of reconsider and evaluate where might the qualified traffic come from and start to go after those.

    2. Understand

    From there, you really just want to understand the intent, more so even beyond this table that I have suggested for you. To be totally honest, I’m doing all of this with you. It’s been a struggle, and it’s been fun, but sometimes this isn’t very helpful. Sometimes it is. But a lot of times I’m not even looking at some of this stuff when I’m comparing the current featured snippet page and the page that we currently rank on page one for. I’ll tell you what I mean in a second.

    3. Target

    So we have an example of how I’ve been able to already steal one. Hopefully, it helps you. How do you target your keywords that have the featured snippet?

    • Simplifying and cleaning up your pages does wonders. Google wants to provide a very simple, cohesive, quick answer for searchers and for voice searches. So definitely try to mold the content in a way that’s easy to consume.
    • Summaries do well. Whether they’re at the top of the page or at the bottom, they tend to do very, very well.
    • Competitive markup: if you see that the current featured snippet is marked up in a particular way, you can mark up your own content similarly to be a little bit more competitive.
    • Provide unique info
    • Dig deeper, go that extra mile, provide something else. Provide that value.

    How To Target Featured Snippet Examples

    What are some examples? So these are just some examples that I personally have been running into and I’ve been working on cleaning up.

    • Roman numerals. I am trying to target a list result, and the page we currently rank on number one for has Roman numerals. Maybe it’s a big deal, maybe it’s not. I just changed them to numbers to see what’s going to happen. I’ll keep you posted.
    • Fix broken links. But I’m also just going through our page and cleaning it. We have a lot of older content. I’m fixing broken links. I have the Check My Links tool. It’s a Chrome extension that I just click, and it tells me what’s a 404 or what I might need to update.
    • Fixing spelling errors or any grammatical errors that may have slipped through editors’ eyes. I use Grammarly. I have the free version. It works really well, super easy. I’ve even found some super old posts that have the double or triple spacing after a period. It drives me crazy, but cleaning some of that stuff up.
    • Deleting extra markup. You might see some additional breaks, not necessarily like that ampersand. But you know what I mean in WordPress where it’s that weird little thing for that break in the space, you can clean those out. Some extra, empty header markup, feel free to delete those. You’re just cleaning and simplifying and improving your page.
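    Check My Links does the broken-link check in the browser; if you’d rather script the same idea for a batch of pages, a minimal Python standard-library sketch (pull the links out of a page’s HTML, then request each one and look at the status code) might look like this. The sample HTML and function names are illustrative, not from any tool mentioned above:

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

class LinkExtractor(HTMLParser):
    """Collect absolute href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def check_link(url, timeout=10):
    """Return the HTTP status code for a URL; 404 means a broken link."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as response:
            return response.status
    except HTTPError as error:
        return error.code  # e.g. 404
    except URLError:
        return None  # DNS failure, timeout, etc.

page = '<p><a href="https://example.com/good">ok</a> <a href="/relative">skip</a></p>'
print(extract_links(page))  # ['https://example.com/good']
```

Feeding each extracted URL to `check_link` and flagging anything that returns 404 (or `None`) gives you the same punch list the extension does.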

    One interesting thing that I’ve come across recently was for the keyword “MozRank.” Our page is beautifully written, perfectly optimized. It has all the things in place to be that featured snippet, but it’s not. That is when I fell back and I started to rely on some of this data. I saw that the current featured snippet page has all these links.

    So I started to look into what are some easy backlinks I might be able to grab for that page. I came across Quora that had a question about MozRank, and I noticed that — this is a side tip — you can suggest edits to Quora now, which is amazing. So I suggested a link to our Moz page, and within the notes I said, “Hello, so and so. I found this great resource on MozRank. It completely confirms your wonderful answer. Thank you so much, Britney.”

    I don’t know if that’s going to work. I know it’s a nofollow. I hope it can send some qualified traffic. I’ll keep you posted on that. But kind of a fun tip to be aware of.

    How we nabbed the “find backlinks” featured snippet

    All right. How did I nab the featured snippet “find backlinks”? This surprised me, because I hardly changed much at all, and we were able to steal that featured snippet quite easily. We were currently in the fourth position, and this was the old post that was in the fourth position. These are the updates I made that are now in the featured snippet.

    Clean up the title

    So we go from the title “How to Find Your Competitor’s Backlinks Next Level” to “How to Find Backlinks.” I’m just simplifying, cleaning it up.

    Clean up the H2s

    The first H2, “How to Check the Backlinks of a Site.” Clean it up, “How to Find Backlinks?” That’s it. I don’t change step one. These are all in H3s. I leave them in the H3s. I’m just tweaking text a little bit here and there.

    Simplify and clarify your explanations/remove redundancies

    I changed “Enter your competitor’s domain URL” — it felt a little duplicate — to “Enter your competitor’s URL.” Let’s see. “Export results into CSV,” what kind of results? I changed that to “export backlink data into CSV.” “Compile CSV results from all competitors,” what kind of results? “Compile backlink CSV results from all competitors.”

    So you can look through this. All I’m doing is simplifying and adding the word “backlink” to clarify some of it, and we were able to nab that.

    So hopefully that example helps. I’m going to continue to sort of drudge through a bunch of these with you. I look forward to any of your comments, any of your efforts down below in the comments. Definitely looking forward to Part 3 and to chatting with you all soon.

    Thank you so much for joining me on this edition of Whiteboard Friday. I look forward to seeing you all soon. See you.

    Video transcription by Speechpad.com


    Originally posted on The Moz Blog: http://bit.ly/2JNd7l2