Image search results used to give you the option to “view image” without having to navigate to the site the image was hosted on.
When this feature launched in 2013, sites saw a 63% decline in organic traffic from image results.
Why?
Because there was no need to click through when the image could be viewed in full from within the search results.
And then everything changed
In February 2018, Google decided to remove the “view image” button. Now searchers must visit the site hosting that image directly, restoring image results to their former organic search driving power.
According to some recent studies, this change has increased organic image traffic by a massive 37%.
Given image results’ return to value, marketers are asking themselves how they can make the most out of this search mechanism.
So what are some new ways we can leverage tools to better understand how to optimize images for ranking?
To explore this, I decided to see if Google’s Vision AI could assist in unearthing hidden information about what matters to image ranking. Specifically, I wondered what Google’s image topic modeling would reveal about the images that rank for individual keyword searches, as well as groups of thematically related keywords aggregated around a specific topic or niche.
Here’s what I did — and what I found.
A deep dive on “hunting gear”
I began by pulling out 10 to 15 top keywords in our niche. For this article, we chose “hunting gear” as a category and pulled high-intent, high-value, high-volume keywords. The keywords we selected were:
Bow hunting gear
Cheap hunting gear
Coyote hunting gear
Dans hunting gear
Deer hunting gear
Discount hunting gear
Duck hunting gear
Hunting gear
Hunting rain gear
Sitka hunting gear
Turkey hunting gear
Upland hunting gear
Womens hunting gear
I then pulled the top 50 ranking images for each of these keywords, yielding roughly 650 images to feed to Google’s image analysis API. I also noted the ranking position of each image in our data (this is important for later).
Learning from labels
The first, and perhaps most actionable, analysis the API can be used for is in labeling images. It utilizes state-of-the-art image recognition models to parse each image and return labels for everything within that image it can identify. Most images had between 4 and 10 identifiable objects contained within them. For the “hunting gear” related keywords listed above, this was the distribution of labels:
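To reproduce this step, each image URL can be sent to the Vision API’s label detection feature. The sketch below is a minimal example using only the Python standard library; the API key and the cap of 10 labels per image are my assumptions, not details from the original workflow:

```python
import json
import urllib.request

VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def label_image(image_url, api_key):
    """Ask Cloud Vision's LABEL_DETECTION feature to describe one image.

    `api_key` is assumed to be a Google Cloud API key with the Vision API enabled.
    """
    body = {
        "requests": [{
            "image": {"source": {"imageUri": image_url}},
            "features": [{"type": "LABEL_DETECTION", "maxResults": 10}],
        }]
    }
    req = urllib.request.Request(
        f"{VISION_ENDPOINT}?key={api_key}",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_labels(json.load(resp))

def extract_labels(response):
    """Flatten an annotate response into (label, confidence) pairs."""
    annotations = response["responses"][0].get("labelAnnotations", [])
    return [(a["description"], a["score"]) for a in annotations]
```

Keeping `extract_labels` separate from the network call means the label distributions can be tallied offline once all the responses are collected.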
At a high level, this gives us plenty of information about Google’s understanding of what images that rank for these terms should depict. A few takeaways:
The top ranking images across all 13 of these top keywords have a pretty even distribution across labels.
Clothing, and specifically camouflage, is highly represented, with nearly 5% of all images containing camo-style clothing. Perhaps this seems obvious, but it’s instructive: including images containing camo gear in blog posts targeting these hunting keywords likely improves the odds of having one of your images included in top-ranking image results.
Outdoor labels are also overrepresented: wildlife, trees, plants, animals, etc. Images of hunters in camo, out in the wild, and with animals near them are disproportionately represented.
Looking closer at the distribution of labels by keyword category can give us a deeper understanding of how the ranking images differ between similar keywords.
For “turkey hunting gear” and “duck hunting gear,” having birds in your images seems very important, with the other keywords rarely including images with birds.
Easy comparisons are possible with the interactive Tableau dashboards, giving you an “at a glance” understanding of what image distributions look like for an individual keyword vs. any other or all others. Below I highlighted just “duck hunting gear,” and you can see a distribution of the most prevalent labels similar to the other keywords at the top. However, labels like “water bird,” “duck,” “bird,” “waders,” “hunting dog,” and “hunting decoy” are hugely overrepresented, providing ample ideas for great images to include in the body of your content.
Getting an intuition for the differences in top ranking (images ranking in the first 10 images for a keyword search) vs. bottom ranking (images ranking in the 41st to 50th positions) is also possible.
Here we can see that some labels seem preferred for top rankings. For instance:
Clothing-related labels are much more common amongst the best ranking images.
Animal-related labels are less common amongst the best ranking images but more common amongst the lower ranking images.
Guns seem significantly more likely to appear in top ranking images.
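A comparison like this can be computed directly from the label data once each image’s ranking position is recorded. The sketch below assumes a simple list-of-dicts shape for the collected data (my assumption, not the article’s actual storage format):

```python
from collections import Counter

def label_share(images):
    """Fraction of the given images that carry each label."""
    counts = Counter(label for img in images for label in img["labels"])
    n = len(images) or 1
    return {label: c / n for label, c in counts.items()}

def top_vs_bottom(images, top_cutoff=10, bottom_start=41):
    """Compare label prevalence between top- and bottom-ranking images.

    `images` is assumed to be a list of dicts like
    {"position": 3, "labels": ["Camouflage", "Hunting"]}.
    Returns {label: top_share - bottom_share}; positive values mark labels
    overrepresented near the top of the results.
    """
    top = [i for i in images if i["position"] <= top_cutoff]
    bottom = [i for i in images if i["position"] >= bottom_start]
    top_share, bottom_share = label_share(top), label_share(bottom)
    labels = set(top_share) | set(bottom_share)
    return {l: top_share.get(l, 0) - bottom_share.get(l, 0) for l in labels}
```

Sorting the resulting dict by value surfaces the labels that most distinguish position 1–10 images from position 41–50 images.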
By investigating trends in labels across your keywords, you can learn a lot about the images most likely to rank in your particular niche. The results will differ for any set of keywords, but a close examination will yield more than a few actionable insights.
Not surprisingly, there are ways to go even deeper in your analysis with other artificial intelligence APIs. Let’s take a look at how we can further supplement our efforts.
An even deeper analysis for understanding
Deepai.org has an amazing suite of APIs that can be easily accessed to provide additional image labeling capabilities. One such API is “Image Captioning,” which is similar to Google’s image labeling, but instead of providing single labels, it provides descriptive labels, like “the man is holding a gun.”
We ran all of the same images as the Google label detection through this API and got some great additional detail for each image.
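For reference, a captioning request of this kind might look like the following. The `neuraltalk` endpoint name and the response shape are assumptions based on DeepAI’s REST conventions at the time; check their current documentation before relying on them:

```python
import json
import urllib.parse
import urllib.request

# Assumed endpoint for DeepAI's image-captioning model; verify the current
# model name and response shape against deepai.org's API docs.
CAPTION_ENDPOINT = "https://api.deepai.org/api/neuraltalk"

def caption_image(image_url, api_key):
    """Request a descriptive caption (e.g. "the man is holding a gun") for one image."""
    data = urllib.parse.urlencode({"image": image_url}).encode("utf-8")
    req = urllib.request.Request(
        CAPTION_ENDPOINT, data=data, headers={"api-key": api_key}
    )
    with urllib.request.urlopen(req) as resp:
        return parse_caption(json.load(resp))

def parse_caption(response):
    """Pull the caption text out of a DeepAI-style JSON response."""
    return response.get("output", "").strip()
```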
Just as with the label analysis, I broke up the caption distributions and analyzed their distributions by keyword and by overall frequency for all of the selected keywords. Then I compared top and bottom ranking images.
A final interesting finding
Google sometimes ranks YouTube video thumbnails in image search results. Below is an example I found in the hunting gear image searches.
It seems likely that at least some of Google’s understanding of why this thumbnail should rank for hunting gear comes from its image label detection. Though other factors, like having “hunting gear” in the title and coming from the NRA (high topical authority) certainly help, the fact that this thumbnail depicts many of the same labels as other top-ranking images must also play a role.
The lesson here is that the right thumbnail choice can help a video rank in image search for competitive terms, so apply your learnings from label and caption analysis of image search results to your video SEO strategy!
In the case of either video thumbnails or standard images, don’t overlook the ranking potential of the elements featured — it could make a difference in your SERP positions.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
Spending quality time getting to know your client, their goals and capabilities, and getting them familiar with your team sets you up for a better client-agency relationship. Immersion workshops are the answer. Learn more about how to build a strong foundation with your clients in this week’s Whiteboard Friday, presented by Heather Physioc.
Click on the whiteboard image above to open a high resolution version in a new tab!
Video Transcription
Hey, everybody, and welcome back to Whiteboard Friday. My name is Heather Physioc, and I’m Group Director of Discoverability at VMLY&R. So I learned that when you onboard clients properly, the rest of the relationship goes a lot smoother.
Through some hard knocks and bumps along the way, we’ve come up with this immersion workshop model that I want to share with you. So I actually conducted a survey of the search industry and found that we tend to onboard clients inconsistently from one to the next if we bother to do a proper onboarding with them at all. So to combat that problem, let’s talk through the immersion workshop.
Why do an immersion workshop with a client?
So why bother taking the time to pause, slow down, and do an immersion workshop with a client?
1. Get knowledgeable fast
Well, first, it allows you to get a lot more knowledgeable about your client and their business a lot faster than you would if you were picking it up piecemeal over the first year of your partnership.
2. Opens dialogue
Next it opens a dialogue from day one.
It creates the expectation that you will have a conversation and that the client is expected to participate in that process with you.
3. Build relationships
You want to build a relationship where you know that you can communicate effectively with one another. It also starts to build relationships, not only with your immediate, day-to-day client contact, but also with people like their bosses and their peers inside their organization, who can be either blockers or advocates for the search work that your client is going to try to implement.
4. Align on purpose, roadmap, and measuring success
Naturally the immersion workshop is also a crucial time for you to align with your client on the purpose of your search program, to define the roadmap for how you’re going to deliver on that search program and agree on how you’re going to measure success, because if they’re measuring success one way and you’re measuring success a different way, you could end up at completely different places.
5. Understand the DNA of the brand
Ultimately, the purpose of a joint immersion workshop is to truly understand the DNA of the brand, what makes them tick, who are their customers, why should they care what this brand has to offer, which helps you, as a search professional, understand how you can help them and their clients.
Setting
Do it live! (Or use video chats)
So the setting for this immersion workshop ideally should be live, in-person, face-to-face, same room, same time, same place, same mission.
But worst case scenario, if for some reason that’s not possible, you can also pull this off with video chats, but at least you’re getting that face-to-face communication. There’s going to be a lot of back-and-forth dialogue, so that’s really, really important. It’s also important to building the empathy, communication, and trust between people. Seeing each other’s faces makes a big difference.
Over 1–3 days
Now the ideal setting for the immersion workshop is two days, in my opinion, so you can get a lot accomplished.
It’s a rigorous two days. But if you need to streamline it for smaller brands, you can totally pull it off with one. Or if you have the luxury of stretching it out and getting more time with them to continue building that relationship and digging deeper, by all means stretch it to three days.
Customize the agenda
Finally, you should work with the client to customize the agenda. So I like to send them a base template of an immersion workshop agenda with sessions that I know are going to be important to my search work.
But I work side-by-side with that client to customize sessions that are going to be the right fit for their business and their needs. So right away we’ve got their buy-in to the workshop, because they have skin in the game. They know which departments are going to be tricky. They know what objectives they have in their heads. So this is your first point of communication to make this successful.
Types of sessions
So what types of sessions do we want to have in our immersion workshop?
Vision
The first one is a vision session, and this is actually one that I ask the clients to bring to us. So we slot about 90 minutes for the client to give us a presentation on their brand, their overarching strategy for the year, their marketing strategy for the year.
We want to hear about their goals, revenue targets, objectives, problems they’re trying to solve, threats they see to the business. Whatever is on their mind or keeps them up at night or whatever they’re really excited about, that’s what we want to hear. This vision workshop sets the tone for the entire rest of the workshop and the partnership.
Stakeholder
Next we want to have stakeholder sessions.
We usually do these on day one. We’re staying pretty high level on day one. So these will be with other departments that are going to integrate with search. So that could be the head of marketing, for example, like a CMO. It could be the sales team. If they have certain sales objectives they’re trying to hit, that would be really great for a search team to know. Or it could be global regions.
Maybe Latin America and Europe have different priorities. So we may want to understand how the brand works on the global scale as opposed to just at HQ.
Practitioner
On day two is when we start to get a little bit more in the weeds, and we call these our practitioner sessions. So we want to work with our day-to-day SEO contacts inside the organization. But we also set up sessions with people like paid search if they need to integrate their search efforts.
We might set up time with analytics. So this will be where we demo our standard SEO reporting dashboards and then we work with the client to customize it for their needs. This is a time where we find out who they’re reporting up to and what kinds of metrics they’re measured on to determine success. We talk about the goals and conversions they’re measuring, how they’re captured, why they’re tracking those goals, and their existing baseline of performance information.
We also set up time with developers. Technology is essential to actually implementing our SEO recommendations. So we set up time with them to learn about their workflows and their decision-making process. I want to know if they have resource constraints or what makes a good project ticket in Jira to get our work done. Great time to start bonding with them and give them a say in how we execute search.
We also want to meet with content teams. Now content tends to be one of the trickiest areas for our clients. They don’t always have the resources, or maybe the search scope didn’t include content from day one. So we want to bring in whoever the content decision-makers or creators are. We want to understand how they think, their workflows and processes. Are they currently creating search-driven content, or is this going to be a shift in mentality?
So a lot of times we get together and talk about process, editorial calendaring, brand tone and voice, whatever it takes to get content done for search.
Summary and next steps
So after all of these, we always close with a summary and next steps discussion. So we work together to think about all the things that we’ve accomplished during this workshop and what our big takeaways and learnings are, and we take this time to align with our client on next steps.
When we leave that room, everybody should know exactly what they’re responsible for. Very powerful. You want to send a recap after the fact saying, “Here’s what we learned and here’s what we understand the next steps to be. Are we all aligned?” Heads nod. Great.
Tools to use
So here are a couple of tools that we’ve created, and we’ll make sure to link to these below.
We’ve created a standard onboarding checklist. The thing about search is when we’re onboarding a new client, we pretty commonly need the same things from one client to the next. We want to know things about their history with SEO. We need access and logins. Or maybe we need a list of their competitors. Whatever the case is, this is a completely repeatable process. So there’s no excuse for reinventing the wheel every single time.
So this standard onboarding checklist allows us to send this list over to the client so they can get started and get all the pieces in place that we need to be successful. It’s like mise en place when you’re cooking.
Discussion guides
We’ve also created some really helpful session discussion guides. So we give our clients a little homework before these sessions to start thinking about their business in a different way.
We’ll ask them open-ended questions like: What kinds of problems are your business unit solving this year? Or what is one of the biggest obstacles that you’ve had to overcome? Or what’s some work that you’re really proud of? So we send that in advance of the workshop. Then in our business unit discussions, which are part of the stakeholder discussions, we’ll actually use a few of the questions from that discussion guide to start seeding the conversation.
But we don’t just go down the list of questions, checking them off one by one. We just start the conversation with a couple of them and then follow it organically wherever it takes us, open-ended, follow-up, and clarifying questions, because the conversations we are having in that room with our clients are far more powerful than any information you’re going to get from an email that you just threw over the fence.
Sticky note exercise
We also do a pretty awesome little sticky note exercise. It’s really simple. So we pass out sticky notes to all the stakeholders that have attended the sessions, and we ask two simple questions.
One, what would cause this program to succeed? What are all the factors that can make this work?
We also ask what will cause it to fail.
Before you know it, the client has revealed, in their own words, what their internal obstacles and blockers will be. What are the things that they’ve run into in the past that have made their search program struggle? By having that simple exercise, it gets everybody in the mind frame of what their role is in making this program a success.
Maturity assessment
Now this is not about how good they are at SEO. This is about how well they incorporate SEO into their organization. We’ve actually done a separate Whiteboard Friday on the maturity assessment and how to implement it, so make sure to check that out. But here’s a quick overview: we have a survey that addresses five key areas of a client’s ability to integrate search into their organization.
It’s stuff like people. Do they have the right resources?
Process. Do they have a process? Is it documented? Is it improving?
Capacity. Do they have enough budget to actually make search possible?
Knowledge. Are they knowledgeable about search, and are they committed to learning more? Stuff like that.
So we’ve actually created a five-part survey with a number of different questions for the client to answer. We try to get as many people on the client side as we can to answer these questions. Then we take the numerical answers and the open-ended answers and compile them into a maturity assessment for the brand after the workshop.
So we use that workshop time to actually execute the survey, and we have something that we can bring back to the client not long after to give them a picture of where they stand today and where we’re going to take them in the future and what the biggest obstacles are that we need to overcome to get them there.
Heather shared even more strong team-building goodness in her MozCon 2019 talk. Get access to her session and more in our newly released video bundle, plus access 26 additional future-focused SEO topics from our top-notch speakers:
Make sure to schedule a learning sesh with the whole team and maximize your investment in SEO education!
Calculating individual page speed performance metrics can help you to understand how efficiently your site is running as a whole. Since Google uses the speed of a site (frequently measured by and referred to as PageSpeed) as one of the signals used by its algorithm to rank pages, it’s important to have that insight down to the page level.
One of the pain points in website performance optimization, however, is the lack of ability to easily run page speed performance evaluations en masse. There are plenty of great tools like PageSpeed Insights or the Lighthouse Chrome plugin that can help you understand more about the performance of an individual page, but these tools are not readily configured to help you gather insights for multiple URLs — and running individual reports for hundreds or even thousands of pages isn’t exactly feasible or efficient.
In September 2018, I set out to find a way to gather sitewide performance metrics and ended up with a working solution. While this method resolved my initial problem, the setup process is rather complex and requires that you have access to a server.
Ultimately, it just wasn’t an efficient method. Furthermore, it was nearly impossible to easily share with others (especially those outside of UpBuild).
In November 2018, two months after I published this method, Google released version 5 of the PageSpeed Insights API. V5 now uses Lighthouse as its analysis engine and also incorporates field data provided by the Chrome User Experience Report (CrUX). In short, this version of the API now easily provides all of the data that is provided in the Chrome Lighthouse audits.
So I went back to the drawing board, and I’m happy to announce that there is now an easier, automated method to produce Lighthouse reports en masse using Google Sheets and the PageSpeed Insights API v5.
Introducing the Automated PageSpeed Insights Report:
With this tool, we are able to quickly uncover key performance metrics for multiple URLs with just a couple of clicks.
In the Google APIs Console, choose the API key option from the ‘Create credentials’ dropdown (as shown):
You should now see a prompt providing you with a unique API key:
Next, simply copy and paste that API key into the section shown below found on the “Settings” tab of the Automated Pagespeed Insights spreadsheet.
Now that you have an API key, you are ready to use the tool.
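Under the hood, the spreadsheet’s requests boil down to a single GET per URL against the v5 endpoint. If you’d rather script this outside of Sheets, a minimal Python equivalent might look like this (the `strategy` parameter defaulting to mobile is my assumption, not a setting taken from the tool):

```python
import json
import urllib.parse
import urllib.request

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_request_url(page_url, api_key, strategy="mobile"):
    """Build the PageSpeed Insights v5 request URL for one page."""
    params = urllib.parse.urlencode(
        {"url": page_url, "key": api_key, "strategy": strategy}
    )
    return f"{PSI_ENDPOINT}?{params}"

def run_pagespeed(page_url, api_key):
    """Fetch the full Lighthouse result (JSON) for one URL."""
    with urllib.request.urlopen(psi_request_url(page_url, api_key)) as resp:
        return json.load(resp)
```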
Setting the report schedule
On the Settings tab, you can schedule which day and time that the report should start running each week. As you can see from this screenshot below, we have set this report to begin every Wednesday at 8:00 am. This will be set to the local time as defined by your Google account.
As you can see, this setting also assigns the report to run for the following three hours on the same day. This is a workaround to the limitations set by both Google Apps Script and the Google PageSpeed API.
Limitations
Our Google Sheet uses a Google Apps Script to run all the magic behind the scenes. Each time the report runs, Google Apps Script enforces a six-minute execution time limit (thirty minutes for G Suite Business / Enterprise / Education and Early Access users).
In six minutes, you should be able to extract PageSpeed Insights data for around 30 URLs. Once that limit is hit, you’ll be met with the following message:
In order to continue running the function for the rest of the URLs, we simply need to schedule the report to run again. That is why this setting will run the report again three more times in the consecutive hours, picking up exactly where it left off.
The next hurdle is the limitation set by Google Sheets itself.
If you’re doing the math, you’ll see that since we can only automate the report a total of four times, we’ll theoretically only be able to pull PageSpeed Insights data for around 120 URLs. That’s not ideal if you’re working with a site that has more than a few hundred pages!
The schedule function in the Settings tab uses the Google Sheet’s built-in Triggers feature. This tells our Google Apps script to run the report automatically at a particular day and time. Unfortunately, using this feature more than four times causes the “Service using too much computer time for one day” message.
This means that our Google Apps Script has exceeded the total allowable execution time for one day. It most commonly occurs for scripts that run on a trigger, which have a lower daily limit than scripts executed manually.
Manually?
You betcha! If you have more than 120 URLs that you want to pull data for, then you can simply use the Manual Push Report button. It does exactly what you think.
Manual Push Report
Once clicked, the ‘Manual Push Report’ button (linked from the PageSpeed Menu on the Google Sheet) will run the report. It will pick up right where it left off with data populating in the fields adjacent to your URLs in the Results tab.
For clarity, you don’t even need to schedule the report to run to use this document. Once you have your API key, all you need to do is add your URLs to the Results tab (starting in cell B6) and click ‘Manual Push Report’.
You will, of course, be met with the inevitable “Exceed maximum execution time” message after six minutes, but you can simply dismiss it, and click “Manual Push Report” again and again until you’re finished. It’s not fully automated, but it should allow you to gather the data you need relatively quickly.
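Conceptually, the resume behavior works the same whether a run is scheduled or manual: each run skips the URLs already marked complete and processes the next batch of roughly 30. A sketch in Python, assuming a url-to-status map shaped like the Status column:

```python
def next_batch(urls, results, batch_size=30):
    """Pick the next slice of URLs to process in one six-minute run.

    `results` is assumed to map url -> status ("complete" when done),
    mirroring the Status column in the Results tab; each run resumes
    exactly where the previous one left off.
    """
    pending = [u for u in urls if results.get(u) != "complete"]
    return pending[:batch_size]
```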
Setting the log schedule
Another feature in the Settings tab is the Log Results function.
This will automatically take the data that has populated in the Results tab and move it to the Log sheet. Once it has copied over the results, it will automatically clear the populated data from the Results tab so that when the next scheduled report run time arrives, it can gather new data accordingly. Ideally, you would want to set the Log day and time after the scheduled report has run to ensure that it has time to capture and log all of the data.
You can also manually push data to the Log sheet using the ‘Manual Push Log’ button in the menu.
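The logging step itself is simple to reason about: completed rows move to the Log, and the Results tab keeps only unfinished rows. A sketch of that logic, assuming rows are represented as dicts (the real sheet works on spreadsheet ranges, of course):

```python
def move_completed(results_rows, log_rows):
    """Move rows whose Status is "complete" from the Results tab to the Log.

    Rows are assumed to look like {"Landing Page": url, "Status": "complete", ...}.
    Returns the cleared Results rows and the grown Log, mimicking runLog.
    """
    completed = [r for r in results_rows if r.get("Status") == "complete"]
    remaining = [r for r in results_rows if r.get("Status") != "complete"]
    return remaining, log_rows + completed
```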
How to confirm and adjust the report and log schedules
Once you’re happy with the scheduling for the report and the log, be sure to set it using the ‘Set Report and Log Schedule’ from the PageSpeed Menu (as shown):
Should you want to change the frequency, I’d recommend first setting the report and log schedule using the sheet.
runLog controls when the data will be sent to the LOG sheet.
runTool controls when the API runs for each URL.
Simply click the pencil icon next to each respective function and adjust the timings as you see fit.
You can also use the ‘Reset Schedule’ button in the PageSpeed Menu (next to Help) to clear all scheduled triggers. This can be a helpful shortcut if you’re simply using the interface on the ‘Settings’ tab.
PageSpeed results tab
This tab is where the PageSpeed Insights data will be generated for each URL you provide. All you need to do is add a list of URLs starting from cell B6. You can either wait for your scheduled report time to arrive or use the ‘Manual Push Report’ button.
You should now see the following data generating for each respective URL:
Time to Interactive
First Contentful Paint
First Meaningful Paint
Time to First Byte
Speed Index
You will also see a column for Last Time Report Ran and Status on this tab. This will tell you when the data was gathered, and if the pull request was successful. A successful API request will show a status of “complete” in the Status column.
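Each of those metrics maps to a Lighthouse audit in the API response. If you want to extract the same five values yourself, something like the following works against a v5 response (the audit ids are Lighthouse’s; note that newer Lighthouse versions rename “time-to-first-byte” to “server-response-time”):

```python
# Lighthouse audit ids for the five metrics the Results tab reports.
AUDIT_IDS = {
    "Time to Interactive": "interactive",
    "First Contentful Paint": "first-contentful-paint",
    "First Meaningful Paint": "first-meaningful-paint",
    "Time to First Byte": "time-to-first-byte",
    "Speed Index": "speed-index",
}

def extract_metrics(psi_response):
    """Pull the five displayed metric values out of a PSI v5 JSON response."""
    audits = psi_response["lighthouseResult"]["audits"]
    return {
        name: audits.get(audit_id, {}).get("displayValue")
        for name, audit_id in AUDIT_IDS.items()
    }
```

Missing audits come back as `None`, which is a reasonable stand-in for an empty spreadsheet cell.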
Log tab
Logging the data is a useful way to keep a historical account of these important speed metrics. There is nothing to modify in this tab; however, you will want to ensure that there are plenty of empty rows. When the runLog function runs (which is controlled by the Log schedule you assign in the “Settings” tab, or via the Manual Push Log button in the menu), it will move all of the rows from the Results tab that contain a status of “complete”. If there are no empty rows available on the Log tab, it will simply not copy over any of the data. All you need to do is add several thousand rows, depending on how often you plan to check in and maintain the Log.
How to use the log data
The scheduling feature in this tool has been designed to run on a weekly basis to allow you enough time to review the results, optimize, then monitor your efforts. If you love spreadsheets then you can stop right here, but if you’re more of a visual person, then read on.
Visualizing the results in Google Data Studio
You can also use this Log sheet as a Data Source in Google Data Studio to visualize your results. As long as the Log sheet stays connected as a source, the results should automatically publish each week. This will allow you to work on performance optimization and evaluate results using Data Studio easily, as well as communicate performance issues and progress to clients who might not love spreadsheets as much as you do.
Blend your log data with other data sources
One great Google Data Studio feature is the ability to blend data. This allows you to compare and analyze data from multiple sources, as long as they have a common key. For example, if you wanted to blend the Time to Interactive results against Google Search Console data for those same URLs, you can easily do so. You will notice that the column in the Log tab containing the URLs is titled “Landing Page”. This is the same naming convention that Search Console uses and will allow Data Studio to connect the two sources.
There are several ways that you can use this data in Google Data Studio.
Compare your competitors’ performance
You don’t need to limit yourself to just your own URLs in this tool; you can use any set of URLs. This is a great way to compare your competitors’ pages and even see if there are any clear indicators of speed affecting positions in search results.
Improve usability
Don’t immediately assume that your content is the problem. Your visitors may not be leaving the page because they don’t find the content useful; it could be slow load times or other incompatibility issues that are driving visitors away. Compare bounce rates, time on site, and device type data alongside performance metrics to see if it could be a factor.
Increase organic visibility
Compare your performance data against Search ranking positions for your target keywords. Use a tool to gather your page positions, and fix performance issues for landing pages on page two or three of Google Search results to see if you can increase their prominence.
Final thoughts
This tool is all yours.
Make a copy and use it as is, or tear apart the Google Apps Script that makes this thing work and adapt it into something bigger and better (if you do, please let me know; I want to hear all about it).
Remember PageSpeed Insights API V5 now includes all of the same data that is provided in the Chrome Lighthouse audits, which means there are way more available details you can extract beyond the five metrics that this tool generates.
Hopefully, this tool helps you gather performance data a little more efficiently between now and when Google releases its recently announced Speed report for Search Console.
Even though link building has been a trade for more than a decade, it’s clear that there is still an enormous amount of confusion around it.
Every so often, there is a large kerfuffle. Some of these controversies and arguments arise simply from a necessity to fill a content void, but some of them arise from genuine concern and confusion:
“Don’t ask for links!”
“Stick a fork in it, guest posting is done!”
“Try to avoid link building!”
SEO is an ever-changing industry; what worked yesterday might not work today. Google’s representatives don’t always help the cause. In fact, they often add fuel to the fire. That’s why I want to play the role of “link building myth-buster” today. I’ve spent over ten years in link building, and I’ve seen it all.
I was around for Penguin, and every iteration since. I was around for the launch of Hummingbird. And I was even around for the Matt Cutts videos.
So, if you’re still confused about link building, read through to have ten of the biggest myths in the business dispelled.
1. If you build it, they will come
There is a notion among many digital marketers and SEOs that if you simply create great content and valuable resources, the users will come to you. If you’re already a widely-recognized brand/website, this can be a true statement. If, however, you are like the vast majority of websites — on the outside looking in — this could be a fatal mindset.
In order to get people to find you, you have to build the roads that will lead them to where you want. This is where link building comes in.
A majority of people searching Google end up clicking on organic results. In fact, for every click on a paid result in Google, there are 11.6 clicks to organic results!
And in order to build your rankings in search engines, you need links.
Which brings me to our second myth around links.
2. You don’t need links to rank
I can’t believe that there are still people who think this in 2019, but there are. That’s why I recently published a case study regarding a project I was working on.
To sum it up briefly, the more authoritative, relevant backlinks I was able to build, the higher the site ranked for its target keywords. This isn’t to say that links are the only factor in Google’s algorithm that matters, but there’s no doubt that a robust and relevant backlink profile goes a long way.
3. Only links with high domain authority matter
As a link builder, you should definitely seek target sites with high metrics. However, they aren’t the only prospects that should matter to you.
Sometimes a low domain authority (DA) might just be an indication that a site is new. But forget about the metrics for a moment. Along with authority, relevancy matters. If a link prospect is perfectly relevant to your website but has a low DA, you should still target it. In fact, the sites most relevant to yours will often lack eye-popping metrics precisely because they are so niche. More often than not, relevancy is more important than DA.
When you focus solely on metrics, you will lose out on highly relevant opportunities. A link that sends trust signals is more valuable than a link that has been deemed important by metrics devised by entities other than Google.
Another reason is that Google's algorithm looks for diversity in your backlink profile. You might think that a profile with over 100 links, all with a 90+ DA, would be the aspiration. In fact, Google will look at it as suspect. So while you should absolutely target high-DA sites, don't neglect the "little guys."
4. You need to build links to your money pages
When I say "money pages," I mean the pages where you are specifically looking to convert, whether it's users into leads or leads into sales.
You would think that if you’re going to put in the effort to build the digital highways that will lead traffic to your website, you would want all of that traffic to find these money pages, right?
In reality, though, you should take the exact opposite approach. First off, approaching sites that are in your niche and asking them to link to your money pages will come off as really spammy/aggressive. You’re shooting yourself in the foot.
But most importantly, these money pages are usually not pages that have the most valuable information. Webmasters are much more likely to link to a page with resourceful information or exquisitely created content, not a page displaying your products or services.
5. You have to create the best, most informative linkable asset
If you’re unfamiliar with what a linkable asset is exactly, it’s a page on your website designed specifically to attract links/social shares. Assets can come in many forms: resource pages, funny videos, games, etc.
Of course, linkable assets don’t grow on trees, and the process of coming up with an idea for a valuable linkable asset won’t be easy. This is why some people rely on “the skyscraper technique.” This is when you look at the linkable assets your competitors have created, you choose one, and you simply try to outdo it with something bigger and better.
This isn’t a completely ineffective technique, but you shouldn’t feel like you have to do this.
Linkable assets don’t need to be word-heavy “ultimate guides” or heavily-researched reports. Instead of building something that really only beats your competitor’s word count, do your own research and focus on building an authoritative resource that people in your niche will be interested in.
The value of a linkable asset has much more to do with finding the right angle and the accuracy of the information you’re providing than the amount.
6. The more emails you send, the more links you will get
I know several SEOs who like to cast a wide net — they send out emails to anyone and everyone with even the tiniest bit of relevancy or authority within their niche. It's an old sales principle: the idea that more conversations will lead to more purchases/conversions. And indeed, in sales, this is usually the case.
In link building? Not so much.
This is because, in link building, your chances of getting someone to link to you are increased when the outreach you send is more thoughtful/personalized. Webmasters pore over emails on top of emails on top of emails, so much so that it’s easy to pass over the generic ones.
They need to be effectively persuaded as to the value of linking to your site. If you choose to send emails to any site with a pulse, you won’t have time to create specific outreach for each valuable target site.
7. The only benefit of link building is algorithmic
As I mentioned earlier, links are fundamental to Google’s algorithm. The more quality backlinks you build, the more likely you are to rank for your target keywords in Google.
This is the modus operandi for link building. But it is not the only reason to build links. In fact, there are several non-algorithmic benefits which link building can provide.
First off, there’s brand visibility. Link building will make you visible not only to Google in the long term but to users in the immediate term. When a user comes upon a resource list with your link, they aren’t thinking about how it benefits your ranking in Google; they just might click your link right then and there.
Link building can also lead to relationship building. Because of link building’s very nature, you will end up conversing with many potential influencers and authority figures within your niche. These conversations don’t have to end as soon as they place your link.
In fact, if the conversations do end there every time, you’re doing marketing wrong. Take advantage of the fact that you have their attention and see what else you can do for each other.
8. You should only pursue exact match anchors
Not all myths are born out of complete and utter fiction. Some myths persist because they have an element of truth to them or they used to be true. The use of exact match anchor text is such a myth.
In the old days of SEO/link building, one of the best ways to get ahead was to use your target keywords/brand name as the anchor text for your backlinks. Keyword stuffing and cloaking were particularly effective as well.
But times have changed in SEO, and I would argue mostly for the better. When Google sees a backlink profile that uses only a couple of variations of anchor text, you are now open to a penalty. It’s now considered spammy. To Google, it does not look like a natural backlink profile.
As such, it’s important to note now that the quality of the link itself is far more important than the anchor text that comes with it.
It really should be out of your hands anyway. When you’re link building the right way, you are working in conjunction with the webmasters who are publishing your link. You do not have 100 percent control of the situation, and the webmaster will frequently end up using the anchor text of their choice.
So sure, use optimized anchor text for your internal links when possible, but keep in mind that it is best to have a diverse anchor text distribution.
9. Link building requires technical abilities
Along with being a link builder, I am also an employer. When hiring other link builders, one skepticism I frequently come across relates to technical skills. Many people who are unfamiliar with link building think that it requires coding or web development ability.
It doesn't. If you have the ability to effectively persuade, create valuable content, or identify trends, you can build links.
10. All follow links provide equal value
Not all links are created equal, and I'm not even talking about the difference between follow links and no-follow links. Indeed, there are distinctions to be made among follow links themselves.
Let’s take .edu links, for example. These links are some of the most sought after for link builders, as they are thought to carry inordinate power. Let’s say you have two links from the same .edu website. They are both on the same domain, same authority, but they are on different pages. One is on the scholarship page, the other is on a professor’s resource page which has been carefully curated.
They are both do-follow links, so naturally, they should both carry the same weight, right?
Wrong. Search engines are smart enough to know the difference between a hard-earned link and a link that just about anyone can submit.
Along with this, the placement of a link on a page matters. Even if two links are on the exact same page (not just the same domain), a link that is above the fold (a link you can see without scrolling) will carry more weight.
Conclusion
Link building and SEO are not rocket science. There’s a lot of confusion out there, thanks mainly to the fact that Google’s standards change rapidly and old habits die hard, and the answers and strategies you seek aren’t always obvious.
That said, the above points are some of the biggest and most pervasive myths in the industry. Hopefully, I was able to clear them up for you.
EAT — also known as Expertise, Authoritativeness, and Trustworthiness — is a big deal when it comes to Google’s algorithms. But what exactly does this acronym entail, and why does it matter to your everyday work? In this bite-sized version of her full MozCon 2019 presentation, Marie Haynes describes exactly what E-A-T means and how it could have a make-or-break effect on your site.
Click on the whiteboard image above to open a high-resolution version in a new tab!
Video Transcription
Hey, Moz fans. My name is Marie Haynes, from Marie Haynes Consulting, and I’m going to talk to you today about EAT and the Quality Raters’ Guidelines. By now, you’ve probably heard of EAT. It’s a bit of a buzzword in SEO. I’m going to share with you why EAT is a big part of Google’s algorithms, how we can take advantage of this news, and also why it’s really, really important to all of us.
The Quality Raters’ Guidelines
Let’s talk about the Quality Raters’ Guidelines. These guidelines are a document that Google has provided to this whole army of quality raters. There are apparently 16,000 quality raters, and what they do is they use this document, the Quality Raters’ Guidelines, to determine whether websites are high quality or not.
Now, the quality raters do not have the power to put a penalty on your website. They actually have no direct bearing on rankings. Instead, they feed information back to Google's engineers, and Google's engineers can take that information and determine whether their algorithms are doing what they want them to do. Ben Gomes, the Vice President of Search at Google, said in a recent interview with CNBC that the information in the guidelines is fundamentally what Google wants the algorithm to do.
“They fundamentally show us what the algorithm should do.” – Ben Gomes, VP Search, Google
So we believe that if something is in the Quality Raters’ Guidelines, either Google is already measuring this algorithmically, or they want to be measuring it, and so we should be paying close attention to everything that is in there.
How Google fights disinformation
There was a guide that was produced by Google earlier, in February of 2019, and it was a whole guide on how they fight disinformation, how they fight fake news, how they make it so that high-quality results are appearing in the search results.
There were a couple of things in here that were really interesting.
1. Information from the quality raters allows them to build algorithms
The guide talked about the fact that they take the information from the quality raters and that allows them to build algorithms. So we know that it’s really important that the things that the quality raters are assessing are things that we probably should be paying attention to as well.
2. Ranking systems are designed to ID sites with high expertise, authoritativeness, and trustworthiness
The thing that was the most important to me or the most interesting, at least, is this line that said our ranking systems are designed to identify sites with a high indicia of EAT, of expertise, authoritativeness, and trustworthiness.
So whether or not EAT is technically a "ranking factor" is semantics. What we really need to know is that EAT is really important in Google's algorithms. "Your money or your life" (YMYL) refers to any page that helps people make important decisions in their lives or helps people part with money. We believe that if you're trying to rank for any term that really matters to people, you need to pay attention to EAT, because Google doesn't want to rank websites for important queries if those sites are lacking EAT.
The three parts of E-A-T
So it’s important to know that EAT has three parts, and a lot of people get hung up on just expertise. I see a lot of people come to me and say, “But I’m a doctor, and I don’t rank well.” Well, there are more parts to EAT than just expertise, and so we’re going to talk about that.
1. Expertise
But expertise is very important. It's covered in the Quality Raters' Guidelines, which, if you have not read yet, you really, really should.
It’s a little bit long, but it’s full of so much good information. The raters are given examples of websites, and they’re told, “This is a high-quality website. This is a low-quality website because of this.” One of the things that they say for one of the posts is this particular page is to be considered low quality because the expertise of the author is not clearly communicated.
Add author bios
So the first clue we can gather from this is that for all of our authors we should have an author bio. Perhaps if you are a nationally recognized brand, then you may not need author bios. But for the rest of us, we really should be putting an author bio that says here’s who wrote this post, and here’s why they’re qualified to do so.
Another example in the Quality Raters' Guidelines was a post about the flu. The quality raters were told that there is no evidence this author has medical expertise. There are other examples citing a lack of financial expertise or legal expertise. Think about it.
If you were diagnosed with a medical condition, would you want to be reading an article that’s written by a content writer who’s done good research? It might be very well written. Or would you rather see an article that is written by somebody who has been practicing in this area for decades and has seen every type of side effect that you can have from medications and things like that?
Hire experts to fact-check your content
Obviously, the doctor is who you want to read. Now I don’t expect us all to go and hire doctors to write all of our content, because there are very few doctors that have time to do that and also the other experts in any other YMYL profession. But what you can do is hire these people to fact check your posts. We’ve had some clients that have seen really nice results from having content writers write the posts in a very well researched and referenced way, and then they’ve hired physicians to say this post was medically fact checked by Dr. So-and-so. So this is really, really important for any type of site that wants to rank for a YMYL query.
One of the things that we started noticing, in February of 2017, we had a number of sites that came to us with traffic drops. That’s mostly what we do. We deal with sites that were hit by Google algorithm updates. What we were noticing is a weird thing was happening.
Prior to that, sites that were hit, they tended to have all sorts of technical issues, and we could say, “Yes, there’s a really strong reason why this site is not ranking well.” These sites were all ones that were technically, for the most part, sound. But what we noticed is that, in every instance, the posts that were now stealing the rankings they used to have were ones that were written by people with real-life expertise.
This is not something that you want to ignore.
2. Authoritativeness
We’ll move on to authoritativeness. Authoritativeness is really very, very important, and in my opinion this is the most important part of EAT. Authoritativeness, there’s another reference in the Quality Raters’ Guidelines about a good post, and it says, “The author of this blog post has been known as an expert on parenting issues.”
So it’s one thing to actually be an expert. It’s another thing to be recognized online as an expert, and this should be what we’re all working on is to have other people online recognize us or our clients as experts in their subject matter. That sounds a lot like link building, right? We want to get links from authoritative sites.
The guide to this information actually tells us that PageRank and EAT are closely connected. So this is very, very important. I personally believe — I can’t prove this just yet — but I believe that Google does not want to pass PageRank through sites that do not have EAT, at least for YMYL queries. This could explain why Google feels really comfortable that they can ignore spam links from negative SEO attacks, because those links would come from sites that don’t have EAT.
Get recommendations from experts
So how do we do this? It’s all about getting recommendations from experts. The Quality Raters’ Guidelines say in several places the raters are instructed to determine what do other experts say about this website, about this author, about this brand. It’s very, very important that we can get recommendations from experts. I want to challenge you right now to look at the last few links that you have gotten for your website and look at them and say, “Are these truly recommendations from other people in the industry that I’m working in? Or are they ones that we made?”
In the past, pretty much every link that we could make would have the potential to help boost our rankings. Now, the links that Google wants to count are ones that truly are people recommending your content, your business, your author. So I did a Whiteboard Friday a couple of years ago that talked about the types of links that Google might want to value, and that’s probably a good reference to find how can we find these recommendations from experts.
How can we do link building in a way that boosts our authoritativeness in the eyes of Google?
3. Trustworthiness
The last part, which a lot of people ignore, is trustworthiness. People might say, "Well, how could Google ever measure whether a website is trustworthy?" I think it's definitely possible. Google has a patent for this. Now, we know that just because there's a patent, they're not necessarily doing this.
Reputation via reviews, blog posts, & other online content
But they do have a patent that talks about how they can gather information about a brand, about an individual, about a website from looking at a corpus of reviews, blog posts, and other things that are online. What this patent talks about is looking at the sentiment of these blog posts. Now some people would argue that maybe sentiment is not a part of Google’s algorithms.
I do think it’s a part of how they determine trustworthiness. So what we’re looking for here is if a business really has a bad reputation, if you have a reputation where people online are saying, “Look, I got scammed by this company.” Or, “I couldn’t get a refund.” Or, “I was treated really poorly in terms of customer service.” If there is a general sentiment about this online, that can affect your ability to rank well, and that’s very important. So all of these things are important in terms of trustworthiness.
Credible, clear contact info on website
You really should have very credible and clear contact information on your website. That’s outlined in the Quality Raters’ Guidelines.
Indexable, easy-to-find info on refund policies
You should have information on your refund policy, assuming that you sell products, and it should be easy for people to find. All of this information I believe should be visible in Google’s index.
We shouldn’t be no indexing these posts. Don’t worry about the fact that they might be kind of thin or irrelevant or perhaps even duplicate content. Google wants to see this, and so we want that to be in their algorithms.
Scientific references & scientific consensus
Other things too, if you have a medical site or any type of site that can be supported with scientific references, it’s very important that you do that.
One of the things that we’ve been seeing with recent updates is a lot of medical sites are dropping when they’re not really in line with scientific consensus. This is a big one. If you run a site that has to do with natural medicine, this is probably a rough time for you, because Google has been demoting sites that talk about a lot of natural medicine treatments, and the reason for this, I think, is because a lot of these are not in line with the general scientific consensus.
Now, I know a lot of people would say, “Well, who is Google to determine whether essential oils are helpful or not, because I believe a lot of these natural treatments really do help people?” The problem though is that there are a lot of websites that are scamming people. So Google may even err on the side of caution in saying, “Look, we think this website could potentially impact the safety of users.”
You may have trouble ranking well. So if you have posts on natural medicine, on any type of thing that’s outside of the generally accepted scientific consensus, then one thing you can do is try to show both sides of the story, try to talk about how actually traditional physicians would treat this condition.
That can be tricky.
Ad experience
The other thing that can speak to trust is your ad experience. I think this is something that’s not actually in the algorithms just yet. I think it’s going to come. Perhaps it is. But the Quality Raters’ Guidelines talk a lot about if you have ads that are distracting, that are disruptive, that block the readers from seeing content, then that can be a sign of low trustworthiness.
“If any of Expertise, Authoritativeness, or Trustworthiness is lacking, use the ‘low’ rating.”
I want to leave you with this last quote, again from the Quality Raters' Guidelines, and this is significant. The raters are instructed that if any one of expertise, authoritativeness, or trustworthiness is lacking, then they are to rate a website as low quality. Again, that's not going to penalize that website. But it's going to tell the Google engineers, "Wait a second. We have these low-quality websites that are ranking for these terms. How can we tweak the algorithm so that that doesn't happen?"
But the important thing here is that if any one of these three things, the E, the A, or the T are lacking, it can impact your ability to rank well. So hopefully this has been helpful. I really hope that this helps you improve the quality of your websites. I would encourage you to leave a comment or a question below. I’m going to be hanging out in the comments section and answering all of your questions.
I have more information on these subjects at mariehaynes.com/eat and also /trust if you’re interested in these trust issues. So with that, I want to thank you. I really wish you the best of luck with your rankings, and please do leave a question for me below.
Feeling like you need a better understanding of E-A-T and the Quality Raters’ Guidelines? You can get even more info from Marie’s full MozCon 2019 talk in our newly released video bundle. Go even more in-depth on what drives rankings, plus access 26 additional future-focused SEO topics from our top-notch speakers:
Invest in a bag of popcorn and get your whole team on board to learn!
There are times when your digital marketing agency will find itself serving a local business with a need for which Google has made no apparent provisions. Unavailable categories for unusual businesses come instantly to mind, but scenarios can be more complex than this.
Client workflows can bog down as you worry over what to do, fearful of making a wrong move that could get a client’s listing suspended or adversely affect its rankings or traffic. If your agency has many employees, an entry-level SEO could be silently stuck on an issue, or even doing the wrong thing because they don’t know how or where to ask the right questions.
The best solution I know of consists of a combination of:
Client contracts that are radically honest about the nature of Google
Client management that sets correct expectations about the nature of Google
A documented process for seeking clarity when unusual client scenarios arise
Agency openness to experimentation, failure, and on-going learning
Regular monitoring for new Google developments and changes
A bit of grit
Let’s put the fear of often-murky, sometimes-unwieldy Google on the back burner for a few minutes and create a proactive process your team can use when hitting what feels like a procedural dead end on the highways and byways of local search.
The apartment office conundrum
As a real-world example of a GMB dead end, a few months ago, I was asked a question about on-site offices for apartment complexes. The details:
Google doesn’t permit the creation of listings for rental properties but does allow such properties to be listed if they have an on-site office, as many apartment complexes do.
Google’s clearest category for this model is “apartment complex”, but the brand in question was told by Google (at the time) that if they chose that category, they couldn’t display their hours of operation.
This led the brand I was advising to wonder if they should use “apartment rental agency” as their category because it does display hours. They didn’t want to inconvenience the public by having them arrive at a closed office after hours, but at the same time, they didn’t want to misrepresent their category.
Now that’s a conundrum!
When I was asked to provide some guidance to this brand, I went through my own process of trying to get at the heart of the matter. In this post, I’m going to document this process for your agency as fully as I can to ensure that everyone on your team has a clear workflow when puzzling local SEO scenarios arise.
I hope you’ll share this article with everyone remotely involved in marketing your clients, and that it will prevent costly missteps, save time, move work forward, and support success.
Step 1: Radical honesty sets the stage right
Whether you’re writing a client contract, holding a client onboarding meeting, or having an internal brand discussion about local search marketing, setting correct expectations is the best defense against future disappointments and disputes. Company leadership must task itself with letting all parties know:
Google has a near-monopoly on search. As such, they can do almost anything they feel will profit them. This means that they can alter SERPs, change guidelines, roll out penalties and filters, monetize whatever they like, and fail to provide adequate support to the public that makes up and interacts with the medium of their product. There is no guarantee any SEO can offer about rankings, traffic, or conversions. Things can change overnight. That’s just how it is.
While Google’s monopoly enables them to be whimsical, brands and agencies do not have the same leeway if they wish to avoid negative outcomes. There are known practices which Google has confirmed as contrary to their vision of search (buying links, building listings for non-existent locations, etc.). Client and agency agree not to knowingly violate these guidelines.
Don’t accept work under any other conditions than that all parties understand Google’s power, unpredictability, and documented guidelines. Don’t work with clients, agencies, software providers, or others that violate guidelines. These basic rules set the stage for both client and agency success.
Step 2: Confirm that the problem really exists
When a business believes it is encountering an unusual local search marketing problem, the first task of the agency staffer is to vet the issue. The truth is, clients sometimes perceive problems that don’t really exist. In my case of the apartment complex, I took the following steps.
I confirmed the problem. I observed the lacking display of hours of operation on GMB listings using the “apartment complex” category.
I called half a dozen nearby apartment complex offices and asked if they were open by appointment only or 24/7. None of them were. At least in my corner of the world, apartment complex offices have set daily business hours, just like retail, opening in the AM and closing in the PM each day.
I did a number of Google searches for “apartment rental agency” and all of the results Google brought up were for companies that manage rentals city-wide — not rentals of units within a single complex.
So, I was now convinced that the business was right: they were encountering a real dead end. If they categorized themselves as an “apartment complex”, their missing hours could inconvenience customers. If they chose the “apartment rental agency” designation to get hours to display, they could end up fielding needless calls from people looking for city-wide rental listings. The category would also fail to be strictly accurate.
As an agency worker, be sure you’ve taken common-sense steps to confirm that a client’s problem is, indeed, real before you move on to next steps.
Step 3: Search for a similar scenario
As a considerate agency SEO, avoid wasting the time of project leads, managers, or company leadership by first seeing if the Internet holds a ready answer to your puzzle. Even if a problem seems unusual, there’s a good chance that somebody else has already encountered it, and may even have documented it. Before you declare a challenge to be a total dead-end, search the following resources in the following order:
Do a direct search in Google with the most explicit language you can (e.g. “GMB listing showing wrong photo”, “GMB description for wrong business”, “GMB owner responses not showing”). Click on anything that looks like it might contain an answer, look at the date on the entry, and see what you can learn. Document what you see.
Go to the Google My Business Help Community forum and search with a variety of phrases for your issue. Again, note the dates of responses to gauge the currency of advice. Be aware that not all contributors are experts. Look for thread responses from people labeled Gold Product Expert; these members have earned special recognition for the amount and quality of what they contribute to the forum. Some of these experts are widely recognized, world-class local SEOs. Document what you learn, even if it means noting down “No solution found”.
Often, a peculiar local search issue may be the result of a Google change, update, or bug. Check MozCast to see if the SERPs are undergoing turbulent weather, and review Sterling Sky’s Timeline of Local SEO Changes. If the dates of a surfaced issue correspond with something appearing on these platforms, you may have found your answer. Document what you learn.
Check trusted blogs to see if industry experts have written about your issue. The nice thing about blogs is that, if they accept comments, you can often get a direct response from the author if something they’ve penned needs further clarification. For a big list of resources, see: Follow the Local SEO Leaders: A Guide to Our Industry’s Best Publications. Document what you learn.
If none of these tactics yields a solution, move on to the next step.
Step 4: Speak up for support
If you’ve not yet arrived at an answer, it’s time to reach out. Take these steps, in this order:
1) Each agency has a different hierarchy. Now is the time to reach out to the appropriate expert at your business, whether that’s your manager or a senior-level local search expert. Clearly explain the issue and share your documentation of what you’ve learned/failed to learn. See if they can provide an answer.
2) If leadership doesn’t know how to solve the issue, request permission to take it directly to Google in private. You have a variety of options for doing so, including:
In the case of the apartment complex, I chose to reach out via Twitter. Responses can take a couple of days, but I wasn’t in a hurry. They replied:
As I had suspected, Google was treating apartment complexes like hotels. Not very satisfactory since the business models are quite different, but at least it was an answer I could document. I’d hit something of a dead-end, but it was interesting to consider Google’s advice about using the description field to list hours of operation. Not a great solution, but at least I would have something to offer the client, right from the horse’s mouth.
In your case, be advised that not all Google reps have the same level of product training. Hopefully, you will receive some direct guidance on the issue if you describe it well and can document Google’s response and act on it. If not, keep moving.
3) If Google doesn’t respond, responds inexpertly, or doesn’t solve your problem, go back to your senior-level person. Explain what happened and request advice on how to proceed.
4) If the senior staffer still isn’t certain, request permission to publicly discuss the issue (and the client). Head to supportive fora. If you’re a Moz Pro customer, feel free to post your scenario in the Moz Q&A forum. If you’re not yet a customer, head to the Local Search Forum, which is free. Share a summary of the challenge, your failure to find a solution, and ask the community what they would do, given that you appear to be at a dead end. Document the advice you receive, and evaluate it based on the expertise of respondents.
Step 5: Make a strategic decision
At this point in your workflow, you’ve now:
Confirmed the issue
Searched for documented solutions
Looked to leadership for support
Looked to Google for support
Looked to the local SEO industry for support
I’m hoping you’ve arrived at a strategy for your client’s scenario by now, but if not, you have 3 things left to do.
Take your entire documentation back to your team/company leader. Ask them to work with you on an approved response to the client.
Take that response to the client, with a full explanation of any limitations you encountered and a description of what actions your agency wants to take. Book time for a thorough discussion. If what you are doing is experimental, be totally transparent about this with the client.
If the client agrees to the strategy, enact it.
In the case of the apartment complex, there were several options I could have brought to the client. One thing I did recommend is that they do an internal assessment of how great the risk really was of the public being inconvenienced by absent hours.
How many people did they estimate would stop by after 5 PM in a given month and find the office closed? Would that be 1 person a month? 20 people? Did the convenience of these people outweigh risks of incorrectly categorizing the complex as an “apartment rental agency”? How many erroneous phone calls or walk-ins might that lead to? How big of a pain would that be?
Determining these things would help the client decide whether to just go with Google’s advice of keeping the accurate category and using the description to publish hours, or, to take some risks by miscategorizing the business. I was in favor of the former, but be sure your client has input in the final decision.
And that brings us to the final step — one your agency must be sure you don’t overlook.
Step 6: Monitor from here on out
In many instances, you’ll find a solution that should be all set to go, with no future worries. But, where you run into dead-end scenarios like the apartment complex case and are having to cobble together a workaround to move forward, do these two things:
Monitor outcomes of your implementation over the coming months. Traffic drops, ranking drops, or other sudden changes require a re-evaluation of the strategy you selected. This is why it is so critical to document everything and to be transparent with the client about Google’s unpredictability and the limitations of local SEOs.
Monitor Google for changes. Today’s dead end could be tomorrow’s open road.
This second point is particularly applicable to the apartment complex I was advising. About a month after I’d first looked at their issue, Google made a major change. All of a sudden, they began showing hours for the “apartment complex” category!
If I’d stopped paying attention to the issue, I’d never have noticed this game-changing alteration. When I did see hours appearing on these listings, I confirmed the development with apartment marketing expert Diogo Ordacowski:
Moral: be sure you are continuing to keep tabs on any particularly aggravating dead ends in case solutions emerge in future. It’s a happy day when you can tell a client their worries are over. What a great proof of the engagement level of your agency’s staff!
It’s totally okay if that question occurs to you sometimes when marketing local businesses. There’s a lot on the line — it’s true! The livelihoods of your clients are a sacred trust. The credibility that your agency is building matters.
But, fear not. Unless you flagrantly break guidelines, a dose of grit can take you far when dealing with a product like Google My Business which is, itself, an experiment. Sometimes, you just have to make a decision about how to move forward. If you make a mistake, chances are good you can correct it. When a dead end with no clear egress forces you to test out solutions, you’re just doing your job.
So, be transparent and communicative, be methodical and thorough in your research, and be a bit bold. Remember, your clients don’t just count on you to churn out rote work. In Google’s increasingly walled garden, the agency that can see over the walls when necessity calls is the one bringing extra value.
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
Google shook up the SEO world by announcing big changes to how publishers should mark nofollow links. The changes — while beneficial to help Google understand the web — nonetheless caused confusion and raised a number of questions. We’ve got the answers to many of your questions here.
14 years after its introduction, Google today announced significant changes to how they treat the “nofollow” link attribute. The big points:
Nofollow can now be specified with 3 different attributes — “nofollow”, “sponsored”, and “ugc” — each signifying a different meaning.
For ranking purposes, Google now treats each of the nofollow attributes as “hints” — meaning they likely won’t impact ranking, but Google may choose to ignore the directive and use nofollow links for rankings.
Google continues to ignore nofollow links for crawling and indexing purposes, but this strict behavior changes March 1, 2020, at which point Google begins treating nofollow attributes as “hints”, meaning they may choose to crawl them.
You can use the new attributes in combination with each other. For example, rel="nofollow sponsored ugc" is valid.
Paid links must either use the nofollow or sponsored attribute (either alone or in combination.) Simply using “ugc” on paid links could presumably lead to a penalty.
Publishers don’t have to do anything. Google offers no incentive for changing, or punishment for not changing.
Publishers using nofollow to control crawling may need to reconsider their strategy.
Why did Google change nofollow?
Google wants to take back the link graph.
Google introduced the nofollow attribute in 2005 as a way for publishers to address comment spam and shady links from user-generated content (UGC). Linking to spam or low-quality sites could hurt you, and nofollow offered publishers a way to protect themselves.
Google also required nofollow for paid or sponsored links. If you were caught accepting anything of value in exchange for linking out without the nofollow attribute, Google could penalize you.
The system generally worked, but huge portions of the web—sites like Forbes and Wikipedia—applied nofollow across their entire site for fear of being penalized, or not being able to properly police UGC.
This made entire portions of the link graph less useful for Google. Should curated links from trusted Wikipedia contributors really not count? Perhaps Google could better understand the web if they changed how they consider nofollow links.
By treating nofollow attributes as “hints”, they allow themselves to better incorporate these signals into their algorithms.
Hopefully, this is a positive step for deserving content creators, as a broader swath of the link graph opens up to more potential ranking influence. (Though for most sites, it doesn’t seem much will change.)
What is the ranking impact of nofollow links?
Prior to today, SEOs generally believed nofollow links worked like this:
Not used for crawling and indexing (Google didn’t follow them.)
Might be used for ranking, though the observed effect was typically small or nonexistent
To be fair, there’s a lot of debate and speculation around the second statement, and Google has been opaque on the issue. Experimental data and anecdotal evidence suggest Google has long considered nofollow links as a potential ranking signal.
As of today, Google’s guidance states nofollowed attributes—including sponsored and ugc—are treated like this:
Still not used for crawling and indexing (see the changes taking place in the future below)
For ranking purposes, all nofollow directives are now officially a “hint” — meaning Google may choose to ignore it and use it for ranking purposes. Many SEOs believe this is how Google has been treating nofollow for quite some time.
Beginning March 1, 2020, nofollow attributes will be treated as hints across the board, meaning:
In some cases, they may be used for crawling and indexing
In some cases, they may be used for ranking
Emphasis on the word “some.” Google is very explicit that in most cases they will continue to ignore nofollow links as usual.
Do publishers need to make changes?
For most sites, the answer is no — only if they want to. Google isn’t requiring sites to make changes, and as of yet, there is no business case to be made.
That said, there are a couple of cases where site owners may want to implement the new attributes:
Sites that want to help Google better understand the sites they—or their contributors—are linking to. For example, it could be to everyone’s benefit for sites like Wikipedia to adopt these changes. Or maybe Moz could change how it marks up links in the user-generated Q&A section (which often links to high-quality sources.)
Sites that use nofollow for crawl control. For sites with large faceted navigation, nofollow is sometimes an effective tool at preventing Google from wasting crawl budget. It’s too early to tell if publishers using nofollow this way will need to change anything before Google starts treating nofollow as a crawling “hint” but it may be important to pay attention to.
To be clear, if a site is properly using nofollow today, SEOs do not need to recommend any changes be made. Though sites are free to do so, they should not expect any rankings boost for doing so, or new penalties for not changing.
That said, Google’s use of nofollow may evolve, and it will be interesting to see in the future—through study and analysis—if a ranking benefit does emerge from using nofollow attributes in a certain way.
Which nofollow attribute should you use?
If you choose to change your nofollow links to be more specific, Google’s guidelines are very clear, so we won’t repeat them in-depth here. In brief, your choices are:
rel="sponsored" – For paid or sponsored links. This would presumably include affiliate links, although Google hasn’t explicitly said so.
rel="ugc" – Links within all user-generated content. Google has stated that if UGC is created by a trusted contributor, this may not be necessary.
rel="nofollow" – A catchall for all nofollow links. As with the other nofollow directives, these links generally won’t be used for ranking, crawling, or indexing purposes.
Additionally, attributes can be used in combination with one another. This means a declaration such as rel="nofollow sponsored" is 100% valid.
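As a quick illustration of these rules, here is a hypothetical helper (not part of any Google tooling) that checks a rel value against the combinations described above. The three attribute names come from Google's announcement, but the function itself is just a sketch:

```python
# Google's rules, as announced: the three values may be combined freely,
# but a paid/sponsored link must carry "sponsored" or "nofollow";
# "ugc" alone is not sufficient for paid links.
NOFOLLOW_VALUES = {"nofollow", "sponsored", "ugc"}

def is_valid_rel(rel, is_paid=False):
    """Check a rel attribute string against the nofollow rules above."""
    values = set(rel.split())
    # Must be non-empty and contain only the three nofollow variants
    # (this sketch deliberately ignores other link types like noopener).
    if not values or not values <= NOFOLLOW_VALUES:
        return False
    if is_paid and not values & {"sponsored", "nofollow"}:
        return False  # "ugc" alone could presumably lead to a penalty
    return True
```

Note that real rel attributes can also carry unrelated link types such as noopener; a production check would need to allow those alongside the three nofollow variants.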
Can you be penalized for not marking paid links?
Yes, you can still be penalized, and this is where it gets tricky.
Google advises to mark up paid/sponsored links with either “sponsored” or “nofollow” only, but not “ugc”.
This adds an extra layer of confusion. What if your UGC contributors are including paid or affiliate links in their content/comments? Google, so far, hasn’t been clear on this.
For this reason, we will likely see publishers continue to mark up UGC content with “nofollow” as a default, or possibly “nofollow ugc”.
Can you use the nofollow attributes to control crawling and indexing?
Nofollow has always been a very, very poor way to prevent Google from indexing your content, and it continues to be that way.
If you want to prevent Google from indexing your content, it’s recommended to use one of several other methods, most typically some form of “noindex”.
Crawling, on the other hand, is a slightly different story. Many SEOs use nofollow on large sites to preserve crawl budget, or to prevent Google from crawling unnecessary pages within faceted navigation.
Based on Google statements, it seems you can still attempt to use the nofollow attributes in this way, but after March 1, 2020, they may choose to ignore this. Any SEO using nofollow in this way may need to get creative in order to prevent Google from crawling unwanted sections of their sites.
Final thoughts: Should you implement the new nofollow attributes?
While there is no obvious compelling reason to do so, this is a decision every SEO will have to make for themselves.
Given the initial confusion and lack of clear benefits, many publishers will undoubtedly wait until we have better information.
That said, it certainly shouldn’t hurt to make the change (as long as you mark paid links appropriately with "nofollow" or "sponsored"). For example, the Moz Blog may someday change comment links below to rel="ugc", or more likely rel="nofollow ugc".
Finally, will anyone actually use the “sponsored” attribute, at the risk of giving more exposure to paid links? Time will tell.
What are your thoughts on Google’s new nofollow attributes? Let us know in the comments below.
Click-through rate (CTR) is an important metric for making many calculations about your site’s SEO performance, from estimating revenue opportunity and prioritizing keyword optimization to assessing the impact of SERP changes within the market. Most SEOs know the value of creating custom CTR curves for their sites to make those projections more accurate. The only problem with building custom CTR curves from Google Search Console (GSC) data is that GSC is a flawed tool that can report inaccurate data. This muddies the data we get from GSC and can make it difficult to accurately interpret the CTR curves we create from it. Fortunately, there are ways to control for these inaccuracies so you get a much clearer picture of what your data says.
By carefully cleaning your data and thoughtfully implementing an analysis methodology, you can calculate CTR for your site much more accurately using 4 basic steps:
Extract your site’s keyword data from GSC — the more data you can get, the better.
Remove biased keywords — Branded search terms can throw off your CTR curves so they should be removed.
Find the optimal impression level for your data set — Google samples data at low impression levels so it’s important to remove keywords that Google may be inaccurately reporting at these lower levels.
Choose your rank position methodology — No data set is perfect, so you may want to change your rank classification methodology depending on the size of your keyword set.
Let’s take a quick step back
Before getting into the nitty gritty of calculating CTR curves, it’s useful to briefly cover the simplest way to calculate CTR since we’ll still be using this principle.
To calculate CTR, download the keywords your site ranks for, with click, impression, and position data. Then divide the sum of clicks by the sum of impressions at each rank level, and you’ll come out with a custom CTR curve. For more detail on actually crunching the numbers for CTR curves, you can check out this article by SEER if you’re not familiar with the process.
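The calculation above can be sketched in a few lines of Python. This is a minimal version that assumes each keyword row carries clicks, impressions, and average position:

```python
from collections import defaultdict

def ctr_curve(rows):
    """Build a CTR curve from GSC-style keyword rows.

    rows: iterable of (clicks, impressions, avg_position) per keyword.
    Returns {rounded_position: CTR}, where CTR at each rank level is
    sum(clicks) / sum(impressions), as described above.
    """
    clicks = defaultdict(int)
    impressions = defaultdict(int)
    for c, i, pos in rows:
        rank = round(pos)  # rounded-position bucketing (see Step 4)
        clicks[rank] += c
        impressions[rank] += i
    return {rank: clicks[rank] / impressions[rank]
            for rank in impressions if impressions[rank] > 0}
```
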
Where this calculation gets tricky is when you start trying to control for the bias that inherently comes with CTR data. Even though we know GSC gives flawed data, we don’t have many alternatives, so the best we can do is eliminate as much bias as possible from our data set and stay aware of the problems that come with using it.
Without controlling and cleaning the data that comes from GSC, you can get results that seem illogical. For instance, you may find your curves show position 2 and 3 CTRs with wildly larger averages than position 1. If you don’t know that the data you’re using from Search Console is flawed, you might accept it as truth and a) try to come up with hypotheses as to why the CTR curves look that way based on incorrect data, and b) create inaccurate estimates and projections based on those CTR curves.
Step 1: Pull your data
The first part of any analysis is actually pulling the data. This data ultimately comes from GSC, but there are many platforms that you can pull this data from that are better than GSC’s web extraction.
Google Search Console — The easiest place to get the data is GSC itself. You can go into GSC and pull all your keyword data for the last three months, and Google will automatically download a CSV file for you. The downside to this method is that GSC only exports 1,000 keywords at a time, making your data set much too small for analysis. You can try to get around this by using the keyword filter for the head terms you rank for and downloading multiple 1k files to get more data, but this process is arduous. Besides, the other methods listed below are better and easier.
Google Data Studio — For any non-programmer looking for an easy way to get much more data from Search Console for free, this is definitely your best option. Google Data Studio connects directly to your GSC account data, but there are no limitations on the data size you can pull. For the same three-month period where GSC would give me 1k keywords (its export max), Data Studio gave me back 200k keywords!
Google Search Console API — This takes some programming know-how, but one of the best ways to get the data you’re looking for is to connect directly to the source using their API. You’ll have much more control over the data you’re pulling and get a fairly large data set. The main setback here is you need to have the programming knowledge or resources to do so.
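For illustration, here is roughly what the paging logic might look like in Python. The request-body field names (startDate, rowLimit, startRow, etc.) follow the Search Analytics API, but treat this wrapper as a sketch and check the API docs before relying on it; in the real call, the property URL is passed separately as siteUrl:

```python
def fetch_all_keywords(query_fn, start_date, end_date, page_size=25000):
    """Page through Search Analytics query results.

    query_fn is any callable that takes a request-body dict and returns
    the API's response dict, e.g. a thin wrapper around
    service.searchanalytics().query(siteUrl=..., body=body).execute()
    from google-api-python-client (names assumed; verify against the docs).
    Each response is capped at rowLimit rows (25,000 max), so we advance
    startRow until a page comes back short.
    """
    rows, start_row = [], 0
    while True:
        body = {
            "startDate": start_date,
            "endDate": end_date,
            "dimensions": ["query"],
            "rowLimit": page_size,
            "startRow": start_row,
        }
        page = query_fn(body).get("rows", [])
        rows.extend(page)
        if len(page) < page_size:  # short page means we've seen everything
            return rows
        start_row += page_size
```

Because the fetch function is injected, the paging logic can be tested without touching the live API.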
Keylime SEO Toolbox — If you don’t know how to program but still want access to Google’s impression and click data, then this is a great option to consider. Keylime stores historical Search Console data directly from the Search Console API so it’s as good (if not better) of an option than directly connecting to the API. It does cost $49/mo, but that’s pretty affordable considering the value of the data you’re getting.
The platform you get your data from matters because each one listed gives out a different amount of data. I’ve listed them in order, from the tool that gives the least data to the one that gives the most. Using GSC’s UI directly gives by far the least, while Keylime can connect to GSC and Google Analytics and combine data to actually give you more information than the Search Console API alone. This is good because the more data you can get, the more likely it is that the CTR averages you build for your site will be accurate.
Step 2: Remove keyword bias
Once you’ve pulled the data, you have to clean it. Because this data ultimately comes from Search Console we have to make sure we clean the data as best we can.
Remove branded search & knowledge graph keywords
When you create general CTR curves for non-branded search, it’s important to remove all branded keywords from your data. Branded keywords tend to have high CTRs, which will throw off the averages of your non-branded searches. In addition, if you’re aware of any SERP features, like the knowledge graph, that you rank for consistently, you should try to remove those keywords as well, since we’re only calculating CTR for positions 1–10 and SERP feature keywords could throw off your averages.
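A simple filter is usually enough for the branded-keyword step. This sketch assumes a plain list of query strings and does blunt, case-insensitive substring matching:

```python
def remove_branded(keywords, brand_terms):
    """Drop keywords containing any brand variant (case-insensitive).

    keywords: list of query strings.
    brand_terms: brand name variants, e.g. ["moz", "mozcon"].
    Substring matching is a blunt instrument; for brands that are also
    common words, you would want a stricter (e.g. whole-word) match.
    """
    brand_terms = [b.lower() for b in brand_terms]
    return [kw for kw in keywords
            if not any(b in kw.lower() for b in brand_terms)]
```
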
Step 3: Find the optimal impression level in GSC for your data
The largest bias in Search Console data appears to come from keywords with low search impressions, which is the data we need to try to remove. It’s not surprising that Google doesn’t accurately report low-impression data, since we know Google doesn’t even include data for very low-volume searches in GSC. For some reason, Google drastically over-reports CTR for these low-impression terms. As an example, here’s an impression distribution graph I made with data from GSC for keywords that have only one impression, showing the CTR at every position.
If that doesn’t make a lot of sense to you, I’m right there with you. This graph says that a majority of keywords with only one impression have 100 percent CTR. It’s extremely unlikely, no matter how good your site’s CTR is, that one-impression keywords mostly earn 100 percent CTR, especially for keywords ranking below #1. This gives us pretty solid evidence that low-impression data is not to be trusted, and we should limit the number of low-impression keywords in our data.
Step 3 a): Use normal curves to help calculate CTR
For more evidence of Google giving us biased data we can look at the distribution of CTR for all the keywords in our data set. Since we’re calculating CTR averages, the data should adhere to a Normal Bell Curve. In most cases CTR curves from GSC are highly skewed to the left with long tails which again indicates that Google reports very high CTR at low impression volumes.
If we change the minimum number of impressions for the keyword sets that we’re analyzing we end up getting closer and closer to the center of the graph. Here’s an example, below is the distribution of a site CTR in CTR increments of .001.
The graph above shows the distribution at a very low impression filter, around 25 impressions. The data sits mostly on the right side of the graph, with a small, high concentration on the left, implying that this site has a very high click-through rate. However, by increasing the impression filter to 5,000 impressions per keyword, the distribution of keywords gets much, much closer to the center.
This graph most likely would never be centered around 50% CTR because that’d be a very high average CTR to have, so the graph should be skewed to the left. The main issue is we don’t know how much because Google gives us sampled data. The best we can do is guess. But this raises the question, what’s the right impression level to filter my keywords out to get rid of faulty data?
One way to find the right impression level to create CTR curves is to use the above method to get a feel for when your CTR distribution is getting close to a normal distribution. A Normally Distributed set of CTR data has fewer outliers and is less likely to have a high number of misreported pieces of data from Google.
Step 3 b): Finding the best impression level to calculate CTR for your site
You can also create impression tiers to see where there’s less variability in the data you’re analyzing instead of Normal Curves. The less variability in your estimates, the closer you’re getting to an accurate CTR curve.
Tiered CTR tables
Tiered CTR tables need to be built for every site, because GSC’s sampling differs from site to site depending on the keywords you rank for. I’ve seen CTR curves vary by as much as 30 percent without the proper controls added to CTR estimates. This step is important because using all of the data points in your CTR calculation can wildly skew your results, while using too few data points gives you too small a sample size to get an accurate idea of what your CTR actually is. The key is to find the happy medium between the two.
In the tiered table above, there’s huge variability from All Impressions to >250 impressions. After that point, though, the change per tier is fairly small. Greater than 750 impressions is the right level for this site: the variability among curves is fairly small as we increase impression levels in the higher tiers, and >750 impressions still gives us plenty of keywords at each ranking level of our data set.
When creating tiered CTR curves, it’s important to also count how much data is used to build each data point throughout the tiers. For smaller sites, you may find that you don’t have enough data to reliably calculate CTR curves, but that won’t be apparent from just looking at your tiered curves. So knowing the size of your data at each stage is important when deciding what impression level is the most accurate for your site.
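Putting the last two points together, a tiered table is easy to generate programmatically. This sketch reports the keyword count alongside the overall CTR at each minimum-impression tier, so thin samples are visible at a glance (the tier boundaries here are illustrative; pick ones that suit your site):

```python
def tiered_ctr(rows, tiers=(0, 250, 750, 5000)):
    """Compute overall CTR and keyword counts at each impression tier.

    rows: iterable of (clicks, impressions) per keyword.
    Returns a list of (min_impressions, keyword_count, ctr) tuples, so
    you can see both where the curve stabilizes and how much data
    supports each tier. ctr is None when a tier has no impressions.
    """
    table = []
    for floor in tiers:
        kept = [(c, i) for c, i in rows if i >= floor]
        total_impr = sum(i for _, i in kept)
        ctr = sum(c for c, _ in kept) / total_impr if total_impr else None
        table.append((floor, len(kept), ctr))
    return table
```

For smaller sites, the count column shows quickly when a tier no longer has enough keywords to be reliable.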
Step 4: Decide which position methodology to use when analyzing your data
Once you’ve figured out the correct impression level to filter your data by, you can start actually calculating CTR curves using impression, click, and position data. The problem with position data is that it’s often inaccurate, so if you have great keyword tracking, it’s far better to use the data from your own rank tracking than Google’s. Most people can’t track that many keyword positions, though, so it’s usually necessary to use Google’s position data. That’s certainly possible, but it’s important to be careful with how we use it.
How to use GSC position
One question that may come up when calculating CTR curves from GSC average positions is whether to use rounded positions or exact positions (i.e., only positions from GSC that are whole numbers; ranks of 1.0 or 2.0 are exact positions, while 1.3 or 2.1 are not).
Exact position vs. rounded position
The reasoning behind using exact position is we want data that’s most likely to have been ranking in position 1 for the time period we’re measuring. Using exact position will give us the best idea of what CTR is at position 1. Exact rank keywords are more likely to have been ranking in that position for the duration of the time period you pulled keywords from. The problem is that Average Rank is an average so there’s no way to know if a keyword has ranked solidly in one place for a full time period or the average just happens to show an exact rank.
Fortunately, if we compare exact position CTR vs rounded position CTR, they’re directionally similar in terms of actual CTR estimations with enough data. The problem is that exact position can be volatile when you don’t have enough data. By using rounded positions we get much more data, so it makes sense to use rounded position when not enough data is available for exact position.
The one caveat is for position 1 CTR estimates. For every other position, individual rankings can pull a keyword’s average up or down: if a keyword has an average ranking of 3, it could have ranked #1 and #5 at different points, with the average working out to 3. For #1 rankings, however, the average can only be pulled down, which means the CTR for rounded position 1 will always be reported lower than reality.
A rank position hybrid: Adjusted exact position
So if you have enough data, only use exact position for position 1. For smaller sites, you can use adjusted exact position. Since Google gives averages up to two decimal points, one way to get more “exact position” #1s is to include all keywords which rank below position 1.1. I find this gets a couple hundred extra keywords which makes my data more reliable.
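As a sketch, the adjusted exact position idea for position 1 might look like this. The 1.1 cutoff is the one suggested above; tune it against your own rank-tracking data:

```python
def position_one_ctr(rows, cutoff=1.1):
    """Estimate position-1 CTR using 'adjusted exact position'.

    rows: iterable of (clicks, impressions, avg_position) per keyword.
    Instead of requiring avg_position == 1.0 exactly, keep every keyword
    ranking at or better than `cutoff` (1.1 by default) to get a larger,
    still mostly-position-1 sample. Returns None if no keywords qualify.
    """
    kept = [(c, i) for c, i, pos in rows if pos <= cutoff]
    total_impr = sum(i for _, i in kept)
    if total_impr == 0:
        return None
    return sum(c for c, _ in kept) / total_impr
```
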
This also shouldn’t pull down our average much at all, since GSC is somewhat inaccurate in how it reports Average Ranking anyway. At Wayfair, we use STAT as our keyword rank tracking tool, and after comparing GSC average rankings with average rankings from STAT, I found the rankings near the #1 position are close, but not 100 percent accurate. The farther down you go in the rankings, the larger the difference between STAT and GSC becomes, so watch how far down in the rankings you go to include more keywords in your data set.
I’ve done this analysis for all the rankings tracked on Wayfair and I found the lower the position, the less closely rankings matched between the two tools. So Google isn’t giving great rankings data, but it’s close enough near the #1 position, that I’m comfortable using adjusted exact position to increase my data set without worrying about sacrificing data quality within reason.
Conclusion
GSC is an imperfect tool, but it gives SEOs the best information we have for understanding an individual site’s click performance in the SERPs. Since we know that GSC is going to throw us a few curveballs with the data it provides, it’s important to control as many pieces of that data as possible. The main ways to do so are to choose your ideal data extraction source, get rid of low-impression keywords, and use the right rank-rounding method. If you do all of these things, you’re much more likely to get accurate, consistent CTR curves for your own site.
When it comes to Google’s algorithms, there’s quite a difference between how they treat local and organic. Get the scoop on which factors drive the local algorithm and how it works from local SEO extraordinaire, Joy Hawkins, as she offers a taste of her full talk from MozCon 2019.
Video Transcription
Hello, Moz fans. I’m Joy Hawkins. I run a local SEO agency from Toronto, Canada, and a search forum known as the Local Search Forum, which basically is devoted to anything related to local SEO or local search. Today I’m going to be talking to you about Google’s local algorithm and the three main factors that drive it.
If you’re wondering what I’m talking about when I say the local algorithm, this is the algorithm that fuels what we call the three-pack here. When you do a local search, or a search that Google thinks has local intent, like “plumbers” let’s say, you traditionally will get three results at the top with the map, and then everything below it I refer to as organic. This algorithm I’ll be breaking down is what fuels this three-pack, also known as Google My Business listings or Google Maps listings.
They’re all names for the exact same thing. If you search Google’s Help Center for what they look at when ranking these entities, they tell you that there are three main things fueling this algorithm: proximity, prominence, and relevance. I’m going to break down each one and explain how the factors work.
1. Proximity
I’ll kind of start here with proximity. Proximity is basically defined as your location when you are searching on your phone or your computer and you type something in. It’s where Google thinks you are located. If you’re not really sure, often you can scroll down to the bottom of your page, and at the bottom of your page it will often list a zip code that Google thinks you’re in.
Zip code (desktop)
The other way to tell is if you’re on a phone, sometimes you can also see a little blue dot on the map, which is exactly where Google thinks you’re located. On a high level, we often assume Google just places us in a city, but that’s actually pretty false. There’s been a lot of talk at MozCon about how Google pretty much always knows your location more precisely than that.
Generally speaking, if you’re on a computer, they know what zip code you’re in, and they’ll list that at the bottom. There are a variety of tools that can help you check ranking based on zip codes, some of which would be Moz Check Your Presence Tool, BrightLocal, Whitespark, or Places Scout. All of these tools have the ability to track at the zip code level.
Geo coordinates (mobile)
However, when you’re on a phone, usually Google knows your location even more detailed, and they actually generally know the geo coordinates of your actual location, and they pinpoint this using that little blue dot.
It knows more than just your zip code; it knows where you’re actually located. It’s a bit creepy. But there are a couple of tools that will actually let you see results based on geo coordinates, which is really cool and very accurate. Those tools include Local Falcon, and there’s a Chrome extension, which is 100% free, that you can put in your browser, called GS Location Changer.
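Tools like Local Falcon work by scanning rankings from a grid of geo coordinates around a location. As a rough sketch of the idea (my own toy implementation, not how any of these tools actually work), generating such a grid is straightforward:

```python
import math

def scan_grid(lat, lng, radius_km=2.0, steps=5):
    """Return a steps x steps grid of (lat, lng) points centered on a location.

    Approximate conversion: 1 degree of latitude is about 111.32 km, and
    longitude degrees shrink by cos(latitude). Good enough for a local rank
    scan, not for surveying. steps must be >= 2.
    """
    points = []
    lat_step = radius_km / 111.32
    lng_step = radius_km / (111.32 * math.cos(math.radians(lat)))
    for i in range(steps):
        for j in range(steps):
            # Offsets run from -radius to +radius across the grid
            dy = (i - (steps - 1) / 2) * (2 * lat_step / (steps - 1))
            dx = (j - (steps - 1) / 2) * (2 * lng_step / (steps - 1))
            points.append((round(lat + dy, 6), round(lng + dx, 6)))
    return points
```

Checking the same keyword from each point in a grid like this is what reveals the “completely different three-packs” effect described below.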
I use this all the time in an incognito browser if I want to see what search results look like from a very, very specific location. Depending on what industry you’re working in, it’s really important to know which of these two levels you need to be looking at. If you work with lawyers, for example, zip code level is usually good enough.
There aren’t enough lawyers to make a huge difference between individual points inside a given zip code. However, if you work with dentists or restaurants, let’s say, you really need to be looking at the geo-coordinate level. We have seen lots of cases where we’ll scan a specific keyword using these two tools and, depending on where in that zip code we are, see completely different three-packs.
It’s very, very key to know that this factor here for proximity really influences the results that you see. This can be challenging, because when you’re trying to explain this to clients or business owners, they search from their home, and they’re like, “Why am I not there?” It’s because their proximity or their location is different than where their office is located.
I realize this is a challenging problem to solve for a lot of agencies on how to represent this, but that’s kind of the tools that you need to look at and use.
2. Prominence
Moving to the next factor, prominence. This is basically how important Google thinks you are. Is this business a big deal, or is it just some random, crappy business or a new business that we don’t know much about?
This looks at things like links, for example.
Store visits, if you are a brick-and-mortar business and you get no foot traffic, Google likely won’t think you’re very prominent.
Reviews, the number of reviews often factors in here. We often see in cases where businesses have a lot of reviews and a lot of old reviews, they generally have a lot of prominence.
Citations can also factor in here: the sheer number of citations a business has can contribute to its prominence.
3. Relevance
Moving into the relevance factor, relevance is basically, does Google think you are related to the query that is typed in? You can be as prominent as anyone else, but if you do not have content on your page that is structured well, that covers the topic the user is searching about, your relevance will be very low, and you will run into issues.
It’s very important to know that these three things all kind of work together, and it’s really important to make sure you are looking at all three. On the relevance end, it looks at things like:
content,
onsite SEO, so your title tags, your meta tags, all that nice SEO stuff
Citations also factor in here, because it looks at things like your address. Like are you actually in this city? Are you relevant to the city that the user is trying to get locations from?
Categories are huge here, meaning your Google My Business categories. Google currently has just under 4,000 different Google My Business categories, and they add and remove an insane number every year. It’s very important to keep on top of that and make sure you have the correct categories on your listing, or you won’t rank well.
The business name is unfortunately a huge factor as well in here. Merely having keywords in your business name can often give you relevance to rank. It shouldn’t, but it does.
Then review content. I know Mike Blumenthal did a really cool experiment on this a couple years ago, where he actually had a bunch of people write a bunch of fake reviews on Yelp mentioning certain terms to see if it would influence ranking on Google in the local results, and it did. Google is definitely looking at the content inside the reviews to see what words people are using so they can see how that impacts relevance.
How to rank without proximity, prominence, or relevance
Obviously you want all three of these things, but it is possible to rank without all three, and I’ll give a couple of examples. Say you’re looking to expand your radius because you service a lot of people.
You don’t just service people on your block; you serve the whole city of Chicago, for example. You’re not likely to rank across all of Chicago for very common terms like “dentist” or “personal injury attorney.” However, if you have a lot of prominence and a really relevant page or content for niche terms, we often see that it’s possible to really expand your radius for long-tail keywords, which is great.
Prominence is probably the number one thing that will expand your radius inside competitive terms. We’ll often see Google bringing in a business that is slightly outside of the same area as other businesses, just because they have an astronomical number of reviews, or maybe their domain authority is ridiculously high and they have all these linking domains.
Those two factors are definitely what influences the amount of area you cover with your local exposure.
Spam and fake listings
On the flip side, spam is something I talk a lot about. Fake listings are a big problem in the local search space: lead gen providers create these listings, and they rank with zero prominence.
They have no citations, no authority, and often not even websites, and they still rank because of the other two factors. If you create 100 listings in a city, you’re going to be close to someone searching. Then if you stuff a bunch of keywords in your business name, you’ll have some relevance, and by sidestepping the prominence factor entirely, these providers are able to get their listings to rank, which is very frustrating.
Obviously, Google is kind of trying to evolve this algorithm over time. We are hoping that maybe the prominence factor will increase over time to kind of eliminate that problem, but ultimately we’ll have to see what Google does. We also did a study recently to test to see which of these two factors kind of carries more weight.
An experiment: Linking to your site within GMB
One thing I’ve kind of highlighted here is when you link to a website inside your Google My Business listing, there’s often a debate. Should I link to my homepage, or should I link to my location page if I’ve got three or four or five offices? We did an experiment to see what happens when we switch a client’s Google My Business listing from their location page to their homepage, and we’ve pretty much almost always seen a positive impact by switching to the homepage, even if that homepage is not relevant at all.
In one example, we had a client that was in Houston, and they opened up a location in Dallas. Their homepage was optimized for Houston, but their location page was optimized for Dallas. I had a conversation with a couple of other SEOs, and they were like, “Oh, well, obviously link to the Dallas page on the Dallas listing. That makes perfect sense.”
But we were wondering what would happen if we linked to the homepage, which is optimized for Houston. We saw a lift in rankings and a lift in the number of search queries that this business showed for when we switched to the homepage, even though the homepage didn’t really mention Dallas at all. Something to think about. Make sure you’re always testing these different factors and chasing the right ones when you’re coming up with your local SEO strategy. Finally, something I’ll mention at the top here.
Local algorithm vs organic algorithm
As far as the local algorithm versus the organic algorithm, some of you might be thinking, okay, these things really look at the same factors. They really kind of, sort of work the same way. Honestly, if that is your thinking, I would really strongly recommend you change it. I’ll quote this. This is from a Moz whitepaper that they did recently, where they found that only 8% of local pack listings had their website also appearing in the organic search results below.
I feel like the overlap between these two is definitely shrinking, which is kind of why I’m a bit obsessed with figuring out how the local algorithm works to make sure that we can have clients successful in both spaces. Hopefully you learned something. If you have any questions, please hit me up in the comments. Thanks for listening.
If you liked this episode of Whiteboard Friday, you’ll love all the SEO thought leadership goodness you’ll get from our newly released MozCon 2019 video bundle. Catch Joy’s full talk on the differences between the local and organic algorithm, plus 26 additional future-focused topics from our top-notch speakers:
We suggest scheduling a good old-fashioned knowledge share with your colleagues to educate the whole team — after all, who didn’t love movie day in school? 😉
A lot of people forget that Amazon is a search engine, let alone the largest search engine for e-commerce. With 54 percent of product searches now taking place on Amazon, it’s time to take it seriously. In fact, if we count YouTube as part of Google, Amazon is technically the second largest search engine in the world.
As real estate on Google becomes increasingly difficult to maintain, moving beyond a website-centric e-commerce strategy is a no-brainer. With 54 percent of shoppers choosing to shop on e-commerce marketplaces, it’s no surprise that online marketplaces are the most important digital marketing channel in the US, according to a 2018 study by the Digital Marketing Institute. While marketplaces like Etsy and Walmart are growing fast, Amazon maintains its dominance of e-commerce market share, owning 47 percent of online sales and 5 percent of all retail sales in the US.
Considering that there are currently over 500 million products listed on Amazon.com, and more than two-thirds of clicks happen on the first page of Amazon’s search results—selling products on Amazon is no longer as easy as “set it and forget it.”
Enter the power of SEO.
When we think of SEO, many of us are aware of the basics of how Google’s algorithm works, but not many of us are up to speed with SEO on Amazon. Before we delve into Amazon’s algorithm, it’s important to note how Google and Amazon’s starkly different business models are key to what drives their algorithms and ultimately how we approach SEO on the two platforms.
The academic vs. The stockbroker
Google was born in 1998 out of a Ph.D. project by Lawrence Page and Sergey Brin. It was the first search engine of its kind, designed to crawl and index the web more efficiently than any existing system at the time.
Google was built on a foundation of scientific research and academia, with a mission to:
“Organize the world’s information and make it universally accessible and useful” — Google
Now answering 5.6 billion queries every day, Google’s mission is becoming increasingly difficult, which is why its algorithm is the most complex of any search engine in the world, continuously refined through hundreds of updates every year.
In contrast to Brin and Page, Jeff Bezos began his career on Wall Street, working a series of jobs before starting Amazon in 1994 after reading that the web was growing at 2,300 percent. Determined to take advantage of this, he made a list of the top products most likely to sell online and settled on books because of their low cost and high demand. Amazon was built on a revenue model, with a mission to:
“Be the Earth’s most customer-centric company, where customers can find and discover anything they might want to buy online, and endeavors to offer its customers the lowest possible prices.” — Amazon
Amazon doesn’t have searcher intent issues
When it comes to SEO, the contrasting business models of these two companies lead the search engines to ask very different questions in order to deliver the right results to the user.
On one hand, we have Google who asks the question:
“What results most accurately answer the searcher’s query?”
Amazon, on the other hand, wants to know:
“What product is the searcher most likely to buy?”
On Amazon, people aren’t asking questions, they’re searching for products—and what’s more, they’re ready to buy. So, while Google is busy honing an algorithm that aims to understand the nuances of human language, Amazon’s search engine serves one purpose—to understand searches just enough to rank products based on their propensity to sell.
With this in mind, working to increase organic rankings on Amazon becomes a lot less daunting.
Amazon’s A9 algorithm: The secret ingredient
Amazon may dominate e-commerce search, but many people haven’t heard of the A9 algorithm. That might seem unusual, but the reason Amazon isn’t keen on presenting its algorithm through the lens of a large-scale search engine is simply that Amazon isn’t in the business of search.
Amazon’s business model is a well-oiled revenue-driving machine — designed first and foremost to sell as many products as possible through its online platform. While Amazon’s advertising platform is growing rapidly, and AWS continues as their fastest-growing revenue source — Amazon still makes a large portion of revenue through goods sold through the marketplace.
With this in mind, the secret ingredient behind Amazon’s A9 algorithm is, in fact: Sales Velocity
What is sales velocity, you ask? It’s essentially the speed and volume at which your products sell on Amazon’s marketplace.
There are lots of factors, which Amazon SEOs refer to as “direct” and “indirect” ranking factors, but ultimately every single one of them ties back to sales velocity in some way.
At Wolfgang Digital, we approach SEO on Google based on three core pillars — Technology, Relevance, and Authority.
Evidently, Google’s ranking pillars are all based on optimizing a website to drive click-through on the SERP.
On the other hand, Amazon’s core ranking pillars all tie back to driving revenue through sales velocity: Conversion Rate, Keyword Relevance, and, of course, Customer Satisfaction.
Without further ado, let’s take a look at the key factors behind each of these pillars, and what you can optimize to increase your chances of ranking on Amazon’s coveted first page.
Conversion rate
Conversion rates on Amazon have a direct impact on where your product will rank because this tells Amazon’s algorithm which products are most likely to sell like hotcakes once they hit the first page.
Of all variables to monitor as an Amazon marketer, working to increase conversion rates is your golden ticket to higher organic rankings.
Optimize pricing
Amazon’s algorithm is designed to predict which products are most likely to convert, which is why price has such a huge impact on where your products rank in search results. If you add a new product to Amazon at a cheaper price than the average competitor, your product is inclined to soar to the top-ranking results, at least until it gathers enough sales history for Amazon to determine its actual sales performance.
Even if you’re confident that you have a supplier advantage, it’s worth checking your top-selling products and optimizing pricing where possible. If you have a lot of products, repricing software is a great way to automate pricing adjustments based on the competition while still maintaining your margins.
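The core of the “adjust to competition while maintaining your margins” logic that repricing software automates can be sketched in a few lines. This is my own toy rule for illustration, not how any particular repricer works; the margin and undercut values are arbitrary:

```python
def reprice(cost, competitor_price, min_margin=0.15, undercut=0.01):
    """Suggest a price just below the lowest competitor, but never below
    a minimum margin over cost.

    min_margin: floor margin as a fraction of cost (15% here, arbitrary).
    undercut: amount to price below the competitor (1 cent here, arbitrary).
    """
    floor = round(cost * (1 + min_margin), 2)      # lowest acceptable price
    target = round(competitor_price - undercut, 2)  # just under the competition
    # Hold at the margin floor rather than chase a competitor below cost
    return max(floor, target)
```

The key design point is the floor: without it, automated repricing against another automated repricer becomes exactly the race to the bottom discussed below.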
However, Amazon knows that price isn’t the only factor that drives sales, which is why Amazon’s first page isn’t simply an ordered list of items priced low to high. See the below Amazon UK search results for “lavender essential oil:”
Excluding the sponsored ads, we can see that results aren’t simply ordered with the cheapest products at the top and the more expensive ones lower down the page. So, if you’ve always assumed that selling on Amazon is a race to the bottom on price, read on, my friends.
Create listings that sell
As we discussed earlier, Amazon is no longer a “set it and forget it” platform, which is why you should treat each of your product listings as you would a product page on your website. Creating listings that convert takes time, which is why not many sellers do it well, making it an essential tactic for stealing conversions from the competition.
Title
Make your titles user-friendly, include the most important keywords at the front, and provide just enough information to entice clicks. Gone are the days of keyword-stuffing titles on Amazon; in fact, it may even hinder your rankings by reducing clicks and therefore conversions.
Bullet points
These are the first thing your customer sees, so make sure to highlight the best features of your product using a succinct sentence in language designed to convert.
Improve the power of your bullet points by including information that your top competitors don’t provide. A great way to do this is to analyze the “answered questions” for some of your top competitors.
Do you see any trending questions that you could answer in your bullet points to help shorten the buyer journey and drive conversions to your product?
Product descriptions
Given that over 50 percent of Amazon shoppers said they always read the full description when they are considering purchasing a product, a well-written product description can have a huge impact on conversions.
Your description is likely to be the last thing a customer will read before they choose to buy your product over a competitor, so give these your time and care, reiterating points made in your bullet points and highlighting any other key features or benefits likely to push conversions over the line.
Taking advantage of A+ content for some of your best selling products is a great way to craft a visually engaging description, like this example from Safavieh.
Of course, A+ content requires additional design costs which may not be feasible for everyone. If you opt for text-only descriptions, make sure your content is easy to read while still highlighting the best features of your product.
For an in-depth breakdown on creating a beautifully crafted Amazon listing, I highly recommend this post from Startup Bros.
AB test images
Images are incredibly powerful when it comes to increasing conversions, so if you haven’t tried split testing different image versions on Amazon, you could be pleasantly surprised. One of the most popular tools for Amazon AB testing is Splitly — it’s really simple to use, and affordable with plans starting at $47 per month.
Depending on your product type, it may be worth investing the time into taking your own pictures rather than using the generic supplier provided images. Images that tend to have the biggest impact on conversions are the feature images (the one you see in search results) and close up images, so try testing a few different versions to see which has the biggest impact.
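Splitly handles the statistics for you, but the underlying question in any image split test is whether the difference in conversion rate between two variants is larger than chance. A standard two-proportion z-test sketches the idea (generic statistics, not Splitly’s actual method):

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate different from A's?

    conv_a / conv_b: number of conversions for each image variant.
    n_a / n_b: number of sessions shown each variant.
    Returns (z, significant_at_95) using a two-sided 1.96 threshold.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, abs(z) >= 1.96
```

The practical takeaway: small conversion lifts need large sample sizes before you can trust them, so don’t declare a winning image after a handful of sessions.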
Amazon sponsored ads
The best thing about Amazon SEO is that your performance on other marketing channels can help support your organic performance.
Unlike on Google, where advertising has no impact on organic rankings, if your product performs well on Amazon ads, it may help boost organic rankings. This is because if a product is selling through ads, Amazon’s algorithm may see this as a product that users should also see organically.
A well-executed ad campaign is particularly important for new products, in order to boost their sales velocity in the beginning and build up the sales history needed to rank better organically.
External traffic
External traffic involves driving traffic from social media, email, or other sources to your Amazon products.
While external sources of traffic are a great way to gain more brand exposure and increase customer reach, a well-executed external traffic strategy also impacts your organic rankings because of its role in increasing sales and driving up conversion rates.
Before you start driving traffic straight to your Amazon listing, you may want to consider using a landing page tool like Landing Cube in order to protect your conversion rate as much as possible.
With a landing page tool, you drive traffic to a landing page where customers get a special offer code to use on your product listing page. This way, you only send through traffic that’s highly likely to convert.
Keyword relevance
A9 still relies heavily on keyword matching to determine the relevance of a product to a searcher’s query, which is why this is a core pillar of Amazon SEO.
While your title, bullet points, and descriptions are essential for converting customers, if you don’t include the relevant keywords, your chances of driving traffic to convert are slim to none.
Every single keyword incorporated in your Amazon listing will impact your rankings, so it’s important to deploy a strategic approach.
Steps for targeting the right keywords on Amazon:
Brainstorm as many search terms as you think someone would use to find your product.
Analyze 3–5 competitors with the most reviews to identify their target keywords.
Validate the top keywords for your product using an Amazon keyword tool such as Magnet, Ahrefs, or Keywordtool.io.
Download the keyword lists into Excel, and filter out any duplicate or irrelevant keywords.
Prioritize search terms with the highest search volume, bearing in mind that broad terms will be harder to rank for. Depending on the competition, it may make more sense to focus on lower volume terms with lower competition—but this can always be tested later on.
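The last two steps, deduplicating the exported lists and prioritizing by volume, are easy to script instead of doing by hand in Excel. A minimal sketch (the row shape and volume threshold are my own assumptions about a typical tool export):

```python
def prioritize_keywords(keyword_rows, min_volume=100):
    """Dedupe keyword exports and sort by search volume, highest first.

    keyword_rows: list of (keyword, monthly_volume) tuples, possibly from
    several tools with overlapping rows. Keeps the highest reported volume
    per keyword and drops anything under min_volume (threshold is arbitrary).
    """
    best = {}
    for kw, vol in keyword_rows:
        kw = kw.strip().lower()  # normalize so "Lavender Oil" == "lavender oil"
        if vol >= min_volume and vol > best.get(kw, -1):
            best[kw] = vol
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)
```

Sorting by volume gives you the default priority order; you can then manually demote broad head terms where the competition makes them unrealistic, as noted above.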
Once you have refined the keywords you want to rank for, here are some things to remember:
Include your most important keywords at the start of the title, after your brand name.
Use long-tail terms and synonyms throughout your bullets points and descriptions.
Use your backend search terms wisely: they’re a great place for common misspellings, different measurement versions (e.g., metric or imperial), color shades, and descriptive terms.
Most importantly, don’t repeat keywords. If you’ve included a search term once in your listing, e.g. in the title, you don’t need to include it in your backend search terms. Repeating a keyword, or keyword stuffing, will not improve your rankings.
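That no-repeats rule is mechanical enough to automate: strip out any candidate word that already appears in the listing, then pack what remains into the backend field. A sketch, assuming a roughly 249-byte field limit (the exact limit varies by marketplace and over time, so treat it as a placeholder):

```python
def backend_terms(candidates, listing_text, max_bytes=249):
    """Build a backend search-term string from candidate terms.

    Drops any word already present in the visible listing text (no repeats),
    then packs the remaining words into the field up to max_bytes.
    max_bytes=249 is an assumed limit; check your marketplace's current rules.
    """
    seen = set(listing_text.lower().split())
    kept = []
    for term in candidates:
        # Keep only the words this term adds beyond the listing itself
        words = [w for w in term.lower().split() if w not in seen]
        if words:
            kept.append(" ".join(words))
            seen.update(words)
    out = ""
    for w in " ".join(kept).split():
        extra = (" " if out else "") + w
        if len((out + extra).encode("utf-8")) > max_bytes:
            break  # field is full; drop the lowest-priority remainder
        out += extra
    return out
```

Feeding candidates in priority order matters, since anything past the byte limit is silently dropped.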
Customer satisfaction
Account health
Part of Amazon’s mission statement is “to be the Earth’s most customer-centric company.” This relentless focus on the customer is what drives Amazon’s astounding customer retention, with 85 percent of Prime shoppers visiting the marketplace at least once a week and 56 percent of non-Prime members reporting the same. A focus on the customer is at the core of Amazon’s success, which is why stringent customer satisfaction metrics are a key component of selling on Amazon.
Your account health metrics are the bread and butter of your success as an Amazon seller, which is why they’re part of Amazon’s core ranking algorithm. Customer experience is so important to Amazon that, if you fail to meet the minimum performance requirements, you risk getting suspended as a seller—and they take no prisoners.
On the other hand, if you are meeting your minimum requirements but other sellers are performing better than you by exceeding theirs, they could be at a ranking advantage.
Customer reviews
Customer reviews are one of the most important Amazon ranking factors — not only do they tell Amazon how customers feel about your product, but they are one of the most impactful conversion factors in e-commerce. Almost 95 percent of online shoppers read reviews before buying a product, and over 60 percent of Amazon customers say they wouldn’t purchase a product with less than 4.5 stars.
On Amazon, reviews help to drive both conversion rate and keyword relevance, particularly for long-tail terms. In short, they’re very important.
Increasing reviews for your key products on Amazon was historically a lot easier through acquiring incentivized reviews. However, in 2018, Amazon banned sellers from incentivizing reviews, which makes it even more difficult to actively build reviews, especially for new products.
Tips for building positive reviews on Amazon:
Maintain consistent communication throughout the purchase process using Amazon email marketing software. Following up to thank someone for their order and notifying them when the order is fulfilled creates a seamless buying experience that leaves customers more likely to give a positive review.
Adding branded package inserts to thank customers for their purchase makes the buying experience personal, differentiating you as a brand rather than a nameless Amazon seller. Including a friendly reminder to leave a review in a nice delivery note will have better response rates than the generic email they receive from Amazon.
Providing upfront returns information without a customer having to ask for it shows customers you are confident in the quality of your product. If a customer isn’t happy with your product, adding fuel to the fire with a clunky or difficult returns process is more likely to result in negative reviews through sheer frustration.
Follow up with helpful content related to your products, such as instructions, decor inspiration, or recipe ideas, along with a polite reminder to leave a review.
And of course, deliver an amazing customer experience from start to finish.
Key takeaways for improving Amazon SEO
As a marketer well versed in the world of Google, venturing onto Amazon can seem like a culture shock — but mastering the basic principles of Amazon SEO could be the difference between getting lost in a sea of competitors and driving a successful Amazon business.
Focus on driving sales velocity through increasing conversion rate, improving keyword relevance, nailing customer satisfaction and actively building reviews.
Craft product listings for customers first, search engines second.
Don’t neglect product descriptions in the belief that no one reads them; over 50 percent of Amazon shoppers report reading the full description before buying a product.
Keywords carry a lot of weight. If you don’t include a keyword in your listing, your chances of ranking for it are slim.
Images are powerful. Take your own photos instead of using generic supplier images and be sure to test, test, and test.
Actively build positive reviews by delivering an amazing customer experience.
Invest in PPC and driving external traffic to support organic performance, especially for new products.
What other SEO tips or tactics do you apply on Amazon? Tell me in the comments below!