Google tests 'more results' mobile search interface and new search refinement buttons

Google has confirmed it is testing a new mobile search interface and a new search refinement button. The new search interface shows fewer search results on the mobile search results page, with the option to click on a button labeled “more results.” In addition, Google is testing showing buttons to refine your search directly in the search results snippets.

A Google spokesperson told us “We constantly experiment with new search formats and experiences to deliver the best experience for our users.”

Dan Brackett shared screen shots with us on Twitter, but many others are noticing these new tests.

‘More results’ feature on Google mobile search

Here is a screen shot showing the “more results” link. Often, Google shows as few as two or three organic search results on this page. To see more organic results, you have to click on the “more results” link, and Google then dynamically loads more search results below.

You can also see the refinements at the top of the screen shot above. Here is another screen shot of these refinements directly in what is called a featured snippet.

Google has been testing both of these for at least the past few weeks, and more and more searchers are beginning to notice them.

This is just a test, and we do not know if or when Google will release this to a wider set of test users or to everyone.

Google changes info command search operator, dropping useful links

Google has confirmed with Search Engine Land that it has changed how the info command, a search operator that gives you more details about a site, is displayed in search.

Previously, the info operator gave searchers the snippet plus a section of additional links: to operators showing links to the site, to the Google cache, to similar sites and more. That whole section has been removed, and now Google shows just the snippet.

Here is the before shot from a couple of years ago:

Now what I see is only the snippet:

Google told us this is the new behavior for the command.

Google is testing a new way to report offensive Autocomplete suggestions

Google is testing a new, more visible way for searchers to report potentially offensive suggestions in its Autocomplete feature.

The test is currently limited to a very small percentage of Google users, but the company says it hopes to roll it out to all users around the world soon. Google shared the screen shot below of one version of the reporting tool — you can see the gray text, “Report offensive query,” in the bottom right, below the last Autocomplete suggestion. This may not necessarily be what the final feedback invitation looks like, but is one version in testing now.

A Google spokesperson confirmed the feedback test in a statement shared with Search Engine Land:

“Autocomplete predictions are based on searches previously carried out by users around the world. That means that predicted terms are sometimes unexpected or offensive. We have been actively working on improvements to our algorithm that will help surface more high quality, credible content on the web. In addition, we’re experimenting with a new feature that allows people to report offensive Search predictions. We’re working to incorporate such feedback into our algorithms, and we hope to roll this out more broadly over time. Autocomplete isn’t an exact science and we’re continually working to improve it.”

Google users have already been able to report offensive Autocomplete predictions, but only via a form that’s buried in Google’s support pages. Putting the feedback invitation right in the Autocomplete suggestions will certainly make it more visible and should lead to a lot more reports.

Autocomplete is Google’s tool that relies on worldwide search activity to predict what a user is searching for and display potentially matching queries. But it’s a feature that has come under fire for years. There were accusations last year that Autocomplete was filtering results to Hillary Clinton’s benefit. Before that, the company responded to complaints by removing the phrase “how can I join ISIS” from Autocomplete. Google’s come under fire for racist suggestions in Autocomplete, and has gone to court on several occasions over predicted Autocomplete queries.

What the heck is machine learning, and why should I care?

There are many uses for machine learning and AI in the world around us, but today I’m going to talk about search. So, assuming you’re a business owner with a website or an SEO, the big question you’re probably asking is: what is machine learning and how will it impact my rankings?

The problem with this question is that it relies on a couple of assumptions that may or may not be correct: First, that machine learning is something you can optimize for, and second, that there will be rankings in any traditional sense.

So before we get to work trying to understand machine learning and its impact on search, let’s stop and ask ourselves the real question that needs to be answered:

What is Google trying to accomplish?

It is by answering this one seemingly simple question that we gain our greatest insights into what the future holds and why machine learning is part of it. And the answer to this question is also quite simple. It’s the same as what you and I both do every day: try to earn more money.

This, and this alone, is the objective — and with shareholders, it is a responsibility. So, while it may not be the feel-good answer you were hoping for, it is accurate.

[Read the full article on MarTech Today.]

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.

An experiment in trying to predict Google rankings

Machine learning is quickly becoming an indispensable tool for many large companies. Everyone has surely heard about Google’s AI algorithm beating the world champion in Go, as well as technologies like RankBrain, but machine learning does not have to be a mystical subject relegated to the domain of math researchers. There are many approachable libraries and technologies that show promise of being very useful to any industry that has data to play with.

Machine learning also has the ability to turn traditional website marketing and SEO on its head. Late last year, my colleagues and I (rather naively) began an experiment in which we threw several popular machine learning algorithms at the task of predicting ranking in Google. We ended up with an ensemble that achieved a 41 percent true positive rate and a 41 percent true negative rate on our data set.

In the following paragraphs, I will take you through our experiment, and I will also discuss a few libraries and technologies that are important for SEOs to begin understanding.

Our experiment

Toward the end of 2015, we started hearing more and more about machine learning and its promise to make use of large amounts of data. The more we dug in, the more technical it became, and it quickly became clear that it would be helpful to have someone help us navigate this world.

About that time, we came across a brilliant data scientist from Brazil named Alejandro Simkievich. The interesting thing to us about Simkievich was that he was working in the area of search relevance and conversion rate optimization (CRO) and placing very well in important Kaggle competitions. (For those of you not familiar, Kaggle is a website that hosts machine learning competitions for groups of data scientists and machine learning enthusiasts.)

Simkievich is the owner of Statec, a data science/machine learning consulting company, with clients in the consumer goods, automotive, marketing and internet sectors. Lots of Statec’s work had been focused on assessing the relevance of e-commerce search engines. Working together seemed a natural fit, since we are obsessed with using data to help with decision-making for SEO.

We like to set big, hairy goals, so we decided to see if we could use the data available from scraping, rank trackers, link tools and a few other sources to create features that would allow us to predict the rank of a webpage. While we knew going in that the likelihood of pulling it off was very low, we pushed ahead for the chance of an amazing win, as well as the opportunity to learn some really interesting technology.

The data

Fundamentally, machine learning is using computer programs to take data and transform it in a way that provides something valuable in return. “Transform” is a very loosely applied word, in that it doesn’t quite do justice to all that is involved, but it was selected for the ease of understanding. The point here is that all machine learning begins with some type of input data.

(Note: There are many tutorials and courses freely available that do a very good job of covering the basics of machine learning, so we will not do that here. If you are interested in learning more, Andrew Ng has an excellent free class on Coursera here.)

The bottom line is that we had to find data that we could use to train a machine learning model. At this point, we didn’t know exactly what would be useful, so we used a kitchen-sink approach and grabbed as many features as we could think of. GetStat and Majestic were invaluable in supplying much of the base data, and we built a crawler to capture everything else.

Our goal was to end up with enough data to successfully train a model (more on this later), and this meant a lot of data. For the first model, we had about 200,000 observations (rows) and 54 attributes (columns).

A little background

As I said before, I am not going to go into a lot of detail about machine learning, but it is important to grasp a few points to understand the next section. Much of the machine learning work done today deals with regression, classification and clustering algorithms. I will define the first two here, as they were relevant to our project.

Regression algorithms are normally useful for predicting a single number. If you needed to create an algorithm that predicted a stock price based on features of stocks, you would select this type of model. These targets are called continuous variables.

Classification algorithms are used to predict a member of a class of possible answers. This could be a simple “yes or no” classification, or “red, green or blue.” If you needed to predict whether an unknown person was male or female from features, you would select this type of model. These targets are called discrete variables.
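As a rough sketch of the distinction (using scikit-learn, which is my own choice of library here; the article does not name one, and the toy data is invented), the two model types differ only in the kind of target they are fit against:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Toy feature matrix: two numeric features per observation.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])

# Regression: the target is a continuous variable (e.g., a stock price).
y_continuous = np.array([10.0, 12.0, 20.0, 22.0])
regressor = LinearRegression().fit(X, y_continuous)
price_estimate = regressor.predict([[2.5, 2.5]])  # a single number

# Classification: the target is a discrete variable (0 = "no", 1 = "yes").
y_discrete = np.array([0, 0, 1, 1])
classifier = LogisticRegression().fit(X, y_discrete)
label = classifier.predict([[2.5, 2.5]])  # one of the known classes
```

The only structural difference is the target column you hand to fit(); the choice of estimator follows from whether that target is continuous or discrete.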

Machine learning is a very technical space right now, and much of the cutting-edge work requires familiarity with linear algebra, calculus, mathematical notation and programming languages like Python. One of the items that helped me understand the overall flow at an approachable level, though, was to think of machine learning models as applying weights to the features in the data you give it. The more important the feature, the stronger the weight.

When you read about “training models,” it is helpful to visualize a string connected through the model to each weight, and as the model makes a guess, a cost function is used to tell you how wrong the guess was and to gently, or sternly, pull the string in the direction of the right answer, correcting all the weights.

The part below gets a bit technical with terminology, so if it is too much for you, feel free to skip to the results and takeaways in the final section.

Tackling Google rankings

Now that we had the data, we tried several approaches to the problem of predicting the Google ranking of each webpage.

Initially, we used a regression algorithm. That is, we sought to predict the exact ranking of a site for a given search term (e.g., a site will rank X for search term Y), but after a few weeks, we realized that the task was too difficult. First, a ranking is by definition a characteristic of a site relative to other sites, not an intrinsic characteristic of the site (as, for example, word count). Since it was impossible for us to feed our algorithm with all sites ranked for a given search term, we reformulated the problem.

We realized that, in terms of Google ranking, what matters most is whether a given site ends up on the first page for a given search term. Thus, we re-framed the problem: What if we try to predict whether a site will end up in the top 10 sites ranked by Google for a certain search term? We chose top 10 because, as they say, you can hide a dead body on page two!

From that standpoint, the problem turns into a binary (yes or no) classification problem, where we have only two classes: a) the site is a top 10 site, or b) the site is not a top 10 site. Furthermore, instead of making a binary prediction, we decided to predict the probability that a given site belongs to each class.

Later, to force ourselves to make a clear-cut decision, we decided on a threshold above which we predict that a site will be top 10. For example, if we predict that the threshold is 0.85, then if we predict that the probability of a site being in the top 10 is higher than 0.85, we go ahead and predict that the site will be in the top 10.
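In code, the move from probabilities to clear-cut decisions is a one-liner. The probabilities below are invented stand-ins, and the 0.85 cutoff is the illustrative value from the text, not our actual tuned threshold:

```python
import numpy as np

# Hypothetical predicted probabilities that five pages land in the top 10.
probabilities = np.array([0.92, 0.40, 0.87, 0.15, 0.85])

THRESHOLD = 0.85  # the example cutoff described above

# "Higher than 0.85" becomes a yes; everything else (including exactly 0.85) a no.
predicted_top_10 = probabilities > THRESHOLD
print(predicted_top_10.tolist())  # [True, False, True, False, False]
```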

To measure the performance of the algorithm, we decided to use a confusion matrix.

The following chart provides an overview of the entire process.

Cleaning the data

We used a data set of 200,000 records, including roughly 2,000 different keywords/search terms.

In general, we can group the attributes we used into three categories:

Numerical features
Categorical variables
Text features

Numerical features are those that can take on any number within an infinite or finite interval. Some of the numerical features we used are ease of read, grade level, text length, average number of words per sentence, URL length, website load time, number of domains referring to website, number of .edu domains referring to website, number of .gov domains referring to website, Trust Flow for a number of topics, Citation Flow, Facebook shares, LinkedIn shares and Google shares. We applied a standard scaler to these features to center them around the mean, but other than that, they required no further preprocessing.
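With scikit-learn (an assumption about tooling on my part; the column values below are made up), centering numeric columns around the mean looks like this:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical numeric columns: [text length, URL length, referring domains]
X = np.array([[1200.0, 35.0, 48.0],
              [300.0, 80.0, 5.0],
              [900.0, 52.0, 120.0]])

# Each column is shifted to zero mean and scaled to unit variance,
# so no single large-valued feature dominates the model.
X_scaled = StandardScaler().fit_transform(X)
```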

A categorical variable is one which can take on a limited number of values, with each value representing a different group or category. The categorical variables we used include most frequent keywords, as well as locations and organizations throughout the site, in addition to topics for which the website is trusted. Preprocessing for these features included turning them into numerical labels and subsequent one-hot encoding.
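The two preprocessing steps map to two standard transformers. This is a sketch with invented category values, again assuming scikit-learn:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# A hypothetical categorical column: the topic a site is trusted for.
topics = np.array(["finance", "travel", "finance", "health"])

# Step 1: turn the categories into numerical labels (alphabetical order).
labels = LabelEncoder().fit_transform(topics)  # [0, 2, 0, 1]

# Step 2: one-hot encode the labels into one indicator column per category,
# so the model does not read a false ordering into the label numbers.
one_hot = OneHotEncoder().fit_transform(labels.reshape(-1, 1)).toarray()
```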

Text features are obviously composed of text. They include search term, website content, title, meta-description, anchor text, headers (H3, H2, H1) and others.

It is important to highlight that there is not a clear-cut difference between some categorical attributes (e.g., organizations mentioned on the site) and text, and some attributes indeed switched from one category to the other in different models.

Feature engineering

We engineered additional features, which have correlation with rank.

Most of these features are Boolean (true or false), but some are numerical. An example of a Boolean feature is whether the exact search term is included in the website text, whereas a numerical feature would be how many of the tokens in the search term are included in the website text.
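A minimal sketch of those two flavors of engineered feature (the search term and page text here are invented examples):

```python
search_term = "blue widgets"
page_text = "we sell blue widgets and red widgets online"

# Boolean feature: is the exact search term present in the page text?
exact_match = search_term in page_text  # True

# Numerical feature: how many search-term tokens appear in the page text?
page_tokens = set(page_text.split())
tokens_found = sum(1 for token in search_term.split() if token in page_tokens)  # 2
```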

Below are some of the features we engineered.


To pre-process the text features, we used the TF-IDF (term frequency-inverse document frequency) algorithm. This algorithm views every instance as a document and the entire set of instances as a corpus. Then, it assigns a score to each term, where the more frequent the term is in the document and the less frequent it is in the corpus, the higher the score.

We tried two TF-IDF approaches, with slightly different results depending on the model. The first approach consisted of concatenating all the text features first and then applying the TF-IDF algorithm (i.e., the concatenation of all text columns of a single instance becomes the document, and the set of all such instances becomes the corpus). The second approach consisted of applying the TF-IDF algorithm separately to each feature (i.e., every individual column is a corpus), and then concatenating the resulting arrays.
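Using scikit-learn's TfidfVectorizer (an assumed implementation; the titles and descriptions are invented), the two approaches look like this:

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer

# Two hypothetical text columns per page: title and meta description.
titles = ["blue widgets for sale", "red widget reviews"]
metas = ["buy blue widgets online", "honest red widget reviews"]

# Approach 1: concatenate the text columns first, then vectorize once.
combined = [t + " " + m for t, m in zip(titles, metas)]
tfidf_combined = TfidfVectorizer().fit_transform(combined)

# Approach 2: vectorize each column separately, then concatenate the arrays.
tfidf_separate = hstack([TfidfVectorizer().fit_transform(titles),
                         TfidfVectorizer().fit_transform(metas)])
```

Note that in the second approach a word shared by both columns gets its own column per vectorizer, so the separate version generally produces a wider (and differently weighted) feature array.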

The resulting array after TF-IDF is very sparse (most columns for a given instance are zero), so we applied dimensionality reduction (singular value decomposition) to reduce the number of attributes/columns.

The final step was to concatenate all resulting columns from all feature categories into an array. This we did after applying all the steps above (cleaning the features, turning the categorical features into labels and performing one-hot encoding on the labels, applying TF-IDF to the text features and scaling all the features to center them around the mean).
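That final concatenation is just a horizontal stack of the processed blocks. The shapes below are arbitrary placeholders, not our actual feature counts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical processed feature blocks for three pages.
numeric_scaled = rng.normal(size=(3, 4))   # scaled numeric columns
categorical_onehot = np.eye(3)             # one-hot-encoded categories
text_reduced = rng.normal(size=(3, 2))     # reduced TF-IDF components

# One row per page, all feature categories side by side.
X = np.hstack([numeric_scaled, categorical_onehot, text_reduced])
print(X.shape)  # (3, 9)
```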

Models and ensembles

Having obtained and concatenated all the features, we ran a number of different algorithms on them. The algorithms that showed the most promise are gradient boosting classifier, ridge classifier and a two-layer neural network.

Finally, we combined the model results into an ensemble using simple averages, and thus we saw some additional gains, as different models tend to have different biases.
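The averaging itself is trivial. The three probability vectors below are invented stand-ins for the gradient boosting, ridge and neural network outputs:

```python
import numpy as np

# Hypothetical top-10 probabilities from three models for four pages.
p_boost = np.array([0.90, 0.30, 0.70, 0.10])
p_ridge = np.array([0.80, 0.40, 0.60, 0.20])
p_neural = np.array([0.85, 0.20, 0.80, 0.15])

# A simple average blends the models' differing biases into one estimate.
p_ensemble = (p_boost + p_ridge + p_neural) / 3
```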

Optimizing the threshold

The last step was to decide on a threshold to turn probability estimations into binary predictions (“yes, we predict this site will be top 10 in Google” or “no, we predict this site will not be top 10 in Google”). For that, we optimized the threshold on a cross-validation set and then used the obtained threshold on a test set.
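A sketch of that sweep follows; the probabilities and labels are invented, and plain accuracy stands in for whatever scoring metric was actually optimized:

```python
import numpy as np

# Hypothetical cross-validation probabilities and true top-10 labels.
cv_probs = np.array([0.95, 0.80, 0.60, 0.40, 0.30, 0.10])
cv_truth = np.array([1, 1, 0, 1, 0, 0])

# Try a grid of candidate thresholds and keep the best-scoring one.
best_threshold, best_accuracy = 0.0, -1.0
for t in np.arange(0.05, 1.0, 0.05):
    predictions = (cv_probs > t).astype(int)
    accuracy = float(np.mean(predictions == cv_truth))
    if accuracy > best_accuracy:
        best_threshold, best_accuracy = float(t), accuracy

# best_threshold is then applied, unchanged, to the held-out test set.
```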


The metric we thought would be the most representative to measure the efficacy of the model is a confusion matrix. A confusion matrix is a table that is often used to describe the performance of a classification model (or “classifier”) on a set of test data for which the true values are known.

I am sure you have heard the saying that “a broken clock is right twice a day.” With 100 results for every keyword, a random guess would correctly predict “not in top 10” 90 percent of the time. The confusion matrix captures the accuracy of both positive and negative answers. We obtained roughly a 41 percent true positive rate and a 41 percent true negative rate in our best model.
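scikit-learn's confusion_matrix makes the four cells explicit. The labels below are made up for illustration:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical truth vs. predictions (1 = "in top 10", 0 = "not in top 10").
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Rows are the true classes, columns the predicted classes; ravel()
# unpacks true negatives, false positives, false negatives, true positives.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
```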

Another way of visualizing the effectiveness of the model is by using an ROC curve. An ROC Curve is “a graphical plot that illustrates the performance of a binary classifier system as its discrimination threshold is varied. The curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.” The non-linear models used in the ensemble were XGBoost and a neural network. The linear model was logistic regression. The ensemble plot indicated a combination of the linear and non-linear models.
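With scikit-learn, the curve and its summary area-under-curve score come straight from the model scores (toy values again; a real plot would chart fpr against tpr):

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical truth and model scores for six pages.
y_true = [1, 1, 0, 1, 0, 0]
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]

# TPR and FPR at every threshold the scores allow; AUC summarizes the curve.
fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
```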

XGBoost is short for “Extreme Gradient Boosting,” with gradient boosting being “a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees.”

The chart below shows the relative contribution of the feature categories to the accuracy of the final prediction of this model. Unlike neural networks, XGBoost, along with certain other models, allows you to easily peek into the model to see the relative predictive weight that particular features hold.
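As a sketch of that peek, here is the same idea using scikit-learn's GradientBoostingClassifier in place of XGBoost (to keep the example dependency-free), on synthetic data where only the first column actually drives the label:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))      # three synthetic features
y = (X[:, 0] > 0).astype(int)      # the label depends only on column 0

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Importances sum to 1; the informative column should dominate.
print(model.feature_importances_)
```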

We were quite impressed that we were able to build a model that showed predictive power from the features that we had given it. We were very nervous that our limitation of features would lead to the utter fruitlessness of this project. Ideally, we would have a way to crawl an entire site to gain overall relevance. Perhaps we could gather data on the number of Google reviews a business had. We also understood that Google has much better data on links and citations than we could ever hope to gather.

What we learned

Machine learning is a very powerful tool that can be used even if you do not fully understand the complexity of how it works. I have read many articles about RankBrain and the inability of engineers to understand how it works. This is part of the magic and beauty of machine learning. Similar to the process of evolution, in which life gains different features and some live and some die, the process of machine learning finds the way to the answer instead of being given it.

While we were happy with the results of our first models, it is important to understand that this was trained on a relatively small sample compared to the immense size of the internet. One of the key goals in building any kind of machine learning tool is the idea of generalization and operating effectively on data that has never been seen before. We are currently testing our model on new queries and will continue to refine.

The largest takeaway for me in this project was just starting to get a grasp on the immense value that machine learning has for our industry. A few of the ways I see it impacting SEO are:

Text generation, summarization and categorization. Think about smart excerpts for content and websites that potentially self-organize based on classification.
Never having to write another ALT parameter (see below).
New ways of looking at user behavior and classification/scoring of visitors.
Integration of new ways of navigating websites using speech and smart Q&A-style content/product/recommendation systems.
Entirely new ways of mining analytics and crawled data to give insights into visitors, sessions, trends and potentially visibility.
Much smarter tools for distribution of ad channels to relevant users.

This project was more about learning for us rather than accomplishing a holy grail (of sorts). Much like the advice I give to new developers (“the best learning happens while doing”), it is important to get your hands dirty and start training. You will learn to gather, clean and organize data, and you’ll familiarize yourself with the ins and outs of various machine learning tools.

Much of this is familiar to more technical SEOs, but the industry also is developing tools to help those who are not as technically inclined. I have compiled a few resources below that are of interest in understanding this space.

Recent technologies of interest

It is important to understand that the vast majority of machine learning is not about building a human-level AI, but rather about using data to solve real problems. Below are a few examples of recent ways this is happening.


NeuralTalk2 is a Torch model by Andrej Karpathy for generating natural language descriptions of given images. Imagine never having to write another ALT parameter again and having a machine do it for you. Facebook is already incorporating this technology.

Microsoft Bots and Alexa

Researchers are mastering speech processing and are starting to be able to understand the meaning behind words (given their context). This has deep implications to traditional websites in how information is accessed. Instead of navigation and search, the website could have a conversation with your visitors. In the instance of Alexa, there is no website at all, just the conversation.

Natural language processing

There is a tremendous amount of work going on right now in the realm of translation and content semantics. It goes far beyond traditional Markov chains and n-gram representations of text. Machines are showing the initial hints of abilities to summarize and generate text across domains. “The Unreasonable Effectiveness of Recurrent Neural Networks” is a great post from last year that gives a glimpse of what is possible here.

Home Depot search relevance competition

Home Depot recently sponsored an open competition on Kaggle to predict the relevance of their search results to the visitor’s query. You can see some of the process behind the winning entries on this thread.

How to get started with machine learning

Because we, as search marketers, live in a world of data, it is important for us to understand new technologies that allow us to make better decisions in our work. There are many places where machine learning can help our understanding, from better knowing the intent of our users to which site behaviors drive which actions.

For those of you who are interested in machine learning but are overwhelmed with the complexity, I would recommend Data Science Dojo. There are simple tutorials using Microsoft’s Machine Learning Studio that are very approachable to newbies. This also means that you do not have to learn to code prior to building your first models.

If you are interested in more powerful customized models and are not afraid of a bit of code, I would probably start with listening to this lecture by Justin Johnson at Stanford, as it goes through the four most common libraries. A good understanding of Python (and perhaps R) is necessary to do any work of merit. Christopher Olah has a pretty great blog that covers a lot of interesting topics involving data science.

Finally, Github is your friend. I find myself looking through recent repos added to see the incredibly interesting projects people are working on. In many cases, data is readily available, and there are pretrained models that perform certain tasks very well. Looking around and becoming familiar with the possibilities will give you some perspective into this amazing field.

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.

Etsy Boosts Search For Better Content Discovery, User Engagement

Etsy has more than 30 million items for sale from more than one million sellers globally. There are no SKUs, and most of the data is unstructured, creating a messy and massive discovery challenge for both Etsy and its users.

Accordingly the company is today rolling out more sophisticated search functionality, after a month-long beta test in which it saw increased engagement from both desktop and mobile users.

Etsy has always had search but at a basic level — it was described to me as a “one-size-fits-all approach” — that didn’t do a good job of showcasing the site’s products. The company is now doing a better job of recognizing user intent and delivering more tailored results.

The top image below is a “before” screen, and the second image below is the “after” screen. These screens may look similar but there’s now a great deal more going on “under the hood” to make results more relevant and to expose more “long tail” content to users, which is better for sellers as well.

Etsy told me that roughly 30 percent of the site’s queries historically have been very broad, “low intent” search terms. Rather than just exposing items in a crude ranking hierarchy based on superficial variables, Etsy is now presenting results that offer both categories (to expose more content) and single items, as the image below reflects.

Etsy’s search evolution is fascinating because it represents a larger problem or story — the challenge of organizing huge amounts of data and presenting it in useful ways for end users. What’s also interesting is that Etsy went old-school as it made the leap to a new system.

The company first tried a machine learning approach but discovered that it didn’t have a strong enough training set of data to go down that path. It wound up hiring a library scientist to create a structured taxonomy. Etsy also enlisted its sellers to help better organize, tag and categorize their own items. Now that it has created this new model and data set, it can later use machine learning techniques to further improve search and content ranking.

The new search capabilities provide more context and at the same time allow people to go much deeper into categories both with queries and subsequent refinements and filters.

Since the beta site went live for part of Etsy’s audience a month ago, the company has seen an improvement of more than 10 percent in user engagement. Mobile web engagement is higher. Mobile now represents more than 50 percent of Etsy’s traffic (apps + mobile Web combined).

The new search capabilities should be live for everyone on the site starting today.

The Feature Google Killed The + Command For — Direct Connect — Is Now Dead

Cast your mind back to November 2011, only a few months after the launch of Google+. That’s when Google made one of its biggest Google+ification moves. It demoted a long-standing search feature involving the plus symbol so that searchers could instead more easily reach new “Google+ Pages” that had been launched. Today, this “Direct Connect” feature is forgotten and broken.

How Direct Connect Worked

When Direct Connect launched, Google promised that people could go directly to popular pages on Google+ by beginning their searches with a + sign followed by a few characters of the page’s name. For example, here’s how it worked for YouTube:

See how typing “+youtube” made a link to the YouTube page on Google+ appear at the top of the search selections? Selecting that would then take you to YouTube on Google+.

Today, if you try the same thing, you no longer get a suggestion. In fact, if you type in the entire string to reach a known page — such as +youtube — that won’t even take you to the page.

The Death Of Direct Connect

When did it die? That’s unclear. Search Engine Land contributor Sean Carlos of Antezeta asked us about it earlier this week — if we’d heard about support being dropped. We hadn’t, but it sure looked dead.

Checking with Google, a spokesperson told us, “That particular feature is not a focus for us moving forward.”

Collateral Damage: The + Operator

To enable Direct Connect, Google had to disable how the + symbol used to work in search. It was part of a set of commands that I used to describe as “search engine math,” where:

+ in front of a word required a search engine to only find pages that actually contained that word
- in front of a word meant to find pages without a particular word
“ ” around two or more words meant to find pages with only those exact phrases

The + symbol was a command used by search engines even before Google existed. But Google wanted that + symbol so badly for its Google+ Direct Connect service that it dropped support just before Direct Connect went live. As an alternative, it told searchers who previously used the + symbol to put quotes around single words as a workaround.

Will the + symbol be allowed to return, now that Direct Connect is dead? Google had no comment on that. 

Using the + symbol today on Google certainly causes weird things to happen. For example, a search for the word mars generates about 207 million matches. That would find pages that have the exact word plus pages that might not have the word but are deemed related to it.

Searching for mars surrounded by quotes — “mars” — generates exactly the same number, even though that number should drop. That’s because using the quotes means that Google should find pages only containing the exact word.

Searching the old way with the plus symbol — +mars — generates only about 20,000 results. There are almost certainly many more pages than that which contain the exact word mars on them. Nor is this finding pages that somehow have +mars on them. It’s unclear what it does.

Google does have a Google Verbatim search that was added soon after it dropped support for the + operator, in reaction to complaints from those who didn’t like the change. It searches for only the exact words you provide, without trying to do spelling corrections or synonym matches. But since it doesn’t provide counts, it’s hard to use to measure how well search commands in regular Google search perform.

Chances are, the + operator won’t make a comeback. It is all the more a pity that it had to die in the first place for a Google promotion that didn’t stand the test of time.

Top 10 Search Modifiers: Why They Matter, What They Are & How To Use Them

Google is working hard to improve search, and it’s doing such a great job that the general public doesn’t seem to be noticing. With the inclusion of personalization, localization, customization, and with the depth of data Google knows and understands about the average user, it’s easy to overlook how much goes into making a useful (intuitive) search.

Especially when you factor in how little the average person understands Google, or how Google search actually works.

Yet despite Google’s improvement, or perhaps because of it, people seem to be losing their ability to perform advanced searches within Google — something I’d define as critical to navigating the Web efficiently and effectively, especially as a search marketer.

In fact, Google recently released news of a major update — Hummingbird — specifically designed to help users with complex searches.

Get What You Want

With all this background information, and Google’s evolving ability to understand context, I think it’s more important than ever before to make sure you’re getting out of Google exactly what you want.

We talk a lot about how to optimize your site for Google, but being a power-searcher is also super important for marketers, whether you’re trying to find how your brand is represented on the Web, or what your competitor is doing. And it’s not hard — I believe anyone can become a search pro by understanding 10 simple search modifiers, and creatively applying them to search.

So, instead of surrendering your power and trusting Google (you wouldn’t trust it to run your AdWords campaigns, would you?), let’s take a look at a list of the top 10 search modifiers, and how to use them effectively to perform advanced searches.

The List Of Search Modifiers

Here’s the entire list of the top 10 search modifiers for your perusal.

“query”
–query
Query AND query
Query OR query
Site:example.com
Inurl:query
Intitle:query
Filetype:query
Related:query
Inpostauthor:query

This article will give you the tools you need to understand advanced search modifiers, and how to perform intelligent searches to find what you really need. But, before we jump into that, let’s take a look at why advanced search is disappearing.

Why Advanced Search Is Disappearing

Performing an open-ended search such as [gas station] a handful of years ago would have been laughable. The results would have been nearly useless — a random mix of big name oil companies and information-based sites (think Wikipedia) describing gas stations and their functions via 10 bare-bones links in glorious blue.

Googling back then was nearly an art form — it took an understanding of which words to use, why, and to what effect in order to achieve the desired results.

Search required a certain tech savvy: a working proficiency with the technology itself.

Now, however, if you’re in a new area looking to quickly fill up your gas tank, whether you’re on your laptop, phone, or even a borrowed desktop, the odds are you can and will simply type in [gas station].

Here’s what that looks like on a laptop, not signed into any Google accounts, and with Incognito Mode running:


A far cry from the bare-bones ten blue links of the past — the first five links are relevant to the area I’m currently in, directing me to where I can purchase gas conveniently and affordably. What’s more, there’s a map of my surrounding area with gas stations marked, and a full knowledge graph carousel at the top.

Obviously, searching [gas station] these days is a surprisingly viable option.

Any user searching via smartphone will be even more likely to do such a broad query search. Here are a few screenshots of what it looks like when I do the same search with my phone, in the same area:


As you can see, the rich snippet is nearly the same (minus the KG carousel), but more detailed, allowing me to get on-the-spot directions to seven different gas stations, organized by distance. After that, the top five results are the same as the laptop search, with slight variance in ranking.

Will Critchlow of Distilled did a fantastic Whiteboard Friday about the Future of User Behavior that covers this concept of evolving search behavior extremely well, with great examples.

The point is, Google’s search technology has reached the point of high usability. People don’t think, analyze, or really even understand how search works anymore. They just assume it will work and they’ll get the results they need.

This is a very real trend, and likely to continue. For example, consider Google Now — no searching required, just results you’re likely to need and can further refine. Also, consider Google Glass. Glass doesn’t even support advanced searching — it’s all short, to-the-point answers, likely based on the Knowledge Graph, which is rapidly expanding.

But, Google isn’t perfect. There’s still plenty of need to be savvy within search, especially if you’re using it to navigate the murky Web in a precise manner.

So, there’s still need for advanced and intelligent search, despite Google’s improvement.

The Search Modifiers And How To Use Them

1.       “Query” — The Exact Match Search

How it works: Quotation marks around your query (“query”) will search Google for only the exact match of that query, also known as an exact match search.

Example: “Page One Power link building”


Uses: Searching for an exact piece of information. Great for searching serial numbers, model numbers, obscure names, etc. Very basic, but very important in advanced search, especially when combining search modifiers to achieve specific results.

2.       –Query — The Query Exclusion Modifier

How it works: The subtract modifier will exclude any term you don’t want from the search results.

Example: “Jon Ball” -“Page One Power”


Uses: Trimming the fat from your search results. When searching for something specific, and you’re finding the inclusion of terms or phrases you specifically wish to avoid, simply introduce the exclusion modifier to remove them from the results.
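For marketers who script their research, these first two modifiers are easy to automate: an operator query is just text, and a Google search URL is just that text URL-encoded. Here is a minimal Python sketch; the helper names (`exact`, `exclude`, `search_url`) are my own for illustration, not any official API.

```python
from urllib.parse import quote_plus


def exact(phrase: str) -> str:
    """Wrap a phrase in quotes for an exact-match search."""
    return f'"{phrase}"'


def exclude(term: str) -> str:
    """Prefix a term with - (minus) to exclude it from the results."""
    return f"-{term}"


def search_url(*parts: str) -> str:
    """Join query parts and URL-encode them into a Google search URL."""
    return "https://www.google.com/search?q=" + quote_plus(" ".join(parts))


# Example: search for "Jon Ball" while excluding "Page One Power".
url = search_url(exact("Jon Ball"), exclude(exact("Page One Power")))
print(url)
```

The same approach extends naturally to the other modifiers below, since each one is just another text prefix on a term.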

3.       Query AND query — The Query Combiner

How it works: Using “AND” within search will make sure both your queries appear within each result.

Example: “Jon Ball” AND “Page One Power”

Uses: Narrow your subject within search by combining terms. Searching without the ‘AND’ operator would return results individually featuring either “Jon Ball” or “Page One Power,” as opposed to results featuring both “Jon Ball” and “Page One Power.”

Note: if you don’t use caps, you run the risk of Google thinking it’s a phrase as opposed to an operator.

4.       Query OR query — The Similar Query Search

How it works: Allows you to search for multiple alternative terms; results need only match one of them.

Example: “Jon Ball” CEO OR Founder OR Owner OR Partner


Uses: Search for multiple/similar phrases and words within one result. Typically the ‘OR’ operator is used for multiple words that express the same idea — i.e., CEO/founder/owner/partner.

5.       Site:example.com — The Site Specific Search

How it works: The site: modifier will refine Google’s search to a single website.

Example: site:example.com “Jon Ball”

Uses: Finding information within a specific website, especially when using additional search modifiers. This can also be used to narrow down to TLDs (.gov, .com, .edu).

6.       Inurl:query — The URL Specific Search

How it works: Will only return Web pages that have your query in the actual URL.

Example: inurl:Jon Ball


Uses: This search modifier has a variety of uses. Great for finding various online profiles of someone with a unique name, or finding certain types of pages (guest posts, link lists, infographics, forums, etc.), and can be used effectively with site search as well.

7.       Intitle:query — The Title Specific Search

How it works: The intitle:query modifier will refine search to only pages that have your query within their title.

Example: intitle:jon ball

Uses: Very similar to inurl:query, this works well for finding online profiles, different types of pages, and general information regarding your search (since they’ll have the phrase or word in the title).

8.       Filetype:query — The File Specific Search

How it works: Searches only for pages hosting the type of file you specify.

Example: filetype:pdf


Uses: Finding particular files on a particular subject. Also, as the screenshot shows, it’s a great extra filter to help find a specific piece of content on a specific site.

9.       Related:query — The Related Results Search

How it works: Returns results related to your query. Note: the query can be a website, much as in site search, to return other related websites. However, the website needs to be fairly well known, otherwise related search is unlikely to find anything.


Uses: Exploring the Web, finding pages related to your query, and even finding less well known sites similar to popular sites.

10.   Inpostauthor:query — The Blog Author Search

How it works: The inpostauthor: modifier, also known as blog author search, will search blog posts by author.

Example: inpostauthor:Jon Ball


Uses: Tracking prolific bloggers across the Web! It should be noted that this search can return pretty broad results, especially if the author’s name isn’t fairly unique.

Adding Creativity — Using Multiple Search Modifiers For Advanced Search

So, we’ve covered the top 10 search modifiers. Now, think creatively to search intelligently.

Alone, these search modifiers can help for slightly better results. But combining them together to create a truly precise search — putting together a search string — is where the magic really happens.

In fact, Dr. Pete of Moz wrote a wonderful post about advanced searching based around the site specific search, titled 25 Killer Combos for Google’s Site: Operator. Seriously, take some time to read through that — it’s a great example of how to combine various operators together to create a targeted search for precise results.

Let’s jump into some examples:

1. Track a competitor’s guest post campaign

Possible searches:

Inpostauthor:”Firstname Last” –site:mycompetitor.com
Inurl:Guest Post “Firstname Last” –site:mycompetitor.com
Intitle:Guest Post “Firstname Last” –
“Author: Firstname Last” –
“Written by Firstname Last” –
“Author Profile” “Firstname Last”
“About the Author” “Firstname Last”
“Author Bio” “Firstname Last”
Inurl:Author “Firstname Last”

As you can see, even combining two together will give you much more precision than one alone.

I have to say my favorite search string for tracking guest posts is Inurl:author “Firstname Last.” Though very simple, this search string is great for finding high-quality guest posts, since quality sites tend to make an author page, and the majority of these pages will have “author” in the URL.
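If you are tracking several name variants at once, that favorite string is easy to generate programmatically. A hypothetical sketch (the inurl: operator and quoting are real Google syntax; the helper and the names are placeholders of my own):

```python
def author_queries(names):
    """Build an inurl:author exact-match query for each name variant."""
    return [f'inurl:author "{name}"' for name in names]


# Example with placeholder name variants.
for query in author_queries(["Firstname Last", "F. Last"]):
    print(query)
```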

Don’t forget to check author bios, either — plenty of people only add slight variation to their bios, allowing you to effectively exact match search for pieces of their bio to track them across the Web.

2. Brand mentions

Of course, there are tools to help with this: Google Alerts and Fresh Web Explorer, to name a few.

However, Google search can be used to search for brand mentions as well. Typically, it’s not quite as effective as these tools will be, but for those DIYers, or for learning advanced search, it should prove fun.

Here are a few examples of what that might look like:

– – “Page One Power” OR “” OR “” OR ““
– – “Jon Ball” OR “Jonathan Ball” OR “CEO of Page One Power” OR “Founder of Page One Power”

You want to remove social profiles along with your own site. After that, you should be targeting key brand terms, products, and figures within your company. Using the OR operator will allow you to search for multiple terms at once. Until recently Google had a synonym operator in the form of the tilde ( ~ ), but they unfortunately removed it.
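The brand-mention pattern above (OR together brand terms, exclude your own domains with -site:) can be sketched in a few lines of Python. The function name and the example terms and domains are placeholders of my own, not taken from the article:

```python
def brand_mention_query(terms, excluded_sites):
    """OR together exact-match brand terms and exclude listed domains."""
    ors = " OR ".join(f'"{term}"' for term in terms)
    excludes = " ".join(f"-site:{domain}" for domain in excluded_sites)
    return f"{excludes} {ors}".strip()


query = brand_mention_query(
    ["Page One Power", "Jon Ball"],
    ["pageonepower.com", "twitter.com"],
)
print(query)
# prints: -site:pageonepower.com -site:twitter.com "Page One Power" OR "Jon Ball"
```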

3. Obscure files

One of the main reasons to hone your Google skills — the search for the needle in the haystack.

For this example, let’s assume you’re looking for a presentation from a conference you’ve recently attended.

Oftentimes after a conference or event, presenters will self-host their presentations, because conference websites frequently update or delete their pages.

There are a variety of ways presenters can do this — on their own site, on a third-party site (such as SlideShare), or through social media.

Rather than manually checking multiple sources, let’s try an advanced Google search:

“firstname last” filetype:pdf “conference name” –
“firstname last” “conference name” presentation OR files OR video OR powerpoint –
“conference name” “presentation title” –
“conference name” AND “firstname last” presentation OR files OR slides OR video –

Those are a few searches that should get the ball rolling. The most important thing you can do when using search modifiers is to adapt your search based on the results, to further home in on what you need.
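That iterative file hunt is also easy to template. A sketch, assuming a speaker name and conference name as inputs; the helper name and the combinations are my own, modeled on the searches above:

```python
def deck_queries(speaker, conference):
    """Generate filetype: and keyword searches for a conference deck."""
    base = f'"{speaker}" "{conference}"'
    return [
        f'{base} filetype:pdf',
        f'{base} filetype:ppt OR filetype:pptx',
        f'{base} presentation OR slides OR video',
    ]


# Example with placeholder names.
for query in deck_queries("Firstname Last", "Conference Name"):
    print(query)
```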


Advanced search is extremely important. Google recently released an update, Hummingbird, that’s specifically targeted at improving complex searches, likely due to the natural language used via voice search.

Google search has improved immensely since it was released 15 years ago. So much so, in fact, that I believe people have a hard time truly remembering what search used to be like. But despite this continued improvement, relying on Google limits your own ability to search efficiently and effectively.

Don’t become over-reliant on Google’s search technology. Remembering 10 simple search modifiers and using them creatively can give you the power to search like a pro.

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.

Google Drops Another Search Operator: Tilde For Synonyms

Google has quietly dropped another search operator, the tilde (i.e., ~) search operator.

Google Operating System blog noticed the lack of support for the tilde operator over the weekend. Reportedly, Dan Russell, a Search Research Scientist at Google, confirmed Google dropped the feature. Dan said the feature was dropped due to “lack of use.”

Dan told Alex from Google Operating System:

Yes, it’s been deprecated. Why? Because too few people were using it to make it worth the time, money, and energy to maintain. In truth, although I sometimes disagree with the operator changes, I happen to agree with this one. Maintaining ALL of the synonyms takes real time and costs us real money. Supporting this operator also increases the complexity of the code base. By dropping support for it we can free up a bunch of resources that can be used for other, more globally powerful changes.

Google has dropped several search features in the past month, including:

Google: We Removed Instant Previews Over Low Usage From Searchers
Google Local Results Drops “More Results Near…” To “Improve” Local Search Experience
Google Drops “Translated Foreign Pages” Search Option Due To Lack Of Use
Google Pulls Related Searches Filter Due To Lack Of Usage

There are probably other minor search features Google has dropped that we have not spotted yet.

Google Drops "Translated Foreign Pages" Search Option Due To Lack Of Use

Google has quietly dropped the “Translated Foreign Pages” search filter from the Google search options menu.

Google tells us the option was removed due to lack of use, but they say they are still committed to making the Web available to as many people, in as many languages, as possible.

The translated foreign pages search option enabled searchers to include results from pages written in other languages. So, if you wanted to search in English but also see relevant results from pages written in French, translated for you, you could have used this search option. Dan Russell, Google’s Search Research Scientist, who gives the Google Power Searcher class, explained how useful this feature was on his personal blog.

Here is a picture of that feature:

A Google spokesperson told us:

Removing features is always tough, but we do think very hard about each decision and its implications for our users. Unfortunately, this feature never saw much pick up — but you can still use Chrome to translate entire pages very easily, with a built-in translation bar that helps you read content on the Web, regardless of the language.

This feature was removed shortly after Google removed two other search options, related searches and instant previews, both due to lack of usage. Removing features and products is nothing new for Google; some things stick and some do not.

This one, however, seemed to imply that Google is giving up on its promise of cross-language search, which we covered in 2007 with our posts Google Searchology: CLIR and Views and Google Launches ‘Cross-Language Information Retrieval (CLIR)’.

Google says not so. Per its statement above, Google is still committed to translation. However, there is no longer an easy way to restrict English-based queries to results in specific other languages.