SEO in 2018: Optimizing for voice search

Google Webmaster Trends Analyst John Mueller recently asked for feedback on why webmasters are looking for Google to separate voice search queries in Search Console. If you, like me, want to see voice searches in Google Search Console, definitely submit your feedback on Twitter as John requested.

I hear folks asking about voice search data in Search Console often. Can you elaborate on what you want to see there? What's an example of such a query that would be useful? pic.twitter.com/WOqS7aH4tP

— John ☆.o(≧▽≦)o.☆ (@JohnMu) December 7, 2017

I lived through the very beginnings of mobile SEO, when many people thought mobile search behavior would be completely different from desktop search behavior, only to find that much of it is the same. So I see why Mueller and others don't necessarily understand why Search Console users would want to see voice queries separately. Some queries are the same whether they're typed on a desktop computer or spoken across the room to a Google Home.

That being said, there are some very good reasons to want voice search data. Optimizing for voice search requires some slightly different tactics from those for traditional SEO, and having insight into these queries could help you provide a better experience for those searching by voice.

Not convinced you should care about voice search? Here are three reasons I think you should:

1. More visibility on featured snippets

One of the interesting things about Google Home is that when it answers a question with information from the web, it will cite the source of the information by saying the website’s name, and it will often send a link to the searcher’s Google Home app.

Currently, Google Home and Google Assistant read snippets from sites that are ranked in “position zero” and have been granted a featured snippet. This is why more people than ever are talking about how to optimize for featured snippets. If you look at the articles published on the topic (according to what Google has indexed), you’ll see that the number of articles about how to optimize for featured snippets has grown 178 percent in the past year:

Understanding voice search queries could help us better understand the types of queries that surface featured snippets. As marketers, we could then devote time and resources to providing the best answer for the most common featured snippets in hopes of getting promoted to position zero.
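To make the idea concrete, here is a rough sketch of question-and-answer markup. Structured data doesn't guarantee a featured snippet, and the question text and helper function below are purely illustrative, but formatting content as clear questions with concise answers is a common starting point. This Python sketch generates schema.org FAQPage JSON-LD:

```python
import json

def faq_jsonld(items):
    """Build schema.org FAQPage JSON-LD for a list of Q&A dicts."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": item["question"],
                "acceptedAnswer": {"@type": "Answer", "text": item["answer"]},
            }
            for item in items
        ],
    }, indent=2)

# Hypothetical Q&A content aimed at a snippet-friendly query.
faq = [{
    "question": "What is a featured snippet?",
    "answer": ("A short answer Google displays above the organic results, "
               "often read aloud by voice assistants."),
}]

print(faq_jsonld(faq))
```

The output would go in a script tag of type application/ld+json on the page; keep each answer short and direct, since concise answers are the kind that tend to get read aloud.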

This helps marketers drive credibility to their brand when Google reads their best answer to the searcher, potentially driving traffic to the site from the Google Home app.

And this helps Google because they benefit when featured snippets provide good answers and the searcher is satisfied with the Google Home results. The better the service, the more consumers will use it — and potentially buy more Google Home units or Android phones because they think the service is worthwhile.

If bad featured snippets are found because no one is trying to optimize for those queries, or no featured snippets are found and the Google Home unit must apologize for not being able to help with that query yet, Google potentially loses market share to Amazon in the smart speaker race and Apple in the personal assistant race.

So this one is a win-win, Google. You need more great responses competing for position zero, and we want to help. But first, we need to know what types of queries commonly trigger featured snippets from voice search, and that’s why we need this data in Search Console today.

2. Better way to meet consumer demand and query intent based on context

We saw two major things happen in the early days of mobile SEO when we compared desktop and mobile queries:

- Searchers often used the same keywords in mobile search that they did in desktop search; however, certain keywords were used much more often on mobile search than desktop search (and vice versa).
- Whole new categories of queries emerged as searchers realized that GPS and other features of mobile search could allow them to use queries that just didn't work in desktop search.

An example of the first point is a query like “store hours,” which peaks in volume when shoppers are headed to stores:

An example of the second is “near me” queries, which have grown dramatically with mobile search and mostly occur on mobile phones:

The mode of search therefore changes search behavior as searchers understand what types of searches work well on mobile but not on desktop.

Consider this in the context of voice search. There are certain types of queries that only work on Google Home and Google Assistant. “Tell me about my day” is one. We can guess some of the others, but if we had voice search data labeled, we wouldn’t have to.

How would this be useful to marketers and site owners? Well, it's hard to say exactly without looking at the data, but consider the context in which someone might use voice search: driving to the mall to get a present for the holidays or asking Google Home if a store down the street is still open. Does the searcher still say, "Holiday Hut store hours?" Or do they say something like, "OK Google, give me the store hours for the Holiday Hut at the local mall?" Or even, "How late is Holiday Hut open?"

Google should consider all these queries synonymous in this case, but in some cases, there could be significant differences between voice search behavior and typed search behavior that will affect how a site owner optimizes a page.
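As a rough illustration of why labeled voice data would matter, here is a toy Python sketch that normalizes spoken and typed phrasings to see whether they collapse onto the same intent. The filler list and the "hours" mapping are my own illustrative choices, not anything Google actually does:

```python
import re

# Words treated as spoken-query filler; an illustrative list only.
FILLERS = {"ok", "google", "hey", "give", "me", "the", "for", "at", "a", "is", "open"}

def normalize(query):
    """Reduce a query to a sorted bag of content words."""
    text = re.sub(r"[^\w\s]", " ", query.lower())
    # Collapse different ways of asking about opening times onto one token.
    text = re.sub(r"\b(how late is|store hours|hours)\b", " hours ", text)
    tokens = [t for t in text.split() if t not in FILLERS]
    return " ".join(sorted(set(tokens)))

print(normalize("Holiday Hut store hours"))
print(normalize("How late is Holiday Hut open?"))
print(normalize("OK Google, give me the store hours for the Holiday Hut at the local mall"))
```

The first two phrasings collapse to the same token set, while the third keeps its location modifier. That is exactly the kind of difference labeled voice data would let us measure instead of guess.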

Google has told us that voice searches are different, in that they’re 30 times more likely to be action queries than typed searches. In many cases, these won’t be actionable to marketers — but in some cases, they will be. And in order to properly alter our content to connect with searchers, we’ll first need to understand the differences.

In my initial look at how my own family searched on Google Home, I found significant differences between what my family asked Home and what I ask my smartphone, so there’s reason to believe that there are new query categories in voice search that would be relevant to marketers. We know that there are queries — like “Hey Google, talk to Dustin from Stranger Things” and “Buy Lacroix Sparkling Water from Target” — that are going to give completely different results in voice search on Google Home and Assistant from the results in traditional search. And these queries, like “store hours” queries, are likely to be searched much more on voice search than in traditional search.

The problem is, how do we find that “near me” of voice search if we don’t have the data?

3. Understanding extent of advertising and optimization potential for new voice-based media

The last reason to pay attention to voice search queries is probably the most important — for both marketers and Google.

Let me illustrate it in very direct terms, as it’s not just an issue that I believe marketers have in general, but one that affects me personally as well.

Recently, one of my company’s competitors released survey information that suggested people really want to buy tickets through smart speakers.

As a marketer and SEO who sells tickets, I can take this information and invest in Actions on Google Development and marketing so that our customers can say, “OK Google, talk to Vivid Seats about buying Super Bowl tickets,” and get something from Google Home other than, “I’m sorry but I don’t know how to help with that yet.” (Disclosure: Vivid Seats is my employer.)

Or maybe I could convince my company to invest resources in custom content, as Disney and Netflix have done with Google. But am I really going to do it based on this one data point? Probably not.

As with mobile search in 2005, we don’t know how many people are using voice search in Google Home and Google Assistant yet, so we can’t yet know how big the opportunity is or how fast it’s growing. Voice search is in the “innovators and early adopters” stage of the technology adoption life cycle, and any optimizations done for it are not likely to reach a mainstream audience just yet. Since we don’t have data to the contrary from Google or Amazon, we’ll have to stay with this assumption and invest at a later date, when the impact of this technology on the market will likely mean a significant return on our investment.

If we had that data from Google, I would be able to use it to make a stronger case for early adoption and investment than just using survey data alone. For example, I would be able to say to the executives, “Look how many people are searching for branded queries in voice search and getting zero results! By investing resources in creating a prototype for Google Home and Assistant search, we can satisfy navigational queries that are currently going nowhere and recoup our investment.” Instead, because we don’t have that data from Google, the business case isn’t nearly as strong.

Google has yet to monetize voice search in any meaningful way, but when advertising appears on Google Home, this type of analysis will become even more essential.

Final thoughts

Yes, we can do optimization without knowing which queries are voice search queries, as we could do mobile optimization without knowing which queries are mobile queries; yet understanding the nuances of voice search will help Google and marketers do a better job of helping searchers find exactly what they’re looking for when they’re asking for it by voice.

If you agree, please submit your feedback to John Mueller on Twitter.

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.

Is the featured snippet bubble bursting?

This article was co-authored with my colleague at Go Fish Digital, Chris Long.

We’ve been bullish on answer boxes (also known as featured snippets) for a while now. Six months ago, we wrote about unique strategies we’ve been using to obtain those featured snippets. That coveted “position 0” is just so juicy, for a number of reasons:

- It ranks above all of the organic search results.
- It takes up a lot of SERP real estate.
- It attracts a ton of eyes and drives a lot of organic traffic.
- It is often the answer to questions asked to Google Home/Google Assistant.

In May, Ahrefs ran a study of 2 million featured snippets. Out of the 112 million keywords in their database, they observed that ~14 million (a little over 12 percent) were triggering an answer box in their results. That data point aligned with what we were seeing anecdotally, which is that they were popping up all over the place.

Something happened at the end of October

We recently noticed that some of the answer boxes we worked really hard to get were just gone. No notice, no goodbye. We looked across the industry, as well as our tracked keywords for clients, and there did indeed seem to be something going on.

Take a look at what the Mozcast SERP Feature history shows over the past 30 days (pulled November 2, 2017) for Featured Snippets:

That is a drop from around 16 percent to 14 percent over just a couple of days, and after a fairly long period where we’ve seen them generally increase.

We use STAT for keyword tracking. In the example below, our client experienced a 6 percent drop in answer boxes appearing for keywords they track.

Although not a huge dip, it is enough to see a reduction in traffic — definitely not a trend we’d want continuing for them.

What are we seeing in its place?

The interesting thing is that we are seeing an uptick in knowledge panels. Historically, the main knowledge panels in search results were about companies, people and other entities. But now we are seeing generic knowledge panels popping up for all types of queries in ways we hadn’t seen before.

For example, this generic “lunchbox” knowledge panel isn’t a type I had really seen before. Frankly, it isn’t all that helpful either.

When we revisited the Mozcast SERP Feature, we observed a dramatic increase in the number of knowledge panels it’s been discovering:

Since October 27, there has been about a 14 percent increase in the number of knowledge panels Google has been displaying (at the time of this writing). This lines up fairly well with the timeline where featured snippets declined. Returning to our client example, we also saw an increase in the number of knowledge panels observed in their SERP landscape.

Starting on October 27, we can see that the number of knowledge panels in their SERPs rose by 3.7 percentage points. While this isn't a huge number, initially only about 0.9 percent of tracked queries were showing a knowledge panel; that number has since jumped to 4.6 percent, an increase of over 400 percent.

Has the bubble burst?

I hope not. Google SERPs are volatile right now, so this could be a temporary machine learning test to compare how people interact with the SERPs without featured snippets. Or not.

Will we change strategies?

In the short term, definitely not. Sometimes Google giveth, and sometimes Google taketh away (see authorship photos). Sometimes Google just tests stuff quickly and then goes back to how things were.

I suspect that they’ll come back. But even if they don’t, it is still a worthy endeavor to obtain a featured snippet when you can. Because of the great branding and traffic that come along with featured snippets, even if their volume is reduced, they can still have a positive impact.


Siri, Safari and Google Search: What does it mean for marketers?

Apple has recently made some significant changes to both Siri and Safari, with potentially far-reaching implications for digital marketers.

First, Apple has announced that results for its AI-powered digital assistant, Siri, will now be provided by Google rather than Bing. This interesting development encompasses two of the most important areas of modern search marketing: voice search and mobile. As such, SEOs will be paying close attention to how this might affect their strategies and their reporting.

The launch of the latest version of Apple’s Safari browser also brought with it controversial updates that could significantly impact the digital media industry. By introducing stringent new measures that will prevent third-party cookies from tracking Safari users for more than 24 hours, Apple has made a clear statement about the importance of consumer privacy.

Equally, it has forced some advertisers to rethink their approaches to tracking — and reporting on — digital marketing campaigns. Given the prominent positions of voice search, mobile SEO and data privacy in many industry discussions today, it would be fair to say that Apple has taken a stance.

Apple moves Siri searches to Google

Google has been selected as the default provider of search results via Apple’s voice-enabled digital assistant, Siri, although image search results will still be powered by Bing.

With voice search now accounting for over 20 percent of searches (a number that will likely increase dramatically in the near future), this move will undoubtedly bring a sizable number of queries to Google. Apple’s stated reason for switching is that it will provide a “consistent web search experience” for consumers alongside Safari results, which are already provided by Google by default. Bing and Google process queries and rank organic search results using different algorithms, so we should expect that the answers provided by Siri will change as a part of this development.

If Siri’s answer does not respond adequately to the query, Apple device users will now be sent through to a Google search results page to browse other links. Once a user clicks through to a Google results page, the data can be processed and shared as it would via any other Google SERP. While Google does not share its keyword-level organic search data with site owners, this will still provide welcome insight into other areas of the SEO traffic that brands receive via Google.

How does this affect search marketers?

There will, of course, be an inverse correlation between the number of Google searches and the number of Bing searches that marketers see in their reports, to a greater or lesser degree depending on how much of their audience uses Siri. For paid search, this may mean a re-evaluation of budgets for both Google and Bing. For organic search, the focus should be on providing the most relevant answer to a query, to increase the likelihood that Siri will select your content.

Although this could be seen to represent a seismic shift in how organic search marketers optimize for Siri, the reality is that the core principles of voice search and mobile SEO remain constant:

- Micro-moments — revealed in I-want-to-do or I-want-to-go queries, for example — are vitally important.
- Optimize for longer, natural language queries, as consumers are more likely to search in this manner via voice than via text.
- Speed is of the essence; mobile users expect content to load quickly, so marketers need to incorporate this as an essential strategic consideration.
- Hyperlocal searches, driven by implicit location-based intent, are on the rise as consumers come to grips with the capabilities of their mobile devices.
- Constantly refine the approach as more data becomes available. This is still a nascent area of search marketing, and we need to be prepared to adapt based on consumer feedback.

In fact, as SEOs and content marketers strive to answer the underlying intent of a query rather than simply responding to exact queries through keyword matching, we can safely say that the days of chasing the search algorithms are coming to an end. As a consequence, Apple’s move from Bing to Google for Siri results should not require much adjustment from a sophisticated SEO strategy.

Furthermore, while this is certainly not a positive move for Bing, Microsoft’s search engine does still retain an important share of the market that search professionals cannot afford to neglect.

As mentioned above, Apple’s principal reason for switching to Google was to bring results in line with its Safari browser, which has also been the recipient of some radical overhauls of late.

Safari updates

Apple has primarily updated its Safari web browser in ways that affect the capturing, processing and sharing of user data, with the ultimate aim of improving the user experience. The three most noteworthy changes for marketers are Intelligent Tracking Prevention, Autoplay Blocking and Reader Mode. You can read more about these specific changes here.

Safari accounts for a sizable portion of web traffic, with a 14.22 percent share of the global market and a 31.5 percent share of the US market. With Google planning its own measures to tackle invasive advertising practices in the upcoming Chrome browser update, it is becoming clear that both parties want to protect consumers from irrelevant content and intrusive advertising.

Consumers are increasingly in control of what they see online and how they see it. According to Google, a majority of search traffic worldwide now comes from mobile devices. Combined with the 40 percent of US consumers that have used an ad blocker, the picture becomes clearer still. Brands and publishers are all striving to provide the best possible experience, with a mobile-first slant on everything they do.

Apple’s focus on a fast, user-friendly experience certainly does not exist in a vacuum. Pervasive ads can contribute to longer page load speeds, which is to the detriment of Safari. Apple wants to attract as many users to its browser as possible; removing elements that only detract from the user experience seems a sensible way to achieve that.

Moreover, Apple is not the only company taking measures to this effect. For example, Google’s Accelerated Mobile Pages (AMP) initiative strips back HTML into leaner source code that can be displayed faster, with an increasing amount of SEO-driven content now created for this standard.

While Safari stops short of blocking ads outright, Google may go one step further with its upcoming Chrome update. Due for launch early next year, the latest Chrome has included an ad blocker in some early tests. We are therefore beginning to see browsers act as intermediaries between websites and consumers, rather than as mere conduits for information.

How should SEOs prepare for these changes?

Apple’s recent updates serve to further consolidate the position of mobile SEO as the cornerstone of organic search marketing today. All of the recent changes have been driven by a desire to improve the consumer experience by delivering the fast, seamless loading of content. Moreover, Apple is at pains to ensure that this is content its customers actually want to see and engage with.

This will not sound revolutionary to many SEOs, who will by now be very familiar with these concepts. However, we should be cognizant of the fact that SEO affects many other marketing disciplines and understand that our work is pivotal as brands adapt to this new landscape.

Those who embrace this new ecosystem — where consumers are increasingly in control and the onus is on brands and advertisers to create experiences that draw engagement — will reap very significant rewards. SEO has taught us many lessons over the past few years, as we have adapted through the industry's many transitions. The focus on creating genuine connections through data-driven content is one that may soon apply to many other areas of digital marketing.


Will chatbots become part of the consumer search experience?

When thinking about the future of organic search, common considerations include the impending mobile-first index, machine learning, AI, natural language processing, voice search, site speed, HTTP 2, personalization and consumer behavior changes led by the Internet of Things and digital assistants.

However, one technology that's not on that list — chatbots — could be poised to become a much greater part of the consumer search experience. Since May, Bing has been testing chatbots directly in paid and organic search results in the US (in the Seattle area), as shown below.

While chatbot integrations have been in the news over recent months, most people outside of the Seattle area won’t have seen them in action or truly considered how such integrations could be used.

For instance, if chatbot integrations within search results become a future reality, they could be used to carry out the following without ever leaving search results:

- Book a test drive
- Engage with customer service
- Order products and services

The possibilities are vast and shine a light on the importance of APIs and data integrations to enable the next generation of consumer interaction.
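To picture what such an integration involves under the hood, here is a minimal sketch of a keyword-based intent router, the kind of dispatch logic a brand chatbot would need behind the scenes. The intent names and trigger phrases are invented for illustration; a production bot would use a trained natural language model and real brand APIs:

```python
# Invented intent names and trigger phrases for a hypothetical brand bot.
INTENTS = {
    "book_test_drive": ("test drive",),
    "customer_service": ("help", "support", "customer service"),
    "order": ("order", "buy", "purchase"),
}

def route(message):
    """Return the first intent whose trigger phrase appears in the message."""
    text = message.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "fallback"  # hand off to a human or show regular results

print(route("Can I book a test drive on Saturday?"))  # book_test_drive
print(route("I'd like to order two of those"))        # order
```

Even this toy version hints at the data questions raised below: every route needs inventory, booking or customer service systems on the other end.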

The challenges of a chatbot future

For a moment, let’s assume Bing’s testing is successful, and we see chatbots roll out in search results. Getting brands to a point where they can leverage the technology is going to be a challenge never before experienced by owned performance and marketing teams.

Do brands have the data infrastructure and customer service setup to make this happen? Who leads these teams, and are they willing to cooperate? What reporting metrics will be required? New relationships and processes will have to be forged and maintained.

Measurement and reporting will also pose new challenges, as consumers will interact with brands through search results pages rather than on-site. Analytics platforms will need to find a way to track these interactions.

If chatbots are to become a part of the consumer search experience in the future, agencies and in-house teams will have to set expectations with brands about the resources and data integrations required.

For instance, being an early adopter and investing in new technology may produce underwhelming results until consumer usage becomes mainstream; however, at that point, you’ll be a front-runner with an advantage over competitors.

On the other hand, you can wait until consumer adoption has reached high levels, but you’ll then be playing catch-up to earn visibility within search results.

Prioritizing the short-term, middle and long-term future

While this is an exciting development, it is unclear whether chatbots will become a permanent feature in search results. Even if they do, adoption will likely come in the middle to long term.

We should keep a close watch on the direction search engines are moving, but at this early stage, this type of integration is more suited to brands with a healthy test-and-learn budget. For these brands, the test-and-learn process should not purely focus on search integration but rather how chatbots can be used to enhance consumer experience across owned, earned and paid channels.

However, for the majority of brands, the focus should be on how to increase performance over the next 12 to 18 months. For instance, with the mobile-first index on the horizon and mobile web usage ever increasing, a worrying number of brands are still offering a mobile experience that is not consumer-centric. Addressing that issue remains a key priority.


The rise of personal assistants

Market disruptions are times of great stress, but they also provide great opportunity. They define new winners and losers in the marketplace. And the next major disruption is just around the corner — it’s the coming era of the personal assistants, and there are many market forces that are driving this shift.

The first of these market forces is the explosion of the Internet of Things: increasingly, Internet-connected devices will be something other than a PC, tablet or smartphone. Gartner predicts that 8.4 billion connected devices will be in use in 2017 (up 31 percent from last year) and that this number will reach 20.4 billion by 2020.

What will those other devices be? Here are some of them (though there are many, many others not on this list):

- Refrigerators
- Alarm systems
- Thermostats
- Watches
- Cars
- TVs
- Smart speakers

This will create a world where a connected device is always within immediate reach, and for the great majority of those devices, there will be no search box and no browser. That leads us to our next major disruptive event: the rise of voice as a UI.

Voice: The UI of choice

In a world with no practical keyboard and a small screen, voice communications will become the UI of choice.

One reason for the fast rise of voice that we’ve seen already is the ubiquity of smartphones. Trying to type in commands on a small keyboard is already an incentive to speak your commands. But the explosion of the Internet of Things provides us with many devices with NO keyboards. As a result, forecasts for the rise of voice search are already quite stunning — comScore even predicts that voice searches will make up 50 percent of all searches by 2020.

There is definitely still some self-consciousness regarding speaking voice commands to phones in public. In a poll that we conducted recently of more than 900 users, we found that more than two-thirds of users polled use voice commands with their phones when at home by themselves:

From Stone Temple’s “Rating the Smarts of the Digital Personal Assistants”

Still, despite the self-consciousness around using voice search in public, many are willing to break through those barriers. Our data also showed that 13 percent of respondents were willing to speak commands to their phones when they were in a public restroom!

Smart speakers: Amazon Echo and Google Home

Amazon launched their “smart speaker” back in 2014, but it’s now beginning to really take hold. In May of 2017, eMarketer released data indicating that “[t]he total number of Americans using voice-activated assistant devices will reach 35.6 million this year, up a whopping 129 percent jump year-over-year.”

They also shared data on estimated market share:

Global Market Insights forecasts that smart speakers will be a $13 billion market by 2024. How fast these devices can become available in international markets will limit how quickly they can grow, but I still expect their rate of growth to be impressive.

What makes these devices so interesting is that they are powered by Alexa (for the Echo) and the Google Assistant (for Google Home). These personal assistants are at the core of their functionality.

The digital personal assistants

The sale of smart speakers is indeed interesting to track, but the driving forces are broader than that. The idea of having a personal assistant on a smartphone has been around since Siri's launch in October 2011. Google Now came shortly after in 2012 and has since been superseded by the Google Assistant. Cortana from Microsoft put in its initial appearance in 2014.

The Google Assistant is what powers the Google Home device, and it's also available on Android and iOS phones. What makes this interesting is that the goal is for each user to have one assistant that can be accessed from all of their devices:

In this world, the device just acts as a portal to access your personal assistant, and that assistant lives in the cloud. Imagine being able to seamlessly conduct all your online business via your smartphone, watch, thermostat, refrigerator, TV, car, or any other device simply by speaking voice commands. This is a powerful vision, especially when you consider that the vision for these personal assistants is that they will address nearly all your online needs:

It’s the active use of digital personal assistants that I expect will reach 1 billion users quite quickly. There is no major hardware limitation to slow them down, as they already run on smartphones. In the case of the Google Assistant, it already runs on Google Home as well. How fast can they get there? Let’s take a look at recent history to see how fast consumer adoption can reach 1 billion users:

Both Facebook and smartphones took about eight years to get to an installed base of 1 billion. How quickly can the highly active use of personal assistants get to 1 billion users? That depends largely on how complete their service offerings become. I believe that this will happen quite quickly.

The bigger question is how quickly personal assistants will become a central focal point of users’ activity online. As more and more services get mapped into them, that value proposition will continue to grow, and that growth in functionality will drive the depth of users’ level of adoption.

Each of the major players (Google, Amazon, Apple, Microsoft) is doing everything it can to make its personal assistant offering as comprehensive as possible, and it's this fact that creates new opportunities for all of us as digital marketers.

How can you get ready for this next era of disruption?

The first step is to get your products and services plugged into the personal assistants. One of the easiest ways to do that is to start working with Alexa skills and Actions on Google. These are not that hard to work with, and getting in early will help you start learning how this world will differ.

Anyone can start working with these services to build their own app to plug into Amazon's Alexa or the Google Assistant, respectively. It's easy to use each of them in a test mode so that you can work to debug a basic app. Once you've got that working, you can submit your app for acceptance into their respective ecosystems. You can submit an Alexa skill for publication here, and learn how to distribute your "Actions on Google" app here.
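For a feel of what building one of these apps involves, here is a bare-bones Python sketch of a fulfillment handler. It loosely follows the shape of Alexa's custom-skill JSON interface (an intent name in the request, speech text in the response), but the intent name and reply are hypothetical and no official SDK is used:

```python
def handle(event):
    """Map an intent name in the request to spoken text in the response."""
    intent = event.get("request", {}).get("intent", {}).get("name", "")
    if intent == "StoreHoursIntent":  # hypothetical intent name
        speech = "We're open from nine to five today."
    else:
        speech = "Sorry, I can't help with that yet."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

request = {"request": {"type": "IntentRequest",
                       "intent": {"name": "StoreHoursIntent"}}}
print(handle(request)["response"]["outputSpeech"]["text"])
```

Note the fallback branch: every intent you don't handle is a "Sorry, I can't help with that yet," which is precisely the dead end the earlier articles warned about.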

While you’re doing this, one big area for you to explore is that of conversational interfaces. The first obvious difference is that users will use more natural language when they speak a query or make a request. In the early days of voice interfaces, it will be natural to ask users questions to determine what it is that they want.

Take pains to avoid questions that are open-ended; instead, learn how to ask questions that lead them to provide the type of answer you need to progress them through your navigation. Confirm that you understand the question before moving on to the next step.

In the longer term, you can imagine that these interfaces will evolve, and programs for processing language will improve. Traditional websites are based on a navigation metaphor, where users work their way to the content they want on your site on a step-by-step basis. But imagine a world where a user can state their entire need in one go. For example, imagine a query such as: “Get me a large pepperoni pizza with a 12-ounce diet coke and deliver it to my home address, please use the usual credit card,” where the personal assistant can process that entire query all at once.
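As a toy illustration of what "processing the entire query at once" involves, the sketch below pulls several slots out of that one-shot pizza order with a single regular expression. Real assistants use trained natural-language-understanding models rather than regexes, and this pattern only accepts one fixed sentence shape; it merely shows the idea of filling many slots from a single utterance.

```python
import re

# Toy slot extraction for a one-shot order utterance. The sentence
# shape is hard-coded; a real NLU model would generalize far beyond it.
ORDER_PATTERN = re.compile(
    r"get me a (?P<size>small|medium|large) (?P<topping>\w+) pizza"
    r" with a (?P<drink>[\w\- ]+)"
    r" and deliver it to my (?P<address>home|work) address",
    re.IGNORECASE,
)

def extract_slots(utterance):
    """Return a dict of slot values, or an empty dict if no match."""
    match = ORDER_PATTERN.search(utterance)
    return dict(match.groupdict()) if match else {}

slots = extract_slots(
    "Get me a large pepperoni pizza with a 12-ounce diet coke "
    "and deliver it to my home address"
)
print(slots)
```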

We’re a long way from that day just yet, but it’s where we’re headed, and gaining early experience in these areas will be invaluable. You can get a leg up by building some initial smart speaker apps. Here is a summary of the benefits you’ll get by doing that:

- You can get plugged into those app marketplaces early, and that can gain you an edge in long-term exposure there.
- You can learn how to work with conversational interfaces.
- You can begin collecting data on how people use voice to ask for things in your market.

This next wave of disruption is already beginning to unfold, and we’re already exiting the early adopter stage, so the time to jump on board is now!

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.

Key notes on optimizing for voice search: Conversation, content and context


Voice search, the topic that is on the virtual tip of every marketer’s tongue, currently accounts for one out of every five Google mobile searches — and that number is expected to grow over time as digital assistants and smart home devices become more commonplace. For the modern-day SEO, the shift from manual text queries to voice commands has been subtle over time, but the potential impact over the next few years could be game-changing.

To prepare for the voice search revolution, it is essential that marketers understand the difference between market growth and consumer adoption, the nuances of conversational search and natural language and the role that AI and machine learning play in SERP responses. Just as with any emerging trend in our industry, it is important to plan now to ensure you stay one step ahead of the search curve.

Trend and adoption

All major technology providers, not just Google, are investing in virtual assistants — and by extension, voice search. Google has Google Assistant, Apple has Siri, Amazon has Alexa, Microsoft has Cortana and Samsung has its new Bixby.

These voice-enabled digital assistants are playing an increasingly larger part in consumers’ everyday lives. For example, on your way to work, you may use voice commands to send messages, listen to mail or navigate via your in-car system. At work, you may use voice search on your Mac (Siri) or PC (Cortana) to manage your schedule. And when you get home, Amazon Echo or Google Home may help you choose your favorite TV show or film on Netflix.

The consumer and marketer disconnect

While voice search is becoming part of consumers’ everyday lives, many marketers still don’t have a plan for voice search. This disconnect may indicate that while consumers are ready, marketers may not be fully prepared. According to research from BrightEdge (disclosure: my company), 31 percent of marketers see voice search as the next big thing. However, approximately 62 percent have no plans to prepare for voice search.


These findings suggest that marketers can see the consumer trend but are not prepared. If they don’t address the trend, brands will not meet their consumers’ expectations.

Bridging the gap and connecting the dots

To capitalize on the opportunity and bridge the gap to meet consumer expectations, it is vital to understand the relationship between voice, mobile and local — and to adapt your optimization strategies accordingly.

Below, I share some insights on voice search and provide some optimization tips.

Conversation and intent

One of the biggest mistakes a marketer can make with voice search concerns intent. In many of my previous articles, I have gone into deep detail on the importance of understanding intent, leveraging intent signals and optimizing for the user accordingly. With voice search, understanding intent becomes even more important, and navigating the nuances is critical to success. The growth of conversational search is one of the main drivers behind voice search's rise.

In fact, Google reports that 70 percent of the queries that Google Assistant receives consist of natural language — in other words, a searcher is speaking to their digital search device in the same way that they would ask a question of another person. This is very different from the way they interact with a text search box. Compared to traditional text search over the last 10 years — where marketers’ focus was on keywords and their implicit meanings — voice search queries are more conversational in nature and can reveal new levels of intent.

For example, when looking for a restaurant with text search, I may type in, “lunch in San Mateo.” When I use voice search, my query may change to, “What restaurants are open in San Mateo?” or “What restaurants are open now for lunch?” Voice search queries are longer than their text-based counterparts and normally focus around “who,” “what,” “where,” “when,” “why” and “how.” What’s more, according to Google’s Mariya Moeva, voice searches on Google are 30 times more likely than text searches to be action queries.

Mobile, local and machine learning

Voice, mobile and local search are on a path to convergence. Mobile devices have disrupted search by giving users the ability to perform on-the-go local queries, and the artificial intelligence behind voice search is introducing new methods of query and different experiences for users.

It is important to note that a key difference between text and voice search is that when a person activates voice search, what is considered the best answer is normally the only answer. So it is a winner-take-all search result. I think this increases the importance of SEO skills and bodes well for its skilled practitioners.

Voice search makes it even easier for customers to ask hyper-local queries, which is significant in the context of a mobile-rich environment. As a marketer, it is important to consider how users execute search queries differently when speaking to mobile devices versus exploring the web via a desktop computer. Voice searches tend to contain slightly different words, such as “close” or “nearby,” which are not commonly used on desktop computers.

The combination of hyper-local targeting, artificial intelligence and machine learning plays an important role in the development and accuracy of voice search. Artificial intelligence (AI) is based on the notion that machines will be able to carry out tasks in a smart and intelligent manner, while machine learning is the application and use of AI as machines gain access to more and more data and learn from themselves. Back in 2013, the Google Hummingbird update signified a shift in how the search engine tried to understand the intent behind a consumer’s query, and RankBrain was introduced in 2015 as the machine learning layer that took AI’s natural language processing to a new level.

Artificial intelligence and machine learning are powering voice search, and this means that with every voice search and every query, Google is getting better at understanding intent. Add to this local data points (on location-enabled devices, for example), and geolocation becomes an automatic part of the query answer. The outcome is that results become more accurate, actionable and transactional in their delivery.

Content and context

Based on personal preferences and recognized patterns of behavior, the artificial intelligence that is powering voice search results will get incrementally better at understanding the context behind a query and providing the relevant content to support the answer. This leads to a very important point that marketers must note when optimizing for voice search: semantics matter greatly.

The future success of search is reliant on providing consumers with an excellent user experience. To do that, marketers must become more intelligent in how they produce content. Content should be structured and written so as to provide traditional SEO value and ensure that a voice engine recognizes and understands the content’s context and meaning. Writing content that answers the questions your consumers are asking in a natural, conversational tone is the best way to prepare for voice search.

Understanding the nuances of conversational search queries can help you discern consumer intent and make sure that your website contains the right content to adapt to voice search. As technology providers find more and more ways to improve user experiences and interactions from voice search, content and context will replace keywords.

Optimizing for voice search

Though voice search may still be years away from being fully mainstream, it already has enough traction for us to begin taking it seriously. While technology providers work out how to monetize voice search, marketers should be planning at least a year in advance.

Below, I leave you with four key steps you can take when looking at voice search consumer adoption and optimizing for voice search success.

Step 1: Think conversational intent.

Remember that voice search queries differ from traditional text search queries in terms of how they are structured — the former is more conversational in nature. Expand your view of searcher intent based on the types of question-based queries a user might ask via voice search. It’s essential to deliver more accurate results based on the anticipated context.

Step 2: Target the long tail.

The question words — who, what, where, why, when and how — have been strongly associated with voice search queries, as they are conversational in nature. Brands should take the time to find long-tail keyword phrases that use these question words, and start developing content tailored for those using voice search. Develop site content and expand your keyword lists (organic and paid) to target longer-tail keyword phrases.

Step 3: Master mobile and go local.

With the incredible overlap between mobile and voice search, making sure that all content has been prepared for mobile devices is a crucial step towards optimizing for voice. This includes using mobile-friendly layouts, but also optimizing for speed and preparing mobile-optimized content.

If you are a local business, ensure that you complete all the local business listings for your company’s physical locations. Furthermore, ensure that each listing of your business is accurate and complete across Google, Yelp, Bing, Apple Maps and other relevant services.

Step 4: Build and structure content, and utilize semantics.

Write content in a conversational manner, and think about natural-language processing. Build content that answers questions quickly. Make sure structured data markup is integrated into your website where appropriate. When consumers are searching on their phone, Alexa or another device, your goal is to answer these questions with written content focused on long-tail topics as a part of a greater mix of content included on your site.
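For example, question-and-answer content can be marked up with schema.org's FAQPage type. The sketch below generates that markup as JSON-LD; the restaurant question and answer are placeholders for your own content, and you would embed the output in a `<script type="application/ld+json">` tag on the page.

```python
import json

def faq_jsonld(qa_pairs):
    """Build schema.org FAQPage markup as a JSON-LD string.
    Each pair becomes a Question with an acceptedAnswer."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }, indent=2)

# Placeholder content, for illustration only.
markup = faq_jsonld([
    ("What restaurants are open now for lunch?",
     "Our San Mateo location is open daily from 11 a.m. to 3 p.m."),
])
print(markup)
```

Whatever type you use, validate the result with Google's structured data testing tools before deploying it.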

Conclusion

Voice search is still developing, and its true potential may be several years away. Technology titans such as Google, Amazon, Microsoft, Apple and Samsung are rapidly innovating with new ways to utilize data and represent voice search results across multiple personal, work and home devices. In addition, they are working out new ways to monetize the voice search phenomenon without disrupting the user experience.

Consumer trends are currently outpacing marketing preparation. To meet customer expectations, search marketers should start developing a plan for voice search now.


Featured snippets: Optimization tips & how to ID candidate snippets

Featured snippets are quickly becoming the only search results for many queries.

If a user goes to Google.com and types [what is the tallest tree], Google returns a featured snippet, followed by thousands of organic search results. However, when a user conducts the same query via Google voice search, Google responds with an audible version of the text in the featured snippet but (in many cases) no “blue links.”

Before diving too deeply into featured snippets, let’s back up a minute…

What is a featured snippet?

A featured snippet is a summarized answer to a user's query, displayed at the top of Google's organic search results. It is extracted from one of the ranking web pages and includes the page title and a link to the source URL. Featured snippets can be paragraphs, lists or tables. These results display an “About this result” link near the bottom right corner of the answer box.

Google includes answers in featured snippets at the top of search because it is faster than sending users to the source page — no matter how fast the source page loads. As a result, marketers could experience declines in clicks and page views for featured snippet queries but should interpret increased impressions for these queries as a positive KPI.

In fact, from a marketing perspective, featured snippets are highly desirable. Top positioning in Google mobile or desktop search results can help URLs garner greater visibility than traditional results. (And although Google may soon change this, it is currently possible for sites to appear in both the featured snippet and the organic results, giving those sites lots of visibility on the SERPs.)

Because featured snippets typically appear above the first organic result, you may hear marketers refer to them as “position zero.”

What makes a good featured snippet?

If you’re wondering what Google looks for in a featured snippet, it can be helpful to identify existing snippets and review the pages from which they’re pulling info. By reviewing winning content, we can start to get an idea of what Google wants.

However, it can be just as illuminating to look at the content that failed to achieve a featured snippet. Following is a little-known tip to help you identify what I call “featured snippet candidates.” I think of these as pages that could have produced a featured snippet but didn’t quite make the cut.

Featured snippet candidates provide a prime opportunity for understanding more about how featured snippets work in Google organic search results. By comparing these pages to the “winning” pages, we can get clues about ideal formatting, page layout and content quality that can help inform our own optimization strategies.

To see featured snippet candidates, just add the parameter “&num=1”, “&num=2”, “&num=3” (and so on) to the end of Google’s URLs for queries with featured snippets. Currently, Google displays “candidates” for many featured snippet queries.
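If you want to inspect several candidate positions at once, the URLs are easy to generate programmatically. Here is a small sketch; note that the `&num` parameter's behavior is undocumented and could change at any time.

```python
from urllib.parse import urlencode

def candidate_urls(query, max_candidates=3):
    """Build Google search URLs with the &num=N parameter appended,
    one per featured-snippet 'candidate' position to inspect."""
    base = "https://www.google.com/search"
    return [
        f"{base}?{urlencode({'q': query, 'num': n})}"
        for n in range(1, max_candidates + 1)
    ]

for url in candidate_urls("hummingbird food"):
    print(url)
```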

One thing you may notice is that featured snippets and “candidates” can change on a fairly regular basis. Depending on a variety of factors (where, when and how you search), your results may vary from the examples shown below. Even if your examples are different from mine, the process is what is useful.

Here is an example of a featured snippet for the query [hummingbird food] from the URL https://www.google.com/search?q=hummingbird+food

Here is an example of a featured snippet “candidate” for the same query [hummingbird food] from the URL https://www.google.com/search?q=hummingbird+food&num=1 — as you can see, we appended the URL above with &num=1.

If you have a page that you believe has the potential to produce a featured snippet, consider the search query (or queries) that might be appropriate and check them for featured snippets. If your desired search query does produce a featured snippet, take a look at the “winning” snippet, as well as the “candidates,” to get an idea of what you could be doing better.

How do you measure featured snippets for text and voice queries?

Unfortunately, featured snippets are difficult to detect, let alone track — especially for large sites. So far, I have not found a tool to detect more than about 20 percent of the featured snippets found by manual review. Additionally, there is currently no way to track voice queries for the 400,000 to 500,000 estimated Google Home devices.

Complicating matters further, featured snippets for long-tail queries with very low search volumes are not unusual — so there might be search queries triggering featured snippets that you (or a tool) wouldn’t necessarily think to check. And because featured snippet queries do not have to be phrased as questions, tools that filter based on question keywords like “how to” are not truly accurate.

You may also notice that Google canonicalizes some featured snippet queries. A Google patent published in 2017 states:

[T]he system can also transform terms in the questions and answers into canonical forms. For example, the system can transform inflected forms of the term “cook,” e.g., “cooking,” “cooked,” “cooks,” and so on, into the canonical form “cook.”
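The transformation the patent describes is essentially stemming. As a crude illustration, here is a toy stand-in for a real stemmer or lemmatizer (such as the Porter stemmer); it will mangle many English words and is only meant to show the idea.

```python
def crude_canonicalize(term):
    """Reduce a few inflected forms to a base form by stripping common
    English suffixes. A toy stand-in for a real stemmer; it keeps at
    least three characters of the stem to avoid over-stripping."""
    for suffix in ("ing", "ed", "s"):
        if term.endswith(suffix) and len(term) - len(suffix) >= 3:
            return term[: -len(suffix)]
    return term

print([crude_canonicalize(t) for t in ["cooking", "cooked", "cooks", "cook"]])
```

The practical takeaway for rank tracking: queries that differ only by inflection may map to the same featured snippet.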

And of course, featured snippets can vary (or not appear at all) based on device, time, location, previous queries and/or a combination of these factors.

The bottom line? Do not trust tools when it comes to determining whether pages from a site are appearing in featured snippets. Tools only find a fraction of the queries returning featured snippets for a site. The best way to investigate featured snippet performance is manually, with queries from Google Search Console keyword data or with the “Paid and Organic” report in the AdWords Dimensions tab. (Search Console provides data for the last 90 days, but the Paid and Organic report in AdWords includes Google Search Console data for more than 90 days.)

Featured snippet observations & tips

After reviewing and comparing hundreds if not thousands of featured snippets and “featured snippet candidates” over the past couple of years, I’ve put together a few observations and tips.

- Ensure that your content is as complete and useful as possible. If you compare the featured snippet and candidate examples above, it is easy to see that one is not as helpful as the other. For instance, the second featured snippet (the candidate) is difficult to understand, has more steps and does not include basic information like the amount of sugar needed.
- The featured snippet display can vary based on the quality of the information provided and/or other factors. The featured snippet candidate below appeared a week or so after the candidate above. Google has actually bolded the word “sugar” in the featured snippet candidate even though it does not appear to be bolded in the landing page. Notice also that the example below is in more of a paragraph format than a bulleted list.
- Featured snippets appear most often for informational queries. These queries may be in the form of questions, words, fragments or statements. Notice the query [hummingbird food] is not phrased as a question.
- Consider your content formatting. Answers to featured snippet queries do not have to be marked up in a special way, but schema.org structured data markup and elements like bullets, bold/strong and ordered and unordered lists are not a bad idea.
- Higher-quality, better information is of little value if users cannot understand it. Ensure featured snippet content is written in a way that most people can read and understand.
- Instead of focusing on word count, focus on characters. The key is to ensure featured snippets can fit on the screen of mobile devices. If you do not have testing devices, try Chrome Dev Tools or make a trip to your local smartphone retailer for testing.
- Images in featured snippets vary and may come from websites other than the one from which the featured snippet is derived. For instance, the image in the featured snippet below comes from SearchEngineLand.com, even though the snippet content does not.
- Google may add a “title” to your featured snippet. Google sometimes includes a title or heading on a featured snippet box, even if it does not appear on the landing page from which it’s pulling information. This is most common when the featured snippet is providing an answer to a query that has been entered in the form of a question. For instance, take a look at these two versions of the same featured snippet for the query [what is the largest animal].

Final thoughts

Historically, Google has helped users find answers to questions on their own, one step at a time, via “10 blue links.” But today, Google is positioned to answer questions and complete tasks for users in a single step, with or without a screen. Featured snippets may prove to be one of the most critical elements in the future of search.


Voice search becomes voice action: A key talking point at SMX London

From combining search and social to leveraging moments that matter, last week’s attendees at SMX London gained a deeper understanding of the numerous ways they can optimize their search strategies.

Described as the “ultimate survival guide to the dynamic and tumultuous world of search marketing,” SMX — run by Search Engine Land’s parent, Third Door Media — is a conference series designed to highlight the reach and opportunities that can be achieved through search advertising and outline search’s position in the wider marketing mix.

From my own perspective, one of the more enlightening sessions of the London event featured a presentation by Pete Campbell, founder and managing director of Kaizen, on the subject of voice search — a prominent theme given the ongoing battle of the AI assistants.

Despite existing for half a decade — Siri has been around since 2011 — voice search has only recently surged in popularity, with over a quarter (27 percent) of US smartphone users now utilizing voice search assistants once a week or more. This rise in usage is largely due to the shift in focus from voice search to voice command.

Just being able to search for information using voice doesn’t add a great deal of value for the user; it’s not that different to searching by typing. But being able to make something actually happen using voice? Well, that’s a far more useful experience — and it is something Amazon’s Alexa is excelling at.

Through voice commands, users can now order their favorite pizza, schedule an Uber or even buy a dollhouse — as the Amazon Echo dollhouse incident earlier this year ably illustrated. Rather than using voice as an alternative to a keyboard or touchscreen for entering a search, users want to be able to control the world around them by talking to it and driving action, creating a far more personal and interactive alternative to traditional search.

At present, the voice search functionalities available through personal assistants remain within the realm of narrow AI, meaning they can only perform relatively basic tasks. Moving forward, Google’s DeepMind machine learning technology is likely to be integrated into Google Home, shifting voice search toward deeper AI as it starts to learn and adapt itself to the unique needs of the individual. And while it is still fairly new to the B2C space, IBM’s Watson is also expected to drive voice search to a point where it is continually aware and constantly learning.

While the discussion around voice search was one of the most interesting at the SMX London event, the technology is still in its infancy, and advertisers don’t need to be rebuilding their entire search strategies around voice at this stage. While paid advertising is available via the format, the search engine does the heavy lifting, translating voice search into keywords and matching these to ads in the same way as a traditional text search.

Once AI evolves and the technological capabilities allow a better understanding of natural language, the way consumers utilize search could change. Currently, users know they must phrase their questions in a way their device comprehends, omitting slang terms and speaking in a more robotic manner than they usually would.

It will be interesting to observe how common search activities — in particular, shopping — will change as the technology develops. Perhaps at next year’s SMX London, we’ll be discussing new strategies for harnessing the power of voice that we haven’t even considered at this stage.

To really gain the most value out of search — be it voice-activated or not — we need to fill the gap between optimizing search advertising and achieving business goals, and put customer lifetime value ahead of return on ad spend (ROAS) when measuring success.

As the technology develops, companies that use voice search technology that reacts more naturally to consumers’ preferred language will attract more repeat visits and loyalty. And by aligning marketing efforts with inventory management to ensure only those products that are in stock and require promotion are advertised, brands can create valuable experiences that keep consumers returning again and again.

Optimising Content For Voice Search & Virtual Assistants from Pete Campbell


The AdWords 2017 roadmap is loaded with artificial intelligence

Google recently shared their 2017 AdWords product roadmap at Google Marketing Next. Because the audience is primarily composed of executives at big agencies and big brands, and Google is doing its best to get them excited about all its capabilities, the event sometimes skims over some of the details that matter to those of us managing accounts day to day.

I’ll share my take on the announcements and what excited or frustrated me the most. Even though it’s now been five years since I left Google, all but one of the presenters are people I used to work with, and they were kind enough to invite me backstage to get a bit more detail than what was covered in the keynote.

Custom in-market audiences for search

I’m a PPC geek, so I obviously love better targeting. That’s why the announcement of in-market audiences for search got me so excited. How often have we all wished for a way to look beyond the query and distinguish between a prospective buyer and a kid doing research for a school project? Access to in-market audiences lets us make that distinction so that we can bid more aggressively for better-qualified leads.

But guess what? Everyone will now bid more for better-qualified traffic because it should convert better. According to Bhanu Narasimhan, Google’s director of audience products, conversion rates for in-market audiences are on average 10 percent better. So get your boss ready for the inevitable run-up in maximum bids you’ll need to set to remain competitive.

Unfortunately, this feature’s exact launch date was not announced; Google only said it’d be available by the end of 2017.

Did you say ‘custom’?

Currently in the US, there are 493 in-market audiences for the Display Network across a number of verticals. That’s a lot of options, but just as we had affinity audiences before custom affinity audiences, now we’re about to get custom in-market audiences.

Karen Yao, Google’s group product manager for ad platforms, revealed this very cool update: we will be able to create custom in-market audiences by adding keywords we believe someone would have used if they were in the market for our product or service. Combined with Google’s vast amounts of data, this can then help us find an audience of people in the market for what we sell.

New: Life event targeting

Google can give us custom in-market audiences and targeting based on life events because it has really good machine learning. Knowing who is going through a targetable life event, such as graduating from college, buying a home, getting married or having a baby, is done by understanding the online behavior that corresponds to these events.

In simple artificial intelligence (AI), engineers could write some simple “if-then” statements to place people into these targeting groups based on a handful of searches they did. But now that Google has built a much faster Tensor Processing Unit to underpin its TensorFlow-based AI efforts, you can bet its systems for identifying which users are going through a particular life event will be really good and useful for advertisers.
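Those “if-then” statements might look something like the toy classifier below. The keyword rules are invented for illustration, and the point of the paragraph is precisely that machine-learned systems replace brittle hand-written rules like these.

```python
def classify_life_event(recent_searches):
    """Toy rule-based classifier: map recent search queries to life
    events via hand-written keyword lists (invented for illustration)."""
    rules = {
        "getting married": ("wedding venue", "engagement ring", "wedding dress"),
        "buying a home": ("mortgage rates", "home inspection", "down payment"),
        "having a baby": ("crib", "baby names", "prenatal"),
    }
    text = " ".join(recent_searches).lower()
    return [event for event, keywords in rules.items()
            if any(k in text for k in keywords)]

print(classify_life_event(["best wedding venue near me", "mortgage rates 2017"]))
```

A learned model, by contrast, can pick up culture-specific signals (different wedding-related searches in different markets) that no hand-written list would cover.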

In the example they gave, they showed how people in different cultures might search for different things related to a wedding. Google’s machine learning can pick up on these differences and recognize that they correspond to the marriage life event.

The best ad automatically

We’ve all been doing A/B ad testing for years. But that’s becoming much less relevant if you look at what Google is now able to do with AI. Sridhar Ramaswamy, senior vice president of ads and commerce, showed an example of three users all searching for something pretty generic (like “cheapest hotel”) but each one being served a different ad from the same advertiser.

The different ads weren’t driven by audience bid adjustments or some other thing we control — rather, it was AdWords predicting each user’s preference in order to show subtle ad text variations, focusing either on price, value or selection.

As someone who’s created tools for ad optimization in our software suite at Optmyzr, what I heard was that we should focus primarily on creating a ton of ad variations and then let the machines decide which one to serve. What that means for advertisers is that the creation of many ad variations is likely to become a bigger task than before, so that we can feed the machine all the possible variations it requires to do an amazing optimization.

Data-driven attribution becomes easy

Search Engine Land paid media reporter Ginny Marvin wrote a great recap of what Google Attribution is, an important piece to read if you’ve been wondering why Google decided there was a need for yet one more tool to do attribution modeling (we already get it in AdWords, Analytics and DoubleClick).

I am excited about this new offering because when I got to play with it, I saw just how quick and easy it was to get up and running. But easy setup is meaningless unless the tool is also really good, so the real reason for my excitement is that data-driven attribution modeling is now becoming much more accessible.

The problem with attribution models is that they are our best-effort attempt at modeling real-world behavior with a somewhat limited set of tools. Thanks to improved store visit data, store sales data, easier consolidation of data and Google’s AI — four themes of the event — we no longer have to flail around trying to do something really complicated by hand. Data-driven models evaluate how each touch point contributes to the eventual outcome.

In AdWords, that means knowing how a click on one more keyword will change conversion rates. By looking beyond AdWords, it means knowing how the interplay of channels, impressions, clicks and more contribute to a conversion.

With Google Attribution, Google runs the models and feeds the data back into AdWords, where we can use a flexible bid strategy, or use the enhanced data to achieve better results using the bid management tool of our choice. In Optmyzr, that means you’ll get better insights to help set bids and do optimizations with the same tools you’re already used to.

The thing I wish Google would work on next is making it easier to import data from competing channels into Analytics. Right now, to get the full picture, we still need to tag campaigns and import cost data. I also hope that somehow they can use data across accounts to reduce the currently very high requirement of 600 conversions over a 30-day period before data-driven models start to work.

Hey Google, are keywords dead?

At I/O, Google announced that 20 percent of searches in the Google mobile app in the US are done by voice. Sridhar Ramaswamy repeated that amazing stat at Google Marketing Next.

Does that mean we're on the verge of not needing keywords anymore? Luckily not: it turns out that the majority of voice searches still lead to a traditional search results page. For those queries, users are simply substituting speaking for typing, and the results are still returned on-screen. Only a small portion of voice interactions happen through the Google Assistant, where the entire interaction is by voice.

Regardless, I hear from a lot of advertisers who want a better presence in Assistant-type interactions. Most of the Assistant's data comes from data we already provide Google, so be sure to have a Google My Business account to manage your location info and to use local inventory feeds to give Google data about prices and inventory at your locations.

Google has now also opened up the ability for developers to build Actions so that, in response to a conversation, the Assistant can complete a transaction with the user. The example given by Google is a frequent business traveler who asks her Assistant for the next flight. Knowing that she flies from SFO to LAX every week on United, it can quote the price of the next flight and even book the ticket, all by voice.
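To illustrate the kind of fulfillment logic behind such an Action, here is a hedged Python sketch of the flight example. Everything here is hypothetical, including the data and function names; the real Actions on Google platform wraps requests and responses in its own SDK.

```python
# Hypothetical fulfillment logic for a flight-booking Action.
# The profile and flight data stand in for what a real backend
# would fetch from the traveler's account and an airline API.

USER_PROFILE = {
    "frequent_route": ("SFO", "LAX"),
    "preferred_airline": "United",
}

FLIGHTS = [
    {"airline": "United", "from": "SFO", "to": "LAX",
     "departs": "2017-06-05T08:00", "price_usd": 129},
    {"airline": "United", "from": "SFO", "to": "LAX",
     "departs": "2017-06-05T11:30", "price_usd": 99},
]

def handle_intent(intent, profile):
    """Route a recognized voice intent to a spoken response."""
    if intent == "next_flight":
        origin, dest = profile["frequent_route"]
        options = [f for f in FLIGHTS
                   if f["from"] == origin and f["to"] == dest
                   and f["airline"] == profile["preferred_airline"]]
        flight = min(options, key=lambda f: f["departs"])  # soonest departure
        return (f"Your next {flight['airline']} flight from {origin} "
                f"to {dest} departs {flight['departs']} and costs "
                f"${flight['price_usd']}. Want me to book it?")
    return "Sorry, I can't help with that yet."

print(handle_intent("next_flight", USER_PROFILE))
```

The point is that the advertiser's structured data (routes, prices, inventory) is what powers the conversation, which is why feeding Google clean feeds matters so much here.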

I suspect supporting Google’s buy buttons, which they call Purchases on Google (managed in the Merchant Center), will also become a way to get your online store ready for voice-driven transactions.

Is this the year AI replaces account managers?

Every single announcement I’ve covered here has some connection to machine learning and artificial intelligence. So where do we all fit into this evolution toward ever more complexity, where humans can no longer hope to achieve great results without the help of tremendous computing power?

This question got me thinking about Lee Sedol, the Go champion who lost to Google DeepMind's AlphaGo in 2016. The part of the story that didn't receive as much coverage is how Lee Sedol said that being schooled by the machine taught him to become a better player. Wired wrote that the pivotal play in the match was also the moment that "machines and humanity finally began to evolve together." While the move that set up the machine to win was puzzling to humans, it opened Lee Sedol's eyes to strategies he hadn't considered before. So how can we as marketers learn from what the AdWords machine does?

Google’s Paul Muret, one of the founders of Urchin (now Google Analytics), explained to me that Surveys 360 can help us gain insights. The idea is that through the new integration between Surveys 360 and remarketing lists, we can poll users who’ve interacted with our site and ads so we can ask them what features they wanted or what compelled them to buy or not.

On last week’s #ppcchat on Twitter, a lot of people agreed that Surveys 360 can only be as effective as the questions being asked. I gave this example:

If airlines asked consumers what they wanted most without qualifying the question with price, they'd end up adding roomier seats that nobody would actually pay for.

Conclusion

It’s clear that AdWords will continue to be a major force in online marketing in 2017 and beyond, and I am excited to try out many of the announced capabilities as soon as they are available. While I am a fan of automation, I truly hope that AdWords finds a way to add some transparency to what its artificial intelligence does so that we can learn from it and evolve together.

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.