What to watch for in 2018: Mobile SEO predictions

As we wrap up 2017 and look forward to 2018, many SEOs will speculate about what to expect in the year to come. Since my focus is mobile, I wanted to share my unique expectations for 2018 by outlining what we know and what we suspect on the mobile SEO front.

This past year brought a lot of changes to the mobile ecosystem, though we are still waiting expectantly for the launch of Google’s mobile-first index. We have been assured that it will launch sometime in 2018, and we hope this is true.

For this article, I plan to focus on a few of my key predictions for 2018: the blurring of the lines between app and web, cross-device convergence and the increased reliance on schema markup in HTML, JSON and databases. I will then tie all the trends together with unique speculation about what mobile-first indexing will actually be and what strategies you can start incorporating now to create an immediate SEO benefit.

This background information about mobile trends and the long-term expectations about mobile-first indexing should help you prioritize and plan for a more successful 2018.

Blurring of the app/web lines

The biggest trend of 2017 was the movement toward Progressive Web Apps, or PWAs, and you can expect them to be an even bigger focus in 2018.

Just as a refresher, Progressive Web Apps are websites that allow an app shell and configuration file to be downloaded to a phone, letting the site take on the best characteristics of a native app while still living on the web. Remember, “web apps” are basically just JavaScript-heavy websites that look like native apps, so making one function as a PWA just entails adding a couple of extra files and a little more functionality.
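
To make those “couple of extra files” concrete, here is a minimal sketch of what they typically look like: a web app manifest describing the app shell, and a script that registers a service worker to handle offline caching and push notifications. The file names and field values are illustrative assumptions, not anything prescribed by Google.

```typescript
// Minimal PWA plumbing (illustrative). The manifest is a small JSON file
// linked from the page's <head>:
//
//   <link rel="manifest" href="/manifest.webmanifest">
//
//   {
//     "name": "Example Store",
//     "short_name": "Store",
//     "start_url": "/",
//     "display": "standalone",
//     "icons": [{ "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }]
//   }

// The page then registers a service worker (sw.js), which caches the app
// shell for offline use and can receive push notifications.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js')
    .then((reg) => console.log('Service worker registered with scope:', reg.scope))
    .catch((err) => console.error('Service worker registration failed:', err));
}
```

The service worker itself (sw.js) is where the offline caching logic lives; the registration above is all the page needs in order to opt in.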

The great thing about PWAs is that they allow for an app icon, full-screen display without an address bar, speedy on- and offline functionality and push notifications. They are a good way to help companies build a bridge between the discoverability of the web and the engagement and satisfaction that users experience with apps, all while minimizing overhead. They can be used directly on the web or installed like a native app on Android devices (and iOS devices soon, too). That means there is a lot less to maintain, optimize and promote, so they are incredibly attractive to savvy companies of all sizes.

App development trends will start to shift away from native apps and toward PWAs as more companies begin to understand the value that PWAs can provide. The Android OS now treats PWAs almost exactly like native apps, showing their resource consumption and specs in the exact same places, displaying them in the app tray and, soon, adding them to the Google Play Store. Google has also begun transitioning many of its special-interest web resources into PWAs, including Traffic, Sports, Restaurants, Weather, Google Contribute and Maps-Go.

You can see this trend in action below. The first screen shows a web search result for the local weather. The next screen shows the same search result with a different presentation and the option to add it to the home screen. The third screen shows the dialog where you accept the addition of the PWA icon to your home screen. The final image shows Google’s native weather app and its weather PWA app icons side by side. The two apps do the exact same thing and have the exact same interface.


PWAs are also important because they remove the need for companies to set up deep links from their websites into their apps and vice versa — a process that has proven complicated and sometimes impossible for large companies that don’t have exact parity between their app and website content. Google always prefers to recommend and reward the least error-prone options, and in our experience, deep linking the old-fashioned way is very error-prone. Every time something changes in the app or content moves on the website (individual 301 redirects or a full migration), app indexing and deep linking are at risk of failing or breaking down completely.

And even when your deep links are working correctly, referral touch points and attribution can be nearly impossible to track without the assistance of third-party services. This is a stark contrast to the simplicity of linking on the web. PWAs are self-contained apps that are already indexed on the web, eliminating all that complexity.

If everything that happens in your company’s app can be achieved in a PWA, it makes sense to focus efforts on the PWA — especially if the company is struggling with deep linking. As long as your PWA is well indexed and delivering a great user experience, Android deep links will be irrelevant.

Since PWAs will be in Google Play with native apps, Android users likely won’t be able to tell the difference between a native app and a PWA. On Android, it is important to note that Google may eventually change how they treat deep links when a PWA is available. Google may begin to prefer PWA content over deep links (especially if the app is not installed), just as they have done for AMP content.

This is less of a concern for iOS, especially if deep linking is happening through iOS Universal Links rather than any Firebase implementation. Since Universal Links are executed by the iOS operating system rather than the browser, it seems likely that iOS will continue to honor Universal Links into apps, even if a PWA is available.

Just remember that, in both cases, if the PWA is replacing the website, the app deep links will need to match up with the URLs used in the PWA. If the PWA is in addition to the main website, only the web URLs that are associated with app URIs will trigger the deep links.

As Google begins adding PWAs to Google Play and indexing them on the web, it could become easier for Google to add app logos to SERPs for both Android and iOS, improving the appearance, CTR and engagement of the PWA links. Regardless, there may still be a push for all app deep links to be moved into Google’s Firebase system to help improve cross-device, cross-OS reporting and attribution. Depending on how quickly Google is able to finish launching mobile-first indexing, this may be a major focus for the company in the second half of 2018.

We are seeing similar changes on the app store optimization (ASO) front as well. The Google Play algorithm has historically been much less sophisticated than the Google search algorithm, but recent changes to it show a much larger focus on app performance, efficiency, engagement and reviews, and a relative decrease in the importance of app metadata. This could be considered a signal of a potential impending merge between Google Play and regular SERPs, since we know performance is an important ranking factor there. When PWAs are added to the Google Play Store, native Android apps will be competing against PWA websites in terms of performance. Conversely, PWAs may also become subject to ranking fluctuations based on user reviews and star ratings.

Though it is less prominent for SEO, the same may be true in the Apple ecosystem. Historically, Apple was resistant to allowing its Safari browser to support PWAs, but recent announcements make it seem as though the company’s perspective has flipped. In 2017, Apple finally made it clear that Safari would soon support the Service Worker files that make PWAs so useful, and just this month (Dec. 12, 2017), in its quest to eliminate the use of app templating services, Apple seemingly endorsed PWAs as a better option than templated native apps for companies with limited budgets!

Apple’s sudden and emphatic endorsement of PWAs is a strong indication that PWAs will be supported in the next Safari update. It may also indicate that Apple has developed a scheme to monetize PWAs, or that it plans to add them to its App Store (where it can exercise more editorial control over them). This all remains to be seen, of course, but it will be interesting to watch.

Cross-device convergence

The next major theme to expect in 2018 is cross-device convergence. As the number and purpose of connected devices continues to expand, mindsets will also need to expand to take on a wider view of what it means to be “cross-device.” Historically, cross-device might have meant having apps and a website, or having a responsive design website that worked on all devices. But in 2018, people will start to realize that this is not enough. As the line between app and web merges on mobile, it will also merge on desktop and the Internet of Things (IoT).

As more information moves to the cloud, it will be easier to seamlessly move from one device to another, maintaining the state, history and status of the interaction on all devices simultaneously. The presentation layer will simply include hooks into a larger API. Developers will be more focused on testing data integrations of one app across many different devices, rather than testing multiple, device-specific apps on multiple devices (somewhat similar to the transition to responsive design on the web).

There is a store for Google Home and a store for Google Actions, Google’s Voice-First and Voice-Only channels, but these will probably merge into the same store — possibly when the mobile-first index fully launches, but more likely soon after. You can expect an eventual convergence of mobile and desktop app stores, operating systems and search utilities, though this won’t all be completed or even initiated in 2018. It is just the direction things are going.

We have already seen this happening in some places. The convergence between mobile and desktop is most obvious when you look at the changes that happened in Windows 10. The desktop OS incorporates an app store and looks much more like an Android phone, even including customizable widgets in the “Start” screens. Microsoft announced just this month that Service Workers, push notifications and local cache will all also be enabled by default in Microsoft’s new Edge browser, which is intended for both desktop and mobile.

PWAs and Android apps are already available in the Windows app store, which means that PWAs are already available and partially usable on desktop. In the same vein, Microsoft has made a point of making some of its top software, like Outlook, Excel and Word, available on Android devices without a license.

There are also indications that Google may begin to test sponsored App Pack rankings. Since App Pack rankings happen in the regular SERP rather than an app store, this could be important for desktop, too. As companies begin to realize how useful PWAs are, they will have a visual advantage over other sponsored results on both mobile and desktop.

Google and Microsoft/Windows have always been more willing to coexist without walled gardens, while Apple has always leaned toward proprietary products and access. If mobile Safari will support PWAs and Service Workers, the same may also be true for the desktop version of Safari, meaning that the line between mobile and desktop will be blurring in the larger Apple universe, too. macOS has had its own app store for a long time, but Apple, like the Android and Windows teams, has also indicated that it will be merging the macOS and iOS stores into one in 2018.

This cross-device, voice- and cloud-oriented model is already being pursued with Cortana’s integration in Windows 10, where the mobile and desktop app stores have already merged. Similarly, Siri, Safari and Spotlight work cross-device to surface apps and websites, and Google has added voice search to desktop — but neither company has yet really pushed its assistant front and center as a means of surfacing app and web content on all devices.

There were rumors that iOS apps would also be available in the Windows app store, but that looks like it has fallen through, at least in terms of 2018 planning. Instead, Apple may have decided to extend or merge its own iOS App Store with the desktop version of the store and could also have decided to include PWAs for the desktop experience.

The last thing to watch out for in this trend is changes with Accelerated Mobile Pages (AMP). AMP was designed to make webpages fast and mobile-friendly, and even though these enhanced pages can work on desktop and probably could integrate easily with voice, Google has reportedly struggled to integrate them into the mobile-first index. While it does provide a lot of advantages, AMP will probably have to make major changes or face a reckoning in 2018. There are still significant problems that need to be resolved in terms of UX and measurement.

Increased reliance on structured data markup in more places

The final thing to watch for in 2018 is Google’s push for webmasters to mark up everything with structured data, including social profiles, corporate contact information, books, events, courses and facts. Structured data, and specifically markup that is formatted in JSON-LD to provide semantic understanding, is what allows Google to understand “entities.” (The “LD” in JSON-LD stands for Linked Data.)
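
As a purely illustrative example, that kind of entity markup (social profiles and corporate contact information) is usually expressed as a JSON-LD object embedded in the page in a script tag of type application/ld+json. The organization, URLs and phone number below are hypothetical; only the schema.org types and property names are real.

```typescript
// Hypothetical JSON-LD describing an organization "entity": its official
// site, logo, customer-service contact point and social profiles. In
// practice this object is serialized into a <script type="application/ld+json">
// block in the page's HTML.
const organizationMarkup = {
  '@context': 'https://schema.org',
  '@type': 'Organization',
  name: 'Example Retail Co.',
  url: 'https://www.example.com',
  logo: 'https://www.example.com/logo.png',
  contactPoint: [
    {
      '@type': 'ContactPoint',
      telephone: '+1-800-555-0100',
      contactType: 'customer service',
    },
  ],
  sameAs: [
    // Social profiles Google can tie back to the same entity
    'https://www.facebook.com/exampleretail',
    'https://twitter.com/exampleretail',
  ],
};

console.log(JSON.stringify(organizationMarkup, null, 2));
```

Because the markup names the entity and its relationships explicitly, Google does not have to infer them from page copy alone.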

We know that structured data will be a big deal because it helps Google figure out what is going on without having to rely so heavily on crawling and parsing all the content on the web — which has become a monumental job with no end in sight. This is also why Google has shifted so many of its data-rich assets to JSON-based formats: structured data in JSON-LD, Google Action markup and web app manifests as plain JSON, and much of the data that Service Workers cache.

Last year, before Google I/O, Google made a big point of creating a structured data testing tool that gave specific implementation instructions for a variety of different kinds of markup. The kinds of schema included there, not surprisingly, are specifically good for interactions with Google Home, Google Assistant and Chromecast — things like restaurants, reservations, travel plans, music, TV, movies and recipes.

Content that is well marked up with structured data can be easily parsed and presented on non-traditional devices through voice search and interaction (like with Google Assistant, Google Home, Android Auto). This is also a big deal for non-Google products like Amazon Alexa, Siri, Fitbit (which launched its own OS-specific partner apps) and voice-enabled TV remotes.

The one thing in Google’s structured data documentation that has not gotten due attention is the database or data set markup (i.e., instructions for how to add structured data markup to your database). Databases don’t necessarily have URLs or need websites, and this is core to the theory that the mobile-first index will not require URLs for indexing and that it will rely on schemas and entity understanding.
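
As a rough sketch of what that looks like in practice (all names and URLs below are placeholders), schema.org’s Dataset type lets a database describe itself with structured data even when there is no page-per-record URL structure behind it:

```typescript
// Hypothetical schema.org Dataset markup for a database that has no
// traditional URL-per-record website behind it.
const datasetMarkup = {
  '@context': 'https://schema.org',
  '@type': 'Dataset',
  name: 'Example store inventory feed',
  description: 'Product availability and pricing data (placeholder).',
  creator: { '@type': 'Organization', name: 'Example Retail Co.' },
  distribution: [
    {
      '@type': 'DataDownload',
      encodingFormat: 'application/json',
      contentUrl: 'https://www.example.com/data/inventory.json',
    },
  ],
};
```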

Let’s look at an example of how markup is creating “entity” understanding. Below, you can see a search result for a specific boot. Rather than showing all the web locations where you might find that boot, Google has aggregated it into a utility that can give users a lot more information directly from the SERP.

The result shows the full name of the boot, as well as what stores have it in stock and at what prices. It also shows the star ratings for the boot and lets me toggle to different sizes and colors. If I click the tabs, I can see more details about the boot and read reviews that have been aggregated from all the places that sell it. Since this information is an aggregation of information from all over the web, it actually does not have a static URL, so Google includes a triangle “share” link so that the aggregation itself can be shared.

This sharing functionality is something that you can expect to see much more of in mobile-first indexing. It is an indication that Google views a topic as an entity and thus has stored, aggregated or assimilated information on the topic as a whole (the entity). Dynamic links are links that Google generates on the fly, for content that it understands, but that does not naturally have a URL.

It is important to remember that Google’s very first (unsuccessful) attempt to encourage app deep-linking used Dynamic Links, as part of Google Now On-Tap. Back then, they served as a single link that unified the same piece of content on the web, in an iOS app and in an Android app. They allowed one link to trigger the right experience on any device, and if the appropriate app was not installed, the link would fall back to the web version of the content. Dynamic Links are still included as an important part of Google’s app indexing platform, Firebase.

In the next example below, you can see how the linked data helps support entity understanding in a search result. The query is for a popular author, so the result shows pictures and a brief biography at the very top. There are only minor differences between the Google Now result and the Google Web result — one has a dynamic share link, and the other offers the ability to “follow” the entity or concept.

In both, the result aggregates information such as quotes and movies attributed to the author, lists influences and links to a Wikipedia page. Below that, Google displays a carousel of his most popular books, with pictures of the cover and the date they came out. Below that, it shows a “People Also Searched For” carousel, which is full of authors who write in the same genre.

We believe Google is using clicks on these bottom two carousels to verify and vet the linked data that it has assimilated about this author. The more clicks a carousel item gets, the more strongly it is associated with the topic of the query.

A new way to think of mobile-first indexing

Knowing these trends should help you understand how mobile-first indexing fits into the larger SEO picture. Inclusion of the word “indexing” in Google’s official title for the update is telling. It indicates that this is not just an algorithm update, but an update to the fundamental architecture and organization of the system. Remember, an “index” is just a repository of ordered information that is easy to query or search. Indexes can be created for all different kinds of information and ordered in a variety of ways: alphabetically, numerically, or in Google’s case, historically based on URLs.

Since native apps and progressive web apps don’t require different URLs to show different content, we believe the method of indexing and organizing content has to change. Forcing URLs into those new technologies has proved untenable, so Google needs a new index — and it will be one that prefers “portable” content that lives in the cloud and is well marked up with structured data. It will probably be an “entity index” based on unique “entity concepts” that include domains (with URLs), native app entities and their content, PWA entities and database entities that need no design elements at all.

Use of the phrase “mobile-first” in the name is also interesting. With both the mobile-friendly update and mobile-first indexing, Google repurposed phrases that were previously used to describe design elements — but in both cases, Google’s main focus was on the technological back end that made the design changes possible. For the mobile-friendly update, Google did provide guidelines on how content should look on the page, but based on its testing tool, the main focus was really on the crawlability of dependent files on the site (specifically, the CSS and JavaScript).

The mobile-friendly update was an important precursor to mobile-first indexing because it gave Google what it needed to feed and train its machine learning programs on how to ingest and interpret JavaScript. As SEOs, we all endured the mobile-friendly update, which boosted sites that qualified and rewarded them with a mobile-friendly icon when they appeared in search results.

Similarly, the phrase “mobile-first” was originally used to describe a design principle in which responsive design website frameworks were established with the most essential elements of functionality first, and these were meant for mobile devices with the smallest screens. Only later were designers able to add in other, less necessary elements of the design and UX for larger-screened devices that had more room.

It now appears that Google has also co-opted the term “mobile-first” to mean something slightly different, with implications that are much larger than just design. Rather than focusing on mobile devices and screen sizes, Google will put the focus on content accessibility and the cloud and focus much less on the presentation.

This is an important trend because “the cloud” is where Google has been focusing most of their time and innovative energy. Content that is hosted in the cloud, without being formatted specifically for any one device, is exactly what they are after; it is the easiest for them to process with AI and the easiest for them to redisplay on any screen (or read out loud, in voice-only contexts). That is where Google Now and Google Assistant come in.

Google Now was Google’s first attempt at a predictive search engine that anticipated queries before a user even submitted them. It used all the information it knew or could detect about your habits to anticipate information you would want and displayed it in an interface to the left of the home screen on Android phones. It was also available as the Google App on iOS, but it was never as good since they weren’t able to aggregate as many personal habits and preferences from iOS users. Google Now included a voice search capability, but it just translated voice queries into text.

There are minimal differences in most search rankings when you compare regular search in Google.com and a search in Google Now. The primary differences happen when there is a PWA available (like the Weather PWA). There are also some minor variations in the “share” and “follow” functionality, which probably also hint at what to expect in mobile-first indexing. You can see the differences below.

Google Assistant is a bit more sophisticated in that it can sometimes answer simple questions directly rather than just returning a search result. It also uses passive and active signals about a user to ensure that it is giving the most accurate and useful information possible. Google Assistant is the critical element of a Google Home device, which operates primarily with voice but can cast results to connected TVs or phones if visual review is required.

Google Now and Google Assistant are obvious precursors for mobile-first indexing and give us a great deal of insight into what to expect. The two utilities are very similar and may simply be combined for mobile-first indexing. One of the strongest endorsements of this idea is that Google has recently gotten much more aggressive at pushing Android users into the Google Now/Google Assistant world. They moved the query bar from the Google Now interface (one swipe left of the main phone screen) to the standard layout (accessible on all versions of the home screen).

The new search bar just says “Google,” so most users won’t realize that they are accessing a different experience there than in the web-oriented version of Google (google.com).

Google’s most recent blog post about the mobile-first index didn’t really add anything new to the equation, so our best guess is still that the new index will probably also lean heavily on Google’s existing semantic understanding of the web (which is based on Knowledge Graph and its historical incorporation and build-up of Freebase). It will also use cards and AI, like we are used to seeing in Google Now. This concept is backed up by Google’s retirement of the term “rich snippets” and the launch of the new Rich Results Testing Tool on December 19.

The image below shows the different methods Google is using to inform the Google Assistant about an individual user’s preferences, which will help further personalize individual search results. But this data could also be aggregated — in a “Big Data” way — to determine larger patterns, needs and search trends so that it can adapt more quickly.

On the left, you can see a Google Cloud Search, which draws together information about assets on all of my devices that are logged into a Google Account. This includes emails, calendar entries, Drive documents, photos, SMS and apps. Though this has not been the focus of any Google marketing, it is part of Google’s G Suite package for businesses and is turned on by default for all G Suite users.

On the right, you can see the Google My Activity tracker, another feature that is turned on by default. It is similar to the Cloud Search function, but instead of being just a searchable database, it organizes the information chronologically. It breaks out my daily activity on a timeline and a map, including the amount of time I spent walking and driving, the businesses I visited and when I was there, and the pictures I took, placed on the timeline and associated with the locations where they were taken.

Elements like this are meant to help Google Assistant have a greater understanding of personal context so that it can respond when surfacing search results, either to an explicit search or to an anticipated want or need (e.g., Google Now).

In the long run, Google Assistant may be the new entry to Google search on all devices, forcing people to log in so that their state and history can be maintained across different devices, and so that a personal history and index can be developed and built out for each user. The beginning of this personal history index is already in Google Now for Android users. It uses active and passive machine learning to track and compile all of a user’s cross-device activity in Google Cloud, then translates that information into predicted needs in Google Now.

Google has already begun promoting one-click registration, form completion and sign-in that works and transfers credentials across different devices. This functionality is currently made possible by Google’s Credential Management API, which relies on a cloud-hosted shared “state”: local Service Workers coordinate to pass state changes up to the cloud-hosted Google Account. If and when this takes off, it will be a huge boon to engagement and e-commerce conversion because it eliminates the main point of friction.
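
As a rough illustration of how that flow looks from a website’s side, here is a minimal sketch using the browser’s Credential Management API (navigator.credentials). The /login endpoint and the surrounding logic are assumptions for the example, not anything documented by Google.

```typescript
// Sketch of one-tap sign-in with the Credential Management API.
// PasswordCredential is Chrome-specific, so everything is feature-detected
// and cast loosely; the '/login' endpoint is hypothetical.
async function oneTapSignIn(): Promise<void> {
  if (!('credentials' in navigator)) return; // no API support

  // Ask the browser for a stored password credential. With
  // mediation: 'optional', a returning user can be signed in silently.
  const cred = await (navigator.credentials as any).get({
    password: true,
    mediation: 'optional',
  });

  if (cred && cred.type === 'password') {
    await fetch('/login', {
      method: 'POST',
      body: new URLSearchParams({ id: cred.id, password: cred.password }),
    });
  }
}

// After a successful manual sign-in, store the credential so later visits
// (including other browsers synced to the same account) can skip the form.
async function rememberCredential(id: string, password: string): Promise<void> {
  const PasswordCredentialCtor = (window as any).PasswordCredential;
  if ('credentials' in navigator && PasswordCredentialCtor) {
    await navigator.credentials.store(new PasswordCredentialCtor({ id, password }));
  }
}
```

The shared “state” described above would then be whatever the signed-in Google Account syncs between devices; the API simply removes the sign-in form from the user’s path.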

Conclusion

From a search perspective, data that lives in one state, regardless of the device, is great — but assimilating all the different types of potential search results into an index is hard. The new mobile-first index will mix together websites with apps, PWAs and other data sets that don’t all have URLs, and this is where structured data markup will come in.

Just as advertising systems profile individual users with device fingerprints, Google will have to organize the new index with similar unique identifiers, which will include web URLs and app URIs. But, for content that does not have an existing unique identifier, like a page deep within a PWA experience or an asset in a database, Google will allow “Dynamic Links” to stand in as their unique identifier so that they can be indexed.


The upcoming mobile app Monday: Be prepared

The season is upon us: mobile download season. Christmas falls on Monday, and if history holds true, Christmas and the day after will be the top mobile app download days of the year. With less than a week left, your app store optimization (ASO) activities should be in full swing.

Becky Peterson heads up our app store optimization at Walgreens. Becky was looking to be on the nice list, so she put together some optimization tips for new and existing apps to help you maximize the download season.

Optimization essentials

- Title: Choose a title for your app that is creative but concise. If appropriate, take advantage of the character limit to include relevant keywords that describe your app’s core functionality. (Just don’t overdo it — you don’t want to appear spammy!)
- Icon: Create an eye-catching icon that is clean and easily recognizable. A recognizable icon can make the difference when customers are searching specifically for your app.
- Keywords & description: Conduct keyword research to determine the most valuable and relevant terms for your app. Utilize the keyword field in iTunes Connect, and use your keywords throughout your description and in your creative assets.
- Video: Create a preview video (or three for iOS!) that walks through your core features and provides visitors an overview of how to use your app. On iOS 11, your previews will autoplay in search results and on your store page; in Google Play, they will underlay the Feature Graphic. Create videos that are engaging. Test, test, test to determine which video version generates optimal downloads prior to the top download days.
- Screen shots: Create clean and visually appealing screen shots that capture the essence of your app and encourage visitors to continue scrolling through the gallery.

Additional tips

- Take full advantage of free app store intelligence platforms, such as App Annie or Sensor Tower, or invest in an app store analytics platform that will provide you with keyword ranking, competitor, Top Chart and download data.
- If applicable, seasonalize or incentivize your store listing! Use your description to highlight how your app is seasonally relevant and provide offers (i.e., shopping deals, products and more), and update your creative assets to showcase a holiday theme.
- Respond to your app reviews. Demonstrate to your users that you appreciate their feedback and are constantly working to improve your app. Some reviewers may even choose to update their original review simply because you responded in a considerate manner. Prospective users are more inclined to download an app when it is clear the app owners take feedback seriously.

Capitalizing on the top download days of the year can be the difference between an average app and a top download. Keep your content fresh, do not over-optimize, and remember that the goal is to assist customers in finding the right app for the right purpose. Put in the effort, set download goals, and allocate plenty of time to respond to the flurry of reviews that occur soon after installation and use.

Remember to document your lessons learned once the season is over. Download season will be back before you know it, and those valuable lessons can be the difference-maker next year.


Google bringing the Assistant to tablets and Lollipop Android phones

Google is rolling out the Assistant to more devices. It will soon be available on Android tablets running Nougat and Marshmallow, and smartphones running Lollipop.

Tablets in the US set to English will be the first to get access. However, a wide array of Android 5.0 (Lollipop) smartphones will get the Assistant: those operating in English in major markets, those operating in Spanish in the US, Mexico and Spain, and Lollipop smartphones in Italy, Japan, Germany, Brazil and Korea.

Google is pushing the Assistant out to more devices as the market becomes more competitive and AI development accelerates.

A July 2017 report from Verto Analytics found that 42 percent of US smartphone owners used virtual assistants, in the aggregate, on average 10 times per month. That translated into more than 70 million smartphone owners and almost 1 billion hours per month in the US. The numbers are likely somewhat higher now.

Personal Assistant Usage Numbers & Demographics

Source: Verto Analytics (5/17)

Siri was the most used (largest audience), but Cortana and Alexa were the fastest-growing assistants, according to Verto.

Separate research has found that virtual assistants are used much more frequently on smart speakers, which makes sense because of the general absence of screens: almost three uses per day vs. less than one for smartphones.

Google Lens an impressive start for 'visual search'

Google Lens has gone live, or is about to, on Pixel phones in the US, the UK, Australia, Canada, India and Singapore (in English). Over the past couple of weeks, I’ve been using it extensively and have had mostly positive results — though not always.

Currently, Lens can read text (e.g., business cards), identify buildings and landmarks (sometimes), provide information on artwork, books and movies (from a poster) and scan barcodes. It can also identify products (much of the time) and capture and keep (in Google Keep) handwritten notes, though it doesn’t turn them into text.

To use Lens, you tap the icon in the lower right of the screen when Google Assistant is invoked. Then you tap the image or object or part of an object you want to scan.

As a barcode scanner, it works nearly every time. In that regard, it’s a worthy and more versatile substitute for Amazon’s app, and just as fast or faster in many cases. If there’s no available barcode, it can often correctly identify products from their packaging or labels. It also does very well identifying famous works of art and books.

Google Lens struggled most with buildings and with products that didn’t have any labeling on them. For example (below), it was rather embarrassingly unable to identify an Apple laptop as a computer, and it misidentified Google Home as “aluminum foil.”

When Lens gets it wrong, it asks you to let it know. And when it’s uncertain but you affirm its guess, you can get good information.

I tried Lens on numerous well-known buildings in New York, and it was rarely able to identify them. For example, the three buildings below (left to right) are New York City Hall, the World Trade Center and the Oculus transportation hub. (In the first case, if you’re thinking, “he tapped the tree and not the building,” know that I took multiple pictures from different angles, and it didn’t get one right.)

I also took lots of pictures of random objects (articles of clothing, shoes, money) and those searches were a bit hit-and-miss, though often, when it missed it was a near-miss.

As these results indicate, Google Lens is far from perfect. But it’s much, much better than Google Goggles ever was, and it will improve over time. Google will also add capabilities that expand its use cases.

It’s best right now for very specific uses, which Google tries to point out in its blog post. One of the absolute best uses is capturing business cards and turning them into contacts on your phone.

Assuming that Google is committed to Lens and continues investing in it, over time it could become a widely adopted alternative to traditional mobile and voice search. It might eventually also drive considerable mobile commerce.

Voice search: Content may be king, but context is queen in the new voice-first world

In 2016, Google said that 20 percent of all mobile queries were voice searches. Since that time, the number of virtual assistants in US households has continued to swell, with tens of millions of voice-enabled home devices projected to be in use.

Voice as a primary search interface — beyond mobile phones — is a reality. Marketers need to rapidly iterate on their mobile-first strategies, to adapt to the voice-first marketplace. And as the aptly titled e-book released today [registration required] suggests, voice search changes everything.

I sat down with the book’s author, Yext VP of Industry Insights Duane Forrester, to discuss the landscape of voice search, how it will impact the business of search marketing, and what marketers can do to prepare for this evolution in search user interfaces.

“Voice engagement is the most likely scenario that will challenge the biggest players in search for supremacy.”

Michelle Robbins: What inspired you to put this e-book together?

Duane Forrester: The work we do at Yext is focused on helping businesses understand what data they can control, and empowering them with ways to manage that data. So from that point of view, there was a lot of support for exploring this developing space. Personally, I’ve always been an early adopter. The last decade of my life I’ve been fortunate enough to see the leading edge of technology up close and interact with it personally, so as “voice” developed to what we have today, I’ve been engaged and watching its progress.

MR: The major players in the space have been established. Do you see room for any other competitors to enter the voice arena?

DF: Absolutely. There is a boom happening in China right now with dozens of new companies entering the smart speaker space. While most won’t survive, it’s inevitable we’ll see new devices reach our shores next year, driving prices down and adoption up.

Most of that expansion will be white-labeled products (Google Assistant built into a Samsung TV, for example), but from the consumer’s point of view, it’ll be less about buying because of the embedded assistant and more about brand awareness around specific products. People don’t buy the Samsung TV because of Google Assistant (or Siri, or Cortana, or IBM, etc.), they buy it because Samsung makes excellent televisions. The voice assistant is a nice addition. That’s our immediate future. Over time, however, this could change if one or more of the leaders make significant technology breakthroughs that bring obvious differentiation and improvements.

MR: Is there anything holding back even greater adoption of voice-enabled devices?

DF: We’re starting to see the end of people’s reluctance to speak to their devices. This was a major factor in adoption over the last five years. Couple that with less than stellar services and results, and adoption was predictably sluggish — right up until Amazon landed in millions of living rooms around the world.

The biggest factor in voice adoption remains time. As services surpass an accuracy rate of 98%+ and consumers upgrade devices, or have their first contact with new devices that are voice-enabled, the growth will continue. Voice will conquer all.

MR: How can marketers, and search marketers in particular, shift from a ‘content is king’ focus to competitively prepare for the ‘context is queen’ world and surface as the one primary voice result?

DF: The beauty of this is clear. All the investment that’s been poured into content continues to pay dividends in a voice-first world. If anything, in order to truly get to the context-first scenarios we have today, you need deep, detailed, rich content. But even here, context plays a role. If the request is for the temperature, the platform being engaged will determine location as part of the relevancy factoring. The answer (let’s say “72 degrees”) in any other context might seem “thin” by nature. But as an answer to “What’s the temperature outside?,” it’s a perfect fit.

A more complex scenario might look like “Who is Harry Potter?” and “What is Harry Potter?” The former should bring back an answer about a fictional person, while the latter should elicit a response about a fictional series about a boy wizard, etc. The answers for the latter would be deeper and pull from richer “answers” provided by websites.

To be included in the “spoken answer” column, we have no set best practices from the engines to follow, but we do have some common best practices we know they respond to for things like the Answer Boxes. And increasingly, it’s those answer box contents that are being spoken aloud to inform consumer queries.

As for specific tactics people can employ, here’s a short list. This is in addition to the usual quality content production and SEO best practices.

- Adopt a long tail/conversational phrase approach to targeting what to produce content around.
- Build out detailed answers to common (and even uncommon) questions related to your products and services.
- Use Schema to mark up your content (where appropriate).
- Clean up your own house — be sure crawlers can find your content.
- Make sure your site is mobile-friendly — not really an option these days.
- Make security a priority — becoming more of a trust signal.

“A picture is worth a thousand words”

MR: What additional innovations in voice are coming into play?

DF: If you’ve shopped via a voice device, you’ve encountered an area that will improve significantly when visuals are added. Ask the system to buy a blue sweater, and you immediately realize without being able to see the sweater, you’re missing a lot of information needed to make an informed purchase.

This is where visual search comes into play, and it’s here now as the logical next step from voice search. We see initial products from Amazon in the market now (Show and Spot), and I expect to see more companies fielding visually-enabled voice devices soon. In terms of e-commerce, this expands the usefulness of current content investments like product videos.

MR: What kind of technology investments should marketers be making to address this new playing field?

DF: Things that were optional even just a couple of years ago are no longer optional. Being mobile-friendly is a requirement. Being secure is rapidly becoming a differentiator. Marking up your content is no longer a nice-to-have. Every day, adoption of those technical items grows, which means the playing field is changing. If a search engine suggests a protocol is worth using, it’s worth paying attention.

Things like Schema markup help an engine grow trust in your website and content, so take advantage of that. Being secure shows an investment in protecting consumers, again an area the engines favor and actively support. And if you really want to walk a mile in your customer’s shoes, to really learn what their journey is like, you’ll buy the main voice-enabled devices on the market today. Set them up and use them all day, every day. This practice will uncover new features and highlight new opportunities for you to align with the customer’s journey.

MR: What kind of personnel investments should organizations be making to effectively compete in a voice-first world?

DF: It’s highly likely that a business already has the skill sets they need on hand. If they have an SEO person or team, they’re off to an excellent start. To truly take advantage of new environments like voice and visual search, though, you need to have someone who has a broad understanding of emerging opportunities, has the reach to influence across and within your company, and can offer guidance based on experiences in discrete areas. That’s the role of a Digital Knowledge Manager (DKM).

The DKM can help ensure all assets in a company are aligned to best effect, while also keeping the company up to speed on emerging technologies. From the top, it’s the DKM that guides. From a more tactical level, it’s likely a technically proficient SEO aligning efforts across research, content development and deployment. That combined effort can help a company get started and take a leadership position in their verticals.


Investors anxious about Google traffic acquisition costs, which regulation could further increase

Google’s traffic acquisition costs (TAC) have been climbing. As Google’s ad revenues grow, so does the money it pays to partners and distributors. Investors are particularly sensitive to this issue.

In response to investor questions, Alphabet CFO Ruth Porat explained recently that Google’s increasing traffic costs are partly about mobile and programmatic growth, which have different payment structures and higher TAC. Last year, Google’s ad revenues were $90.3 billion; its TAC was $16.8 billion (other cost of revenues was $18.3 billion).

Alphabet cost of revenues, including TAC

Source: Google 10K filing 2016

As Bloomberg reports, investors worry that increasing TAC will squeeze margins and make Google less profitable. In Alphabet’s 10K filing from 2016, the company said the following about rising TAC:

In this multi-device world, we generate our advertising revenues increasingly from mobile and newer advertising formats, and the margins from the advertising revenues from these sources have generally been lower than those from traditional desktop search. We also expect traffic acquisition costs (TAC) paid to our distribution partners to increase due to changes in device mix between mobile, desktop, and tablet, partner mix, partner agreement terms, and the percentage of queries channeled through paid access points. We expect these trends to continue to put pressure on our overall margins.

Financial analyst Mark Mahaney recently estimated, according to Bloomberg, that each percentage point of TAC growth for Google/Alphabet represents nearly $300 million in decreased profit. In 2016, TAC as a percentage of Google advertising revenues was 21 percent; as of Q2 2017, it was 22 percent.

In the past, however, TAC has been higher as a percentage of overall revenues. In 2010, it was 26 percent of ad revenues, and in 2009, it was 27 percent. Yet Google’s ad revenues are much higher now, and so are the real dollars it pays to partners.

TAC has been increasing partly because of these distribution and rev share payments, which Google makes to Apple and various Android ecosystem partners. It has been estimated they will approach $10 billion by year-end, up from roughly $3.5 billion five years ago.

Another wild card that could impact TAC, according to Bloomberg, is regulation. In its 10K filing late last year, Google said new litigation or regulations, including the elimination of various safe harbors, could impact Google’s performance and increase costs:

- The General Data Protection Regulation (GDPR), coming into effect in Europe in May of 2018, creates a range of new compliance obligations and increases financial penalties for noncompliance significantly.
- Court decisions, such as the “right to be forgotten” ruling issued by the European court, which allows individuals to demand that Google remove search results about them in certain instances, may limit the content we can show to our users and impose significant operational burdens.
- Various US and international laws restrict the distribution of materials considered harmful to children and impose additional restrictions on the ability of online services to collect information from minors.
- Data protection laws passed by many states and by certain countries outside the US require notification to users when there is a security breach for personal data, such as California’s Information Practices Act.
- Data localization laws generally mandate that certain types of data collected in a particular country be stored and/or processed within that country.

According to Reuters, the UK government is considering removing one of Google’s safe harbors. It’s contemplating regulating Google and Facebook as traditional news publishers, with all the associated burdens and potential liabilities. An increasing percentage of the population gets its news from these sources, even though they’re not the generators of the news.

This is partly a response to lobbying by a number of traditional publishers and media outlets, which want UK regulators to treat Google and Facebook the same way the publishers themselves are treated. That would almost certainly increase Google’s operational and legal costs as well.

Report: Google to debut 'Home Mini' smart speaker for $49 on October 4

Google is set to reveal the Pixel 2 smartphone and potentially other hardware at an event on October 4, in time for holiday shopping. While the Pixel 2 is set to be the star of the event, a prominent supporting role will be played by the new “Google Home Mini.”

This is apparently Google’s answer to the low-cost Amazon Echo Dot. According to Droid Life, it will be priced comparably at $49 and be available in three colors.

Image credit: Droid Life

The device will support the Google Assistant and reportedly will provide the same functionality as Google Home. It’s all but certain the sound quality won’t be as good. And there may be other hardware compromises to bring costs down. It will very likely broaden the market for Google Home and the Google Assistant.

Amazon has created multiple Alexa devices for different budgets:

- Dot — $49
- Echo Tap — $129
- Echo — $179
- Echo Show — $229

Amazon often discounts the devices and offers multiple purchase incentives, including on the Dot. To date, Google has only introduced the Home, which retails for $129 but is often discounted to $99. Apple’s Siri-powered HomePod is going to retail for $349 and is positioned as a higher-end smart speaker for the Sonos demographic.

According to one estimate, Amazon dominates the US smart speaker market today. It has also been projected that by the end of 2017, there will be 30 million of these devices in US households. Assistant-enabled smart speakers are the cornerstone for the smart home ecosystem, and we’re in the midst of a land-grab right now.

ComScore has projected that roughly 50 percent of US search queries will be voice-initiated by 2020. More smart speakers in more American homes will help accelerate that trend.

Google adds trending searches and instant answers to iOS app

With a new search app update for iOS, Google has added trending searches and instant answers. (TechCrunch noticed it earlier today.) It replicates a previously introduced Android feature which reportedly resulted in an outcry, causing Google to enable an opt-out.

In the “what’s new” discussion in the iOS App Store, Google says:

- See searches that are trending around you when you tap on the search box to start a search
- Get instant answers to your questions as you type them, before you even complete the search. Try it out by typing for “goog stock” or “how tall is the eiffel tower” and see the answer show up in the suggestions below the search box
- Easily give feedback on any suggestions you see while typing — just swipe left and tap on the “info” icon

Here’s what it looks like:

The trending searches appear to be national rather than specific to my location. The answers appear to draw on Knowledge Graph data, but the feature is not consistent — answers don’t always appear for factual questions — nor is it as rich as the Google Answer Box, which appears after the search is actually entered.

Apple offers a somewhat richer version of instant answers with its Spotlight Search on Safari, though there’s no trending data.

Trending searches will be interesting to some people. Others will be indifferent; still others may be horrified (“Starbucks Pumpkin Spice Latte”). If they were truly local queries, it would be more interesting to me.

Instant answers has some utility for quick facts. But it’s now faster, easier and more efficient to use voice search or Google Home, if you have one. I can ask for the time in London or “what’s the population of the United States?” much more quickly with my voice than my phone keyboard.

Google Maps Android app adds 'find parking' feature to show you nearest parking garage

Google Maps is making it easy for Android users to find parking options.

The Android app now has a “find parking” button on the direction card that is displayed once you search for your location. The button leads to a list of parking garages and lots near the intended location.

Users can select their preferred parking option, and the app will automatically add it to their trip, along with walking directions from the parking spot to their destination.

The “find parking” feature was rolled out in 25 US cities, including Atlanta, Chicago, DC, Detroit, Portland, Orlando and St. Louis.

In addition to its latest feature, the app has expanded its “parking difficulty” feature for Android and iOS apps to 25 international cities, including Amsterdam, Dusseldorf, London, Milan, Rio de Janeiro and Vancouver.

When available, the parking difficulty icon appears in the bottom of a direction card screen, and it ranks parking availability as “limited,” “medium” or “easy” based on historical parking data.

Data: Consumers grow more demanding, impatient as brands fall behind

There’s considerable evidence that consumers are growing more impatient and less tolerant of poor or frustrating online experiences. There’s also increasing evidence that most brands aren’t keeping up with customers, creating significant risk and lost opportunities.

This gap is reflected in all the CX (consumer experience) research and reports coming out. There’s also a strong undercurrent of this theme in the Google “micro-moments” research and discussions. All the data about mobile page speeds and consumer abandonment support this idea:

The average U.S. retail mobile site loaded in 6.9 seconds in July 2016, but, according to the most recent data, 40% of consumers will leave a page that takes longer than three seconds to load. And 79% of shoppers who are dissatisfied with site performance say they’re less likely to purchase from the same site again.

Most recently, Google said that geo-modifiers (e.g., ZIP codes) have declined by 30 percent, even as local search volumes have increased:

[D]emanding mobile users now assume they will be given locally relevant search results. For example, while restaurant-related searches have grown by double digits over the past two years, those same searches that include a zip code qualifier have declined by over 30%.

This is evidence of rising expectations; consumers simply assume Google will understand and deliver locally relevant results without specifying that intent explicitly in the query.

Another recent study from Fetch, Dentsu Aegis Network’s mobile agency, found that consumers are less willing to wait for things and have become more impatient, in large part because of technology. Overall, 38 percent of survey respondents (n=2,489) said that technology had made them less patient than they were five years ago. However, the number was even higher for millennials: 45 percent.

More supporting evidence comes from a 2015 Microsoft study, which reported the average Canadian consumer’s attention span was now eight seconds, down from 12 seconds in 2000.  It concluded that the proliferation of technology, devices and social media had contributed to this attention erosion.

Finally, a very recent BrightEdge survey of 252 digital brand marketers found a significant gap between consumer technology adoption and marketer investments (or lack thereof). For example, voice search/digital assistants was at the top of the list of “next big things,” according to these brands. Yet “66 percent of marketers have no plans to begin preparing for voice search.”

The holistic picture painted by these studies and dozens more like them is one of consumers addicted to instant answers and gratification and less willing to wait for their needs to be met. Beyond the now universal advice to “focus on the customer,” which is both cliche and correct, there’s no simple or single answer from a practical perspective.

But brands still must find one.