What to watch for in 2018: Mobile SEO predictions

As we wrap up 2017 and look forward to 2018, many SEOs will speculate about what to expect in the year to come. Since my focus is mobile, I wanted to share my unique expectations for 2018 by outlining what we know and what we suspect on the mobile SEO front.

This past year brought a lot of changes to the mobile ecosystem, though we are still waiting expectantly for the launch of Google’s mobile-first index. We have been assured that it will launch sometime in 2018, and we hope this is true.

For this article, I plan to focus on a few of my key predictions for 2018: the blurring of the lines between app and web, cross-device convergence and the increased reliance on schema markup in HTML, JSON and databases. I will then tie all the trends together with unique speculation about what mobile-first indexing will actually be and what strategies you can start incorporating now to create an immediate SEO benefit.

This background information about mobile trends and the long-term expectations about mobile-first indexing should help you prioritize and plan for a more successful 2018.

Blurring of the app/web lines

The biggest trend of 2017 was the movement toward Progressive Web Apps, or PWAs, and you can expect them to be an even bigger focus in 2018.

Just as a refresher, Progressive Web Apps are websites that allow an app shell and configuration file to be downloaded to a phone, which lets the site take on all the best characteristics of a native app while still living on the web. Remember, “web apps” are basically just JavaScript-heavy websites that look like native apps, so making one function as a PWA entails little more than adding a couple of extra files and a bit of extra functionality.
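To make that concrete, here is a minimal sketch of those two extra pieces: a web app manifest (the configuration file) and a one-line service worker registration that enables the offline and push capabilities described below. The file names and values are illustrative placeholders, not a specific implementation.

    // manifest.json — tells the browser how to "install" the site as an app
    {
      "name": "Example Store",
      "short_name": "Store",
      "start_url": "/",
      "display": "standalone",
      "theme_color": "#0066cc",
      "icons": [{ "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }]
    }

The page then links the manifest and registers a service worker:

    <link rel="manifest" href="/manifest.json">
    <script>
      // register the service worker that provides offline caching and push support
      if ('serviceWorker' in navigator) {
        navigator.serviceWorker.register('/service-worker.js');
      }
    </script>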

The great thing about PWAs is that they allow for an app icon, full-screen display without an address bar, speedy on- and offline functionality and push notifications. They are a good way to help companies build a bridge between the discoverability of the web and the engagement and satisfaction that users experience with apps, all while minimizing overhead. They can be used directly on the web or installed like a native app on Android devices (and iOS devices soon, too). That means there is a lot less to maintain, optimize and promote, so they are incredibly attractive to savvy companies of all sizes.

App development trends will start to shift away from native apps and toward PWAs as more companies begin to understand the value that PWAs can provide. The Android OS now treats PWAs almost exactly like native apps, showing their resource consumption and specs in the same places, displaying them in the app tray and, soon, adding them to the Google Play Store. Google has also begun to transition many of its special-interest web resources into PWAs, including Traffic, Sports, Restaurants, Weather, Google Contribute and Maps Go.

You can see this trend in action below. The first screen shows a web search result for the local weather. The next screen shows the same search result with a different presentation and the option to add it to the home screen. The third screen shows the dialog where you accept adding the PWA icon to your home screen. The final image shows Google’s native weather app and its weather PWA app icons side by side. The two apps do the exact same thing and have the exact same interface.


PWAs are also important because they remove the need for companies to set up deep links from their websites into their apps and vice versa — a process that has proven complicated and sometimes impossible for large companies that don’t have exact parity between their app and website content. Google always prefers to recommend and reward the least error-prone options, and in our experience, deep linking the old-fashioned way is very error-prone. Every time something changes in the app or content moves on the website (individual 301 redirects or a full migration), app indexing and deep linking is at risk of failing or completely breaking down.

And even when your deep links are working correctly, referral touch points and attribution can be nearly impossible to track without the assistance of third-party services. This is a stark contrast to the simplicity of linking on the web. PWAs are self-contained apps that are already indexed on the web, eliminating all that complexity.

If everything that happens in your company’s app can be achieved in a PWA, it makes sense to focus efforts on the PWA — especially if the company is struggling with deep linking. As long as your PWA is well indexed and delivering a great user experience, Android deep links will be irrelevant.

Since PWAs will be in Google Play with native apps, Android users likely won’t be able to tell the difference between a native app and a PWA. On Android, it is important to note that Google may eventually change how they treat deep links when a PWA is available. Google may begin to prefer PWA content over deep links (especially if the app is not installed), just as they have done for AMP content.

This is less of a concern for iOS, especially if deep linking is happening through iOS Universal links rather than any Firebase implementation. Since Universal Links are executed with the iOS operating system rather than the browser, it seems likely that iOS will continue to honor Universal Links into apps, even if a PWA is available.

Just remember that, in both cases, if the PWA is replacing the website, the app deep links will need to match up with the URLs used in the PWA. If the PWA is in addition to the main website, only the web URLs that are associated with app URIs will trigger the deep links.
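For illustration, one common way web URLs are associated with an Android app is a Digital Asset Links file served from the site at /.well-known/assetlinks.json. This is a generic sketch; the package name and certificate fingerprint are placeholders:

    [{
      "relation": ["delegate_permission/common.handle_all_urls"],
      "target": {
        "namespace": "android_app",
        "package_name": "com.example.app",
        "sha256_cert_fingerprints": ["REPLACE_WITH_APP_SIGNING_FINGERPRINT"]
      }
    }]

If the URLs used by the PWA change, this association (and any corresponding intent filters in the app) has to change with them — exactly the kind of maintenance overhead described above.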

As Google begins adding PWAs to Google Play and indexing them on the web, it could become easier for Google to add app logos to SERPs for both Android and iOS, improving the appearance, CTR and engagement of PWA links. Regardless, there may still be a push for all app deep links to be moved into Google’s Firebase system, to help improve cross-device, cross-OS reporting and attribution. Depending on how quickly Google is able to finish launching mobile-first indexing, this may be a big push for the company in the second half of 2018.

We are seeing similar changes on the app store optimization (ASO) front as well. The Google Play algorithm has historically been much less sophisticated than the Google search algorithm, but recent changes to the Google Play app algorithm show a much larger focus on app performance, efficiency, engagement and reviews, and a relative decrease in the importance of app metadata. This could signal a potential impending merge between Google Play and regular SERPs, since we know performance is an important ranking factor there. When PWAs are added to the Google Play Store, native Android apps will be competing against PWA websites in terms of performance. In turn, PWAs may also be subject to ranking fluctuations based on user reviews and star ratings.

Though it is less prominent for SEO, the same may be true in the Apple world of technology. Historically, Apple was resistant to allowing their Safari browser to support PWAs, but recent announcements make it seem as though the company’s perspective has flipped. In 2017, Apple finally made it clear that Safari would soon support the Service Worker files that make PWAs so useful, and just this month (Dec. 12, 2017), in its quest to eliminate the use of app templating services, Apple seemingly endorsed PWAs as a better option for companies with limited budgets than templated native apps!

Apple’s sudden and emphatic endorsement of PWAs is a strong indication that PWAs will be supported in the next Safari update. It may also indicate that Apple has developed a scheme to monetize PWAs. Apple could also plan to add them to its App Store (where it can exercise more editorial control over them). This is all yet to be seen, of course, but it will be interesting.

Cross-device convergence

The next major theme to expect in 2018 is cross-device convergence. As the number and purpose of connected devices continues to expand, mindsets will also need to expand to take on a wider view of what it means to be “cross-device.” Historically, cross-device might have meant having apps and a website, or having a responsive design website that worked on all devices. But in 2018, people will start to realize that this is not enough. As the line between app and web merges on mobile, it will also merge on desktop and the Internet of Things (IoT).

As more information moves to the cloud, it will be easier to seamlessly move from one device to another, maintaining the state, history and status of the interaction on all devices simultaneously. The presentation layer will simply include hooks into a larger API. Developers will be more focused on testing data integrations of one app across many different devices, rather than testing multiple, device-specific apps on multiple devices (somewhat similar to the transition to responsive design on the web).

There is a store for Google Home and a store for Google Actions, Google’s Voice-First and Voice-Only channels, but these will probably merge into the same store — possibly when the mobile-first index fully launches, but more likely soon after. You can expect an eventual convergence of mobile and desktop app stores, operating systems and search utilities, though this won’t all be completed or even initiated in 2018. It is just the direction things are going.

We have already seen this happening in some places. The convergence between mobile and desktop is most obvious when you look at the changes that happened in Windows 10. The desktop OS incorporates an app store and looks much more like an Android phone, even including customizable widgets in the “Start” screens. Microsoft announced just this month that Service Workers, push notifications and local cache will all also be enabled by default in Microsoft’s new Edge browser, which is intended for both desktop and mobile.

PWAs and Android apps are already available in the Windows app store, which means that PWAs are already available and partially usable on desktop. In that same vein, Microsoft has made a point of making some of its top software, like Outlook, Excel and Word, available on Android devices without a license.

There are also indications that Google may begin to test sponsored App Pack rankings. Since App Pack rankings happen in the regular SERP rather than an app store, this could be important for desktop, too. As companies begin to realize how useful PWAs are, they will have a visual advantage over other sponsored results on both mobile and desktop.

Google and Microsoft/Windows have always been more willing to coexist without walled gardens, while Apple has always leaned toward proprietary products and access. If mobile Safari supports PWAs and Service Workers, the same may also hold for the desktop version of Safari, meaning that the line between mobile and desktop will be blurring in the larger Apple universe, too. macOS has had its own app store for a long time, but Apple, like the Android and Windows teams, has also indicated that it will be merging the macOS and iOS stores into one in 2018.

This cross-device, voice- and cloud-oriented model is already being pursued with Cortana’s integration in Windows 10, where the mobile and desktop app stores have already merged. Similarly, Siri, Safari and Spotlight work cross-device to surface apps and websites, and Google has added voice search to desktop — but they have both yet to really push the assistant to the front and center as a means of surfacing that app and web content on all devices.

There were rumors that iOS apps would also be available in the Windows app store, but that looks like it has fallen through, at least in terms of 2018 planning. Instead, Apple may have decided to extend or merge its own iOS App Store with the desktop version of the store and could also have decided to include PWAs for the desktop experience.

The last thing to watch out for in this trend is changes with Accelerated Mobile Pages (AMP). AMP was designed to make webpages fast and mobile-friendly, and even though these enhanced pages can work on desktop and probably could integrate easily with voice, Google has reportedly struggled to integrate them into the mobile-first index. While it does provide a lot of advantages, AMP will probably have to make major changes or face a reckoning in 2018. There are still significant problems that need to be resolved in terms of UX and measurement.

Increased reliance on structured data markup in more places

The final thing to watch for in 2018 is Google’s push for webmasters to mark up everything with structured data, including social profiles, corporate contact information, books, events, courses and facts. Structured data, and specifically markup that is formatted in JSON-LD to provide semantic understanding, is what allows Google to understand “entities.” (The “LD” in JSON-LD stands for Linked Data.)
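As a simple illustration, corporate contact information and social profiles marked up in JSON-LD look roughly like this (all names and URLs are placeholders):

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Organization",
      "name": "Example Co",
      "url": "https://www.example.com",
      "logo": "https://www.example.com/logo.png",
      "contactPoint": {
        "@type": "ContactPoint",
        "telephone": "+1-800-555-0100",
        "contactType": "customer service"
      },
      "sameAs": [
        "https://www.facebook.com/exampleco",
        "https://twitter.com/exampleco"
      ]
    }
    </script>

Because the block names the entity and its relationships explicitly, Google does not have to infer them from the page copy.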

We know that structured data will be a big deal because it helps Google figure out what is going on without having to rely so heavily on crawling and parsing all the content on the web — which has become quite a monumental job with no end in sight. This is why Google has switched to requesting most data-rich assets in JSON-based formats, including Google Action markup, web app manifests and the files saved by Service Workers.

Last year, before Google I/O, Google made a big point of creating a structured data testing tool that gave specific implementation instructions for a variety of different kinds of markup. The kinds of schema included there, not surprisingly, are specifically good for interactions with Google Home, Google Assistant and Chromecast — things like restaurants, reservations, travel plans, music, TV, movies and recipes.

Content that is well marked up with structured data can be easily parsed and presented on non-traditional devices through voice search and interaction (like with Google Assistant, Google Home, Android Auto). This is also a big deal for non-Google products like Amazon Alexa, Siri, Fitbit (which launched its own OS-specific partner apps) and voice-enabled TV remotes.

The one thing in Google’s structured data documentation that has not gotten due attention is the database or data set markup (i.e., instructions for how to add structured data markup to your database). Databases don’t necessarily have URLs or need websites, and this is core to the theory that the mobile-first index will not require URLs for indexing and that it will rely on schemas and entity understanding.
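For what it’s worth, data set markup follows the same JSON-LD pattern. A minimal, hedged example (hypothetical names and URLs) looks something like this:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Dataset",
      "name": "Example Co store inventory",
      "description": "Nightly snapshot of in-stock products, prices and sizes.",
      "url": "https://www.example.com/data/inventory",
      "creator": { "@type": "Organization", "name": "Example Co" },
      "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://www.example.com/data/inventory.csv"
      }
    }
    </script>

Note that the markup describes the data set as a single entity rather than as a set of crawlable pages, which is the point.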

Let’s look at an example of how markup is creating “entity” understanding. Below, you can see a search result for a specific boot. Rather than showing all the web locations where you might find that boot, Google has aggregated it into a utility that can give users a lot more information directly from the SERP.

The result shows the full name of the boot, as well as what stores have it in stock and at what prices. It also shows the star ratings for the boot and lets me toggle to different sizes and colors. If I click the tabs, I can see more details about the boot and read reviews that have been aggregated from all the places that sell it. Since this information is an aggregation of information from all over the web, it actually does not have a static URL, so Google includes a triangle “share” link so that the aggregation itself can be shared.

This sharing functionality is something that you can expect to see much more of in mobile-first indexing. It is an indication that Google views a topic as an entity and thus has stored, aggregated or assimilated information on the topic as a whole (the entity). Dynamic links are links that Google generates on the fly, for content that it understands, but that does not naturally have a URL.

It is important to remember that Google’s very first (unsuccessful) attempt to encourage app deep-linking used Dynamic Links, as part of Google Now on Tap. At the time, they served as a single link that unified the same piece of content on the web, in an iOS app and in an Android app. They allowed one link to trigger the right experience on any device, and if the appropriate app was not installed, the link would fall back to the web version of the content. Now, Dynamic Links are still included as an important part of Google’s app indexing platform, Firebase.

In the next example below, you can see how the linked data helps support entity understanding in a search result. The query is for a popular author, so the result shows pictures and a brief biography at the very top. There are only minor differences between the Google Now result and the Google Web result — one has a dynamic share link, and the other offers the ability to “follow” the entity or concept.

In both, the result aggregates information such as quotes and movies attributed to the author, lists influences and links to a Wikipedia page. Below that, Google displays a carousel of his most popular books, with pictures of the covers and the dates they came out. Further down, it shows a “People Also Searched For” carousel, which is full of authors who write in the same genre.

We believe Google is using clicks on these bottom two carousels to verify and vet the linked data that it has assimilated about this author. The more clicks a carousel item gets, the more likely it is to be associated with the topic of the query.

A new way to think of mobile-first indexing

Knowing these trends should help you understand how mobile-first indexing fits into the larger SEO picture. Inclusion of the word “indexing” in Google’s official title for the update is telling. It indicates that this is not just an algorithm update, but an update to the fundamental architecture and organization of the system. Remember, an “index” is just a repository of ordered information that is easy to query or search. Indexes can be created for all different kinds of information and ordered in a variety of ways: alphabetically, numerically, or in Google’s case, historically based on URLs.

Since native apps and progressive web apps don’t require different URLs to show different content, we believe the method of indexing and organizing content has to change. Forcing URLs into those new technologies has proved untenable, so Google needs a new index — and it will be one that prefers “portable” content that lives in the cloud and is well marked up with structured data. It will probably be an “entity index” based on unique “entity concepts” that include domains (with URLs), native app entities and their content, PWA entities and database entities that need no design elements at all.

Use of the phrase “mobile-first” in the name is also interesting. With both the mobile-friendly update and mobile-first indexing, Google repurposed phrases that were previously used to describe design concepts, but in both cases the company mainly focused on the technological back end that made the design changes possible. For the mobile-friendly update, Google did provide guidelines on how content should look on the page, but based on its testing tool, the main focus was really on the crawlability of dependent files on the site (specifically, the CSS and JavaScript).

The mobile-friendly update was an important precursor to mobile-first indexing because it gave Google what it needed to feed and train its machine learning programs about how they should ingest and interpret JavaScript. As SEOs, we all endured the mobile-friendly update, which preferred sites that qualified as such and rewarded them with a mobile-friendly icon when they appeared in search results.

Similarly, the phrase “mobile-first” was originally used to describe a design principle in which responsive design website frameworks were established with the most essential elements of functionality first, and these were meant for mobile devices with the smallest screens. Only later were designers able to add in other, less necessary elements of the design and UX for larger-screened devices that had more room.

It now appears that Google has also co-opted the term “mobile-first” to mean something slightly different, with implications that are much larger than just design. Rather than focusing on mobile devices and screen sizes, Google will put the focus on content accessibility and the cloud and focus much less on the presentation.

This is an important trend because “the cloud” is where Google has been focusing most of their time and innovative energy. Content that is hosted in the cloud, without being formatted specifically for any one device, is exactly what they are after; it is the easiest for them to process with AI and the easiest for them to redisplay on any screen (or read out loud, in voice-only contexts). That is where Google Now and Google Assistant come in.

Google Now was Google’s first attempt at a predictive search engine that anticipated queries before a user even submitted them. It used all the information it knew or could detect about your habits to anticipate information you would want and displayed it in an interface to the left of the home screen on Android phones. It was also available as the Google App on iOS, but it was never as good since they weren’t able to aggregate as many personal habits and preferences from iOS users. Google Now included a voice search capability, but it just translated voice queries into text.

There are minimal differences in most search rankings when you compare regular search in Google.com and a search in Google Now. The primary differences happen when there is a PWA available (like the Weather PWA). There are also some minor variations in the “share” and “follow” functionality, which probably also hint at what to expect in mobile-first indexing. You can see the differences below.

Google Assistant is a bit more sophisticated in that it can sometimes answer simple questions directly rather than just returning a search result. It also uses passive and active signals about a user to ensure that it is giving the most accurate and useful information possible. Google Assistant is the critical element of a Google Home device, which operates primarily with voice but can cast results to connected TVs or phones if visual review is required.

Google Now and Google Assistant are obvious precursors for mobile-first indexing and give us a great deal of insight into what to expect. The two utilities are very similar and may simply be combined for mobile-first indexing. One of the strongest endorsements of this idea is that Google has recently gotten much more aggressive at pushing Android users into the Google Now/Google Assistant world. They moved the query bar from the Google Now interface (one swipe left of the main phone screen) to the standard layout (accessible on all versions of the home screen).

The new search bar just says “Google,” so most users won’t realize that they are accessing a different experience there than in the web-oriented version of Google (google.com).

Google’s most recent blog post about the mobile-first index didn’t really add anything new to the equation, so our best guess is still that the new index will probably also lean heavily on Google’s existing semantic understanding of the web (which is based on Knowledge Graph and its historical incorporation and build-up of Freebase). It will also use cards and AI, like we are used to seeing in Google Now. This concept is backed up by Google’s retirement of the term “rich snippets” and the launch of the new Rich Results Testing Tool on December 19.

The image below shows the different methods Google is using to inform the Google Assistant about an individual user’s preferences, which will help further personalize individual search results. But this data could also be aggregated — in a “Big Data” way — to determine larger patterns, needs and search trends so that it can adapt more quickly.

On the left, you can see a Google Cloud Search, which draws together information about assets on all of my devices that are logged into a Google Account. This includes emails, calendar entries, Drive documents, photos, SMS and apps. Though this has not been the focus of any Google marketing, it is part of Google’s Business GSuite package, which is turned on by default for all GSuite users.

On the right, you can see the Google My Activity tracker. This is another feature that is turned on by default. It is similar to the Cloud Search function, but instead of just being a searchable database, it organizes the information in chronological order. It breaks out my daily activity on a timeline and a map, including the amount of time I spent walking and driving, the businesses I visited and the times I was there. It also places pictures that I took on the timeline and associates them with the locations where they were taken.

Elements like this are meant to help Google Assistant have a greater understanding of personal context so that it can respond when surfacing search results, either to an explicit search or to an anticipated want or need (e.g., Google Now).

In the long run, Google Assistant may be the new entry to Google search on all devices, forcing people to log in so that their state and history can be maintained across different devices, and so that a personal history and index can be developed and built out for each user. The beginning of this personal history index is already in Google Now for Android users. It uses active and passive machine learning to track and compile all of a user’s cross-device activity in Google Cloud, then translates that information into predicted needs in Google Now.

Google has already begun promoting “one-click” registration, form completion and sign-in that work and transfer credentials across different devices. This functionality is currently made possible by Google’s Credential Management API, which means that it relies on a cloud-hosted shared “state” managed by coordination of local Service Workers that pass state changes to the cloud-hosted Google Account. If and when this takes off, it will be a huge boon to engagement and e-commerce conversion because it eliminates a major source of friction.
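A rough sketch of what that looks like in the browser, using the Credential Management API (the /login endpoint is hypothetical, and error handling is omitted):

    <script>
      if (window.PasswordCredential) {
        // try to sign the returning user in silently with a stored credential
        navigator.credentials.get({ password: true, mediation: 'optional' })
          .then(function (cred) {
            if (cred) {
              return fetch('/login', {
                method: 'POST',
                credentials: 'include',
                body: new URLSearchParams({ id: cred.id, password: cred.password })
              });
            }
          });
      }

      // after a successful manual sign-in, ask the browser to store the
      // credential so future visits (and synced devices) can skip the form
      function rememberCredential(id, password) {
        if (window.PasswordCredential) {
          navigator.credentials.store(new PasswordCredential({ id: id, password: password }));
        }
      }
    </script>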

Conclusion

From a search perspective, data that lives in one state, regardless of the device, is great — but assimilating all the different types of potential search results into an index is hard. The new mobile-first index will mix together websites with apps, PWAs and other data sets that don’t all have URLs, so this is where structured data markup will come in.

Just as advertising systems profile individual users with device fingerprints, Google will have to organize the new index with similar unique identifiers, which will include web URLs and app URIs. But, for content that does not have an existing unique identifier, like a page deep within a PWA experience or an asset in a database, Google will allow “Dynamic Links” to stand in as their unique identifier so that they can be indexed.


The Google Assistant SDK adds support for additional languages & more

Google announced it has expanded the Google Assistant software development kit to support additional languages. That means developers can now bring Google Assistant applications to more people. Google Assistant now supports these additional languages and regions: English (Australia), English (Canada), English (UK), English (US), French (Canada), French (France), German and Japanese.

Lack of support for languages can impede development on the platform. For example, my company has been trying to find ways around language barriers to build Jewish apps, but Hebrew is not yet supported. The difficulty is having the Google Assistant APIs understand the language or regional language dialects and respond with a proper answer. So in this example, if someone asks what time is mincha, which is afternoon services in the Jewish world, Google cannot understand the word “mincha” because it is a Hebrew word. Bringing more support for additional languages and regions helps Google expand the ecosystem of the Google Assistant platform.

Other improvements to the Google Assistant SDK include more customizable settings, such as changing the device’s language, location and nickname and enabling personalized results. The API now also supports text-based queries and responses. Developers can also use the new Device Actions functionality to build Actions directly into their Assistant-enabled SDK devices, and new APIs allow developers to register, unregister and see all the devices they have registered, for better device management.

The upcoming mobile app Monday: Be prepared

The season is upon us: mobile download season. Christmas falls on Monday, and if history holds true, Christmas and the day after will be the top mobile app download days of the year. With less than a week left, your app store optimization (ASO) activities should be in full swing.

Becky Peterson heads up our app store optimization at Walgreens. Becky was looking to be on the nice list, so she put together some optimization tips for new and existing apps to help you maximize the download season.

Optimization essentials

Title: Choose a title for your app that is creative but concise. If appropriate, take advantage of the character limit to include relevant keywords that describe your app’s core functionality. (Just don’t overdo it — you don’t want to appear spammy!)

Icon: Create an eye-catching icon that is clean and easily recognizable. A recognizable icon can make the difference when customers are searching specifically for your app.

Keywords & description: Conduct keyword research to determine the most valuable and relevant terms for your app. Utilize the keyword field in iTunes Connect, and use your keywords throughout your description and in your creative assets.

Video: Create a preview video (or three for iOS!) that walks through your core features and provides visitors an overview of how to use your app. On iOS 11, your previews will autoplay in search results and on your store page; in Google Play, they will underlay the Feature Graphic. Create videos that are engaging. Test, test, test to determine which video version generates optimal downloads prior to the top download days.

Screen shots: Create clean and visually appealing screen shots that capture the essence of your app and encourage visitors to continue scrolling through the gallery.

Additional tips

Take full advantage of free app store intelligence platforms, such as App Annie or Sensor Tower, or invest in an app store analytics platform that will provide you with keyword ranking, competitors, Top Chart and download data.

If applicable, seasonalize or incentivize your store listing! Use your description to highlight how your app is seasonally relevant and provide offers (i.e., shopping deals, products and more), and update your creative assets to showcase a holiday theme.

Respond to your app reviews. Demonstrate to your users that you appreciate their feedback and are constantly working to improve your app. Some reviewers may even choose to update their original review simply because you responded in a considerate manner. Prospective users are more inclined to download an app when it is clear the app owners take feedback seriously.

Capitalizing on the top download days of the year can be the difference between an average app and a top download. Keep your content fresh, do not over-optimize, and remember that the goal is to assist customers in finding the right app for the right purpose. Put in the effort, set download goals, and allocate plenty of time to respond to the flurry of reviews that occur soon after installation and use.

Remember to document your lessons learned once the season is over. Download season will be back before you know it, and those valuable lessons can be the difference-maker next year.


Are you considering call analytics software?

Thanks to the ubiquity of the smartphone, phone calls are finally getting the respect they deserve as an integral part of the customer journey. Mobile calls now account for 60 percent of inbound calls to businesses, according to BIA/Kelsey, which projects that the number of mobile calls to businesses will climb to 170 billion in 2020.

As consumers increasingly use their smartphones to research, browse and connect with businesses, brands are developing a newfound respect for the inbound call as an integral part of the conversion path.

MarTech Today’s “Enterprise Call Analytics Platforms: A Marketer’s Guide” examines the current market for enterprise call analytics platforms and the considerations involved in implementing this technology. If you are considering licensing an enterprise call analytics platform, this report will help you decide whether you need one. This 41-page report provides:

call analytics market overview with the latest industry statistics.

in-depth analysis of call analytics features and capabilities.

recommended steps for making an informed purchase decision.

profiles of 12 leading vendors.

Visit Digital Marketing Depot to get your copy.


SEO in 2018: Optimizing for voice search

Google Webmaster Trends Analyst John Mueller recently asked for feedback on why webmasters are looking for Google to separate voice search queries in Search Console. If you, like me, want to see voice searches in Google Search Console, definitely submit your feedback on Twitter as John requested.

I hear folks asking about voice search data in Search Console often. Can you elaborate on what you want to see there? What's an example of such a query that would be useful? pic.twitter.com/WOqS7aH4tP

— John ☆.o(≧▽≦)o.☆ (@JohnMu) December 7, 2017

I lived through the very beginnings of mobile SEO, where many people thought mobile search behavior would be completely different from desktop search behavior, only to find that much of it is the same. So I can see why Mueller and others don’t necessarily understand why Search Console users would want to see voice queries separately. Some queries are the same whether they’re typed into a computer at a desktop or spoken across the room to a Google Home.

That being said, there are some very good reasons to want voice search data. Optimizing for voice search requires some slightly different tactics from those for traditional SEO, and having insight into these queries could help you provide a better experience for those searching by voice.

Not convinced you should care about voice search? Here are three reasons I think you should:

1. More visibility on featured snippets

One of the interesting things about Google Home is that when it answers a question with information from the web, it will cite the source of the information by saying the website’s name, and it will often send a link to the searcher’s Google Home app.

Currently, Google Home and Google Assistant read snippets from sites that are ranked in “position zero” and have been granted a featured snippet. This is why more people than ever are talking about how to optimize for featured snippets. If you look at the articles published on the topic (according to what Google has indexed), you’ll see that the number of articles about how to optimize for featured snippets has grown 178 percent in the past year:

Understanding voice search queries could help us better understand the types of queries that surface featured snippets. As marketers, we could then devote time and resources to providing the best answer for the most common featured snippets in hopes of getting promoted to position zero.

This helps marketers build credibility for their brand when Google reads their best answer to the searcher, potentially driving traffic to the site from the Google Home app.

And this helps Google because they benefit when featured snippets provide good answers and the searcher is satisfied with the Google Home results. The better the service, the more consumers will use it — and potentially buy more Google Home units or Android phones because they think the service is worthwhile.

If bad featured snippets are found because no one is trying to optimize for those queries, or no featured snippets are found and the Google Home unit must apologize for not being able to help with that query yet, Google potentially loses market share to Amazon in the smart speaker race and Apple in the personal assistant race.

So this one is a win-win, Google. You need more great responses competing for position zero, and we want to help. But first, we need to know what types of queries commonly trigger featured snippets from voice search, and that’s why we need this data in Search Console today.

2. Better way to meet consumer demand and query intent based on context

We saw two major things happen in the early days of mobile SEO when we compared desktop and mobile queries:

1. Searchers often used the same keywords in mobile search that they did in desktop search; however, certain keywords were used much more often on mobile search than desktop search (and vice versa).

2. Whole new categories of queries emerged as searchers realized that GPS and other features of mobile search could allow them to use queries that just didn’t work in desktop search.

An example of the first point is a query like “store hours,” which peaks in volume when shoppers are headed to stores:

An example of the second is “near me” queries, which have grown dramatically with mobile search and mostly occur on mobile phones:

The mode of search therefore changes search behavior as searchers understand what types of searches work well on mobile but not on desktop.

Consider this in the context of voice search. There are certain types of queries that only work on Google Home and Google Assistant. “Tell me about my day” is one. We can guess some of the others, but if we had voice search data labeled, we wouldn’t have to.

How would this be useful to marketers and site owners? Well, it’s hard to say exactly without looking at the data, but consider the context in which someone might use voice search: driving to the mall to get a present for the holidays or asking Google Home if a store down the street is still open. Does the searcher still say, “Holiday Hut store hours?” Or do they say something like, “OK Google, give me the store hours for the Holiday Hut at the local mall?” Or even, “How late is Holiday Hut open?”

Google should consider all these queries synonymous in this case, but in some cases, there could be significant differences between voice search behavior and typed search behavior that will affect how a site owner optimizes a page.

Google has told us that voice searches are different, in that they’re 30 times more likely to be action queries than typed searches. In many cases, these won’t be actionable to marketers — but in some cases, they will be. And in order to properly alter our content to connect with searchers, we’ll first need to understand the differences.

In my initial look at how my own family searched on Google Home, I found significant differences between what my family asked Home and what I ask my smartphone, so there’s reason to believe that there are new query categories in voice search that would be relevant to marketers. We know that there are queries — like “Hey Google, talk to Dustin from Stranger Things” and “Buy Lacroix Sparkling Water from Target” — that are going to give completely different results in voice search on Google Home and Assistant from the results in traditional search. And these queries, like “store hours” queries, are likely to be searched much more on voice search than in traditional search.

The problem is, how do we find that “near me” of voice search if we don’t have the data?

3. Understanding extent of advertising and optimization potential for new voice-based media

The last reason to pay attention to voice search queries is probably the most important — for both marketers and Google.

Let me illustrate it in very direct terms, as it’s not just an issue that I believe marketers have in general, but one that affects me personally as well.

Recently, one of my company’s competitors released survey information that suggested people really want to buy tickets through smart speakers.

As a marketer and SEO who sells tickets, I can take this information and invest in Actions on Google Development and marketing so that our customers can say, “OK Google, talk to Vivid Seats about buying Super Bowl tickets,” and get something from Google Home other than, “I’m sorry but I don’t know how to help with that yet.” (Disclosure: Vivid Seats is my employer.)

Or maybe I could convince my company to invest resources in custom content, as Disney and Netflix have done with Google. But am I really going to do it based on this one data point? Probably not.

As with mobile search in 2005, we don’t know how many people are using voice search in Google Home and Google Assistant yet, so we can’t yet know how big the opportunity is or how fast it’s growing. Voice search is in the “innovators and early adopters” stage of the technology adoption life cycle, and any optimizations done for it are not likely to reach a mainstream audience just yet. Since we don’t have data to the contrary from Google or Amazon, we’ll have to stay with this assumption and invest at a later date, when the impact of this technology on the market will likely mean a significant return on our investment.

If we had that data from Google, I would be able to use it to make a stronger case for early adoption and investment than just using survey data alone. For example, I would be able to say to the executives, “Look how many people are searching for branded queries in voice search and getting zero results! By investing resources in creating a prototype for Google Home and Assistant search, we can satisfy navigational queries that are currently going nowhere and recoup our investment.” Instead, because we don’t have that data from Google, the business case isn’t nearly as strong.

Google has yet to monetize voice search in any meaningful way, but when advertising appears on Google Home, this type of analysis will become even more essential.

Final thoughts

Yes, we can do optimization without knowing which queries are voice search queries, as we could do mobile optimization without knowing which queries are mobile queries; yet understanding the nuances of voice search will help Google and marketers do a better job of helping searchers find exactly what they’re looking for when they’re asking for it by voice.

If you agree, please submit your feedback to John Mueller on Twitter.


Microsoft's new Outings app aims to help travelers find their next destinations

Microsoft has launched Outings, a new travel app for iOS and Android. Designed by the Microsoft Garage Project, the app curates travel-specific content and images to help users find potential travel destinations.

“Whether you’re looking for a fun hike near town or planning your next vacation destination, often the hardest part of travel is just figuring out where to go,” writes Lainie Huston on the Microsoft Garage blog. “Outings makes it easier by presenting inspiration for your next adventure, curating high quality travel blogs and beautiful images to show the information — and sneak peek — you need to pick where to go.”

According to Microsoft, the app includes a “Discover” feed listing US locations, and a “Nearby” feed that surfaces content related to local sites. Users can keep track of places they’ve traveled, as well as save and share places with contacts.

“We look forward to users’ feedback from our launch, and we plan to actively respond to them and add several new features in the coming months,” says Microsoft Garage program manager Vimal Kocherla.

Microsoft said it is also open to partnering with travel and local content providers that want to promote their content within the app.

Twitter broadens its AMP support to include analytics

Twitter is broadening its support of AMP (accelerated mobile pages) to include article analytics.

According to the announcement on Twitter’s developer blog, when Twitter loads an AMP version of an article, it will now ping the original article URL to record the view, in addition to passing the query arguments from the original article redirect into the AMP run-time. This will allow publishers to receive the data using the amp-analytics component.
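On the publisher side, those pings are typically collected with the amp-analytics component on the AMP page. A generic sketch of such a configuration (the endpoint is a placeholder, not Twitter’s or any vendor’s):

    <script async custom-element="amp-analytics"
            src="https://cdn.ampproject.org/v0/amp-analytics-0.1.js"></script>

    <amp-analytics>
      <script type="application/json">
      {
        "requests": {
          "pageview": "https://stats.example.com/ping?url=${canonicalUrl}&title=${title}"
        },
        "triggers": {
          "trackPageview": { "on": "visible", "request": "pageview" }
        }
      }
      </script>
    </amp-analytics>

The query arguments Twitter passes through on the redirect become available to this kind of configuration, which is what makes the per-platform attribution possible.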

“Pings to your original article are annotated as coming from Twitter,” writes Twitter product manager Ben Ward, “so that you can better understand the origin of the traffic, and distinguish it from organic views of your pages.”

While Twitter has supported AMP since its launch by making it possible to embed tweets within AMP articles, it has not offered analytics attached to AMP content shared on the platform until now. With this latest update, publishers will have deeper insight into how their AMP content is performing on Twitter.

“With this update, Twitter uses AMP to present your articles to more people, faster and more reliably,” writes Ward.

AMP: A case for websites serving developing countries

Like Taylor Swift, Accelerated Mobile Pages (AMP) have a reputation. In a not-very-official Twitter poll, 53 percent claimed AMP was “breaking the web.”

What do you think about AMP?

— Maximiliano Firtman (@firt) March 23, 2017

The mobile ecosystem is already complex: choosing a mobile configuration, accounting for mobile-friendliness, preparing for the mobile-first index, implementing app indexation, utilizing Progressive Web Apps (PWAs) and so on. Tossing AMP into the mix, which creates an entirely duplicated experience, is not something your developers will be happy about.

And yet despite the various issues surrounding AMP, this technology has potential use cases that every international brand should pause to consider.

To start, AMP offers potential to efficiently serve content as fast as possible. According to Google, AMP reduces the median load time of webpages to 0.7 seconds, compared with 22 seconds for non-AMP sites.

And you can also have an AMP page without a traditional HTML page. Google Webmaster Trends Analyst John Mueller has mentioned that an AMP page can be considered the primary, canonical webpage. This has major implications for sites serving content to developing countries.

Yes, AMP is a restrictive framework that rigorously enforces its own best practices and forces one into its world of amphtml. However, within the AMP framework is a lot of freedom (and its capabilities have grown significantly over the last year). It has built-in efficiencies and smart content prioritization, and a site leveraging AMP has access to Google’s worldwide CDN: Google AMP Cache.

Source: “AMP: Above & Beyond” by Adam Greenberg

All of this is to say that if your brand serves the global market, and especially developing economies, AMP is worth the thought exercise of assessing its implications on your business and user experience.

What in the world-wide web would inspire one to consider AMP?

1. The internet is not the same worldwide

Akamai publishes an amazing quarterly report on the State of the Internet, and the numbers are startling — most of the world operates on 10 Mbps or less, with developing countries operating at less than 5 Mbps, on average.

If 10 Mbps doesn’t make your skin crawl, Facebook’s visual of 4G, 3G and 2G networks worldwide from 2016 (below) will.

Source: Facebook

The visuals show a clear picture: Developing countries don’t have the same internet and wireless network infrastructure as developed economies. This means that brands serving developing countries can’t approach them with the same formula.

2. Websites overall are getting chunkier

While all of this is happening, the average size of a website is increasing… and rapidly. According to reports by HTTParchive.org, the average total size of a webpage in 2017 is 387 percent larger than in 2010.

Despite the number of requests remaining consistent over time, the size of files continues to trend upward at an alarming rate. Creating larger sites may be okay in developed economies with strong networking infrastructures; however, users within developing economies could see a substantial lag in performance (which is especially important considering the price of mobile data).

3. Mobile is especially important for developing economies

The increase in website size and data usage comes at a time when mobile is vital within developing economies, as mobile is a lifeline connection for many countries. This assertion is reaffirmed by data from Google’s Consumer Barometer. For illustration, I’ve pulled device data to compare the US versus the developing economies of India and Kenya. The example clearly shows India and Kenya connect significantly more with mobile devices than desktop or tablet.

Source: Consumer Barometer with Google

4. Like winter, more users are coming

At the same time, the internet doesn’t show any signs of slowing down, especially not in developing countries. A recent eMarketer study on Internet Users Worldwide (August 2017) shows a high level of growth in developing countries, such as India, at 15.2 percent. Even the US saw a +2.2 percent bump in user growth!

User penetration as a percent of a country’s total population shows there is still room for growth as well — especially in developing countries.

5. The divide in speed is growing

In the chart below, I chose nine developing countries (per the United Nations’ World Economic Situation and Prospects report) to compare with the United States’ internet speed (which ranked 10th worldwide in the last report). Despite the overarching trend of growth, there is a clear divide emerging in late 2012 — and it appears to be growing.


Why is this significant? As internet connection speeds increase, so do page sizes. But as page sizes increase to match the fast speeds expected in developed nations, it means that users in developing nations are having a worse and worse experience with these websites.

So, what should one do about it?

The data above paint a picture: Internet penetration worldwide continues to grow rapidly, especially in developing nations, where mobile devices are the primary way to access the internet. At the same time, webpages are getting larger and larger — potentially leading to a poor user experience for internet users in developing nations, where average connection speeds have fallen far behind those in the US and other developed nations.

How can we address this reality to serve the needs of users in developing economies?

Test your mobile experience.

AMP isn’t necessary if your site leverages mobile web optimization techniques, runs lean and is the picture of efficiency; however, this is challenging (especially given today’s web obesity crisis). Luckily, there are many tools that offer free speed analyses for webpages, including:

Test My Site tool (via Think With Google)

Page Speed Insights tool (via Google Developers)

Mobile-Friendly Test (via Google Search Console)

WebPageTest.org

Develop empathy through experience.

Allow yourself to step into your customers’ shoes and experience your site. As Rand Fishkin, former CEO of Moz, once aptly stated, “Customer empathy > pretty much everything else.”

Regular empathy is hard. Empathy for people you don’t know is nearly impossible. If we don’t see the problem, feel it and internalize the challenge, we can’t hope to alleviate it.

Facebook introduced 2G Tuesdays, where employees logging into the company’s app on Tuesday mornings are offered the option to switch to a simulated 2G connection for an hour to build empathy for users in the developing world. If you’re looking to try something similar, any Chrome or Canary user can simulate slower connections via the Network panel in Chrome Developer Tools.

Consider if AMP is right for your site.*

You should entertain the thought of leveraging AMP as a primary experience if your brand meets the following criteria:

Your site struggles with page-speed issues.

You’re doing business in a developing economy.

You’re doing business with a country with network infrastructure issues.

The countries you target leverage browsers and search engines that support AMP.

Serving your content to users as efficiently as possible is important to your brand, service and mission.

*Note: AMP’s architecture can also be used to improve your current site and inform your page speed optimization strategy, including:

Paying attention to and limiting heavy third-party JavaScript, complex CSS and non-system fonts (where impactful to web performance and not interfering with the UX).

Making scripts asynchronous (where possible).

For HTTP/1.1, limiting calls and preventing round-trip loss via pruning or inlining (this does not apply to HTTP/2, due to multiplexing).

Leveraging resource hints (a.k.a. the Pre-* Party), where applicable.

Optimizing images (using the optimal format, appropriate compression, keeping images as close to their display size as possible, the srcset attribute, lazy loading when necessary and so on).

Using caching mechanisms appropriately.

Leveraging a CDN.

Paying attention to and actively evaluating the page’s critical rendering path.
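A few of the items above, sketched as plain HTML (file names and origins are placeholders):

    <!-- resource hint: open a connection to a third-party origin early -->
    <link rel="preconnect" href="https://cdn.example.com">

    <!-- load non-critical JavaScript without blocking rendering -->
    <script src="/js/analytics.js" async></script>

    <!-- serve an image sized for the device instead of one oversized file -->
    <img src="/img/hero-800.jpg"
         srcset="/img/hero-400.jpg 400w, /img/hero-800.jpg 800w, /img/hero-1600.jpg 1600w"
         sizes="(max-width: 600px) 100vw, 800px"
         alt="Product photo">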

Educate your team about AMP, and develop a strategy that works for your brand.

AMP has a plethora of great resources on the main AMP Project site and AMP by Example.

If you decide to go with AMP as a primary experience in certain countries, don’t forget to leverage the appropriate canonical/amphtml and hreflang tags. And make sure to validate your code!
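For reference, the tag pairing looks roughly like this (URLs and locales are placeholders):

    <!-- on the canonical (non-AMP) page: point to the AMP version -->
    <link rel="amphtml" href="https://www.example.com/amp/article.html">

    <!-- on the AMP page: point back to the canonical version
         (or to itself, if AMP is your primary experience) -->
    <link rel="canonical" href="https://www.example.com/article.html">

    <!-- hreflang annotations for the markets you target -->
    <link rel="alternate" hreflang="en-in" href="https://www.example.com/in/article.html">
    <link rel="alternate" hreflang="en-ke" href="https://www.example.com/ke/article.html">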


Google Lens: an impressive start for 'visual search'

Google Lens has gone live, or is about to, on Pixel phones in the US, the UK, Australia, Canada, India and Singapore (in English). Over the past couple of weeks, I’ve been using it extensively and have had mostly positive results, though not always.

Currently, Lens can read text (e.g., business cards), identify buildings and landmarks (sometimes), provide information on artwork, books and movies (from a poster) and scan barcodes. It can also identify products (much of the time) and capture and keep (in Google Keep) handwritten notes, though it doesn’t turn them into text.

To use Lens, you tap the icon in the lower right of the screen when Google Assistant is invoked. Then you tap the image or object or part of an object you want to scan.

As a barcode scanner, it works nearly every time. In that regard, it’s a worthy and more versatile substitute for Amazon’s app, and just as fast or faster in many cases. If there’s no available barcode, it can often correctly identify products from their packaging or labels. It also does very well identifying famous works of art and books.

Google Lens struggled most with buildings and with products that didn’t have any labeling on them. For example (below), it was rather embarrassingly unable to identify an Apple laptop as a computer, and it misidentified Google Home as “aluminum foil.”

When Lens gets it wrong, it asks you to let it know. And when it’s uncertain but you affirm its guess, you can get good information.

I tried Lens on numerous well-known buildings in New York, and it was rarely able to identify them. For example, the three buildings below (left to right) are New York City Hall, the World Trade Center and the Oculus transportation hub. (In the first case, if you’re thinking, he tapped the tree and not the building, I took multiple pictures from different angles, and it didn’t get one right.)

I also took lots of pictures of random objects (articles of clothing, shoes, money) and those searches were a bit hit-and-miss, though often, when it missed it was a near-miss.

As these results indicate, Google Lens is far from perfect. But it’s much, much better than Google Goggles ever was, and it will improve over time. Google will also add capabilities that expand use cases.

It’s best right now for very specific uses, which Google tries to point out in its blog post. One of the absolute best uses is capturing business cards and turning them into contacts on your phone.

Assuming that Google is committed to Lens and continues investing in it, over time it could become a widely adopted alternative to traditional mobile and voice search. It might eventually also drive considerable mobile commerce.

Apple launches 'set it and forget it' Search Ads Basic for the App Store

Apple says it will bring more installs with less effort “at a predictable cost.” That’s how the company is pitching its new Search Ads Basic offering.

Apple rolled out Search Ads for the App Store in September 2016. Since that time, Apple has seen significant adoption by developers seeking to drive app downloads. Search Ads Basic is a simplified version of Search Ads that eliminates keywords and bidding.

Apple won’t appreciate this analogy, but it’s kind of like AdWords Express for the App Store. Search Ads Basic is designed for developers who don’t have the time, interest or expertise to manage search campaigns. Currently, it’s available for the US only.

However, Search Ads is available in selected non-US markets, so over time, we can probably expect it to go international. The current product now becomes “Search Ads Advanced.” Beyond the differences in bidding and keywords, the two have different dashboards, with simplified data available for Basic and more granular data available with Advanced.

To get started with Basic, you specify a monthly budget and a cost-per-install (CPI) maximum. As with Search Ads Advanced, Apple generates the creative.

Using its data and analytics, Apple will recommend a CPI amount, but developers can set their own. Regardless, the company will seek to optimize campaigns to bring the actual cost in under the daily CPI. Apple is offering a $100 credit for new campaigns.

Users pay only for taps (clicks). I was told that Apple is seeing a very high average conversion rate of 50 percent. However, some campaigns perform even better. The company also indicated that many developers are achieving CPIs of less than $1.50 and some well below $0.50.

The move toward greater simplification is also happening at Google. Earlier this year, the company decided to turn all app-install campaigns into Universal App Campaigns.