Is “not provided” something for us to work around?

Unless you’ve been under a rock (and maybe even if you have been), you know that Google has repeatedly reduced the availability of keyword referral data for organic search over the last couple of years, and now it seems that it will go to zero. Google’s story is that they are encrypting this data to protect the privacy of the searcher, but it is hard to see how they can say that with a straight face, because for 35 cents a click (or whatever your keyword goes for), they will happily share that same searcher’s keywords with their paid search advertisers.

While we can be indignant about it, we can’t really ignore it. Keyword data has been a key part of organic search marketing since its beginning. We have to do something.

Photo: “Google sells out if the price is right” (photo credit: Steve Rhodes)

The first thing you can do is take the crumbs that Google leaves for us. Google Webmaster Tools offers data for the top 2,000 keywords for each subdomain. You can work this to its maximum by creating as many subdomains as you can without driving your customers crazy, and by squirreling away the data each day so that, over time, you collect many more than 2,000 keywords for your site. If your site leans heavily on so-called “head” (popular) keywords, this might give you what you need. If you rely on massive “long-tail” traffic, all the subdomains in the world probably won’t replace what you once had.
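If you do go the squirreling route, the mechanics are simple: keep every daily export and merge them. Here is a minimal sketch in Python, assuming you save one CSV of top queries per subdomain per day; the directory layout and column names are my assumptions, not anything Google specifies.

```python
# A minimal sketch, assuming one daily CSV export of top queries per subdomain.
# The file pattern and column names ("query", "clicks", "impressions") are
# assumptions -- adjust them to match whatever your export actually contains.
import csv
import glob
from collections import defaultdict

def accumulate_queries(export_glob="webmaster-tools-exports/*.csv"):
    """Merge every daily top-query export into one cumulative keyword table."""
    totals = defaultdict(lambda: {"clicks": 0, "impressions": 0})
    for path in glob.glob(export_glob):
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                keyword = row["query"].strip().lower()
                totals[keyword]["clicks"] += int(row.get("clicks", 0) or 0)
                totals[keyword]["impressions"] += int(row.get("impressions", 0) or 0)
    return totals

if __name__ == "__main__":
    keywords = accumulate_queries()
    print(f"{len(keywords)} distinct keywords collected so far")
```

The point is not the code; it is that the 2,000-keyword ceiling applies to any single report, not to what you can accumulate over weeks of reports.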

The second thing you can do is simply extrapolate the missing data from the data you have left. Depending on which country you are in, Bing might account for 30% of searches, so maybe you can extrapolate what you know from Bing. Or perhaps you can extrapolate your organic data from your paid data. Will that perfectly simulate the correct data? No, but it might fill in the blanks with something that is better than nothing.
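To make the extrapolation idea concrete, here is a back-of-the-envelope sketch, assuming you can still see Bing keyword referrals in your analytics and that you know (or estimate) each engine’s share of searches in your market. The share figures below are placeholders, not published numbers.

```python
# A rough sketch: scale Bing keyword referrals by relative market share to
# estimate the Google organic traffic you can no longer see per keyword.
# BING_SHARE and GOOGLE_SHARE are placeholder assumptions for your market.

BING_SHARE = 0.30
GOOGLE_SHARE = 0.65

def estimate_google_visits(bing_visits_by_keyword):
    """Scale Bing referral counts by relative market share to estimate Google's."""
    scale = GOOGLE_SHARE / BING_SHARE
    return {kw: round(visits * scale) for kw, visits in bing_visits_by_keyword.items()}

# Hypothetical Bing referral counts for a month
print(estimate_google_visits({"running shoes": 120, "trail running": 45}))
```

It won’t be exact, but it can give you a defensible ranking of which keywords still matter.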

The third thing you can do is purchase a tool with an even more elaborate simulation, perhaps one that triangulates all the data we have talked about already, plus data about which pages get more traffic. Some approaches, for example, look at the traffic to actual pages and then infer what the keywords must have been from the words in their title tags.
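If you are curious what that kind of simulation looks like under the hood, here is a deliberately crude sketch of the title-tag idea: spread each landing page’s organic visits across the words in its title tag and see which candidate keywords bubble up. Commercial tools triangulate many more signals; everything here is a simplifying assumption for illustration.

```python
# A deliberately crude sketch: distribute each landing page's organic visits
# across the words in its title tag to produce candidate keyword weights.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "for", "to", "in", "how"}

def infer_keyword_weights(pages):
    """pages: iterable of (title_tag, organic_visits) pairs."""
    weights = Counter()
    for title, visits in pages:
        words = [w for w in re.findall(r"[a-z0-9]+", title.lower())
                 if w not in STOPWORDS]
        if not words:
            continue
        share = visits / len(words)      # split the page's visits evenly
        for word in words:
            weights[word] += share       # accumulate per candidate keyword
    return weights.most_common()

print(infer_keyword_weights([
    ("Trail Running Shoes for Beginners", 300),
    ("How to Choose Running Shoes", 180),
]))
```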

Each of these approaches attempts to restore the status quo. We’ve always focused on which keywords drive traffic to our sites, and we don’t want to let go of what we have always done. And that might be the right approach. Perhaps these simulations will fill in the blanks and we can go on as we always have.

But I suspect that it will be hard to keep the old game working. If Google sticks to its guns (and some browsers have started hiding referrals, too), the simulations will be hamstrung, because the only way you can test a simulation is against actual data. If we never see the actual data, we will never know whether the simulation is working.

At some point, we might need to admit that the old ways must be updated. Perhaps we will need a more content-oriented approach that doesn’t analyze keywords the way we once did. We still know which content gets the most search referrals (even though we don’t always know which keywords were used), and a content-oriented approach jibes with social media and other digital marketing channels where there are no keywords. And as search results become more personalized, the results depend less on keywords and more on who you are. Moreover, keywords are disappearing from search itself, as you can see in how Google Now works.

Will Google start to remove even more data from Webmaster Tools, such as average rank? Will Google at some point remove paid search referral data? Or will Google be forced to reverse course and reveal organic keyword data again? No one knows, but it is clear that search marketers must be prepared to adjust their methods as the available data changes.

Mike Moran

Mike Moran is a senior strategist at Converseon, an AI-powered consumer intelligence technology and consulting firm. He is also a senior strategist for SoloSegment, a marketing automation software and services firm. Mike also served as a member of the Board of Directors of SEMPO. Mike spent 30 years at IBM, rising to Distinguished Engineer, an executive-level technical position. Mike held various roles in his IBM career, including eight years at IBM’s customer-facing website, ibm.com, most recently as the Manager of ibm.com Web Experience, where he led 65 information architects, web designers, webmasters, programmers, and technical architects around the world. Mike's newest book is Outside-In Marketing, written with world-renowned author James Mathewson. He is co-author of the best-selling Search Engine Marketing, Inc. (with fellow search marketing expert Bill Hunt), now in its Third Edition. Mike is also the author of the acclaimed internet marketing book Do It Wrong Quickly: How the Web Changes the Old Marketing Rules, named one of the best business books of 2007 by the Miami Herald. Mike founded and writes for Biznology® and writes regularly for other blogs. In addition to Mike’s broad technical background, he holds an Advanced Certificate in Market Management Practice from the Royal UK Charter Institute of Marketing and is a Visiting Lecturer at the University of Virginia’s Darden School of Business. He also teaches at Rutgers Business School. He was a Senior Fellow at the Society for New Communications Research and is now a Senior Fellow of The Conference Board. A Certified Speaking Professional, Mike regularly makes speaking appearances, including keynote appearances worldwide.
