Google’s announcement and launch this week of a market research offering has had the market research and technology industries buzzing.
And understandably so. As this article from the GreenBook blog points out, the world’s biggest search engine company already has access to millions of consumer data points (thanks to Gmail and Android phones), and now they are going to ask those millions of users and their friends about their shopping, product, and marketing preferences one question at a time.
But before anyone in either industry panics, it is worth doing a little research on Google's offering to understand where and when the tool is most helpful.
What it’s good for
One question, thousands of responses. Google's tool delivers quick responses to urgent questions and immediate access to results. It is a great fit for the urgent one-off question that comes up in a meeting at 5 pm and needs to be answered by 9:30 am the following morning.
Your AdWords campaign. The platform runs on Google's publisher network, so if you are going to advertise via Google and you want to do some simple market research on that population, it is a good option.
DIY research with the Internet population. Unlike many do-it-yourself tools, the only responsibility for the user here is to design the research questions. Google handles the sampling to deliver balanced results, including weighting where it is required. Real time reporting includes charts with descriptive text highlighting interesting differences found in the data.
Demographic information is provided. Google infers demographics from the respondent's originating computer, using signals such as IP address, cookies, and sites visited, so demographic questions do not need to be asked unless the demographics must be precise for the purposes of the research.
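Google does not publish the details of its weighting method, but the general idea behind "balancing" a sample is standard post-stratification: respondents in over-represented demographic cells are weighted down and those in under-represented cells are weighted up until the weighted sample matches known population benchmarks. A minimal sketch, using made-up age brackets and hypothetical benchmark shares:

```python
# Illustrative post-stratification cell weighting. This is a generic
# survey-research technique, NOT Google's actual (unpublished) algorithm.
from collections import Counter

def cell_weights(sample_cells, population_shares):
    """Return a weight per demographic cell so that the weighted
    sample matches the target population shares."""
    n = len(sample_cells)
    sample_shares = {cell: count / n for cell, count in Counter(sample_cells).items()}
    return {cell: population_shares[cell] / sample_shares[cell]
            for cell in sample_shares}

# Hypothetical example: a sample that over-represents 18-34s
# relative to assumed internet-population benchmarks.
sample = ["18-34"] * 60 + ["35-54"] * 25 + ["55+"] * 15
benchmarks = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}
weights = cell_weights(sample, benchmarks)
# 18-34 respondents are weighted down (0.40 / 0.60), while
# 55+ respondents are weighted up (0.25 / 0.15).
```

Weighted tallies then use each response multiplied by its cell's weight, which is what allows a skewed intercept sample to approximate the target population on the weighted dimensions.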
The current limiting factors
One question at a time. You can ask a yes/no screener question to qualify someone to answer that one question, but building a longer survey with related questions would be difficult. Piping from one question into another based on a response is not an option, nor is dynamic creation of a question's content based on previous responses or sample data.
You can certainly ask more than one question as part of a research initiative, but not of the same people; each person is served just one question (or one screener plus one question). Each question must be able to stand alone, or with a single screener leading into it, and not rely on previous questions in the survey to make sense. This would make it difficult to conduct exercises such as Van Westendorp price modeling, choice-based exercises, or even aided and unaided awareness.
Balanced to the US Internet population rather than the US general population. This is not a limitation if the internet population is the target audience for your product, but if you require a traditional census stratification scheme for your sample, you may need to supplement with other online or telephone sample. This is a limitation Google recognizes and references in the white paper listed below.
Sample frame is limited to Google’s publisher network. There may or may not be bias introduced by this limitation, but it bears observing as the product rolls out and we begin to see the way it performs.
Question types are limited to multiple choice, image choice, and 5-point Likert scales.
Not able to append data from other sample sources. With a custom panel, data can be appended from customer databases of transactional data or segmentation schemes. With a general consumer panel, data can be appended from large consumer databases. And with a completely anonymous sample source, the option to do this kind of appending effectively is eliminated.
Sensitive questions may be a problem. Another risk factor noted by Google is that the nature of the interaction with the respondent (intercepting them as they navigate the web) may make it difficult to get responses to very personal questions. For consumer panels, these objections can be more easily overcome, as there is an existing relationship with the panel provider and a level of trust has been established.
One could argue that an intercept survey is more analogous to an RDD-based phone survey, since the respondents are not necessarily enrolled in an online research panel and are, in effect, randomly selected. Results from a 2010 study by Hines, Douglas, and Mahmood shed some light on how rates of self-reported mental health issues differ by data collection mode. Higher incidences of some mental health disorders were reported among online panel respondents, while the RDD telephone sample reported higher incidences of others.
Non-response bias. All surveys suffer from non-response bias to one degree or another, but response rates for unsolicited pop-up surveys are notoriously low. Fricker (2008) cites an early study conducted in the UK in 2000 that yielded response rates of 15-30%. The issue is less the response rate itself than the inability to know anything at all about the non-responders.
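A quick way to see why the non-responders matter more than the raw rate is to compute worst-case (Manski-style) bounds: when nothing is known about non-responders, the true proportion for a yes/no question could lie anywhere in a wide interval. A small illustrative calculation, not anything the product itself reports:

```python
# Worst-case bounds on a binary survey result under complete
# ignorance about non-responders (Manski-style bounds).
def nonresponse_bounds(p_observed, response_rate):
    """Bounds on the true population proportion given the observed
    'yes' proportion among responders and the response rate."""
    low = p_observed * response_rate        # every non-responder is a "no"
    high = low + (1 - response_rate)        # every non-responder is a "yes"
    return low, high

# With a 20% response rate, an observed 50% "yes" is consistent with
# a true rate anywhere from 10% to 90%.
low, high = nonresponse_bounds(0.5, 0.2)
```

In practice the truth is rarely at either extreme, but the width of the interval is what makes low-response-rate intercept surveys hard to interpret without some assumption that responders resemble non-responders.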
Want to know more?
- PC Magazine online does a quick overview of the product
- Check out the GreenBook blog post mentioned above
- Read Google’s white paper on Comparing Google Consumer Surveys to Existing Probability and Non-Probability Based Internet Surveys
Hines, Denise A., Emily M. Douglas, and Sehar Mahmood (2010), "The effects of survey administration on disclosure rates to sensitive items among men: A comparison of an internet panel sample with a RDD telephone sample," Computers in Human Behavior, 26(6), 1327-1335. doi:10.1016/j.chb.2010.04.006.
Fricker, Ronald D. (2008), "Sampling Methods for Web and E-Mail Surveys," in Fielding et al. (eds.), The SAGE Handbook of Online Research Methods, pp. 195-217.