Until (relatively) recently, electric utilities pretty much had a single mission: the generation, transmission, and monetization of electrons. It was a simple business model. The customer flips a switch, or claps their hands, or yells across the room to Alexa to turn on the lights, and fulfillment happens, literally at the speed of light. For many years, that was enough. But what happened before is not what happens today. Instead, we live in an economic marketplace where cable service providers offer home security, phone companies offer video entertainment, drivers with some spare time provide rides from the airport, and Amazon provides, well, everything. Utility companies have learned that, within a fairly narrow space, they too can compete for a share of consumers' wallets with products that most consumers would traditionally have sought elsewhere. The "narrow space," at least today, tends to center on either 1) products and services directly related to the home, such as home warranties, appliance repair and maintenance, home management and efficiency systems, and yes, home security; or 2) optional, voluntary programs like community solar.

Configuring and ultimately pricing these types of products is a well-established, tried-and-true exercise in product development research. Got ballpark pricing questions? Use Van Westendorp's Price Sensitivity Meter. Want to see what consumers will trade off for lower prices? Use conjoint analysis. Want to offer complex pricing models based on many optional add-ons and choices? Use adaptive conjoint. Want to pick a winner among a few product finalists? Use a monadic experimental design. The limitation of all these methods is that they generally yield relative consumer adoption estimates, not absolute ones. Relative metrics reliably show which product will outperform the others, but they don't help much in populating the top line of the product's pro forma.
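As a purely illustrative example of the first technique: a Van Westendorp exercise asks each respondent, among other questions, at what price a product would feel "too cheap" and at what price it would be "too expensive." One common read-out, the optimal price point, falls where the two cumulative curves cross. A minimal sketch, with invented survey answers for a hypothetical home-warranty plan:

```python
def optimal_price_point(too_cheap, too_expensive, price_grid):
    """Return the candidate price where the share of respondents calling it
    'too cheap' most nearly equals the share calling it 'too expensive'
    (the Van Westendorp optimal price point)."""
    n = len(too_cheap)
    best_price, best_gap = None, float("inf")
    for price in price_grid:
        # Respondents whose 'too cheap' threshold is at or above this price
        # would consider the price suspiciously low ...
        pct_too_cheap = sum(1 for t in too_cheap if t >= price) / n
        # ... and those whose 'too expensive' threshold is at or below it
        # would consider it out of reach.
        pct_too_expensive = sum(1 for t in too_expensive if t <= price) / n
        gap = abs(pct_too_cheap - pct_too_expensive)
        if gap < best_gap:
            best_price, best_gap = price, gap
    return best_price

# Made-up responses (dollars per month) for a hypothetical offering.
too_cheap = [5, 8, 10, 10, 12]
too_expensive = [25, 30, 30, 35, 40]
opp = optimal_price_point(too_cheap, too_expensive, range(5, 41))
```

In practice the full exercise also uses the "bargain" and "getting expensive" questions to bound an acceptable price range; the sketch above shows only the crossing logic.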
And obviously, it helps to know how many people will sign up for a community solar initiative, and at what price, before a utility can reasonably estimate its investments in hardware, marketing, insurance, rolling trucks, regulatory compliance, and, importantly, the impact on its brand. In research for consumer packaged goods, the transformation from relative adoption estimates to hard forecasts of adoption and dollars has relied on mountains of historical data that model specific products in specific categories (e.g., a new flavor of potato chips in the salty snacks category). For new concepts like community solar initiatives, however, this history largely doesn't exist. Fortunately, many utilities do have historical adoption rates for many analogous, if antiquated, products. In the final, monadic stage of product development, a key question is purchase intent; generally speaking, the concept with the highest PI wins. Producing a forecast from these outputs, while far from an elementary exercise, is founded on best-practice modeling and marketing science. Adoption of earlier products can be treated as the dependent variable in a logistic model, with household demographics, marketing outreach, seasonal factors, and other characteristics as predictors of that outcome. A similar process can then be applied to the new product, using "definitely would purchase" as the binomial target and the current values (or, in the case of marketing effort, estimated values) along with the previously fitted coefficients as predictors. The final data science step involves scaling the model's results to the adoption levels of previous products. In a nutshell, the modeling approach for these utility applications is similar to the large-scale efforts applied in CPG and other sectors, except that the process tends to be unique to each utility, based on its history, customers, and many other factors.
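The scoring-and-scaling steps described above can be sketched in a few lines, assuming the logistic coefficients have already been fit on a utility's historical adoption data. Every name, coefficient, and rate below is invented for illustration; a real model would be estimated from the utility's own records.

```python
import math

# Hypothetical coefficients from a logistic model fit to a past product's
# adoption history (intercept plus household-level predictors).
COEFS = {
    "intercept": -2.0,
    "income_per_10k": 0.05,    # household income in $10k units
    "owns_home": 0.80,         # 1 if homeowner, else 0
    "marketing_touches": 0.30  # estimated number of marketing contacts
}

def adoption_probability(household):
    """Standard logistic link: P(adopt) = 1 / (1 + exp(-z))."""
    z = COEFS["intercept"] + sum(COEFS[k] * v for k, v in household.items())
    return 1.0 / (1.0 + math.exp(-z))

def scale_to_history(raw_rates, predicted_past_rate, actual_past_rate):
    """Final step from the text: scale raw model output so that, applied to
    the previous product, it would reproduce the adoption actually observed."""
    factor = actual_past_rate / predicted_past_rate
    return [min(r * factor, 1.0) for r in raw_rates]

# Score a few hypothetical households for the new product ...
households = [
    {"income_per_10k": 6, "owns_home": 1, "marketing_touches": 3},
    {"income_per_10k": 4, "owns_home": 0, "marketing_touches": 1},
]
raw = [adoption_probability(h) for h in households]
# ... then calibrate against a past product the model over-predicted.
forecast = scale_to_history(raw, predicted_past_rate=0.20, actual_past_rate=0.10)
```

Summing the calibrated probabilities across the customer base (times price) would feed the pro forma's top line; the ratio calibration shown here is the simplest of several possible scaling choices.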
This fact doesn’t allow for an easy, out-of-the-box solution, but it does allow for a scientific, empirical, and defensible approach to forecasting new product sales.
Like most people pressed for time, I scan the news headlines just to get a sense of what's going on in the world, from the missile antics of North Korea to the frenzied state of bitcoin. Speaking of bitcoin, there's been a wave of news lately regarding the legitimacy and astronomical price increase of bitcoin. Many financial analysts are warning that the bubble will eventually burst, while others claim it's the digital currency of the future and that you need to invest now.
Data Decisions Group has been conducting primary marketing research studies for over three decades. That's a lot of survey data! For many years, these meticulously and methodically collected survey results, and the ensuing insights, were our only deliverables to our clients. Not anymore. In 2018, the best solutions we have to offer our customers often begin with the marriage of survey data and some other data source.
In my previous blog post in this series I talked about the impact machine coding has on the quality of your data. Having a human being actually look at each open-ended response in your survey may be time-consuming, but the results are hard to argue with: a code scheme that accurately represents the intent of the respondents' comments and a true reflection of the study results.
It's no secret that Hispanic buying power is surging in the United States. How could it not, with an estimated population reaching nearly 58 million in 2016, an increase of almost 8 million people in just six years? Because of their approximately $1.4 trillion in buying power, retailers and marketers are heavily focused on capturing data about Hispanic shopping habits, lifestyles, and product preferences. This translates into more dollars dedicated to Hispanic-targeted marketing and advertising and...you guessed it...market research.
In the fall of 2016, an update to iOS 10 sent shockwaves through the market research industry. The update included an upgrade to the default email application that places a banner (see image below) above the body of an incoming message when the email in question is part of an email distribution list.
You've written the perfect list, encompassing every possible response to your survey question. Or so you think. No matter how comprehensive your list is, the range of human experience exceeds it, and that means that sometimes, someone is going to click that little radio button at the bottom, and then reach for their keyboard to type a response in the little box labeled "Other, please specify".
Drones have become a hot and controversial topic in recent years, from businesses such as Amazon testing package delivery via drones through Amazon Prime and Domino's testing pizza delivery in select markets, to local law enforcement using drones to track suspects and monitor emergency situations. To the casual observer of this recent phenomenon, it might seem that drones are only practical in recreational settings. But, as the above examples demonstrate, drones are becoming more prevalent in business as companies look for quicker and cheaper ways to gain an edge over their competition.
Custom online research panels continue to offer an efficient way of collecting survey feedback from participants. Custom panels often make research speedier, more affordable, and more actionable, but only if they are managed and maintained properly. One of the critical (and often overlooked) elements to properly managing and maintaining a custom panel is regularly scheduled panel refreshment - essential to maintaining a healthy and active panel.
Let's face it: we live in a "if you can't quantify it, it's not actionable" world. Companies ultimately want to know what their customers want, how much of it they want, and how they want it delivered to them. And they want the numbers to back it up. If you are looking for research that...