Until (relatively) recently, electric utilities pretty much had a single mission: the generation, transmission, and monetization of electrons. It was a simple business model. The customer flips a switch, or claps their hands, or yells across the room to Alexa to turn on the lights, and the fulfillment process happens, literally at the speed of light.
For many years, that was enough. But yesterday's marketplace is not today's. We now live in an economy where cable service providers sell home security, phone companies sell video entertainment, drivers with spare time provide rides from the airport, and Amazon provides, well, everything. Utility companies have learned that, within a fairly narrow space, they too can compete for a share of consumers' wallets with products most consumers would traditionally have sought elsewhere. The “narrow space,” at least today, tends to center on either 1) products and services directly related to the home, such as home warranties, appliance repair and maintenance, home management and efficiency systems, and yes, home security; or 2) optional, voluntary programs like community solar.
Configuring and ultimately pricing these types of products is a well-established, tried and true exercise in product development research.
Got ballpark pricing questions? Use Van Westendorp Price Elasticity research.
Want to see what consumers will trade off for lower prices? Use conjoint analysis.
Want to offer complex pricing models based on tons of optional add-ons and choices? Use adaptive conjoint.
Want to pick a winner among some product finalists? Use a monadic experimental design.
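To make the first of these concrete, here is a minimal sketch of the core calculation behind Van Westendorp's Price Sensitivity Meter. The survey responses below are invented for illustration, and the sketch uses only the “too cheap” and “too expensive” questions (the full method also asks “cheap” and “expensive”); the optimal price point is read off where the two cumulative curves cross.

```python
import numpy as np

# Hypothetical responses (in dollars) from eight survey respondents to two
# of the four Van Westendorp questions. All numbers are invented.
too_cheap = np.array([5, 8, 10, 12, 15, 18, 20, 25])        # "so cheap you'd doubt quality"
too_expensive = np.array([12, 15, 18, 20, 22, 25, 28, 30])  # "too expensive to consider"

prices = np.arange(0, 41)  # candidate price grid, $0-$40

# Cumulative curves: share of respondents who would call each candidate
# price too cheap (falls as price rises) or too expensive (rises with price).
pct_too_cheap = np.array([(too_cheap >= p).mean() for p in prices])
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])

# The "optimal price point" (OPP) is where the two curves intersect.
opp = prices[np.argmin(np.abs(pct_too_cheap - pct_too_expensive))]
print(f"Optimal price point: ${opp}")  # → $18 with this toy data
```

In practice the curves are built from hundreds of respondents and the intersections of all four questions bracket an acceptable price range, but the mechanics are exactly this simple.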
The limitation with all these methods is that they generally yield relative consumer adoption estimates, not absolute ones. Relative metrics reliably show which product will outperform the others but don't help much in populating the top line of the product's pro forma. And obviously, it helps to know how many people will sign up for a community solar initiative—and at what price—before a utility can reasonably estimate its investments in hardware, marketing, insurance, rolling trucks, regulatory compliance, and, importantly, the impact on its brand.
In research for consumer packaged goods, the transformation from relative adoption estimates to hard forecasts of adoption and dollars has relied on mountains of historical data that model specific products in specific categories (e.g., a new flavor of potato chips in the salty snacks category). For new concepts like community solar initiatives, however, this history doesn't broadly exist.
Fortunately, many utilities do have historical adoption rates for analogous, if antiquated, products. In the final, monadic stage of product development, a key question measures purchase intent (PI). Generally speaking, the concept with the highest PI wins. Producing a forecast from these outputs, while far from an elementary exercise, is founded on best-practice modeling and marketing science.
Adoption of these earlier products can be treated as the dependent variable in a logistic model, with household demographics, marketing outreach, seasonal factors, and other characteristics as predictors of that outcome. A similar process can then be applied to the new product, using “definitely would purchase” as the binomial target and the current values of those predictors (or, in the case of marketing effort, estimated values) along with the previously fitted coefficients. The final data science step is scaling the model's results to the adoption levels actually achieved by the previous products.
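The pipeline just described can be sketched end to end. Everything here is illustrative: the predictors, coefficients, and sample sizes are invented, and a plain gradient-ascent fit stands in for whatever estimation routine a given team actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical history: adoption (0/1) of an analogous legacy product,
# with standardized household income and marketing exposure as predictors.
n = 2000
income = rng.normal(0, 1, n)
marketing = rng.normal(0, 1, n)
X_hist = np.column_stack([np.ones(n), income, marketing])
true_beta = np.array([-2.0, 0.8, 0.5])            # invented "true" effects
p_true = 1 / (1 + np.exp(-X_hist @ true_beta))
adopted = rng.binomial(1, p_true)                 # observed adoption outcomes

# --- Fit a logistic model by gradient ascent on the log-likelihood.
beta = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X_hist @ beta))
    beta += 0.5 * (X_hist.T @ (adopted - p) / n)

# --- Apply the fitted coefficients to the new-product survey sample
# (characteristics of "definitely would purchase" respondents).
m = 500
X_new = np.column_stack([np.ones(m), rng.normal(0, 1, m), rng.normal(0, 1, m)])
p_new = 1 / (1 + np.exp(-X_new @ beta))

# --- Final calibration: scale the raw model output so its average on the
# historical sample matches the adoption rate actually realized there.
calibration = adopted.mean() / (1 / (1 + np.exp(-X_hist @ beta))).mean()
forecast = p_new.mean() * calibration
print(f"Forecast adoption rate: {forecast:.1%}")
```

The interesting part is the last two lines: the logistic model supplies the shape of the response, while the historical adoption rate anchors it to an absolute level, which is what turns a relative purchase-intent score into a number that can go on a pro forma.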
In a nutshell, the modeling approach for these utility applications is similar to the large-scale efforts applied in CPG and other sectors, except that the process tends to be unique to each utility, based on its history, customers, and many other factors. That uniqueness rules out an easy, out-of-the-box solution, but it still allows for a scientific, empirical, and defensible approach to forecasting new product sales.