<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=204513679968251&amp;ev=PageView&amp;noscript=1">


Measuring Importance with Maximum Difference (MaxDiff) Scaling

Other methods have significant, if grudgingly accepted, flaws. Likert scales, for example, are subject to scale bias because people use scales differently (one person’s 7 might be another person’s 9). Ordinal rankers, where respondents simply rank their preferences from 1 to k[1] in order of importance, can reliably identify which single item is most important in aggregate, but discrimination between attributes is quickly lost after that. Further, most survey respondents can rarely make meaningful comparisons among more than a few items.
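To make the scale-bias point concrete, here is a minimal sketch (with made-up ratings) of two respondents who order four items identically but use the rating scale differently. Their raw scores disagree, yet standardizing within each respondent reveals the same underlying preference profile. The item names and numbers are purely illustrative.

```python
# A made-up illustration of Likert scale bias: both respondents rank the
# items A > B > C > D, but respondent 1 uses the top of the scale while
# respondent 2 stays lower, so one person's 7 is another person's 9.
import numpy as np

items = ["A", "B", "C", "D"]
ratings = np.array([
    [9.0, 8.0, 7.0, 5.0],   # respondent 1: a "high" scale user
    [7.0, 6.0, 5.0, 3.0],   # respondent 2: same ordering, shifted down
])

# Standardizing within each respondent removes the scale-use difference:
# the two z-score profiles come out identical.
z = (ratings - ratings.mean(axis=1, keepdims=True)) / ratings.std(axis=1, keepdims=True)
for item, r1, r2 in zip(items, z[0], z[1]):
    print(f"{item}: respondent 1 z = {r1:+.2f}, respondent 2 z = {r2:+.2f}")
```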

It is notable that when using MaxDiff results, specifically the hierarchical Bayes utilities output, what matters is the correlated and projectable collection of attributes each person identifies as important, not any one individual item in the list. The MaxDiff scaling results provide two important metrics: the relative rank of the items in order of importance, and the magnitude of each item’s importance relative to the others.
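As a sketch of how those two metrics can be read off the output, the snippet below rescales per-respondent utilities to a 0-to-100 share scale (a common reporting convention for MaxDiff) and reports aggregate rank and magnitude. The `utilities` matrix here is a random placeholder standing in for real hierarchical Bayes estimates, and the item names are hypothetical.

```python
# A minimal sketch of summarizing hierarchical Bayes MaxDiff utilities.
# `utilities` is respondents x items on the logit scale; a real study would
# obtain it from an HB estimation step not shown here.
import numpy as np

rng = np.random.default_rng(0)
items = ["Price", "Quality", "Speed", "Support", "Brand"]
utilities = rng.normal(size=(200, len(items)))   # placeholder for HB output
k = 4                                            # items shown per choice set

# One common rescaling: the probability of choosing the item from a set of
# k alternatives, normalized so each respondent's scores sum to 100.
probs = np.exp(utilities) / (np.exp(utilities) + (k - 1))
scores = 100 * probs / probs.sum(axis=1, keepdims=True)

# Aggregate rank (order) and magnitude (relative score) across respondents.
mean_scores = scores.mean(axis=0)
for rank, idx in enumerate(np.argsort(mean_scores)[::-1], start=1):
    print(f"{rank}. {items[idx]}: {mean_scores[idx]:.1f}")
```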

In this MaxDiff exercise, respondents are asked which of the concepts (each described in detail before the exercise) is most important to them. At the same time, they are asked which is least important, as depicted in the illustrative choice set below. They answer those questions for multiple choice sets, seeing each item 3 or 4 times in different, apparently random combinations[2], until we have sufficient data to conduct our analysis.

[Image: illustrative MaxDiff choice set]

[1] Where k is the number of potential choices.
[2] The items in each of the choice sets appear random to the respondents, but they are not. The exercise utilizes an orthogonal design, which ensures that all of the concepts appear in combinations that serve as reliable representations of all possible combinations.
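For illustration, here is a minimal sketch of how choice sets with roughly that property might be assembled. It uses a simple greedy balancing rule so that every item appears the target number of times; a production study would use the orthogonal design described in footnote 2, typically generated by specialized survey-design software. The function and item labels are hypothetical.

```python
import random

def build_choice_sets(items, items_per_set=4, target_appearances=3, seed=42):
    """Greedily assemble choice sets, always drawing the least-shown items."""
    rng = random.Random(seed)
    counts = {item: 0 for item in items}
    n_sets = len(items) * target_appearances // items_per_set
    sets = []
    for _ in range(n_sets):
        # Sort items by how often they have appeared, breaking ties randomly,
        # then take the least-shown ones for this set.
        order = sorted(items, key=lambda it: (counts[it], rng.random()))
        chosen = order[:items_per_set]
        for it in chosen:
            counts[it] += 1
        sets.append(chosen)
    return sets

sets = build_choice_sets(["A", "B", "C", "D", "E", "F", "G", "H"])
for i, s in enumerate(sets, 1):
    print(f"Set {i}: {s}")
```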


Dino Fire

Dino serves as President, Market Research & Data Science. He seeks answers to questions about, and predictions of, consumer behavior. Previously, Dino served as Chief Science Officer at FGI Research and Analytics. He is our version of Curious George, constantly seeking a different perspective on a business opportunity, whether new product design or needs-based segmentation. If you can write an algorithm for it, Dino will become engaged. Dino spent almost a decade at Arbitron/Nielsen in his formative years. He holds a BA from Kent State and an MS from Northwestern, and seems to have a passion for all numeric expressions.