Stated “Versus” Derived Importance: A False Dichotomy. Taking a closer look at what these two methods really measure
by Keith Chrzan, Vice President, Marketing Sciences, Maritz Research and Juraj Kavecansky, PhD, Director, Marketing Sciences, Maritz Research

A perennial question among applied marketing researchers is whether to measure stated or derived importance. The debate focuses on whether stated or derived importance is a better method, whether one is more valid or more actionable than the other and so on. Much of this attention is misguided, however, resulting from the mistaken conflating of two similar, but not identical, concepts. We illustrate an under-appreciated point made by Myers and Alpert over 30 years ago – that the choice between stated and derived importance is a false dichotomy: the two methods measure different constructs, they accomplish different objectives and they fulfill different information needs. Drawing upon brand studies with choice-based derived importance models and customer satisfaction studies with regression-based derived importance models, we show that, when done properly, both stated and derived methods have solid predictive validity, albeit with different strengths and weaknesses.

Motivation – Two Case Studies

Two disguised case studies illustrate what can happen if you measure importance badly.

Suckered by Stated Importance

A service company wanted to know which aspects of its service most satisfied its customers. It asked 400 of its customers to rate the importance of each aspect on a scale from 0=Not Important At All to 10=Critically Important. When the results came back, all of the aspects had average importances in the narrow range of 7.4 to 7.8. The survey gave the service company no useful guidance about which aspects of its service customers valued more than others. It wasted tens of thousands of dollars, and some goodwill as well, because some customers expected the service provider to make changes based on the survey results.

Derived Importance Debacle

In the 1980s a medical supplies manufacturer had a 60% share of its market. A fancy consultant convinced the manufacturer that it should be using derived importance modeling to quantify the impact of attributes on customers’ choices. The consultant suggested a super-sophisticated method called multiple regression. He even put it in quotes, “multiple regression,” so that it would be clear to senior management how very cool and new and sophisticated it was to use regression instead of the stated importance methods the company had been using. Where the old stated importance methods said that ease of use was important to customers, the consultant and his regression analysis said that ease of use wasn’t important at all and that the key to incremental sales was size: the smaller the better. The manufacturer redirected new product development efforts away from easy-to-use products and toward small ones. The next year, a competitor launched an especially easy-to-use product and grew from a 15% share to 50%, almost overnight. The manufacturer, caught flat-footed, dropped from 60% to 30%, also almost overnight. Within a couple of years, the decision to emphasize size at the expense of ease of use had cost the manufacturer hundreds of millions of dollars. What happened?
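To make the mechanics concrete, here is a minimal sketch of regression-based derived importance on entirely synthetic data. The attribute names, sample size, and coefficients are hypothetical, not from the case study: we simulate respondents’ attribute ratings and an overall preference score, then fit a multiple regression whose slopes serve as the “derived” importances.

```python
# Sketch of regression-based derived importance (synthetic data).
# Attribute names and effect sizes are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 400  # hypothetical number of respondents

# Hypothetical attribute ratings on a 0-10 scale
ease_of_use = rng.uniform(0, 10, n)
compactness = rng.uniform(0, 10, n)
price_value = rng.uniform(0, 10, n)

# Simulated overall preference, driven mostly by ease of use
overall = (0.6 * ease_of_use + 0.3 * compactness
           + 0.1 * price_value + rng.normal(0, 1, n))

# Multiple regression via ordinary least squares
X = np.column_stack([ease_of_use, compactness, price_value, np.ones(n)])
coefs, *_ = np.linalg.lstsq(X, overall, rcond=None)

# The fitted slopes are the derived importances
for name, b in zip(["ease of use", "compactness", "price value"], coefs[:3]):
    print(f"{name}: {b:.2f}")
```

One well-known hazard of this approach: when attribute ratings are highly correlated with one another, regression can assign a near-zero coefficient to a genuinely important attribute, which is one way a derived importance study can go as badly wrong as the one above.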

These two case studies and many others like them illustrate some of the pitfalls associated with stated and derived importance measurement.

