Recently, I spoke with Mark Sanders about his Discussion Paper for the Utrecht University School of Economics: Choice Complexity, Benchmarks and Costly Information (2017, Job Harms, Stephanie Rosenkranz, Mark Sanders).
In this study we investigate how two types of information interventions, providing a benchmark and providing costly information on option ranking, can improve decision-making in complex choices.
In our experiment subjects made a series of incentivized choices between four hypothetical financial products with multiple cost components. In the benchmark treatments one product was revealed as the average for all cost components, either in relative or absolute terms. In the costly information treatment subjects were given the option to pay a flat fee in order to have two products revealed as being suboptimal. Our results indicate that benchmarks affect decision quality, but only when presented in relative terms. In addition, we find that the effect of relative benchmarks on decision-quality increases as options become more dissimilar in terms of the number of optimal and suboptimal features.
This result suggests that benchmarks make these differences between products more salient. Furthermore, we find that decision-quality is improved by providing costly information, specifically for more similar options. Finally, we find that absolute – but not relative – benchmarks increase demand for costly information.
In sum, these results suggest that relative benchmarks can improve decision-making in complex choice environments.
Subjects had 30 seconds to complete a complex task: choosing, out of four products, the one with the lowest total costs. This was the authors' proxy for a complex financial decision. It is not only a calculation exercise: you also have to know that the management fee is to be added to the starting costs, and that the tax break can be subtracted from the monthly costs.
The table below shows the task; I calculated the bottom row, which was not shown in the experiment. Product D is the average of A, B and C. In this example, product A is the optimal choice (payoff = 5) and product C is the worst (or "suboptimal", as the paper calls it), with a payoff of -5.
I am not too sure this is a valid proxy for, say, buying a mortgage. Considerable maths skills are needed, and a good heuristic seems to be to simply look at the monthly costs. Still, perhaps this valiant effort to operationalize a complex financial decision for a lab setting with students has merit.
| Cost component | Product A | Product B | Product C | Product D |
| --- | --- | --- | --- | --- |
| Management fee (%) | 15% | 31% | 16% | 21% |
| Tax deduction (%) | 11% | 10% | 10% | 10% |
| Total cost (not shown) | 579 | 771 | 873 | 734 |
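With the totals from the bottom row, the task reduces to finding the cheapest product. A minimal sketch, using the payoffs described above (+5 for the optimal product, -5 for the worst; the paper's payoff for the two middle options is not stated in this post):

```python
# Total costs per product (bottom row of the table; computed by me,
# not shown to subjects in the experiment).
totals = {"A": 579, "B": 771, "C": 873, "D": 734}

best = min(totals, key=totals.get)   # cheapest product -> optimal choice
worst = max(totals, key=totals.get)  # most expensive product -> "suboptimal"

print(best, worst)  # A is optimal, C is worst
```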
Experimental conditions: benchmark & advice
The randomized controlled trial had a 3 x 2 factorial design. Subjects saw either no benchmark, an absolute benchmark (the average product, i.e. product D in the table above), or a relative benchmark (all values rescaled so that benchmark D equals 100).
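The relative-benchmark rescaling can be sketched as follows, assuming simple proportional rescaling per cost component (the component names and values are taken from the two rows shown in the table; the full experiment had more components):

```python
# Sketch of the relative-benchmark treatment: each cost component is
# divided by the benchmark (product D) value and multiplied by 100,
# so the benchmark reads 100 on every component by construction.
products = {
    "A": {"management_fee": 15, "tax_deduction": 11},
    "B": {"management_fee": 31, "tax_deduction": 10},
    "C": {"management_fee": 16, "tax_deduction": 10},
    "D": {"management_fee": 21, "tax_deduction": 10},  # benchmark (average product)
}

benchmark = products["D"]
relative = {
    name: {comp: round(value / benchmark[comp] * 100) for comp, value in comps.items()}
    for name, comps in products.items()
}

print(relative["D"])  # 100 on every component
```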
This was crossed with a second factor: the option to get advice, or the absence of that option. Advice in this experiment consisted of eliminating the worst option and one of the two remaining non-optimal options. So a subject choosing advice for the table above would get to see product A (optimal) and, at random, either B or D. Getting advice cost 2.5 in payoff, so the maximum payoff became 2.5, but it reduced the downside risk of picking the worst product. Subjects could still pick any of the four options, even one they were advised against.
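The elimination mechanics described above can be sketched like this (a hypothetical implementation, assuming the ranking is by total cost and the second elimination is uniformly random among the two middle options):

```python
import random

# Total costs per product (bottom row of the table above).
totals = {"A": 579, "B": 771, "C": 873, "D": 734}

ADVICE_FEE = 2.5  # flat fee, so the maximum payoff drops from 5 to 2.5


def advise(totals, rng=random):
    """Eliminate the worst product plus one random middle product."""
    ranked = sorted(totals, key=totals.get)  # cheapest (best) ... most expensive (worst)
    worst = ranked[-1]
    middle_eliminated = rng.choice(ranked[1:-1])
    return [p for p in totals if p not in {worst, middle_eliminated}]


remaining = advise(totals)
# The optimal product A always survives; the worst product C never does;
# the other survivor is B or D at random.
```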
Results: advice and relative benchmarks work
It never ceases to amaze me that papers by economists lack graphs. So I copied their table 4 into Excel and used conditional formatting to get some bars. (Aside: fortunately, the tables were included in the text and not at the end, which always annoys me too, having to flip back and forth.)
It is pretty clear that advice works: subjects respond within the time limit more often (fourth column: 82%-87% without advice vs. 92%-97% with advice). Worth mentioning: if a subject failed to respond within 30 seconds, the worst option was automatically selected for them.
Advice also leads to higher payoffs, driven both by more often picking the best option and by less often choosing the worst one.
Regression results indicate that absolute benchmarks do not affect the quality of the decisions (i.e. the payoff). Relative benchmarks do affect quality, albeit via a significant interaction effect with product dissimilarity: “In contrast, relative benchmarks do improve decision-making as options in the choice set become increasingly dissimilar in terms of the number of optimal and suboptimal attributes“.
The effects of benchmarks on decision quality are either absent (for the absolute benchmark) or not very convincing (for the relative benchmark); only the interaction with product dissimilarity is significant. I'm not too enthusiastic about the experimental task and how relevant it is to natural decisions, and the product dissimilarity effect might be more of an artefact than something real.
So: laudable and important research, but I am not yet convinced about the real-life implications. More research is needed…