The Rise of Behavioural Discrimination & Virtual Competition

The blog post Big data and first-degree price discrimination (thanks, Patricia) led me to the work of Ariel Ezrachi and Maurice Stucke. As Silvia Merler writes:

[Ezrachi and Stucke] argue that online behavioural discrimination will differ from the price discrimination we have seen in the retail world in three important respects:

  1. Big data allow the shift from third-degree, imperfect price discrimination to near perfect price discrimination;
  2. Sellers can use big data to target consumers with the right “emotional pitch” to increase overall consumption (the demand curve shifts to the right);
  3. As more online retailers personalise pricing and product offerings, it will be harder for consumers to discover a general market price and to assess their outside options, thus implying that behavioural discrimination becomes more durable.

Ezrachi and Stucke published a book in 2016: Virtual Competition (on my to read pile, reserved it at the University Library; book’s webpage also contains a lot of extra info/links).

Behavioural discrimination
I did read their paper The rise of behavioural discrimination (37 European Competition Law Review 484 (2016)).

“New dynamics that reduce our welfare? (…) Our article explores how e-commerce and the personalisation of our online environment can give rise to behavioural discrimination, a durable, more pernicious form of price discrimination.”

I. Near perfect price discrimination

Third-degree price discrimination involves charging different prices to different groups. The price can depend, among other things, on your location (i.e. where you live), your age, or your sex. Cinemas, bus services, and restaurants, for example, may charge adults higher prices than children, students, or senior citizens.

By contrast, in this article, our focus is on the possible shift to perfect, or first-degree, price discrimination—where firms can identify and charge for each individual the most he or she is willing to pay, i.e. the reservation price.

“Big Data, learning by doing, and the scale of experiments come into play to better approximate your reservation price.”

“In this data-driven economy, the algorithm—to maximise profitability—will estimate the likelihood of our shopping elsewhere or being aware of better deals and accordingly provide us with a convincing sales pitch.” (e.g. coupons and promotion codes for customers more sensitive to outside options, i.e. price-sensitive customers who are likely to compare options, the more sophisticated consumers; naive consumers can be exploited more efficiently).
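The targeting logic in that quote can be sketched as a toy rule (entirely my own illustration; the scoring proxy, thresholds, and discount schedule are invented, not from Ezrachi and Stucke):

```python
# Toy sketch of behavioural discrimination via personalised coupons.
# A seller scores each customer's likelihood of checking outside options
# and offers a discount only to the price-sensitive ones, so naive
# customers keep paying the list price. All names and numbers are invented.

def estimated_switch_probability(comparison_site_visits: int,
                                 past_coupon_redemptions: int) -> float:
    """Crude proxy: comparison-site visits and past coupon use suggest
    a customer who is aware of better deals elsewhere."""
    score = 0.15 * comparison_site_visits + 0.10 * past_coupon_redemptions
    return min(score, 1.0)

def offered_price(list_price: float, p_switch: float,
                  max_discount: float = 0.20) -> float:
    """The discount scales with the risk of losing the sale; customers
    unlikely to shop around are simply charged the list price."""
    if p_switch < 0.3:  # 'naive' segment: no coupon needed
        return list_price
    return list_price * (1 - max_discount * p_switch)

savvy = offered_price(100.0, estimated_switch_probability(5, 3))  # gets a coupon
naive = offered_price(100.0, estimated_switch_probability(0, 0))  # pays list price
```

The point of the sketch is only the asymmetry: the same list price yields different effective prices depending on the customer's estimated awareness of outside options.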

II. Shifting the demand curve to the right

Sellers using our personal data to induce us to buy more products or services than
we otherwise would have purchased.

A few consumer biases, which firms may exploit to promote consumption:

  • Use of decoys
  • Price steering, e.g. On Orbitz, Mac Users Steered to Pricier Hotels
  • Increasing complexity; facilitate consumer error or bias and manipulate consumer demand to their advantage (…) companies can, by designing the number and types of options they offer, better exploit consumers’ cognitive overload. In increasing complexity, the firms can also increase consumers’ search and switching costs, thereby reducing the visibility (and attraction) of outside options, and giving them more latitude to exploit consumers.
  • Imperfect willpower: “framing effects” (how the issue is worded or framed) do matter. Credit cards are one example. Here they cite a Dutch study, The abolition of the No-discrimination Rule, from 2000 (!) with only N=150 consumers surveyed (!). Dutch merchants could impose surcharges or offer discounts based on how the customer was going to pay. Of the consumers surveyed, 74% thought it (very) bad if a merchant asked for a surcharge for using a credit card. But when asked about a merchant offering a cash discount, only 49% thought it (very) bad. A weak spot in an excellent paper.

The road to near-perfect behavioural discrimination will be paved with personalised coupons and promotions: the less price-sensitive online customers may not care as much if others are getting promotional codes, coupons, and so on, as long as the list price does not increase. (p.488)


Another way to frame behavioural discrimination in a palatable manner is to ascribe the pricing deviations to shifting market forces. Few people pay the same price for corporate stock. They accept that the pricing differences are responsive to market changes in supply and demand (dynamic pricing) rather than price discrimination (differential pricing). So once consumers accept that prices change rapidly (such as airfare, hotels, etc.), they have lower expectations of price uniformity among competitors. One hotel may be charging a higher price because of its supply of rooms (rather than discriminating against that particular user). (…) Thus, we may not know when pricing is dynamic, discriminatory, or both.


III. The durability of behavioural discrimination

it will be harder to know what others see. (…) As personalised offerings increase, search costs will also increase for consumers seeking to identify the “true” market price.

Behavioural discrimination—while not always possible—could occur more often than we expect. Furthermore, as we shift more of our activities to a controlled online ecosystem, it is likely to intensify.

The power to discriminate may be curtailed by possible pushback from consumers (I personally doubt it).

Price comparison websites may foster, rather than foil, behavioural discrimination, and switching costs may be higher than one assumes, despite perceived competition being only a click away. (From the footnotes related to this quote: as more consumers rely on (and trust) an intermediary to deliver the best results (whether relevant results to a search query or an array of goods and services), the less interested they become in multi-homing—that is, in checking the availability of products and prices elsewhere. And: many users indicated that when a search result fails to meet their expectations they will “try to change the search query—not the search engine.”)


IV. The welfare effects of behavioural discrimination

sellers can manipulate our environment to increase overall consumption, without necessarily increasing our welfare.

Once one accounts for the consumer perspective, the social welfare perspective, and the limited likelihood of total welfare increasing, behavioural discrimination is likely a toxic combination. Moreover, behavioural discrimination may blur into actual discrimination due to the limits and costs of refined aggregation.

The worrying thing is that we (and the enforcers) may not even know that we are being discriminated against. Under the old competitive paradigm, one might suspect one was discriminated against if access was inexplicably denied (e.g. restaurants for “whites only”) or was charged a higher price based on this single variable. Under the new paradigm, users may not detect the small but statistically significant change in targeted advertisements (or advertised rates).



As pricing norms change, price and behavioural discrimination eventually may be accepted as the new normal. Just as we have accepted (or become resigned to) the quality degradation of air travel, and the rise of airline fees—from luggage to printing boarding passes—our future norms may well include online segmentation and price discrimination.

The costs can be significant. The new paradigm of behavioural discrimination affects not only our pocketbook but our social environment, trust in firms and the marketplace, personal autonomy, privacy and well-being.


Some other relevant links:

Why controllers compromise on their fiduciary duties: EEG evidence on the role of the human mirror neuron system

Why controllers compromise on their fiduciary duties: EEG evidence on the role of the human mirror neuron system – Philip I. Eskenazi, Frank G.H. Hartmann & Wim J.R. Rietdijk. Accounting, Organizations and Society 50 (2016): 41-50.


Business unit (BU) controllers play a fiduciary role to ensure the integrity of financial reporting. However, they often face social pressure from their BU managers to misreport. Drawing on the literature on the human mirror neuron system, this paper investigates whether controllers’ ability to withstand such pressure has a neurobiological basis. We expect that mirror neuron system functionality determines controllers’ inclination to succumb to social pressure exerted by self-interested managers to engage in misreporting.

We measure mirror neuron system functionality using electroencephalographic (EEG) data from 29 professional controllers during an emotional expressions observation task. The controllers’ inclination to misreport was measured using scenarios in which controllers were being pressed by their manager to misreport.

We find a positive association between controllers’ mirror neuron system functionality and their inclination to yield to managerial pressure. In line with our expectation, we find that this association existed specifically for scenarios in which managers pressed their controllers out of personal rather than organizational interests. We conclude that BU controllers’ neurobiological characteristics are involved in financial misreporting behavior and discuss the implications for accounting research and practice.

Hypothesis: For BU [Business Unit] controllers, we expected that hMNS [human mirror neuron system] functionality predicts controllers’ vulnerability to the social pressure to misreport exerted by BU managers.

An important characteristic of the role of BU controllers is the combination of local (to support their BU managers in operational and strategic decision making) and functional (fiduciary duty) responsibilities.

Method: an N=29 study with 3 scenarios × 2 contexts (managers’ personal/self-interest vs. organizational interest).

EEG: Individual levels of hMNS functionality can be observed in electroencephalogram (EEG) recordings of brain activity (…) The associated weakening of the EEG signal is called mu suppression. Mu suppression has been shown to be a robust and valid indicator of hMNS functionality (…) lower values indicate more “mirroring”, associated with higher levels of sensitivity to others’ emotions.
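As background, a mu-suppression index of this kind is typically a log-ratio of mu-band (8-13 Hz) power between an observation condition and a baseline. A minimal sketch of that computation (my illustration of the general method, not the authors' exact pipeline; the synthetic data and parameters are invented):

```python
# Sketch of a mu-suppression index: compare mu-band (8-13 Hz) EEG power
# during emotion observation against a baseline. A log-ratio below zero
# means the mu rhythm was suppressed, i.e. more "mirroring".
import numpy as np

def band_power(signal: np.ndarray, fs: float,
               band: tuple = (8.0, 13.0)) -> float:
    """Mean spectral power inside the mu band via a plain periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def mu_suppression(observation: np.ndarray, baseline: np.ndarray,
                   fs: float) -> float:
    """log(observation power / baseline power); lower = more mirroring."""
    return np.log(band_power(observation, fs) / band_power(baseline, fs))

# Synthetic demo: a 10 Hz rhythm whose amplitude halves during observation.
fs = 256.0
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
observation = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
index = mu_suppression(observation, baseline, fs)  # negative: suppression
```

In practice mu suppression is computed over sensorimotor electrodes and averaged across trials, but the log-ratio idea is the same.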

For the example scenario below, the correlation between cooperation and mu suppression was r = .406, p = .029.


Result: Our findings indicate a strong association between hMNS functionality and controllers’ inclination to yield to BU managers’ pressure to misreport when this pressure stems from BU managers’ personal interests rather than from managers’ concerns with organizational interests.

our study suggests that emotional influence may cause excessive alignment between the interests pursued by the BU manager and those served by the reporting behaviors of the BU controller. In designing internal control structures, organizations need to be aware of the reporting risks associated with the expansion of “business partner” controllers.

Low Interest Rates and Risk Taking: Evidence from Individual Investment Decisions

Low Interest Rates and Risk Taking: Evidence from Individual Investment Decisions (July 2017) Chen Lian, Yueran Ma, and Carmen Wang. SSRN version. My summary is from earlier, 2016 version.


In recent years, interest rates reached historic lows in many countries. We document that individual investors “reach for yield,” that is, have a greater appetite for risk taking when interest rates are low. Using an investment experiment holding fixed risk premia and risks, we show that low interest rates lead to significantly higher allocations to risky assets, among MTurk subjects and HBS MBAs. This behavior cannot be easily explained by conventional portfolio choice theory or by institutional frictions. We then propose and test explanations related to investor psychology. We also present complementary evidence using historical data on household investment decisions.

We provide evidence that individual investors “reach for yield”, that is, have a greater appetite for risk taking in low interest rate environment (…) We find significantly higher allocations to risky assets in the low rate condition.

Experiments (N=400) with 2 groups, allocating $100,000 between:
[group 1] risk-free = 5% vs risky asset = 10%
[group 2] risk-free = 1% vs risky asset = 6% (investment horizon = 1 year).
We show that individuals demonstrate a stronger preference for risky assets in their investment decisions when the risk-free rate is low. (…) The difference is about 7 to 9 percentage points, on a basis of roughly 60% allocations to the risky asset.


Why? (what mechanisms?)

  • People may form reference points for investment returns: “we find that there is significant reaching for yield behavior when interest rates are below 3%, whereas investment decisions are not significantly different when interest rates are above this level. This cut-off seems consistent with the level of interest rates that most participants are used to prior to recent years.”
  • Salience of the higher average returns on the risky asset in different interest rate environments. Most simply, 6% average returns relative to 1% risk-free returns is more salient than 10% average returns relative to 5% risk-free returns. Reaching for yield behavior is dampened if investment returns are completely framed in gross terms (e.g. instead of saying 5% returns, we say that one will get 1.05 units for every unit of investment).
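As a quick sanity check of the design (my own sketch; the condition numbers come from the summary above, and the gross-return conversion follows the last bullet):

```python
# The two experimental conditions from Lian, Ma & Wang: the risk premium
# (risky minus risk-free return) is identical across conditions, so
# classical portfolio theory predicts the same allocation in both groups.
# Any allocation gap must come from the *level* of rates.

conditions = {
    "group 1 (high rates)": {"risk_free": 0.05, "risky": 0.10},
    "group 2 (low rates)":  {"risk_free": 0.01, "risky": 0.06},
}

# Excess return of the risky asset is 5 percentage points in both groups.
premiums = {name: c["risky"] - c["risk_free"] for name, c in conditions.items()}

# The gross-return framing that dampens reaching for yield,
# e.g. 5% -> 1.05 units per unit invested.
gross = {name: {k: round(1 + v, 2) for k, v in c.items()}
         for name, c in conditions.items()}
```

With identical premiums, the 7-9 percentage point allocation gap the authors report is evidence against the conventional portfolio-choice explanation.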


In the 2017 version they made the graph that I had made myself in my 2016 summary:


Section 5: Suggestive Evidence from Observational Data: the data suggest that portfolio shares of stocks and flows into risky assets increase (while portfolio shares of safe assets and flows into deposits fall) when interest rates decrease. (p.28)

  • In terms of magnitude, a one percentage point decrease in interest rates is associated with about 1.5 percentage points increase in allocations to stocks and a similar size fall in allocations to “cash”.
  • Interestingly, the magnitude of allocations’ response to interest rates seems to be similar in the experiment and in the observational data. (p.29)
  • We see that across different data sources, decreases in interest rates are associated with flows into risky assets and out of safe interest-bearing assets. (p30)
  • we hold the findings in this section 5 to be merely suggestive and complementary to our experimental evidence, yet we are intrigued that data across several different sources show consistent patterns.


Other interesting bits:

  • P1: A number of papers also provide empirical evidence that banks, money market mutual funds, and corporate bond mutual funds invest in riskier assets when interest rates are low (Maddaloni and Peydró, 2011; Jiménez, Ongena, Peydró, and Saurina, 2014; Chodorow-Reich, 2014; Hanson and Stein, 2015; Choi and Kronlund, 2015; Di Maggio and Kacperczyk, 2016).
  • Footnote 1, p1: The “reaching for yield” behavior we study in this paper, most precisely, is that people invest more in risky assets when interest rates are low, holding constant the risks and excess returns of risky assets.
  • Footnote 3, p5: For example, Di Maggio and Kacperczyk (2016) and Choi and Kronlund (2015) show that money market mutual funds and corporate bond mutual funds that reach for yield get larger inflows, especially when interest rates are near zero. These inflows most likely come from yield-seeking end investors. It seems plausible that households’ yield-seeking behavior could be an important cause of some financial institutions reaching for yield.
  • P5: our evidence on risk taking and the interest rate environment may also have implications for security design and consumer protection, as households’ biases could be exploited by institutions and asset managers that highlight returns and shroud risks (Célérier and Vallée, 2016).
  • P10-11: In our data, Harvard Business School MBAs and MTurk workers reach for yield to a similar degree. Nor do we find that reaching for yield declines with wealth, investment experience, or education among MTurk workers, or with investment and work experience in finance among MBAs.
  • P32 The impact of the interest rate environment on investor behavior could have important implications for connections between key macroeconomic issues and capital market dynamics and financial stability.

Overcoming Negative Media Coverage: Does Government Communication Matter?

Liu, Brooke Fisher, J. Suzanne Horsley, and Kaifeng Yang. “Overcoming negative media coverage: Does government communication matter?.” Journal of Public Administration Research and Theory (2012): 597-621


Public administration scholars often note that government should engage in more effective external communication to improve citizen trust and maintain political legitimacy. An important part of this belief is that more effective communication can lead to more favorable media coverage that ultimately shapes citizen trust in government. However, the link between government communication and media coverage remains empirically untested.

Through a survey of 881 government and business communicators, this study tests the relationship between external communication activities and media coverage.

The study shows that government organizations report being less likely to have favorable news coverage than their private counterparts, but most government organizations do report that their media coverage is favorable. Moreover, the results show that active media interaction, organizational support for communication, and adequate communication budget are associated with reporting more favorable coverage. In comparison, a different set of variables, except adequate communication budget, are found to affect whether business organizations report having more favorable media coverage.

Empirical research on the effects of government communication
p598: “Given the importance and challenges, it is crucial for public administration scholars to more rigorously study government communication and its impact on media coverage and, in turn, citizen trust.”

it is reasonable to state that the mechanisms linking external communication and government performance have not been mapped out with empirical evidence.

The purpose of this study is two-fold:

  1. To identify the types of government communication activities
  2. To test how the activities affect perceived media coverage

p604: “The survey consisted of 68 questions (…) the dependent variable is measured by three dimensions: the extent to which the media coverage is perceived as favorable, accurate, and fair.”

Media more negative on government
p599 “results of a survey of government and business communicators found that government communicators reported being covered more frequently and more negatively than business communicators (Liu, Horsley, and Levenshus 2010).”

Government communication professionals feel they receive more unfavorable (and less favorable) press than business communication professionals; see Table 4.

To further understand the dependent variable, table 4 presents the responses’ detail distribution in government and business subsamples. Note that 9 is the scale midpoint depicting a neutral evaluation. Consistent with the t-test, higher percentages of business communicators (as opposed to their government counterparts) had responses higher than 9. Among government respondents, although very few of them reported extremely low values (3, 4, and 5), 15.3% of them reported that on average their organization had experienced unfavorable media coverage in the past six months. In contrast, 65.3% of government respondents reported favorable media coverage.


Positive effect media interaction for government
p609: The results show that media interaction for government (Model 2, Beta = 0.29, p < .001) does lead to positive media coverage. Media Interaction is the composite of: Write news releases and advisories, Hold news conferences, Conduct media interviews, Respond to media inquiries, Pitch stories to the media, and Track media clips.

Media interaction (news releases, news conferences, media interviews, responding to media inquiries, pitching stories to the media, and tracking media clips) is found to positively affect media coverage for the government subsample, but no such relationship is found for the business subsample (p612)

Measuring the effects of enforcement by the Belastingdienst

In the last 2016 issue of the Tijdschrift voor Toezicht, three employees of the Belastingdienst (the Dutch tax administration) write about effect measurement (“an indispensable element of ‘good supervision’”): “Central to this (descriptive and exploratory) article is the question of how the Belastingdienst measures the effects of its enforcement and supervision activities and what the challenges are.” (p9)

Het meten van effecten van de handhaving door de Belastingdienst (2016) Sjoerd Goslinga, Maarten Siglé and Lisette van der Hel [pdf]

“Effect measurement assesses whether carrying out the enforcement activities has actually influenced the determinants of compliance and whether this in turn has had an effect on compliance,” write Goslinga et al.

For example: a tax-return campaign to increase citizens’ compliance (filing on time, before 1 April) through public information.


‘Outcome’ (= effect) represents, in this effect chain, the ultimate impact of the Belastingdienst’s activities on its strategic goal: compliance. ‘Output’ (= result), by contrast, is what the Belastingdienst’s efforts have produced (such as the number of letters sent or audits carried out). Carrying out enforcement activities is also described as an intervention. In terms of the effect chain, this concerns input, process, and output.

The authors are honest (and realistic) about the state of affairs in effect measurement:


In our view, the core of the problem is that tax administrations are simply not used to measuring effects, but mostly limit themselves to output, because output is usually easier to establish than effects.

The authors list five challenges for effect measurement:

  1. Making explicit how activities (the deployment of people and resources) contribute to achieving the objectives, for example with a goal tree.
  2. Finding the right underlying causes of (non-)compliance.
  3. Measuring the effects of preventive activities: “thinking about new kinds of indicators, which seem further removed from the output indicators that tax administrations used to steer on heavily.”
  4. Setting up methodologically sound research.

    To determine to what extent the Belastingdienst’s efforts have been decisive for that level of compliance, it is necessary to compare it with the level of compliance in a situation in which those efforts had not been made (the counterfactual). (…) The ideal research design is the so-called randomized controlled trial, or controlled field experiment. (p26)

    There are many conceivable situations in which it is not possible to achieve a higher level of research validity; the first two levels can then certainly be of added value. (p25)

  5. Organizational embedding of effect measurement.

    A challenge for many supervisory authorities is to embed effect measurement in the organization in a structural way and to make it part of the way of working.


Effect measurement is a continuous process, because achieving the (general) policy objective involves a ‘sustainable’ change in the behavior of taxpayers and safeguarding the continuity of tax revenues.

Spanish regulation for labeling of financial products: a behavioral-experimental analysis

Spanish regulation for labeling of financial products: a behavioral-experimental analysis – Y Gómez, V Martínez-Molés, J Vila – Economia Politica, 2016 [Pdf].


This paper assesses the impact of the Spanish Ministry of Economy and Competitiveness’ new regulation for financial product labeling (BOE Order ECC/2316/2015, Economy and Competitiveness Ministry, Spain, 2015).

We design and conduct an economic experiment where subjects make risky investment decisions under three different treatments: a control group where subjects have only objective information about the key features of the products they must select and two treatment groups introducing visual labels resembling the labels required under the new Spanish regulation. The results of the experiment are analyzed within the framework of rank-dependent utility theory.

While visual labels do not change the utility function of the subjects, they do significantly affect the subjects’ weighting functions. The introduction of numerical and color-coded labels significantly increases the concavity of the weighting functions and increases pessimism and risk-aversion in cases where the probability of obtaining the best outcome is high.

Labels widen the difference between real subjects’ behavior and that of the perfectly rational agents described by expected utility theory. Consequently, our empirical findings raise doubts as to whether the new regulation actually achieves its objectives.

The regulation seeks to empower retail investors by enhancing their understanding of financial products. Introducing the visual labels, however, seemingly increases the differences between actual risk levels and the decision weights applied by subjects when making decisions.

Moreover, labels increase investors’ pessimism and risk-aversion when the best outcome is likely and fail to alter investors’ risk-aversion when the worst outcome is likely.
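For readers who, like me, find the method heavy going: in rank-dependent utility the outcomes are weighted not by their raw probabilities but by decision weights derived from a probability weighting function w. A standard textbook formulation (my own sketch of the general framework, not the authors' exact specification):

```latex
% Rank the outcomes x_1 > x_2 > ... > x_n with probabilities p_1, ..., p_n.
% Rank-dependent utility replaces raw probabilities with decision weights:
\mathrm{RDU} = \sum_{i=1}^{n} \pi_i \, u(x_i),
\qquad
\pi_i = w\!\Big(\sum_{j \le i} p_j\Big) - w\!\Big(\sum_{j < i} p_j\Big).
% A common parametric weighting function (Tversky & Kahneman, 1992):
w(p) = \frac{p^{\gamma}}{\big(p^{\gamma} + (1-p)^{\gamma}\big)^{1/\gamma}},
\quad 0 < \gamma \le 1.
```

With w(p) = p this collapses to expected utility; the further w bends away from the diagonal, the more subjects' decision weights diverge from the actual probabilities, which is the sense in which the paper finds that labels make decisions less rational.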


The method was at times too complicated for me (utility and weighting functions), but the outcomes are interesting nonetheless:

  • Consequently, our empirical findings raise doubts as to whether the new regulation actually achieves its objectives.
  • In summary, visual labels affect subjects’ understanding of risk levels. Visual labels cause subjects’ understanding to diverge from that of perfectly rational agents. Furthermore, labels make subjects more risk averse in cases where the probability of the best output is high.
  • The behavioral experiment presented in this paper shows that the labels proposed under the new regulation are seemingly a long way from achieving their goal. Taking decisions made by the rational agents described in rational choice theory as a benchmark, our experiment shows that both graphical and numerical labels actually worsen subjects’ decision-making. Introducing labels makes retail investors’ decisions less rational.
  • The practitioners claimed that introducing labels has increased the perception of risk associated with the safest products (for instance, bank deposits), mainly among investors with low financial literacy.

Disclosures and warnings are often employed as the solution for every market failure. It is important, and good, that such interventions are also measured and assessed on their merits. The proof of the pudding is in the eating.

Minimum Payments and Debt Paydown in Consumer Credit Cards

Ben Keys and Jialan Wang have a working paper called Minimum Payments and Debt Paydown in Consumer Credit Cards. Most of my summary below consists of sentences copied from the paper.


Using a dataset covering one quarter of the U.S. general-purpose credit card market, we document that 29% of accounts regularly make payments at or near the minimum payment. We exploit changes in issuers’ minimum payment formulas to distinguish between liquidity constraints and anchoring as explanations for the prevalence of near-minimum payments. At least 10% of all accounts respond more to the formula changes than expected based on liquidity constraints alone, representing a lower bound on the role of anchoring.

Using a back-of-envelope calculation, we estimate that anchoring consumers would save at least $570 million per year in interest charges if all issuers adopted the highest observed minimum payment formula in our sample.

Disclosures implemented by the CARD Act, an example of one potential policy solution to anchoring, resulted in fewer than 1% of accounts adopting an alternative suggested payment. Our results show that the design and salience of contract terms in credit products have significant impacts on household balance sheets.

Keys and Wang position their paper as “the first empirical study to estimate the economic significance of anchoring in the credit card market”: “Because the minimum payment is a lower bound on the optimal payment amount for the vast majority of consumers, anchoring would downwardly bias payment amounts and lead to suboptimally high debt levels, lower average consumption, and greater consumption volatility for affected consumers.”

They used the CFPB Credit Card Database (CCDB), which covers February 2008 to December 2013; the issuers in the full dataset comprise over 85% of credit card industry balances. Based on a 1% random sample with about 40 million observations, they analysed three questions:

  1. Who pays the minimum? “We find that 29% of accounts pay exactly [9%] or close to (i.e. within $50 of) [20%] the minimum in most months. (…) Either many consumers are liquidity constrained at amounts that happen to be near the minimum, or repayment decisions are influenced by anchoring.” In the 1970s, typical minimum payments were about 5% of the outstanding balance. By the 2000s, the average minimum payment had fallen to 2%.

    Payment behavior is highly persistent over time both within and across accounts, yet it is only weakly correlated with traditional proxies for liquidity constraints.

  2. Minimal payments due to anchoring? “Taking advantage of the fact that several issuers changed their minimum payment formulas during the sample period allows us to estimate the fraction of anchoring consumers by measuring payments before and after formula changes. Using a difference-in-differences approach we find that 9 to 20% of all accounts changed their payments by more than the mechanical effect alone.” At least 22% of accounts paid close to the minimum and at least 9% of all accounts anchor to the minimum payment. The estimated range is between 22% and 38%. Notably, the behavioral response is consistent, yielding a significant fraction of anchoring consumers in response to both minimum payment increases and decreases. Consumers’ repayment choices are sensitive to changes in minimum payment formulas.
  3. Did the CARD Act nudge work? “Nudges” that encourage higher payments: they measured the effect of one such disclosure required by the Credit Card Accountability Responsibility and Disclosure (CARD) Act of 2009. “The disclosure was mandated on more than half of all statements, and presents a calculation of the payment needed to amortize the outstanding balance in three years.” Figure 7 below shows what the disclosure looks like. “Fewer than 1% of accounts adopt the three-year repayment amount (…) a prominent policy change aimed at de-biasing consumers failed to yield a large economic effect relative to the influence of anchoring.”
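The difference-in-differences idea behind question 2 can be sketched with invented numbers (illustrative only; these are not the paper's data):

```python
# Difference-in-differences sketch of the Keys & Wang identification idea:
# when an issuer changes its minimum payment formula, the *mechanical*
# effect moves liquidity-constrained minimum-payers by the formula change
# itself; any payment response beyond the common trend is attributed to
# anchoring. All numbers below are invented for illustration.

# Share of accounts paying at/near the minimum, before and after a formula
# increase, for a treated issuer and an untreated comparison issuer.
treated_before, treated_after = 0.29, 0.24
control_before, control_after = 0.29, 0.28

did = (treated_after - treated_before) - (control_after - control_before)
# did < 0: near-minimum behavior moved with the formula change beyond the
# common trend, consistent with a fraction of accounts anchoring.
anchoring_response = abs(did)
```

The paper's actual estimate nets out the mechanical effect much more carefully; this only shows the subtraction of the comparison group's trend.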


We interpret the fraction of accounts that adopt the three-year repayment amount as an estimate of the ability for mandated disclosure to establish new anchors for consumer payments. The regulation specified that consumers who paid their balances in full for two months in a row and those whose minimum payments are higher than the three-year repayment amount are exempt from the disclosures.

Panel B of Figure 8 (see below) presents the difference-in-differences results around the implementation date. There are no pre-trends in the period prior to the implementation of the disclosure, in large part because very few consumers actively chose the three-year repayment amount in the absence of the disclosure.


In the five months following the CARD Act, we observe a sharp increase in the share of accounts paying the three-year disclosure amount. Although the economic impact is small, with treatment effects of less than 1%, the effect is statistically significant. Another trend visible in the figure is a deterioration of the effect of the disclosure over time. One reason for the decline in the disclosure’s effect could be habituation as consumers become accustomed to seeing the disclosure and “tune out” after its novelty wears off. We use this medium-run effect of 0.5% as the benchmark estimate of the disclosure’s overall impact.

Economic significance
Assuming that the 0.5% of consumers who adopt the three-year payment amount would have otherwise made the minimum payment, we find that the disclosures led to a $0.18 per month increase in payments averaged across all accounts. We estimate that the disclosures saved consumers $62 million in interest charges in 2013.

If the disclosures had instead caused all anchoring consumers (estimated range between 22% and 38%) to move from the minimum payment to the three-year payment amount, we find that the interest savings in 2013 would have been two orders of magnitude larger, between $2.7 and $4.7 billion. The effect of the disclosures is substantially smaller than the economic role of anchoring.
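The “two orders of magnitude” comparison is easy to check against the figures quoted above:

```python
# Back-of-envelope check using only the figures from the text above.
actual_savings = 62e6            # estimated interest savings from the disclosure, 2013
counterfactual_low = 2.7e9       # if all anchoring consumers (22% estimate) had moved
counterfactual_high = 4.7e9      # if all anchoring consumers (38% estimate) had moved

ratio_low = counterfactual_low / actual_savings    # ~44x
ratio_high = counterfactual_high / actual_savings  # ~76x
print(f"counterfactual is {ratio_low:.0f}x to {ratio_high:.0f}x larger")
```

So 44x to 76x: not literally 100x, but indeed roughly two orders of magnitude.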

The modest effects we document of the CARD Act disclosures illustrate the challenges of changing real-world behavior using traditional forms of disclosure.

The answers to the 3 questions:

  1. Who pays the minimum? 29%
  2. Minimal payments due to anchoring? 22% – 38%
  3. Did the CARD Act nudge work? Yes, but very little (<1%)

From advert to action: behavioural insights into the advertising of financial products

The Financial Conduct Authority (FCA) published Occasional Paper No. 26 on April 12th 2017: From advert to action: behavioural insights into the advertising of financial products. It was written by Paul Adams and Laura Smart.

Laura Smart also wrote an Insight (FCA’s term for a blogpost, I suppose): Economical with the truth: three ways behavioural science can help to spot a misleading advert. And there is an infographic.

And on June 29th, they have a nice event for regulated firms. Experts Rory Sutherland and Joe Gladstone will present, as will Laura Smart and the FCA Financial Promotions team.


How are we affected by financial advertising? What do we pay attention to and when might we be misled? We explore the science of advertising to answer these questions. Building on earlier FCA work into behavioural biases, we summarise a large body of academic literature to explore the mechanisms behind consumer attention, understanding, and behaviour. We build this into a framework for understanding how consumers process information in the form of advertisements, divided into three stages: See, Interpret and Act. We then apply our findings in a novel setting: explaining what the science says about when an advert may be unclear, unfair or misleading.

In See, we find that attention may be predicted by the relative salience of information and is also affected by consumers’ motivation and intentions; for example, those searching for a house are more likely to notice mortgage deals.

In Interpret, we find that certain ways of presenting information, particularly those which make use of behavioural biases or which involve percentages, may impede understanding and have the potential to mislead consumers in certain circumstances.

In Act, we see that consumers may be influenced into action through techniques which encourage reliance on heuristics or emotion, rather than reason, and that this may cause problems.


What is advertising for? (p6)

  • Marketing professionals point to the role of advertising in changing customers’ preferences or improving their brand recognition
  • Psychologists and behavioural scientists argue that advertising aims to prime potential customers to buy products when opportunity presents itself.
  • Economic approach
    • persuasive: advertising altered consumers’ tastes and created (potentially spurious) product differentiation and brand loyalty.
    • informative: advertising helped to solve the problem that it is costly for consumers to search for products, by providing information directly and efficiently.
    • complementary to the advertised product: it does not change views or provide information, but simply enhances the existing features of a product.
  • Traditional approach: the AIDA model (attention, interest, desire and action), which is said to “require a high level of cognitive involvement, which does not necessarily concur with the behaviour we see”.
  • Behavioural approach (used by this FCA paper): how advertising draws on inbuilt psychological mechanisms, invokes our emotions, changes our preferences and invites automatic responses, as well as tells us a story.


How do we process adverts?

  1. See: getting our attention
    • Salience (“bottom-up attention”): size, colour, incongruities, pictures, music, language (e.g. personalised, or containing signal words such as “danger” or “warning” [Wogalter et al (2002) Research-based guidelines for warning design and evaluation])
    • Motivation (“top-down attention”): people are also affected by their current circumstances; what they are thinking and feeling at the time they come across an advert. This highlights the importance of considering context and possible effects of priming in assessing consumer responses.
  2. Interpret: reaching an understanding
    • Numbers: “People are highly likely to make systematic errors when processing numbers”, especially with percentages. Or availability bias. “When it comes to communicating risk, comprehension may be reduced still further”. I fully endorse the FCA’s recommendation of the work of David Spiegelhalter (@d_spiegel) and Gerd Gigerenzer (see this book review for a summary of Simple Heuristics That Make Us Smart).
    • Framing: such as playing to loss aversion, tinkering with the choice-set (e.g. decoy effect), defaults (save lives), anchoring, and drip pricing:

      “Another way to present costs in a way that makes them seem less unattractive is to present the first cost and then add additional or optional costs later (such as adding sales fees, platform fees and termination fees for investments after presenting the initial cost; OFT, 2010, Advertising of Prices). Because the customer is already psychologically invested in the purchase by this point, they are less likely to back out when the further costs appear.”

    • Words and truth. I had never heard of Gricean Maxims, named after Paul Grice, who described conversational implicature in a piece called Logic and Conversation (1975).

      Omissions and caveats which lead to false impressions are often called “pragmatic implications” (see Gricean Maxims box). Common examples include:
      * two juxtaposed phrases which imply a causal relationship: “You want only the best. Buy brand X”,
      * hedge words such as “may”,
      * comparative adjectives: “Gives you more rewards”, and
      * piecemeal survey results: “Better than Competitor A on price, better than Competitor B on coverage”.
      It may be helpful to consider pragmatic implications in understanding what consumers take away from advertisements and to pay attention not only to what is said, but also how it is said. Even if the words are literally true, the message that the customer takes away could be incorrect.

      Adams and Smart conclude on #2 Interpret: “Techniques such as framing and pragmatic implications affect what consumers take away from an advertisement, which may be a different impression from what the words literally say.” (p26)

  3. Act: being influenced
    Consumers may be influenced to purchase products through appeals to emotion or the use of principles of influence, such as reciprocity or scarcity.

    • Emotion (“affect”). Possible counters:
      • “cooling on” periods, customers need to actively do something to complete the decision and activate the product. This provides a pressure-free period in which the customer can stop and think
      • pop-up warnings during purchase processes, (…) To test comprehension directly, it would even be possible to ask mandatory questions to check that a customer has understood what they are buying.
    • Influence. The Cialdini 6: Liking, Authority, Scarcity, Social Proof, Consistency, Reciprocity.
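To illustrate the point about percentages under Interpret: Gigerenzer’s “natural frequencies” restate the same fact as counts of people instead of conditional percentages, which makes the arithmetic doable in your head. A hypothetical example (all rates invented for illustration):

```python
# Hypothetical risk-communication example in the spirit of Gigerenzer's
# natural frequencies: the same fact stated as percentages vs. as counts.
prevalence = 0.01      # 1% of customers experience the event
sensitivity = 0.90     # the warning flags 90% of those who do
false_positive = 0.09  # but also flags 9% of those who don't

# Percentage framing: P(event | flagged), via Bayes' rule -- hard to do mentally.
p_flagged = prevalence * sensitivity + (1 - prevalence) * false_positive
ppv = prevalence * sensitivity / p_flagged

# Natural-frequency framing: out of 1000 customers...
n = 1000
true_hits = prevalence * sensitivity * n            # 9 flagged who have the event
false_hits = (1 - prevalence) * false_positive * n  # ~89 flagged who don't
print(f"PPV: {ppv:.0%}  ({true_hits:.0f} out of {true_hits + false_hits:.0f} flagged)")
```

“9 out of roughly 98 flagged customers” is far easier to grasp than the chain of conditional percentages, even though both encode the same information.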


On Targeting & Timing

Targeting: now easier than ever to target adverts to consumers based on data about them. Two important considerations:

  1. As choices become more tailored to the individual’s current preferences, the individual is less likely to discover new preferences. They may even develop a distorted knowledge of what products are actually available.
  2. Data about consumers may be used to target those in particular circumstances, for example, those in debt or those who enjoy gambling, which could be detrimental to customers who are less able to ignore poor value or risky offers (Ronson, 2005 Who killed Richard Cullen?).

On Timing (p15): I tweeted the Ellering 2016 reference. The paper also quotes Dan Zarrella who, in my experience, also has good, data-backed advice on how to get retweets or mail opens. His blog is not updated much though.


FCA regulation of Financial Promotions

“The overarching principle [of the FCA] is that financial promotions must be clear, fair and not misleading.” (p4)

“The FCA already requires that all relevant product information, including risk warnings and key exclusions, is sufficiently prominent” (p15)

The FCA published guidance on social media and customer communications in 2015 which explained that shorter adverts, including tweets, should still be standalone compliant (clear, fair and not misleading) without the need for users to click on a link to see balancing information or caveats (Financial Conduct Authority, FG15/4, 2015). However, as part of the Smarter Consumer Communications initiative, the FCA is undertaking further work to explore alternative approaches to firms’ communications through social media (Financial Conduct Authority, 2016). (p24)


Where to draw a line? What is acceptable advertising?
When is a sell too hard? When does selling become misselling? (In Dutch: the difference between “verleiding” & “misleiding”).

“What is the difference between unethical and ethical advertising? Unethical advertising uses falsehoods to deceive the public; ethical advertising uses truth to deceive the public.”

Vilhjalmur Stefansson, explorer and ethnologist (p16)

It is difficult to find a suitable way to measure when techniques might be unfair. Is it better to measure consumer understanding of their products, the decision making process of the consumer or the literal interpretation of the rules? In practice, it might be appropriate to take all of these factors into account. (p33)

For example, the UK Advertising Standards Authority recently adjudicated a case in which a company sent out marketing material in white windowed envelopes and found that the envelope breached the CAP code by making it insufficiently clear that the direct mailing was a marketing communication before opening it (Advertising Standards Authority, 2017). (p13)

Dispositional Greed (paper)

In 2015, two papers came out with exactly the same title: Dispositional Greed. Here, I focus on the paper by Seuntjens et al. (Tilburg). The other one is by two Belgian scholars (Ghent). Fortunately, the results were similar: a concurrent replication.


Greed is an important motive: it is seen as both productive (a source of ambition; the motor of the economy) and destructive (undermining social relationships; the cause of the late 2000s financial crisis). However, relatively little is known about what greed is and does.

This article reports on 5 studies that develop and test the 7-item Dispositional Greed Scale (DGS). Study 1 (including 4 separate samples from 2 different countries, total N = 6092) provides evidence for the construct and discriminant validity of the DGS in terms of positive correlations with maximization, self-interest, envy, materialism, and impulsiveness, and negative correlations with self-control and life satisfaction. Study 2 (N = 290) presents further evidence for discriminant validity, finding that the DGS predicts greedy behavioral tendencies over and above materialism. Furthermore, the DGS predicts economic behavior: greedy people allocate more money to themselves in dictator games (Study 3, N = 300) and ultimatum games (Study 4, N = 603), and take more in a resource dilemma (Study 5, N = 305).

These findings shed light on what greed is and does, how people differ in greed, and how greed can be measured. In addition, they show the importance of greed in economic behavior and provide directions for future studies.

To compare, the Belgian paper had two studies, N=317 “fully employed US citizens” and N=218 US MTurkers.


Further Research
In Study 1, the authors found an “unexpected result, namely the absence of a relationship between greed and risk taking”.


Future research could also focus on the observation that some groups of people appeared to score higher on dispositional greed than others. For example, we found that younger people were greedier than older people. (…)

We also found relationships between greed and levels of education and between greed and gender, but, interestingly, we did not find relationships with income or religiosity.

The Belgian study concluded “Greed is higher in men, professionals in financial sectors and non-religious people”.

As expected, men (M = 3.72, SD = 1.27) are more greedy than women (M = 3.40, SD = 1.13, t(216) = 1.99, p < .05). By regrouping the 20 potential industries, we found that respondents working in financial and management sectors (M = 3.84, SD = 1.28) are significantly greedier than those working in services, or the arts (M = 3.22, SD = 1.20, t(114) = 2.70, p < .01). Whether greedy people are more likely to start a financial job or whether financial jobs trigger a greedy disposition is not clear from our results and requires further research. [Krekels & Pandelaere, 2015]

DGS is Dutch
The Dispositional Greed Scale (DGS) consists of these 7 items, with responses on a 5-point scale (Sterk oneens | Oneens | Niet oneens/niet eens | Eens | Sterk eens — i.e. strongly disagree to strongly agree):

  • Ik wil altijd meer (I always want more)
  • Ik ben eigenlijk wel hebberig (Actually, I am kind of greedy)
  • Geld heb je nooit genoeg (One can never have enough money)
  • Zodra ik iets heb denk ik alweer aan het volgende dat ik wil hebben (As soon as I have something, I am already thinking of the next thing I want)
  • Het maakt niet uit hoeveel ik heb, ik ben nooit echt tevreden (It doesn’t matter how much I have, I’m never completely satisfied)
  • Mijn levensmotto is ‘meer is beter’ (My life motto is ‘more is better’)
  • Ik denk dat ik nooit genoeg spullen kan hebben (I don’t think I can ever have enough stuff)

Choice Complexity, Benchmarks and Costly Information

Recently, I spoke with Mark Sanders about his Discussion Paper for the Utrecht University School of Economics: Choice Complexity, Benchmarks and Costly Information (2017, Job Harms, Stephanie Rosenkranz, Mark Sanders).


In this study we investigate how two types of information interventions, providing a benchmark and providing costly information on option ranking, can improve decision-making in complex choices.

In our experiment subjects made a series of incentivized choices between four hypothetical financial products with multiple cost components. In the benchmark treatments one product was revealed as the average for all cost components, either in relative or absolute terms. In the costly information treatment subjects were given the option to pay a flat fee in order to have two products revealed as being suboptimal. Our results indicate that benchmarks affect decision quality, but only when presented in relative terms. In addition, we find that the effect of relative benchmarks on decision-quality increases as options become more dissimilar in terms of the number of optimal and suboptimal features.

This result suggests that benchmarks make these differences between products more salient. Furthermore, we find that decision-quality is improved by providing costly information, specifically for more similar options. Finally, we find that absolute – but not relative – benchmarks increase demand for costly information.

In sum, these results suggest that relative benchmarks can improve decision-making in complex choice environments.

Complex task
Subjects had 30 seconds to complete a complex task: to choose, out of four products, the one with the lowest costs. This was the authors’ proxy for a complex financial decision. It is not only a calculating exercise: you also have to know that the management fee is to be added to the starting costs, and that the tax break can be subtracted from the monthly costs.

The table below shows the task; I calculated the bottom row, which was not shown in the experiment. Product D is the average of A, B and C. In this example, product A is the optimal choice (payoff = 5). Product C is worst (or “suboptimal”, as it is called in the paper) with a -5 payoff.

I am not too sure this is a valid proxy for e.g. buying a mortgage. Considerable maths skills are needed. A good heuristic seems to be to look at the monthly costs. So perhaps this valiant effort to operationalise a complex financial decision for a lab setting with students has merit.

                        A     B     C     D
Starting costs          87    92    103   94
Monthly costs           35    49    64    49
Maturity costs          72    91    2     52
Management fee (%)      15%   31%   16%   21%
Tax deduction (%)       11%   10%   10%   10%
Total cost (not shown)  579   771   873   734

Experimental conditions: benchmark & advice
The randomized controlled trial had a 3 x 2 factorial design. Either no benchmark, an absolute benchmark (the average product, i.e. product D in the table above), or a relative benchmark (all values for benchmark D are rescaled to 100).
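A minimal sketch of what the relative-benchmark rescaling looks like, using the values from the task table above (the on-screen presentation in the experiment may of course have differed):

```python
# Rescale each product's cost components so that benchmark D = 100 on every
# attribute (values from the task table above; the paper's exact on-screen
# format may differ).
products = {
    "A": [87, 35, 72, 15, 11],
    "B": [92, 49, 91, 31, 10],
    "C": [103, 64, 2, 16, 10],
    "D": [94, 49, 52, 21, 10],  # the absolute benchmark: the average product
}

relative = {
    name: [round(v / d * 100, 1) for v, d in zip(vals, products["D"])]
    for name, vals in products.items()
}
print(relative["A"])  # e.g. starting costs: 87/94*100 = approx. 92.6
print(relative["D"])  # the benchmark itself is 100 on every attribute
```

The rescaling makes it immediately visible on which attributes a product beats the benchmark (below 100) or loses to it (above 100), which may be why only the relative benchmark helped.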

This was crossed with: the option to get advice, or the absence of this option. Advice in this experiment consisted of eliminating the worst option and 1 of the 2 remaining non-optimal options. So a subject choosing advice for the table above would get to see product A (optimal) and (randomly) either B or D. Getting advice costs 2.5 in payoff, so the maximum payoff becomes 2.5. But it reduces the downside risk of picking the worst product. Subjects could still pick any of the 4 options, even when advised against.
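The advice mechanism, as I read it, can be sketched as follows (totals from the table above; the paper’s exact implementation may differ):

```python
import random

# Sketch of the advice mechanism as described above: eliminate the worst
# option plus one randomly chosen non-optimal option (my reading of the
# design; the paper's implementation details may differ).
costs = {"A": 579, "B": 771, "C": 873, "D": 734}  # total costs from the table
ADVICE_FEE = 2.5                                   # payoff cost of buying advice

def buy_advice(costs, rng=random):
    """Return the options left after advice: the best plus one middle option."""
    ranked = sorted(costs, key=costs.get)           # cheapest first
    worst = ranked[-1]
    eliminated = {worst, rng.choice(ranked[1:-1])}  # worst + one middle option
    return [p for p in ranked if p not in eliminated]

remaining = buy_advice(costs)
# Subject now chooses among ["A", "D"] or ["A", "B"];
# picking the optimal product yields 5 - 2.5 = 2.5 after the advice fee.
```

Note that advice can never eliminate the optimal product, so it caps the downside at the price of capping the upside.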

Results: advice and relative benchmarks work
It never ceases to amaze me that papers by economists always lack graphs. So I copied their table 4 to Excel and used conditional formatting to get some bars. (Aside: fortunately, the tables were included in the text and not at the end; that always annoys me too, having to flip back and forth.)

Pretty clear that advice works: subjects respond within the time limit more often (fourth column; 82%-87% without advice vs 92%-97% with advice). Good to mention: if you fail to respond within 30 seconds, the worst option is automatically selected for you.

Advice also leads to higher pay-offs, which is both driven by more often picking the best choice and by less often choosing the worst option.

Regression results indicate that absolute benchmarks do not affect the quality of the decisions (i.e. the payoff). Relative benchmarks do affect the quality, albeit via a significant interaction effect with product dissimilarity: “In contrast, relative benchmarks do improve decision-making as options in the choice set become increasingly dissimilar in terms of the number of optimal and suboptimal attributes”.


The effects of benchmarks on decision quality are either absent (for the absolute benchmark) or not very strong/convincing (for the relative benchmark); only the interaction with product dissimilarity is significant. I’m not too enthusiastic about the experimental task and how relevant it is to natural decisions, and the product dissimilarity might also be more of an artefact than something real.

So: laudable and important research, not too convinced yet about the real-life implications. More research is needed…