From the Abstract:
This article provides the first public evidence of the power of dark patterns. It discusses the results of the authors’ large-scale experiment in which a representative sample of American consumers were randomly assigned to a control group, a group that was exposed to mild dark patterns, or a group that was exposed to aggressive dark patterns. All groups were told they had been automatically enrolled in an identity theft protection plan, and the experimental manipulation varied what acts were necessary for consumers to decline the plan.
Users in the mild dark pattern condition were more than twice as likely to remain enrolled as those assigned to the control group, and users in the aggressive dark pattern condition were almost four times as likely to remain enrolled in the program. There were two other striking findings. First, whereas aggressive dark patterns generated a powerful backlash among consumers, mild dark patterns did not – suggesting that firms employing them generate substantial profits. Second, less educated subjects were significantly more susceptible to mild dark patterns than their well-educated counterparts. Both findings suggest that there is a particularly powerful case for legal interventions to curtail the use of mild dark patterns.
The article proposes a quantitative bright-line rule for identifying impermissible dark patterns.
Dark patterns are presumably proliferating because firms’ secret and proprietary A-B testing has revealed them to be profit maximizing. We show how similar A-B testing can be used to identify those dark patterns that are so manipulative that they ought to be deemed unlawful.
[almost all of the text below is quoted from the paper]
The first wave of scholarship created a useful taxonomy (see, e.g., Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites); the natural follow-up question is: “how effective are dark patterns?”
Sometimes the law enacts outright prohibitions with substantial penalties. Other times it creates cooling off periods that cannot be waived. A key question we address is what online tactics are egregious enough to warrant this kind of special skepticism.
Our bottom line is that dark patterns are strikingly effective in getting consumers to do what they would not do when confronted with more neutral user interfaces.
Now that scholars can test dark patterns, we can isolate causation in a way that has heretofore been impossible in the brick-and-mortar world.
Mild dark patterns are worse
We provide powerful evidence that dosage matters – aggressive dark patterns generate a powerful customer backlash. Mild dark patterns usually do not, and therefore, counterintuitively, the strongest case for regulation and other legal intervention concerns subtle uses of dark patterns.
In the wild
From the section on dark patterns in the wild, an example from Nazi Germany:
Funny footnote 19: We apologize for the small font size necessary to squeeze the table onto a page. We promise the table is not intended to be a dark pattern – we actually want you to read the categories and examples closely.
An Experimental Test of the Effectiveness of Dark Patterns
we designed a bait-and-switch scenario. After a survey on attitudes about privacy, we would deceive those adults into believing, at the end of the survey, that we had signed them up for a costly identity-theft protection service and would give them the opportunity to opt out. We would randomly vary whether the opportunity to opt out was unconstrained (control) or impeded by different dosages of dark patterns (mild or aggressive).
Participants were then allowed to either accept or decline the data protection program. But the steps that were required to do so varied by the level of the dark pattern manipulation.
- Control: “Accept” or “Decline” (1 question needed to decline)
- Mild: “Accept and continue (recommended)” or “Other options”. If “Other options” is chosen: “I do not want to protect my data or credit history” (confirmshaming), then “Why not?” (3 questions needed to decline)
- Aggressive: same as Mild, followed by three extra screens with questions and information (nudging to accept), followed by a trick question: “Are you sure you want to decline this free identity theft protection?” The two options were “No, cancel” and “Yes.” (7 questions needed to decline)
Final sample of 1,963 participants. We pre-registered the experiment with AsPredicted.org [but unlike osf.io, I don’t know where the public link to the pre-registration is].
Acceptance rates by Condition (I made a graph from data in Table 3 and Appendix A):
This data demonstrates that seemingly minor dark patterns can have relatively large effects on consumer choices. In the control condition, participants were able to choose “Accept” or “Decline.” Changing these options to “Accept and continue (recommended)” and “Other options,” with the former pre-selected, all by itself nearly doubled the percentage of respondents accepting the program.
(Stakes were also experimentally varied in two random conditions, but “Rates of acceptance were not related to stakes”).
Potential Repercussions of Deploying Dark Patterns
The mild dark pattern condition more than doubled the acceptance rate and did not prompt discernible emotional backlash. (…) But overexposure to dark patterns can irritate people.
People who declined the program reported more displeasure (M=3.50, SD=1.99) than those who accepted it (M=3.21, SD=1.78). There was also an interaction effect (among those who declined, respondents in the aggressive dark pattern condition were more aggravated than those in the control and mild conditions; the latter two did not differ). Data to make a graph were not available, but the pattern looks something like this:
Also: Only 9 participants dropped out in the mild condition, while 65 dropped out at some point during the aggressive condition.
What kinds of people are more vulnerable to being manipulated by dark patterns?
They cite this book that Cass Sunstein recommends (& I am currently reading): “In other contexts, scholars have found that people with fewer financial resources have more difficulty overcoming administrative burdens than people with more resources.”
More extraverted people and less conscientious people are more likely to accept the program. However, both traits fail to predict behavior (accepting or declining the program) in the aggressive condition.
To summarize the data we have collected and analyzed here, it appears that dark patterns can be very effective in prompting consumers to select terms that substantially benefit firms. These dark patterns might involve getting consumers to sign up for expensive goods or services they do not particularly want, as in our study and several real-world examples discussed in the previous part, or they might involve efforts to get consumers to surrender personal information – a phenomenon we did not test but that also is prevalent in ecommerce.
Are Dark Patterns Unlawful?
There appears to be a substantial market failure where dark patterns are concerned – what is good for ecommerce profits is bad for consumers, and plausibly for the economy as a whole. Legal intervention is justified.
Can the legal system draw stable lines between permissible (and constitutionally protected) commercial persuasion and impermissible dark patterns?
Some pertinent examples:
Federal Trade Commission v. AMG Capital Management (2018). This payday lender made “renewal” the default option and buried information about how to switch to the “decline to renew” option in a wall of text. Judge O’Scannlain was not impressed with protestations that the lender’s disclosures were “technically correct.” In the Court’s view, “the F.T.C. Act’s consumer-friendly standard does not require only technical accuracy…. Consumers acting reasonably under the circumstances – here, by looking to the terms of the Loan Note to understand their obligations – likely could be deceived by the representations made here. Therefore, we agree with the Commission that the Loan Note was deceptive.”
FTC v. Cyberspace.com (2006). In that case, a company mailed personal checks to potential customers, and the fine print on the back of those checks indicated that by cashing the check the consumers were signing up for a monthly subscription that would entitle them to internet access. Hundreds of thousands of consumers and small businesses cashed the checks, but less than one percent of them ever utilized the defendant’s internet access service.
F.T.C. v. Commerce Planet, Inc. (2012). It was unfair conduct for material language to appear in blue font against a blue background on an “otherwise busy” web page.
If it appears that a large number of consumers are being dark patterned into a service they do not want (as occurred in our experiment) then this evidence strongly supports a conclusion that the tactics used to produce this assent are deceptive practices in trade.
In 1980, the F.T.C. laid out the test that is still currently utilized to find an act or practice “unfair.” Under this test (codified in section 5(n) of the F.T.C. Act), an unfair trade practice is one that:
- causes or is likely to cause substantial injury to consumers (“an injury may be sufficiently substantial, however, if it does a small harm to a large number of people.”)
- is not reasonably avoidable by consumers themselves
- is not outweighed by countervailing benefits to consumers or competition
Where does one draw the line?
An “I know it when I see it” approach to dark patterns creates uncertainty, notice problems, and raises the specter of unequal enforcement.
A quantitative approach to identifying dark patterns could be workable. More precisely, where the kind of A/B testing that we discuss above reveals that a particular interface design or option set more than doubles the percentage of users who wind up “consenting” to engage in a consumer transaction, the company practice at issue could be deemed presumptively an unfair or deceptive practice in trade.
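The proposed bright-line rule is straightforward to operationalize. A minimal sketch follows; the function name and the sample counts are my own illustrative assumptions, not figures from the paper:

```python
# Illustrative check of the proposed bright-line rule: an interface is
# presumptively an unfair or deceptive practice if A/B testing shows it
# more than doubles the rate of users "consenting" versus a neutral control.

def presumptively_unlawful(control_accepts: int, control_n: int,
                           treated_accepts: int, treated_n: int) -> bool:
    """True if the treated interface more than doubles the acceptance rate."""
    control_rate = control_accepts / control_n
    treated_rate = treated_accepts / treated_n
    return treated_rate > 2 * control_rate

# Hypothetical counts in the spirit of the study (~11% control vs ~26% mild):
print(presumptively_unlawful(36, 327, 85, 327))  # True: 26% > 2 * 11%
print(presumptively_unlawful(36, 327, 60, 327))  # False: 18% < 22%
```

A real enforcement version would also need a significance test and, as the authors note below, a defensible neutral baseline for the control arm.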
“More likely than not” rule
The “more likely than not” standard is widely employed in civil litigation over torts and other kinds of liability, and it could work well in this context too, ideally with the F.T.C. and academics working hand in hand to replicate high-quality research that quantifies the effects of particular manipulations.
As a statistical matter, each individual research subject in our study who was signed up for the data protection plan was more likely than not to have done so because of the dark pattern rather than because of underlying demand for the service being offered [Baseline/control: 11% signup; mild dark patterns: +15 percentage points; aggressive dark patterns: +30 percentage points. The “more likely than not” standard is met because each increment (15 and 30 points) exceeds the 11% baseline].
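The attribution logic behind that claim can be written out explicitly. A minimal sketch using the rates reported above (the function name is mine, not the paper's):

```python
# Among people who accepted under a dark pattern condition, what fraction
# accepted *because of* the dark pattern rather than underlying demand?
# Rates from the summary above: control 11%, mild 26%, aggressive 41%.

def fraction_attributable(control_rate: float, treated_rate: float) -> float:
    """Share of treated-group accepters attributable to the manipulation:
    excess acceptance over baseline, as a share of all treated acceptance."""
    return (treated_rate - control_rate) / treated_rate

for name, rate in [("mild", 0.26), ("aggressive", 0.41)]:
    share = fraction_attributable(0.11, rate)
    # "More likely than not" is satisfied whenever this share exceeds 0.5,
    # which happens exactly when the treated rate more than doubles control.
    print(f"{name}: {share:.0%} of accepters attributable")
```

For the mild condition this works out to roughly 58% (0.15/0.26) and for the aggressive condition roughly 73% (0.30/0.41), both over the 50% threshold.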
Admittedly, one challenge here is to develop a neutral baseline against which the A/B testing can occur.
In embracing a “more likely than not” rule, we do not mean to rule out the development of multifactor standards that can supplement a rule-based approach. A multi-factor test for dark patterns could look to considerations such as:
- (a) evidence of a defendant’s malicious intent or knowledge of detrimental aspects of the user interface’s design
- (b) whether vulnerable populations are particularly susceptible to the dark pattern
- (c) the magnitude of the costs and benefits produced by the dark pattern (But: if it turned out that consumers were happy ex post with a good or service that a dark pattern manipulated them into obtaining, this would be revealing evidence cutting against liability for the seller. The ends could justify the means for a firm that genuinely was trying to trick consumers for their own good.)
- (d) experimental evidence about how effective the dark pattern was compared to a neutral choice architecture

Such a test would be a good starting point.
Ben-Shahar and Strahilevitz (2017)
To the extent that there is any doubt about a new technique, companies can always examine their own design choices (e.g. with beta-testing) and see whether any cross the line. See Ben-Shahar, Omri, and Lior Jacob Strahilevitz, “Interpreting Contracts via Surveys and Experiments,” 92 N.Y.U. L. Rev. 1753 (2017), who argue “Let majorities of survey respondents decide” [on interpreting the language of contracts] because it is pragmatic, normative (the respondents’ interpretation beats the judge’s), and doctrinal.
“survey interpretation method”—in which interpretation disputes are resolved through large surveys of representative respondents, by choosing the meaning that a majority supports
In trademark law, some courts apply a “15% rule” – holding that consumer confusion exists if more than 15% of surveyed consumers are confused by the mark. Thresholds on the order of 20% are prevalent in false-advertising cases.
This is a very low threshold, suitable perhaps to the protection of a proprietary mark, but not to the interpretation of contractual language. We would be very reluctant to characterize contractual language as ambiguous just because 15% or 20% of a representative sample regard it as such.
Bipartisan legislation is presently pending in the Senate to prohibit dark patterns (see Big Tech’s ‘dark patterns’ could be outlawed under new Senate bill). Senate Bill 1084 Deceptive Experiences To Online Users Reduction Act (DETOUR) does two things:
- Encourages the creation of a standard-setting industry body
- Directs this industry body to “define conduct that does not have the purpose or substantial effect of subverting or impairing user autonomy, decision making, or choice”
The legislation recognizes that this open-ended prohibition of dark patterns may leave a lot of discretion in the hands of the Commission (for related moves by the Dutch regulators, see: AFM invites businesses to respond to its ‘Principles for Choice Architecture’ and ACM marks the boundaries of misleading practices online).
Luguri & Strahilevitz are not worried about the line-drawing problem: Dark patterns were developed through A-B testing, and A-B testing can be used to develop relatively clear and predictable rules about what is permissible.
Not all social proof is a dark pattern
In our revised taxonomy we have been more careful than the existing literature to indicate that social proof (activity messages and testimonials) and urgency (low stock / high demand / limited time messages) are only dark patterns insofar as the information conveyed is false or misleading. If a consumer is happy with a product and provides a favorable quote about it, it isn’t a dark pattern to use that quote in online marketing.
I like that the authors made this point; in the 11k-scrape paper, I felt all social proof was unfairly categorized (or insinuated to be) a dark pattern.
Strategies like toying with emotion, as well as confirmshaming, may be hard to restrict under current doctrine given firms’ speech interests (i.e. free speech).
Nagging presents perhaps the thorniest type of dark pattern from a First Amendment perspective. CNN’s web site employs a nagging dark pattern, one that regularly asks users whether they wish to turn on notifications.
Given the potential uncertainty over whether nagging and other annoying-but-nondeceptive forms of dark patterns can be punished, the most sensible strategy for people interested in curtailing these dark patterns is to push on the contractual lever: consent secured via those tactics is voidable.
Three important contributions:
- There is now an academic paper that demonstrates the effectiveness of various dark patterns
- The available experimental evidence helpfully points towards a bright line rule (“More likely than not” rule) that can be employed to address the aforementioned boundary question (Where does one draw the line?)
- The F.T.C. is beginning to combat dark patterns with some success, at least in court. The courts have established some key and promising benchmarks already