Imagine waking up to find your entire Instagram history – every photo, comment, and like – has been harvested without your consent to train AI models. This isn't dystopian fiction; it's the reality that confronted millions of Meta users in June 2024.
Earlier this year, Meta quietly announced a seismic shift in its data policies. Starting June 26, 2024, the tech giant would begin scraping user data from all public Meta accounts to train its AI models. No permission asked. No easy way out.
For those in the EU and UK, robust data privacy laws offered a lifeline – a clear option in your settings to say "no thanks" to this invasive data grab. But for users elsewhere? Meta deployed a labyrinth of obscure settings and buried terms, making opting out nearly impossible. The only recourse? A complex filing process, hidden deep within the platform's Settings, to submit a request exercising your "Right to object".
This deliberate burying of important information, and the friction forced onto users, is a prime example of Deceptive UX patterns.
What is Deceptive UX?
Deceptive UX, also known as "dark patterns," refers to design techniques that manipulate or mislead users into making decisions they might not otherwise make. It's the digital equivalent of fine print, hidden clauses, and high-pressure sales tactics. In Meta's case, it manifested as crucial privacy information tucked away in obscure corners of the app, making it challenging for users to protect their data.
If you've ever found yourself subscribed to an unwanted newsletter, struggled to cancel a subscription, or been nudged into sharing more personal information than you intended, you've encountered Deceptive UX. These practices aren't just annoying – they're a breach of user trust and, in many cases, ethical boundaries.
While all of this is relevant to you as a consumer, it’s even more significant if you’re a UX researcher, UX designer, or Product Manager. You're at the forefront of shaping digital experiences.
Here's the thing: the consequences for Deceptive UX are complicated in practice. While these practices should ideally be punished, many have become unfortunate industry standards. Tech giants like Google and Meta often employ such tactics and seem to get away with it. From fake low-stock warnings on e-commerce sites to misleading discount claims, these deceptive patterns are frustratingly common.
Legislation plays a big role in holding the sinners accountable – look at how everyone's reacting to the data regulations imposed by the EU. There are plenty of lawsuits ongoing in federal courts in the US too, but sometimes the businesses involved (like Amazon) are too big for fines to matter much.
That said, if you're just starting out with your product and looking to actively gain trust with your users, it's best to avoid deceptive UX patterns. We believe they won't hold up in the long run. While they might lead to short-term gains, users are becoming savvier and less tolerant of manipulative design. Brands that prioritize transparency and user empowerment are more likely to win the trust game in the long term.
In this article, we'll explore common deceptive patterns to watch out for, discuss the ethical implications of these practices, and provide practical strategies for advocating for user-centric design in your organization. Expecto Patronum!
Bad UX vs Deceptive UX
What's the difference between a design that's simply bad and one that's intentionally deceptive?
Not all bad UX is Deceptive UX. At first glance, they seem similar – they both result in poor user experiences. However, the key difference lies in intention.
Bad UX is accidental, stemming from lack of knowledge, resource constraints, or simple oversight. It's like a poorly designed tool that's frustrating to use but not intentionally so. Deceptive UX, on the other hand, is deliberate and manipulative. It's a calculated effort to exploit user psychology for the benefit of the business, often at the expense of the user's best interests.
Bad UX is like accidentally stepping on someone's foot. It's clumsy and inconvenient, but there's no malice behind it. Maybe the designer was inexperienced, or the team was working under tight deadlines. Whatever the reason, the poor user experience wasn't the goal. Here are some classic bad UX offenders:
- Confusing navigation: Remember those websites with a million tabs and dropdowns? That's Bad UX in action.
- Slow loading times: Nothing says "we don't value your time" like a website that takes ages to load.
- Inconsistent design: When every page looks like it belongs to a different website, you're in Bad UX territory.
- Lack of mobile optimization: If you need a magnifying glass to read a website on your phone, that's Bad UX.
Deceptive UX, on the other hand, is like deliberately tripping someone up. It's calculated and purposeful. The creators of the app know exactly what they're doing – they're using their understanding of human psychology and UX principles to manipulate users into actions that benefit the company. Here are some common Deceptive UX tactics:
- Hidden costs: Ever reached the checkout only to find a bunch of unexpected fees tacked on?
- Default opt-in: When you end up subscribed to marketing emails, or signed up for a "free trial" of the product, just by creating an account – because the boxes were pre-ticked and there was no clear way to opt out? That's Deceptive UX.
- Forced continuity: Those "free trials" that automatically convert to paid subscriptions? Classic Deceptive UX.
- Trick questions: "Click here if you don't want to not receive our newsletter." Confused? That's the point (see the sketch just below this list).
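To make the "default opt-in" and "trick question" items above concrete, here's a minimal sketch in TypeScript – the copy, field names, and defaults are hypothetical, not taken from any real product – contrasting a deceptive consent checkbox with an honest one:

```typescript
// A minimal sketch (hypothetical copy and field names) of the "default
// opt-in" and "trick question" patterns, modelled as plain form-field data.

type ConsentField = {
  label: string;
  checkedByDefault: boolean;
};

// Deceptive: pre-ticked box, double-negative label the user has to untangle.
const deceptiveNewsletterOptIn: ConsentField = {
  label: "Uncheck this box if you do not want to not receive our newsletter.",
  checkedByDefault: true,
};

// Honest: unchecked by default, plain language, one clear meaning.
const honestNewsletterOptIn: ConsentField = {
  label: "Yes, send me the weekly newsletter.",
  checkedByDefault: false,
};

console.log({ deceptiveNewsletterOptIn, honestNewsletterOptIn });
```

The point isn't the code itself – it's that the deceptive version makes inaction (leaving the box ticked) the consenting choice and makes the wording hard to parse, while the honest version requires an explicit, easily understood "yes".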
Now let's walk through a longer example.
Imagine you're trying to cancel your subscription on a streaming service. You click around, but can't find the cancel button. It's not under 'Account', not under 'Billing', and not even under 'Help'. After 10 minutes of frustrated searching, you finally spot it hidden in a submenu called 'Membership Status'.
This may be Bad UX. The designers probably didn't mean to make it hard – they just did a poor job organizing the site. It's annoying, but not on purpose.
Now, picture the same scenario, but the 'Cancel Subscription' button is actually in a logical place – under 'Account Settings'. But here's the twist: when you click it, instead of canceling, it takes you to a page full of "Don't leave us!" messages and special offers. The actual cancel option is a tiny, grey link at the very bottom, barely visible against the background. And if you do click it, you get a pop-up asking "Are you ABSOLUTELY SURE?" three times before it lets you cancel.
This is Deceptive UX. The designers knew exactly where to put the cancel button, but they deliberately made the process frustrating and guilt-inducing to keep you subscribed.
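In implementation terms, the difference is deliberate friction. Here's a minimal sketch (the screen titles, copy, and structure are hypothetical, not from any real streaming service) of the two cancel flows described above:

```typescript
// A minimal sketch (hypothetical screens and copy) of the two cancel
// flows described above. The deceptive version pads the path to
// cancellation with retention screens and repeated confirmations.

type Screen = {
  title: string;
  primaryAction: string; // the most prominent button on the screen
};

// Deceptive: guilt, special offers, and repetition before the request is honored.
const deceptiveCancelFlow: Screen[] = [
  { title: "Don't leave us! Here's 50% off for 3 months", primaryAction: "Keep my plan" },
  { title: "Are you ABSOLUTELY sure?", primaryAction: "Stay subscribed" },
  { title: "Your watchlist will be lost forever", primaryAction: "Stay subscribed" },
  { title: "Confirm cancellation", primaryAction: "Cancel subscription" },
];

// Honest: one clear step, with cancellation as the primary action.
const honestCancelFlow: Screen[] = [
  { title: "Cancel subscription?", primaryAction: "Cancel subscription" },
];

console.log(
  `Deceptive flow: ${deceptiveCancelFlow.length} screens; honest flow: ${honestCancelFlow.length} screen.`
);
```

Nothing here is technically broken; every screen "works". The manipulation lives entirely in how many steps stand between users and the action they came to take, and in which button is made prominent on each screen.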
8 Common types of Deceptive UX
Let’s look at the common types of Deceptive UX practices out there—collated, named and shamed by advocacy groups like the Deceptive Patterns site. We'll look at what they are, how they work on our brains, and where you're likely to encounter them in the wild. We'll also cover some real-world examples and existing legislation that's trying to keep these practices in check.
1. Forced Action
What is it: Forced action requires users to perform an undesired action to access something they want. It may be combined with other deceptive patterns like sneaking products into your cart or confusing wording to make the action seem more palatable or less noticeable.
Forced action is often carried out with “bundled consent”, in which businesses combine multiple agreements in one section, making it difficult for users to selectively give consent.
Where you’ve seen it: Remember LinkedIn's old registration process, back in the day? Classic example. Users were presented with a seemingly harmless step to "Get started by adding your email address." The 'Continue' button looked mandatory, so most users went along with it. However, this step actually granted LinkedIn access to the user's email contacts. This function WAS described on the page, but the text was low-contrast and super easy to overlook. The option to skip this step was also made visually inconspicuous.
Laws protecting users against it: For folks from the EU, GDPR says companies can't force you to agree to data collection. They need to get your clear, voluntary permission first – consent that cannot be "inferred from silence or pre-ticked boxes" and must be "clear, concise and non-disruptive." India also has extremely stringent laws against Deceptive UX patterns – including forced action, confirmshaming, disguised ads, false urgency and basket sneaking – under the Consumer Protection Act, 2019.
2. Confirmshaming
What is it: Confirmshaming emotionally manipulates users into actions they might otherwise avoid. It often appears in opt-out messages, using guilt-inducing or belittling language to make users feel bad about declining an offer.
Talk about being cyberbullied by a checkout system.
Where you’ve seen it: A really popular case of this in action was from mymedic.com, which sells first aid supplies. In 2018, their notification opt-out link was labeled "No, I don't want to stay alive" or "No, I prefer to bleed to death." You can imagine how disturbing that must be to read and actively choose, especially given that their target audience includes individuals likely exposed to trauma in their work.
3. Disguised Ads
What is it: Disguised ads blur the line between actual content and advertising, often mimicking interface elements or native content to increase click-through rates. This practice can generate more revenue for website owners and potentially more sales for advertisers.
Where you’ve seen it: Older versions of sites like Softpedia featured a lot of disguised ads. The site often displayed advertisements with prominent download buttons that closely resembled the actual download button for the desired software. Users unknowingly clicked on these ads, thinking they were downloading their intended software.
This practice exploits the user's trust in familiar interface patterns and their expectation of consistency. It also takes advantage of the user's focused attention on completing their task (e.g., downloading software), making them less likely to scrutinize each element carefully.
A more recent example is the new Instagram feed, in which sponsored posts and ads appear organically mixed in with regular posts.
Laws protecting users against it: Many countries now have regulations requiring clear labeling of advertisements. For instance, in the UK, the Advertising Standards Authority requires that marketing communications be clearly identifiable as such. India's Consumer Protection Act guidelines also ban all forms of disguised ads.
4. Comparison Prevention
What is it: Comparison prevention makes it difficult for users to compare products or services, often by presenting information in a complex or inconsistent manner. This practice exploits cognitive load, overwhelming users with information and making decision-making harder.
Where you’ve seen it: T-Mobile's plan comparison page was once reported for this reason. They bundled features differently across plans, requiring mental arithmetic to evaluate the plans against each other. Some plan details were hidden behind multiple clicks, and the lowest-priced plan was obscured at the bottom of the page. This complexity led users to choose more expensive plans out of frustration or confusion.
Laws protecting users against it: In the EU and UK, the Unfair Terms in Consumer Contracts Regulations (1999) aim to protect consumers against such practices. These regulations require clear and comprehensible contract terms, which could be applied to product comparisons.
5. Fake Scarcity
What is it: Fake scarcity creates an artificial sense of limited availability or high demand for a product or service. This tactic pressures users into quick decisions by exploiting the fear of missing out (FOMO).
Where you’ve seen it: You’ve definitely come across this on travel and hotel booking sites, or certain e-commerce stores – showing fake low-stock messages and sales numbers, creating a false impression of scarcity and popularity.
This practice leverages the scarcity principle, a cognitive bias where people assign more value to items perceived as rare or in short supply.
6. Hidden Costs
What is it: Hidden costs are extra fees that sellers don't tell you about until you're almost done buying something. By then, you've already spent time and energy on the purchase, so you're more likely to go through with it even though it costs more than you thought.
Where you’ve seen it: For example, StubHub, a company that resells tickets, was caught showing artificially low prices at first. They'd make you go through a long process before revealing the real, higher price right before you pay. Research has shown that when people didn't see the full ticket price upfront, they spent about 21% more money and were 14% more likely to buy tickets.
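To see why this works, here's a minimal sketch (the prices, fee names, and amounts are hypothetical, purely for illustration) of how a drip-priced checkout only reveals the real total after the buyer has already invested time in the flow:

```typescript
// A minimal sketch (hypothetical numbers and names) of drip pricing:
// the buyer commits based on the advertised price, and fees only appear
// at the final step of checkout.

const advertisedPrice = 50.0;   // what the listing page shows
const serviceFee = 12.5;        // revealed several steps into checkout
const processingFee = 4.5;      // revealed on the payment screen

const finalTotal = advertisedPrice + serviceFee + processingFee;

// An honest flow would show the same all-in total on the listing page.
console.log(`Advertised: $${advertisedPrice.toFixed(2)}, actually paid: $${finalTotal.toFixed(2)}`);
// Advertised: $50.00, actually paid: $67.00
```

By the time the extra fees appear, the user has already picked seats, entered their details, and mentally committed to the purchase – sunk cost does the rest.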
Laws protecting users against it: Many countries have laws requiring clear price disclosure. For example, in the EU, the Consumer Rights Directive requires traders to disclose the total price of goods or services, including all applicable fees and charges, before the consumer is bound by the contract. So does India’s Consumer Protection Act, 2019, updated in 2023.
7. Sneak into Basket
What is it: Sneaking involves deliberately hiding or delaying the presentation of important information to manipulate users into actions they might otherwise avoid. This could include adding items to a cart, subscribing to services, or agreeing to data sharing.
Where you’ve seen it: Amazon has been guilty of this, with many cases of Amazon Prime subscription renewals being added to the user's cart automatically, and the option to cancel or exit without buying the service made really difficult to find.
8. Privacy Zuckering
What is it: Privacy Zuckering, named after the Meta founder (imagine a Deceptive UX practice being your actual legacy), refers to the practice of tricking users into sharing more personal information than they intended or realized. This deceptive pattern often involves complex privacy settings, confusing language, or default options that favor data sharing.
Where you’ve seen it: Remember the Meta-Cambridge Analytica scandal? That is Privacy Zuckering at its worst.
In 2014, Cambridge Analytica orchestrated one of the biggest data breaches in tech history, harvesting millions of Facebook profiles to influence political outcomes.
The scheme unfolded through a seemingly innocent personality quiz app, "thisisyourdigitallife". While users consented to share their data for "academic use", the app sneakily collected information from their Facebook friends as well. This snowball effect resulted in a vast data pool of over 50 million profiles.
Cambridge Analytica, backed by billionaire Robert Mercer and overseen by Steve Bannon (later Trump's key adviser), weaponized this data. They built sophisticated algorithms to profile US voters and target them with tailored political ads, potentially swaying both the 2016 US Presidential election and the Brexit referendum.
The scandal exposed Facebook's lax data protection policies and raised alarming questions about the platform's role in election integrity. A 2014 contract revealed that Cambridge Analytica's parent company, SCL, had explicitly arranged for the harvesting and processing of Facebook data – a clear violation of the platform's policies.
The Ethics of Honest UX
In this section, we're put in the uncomfortable position of answering the question: why should you fight the dark arts of UX?
If the answer doesn’t seem obvious, a rereading of the 7 Harry Potter books might help.
First, we need to ask: do companies really care about honest UX? Should we, as UX professionals, care? The right answer is yes, but when you see tech giants like Meta, Google, and others using dark pattern UX without lasting damage, it makes you wonder – do people really care?
Consider how AI companies are using artists' and writers' work without compensation to build software that generates millions in revenue. Do the millions of OpenAI users care about the plagiarism it's built on? Not really. Think about how platforms like Instagram use data from your micro-interactions to bombard you with targeted ads and control your feed. Who pays for such unethical behavior and practices? Especially when these products are constantly innovating to stay ahead of legislation?
For one, Deceptive UX practices can have a significant impact on user trust and brand reputation. Companies can and do get sued for hundreds of millions of dollars for deceptive practices.
Also, users are vocal about their choices when given an option!
Apple proved this by introducing a feature called App Tracking Transparency (ATT) on iPhones in 2021. This lets users decide if they want apps to track their activity. Unsurprisingly, over 95% of users said no to tracking when given the option.
When Apple made its privacy move, Facebook wasn't happy. They complained that it would hurt small businesses that rely on targeted ads. But Apple stood firm, arguing that respecting user privacy is more important than unchecked data collection. Companies that prioritize privacy, like Apple, are likely to win more trust from their users in the long run.
According to research, 47% of UK customers who have experienced issues when attempting to unsubscribe from a brand's online service said they would never deal with that particular brand again. That's a substantial hit to customer lifetime value.
Honest UX is also about more than just design principles or business ethics. It's about the kind of digital world we want to live in. Do we want an internet that respects us as users, or one that sees us merely as resources to be exploited? Do we want technologies that enhance our lives and respect our choices, or ones that manipulate us?
As UX researchers and designers, you're ideally in the business of delivering value to customers. Yes, you're also driving company goals. But there is a unique opportunity here – and a responsibility – to shape these interactions for the better.
But individual action, while important, isn't enough. To truly combat deceptive patterns and promote honest UX, structural change is key. This means pushing for legislation that prevents deceptive practices and holds companies accountable for their UX choices.
There are already digital advocacy groups working towards this goal. Organizations like the Electronic Frontier Foundation, Consumer Reports, and Access Now provide platforms for flagging deceptive patterns and pushing for regulatory change. Next time you come across a shady CTA or website? Report it to these guys. Deceptive Patterns also maintains a public Hall of Shame for such cases – it's quite an entertaining read.
Legislation against deceptive patterns is also crucial. While some regions, like the European Union with its GDPR, have made strides in protecting user privacy and requiring clearer consent mechanisms, there's still a long way to go for the US and other nations. We need better laws that actually keep up with these practices, and hold these businesses accountable!
Here’s a collection of all the existing laws worldwide that apply to Deceptive UX malpractices.
Checklist: Is your design Deceptive UX?
One way to avoid using deceptive UX patterns is rigorous usability testing. However, smaller offenses can slip by unnoticed, especially if your test participants aren’t looking for them. Instead of just waiting for user complaints, it's better to use a step-by-step checklist to spot any unfair design choices.
Here’s a checklist, borrowed from the brilliant folks at NN/g, for making sure your design isn't Deceptive UX. It comes in very handy while evaluating a new website or prototype!
- Could users spend more or provide more of their data than they intended or needed?
- When users consent to something in exchange for a capability, product, or experience, is the exchange fair and appropriate?
- Is the information presented about each choice factually correct?
- Given how the information or options are presented, could users easily misinterpret the choices (or availability of choice)?
- Could users miss another choice in the interface (for example, because it is obscured or in a location the user might not expect)?
- Could users miss an essential piece of information that would assist them in making a choice?
- Can users quickly access all the information regarding each choice available to them?
- Can users quickly implement a choice they want to make (or do many unnecessary steps block them)?
- Are users rushed into making a decision?
- Are users unfairly pressured or emotionally manipulated when making a choice?
- Could users feel ashamed, nervous, or guilty when declining a choice?