The concept of consent may not yet be broken, but it is certainly under strain.
Nowhere was this more evident than at the IAPP ANZ Summit in late 2019, where speaker after speaker hammered another nail in the coffin of the ‘notice and consent’ model of privacy regulation.
Keynote speaker Professor Woody Hartzog spoke about how notice and consent, choice and control are attractive options, both for legislators needing to ‘do something’ about privacy and for companies tackling product design decisions, because at face value they seem empowering for consumers. But, he said, such an approach asks too much from a concept which works best in limited doses. In reality, people don’t have enough time, energy or understanding to navigate every interaction they have with technology: “Our consent cannot scale; it loses its meaning”. Illustrating his point with visuals of endless privacy control buttons and convoluted click-throughs, he concluded: “If we get our wish for more control, we get so much privacy we end up choking on it”.
Next up was Privacy Commissioner of New Zealand, John Edwards, who in a powerful call to arms for both governments and regulators to address the power asymmetry of Big Tech, warned that “the days of click to consent are numbered because it is not meaningful consent”.
And then Victorian Information Commissioner Sven Bluemmel asked whether consent in an online and hyper-connected world can ever be fully informed, or whether anyone can ever voluntarily consent when dealing with government. He posed the question: “Is consent still fit for purpose, as a tenet of privacy regulation?”
Of course you might think that for us antipodeans (and indeed for most of the rest of the non-American world), whose privacy laws have never relied wholly on a notice and consent model, criticising the business practices of Silicon Valley is a well-trodden path, leading nowhere. I’ve seen the ‘my model of privacy regulation is better than yours’ argument at countless global privacy conferences.
Except that this time it feels different. This time, the Cambridge Analytica scandal has thrown a star-spangled spanner in the works. You know the fix is in for notice and consent when even the American think-tank the Brookings Institution is arguing that this particularly American model of privacy protection should be killed off.
Is news of the Notice and Consent model’s demise premature?
But like the Monty Python character who protests “I’m not dead yet”, the regulatory model of notice and consent just won’t die.
Indeed the California Consumer Privacy Act, which commenced in January 2020, directly ties collection limitation and use limitation back to transparency: section 1798.100(b) says “A business shall not collect additional categories of personal information or use personal information collected for additional purposes without providing the consumer with notice consistent with this section”.
It’s not only California; other American legislators are similarly still focussed on the myth that transparency, and consumer controls like opting-out of the sale of personal information, can deliver privacy protection, instead of setting meaningful limits on when personal information can be collected or used in the first place.
Even the Australian consumer protection regulator, the ACCC, has proposed strengthening notice and consent provisions in the Privacy Act as a solution to the twin problems of information asymmetry and unequal bargaining power between consumers and the Big Tech digital platforms. But will more transparency really help?
Privacy academic Daniel Solove describes the idea of ‘privacy self-management’ – notice, consent, user controls, “more buttons, switches, tick boxes and toggles” – as just more homework.
What the Notice and Consent model means
OK, I hear you saying, wait up a bit. What is this notice and consent model anyway, why do we suffer through it, and what else is on offer?
I’m going to let privacy academic Dr Gabriela Zanfir-Fortuna set the scene for you:
“A ‘notice and consent’ framework puts all the burden of protecting privacy and obtaining fair use of personal data on the person concerned, who is asked to ‘agree’ to an endless text of ‘terms and conditions’ written in exemplary legalese, without actually having any sort of choice other than ‘all or nothing’ (agree to all personal data collection and use or don’t obtain access to this service or webpage).”
The alternative model of privacy regulation is to start from a point of restricting data flows (i.e. the collection, storage, use and disclosure of personal information) unless they can be justified within a framework of overarching principles like necessity, proportionality and fairness; create some public interest exemptions like law enforcement and medical research; create a ‘with consent’ exemption; build in some actionable rights (access and correction, and sometimes more); and then layer transparency over the top (ideas like privacy policies and collection notices).
Much of the developed world has omnibus privacy laws which cover wide swathes of the economy, including public sector agencies and much of the private sector. Those laws incorporate most, if not all, of the features described above.
But in the US, instead they have what is known as a sectoral law approach. They have one piece of legislation that just talks about the privacy of financial records in banks, and another one just for federal government agencies. They have a separate law about the privacy of health insurance records; another law that talks about students’ privacy; and yet another law about the privacy of video rental records. And there is an Act which protects the privacy of children online. So the US has a few privacy laws, each designed for a different sector.
But what they don’t have is one set of rules that applies to all sorts of different businesses. So as a result Big Tech – the data-gobbling tech companies like Facebook, Amazon, Alphabet (Google), Apple, Microsoft, Netflix, Uber and Airbnb – are for the most part not regulated by privacy legislation. (OK, yes, there is the new Californian law, the CCPA, which applies to all sorts of industries including tech companies, but for the most part it should have been called the “Don’t Sell My Data” Act, because that’s about all it covers; it doesn’t come close to being a principles-based privacy law like most other countries have.)
Because of this gap in privacy regulation, the default form of privacy protection for most industries in the US, including those industries which matter most in the online world, is consumer protection and trade practices law. This is where the ‘notice and consent’ model comes from. When you come at the issue of authorising data flows purely from a trade practices angle (instead of a human rights angle), the chief requirement is to ensure that contracts are not misleading or deceptive.
In other words: under the ‘notice and consent’ model, you just have to tell people up front what you are going to do with their personal information, and then you can go ahead and do it. So long as you bury somewhere in some fine print some kind of explanation of what your company is going to do with people’s data, then if people choose to buy your product or use your service anyway, well then, they must have ‘consented’ to whatever it was you said you were going to do with their data.
So what’s wrong with that?
The first problem with the ‘notice and consent’ model is that companies can bury whatever they like in those terms and conditions – because, let’s face it, almost nobody ever reads them. Rather like the Londoners who ‘consented’ to give up their first-born child when signing up for free wifi, most of us don’t read T&Cs, because they are longer than Shakespearean plays. And deliberately so: privacy notices under the US model are not about delivering transparency; they are legal tools for arse-covering.
Notice and consent just doesn’t scale. This art installation illustrates the problem, as does this video, in which the advocacy group Norwegian Consumer Council asks their Consumer Affairs Minister to go for a jog while they read her the privacy policy from her fitness tracker. The Minister manages to run 11km in the time it takes for the policy to be read out to her. When the same group tallied up the T&Cs for the apps found on an ‘average’ mobile phone, reading them took 37 hours. And that’s before you get to the quagmire posed by the Internet of Things: where on your smart toothbrush are you going to adjust your privacy settings?
Second, if a customer does read the fine print, they probably don’t understand that a phrase like ‘share data with our partners to personalise your experience’ means the kind of privacy-invasive profiling practices on which data brokers and AdTech thrive. Speaking at a webinar to mark Privacy Awareness Week in May 2020, Australian Privacy Commissioner Angelene Falk noted that the OAIC’s 2020 Community Attitudes to Privacy Survey found that only 20% of people felt confident that they understood privacy policies.
And sometimes that is deliberately so. The Notice of Filing in the OAIC’s lawsuit against Facebook for disclosing the personal information of 311,127 Australian Facebook users in the Cambridge Analytica scandal states: “The opacity of Facebook’s settings and policies hampered (Australian Facebook users) in understanding that their data was disclosed to the app. The design of the Facebook website was such that Users were unable to exercise consent or control over how their personal information was disclosed”.
Third, consumers don’t have enough power, knowledge or time to genuinely exercise what little choice or control they might be offered. There is a power imbalance between consumers and corporations; and between citizens and governments. The OAIC’s submission to the ACCC’s Digital Platforms Inquiry calls this out clearly:
“consumers may be informed and understand the inherent privacy risks of providing their personal information, but may feel resigned to consenting to the use of their information in order to access online services, as they do not consider there is any alternative. Further, while ‘consent’ is only a meaningful and effective privacy self-management tool where the individual actually has a choice and can exercise control over their personal information, studies also show that consumers rarely understand and negotiate terms of use in an online environment”.
Describing consent as a “legal fiction”, the editorial board of the New York Times nailed the pointlessness of even reading privacy policies: “Why would anyone read the terms of service when they don’t feel as though they have a choice in the first place? It’s not as though a user can call up Mark Zuckerberg and negotiate his or her own privacy policy. The ‘I agree’ button should have long ago been renamed ‘Meh, whatever’.”
As Digital Rights campaigner Sam Floreani remarked at NetThing last year, there is an element of elitism and privilege behind the very notion of notice and consent: suggesting to consumers that if they don’t like what’s happening with their privacy, they should just opt out of using Google / Facebook / Uber / etc ignores the reality that much of our civil and political life depends on or is mediated through a small number of dominant technology platforms and service providers.
Fourth, it is a fantasy to think that consumers can calculate the privacy risks arising from every single transaction they enter into, let alone whether the benefits to be obtained now will outweigh the risks to be faced later. Rachel Dixon, the Privacy and Data Protection Deputy Commissioner in Victoria, has said about the role of consent that because most data is collected during transactions where we as consumers or citizens want something, the ‘consent’ obtained is almost never fair: “There is always an inherent lack of attention paid to the downstream consequences”.
Privacy risks are usually time-shifted, and obscure. And in the context of artificial intelligence in particular, ‘consent’ can almost never be informed. Rachel Dixon again: “No matter how much you think you can explain how the AI works to a regular person … people don’t understand what they’re giving up”.
And if you don’t understand the risks, your consent will not meet the test for ‘informed’, let alone any of the other elements needed to gain a valid consent under privacy law. (To be valid under privacy law, consent must be voluntary, informed and specific, current and given by a person with capacity.)
Why can’t we be informed about the risks?
So why can’t companies and governments do a better job of explaining the risks to us? Well, because sometimes they don’t even know.
Lawyer Andrew Burt has written about how the nature of privacy risks has shifted. Where once organisations and individuals alike worried about personal information being misused or disclosed without authority, now, in this world of Big Data and machine learning, he suggests the biggest threat comes from the unintended inferences drawn from our personal information: “Once described by Supreme Court Justice Louis Brandeis as ‘the right to be let alone’, privacy is now best described as the ability to control data we cannot stop generating, giving rise to inferences we can’t predict.”
Daniel Solove says it is “nearly impossible for people to understand the full implications of providing certain pieces of data to certain entities. … Even privacy experts will not be able to predict everything that could be revealed… because data analytics often reveal insights from data that are surprising to everyone”. The benefits of ‘consenting’ are usually obvious and immediate, while the possible privacy risks are unpredictable, obscure and time-delayed.
By way of example, the public release of Strava fitness data, although de-identified, gave rise to privacy and security risks that the company itself had failed to predict. Strava is a social network of people who use wearable devices to track their movements, heart rate, calories burned and so on, and then share and compare that data with fellow fitness fanatics. After Strava released a data visualisation ‘heat map’ of one billion ‘activities’ logged through its app, an Australian university student pointed out on Twitter that the heat map could be used to locate sensitive military sites.
So if service providers cannot imagine the risks posed by the data they hold, how is a consumer expected to figure it out?
When data is combined from different sources, or taken out of context, or when information is inferred about individuals from their digital exhaust, the privacy issues move well beyond whether or not this particular app, or device, or type of data, poses a risk to the individual. We have to assess the cumulative impact. The herculean task of assessing the likely risks posed to an individual’s privacy means that notice about likely risks is impossible to deliver, and therefore informed consent is impossible to obtain.
It’s just not fair
Even if you could magically solve the problems of digital literacy, power imbalances, and the difficulties of calculating privacy risks, and deliver your consent solution at scale, the notice and consent model still suffers a terrible weakness: it’s just not fair.
At the IAPP ANZ Summit in 2019, Professor Hartzog described how notice and consent, as well as the related idea of solving privacy problems by offering more user controls, are both ways of shifting risks onto individual consumers and citizens. He has also written about the “fallacy” that it is up to us as individuals “to police Facebook and the rest of the industry”.
Even in the context of privacy laws such as ours, which do not rely entirely on consent – consent is what you need only when you can’t rely on any other ground to lawfully collect, use or disclose personal information – Australasian privacy regulators are calling time on over-reliance on consent as a mechanism, on the grounds of fairness.
NZ Privacy Commissioner John Edwards has said of consent that it asks too much of a consumer, and described it at the same 2019 conference as an “abdication of responsibility”. He has published guidance telling companies to lift their game when it comes to designing consent mechanisms, saying the practice of ‘click to consent’ is simply not good enough anymore.
Likewise, the Australian Privacy Commissioner, in the OAIC’s submission to the ACCC in response to its Consumer Loyalty review, has said that “Overreliance on consent shifts the burden to individuals to critically analyse and decide whether they should disclose their personal information in return for a service or benefit.” In a similar vein, the OAIC has also said that that burden “should not fall only on individuals, but must be supported by appropriate accountability obligations for entities, as well as other regulatory checks and balances”.
Consent should be the last resort, not the first or only choice from a menu of regulatory or design responses to privacy problems. The responsibility for protecting our privacy should fall on privacy regulators, government legislators, and organisations themselves – not on us as individual consumers or citizens.
So what are the alternatives?
Let’s turn now to some positive steps being taken to improve matters: enforcing the law we have, making transparency meaningful, and regulating for fairness.
Enforcing the law we have
This is easier said than done. Privacy regulators around the globe are under-resourced, compared with the budgets of Big Tech.
The European law, the GDPR, has already been in place for two years. It explicitly requires that personal information can only be processed (collected, used, disclosed) on one of six legal grounds, one of which is ‘with consent’. It also says that to be valid, consent must be voluntary, informed and specific: a proactive ‘yes’, with the freedom to say no (or say nothing), not bundled up with any other choices, or built into T&Cs. Yet complaints lodged by privacy advocacy body NOYB against Facebook’s ‘forced consent’ model on the first day of the GDPR’s operation are still to be ruled upon by the Irish Data Protection Commissioner. Fingers crossed.
Making transparency meaningful
I am always on the lookout for new ways to do privacy comms better. So I got all excited to see what the Brookings Institution was suggesting when it proposed making transparency “targeted and actionable”. And then I deflated again when I realised that, in effect, what the American think-tank came up with is what Australian privacy law already requires: a comprehensive privacy policy available to the public at large, and specific notices provided to the consumer at the precise point of collection. Ho hum.
Sure, there are some fun ways to deliver your privacy policy, like using graphics or animated videos. But even with pictures of cute fish, I can’t be bothered reading to the end. Because actually, as a consumer, if I am looking at this stuff at all, I just want to know (quickly, at the point in time that suits me, in language I understand, in a format that works for me) what I would not already expect, what my choices are, and what I don’t have choice about. In other words: Just tell me now if it is safe for me to proceed. Or: Just tell me if this app is safer than that other one. But because how one person might be harmed is different to how the next person might be harmed, and what I value is different to what the next customer values, and because children are different to adults, that information should be contextual for me.
European think-tank DataEthics suggests taking a layered approach, and using icons to categorise types of information. They were part of a consultation group reviewing IKEA’s new app, which promises to put customers in control of their data. Watching a video on how the app works for a fictional customer, it is encouraging to see how explanations, toggle controls, prompts and links to more information are integrated within the user experience. IKEA’s customer data promise and the app’s design were seen as innovative enough to be the subject of a presentation at the World Economic Forum in Davos earlier this year.
The IKEA app is a great example of building in privacy thinking as part of a customer-centric experience, instead of a legal compliance bolt-on after the fact. But it is not perfect. I noticed that the app still has a button which makes the customer “accept the Privacy Policy” (argh, WHY?), and DataEthics has also pointed out that IKEA uses third party tracking cookies, and stores data in the US.
The UK Government’s Behavioural Insights Team has also developed a Best Practice Guide to presenting T&Cs and privacy notices. They tested 18 different techniques to see which best improved customer understanding and engagement. (The summary version: FAQs, icons, presenting ‘just in time’ explanations in ‘short chunks’ of info, illustrations and presenting terms in a scrollable text box all work better than emojis, summary tables or expandable terms.)
But taking the IKEA app design and the UK behavioural insights research as a step forward, what next?
First, we need universal terms and icons, as well as ways to present them better, to quickly communicate privacy messages and choices to individuals. (And I mean all individuals, not just highly educated, literate adults with time to spare.)
To help consumers better understand data practices and exercise choice, the OAIC has expressed the need for “economy-wide enforceable rules, standards, binding guidance or a code”, to create “a common language in relation to privacy and personal information, which could include the use of standardised icons or phrases”. The new Consumer Data Right scheme provides a positive example of what can be achieved. The CSIRO’s Data61 has legislated power to set standards under the CDR scheme; their guidance on translating the legal elements of consent into design specifications is extremely valuable.
Other ideas for universal systems are modelled on successful designs used in safety messaging (traffic light indicators), and product labelling (star ratings, nutrition labels).
Privacy advocate Alexander Hanff has developed a proof of concept matrix of data types, internal uses and disclosure types, each one coloured red, amber or green, allowing users to click through for more information.
Researchers in privacy and trust from Carnegie Mellon University and Microsoft have presented an idea for using nutrition labelling as a model for communicating privacy messages. Having tested and rejected matrix designs as too complex, they broke their simplified label into What, How and Who: what categories of personal information are being collected; the different purposes for which the data will be used, including whether the user can restrict those purposes by opting in or out; and to whom it will be disclosed, including any choice over such disclosures. They struck difficulties: even university students didn’t understand the symbols used to illustrate opting in versus opting out.
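To make the What / How / Who idea concrete, here is a minimal sketch of how such a label might be represented and rendered as structured data. The field names and example values are entirely invented for illustration; they are not drawn from the Carnegie Mellon and Microsoft research itself.

```python
# A hypothetical What / How / Who privacy label, with invented example values.
PRIVACY_LABEL = {
    "What": ["contact details", "location", "browsing history"],  # categories collected
    "How": {
        "service delivery": "required",                 # purpose: user's choice (if any)
        "personalised advertising": "opt-out available",
    },
    "Who": {
        "analytics providers": "no choice",             # recipient: user's choice (if any)
        "advertisers": "opt-out available",
    },
}


def render_label(label: dict) -> str:
    """Render the label as the kind of quick, scannable summary a consumer could read."""
    lines = []
    for section, content in label.items():
        lines.append(section.upper())
        if isinstance(content, dict):
            lines.extend(f"  {item}: {choice}" for item, choice in content.items())
        else:
            lines.extend(f"  {item}" for item in content)
    return "\n".join(lines)


print(render_label(PRIVACY_LABEL))
```

Even in this toy form, the label only works if every service uses the same sections and the same vocabulary – which is why the standardisation question keeps coming up.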
The idea of nutrition labels received a more recent boost from Ghostery President Jeremy Tillman, who argued that the US government should develop uniform labelling: “What consumers need is a privacy nutrition label – something quick and scannable they can look at to see what the privacy impact of a digital service is before they use it, the same way they would look at nutrition info before eating a candy bar.”
But do you see the problem here? In this scenario, the consumer checks the nutrition label but then still eats the candy bar. If consumers made only rational decisions, they would put down the candy bar and bite into an apple instead. But we all know that’s not how humans necessarily behave. Sometimes we crave instant gratification, whether that is a sugar rush or a game of Candy Crush.
And nutrition labels still depend on us humans to stop and read them, over and over and over again, and to use them to compare one product to another. Ugh.
The most innovative idea I have seen in this space comes from Data61, who propose a machine-readable solution. Senior experimental scientists Alexander Krumpholz and Raj Gaire wrote: “Wouldn’t it be nice if we could specify our general privacy preferences in our devices, have them check privacy policies when we sign up for apps, and warn us if the agreements overstep?”
Their proposal is modelled on Creative Commons icons, which are universally agreed, legally binding, clear and machine-readable. A privacy equivalent would likewise need legally binding standards and universal icons, but it would work more efficiently than star ratings, traffic light matrices or nutrition labels alone. They propose ‘Privacy Commons’ classifications covering what they call Collection, Protection and Spread: the categories of personal information collected, the data security techniques applied, and to whom the personal information will be disclosed. In my view they also need to include the purposes for which the personal information will be used within the organisation, but the idea is a great start.
It would be a brilliant time-saver: get consumers to think deeply, once, about what they are trying to achieve in terms of their personal privacy goals, and then automate the legwork of reading and comparing privacy policies against those goals. I would love to see a legal and technology framework which allowed an individual to set their own privacy risk profile (e.g. add me to a mailing list is OK, never share my home address, don’t collect my date of birth, don’t collect location data without checking with me, etc.), and then facilitated an automated ‘reading’ of a company’s data practices against that profile (and, even better, automated the toggling of settings on or off to match the profile), to come up with tailored gatekeeping advice about whether it is safe to proceed.
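By way of illustration only, here is a minimal sketch of how such an automated gatekeeper might work, assuming a standardised, machine-readable policy format loosely modelled on the Collection, Protection and Spread categories described above. All of the class names, categories and the example service below are hypothetical, not part of any actual Data61 proposal.

```python
from dataclasses import dataclass, field
from enum import Enum


class Spread(Enum):
    """How widely personal information leaves the organisation (an invented scale)."""
    INTERNAL_ONLY = 1
    TRUSTED_PARTNERS = 2
    SOLD_TO_THIRD_PARTIES = 3


@dataclass
class MachineReadablePolicy:
    """A service's declared data practices, published in a standardised format."""
    service: str
    collection: set   # categories of personal information collected
    purposes: set     # internal uses of that information
    spread: Spread    # who it will be disclosed to


@dataclass
class PrivacyProfile:
    """A user-defined privacy risk profile, set once on the device."""
    never_collect: set = field(default_factory=set)
    blocked_purposes: set = field(default_factory=set)
    max_spread: Spread = Spread.TRUSTED_PARTNERS


def check_policy(policy: MachineReadablePolicy, profile: PrivacyProfile) -> list:
    """Compare declared practices against the user's profile and return
    plain-language warnings, so the device can act as an automated gatekeeper."""
    warnings = []
    for category in sorted(policy.collection & profile.never_collect):
        warnings.append(f"{policy.service} collects your {category}, which you never allow.")
    for purpose in sorted(policy.purposes & profile.blocked_purposes):
        warnings.append(f"{policy.service} uses your data for {purpose}, which you have blocked.")
    if policy.spread.value > profile.max_spread.value:
        warnings.append(
            f"{policy.service} shares data more widely ({policy.spread.name}) than you allow."
        )
    return warnings


# Example: a fictional fitness app checked against one user's profile.
fitness_app = MachineReadablePolicy(
    service="ExampleFit",
    collection={"location", "heart rate", "email address"},
    purposes={"service delivery", "targeted advertising"},
    spread=Spread.SOLD_TO_THIRD_PARTIES,
)
my_profile = PrivacyProfile(
    never_collect={"location", "date of birth"},
    blocked_purposes={"targeted advertising"},
    max_spread=Spread.INTERNAL_ONLY,
)

for warning in check_policy(fitness_app, my_profile):
    print("WARNING:", warning)
```

The hard part, of course, is not the code: the declarations would have to be standardised, universal and legally binding for the comparison to mean anything, which is exactly where the Creative Commons analogy comes in.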
Now that might actually deliver on the promise of transparency.
And finally, we need to better define what Solove describes as “the architecture of the personal data economy” – the rules under which personal information can be collected, stored, used and disclosed.
Because even with innovative approaches like machine-readable privacy policies, it will still be hard to code for whether any given collection or use of data is necessary, proportionate, reasonable or fair. As privacy and technology lawyer Peter Leonard argues, consumers shouldn’t even be put in the position of having to figure out for themselves whether a company’s data practices are reasonable: “Regulators don’t require consumers to take responsibility for determining whether a consumer product is fit for purpose and safe… Why should data-driven services be any different?”
Thus we need a more holistic and protective approach to privacy regulation, in which an organisation can only collect, use or disclose personal information when it is fair to do so. There are some practices so privacy-invasive or socially damaging that even ‘consent’ should not be allowed to authorise them. The late Giovanni Buttarelli, European Data Protection Supervisor, argued that “The right to human dignity demands limits to the degree to which an individual can be scanned, monitored and monetised — irrespective of any claims to putative ‘consent’.”
Because we care about human dignity and autonomy, in Australia we do not allow trade in human organs or tissue. ‘Consent’ doesn’t even come into it. It’s time we outlawed some types of data exploitation too.
Canadian privacy law includes a gatekeeper provision. Section 5(3) of PIPEDA says: “An organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances.” As a result the Canadian Privacy Commissioner publishes guidance on ‘no-go zones’, based on court interpretations of s.5(3) as well as consultations with stakeholders and focus groups.
Of course what is ‘reasonable’, ‘appropriate’ or ‘fair’ are subjective assessments, but the Canadian model at least creates a space for reflecting community expectations in the application of legal tests.
The OAIC has proposed introducing a similar “general fairness requirement for the use and disclosure of personal information” as a way of addressing “the overarching issue of power imbalances between entities and consumers” and “protecting the privacy of vulnerable Australians including children”. The Australian Government has since committed to reviewing the Privacy Act “to ensure it empowers consumers, protects their data and best serves the Australian economy”.
Reforming the Australian Privacy Act to create no-go zones, in which even ‘consent’ would not be sufficient to authorise data practices which would otherwise be unfair, have discriminatory impacts or diminish human dignity, would be a fantastic result.
No more privacy theatre
The US-influenced model of ‘notice and consent’ has failed. User controls, notice and consent are too often just privacy theatre: smoke and mirrors offering the illusion of control and choice, but within confines that are invisible.
Successful privacy protection should not depend on the actions of the individual citizen or consumer. Placing the burden of privacy protection onto the individual is unfair and absurd. It is the organisations which hold our data – governments and corporations – which must bear responsibility for doing us no harm.
Designing, implementing and enforcing privacy protection is the task of legislators, the organisations which want to use our data, and privacy regulators. Not consumers, and not citizens. Under a truly effective model of privacy regulation, the hard choices about limiting the use of personal information, protecting autonomy and dignity, and avoiding privacy harms, must be made well before the individual user, consumer or citizen ever becomes involved.