Australians don’t want to be tracked. We don’t want to be tracked, profiled, packaged up and traded, then targeted.
Across surveys from the OAIC, the ACCC and the Consumer Policy Research Centre, we know that Australians regard the surveillance of our online behaviour, for the purpose of profiling us and sharing our data to fuel targeted, personalised advertising or other messaging, as an invasion of our privacy. We want it to stop.
Even surveys conducted by for-profit companies find that we don’t want to be tracked. In 2020 Deloitte found that 93% of consumers want opt-in for non-essential uses of their personal information, and 83% of consumers are concerned by tracking cookies used for targeted marketing. And according to Adobe’s Future of Marketing Research in 2021, the number one brand behaviour that consumers regard as trust-breaking is online tracking.
So how is it that media and marketing companies can collect, collate, share and use our data, across different brands, services, platforms and apps, in order to track, profile and target us, without our active agreement to do so?
Isn’t the Privacy Act supposed to stop that?
I have identified four common claims made about why companies are allowed to collate and share our data for personalised, targeted marketing purposes, which may or may not be BS. Plus one actual reason why.
BS claim #1: It’s not personal information
The first piece of BS to watch out for is when media and marketing players claim that the data they are handling is not ‘personal information’ – the implication being that therefore there is no reason for anyone to worry about their privacy.
But what media and AdTech companies tell brands and what they tell the public are two very different stories.
As consumers, we are told a lot about ‘anonymous’ data, data that is ‘not personally identifiable’ or which ‘does not usually identify you personally’, and data ‘associated with your browser or your device’.
The Nine Group privacy policy, for example, tells consumers that they collect “anonymous data” which “may not be considered personal information under the Privacy Act”, such as data about “your interaction with our digital services and third party digital services … using tracking technologies such as cookies and web beacons”. This information may be used by Nine Group or provided to third parties “for purposes that may include, without limitation, providing you with more relevant advertising, preparing audience insights, customer research, campaign analysis, conducting launch surveys and modelling”.
The rival News Corp data usage policy likewise tells consumers that they collect information that “does not usually identify you personally”, because it is “associated with your browser or your device rather than personally identifiable information”. Examples include “information about your interaction and use of our websites, digital and mobile services across our network and newsletters and marketing communications we send you”.
It also describes as “anonymous information” (unless users are logged in to access their content, which is increasingly required): “your IP address, the type of browser you use, the website you came from, the pages you view within our network, what you click on and the time you spend on a page”.
They also ‘supplement’ this ‘first party’ data “with information collected from other trusted businesses with whom you also have a relationship or from public sources”. This is still described as “anonymous information (unless we collect it when you are logged in as a recognisable registered user)”, which they may use “to provide you with editorial content and advertising (both on and off our network of digital properties)”.
So, media companies are describing this data as ‘anonymous’, ‘not personally identifiable’ or data ‘associated with your browser or your device’ instead of you.
To the average consumer, these phrases might sound like we cannot be identified, and therefore we cannot be profiled or targeted at an individual level.
But it’s just a semantic sleight of hand.
The business model upon which News Corp and Nine Group rest is their promise – to other companies or ‘brands’ which want to market their products – that they can recognise, track, profile, and microtarget people at the individual level, and influence them accordingly.
Legal academic Dr Katharine Kemp has contrasted their claims about ‘anonymous’ data made to consumers, with what these same companies tell brands about their cross-brand data-matching, online tracking, profiling and ‘addressable’ targeting capabilities. She raises important questions about compliance with consumer law as well as privacy law, if the ‘anonymous’ claims made to the public are spurious.
Dr Kemp has noted that media companies compete based on how many millions of consumers they can ‘address’ directly on behalf of advertisers, and their ability to distinguish these users as individuals (with associated customer profiles), by allocating unique identifiers to each of them.
As of 2022, News Corp boasted that it can reach 16 million addressable Australians this way. Nine Group went one better, claiming it can reach 20 million Australians (up from 14 million just a year earlier), and Seven West Media claims 13 million, each allocated a proprietary unique identifier.
Think about the sites controlled by News Corp, for example. They own newspapers (The Australian, Daily Telegraph etc), TV streaming services (Foxtel and Binge), sports streaming (Kayo), real estate listings (REA), a rental application app (Ignite) and racing and gambling sites and apps (Punters Paradise, Racenet and more).
News Corp tells brands that “we can take advantage of (user) intent and data to both target advertising, inform content recommendations but also measure the outcomes of advertising”. By way of example, in an article bragging of its success in using “six million daily Westpac DataX transactions” to model “purchase intent” and match “users to products and services”, News Corp claimed that its campaign for Moet & Chandon led to “19,000 click-to-shop interactions” and “increased foot traffic to Dan Murphy’s and Vintage Cellars by 37 per cent using Near geo-location user data matched to ad exposures across News Corp’s network.”
So – for 16 million of us, even across different logins, different devices, different email addresses, different brands and platforms, News Corp can match our newspaper and magazine reading to our TV watching and real estate browsing habits, and profile that against when we apply for housing and when we gamble on the horses. From there they can predict purchasing behaviour and intent, somehow throw in our banking transactions as well, then micro-target ads to the right individuals. Then finally they can use our mobile phone geolocation data to see which of us went into a bricks and mortar shop, versus which of us purchased the advertised product online.
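To make the mechanics concrete, here is a purely illustrative sketch of how that kind of cross-brand matching can work in principle. Every name, field and ‘event’ below is invented; the hashed email used as a join key and the profile structure are my assumptions, not a description of News Corp’s actual systems.

```python
import hashlib
from collections import defaultdict

def match_key(email: str) -> str:
    """Derive a stable pseudonymous identifier from an email address.
    No name is stored, but the hash distinguishes one person from everyone else."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Hypothetical event streams from different brands and apps, each keyed by
# the email the person used to log in or subscribe.
news_reading = [("jane@example.com", "read", "business section")]
tv_streaming = [("jane@example.com", "watched", "true crime series")]
property_search = [("Jane@Example.com", "searched", "rentals in Newtown")]
location_pings = [("jane@example.com", "visited", "bottle shop, 6:02pm")]

# One profile per pseudonymous ID, stitched together across brands.
profiles = defaultdict(list)
for source in (news_reading, tv_streaming, property_search, location_pings):
    for email, action, detail in source:
        profiles[match_key(email)].append((action, detail))

# The profile carries no name, yet it singles out one individual
# and can be used to target advertising to that individual.
for uid, events in profiles.items():
    print(uid[:12], events)
```

The absence of a name changes nothing: the same pseudonymous identifier recognises the same person across every brand, device and login they use.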
News Corp can do all of that, but incredibly claims not to know who we are.
In response to Dr Kemp’s articles last year, a spokesman for News Corp Australia said: “While we can recognise a user as a discreet user, with a particular unique identifier, we do not know who they actually are”.
I call bullshit.
This semantic slipperiness is straight out of the digital platforms playbook, like when the chief privacy officer at Facebook claimed that they serve ads “based on your identity… but that doesn’t mean you’re ‘identifiable’”. Yeah right.
Now contrast what these companies describe as ‘anonymous’ or ‘not personally identifiable’, with what the OAIC says constitutes ‘personal information’ for the purposes of privacy regulation.
In guidance published in 2017, and in subsequent determinations against 7-Eleven and Clearview AI, the OAIC has consistently said that data will be considered ‘personal information’ (and thus in scope for regulation by the Privacy Act), even without knowing who the person ‘actually is’:
“Generally speaking, an individual is ‘identified’ when, within a group of persons, he or she is ‘distinguished’ from all other members of a group. For the purposes of the Privacy Act, this will be achieved through establishing a link between information and a particular person”.
If companies are using identifiers to track, profile and target distinguishable individuals, that’s ‘personal information’, and those activities need to comply with the APPs.
It’s not only in Australia that media and marketing industry types are trying to argue that their activities are not regulated by privacy law. In the EU, legal academic Nadezhda Purtova has argued that online targeted advertising effectively ‘identifies’ individuals via “individuation, zooming in on a particular individual who is distinct from others … in order to subject that individual to tailored treatment or content”, and that therefore “industry arguments” that such practices do not collect ‘personally identifying’ information (and thus do not come within the scope of data protection or privacy laws) “do not necessarily withstand legal scrutiny”.
BS claim #2: We’re not ‘collecting’ anything because no human sees it
Advertisers worried about losing track of our eyeballs on our devices (because of, you know, consumers expressing their strong desire not to be surveilled by opting out en masse of tracking online the second our phones let us do so), would love to roll out street furniture featuring facial recognition to track our gaze instead.
Amplified Intelligence CEO Karen Nelson-Field describes her tech as “privacy safe” because they use “edge computing” which means that “none of the data leaves the computer hardware, all we receive is zeroes and ones … that makes it by law extremely compliant because we’re not storing or keeping or looking at (footage)”.
Of course, as much as I love the idea of ‘extreme compliance’, this is bullshit.
Collection doesn’t only happen when you ask customers to fill out a form. Collection doesn’t only happen when someone inside a company ‘sees’ the data. In the Uber case, the OAIC said that the ‘creation’ of new personal information, such as by way of combining data or inferring information from existing data, will also constitute a ‘collection’ for the purposes of the APPs.
And in the 7-Eleven case, the OAIC found that even a transient collection, such as facial images which were stored on a tablet for around 20 seconds before being uploaded to a server in the cloud, will constitute a ‘collection’ for the purposes of the APPs.
If anyone is still under the impression that they can use facial detection or recognition tech without having to comply with the ‘collection’ rules in APPs 3 and 5 – even if they don’t know or learn the ‘identity’ of the individuals whose faces will be caught by their cameras, and even if they are using ‘edge computing’ such that the images are never seen by humans – please learn right now that 7-Eleven has already been found in breach of the Privacy Act for this practice.
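To see why ‘edge computing’ doesn’t make the collection disappear, here is a minimal sketch of what an on-device attention-measurement loop might look like. The camera capture and gaze-scoring functions are stand-ins I have invented for illustration, not Amplified Intelligence’s actual pipeline.

```python
import time

def capture_frame() -> bytes:
    """Stand-in for reading a frame from the camera built into the street furniture."""
    return b"...raw image bytes..."  # a biometric image of whoever is in view

def estimate_gaze(frame: bytes) -> float:
    """Stand-in for an on-device model scoring how intently someone looked at the ad."""
    return 0.8  # pretend attention score

def measure_attention_on_device() -> dict:
    frame = capture_frame()        # <-- personal information is collected here,
    score = estimate_gaze(frame)   #     even though it only ever exists on the device
    del frame                      # discarded almost immediately (cf. 7-Eleven's ~20 seconds)
    return {"timestamp": time.time(), "attention": score}  # only "zeroes and ones" leave

payload = measure_attention_on_device()
print(payload)  # what head office receives; the facial image was still collected first
```

However briefly the frame exists, a facial image of a passer-by has been captured and handled; only afterwards do the ‘zeroes and ones’ leave the device.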
BS claim #3: It’s ‘first party’ data so we’re allowed to do what we like with it
Here is Nine boasting about its “Audience Match” capabilities using data brokers like LiveRamp to track, link and reach 20M Australians: “With us, if they’re consuming NineNow, if they’re listening to 2GB or 3AW, reading The Sydney Morning Herald, The Age, The Financial Review, if they’re searching for homes on Domain etc., we’re creating billions of data points that create rich segmentation for us to deliver more targeted advertising and be able to do that at scale, because of those 20 million IDs.”
This is apparently all magically ‘first party data’. What does that mean under privacy law?
Nothing. You won’t find mention of it in the Privacy Act.
‘First party data’ is a term invented by the marketing industry, to make it sound like businesses just collect data direct from their own customers, and that there is no sharing going on. But as marketing industry insider Henry Innis said recently: “For too long, adland has been focused on the business of collecting other people’s data. We make ourselves feel better by crafting non-threatening names. Things like ‘audience profiling’. ‘Segmentation’. ‘Targeting data’. Which are nice in a deck, and sound a lot less like we’re collecting someone else’s information”.
He also said: “When you look at first party data, most of it isn’t first party. Businesses collect datasets to identify people, then enrich that data with brokered attributes. Sure, they might ‘own’ the data. At best though, it’s questionable if it’s first party. If it looks and smells like third party, it probably is”.
As News Corp admits in their data usage policy, they ‘supplement’ the data they collect themselves with data from “other trusted businesses”. So the resulting profiles are not exclusively ‘first party’ data anyway.
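In data-handling terms, ‘enrichment’ is just a join. Here is a hedged sketch, with entirely made-up records, field names and broker attributes, of how a first-party customer record might be merged with brokered data:

```python
# A record the publisher genuinely collected itself (arguably 'first party').
first_party = {
    "subscriber_id": "np-000123",
    "email_hash": "a1b2c3",            # pseudonymous join key shared with partners
    "articles_read": ["finance", "sport"],
}

# Attributes supplied by a hypothetical broker, keyed on the same hash.
brokered = {
    "a1b2c3": {
        "household_income_band": "high",
        "recent_purchases": ["premium wine"],
        "home_suburb": "Newtown",
    },
}

# 'Enrichment': merge the two, then market the result as first-party data.
enriched = {**first_party, **brokered.get(first_party["email_hash"], {})}
print(enriched)
```

Call the result whatever you like; the brokered attributes did not come from the customer.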
In any case, it doesn’t matter what marketeers call it, or where they get it from: personal information can only be collected in accordance with APPs 3 and 5, and used or shared within the limitations set by APPs 6-9.
And what that means is that most of the time, personal information can only be collected when it is reasonably necessary to do so; only when it is collected via lawful and fair means; and only when it has been collected directly from the consumer unless it is unreasonable or impracticable to do so.
For businesses which have a direct relationship with a consumer, as News Corp and Nine Group do with their subscribers, it is not “unreasonable or impracticable” for those media companies to ask those consumers directly about other pieces of information the company would like to know. So when, instead of asking us directly for more information to ‘enrich’ our customer profiles, companies go behind our backs to collect data about us from other companies … well, that’s not just a bit sneaky. Dr Kemp has alleged it is also likely in breach of APP 3.6.
In other words, because News Corp could directly email its subscribers to ask about their shopping habits, the Privacy Act says that they should not go behind our backs to find out who did or did not walk into their local grog shop to buy some bubbly.
(Of course, if consumers were asked directly to voluntarily reveal precisely where they went every minute of every day, and how they spent every single dollar, on top of the data the media companies already knew about what they watched and read and clicked on, I suspect the vast majority of consumers would tell the company asking to mind their own beeswax, in no uncertain terms. Which is why companies go behind our backs to spy on us indirectly instead.)
Compliance with the APPs also means that personal information can only be used or disclosed for the primary purpose for which it was collected; or a directly related secondary purpose within the individual’s expectations; or for another purpose authorised by law, or under a public interest exception.
If a use or disclosure of personal information doesn’t meet one of those tests, then it is prohibited, unless the company has the consent of the individual for that use or disclosure.
Take for example my subscription to TV streaming service Netflix. The primary purpose for which Netflix is using my personal information is to verify me as a paid subscriber, populate my ‘watch list’ of shows, and bill me for the service delivered.
As a result of my interactions with it, Netflix will be ‘collecting by creating’ new data about me, like the inference that I like to watch scandi noir crime dramas. So a directly related secondary purpose for Netflix to use my personal information, which I would entirely expect, is for Netflix to recommend that I watch the latest gritty police procedural set in suburban Stockholm in the depths of winter.
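Here is a hedged sketch, with invented titles and an invented rule rather than anything resembling Netflix’s actual systems, of how such an inference is ‘created’ (and therefore collected) from viewing history:

```python
watch_history = [
    "The Bridge", "Trapped", "Bordertown",   # what I actually chose to watch
]

SCANDI_NOIR = {"The Bridge", "Trapped", "Bordertown", "Wallander"}

# New personal information that never came from me directly: an inference
# 'created' by combining existing data, which the OAIC treats as a collection.
profile = {
    "likes_scandi_noir": len(SCANDI_NOIR.intersection(watch_history)) >= 2,
}
print(profile)  # {'likes_scandi_noir': True}
```

Using that flag to recommend the next gritty drama sits comfortably within the directly related secondary purpose described above.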
But what if Netflix now wanted to share that insight about me with someone else? I would expect them to ask my permission, before I am profiled to advertisers as either depressed or a fan of all things Swedish, and I start seeing ads in the newspaper I read online for mood-lifting medication or flat-packed self-assembly furniture.
Which brings us to consent.
BS claim #4: We got your consent
When not arguing that the data they collect is not personal information, the big media companies like to argue that even if it is personal information regulated by the Privacy Act, they have our consent to use and disclose it anyway.
This article suggests that News Corp claims that millions of us have ‘consented’ to have our online activities tracked by them (even when not logged in to a News-owned site like Binge or Foxtel), collated and matched with data about us from other companies, including mobile phone location tracking by Near, in order to be targeted with personalised ads. Really? 16 million of us consented to that?
The OAIC says that a ‘consent’ will only be valid if it is voluntary, informed, specific, current, and given by a person with capacity to understand what they are agreeing to. Chief among these is that it must be voluntary.
The News Corp data usage policy says that: “By using any digital property which is linked to this Policy, you agree to our conduct in accordance with this Policy. We may change this Policy at any time.”
Does that sound ‘voluntary’ to you?
The rival Nine Group privacy policy also claims that as consumers of Nine Group services, we “expressly acknowledge and agree that information including (our) personal information may be shared within the Nine Group and with the third parties referred to in this Privacy Policy for use by those third parties for the purposes disclosed in this Privacy Policy”.
So both companies seem to be saying that they can rely on our ‘consent’ to justify their data collection, profiling, tracking and targeting activities under the Privacy Act, because they have a published policy which says that all consumers of their services have consented to something vaguely described in that policy. This is notwithstanding that the policy can be changed at any time.
Plus I am confident in predicting that the vast majority of people have never read the policy, let alone understood it, appreciated the ‘value exchange’ inherent in online behavioural advertising, and then made an informed decision to positively say ‘yes please I want me some of that’.
We also know from the Flight Centre case that a privacy policy does not deliver consent. There the OAIC stated that a privacy policy is “a transparency mechanism… It is not generally a way of providing notice and obtaining consent”.
In particular, the OAIC found that “Any purported consent was not voluntary, as the Privacy Policy did not provide individuals with a genuine opportunity to choose which collections, uses and disclosures they agreed to, and which they did not”.
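The difference the OAIC is describing can be expressed as a data structure. A hedged sketch, with hypothetical fields rather than any company’s actual consent records, contrasting a bundled policy-acceptance flag with granular, purpose-specific consent:

```python
# What a 'consent' buried in a privacy policy amounts to: one bundled flag,
# set by merely using the site, covering whatever the policy says this week.
bundled_consent = {
    "accepted_privacy_policy": True,
}

# What a genuine opportunity to choose would need to record: a separate,
# revocable choice for each collection, use and disclosure.
granular_consent = {
    "service_delivery": True,            # needed to provide what I asked for
    "cross_brand_profiling": False,      # my choice, not a condition of service
    "sharing_with_advertisers": False,
    "location_tracking": False,
    "recorded_at": "2022-11-30T09:30:00+11:00",  # consent must also be current
}
```

Only the second structure gives an individual a genuine opportunity to agree to some collections, uses and disclosures and refuse others.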
So given determinations and published advice from the OAIC about what a valid ‘consent’ requires in law … as well as when collection won’t meet the ‘reasonably necessary’ test (a separate requirement which can’t be ‘consented’ out of) … and what the OAIC has said about unfair methods of collection such as when customers are not actively engaged with the collection of data about them … plus the forgotten privacy principle of direct collection … I would love to understand how the online tracking, profiling and targeting of consumers – across “all of the signals of interest, all of those really rich content consumption and behavioural signals from across (the News Corp) network” of 20 plus brands, 30 plus sites and apps, as ‘supplemented’ with extra data collected from third parties – complies with the Privacy Act.
So what’s the real reason?
There are so many BS arguments made about how media and marketing companies are allowed, under the Privacy Act as currently drafted, to collect, collate and share our data, across different brands, services, platforms and apps, and to track, profile and target us at an individual level, against our wishes.
But I think it all boils down to this one reality:
They are allowed to do it, because so far no-one has stopped them.
We need the Privacy Act strengthened, but we also need the OAIC to enforce the existing law against the media and marketing industry.
No more BS.