So November has been quite the month for discussing big ideas about Big Data. Between the iappANZ ‘Trust in Privacy’ Summit, the Privacy Commissioner’s De-identification Workshop, and the Productivity Commission’s draft report into Data Availability and Use, much has been said about public trust or ‘social licence’ as a pre-condition to effective data use.
(And that was before we even got to the two damning reports into #Censusfail released last week, from the Senate Economics References Committee and from Alastair MacGibbon, the Special Adviser to the Prime Minister on Cyber Security.)
But how do you create the right conditions for better data-sharing?
I believe that if you want to facilitate data-sharing for the public good, you need two conditions:
- First, you need data custodians to feel they are on solid legal ground when they decide to release data; and
- Second, you need public trust.
I was asked to appear before the Productivity Commission earlier this week, to discuss some of their draft recommendations on this topic. With the objective of releasing the public value in datasets held by both government and the private sector, the Productivity Commission has recommended creating a new regulated category of data, to be known as ‘customer data’. Although I disagree with that particular recommendation – as outlined in the Salinger Privacy submission I believe the scope of the definition of ‘personal information’ is already sufficient – I nonetheless enjoyed a spirited debate on the issue with Chairman Peter Harris AO.
Mr Harris said that the reason he wanted to move away from the definition of ‘personal information’ and instead talk about ‘customer data’ is because he wants businesses to treat their data as an asset, instead of as a ‘privacy compliance issue’. (There followed a brief period of furious agreement between us that privacy policies are generally well-crafted yawn-fests which consumers ignore.)
Mr Harris also believes that consumers should be able to realise the value of their own data.
Personally, I think discussions on assets and the valuing of data tend to spill over into debate about who ‘owns’ data, which just muddies the waters. Privacy laws are deliberately drafted to be agnostic on the question of the ownership of data.
However, coincidentally, at the iappANZ Summit just a few days earlier, Malcolm Crompton had also raised the concept of data being classed as an asset – although he was coming from quite a different angle. Malcolm’s point was that assets not on the balance sheet are usually ignored by company directors, so that by bringing personal information onto the balance sheet (for example through a change in accounting standards), you could potentially have more of an impact on ensuring privacy protection than by strengthening our existing principles-based privacy laws.
In a similar vein, the brilliant information security blogger Bruce Schneier had this to say earlier this year, after yet another damaging data breach revelation: “data is a toxic asset and saving it is dangerous”.
So I came away unconvinced that we need a new, regulated class of ‘customer data’. If a business doesn’t yet understand that the information they hold about their customers is potentially both an asset and a liability, and it is therefore in their best interests to get their privacy practices right, calling it something new is not going to help.
But on the subject of language, I admit to being a fan of the term ‘data custodian’. The Productivity Commission’s draft recommendation 5.4 is to impose annual reporting obligations on data custodians, to make them justify their decisions about data access requests. Hmmm, I’m not convinced on that one. I made a submission that if the objective is, as the Productivity Commission says, to “streamline approval processes for data access”, then what data custodians need instead is pragmatic assistance.
My view is that the ideal privacy law sets tough standards that are nonetheless easy to comply with.
My experience over many years working with clients trying to comply with privacy laws is that the wording or ‘toughness’ of the rules themselves is almost irrelevant to the individual who needs to apply them. What matters to that decision-maker is how quickly and easily the standards can be found, understood, and followed.
For example, put yourself in the shoes of Phil the physiotherapist, or Sue the Centrelink manager, or Shari who is rostered on the front counter at a business. An insurance company investigating a personal injury claim has asked to see their file on Joe Bloggs. Phil and Sue and Shari don’t know whether they’re allowed to hand it over. Their first thought is: “Am I allowed to disclose this information?”
And likewise, the custodian of datasets of public value wants to know: “Can I lawfully disclose this information, in this format, in these circumstances, to this person or body requesting it?”
Answering any one of these questions can often involve the painstaking task of navigating through privacy principles and exemptions, and then applying the case law. It’s a lot easier to just say “no”, and “because of the Privacy Act”. Privacy gets a bad name. Research projects get bogged down. People start demanding exemptions from privacy law.
Instead, I would like to see the process of navigation made much simpler. The rules can be tough, so long as they are easy to find, understand and apply.
Earlier this year we developed a tool for organisations regulated under NSW privacy laws – not only State and local government agencies, but also private sector organisations operating in NSW if they hold ‘health information’. We mapped out the Disclosure rules under the two NSW privacy statutes into a flowchart-based, question-and-answer format, to guide decision-making. Because of all the different exemptions and special rules for different types of personal information, the flowcharts in our Untangling Privacy guide run over seven pages – but the user can move through them quickly.
Untangling Privacy works together with our annotated guide to the NSW privacy laws, PPIPA in Practice, which explains what each part of each test of each rule means in practice, drawing on the interpretation on offer from both the Privacy Commissioner and the case law (updated quarterly).
But although they come as downloadable eBooks, our guides are effectively still in analogue form. We would love to have the time and funding to turn them into a properly automated, digital tool: an ‘app’ so that data custodians of all types, both big and small, could very quickly navigate through to the correct rule for their situation, and could also click through to see up-to-date interpretation of that rule. This type of pragmatic tool would let data custodians find their answer quickly, each time they are approached with a request to share or disclose the data that they hold.
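To give a flavour of how such an app might work under the hood, here is a minimal sketch in Python of a flowchart-style question-and-answer navigator. Everything in it is hypothetical – the questions, answers and ‘rules’ are invented placeholders for illustration, not the actual disclosure tests under the NSW privacy statutes.

```python
# A minimal sketch of a flowchart-style Q&A navigator, of the kind
# described above. The questions and rule citations are invented
# placeholders, NOT the actual NSW disclosure rules.

from dataclasses import dataclass


@dataclass
class Node:
    """One step in the flowchart: either a yes/no question or an outcome."""
    text: str
    yes: "Node | None" = None   # next step if the user answers "yes"
    no: "Node | None" = None    # next step if the user answers "no"

    @property
    def is_outcome(self) -> bool:
        return self.yes is None and self.no is None


# Hypothetical fragment of a disclosure flowchart.
FLOWCHART = Node(
    text="Is the information 'health information'?",
    yes=Node(
        text="Has the individual consented to the disclosure?",
        yes=Node(text="Disclosure permitted (hypothetical rule A)."),
        no=Node(text="Disclosure NOT permitted without further advice."),
    ),
    no=Node(
        text="Is the disclosure directly related to the purpose of collection?",
        yes=Node(text="Disclosure permitted (hypothetical rule B)."),
        no=Node(text="Disclosure NOT permitted without further advice."),
    ),
)


def navigate(node: Node) -> None:
    """Walk the flowchart interactively until an outcome is reached."""
    while not node.is_outcome:
        answer = input(f"{node.text} [y/n] ").strip().lower()
        node = node.yes if answer.startswith("y") else node.no
    print(node.text)


if __name__ == "__main__":
    navigate(FLOWCHART)
```

In a real tool, each outcome node would of course link through to the relevant statutory provision and its up-to-date interpretation, just as PPIPA in Practice does in eBook form.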
In an ideal world, the app would also be made available to the public for free. There would be no more hiding behind “because of the Privacy Act”. I suggested to the Productivity Commission that this type of app would also help consumers exercise control over their data, because they could more easily understand what the privacy laws actually allow for.
So instead of creating yet more legal and reporting obligations on data custodians, let’s build them pragmatic tools.
(Venture capitalists / Treasury officials / Philanthropists / Google: you know where to find me!)
But how about the second half of the equation needed to facilitate greater data-sharing?
I suggest that to gain the kind of public or consumer trust necessary to allow for more data-sharing, you have to ensure that every possible step is taken to prevent things from going wrong … but also that people will be protected in the event that something does go wrong.
Prevention of data breaches requires better education of both data custodians and policy-makers. Alastair MacGibbon, in his recent review of the Census, recommended that there should be a ‘Cyber Bootcamp’ for Ministers and senior public servants. What a brilliant idea! I would love to see a ‘Privacy Bootcamp’ as well. (Mr Harris raised only a bemused eyebrow at this suggestion of mine.)
But while prevention is better than cure, we need to ensure there are cures as well. Our system of statutory privacy principles is not enough. There are many privacy breaches which cause individuals harm, but for which they currently cannot seek a remedy.
So I argued that if you want to promote greater data-sharing, you will need to convince the public that their privacy is going to be protected – or that if all else fails, they will be compensated for any significant harm that they suffer. In my view, that means that the Government should take greater steps to offer remedies for people who suffer serious privacy harm, in parallel with any steps to increase the level of risk posed to individuals from greater data-sharing.
The Australian Government currently has two privacy-related Bills before Parliament: one is a data breach notification bill, and the other proposes to criminalise the re-identification of ‘de-identified’ government datasets. However, neither of those Bills will actually provide remedies for victims of privacy invasions.
I suggest that if the Government is serious about unlocking the public value in data, it should not proceed with legislation or projects to increase the amount of data-sharing without first engendering public trust, or gaining a ‘social licence’. At the least, we need legislation to create a statutory tort of privacy, as already recommended by the Australian Law Reform Commission and other inquiries.
I don’t think we need new names for personal information, or new accounting standards for data. But if we want to promote data-sharing in the public interest, while at the same time protecting privacy, we need to offer pragmatic assistance to data custodians, and better legal protections to consumers.