With the benefit of some time to think things through, I wanted to take a stab at the point I didn’t address in my prior post about open data, namely the possibility of developing appropriate safeguards for personal data. The FT beat me to it yesterday morning with an editorial (behind a paywall, unfortunately) that addresses exactly that:
For Winston Smith, “the black moustachio’d face that gazed down from every commanding corner” was a perpetual reminder: Big Brother was watching. The surveillance society of George Orwell’s 1984 gained power from its conspicuousness.
In the modern surveillance economy, the opposite principle applies. The machinery of online data gathering, information broking and behavioural retargeting churns in an obscure nether world, transforming the stuff of everyday digital life into a valuable commodity. Few of the 2.5bn people who are online have any idea what information about them is being collected, or how it is being used.
The technology underlying the surveillance economy is evolving faster than the ability of social norms to adapt, or regulators to keep pace. Take two of the most striking recent developments. More than 17bn mobile apps were downloaded from Apple’s online store in the year to September, as many as the store’s first three years combined. Yet, despite vague blanket disclosures, users do not know exactly what personal data these apps are sucking out of their smartphones.
Meanwhile, thanks to big advances in artificial intelligence, facial recognition software has reached a level of accuracy that would have seemed unimaginable a few years ago. A 2010 test by a US agency found that a single person could be identified from a database of 1.6m mugshots with 92 per cent accuracy – and the technology has improved rapidly since then. To judge by their acquisitions of facial recognition companies in 2012, Google and Facebook both understand the commercial potential, even if they have sworn off most applications of this controversial technology for now.
The editorial goes on from there to list “four basic rights [that] need to be given urgent priority”, all of which make sense. But the biggest problem I see is not that companies are unlikely to grant these rights – it’s that most people are unlikely to care enough to demand them.
The recent dustup over Instagram and its privacy policy provides a natural experiment to test this unpleasant view. The hugely popular photo-sharing service announced last month that it planned to change its terms in a number of ways, including one change that would allow it to sell users’ photos. This seemed like it might be the straw that broke the camel’s back for people’s willingness to accept this sort of behavior. But even amidst the stories about high-profile users like Kim Kardashian and National Geographic leaving, the largest estimate of departing users was 25% (and even that number has been challenged). If those numbers are correct, then we have a pretty clear signal of what the overwhelming majority of people really care about, and it’s not privacy.
With that in mind, it’s rhetorically powerful but ultimately ironic that the FT editors chose to lead with 1984, given how little resistance the citizens of Oceania offered. Thankfully, we don’t live in that world. But the implications for the minority of us who are actively concerned about our data are worth bearing in mind.