Privacy and secrecy are not epistemic concepts, and have nothing to do with ideas about the nature of knowledge. Epistemology is, or has been up to now, a normative theory: it is about what something must be if it is to count as knowledge, and what sorts of things can be knowledge-holders. What used to make the subject tricky was the need to include the revelations of faith, and what makes it tricky now is that the most useful and successful scientific theories, theories that send satellites to distant planets and identify brain tumours, are actually false. So if knowledge must be true, it excludes our best physical theories and is in danger of being trivial - but how do we distinguish between "good" falsity and "bad" falsity? And no, mere predictive accuracy is not enough.
Privacy is the condition of being unobserved by your enemies - those who would seek to use what they observe to frustrate your intentions and plans, to ridicule or otherwise harm or irritate you. (I'm assuming you don't mind being observed by your friends.) Privacy is what we need when we are doing things other people disapprove of, or, of course, when we are living in a police state. Any right to privacy has to be conditional, because murderers, kidnappers, drug dealers and assorted cartel members don't have a right to expect that they can hatch their plans unmonitored.
Secrecy is the condition of being kept from public knowledge: a secret is something that only a few people know, and they intend to keep it that way. The contents of my kitchen cabinet are not a secret because I take no steps to hide them; the contents of an encrypted journal that I keep on my computer are a secret. Until, that is, someone reads it and publishes it to my enemies - then it isn't a secret any more. A friend who says nothing about what they have read is "keeping the secret".
Neither privacy nor secrecy feels to me like the kind of idea that will take the weight of a philosophical debate: some robust, commonsense legal discussion, maybe. Anyway, the key ideas here aren't really secrecy and privacy; what's important is permission and control.
An unstated but driving idea in Western (Greco-Roman) culture is that we can, should and indeed must control what others know about us. In the past people did so by behaving in a measured, self-controlled manner: not "giving away" their thoughts, feelings or plans, keeping a "poker face", behaving one way in public or in front of their enemies, and another in the supposed "privacy of their home" or with their friends. Many homes were not actually very private places, with servants coming and going at all times. There was little to know about us anyway, simply because very few people did very much, and that infrequently. There was word-of-mouth amongst traders about people who didn't pay their bills, but no credit-rating agencies, and very few people paid taxes or used banks. On the other hand, in a small town everybody knew everybody else, if only by sight, because they all went to church on Sunday (or Saturday, or whenever).
This worked fairly well for thousands of years, until, to pick a symbolic moment, the first urban myth circulated about the job applicant who was turned down because the employer's HR snoops found a Facebook photograph of them smoking a spliff on a beach in Goa. There had been fears about Big Brother government databases, but these subsided as governments and IT contractors showed time and time again that they were simply not capable of constructing such things, and as the huge costs of high-quality data cleansing, verification and stewardship dawned on everyone. (I would also like to think that by the 2000s governments realised that such systems would in practice be run offshore in countries over which they had no jurisdiction, or, if onshore, by people who would have no stake in the proper running and data-fill of the systems, and that the security and economic risks were simply too great.)
The issue is about who controls who uses what information about us. It isn't even about ownership of the data: the data a bank has about my current account transactions wouldn't exist without its computers or my activity, so we are both creators and owners of the data - I create it, they store it. Ownership isn't the way into this conundrum. It's data-about-me, and what makes that different from data-about-Alpha-Centauri is that I am a person and Alpha Centauri isn't. As a Western person, I expect to control the information you have about me - as I allow you to control the information I have about you. Applying this principle, if the banks want unlimited access to the information they gather about us, then in return we get to gather unlimited information about the banks - which their directors do not want us to have.
The point is that I provide the bank with information so that it can act on my behalf, and that's it. The bank will use my payment record to make judgements about how much it is willing to lend me and for how long, and that feels like a legitimate use of the data. My record on paying my bills is a legitimate matter of public interest, even if the "public" is somewhat limited. Sending me "targeted" junk mail, or offering me discriminatory pricing based on behavioural propensities derived from a model built on "my" data amongst others, doesn't feel as legitimate.
None of this has anything to do with the theory of knowledge. It does have to do with the management of information and data, and many of these issues have been raised and addressed. How long should "personal data", which I understand as data-about-people-and-what-we-do, be kept, and by what kinds of organisation? What protection should various kinds of data have? What purposes can various types of data be used for without the explicit permission of the person-or-their-activities-it-is-about? Is personal data-driven advertising just a narrowcast version of broadcast advertising, or is there a qualitative difference involved?
Far from being less valuable than, say, pharmaceutical research, personal data is much more valuable to business and the State. Of course, it is transient and by definition non-universalisable, and so not the kind of fact that science and technology are about. It doesn't tell us about "the world", only about some stranger in another town whom some company thinks will be a sucker for this special offer. Which is cosmically meaningless, even if all those strangers add up to a lot of money.
Perhaps the real task for epistemologists is to develop a criterion for "cosmically meaningful" information: the kind of knowledge that should be defended by the Western Liberal Rationalist knowledge-is-preferable-to-ignorance creed. This might sound simple, but I suspect that if it's simple then it's going to be trivial. I'd like weapons research not to be preferable to ignorance, but how about research into body armour? I'd like to think that medical research is preferable to ignorance, but some of the results are very, very expensive and have marginal effects, or don't cure but merely manage symptoms, and the drug companies are very good at PR designed to get such drugs on the NICE list, thus costing the taxpayer money that should be spent elsewhere. David Hume never thought about these issues - nor did anyone have to before 1945.
The privacy and secrecy debates in the press and in legal circles are a way of having a debate about who controls data-about-me-and-what-I-did. Very, very large sums of money are involved. If Facebook can't use what we "Like" to target advertising at us, it has de minimis financial value as a business, and the same goes for Google. The commercial basis of the Internet is that it offers highly targeted advertising, but if we can exercise that control and take ourselves out of it, the Internet starts to lose its commercial value. And it employs a lot of people who won't get jobs that pay as well anywhere else. Privacy and secrecy are about "the economy, stupid". Not the theory of knowledge.