We all claim to value our privacy. At the same time, we are extraordinarily careless about how much personal information we allow to be available on the internet.
This blog identifies four main routes to the protection of personal information. It suggests we will need to move away from the current reliance on the principle of informed consent towards a more robust approach to the structure of the internet industry, including a ‘safe harbour’ obligation placed on data controllers and data processors.
We think of the ‘private’ as a space that belongs to us as individuals, where we are unobservable and anonymous. Traditionally, it is associated with our home life and with personal correspondence and communication. The difficulty lies in extending this traditional understanding to the age of communication over the internet. On the internet, on the basis of very little information, we can be identified and tracked, our personal information aggregated and distributed, and our behaviour predicted. Our ‘home’ space is routinely invaded.
There are four main approaches to extending the traditional idea of privacy around the home and personal communication so that it also protects the privacy of personal information accessible on the internet.
The first is to assert privacy rights. The second is to assert the principle of ‘informed consent’. The third is ‘structural’. It involves addressing positions of market dominance in the field of data collection and dissemination. The fourth involves arrangements for ‘safe harbour’.
Current attempts to protect internet privacy, such as the EU’s General Data Protection Regulation (GDPR), rely mainly on the first two approaches. They are inherently flawed. There is a need to move further towards structural remedies and the creation of safe harbours.
Privacy can be asserted as a general right or as a specific right in specified circumstances. The US Supreme Court has found a general right to privacy in the underlying presuppositions of the American constitution. The EU’s GDPR asserts both general rights, based on the EU’s Charter of Fundamental Rights, and some specific rights, notably the so-called ‘right to be forgotten’, or ‘right to erasure’, under which an individual can require information pertaining to them to be taken down.
There are two main weaknesses in a rights-based approach:
First, there is the view that any claim about rights necessarily involves accepting corresponding duties and obligations. The corresponding duty or obligation in the case of the internet would be for us to exercise care about the kind of information we provide or allow to be available. Generally, we do not exercise the necessary care.
Secondly, specific claims are open to reasonable contest. The EU’s right to be forgotten can be reframed as a ‘right to conceal’. Few would agree that there is such a right.
A rights-based approach offers a false assurance of legal certainty. It is of benefit mainly to lawyers. Politicians in the EU like it too because it has a ‘feel good’ appeal.
The principle of informed consent has its origin in medical practice. It refers to situations where the patient needs to know that a pill may have harmful side-effects as well as benefits, or that a surgical procedure carries risks as well as benefits. The doctor has a duty to inform the patient so that the patient can make their own decision about what they consider to be in their own best interest.
This general approach has been carried over into the internet and is reflected in the GDPR. We meet it every day when we tick the box marked ‘accept’.
The problem is that individual behaviour is strongly influenced by context. The principle of informed consent does not recognise the change in context from medicine to the internet.
When we are face to face with a doctor or surgeon, making a choice that has potentially negative, or even fatal, consequences for ourselves, there is a strong incentive to pay attention. In the case of the internet, convenience outweighs caution. We do not read the terms and conditions but tick the ‘accept’ box anyway. In the medical case, the chain of cause and potential effect is made clear. In the case of the internet, the chain of usage of information is often obscure.
It has been understood since the time of Adam Smith that businesses can undermine the market. The fear is reflected in long-standing efforts in both the US and the EU to ensure that markets are open to new entrants and competition. Competition law works to prevent discriminatory practices and the exploitation of positions of dominance.
In recent years, competition policy in the EU has shown increasing concern about dominance and discrimination in the gathering and use of information and data. Germany’s competition authority made a landmark finding against Facebook in 2019, and the EU’s competition authorities have pursued cases against Apple, Amazon and Microsoft. The US Department of Justice and Federal Trade Commission are beginning to show a similar interest.
Action against market dominance may help fight the agglomeration of personal data into a few hands and weaken the potential for the misuse of market power in the handling of data. However, it still does not ensure privacy.
The practice of providing for ‘safe harbour’ comes from accident prevention. It is about providing a space where, for example, an airline pilot or air traffic controller can reveal what they know about near accidents, including their own behaviour, without fear of being blamed or incurring penalties, and without the information they provide going any further.
A team of engineers or scientists or surgeons might similarly benefit from a setting where they can reveal concerns, or discuss where something went wrong, without blame being attributed or the information being turned against the provider of information.
If this practice were carried over into the world of disclosure on the internet, it could be framed to place the responsibility for providing a space for privacy on the internet business itself. It would generalise the currently limited possibility for data subjects to restrict the processing of personal data.
Under this approach, the company gathering and controlling the data would be placed under an obligation to provide a safe harbour for the information it gets from the data subject. It would need to hold the information within the processing area pertaining to the specific purposes of the transaction and keep it between itself and the data subject. In particular, it would not be able to pass that information on to third parties outside the immediate area of the transaction. This would shift the onus of providing for privacy from the consumer or data subject onto the data-gathering and controlling company.
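The obligation described above can be pictured in code. The following is a purely illustrative sketch, not a real system or a requirement of the GDPR: all class and method names are hypothetical. It models a data controller that binds each item of personal data to the purpose and parties of the original transaction, and refuses any disclosure that falls outside that safe harbour.

```python
# Illustrative sketch only: a data store that enforces a 'safe harbour'
# obligation. Data is held together with the purpose and the parties to the
# original transaction; disclosure outside that scope is refused by default,
# so the onus sits with the controller, not the data subject.

class DisclosureRefused(Exception):
    """Raised when a request falls outside the original transaction."""

class SafeHarbourStore:
    def __init__(self):
        # subject -> (data, purpose, parties to the transaction)
        self._records = {}

    def collect(self, subject, data, purpose, parties):
        # The controller records not just the data but also the specific
        # purpose of the transaction and who is party to it.
        self._records[subject] = (data, purpose, frozenset(parties))

    def disclose(self, subject, requester, purpose):
        data, stored_purpose, parties = self._records[subject]
        # Disclosure is permitted only to a party to the original
        # transaction, and only for the purpose the data was collected for.
        if requester not in parties or purpose != stored_purpose:
            raise DisclosureRefused(
                f"{requester!r} is outside the safe harbour for {subject!r}"
            )
        return data
```

For example, data collected by a shop for a delivery could be disclosed to the courier handling that delivery, but a request from an advertising broker for marketing purposes would be refused.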
The provision of safe harbour is an approach that could inform the structural remedies open to competition authorities in addressing market abuse. In addition, the requirement to provide safe harbour could become a condition of company incorporation, or a condition of licence for a company operating in a regulated sector.
There will still be many problems in constructing a safe harbour internet world. There will be questions about how to define the ‘harbour’ and its limits; it will be disruptive of the present industry structure; it will still call for permissions; and it still involves a shift in the context surrounding behaviour. But it would shift the onus of providing for privacy.
The direction of travel
Efforts to ensure privacy in the internet age have so far emphasised rights and informed consent. Such approaches are inherently weak because they ignore the human tendency to act without due care and to put the convenience of the moment above all else. There is a need to move towards an obligation on businesses to provide safe harbour. In addition, there is a need for competition authorities to move still more actively against companies that acquire dominant positions in the market for personal data gathering and processing.