Alexander Nix, the former CEO of Cambridge Analytica and a former director of the SCL Group, a behavioral research and strategic communications consultancy, will appear before the Digital, Culture, Media and Sport Committee of the British Parliament on June 6.
The committee intends to question Nix about Cambridge Analytica’s “business practices and alleged misuse of data,” following revelations that the consultancy harvested millions of personal Facebook profiles to predict and influence voters, a practice that affected over 1 million U.K. Facebook users.
The manner in which officials in different countries have responded to the disclosures of Cambridge Analytica’s theft of the personal data of over 87 million people, and its use of that data to influence voters in the U.S. election and the Brexit referendum, is commendable: investigations have been initiated in a number of countries, and the consultancy, along with its parent company SCL, closed down on May 2, just 46 days after the initial revelations.
However, what this case has demonstrated is that policy-makers are resolved to sanction only those instances in which personal data is collected without users’ consent. Indeed, let us recall that detailed accounts of Cambridge Analytica’s involvement in U.S. elections, using personal data on American voters to help the Donald Trump campaign in 2016, were already available in January 2017, yet they attracted little attention from either the general public or state authorities.
The outrage that engulfed the Internet, and the subsequent reactions from public authorities, came only after the revelations that Cambridge Analytica had acquired personal data without user consent. In short, from the public regulators’ point of view, targeted political advertising using personal data obtained “overtly” was not a problem; it became a problem only when the information on users was obtained “covertly.”
Now, the view that data collection authorized by users who have accepted terms of service (ToS) is legitimate has always been problematic, because in the majority of cases ToS have been characterized by incredible complexity and unreasonable length. Although there is no precise data on how many users actually tried to decipher the provisions of different ToS, it is plausible to assume that their proportion has been negligible, especially where doing so would have required hours of reading.
The problem is that now, after the recent entry into force of the E.U.’s General Data Protection Regulation (GDPR) and of the U.K. Data Protection Act, this view of what constitutes legitimate data collection has, with some qualifications, been reaffirmed. The GDPR, for instance, requires that “any information and communication relating to the processing of personal data be easily accessible and easy to understand, and that clear and plain language be used” (we may note in passing that the regulation itself, with its 88 pages of text, and the U.K. Data Protection Act, which runs to 339 pages, can hardly be said to provide the best examples of such criteria).
There are two immediate problems here. First, there is the issue of interpretation: what exactly constitutes a text that is “easily accessible and easy to understand,” or language that is “clear and plain”? Second, it turns out that complying with these requirements (explaining more clearly what user data is collected, and for what purposes) requires even more words: the previous version of Facebook’s ToS ran to about 2,700 words, while the new version is 4,200 words; in the case of Twitter, the count jumps from 3,800 to almost 8,900 words. The question is, will individuals who previously did not bother reading these texts now begin to decipher even longer versions?
But leaving aside these more technical problems, what these new regulations tell us is that as long as users’ consent is “a clear affirmative act,” by which they accept the means by which their personal data will be collected and the purposes for which it will be used, providers may carry on with business as usual.
Thus, turning to Facebook’s new ToS, we may now find out, indeed more easily than before, that the platform collects information on messages and photos posted, metadata inside those photos (including the location and even everything seen through the camera in Facebook apps), data concerning contacts, call history and text messages from the smartphone used, as well as the smartphone’s battery level, signal strength and available storage space; from users logging in from computers, Facebook records the browser and plugins used, and even mouse movements.
We also know that Facebook shares users’ data with third parties “to help advertisers and other partners measure the effectiveness and distribution of their ads and services, and understand the types of people who use their services and how people interact with their websites, apps, and services.” But what precisely does knowing all this change for users? Apparently not much: no massive exodus of users has been registered since the update of the ToS, which means that most existing users accepted the new terms (and, almost certainly, without bothering to read this longer version).
In other words, personal data continues to be mined, shared and used, including for targeted advertising, exactly as before. In this respect it is surprising, to say the least, that activists working for organizations like Privacy International, one of the key non-governmental bodies engaged in privacy advocacy, interpret the GDPR as a “remarkable achievement” because companies have been forced to update their privacy policies.
Going back to the British Parliament’s investigation and Alexander Nix’s appearance before its Digital, Culture, Media and Sport Committee, what should we expect? Plausibly not much, at least as far as “legitimate” means of data collection, sharing and use, including for targeted advertising, whether commercial or political, are concerned. And this is problematic, because the focus on Cambridge Analytica, especially after the entry into force of the GDPR and of the U.K. Data Protection Act, creates the impression that personal data and individual privacy are now taken seriously and are solidly protected, at least in the European context.
While it is undeniable that the measures taken have stimulated public debate and have drawn people’s attention to the risks to their privacy when they use different services, they do not seem to provide sufficient protection of this fundamental human right. It may therefore be argued that, so far, they amount to little more than a consolation prize for privacy advocates.