A dystopian future of mass surveillance

‘THE NEW I WATCHES’ – Data Protection and Artificial Intelligence


A complete stranger comes up to you and says: ‘I know you. I know your name, your age, where you first went to school, the street you live on, the hospital you were born in, your first best friend, your current friends, your medical records, your financial circumstances, where you were this morning and maybe even what you’re feeling right now’. Except it isn’t a person who says this; it’s a computer.

The UK government is currently refining a new Data Protection and Digital Information Bill, covering topics such as access to individual and business data as well as biometric data. I have co-authored another piece (published here on my blog) that discusses biometric technology, specifically facial recognition technology, and here I will discuss a related topic: AI and data protection. I will address the topic broadly, considering the overarching impact that the development of AI has on data protection. Fundamental to data protection is the level of surveillance that can take place, and that is the heart of this issue: the development of AI makes monitoring and surveillance far easier and far more effective. Firstly, it is now possible to store vast amounts of information about a huge array of individuals in one place, on a major scale. Secondly, whereas monitoring a person previously required employing individuals to collate evidence, which was expensive and time-consuming, this can now all be done by AI. Not only can AI collect vast amounts of information to create an extensive ‘file’ on a subject or person, it can also analyse and sort through that information and make inferences about a person based on what it holds, so the file builds up and becomes more and more comprehensive. This allows social monitoring to be faster and far easier, and therefore far more feasible en masse.

At the start of this piece, I noted that a computer may even be able to see what you’re feeling. To many this may seem like something out of a science-fiction dystopia; however, it is something that is being developed. A system named ‘VibraImage’ takes video footage and analyses the head movements of the people in it to determine their emotions. Although it is used in countries including Russia, China, Japan and South Korea, there is some dispute as to its efficacy, owing to a lack of supporting evidence. Despite the doubts surrounding the system, it is evidence of the growing development of AI-driven monitoring of individuals.

A more concrete example of AI monitoring that effectively undertakes tracking is the US-based company ‘Clearview AI’, whose model shows how AI can be used to track and profile individuals, finding information about them and making inferences from it. By way of introduction, here is a brief summary of how the company’s software functions. It searches the internet for images that can be added to its database. Its clients (national security agencies and law enforcement) can upload their own images to Clearview’s system, where they are compared against images from the database. The system analyses the pictures and puts forward any potential matches it finds, graded by how likely the images are to show the same person; a simplified sketch of this kind of matching pipeline appears after this paragraph. The software also provides information about people through data it has found in connection with the images it collates from the internet. GDPR applies to AI monitoring systems, and Clearview’s practices were tested against these regulations in 2023, when the company successfully appealed against the Information Commissioner’s Office; there is, however, no specific reference to AI within GDPR. As a general rule, monitoring in the interest of safety and security is generally accepted, and Clearview’s appeal succeeded on the basis that its monitoring was used for that purpose (which is also why, in 2020, after a lawsuit with the American Civil Liberties Union, Clearview had to stop supplying commercial clients). The issue arises because what counts as ‘necessary’ for safety and security is highly arguable and certainly not clear-cut when weighed against the right to privacy. Naturally, with the inevitable rise of individual profiling and tracking, the question we have to ask ourselves is: how is this information being used, and by whom? Governments? Corporations? Is it used for safety? For advertising? For greater social control?
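To make the matching step concrete, here is a minimal, purely illustrative sketch in Python. It assumes nothing about Clearview’s actual, proprietary system: the ‘embed’ function is a stand-in for a trained face-embedding network, and the ‘database’ holds toy data. It only shows the general technique such systems rely on: converting each face image into a numeric vector and grading candidate matches by how similar their vectors are.

import numpy as np

# Illustrative only: a toy face-matching pipeline. Real systems use
# proprietary neural networks to turn each face image into an embedding
# vector; here the embeddings are deterministic stand-in random data.

EMBEDDING_DIM = 128

def embed(image_bytes: bytes) -> np.ndarray:
    """Stand-in for a face-embedding model: maps an image to a vector.
    A real system would run a trained neural network here."""
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    v = rng.standard_normal(EMBEDDING_DIM)
    # Unit length, so a dot product between two embeddings equals
    # their cosine similarity (1.0 = identical).
    return v / np.linalg.norm(v)

# A 'database' of images scraped from the web: identity -> embedding.
database = {
    "person_a": embed(b"photo of person a"),
    "person_b": embed(b"photo of person b"),
    "person_c": embed(b"photo of person c"),
}

def match(probe_image: bytes, top_k: int = 3) -> list[tuple[str, float]]:
    """Compare an uploaded probe image against every stored embedding
    and return candidates ranked by similarity score."""
    probe = embed(probe_image)
    scores = {name: float(probe @ vec) for name, vec in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# A client uploads an image and receives graded potential matches.
for name, score in match(b"photo of person a"):
    print(f"{name}: similarity {score:.2f}")

In a real deployment the embeddings would come from a network trained on millions of faces and the database would hold billions of scraped images, but the ranking logic is essentially this simple, which is part of why such monitoring scales so easily.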

© Social Sophistry 2023-2025