Biometry is the analysis of biological data. More recently, the term “biometrics” has taken on a particular popular focus: the automated measurement of human features, often for identifying individuals by specific characteristics such as face shape. The confluence of improvements in imaging technology and computer processing power enables somewhat secure personal access to our phones and laptops by facial or fingerprint recognition. Security researchers at Kaspersky Lab describe biometric security approaches as convenient – especially compared with some other security methods that can be forgotten or misplaced – and difficult to impersonate.
Unfortunately, this is where the good news ends. Convenience might be the case for day-to-day personal use, but there are significant risks. Biometric data can be stolen, whether through hacking of a storage site or by being copied from another source. Kaspersky Lab notes that biometric data and stored passwords are at similar risk of being stolen; but whereas a breached password can be changed, biometric data is expected to be persistent, so the ramifications of its theft or duplication are far greater. And the threat extends well beyond a handful of antisocial actors. Biometric identification technology is being implemented under capitalism, where forces of direct exploitation and of repression exist simultaneously. The remainder of this article will address some of the risks we face, survey numerous harmful applications already in operation, and consider the nature of the futuristic-sounding, surveillance-intensive dystopia we have already well and truly entered.
The Electronic Frontier Foundation (EFF) warns that biometrics threatens privacy primarily through its potential use by governments for surveillance. Automatic car licence plate readers are already prevalent in New Zealand, for example in managing car parking areas. In the USA, licence plate readers have been used to record the locations of vehicles in national databases that allow retrospective tracking of their movements. Facial recognition works similarly, but is used to identify individual people by recognising unique facial features. The EFF warns that as the technology continues to improve, facial recognition will become increasingly effective, and surreptitious identification and tracking could become the norm.
We’re already seeing such normalisation. The Chinese government has been widely condemned for its use of surveillance, particularly of the Xinjiang region. Red Flag has described this as “a coercive way to force a whole community to self-police and make the construction of a Uyghur resistance movement difficult.” The UK Starmer government is proceeding to expand its current use of facial recognition technology, which has included scanning large crowds in real time to locate specific people. The USA has been accused of mass surveillance by “law enforcement agencies”, including but not limited to the recently much-discussed Immigration and Customs Enforcement (ICE) agency, as well as by corporations, community groups, and individuals. And all of that is just the malevolent contribution of a single company, Flock Safety; Alec Hively of BGR reports that “[Flock’s] tech has been deployed to identify activists, track abortion visits, stalk women, and reinforce racial profiling.”
An illuminating example of what, and who, we are collectively up against is “Clearview AI”. Clearview’s co-founder, Cam-Hoan Ton-That, has longstanding connections with “alt-right” (i.e. far-right) extremists including Jeff Giesea (known for publicising a guide on how to give money to white nationalist and neo-Nazi organisations), Charles “Chuck” Johnson (once a writer for far-right website Breitbart), and Peter Thiel (one of Clearview’s earliest investors, whose own data-mining/surveillance company Palantir is known to empower the USA’s ICE in its surveillance of migrants and to assist Israel’s genocide of Palestinians in Gaza). Clearview has used automated methods to collect billions of publicly available images from the Internet, compiling a massive reference database. Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, has reportedly described this business model as “weaponizing our own images against us without a license, without consent, without permission.” Clearview was then marketed to both state and private buyers, with US “law enforcement” urged to “run wild” with searches in order to build enthusiasm for the technology – an approach that has seen widespread adoption and weaponisation against targets ranging from shoppers to activists and protesters.
Biometric data is a much broader category than just visual facial features. Biometric passenger screening is commonplace in airports across Southeast Asia and Europe. The US Transportation Security Administration (TSA) has already implemented identity “confirmation” via passengers’ personal phones, and fingerprint scanning is becoming increasingly commonplace. All this is marketed as a “seamless”, “smoother experience” which supposedly “gives [airline passengers] back control”, as if giving away personal data were convenient and empowering. The Unique Identification Authority of India’s Aadhaar program uses facial recognition alongside iris scans and fingerprints from all ten fingers, linking this information to a unique identification card which is expected to become mandatory for anyone accessing social services in India. Around the world, the degree to which control and surveillance are being exerted at the level of the individual is escalating at an alarming pace.
Biometric systems are not infallible. This might seem like a morsel of good news, suggesting that computer errors might mean occasional minor failures to oppress through surveillance; but here too, the widespread implementation of, and reliance on, biometric identification mean that such errors primarily pose additional risk. As Anton Stravinsky writes in Newstrail:
Multiple studies have shown that facial recognition systems are not equally accurate across demographics. Error rates are higher for women, people with darker skin tones, and older adults. At airports, this translates into disproportionate scrutiny for certain groups, raising concerns about fairness and potential discrimination. […] this means efficiency promises may mask unequal experiences, with some passengers breezing through while others repeatedly face secondary checks.
Those unequal experiences have significant potential to be harmful, and thus biometric surveillance and scrutiny only increases the risk for those already under the greatest oppression. Parallel concern was raised by New Zealand’s Office of the Privacy Commissioner, which wrote in 2023 that key risks of biometric technologies include “cultural harms to Māori from misuse of Māori biometric information, and the potential for biometric systems to misidentify Māori or be used in ways that have adverse outcomes for Māori.”
Scott Confer provides a detailed exploration of privacy issues in a 2017 paper titled A Socialist Theory of Privacy in the Internet Age: An Interdisciplinary Analysis. Drawing on earlier work by Christian Fuchs, Confer notes that liberal discussions of privacy often ignore its negative aspects, which include “promoting individual agendas, allowing for misrepresentation of people’s character, and opposing participatory democracy.” Confer continues:
Liberal concepts of privacy end in the alienation of humans from their social essence in the public sphere. Given both the positive and negative potential of privacy, the question a theory of privacy needs to answer is not how privacy can be best protected but rather in what situations privacy should be protected.
Thus:
The context of a situation is key in determining how much to protect privacy. The socialist theory of privacy posed by Fuchs focuses on the distinction between classes in society, arguing that privacy should be used to strengthen the lower and middle classes, but it should not be used to protect corporations and the rich if this solidifies the gap between them and the lower classes. Ultimately, the socialist theory of privacy emphasizes both strengthening privacy for consumers and increasing transparency among capital owners. More generally, it advocates using privacy to protect the exploited from those exploiting them. This socialist theory uses privacy as a tool for leveling the playing field between an exploited group and a dominant one.
With the singular exception of the opening example of the convenience of unlocking your personal device, every use of biometrics described in this article is an example of the ruling class consolidating control over the oppressed majority. Unease about surveillance is surely felt by many, and it can be easy to fall into overly simplistic responses along “everyone has the right to privacy” lines. But a class analysis is critical to understanding the phenomenon of surveillance. The ISO has long recognised this: for example, it supported protests in 2013 against the expansion of the powers of the Government Communications Security Bureau (GCSB), one of New Zealand’s spy agencies, rebuking claims that this was “not a left or right issue” and pointing out that “spy agencies are used by the Right against the Left […] against union activists, feminist activists, and environmentalists.” A hypothetical reform requiring companies or politicians to disclose their sources of income rightly disempowers capitalists; the unfettered proliferation of biometric surveillance technologies disempowers the masses. We should look for ways to support increased transparency of the actions of capitalists while simultaneously supporting restrictions on privacy-invading technology at the mass scale.
The Guardian has reported on the Danish government moving to change copyright law “to ensure that everybody has the right to their own body, facial features and voice.” However, this move aims to limit the use of such biometric data for the creation of “deepfakes”, which are realistic digital (mis)representations of a person. The autonomy represented by such personal “copyright” might, perhaps, be usefully extended to cover the unauthorised measurement and collection of biometric data for surveillance purposes. But notably absent from that reporting is any consideration of a person’s right to their own biometric data in the context of being surveilled and identified by a state; the underlying assumption appears to be that identity is a marketable product rather than a subject of surveillance and suppression.
Rather than relying on copyright legislation to save us, we can usefully fight back in other ways. First, we must be clear to ourselves, and clearly communicate to others, the class warfare that underpins biometric surveillance, as any further action will be strengthened by widespread support. Second, we can push for reforms which explicitly ban both state and private use of biometric surveillance in all spheres, including public spaces, borders, custodial spaces such as police stations and prisons, and private institutions such as most workplaces. Third, we can wholeheartedly celebrate every instance of interference with surveillance devices, such as the cutting down of surveillance cameras across several states in the USA. Biometrics and surveillance are weapons of class warfare, and we must not allow the capitalist class to continue to arm itself against us.
Banner Image: Image evocative of automated facial recognition as used for surveillance. Credit: Remix by Serah Allison of EFF-Graphics’ Surveillance-camera.png (CC BY 3.0 US) and mikemacmarketing’s Facial Recognition22.jpg (CC BY 2.0) from Wikimedia Commons.





