Jess Rohan

There are no cameras visible in the Metro Pictures gallery where Trevor Paglen’s latest exhibition, A Study of Invisible Images, is on show through October 21st. Instead, visitors encounter images that machines have already made - many displaying things that humans were never meant to see.

There’s Frantz Fanon, whose dark eyes peer out of a face blurred at the edges in a work titled “Fanon” (Even the Dead Are Not Safe). The image was created using a facial recognition algorithm similar to the one Facebook uses to identify faces in photos its users upload, isolating the unique features of each face. The title points to the fact that even people who never joined Facebook are stored and identified by the software.
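The underlying technique can be sketched in a few lines. The example below is illustrative only, using the open-source face_recognition library rather than Facebook’s proprietary system; the filenames are hypothetical placeholders.

```python
# A minimal sketch of face matching via "unique features": each face is reduced
# to a 128-number embedding, and two faces are compared by the distance between
# their embeddings. Uses the open-source face_recognition library; the file
# names are hypothetical placeholders.
import face_recognition

# Encode a known face (e.g. an archival portrait).
known_image = face_recognition.load_image_file("fanon_portrait.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode a face from a newly uploaded photo.
unknown_image = face_recognition.load_image_file("uploaded_photo.jpg")
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# A small distance means the faces likely belong to the same person,
# whether or not that person ever opted in.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
print(f"embedding distance: {distance:.3f}  match: {distance < 0.6}")
```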


The exhibition considers machine-made images - some created for machines to communicate with each other, others to inform their human operators. Paglen developed the works at Stanford in collaboration with programmers and AI specialists, building software that renders this data as images legible to humans.

For “Machine Readable Hito”, dozens of photographs of artist Hito Steyerl making different expressions are pasted to the wall in a grid. Below each photo are readouts from various facial recognition algorithms designed to guess a subject’s age, gender, and other details. In one image, an algorithm puts Steyerl’s age at 30; in another, 43. Other readouts estimate the emotional composition of Steyerl’s face, expressed as percentages of joy, fear, sadness, disgust and surprise. The work in part highlights the shortcomings of machine sight, which can only interpret what it sees based on the datasets its operators feed it.
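The readouts in the piece resemble the output of off-the-shelf analysis models. A rough sketch of that kind of readout, using the open-source DeepFace library (the filename is a hypothetical placeholder, and the exact result format varies between library versions):

```python
# Sketch of the kind of readout shown beneath each photo: an off-the-shelf
# model guessing age, gender, and an emotion breakdown from a single image.
# Uses the open-source DeepFace library; the filename is a hypothetical
# placeholder and the result structure may differ between library versions.
from deepface import DeepFace

results = DeepFace.analyze(img_path="hito_expression_01.jpg",
                           actions=["age", "gender", "emotion"])

# Recent versions return one result dict per detected face.
for face in results:
    print("estimated age:", face["age"])
    print("dominant emotion:", face["dominant_emotion"])
    # Emotion scores are expressed as percentages, like the gallery readouts.
    for emotion, score in face["emotion"].items():
        print(f"  {emotion}: {score:.1f}%")
```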

“Machine learning systems are very good at detecting patterns in large datasets but cannot make up their own rules, which is something humans easily do,” Paglen says.

One room, titled “Hallucinations”, is dedicated to machine images generated from training libraries, the image sets programmers use to teach AI software to recognize objects. Instead of using a standard training library built around a single category of object (such as goldfish, or doors), Paglen trained the AIs on databases built around more abstract concepts, like “American Predators” (for which he included Venus flytraps, wolves, Predator drones, and Mark Zuckerberg). When connected to a camera, the software interprets all visual input as a version of its “American Predators” dataset. The results are abstract-looking, often eerie images.
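This is not the artist’s actual software; the sketch below shows only the general technique of training a standard image classifier on custom, curated categories, using torchvision, with hypothetical directory names standing in for such a corpus.

```python
# Sketch of training an image classifier on "abstract concept" categories
# rather than standard object classes. A generic torchvision fine-tune with
# hypothetical directory names: each subfolder of dataset/ is one concept
# (e.g. dataset/american_predators/ holding wolves, Venus flytraps, drones,
# and so on).
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("dataset/", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained network, replace its final layer with one
# output per abstract concept, and fine-tune.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))
optimizer = optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```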

“The political and cultural climate in the US is so overboard at the moment that it's been hard to respond to aesthetically. I think the new body of work reflects that - it's far more grotesque and even gothic than anything else I've ever done,” Paglen says.

But it’s the larger context Paglen references that is eeriest of all: these technologies are already built and used, by governments and corporations, for control and profit. Machine sight can be used by the US military, for example, to carry out drone strikes. That usage is referenced in “Four Clouds”, a quadriptych of aircraft-view landscapes overlaid with region adjacency graphs, which can be used to track moving objects in video.
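A region adjacency graph itself is a simple structure: segment a frame into regions and record which regions border which. A minimal sketch with scikit-image, using one of its bundled sample images in place of aerial footage:

```python
# Sketch of computing a region adjacency graph (RAG) for a single frame:
# the image is split into superpixel regions, and a graph records which
# regions touch which. Tracking systems can then follow regions across
# frames. Uses scikit-image (in versions before 0.21 the graph module lives
# at skimage.future.graph); the frame here is a bundled sample image.
from skimage import data, segmentation, graph

frame = data.astronaut()

# Over-segment the frame into roughly 300 superpixel regions.
labels = segmentation.slic(frame, n_segments=300, compactness=10, start_label=1)

# Build the RAG: nodes are regions, edges connect adjacent regions, weighted
# by how similar their mean colors are.
rag = graph.rag_mean_color(frame, labels)

print(f"regions: {rag.number_of_nodes()}, adjacencies: {rag.number_of_edges()}")
```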
 
US police departments are increasingly using drones and other aerial devices for domestic surveillance. The Los Angeles Police Department announced earlier this week that it would begin a pilot program using drones to monitor natural disasters and “high-risk tactical situations,” becoming the largest department in the country to deploy them. One of the concerns raised by opponents of the plan was that it would only heighten the distrust many residents already have of the police.

Technologies have a long history of absorbing the shortcomings of their creators. Most famously, Kodak film was calibrated to render white skin reliably, in ways that resulted in the poor rendering of black subjects. More recently, a Google photo recognition algorithm labeled black people as gorillas, and predictive policing software meant to allocate police resources efficiently recommends more policing in low-income black neighborhoods, reproducing the biases of the data it is built on.

As Invisible Images deftly highlights, machines don’t see the way humans do. Yet the world is increasingly filtered through machine sight, despite its inherent limitations. Just as the quirks of conventional camera technology are glossed over in our eagerness to see digital images as a realer-than-real “reality”, there is a risk in adopting machine sight as a de facto way of seeing, particularly in the arenas where it stands to do the most damage - war, policing and public surveillance.

Paglen’s exploration of the depths and limits of AI sight raises questions about what machine sight in the hands of the state means for the humans the machines are watching.

Jessica Rohan is a contributing editor for Warscapes.
