A bountiful harvest

Recently, I asked people if I could harvest their data.

I just needed them to generate some data to help me create art. I wrote out a contract for people to sign, promising vaguely that in exchange for letting me use their data for my art, I would give them a unique digital art object to own.

The data I desire is the subject’s likeness, which I capture in 30-45 second videos. In these videos, subjects move around in front of a black backdrop as they see fit, and the camera captures their entire body in the frame. In addition to capturing this photographic representation, I place a sensor beneath the video camera that is capable of “skeleton tracking”, a technique I rely on to capture motion data from my subjects. I convert this motion data to fourteen points: one each for the head and neck, two for the torso, one for each shoulder, one for each hand and elbow, and one for each knee and foot.
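
For a sense of what that motion data might look like once captured, here is a minimal sketch in Python. The joint names, the Pose structure, and the pose_from_sensor helper are hypothetical illustrations rather than my actual capture code; they simply show fourteen (x, y) points being recorded for each frame of video.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical names for the fourteen tracked points described above.
JOINTS = [
    "head", "neck",
    "torso_upper", "torso_lower",
    "shoulder_left", "shoulder_right",
    "elbow_left", "elbow_right",
    "hand_left", "hand_right",
    "knee_left", "knee_right",
    "foot_left", "foot_right",
]

@dataclass
class Pose:
    """One frame of motion data: an (x, y) position for each of the 14 joints."""
    joints: Dict[str, Tuple[float, float]]

def pose_from_sensor(raw_points: Dict[str, Tuple[float, float]]) -> Pose:
    """Keep only the fourteen joints of interest from a raw sensor reading (illustrative)."""
    return Pose(joints={name: raw_points[name] for name in JOINTS})
```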

Typically, when we think of giving up our data, we think about it as a byproduct of our everyday activity. To exist in 2019 is to be constantly shedding personally identifiable information through passive, ambient processes of which we are largely not conscious. Social media platforms gather all sorts of data: who you’re talking to, what you’re saying, what you look like. Your public transportation card creates an entire map of all the trains and buses you’ve taken. Your face appears in closed-circuit surveillance cameras distributed around your city. Your web browsing history grows and grows.

None of this is new information. We are all very much aware of these processes. So aware, in fact, that we largely ignore them altogether, figuring that we’re already compromised and there’s nothing we can do to keep ourselves from filling up a database housed in a desert somewhere. Some of us might take active measures to curb the amount of data we shed. We might delete our social media accounts or disable location-accessing services on our phone apps, but at this point there are so many ways in the world to capture our existence that it seems there’s no way to avoid them all.

But in the case of this project, the data I ask for is actively generated, in a studio environment that functions as a laboratory. Generating data, as opposed to shedding it, requires that we actively participate in its production. This may come in the form of filling out surveys or participating in scientific experiments. We associate this type of data more closely with labor, so we are more often compensated for contributing it, whereas the data we shed is treated as a byproduct of the things we are already doing, so all we receive in return might be minor increases in convenience, or (potentially) more relevant advertising, at an enormous cost to our personal privacy.

While the data we shed is mostly used to recognize us–to make decisions about who we are and what we want, or whether we deserve those things–the data we generate is handed over knowingly, for a purpose we have agreed to take part in.

What does it mean to allow someone the rights to your likeness? The process I use to capture the likeness of my subjects isn’t all that different from the way, say, a studio photographer will photograph her clients. As is common practice, the rights to the captured images belong to the photographer, not the client. This allows the photographer to earn money from her practice, through the withholding of the images she captures. But the photographer who holds these rights thinks of them as images, not as data per se, even though they are typically digital images.

But now we live in the age of machine learning, where it seems the goal of all human activity is to be quantified, digitized and analyzed in a grand repository of similar data. Text, music, video, pictures; nothing is safe from these data-devouring algorithms. Images, in particular, no longer exist merely to be looked at by humans. As the artist Trevor Paglen wrote, “Human visual culture has become a special case of vision, an exception to the rule. The overwhelming majority of images are now made by machines for other machines, with humans rarely in the loop.”

The difference between the goals of a studio photographer and my goals is that I wish to capture images of my subjects so that I may teach a machine to understand how to represent their “likeness.” As much as I appreciate having human-watchable footage of my subjects joyfully gesturing and moving around in front of a camera, I really only care about the fact that I can use this video as data to feed a machine. The machine learning algorithm I use sees your thirty seconds of video as nine hundred image frames. At a resolution of 1024×512 pixels, the machine reads each image as a combination of 524,288 discrete elements. Each of those 524,288 pixels can be one of 16,777,216 colors. In a 1024×512 image, there are therefore 16,777,216^524,288 possible images, a number nearly four million digits long. This is a ridiculously large number, especially considering that there are only an estimated 10^80 atoms in the universe.
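
The arithmetic above is straightforward to verify. Here is a small sketch, assuming a frame rate of 30 frames per second and 8 bits per RGB channel (both are assumptions; the text above does not state them explicitly):

```python
import math

fps = 30                      # assumed frame rate
seconds = 30
frames = fps * seconds        # 900 frames from thirty seconds of video

width, height = 1024, 512
pixels = width * height       # 524,288 discrete elements per image

colors = 256 ** 3             # 16,777,216 colors at 8 bits per RGB channel

# Number of possible 1024x512 images: colors ** pixels.
# Far too large to print directly, so report how many digits it has instead.
digits = int(pixels * math.log10(colors)) + 1
print(frames, pixels, colors, digits)   # 900 524288 16777216 3787834
```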

When a computer is fed hundreds, or thousands, or in my case tens of thousands of images, it can start to recognize patterns in these images, depending on the architecture of the machine learning algorithm. For my work, I use a particular algorithm called pix2pix that can learn to associate any kind of input imagery with any kind of output imagery.
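
As a rough illustration of what “associating input imagery with output imagery” involves, here is a sketch of how paired training images might be prepared, assuming the input (A) is a rendering of the fourteen tracked points and the output (B) is the corresponding video frame. The side-by-side layout follows the convention of the common pix2pix reference implementations, but the file paths and the helper itself are hypothetical.

```python
from pathlib import Path
from PIL import Image

def make_training_pair(skeleton_path: Path, frame_path: Path, out_path: Path) -> None:
    """Concatenate an input image (A) and its target image (B) side by side.

    pix2pix learns a mapping from A to B; here A would be a drawing of the
    fourteen tracked points and B the corresponding 1024x512 video frame.
    """
    a = Image.open(skeleton_path).convert("RGB").resize((1024, 512))
    b = Image.open(frame_path).convert("RGB").resize((1024, 512))

    pair = Image.new("RGB", (2048, 512))
    pair.paste(a, (0, 0))       # input on the left
    pair.paste(b, (1024, 0))    # target on the right
    pair.save(out_path)

# Hypothetical usage: one pair per frame of video.
# make_training_pair(Path("skeletons/0001.png"), Path("frames/0001.png"), Path("train/0001.png"))
```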