self-contained II, 2019
single-channel projection, pix2pixHD neural network, custom code
7:54 excerpt of a realtime system
We come to understand being in our bodies through two processes. The first is inhabiting our bodies: dictating our own movements and feeling them act themselves out in time. The second is watching our bodies: how much can we learn about being in our bodies from watching mirror reflections or video recordings? Simultaneously watching and inhabiting a body creates a feedback loop, a process of perpetual calibration.
Technology is a tool for self-reflection: it provides a way to measure our imprint on the world around us, and how that world shapes us in turn. With recent advancements in artificial intelligence, we now contend with computer-generated doppelgangers, born from data generated by our lived experience in the world. These doppelgangers are derived purely from observation, formed without any of the knowledge gained from inhabiting a body. How might a body constructed from photographic references operate? Is it a body at all?
Without context or history, self-contained II depicts bodies out of time and space. Trained on a dataset of over thirty thousand images of the artist’s body, an artificially intelligent neural network invents new forms of that body, unbound by biology, gravity, or time.