self-contained: Motion Studies
Single-channel video, 11:50 of a real-time system, pix2pixHD neural network, dye-infused aluminum
2019
A neural network trained to see the world as variations of the artist’s body enacts a process of algorithmic interpretation that contends with the body as a subject of multiplicity. After training on more than 30,000 images of the artist, the neural network learns to create surreal humanoid figures unconstrained by physics, biology, and time; figures that are simultaneously one and many. The costumes and movements the artist used to generate the training images were specifically formulated to optimize the artist’s legibility within this computational system. self-contained explores the algorithmic shaping of our bodies, attempting to answer the question: how does one represent oneself in a data set?
In addition to the moving on-screen figures, self-contained: Motion Studies introduces a sequence of nine frames extracted from the video and printed on dye-infused aluminum. Arranged side by side so as to suggest a film strip, these nine frames represent just 0.3 seconds of the video; even in this short span, many details change from frame to frame. Though the on-screen figures are in motion, this printed slice of time calls attention to the non-temporality of the system. This artificial intelligence, which has no understanding of past or future, generates imagery unbound by time. Only through our observation does this system move.
This installation was included in the Future Conditions exhibition, part of the SAIC Art and Technology Studies department’s 50th Anniversary programming.
To see how the self-contained series was created, click here.
The full video, as it was included in the installation.
This looping video is an excerpt from a real-time generative system.