self-contained utilizes the Pix2PixHD neural network to generate speculative physiologies. I trained the network on image pairs: motion-capture dots recorded with a Kinect V2, matched with video frames of me in various outfits, adopting various personae through movement. The system has been trained to associate multiple personae with these simple dot patterns. Once the system is trained, feeding my original movements back into it forces it to make decisions about which of my selves it must present, and how it should connect my limbs.
2018-10-07: Initial results
Live-testing the trained model using the pix2pix example in the ofxMSATensorFlow addon.
Generating results by testing single-dot deviations from an input I know generates something coherent.
Trying novel input from the ground up. Absolutely demonic!
Based on a training set of 3,636 images, the model isn’t all that great yet at creating new body forms from novel input (novel input being arrangements of white dots not used in the training data). Still, with a more extensive training set, the sparse white-dot input could do a pretty good job of generating coherent bodies.
Above, you’ll see I’m only manipulating one dot at a time, seeing how far a new dot can be from a previous dot before the imagery becomes complete spaghetti. Right now, the margin is pretty tight.
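To give a sense of the test, here’s a rough sketch (not the project code; the joint coordinates, offsets, and filenames are all made up) of how single-dot deviations from a known-good pose can be generated as input images for the model:

```python
# A minimal sketch of the single-dot deviation test. The pose below is a
# hypothetical set of (x, y) dot coordinates; one dot gets nudged by
# increasing offsets, and each variant is rendered as a 256x256 input image.
from PIL import Image, ImageDraw

pose = [(60, 40), (70, 90), (50, 140), (90, 140), (128, 200)]  # hypothetical joints

def render(dots, size=256, radius=3):
    img = Image.new("RGB", (size, size), "black")
    draw = ImageDraw.Draw(img)
    for x, y in dots:
        draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill="white")
    return img

# Nudge the first dot rightward in 5-pixel steps to find where coherence breaks.
for offset in range(0, 40, 5):
    dots = [(pose[0][0] + offset, pose[0][1])] + pose[1:]
    render(dots).save(f"deviation_{offset:02d}.png")
```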
The training data looks like this:
Video of the complete data set. 6x speed. 3,636 512×256 images.
Not exactly comprehensive coverage of my range of motion, but enough for the network to learn which arrangements of dots make up which human forms.
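Worth noting: each of those 512×256 images is really two 256×256 images side by side, which is the paired format pix2pix-tensorflow trains on. Here’s a rough sketch of how one pair can be assembled (file paths are hypothetical):

```python
# A sketch of assembling one 512x256 training pair: the 256x256 dot image
# on the left, the matching 256x256 video frame on the right.
# File paths here are hypothetical.
from PIL import Image

dots = Image.open("dots/frame_0001.png").resize((256, 256))
frame = Image.open("video/frame_0001.png").resize((256, 256))

pair = Image.new("RGB", (512, 256))
pair.paste(dots, (0, 0))     # input (A)
pair.paste(frame, (256, 0))  # target (B)
pair.save("train/frame_0001.png")
```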
2018-10-05: Live testing novel input?
Not yet. The training was a success, in that I was able to feed in my data, and the model it spat out seems decently trained!
But in order to test it, I need to figure out how to feed my model into the openFrameworks pix2pix example included in ofxMSATensorFlow.
Here are some links I’ve gone through to try to get things working in openFrameworks.
- This page is important for setting things up: https://github.com/memo/ofxMSATensorFlow/releases
- This page is important for preparing my pre-trained models: https://github.com/memo/ofxMSATensorFlow/wiki/Preparing-models-for-ofxMSATensorFlow
This paragraph, which I somehow missed while thoroughly skimming (can one thoroughly skim?) the page, ended up being incredibly crucial for exporting the frozen graph model I need to feed into openFrameworks. I spent hours trying to figure this out. Turns out I was reading Christopher Hesse’s original pix2pix-tensorflow page, not Memo’s fork, which has this invaluable code snippet.
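For the general shape of what that export step does, here’s a minimal sketch of freezing a TF1 checkpoint into a single .pb file. This is not Memo’s actual export code; the checkpoint path, output node name, and filename are all assumptions:

```python
# A general sketch of freezing a TensorFlow 1.x checkpoint into one .pb file,
# the kind of frozen graph ofxMSATensorFlow loads. Paths and the output node
# name ("generator/output") are guesses, not the real values from my model.
import tensorflow as tf

with tf.Session() as sess:
    saver = tf.train.import_meta_graph("checkpoint/model.meta")
    saver.restore(sess, tf.train.latest_checkpoint("checkpoint"))
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["generator/output"])
    tf.train.write_graph(frozen, "export", "graph_frz.pb", as_text=False)
```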
2018-10-01: Beginning training!
After a lengthy setup process on the computers, making sure nvidia-docker2 was correctly installed (thanks Kyle Werle), it’s time to try out the training.
I think this project depends on getting the right kind of results from my training data. Whatever that might look like, the first hurdle is, of course, getting the training up and running. I’m going to post a list of links here that I accessed/used at various points to get this going (plus a sketch of the training command itself, after the list).
- The neural network I’m using, pix2pix-tensorflow (Memo Akten’s fork of Christopher Hesse’s TensorFlow port of Phillip Isola et al.’s original Torch implementation): https://github.com/memo/pix2pix-tensorflow
- Notes from Christopher Baker on getting this up and running on Linux (thanks Chris!): https://gist.github.com/bakercp/ba1db00e25296357e5e3fef11ee147a0
- (haven’t tried this yet) High-resolution (1024×1024) adaptation of standard Pix2Pix: https://medium.com/@karol_majek/high-resolution-face2face-with-pix2pix-1024x1024-37b90c1ca7e8, https://github.com/karolmajek/face2face-demo
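For reference, the core training invocation from the pix2pix-tensorflow README looks something like this (the directories and epoch count here are placeholders, not my actual settings):

```sh
python pix2pix.py \
  --mode train \
  --input_dir train \
  --output_dir checkpoint \
  --which_direction AtoB \
  --max_epochs 200
```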
2018-09-03: The first steps!
So, the first part of this project is getting pix2pix to output what I need. Well, could I ask an AI to interpret biological motion from a sparse set of dots the way we humans are able to?