Tucson Sculpture Festival 2012

Monday, 31 August 2020

August Summary

I spent August using AI to make fake drawings and learning Onshape, an online CAD program.  Holly Grimm has been introducing me to the world of AI art (she had just finished mentoring Paola Torres Núñez del Prado in the Google Artists + Machine Intelligence program).


FAKE DRAWINGS

StyleGAN2 (posted on GitHub) is churning out fake Krrrl drawings in my style, now that I have uploaded a good portion of the drawings from my blog into Runway ML.


Fake drawing in my style,
generated by the 3rd StyleGAN2 model,
and further colored and warped in Deep Dream Generator --
I did not draw this!






In Runway ML

I uploaded 209 drawings from my book -- Finish My Figure Drawings -- to "train" the first model in Runway ML, using the StyleGAN2 program.  Then I pushed the possibilities of that model through subsequent blog postings.
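Runway ML handles the image preparation itself, but for anyone preparing a StyleGAN2 dataset by hand: the model generally wants square, uniformly sized inputs. A minimal sketch of the usual center-crop step (the function name is my own, not a Runway ML API):

```python
import numpy as np

def center_square(img):
    """Center-crop an (H, W, C) image array to a square.
    StyleGAN2 training sets are normally square and all one size,
    so this is the typical first step when preparing drawings."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    return img[top:top + s, left:left + s]

# A tall 300x200 "drawing page" becomes a 200x200 square.
page = np.zeros((300, 200, 3), dtype=np.uint8)
print(center_square(page).shape)  # (200, 200, 3)
```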


I did not draw this!




I did not draw this!


I did not draw this!


I did not draw this!









I made a 2nd "model" after uploading 1456 short pose drawings from my blog, spanning 2016 to the present (August 2020), most of them drawn in 20 minutes.  I "trained" this model in StyleGAN2 on the 1st model (the one based on the drawings from my book).

The results seemed a little more robust, with slightly better faces.

I did not draw this!



I made the 3rd "model" after uploading 348 drawings from my blog -- basically all the long pose drawings (usually drawn in 3 hour sessions).  I "trained" this model on the 2nd model of my drawings, which was quite incestuous (especially considering that the 2nd model was trained on the 1st model of my drawings).

The original drawings of longer poses were more substantial, and yielded better fake drawings.  The fake faces were as coherent as the fake bodies.
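Training each model on top of the previous one is, in machine-learning terms, transfer learning: each new run starts from the old model's weights instead of from scratch, so it keeps what it already learned. A toy sketch of warm-starting with a tiny linear model (the datasets and numbers are made-up stand-ins for illustration, not anything Runway ML exposes):

```python
import numpy as np

def fit(x, y, w=0.0, b=0.0, lr=0.01, steps=1000):
    """Gradient descent on a 1-D linear model; pass in w and b to
    warm-start from a previously trained model."""
    for _ in range(steps):
        pred = w * x + b
        w -= lr * 2 * np.mean((pred - y) * x)
        b -= lr * 2 * np.mean(pred - y)
    return w, b

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
book = 2.0 * x + rng.normal(0, 0.1, 200)         # stand-in for the book drawings
short = 2.5 * x + 0.3 + rng.normal(0, 0.1, 200)  # stand-in for the short poses

w1, b1 = fit(x, book)               # model 1: trained from scratch
w2, b2 = fit(x, short, w=w1, b=b1)  # model 2: warm-started from model 1
print(f"w2={w2:.2f}, b2={b2:.2f}")  # close to the short-pose data (2.5, 0.3)
```

Model 2 ends up fitting the new data while starting from model 1's parameters, which is why the chained StyleGAN2 models inherit traits from their predecessors.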

I did not draw this!

FAKE VIDEOS

Runway ML also allowed me to make fake videos -- using the First Order Motion Model.  This program maps the head motion of a face in a driving video onto a still face photo, turning that still photo into a video (the audio carries over from the driving clip).  I managed to get one of my static portrait drawings to mimic karaoke (see below).  One can also do this for free using Google Colab, but it is a convoluted process.


I drew a still portrait,
the computer animated it!




***

RUNWAY TUTORIALS

I watched the Runway ML tutorials by Derrick Schultz and Lia Coleman.


DRAWING IN THE 21ST CENTURY

Scott Eaton is using this kind of AI to create odd but realistic figure drawings.  He draws the outlines of a sketch, and the AI figures out how to put a fleshy volume there, like a leg, lit up by a single light source.  Drawing is not what it was in the 20th century.

Scott Eaton
figure drawing with AI



QUELAB
CAD/DRAFTING WORKSHOP


Ethan gave us an online CAD/drafting workshop for Quelab members, featuring Onshape (there is a free version for registered users).  The first week (August 5 - 7) covered drafting by hand, and the second week (August 12 - 14) covered modeling in Onshape, the online CAD program.







After taking the workshop over Zoom, I figured out how to prepare a DXF file of my drawings and import it into Onshape, where I was able to extrude it into three dimensions.


Extruded into 3D
in Onshape
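For anyone curious what the DXF file that Onshape imports actually contains: it is a plain-text format of numbered group codes and values. A minimal sketch that writes an outline as LINE entities in the old R12-style format (the helper name is mine; real CAD exports also include HEADER and TABLES sections):

```python
def polyline_to_dxf(points, closed=True):
    """Emit a minimal DXF (R12-style) containing the outline as LINE
    entities. Each entity gives a layer (group code 8), a start point
    (codes 10/20) and an end point (codes 11/21). A traced drawing
    would be many such segments."""
    pairs = list(zip(points, points[1:] + (points[:1] if closed else [])))
    out = ["0", "SECTION", "2", "ENTITIES"]
    for (x1, y1), (x2, y2) in pairs:
        out += ["0", "LINE", "8", "0",
                "10", str(x1), "20", str(y1),
                "11", str(x2), "21", str(y2)]
    out += ["0", "ENDSEC", "0", "EOF"]
    return "\n".join(out)

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
dxf_text = polyline_to_dxf(square)
print(dxf_text.count("LINE"))  # 4 segments for a closed square
```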

AUGUST ARTWALK 
DOWNTOWN ALBUQUERQUE

The Albuquerque Artwalk downtown continued right through the Covid lockdown on the first Fridays, with most of the activities outdoors and at curbside.  During the August Artwalk, the OT Circus gallery was open, but limited the number of visitors in the gallery.  The activity was brisk early in the evening, when I went.




TADPOLES

After a big thunderstorm on the 25th of July, tadpoles showed up in the standing water around the railroad tracks.  I watched the tadpoles flourish and then perish, after the sun dried the pools up.

I did capture 7 tadpoles and a triops in a cup before the pools disappeared.  However, the following day another rainstorm refilled the pools, and I released the tadpoles and triops into those ponds.  

The tadpoles were growing little feet and arms when the pools dried up again in the scorching New Mexico sun.  On August 6th the water had evaporated entirely, and I still wonder if those tadpoles were big enough to burrow into the mud and survive.  I think they were New Mexico Spadefoot toads.

The last I saw of the tadpoles




I saw tadpoles in the desert once before, when I was a kid, during the rainy season, and couldn't understand how they came out of nowhere.  That completely fascinated me, and still does.

During this Covid summer I saw a possum, a badger, a skunk and a few raccoons, animals that I had never seen inside the city of Albuquerque before.


UPDATE (September 4):    I found this toad in front of my house, on the other side of downtown from the tadpole pond.  Is this a huge New Mexico Spadefoot toad?  And if so, how has he survived these past dry weeks?




Runway ML -- 3rd "Model" (based on long poses)

I uploaded 348 drawings -- of almost all of the long poses from my blog (mostly from 3 hour sessions) -- to Runway ML to create a 3rd "model" using StyleGAN2 (an artificial intelligence program posted on Github). 




The video below reveals the whole process.  The first mosaic section shows the entire training run, sampled every 100 steps, which took about 3 hours.  The second section is a "latent space walk" between three points in the AI "model" space, each point being a generated drawing.  The last section shows some of the better results, cherry-picked from the 1095 automatically generated fake drawings that Runway ML delivered as a Zip file.



Perhaps the morphing in the video below, between 5 generated fake drawings, best illustrates what is going on with the artificial intelligence program StyleGAN2.  This is a "latent space walk" video that I downloaded from Runway ML.
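A "latent space walk" is, underneath, just interpolation: each generated drawing corresponds to a point (a latent vector) in the model's input space, and the video samples points along straight lines between them, feeding each sampled point to the generator. A minimal sketch of the interpolation itself (512 dimensions, as StyleGAN2 uses; no generator is called here):

```python
import numpy as np

def latent_walk(points, frames_per_leg=30):
    """Linearly interpolate between successive latent vectors.
    Feeding each returned vector to the generator would yield one
    frame of the morphing video."""
    path = []
    for a, b in zip(points[:-1], points[1:]):
        for t in np.linspace(0.0, 1.0, frames_per_leg, endpoint=False):
            path.append((1 - t) * a + t * b)
    path.append(points[-1])
    return np.stack(path)

rng = np.random.default_rng(1)
keys = [rng.normal(size=512) for _ in range(5)]  # 5 "drawings" = 5 latent points
walk = latent_walk(keys)
print(walk.shape)  # (121, 512): 4 legs of 30 frames, plus the final point
```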




The third time is a charm!  This model was "trained" on the second "model" of my short pose drawings, which was "trained" on the first "model" of the drawings from my book.  The final product was very self-referential and incestuous.  


RESULTS

The video below consists of 500 fake drawings, in the style of my long pose drawings, generated by Runway ML.  The images that follow have been cherry-picked to show off the best results.





This third model handled the faces a lot better, and even delivered some quasi-realistic fake drawings in my style.














Lots of the results looked less realistic than my drawings, but had a certain unexpected weird charm anyhow.











And a lot of results did not even have a figure in them, but still preserved the touch and flow of my hand.











DEEP DREAM GENERATOR

I continued to warp the fake drawings with another AI (artificial intelligence) program -- Deep Dream Generator -- which applies the style and color of one drawing onto another.  It is a free online program (registration required), built on Google's DeepDream algorithm.
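Applying the style of one image to another is neural style transfer: the system matches the color and texture statistics of the style image while keeping the layout of the target. Those statistics are usually captured as Gram matrices of convolutional feature maps. A sketch of that one computation (the feature map here is a random stand-in; a real system would take it from a trained network):

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlation of a (C, H, W) feature map.
    Matching Gram matrices between two images transfers texture and
    color statistics while ignoring where things sit on the page."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

style_features = np.random.default_rng(0).normal(size=(64, 32, 32))
g = gram_matrix(style_features)
print(g.shape)  # (64, 64): one correlation per pair of channels
```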

The fake drawing at the beginning of this blog post,
colored and warped in Deep Dream Generator




Double Whammy --
style from the 5th drawing of the "weird set,"
applied to the 7th drawing from the "more realistic set"


***

QUESTIONS

  • If I "trained" these 348 drawings for more than 3000 steps, would it make any difference?  The final FID score was 80.13, while I hear that one should aim lower, for an FID of 30 to 50.

Final FID score of 80.13
after training

  • If I "trained" longer, would the results look more realistic; or at least, would they look even more like my original drawings?  
  • The faces came out way better than those from the training on the shorter drawings, in both the first model and the second model.  If I "trained" this third model more, would the faces come out even better?
  • Perhaps I should have "trained" these drawings less, using only 1500 steps.  Would the resulting drawings look more like a hybrid of the short and long poses in my style?
  • What if I "trained" on a data set of someone else's drawings?  If I "half-trained," using only 1500 steps, would the results look like a hybrid of my drawings and those of someone else?  
  • I was thinking of augmenting my data set with drawings from Egon Schiele, whom I admire.  That is, mixing my drawings with those of Egon Schiele, to make one dataset to upload.  Could I get enough drawings by Egon Schiele to make a good enough model?  And if so, perhaps I could just "half-train" my drawings on the model based on Egon Schiele, to get a nice hybrid style.

  •  While the whole point of this exercise is to crank out new drawings in my style, I was hoping that perhaps this AI program could even improve my drawings in the process.
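For anyone unfamiliar with the FID score mentioned above: it is the Fréchet distance between two Gaussians fitted to image features of the real and generated sets, and lower means more similar. A toy sketch with diagonal covariances (the genuine metric uses full covariance matrices of Inception-network features, so treat this as an illustration of the formula only):

```python
import numpy as np

def fid_diagonal(real, fake):
    """Frechet distance between Gaussians fit to two feature sets,
    simplified to diagonal covariances: squared mean difference plus
    a variance-mismatch term. Zero means identical statistics."""
    mu_r, mu_f = real.mean(axis=0), fake.mean(axis=0)
    var_r, var_f = real.var(axis=0), fake.var(axis=0)
    return float(((mu_r - mu_f) ** 2).sum()
                 + (var_r + var_f - 2 * np.sqrt(var_r * var_f)).sum())

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 16))
print(round(fid_diagonal(real, real), 6))  # 0.0 -- identical sets
# A bigger shift away from the real statistics gives a bigger score:
print(fid_diagonal(real, real + 1.0) > fid_diagonal(real, real + 0.1))  # True
```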

Runway ML -- 2nd "Model" (more short poses)

I uploaded 1456 drawings from this blog to Runway ML, to create a second "model" using the StyleGAN2 program.







Runway ML summarized the whole concept in the above video.  This is a "latent space walk" video -- revealing a continuous morphing between 5 of the computer-generated fake drawings (which are basically 5 points defined in the AI space of the "model").

The 1456 drawings I uploaded to Runway ML are the short poses (usually 20 minute drawings) that I made from 2016 to the present (August 2020).  They are not all of the short poses, but most of the better ones posted on this blog.

I "trained" these drawings on the first AI "model" that I made in Runway ML, based on the 209 high resolution drawings that I uploaded from my book.  That's a bit self-referential, as this second "model" of my drawings was trained on the first "model" of my drawings.


I downloaded 500 computer generated fake drawings from the "model" (Truncation 1, Sampling Distance 1),  and cherry-picked the better ones below:
















The second model gave me more variety and more refined drawings than the first model, and handled the faces a little better.
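The "Truncation 1" setting mentioned above refers to StyleGAN's truncation trick: generated latents can be pulled toward the model's average latent, trading variety for typicality, and a value of 1 means no truncation at all. A minimal sketch of the operation:

```python
import numpy as np

def truncate(w, w_avg, psi=1.0):
    """StyleGAN-style truncation: interpolate each latent toward the
    average latent. psi=1 leaves samples untouched (full variety);
    psi=0 collapses everything to the single average drawing."""
    return w_avg + psi * (w - w_avg)

rng = np.random.default_rng(0)
w_avg = np.zeros(512)          # stand-in for the model's average latent
w = rng.normal(size=512)       # one sampled latent
full = truncate(w, w_avg, psi=1.0)   # unchanged, as with "Truncation 1"
safe = truncate(w, w_avg, psi=0.5)   # halfway to the average
```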


Final FID score of 69.82
after the training was finished


Friday, 28 August 2020

August 25, 2020

The Tuesday Night drawing group of Santa Fe drew together online using Zoom, because of the Covid-19 quarantine.  Someone even participated from Hawaii.










Image 1






Image 2







LAYERED
Two Different Figure Drawings

John Tollett in Santa Fe blended the two colored figures above -- Image 1 and Image 2 -- into the single image below:
"In Photoshop I used Linear Burn mode on the top layer, merged the two layers, then adjusted Saturation and Levels."
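Linear Burn has a simple formula: with channel values scaled to [0, 1], the result is base + blend - 1, clamped at zero. The result darkens wherever either layer is dark, which is why it merges two line drawings so cleanly. A sketch of the blend itself:

```python
import numpy as np

def linear_burn(base, blend):
    """Photoshop's Linear Burn mode for images scaled to [0, 1]:
    base + blend - 1, clamped at 0. White (1.0) in either layer
    leaves the other layer's value unchanged; dark marks accumulate."""
    return np.clip(base + blend - 1.0, 0.0, 1.0)

# White + white stays white; white passes the other layer through;
# two mid-grays go fully dark.
assert linear_burn(1.0, 1.0) == 1.0
assert np.isclose(linear_burn(1.0, 0.4), 0.4)
assert linear_burn(0.3, 0.3) == 0.0
```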


Combined figures
by John Tollett