Monday, 31 March 2025

MARCH SUMMARY -- 2025


5 YEARS AGO -- we went on Covid lockdown soon after March 12, 2020, after the last live figure drawing session I attended.

This March hit like a tornado, twisting me into new directions -- I started oil painting again and continued drawing with colored Sharpies; I hacked a FISKARS FUSE to make a portable printing press; and I started leaning towards generating images with the new multi-modal LLMs (Large Language Models) rather than just sticking with image-dedicated programs like Midjourney and WHISK:

Woman explaining AI ART generation

I started oil painting again in March

Adam set up and installed Stable Diffusion on a supercomputer at Quelab about March 24th, so that we can generate AI ART locally for free, without having to access online programs like Midjourney.  I can even access the Quelab computer remotely, and generate bad AI images from the coffee shop:

The scariest thing was that I could use AI to put words into the mouth of my AI man in the above video -- using HEDRA. The result was just too realistic and convincing.









PENCIL DRAWINGS












PRINTMAKING

It seems like we made no printmaking progress in March, but not for lack of trying.  I did finally hack the FISKARS FUSE into a portable press, by adding a plastic bed custom cut to 12x24 inches by Port Plastics:


I showed off the portable press during the drawing session at FUSE on March 19th:


Ellie scored her own press -- a Vandercook Showcard Press:

Ellie's press should make flawless prints

During the First Friday Art Walk downtown, Karsten Creightney and Adam Berman were printing t-shirts in Karsten's studio with their UNM printmaking students:


Meanwhile I was talking with Kyle about doing big EXQUISITE TESSELLATION prints with a Steamroller during the next Southwest Print Fiesta in Silver City on October 11th, 2025.  I ordered three big 36x36-inch linoleum sheets from Dick Blick for $125 (free shipping):



I printed and hung Lisa Casuas's tessellation print at QUELAB:

I printed Lisa Casuas's tessellation tile (top)
on the portable press
and hung it in the entry way of Quelab

I applied an NFC sticker in the bottom right of Lisa's EXQUISITE TESSELLATION print, on the glass frame, that sends viewers to an AR ROADRUNNER -- which they can view in Augmented Reality by merely hovering their smartphone over the NFC sticker.

by hovering your smartphone
over Lisa Causas' tessellation print

Adric took the screenshot above, and I then animated that image with AI, using the WAN 2.1 program in KREA:

AR screenshot of the AI roadrunner,
further animated with AI


Higher resolution of the 
ROADRUNNER video above



AI ART


This month we started changing the way we generate AI ART, by merely "chatting" with an LLM (which one can even do by voice, using a microphone):


AI WARS

The AI Wars cranked up at the end of the month, between China, Google, OpenAI and others, each racing to outdo the other with a new release.  Specifically, these general LLMs (Large Language Models) can now accept and even generate images, being "multi-modal" (a term and concept that confused me back in January).


Release
to draw red lines on my drawings --
but it has to be downloaded and run on my computer


 First Release
Google AI Studio colored one of my drawings
with the model


Release



Second Release
  • Then Google AI Studio released a more advanced experimental model -- Gemini 2.5 Pro Experimental 03-25.  One can upload images to this model to be examined, but it will not generate images (though it will write code to generate vector images and STL files).
Google AI Studio generated the code for
using the model

"...analytical abstract sketch with Cubist influences. It prioritizes understanding structure and form through fragmented, linear marks over realistic representation."

Release


*One could use any of the above programs for free, at least at the moment.

AI Subscriptions

Meanwhile Midjourney kicked me off once again on March 14th.  Apparently STRIPE had blocked my monthly payment again.  I got back on Midjourney on March 20th.

KREA changed their login window, and offered a 20% discount for yearly subscriptions.  So I paid for the Basic subscription on March 31, 2025.  I had been using WAN 2.1 in KREA recently to generate AI videos:



"LOCAL"
AI ART GENERATION

March 23rd install: Adam installed STABLE DIFFUSION on a new supercomputer with an NVIDIA 3080 video card (10 GB of VRAM) at Quelab, which he had refurbished.  I could then generate bad AI ART from an AI MODEL based on my bad figure drawings:


Adam installed STABLE DIFFUSION in a Docker Container.  Then I could generate AI images "locally," on the Quelab computer, for free, without having to use online services like Midjourney or WHISK.  I can even log in remotely, and generate AI images from the coffee house.
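As a rough sketch of what that remote workflow can look like: the AUTOMATIC1111 Stable Diffusion web UI, when launched with its --api flag, exposes a /sdapi/v1/txt2img endpoint, so a short script can request images from the Quelab machine over the local network.  The address below is a made-up placeholder (not the real Quelab machine), and the exact setup Adam used may differ:

```python
import base64
import json
from urllib import request

# Hypothetical LAN address for the Quelab box -- replace with the real one.
# The AUTOMATIC1111 web UI must be running with --api for this to work.
QUELAB_URL = "http://192.168.1.50:7860"

def txt2img_payload(prompt, steps=20, width=512, height=512):
    """Build the JSON body for the /sdapi/v1/txt2img endpoint."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

def generate(prompt, outfile="out.png"):
    """POST a prompt to the remote Stable Diffusion instance and save the first image."""
    body = json.dumps(txt2img_payload(prompt)).encode()
    req = request.Request(
        QUELAB_URL + "/sdapi/v1/txt2img",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    # The API returns base64-encoded PNGs in result["images"].
    with open(outfile, "wb") as f:
        f.write(base64.b64decode(result["images"][0]))
```

From a laptop at the coffee house, `generate("figure drawing, loose gestural lines")` would then save a freshly generated image, with no online subscription involved.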

Generated on the Quelab computer


MOST importantly, STABLE DIFFUSION and other local programs do not censor like Midjourney does.  So I can upload my figure drawings to the programs installed on a "local" computer and alter them.

PLUS I can use an AI model that I trained on my figure drawings in 2022 -- model_20221213233657.ckpt -- when generating AI images locally with STABLE DIFFUSION:



We had actually done this before in 2023, using a supercomputer that Grant lent to Quelab.

AI UGLY
IS MORE INTERESTING

One of the big problems with AI ART is that all the commercial programs try to shoehorn the results into some kind of flawless Barbie archetype:

The ideal AI ART image,
which all the commercial programs are pushing us into

Case in point: Midjourney is currently polling for "THE MORE BEAUTIFUL" images to tune its upcoming model, in order to please everyone:

"Select the one that you think is more beautiful..."

The "more beautiful" aim just generates the same thing all the time.  Maybe the antidote is to generate UGLY AI ART on purpose, which might be "more interesting."  I can easily generate ugly images using an AI model based on my figure drawings:

Pretty UGLY result -- but interesting,
generated by an AI model based on my figure drawings


I even made an UGLY MOODBOARD in Midjourney, by uploading a lot of the ugly figures I generated in STABLE DIFFUSION, like the one above, to create a MOODBOARD: --profile r34atn6

I can "BLEND" ugly images in Midjourney, and even use the UGLY MOODBOARD, but the results still tend to be too conventional and usually too pretty:

Not so pretty, but not so interesting either --
two ugly images blended in Midjourney

Gemini 2.5 Pro Experimental 03-25 did give the most accurate description of my style when I uploaded one of my drawings to Google AI Studio:





OTHER HIGHLIGHTS





Alabama Public Television filmed David Brower during our drawing session at FUSION on March 19, 2025.  The episode should come out this fall, and also might be aired on ¡COLORES!, the KNME-TV arts program from UNM in Albuquerque: