Monday, 31 March 2025

MARCH SUMMARY -- 2025


5 YEARS AGO -- we went into Covid lockdown soon after March 12, 2020, just after the last live figure drawing session I attended.

This March hit like a tornado, twisting me in new directions -- I started oil painting again and continued drawing with colored Sharpies; I hacked a FISKARS FUSE to make a portable printing press; and I started leaning towards generating images with the new multi-modal LLMs (Large Language Models) rather than just sticking with dedicated image programs like Midjourney and WHISK:

Woman explaining AI ART generation

I started making
bad oil paintings again

Adam set up Stable Diffusion on a supercomputer at Quelab about March 24th, so that we can generate AI ART locally for free, without having to access online programs like Midjourney.  I can even access the Quelab computer remotely, and generate bad AI images from the coffee shop:

Generated ugly FAKE AI ART
on a supercomputer at Quelab

However, the real blow to my ego came before then, when AI criticized my AI ART:


The scariest thing was that I could use AI to put words into the mouth of my AI man in the above video -- using HEDRA. The result was just too realistic and convincing.









PENCIL DRAWINGS












PRINTMAKING










Prints hanging at QUELAB:

I printed Lisa Casuas' tessellation tile
on the portable press
and hung it in the entryway of Quelab

I applied an NFC sticker in the bottom right of Lisa's EXQUISITE TESSELLATION print, on the glass frame, that sends viewers to an AR ROADRUNNER -- which they can view in Augmented Reality by merely hovering their smartphone over the NFC sticker.

by hovering your smartphone
over Lisa Casuas' tessellation print

Adric took the screenshot above, and I then animated that image with AI, using the WAN 2.1 program in KREA:
AR screenshot of the AI roadrunner,
further animated with AI


Higher resolution of the 
ROADRUNNER video above



AI ART

This month we started changing the way we generate AI ART, by merely "chatting" with an LLM (which one can even do by voice, using a microphone):


AI WARS

The AI Wars cranked up at the end of the month, between China, Google, OpenAI and others, each racing to outdo the other with a new release.  Specifically, these general LLMs (Large Language Models) can now accept and even generate images, being "multi-modal" (a term and concept that confused me back in January).


*One could use any of the above programs for free, at least at the moment.

AI Subscriptions

Meanwhile Midjourney kicked me off once again on March 14th.  Apparently STRIPE had blocked my monthly payment again.  I got back on Midjourney on March 20th.

KREA changed their login window, and offered a 20% discount on yearly subscriptions.  So I paid for the Basic subscription on March 31, 2025.  I had been using WAN 2.1 in KREA recently to generate AI videos:



"LOCAL"
AI ART GENERATION

March 23rd install: Adam uploaded STABLE DIFFUSION to a new supercomputer at Quelab, which he had refurbished, with an NVIDIA 3080 video card (10 GB of VRAM).  I could then generate bad AI ART from an AI MODEL based on my bad figure drawings:


Adam installed STABLE DIFFUSION in a Docker container.  Then I could generate AI images "locally," on the Quelab computer, for free, without having to go to Internet services like Midjourney or WHISK.  I can even log in remotely, and generate AI images from the coffee house.
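For the curious: remote access like this can be done with a plain SSH port forward. A minimal sketch, assuming the common AUTOMATIC1111 WebUI front end on its default port 7860; the user name and host name here are hypothetical placeholders, not the actual Quelab machine:

```shell
# Forward the Stable Diffusion WebUI's default port (7860) from the
# Quelab box to this laptop over SSH.  "user" and "quelab-box" are
# made-up placeholders for the real login and hostname.
ssh -N -L 7860:localhost:7860 user@quelab-box

# While that tunnel is open, browse to http://localhost:7860
# from the coffee shop and generate images as if sitting at the machine.
```

The -N flag keeps the connection open for forwarding only, without starting a remote shell.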

Generated on the Quelab computer


MOST importantly, STABLE DIFFUSION and other local programs do not censor the way Midjourney does.  So I can upload my figure drawings to the programs installed on a "local" computer and alter them there.

PLUS I can use the AI model that I trained on my figure drawings in 2022 -- model_20221213233657.ckpt -- when generating images locally with STABLE DIFFUSION:
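Wiring a custom checkpoint into a local Stable Diffusion install is mostly a matter of dropping the file in the right folder. A minimal sketch, assuming the stock AUTOMATIC1111 WebUI directory layout (the install path is an assumption; the checkpoint filename is the one from this post):

```shell
# Copy the 2022 figure-drawing checkpoint into the folder where the
# AUTOMATIC1111 WebUI looks for Stable Diffusion models.
cp model_20221213233657.ckpt stable-diffusion-webui/models/Stable-diffusion/

# Restart the WebUI (or hit the refresh icon next to the checkpoint
# dropdown), then select the model before generating.
```

After that, every generation runs against the custom model instead of the base Stable Diffusion weights.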



We had actually done this before in 2023, using a supercomputer that Grant lent to Quelab.

AI UGLY
IS MORE INTERESTING

One of the big problems with AI ART is that all the commercial programs try to shoehorn the results into some kind of flawless Barbie archetype:


The ideal AI ART image,
which all the commercial programs are aiming for

Midjourney is currently polling users for the "MORE BEAUTIFUL" images in order to tune its upcoming model to please everyone:

"Select the one that you think is more beautiful..."

The "more beautiful" aim just generates the same thing all the time. Maybe the antidote is to generate UGLY AI ART on purpose, which might be "more interesting." I can easily generate ugly images using an AI model based on my figure drawings:

Pretty ugly result -- but interesting,
generated by an AI model based on my figure drawings


I even made an UGLY model in Midjourney, by uploading a lot of the ugly figures I generated in STABLE DIFFUSION, like the one above, to create a MOODBOARD: --profile r34atn6

I can "BLEND" ugly images in Midjourney, and even use the UGLY MOODBOARD, but the results still tend to be too conventional and usually too pretty:

Not so pretty, but not so interesting either --
two ugly images blended in Midjourney







This seems like it belongs on a marquee somewhere

