5 YEARS AGO -- We went into Covid lockdown soon after March 12, 2020, the date of the last live figure drawing session I attended.
Woman explaining AI ART generation
I started making
bad oil paintings again
Adam set up and installed Stable Diffusion on a supercomputer at Quelab around March 24th, so that we can generate AI ART locally for free, without having to access online programs like Midjourney. I can even access the Quelab computer remotely and generate bad AI images from the coffee shop:
Generated ugly FAKE AI ART
on a supercomputer at Quelab
However, the real blow to my ego came before then, when AI criticized my AI ART:
The scariest thing was that I could use AI -- HEDRA -- to put words into the mouth of the AI man in the video above. The result was just too realistic and convincing.
PENCIL DRAWINGS
PRINTMAKING
Prints hanging at QUELAB:
I printed Lisa Causas' tessellation tile
on the portable press
and hung it in the entry way of Quelab
by hovering your smartphone
over Lisa Causas' tessellation print
AR screenshot of the AI roadrunner,
further animated with AI
Higher resolution of the
ROADRUNNER video above
AI ART
This month we started changing the way we generate AI ART: now we merely "chat" with an LLM (which one can even do by voice, using a microphone):
AI WARS
- MARCH 6th release: QwQ-32B, a small open-source AI model from China, was released, and I used it to write a simple AI ART program -- LINES
- MARCH 12th release: Google AI Studio first let us generate and alter images, with its "Gemini 2.0 Flash (image generation) Experimental" model option.
- MARCH 25th release: Then Google AI Studio released a more advanced experimental model -- Gemini 2.5 Pro Experimental 03-25. One can upload images to this model to be examined, but it will not generate images (though it will write code to generate vector images and STL files).
- MARCH 25th release: Then, hours later, OpenAI allowed users to generate images with ChatGPT.
*One could use any of the above programs for free, at least at the moment.
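The post doesn't include the source of LINES, but a program in that spirit could be just a few lines of Python. Here is a hypothetical sketch (the function name and parameters are my own, not from the actual program) that scatters random line segments into an SVG file:

```python
import random

def random_lines_svg(n=40, size=512, seed=0):
    """Emit an SVG drawing of n random line segments.

    A guess at what a "LINES"-style generative sketch might do;
    the real program generated by QwQ-32B is not shown in the post.
    """
    rnd = random.Random(seed)  # fixed seed so the drawing is repeatable
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
    for _ in range(n):
        # pick two random endpoints inside the canvas
        x1, y1, x2, y2 = (rnd.randint(0, size) for _ in range(4))
        parts.append(f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" stroke="black"/>')
    parts.append("</svg>")
    return "\n".join(parts)

svg = random_lines_svg()
print(svg[:80])  # the SVG text can be saved to a .svg file and opened in a browser
```

Changing the seed gives a new drawing each time, which is roughly the appeal of these tiny chat-written art programs.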
AI Subscriptions
Meanwhile, Midjourney kicked me off once again on March 14th. Apparently STRIPE had blocked my monthly payment again. I got back on Midjourney on March 20th.
KREA changed their log-in window and offered a 20% discount for yearly subscriptions, so I paid for the Basic subscription on March 31, 2025. I had been using WAN 2.1 in KREA recently to generate AI videos:
"LOCAL"
AI ART GENERATION
March 23rd install: Adam installed STABLE DIFFUSION on a new supercomputer he had refurbished at Quelab, with an NVIDIA 3080 video card (10 GB of VRAM). I could then generate bad AI ART from an AI MODEL based on my bad figure drawings:
Adam installed STABLE DIFFUSION in a Docker container. I could then generate AI images "locally" on the Quelab computer for free, without having to go to Internet services like Midjourney or WHISK. I can even log in remotely and generate AI images from the coffee house.
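Remote generation like this is possible because the common Stable Diffusion web UIs (such as AUTOMATIC1111's, which exposes a `/sdapi/v1/txt2img` HTTP endpoint) can be reached over the network. As a rough sketch, assuming that kind of setup (the host address, port, and prompt below are made up, not from the post):

```python
import json

# Hypothetical LAN address of the Quelab machine; 7860 is the
# usual default port for Stable Diffusion web UIs (assumption).
QUELAB_HOST = "http://192.168.0.50:7860"

def build_txt2img_payload(prompt, steps=20, width=512, height=512):
    """Build the JSON body for an AUTOMATIC1111-style txt2img request."""
    return {
        "prompt": prompt,
        "steps": steps,
        "width": width,
        "height": height,
    }

payload = build_txt2img_payload("charcoal figure study of a seated model")
print(json.dumps(payload, indent=2))

# To actually send it (needs the `requests` package and a running server):
#   import requests
#   r = requests.post(f"{QUELAB_HOST}/sdapi/v1/txt2img", json=payload)
#   image_b64 = r.json()["images"][0]   # base64-encoded PNG
```

From a laptop at the coffee house, the same request could go through an SSH tunnel or VPN into the Quelab network.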
Generated on the Quelab computer
using STABLE DIFFUSION
MOST importantly, STABLE DIFFUSION and other local programs do not censor the way Midjourney does, so I can upload my figure drawings to the programs installed on a "local" computer and alter them.
PLUS I can use an AI model that I trained on my figure drawings in 2022 -- model_20221213233657.ckpt -- when generating AI ART locally with STABLE DIFFUSION:
We had actually done this before in 2023, using a supercomputer that Grant lent to Quelab.
AI UGLY
IS MORE INTERESTING
One of the big problems with AI ART is that all the commercial programs try to shoehorn the results into some kind of flawless Barbie archetype:
The ideal AI ART image,
which all the commercial programs are aiming for
Midjourney is currently polling users for "the more beautiful" images in order to tune its upcoming model and please everyone:
The "more beautiful" aim just generates the same thing all the time. Maybe the antidote is to generate UGLY AI ART on purpose, which might be "more interesting." I can easily generate ugly images using an AI model based on my figure drawings:
Pretty ugly result -- but interesting,
generated by an AI model based on my figure drawings
I even made an UGLY model in Midjourney, by uploading a lot of the ugly figures I generated in STABLE DIFFUSION, like the one above, to create a MOODBOARD: --profile r34atn6
I can "BLEND" ugly images in Midjourney, and even use the UGLY MOODBOARD, but the results still tend to be too conventional and usually too pretty:
Not so pretty, but not so interesting either --
two ugly images blended in Midjourney
This seems like it belongs on a marquee somewhere