Wednesday, 2 April 2025

3D from Gemini 2.5

I used Google AI Studio to generate 3D objects from 2D images -- using the Gemini 2.5 Pro Experimental 03-25 model -- by generating code for OpenSCAD (though the results are not always pretty):

BOCCIONI

DOWNLOAD: Boccioni3D.GLB (440.36 KB)


After distorting Umberto Boccioni's sculpture with SculptGL and Midjourney, I uploaded the 2D result to Google AI Studio and generated the OpenSCAD code with the Gemini 2.5 Pro Experimental 03-25 model:

"Write OpenSCAD code for this figure.
The model will be 3D printed."


Then I pasted that code into the OpenSCAD Playground online and generated a 3D GLB object:
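For anyone curious what this kind of generated code looks like: LLMs usually build organic figures in OpenSCAD out of hulled primitives. This is a simplified, hypothetical sketch in that style -- not the code Gemini actually produced for these models:

```openscad
// Hypothetical sketch of LLM-style OpenSCAD output --
// a rough standing figure built from hulled spheres.
$fn = 48;  // smooth curved surfaces for 3D printing

module torso() {
    hull() {
        translate([0, 0, 60]) sphere(r = 14);  // chest
        translate([0, 0, 35]) sphere(r = 10);  // waist
    }
}

module leg(x) {
    hull() {
        translate([x, 0, 35]) sphere(r = 8);   // hip
        translate([x, 0, 0])  sphere(r = 6);   // foot
    }
}

union() {
    torso();
    leg(-8);
    leg(8);
    translate([0, 0, 80]) sphere(r = 9);       // head
}
```

Code like this renders to a single watertight solid, which is why it can be exported from the OpenSCAD Playground as a printable GLB.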


The result was not so pretty, but it was amazing that I could generate a 3D printable object from a 2D image just by using an LLM (Large Language Model).

REDMAN

DOWNLOAD: Redman.GLB (628.21 KB)







FROG MODEL

DOWNLOAD: Frog3D.GLB (146.44 KB)




SITTING

DOWNLOAD: Sitting.GLB (49.03 KB)









I uploaded an image I drew in Blender (which was originally 3D)




Tuesday, 1 April 2025

April 1, 2025

Drawn at Art Buddies in Albuquerque:








XnView/Image/Swap/RGB>BRG


XnView/Image/Swap/RGB>BRG



XnView/Image/Swap/RGB>GBR



XnView/Image/Swap/RGB>BRG



XnView/Image/Swap/RGB>RBG
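The XnView swap menu just reorders an image's color channels, pixel by pixel. The same effect can be sketched in a few lines of Python (a generic illustration, not tied to XnView itself):

```python
def swap_channels(pixel, order="BRG"):
    """Reorder an (R, G, B) tuple. For example, order 'BRG' makes
    the new pixel (old_B, old_R, old_G), mirroring an RGB>BRG swap."""
    r, g, b = pixel
    channels = {"R": r, "G": g, "B": b}
    return tuple(channels[c] for c in order)

# RGB>GBR sends pure red to pure blue:
print(swap_channels((255, 0, 0), "GBR"))  # -> (0, 0, 255)
# RGB>BRG sends pure red to pure green:
print(swap_channels((255, 0, 0), "BRG"))  # -> (0, 255, 0)
```

Applying this to every pixel of a drawing is what gives the swapped versions their shifted, off-key color schemes.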




Monday, 31 March 2025

MARCH SUMMARY -- 2025


5 YEARS AGO -- we went on Covid lockdown soon after March 12, 2020, after the last live figure drawing session I attended.

This March hit like a tornado, twisting me into new directions -- I started oil painting again and continued drawing with colored Sharpies; I hacked a FISKARS FUSE to make a portable printing press; and I started leaning toward generating images with the new multi-modal LLMs (Large Language Models) rather than just sticking with dedicated image programs like Midjourney and WHISK:

Woman explaining AI ART generation

I started making
bad oil paintings again

Adam set up and installed Stable Diffusion on a supercomputer at Quelab around March 24th, so that we can generate AI ART locally for free, without having to access online programs like Midjourney. I can even access the Quelab computer remotely and generate bad AI images from the coffee shop:

Generated ugly FAKE AI ART
on a supercomputer at Quelab

However, the real blow to my ego came before then, when AI criticized my AI ART:


The scariest thing was that I could use AI to put words into the mouth of my AI man in the above video -- using HEDRA. The result was just too realistic and convincing.









PENCIL DRAWINGS












PRINTMAKING










I printed Lisa Casuas' tessellation tile
on the portable press
and hung it in the entryway of Quelab




AI ART

This month we started changing the way we generate AI ART, by merely "chatting" with an LLM (which one can even do by voice, using a microphone):


AI WARS

The AI Wars cranked up at the end of the month, between China, Google, OpenAI and others, each racing to outdo the other with a new release.  Specifically, these general LLMs (Large Language Models) can now accept and even generate images, being "multi-modal" (a term and concept that confused me back in January).


*One could use any of the above programs for free, at least at the moment.

AI Subscriptions

Meanwhile, Midjourney kicked me off once again on March 14th.  Apparently STRIPE had blocked my monthly payment again.  I got back on Midjourney on March 20th.

KREA changed their login window and offered a 20% discount for yearly subscriptions, so I paid for the Basic subscription on March 31, 2025.  I had been using WAN 2.1 in KREA recently to generate AI videos:



"LOCAL"
AI ART GENERATION

March 23rd install: Adam installed STABLE DIFFUSION on a refurbished supercomputer at Quelab with an NVIDIA 3080 video card (10 GB of VRAM).  I could then generate bad AI ART from an AI MODEL based on my bad figure drawings:


Adam installed STABLE DIFFUSION in a Docker Container.  Then I could generate AI images "locally," on the Quelab computer, for free, without having to go to Internet services like Midjourney or WHISK.  I can even log in remotely and generate AI images from the coffee house.
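I don't know the details of Adam's container setup; as a rough sketch, a GPU-enabled Stable Diffusion container is typically started with something like this (the image name and model path below are placeholders, not the actual Quelab configuration):

```shell
# Hypothetical sketch -- not Adam's actual Quelab setup.
# --gpus all exposes the NVIDIA card to the container,
# -p publishes the web UI port (which is what makes remote
#    log-ins from the coffee house possible),
# -v mounts a local folder of .ckpt model files into the container.
docker run -d --gpus all \
  -p 7860:7860 \
  -v /path/to/models:/models \
  some-stable-diffusion-webui-image
```

The mounted models folder is also how a custom checkpoint, like the one trained on my figure drawings, becomes available inside the container.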

Generated on the Quelab computer


MOST importantly, STABLE DIFFUSION and other local programs do not censor like Midjourney does.  So I can upload and alter my figure drawings to the programs installed on a "local" computer.

PLUS I can use an AI Model that I trained on my figure drawings in 2022 -- model_20221213233657.ckpt -- when generating AI locally with STABLE DIFFUSION:



We had actually done this before in 2023, using a supercomputer that Grant lent to Quelab.

AI UGLY
IS MORE INTERESTING

One of the big problems with AI ART is that all the commercial programs try to shoehorn the results into some kind of flawless Barbie archetype:


The ideal AI ART image,
which all the commercial programs are aiming for

Midjourney is currently polling for "THE MORE BEAUTIFUL" images to tune its upcoming model, in order to please everyone:

"Select the one that you think is more beautiful..."

The "more beautiful" aim just generates the same thing all the time. Maybe the antidote is to generate UGLY AI ART on purpose, which might be "more interesting." I can easily generate ugly using an AI model based on my figure drawings:

Pretty ugly result -- but interesting,
generated by an AI model based on my figure drawings


I even made an UGLY model in Midjourney by uploading a lot of the ugly figures I generated in STABLE DIFFUSION, like the one above, to create a MOODBOARD: --profile r34atn6

I can "BLEND"  ugly images in Midjourney, and even use the UGLY MOODBOARD, but the results still tend to be too conventional and usually too pretty:

Not so pretty, but not so interesting either --
two ugly images blended in Midjourney


UGLY 











































