Tuesday, 30 December 2025

AI ART EVOLUTION -- 2025

This is how AI unfolded and evolved throughout 2025:
* refers to the referenced Blog Post

ARTIFICIAL INTELLIGENCE continued blazing like the Wild West in 2025, as the Chinese stepped up in January and offered many Open Source AI programs for free, like DEEP SEEK. Amid the competition among AI companies throughout the year, I continued to push my drawings through AI programs for free (mostly) -- and liked what KREA (Flux model) did for my figures so much that I started paying for it. But does AI really improve my ART?


"But there's no blood beneath your art
 No trembling hands, no beating heart"

DEEP SEEK

The AI year started with a bang when the Chinese company DEEP SEEK challenged OpenAI and Google in January:




RUN DEEP SEEK
LOCALLY

I first mentioned DEEP SEEK R1 on January 21, 2025 -- noting that it can be run on your computer locally *
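Roughly, running it locally comes down to something like this -- a minimal sketch, assuming the free Ollama app and its Python client (the model tag and prompt are just examples):

# Minimal sketch: chat with a locally running DeepSeek R1 model via Ollama.
# First install Ollama and pull a distilled model, e.g.:  ollama pull deepseek-r1:7b
# Then:  pip install ollama
import ollama

response = ollama.chat(
    model="deepseek-r1:7b",  # example tag -- larger and smaller variants exist
    messages=[{"role": "user", "content": "Suggest three ideas for a generative line drawing."}],
)
print(response["message"]["content"])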
CODING
  • Moreover, I got DEEP SEEK to write an art program for me in HTML -- ANIMATION *
DEEP SEEK wrote a program for me:

I also got DEEP SEEK R1 to write another program for me, via (Hugging Face) anychat (now anycoder) -- LINES: *

DEEP SEEK wrote another program for me --


JANUS PRO from DEEP SEEK *

MULTI-MODAL

DEEP SEEK was pushing "Multi-Modal," a term that confused me for a while -- however, it just means that the AI program understands both text and images. So I uploaded images and asked DEEP SEEK about them:

  • Multimodal Understanding -- screenshots of me asking "What is this?" about uploaded images
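In code, a "multi-modal" request is just an image sent along with the text question. DEEP SEEK's own multimodal model (Janus Pro) is easiest to try in its web demo, but the same idea with a locally run vision model, again through Ollama, looks roughly like this (a sketch -- llava is just a stand-in multimodal model, not DeepSeek itself):

# Minimal sketch: a multimodal request -- one image plus a text question.
# Assumes Ollama with a vision-capable model pulled, e.g.:  ollama pull llava
import ollama

response = ollama.chat(
    model="llava",  # stand-in vision model
    messages=[{
        "role": "user",
        "content": "What is this?",
        "images": ["my_drawing.png"],  # path to the image being "uploaded"
    }],
)
print(response["message"]["content"])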





QWEN

QWEN (from Alibaba) is another competing Chinese AI effort that is also releasing OPEN SOURCE AI programs:


LoRA training in Qwen with as little as one image *



MIDJOURNEY

MIDJOURNEY started off the year with a RELAX-ATHON, allowing unlimited fast generations from December 18th, 2024 to at least January 27th, 2025. However, they were also censoring severely. *

Midjourney released a new model V7  to everyone on March 31st *
  • They actually released it on April 3, 2025 -- but you had to "personalize" yourself to use it, by rating images *

Midjourney kept pushing for me to "rate" images, and "personalize," which I refused to do.  I did not want to be locked into a "bubble" that just generates images that I like, but shows me nothing outside my imagination:
Midjourney gave everyone access to the V7 model on June 16, 2025, without requiring that they "rate" or "personalize" any images *

Midjourney added AI VIDEO generation on June 18th, 2025 *


Then they allowed users to add a starting frame, and an ending frame, on July 24, 2025 *

PAYMENT PROBLEMS

Midjourney continued to reject my payments; I think this was a STRIPE issue -- though to Midjourney's credit, they would always resolve the situation when I wrote back to them:
  • February 2nd -- they rejected my payment *
  • February 14th -- they accepted my payment * 
  • March 14th - they rejected my payment *
  • March 20th -- they accepted my payment *
  • September 20th -- they rejected my payment, but accepted it on September 26th *



KREA

I subscribed to KREA on March 31, 2025, opting for the BASIC plan at $10 a month *


KREA.AI came out with a "chat" feature (based on DEEP SEEK?) on February 7, 2025 *

KREA changed their log-in interface on March 31, 2025 *

KREA gave everyone access to their new image generation model -- KREA 1 -- on June 13th (or 17th for the free plans) 2025 *

KREA added WAN 2.2 image generator *




MINIMAX M2.1 -- an AI model that will run on an NVIDIA Spark *


OTHER AI IMAGE PROGRAMS



(GitHub) styles2paint *

FLUX 2 came out on November 20, 2025 (but I wasn't able to use it -- Open Source?) *

(GitHub) Z IMAGE (can be run locally)  *
SEGMENT ANYTHING by META *
  • Images, Video, create 3D objects and scenes



STABLE DIFFUSION
REVIVED

On March 23, 2025, Adam loaded up STABLE DIFFUSION again on a local computer at Quelab, in a DOCKER container, so that I could generate AI ART (like we did in 2023): *
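I don't remember exactly what was inside the container, but stripped down, generating an image with a local STABLE DIFFUSION checkpoint looks something like this (a minimal sketch using the Hugging Face diffusers library; the checkpoint and prompt are just examples):

# Minimal sketch: generate one image with a local Stable Diffusion checkpoint.
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # example checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                    # needs an NVIDIA GPU; use "cpu" (slowly) otherwise

image = pipe("ink line drawing of a dancing figure, high contrast").images[0]
image.save("dancing_figure.png")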


GOOGLE

FEBRUARY
On February 5, Google released its Gemini 2.0 Flash in GOOGLE AI STUDIO. *

MARCH

I uploaded my YouTube Video -- SPLASH -- Liquid Sculptures -- to GOOGLE AI STUDIO and asked "What is this video about?": *

 

Gemini 2.0 Flash Experimental was added to GOOGLE AI STUDIO on March 14th; it colorized the image I uploaded and changed those colors as I chatted with the program: *




On March 14, 2025, GOOGLE AI STUDIO came out with a Multi-Modal feature (using the "Gemini 2.0 Flash Experimental" option) that let us upload and alter images simply by having a conversation with it:
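Behind the chat window, this is just a request that sends an image along with the text. Through the Gemini API it looks roughly like this (a sketch using Google's google-genai Python library; the API key is a placeholder, and the model id is an example image-capable model, not the exact experimental one from March):

# Minimal sketch: ask Gemini to alter an uploaded drawing and save the result.
# pip install google-genai pillow
from io import BytesIO
from PIL import Image
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")            # placeholder key

drawing = Image.open("my_drawing.png")
response = client.models.generate_content(
    model="gemini-2.0-flash-preview-image-generation",   # example image-capable model id
    contents=[drawing, "Colorize this line drawing in muted earth tones."],
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:                      # the returned image
        Image.open(BytesIO(part.inline_data.data)).save("colorized.png")
    elif part.text is not None:
        print(part.text)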


I asked GOOGLE AI STUDIO -- "Can you tell me what style of drawing KRRRL uses?" -- and it got it wrong, saying that I use "vibrant colors":
Gemini 2.0 Flash (Image Generation) Experimental generated raster images -- and is no longer available *



Gemini 2.5 Pro Experimental 03-25 -- released on March 25, 2025: *
    • generates vector images *
    • generates OpenScad code to make 3D images *
    • gives a better description of my drawing


I asked Gemini 2.5 Pro Preview 05-06 to animate one of my drawings in 3D style, and it gave me Three.js animation code, which I used to make a blog page: *



Google came out with NANO BANANA (Gemini 2.5 Flash Image) on  August 26, 2025, and I first mentioned it on August 30, 2025 *

NOVEMBER

Google released GEMINI 3 on November 18, 2025 *

Google released NANO BANANA PRO in GOOGLE AI STUDIO on November 19, 2025 *

DECEMBER

I wrote art programs in Google GEMINI 3 *
GOOGLE SEARCH even writes programs *
GEMINI 3 FLASH came out on December 17, 2025 *




OPEN AI


As of March 25, 2025, ChatGPT creates images simply through chat, even on the free option (though I had to log in) *



ChatGPT 5 came out on August 5, 2025 *

SORA2 came out on September 30, 2025  *
  • I got access -- krrrl -- on October 4, 2025 (for free) *
ChatGPT 5.1 launched on November 12, 2025 *

GPT 5.2 came out on December 11, 2025 *

I uploaded a 3D file to GPT and asked it to write code to animate it in BLENDER *
CHAT GPT came out with IMAGE 1.5 on December 16, 2025 *

GPT Image 1.5 combines images
like NANO BANANA PRO






3D

HUNYUAN 3D came out about January 25, 2025


Touch Designer - 3D projection software *

  • Apose (glTF file compression online program) *


BOTTANGO is free animatronics software *

(GitHub) pinokiofactory/Hunyuan3d-2-lowvram -- to be run locally *

(Hugging Space) ashawkey/LGM -- text-to-3D with gaussian splatting *

(Hugging Space) TRELLIS *

As of March 25, 2025, Google would generate OpenScad code in GEMINI 2.5 -- turning 2D images into 3D objects: *
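The same kind of request works from code: send the drawing, ask for OpenSCAD, and save whatever comes back as a .scad file (a sketch with the google-genai library again; the model id and prompt are just examples, and the returned code usually needs hand-editing):

# Minimal sketch: send a drawing to Gemini and save the OpenSCAD code it returns.
# pip install google-genai pillow
from PIL import Image
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")    # placeholder key
drawing = Image.open("my_drawing.png")

response = client.models.generate_content(
    model="gemini-2.5-pro",                      # example model id
    contents=[drawing,
              "Write OpenSCAD code that extrudes the main shapes of this 2D drawing "
              "into a simple 3D object. Return only the code."],
)

with open("drawing_3d.scad", "w") as f:          # open this file in OpenSCAD
    f.write(response.text)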


(Hugging Face) Stable-X/Hi3DGen -- 2D to 3D *


Using MCP (Model Context Protocol), I merely "chatted" with Claude Desktop to get BLENDER to generate a 3D scene: *


KREA added STAGE so that users can generate 3D scenes from images and text *





(Hugging Face) Sparc3D *



MARBLE (from World Labs) creates a 3D world, as a 3D Gaussian Splat, from an image that one uploads *

Generated in MARBLE (World Labs)

(Hugging Face) PartCrafter *

Google's GEMINI 3 generates even better OpenScad code to make 3D objects: *


I uploaded a 3D file to GPT and asked it to write code to animate it in BLENDER *
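The blog post has the code GPT actually wrote; as a flavor of what such a Blender script looks like, here is a minimal sketch that imports a model and keyframes one full rotation (run it from Blender's Scripting workspace; the file path is a placeholder and the import operator depends on the file type and Blender version):

# Minimal sketch: import a 3D model into Blender and animate a full spin.
# Run inside Blender's Scripting workspace (its bundled Python provides bpy).
import math
import bpy

bpy.ops.wm.obj_import(filepath="/path/to/model.obj")   # Blender 4.x OBJ importer
obj = bpy.context.selected_objects[0]                   # the newly imported object

scene = bpy.context.scene
scene.frame_start = 1
scene.frame_end = 120

obj.rotation_euler = (0.0, 0.0, 0.0)
obj.keyframe_insert(data_path="rotation_euler", frame=1)

obj.rotation_euler = (0.0, 0.0, math.tau)               # one full turn around Z
obj.keyframe_insert(data_path="rotation_euler", frame=120)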
TRELLIS 2 came out in December  *



 VIDEO


WAN 2.1 from Alibaba is open source and can be run locally *

Holly sent a DOCKER container on April 12th with a WAN 2.1 program, which we downloaded and installed at Quelab and used to generate AI Videos: *



I started generating AI video with Google's VEO2 on April 21, 2025: *


KREA has a VIDEO RESTYLE feature *

I found another music video based on my drawings, from an AI model I trained in 2020:


SORA2 came out on September 30, 2025 *
  • I got access on October 4, 2025 (for free) *

LUMA RAY (altering videos with AI) is a paid feature



AUGMENTED REALITY



At the CURRENTS NEW MEDIA exhibition in Santa Fe, CORRINA ESPINOSA had an AR exhibition where the viewer just scanned one QR Code (without downloading an app) and could see many AR images from many framed prints. Each print acted as a "target" to conjure a distinct AR experience: *

CORRINA ESPINOSA created a multi-AR experience
probably by using A-FRAME and MINDAR

MINDAR -- I could not figure out how to get it to work *




Programs that DIED:

HAIPER ended on February 1, 2025:







AI MUSIC

RIFFUSION

RIFFUSION allowed free unlimited AI MUSIC generations during their BETA phase: *
RIFFUSION changed their name to PRODUCER by August *
There is an AI radio station -- HIT RADIO AI -- that plays only music generated by AI *

DEEP DREAM GENERATOR came out with a music generator in Q3 (third quarter 2025) using Lyria *

*****

 AI REVIEW by MONTH 2025

