Thursday, 18 December 2025

Quick QWEN Training -- AI ART Models

TRAINING YOUR OWN AI MODELS has to be the future of AI ART. The current AI ART models get stale after a while -- even those in Midjourney -- generating in the same STYLE despite churning out a myriad of different images.




So I was excited to discover an AI ART training program by QWEN that we can run locally on our computer -- DiffSynth-Studio/Qwen-Image-i2L -- which can quickly train a LoRA from just a few images (even as few as one).

I then uploaded five of my best drawings of 2025 to the Hugging Face program -- AiSudo/Qwen-Image-to-LoRA -- and generated a LoRA model that was huge: 281.43 MB:

I generated an AI ART model
from five of my best drawings

These are the 5 drawings I uploaded

After training I ran out of free Hugging Face computing time, and could not generate an image in the Hugging Face program.

I then renamed the LoRA -- to "Qrrrl.safetensors" -- and uploaded it to the Stable Diffusion program running locally on a computer at Quelab.

This was not as easy as just sticking the LoRA file in the Lora folder, as we were running Stable Diffusion in a Docker container on a Linux machine.  Apparently the folder structure there is different from that on a Windows computer.
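For anyone repeating this, the copy can be scripted. Here is a minimal Python sketch, assuming an AUTOMATIC1111-style layout where LoRA files live under models/Lora inside the WebUI directory -- inside a Docker container the actual root path depends entirely on how the volumes are mounted, so the example path below is hypothetical:

```python
import shutil
from pathlib import Path

def install_lora(lora_file: str, webui_root: str) -> Path:
    """Copy a .safetensors LoRA into the WebUI's models/Lora folder.

    webui_root is the Stable Diffusion WebUI directory. In a Docker
    container this is wherever the volume is mounted, not the
    Windows-style path you might expect.
    """
    dest_dir = Path(webui_root) / "models" / "Lora"
    dest_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    dest = dest_dir / Path(lora_file).name
    shutil.copy2(lora_file, dest)  # preserves file metadata
    return dest

# Hypothetical container path -- adjust to your Docker volume layout:
# install_lora("Qrrrl.safetensors", "/app/stable-diffusion-webui")
```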




FIRST TRY

Then I used the Qrrrl LoRA (strength of "2") with the previous model based on my drawings -- krrrl.ckpt [a07c787ed6] -- to generate figures from a TEXT prompt: "a drawing of a clothed woman sitting on a stool" -- in Stable Diffusion:

One of the individual figures
generated with both a model and a LoRA based on my drawings


prompt: "a drawing of a clothed woman 
sitting on a stool"

  • I could only use the LoRA to generate images from a text prompt -- I could not upload one of my drawings and add the LoRA to generate an image.
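For reference, in the AUTOMATIC1111 WebUI a LoRA and its strength are applied through a tag embedded in the text prompt itself, in the form <lora:name:weight>. A small sketch of building such a prompt (the strength of 2 matches what I used above; the helper function is my own, not part of the WebUI):

```python
def with_lora(prompt: str, lora_name: str, strength: float = 1.0) -> str:
    # Append an AUTOMATIC1111-style LoRA tag: <lora:name:strength>
    # Render whole-number strengths without a decimal point (2, not 2.0).
    weight = int(strength) if float(strength).is_integer() else strength
    return f"{prompt} <lora:{lora_name}:{weight}>"

print(with_lora("a drawing of a clothed woman sitting on a stool", "Qrrrl", 2))
# -> a drawing of a clothed woman sitting on a stool <lora:Qrrrl:2>
```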

The resulting figures were not entirely "clothed,"
like I asked for in the prompt

An animation of the 8 generations
made with the Qwen LoRA


I am not sure that the LoRA had much influence on the images that I generated with the main AI model based on my drawings.  I was hoping that by compounding both an AI model and a LoRA, both based on my drawings, Stable Diffusion would generate FAKE Figures that looked more like my drawings.



SECOND TRY

I tried again, using a different base model, only partially based on my drawings -- xsmerge_v31InSafetensor.safetensors [6eb26bbaaf] -- and added the Qrrrl LoRA to the same text prompt: "a drawing of a clothed woman sitting on a stool." The results were very different, of course:

Using the Qrrrl LoRA 
with a different and more robust base model
trained only partially on my drawings

The 8 figures generated from a different model
and the Qrrrl LoRA
were all clothed this time

Animation of the images generated with the Qrrrl LoRA
and a hybrid AI model partially based on my drawings

The best image generated from the Qrrrl LoRA
and a hybrid AI model partially based on my drawings

CLOTHED! ANIMATED

I was hoping this long, arduous approach of generating CLOTHED figures from my drawings would let me animate them in Midjourney. However, Midjourney censored the above CLOTHED image -- though it would animate a similar figure after I altered it with Flux AI in KREA:

Midjourney censored the above image (bottom row),
but would animate it after I tweaked it in KREA (top row)

I am not sure that the animated result
preserves anything of my drawing style


WITHOUT the LoRA

I then generated a bunch of new figures WITHOUT the Qrrrl LoRA, to see if there was any difference from the figures I generated above. I used the same base model and the same prompt:

Generated without the Qrrrl LoRA
from the hybrid model only partially based on my drawings

I think the LoRA probably made a difference.




QWEN LoRA as BASE MODEL

  • Note: we later altered our Stable Diffusion setup and could not get this method to work again.

The LoRA also seemed to work in the MAIN MODELS folder -- listed as "Lora/Qrrrl.safetensors [285ebe07a1]."  So I uploaded the first drawing I used to train the model -- using the prompt: "a drawing of a woman lying on the floor" -- and all 8 generated figures were clothed:

This individual image generation
is odder than the previous "Qrrrl LoRA" generations

  • I could upload an IMAGE and alter it with the LoRA loaded from the Main Models folder

I used the QWEN LoRA as a MAIN MODEL
to transform the first of 5 figure drawings
which I used to train the LoRA model

The resulting 8 figures were a bit surreal

The 8 generations in an animation --
prompt: "a drawing of a woman lying on the floor"

The results were stranger when the LoRA, used as a MAIN MODEL, altered one of my drawings -- so it seems that the QWEN LoRA model did have some impact.



BERNINI MODEL

On December 19th, 2025 I trained another QWEN LoRA model with five AI images in the style of Bernini -- Berni.safetensors (281.43 MB):

I trained a LoRA on these five AI images
in the style of Bernini

Stable Diffusion generated this hyper-Baroque anthropomorphic figure below:

Generated with the krrrl.ckpt [a07c787ed6] model
and the Berni LoRA

Download "Berni.GLB" (2.25 MB)


Generated with the krrrl.ckpt [a07c787ed6] model
and the Berni LoRA

Generated with the krrrl.ckpt [a07c787ed6] model
and the Berni LoRA

  • I like the extra twists in the AI versions of Bernini sculptures



CONCLUSION

I don't know that the QWEN LoRA model is making a huge difference in the images that I'm generating. However, I hope that training future custom models can create and perpetuate new ART STYLES.  And I think that new styles are the future of AI ART -- as STYLE IS CONTENT:


Moreover, I believe that STYLE has to agree with CONTENT in a successful piece of art.  For instance, a punk rock poster would not work in a Rembrandt style.


PREVIOUS AI TRAINING

I had previously made AI ART models based on my drawings:
