Recursive AI
Too self-referential? I ran the above drawing through Deep Dream Generator twice. The first time I used the "Text 2 Dream" option with the prompt: "woman wrapped in cords." Then I used the result in the "Deep Style" option to transfer the style of the generated image to the original figure drawing:
AI Experiment
I uploaded the above two images to Midjourney with the same prompt: "woman wrapped in cords." I like the generations from my raw drawing the best, more than the generations from the figure altered by Deep Dream:
Midjourney prompt: "woman wrapped in cords"
I prefer this result --
the cord is more tactile
Midjourney prompt: "woman wrapped in cords"
This is not bad,
but too smooth and less like my initial drawing
Note: I did upload to Midjourney in reverse -- that is, I uploaded the altered Deep Dream figure first and created those generations, and then uploaded the raw initial drawing and created these generations. If Midjourney learns as it generates, then perhaps the latter generations come out better, regardless of the order of the uploaded drawings...
ARTIFICIAL COMMENTARY
Midjourney prompt: "Alexa, create a figure drawing"
Midjourney seems to always default
to an archetypal "Barbie" figure
There is a certain default Barbie factor to Midjourney. Ambiguous prompts seem to converge on a female portrait, often a single idealized face that looks like Barbie's. When the result gives a full body, it tends to have Barbie proportions.
Midjourney prompt: "3D model of Barbie created in Pepakura"
The default Midjourney archetype
It's like all of AI is converging on an ideal prototype, the Barbie Eve.
Midjourney prompt: "Alexa and Siri singing Kumbaya"
Alexa and Siri singing Kumbaya
Midjourney prompt: "in the style of Hans Bellmer"
Alexa and Siri singing
in the style of Hans Bellmer
Apparently Midjourney doesn't need any of my drawings to create an interesting variation of Hans Bellmer.