Design Boom article: SHAP-E AS A CONDITIONAL GENERATIVE MODEL FOR 3D ASSETS
DOWNLOAD: my generated GLB file (5 MB) that came from Shap-E
THE PROCESS
I uploaded the 2D image below (which actually came from a 3D drawing) into Midjourney:
First I BLENDED the above image with an AI image Robert Atkinson generated -- Top row below. That BLEND generated two AI images -- Bottom row below:
Then I uploaded the last AI image above (lower right) and used the Midjourney prompt "in the style of Hans Bellmer" to get the tan AI image on the right below:
Continuing, I then BLENDED the original image with the "Hans Bellmer" result, to get the 3D-like image on the far right below:
Finally I BLENDED the other earlier AI generation with the resulting 3D-like image on the right above, to get the image on the far right below:
DOWNLOAD: 230509__tmp_gradio_ebf21daf42243a8 ... d45d5dd500ca3c1889_tmpb3eu_gpp.glb (5 MB)
SLIME TEXTURE
Going one step further, I decided to apply a green "texture" to my 3D file:
Leonardo.AI will apply a texture to an OBJ file from a mere text prompt -- I typed in "green algae covering the shapes." However, I first had to convert the GLB file to an OBJ file in Blender, before uploading the OBJ file to Leonardo.AI.
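For anyone wanting to repeat that conversion step without clicking through Blender's menus, it can be scripted. This is only a sketch: it assumes Blender 3.x (the OBJ operators were renamed in 4.x), and the file names are placeholders for your own files.

```python
# Run inside Blender, e.g.:  blender --background --python glb_to_obj.py
# Assumes Blender 3.x; in Blender 4.x the exporter is bpy.ops.wm.obj_export.
import bpy

bpy.ops.wm.read_factory_settings(use_empty=True)        # start from an empty scene
bpy.ops.import_scene.gltf(filepath="shap_e_model.glb")  # import the Shap-E GLB
bpy.ops.export_scene.obj(filepath="shap_e_model.obj")   # writes the .obj plus a .mtl sidecar
```

The resulting OBJ (and its .mtl sidecar) is what gets uploaded to Leonardo.AI.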
DOWNLOAD: green algae covering the shapes.zip (32.80 MB)
The downloadable Zip file above contains an OBJ file and four texture files (three JPGs and one PNG).
However, the only way I can figure out how to condense everything into one GLB file is to upload the ZIP file to SketchFab, and then download the GLB file from SketchFab.
I uploaded the Zip file to SketchFab (this green algae "textured" file does not look as solid as the initial GLB file from Shap-E):
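As an aside, Blender itself may be able to do this repacking locally -- GLB export embeds the textures into the single output file. A hedged sketch, again assuming Blender 3.x and that the OBJ's .mtl file correctly points at the four texture images (file names are placeholders):

```python
# Run inside Blender. A possible local alternative to the SketchFab round trip.
# Assumes Blender 3.x; in Blender 4.x the importer is bpy.ops.wm.obj_import.
import bpy

bpy.ops.wm.read_factory_settings(use_empty=True)       # start from an empty scene
bpy.ops.import_scene.obj(filepath="green_algae.obj")   # .mtl and textures must sit alongside
bpy.ops.export_scene.gltf(filepath="green_algae.glb",
                          export_format="GLB")         # one binary file, textures embedded
```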
* AUGMENTED REALITY FANTASY *
I want to create 3D AR experiences with our 8x8 inch prints -- similar to what we did with three Desert Triangle prints.
I want to generate 3D images to use in AR -- Augmented Reality. Ideally the small print would act as a "target": when a viewer passed their phone over it (as if it were a QR code), Augmented Reality would conjure up a 3D image that would float over and interact with that print. The ultimate goal is to use Web AR, so that the viewer does not have to download an app to see the 3D AR image (can I do this with 8th Wall?)
My ambition is to enhance some of our BUCKET PRINTS with 3D Augmented Reality images -- and then have a pop-up BUCKET EXHIBITION of ten enhanced linocuts, with ten different 3D AR images floating over them.
It would be even more 21st Century if both the 8x8 inch linocut prints and the 3D AR images were generated with AI.
** STREAMLINED AI AR 3D FANTASY **
What if this whole AR Print fantasy could be streamlined in real time? One would hold a smartphone over the framed 8x8 inch print, and the phone would take a picture of it automatically, as if it were reading a QR code. That image would then be sent to the cloud, where Shap-E would automatically generate a 3D image. That generated 3D AI image would then be projected over the same print in Augmented Reality, practically in real time, adding a 3D element to a 2D print.
Moreover, the smartphone could deliver an older 3D AR image immediately (perhaps generated by the previous viewer), while the new 3D AI AR image was being generated.
One might even use Siri, or another voice command assistant, to influence the 3D image that Shap-E generated.
Are there not already AI programs that combine other AI programs to deliver a more robust experience?
- NOTE: The Shap-E program seems to be an improvement over the Point-E program released in December 2022, which I blogged about: