I hope I have adjusted the AR animation below with the BABYLON.js 3D Viewer so that it no longer sits too high, as it did before, when viewers could not see it:
View in AUGMENTED REALITY
MIXAMO is good for animating 3D objects (imported as OBJs); however, I had problems with the output. The models were not colored or textured, and they imported too high into the AR program, so that no one could see them.
I used the BABYLON.js 3D Viewer to reposition the MIXAMO 3D models, and also to view 3D models online.
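The repositioning itself comes down to translating the model down by its lowest point so it rests on the ground plane instead of floating overhead. A minimal Python sketch of that calculation (the vertex positions here are made up for illustration, not from an actual MIXAMO export):

```python
def ground_offset(vertices):
    """Return the Y translation that places the model's lowest
    vertex on the ground plane (y = 0)."""
    min_y = min(y for (_x, y, _z) in vertices)
    return -min_y

# Hypothetical vertex positions from a model that imported too high
vertices = [(0.0, 2.5, 0.0), (1.0, 3.0, -1.0), (-1.0, 2.8, 1.0)]

offset = ground_offset(vertices)
print(offset)  # -2.5: move the whole model down 2.5 units
```

Applying that offset to the model's root transform is what brings it down to eye level in AR.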
*****
The huge advantage is that I can use the simplified version of Hunyuan3D for free on the powerful AI computer at Quelab. However, that local computer does not have enough VRAM to generate colors and textures. I believe we need about 28 GB of VRAM to run such a 3D program locally.
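To check whether a given machine clears that bar, you can query total GPU memory with nvidia-smi (`nvidia-smi --query-gpu=memory.total --format=csv,noheader`) and compare it to the requirement. A small Python sketch that parses that kind of output; the sample value is made up for illustration:

```python
def enough_vram(smi_output, required_gib=28):
    """Parse 'memory.total' lines like '24576 MiB' (one per GPU)
    and check whether any GPU meets the VRAM requirement."""
    for line in smi_output.strip().splitlines():
        mib = int(line.split()[0])
        if mib / 1024 >= required_gib:
            return True
    return False

# Hypothetical output for a single 24 GiB card
sample = "24576 MiB"
print(enough_vram(sample))  # False: 24 GiB falls short of ~28 GiB
```

In practice you would pipe the real nvidia-smi output into this check instead of the sample string.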
I converted a lot of the AI ORIGAMI ANIMALS into 3D GLB files, as well as a lot of my figure drawings that had been enhanced with AI, on April 9, 2026. Again, there are 2D-to-3D programs that do a somewhat better job.
Also note that MAKERLAB by Bambu Studio will generate a colored and textured 3D file from a 2D image.
Now perhaps I also have to generate my own motions with custom BVH files -- such as those I can generate with (Hugging Face) MoMask. I blogged about this in January 2024 -- AI Confusion -- and ABQ Art Walk January.
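For context, a BVH file is just text: a HIERARCHY section declaring the skeleton's joints and channels, then a MOTION section with one line of channel values per frame. A Python sketch that writes a minimal one (the single-joint skeleton and two-frame motion are illustrative, not MoMask's actual output):

```python
def make_bvh(frames, frame_time=1.0 / 30.0):
    """Write a minimal BVH file: one root joint with 6 channels
    and an End Site, followed by the motion data."""
    header = "\n".join([
        "HIERARCHY",
        "ROOT Hips",
        "{",
        "  OFFSET 0.0 0.0 0.0",
        "  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation",
        "  End Site",
        "  {",
        "    OFFSET 0.0 10.0 0.0",
        "  }",
        "}",
        "MOTION",
        f"Frames: {len(frames)}",
        f"Frame Time: {frame_time:.6f}",
    ])
    motion = "\n".join(" ".join(f"{v:.4f}" for v in f) for f in frames)
    return header + "\n" + motion + "\n"

# Two made-up frames that raise the hips by one unit
bvh = make_bvh([(0, 0, 0, 0, 0, 0), (0, 1, 0, 0, 0, 0)])
print(bvh)
```

A file like this can be imported into BLENDER the same way as the mocap files in the video below.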
SegviGEN can, however, be run locally -- but it needs at least 24 GB of memory:
- (YouTube) Importing Mocap into BLENDER
- (YouTube) Hunyuan 3D-2.5 - Create and Rig a 3D Character in 5 Steps
**********
I like breaking the 3D files into pieces, which can be done nicely with SegviGEN...but that program has been paused on Hugging Face Spaces.
docker run -it -p 7860:7860 --platform=linux/amd64 \
    registry.hf.space/fenghora-segvigen:latest python app.py

Docker may show this notice when pulling the image: "You need to switch on the new containerd engine in Docker for pulling and storing images. This will be the new Docker default in the future. Read more: Extending Docker's Integration with containerd"
*****
Running locally is potentially dangerous. Make sure to review this Space code before proceeding.
# Clone repository
git clone https://huggingface.co/spaces/fenghora/SegviGen
cd SegviGen

# Create and activate Python environment
python -m venv env
source env/bin/activate

# Install dependencies and run
pip install -r requirements.txt
python app.py
- Also note that the segmented GLB files can be further compressed with GLB Compressor into very small files
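After compressing, a quick sanity check is to read the file's 12-byte GLB header (magic `glTF`, version, total byte length, all little-endian per the glTF 2.0 container spec) to confirm the output is still a valid GLB. A minimal Python sketch, demonstrated on a fabricated header rather than a real model file:

```python
import struct

def read_glb_header(data):
    """Parse the 12-byte GLB header: magic b'glTF', uint32 version,
    uint32 total length (little-endian)."""
    magic, version, length = struct.unpack("<4sII", data[:12])
    if magic != b"glTF":
        raise ValueError("not a GLB file")
    return version, length

# Fabricated header-only bytes for demonstration (not a real model)
fake = struct.pack("<4sII", b"glTF", 2, 12)
print(read_glb_header(fake))  # (2, 12)
```

With a real file you would pass the first 12 bytes of `open("model.glb", "rb").read(12)` and check that the reported length matches the file size on disk.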