This is our current visualization of the project, showing what we have accomplished and what is still left to do.
So far this has been a roller-coaster week, starting with failed attempts with move.ai and moving on to working with our Metahuman.
On Tuesday we received the four GoPros we planned to use with the online motion-capture AI, move.ai. We ran multiple tests, learned from our mistakes, and kept refining our setup by following best practices from YouTube tutorials. On Wednesday we finally had a really good capture session. I uploaded the footage and, given the late hour, decided to leave it and go home to sleep. I kid you not: within those twelve hours, move.ai updated their website and dropped support for GoPro footage. I only found this out after my calibration takes stopped working and I started doing some research. They now encourage users to shoot primarily with iPhones instead of other cameras. That doesn't make sense to me; footage is footage in my book. Nonetheless, we decided to abandon the move.ai method and use the trusty Vicon system the school has.
As for the Metahuman, I followed a tutorial to retarget the Mixamo animations to the Metahuman rig. It didn't take very long, and I learned a bit more about the engine, since this is my first time using Metahumans.
That, however, was the easy part. We found a great clothing asset for our Metahuman that we can easily match with our live photography.

Unfortunately, the clothes don't want to stay on the Metahuman in the sequencer, and on top of that the engine seems to be forcing a lower level of detail there, so I'm not sure what's going on.
*** Update: I actually managed to fix it. The Metahuman's forced LOD was set to -1, which basically means auto. Setting it to 0 did the trick. Remember kids: never use auto mode, always use manual.
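In case anyone wants to script the same fix instead of clicking through the details panel, here is a minimal sketch using Unreal's editor Python API. It assumes the Metahuman actor is selected in the level and that its LODSync component exposes the forced LOD as an editor property; exact class and property names can differ between engine versions, so treat this as a starting point rather than the exact steps I took (I just changed the value in the editor).

```python
import unreal

# Minimal sketch (assumption: the Metahuman actor is selected in the level).
# Find its LODSync component and force LOD 0 instead of -1 (auto).
actors = unreal.EditorLevelLibrary.get_selected_level_actors()
for actor in actors:
    lod_sync = actor.get_component_by_class(unreal.LODSyncComponent)
    if lod_sync:
        # -1 means "auto"; 0 forces the highest level of detail.
        lod_sync.set_editor_property("forced_lod", 0)
        unreal.log(f"Forced LOD 0 on {actor.get_actor_label()}")
```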