Capturing Reality

After the recent Unity keynote, I was curious to see how I could bring photogrammetry into my projects. I finally got some time to give it a try using Capturing Reality. I just wanted to play around and understand what the technology allows us to do, without spending too much time chasing the best-looking result.

The software demo doesn't export meshes, so I could only test the capture part of the whole process. I first started by capturing a block character, which I thought would be the perfect model to experiment with. It failed for three reasons: the ground was too shiny, the object itself was too shiny, and the camera blurred the background too much (which I suspect prevents the software from matching images efficiently, though I'm not sure). I then decided to go for a doll to avoid reflections. After taking around 50 photos I was able to successfully create a model, but it still lacked many details, especially from certain angles.
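
A quick way to screen out blurry shots before feeding them to the software is a variance-of-Laplacian check with OpenCV. This is just a sketch of what I'd try next time, not something I used for this test; the folder name and the 100.0 threshold are placeholders you'd tune for your own camera and lens:

```python
import cv2
from pathlib import Path

# Placeholder threshold: below this, I'd consider reshooting the photo.
# The right value depends on your camera, lens, and shooting distance.
BLUR_THRESHOLD = 100.0

def sharpness(path):
    """Variance of the Laplacian: higher means sharper edges in the image."""
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(img, cv2.CV_64F).var()

# "captures" is a hypothetical folder holding the photo set.
for photo in sorted(Path("captures").glob("*.jpg")):
    score = sharpness(photo)
    flag = "" if score >= BLUR_THRESHOLD else "  <- too blurry, reshoot"
    print(f"{photo.name}: {score:.1f}{flag}")
```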

For my last try, I put the doll on a stick and took around 150 photos, with some shots close enough to capture the less visible parts of the object. When you align all the images, you get several separate point-cloud components, as the software isn't able to match all of the given images together. By adding control points, the algorithm was able to match more photos, which resulted in a denser point cloud. Only the nose, which was reflecting light, ended up with an irregular surface when creating the mesh. However, that could easily be improved by applying a temporary paint (as Unity advised in their keynote) to avoid reflections during the capture of the model.
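
For what it's worth, with a full license the same align → mesh → texture sequence can also be scripted through the RealityCapture command line instead of clicking through the UI. Here's a rough sketch of what that could look like; I haven't run it myself since the demo can't export, and the command names and paths below are assumptions you'd want to check against the CLI documentation for your version:

```python
import subprocess

# Rough sketch of an align -> mesh -> texture pipeline via the
# RealityCapture CLI. Command names are from the CLI docs and may
# differ between versions; exporting requires a full license
# (the demo build can't export meshes). All paths are placeholders.
RC = r"C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe"

subprocess.run([
    RC,
    "-addFolder", r"D:\captures\doll",   # load every photo in the folder
    "-align",                            # match images into components
    "-setReconstructionRegionAuto",      # bounding box around the point cloud
    "-calculateNormalModel",             # build the mesh at normal detail
    "-calculateTexture",                 # project the photos onto the mesh
    "-exportModel", "Model 1", r"D:\captures\doll.obj",
    "-quit",
], check=True)
```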

My team at Giantstep was interested in the result and might purchase a license. If so, I will update this post with a second part focusing on how to adapt the high-quality mesh for real-time use in Unity. Here is a GIF showing the doll from different angles.