My Staffordshire University colleague and resident technical encyclopedia, Richard Harper, suggested this approach after I shared the original post. Specifically, he advised using the diffuse map from the photogrammetry reconstruction and the normal map from the laser scan.
The result would be a more realistic-looking colour, complemented by a really sharp, detailed normal map.
This made perfect sense.
So I open Maya and drop the low poly skull (already normal mapped) into a scene along with the high poly photogrammetry reconstruction.
I once again make use of the Transfer Maps facility, baking the diffuse from the photogrammetry mesh and applying it to the low poly skull.
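Under the hood, a transfer bake boils down to: for each sample point on the low poly surface, search within an envelope for the corresponding point on the high poly surface and copy its colour across. Here's a toy sketch of that idea in Python — the point and colour data are made up, and the naive nearest-point search stands in for Maya's actual surface projection:

```python
# Toy 1D "transfer bake": for each low poly sample point, find the closest
# high poly point and copy its colour. Illustrative only — not Maya's API.

def bake_transfer(low_points, high_points, high_colours, max_search_dist=0.5):
    """Copy colour from the nearest high poly point onto each low poly point."""
    baked = []
    for p in low_points:
        # Nearest high poly sample by distance
        nearest = min(range(len(high_points)), key=lambda i: abs(high_points[i] - p))
        if abs(high_points[nearest] - p) <= max_search_dist:
            baked.append(high_colours[nearest])
        else:
            baked.append((0, 0, 0))  # outside the search envelope: bake black
    return baked

low = [0.0, 1.0, 2.0]
high = [0.1, 0.9, 2.4]
colours = [(200, 180, 160), (190, 170, 150), (180, 160, 140)]
print(bake_transfer(low, high, colours))
# → [(200, 180, 160), (190, 170, 150), (180, 160, 140)]
```

The search-envelope check is also why a rushed bake can miss detail: if the meshes aren't aligned well, samples fall outside the envelope and bake out wrong.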
Three hours later and the baking process is complete.
Laser scanning on the left, photogrammetry on the right, each using the photogrammetry diffuse map
Was it worth the wait?
Upon first glance, my response is a resounding “Pfft, no difference! That’s the last time I listen to Rich”. My tune changes to “Okay, shut up Ed” when I take a closer look at the meshes.
While the improvements are hard to appreciate in a globally illuminated environment, they become clearer as the light sources are moved.
This new mesh looks considerably better than either of the originals. I’ve put it alongside the original photogrammetry mesh (on the right) to demonstrate:
Laser scan normal map on the left, photogrammetry normal map on the right
Should’ve tweaked the positioning
The overall quality of the map suffers from my rushing the baking process. Plus, it would’ve likely been better to map the laser scan normals onto the retopologised photogrammetry mesh. That would’ve taken care of the holes, but also taken much longer to render.
Regardless, when viewed side by side the improvements are clear.
The various highlights and shadows are far more consistent with the shapes indicated by the colour of the diffuse.
I didn’t think the photogrammetry normal map looked particularly poor initially. Now though, it’s clear just how much of a difference a strong normal map can make.
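To make concrete why the normal map drives those highlights and shadows: each pixel of a tangent-space normal map is decoded into a unit vector, and the diffuse intensity at that point is the clamped dot product of that vector with the light direction. A minimal sketch — the pixel values below are illustrative, not taken from either scan:

```python
import math

def decode_normal(rgb):
    """Decode an 8-bit tangent-space normal map pixel into a unit vector."""
    n = tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

def lambert(normal, light_dir):
    """Diffuse intensity: clamped dot product of normal and light direction."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

flat = decode_normal((128, 128, 255))    # roughly straight out of the surface
tilted = decode_normal((200, 128, 200))  # leaning towards +X
light = (0.0, 0.0, 1.0)                  # light pointing straight at the surface
print(lambert(flat, light) > lambert(tilted, light))  # True: the tilted facet catches less light
```

A sharper normal map simply means those per-pixel vectors track the real surface more faithfully, so the shading responds correctly as the light moves.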
Video does a much better job of demonstrating all this than I can. So here’s a video:
A quick animation demonstrating the normal and diffuse maps under different lighting conditions
There’s precious little more to really say on the matter, besides “It worked”.
Although it has caused me to consider investing in Reality Capture once more. To my knowledge, it allows users to combine laser scanning data and photogrammetry in a single session. This would be easier than my current pipeline of jumping between VXscan, Meshroom and Maya.
But their pricing strategy gave me a hernia. I’d rather endure a little bit more work than support, and by extension perpetuate, anti-consumer business practices.
Because Free Market. Long may it live.
Anyway, what I hope this update highlights is that a one-size-fits-all approach isn’t always applicable. Both laser scanning and photogrammetry come with their own strengths. Only by merging those strengths was I able to achieve the best result.
Because black and white makes everything look boss.
Work with a technology’s limitations, not against them
This is a point I consistently emphasise to our students regarding the range of scanners and approaches available at the University: don’t become obsessed with the tech, because the tech has limits.
When all you have is a hammer, every problem looks like a nail.
You can become so fixated on a particular piece of hardware that you compromise the quality of your output.
For me, it’s more important to understand the appropriate use and fundamental range of capability of each technology. The stronger your grasp of that, the easier it is to maximise their effectiveness in unison. This opens the door to creative problem solving, rather than nurturing dependence on repeated button presses.