It’s a question that has plagued mankind for centuries – which is the better method of digital reconstruction for creating game assets: laser scanning or close range photogrammetry?
Such mysteries of the universe rarely burden my thoughts. My brain tires at the prospect of eternal hypothesis. Furthermore, I recognise how blissful ignorance can be and thus choose to bask in its warming glow wherever I can.
But this question is different, haunting my dreams and gnawing at the inside of my skull. I must consider answers constantly, because of one very simple reason:
I’m paid to.
Preparation and capture – Laser Scanning
Initial calibration and setup
Today’s laser scanner of choice is the HandySCAN Black Elite, by Creaform. From the moment I open the laptop, setup takes barely five minutes. I plug in all the kit, calibrate the scanner and configure the lasers in this time.
With those steps completed, the hardware and software are ready to go. Bill however, is not.
This is because the HandySCAN requires that a subject be adorned in targets first.
These are small, retro-reflective stickers, used by the scanner to triangulate its own position – not at all dissimilar to how Motion Capture works. Except backwards. It’s a simple process in theory, but a little more complex and nuanced in practice.
For example, I need to avoid creating symmetrical patterns with these targets.
Symmetry can make it difficult for the software to distinguish areas from one another.
The scanner also needs to see at least five targets at any one time. If it doesn’t, it won’t scan the surface it’s pointed towards. Variety and imperfections across this particular subject’s geometry introduce challenges to this.
I’ve got extrusions of varying size and complexity, small holes, hidden areas. Ensuring targets are positioned to allow optimal scanning of these areas takes some thought and consideration.
I’m not good at thought and I hate being considerate.
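For anyone who enjoys thought more than I do, the five-target rule can be sketched as a quick visibility check. To be clear, this is a hypothetical helper of my own, nothing from Creaform’s software, and it ignores occlusion entirely. It just counts targets inside the scanner’s view cone:

```python
import numpy as np

def visible_targets(scanner_pos, view_dir, targets, fov_deg=60.0):
    """Count targets inside the scanner's view cone (occlusion ignored)."""
    d = np.asarray(view_dir, float)
    d /= np.linalg.norm(d)
    rays = np.asarray(targets, float) - np.asarray(scanner_pos, float)
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    # A target is "visible" if the angle to it is within half the field of view.
    return int(np.sum(rays @ d >= np.cos(np.radians(fov_deg / 2))))
```

If that number ever drops below five for a pose you care about, that’s a spot that needs more stickers.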
Careful not to destroy the finer details
The targets also need to sit on areas of minimal detail. When the mesh gets finalised, the software will average out the surface area beneath these targets. Therefore, any such details get lost in the process.
If I was working with a totally flat and smooth surface, this wouldn’t be a problem.
But my T-Rex skull is covered in varying textures. All of these I wish to maintain in the final output.
With all this in mind, it takes around twenty minutes to achieve a target placement I’m confident will do the job.
I expect I’ll get faster at this in future, as I scan more objects and have a better idea of optimal layout.
Scanning the subject
With the HandySCAN calibrated and the subject targeted up, I’m ready to scan the skull.
This literally involves pointing the scanner at the object, hitting the go button and moving my hand across all the surfaces I wish to capture. It’s a remarkably therapeutic process. I feel like I’m stroking a cat, except the cat is an extinct creature and my hand is lasers.
But I’m not being attacked or judged, so it’s literally nothing like stroking a cat.
Scanning from various angles
I perform three scans of the skull in different positions: sat upright, upside down and on its back.
This is because scanning the whole subject from one angle is impossible. Various areas become occluded on each scan and are only possible to capture when the skull is in a different position. These three scans will therefore be merged upon completion.
It takes approximately forty-five minutes to complete this process, even though the actual capturing of data happens in seconds. The reason it takes so long is that I want the highest resolution scan possible (because it’s easier to remove details than add them after the fact).
Areas take much longer to scan at these settings and I have to be careful not to leave any holes or gaps.
I also have to restart the software several times because of the battering being unleashed upon the system’s hardware. I put the majority of the time used down to my own inexperience with the kit and a less than optimal approach.
Nevertheless, I soon have a project file comprising three scans, ready to be merged. The actual session weighs in at a whopping 7.62GB. That’s fairly hefty, but I won’t be surprised if the photogrammetry files end up larger.
Reconstruction – Laser scanning
Cleaning and merging the laser scans
With data captured for both approaches, it’s now time for reconstruction.
As far as laser scanning is concerned, I now have three scans. These need to be cleaned up with all excess noise and geometry removed.
This process takes around fifteen minutes, the results of which are ready to be merged together.
VXscan (the software used to reconstruct) allows me to keep the targets on my scans after cleanup. These can then be used to align the three meshes automatically, which it does with incredible speed.
Incredible might be pushing it. I’m just trying to make things sound exciting. Let’s go with expected.
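The alignment itself is classic rigid registration. VXscan’s implementation is its own business, but the textbook version, the Kabsch algorithm, finds the rotation and translation that best map one scan’s targets onto another’s. A minimal sketch, assuming the target pairs between scans have already been matched up:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch: find rotation R and translation t so that dst ~= src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance of targets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against mirrored solutions
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Apply `R` and `t` to every vertex of the second scan and its targets land on top of the first scan’s, which is presumably the kind of thing happening behind that progress bar.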
This looks great, but I clearly did a crap job with my cleanup. There are random triangles and noise all over the place. I’ll bear this in mind for future scans; for now, I’m just going to live with it.
I export this file to an OBJ, which lands at a whopping 1.05GB on my hard drive.
That’s a big OBJ, but it’s what I do with it that will really count.
Checking the exported laser scan mesh in Maya
I import the mesh into Maya to give it a look over and assess the overall accuracy.
ZBrush would probably be more suitable, naturally accustomed as it is to what I expect will be an insane polycount. But Maya’s interface doesn’t give me the shits, so it wins.
The polycount for this mesh is a whopping 8,678,179.
It looks pretty good. Now to get photogrammetry to the same stage.
Reconstruction – Photogrammetry
Importing into Meshroom
The reconstruction process for photogrammetry is much simpler than laser scanning. In theory.
Said theory dictates that because the photos I’ve taken include so much coverage between images, no manual tweaking should be required.
It should just be a case of clicking and dragging the photographs into Meshroom and waiting for that to work its magic.
Which I do and indeed, all photographs are included. No manual work is required to generate the mesh. Which is a relief, because I’m not sure this is a feature even supported by Meshroom.
Reality Capture supports this technique and is generally much faster. It’s also recently been subject to a revised pricing structure which seems to have alienated their entire audience. It’s like the Krupskaya of reconstruction software.
But Meshroom had no problems aligning the photographs and completed the entire reconstruction in just over twenty-eight friggin’ hours.
When doing photogrammetry – get a beefy rig
In fairness, reconstruction had to be done on my laptop, which isn’t ideal, squeezing every last drop out of my Intel Core i7-7700HQ, 16GB RAM and Nvidia GeForce GTX 1050 Ti.
So sure, the processing time is considerably higher than that involved with laser scanning.
On the other hand, the final mesh generated by Meshroom is 0.22GB, almost a fifth the size of the laser scanned OBJ.
It’s also rendered out the material which looks great. Capturing the material simply isn’t possible with the HandySCAN, so this is a massive positive in favour of photogrammetry.
Checking the photogrammetry reconstruction in Maya
I drop this OBJ into Maya along with the rendered texture.
The polycount comes in at 3,500,000 tris, less than half that of the HandySCAN mesh.
Oh, and it looks awesome. Really, really awesome.
The difference a good material can make
Maybe the novelty is wearing off and I’m thus lowering my standards, but I’m very happy with how this has turned out. There’s not a hole in sight and the texture looks great.
I take a look at some of the more complex areas of geometry. While some of the surfaces are a bit rough compared to the real thing, the main shapes have been captured really well.
But this is really all about the material.
I’m sure with some tweaks to resolution and the texturing nodes in Meshroom, it could look even better. But that’s a story for another day.
As far as default settings are concerned, this is some good shizz.
Comparing the laser scanned and photogrammetry generated meshes
It doesn’t take long to realise that the HandySCAN performs considerably better with the smooth surfaces.
As you can see below, the laser scan captured some incredibly subtle, smooth details in the area between the lip and the snout. The photogrammetry version however, is covered in tiny holes and some of the finer details are lost entirely.
These could likely be solved with closer detail shots during the photogrammetry shoot.
The devil is in the details
I will say that the detail captured by photogrammetry is, nevertheless, pretty staggering in its own right. I’m surprised by how many of the really small indentations it has managed to capture. Great as they are, they’re far exceeded by the capabilities of the HandySCAN.
Nowhere is this better demonstrated than on the teeth.
The teeth in the photogrammetry mesh are littered with imperfections and geometry which simply isn’t on the real model. More coverage of the teeth, especially in between one another would’ve made all the difference.
The HandySCAN however, has captured these pretty much perfectly.
The same can be said for the metal rods which support the top jaw.
With laser scanning, the smoothness of these forms is maintained. Which shouldn’t really come as a surprise, given that engineering is this tool’s primary area.
Photogrammetry’s issues with such geometry become painfully apparent here.
Plug those holes
Where photogrammetry really excels in respect of capture quality is how it handles holes.
If the HandySCAN can’t see the surface, it doesn’t get scanned. Sure, you can fill in these holes manually, but that could be really tricky and time consuming. Plus, I’m comparing these in terms of how well they perform out of the box.
Photogrammetry does its best to fill these holes automatically. It’s not perfect, but it works and at the very least means I have a complete mesh.
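The simplest possible version of automatic hole filling is fan triangulation: take the ring of vertices around a hole and fan triangles out from one of them. Meshroom’s actual approach is far more sophisticated than this toy sketch, but the principle of turning a boundary loop back into surface is the same:

```python
def fan_fill(loop):
    """Fan-triangulate a closed boundary loop of vertex indices.

    A loop of n vertices yields n - 2 triangles, all sharing loop[0].
    Fine for small, roughly flat holes; ugly for anything complex.
    """
    return [(loop[0], loop[i], loop[i + 1]) for i in range(1, len(loop) - 1)]
```

Run it on a five-vertex hole boundary and you get three new triangles plugging the gap.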
In respect of pure mesh quality, the HandySCAN pretty much runs away with it here. Photogrammetry isn’t poor by any stretch of the imagination, but the smooth surfaces and ultra fine details just don’t compare.
To be fair, this might be addressed with the detail settings, but all such changes will impact the reconstruction time.
I went into Meshroom with the default values and there’s precious little doubt in my mind that with some tweaks, I could generate a better mesh.
But I’m not waiting another twenty-eight hours to see if that’s the case. I’ll definitely make this the focus of a future test, however.
Small, shiny details are a problem
Where both approaches really struggle is with the picture hook on the back of the skull.
Shiny materials are always problematic, but I expect the fact that the hook itself will have moved between shoots created a bigger issue.
But like I mentioned before, it’s not a prominent piece of geometry or focal point, so I’m not too bothered by this.
Optimising the meshes
First up is decimation, reducing the polycount of the raw mesh to a level more agreeable to modern game engines.
It takes ten minutes to import the laser scanned mesh but only two minutes for photogrammetry.
Each take another two minutes to decimate, taking the polycount down to 38,545 and 37,076 respectively.
There’s not much between the final outputs, in terms of quality.
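For the curious, one of the crudest decimation techniques, vertex clustering, is simple enough to sketch in a few lines. Production tools use much smarter quadric-error-metric methods, so treat this purely as an illustration of the idea: snap vertices to a grid, merge whatever shares a cell, and throw away collapsed triangles.

```python
import numpy as np

def decimate_by_clustering(verts, tris, cell=0.05):
    """Crude vertex-clustering decimation: merge all vertices within each grid cell."""
    verts = np.asarray(verts, float)
    keys = np.floor(verts / cell).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)  # inverse shape varies across numpy versions
    # New vertex position = centroid of its cluster.
    counts = np.bincount(inverse).astype(float)
    new_verts = np.stack(
        [np.bincount(inverse, weights=verts[:, a]) / counts for a in range(3)],
        axis=1,
    )
    # Remap triangles and drop any that collapsed to an edge or a point.
    new_tris = inverse[np.asarray(tris)]
    keep = (
        (new_tris[:, 0] != new_tris[:, 1])
        & (new_tris[:, 1] != new_tris[:, 2])
        & (new_tris[:, 0] != new_tris[:, 2])
    )
    return new_verts, new_tris[keep]
```

Bigger cells mean fewer polygons and uglier surfaces, which is essentially the trade I was making with the real sliders.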
Next is retopology. This basically involves optimising the surfaces, ensuring they’re well set up for both animation and texturing.
I play around with various settings, taking twenty minutes on the laser scans and ten minutes on photogrammetry.
The results are once again pretty much identical. The laser scanned polycount has come out slightly lower at 37,836 tris, with photogrammetry still at 37,076.
The major difference is between the teeth. The photogrammetry mesh has really taken a hit here, although it’s more than compensated for by the absence of holes.
Map baking in Maya
The optimised meshes are almost game ready. They need to be resized, orientated and unwrapped, before having the normals from the high detail meshes (and diffuse, in the case of photogrammetry) baked onto them.
My UV maps are rushed and terrible because unwrapping is hell and I just want to get to the end. Like when people talk about Brexit.
Baking the laser scans takes fifteen minutes whereas photogrammetry takes eight hours at the same settings (4k, yada yada yada). This is probably because I used a decimated version of the high poly mesh with the laser scans, whereas photogrammetry used the raw mesh. Decimating that version created texture problems which weren’t going to play nice with the baking.
Again, probably somewhat simple to overcome if I give myself a little more time.
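Incidentally, this is why every normal map comes out that distinctive lilac-blue: a bake just encodes each surface normal’s XYZ into RGB. A minimal sketch of the encoding, leaving the tangent-space versus world-space debate for another day:

```python
import numpy as np

def encode_normal(n):
    """Pack a unit surface normal into the 8-bit RGB a normal map stores."""
    n = np.asarray(n, float)
    n /= np.linalg.norm(n)
    # Remap each component from [-1, 1] to [0, 255].
    return np.round((n * 0.5 + 0.5) * 255).astype(np.uint8)
```

A flat, straight-up normal of [0, 0, 1] encodes to (128, 128, 255), which is exactly that default blue.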
Either way, below are both meshes, then the low poly versions and finally the low poly wireframes of the laser scans and photogrammetry.
Final presentation in Unreal Engine 4
Well, that looks ‘normal’
With both meshes at pretty much the same polycount, the difference in quality between the normal maps becomes far more apparent.
I wouldn’t turn my nose up at either, but the fidelity is considerably higher on the laser scanned mesh.
Those really fine details, holes and cracks are not nearly as well defined on the photogrammetry version.
These differences are even clearer when viewed straight on.
There’s a considerable amount of noise across the upper lip and inside of the top jaw on the photogrammetry mesh.
By comparison, the laser scan boasts more definition and clarity in its surface details.
In terms of how accurately these reflect the real model, the laser scan wins.
Everything changes with a dash of colour
While the photogrammetry normal map certainly has its shortcomings, these are practically annihilated once the diffuse is added.
It masks the rougher areas of noise while also highlighting some of the details that get lost in the normal alone.
The quality is consistent across all areas of the mesh. I’m a little surprised by this, given how rushed the UV unwrap was. I don’t need any encouragement for taking shortcuts, so this is a bad sign.
Yeah, looks cool.