T-Rex Skull

Posted on 4 November, 2019

Laser scanning and close range photogrammetry go head to head. Which will turn out to be the superior solution for getting my lovely replica Tyrannosaurus Rex skull into a game engine?

It’s a question that has plagued mankind for centuries – Which is the better method of digital reconstruction for creating game assets: laser scanning or close range photogrammetry?

Such mysteries of the universe rarely burden my thoughts. My brain tires at the prospect of eternal hypothesis. Furthermore, I recognise how blissful ignorance can be and thus, choose to bask in its warming glow wherever I can.

But this question is different, haunting my dreams and gnawing at the inside of my skull. I must consider answers constantly, because of one very simple reason:

I’m paid to.


Laser Scanning vs Close Range Photogrammetry

A month or so back, we hit the new academic year at Staffordshire University. The place is full of glowing smiles, hope-filled eyes and thick hairlines. All of these will have receded in three years. If not months. Such are the trials and tribulations of evolution into adulthood.

Not that I’d know.

As a result, my attention has turned from Building Hyperreality Games (which I’ll have more on soon, promise) on to more pressing matters, i.e.:

Educating students on our new suite of laser scanners and approaches to close range photogrammetry.

Being able to educate people on a subject requires some understanding of what you’re talking about. This is a concept seemingly lost on politicians.

Oh yes, I am provocative.

Practice makes perfect

To this end, I plan to capture and reconstruct objects of varying complexity. This will be achieved with both laser scanning and photogrammetry. The result will (hopefully) be a decent series of tests which demonstrate the pros and cons of each approach.

In terms of equipment, laser scanning will be done with the scanners I have access to at Staffordshire University. Photogrammetry will be done with whatever camera I have available at the time and Meshroom. Comparisons will then be drawn between:

  • Equipment setup time
  • Subject setup time
  • Scanning/capture time and data size
  • Reconstruction time and mesh quality
  • Material/texture quality

The winner will be determined by performance in each of these areas, and my own totally arbitrary opinion.

Sure, it’d be a fairer and more accurate test to use the exact same scanner and photogrammetry setup in every scenario. But I’m not looking to get published in JIT. This is about getting my head into a science in the funnest/laziest way I can.

With that in mind, please don’t take all my observations too seriously.

Because this is my first test, I’d be wise to start with something relatively simple. Something easy to scan, easy to photograph and easy to compare the results of.

Wise has never really been my thing, though.

So I’m using my replica T-Rex skull.

I call him Bill.


Preparation and capture – Laser Scanning

Initial calibration and setup

Today’s laser scanner of choice is the HandySCAN Black Elite, by Creaform. From the moment I open the laptop, setup takes barely five minutes. I plug in all the kit, calibrate the scanner and configure the lasers in this time.

With those steps completed, the hardware and software are ready to go. Bill, however, is not.

This is because the HandySCAN requires that a subject be adorned in targets first.

Target placement

These are small, retro-reflective stickers, used by the scanner to triangulate its own position – not at all dissimilar to how Motion Capture works. Except backwards. It’s a simple process in theory, but a little more complex and nuanced in practice.

For example, I need to avoid creating symmetrical patterns with these targets.

Symmetry can make it difficult for the software to distinguish areas from one another.

The scanner also needs to see at least five targets at any one time. If it doesn’t, it won’t scan the surface it’s pointed towards. Variety and imperfections across this particular subject’s geometry introduce challenges to this.

I’ve got protrusions of varying size and complexity, small holes, hidden areas. Ensuring targets are positioned to allow optimal scanning of these areas takes some thought and consideration.

I’m not good at thought and I hate being considerate.

T-Rex skull covered in 3D scanning markers

The top target is not flat and also covers an area of detail. This is a good example of what I should not be doing.

Careful not to destroy the finer details

The targets also need to sit on areas of minimal detail. When the mesh gets finalised, the software will average out the surface area beneath these targets. Therefore, any such details get lost in the process.

If I was working with a totally flat and smooth surface, this wouldn’t be a problem.

But my T-Rex skull is covered in varying textures. All of these I wish to maintain in the final output.

With all this in mind, it takes around twenty minutes to achieve a target placement I’m confident will do the job.

I expect I’ll get faster at this in future, as I scan more objects and have a better idea of optimal layout.


Scanning the subject

With the HandySCAN calibrated and the subject targeted up, I’m ready to scan the skull.

This literally involves pointing the scanner at the object, hitting the go button and moving my hand across all the surfaces I wish to capture. It’s a remarkably therapeutic process. I feel like I’m stroking a cat, except the cat is an extinct creature and my hand is lasers.

But I’m not being attacked or judged, so it’s literally nothing like stroking a cat.

Scanning in progress!

Scanning from various angles

I perform three scans of the skull in different positions: sat upright, upside down and on its back.

This is because scanning the whole subject from one angle is impossible. Various areas become occluded on each scan and are only possible to capture when the skull is in a different position. These three scans will therefore be merged upon completion.

It takes approximately forty-five minutes to complete this process, even though the actual capturing of data happens in seconds. The reason it takes so long is that I want the highest resolution scan possible (because it’s easier to remove details than add them after the fact).

Areas take much longer to scan at these settings and I have to be careful not to leave any holes or gaps.

I also have to restart the software several times because of the battering being unleashed upon the system’s hardware. I put the majority of the time used down to my own inexperience with the kit and a less than optimal approach.

Nevertheless, I soon have a project file comprising three scans, ready to be merged. The actual session weighs in at a whopping 7.62GB. That’s fairly hefty, but I won’t be surprised if the photogrammetry files end up larger.

Laser scanned T-Rex skull

My final three scans, all mashed together without any cleanup

Preparation and capture – Photogrammetry

Initial setup

In this scenario, I’m using an exceptional little rig that my colleague Richard Harper has set up. It’s a turntable which automatically rotates at user-defined intervals, triggering a connected camera to take a photo at each.

T-Rex photogrammetry

Getting the initial angle right for close range photogrammetry of the T-Rex skull

I take forty photographs at four different angles/elevations, with the skull sat on its jaw. This process is repeated with the skull flipped over, so I have coverage both above and beneath. Best case scenario – the software just recognises the various patterns and is able to stitch both orientations together. Worst case… blegh, I’ll worry about that if it happens.

A few photos come out blank, but in the end I’m left with 357 well lit images, with a fantastic amount of coverage.

This takes only thirty minutes. That’s fifteen minutes less than the laser scan, but the resultant file size is over 12.00GB – almost double.

It’s also worth mentioning that the automated turntable has streamlined this process significantly. Taking all these photographs by hand would’ve taken considerably longer.

Longer than it’d take an artist to sculpt a T-Rex skull from scratch, however? That’s a question many game artists likely need to start asking themselves.
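
For the curious: I don’t know the exact innards of Richard’s rig, but the general idea is simple enough to sketch in Python. The serial port, the ‘STEP’ command and the filenames below are all my assumptions, not his actual setup:

```python
import subprocess
import time

import serial  # pyserial; assumes the turntable speaks a simple serial protocol

SHOTS_PER_REVOLUTION = 40                   # matches the 40 photos per elevation
STEP_DEGREES = 360 / SHOTS_PER_REVOLUTION   # 9 degrees per shot

# Hypothetical controller on a serial port; the real rig's port and
# command set will differ.
turntable = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2)

for shot in range(SHOTS_PER_REVOLUTION):
    # Trigger the tethered camera with the gphoto2 CLI and pull the image down.
    subprocess.run(
        ["gphoto2", "--capture-image-and-download",
         f"--filename=skull_{shot:03d}.jpg"],
        check=True,
    )
    # Advance the turntable one interval, then let vibrations settle.
    turntable.write(f"STEP {STEP_DEGREES}\n".encode())
    time.sleep(2)
```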


Reconstruction – Laser scanning

Cleaning and merging the laser scans

With data captured for both approaches, it’s now time for reconstruction.

As far as laser scanning is concerned, I now have three scans. These need to be cleaned up with all excess noise and geometry removed.

This process takes around fifteen minutes, the results of which are ready to be merged together.

VXscan (the software used to reconstruct) allows me to keep the targets on my scans after cleanup. These can then be used to align the three meshes automatically, which it does with incredible speed.

Incredible might be pushing it. I’m just trying to make things sound exciting. Let’s go with expected.
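
The targets are doing all the alignment work there. If you ever have to merge scans without targets, iterative closest point (ICP) registration is the usual fallback. Here’s a minimal sketch using Open3D – nothing to do with VXscan itself, and the filenames and correspondence distance are placeholders:

```python
import open3d as o3d

# Two of the cleaned scans; filenames are placeholders.
source = o3d.io.read_point_cloud("scan_upright.ply")
target = o3d.io.read_point_cloud("scan_on_back.ply")

# Point-to-point ICP refines the source's pose until it lines up with the
# target. It needs a rough initial alignment (identity here) and a sensible
# correspondence distance, in whatever units the scans use (e.g. millimetres).
result = o3d.pipelines.registration.registration_icp(
    source,
    target,
    max_correspondence_distance=2.0,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

source.transform(result.transformation)
o3d.io.write_point_cloud("scans_aligned.ply", source + target)
```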

Laser scan image of T-Rex skull

The final mesh after the three initial scans have been cleaned up and merged.

This looks great, but I clearly did a crap job with my cleanup. There are random triangles and noise all over the place. I’ll bear this in mind for future scans; for now, I’m just going to live with it.

I export this file to an OBJ, which lands at a whopping 1.05GB on my hard drive.

That’s a big OBJ, but it’s what I do with it that will really count.

Checking the exported laser scan mesh in Maya

I import the mesh into Maya to give it a look over and assess the overall accuracy.

ZBrush would probably be more suitable, naturally accustomed as it is to what I expect will be an insane polycount. But Maya’s interface doesn’t give me the shits, so it wins.

The polycount for this mesh is a whopping 8,678,179.
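
For anyone wanting to check their own monsters, one quick way to read that figure is Maya’s polyEvaluate command (the mesh name below is a placeholder):

```python
# Run in Maya's Script Editor; "skull_scan" is whatever your imported mesh is called.
import maya.cmds as cmds

tris = cmds.polyEvaluate("skull_scan", triangle=True)
print("Triangle count: {:,}".format(tris))
```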

It looks pretty good. Now to get photogrammetry to the same stage.


Reconstruction – Photogrammetry

Importing into Meshroom

The reconstruction process for photogrammetry is much simpler than laser scanning. In theory.

Said theory dictates that because the photos I’ve taken include so much coverage and overlap between images, no manual tweaking should be required.

It should just be a case of clicking and dragging the photographs into Meshroom and waiting for that to work its magic.

Which I do, and indeed all photographs are included. No manual work is required to regenerate the mesh. Which is a relief, because I’m not sure this is a feature even supported by Meshroom.

Reality Capture supports this technique and is generally much faster. It’s also recently been subject to a revised pricing structure which seems to have alienated their entire audience. It’s like the Krupskaya of reconstruction software.

But Meshroom had no problems aligning the photographs and completed the entire reconstruction in just over twenty-eight friggin’ hours.
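
A tip if, like me, you’d rather queue a monster job like that overnight: recent Meshroom releases ship a command-line batch mode (meshroom_batch; older builds called it meshroom_photogrammetry), which can be driven from a script. Paths below are placeholders:

```python
import subprocess

# Recent Meshroom releases ship a CLI (meshroom_batch; older builds called it
# meshroom_photogrammetry). Both paths below are placeholders.
subprocess.run(
    [
        "meshroom_batch",
        "--input", "photos/trex_skull",    # folder containing all 357 images
        "--output", "output/trex_skull",   # where the textured mesh lands
    ],
    check=True,
)
```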


When doing photogrammetry – get a beefy rig

In fairness, reconstruction had to be done on my laptop, which isn’t ideal, squeezing every last drop out of my Intel Core i7-7700HQ, 16GB of RAM and NVIDIA GeForce GTX 1050 Ti.

So sure, the processing time is considerably higher than that involved with laser scanning.

On the other hand, the final mesh generated by Meshroom is 0.22GB, almost a fifth the size of the laser scanned OBJ.

It’s also rendered out the material, which looks great. Capturing the material simply isn’t possible with the HandySCAN, so this is a massive positive in favour of photogrammetry.

Checking the photogrammetry reconstruction in Maya

I drop this OBJ into Maya along with the rendered texture.

The polycount comes in at 3,500,000 tris, less than half that of the HandySCAN mesh.

Oh, and it looks awesome. Really, really awesome.


The difference a good material can make

Maybe the novelty is wearing off and I’m thus lowering my standards, but I’m very happy with how this has turned out. There’s not a hole in sight and the texture looks great.

I take a look at some of the more complex areas of geometry. While some of the surfaces are a bit rough compared to the real thing, the main shapes have been captured really well.

Photogrammetry mesh detail test

Complex shapes and intersections of geometry such as these came out really nicely

But this is really all about the material.

I’m sure with some tweaks to resolution and the texturing nodes in Meshroom, it could look even better. But that’s a story for another day.

As far as default settings are concerned, this is some good shizz.

Photogrammetry texture

The texture looks boss. ’nuff said

Comparing the laser scanned and photogrammetry generated meshes

Smooth criminal

It doesn’t take long to realise that the HandySCAN performs considerably better with the smooth surfaces.

As you can see below, the laser scan captured some incredibly subtle, smooth details in the area between the lip and the snout. The photogrammetry version however, is covered in tiny holes and some of the finer details are lost entirely.

These could likely be solved with closer detail shots during the photogrammetry shoot.

Details on a laser scanned mesh compared with photogrammetry mesh

The smooth surfaces turned out far more detailed and accurate when laser scanned.

The devil is in the details

I will say that the detail captured by photogrammetry is, nevertheless, pretty staggering in its own right. I’m surprised by how many of the really small indentations it has managed to capture. Great as they are, they’re far exceeded by the capabilities of the HandySCAN.

Nowhere is this better demonstrated than on the teeth.

The teeth in the photogrammetry mesh are littered with imperfections and geometry which simply isn’t on the real model. More coverage of the teeth, especially in between them, would’ve made all the difference.

The HandySCAN however, has captured these pretty much perfectly.

Object accuracy comparison between laser scanning and photogrammetry

Photogrammetry really struggled with the teeth.

The same can be said for the metal rods which support the top jaw.

With laser scanning, the smoothness of these forms is maintained. Which shouldn’t really come as a surprise, given that engineering is this tool’s primary area.

Photogrammetry’s issues with such geometry become painfully apparent here.

Smooth surface comparison between laser scanning and photogrammetry

Photogrammetry likewise didn’t perform too well on perfectly smooth areas, such as these support rods.

Plug those holes

Where photogrammetry really excels in respect of capture quality is how it handles holes.

If the HandySCAN can’t see the surface, it doesn’t get scanned. Sure, you can fill in these holes manually, but that could be really tricky and time consuming. Plus, I’m comparing these in terms of how well they perform out of the box.

Photogrammetry does its best to fill these holes automatically. It’s not perfect, but it works and at the very least means I have a complete mesh.

Comparison of missing areas in laser scanning and photogrammetry

Gaps in the laser scanned mesh look ugly, whereas photogrammetry at least tries to sort this out for me. Thanks photogrammetry. Thanks.
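
If you did want to patch the laser scan’s gaps programmatically rather than by hand, trimesh has a first-pass repair function. Fair warning: it only handles small, simple openings, so the bigger occlusion gaps would still need manual work. The filenames are placeholders:

```python
import trimesh

# Path is a placeholder for the merged laser scan export.
mesh = trimesh.load("trex_laser_scan.obj")
print("Watertight before:", mesh.is_watertight)

# fill_holes only patches small, simple openings; the larger occlusion gaps
# the HandySCAN leaves behind still need manual cleanup.
trimesh.repair.fill_holes(mesh)

print("Watertight after:", mesh.is_watertight)
mesh.export("trex_laser_scan_filled.obj")
```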

In respect of pure mesh quality, the HandySCAN pretty much runs away with it here. Photogrammetry isn’t poor by any stretch of the imagination, but the smooth surfaces and ultra fine details just don’t compare.

To be fair, this might be addressed with the detail settings, but all such changes will impact the reconstruction time.

I went into Meshroom with the default values and there’s precious little doubt in my mind that with some tweaks, I could generate a better mesh.

But I’m not waiting another twenty-eight hours to see if that’s the case. I’ll definitely make this the focus of a future test, however.
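
When that future test happens, I’d like numbers rather than eyeballs. One option is to sample both surfaces and measure cloud-to-cloud distances in Open3D – a sketch, assuming the meshes are already aligned and at the same scale, with placeholder filenames:

```python
import numpy as np
import open3d as o3d

# Assumes both meshes are already aligned and at the same scale;
# filenames are placeholders.
laser = o3d.io.read_triangle_mesh("trex_laser_scan.obj")
photo = o3d.io.read_triangle_mesh("trex_photogrammetry.obj")

# Sample each surface into a point cloud, then measure nearest-neighbour
# distances from the laser points to the photogrammetry points.
laser_pts = laser.sample_points_uniformly(number_of_points=500_000)
photo_pts = photo.sample_points_uniformly(number_of_points=500_000)

distances = np.asarray(laser_pts.compute_point_cloud_distance(photo_pts))
print(f"Mean deviation: {distances.mean():.3f}")
print(f"95th percentile: {np.percentile(distances, 95):.3f}")
```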

Small, shiny details are a problem

Where both approaches really struggle, is with the picture hook on the back of the skull.

Shiny materials are always problematic, but I expect the fact that the hook itself will have moved between shoots created a bigger issue.

But like I mentioned before, it’s not a prominent piece of geometry or focal point, so I’m not too bothered by this.

T Rex skull picture hook

The picture hook turned out poorly with photogrammetry, just like it did with laser scanning.

Quick optimisation for Game Engines

Rules

I need to adapt my rules. The original plan was based around getting meshes into a game engine, pretty much out of the box. But neither mesh is in a good place to do this, because they’re huge. I need to do some basic optimisation, which I honestly would have rather avoided.

Only due to laziness.

To make things fair, I determine the following rules:

  • Each mesh will be decimated and retopologised in ZBrush.
  • The target polycount is 40,000 tris. I arrived at this figure because it’s 4,000 less than the polycount of Doom 2016’s Cyberdemon and I like that guy.
  • Skull length must be 152cm – that of the largest T-Rex skull ever found (a quick scaling sketch follows below).
  • Unwrapping and texture baking will be done in Maya, exporting at 4k.
  • The resultant mesh will go straight into Unreal Engine 4.

These steps can take a considerable amount of work in their own right, if done properly. There are also software packages such as xNormal, Knald and Substance, which are better suited to each.

But in the interests of getting the asset game ready as quickly as possible, I’m satisfied these steps reasonably represent a standard production pipeline.
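
On the 152cm rule specifically, the scale factor is simple enough to work out in Maya. A sketch, assuming the scene unit is centimetres, the skull runs along the X axis and the mesh name is a placeholder:

```python
# Run in Maya; "skull_lowpoly" is a placeholder mesh name.
import maya.cmds as cmds

TARGET_LENGTH = 152.0  # cm, per the rules list

xmin, ymin, zmin, xmax, ymax, zmax = cmds.exactWorldBoundingBox("skull_lowpoly")
current_length = xmax - xmin  # assuming the skull runs along the X axis

factor = TARGET_LENGTH / current_length
cmds.scale(factor, factor, factor, "skull_lowpoly", relative=True)
```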

I’m also adapting my initial performance list to account for these areas:

  • Production time
  • Mesh size
  • Normal map quality

Optimising the meshes

Decimation

First up is decimation, reducing the polycount of the raw mesh to a level more agreeable to modern game engines.

It takes ten minutes to import the laser scanned mesh but only two minutes for photogrammetry.

Each takes another two minutes to decimate, taking the polycount down to 38,545 and 37,076 respectively.

There’s not much between the final outputs in terms of quality.

Decimation is a process which reduces the number of polygons on a mesh, while maintaining the core shape.
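
I’m doing this in ZBrush, but for the script-minded, quadric edge-collapse decimation – the same family of algorithm – is available in Open3D. A minimal sketch with placeholder paths:

```python
import open3d as o3d

# Path is a placeholder for the raw reconstruction.
mesh = o3d.io.read_triangle_mesh("trex_raw.obj")
print(f"Before: {len(mesh.triangles):,} tris")

# Quadric edge-collapse decimation, aiming at roughly the ~38k budget used here.
low = mesh.simplify_quadric_decimation(target_number_of_triangles=38_000)
print(f"After: {len(low.triangles):,} tris")

o3d.io.write_triangle_mesh("trex_decimated.obj", low)
```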

Retopology

Next is retopology. This basically involves optimising the surfaces, ensuring they’re well set up for both animation and texturing.

I play around with various settings, taking twenty minutes on the laser scans and ten minutes on photogrammetry.

The results are once again pretty much identical. The laser scanned polycount has come out slightly lower, at 37,836 tris, with photogrammetry still at 37,076.

The major difference is in the teeth. The photogrammetry mesh has really taken a hit here, although it’s more than compensated for by the absence of holes.

The T-Rex skull nicely retopologised.

Map baking in Maya

The optimised meshes are almost game ready. They need to be resized, orientated and unwrapped, before having the normals from the high detail meshes (and diffuse, in the case of photogrammetry) baked onto them.

My UV maps are rushed and terrible because unwrapping is hell and I just want to get to the end. Like when people talk about Brexit.

Baking the laser scans takes fifteen minutes whereas photogrammetry takes eight hours at the same settings (4k, yada yada yada). This is probably because I used a decimated version of the high poly mesh with the laser scans, whereas photogrammetry used the raw mesh. Decimating that version created texture problems which weren’t going to play nice with the baking.

Again, probably fairly simple to overcome if I give myself a little more time.
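
For reference, the Maya side of the bake can also be scripted through surfaceSampler, the command sitting underneath the Transfer Maps UI. A minimal sketch with placeholder mesh names:

```python
# Run in Maya; mesh names are placeholders. surfaceSampler is the command
# underneath the Transfer Maps UI.
import maya.cmds as cmds

cmds.surfaceSampler(
    source="skull_highpoly",   # the high detail mesh
    target="skull_lowpoly",    # the retopologised game mesh
    mapOutput="normal",
    mapSpace="tangent",
    filename="skull_normal",
    fileFormat="png",
    mapWidth=4096,             # the 4k from the rules list
    mapHeight=4096,
)
```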

Either way, the image below displays both high poly meshes, then the low polys and finally the low poly wireframes of both the laser scan and photogrammetry.

Side by side of the laser scan and photogrammetry meshes in Maya.

Final presentation in Unreal Engine 4

Well, that looks ‘normal’

With both meshes at pretty much the same polycount, the difference in quality between the normal maps becomes far more apparent.

I wouldn’t turn my nose up at either, but the fidelity is considerably higher on the laser scanned mesh.

Those really fine details, holes and cracks are not nearly as well defined on the photogrammetry version.

The details on the laser scan mesh (left) are much sharper than those achieved with photogrammetry.

These differences are even clearer when viewed straight on.

There’s a considerable amount of noise across the upper lip and inside of the top jaw on the photogrammetry mesh.

By comparison, the laser scan boasts more definition and clarity in its surface details.

In terms of how accurately these reflect the real model, the laser scan wins.

Laser scanning has allowed for a far more accurate normal map

Everything changes with a dash of colour

While the photogrammetry normal map certainly has its shortcomings, these are practically annihilated once the diffuse is added.

It masks the rougher areas of noise while also highlighting some of the details that get lost in the normal alone.

The diffuse map makes a massive difference to the overall look of the photogrammetry mesh.

The quality is consistent across all areas of the mesh. I’m a little surprised by this, given how rushed the UV unwrap was. I don’t need any encouragement for taking shortcuts, so this is a bad sign.

Yeah, looks cool.

Boom!


The results

Performance:

HandySCAN / Close range photogrammetry

  • Equipment setup time: 5 minutes / 30 minutes
  • Subject setup time: 20 minutes / 1 minute
  • Scanning: 45 minutes / 30 minutes
  • Scan data size: 7.62GB / 12.00GB
  • Reconstruction time: 20 minutes / 45 million years
  • Reconstructed mesh size: 1.05GB / 0.22GB
  • Reconstruction accuracy: Great / Good
  • Diffuse quality: N/A / Great
  • Game ready production time: 0.78 hours / 8.23 hours
  • Game ready mesh size: 2.039MB / 2.935MB
  • Normal map quality: Great / Good

Winner:
The HandySCAN Black Elite

Conclusion

When it comes to the final meshes, this ended up being really close. Making the assets game ready was the deciding factor.

As awesome as the texture generated by photogrammetry is, it really became a question of whether the quality of the HandySCAN’s normal map would be enough to tip the balance.

In this case, I’m satisfied that it does. But I don’t think that will necessarily be true in every situation. I can foresee there being objects where the difference in normal quality will be so negligible that the inclusion of a diffuse map makes photogrammetry an easy winner.

In the process of this writeup, I experimented with various light and camera setups. They’re nothing special, but if like me you can’t get enough T-Rex skulls, they’re featured below.

Thanks for reading!
