Cast your mind back to the distant past of one month ago, when I shared the report from my first Masters module.
Digitising Wildlife was a very small glimpse into morphometric collection from 3D reconstructions of wild animals. Dead wild animals, to be specific. As in wild animals that are dead, not animals that are really, really wild.
The broader motivation behind this is to find tools and pipelines that can aid wildlife research and/or protection. The benefit of 3D photogrammetry in this context is that it facilitates the collection of a broader, deeper range of data types than traditional datasheets. Furthermore, it can be undertaken (theoretically) by anyone with a camera and completed relatively quickly.
However, a shortcoming of this approach is that it requires the subject to stay completely still. This is because 3D photogrammetry works by aligning many photographs based on their relationship to subjects in the frame. If those subjects move, this alignment ceases to be possible.
Most wild animals I’ve come across don’t like to keep still, with the exception of pheasants standing idly in the road when I’m in a rush to get somewhere. As a result, the method employed in my investigation is (for the time being) limited to subjects of the deceased, still, pushing-up-daisies variety.
This week I came across a potential alternative that could facilitate the scanning of live, moving animals with a similar level of ease and accessibility. I will now share that with you, oh lucky reader.
Generosity, thy name is me.
This all started when I bought an Apple… product.
That’s right, I wilfully exchanged hard earned money for a piece of hardware developed and sold by the Apple corporation. If I understand modern culture correctly, this means I’m officially cool. Take that, boomers! Or whatever.
I picked up the iPad Pro to facilitate drawing on the move. As great as Wacom Cintiqs are, they’re not exactly agreeable to use on the bus. Robyn was among the many people who recommended the iPad Pro as the ideal solution and since her work is the shit, I figured it must make my work similarly so.
One of the most interesting features of this particular toy is its LiDAR scanner. In short, LiDAR (Light Detection and Ranging) chucks out lasers that hit surfaces, and the time it takes each pulse to return to the receiver is measured. The result is a series of points stored in a three-dimensional cloud, i.e. a point cloud.
Essentially, you can think of it as being like Sonar, but using light instead of sound.
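For the curious, the arithmetic behind that round trip is trivial: halve the round-trip time and multiply by the speed of light. A quick Python sketch, with a made-up 33-nanosecond pulse for illustration:

```python
# A LiDAR pulse travels out to the surface and back, so the
# one-way distance is half the round trip at the speed of light.
C = 299_792_458  # speed of light in metres per second

def tof_distance_m(round_trip_seconds: float) -> float:
    """Distance to a surface from a time-of-flight measurement."""
    return C * round_trip_seconds / 2

# A pulse that returns after ~33 nanoseconds hit something ~5m away.
print(round(tof_distance_m(33e-9), 2))  # → 4.95
```

The real sensor is obviously doing this millions of times a second across a grid of points, but the principle is exactly that.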
LiDAR is nothing new in and of itself, with plenty of solutions and variants available for purchase at a range of price points. The FARO Focus S70 (which I purchased as part of the hardware allocation for Staffordshire University’s Visualisation and Simulation facilities), for example, captures up to a 70m range per scan. We’ve attained awesome results on a couple of projects, details of which I hope to share in the coming months.
So I downloaded an app, did a couple of basic scans and my first thought was “Should I give up cheese? I do eat a lot of it and it’s not helping my weight loss”. Soon after, I wondered if said app could facilitate the collection of morphometric data from moving animals. Capturing a point cloud in real-time wouldn’t give a complete model, but it could give enough to collect data from.
The process succeeded, albeit at a somewhat rudimentary level.
Here’s the steps, for anyone interested in doing the same.
All two of you.
Collecting and measuring data
You’d be forgiven for expecting a long-winded, complex process following that introduction. Honestly, the biggest barrier to entry is the cost of the hardware – because Apple. Altogether, this process requires:
- iPad Pro – Starting at £730, it’s an expensive indulgence but one that is fast becoming an invaluable tool for a dweeb like me. I believe the iPhone 12 is capable of the same, but I don’t like phones, so sod that.
- Record3D – Scanning app, free for the first three scans and then £4.99 to unlock the full version.
- MeshLab – Free software for 3D data editing.
Then you need a subject to scan. In my case I’m using Jasper, Latin for ‘that which exists without shame nor decorum’, pictured below:
Collecting and exporting data
First up, we activate the Record3D app and point the camera in the direction of our subject. It’s important to keep the area we wish to measure in frame, because… well, that’s the whole idea. Sorry if that sounds patronising, but if it did – you deserve it.
Once that’s done we can preview the video to make sure we got the bit we wanted. In my case I’m going to measure the distance between his ears. This is because I’m not convinced anything actually exists here:
Happy with that, we go to our Scan Library, hit Edit, select the scan we wish to upload and then choose Export. I’d apologise for how poor the following screenshots are but I’m being consciously lazy, thus doing so would be disingenuous.
A series of export options appear and since MeshLab is a fan of the .PLY format, that’s what we’ll want to go with. We can export them individually or into a zip archive. Personally, I prefer the individual option so I can check and upload on a per scan basis. The alternative is to upload the entire zip which might not be ideal depending on the size of the zip and your connection speeds, if you’re without USB.
On that note, we then need to get those files onto a machine with MeshLab installed. If you’ve got the Dropbox app, I find that’s often the quickest method.
Upload the scan, download on the other side and we’re ready to take measurements.
Importing and measuring
Next up we open MeshLab and go to File > Import.
Then we browse to the .PLY file exported from our gadget and in a couple of seconds, our point cloud is imported.
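As an aside, if you’d rather pull the points into a script than click around in MeshLab, an ASCII .PLY file is simple enough to parse by hand. A rough Python sketch – it assumes the export is ASCII rather than binary, and that x, y, z are the first three properties of each vertex, so check your file’s header:

```python
def read_ascii_ply_points(path):
    """Read x, y, z vertex coordinates from an ASCII .PLY file.

    Assumes x, y, z are the first three properties of each vertex.
    """
    points = []
    with open(path) as f:
        vertex_count = 0
        # Walk the header: note the vertex count, stop at end_header.
        for line in f:
            line = line.strip()
            if line.startswith("element vertex"):
                vertex_count = int(line.split()[-1])
            elif line == "end_header":
                break
        # The vertex lines follow immediately; each starts with x y z.
        for _ in range(vertex_count):
            x, y, z = map(float, next(f).split()[:3])
            points.append((x, y, z))
    return points
```

Call it with whatever your export is named, e.g. `read_ascii_ply_points('jasper.ply')` – the filename here is just a placeholder.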
As mentioned, I want to record the distance between Jasper’s ears. Chances are that if you’re collecting morphometrics, you’ve a definitive list of what’s required, but the following steps are the same regardless.
From the MeshLab toolbar, we select the Measurement tool, represented by a roll of tape.
We then left click at the start and end of the line we wish to measure and… that’s it. The value shown on the line – that’s our measurement.
In Jasper’s case the distance is 0.106400. It’s worth noting that MeshLab doesn’t present data in cm or mm. The unit of measurement is determined by the software the data was exported from. I might be wrong on that, as I haven’t looked into it too much, but I think it’s the case.
To double check, I measured the distance manually with a ruler and got around 10.5cm. So the value collected from the point cloud is accurate; it just needs multiplying by one hundred.
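In other words, the tool is doing a straight Euclidean distance between the two clicked points, and the multiply-by-one-hundred step is just converting metres (which the export appears to use, going by my ruler check) to centimetres. A sketch, with made-up ear coordinates:

```python
import math

def distance_cm(p1, p2):
    """Euclidean distance between two 3D points, converted from
    metres (what the export appears to use) to centimetres."""
    return math.dist(p1, p2) * 100

# Two hypothetical ear-tip points roughly 0.1064m apart on the x axis.
left_ear = (0.0, 0.0, 0.0)
right_ear = (0.1064, 0.0, 0.0)
print(round(distance_cm(left_ear, right_ear), 2))  # → 10.64
```

Handy if you end up with a long list of landmark pairs to measure and would rather not click through them all by hand.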
Just to reiterate, I’ve only had the app for a week and measuring in MeshLab is also something I’ve not done before. It’s therefore highly possible that I’m overlooking some steps that would simplify the process. Just as it’s highly possible someone has already done this and I’ve missed it because I got excited instead of Googling to check.
Whatever – the point is that as a potential solution for morphometric collection from living animals, this seems like it could be viable.
I hope this is useful to anyone interested in or studying these areas. If you are but can’t get access to an iPad Pro and would like something tested, please do give me a shout.