Hi, I'm Jesse

I'm a robotics engineer with an interest in computer vision and machine learning. My research has focussed on beyond-vision sensing problems: capturing and understanding the parts of the world that are hidden from us, and bringing them into a space we can comprehend.

The projects I've completed have focussed on arranging cameras in known configurations, which in my experience makes combining different imaging sensors and estimating shape much easier. I'm now interested in decoding the information hidden amongst pixels through machine learning and deep learning.

Tools, Languages and Software

Docker, C++, MATLAB, Python, OpenCV, PyTorch, ROS, Ubuntu, Raspberry Pi, Weights & Biases, NVIDIA CUDA, NVIDIA Jetson

Areas of Expertise

  • Computer Vision
  • Hyperspectral Imaging
  • Camera Calibration
  • Machine Learning
  • Deep Learning
  • Multi-Camera Systems
  • Statistical Modelling
  • Non-Linear Optimization
  • Computational Imaging
  • Multi-Modal Vision

Projects

Hyperspectral Deep Learning of Subcutaneous Fat Depth

Summary I modelled the depth of subcutaneous fat (in millimeters) on lamb cuts by training CNN deep learning models on hyperspectral imaging. The hyperspectral data was captured using an RGB-D and line-scan hyperspectral camera system, and the ground-truth fat depth was acquired from CT scans.
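
For a flavour of the approach, here is a minimal PyTorch sketch of a spectral CNN regressor. The architecture, the 200-band input and the random training data are assumptions for illustration, not the model used in the project.

    # Minimal sketch of a spectral CNN regressor (illustrative only; layer
    # sizes and the 200-band input are assumptions, not the project's model).
    import torch
    import torch.nn as nn

    class SpectralCNN(nn.Module):
        def __init__(self, num_bands=200):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
            )
            self.regressor = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * (num_bands // 4), 64), nn.ReLU(),
                nn.Linear(64, 1),  # fat depth in millimeters
            )

        def forward(self, x):          # x: (batch, 1, num_bands)
            return self.regressor(self.features(x))

    model = SpectralCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # One illustrative training step on random data standing in for
    # (spectrum, CT-derived fat depth) pairs.
    spectra = torch.randn(8, 1, 200)
    depths = torch.rand(8, 1) * 15.0
    loss = loss_fn(model(spectra), depths)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()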

Results The CNN was compared to a multi-layer perceptron (MLP) and a linear regression model. The R² and RMSE in fat depth for all models are shown in the table below. The CNN demonstrated the best fit. The maximum fat depth that could be accurately estimated with the hyperspectral camera was around 15 mm.

Model                R²     RMSE in Fat Depth (mm)
Linear Regression    0.65   1.78
MLP                  0.73   1.57
CNN                  0.81   1.17
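
For reference, the two metrics in the table can be computed as follows; the arrays below are placeholder values, not the project's data.

    # Illustrative computation of the reported metrics (placeholder data).
    import numpy as np
    from sklearn.metrics import mean_squared_error, r2_score

    y_true = np.array([3.2, 7.8, 12.1, 5.5])   # ground-truth fat depth (mm)
    y_pred = np.array([3.6, 7.1, 11.4, 6.0])   # model prediction (mm)

    r2 = r2_score(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f} mm")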

Multi-Camera Extrinsic Calibration in ROS

Summary I was required to extrinsically calibrate 16 RGB-D cameras with partially overlapping views that were set up in a ROS environment. I solved the calibration by creating a double-sided ArUco board and formulating the optimization as a pose graph, which I solved with the g2o solver.
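
The sketch below illustrates the pose-graph idea on hypothetical relative camera poses. It substitutes scipy for g2o so the example stays self-contained, and the edge measurements are made up rather than coming from real board detections.

    # Minimal pose-graph sketch for multi-camera extrinsics. The real project
    # used a double-sided ArUco board and the g2o solver; here scipy stands in
    # for g2o so the example stays self-contained. Edge measurements are
    # hypothetical relative camera poses derived from shared board detections.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation as R

    def to_matrix(params):
        """6-vector (rotvec, translation) -> 4x4 homogeneous transform."""
        T = np.eye(4)
        T[:3, :3] = R.from_rotvec(params[:3]).as_matrix()
        T[:3, 3] = params[3:]
        return T

    def residuals(x, edges, num_cams):
        poses = [to_matrix(x[6 * i:6 * i + 6]) for i in range(num_cams)]
        res = []
        for i, j, T_ij in edges:          # T_ij: camera j seen from camera i
            T_pred = np.linalg.inv(poses[i]) @ poses[j]
            T_err = np.linalg.inv(T_ij) @ T_pred
            res.append(R.from_matrix(T_err[:3, :3]).as_rotvec())
            res.append(T_err[:3, 3])
        res.append(x[:6])                  # gauge constraint: fix camera 0
        return np.concatenate(res)

    # Hypothetical edges between three cameras.
    edges = [(0, 1, to_matrix(np.array([0.0, 0.01, 0.0, 0.5, 0.0, 0.0]))),
             (1, 2, to_matrix(np.array([0.0, 0.02, 0.0, 0.5, 0.0, 0.0])))]
    x0 = np.zeros(6 * 3)
    sol = least_squares(residuals, x0, args=(edges, 3))
    print(sol.x.reshape(3, 6))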

Results Point cloud data from the cameras was used to create 3D reconstructed models. My calibration approach reduced the error in the models by a factor of 10 compared to the previous calibration method. The total calibration time, which includes collecting the data and solving the calibration problem, was reduced by over 90% compared to the previous method, which calibrated cameras pairwise.

GitHub

Line-scan Frame Camera Calibration

Summary I was required to calibrate a line-scan hyperspectral camera for use in robotic applications. The difficulty with calibrating this camera is its single spatial dimension. I implemented the calibration using an additional 2D color camera, with both cameras modelled according to the pinhole camera model. The calibration incorporated uncertainty estimation due to pixel noise, which was later used to create a novel active calibration algorithm (see below). This work has since been updated to use OpenCV's ArUco board for automatic pose estimation of the calibration board.
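
As a rough sketch of why the single spatial dimension makes calibration hard, the snippet below treats a line-scan camera as a pinhole camera that only observes points on its viewing plane; the intrinsic values are placeholders, not the calibrated parameters.

    # Sketch of projecting a 3D point into a line-scan camera modelled as a
    # pinhole camera with one spatial axis (intrinsics are placeholder values).
    import numpy as np

    fy, cy = 1200.0, 320.0      # focal length and principal point (pixels)

    def project_linescan(point_cam, plane_tol=1e-3):
        """Project a point given in the line-scan camera frame.

        The camera images only points lying (approximately) on its viewing
        plane x = 0; points off that plane are never observed, which is why
        a second frame camera is needed to recover full poses.
        """
        x, y, z = point_cam
        if abs(x) > plane_tol:
            return None                       # outside the viewing plane
        return fy * y / z + cy                # single pixel coordinate

    print(project_linescan(np.array([0.0, 0.1, 1.0])))   # ~440.0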

Results The calibration made it possible to reproject line-scan images onto color images. The active calibration algorithm reduced the error in the calibration parameters by 26% while using fewer images than a naive approach that used all images.

GitHub

Publications

Subcutaneous Fat Depth Regression Using Hyperspectral and Depth Imaging

This paper uses a calibrated line-scan hyperspectral and frame camera system with a calibrated light source to model the subcutaneous fat depth of lamb cut samples using machine learning. The ground-truth fat depth is acquired by ray-casting into CT scans of the lamb cut samples. The CT scans are first converted into 3D meshes, which are then aligned to a 3D mesh reconstructed from depth images. Finally, the fat depth is acquired by ray-casting through the hyperspectral pixels. Fat depth models are trained using classical machine learning and deep learning models, with the deep learning models showing the best results.
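
The snippet below sketches the ray-casting step with trimesh on a hypothetical aligned mesh. The file name, ray origin and the use of the first two surface hits as the fat boundaries are assumptions for illustration, not the paper's exact pipeline.

    # Rough sketch of measuring fat depth by ray-casting into a CT-derived
    # mesh with trimesh (file name, ray origin and the two-hit depth rule
    # are illustrative assumptions).
    import numpy as np
    import trimesh

    fat_mesh = trimesh.load("fat_layer_mesh.ply")   # hypothetical aligned mesh

    # A ray from an (already aligned) hyperspectral pixel viewpoint.
    origins = np.array([[0.0, 0.0, 0.5]])
    directions = np.array([[0.0, 0.0, -1.0]])

    locations, index_ray, _ = fat_mesh.ray.intersects_location(
        origins, directions)

    # Fat depth taken as the distance between the first two hits of the ray,
    # assuming the mesh is in metres.
    hits = locations[index_ray == 0]
    if len(hits) >= 2:
        order = np.argsort(np.linalg.norm(hits - origins[0], axis=1))
        depth_mm = np.linalg.norm(hits[order[1]] - hits[order[0]]) * 1000.0
        print(f"fat depth: {depth_mm:.1f} mm")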

Multi-Modal Non-Isotropic Light Source Modelling for Reflectance Estimation in Hyperspectral Imaging

This paper improves the estimation of reflectance, a material property, for an object of interest captured with a calibrated line-scan hyperspectral and frame camera system, by modelling the light source and incorporating shape information. The light reflected from the object of interest is assumed to follow the dichromatic reflectance model. The cameras, light source and object are all in the near field, where the incident irradiance varies over the object's surface. The light source is modelled with a Gaussian Process with a non-zero mean function to capture the spatial irradiance of an actual light source. The proposed reflectance estimation involves an optimization that uses the irradiance estimated from the Gaussian Process model together with additional terms involving the surface shape.
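
As a loose illustration of a light-source model with a non-zero mean, the sketch below combines a simple inverse-square mean with a scikit-learn Gaussian Process fitted to the residuals. The kernel, the mean function and the synthetic measurements are assumptions, not the paper's formulation.

    # Sketch of a light-source irradiance model with a non-zero mean: a simple
    # parametric falloff plus a Gaussian Process on the residuals. Uses
    # scikit-learn and synthetic data as stand-ins for the paper's own model
    # and measurements.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def mean_irradiance(xyz, source=np.zeros(3), power=1.0):
        """Isotropic inverse-square falloff used as the GP mean function."""
        d = np.linalg.norm(xyz - source, axis=1)
        return power / np.maximum(d ** 2, 1e-6)

    # Synthetic irradiance samples on a plane in front of the source.
    rng = np.random.default_rng(0)
    points = rng.uniform(-0.2, 0.2, size=(100, 3)) + np.array([0, 0, 0.5])
    measured = mean_irradiance(points) * (1.0 + 0.1 * rng.standard_normal(100))

    # Fit the GP to the residual about the parametric mean.
    gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-2))
    gp.fit(points, measured - mean_irradiance(points))

    query = np.array([[0.05, 0.0, 0.5]])
    irradiance = mean_irradiance(query) + gp.predict(query)
    print(irradiance)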

GitHub

Observability driven multi-modal line-scan camera calibration

This paper improves the calibration of a line-scan camera through a novel active calibration algorithm. The line-scan camera is combined with a traditional 2D frame camera (a color or RGB camera). The active calibration algorithm filters the calibration dataset and uses only the images that improve parameter estimation, as determined by calculating the observability.
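
The sketch below illustrates the selection idea: greedily keep only the images that improve an observability metric, here taken as the smallest singular value of the stacked Jacobian. The per-image Jacobians are random placeholders rather than real calibration derivatives.

    # Sketch of observability-driven image selection: greedily keep only the
    # calibration images that most improve the conditioning of the stacked
    # Jacobian. The per-image Jacobians here are random placeholders for the
    # real derivatives of the reprojection error w.r.t. the parameters.
    import numpy as np

    def observability(jacobians):
        """Smallest singular value of the stacked Jacobian."""
        J = np.vstack(jacobians)
        return np.linalg.svd(J, compute_uv=False)[-1]

    def select_images(all_jacobians, min_gain=1e-3):
        selected = []
        for J in all_jacobians:
            before = observability(selected) if selected else 0.0
            if observability(selected + [J]) - before > min_gain:
                selected.append(J)        # keep images that add information
        return selected

    rng = np.random.default_rng(0)
    candidate_jacobians = [rng.standard_normal((10, 6)) for _ in range(20)]
    kept = select_images(candidate_jacobians)
    print(f"kept {len(kept)} of {len(candidate_jacobians)} images")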

GitHub

Education

PhD in Robotics

University of Technology Sydney

Sydney, Australia

January 2019 to July 2023

Topic: Subsurface Material Property Estimation using Hyperspectral Imaging

Bachelor of Engineering (Honors) specializing in Mechatronics

The University of Auckland

Auckland, New Zealand

January 2014 to July 2018

Graduated with first-class honors