Stitching Intra-Oral Images


Note: This is a repost of my January post on the MIT Media Lab's WordPress blog for their RedX 2015 camp held at IIT-Bombay, with a few minor modifications.

Most intraoral cameras have a relatively narrow field of view, and the entire jaw is never visible in a single image. We are trying to stitch several images into one so that the user has a complete view of the jaw; we can then segment each tooth from the stitched image and keep track of every individual tooth.

A basic image stitching pipeline has the following steps:

  1. Matching features between two images
  2. Computing the homography with RANSAC (minimal set is four matches)
  3. Transforming, concatenating, and blending the images (a sketch of the full pipeline follows this list)
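
The post itself does not include code, but here is a minimal sketch of these three steps using OpenCV: SIFT features, brute-force matching with Lowe's ratio test, and RANSAC homography estimation. The file names and the naive paste-style blending are illustrative placeholders, not the exact pipeline used in the project.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Warp img2 into img1's frame and paste both onto one canvas."""
    # Step 1: detect SIFT features and match them with Lowe's ratio test.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    raw = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in (p for p in raw if len(p) == 2)
            if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        raise ValueError("a homography needs at least four matches")

    # Step 2: estimate the homography mapping img2 into img1's frame.
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Step 3: warp img2 onto a canvas wide enough for both images, then
    # paste img1 on top (a naive overlay; real pipelines feather or
    # multi-band blend the seam).
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w1 + w2, max(h1, h2)))
    canvas[:h1, :w1] = img1
    return canvas

# Hypothetical file names, for illustration only.
stitched = stitch_pair(cv2.imread("frame1.png"), cv2.imread("frame2.png"))
cv2.imwrite("stitched.png", stitched)
```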

Most existing panorama-building algorithms are well suited for applications in which the object being photographed is quite far away from the camera, such as in the image shown below (obtained from the Autostitch page):

Panorama construction

However, we are photographing the teeth at very close range, and minor changes in perspective are fatal for these algorithms. To overcome the problems imposed by changes in perspective, we use ASIFT, a feature detection/description/matching algorithm that is more robust to perspective changes than SIFT. The next steps (homography computation and blending) are fairly standard, and here are some results (a sketch of the ASIFT idea appears after them):

A stitch of three images taken from an intraoral camera
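
For the curious, here is a rough sketch of the ASIFT idea, loosely following OpenCV's asift.py sample: simulate a set of affine camera tilts, run plain SIFT on each simulated view, and map the detected keypoints back into the original image's coordinates before matching. The tilt and rotation schedule below is illustrative, not the exact sampling ASIFT prescribes.

```python
import cv2
import numpy as np

def affine_skew(img, tilt, phi):
    """Rotate by phi degrees, then squeeze along x by the tilt factor.
    Returns the simulated view and the 2x3 affine map that produced it."""
    h, w = img.shape[:2]
    A = cv2.getRotationMatrix2D((w / 2, h / 2), phi, 1.0)
    rotated = cv2.warpAffine(img, A, (w, h))  # corners are cropped;
    # the real sample pads and masks the simulated view instead.
    tilted = cv2.resize(rotated, (max(1, int(w / tilt)), h),
                        interpolation=cv2.INTER_AREA)
    # Compose rotation and squeeze so keypoints can be mapped back.
    S = np.array([[1.0 / tilt, 0, 0], [0, 1.0, 0]])
    M = S @ np.vstack([A, [0, 0, 1]])
    return tilted, M

def asift_detect(img, tilts=(1.0, 2.0, 4.0), step=45):
    """Run SIFT over simulated viewpoints; return keypoints expressed in
    the original image's coordinates, plus their descriptors."""
    sift = cv2.SIFT_create()
    all_kp, all_des = [], []
    for t in tilts:
        # The untilted view (t == 1) needs no extra rotations.
        for phi in range(0, 180 if t > 1.0 else 1, step):
            view, M = affine_skew(img, t, phi)
            kp, des = sift.detectAndCompute(view, None)
            if des is None:
                continue
            Minv = cv2.invertAffineTransform(M)
            for k in kp:  # send each keypoint back to original coords
                x, y = k.pt
                k.pt = tuple(Minv @ np.array([x, y, 1.0]))
            all_kp.extend(kp)
            all_des.append(des)
    return all_kp, np.vstack(all_des)
```

The keypoints and descriptors returned here can be fed straight into the same ratio test and RANSAC homography estimation shown earlier; the only change from the basic pipeline is that matches now survive much larger viewpoint differences.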