Digital holography is a technique in which light shone on an object is recorded by a digital sensor; instead of recording an image of the object, the sensor captures the diffraction pattern generated by the interference between the wave scattered by the object and the original illuminating wave.
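The interference described above can be simulated numerically. The sketch below (with illustrative, assumed parameters: wavelength, distance, pixel pitch) computes the intensity |R + O|² that an in-line sensor would record for a single point scatterer, where R is a plane reference wave and O the weak spherical wave scattered by the object.

```python
import numpy as np

# Assumed illustrative parameters, not values from the actual device.
wavelength = 0.5e-6   # 500 nm illumination
z = 1e-3              # object-to-sensor distance, 1 mm
pitch = 1e-6          # sensor pixel pitch
n = 256               # sensor size in pixels

x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2 + z**2)   # distance from point scatterer to each pixel

R = 1.0                                              # unit-amplitude plane reference wave
O = (1e-5 / r) * np.exp(2j * np.pi * r / wavelength)  # weak spherical object wave
hologram = np.abs(R + O) ** 2                         # intensity the sensor records
```

The resulting pattern shows the characteristic concentric fringes of an in-line hologram rather than an image of the point itself.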

While in theory holographic methods allow an entire 3D image to be captured and stored as a 2D diffraction pattern, what the sensor captures is usually not interpretable by a human. Holograms must be computationally reconstructed before they can be analyzed, which requires large amounts of storage and computing power. These two bottlenecks have prevented digital holography from being applied to large volumes of imaging data.
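One common reconstruction technique (a sketch, not necessarily the method used here) is angular spectrum back-propagation: the hologram is Fourier transformed, multiplied by a free-space propagation kernel for distance −z, and inverse transformed. Even this single FFT-based step hints at the compute cost, since it must be repeated for every candidate depth plane.

```python
import numpy as np

def angular_spectrum_reconstruct(hologram, wavelength, z, pitch):
    """Back-propagate a square hologram by distance z using the angular
    spectrum method. Parameters are illustrative; a real pipeline would
    repeat this over many depth planes to recover the 3D scene."""
    n = hologram.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)          # spatial frequencies per pixel
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Propagation kernel; evanescent components (arg < 0) are suppressed.
    kernel = np.exp(-2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    kernel[arg < 0] = 0
    return np.fft.ifft2(np.fft.fft2(hologram) * kernel)
```

The output is a complex field; its magnitude gives the refocused image at that depth.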

Recent advances in machine learning suggest a promising new approach to analyzing these holographic images: choose a desired analysis output (say, the species of phytoplankton in a hologram of a seawater sample) and use a neural network to transform the 2D holograms directly into that output. This removes the need to store reconstructed holograms, and once properly trained, a neural network can analyze digital holograms in real time. Our goal is to create a low-cost device for digital in-line holography that uses a neural network for real-time classification and data analysis.
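To make the hologram-to-label idea concrete, here is a toy forward pass with the shape of such a pipeline: one convolutional layer, a ReLU, global average pooling, and a linear readout. The weights are random and the function names are invented for illustration; an actual system would use a trained deep network.

```python
import numpy as np

def classify_hologram(hologram, n_classes=3, seed=0):
    """Toy sketch of direct hologram classification (random, untrained
    weights; sizes and names are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    filters = rng.standard_normal((8, 5, 5))          # 8 random 5x5 filters
    # "Valid" convolution via sliding windows + einsum.
    windows = np.lib.stride_tricks.sliding_window_view(hologram, (5, 5))
    fmap = np.einsum('ijkl,fkl->fij', windows, filters)
    fmap = np.maximum(fmap, 0.0)                      # ReLU nonlinearity
    pooled = fmap.mean(axis=(1, 2))                   # global average pooling
    w = rng.standard_normal((n_classes, 8))           # linear readout
    scores = w @ pooled
    return int(np.argmax(scores))                     # predicted class index
```

The point of this architecture sketch is that the raw 2D fringe pattern goes in and a class label comes out, with no intermediate 3D reconstruction stored anywhere.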

Team: Allan Adams (Future Oceans Lab @MIT), Yogi Girdhar (WARPLab @WHOI), Heidi Sosik (@WHOI), John San Soucie (MIT/WHOI)


  • Spring 2019: Device development, neural network training
  • Summer 2019: Field testing in Woods Hole