COMPUTATIONAL MICROSCOPY / LENS-FREE IMAGING / POINT-OF-CARE DIAGNOSTICS: Algorithms and cell phone add-on enable powerful, inexpensive microscopy

The optical components of a modern microscope are relatively bulky and complex...unless you replace them with digital holography, and the microscope becomes a chip-based device that clips to a cell phone. Computation, sensor arrays, and LED light are the critical ingredients in a setup that hopes to revolutionize global healthcare and won SPIE's inaugural Biophotonics Technology Innovator Award for its developer.


By Aydogan Ozcan


As described in our recent paper, our lens-free on-chip imaging approach for cell phone-based microscopy uses a digital optoelectronic sensor array, such as a CCD or CMOS chip, to directly sample light transmitted through a specimen.1 CMOS chips generally provide smaller pixels, which is better for high-resolution imaging, whereas CCD chips typically provide much wider imaging areas, reaching, for example, a 10–20 cm2 field of view (FOV).

Our design uses no lenses between the object and the sensor planes, so besides the detector array, the only key physical component is a partially coherent light source. We use a light-emitting diode (LED), typically filtered through an aperture 50–100 μm in width, providing a simple means to create sufficient spatial coherence at the sample plane.

Spatial and temporal coherence must be carefully considered when building and using such a lens-free on-chip imaging device. Lasers are not always desirable, since they produce speckle noise and multiple-reflection interference artifacts in the images. LEDs, by contrast, typically have 10–20 nm bandwidths. Such an illumination choice helps us avoid speckle and multiple-reflection interference. However, one has to be careful to make sure that this choice does not hurt the resolution by washing away some of the useful interference terms characteristic of holography. To ensure this, we place the specimen close to the detector array (usually less than 1 mm vertically) so that even the large scattering angles from an object can interfere with the background light, despite our limited temporal coherence length. The same choice of small vertical distance between the object and sensor planes also helps us to use partial spatial coherence and still achieve a high numerical aperture (0.8–0.9) without being affected by the limited coherence diameter at the detector plane.
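As a back-of-envelope illustration of these coherence considerations, the Python sketch below computes an LED's temporal coherence length, the spatial coherence diameter produced at the sample plane by the pinhole (via the van Cittert–Zernike theorem), and the sensor radius out to which holographic fringes can be recorded. All of the numbers are illustrative assumptions in the ranges quoted above, not values from the authors' setup.

```python
import math

# Illustrative numbers (assumptions, not the authors' exact parameters):
wavelength = 530e-9   # m, LED center wavelength
bandwidth = 15e-9     # m, LED spectral bandwidth (10-20 nm typical, per text)
aperture = 100e-6     # m, pinhole diameter (50-100 um, per text)
z1 = 5e-2             # m, assumed aperture-to-sample distance
z2 = 1e-3             # m, sample-to-sensor distance (<1 mm, per text)

# Temporal coherence length: L_c ~ lambda^2 / delta_lambda
L_c = wavelength**2 / bandwidth
print(f"temporal coherence length ~ {L_c * 1e6:.1f} um")

# Spatial coherence diameter at the sample plane (van Cittert-Zernike):
# D_coh ~ 1.22 * lambda * z1 / aperture
D_coh = 1.22 * wavelength * z1 / aperture
print(f"spatial coherence diameter ~ {D_coh * 1e6:.0f} um")

# Optical path difference between light scattered to radius r on the sensor
# and the unscattered background; fringes survive while opd < L_c.
def opd(r, z):
    return math.sqrt(z**2 + r**2) - z

# Radius where opd reaches L_c (small-opd approximation, r << z):
r_max = math.sqrt(2 * z2 * L_c)
print(f"fringes recordable out to r ~ {r_max * 1e6:.0f} um at z2 = 1 mm")
```

Shrinking z2 shrinks the path difference for a given scattering angle, which is why the small sample-to-sensor distance lets the large-angle scattered light interfere with the background despite the LED's short coherence length.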

FIGURE 1. (a) A multiheight phase-recovery process enables imaging of dense and confluent samples by propagating back and forth between super-resolved holograms acquired at different heights. The resulting complex field is then backpropagated to the object plane, yielding amplitude and phase information of the specimen. (b) Weighing only ~122 g, the lens-free multiheight holographic microscope achieves submicron resolution across a field of view of about 30 mm2. (c) A schematic of the same device is shown.

How it works

In this system, computation and image-reconstruction algorithms make up for the simplicity of the optical design. The lens-free on-chip microscope is based on partially coherent digital in-line holography. The light wave scattered by the sample interferes with the background light, encoding the phase information of the scattered waves into intensity oscillations that form an in-line hologram of the specimen. These holograms are then digitally reconstructed using iterative phase-recovery approaches to recover the phase and amplitude images (in transmission) of the specimen.
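The core reconstruction step can be sketched with the standard angular-spectrum free-space propagator. The minimal NumPy example below is an illustration of single-pass back-propagation of a hologram, not the authors' actual code; a single pass leaves a twin-image artifact, which the iterative phase recovery described above removes.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field over distance z in free space
    using the angular spectrum method (square array, pixel pitch dx)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0)))
    H[arg < 0] = 0
    return np.fft.ifft2(np.fft.fft2(field) * H)

def reconstruct(hologram, wavelength, dx, z2):
    """Back-propagate the square root of the recorded hologram intensity
    from the sensor plane to the object plane (single pass)."""
    return angular_spectrum_propagate(np.sqrt(hologram), wavelength, dx, -z2)
```

Iterative phase recovery then alternates between the two planes: propagate the current object estimate to the sensor plane, replace its amplitude with the measured square root of the hologram intensity while keeping the computed phase, and propagate back.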

To improve spatial resolution, we also employ a pixel super-resolution scheme, in which the light source is slightly shifted between each lens-free image acquisition. These individual images, which are sub-pixel shifted with respect to one another, are then merged to synthesize a pixel super-resolved image with much better resolution than a raw lens-free frame.
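The merging idea can be conveyed with a naive shift-and-add sketch that places each sub-pixel-shifted frame onto a finer grid. The actual pipeline solves this as an optimization problem, so treat the function below as a hypothetical illustration of the concept only.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Naive pixel super-resolution by shift-and-add.

    frames: list of (h, w) low-resolution intensity images
    shifts: list of (dy, dx) sub-pixel shifts in low-res pixel units
    factor: integer upsampling factor of the high-res grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Round each frame's shift to the nearest high-res grid offset.
        oy = int(round(dy * factor)) % factor
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1  # unobserved high-res grid points stay at zero
    return acc / cnt
```

For example, with factor=2 and shifts (0, 0), (0, 0.5), (0.5, 0), and (0.5, 0.5) in low-res pixel units, every position on the 2X-finer grid receives exactly one sample.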

We routinely achieve ~300 nm resolution across ~20 mm2 FOV using lens-free on-chip microscopy with a state-of-the-art CMOS imager. Using a high-end CCD, an ~18 cm2 FOV with a resolution of <2 μm can also be achieved for extreme throughput or a large imaging area, yielding >1.5 billion useful pixels in each image.
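A quick sanity check of the pixel-count figure, assuming a ~1 μm effective sampling pitch (roughly the Nyquist pitch for 2 μm resolution; the pitch is an assumption, not a value from the text):

```python
# Back-of-envelope check of the ">1.5 billion useful pixels" figure.
fov = 18e-4    # m^2, the ~18 cm^2 CCD field of view
pitch = 1e-6   # m, assumed effective sampling pitch (Nyquist for 2 um)
useful_pixels = fov / pitch**2
print(f"{useful_pixels:.2e} useful pixels")  # ~1.8e9, consistent with the text
```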

Benefits and variations in design

The benefits of this design include cost-effectiveness, extreme compactness, light weight, and leveraging of a nearly ubiquitous platform (the cell phone), making it appropriate for all kinds of applications, including field use in remote areas of the world.

Importantly, the above-described design decouples resolution and object FOV. Thus, new sensors that come out with more megapixels and even smaller pixel sizes can improve both the FOV and the resolution at the same time. And if the pixel size and its architecture are maintained, an upgrade from 5 to 10 Mpixels, for example, would mean achieving the same spatial resolution across a 2X larger field of view without changing anything in the microscope design. This is a great example of a microscope that follows Moore's law! Advances in other technologies will also enable improved imaging results. For instance, for our tomographic lens-free microscopy work, we utilized GPUs to speed up our reconstructions by 1040X compared to a CPU.2

Depending on the imaging needs, it is possible to create a multi-angled illumination scheme for tomographic imaging of specimens. We have also demonstrated optofluidic versions of the same idea; while the samples are flowing within a microfluidic channel, we image them from different angles and synthesize a tomogram of these flowing samples. In this design, the fluidic motion of the samples benefits the resolution of the microscope, helping us implement pixel super-resolution. As another example, for color imaging capability it is possible to use LEDs of different colors (for example, red, green, and blue) to digitally synthesize lens-free color images of a specimen.
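The color-synthesis idea can be illustrated by stacking three separate amplitude reconstructions, one per LED color, into an RGB array. The function below is a hypothetical sketch; a real pipeline would also handle the wavelength-dependent reconstruction distance and color balance.

```python
import numpy as np

def synthesize_color(red, green, blue):
    """Stack three lens-free amplitude reconstructions (one per LED color)
    into an RGB image, normalizing each channel to [0, 1]."""
    def norm(img):
        img = np.asarray(img, dtype=float)
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    return np.dstack([norm(red), norm(green), norm(blue)])
```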

FIGURE 2. (a) Lens-free pixel super-resolution imaging results for <100 nm beads with and without self-assembled nanolenses; (b) lens-free reconstruction result of a larger field of view, with corresponding scanning electron microscopy (SEM) images (i.e., s1 - s4) shown; and (c) lens-free imaging results for single adenovirus and influenza A (H1N1) viruses. For verification purposes, corresponding SEM and 100X oil-immersion objective (NA=1.25) images are also presented. (Adapted from O. Mudanyali et al., Nat. Photon., doi:10.1038/NPHOTON.2012.337 [2013])

Future directions

Further variations are also possible. For instance, the addition of a microspectrometer is feasible and would enable high-throughput spectroscopic imaging by creating a wide-field image cube, where the third dimension of this cube is the transmission spectrum.

This technology has the ability not only to go places microscopes aren't typically able to go, but also to enable things we haven't yet thought about. For instance, because of their wide FOV and large depth of field, lens-free on-chip microscopes provide unique opportunities for imaging of rare events. Researchers in my lab have applied these unique features to accomplish three-dimensional imaging and tracking of the rare helical3 and chiral ribbon4 motion of human sperm, which occur in ~4–5% of cells. The on-chip microscope enabled the researchers to see the head of the sperm tracing out a tight helix with a rotation diameter of 2–3 μm and a rotation speed of approximately 10 rotations/s.5,6

It will be exciting to see what other discoveries may be enabled by the combination of wide FOV and large depth of field provided by computational on-chip microscopy.


1. A. Greenbaum et al., Nat. Meth., 9, 889–895 (2012); doi:10.1038/nmeth.2114.
2. S. O. Isikman et al., Proc. Nat. Acad. Sci., doi:10.1073/pnas.1015638108 (2011).
3. T-W. Su, L. Xue, and A. Ozcan, Proc. Nat. Acad. Sci., doi:10.1073/pnas.1212506109 (2012).
4. T. Su et al., Sci. Rep., doi:10.1038/srep01664 (2013).

Aydogan Ozcan is associate professor in the Electrical Engineering and Bioengineering Departments at the University of California, Los Angeles (UCLA), and the California NanoSystems Institute.

EDITOR'S NOTE: At SPIE Photonics West 2013, Professor Aydogan Ozcan was awarded the first Biophotonics Technology Innovator Award, which recognizes extraordinary achievements in technology development that show promise for important impact. In presenting the award, SPIE acknowledged Ozcan's contributions to computational imaging, sensing, and biophotonics, and their impacts on telemedicine and global health challenges: "Ozcan's lab has developed new devices that will provide access to medical tests in resource-poor countries, including a cell phone-based microscope with applications such as diagnosing malaria."

Ozcan describes the impact of his work in a video interview with BioOptics World Associate Editor Lee Mather following receipt of the award during the 2013 BiOS Hot Topics session. In this article, we asked him to describe the critical components and design considerations of his setup; its operation, capabilities, and benefits; and its possible future.
