Before they excise a tumor, surgeons need to determine exactly where the cancerous cells lie. Recognizing this, a team of researchers at the University of Arizona (Tucson, AZ) and Washington University in St. Louis (Missouri) has developed a multimodal imager that combines two systems—near-infrared (NIR) fluorescence imaging to detect marked cancer cells and visible-light reflectance imaging to see the contours of the tissue itself—into a small, lightweight package measuring 25 mm across. The imager could lead to cheaper, lighter tools for surgeons, such as goggles or handheld devices, to identify tumors in real time in the operating room.
"Dual modality is the path forward because it has significant advantages over single modality," says author Rongguang Liang, associate professor of optical sciences at the University of Arizona.
Currently, doctors can inject fluorescent dyes into a patient to help them pinpoint cancer cells. The dyes accumulate in the diseased cells, and when doctors shine light of a particular wavelength onto the cancerous area, the dye glows. In the case of a common dye called indocyanine green (ICG), that glow falls in the NIR. But because the human eye isn't sensitive to NIR light, surgeons must use a special camera to see the glow and identify the tumor's precise location.
Surgeons also need to be able to see the surface of the tissue and the tumor underneath before cutting away, which requires visible light imaging. So researchers have been developing systems that can see in both fluorescent and visible light modes.
|Near-infrared (a) and visible (b) images of the aperture filter used in the new dual-mode imager. Only the central region of the filter can transmit visible light, while the outer portion can only transmit the near-infrared light used for fluorescent imaging. (Image courtesy of Optics Letters)|
The trouble is that the two modes have opposing needs, which makes integration difficult. Because the fluorescent glow tends to be dim, a NIR camera needs a wide aperture to collect as much fluorescent light as possible. But a camera with a large aperture has a shallow depth of field, which is the opposite of what's needed for visible-light imaging.
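The trade-off can be seen in the standard thin-lens depth-of-field approximation, DOF ≈ 2·N·c·u²/f², where N is the f-number, c the circle of confusion, u the subject distance, and f the focal length. The sketch below uses this textbook formula with illustrative numbers (not figures from the study) to show how opening the aperture collapses the depth of field:

```python
# Illustrative sketch (numbers are hypothetical, not from the article):
# the thin-lens approximation DOF ≈ 2*N*c*u^2 / f^2 shows why a wide
# aperture (small f-number N) yields a shallow depth of field.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.01):
    """Approximate total depth of field in mm (valid when subject_mm >> focal_mm)."""
    return 2 * f_number * coc_mm * subject_mm**2 / focal_mm**2

# Hypothetical 25 mm lens imaging tissue 300 mm away.
wide = depth_of_field(25, f_number=1.4, subject_mm=300)    # wide aperture: good for dim NIR glow
narrow = depth_of_field(25, f_number=8.0, subject_mm=300)  # narrow aperture: good for visible imaging

print(f"f/1.4 DOF = {wide:.1f} mm, f/8 DOF = {narrow:.1f} mm")
```

Under these assumed numbers, the wide f/1.4 aperture focuses over only about 4 mm of depth, while stopping down to f/8 extends that to roughly 23 mm.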
"The other solution is to put two different imaging systems together side by side," Liang says. "But that makes the device bulky, heavy, and not easy to use."
To solve this problem, Liang's group, together with those of his colleagues Samuel Achilefu and Viktor Gruev at Washington University in St. Louis, created a dual-mode imaging system that avoids these compromises.
The new system relies on a simple aperture filter consisting of a disk-shaped region in the middle and a ring-shaped area on the outside. The central disk transmits both visible and NIR light, but the outer ring transmits only NIR light. With the filter in place, the full aperture is wide enough to collect plenty of NIR light; but because visible light cannot pass through the outer ring, the visible channel effectively sees an aperture small enough to keep the depth of field large.
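The effect of the filter is that each spectral band sees a different effective aperture and thus a different f-number. A minimal sketch, using hypothetical dimensions (the article gives only the 25 mm package size, not the filter diameters), compares the two channels:

```python
# Hedged sketch of the dual-aperture idea: NIR light uses the full aperture,
# visible light only the central disk. Diameters below are ASSUMED for
# illustration, not taken from the paper.
import math

def f_number(focal_mm, aperture_mm):
    """f-number N = focal length / aperture diameter."""
    return focal_mm / aperture_mm

focal = 25.0      # hypothetical focal length, mm
full_d = 10.0     # hypothetical full aperture (NIR channel), mm
center_d = 3.0    # hypothetical central disk (visible channel), mm

# Light collection scales with aperture area.
nir_area = math.pi * (full_d / 2) ** 2
vis_area = math.pi * (center_d / 2) ** 2

print(f"NIR channel: f/{f_number(focal, full_d):.1f}")
print(f"Visible channel: f/{f_number(focal, center_d):.1f}")
print(f"NIR channel collects {nir_area / vis_area:.1f}x more light")
```

With these assumed numbers, the NIR channel operates near f/2.5 and gathers roughly 11 times more light than the f/8.3 visible channel, which in turn retains the large depth of field that visible-light imaging needs.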
|Optical and mechanical structure of the customized lens with aperture filter (a) and the photograph of the assembled lens, with a quarter for comparison (b). (Image courtesy of Optics Letters)|
Liang’s team is now adapting its filter design for use in lightweight goggle-like devices that a surgeon can wear while operating. They are also developing a similar hand-held instrument.
To view the full details of the work, which appear in the journal Optics Letters, please visit http://dx.doi.org/10.1364/OL.39.003830.