A wide range of camera features are combined in various ways with assorted image sensors (CCD, EMCCD, CMOS, and sCMOS) to yield the variety of options available in microscope cameras. Let's look at how resolution, speed, and cost interact behind the scenes of photomicrographs.
Research microscopists face the same dilemma as consumers when it comes to buying a new camera. There's an ocean of options. The sensor type varies: CCD, EMCCD, CMOS, sCMOS. Different cameras may use the same sensor yet provide different features. Moreover, a wide range of companies, from Andor Technology (Belfast, Northern Ireland) to QImaging (Surrey, BC, Canada) and others, make cameras just for microscopes. Some of these cameras even target specialized forms of microscopy, such as fluorescence and super-resolution. It's a jungle of options, but one worth exploring (see table).
Before bemoaning the "mess," let's remind ourselves that this is actually a great fortune. Not that long ago, photomicrographs emerged the old-fashioned way: from film, with all its attendant variables and quirks. If you ever feel swamped by the options in today's digital cameras for microscopy, keep in mind that the process is infinitely faster and more flexible now than it was just a few short years ago.
Basics behind the sensors
CCDs, or charge-coupled devices, make up one of the most common classes of camera sensor. Arising from computer-memory work at AT&T Bell Labs in 1969, these devices detect light based on the movement of electrical charges. Filters over the pixels add color capability. For even better color, some cameras use three-CCD (3CCD) sensors, which dedicate a separate CCD to each of red, green, and blue. These can also provide higher quantum efficiency, and thereby higher light sensitivity, than an ordinary CCD topped with a Bayer mask of color filters. In a so-called electron-multiplying CCD (EMCCD), the light-generated electrical signal gets amplified on the chip, so even a single photon can be detected.
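Why does on-chip amplification matter so much in dim light? A rough back-of-the-envelope model makes it concrete. The sketch below is mine, not any vendor's specification: the quantum efficiency, read noise, and gain values are illustrative, and the usual caveat applies that EM gain adds an excess-noise factor of roughly the square root of two.

```python
import math

def snr(photons, qe=0.9, read_noise_e=10.0, em_gain=1.0, excess_noise=1.0):
    """Approximate per-pixel signal-to-noise ratio.

    Models only shot noise plus read noise. EM gain divides the effective
    read noise; the multiplication process itself adds an excess-noise
    factor (~sqrt(2) for EMCCDs). All parameter values are illustrative,
    not taken from any specific camera datasheet.
    """
    signal = qe * photons                         # detected electrons
    shot_var = signal * excess_noise**2           # amplified shot noise
    read_var = (read_noise_e / em_gain) ** 2      # gain suppresses read noise
    return signal / math.sqrt(shot_var + read_var)

# In bright light, the plain CCD wins (no excess-noise penalty)...
bright_ccd = snr(10_000)
bright_emccd = snr(10_000, em_gain=300.0, excess_noise=math.sqrt(2))
# ...but at a few photons per pixel, EM gain buries the read noise.
dim_ccd = snr(5)
dim_emccd = snr(5, em_gain=300.0, excess_noise=math.sqrt(2))
```

Under these assumed numbers, the EMCCD pulls a five-photon signal well clear of the read-noise floor, while the conventional CCD does not, which is the regime where single-photon detection becomes practical.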
The other key kind of sensor is the complementary metal-oxide semiconductor (CMOS). In general, CMOS represents a way of building integrated circuits, including camera sensors. In 1967, the late electrical engineer Frank Wanlass received a patent on CMOS technology. For circuits in general and camera sensors in particular, CMOS delivers a key benefit: low power consumption. In fact, some CCDs consume 100 times more power than some CMOS sensors. In addition, CMOS sensors tend to be far less expensive because they are easier to make with widely available processes. A modified version, scientific CMOS (sCMOS), adds a range of features: low noise, fast frame rates, wide dynamic range, and more.
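"Wide dynamic range" has a simple working definition: the ratio of the brightest signal a pixel can hold (its full-well capacity) to its noise floor (the read noise). A short sketch shows how low read noise stretches that range; the full-well and noise figures here are illustrative round numbers of my choosing, not specifications of any particular CCD or sCMOS camera.

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Dynamic range = full-well capacity / read noise (in electrons).

    Returns the ratio, plus the same figure in decibels and in bits,
    the two units camera spec sheets most often quote.
    """
    ratio = full_well_e / read_noise_e
    return ratio, 20 * math.log10(ratio), math.log2(ratio)

# Illustrative (not datasheet) values for comparison:
ccd = dynamic_range(18_000, 6.0)     # ~3000:1
scmos = dynamic_range(30_000, 1.5)   # ~20000:1
```

With these assumed numbers, cutting the read noise from 6 electrons to 1.5 while raising the full well does more than either change alone: the range jumps from about 3000:1 (roughly 70 dB) to about 20,000:1 (roughly 86 dB, or more than 14 bits), letting one exposure capture both dim and bright structures.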
Researchers also keep improving sensors. In some cases, this leads to entirely new approaches, such as serial time-encoded amplified microscopy (STEAM), developed by a team at the University of California, Los Angeles (UCLA). In an article published online (July 2, 2012) in the Proceedings of the National Academy of Sciences, the team wrote that "conventional optical microscopy is incapable of statistically relevant screening of large populations (greater than 100,000,000) with high precision due to its low throughput and limited digital memory size." STEAM, on the other hand, provides such high throughput (100,000 particles per second in some of the team's work, with indications that even higher rates are possible) that it has been called the fastest camera ever made. The system pairs fast optics with inertial-focusing microfluidics, and the optics alone can capture 36.7 million frames per second.
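Those throughput figures are easy to sanity-check against the population the PNAS article mentions. The numbers below come straight from the article; the arithmetic is mine.

```python
population = 100_000_000   # particles cited as a "statistically relevant" screen
throughput = 100_000       # particles per second reported for STEAM

seconds = population / throughput
print(f"{seconds:.0f} s  (~{seconds / 60:.0f} min)")  # 1000 s  (~17 min)
```

In other words, a screen of 100 million particles that is out of reach for a conventional microscope takes STEAM on the order of a quarter of an hour.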
One size does not fit all
The wide collection of cameras listed in the accompanying table exists for a good reason: Different microscopes and different tasks demand different sensors and setups.
By contacting many microscopists, I found that some prefer specific setups. For example, Hideo Otsuna, Ph.D., of the department of neurobiology and anatomy at the University of Utah Medical Center (Salt Lake City, UT) likes to use the Photometrics (Tucson, AZ) CoolSNAP HQ or the Hamamatsu (Bridgewater, NJ) ORCA-100 with a Carl Zeiss (Oberkochen, Germany) compound microscope. He says, "Both cameras use the same CCD from Sony, and have less noise and higher signal-to-noise." He adds that he prefers a black-and-white model for fluorescence and a color model for brightfield microscopy.
Other scientists, though, seem less interested in the specific sensor in the camera that they use for making photomicrographs (see figure). More than one admitted to not knowing what sensor their camera uses. Others added that they wouldn't know the difference, anyway.
|The features in a photomicrograph depend largely on the capabilities of the sensor that produced it, such as this CCD. (Image courtesy of NASA)|
The scientists who do know the difference, however, agree with Otsuna that one sensor or another generates better images with specific setups. For instance, Dianwen Zhang takes care of the instruments in the microscopy suite of the Imaging Technology Group at the University of Illinois at Urbana-Champaign, and he says, "As far as I know, there is no perfect detector for all microscopy." He adds, "It depends on which microscope we are talking about." Likewise, more than a few microscopists pointed out that today's research requires so many different imaging technologies that no single kind of detector makes sense for every application.
Undoubtedly, some guesswork remains in photomicroscopy. But instead of choosing among films and exposure settings, photographic papers and development times, today's researchers can explore which sensor, and even which camera built around that sensor, works best for a specific research need. The great news is that as we try different setups, we can see immediately how it all turned out, since we no longer must wait for film to develop or for paper to be exposed and processed. The results come up instantly on a digital screen. So the market today is no mess. It's a bonanza!