As a manufacturer of both Charge-Coupled Devices (CCD) and CMOS image sensors, why do we have two different approaches for image sensing?
CCD technology was designed specifically for imaging, and it is the base technology that historically provides the highest level of image quality. Because of this, very demanding imaging applications, such as professional cameras or high-end machine vision, tend to use CCD image sensors: these devices were designed and built solely for the purpose of imaging. CMOS technology, on the other hand, is the same core technology that is broadly used in semiconductor manufacturing. By modifying these devices, the silicon can be made sensitive to light as well. CMOS image sensors can integrate digital logic and mixed-signal processing directly onto the imaging device, but the image quality available from these devices tends to be lower than that of corresponding CCD sensors.
What's changing the playing field?
There is a lot of work going on in the CMOS arena to try to improve the quality of the pixel, so that it approaches the quality that customers are used to getting with a CCD device.
What can we look forward to in CCD technology?
While CCD technology, from a pixel perspective, tends to be relatively well developed, there continue to be significant advances in imaging performance from efforts to increase the quantum efficiency of the devices, preserve charge capacity at smaller pixel geometries, and reduce the noise floor. As array sizes increase, other characteristics also begin to become very important, such as the ability to read the array at a very high rate without adding imaging artifacts. That allows a CCD sensor that is capable of capturing high-resolution still images to also output a video image stream, so that (depending on the application) there is an opportunity to switch between different modes of the sensor.
Will one technology wind up with a well-defined position in the market in the future?
As we have conversations with customers in different applications and in different markets, we find that they don't really talk about a specific technology, but rather about imaging features and the ability of a given technology to meet those needs. For companies making integrated cameras or digital backs for high-end professional photography with 22 million pixels and higher, for example, the questions are about image quality, dynamic range, and noise floor; they don't care as much about specific technologies as they do about meeting the specifications required by their customers. While the only way to meet these requirements today may be with CCD, as CMOS continues to develop it may be possible to meet them in the future with CMOS as well.
So, why all the confusion?
There is confusion because, without an understanding of some of the issues involved, it is very easy to simply ask, "Should I use CCD or should I use CMOS?" That's not really the right way to think about the problem; it's not asking the right questions. The right questions would be, "What kind of sensitivity do I need? What kind of frame rate do I need? What dynamic range do I need?" Then it becomes a matter of identifying the technology that maps best to those needs.
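The requirements-driven approach described above can be sketched in code. This is a hypothetical illustration: the sensor names, specification values, and thresholds below are made up for the example, not real product data.

```python
# Hypothetical sketch of requirements-driven sensor selection.
# All sensor specs and threshold values are illustrative, not real data.

from dataclasses import dataclass


@dataclass
class SensorSpec:
    name: str
    technology: str            # "CCD" or "CMOS"
    sensitivity_v_lux_s: float # responsivity, V/lux-s
    max_frame_rate_fps: float
    dynamic_range_db: float


def meets_requirements(sensor: SensorSpec,
                       min_sensitivity: float,
                       min_fps: float,
                       min_dr_db: float) -> bool:
    """True if the sensor satisfies all three imaging requirements."""
    return (sensor.sensitivity_v_lux_s >= min_sensitivity
            and sensor.max_frame_rate_fps >= min_fps
            and sensor.dynamic_range_db >= min_dr_db)


# Illustrative candidates: one CCD, one CMOS.
candidates = [
    SensorSpec("sensor_a", "CCD", 30.0, 15.0, 72.0),
    SensorSpec("sensor_b", "CMOS", 12.0, 60.0, 60.0),
]

# An application needing modest sensitivity but a high frame rate.
matches = [s.name for s in candidates
           if meets_requirements(s, min_sensitivity=10.0,
                                 min_fps=30.0, min_dr_db=55.0)]
print(matches)  # → ['sensor_b']
```

The point of the sketch is that the filter never asks "CCD or CMOS?"; the technology falls out of the application's sensitivity, frame-rate, and dynamic-range requirements.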
What about future technology choices?
Today, if you look at the sensors available in the market for a particular application, a given sensor may be the best solution to the problem. But a year from now, there are going to be different products and technologies available. So it is not really a question of CCD versus CMOS; it's a question of which available technology provides the best-optimized solution.