19  Optical Mechanisms

We will first discuss the fundamental mechanisms that produce light (Section 19.1), followed by various mechanisms to produce color (Section 19.2) and to control luminance (Section 19.3). Finally, we will take a look at two smartphone displays and their gamuts (Section 19.4).

19.1 Light Emission Mechanisms

Displays fundamentally transform electrical signals, i.e., image pixels, to optical signals, i.e., light. The most common device used for this electrical-to-optical signal transduction is the Light-Emitting Diode (LED), a semiconductor device that emits light when an electric current passes through it.

Applying an external voltage across the p–n junction injects electrons from the n-type side and holes from the p-type side into the junction. When electrons recombine with holes, they release energy as photons. The photon’s wavelength is determined by the semiconductor’s band gap, which depends on the material composition. Common semiconductors used for LEDs include AlGaInP (aluminum gallium indium phosphide, for red, orange, or yellow LEDs) and InGaN (indium gallium nitride, for green, blue, or white LEDs).
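To make the band-gap-to-wavelength relationship concrete, here is a minimal sketch that converts a band-gap energy to the corresponding peak emission wavelength via \(\lambda = hc/E_g\); the band-gap values below are rough, illustrative numbers rather than measured values for any particular device.

```python
# Sketch: band gap (eV) -> approximate peak emission wavelength (nm).
# The band-gap values are illustrative assumptions, not device data.
H = 6.626e-34    # Planck's constant (J*s)
C = 2.998e8      # speed of light (m/s)
EV = 1.602e-19   # one electron-volt in joules

def emission_wavelength_nm(band_gap_ev: float) -> float:
    """Photon wavelength (nm) whose energy equals the band gap."""
    return H * C / (band_gap_ev * EV) * 1e9

for material, eg in [("AlGaInP (red-ish)", 1.95), ("InGaN (blue-ish)", 2.75)]:
    print(f"{material}: band gap {eg} eV -> ~{emission_wavelength_nm(eg):.0f} nm")
```

With these illustrative band gaps, the lower-energy material emits around 640 nm (red) and the higher-energy one around 450 nm (blue), matching the material-to-color assignment above.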

Figure 19.1 (a) shows the emission spectra of two InGaN and two AlGaInP LEDs, and Figure 19.1 (b) shows the chromaticities of the four LEDs in the CIE 1931 xy-chromaticity diagram. Generally, the emission spectra of InGaN and AlGaInP LEDs are quite narrow, leading to saturated colors.

Figure 19.1: (a): emission spectra of four different LEDs; (b): the corresponding colors in the xy-chromaticity diagram. Adapted from Thomson, Stuart (2018, figs. 3, 4).

InGaN and AlGaInP are inorganic materials. Conventional LEDs made from them are usually large, on the order of a millimeter, so they cannot be used as individual display pixels. OLED (Organic LED) displays use organic (i.e., carbon-based) molecules/polymers, which can be made small enough to act as individual pixels or sub-pixels (see later). MicroLED displays, which many believe are the future of displays, use miniaturized inorganic LEDs as individual pixels. These tiny inorganic LEDs have better (optical) properties than organic LEDs, e.g., higher luminance, higher electrical-to-optical conversion efficiency, and longer lifetime, but are more difficult to manufacture.

Finally, there is also the Quantum-Dot LED (QLED), which uses quantum dots, tiny semiconductor nanoparticles, to emit light. QLEDs rely on the quantum confinement effects of the dots to produce color: generally, smaller dots emit higher-energy (shorter-wavelength, bluer) light, and larger dots emit lower-energy (longer-wavelength, redder) light1. Commercial QLEDs operate based on fluorescence, where light emission is driven by an external light source (Agarwal, Rai, and Mondal 2023). Cutting-edge research, however, focuses on using quantum dots as “conventional”, electrically driven devices that emit light under an external current/voltage (Mashford et al. 2013; Shirasaki et al. 2013; Qian et al. 2011).
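To illustrate the size-to-color trend, here is a rough sketch using a particle-in-a-box estimate of the confinement energy (ignoring the Coulomb term); the bulk band gap and effective masses are illustrative, roughly CdSe-like values, and this simple model overestimates the shift, so only the trend — smaller dot, bluer emission — should be taken from the numbers.

```python
import math

# Rough quantum-confinement sketch: emission blue-shifts as the dot shrinks.
# Material parameters are illustrative (roughly CdSe-like); treat outputs as a trend.
HBAR = 1.055e-34   # reduced Planck constant (J*s)
H = 6.626e-34      # Planck constant (J*s)
C = 2.998e8        # speed of light (m/s)
EV = 1.602e-19     # joules per electron-volt
M0 = 9.109e-31     # electron rest mass (kg)

E_GAP_BULK_EV = 1.74             # assumed bulk band gap (eV)
M_E, M_H = 0.13 * M0, 0.45 * M0  # assumed electron/hole effective masses

def emission_nm(radius_nm: float) -> float:
    """Approximate emission wavelength for a dot of the given radius."""
    r = radius_nm * 1e-9
    confinement_j = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1 / M_E + 1 / M_H)
    energy_j = E_GAP_BULK_EV * EV + confinement_j
    return H * C / energy_j * 1e9

for r in (2.0, 3.0, 5.0):
    print(f"radius {r} nm -> ~{emission_nm(r):.0f} nm")   # bluer for smaller dots
```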

Other Mechanisms

Another common light emission mechanism is fluorescence, one use of which we have seen in QLEDs above. Generally, fluorescing materials absorb a photon at a shorter wavelength, which excites an electron across the band gap. When the electron relaxes and recombines with the hole, a photon is emitted, usually at a longer wavelength. The shift between the absorption and emission wavelengths is called the Stokes shift. Usually blue/UV light is absorbed and green light is emitted — that is why fluorescence usually appears green. Since the luminance efficiency function (LEF) peaks at 555 \(\text{nm}\), i.e., greenish light (Section 4.3.2), fluorescence usually makes objects appear brighter/more conspicuous, even though the emitted energy is almost always lower than the incident energy.

In fluorescent light bulbs, phosphors are commonly used as the fluorescent material. UV light (generated by applying a current to excite mercury vapor) excites the phosphor coating inside the lamp, which then fluoresces. What enters your eyes is the combination of the fluoresced light and the light emitted directly by the mercury vapor.

Figure 19.2: (a) the emission spectrum of a typical fluorescent light bulb; from Daniel Smith (2016); (b): the spectrum of a phosphor-converted white LED; adapted from Deglr6328 (2018).

Figure 19.2 (a) shows the emission spectrum of a typical fluorescent light bulb. The two shorter-wavelength peaks are likely from the mercury emission itself, and the rest of the peaks likely result from the phosphor emissions.

Other than fluorescent bulbs, many white LEDs also make use of fluorescence. They are so-called phosphor-converted white LEDs. Figure 19.2 (b) shows the spectrum of one such LED, where the shorter-wavelength peak comes from the regular InGaN LED emission and the longer-wavelength peak is emitted by the Ce:YAG phosphor (cerium-doped yttrium aluminium garnet). The two spectra combined give a relatively broadband, white-ish emission spectrum.

19.2 Color Production

The trichromacy theory of color tells us that to reproduce a color we need only three primary colors. This is how all displays produce colors: each image pixel’s color is produced by combining three primary colors. There are two main strategies to implement the three primaries: one relies on the spatial integration of the human visual system, and the other relies on temporal integration.
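As a small numerical illustration of this idea, the sketch below treats each primary as a column vector of CIE XYZ tristimulus values and solves a 3×3 linear system for the primary weights that additively mix to a target color; the primary XYZ values used here are the standard linear-sRGB ones, chosen just as an example set of primaries.

```python
import numpy as np

# Each column is the XYZ of one primary at full intensity (linear sRGB primaries,
# used here only as an example set of three primaries).
PRIMARIES_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],   # X contributions of R, G, B
    [0.2126, 0.7152, 0.0722],   # Y contributions
    [0.0193, 0.1192, 0.9505],   # Z contributions
])

def primary_weights(target_xyz):
    """Weights of the three primaries that additively mix to target_xyz."""
    return np.linalg.solve(PRIMARIES_XYZ, target_xyz)

# Example: the white produced by driving all three primaries at full intensity.
white = PRIMARIES_XYZ @ np.ones(3)
print(primary_weights(white))   # -> approximately [1, 1, 1]
```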

19.2.1 Subpixels

In many displays, each display pixel is implemented by three sub-pixels, each of which has an implementation-specific emission spectrum and acts as a primary light. The retina then spatially integrates the light from the three sub-pixels, i.e., mixing the three primary colors. Figure 19.3 shows the two subpixel structures in the iPhone 6 and the iPhone 14 Pro. Different phones usually use slightly different subpixel structures, oftentimes to avoid patent infringement.

Figure 19.3: Two subpixel structures in the iPhone 6 (LCD) and the iPhone 14 Pro (OLED display).

While three subpixels are the absolute minimum for color production, there are also displays that use more than three primaries. At an area and resolution cost, more primaries mean a larger color gamut and more flexibility. For instance, if the color of one of the primaries is slightly off (e.g., due to aging), the additional primaries can be used to maintain the overall color reproduction accuracy.

In particular, there are four-primary displays that have an additional white subpixel. The white subpixels are useful in two ways. First, we can use the white subpixels for actually producing white rather than mixing the other three subpixels. This generally improves the power efficiency. Second, we can add white to artificially boost the luminance of a pixel, but this comes at the cost of sacrificing the color reproduction accuracy: the resulting color becomes more desaturated. This is commonly exploited in projectors.
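One simple (and simplified) way to drive such an RGBW pixel is to route the common, achromatic part of the color to the white subpixel and keep only the chromatic residue on the R, G, and B subpixels. The sketch below shows this naive mapping; real RGBW and projector pipelines use more sophisticated mappings, so treat it purely as an illustration.

```python
def rgb_to_rgbw(r: float, g: float, b: float):
    """Naive RGBW mapping: route the achromatic part of (r, g, b), each in [0, 1],
    to the white subpixel and keep only the chromatic residue on R, G, B."""
    w = min(r, g, b)            # achromatic component
    return r - w, g - w, b - w, w

print(rgb_to_rgbw(0.75, 0.5, 0.5))   # -> (0.25, 0.0, 0.0, 0.5)
```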

Filtering vs. Emissive Displays

There are two ways to realize the subpixels. The Liquid-Crystal Display (LCD) is a light-filtering display: it uses a white backlight made from a few large inorganic LEDs, which is then filtered by per-pixel color filters to produce the subpixel colors. Emissive displays, such as OLED and MicroLED displays, directly use emissive LEDs as individual pixels.

Figure 19.4: The architectural comparison between an OLED display and an LCD.

Figure 19.4 compares the architectures of an OLED display and an LCD. In an LCD, the backlight is usually broadband (white), produced either by phosphor-converted white LEDs or by a mixture of RGB LEDs, all of which are conventional, large inorganic LEDs. Each sub-pixel is associated with a color filter with a particular transmittance spectrum that, together with the backlight spectrum, determines the color of the sub-pixel.

Each sub-pixel has a liquid crystal (LC) cell sandwiched between a rear polarizer and a front polarizer. The rear polarizer passes only the component of the backlight with a particular polarization direction, say vertical. The front polarizer is set to allow only horizontally polarized light through. Without the LCs in between, the backlight passing through the rear polarizer is blocked by the front polarizer.

When no voltage is applied, the LC cells are in their default twisted state, which rotates the polarization of the light by 90 degrees so that it passes through the front polarizer, making the pixel transparent and producing the highest luminance. When a voltage is applied, the LC molecules align with the electric field and untwist. The untwisted cell leaves the polarization of the light from the rear polarizer unchanged, so the light is blocked by the front polarizer, producing the lowest luminance.

This relationship between voltage and transmittance (i.e., low voltage leads to high transmittance) is characteristic of Twisted Nematic (TN) LC cells. There are other LC technologies such as In-Plane Switching (IPS) and Vertical Alignment (VA) (which are better alternatives to TN (Trussell and Vrhel 2008, chap. 11.2)) where the relationship is inverted.

19.2.2 Field-Sequential Color

Other displays use the Field-Sequential Color (FSC) mechanism to produce color. In FSC, each image is presented as three sequential fields, each of which produces light using only one primary color. FSC then relies on the temporal integration of our visual system to mix the colors.

In fact, early color TV was delivered using the FSC mechanism. CBS debuted its FSC TV system in the 1940s; it is obviously obsolete now, but you can still watch a video of its operation here. Figure 19.5 (a) shows a modern replication of CBS’ FSC color wheel, which has two sets of three filters, so there are six filters in total per rotation. The wheel spins at 24 rotations per second, which amounts to 144 filters per second.

Figure 19.5: (a) a color wheel similar to the one used by CBS’ FSC TV; (b) the integrated frame shows color. From LabGuy’s World (2014).

The TV itself is a Cathode-Ray Tube (CRT) display, which scans the entire display 144 times a second, producing 144 frames. Each frame, received from the broadcaster, is broadband and contains only luminance information. The frame presentation and the color wheel rotation must be perfectly in sync such that as a frame is scanned on the TV, the corresponding filter is placed in front of the TV, displaying a fully red, green, or blue field. Figure 19.5 (b) shows an effective color image of the TV taken by a camera.
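The sketch below mimics this process for a single pixel: a full-color value is split into three single-primary fields shown one after another, and the “perceived” color is the temporal average over one cycle (up to a constant factor of 1/3, since each primary is lit for only a third of the time). The specific numbers are just an illustration.

```python
import numpy as np

def fsc_fields(rgb):
    """Split one full-color pixel into three single-primary fields."""
    r, g, b = rgb
    return [np.array([r, 0, 0]), np.array([0, g, 0]), np.array([0, 0, b])]

pixel = np.array([0.9, 0.4, 0.1])
fields = fsc_fields(pixel)

# The eye integrates over the three fields of one cycle; each primary is lit
# for 1/3 of the time, so the average is the original color scaled by 1/3.
perceived = sum(fields) / len(fields)
print(perceived, np.allclose(perceived * 3, pixel))   # [0.3, 0.133, 0.033] True
```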

Figure 19.6: (a) the light path inside a DLP projector: light from a lamp (covered inside, so not seen here) goes through the color wheel, is reflected off a mirror on the left to the DMD, which controls the light intensity and sends the light through the projection lens to the screen; adapted from DMahalko (2009c), DMahalko (2009a), and DMahalko (2009b); (b) the operating principle of a DMD, where every pixel can be mechanically turned on (oriented to reflect the light to a target of interest) or off (oriented to reflect the light away from the target); from Allen (2017, fig. 1).

DLP and LCoS

Today, the most common examples of FSC displays are Digital Light Processing (DLP) projectors. Figure 19.6 (a) shows the interior of a typical DLP projector. Light comes from a light source on the right (blocked by the projector housing in the figure and not seen), which could be a broadband lamp or a mixture of different LEDs. The light first passes through the color wheel, which in this case has four filters: red, green, blue, and transparent. This is equivalent to a four-primary system as discussed above. The light is then reflected by a mirror on the left to a Digital Micromirror Device (DMD) at the bottom, which has an array of pixels/tiny mirrors, each of which can be mechanically turned by a yoke.

Figure 19.6 (b) illustrates the basic structure of a DMD. Each pixel can be turned either “on”, which directs the incident light to the projection lens, or “off”, which directs the incident light away from the projection lens. The light passing through the projection lens then leaves the system and enters the scene. The DMD controls the luminance of each pixel by the ratio of the on-time to the off-time. This luminance-control mechanism is essentially pulse-width modulation, which we will discuss shortly.
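One common way to realize this on-time/off-time ratio is binary-weighted bit planes: an 8-bit gray level is split into eight on/off planes whose display times are proportional to 1, 2, 4, …, 128, so the total on-time is proportional to the gray level. The sketch below is a schematic illustration of that idea, not the timing of any specific DLP chip.

```python
def bit_planes(gray: int, bits: int = 8):
    """Split an 8-bit gray level into binary-weighted (mirror_on, duration) slots.
    Durations are in units of the shortest slot; total on-time equals the gray level."""
    assert 0 <= gray < 2 ** bits
    return [(bool((gray >> k) & 1), 2 ** k) for k in range(bits)]

planes = bit_planes(180)                      # 180 = 0b10110100
on_time = sum(d for on, d in planes if on)    # 4 + 16 + 32 + 128 = 180
print(planes, on_time)
```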

Another application of FSC is in Liquid Crystal on Silicon (LCoS) displays, which, in principle, combine DLP projectors and LCDs. Like DLPs, LCoS displays use FSC and a per-pixel mirror to reflect light to the scene. Unlike in DLPs, the mirror is always “on”. The mechanism that determines the pixel luminance is, instead, similar to that in LCDs.

As in LCDs, there are two polarizers. The light source, originally unpolarized, is polarized by the first polarizer before hitting the LC array. The voltage applied to each (per-pixel) LC cell rotates the polarization state of the light, which then reflects off the pixel’s mirror and goes through the second polarizer, which transmits only light whose polarization is rotated 90 degrees from that of the first polarizer. Essentially, as in LCDs, the voltage determines the amount of light emitted from each pixel.

19.3 Luminance Control

Now that we know how displays produce (at least) three primaries, the next question is how to control their luminance. We keep using “luminance” in this chapter: if we know the emission spectra of the sub-pixels, determining the luminance is equivalent to knowing the color. We have already seen some of the ways different display architectures control luminance. In general, there are two approaches: pulse width modulation (PWM) and pulse amplitude modulation (PAM).

19.3.1 PAM

In PAM, we (directly or indirectly) control the voltage or current supplied to a pixel, which then changes the number of photons emitted (and thereby the luminance).

For LEDs (whether organic or inorganic), the luminance is proportional to the current that flows through the diode. Recall that in LEDs photons are emitted when electron-hole pairs recombine, so the amount of photon emission is naturally proportional to the number of electrons injected (through an external voltage), which is proportional to the current.

More formally, we can define quantum efficiency \(\eta\) of an LED (Schubert 2006, chap. 5): \[ \eta = \frac{N_{\text{photons}}}{N_{\text{electrons}}} = \frac{P/(h f)}{I/e}, \tag{19.1}\]

where \(N_{\text{photons}}\) is the number of photons emitted into space per second, \(N_{\text{electrons}}\) is the number of electrons injected into the LED per second, \(P\) is the emitted optical power, \(h\) is Planck’s constant, \(f\) is the frequency of the photons, \(I\) is the injection current (total charge per second), and \(e\) is the elementary charge.
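As a quick numerical sanity check of Equation 19.1, the sketch below computes \(\eta\) for a hypothetical green LED emitting 10 mW of optical power at 530 nm while drawing 20 mA; the numbers are made up for illustration.

```python
# Sketch: quantum efficiency (Eq. 19.1) for a hypothetical LED.
H = 6.626e-34    # Planck's constant (J*s)
C = 2.998e8      # speed of light (m/s)
E = 1.602e-19    # elementary charge (C)

def quantum_efficiency(power_w: float, wavelength_m: float, current_a: float) -> float:
    photon_energy = H * C / wavelength_m        # h*f, in joules
    n_photons = power_w / photon_energy         # photons emitted per second
    n_electrons = current_a / E                 # electrons injected per second
    return n_photons / n_electrons

# Hypothetical green LED: 10 mW optical power at 530 nm, 20 mA drive current.
print(f"eta ~ {quantum_efficiency(10e-3, 530e-9, 20e-3):.2f}")   # ~0.21
```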

In reality, however, the relationship between current and luminance can be non-linear because \(\eta\) is not a constant. For instance, the quantum efficiency decreases as the current increases (Dai et al. 2010; Deng et al. 2017).

The relationship between the voltage across an LED and the current through the LED, known as the I-V curve, is non-linear. Generally, the relationship is exponential (Geffroy, Le Roy, and Prat 2006; Tsujimura 2017, chap. 4.2; Miller 2019, chap. 6.1.2), governed by the Shockley diode equation (Shockley 1949):

\[ I = I_0 (e^{V/nV_T} - 1), \tag{19.2}\]

where \(I\) is the diode forward current, \(V\) is the voltage across the diode, \(I_0\) is the saturation current, \(n\) is the diode ideality factor, and \(V_T\) is a temperature-dependent thermal voltage.
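Equation 19.2 is easy to explore numerically. The sketch below evaluates the forward current over a small voltage sweep to show how steeply it grows; the saturation current and ideality factor are arbitrary illustrative values, and \(V_T \approx 25.9\) mV corresponds to room temperature.

```python
import math

def diode_current(v: float, i0: float = 1e-25, n: float = 2.0, vt: float = 0.0259) -> float:
    """Shockley diode equation (Eq. 19.2); i0 and n are illustrative values,
    and vt ~ 25.9 mV is the thermal voltage at room temperature."""
    return i0 * (math.exp(v / (n * vt)) - 1.0)

for v in (2.4, 2.6, 2.8):
    print(f"V = {v:.1f} V -> I = {diode_current(v) * 1e3:.4f} mA")
# The current grows by orders of magnitude over a few hundred millivolts.
```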

Importantly, however, we do not get to directly control the voltage or current across the LED. The exact driving circuit will be discussed in Chapter 20.

Figure 19.7: Relationship between the transmittance and voltage applied to (a) a Twisted Nematic (TN) LC cell; from Gauza et al. (2007, fig. 5) and (b) an In-Plane Switching (IPS) LC cell; from Jeon et al. (2009, fig. 2b).

For LCDs, the luminance depends on the LC transmittance, which depends on the voltage applied to the LC. Figure 19.7 (a) shows the relationship between transmittance and voltage of a TN LC cell. Overall, the relationship is not linear: the transmittance is almost invariant to voltage at very low or very high voltage levels, but in the mid-voltage range the relationship is close to linear (Trussell and Vrhel 2008, chap. 11.2; Gauza et al. 2007; Lee, Lee, and Kim 1998; Jeon et al. 2009; Hong, Shin, and Chung 2007). Figure 19.7 (b) shows the transmittance vs. voltage relationship for an IPS cell, where the relationship is largely inverted. Generally, the LCD luminance does not scale linearly with the voltage.
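A toy way to capture the shape of such electro-optic curves — flat at the extremes, roughly linear in between — is a logistic function of voltage. The parameters below are made up, and the model only mimics the qualitative shape of a TN-style curve like Figure 19.7 (a), not any real LC cell.

```python
import math

def tn_transmittance(v: float, v_mid: float = 2.0, steepness: float = 3.0) -> float:
    """Toy TN-style curve: high transmittance at low voltage, falling toward zero
    at high voltage, flat at both extremes. Parameters are illustrative only."""
    return 1.0 / (1.0 + math.exp(steepness * (v - v_mid)))

for v in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0):
    print(f"{v:.1f} V -> transmittance ~ {tn_transmittance(v):.2f}")
```

An IPS-style curve could be mimicked by flipping the sign of the steepness, reflecting the inverted relationship mentioned above.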

19.3.2 PWM

In PWM, the voltage or current supplied to a pixel is fixed, but we control the duty cycle, the fraction of time during which the voltage or current is active. For instance, with PWM the LC in an LCD is either fully twisted or fully untwisted, and we control the time during which the LC is in each state. Similarly, for emissive displays, each subpixel either emits no light or emits the maximum amount of light; what changes is the time during which the subpixel emits light.

Duty cycle is always between 0 and 1. Assuming a display that refreshes 60 times a second, a duty cycle of 0.5 would mean that the pixels emit light for about \(1\text{s}/60\times 0.5\approx 8.3\text{ms}\) during each refresh cycle. Figure 19.8 shows two PWM examples with a 25% and 75% duty cycle, respectively.
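The arithmetic is straightforward; for completeness, here is the same example in a few lines of code, with the 60 Hz refresh rate simply carried over from the assumption above.

```python
REFRESH_HZ = 60
period_ms = 1000 / REFRESH_HZ          # one refresh cycle ~ 16.7 ms

def on_time_ms(duty_cycle: float) -> float:
    """Time the pixel is 'on' within one refresh cycle."""
    return period_ms * duty_cycle

print(on_time_ms(0.5))                       # ~8.3 ms, matching the example above
print(on_time_ms(0.25), on_time_ms(0.75))    # the two cases shown in Figure 19.8
```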

Figure 19.8: Two PWM examples with a 25% and 75% duty cycle, respectively. Ideally each pulse is a perfect square wave, in which case the luminance is proportional to the duty cycle, but in reality pulses take time to rise and fall, so the luminance is sub-linear w.r.t. duty cycle.

In theory, the luminance is proportional to the duty cycle — assuming that the pulse is a perfect square wave (the black curve in Figure 19.8). In reality, it takes time for the pulse to rise and fall (e.g., for a liquid crystal cell to change its orientation), so there is some efficiency loss (the green curve in Figure 19.8). As a result, the luminance is sub-linear with respect to the intended duty cycle. The longer the duty cycle, the more we can amortize this efficiency loss and the closer we get to being linear. Generally, LEDs respond to current changes on nanosecond-to-microsecond timescales, while liquid crystal cells respond much more slowly, usually on millisecond timescales.
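A crude way to see why slow transitions make luminance sub-linear in the duty cycle is to model each pulse as a trapezoid: the rise and fall ramps contribute only half of their duration to the integrated light output, so short pulses lose proportionally more. The transition time below is an arbitrary illustrative value, and the model assumes the pulse is long enough to reach its full level.

```python
def effective_luminance(duty_cycle: float, period_ms: float = 1000 / 60,
                        transition_ms: float = 2.0) -> float:
    """Relative light output of one trapezoidal pulse (1.0 = always fully on).
    transition_ms is the combined rise + fall time, an illustrative value.
    Assumes the pulse lasts at least as long as the transitions."""
    on_ms = duty_cycle * period_ms
    assert on_ms >= transition_ms, "pulse too short for this simple model"
    # The ramps contribute only half of their duration to the light integral.
    return (on_ms - transition_ms / 2) / period_ms

for d in (0.25, 0.5, 0.75, 1.0):
    print(f"duty {d:.2f} -> ideal {d:.2f}, with transitions ~ {effective_luminance(d):.2f}")
```

The relative shortfall shrinks as the duty cycle grows, which is the amortization effect described above.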

19.4 Display Native Gamut

Regardless of how color and luminance are produced, ultimately a display has a set of effective primaries, which make up the display’s native color space, which is most likely not exactly sRGB or any standard color space. The primary colors (and the white point) depend on the emission spectrum of each sub-pixel, which in turn depends on the material used. For instance, inorganic LEDs have narrower emission spectra than organic LEDs (Huang et al. 2020), so they tend to generate more saturated colors and, thus, the resulting display gamut is wider. One has to balance multiple trade-offs in a display design, such as invariance of chromaticity vs. luminance, lifetime, power consumption, and cost, so it is difficult to tune the pixel spectra just so that the colors precisely match those of a standard.
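To make “wider gamut” quantitative, one simple (if crude) comparison is the area of the triangle spanned by the primaries in the xy-chromaticity diagram; the sketch below compares the sRGB and P3 primary triangles using their standard xy coordinates.

```python
# Chromaticity coordinates (x, y) of the primaries; standard published values.
SRGB = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]   # R, G, B
P3   = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]   # R, G, B

def gamut_area(primaries) -> float:
    """Area of the triangle spanned by three primaries (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

print(gamut_area(P3) / gamut_area(SRGB))   # ~1.36 in the xy diagram
```

Keep in mind that the xy diagram is not perceptually uniform, so such area ratios are only a rough proxy for how much larger one gamut appears than another.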

Figure 19.9: Microscope-magnified subpixel images of the P3 green and sRGB green primaries (both are [0, 255, 0] in their respective color spaces) on a 4th-generation iPad Pro, taken with an iPhone 12 Pro (whose image signal processing chain introduces color inaccuracies; the red sub-pixel contributions to the sRGB green are not as strong when seen by the naked eye). As a side note, you can also see that when the image is focused on the green sub-pixels, the red (and blue) sub-pixels are out of focus, a result of chromatic aberration.

As an example, Figure 19.9 shows the sub-pixel images of the green primary colors in the P3 and sRGB color spaces as displayed on a 4th-generation iPad Pro. We can make a few observations. First, the emission patterns of P3 green and sRGB green are different. The P3 green is more “pure”: the red and blue sub-pixels contribute very little, whereas the sRGB green requires a noticeable contribution from the red sub-pixels. This is not surprising, because the P3 green is much more saturated (closer to the spectral colors) than the sRGB green, as shown in the right figure in Figure 5.3. The actual contribution of the red sub-pixels to sRGB green as seen by my eye is not as strong as in this iPhone-taken image; the image signal processing pipeline in the iPhone has certainly introduced artifacts.

Second, even for the P3 green, there are still some contributions from the red sub-pixels. This suggests that the native display gamut is different from, in fact larger than, P3. This makes sense: for a display to support a particular color space, say, the P3 space, the display’s native color space must be no smaller than the P3 space.
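Coming back to the first observation, we can check it numerically: expressing the linear-sRGB green primary in Display P3 coordinates (via CIE XYZ) yields a clearly positive red component, while P3 green maps to pure green by construction. The matrices below are the commonly quoted linear sRGB→XYZ and Display P3→XYZ matrices (D65 white point), so treat the exact numbers as approximate.

```python
import numpy as np

# Commonly quoted RGB -> XYZ matrices (D65), for linear sRGB and Display P3.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])
P3_TO_XYZ = np.array([
    [0.4866, 0.2657, 0.1982],
    [0.2290, 0.6917, 0.0793],
    [0.0000, 0.0451, 1.0439],
])

# Linear sRGB green, expressed in Display P3 coordinates.
srgb_green_xyz = SRGB_TO_XYZ @ np.array([0.0, 1.0, 0.0])
srgb_green_in_p3 = np.linalg.solve(P3_TO_XYZ, srgb_green_xyz)
print(srgb_green_in_p3)   # the R component is clearly positive (~0.18 in linear light)
```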

It is worth noting that the spectrum/color of the light emitted from a sub-pixel is angularly dependent. This is at least partially because some emitted photons might not escape the LED due to internal reflection at the material–air interface, and this reflection depends on the photon arrival angles (Schubert 2006, chap. 5). Display measurement standards usually define the angles at which a display’s color and luminance must be measured for a full display performance characterization (ICDM 2025, chap. 9).


  1. The 2023 Nobel Prize in Chemistry was awarded for the discovery, characterization, and synthesis of quantum dots.↩︎