11  Modeling Material Surface

We have established the rendering equation for modeling surface scattering. To apply this equation, we need the BRDF of the surfaces being rendered. In practice, the material BRDF is modeled in two ways: analytical parameterization (Section 11.1 and Section 11.2) and direct measurement (Section 11.3).

11.1 Types of Material Surface

In everyday life, material surfaces are usually classified as being diffuse, specular, or glossy. Figure 11.1 shows examples of the three materials. We can now give a more rigorous treatment of these material types using BRDF, which will, in turn, give us some inspiration for parameterizing the BRDF.

Figure 11.1: (a): a diffuse material and its BRDF. (b): a specular material and its BRDF. (c): a glossy material and its BRDF. From Prabhu B Doss (2007), Daderot (2012), Steve Fareham (2007), VonHaarberg (2018c), VonHaarberg (2018a), and VonHaarberg (2018b).

11.1.1 Diffuse Material

When the surface is rough, the energy of surface reflection deviates away from the perfect mirror-like reflection and, instead, distributes across the hemisphere. When the surface becomes rough enough, the distribution of outgoing energy can become uniform across all outgoing directions over the entire hemisphere. Such a surface is called a diffuse or an ideal Lambertian surface. The perfect Lambertian surface does not exist, but many things in the real world come close, such as paper, marble, or wood.

The BRDF of a Lambertian surface is a constant function. As we have seen in Equation 10.17, \(f_r(p, \os, \oi) = \frac{\rho_{hh}}{\pi}\), where \(\rho_{hh}\) is the surface albedo and lies between 0 and 1. It is easy to see that diffuse materials are always isotropic.
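As a quick numerical sanity check, the sketch below (plain Python; the function names are ours) integrates the Lambertian BRDF times the cosine term over the hemisphere with midpoint quadrature and recovers the albedo, confirming that the \(\pi\) in the denominator is exactly what makes the BRDF energy-conserving.

```python
import math

def lambertian_brdf(albedo):
    """Constant BRDF of an ideal Lambertian surface: rho_hh / pi."""
    return albedo / math.pi

def directional_hemispherical_reflectance(albedo, n_theta=512):
    """Numerically integrate f_r * cos(theta_s) over the hemisphere
    (d_omega = sin(theta) dtheta dphi); for a Lambertian surface the
    result should equal the albedo for any incident direction."""
    fr = lambertian_brdf(albedo)
    dtheta = (math.pi / 2) / n_theta
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta
        # the azimuthal integral contributes a factor of 2*pi
        total += fr * math.cos(theta) * math.sin(theta) * dtheta * 2 * math.pi
    return total

print(round(directional_hemispherical_reflectance(0.75), 4))  # 0.75
```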

11.1.2 Perfectly Specular Material

If a surface is perfectly smooth, like a mirror, it is called a perfectly specular material. Such materials follow Snell’s law, which governs the angles of reflection and refraction, and the Fresnel equations, which govern the energy of reflection and refraction.

In the plane of incidence (the plane uniquely determined by the incident direction and the surface normal), the reflection direction is the mirror image of the incident direction about the surface normal. More precisely, if the incident direction is \(\oi\) (parameterized by the polar angle \(\theta_i\) and azimuthal angle \(\phi_i\)) and the reflection direction is \(\os\) (\(\theta_s, \phi_s\)), we have:

\[ \begin{aligned} \theta_s &= \theta_i, \\ \phi_s &= \phi_i + \pi. \end{aligned} \tag{11.1}\]

The refraction/transmitted direction \(\omega_t\) (\(\theta_t, \phi_t\)) follows:

\[ \begin{aligned} & n_1 \sin\theta_i = n_2 \sin\theta_t, \\ & \phi_t = \phi_i + \pi, \end{aligned} \tag{11.2}\]

where \(n_1\) is the refractive index of the medium the light comes from and \(n_2\) is that of the medium that reflects/refracts the light.
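These direction rules are often implemented in vector rather than angular form. The sketch below (plain Python; the conventions are our assumption: the unit normal `n` points out of the surface and `wi` points from the surface point toward the light) is equivalent to Equations 11.1 and 11.2.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(wi, n):
    """Mirror reflection (Equation 11.1 in vector form):
    theta stays the same, the azimuth flips by pi."""
    d = dot(wi, n)
    return tuple(2 * d * n[k] - wi[k] for k in range(3))

def refract(wi, n, n1, n2):
    """Refraction direction from Snell's law (Equation 11.2);
    returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = dot(wi, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    # flip and rescale the tangential component; point into the surface
    return tuple(-eta * wi[k] + (eta * cos_i - cos_t) * n[k] for k in range(3))
```

For instance, reflecting a 45° incident ray about \(n = (0, 0, 1)\) flips the sign of its tangential component, which is the vector-form statement of \(\phi_s = \phi_i + \pi\).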

The energy of the reflected and refracted light is governed by the Fresnel equations. We will spare you the details, but it suffices to say that the fractions of reflected/refracted light are dependent on the incident angle, refractive indices of the two interface media, and the polarization states of the light. If you work out the math and assume that the incident light is unpolarized, the percentage of reflected energy \(F_r(\oi)\) for an incident direction \(\oi\) is given by:

\[ \begin{aligned} F_r(\oi) &= \frac{r_a+r_e}{2}, \\ r_a &= (\frac{n_2 \cos\theta_i - n_1 \cos\theta_t}{n_2 \cos\theta_i + n_1 \cos\theta_t})^2, \\ r_e &= (\frac{n_1 \cos\theta_i - n_2 \cos\theta_t}{n_1 \cos\theta_i + n_2 \cos\theta_t})^2. \end{aligned} \tag{11.3}\]

We call \(F_r(\oi)\) the specular reflectance, which not only varies with \(\oi\) but is also a spectral term; we omit the wavelength for simplicity. Assuming no loss of energy, the specular transmittance, i.e., the fraction of the transmitted energy, is given by \(1-F_r\).
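Equation 11.3 is straightforward to evaluate. A minimal sketch (plain Python; the function name is ours):

```python
import math

def fresnel_unpolarized(theta_i, n1, n2):
    """Specular reflectance F_r for unpolarized light (Equation 11.3).
    theta_i is the incident polar angle in radians; returns 1.0 beyond
    the critical angle (total internal reflection)."""
    sin_t = n1 / n2 * math.sin(theta_i)  # Snell's law
    if sin_t >= 1.0:
        return 1.0
    ci = math.cos(theta_i)
    ct = math.cos(math.asin(sin_t))
    r_a = ((n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)) ** 2
    r_e = ((n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)) ** 2
    return 0.5 * (r_a + r_e)

# Air-to-glass (n = 1.5) at normal incidence: ((n2 - n1)/(n2 + n1))^2
print(round(fresnel_unpolarized(0.0, 1.0, 1.5), 6))  # 0.04
```

Note how \(F_r\) climbs toward 1 at grazing incidence, which is why even matte-looking dielectrics become mirror-like when viewed at a shallow angle.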

Fresnel’s equations are best understood in the context of the electromagnetic theory and are derived by treating light as waves in an electric field (the fact that we need to consider the polarization states of light is a giveaway). While \(F_r\) cannot be derived from radiometry, it is fundamentally about the energy transfer of surface scattering, which radiometry is also concerned with. So \(F_r\) can be integrated into the radiometry framework. One good example is to express the BRDF of a specular material using \(F_r\):

\[ f_r(p, \os, \oi) = F_r(\oi)\frac{\delta(\theta_s-\theta_i)\delta(\phi_s-\phi_i-\pi)}{\cos\theta_i}, \tag{11.4}\]

where \(\delta(x)\) is the Dirac delta function, which is 0 everywhere except when \(x=0\) and has the property \(\int\delta(x)\d x = 1\).

We can verify that this BRDF makes sense. First, the BRDF is non-zero only when Equation 11.1 holds because of the double-delta term. Second, energy conservation is obeyed. For instance, if we calculate the directional-hemispherical reflectance by plugging the BRDF into Equation 10.8 and assuming \(\Omega\) is a hemisphere, we get:

\[ \frac{E_o}{E_i} = \rho_{dh}(p, \oi) = \int^{\Omega} F_r(\oi)\frac{\delta(\theta_s-\theta_i)\delta(\phi_s-\phi_i-\pi)}{\cos\theta_i} \cos\theta_s\text{d}\os. \tag{11.5}\]

Since \(F_r(\oi)\) is independent of \(\os\), Equation 11.5 evaluates to Equation 11.6. The integration in Equation 11.6 evaluates to 1. This is because, informally, the integrand is non-zero only when Equation 11.1 holds, at which point \(\theta_s = \theta_i\), so the cosine terms cancel out. The integration is thus just an elaborate way of writing \(\int \delta(x)\d x\), which is 1. \[ \frac{E_o}{E_i} = F_r(\oi) \int^{\Omega} \frac{\delta(\theta_s-\theta_i)\delta(\phi_s-\phi_i-\pi)}{\cos\theta_i} \cos\theta_s\text{d}\os = F_r(\oi). \tag{11.6}\]

We can see that the specular reflectance \(F_r\) is equivalent to \(\rho_{dh}\), the directional-hemispherical reflectance. This makes sense, because in specular materials the scattering is directional if the incident light is directional. So the directional-hemispherical reflectance reduces to the “directional-directional” reflectance, which is essentially the specular reflectance.

The specular reflectance is also equivalent to the hemispherical-directional reflectance \(\rho_{hd}\). We can show this either by simply invoking the reciprocity that \(\rho_{hd} = \rho_{dh}\) or by plugging the specular BRDF Equation 11.4 into Equation 10.9 and obtaining (assuming \(\Omega\) is hemisphere):

\[ \begin{aligned} \rho_{hd}(p, \os) &= \int^{\Omega} F_r(\oi)\frac{\delta(\theta_s-\theta_i)\delta(\phi_s-\phi_i-\pi)}{\cos\theta_i} \cos\theta_i\text{d}\oi \\ &= F_r(\hat\os) = F_r(\os), \end{aligned} \]

where \(\hat\os(\theta_s, \phi_s-\pi)\) is the mirror-reflection direction of \(\os(\theta_s, \phi_s)\). The integral evaluates to \(F_r(\hat\os)\) because, informally, the integrand is non-zero only when Equation 11.1 holds, at which point \(\oi = \hat\os\) so \(F_r(\oi) = F_r(\hat\os)\); the integral is a complicated way of writing \(\int F_r(\oi)\delta(\hat\os-\oi)\doi\), which evaluates to \(F_r(\hat\os)\). The result has an intuitive explanation: for a specular surface, the scattered energy along \(\os\) given a hemispherical illumination is the same as when the illumination comes only from \(\hat\os\). We can then show that \(F_r(\hat\os) = F_r(\os)\), which is not surprising given reciprocity; you can also verify it by going through the equations in Equation 11.3.

Interestingly, the specular reflectance \(F_r\) in general is not equivalent to the hemispherical-hemispherical reflectance \(\rho_{hh}\). To see this, plug the specular BRDF into Equation 10.10 (assuming \(\Omega_i\) and \(\Omega_s\) are hemispheres):

\[ \begin{aligned} E_o &= \int^{\Omega_s} (\int^{\Omega_i} f_r(p, \os, \oi) L(p, \oi) \cos\theta_i\text{d}\oi) \cos\theta_s \dos, \\ &= \int^{\Omega_s} \big(F_r(\os) L(p, \os)\big) \cos\theta_s \dos, \\ &= \int^{\Omega_i} F_r(\oi) L(p, \oi) \cos\theta_i \doi , \\ E_i &= \int^{\Omega_i} L(p, \oi) \cos\theta_i \doi. \end{aligned} \]

We can see that only when \(F_r(\oi)\) is a constant do we get \(F_r(\oi) = \frac{E_o}{E_i} = \rho_{hh}\). This is consistent with our early result in Section 10.2.3 that \(\rho_{dh} = \rho_{hh}\) only when the material is Lambertian, and a specular material is obviously not Lambertian.
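We can make this concrete numerically: under uniform hemispherical radiance, the equations above reduce \(\rho_{hh}\) to the cosine-weighted average of \(F_r\) over incident directions. For a dielectric like glass this average is noticeably larger than the normal-incidence \(F_r\), confirming that a single \(F_r(\oi)\) value does not equal \(\rho_{hh}\). (Plain Python sketch; the Fresnel helper repeats Equation 11.3 for air-to-glass.)

```python
import math

def fresnel(theta_i, n1=1.0, n2=1.5):
    """Unpolarized specular reflectance F_r (Equation 11.3), air to glass."""
    sin_t = n1 / n2 * math.sin(theta_i)
    if sin_t >= 1.0:
        return 1.0
    ci, ct = math.cos(theta_i), math.cos(math.asin(sin_t))
    r_a = ((n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)) ** 2
    r_e = ((n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)) ** 2
    return 0.5 * (r_a + r_e)

def rho_hh(n=4096):
    """Hemispherical-hemispherical reflectance under uniform radiance:
    the cosine-weighted average of F_r (azimuthal factors cancel)."""
    num = den = 0.0
    dtheta = (math.pi / 2) / n
    for i in range(n):
        t = (i + 0.5) * dtheta
        w = math.cos(t) * math.sin(t) * dtheta
        num += fresnel(t) * w
        den += w
    return num / den

print(round(fresnel(0.0), 4))  # 0.04 at normal incidence
print(round(rho_hh(), 3))      # ≈ 0.09: larger, because F_r grows at grazing angles
```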

When \(F_r(\oi)\) is constant, the specular material is isotropic (can you prove it?). Since \(F_r(\oi)\) does not have to be a constant, specular materials could be anisotropic. That is, it is theoretically possible that a material always reflects specularly, but the reflected energy depends on the incident direction.

11.1.3 Glossy Material

The surface scattering in most materials is in-between being perfectly specular and perfectly diffuse. These materials scatter light into a small cone of directions, usually centered around the direction of perfect reflection. These materials are usually called glossy or sometimes, confusingly, “specular”, too. The energy distribution of a glossy material is neither a delta function (as in the perfectly specular case) nor a uniform function (as in the diffuse case). It is usually a function that peaks at the mirror-reflection direction and gradually decays as we move away from that direction.

The bottom figures in Figure 11.1 illustrate an example of the BRDF for each of the three surface types under a given incident direction. An actual BRDF (for a given surface point and a given incident direction) would be a 3D shape, and what we are showing here is the cross section. The shape of the locus is drawn to be proportional to the magnitude of the BRDF; the locus in graphics literature is sometimes called the specular lobe.

The specular-lobe visualization gives us a hint: we can parameterize a BRDF by mathematically describing the shape of the specular lobe. In fact, the BRDFs for the Lambertian surface (Equation 10.17) and for specular materials (Equation 11.4) are two such examples. A glossy BRDF is more difficult to parameterize. Many BRDF parameterizations have been proposed; some are empirical, while others attempt to be physically plausible. The most popular and widely used is based on the microfacet model, which we will discuss next.

11.2 BRDF Parameterization with Microfacet Models

The assumption of the microfacet model is that the surface scattering behavior of a point depends on its local roughness: the rougher the surface, the more diffuse the surface scattering becomes. To model the roughness, the surface is modeled as a collection of small microfacets, each of which acts like a perfect mirror. A specular surface is one where all the microfacets have the exact same orientation. As the surface becomes rougher, the mirrors become more randomly oriented. When the mirrors are completely randomly oriented, the resulting surface scattering becomes diffuse.

To derive a microfacet model, we need to first define the orientation of each microfacet. Given a beam of incident light from a particular direction, we can then trace, following the laws governing specular reflection, how the light is scattered by the collection of microfacets given their orientations. In the end, we obtain the collection of outgoing directions, from which we can derive the BRDF.

There are many variants of the microfacet model. They have one thing in common: they do not explicitly model the scattering of each ray at each microfacet but, rather, model the scattering of the microfacets statistically given the distribution of the microfacet orientations. In the end, they can either have an analytical form of the BRDF (Lambertian surface being an extreme example), have a close approximation of the analytical form, or can numerically estimate the BRDF efficiently (mostly through sampling).
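The statistical flavor of these models can be illustrated with a toy simulation (our own illustration, not any of the published models): draw microfacet normals by tilting the macro normal with Gaussian-distributed angles whose standard deviation plays the role of roughness, mirror-reflect a fixed incident ray off each microfacet, and look at the spread of outgoing directions.

```python
import math, random

def sample_scatter_angles(theta_i, roughness, n=10000, seed=0):
    """Toy microfacet illustration: perturb the macro normal by a
    Gaussian tilt (std = roughness), mirror-reflect the incident ray
    off each microfacet, and return the outgoing polar angles."""
    rng = random.Random(seed)
    wi = (math.sin(theta_i), 0.0, math.cos(theta_i))
    out = []
    for _ in range(n):
        tilt = rng.gauss(0.0, roughness)
        phi = rng.uniform(0.0, 2 * math.pi)
        # microfacet normal: macro normal tilted by a random angle
        m = (math.sin(tilt) * math.cos(phi),
             math.sin(tilt) * math.sin(phi),
             math.cos(tilt))
        d = sum(a * b for a, b in zip(wi, m))
        ws = tuple(2 * d * m[k] - wi[k] for k in range(3))
        out.append(math.acos(max(-1.0, min(1.0, ws[2]))))
    return out

# A smooth surface scatters tightly around the mirror direction (a
# narrow lobe); a rough one spreads out (approaching diffuse).
smooth = sample_scatter_angles(math.radians(30), roughness=0.02)
rough = sample_scatter_angles(math.radians(30), roughness=0.4)
```

Histogramming `smooth` versus `rough` reproduces the narrow-versus-wide specular lobes of Figure 11.1.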

Without going into the details, we will refer you to Pharr, Jakob, and Humphreys (2023, chap. 9.6) for a mathematical treatment of the general idea and to Torrance and Sparrow (1967), Cook and Torrance (1982), Ward (1992), Oren and Nayar (1995), and Walter et al. (2007) for the classical models.

11.2.1 Nature of Microfacets Models

If the microfacet theory does not sound weird to you, it should! In a microfacet model, we are still modeling surface scattering using discrete objects (microfacets) and events (perfect mirror-like reflection on each microfacet). Is it surprising that we can use the discrete microfacet model to reason about the behavior of a continuous surface? Given any point \(p\) on a surface, wouldn’t \(p\) correspond to one single microfacet, and the behavior of \(p\) simply be the result of a perfect mirror reflection there? If so, how can the microfacet model describe non-specular surface scattering of glossy and diffuse materials?

An intermediate answer is that the microfacet theory is just a modeling methodology. We use a set of discrete microfacets to derive the surface-scattering statistics of that set of microfacets, but then simply assume that the so-derived statistics apply anywhere on a continuous surface of interest. Still, does this methodology reflect the physical reality?

Figure 11.2: Triphasic profile of an object property. The property fluctuates wildly at both the macroscopic scale and the atomic/molecular scale, but there is an intermediate scale at which the property does not vary much. Models based on radiometry operate at this scale. This scale is sufficiently small (smaller than the spatial resolution of human vision and typical cameras) that our calculus machinery can be applied, but still larger than individual molecules and atoms, so we do not have to worry about the wild fluctuations at that scale.

Well, the physical world is fundamentally not continuous; when we break down the surface into finer and finer scales, we eventually get to molecules and atoms, so the surface property undergoes wild fluctuations depending on whether a small area contains molecules or not. If that is the level of detail you want to get into, you have to model things at the molecular and atomic levels (or even lower). Figure 11.2 illustrates this idea.

Fortunately for many real-world use-cases, we do not have to go there. Our eyes have a resolution limit, so we cannot resolve the details of a tiny surface area anyway; image sensors also have a resolution limit. The just-resolvable area \(\delta A\), set by the spatial resolution limit of our visual system, is more than large enough that it contains many microfacets, so the aggregated behavior of those microfacets can effectively model the observed scattering of \(\delta A\), which is all that matters to our vision (and to computer graphics and imaging, which are concerned only with satisfying human vision). So effectively what the microfacet theory does is to assume that the small \(\delta A\) (which contains a distribution of microfacets) is just within the range where the surface scattering property is stable. When the microfacet theory says something about a particular point \(p\), it is really saying something about \(\delta A\).

This way of modeling and thinking is pervasive in radiometry, which uses differential and integral equations and thus inherently assumes that the radiation field under modeling is continuous. That is not true. Take irradiance as an example. The average irradiance of a surface changes dramatically at the microscopic level when we initially reduce the surface area, because the photon distribution over a large area is likely very non-uniform. When the surface area is sufficiently small, the number of photons hitting the surface will change proportionally with the surface area, because at that scale the photon distribution is roughly uniform. This is the scale at which irradiance is defined. But if we keep making the area smaller and smaller, the number of photons hitting a tiny area will, again, undergo wild fluctuations depending on whether there are photons in the area or not — photons are discrete packets of energy. We will see another example shortly in volume scattering, where we use a small volume of discrete particles to build a model for radiative energy transfer, which we then apply to any given point in a continuous volume.
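The scale argument can be made concrete with a toy Monte Carlo (ours, under the assumption that photon arrivals on a patch follow a Poisson process): the relative fluctuation of the photon count is roughly \(1/\sqrt{\bar n}\), negligible when the patch catches many photons but wild when it catches only a few.

```python
import math, random

def poisson(rng, lam):
    """Knuth's method for sampling a Poisson count (fine for moderate lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def relative_fluctuation(mean_photons, trials=5000, seed=0):
    """Std/mean of the photon count hitting a patch whose expected
    count is mean_photons; theory predicts ~ 1/sqrt(mean_photons)."""
    rng = random.Random(seed)
    counts = [poisson(rng, mean_photons) for _ in range(trials)]
    mu = sum(counts) / trials
    var = sum((c - mu) ** 2 for c in counts) / trials
    return math.sqrt(var) / mu

print(relative_fluctuation(400.0))  # ≈ 0.05: irradiance is well defined here
print(relative_fluctuation(4.0))    # ≈ 0.5: "irradiance" fluctuates wildly
```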

Orthogonal to the discussion above is the limitation that microfacet models do not account for surface roughness on the scale of the light wavelength. In the regime where the length of each microfacet is comparable to the light wavelength, diffraction takes place. As a result, reflection no longer follows Snell’s law and becomes wavelength dependent. In fact, this is how we get iridescence; in engineering, people make diffraction gratings that take advantage of this wavelength dependency to disperse light of different wavelengths.

11.3 Measuring Spectral Reflectance and BRDF

This section discusses the principles and practices of measuring the spectral reflectance or spectral BRDF. It is important to note that the measured reflectance is not necessarily attributable only to surface scattering, because the measurement setup does not care what the material being measured is. If SSS plays a role (e.g., in translucent materials), the resulting reflectance data would include the contribution from volume scattering, too.

Worse, for these materials not all the SSS contributions are captured by the measurement geometry, since some back-scattered photons exit at other surface points and are never captured by the detector. So the measurement is neither complete nor sound for materials where back-scattered photons contribute to their reflectance.

11.3.1 Measuring Spectral Reflectance

How do we know the spectral reflectance (transmittance) of a material? We measure it. This is easier said than done. We will focus on the reflectance measurement here; transmittance is measured similarly, except that you measure from the side opposite the illuminant rather than the same side. Sharma (2003, chap. 1.11.4), Trussell and Vrhel (2008, chap. 8.7), and Reinhard et al. (2008, chap 6.8) have overviews of various measurement devices that might be helpful.

The Importance of Measurement Geometry

Consider Figure 7.2 (a) again. The illuminant emits lights everywhere, but what matters is the light incident on the point \(p\) the viewer is currently gazing at; of course, the incident lights could come from everywhere else in the space, not just a particular illuminant. Similarly, \(p\) could potentially scatter lights everywhere over the hemisphere (through surface scattering and/or SSS), but it is the small beam of light that enters the viewer’s eye that matters. In order to measure the reflectance that is relevant to this particular illumination-viewing geometry, we need to 1) measure all the illuminating power that hits \(p\) and 2) measure the scattered light from \(p\) only along the viewing direction.

You can imagine that if we change the illumination to be, say, a diffuse lighting where there is an equal amount of light hitting \(p\) from all directions, the reflectance would be different, and it would be a perfectly relevant reflectance measure to report. If you have not, next time when you visit an art museum, pay attention to how the lighting system is carefully set up to bring out the best viewing experience (while also considering conservation); you ideally want the reflectance measurement of an artifact to simulate the viewing lighting.

Single Reflectance Measurement

Figure 11.3: (a): Four different illumination-viewing geometries to measure the reflectance of a material; from Judd and Wyszecki (1975, fig. 2.11). (b): A spectrophotometer, which takes two spectroradiometric measurements of the standard material with a known reflectance and a test material to calculate the spectral reflectance of the test material; from Sharma (2003, fig. 1.33). (c): A spectroradiometer design, which measures the spectral power distribution of a light source (self-luminous or scattering) using a prism; from Judd and Wyszecki (1975, fig. 2.1). (d): Another way to implement the spectroradiometer that uses diffraction grating to disperse incident light; from Reinhard et al. (2008, fig. 6.22).

In general, there really is no single reflectance number we can associate with a material. There are two ways to approach this. A common approach is to set up the measurement geometry so that it is close to an actual viewing experience. Figure 11.3 (a) shows four common settings. Some might illuminate the material from 0\(^{\circ}\) (assuming the direction of the surface normal has an angle of 0\(^{\circ}\)) and then measure the scattered lights at 45\(^{\circ}\); others can illuminate the material using diffuse illumination and measure the reflectance at 0\(^{\circ}\) (Judd and Wyszecki 1975, p. 122–25; Reinhard et al. 2008, chap. 6.8.2; Li 2003, chap. 2.2.2).

To get a reflectance spectrum, we need to know the reflectance at each sampled wavelength. There are multiple ways to go about measuring the spectral information. For instance, we can place a monochromator or a set of optical filters between the illuminant and the material so that we can control the wavelength of the light that is incident on the material.

Alternatively, we can change the detector to measure spectral information. We can use a dispersive medium such as a prism, shown in Figure 11.3 (c), or a diffraction grating, shown in Figure 11.3 (d), to separate the scattered light into different wavelengths and measure them individually. A detector that is capable of measuring the spectral radiometric quantities (e.g., the spectral power distribution) is called a spectroradiometer.

The raw detector readings of a spectroradiometer are usually not the absolute radiometric quantity of interest. The raw recording is, instead, roughly proportional to the radiometric quantity up to a wavelength-dependent scaling factor \(SSF(\lambda)\), which is usually called the detector’s spectral sensitivity function or the responsivity function, which we will study carefully in Section 16.5. \(SSF(\lambda)\) can be calibrated offline, and that allows us to turn a detector’s raw recording into the corresponding absolute radiometric quantity.

We take a spectroradiometric measurement of the illumination hitting the material and that of the scattered light of interest; the ratio is the spectral reflectance \(\rho(\lambda)\):

\[ \rho(\lambda) = \frac{\Phi_s(\lambda)SSF(\lambda)}{\Phi_i(\lambda)SSF(\lambda)} = \frac{\Phi_s(\lambda)}{\Phi_i(\lambda)}. \]

We can see that for reflectance measurement, the exact values of \(SSF(\lambda)\) are immaterial. A curious question is that, while the detector can measure \(\Phi_s(\lambda)\), what measures \(\Phi_i(\lambda)\)? One strategy is to, offline, place the same detector where the material is and directly measure \(\Phi_i(\lambda)\) there.

Another, perhaps much more common and standard, way to measure spectral reflectance is to use something called a spectrophotometer. This method does not need to know \(\Phi_i(\lambda)\), but it requires a reference sample with a known spectral reflectance. This is shown in Figure 11.3 (b). It takes two spectroradiometric measurements under the identical illumination: one for the test material and the other for the standard/reference sample. The spectral reflectance of the test material \(\rho_t(\lambda)\) is given by:

\[ \rho_t(\lambda) = \frac{m_t(\lambda)}{m_s(\lambda)}\rho_s(\lambda), \]

where \(\rho_s(\lambda)\) is the known spectral reflectance of the standard/reference sample, and \(m_s(\lambda)\) and \(m_t(\lambda)\) are the raw detector readings of the standard and the test material at wavelength \(\lambda\), respectively. We can see that the spectrum of the illumination does not matter1. Sometimes \(\frac{m_t(\lambda)}{m_s(\lambda)}\) is called the spectral reflectance factor of the test material if the reference material is perfectly diffuse (Judd and Wyszecki 1975, p. 93).
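The spectrophotometer calculation is just a per-wavelength ratio. A minimal sketch (plain Python; the readings and reference reflectances below are made-up illustrative numbers, not real measurements):

```python
def spectral_reflectance(m_test, m_ref, rho_ref):
    """Spectrophotometer principle: two raw readings under identical
    illumination plus the reference's known reflectance yield the test
    reflectance; the illumination spectrum and detector SSF cancel
    in the per-wavelength ratio."""
    return [mt / ms * rr for mt, ms, rr in zip(m_test, m_ref, rho_ref)]

# Hypothetical raw readings at a few sampled wavelengths
m_ref   = [80.0, 100.0, 90.0]   # reference sample, known rho_ref
m_test  = [20.0,  60.0, 72.0]   # test material, identical illumination
rho_ref = [0.98,  0.98, 0.97]   # near-perfect white reference

print([round(r, 3) for r in spectral_reflectance(m_test, m_ref, rho_ref)])
# [0.245, 0.588, 0.776]
```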

In practice, the reference measurement can be done separately rather than simultaneously with the test material to reduce the device form factor, and the reference measurement data can be tabulated to save measurement time.

One note on terminology: while a spectroradiometer is used to measure the spectral radiometric quantities (e.g., spectral radiance), a spectrophotometer does not measure the spectral photometric quantities (e.g., spectral luminance); instead, it measures the spectral reflectance. This is standardized in American Society for Testing and Materials (ASTM) E284-13b (ASTM International 2013) (along with other terminologies related to material properties and measurement instruments).

The nice thing about the approach described so far is that you get a single reflectance spectrum, but be very careful about the measurement geometry under which the spectrum is obtained. There is no guarantee that a particular measurement geometry corresponds to the illumination/observation geometry of an actual viewing experience, so use the reported reflectance data with that caveat in mind.

Goniometric Measurements

A more general approach is to measure the reflectance at every illumination-viewing direction combination. For that we need what is called a goniospectrophotometer2. There are also gonioradiometers, which measure the spectral radiometric quantities from different viewing directions. Figure 11.4 shows one such setup. The illuminant/light source incident on the material comes through the small aperture \(I\), and the scattered light from the material is captured by a detector (e.g., a photodiode or, essentially, a single-pixel image sensor) through another aperture \(V\). Transmittance can be similarly measured by placing the detector at the other side of the material.

Figure 11.4: A setup for measuring goniometric reflectance and BRDF. Both the illuminant (source) and the detector (photometer) can vary in two degrees of freedom, \((\theta_i, \phi_i)\) for the source and \((\theta_s, \phi_s)\) for the detector, covering different illuminant-scattering combinations. Adapted from Judd and Wyszecki (1975, fig. 3.4).

The idea is to sample, say, \(N\) illumination directions (parameterized by the azimuth \(\phi_i\) and polar angle \(\theta_i\)) and \(M\) scattering directions (parameterized by the azimuth \(\phi_s\) and polar angle \(\theta_s\)), and obtain \(M \times N\) measurements, each of which corresponds to one particular combination of the illuminant and scattering directions. For convenience, commercial goniometric measurements usually use a beam splitter to simultaneously measure the illumination and scattering flux (Lanevski, Manoocheri, and Ikonen 2022; Rabal et al. 2012).

Denote the area on the material being measured \(A_r\). The size of the area is dictated by the illumination aperture \(I\). Assuming the power received by \(A_r\) from the illuminant through \(I\) is \(\Phi_i(\lambda, A_r, I)\), and the power scattered by \(A_r\) and collected by the detector through the aperture \(V\) is \(\Phi_s(\lambda, A_r, V)\), the reflectance of the small area \(A_r\) is simply given by:

\[ \rho(\lambda, A_r) = \frac{\Phi_s(\lambda, A_r, V)}{\Phi_i(\lambda, A_r, I)}. \]

As the two apertures become very small, \(A_r\) becomes very small, and the incident and outgoing solid angles become very small, too. The resulting reflectance measurement can be thought of as estimating the directional-directional reflectance (Section 10.2). But in general you can see how the reflectance number can easily change when we slightly vary the hardware setup. For instance, if we increase the detector aperture \(V\), the detected power will increase, and that would increase the resulting reflectance. If we increase the illumination aperture \(I\), the resulting reflectance would be for a larger material area \(A_r\).

One can also use a reference material (with known reflectance spectra at the same measurement geometries) to avoid measuring \(\Phi_i(\lambda, A_r, I)\), similar to how a spectrophotometer is operated.

11.3.2 Measuring BRDF

Reflectance is integrated from the BRDF, which suggests that the latter is a more fundamental measure of material property. The same setup shown in Figure 11.4 can also be used to measure the BRDF, in which case the setup is called a goniospectroreflectometer. We will take the same measurements, but with a bit more calculation we can estimate the BRDF of the material, rather than just the (goniometric) reflectance spectra.

Let us be precise about the setup (omitting the \(\lambda\) term in all relevant quantities).

  • We are illuminating a small area \(A_r\) through the illumination aperture \(I\).
  • The center of \(A_r\) is an infinitesimal point \(p\), which along with \(I\) subtends a solid angle \(\Oi(p, I)\).
  • \(\oi\) is the direction between \(p\) and the center of \(I\).
  • \(A_r\) scatters lights toward the detector through the detector aperture \(V\), which subtends a solid angle of \(\Os (p, V)\) with \(p\).
  • \(\os\) is the direction between \(p\) and the center of \(V\).
  • The power incident on \(A_r\) is \(\Phi_i(A_r, I)\), and the portion of the power scattered by \(A_r\) and collected by the detector is \(\Phi_s(A_r, V)\).
  • We are interested in calculating the BRDF \(f_r(p, \omega_s, \omega_i)\).

Recall that \(f_r(p, \omega_s, \omega_i)\) is defined as the ratio of the differential radiance leaving \(p\) toward \(\os\) to the differential irradiance incident on \(p\) due to the light coming from an infinitesimal solid angle \(\doi\) (omitting \(\lambda\) in all equations for simplicity):

\[ f_r(p, \omega_s, \omega_i) = \frac{\text{d}L_s(p, \omega_s)}{\text{d}E_i(p, \omega_i)} \approx \frac{L_s(p, \omega_s)}{E_i(p, \Oi(p, I))}. \tag{11.7}\]

There is no way we can illuminate a point \(p\) through an infinitesimal solid angle \(\doi\); all we can do is illuminate a small cone of directions \(\Oi(p, I)\). We can then calculate the average BRDF over all the incident directions in \(\Oi(p, I)\) (i.e., assuming the BRDF is the same for all the incident directions in \(\Oi(p, I)\)) using the approximation in Equation 11.7, which we derived in Section 10.1.1.

How do we calculate \(E_i(p, \Oi(p, I))\)? There is no way we can illuminate and measure the irradiance of an infinitesimal point \(p\); all we can do is to illuminate a small area \(A_r\) and assume that the irradiance received is constant anywhere inside \(A_r\), so we have:

\[ E_i(p, \Oi(p, I)) \approx \frac{\Phi_i(A_r, I)}{A_r}. \tag{11.8}\]

Now how do we get \(L_s(p, \omega_s)\)? For this we turn to the detector side. Using basic radiometry, \(\Phi_s(A_r, V)\) is expressed in Equation 11.9, where \(p'\) and \(\os'\) are dummy variables, \(\theta_s'\) is associated with \(\os'\), and \(\Oi(p', V)\) is associated with \(p'\) (c.f., \(p\) refers to a specific point on \(A_r\), and \(\os\) and \(\Os(p, V)\) refer to physical quantities associated specifically with \(p\)):

\[ \Phi_s(A_r, V) = \int^{A_r} \int^{\Os(p', V)} L_s(p', \omega_s') \cos{\theta_s'} \text{d}\os' \text{d}p'. \tag{11.9}\]

We assume that the radiance of any ray between \(A_r\) and the detector aperture \(V\) is constant and takes the value of \(L_s(p, \os)\); this gets us Equation 11.10:

\[ \Phi_s(A_r, V) \approx \int^{A_r} \int^{\Os(p, V)} L_s(p, \omega_s) \cos{\theta_s} \text{d}\os' \text{d}p'. \tag{11.10}\]

Since \(L_s(p, \os)\) and \(\cos\theta_s\) are invariant to \(\os'\) and \(p'\), they can be taken out of the two integrations, and this gives us Equation 11.11: \[ \Phi_s(A_r, V) \approx L_s(p, \omega_s) \cos{\theta_s} \int^{A_r} \int^{\Os(p, V)} \text{d}\os' \text{d}p'. \tag{11.11}\]

Calculating the two integrals in Equation 11.11 gives us Equation 11.12, where \(C_1\) and \(C_2\) are constant. Given the boundary condition that \(\Phi_s(\cdot)\) has to be 0 when \(\Os(\cdot)\) or \(A_r\) is 0 (if the detector aperture is closed or the illumination area vanishes, no scattered light will be detected), we know \(C_1=C_2=0\).

\[ \Phi_s(A_r, V) \approx L_s(p, \omega_s) \cos{\theta_s} (A_r (\Os(p, V) + C_1) + C_2). \tag{11.12}\]

Plugging in Equation 11.7, we get:

\[ \Phi_s(A_r, V) \approx f_r(p, \omega_s, \omega_i)E_i(p, \Oi(p, I)) \cos{\theta_s} A_r \Os(p, V). \tag{11.13}\]

Plugging in Equation 11.8, we get:

\[ \Phi_s(A_r, V) \approx f_r(p, \omega_s, \omega_i) \frac{\Phi_i(A_r, I)}{A_r} \cos{\theta_s} A_r \Os(p, V). \tag{11.14}\]

Therefore, the final BRDF is given by:

\[ f_r(p, \omega_s, \omega_i) = \frac{\Phi_s(A_r, V)}{\Phi_i(A_r, I) \cos{\theta_s} \Os(p, V)}. \tag{11.15}\]

Rearranging the terms, we get a seemingly more complex expression:

\[ f_r(p, \omega_s, \omega_i) = \frac{[\Phi_s(A_r, V)/(A_r \cos\theta_s)]/\Os(p, V)}{\Phi_i(A_r, I)/A_r}. \tag{11.16}\]

Equation 11.16 admits a simple interpretation. The denominator is the average irradiance incident on \(p\) through a small solid angle \(\Oi(p, I)\) (see Equation 11.8), and the numerator is the average radiance leaving \(p\)3. Taking the ratio of the two matches our intuition of the average BRDF: radiance over irradiance (received over a small solid angle).

If we assume the surface to be Lambertian, the BRDF is then \(1/\pi\) for any \(\os\) (under a given \(p\) and \(\oi\); see Equation 10.17) assuming no loss of energy. This means:

\[ \Phi_s(A_r, V) \propto \cos\theta_s. \]

That is, the flux reading weakens with the scattering angle \(\theta_s\) by a factor of \(\cos\theta_s\). Is this surprising? It should not be if you recall our discussion of radiant intensity (Equation 8.7). If we assume that every point on \(A_r\) emits the same amount of flux to the same solid angle (through the aperture \(V\)), the radiant intensity of \(p\) toward \(\os\) is proportional to \(\frac{\Phi_s(A_r, V)}{A_r \Os(p, V)}\) and, thus, to \(\cos\theta_s\), which matches our earlier conclusion of how the radiant intensity of a Lambertian emitter/scatterer decays with \(\theta\).
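Equation 11.15 and the Lambertian check above can be sketched numerically (plain Python; the flux and aperture values are made up for illustration):

```python
import math

def brdf_estimate(phi_s, phi_i, theta_s, omega_s):
    """Equation 11.15: average BRDF from one goniometric measurement.
    phi_s: power collected through the detector aperture (W),
    phi_i: power incident on the illuminated area (W),
    theta_s: detector polar angle (rad), omega_s: aperture solid angle (sr)."""
    return phi_s / (phi_i * math.cos(theta_s) * omega_s)

# Lambertian sanity check: the collected flux decays as cos(theta_s),
# and the estimator recovers BRDF = 1/pi at every detector angle.
phi_i, omega_s = 2.0, 1e-3
for theta_s in (0.0, math.radians(30), math.radians(60)):
    phi_s = phi_i * math.cos(theta_s) * omega_s / math.pi
    print(round(brdf_estimate(phi_s, phi_i, theta_s, omega_s), 4))  # 0.3183
```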

Anytime you measure something, the measurement is subject to noise and uncertainty. For instance, in the case of gonioreflectometer measurement, the angular positioning of the illuminant and detector might not be accurate, the detector itself is subject to all sorts of measurement noise (which we will study in the image sensor lecture), and there might be stray lights that enter the detector. Quantifying the sources of uncertainty and, even better, correcting for them is an important part of reflectance/BRDF measurement (Lanevski, Manoocheri, and Ikonen 2022; Rabal et al. 2012).


  1. An alternative, and mathematically equivalent, method is that we measure 1) the spectrum of the illumination (e.g., using a spectroradiometer) and 2) the camera SSF; then given the detector reading we can calculate the spectral reflectance (raw reading = illumination \(\times\) reflectance \(\times\) SSF). While we do not have to measure the reflectance of the reference sample, this method comes with the extra work of measuring the illumination and calibrating the camera SSF, so it is less preferred.↩︎

  2. “gonio-” comes from the Greek word \(\gamma\omega\nu\iota\alpha\) (gōnía), which means angle.↩︎

  3. \(\Phi_s(A_r, V)/(A_r \cos\theta_s)\) in the numerator gives us the average irradiance leaving \(p\) (note that this irradiance is defined at the surface perpendicular to \(\os\), hence the \(\cos\theta_s\) term), which is divided by \(\Os(p, V)\) to give us the average radiance leaving \(p\).↩︎