9 Surface Scattering
When a group of photons arrives at a material surface, some of the photons might be immediately turned away (i.e., reflected) while others might penetrate into the material (i.e., refracted). The refracted portion is scattered further inside the material, giving rise to subsurface scattering, which we will discuss in the next chapter. This chapter is concerned with surface scattering, ignoring the contribution of subsurface scattering to an object’s appearance.
There are two properties we care about in surface scattering: the directions of the reflection and the (spectral) energy along each direction. The direction is important because, as far as rendering, human vision, or camera imaging are concerned, only the photons that will eventually be captured by a detector matter. The (spectral) energy is important because it dictates the perceived color. These two properties are captured by what is known as the BRDF, the protagonist of this chapter. We will then derive a few very useful equations and properties based on the BRDF.
\[ \def\oi{{\omega_i}} \def\os{{\omega_s}} \def\Oi{{\Omega_i}} \def\Os{{\Omega_s}} \def\d{{\text{d}}} \def\D{{\Delta}} \def\do{{\d\omega}} \def\Do{{\Delta\omega}} \def\doi{{\d\omega_i}} \def\dos{{\d\omega_s}} \def\Doi{{\D\omega_i}} \def\Dos{{\D\omega_s}} \def\H{{\mathbf{H}}} \newcommand{\cM}{\mathcal{M}} \newcommand{\cL}{\mathcal{L}} \]
9.1 BRDF
Generally, the energy distribution of the surface scattering is captured by the Bidirectional Reflectance Distribution Function (BRDF) (Nicodemus et al. 1977). Informally, it tells us how the incident energy from a particular direction is distributed to different exiting directions. The BRDF has three parameters: a surface point \(p\), the direction of light incident on \(p\), denoted \(\oi\), and the direction of light leaving \(p\), denoted \(\os\). So the BRDF is usually written as \(f_r(p, \os, \oi)\).
The way to understand BRDF \(f_r(p, \os, \oi)\) is to consider the following. \(L(p, \os)\), i.e., the radiance leaving \(p\) toward \(\os\), is dependent on the light incident on \(p\). When the incident light on \(p\) comes from only the direction \(\oi\), the irradiance at \(p\) is zero, since the solid angle of a single direction \(\oi\) is zero, so naturally \(L(p, \os)\) is 0 (assuming there is no other light hitting \(p\)). When \(p\) receives light from a non-zero solid angle of directions \(\Delta\oi\) (centered around \(\oi\)), the irradiance of \(p\) is increased by \(\Delta E(p, \oi)\). At the same time, due to this increase in incident light, \(L(p, \os)\) is no longer zero; the increase in the radiance leaving \(p\) over \(\os\) is denoted \(\Delta L(p, \os)\).
As we increase \(\Delta \oi\), both \(\Delta E(p, \oi)\) and \(\Delta L(p, \os)\) increase. BRDF is defined as the ratio of the two increments when \(\Delta \oi\) approaches 0 (when the radiance along all directions in \(\Delta \oi\) can be thought of as a constant):
\[ \begin{align} f_r(p, \os, \oi) = \lim_{\Delta \oi \rightarrow 0}\frac{\Delta L(p, \os)}{\Delta E(p, \oi)} = \frac{\text{d}L(p, \os)}{\text{d}E(p, \oi)} = \frac{\text{d}L(p, \os)}{L(p, \oi)\cos\theta_i \text{d}\oi}. \end{align} \tag{9.1}\]
9.1.1 A Useful Approximation
Now assume that we illuminate \(p\) through a finite, but small, solid angle \(\Oi\). Turning the differential equation into an integral equation, we get:
\[ \begin{align} L(p, \os) = \int^{\Oi} f_r(p, \os, \oi) \text{d}E(p, \oi). \end{align} \tag{9.2}\]
Using the definition of radiance in Equation 8.4, we get: \[ \begin{align} L(p, \os) = \int^{\Oi} f_r(p, \os, \oi) L(p, \oi)\cos\theta_i \text{d}\oi. \end{align} \tag{9.3}\]
If we assume that the BRDF is a constant over all the directions in \(\Oi\), Equation 9.3 is simplified to:
\[ \begin{align} L(p, \os) \approx f_r(p, \os, \oi) \int^{\Oi} L(p, \oi)\cos\theta_i \text{d}\oi. \end{align} \tag{9.4}\]
The integration in Equation 9.4 has no analytical solution, since we do not know the analytical form of \(L(p, \oi)\), but we know the integration is just another way of expressing the total irradiance incident upon \(p\) over \(\Oi\), which is denoted as \(E(p, \Oi)\). This gets us Equation 9.5:
\[ \begin{align} L(p, \os) \approx f_r(p, \os, \oi) E(p, \Oi). \end{align} \tag{9.5}\]
Thus, we can approximate the BRDF as: \[ \begin{align} f_r(p, \os, \oi) &\approx \frac{L(p, \os)}{E(p, \Oi)}. \end{align} \tag{9.6}\]
Ultimately, we can see from Equation 9.6 that the BRDF \(f_r(p, \os, \oi)\) can also be calculated as the ratio between the absolute radiance \(L(p, \os)\) and the absolute irradiance \(E(p, \Oi)\) illuminated from a very small, but finite, solid angle \(\Oi\). Another way to interpret this is that the so-calculated BRDF is the average BRDF over \(\Oi\). This derivation is useful for actually measuring a BRDF, which we will discuss in Section 9.6.2; there we will have no choice but to use a non-zero solid angle for illumination, because physically we cannot illuminate a point through an infinitesimal solid angle \(\doi\).
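To make the averaging interpretation concrete, here is a minimal numerical sketch in Python. The BRDF `f_r` below is entirely made up (it varies with the incident polar angle so the averaging is visible), and the cone is treated as a 1D band in \(\theta\) for simplicity; the sketch confirms that the measured ratio \(L/E\) lies between the BRDF's extreme values over the cone.

```python
import math

# A toy BRDF that depends only on the incident polar angle; purely
# illustrative, chosen so that it varies across the illumination cone.
def f_r(theta):
    return 0.5 * math.cos(theta)

L_in = 1.0                                   # constant incident radiance
lo, hi = math.radians(28), math.radians(32)  # small cone in polar angle
n = 10_000
dtheta = (hi - lo) / n
thetas = [lo + (k + 0.5) * dtheta for k in range(n)]

# Irradiance and outgoing radiance, integrating over the cone
# (a band in theta for simplicity: d_omega ~ sin(theta) d_theta).
E = sum(L_in * math.cos(t) * math.sin(t) * dtheta for t in thetas)
L_out = sum(f_r(t) * L_in * math.cos(t) * math.sin(t) * dtheta for t in thetas)

f_est = L_out / E   # the measured BRDF, per Equation 9.6
# The estimate is an average: it sits between the BRDF extremes on the cone.
print(f_r(hi) < f_est < f_r(lo))   # True
```

Because the cone is small, the estimate is very close to the BRDF at the cone's center; as the cone widens, the estimate increasingly averages over the BRDF's variation.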
9.1.2 Isotropic Material
A 3D direction \(\omega\) expressed in the Cartesian coordinate system can also be expressed by two 2D planar angles in the spherical coordinate system: the polar angle \(\theta\) and the azimuthal angle \(\phi\). So BRDF can also be parameterized as \(f_r(p, \theta_s, \phi_s, \theta_i, \phi_i)\). A material is isotropic if its BRDF satisfies \(f_r(p, \theta_s, \phi_s, \theta_i, \phi_i) = f_r(p, \theta_s, \phi_s+x, \theta_i, \phi_i+x)\) for any \(x\). An intuitive way to think of an isotropic material is this: if you pick a point \(p\) and rotate the material about the normal vector at \(p\), the color of \(p\) does not change. This is because rotation about the normal vector keeps \(\theta_i\) and \(\theta_s\) unchanged and varies \(\phi_i\) and \(\phi_s\) by the same amount.
The nice thing about an isotropic BRDF is that it can be parameterized with one fewer degree of freedom: \(f_r(p, \theta_s, \phi_s - \phi_i, \theta_i)\). This is because it is \((\phi_s - \phi_i)\), rather than the specific values of \(\phi_s\) or \(\phi_i\), that matters.
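This invariance is easy to sanity-check with a hypothetical isotropic BRDF (the functional form below is made up purely for illustration): rotating both azimuths by the same amount leaves the value unchanged.

```python
import math

# Hypothetical isotropic BRDF: it depends on the azimuth angles only
# through their difference (phi_s - phi_i).
def f_iso(theta_s, phi_s, theta_i, phi_i):
    return math.cos(theta_s) * math.cos(theta_i) * (1.0 + math.cos(phi_s - phi_i))

a = f_iso(0.3, 1.0, 0.7, 2.0)
x = 0.9                                # arbitrary rotation about the normal
b = f_iso(0.3, 1.0 + x, 0.7, 2.0 + x)  # rotate both azimuths by x
print(abs(a - b) < 1e-12)              # True: the BRDF is unchanged
```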
9.2 Reflectance and Albedo
The BRDF does not have to be a value between 0 and 1. Let’s say that there is 100 J of energy incident on a point coming from a solid angle \(\Delta \oi\). That amount of energy is distributed across all the outgoing directions in the hemisphere, which subtends a solid angle of \(4\pi/2 = 2\pi\). So on average the energy exiting per unit solid angle is \(\frac{100}{2\pi} \approx 15.9\) J/sr, which clearly is greater than 1. This is not surprising, since the BRDF is ultimately a density measure, a distribution, which is most meaningful when it is integrated to calculate some quantity. Integrating the BRDF gives a percentage/fraction measure between 0 and 1, i.e., reflectance, which we will discuss next.
9.2.1 Directional-Hemispherical Reflectance
For the energy to be conserved, the total outgoing energy at any point must not exceed that of the incident energy received by that point. Assume that a point \(p\) receives an irradiance \(\d E_i\) from a direction \(\oi\) over an infinitesimal solid angle \(\doi\), and the outgoing radiance along the direction \(\os\) due to that irradiance is \(f_r(p, \os, \oi)\d E_i\). Then the outgoing irradiance leaving \(p\) over an infinitesimal solid angle \(\dos\) around \(\os\) would be \(f_r(p, \os, \oi)\d E_i\cos\theta_s\dos\). If we integrate over all the outgoing directions \(\Omega\), we get the total outgoing irradiance \(\d E_o\), which must not exceed the incident irradiance \(\d E_i\):
\[ \begin{align} \d E_o = \int^{\Omega} \d E_i f_r(p, \os, \oi) \cos\theta_s\text{d}\os. \end{align} \tag{9.7}\]
\(\d E_i\) is independent of \(\os\), so it can be hoisted out of the integration. Therefore, we have the following equation, which holds for any arbitrary incident direction \(\oi\):
\[ \begin{align} \int^{\Omega}f_r(p, \os, \oi)\cos\theta_s\text{d}\os = \frac{\d E_o}{\d E_i} = \rho_{dh}(p, \oi) \leq 1. \end{align} \tag{9.8}\]
\(\rho_{dh}\) is defined as the ratio between \(\d E_o\) and \(\d E_i\). When \(\Omega\) is the hemisphere, \(\rho_{dh}\) is called the directional-hemispherical reflectance in the computer vision and graphics literature, and is interpreted as the percentage of energy scattered by a point over the entire hemisphere given the incident light from a particular direction. Clearly, \(\rho_{dh}\) is a function of both \(p\) and \(\oi\) and takes a value between 0 and 1.
9.2.2 Hemispherical-Directional Reflectance
Since we are dealing with geometric optics, the Helmholtz reciprocity holds:
\[ \begin{align} f_r(p, \os, \oi) = f_r(p, \oi, \os), \end{align} \]
which means the energy conservation can also be expressed as:
\[ \begin{align} \int^{\Omega}f_r(p, \os, \oi)\cos\theta_i\text{d}\oi = \rho_{hd}(p, \os) \leq 1, \end{align} \tag{9.9}\]
where \(\rho_{hd}\) is called the hemispherical-directional reflectance when \(\Omega\) is the hemisphere. \(\rho_{hd}\), a function of \(p\) and \(\os\), is interpreted as the percentage of energy reflected toward a particular direction \(\os\) given the incident energy over the entire hemisphere.
Equation 9.9 can be derived by first rewriting Equation 9.8 as \(\int^{\Omega}f_r(p, \oi, \os)\cos\theta_s\text{d}\os \leq 1\) (using the reciprocity) followed by switching \(\os\) and \(\oi\) (simply a change of notation). This derivation suggests that \(\rho_{hd}(p, \omega) = \rho_{dh}(p, \omega)\) for any direction \(\omega\), a natural consequence of the reciprocity.
9.2.3 Albedo and Hemispherical-Hemispherical Reflectance
We can also describe the relationship between all the outgoing irradiance \(E_o\) of a point over a solid angle \(\Os\) due to all the incident irradiance \(E_i\) over a solid angle \(\Oi\):
\[ \begin{align} E_o &= \int^{\Omega_s} \Big(\int^{\Omega_i} f_r(p, \os, \oi) L(p, \oi) \cos\theta_i\text{d}\oi\Big) \cos\theta_s \dos, \\ E_i &= \int^{\Omega_i} L(p, \oi) \cos\theta_i\text{d}\oi \end{align} \tag{9.10}\]
Due to energy conservation, we have:
\[ \begin{align} \rho_{hh}(p) = \frac{E_o}{E_i} \leq 1. \end{align} \tag{9.11}\]
Equation 9.11 defines \(\rho_{hh}\), which is called the hemispherical-hemispherical reflectance when both \(\Omega_i\) and \(\Omega_s\) are hemispheres. \(\rho_{hh}\) has another name: albedo.
When \(f_r(p, \os, \oi)\) is independent of (invariant to) \(\oi\) and \(\os\), i.e., when \(p\) is an ideal Lambertian surface (see Section 9.4), Equation 9.10 can be re-written as:
\[ \begin{align} E_o = \int^{\Omega_s} f_r(p, \os, \oi) (\int^{\Omega_i} L(p, \oi) \cos\theta_i\text{d}\oi) \cos\theta_s \dos. \end{align} \tag{9.12}\]
Plugging in the definition of \(E_i\) from Equation 9.10, we have: \[ \begin{align} E_o = \int^{\Omega_s} f_r(p, \os, \oi) E_i \cos\theta_s \dos. \end{align} \tag{9.13}\]
Since \(E_i\) is independent of \(\os\), we have:
\[ \begin{align} E_o = E_i \int^{\Omega_s} f_r(p, \os, \oi) \cos\theta_s \dos. \end{align} \tag{9.14}\]
Using the definition of \(\rho_{dh}\) in Equation 9.8, we have:
\[ \begin{align} E_o = E_i \rho_{dh}(p, \oi). \end{align} \tag{9.15}\]
Comparing Equation 9.15 and Equation 9.11, we can see that for a Lambertian surface the albedo (\(\rho_{hh}\)) is equivalent to \(\rho_{dh}\) and \(\rho_{hd}\), but this relationship is not true in general.
We can also show that for a Lambertian surface, the BRDF is the constant \(\frac{\rho_{hh}}{\pi}\). Starting from Equation 9.14 and using the assumption that \(f_r(p, \os, \oi)\) is independent of \(\os\):
\[ \begin{align} E_o = E_i \rho_{hh} &= E_i \int^{\Omega_s} f_r(p, \os, \oi) \cos\theta_s \dos \\ &= E_i f_r(p, \os, \oi) \int^{\Omega_s} \cos\theta_s \dos \\ &= E_i f_r(p, \os, \oi) \pi. \end{align} \tag{9.16}\]
Thus: \[ \begin{align} f_r(p, \os, \oi) = \frac{\rho_{hh}}{\pi}. \end{align} \tag{9.17}\]
The last step in Equation 9.16 uses the integral results that:
\[ \begin{align} \d\omega &= \sin\theta\d\theta\d\phi, \\ \int^{\Omega=2\pi} \cos\theta \d\omega &= \int^{2\pi}_0 \int^{\pi/2}_{0} \cos\theta \sin\theta\d\theta\d\phi \nonumber\\ &= 2\pi\int^{\pi/2}_{0} \cos\theta \sin\theta\d\theta \nonumber \\ &= \pi, \end{align} \]
when \(\Omega\) is the hemisphere.
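The hemisphere integral above is easy to verify numerically; a minimal midpoint-rule check in Python:

```python
import math

# Numerically verify the hemisphere integral of the foreshortening term:
# integral of cos(theta) d_omega over the hemisphere equals pi.
n = 100_000
dtheta = (math.pi / 2) / n
# inner integral over theta with the midpoint rule
inner = sum(math.cos(t) * math.sin(t) * dtheta
            for t in ((k + 0.5) * dtheta for k in range(n)))
total = 2 * math.pi * inner      # the phi integral contributes 2*pi
print(abs(total - math.pi) < 1e-6)   # True
```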
A fun exercise you can entertain yourself with is to show that if \(f_r\) is independent of \(\oi\), it must also be independent of \(\os\). An informal way to do so is the following. Since \(f_r(p, \os, \oi)\) is independent of \(\oi\), let’s rewrite it as \(g(p, \os)\). Now we invoke the reciprocity and rewrite \(f_r(p, \os, \oi)\) as \(g(p, \oi)\). The only way for \(g(p, \os) = g(p, \oi)\) is for \(g\) to be dependent only on \(p\).
Finally, one can also define the directional-directional reflectance, which is naturally a function of both the incident direction and the outgoing direction and is defined as the ratio between the outgoing irradiance and the incident irradiance when both the incident and outgoing solid angles approach 0.
The BRDF and the directional-directional reflectance are both sensitive to both the incident and outgoing directions. But the former is a density measure, whereas the latter is a fraction/percentage measure (all the other reflectance quantities are fraction measures, too). Integrating the BRDF over a finite set of directions gives us some reflectance measure. This is why the BRDF is defined as a radiance/irradiance ratio rather than a radiance/radiance or irradiance/irradiance ratio: it reflects the fact that the energy of a small cone of incident directions is distributed over all the directions over the hemisphere, and what we care to characterize is the distribution of the incident energy over all outgoing directions.
9.3 The Rendering Equation
Given the BRDF, we can estimate the outgoing radiance of a point given its illumination using the well-known Rendering Equation.
The setup is that we have a surface on which there is a point \(p\) that is receiving light from a solid angle \(\Omega\). We are interested in calculating the exiting radiance leaving \(p\) toward an arbitrary direction \(\os\). The rendering equation formulates this calculation by:
\[ \begin{align} L(p, \os) = \int^{\Omega} f_r(p, \os, \oi) L(p, \oi) \cos\theta_i \text{d}\oi, \end{align} \tag{9.18}\]
where \(L(p, \os)\) is the outgoing radiance from \(p\) toward the direction \(\os\); \(\Omega\) is usually a hemisphere in surface scattering, since lights hitting a surface point can come from anywhere in the hemisphere, in which case Equation 9.18 is also called the reflection equation, indicating the fact that the equation governs surface reflection/scattering. The rendering equation was first introduced to computer graphics by Kajiya (1986) and Immel, Cohen, and Greenberg (1986) (albeit with slightly different formulations and the former being more general than the latter).
The rendering equation is exactly the same as Equation 9.3, so there is nothing more profound about the rendering equation than the definition of the BRDF: we are simply following the BRDF’s definition and turning the differential equation into an integral one. Intuitively, the way to understand this equation is that every ray that hits \(p\) makes some contribution toward the outgoing radiance \(L(p, \os)\), and the integration just accumulates all the contributions. In particular:

- \(L(p, \oi) \text{d}\oi\) is the incident irradiance of a differential solid angle \(\text{d}\oi\); note that the irradiance calculated here is defined with respect to a surface perpendicular to the direction of \(\oi\).
- \(L(p, \oi) \cos\theta_i \text{d}\oi\) applies Lambert’s cosine law and calculates the irradiance at the surface where \(p\) lies.
- \(f_r(p, \os, \oi)L(p, \oi) \cos\theta_i \text{d}\oi\) “transfers” the differential incident irradiance to the differential outgoing radiance toward \(\os\) through the BRDF function.
- The integration over all the incident directions calculates the total outgoing radiance given all the incident lights.
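The accumulation can be sketched with a minimal Monte Carlo estimate of Equation 9.18 for the one case where the answer is known analytically: a Lambertian BRDF (Equation 9.17) under constant incident radiance, where the integral evaluates to \(\rho L\). All constants below are illustrative.

```python
import math, random

random.seed(0)
rho, L_in = 0.5, 1.0
f_r = rho / math.pi              # Lambertian BRDF (Equation 9.17)

# Uniform hemisphere sampling: pdf = 1/(2*pi) per steradian, and for a
# uniformly sampled direction cos(theta) is uniform in [0, 1].
pdf = 1.0 / (2.0 * math.pi)
N = 200_000
acc = 0.0
for _ in range(N):
    cos_theta = random.random()
    acc += f_r * L_in * cos_theta / pdf
L_out = acc / N
# Analytically, L_out = (rho/pi) * L_in * pi = rho * L_in = 0.5.
print(abs(L_out - rho * L_in) < 0.01)   # True
```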
The rendering equation in theory allows us to calculate the entire light field, i.e., the radiance distribution in space, given an arbitrary \(p\) and \(\os\). Why is knowing the light field important? Recall Equation 8.8: knowing the light field allows us to synthesize any image or calculate the color of any object from any perspective.
It is, of course, much easier said than done when it comes to solving the rendering equation, which itself is worth multiple chapters in a computer graphics textbook. We will not get into it here; let’s just consider the following challenges. First, the integrand generally has no analytical form, so we will not be able to get an analytical solution to the integral equation. A common method is Monte Carlo integration, which samples the integrand at different points and estimates the integral from the samples.
Second, in a realistic environment, we need to solve the rendering equation recursively. Note how the radiance function shows up on both sides of the equation. Put another way, when using Monte Carlo integration to solve Equation 9.18 we need to sample the value of \(L(p, \oi)\) for a specific \(\oi\). How? We evaluate Equation 9.18 again, but this time treating \(\oi\) as the \(\os\), which means we invoke Monte Carlo integration again. You can see how this can quickly blow up the computation: the number of rays whose radiances we need to calculate grows exponentially as long as we need to sample more than one ray at each point. A big chunk of physically-based graphics is devoted to addressing this issue; the most commonly used strategy is called path tracing, for which Pharr, Jakob, and Humphreys (2023, Chpt. 13) is a great reference.
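The recursive structure can be sketched in a few lines if we strip away all geometry: in this entirely hypothetical scene, every bounce is Lambertian with albedo \(\rho\), and after a fixed number of bounces the path reaches a light of radiance \(L_e\). Each recursion level is a one-sample Monte Carlo estimate of Equation 9.18, and the expected result after three bounces is \(\rho^{3} L_e\).

```python
import math, random

random.seed(0)
rho, L_e = 0.5, 1.0
f_r = rho / math.pi              # Lambertian BRDF at every bounce
pdf = 1.0 / (2.0 * math.pi)      # uniform hemisphere sampling

def radiance(depth):
    # Base case: the path escapes to a light source of radiance L_e.
    if depth == 0:
        return L_e
    # One-sample Monte Carlo estimate of Equation 9.18: the incident
    # radiance is itself a recursive call, one bounce deeper.
    cos_theta = random.random()
    return f_r * radiance(depth - 1) * cos_theta / pdf

N = 100_000
est = sum(radiance(3) for _ in range(N)) / N
print(abs(est - rho ** 3 * L_e) < 0.01)   # expected value is rho^3 * L_e
```

Using one sample per bounce keeps the number of rays per path constant; sampling \(k > 1\) directions at each bounce would make the ray count grow as \(k^{\text{depth}}\), which is exactly the blow-up described above.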
Another way to think of this is that there are infinitely many paths through which light can propagate and be incident on a point. A global illumination method for rendering would attempt to track all these paths (e.g., through Monte Carlo methods). In contrast, a local illumination method is concerned with only a small subset of these paths, in which case we might be able to evaluate the rendering equation as a single-pass integration while avoiding recursion. For instance, we might consider lights only from direct light sources. We will see the counterpart of this exact situation in subsurface scattering/volume rendering in Section 10.4. For this reason, the rendering equation is sometimes called the light transport equation (LTE), because it in principle captures how light is transported in space.
An interesting, and approximate, global illumination method that avoids path tracing is the idea of environment map (Ramamoorthi 2009, Chpt. 3). It assumes that the light sources are so distant from the objects in the scene that all points in the scene receive the same incident radiance distribution. That is, \(L(p, \oi)\) in Equation 9.18 is a function of only \(\oi\) but not \(p\). We can then pre-compute (through path tracing for instance) or directly measure \(L(\oi)\) offline and store them in a data structure. For instance, we can use the equirectangular projection to store a discretized form of \(L(\oi)\), or use spherical harmonics to (approximately) store a parameterized form of \(L(\oi)\). Either way, the data structure that stores pre-computed \(L(\oi)\) is called an environment map, which we can load at rendering time, plug it into the rendering equation, and calculate the outgoing radiance by simply evaluating the integral.
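A discretized \(L(\oi)\) lookup can be sketched as follows; the sky function and grid resolution are made-up stand-ins for a measured or pre-computed map.

```python
import math

# A toy equirectangular environment map: a (theta, phi) grid storing a
# synthetic distant radiance L(omega). Real maps would be measured or
# pre-computed; this one is a simple "bright at the zenith" gradient.
H, W = 90, 180                         # 2-degree cells

def sky(theta, phi):                   # hypothetical distant radiance
    return max(0.0, math.cos(theta))   # brightest straight up

env = [[sky((i + 0.5) * math.pi / H, (j + 0.5) * 2.0 * math.pi / W)
        for j in range(W)] for i in range(H)]

def lookup(theta, phi):
    # Map a direction to grid indices (nearest cell, no interpolation).
    i = min(H - 1, int(theta / math.pi * H))
    j = min(W - 1, int((phi % (2.0 * math.pi)) / (2.0 * math.pi) * W))
    return env[i][j]

print(lookup(0.01, 0.0) > lookup(1.5, 3.0))   # True: zenith is brighter
```

Because the map is a function of direction only, the same `lookup` serves every point in the scene, which is exactly the distant-illumination assumption.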
Finally, we also need to somehow know the BRDF of the material. There are generally two methods of going about it. We can, of course, measure it, but we have no realistic way of measuring the complete BRDF for a material, because we would have to measure infinitely many points and, for each point, infinitely many incident and outgoing directions. We can only sample the BRDF using something called a goniospectroreflectometer or a goniospectrophotometer (Judd and Wyszecki 1975, p. 402–10), but there is still a massive amount of samples we need to take and to store. Lots of prior work goes into efficiently sampling, measuring, and deriving BRDFs (Marschner et al. 2000; Matusik 2003; Pharr, Jakob, and Humphreys 2023, Chpt. 9.8).
Another approach is to parameterize the BRDF so that we can evaluate the BRDF on demand rather than storing all the BRDF data, and this is what we will study next.
9.4 Diffuse, Specular, and Glossy Materials
In everyday life, material surfaces are usually classified as being specular, glossy, or diffuse. Figure 9.1 shows examples of the three materials. We can now give a more rigorous treatment of these material types using BRDF, which will, in turn, give us some inspiration for parameterizing the BRDF.
9.4.1 Diffuse Material
When the surface is rough, the energy of surface reflection deviates away from the perfect mirror-like reflection and, instead, distributes across the hemisphere. When the surface becomes rough enough, the distribution of outgoing energy can become uniform across all outgoing directions over the entire hemisphere. Such a surface is called a diffuse or an ideal Lambertian surface. The perfect Lambertian surface does not exist, but many things in the real world come close, such as paper, marble, or wood.
The BRDF of a Lambertian surface is a constant function. As we have seen in Equation 9.17, \(f_r(p, \os, \oi) = \frac{\rho_{hh}}{\pi}\), where \(\rho_{hh}\) is the surface albedo and is between 0 and 1. It is easy to see that diffuse materials are always isotropic.
9.4.2 Perfectly Specular Material
If a surface is perfectly smooth, like a mirror, it is called a perfectly specular material. Such materials follow Snell’s law, which governs the angles of reflection and refraction, and the Fresnel equations, which govern the energy of reflection and refraction.
In the plane of incidence (the plane uniquely determined by the incident direction and the surface normal), the reflection direction is the mirror image of the incident direction about the surface normal. More precisely, if the incident direction is \(\oi\) (parameterized by the polar angle \(\theta_i\) and azimuthal angle \(\phi_i\)) and the reflection direction is \(\os\) (\(\theta_s, \phi_s\)), we have:
\[ \begin{align} \theta_s &= \theta_i, \\ \phi_s &= \phi_i + \pi. \end{align} \tag{9.19}\]
The refraction/transmitted direction \(\omega_t\) (\(\theta_t, \phi_t\)) follows:
\[ \begin{align} & n_1 \sin\theta_i = n_2 \sin\theta_t, \\ & \phi_t = \phi_i + \pi, \end{align} \tag{9.20}\]
where \(n_1\) is the refractive index of the medium from which the light arrives and \(n_2\) is that of the medium that reflects/refracts the light.
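Equations 9.19 and 9.20 translate directly into code; the refractive indices below (air into glass) are example values.

```python
import math

n1, n2 = 1.0, 1.5                       # example: air into glass
theta_i, phi_i = math.radians(40), math.radians(70)

# Reflection (Equation 9.19): same polar angle, azimuth flipped by pi.
theta_s, phi_s = theta_i, phi_i + math.pi

# Refraction (Equation 9.20): Snell's law (no total internal reflection
# when entering the denser medium).
theta_t = math.asin(n1 * math.sin(theta_i) / n2)
phi_t = phi_i + math.pi

print(math.isclose(n1 * math.sin(theta_i), n2 * math.sin(theta_t)))  # True
print(theta_t < theta_i)   # True: light bends toward the normal in glass
```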
The energy of the reflected and refracted light is governed by the Fresnel equations. We will spare you the details, but it suffices to say that the fractions of reflected/refracted light are dependent on the incident angle, refractive indices of the two interface media, and the polarization states of the light. If you work out the math and assume that the incident light is unpolarized, the percentage of reflected energy \(F_r(\oi)\) for an incident direction \(\oi\) is given by:
\[ \begin{align} F_r(\oi) &= \frac{r_a+r_e}{2}, \\ r_a &= \left(\frac{n_2 \cos\theta_i - n_1 \cos\theta_t}{n_2 \cos\theta_i + n_1 \cos\theta_t}\right)^2, \\ r_e &= \left(\frac{n_1 \cos\theta_i - n_2 \cos\theta_t}{n_1 \cos\theta_i + n_2 \cos\theta_t}\right)^2. \end{align} \tag{9.21}\]
We call \(F_r(\oi)\) the specular reflectance, which not only varies with \(\oi\) but is also a spectral term; we omit the wavelength for simplicity. Assuming no loss of energy, the specular transmittance, i.e., the fraction of the transmitted energy, is given by \(1-F_r\).
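The unpolarized Fresnel reflectance of Equation 9.21 is straightforward to evaluate; the air-glass indices below are example values.

```python
import math

def fresnel_unpolarized(theta_i, n1=1.0, n2=1.5):
    # Equation 9.21: average the two polarization components r_a and r_e.
    theta_t = math.asin(n1 * math.sin(theta_i) / n2)   # Snell's law
    r_a = ((n2 * math.cos(theta_i) - n1 * math.cos(theta_t)) /
           (n2 * math.cos(theta_i) + n1 * math.cos(theta_t))) ** 2
    r_e = ((n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) /
           (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))) ** 2
    return 0.5 * (r_a + r_e)

# Normal incidence reduces to ((n2 - n1)/(n2 + n1))^2 = 0.04 for glass.
F0 = fresnel_unpolarized(0.0)
print(math.isclose(F0, 0.04))                      # True
print(fresnel_unpolarized(math.radians(80)) > F0)  # True: grazing angles reflect more
```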
The Fresnel equations are best understood in the context of the electromagnetic theory and are derived by treating light as electromagnetic waves (the fact that we need to consider the polarization states of light is a giveaway). While \(F_r\) cannot be derived from radiometry, it is fundamentally about the energy transfer of surface scattering, which radiometry is also concerned with. So \(F_r\) can be integrated into the radiometry framework. One good example is to express the BRDF of a specular material using \(F_r\):
\[ \begin{align} f_r(p, \os, \oi) = F_r(\oi)\frac{\delta(\theta_s-\theta_i)\delta(\phi_s-\phi_i-\pi)}{\cos\theta_i}, \end{align} \tag{9.22}\]
where \(\delta(x)\) is the Dirac delta function, which is 0 everywhere except when \(x=0\) and has the property \(\int\delta(x)\d x = 1\).
We can verify that this BRDF makes sense. First, the BRDF is non-zero only when Equation 9.19 holds because of the double-delta term. Second, the energy conservation is followed. For instance, if we calculate the directional-hemispherical reflectance by plugging the BRDF into Equation 9.8 and assuming \(\Omega\) is a hemisphere, we get:
\[ \begin{align} \frac{E_o}{E_i} = \rho_{dh}(p, \oi) = \int^{\Omega} F_r(\oi)\frac{\delta(\theta_s-\theta_i)\delta(\phi_s-\phi_i-\pi)}{\cos\theta_i} \cos\theta_s\text{d}\os. \end{align} \tag{9.23}\]
Since \(F_r(\oi)\) is independent of \(\os\), Equation 9.23 evaluates to Equation 9.24. The integration in Equation 9.24 evaluates to 1. This is because, informally, the integrand is non-zero only when Equation 9.19 holds, at which point \(\theta_s = \theta_i\), so the cosine terms cancel out. So the integration is just sort of a hugely complicated way of writing \(\int \delta(x)\d x\), which is 1. \[ \begin{align} \frac{E_o}{E_i} = F_r(\oi) \int^{\Omega} \frac{\delta(\theta_s-\theta_i)\delta(\phi_s-\phi_i-\pi)}{\cos\theta_i} \cos\theta_s\text{d}\os = F_r(\oi). \end{align} \tag{9.24}\]
We can see that the specular reflectance \(F_r\) is equivalent to \(\rho_{dh}\), the directional-hemispherical reflectance. This makes sense, because in specular materials the scattering is directional if the incident light is directional. So the directional-hemispherical reflectance reduces to the “directional-directional” reflectance, which is essentially the specular reflectance.
The specular reflectance is also equivalent to the hemispherical-directional reflectance \(\rho_{hd}\). We can show this either by simply invoking the reciprocity that \(\rho_{hd} = \rho_{dh}\) or by plugging the specular BRDF Equation 9.22 into Equation 9.9 and obtaining (assuming \(\Omega\) is hemisphere):
\[ \begin{align} \rho_{hd}(p, \os) &= \int^{\Omega} F_r(\oi)\frac{\delta(\theta_s-\theta_i)\delta(\phi_s-\phi_i-\pi)}{\cos\theta_i} \cos\theta_i\text{d}\oi \\ &= F_r(\hat\os) = F_r(\os), \end{align} \]
where \(\hat\os(\theta_s, \phi_s-\pi)\) is the mirror-reflection direction of \(\os(\theta_s, \phi_s)\). The integral evaluates to \(F_r(\hat\os)\) because, informally, the integrand is non-zero only when Equation 9.19 holds, at which point \(\oi = \hat\os\) so \(F_r(\oi) = F_r(\hat\os)\); the integral is a complicated way of writing \(\int F_r(\oi)\delta(\hat\os-\oi)\doi\), which evaluates to \(F_r(\hat\os)\). The result has an intuitive explanation: for a specular surface, the scattered energy along \(\os\) given a hemispherical illumination is the same as when the illumination comes only from \(\hat\os\). We can then show that \(F_r(\hat\os) = F_r(\os)\), which is not surprising given reciprocity; you can also verify it by going through the equations in Equation 9.21.
Interestingly, the specular reflectance \(F_r\) in general is not equivalent to the hemispherical-hemispherical reflectance \(\rho_{hh}\). To see this, plug the specular BRDF into Equation 9.10 (assuming \(\Omega_i\) and \(\Omega_s\) are hemispheres):
\[ \begin{align} E_o &= \int^{\Omega_s} (\int^{\Omega_i} f_r(p, \os, \oi) L(p, \oi) \cos\theta_i\text{d}\oi) \cos\theta_s \dos \label{eq:specular_brdf_hh_1} \\ &= \int^{\Omega_s} \big(F_r(\os) L(p, \os)\big) \cos\theta_s \dos \label{eq:specular_brdf_hh_2} \\ &= \int^{\Omega_i} F_r(\oi) L(p, \oi) \cos\theta_i \doi \label{eq:specular_brdf_hh_3}, \\ E_i &= \int^{\Omega_i} L(p, \oi) \cos\theta_i \doi. \label{eq:specular_brdf_hh_4} \end{align} \]
We can see that only when \(F_r(\oi)\) is a constant do we get \(F_r(\oi) = \frac{E_o}{E_i} = \rho_{hh}\). This is consistent with our earlier result in Section 9.2.3 that \(\rho_{dh} = \rho_{hh}\) only when the material is Lambertian, and a specular material is obviously not Lambertian.
When \(F_r(\oi)\) is constant, the specular material is isotropic (can you prove it?). Since \(F_r(\oi)\) does not have to be a constant, specular materials could be anisotropic. That is, it is theoretically possible that a material always reflects specularly, but the reflected energy depends on the incident direction.
9.4.3 Glossy Material
The surface scattering in most materials is in-between perfectly specular and perfectly diffuse. These materials scatter light into a small cone of directions, usually centered around the direction of a perfect mirror reflection. These materials are usually called glossy or sometimes, confusingly, “specular”, too. The energy distribution of a glossy material is neither a delta function (as in the perfectly specular case) nor a uniform function (as in the diffuse case). It is usually a function that peaks at the mirror-reflection direction and gradually decays as we move away from that direction.
The bottom figures in Figure 9.1 illustrate an example of the BRDF for each of the three surface types under a given incident direction. An actual BRDF (for a given surface point and a given incident direction) would be a 3D shape, and what we are showing here is the cross section. The shape of the locus is drawn to be proportional to the magnitude of the BRDF; the locus in graphics literature is sometimes called the specular lobe.
The specular-lobe visualization gives us a hint: we can parameterize a BRDF by mathematically describing the shape of the specular lobe. In fact, the BRDFs for the Lambertian surface (Equation 9.17) and for specular materials (Equation 9.22) are two such examples. A glossy BRDF is more difficult to parameterize. Many BRDF parameterizations have been proposed; some are empirical, while others attempt to be physically plausible. The most popular and widely used is based on the microfacet model, which we will discuss next.
9.5 BRDF Parameterization with Microfacet Models
The assumption of the microfacet model is that the surface scattering behavior of a point depends on its local roughness: the rougher the surface, the more diffuse the surface scattering becomes. To model the roughness, the surface is modeled as a collection of small microfacets, each of which acts like a perfect mirror. A specular surface is one where all the microfacets have the exact same orientation. As the surface becomes rougher, the mirrors become more randomly oriented. When the mirrors are completely randomly oriented, the resulting surface scattering becomes diffuse.
To derive a microfacet model, we need to first define the orientation of each microfacet. Given a beam of incident lights from a particular direction, we can then trace, following the laws governing specular reflection, how the lights are scattered by the collection of the microfacets given their orientations. In the end, we obtain the collection of outgoing directions, from which we can derive the BRDF.
There are many variants of the microfacet model. They have one thing in common: they do not explicitly model the scattering of each ray at each microfacet but, rather, model the scattering of the microfacets statistically given the distribution of the microfacet orientations. In the end, they can either have an analytical form of the BRDF (Lambertian surface being an extreme example), have a close approximation of the analytical form, or can numerically estimate the BRDF efficiently (mostly through sampling).
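The statistical flavor of the idea can be demonstrated with a toy 2D slice: each microfacet is a perfect mirror whose normal is tilted by a random Gaussian amount, and a rougher surface (larger tilt spread) produces a wider lobe of reflected directions. The Gaussian tilt below is purely illustrative, not any specific published model.

```python
import math, random

random.seed(1)

def reflected_spread(roughness, n=20_000):
    """Std. dev. of reflected polar angles for a given roughness."""
    theta_i = math.radians(30)               # fixed incident polar angle
    samples = []
    for _ in range(n):
        tilt = random.gauss(0.0, roughness)  # microfacet normal tilt (2D slice)
        # Mirror reflection about the tilted normal shifts the outgoing
        # angle by twice the tilt.
        samples.append(theta_i + 2.0 * tilt)
    mean = sum(samples) / n
    var = sum((t - mean) ** 2 for t in samples) / n
    return math.sqrt(var)

# Rougher surfaces scatter into a wider lobe around the mirror direction.
print(reflected_spread(0.02) < reflected_spread(0.2))   # True
```

Note that no individual ray-microfacet interaction is stored; only the distribution of outgoing directions matters, which is the statistical modeling spirit described above.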
Without going into the details, we will refer you to Pharr, Jakob, and Humphreys (2023, Chpt. 9.6) for a mathematical treatment of the general idea and to Torrance and Sparrow (1967), Cook and Torrance (1982), Ward (1992), Oren and Nayar (1995), and Walter et al. (2007) for the classical models.
9.5.1 Nature of Microfacet Models
If the microfacet theory does not sound weird to you, it should! In a microfacet model, we are still modeling surface scattering using discrete objects (microfacets) and events (perfect mirror-like reflection on each microfacet). Is it surprising that we can use the discrete microfacet model to reason about the behavior of a continuous surface? Given any point \(p\) on a surface, wouldn’t \(p\) correspond to one single microfacet, and the behavior of \(p\) simply be the result of a perfect mirror reflection there? If so, how can the microfacet model describe non-specular surface scattering of glossy and diffuse materials?
An immediate answer is that the microfacet theory is just a modeling methodology. We use a set of discrete microfacets to derive the surface-scattering statistics of that set, and then simply assume that the so-derived statistics apply anywhere on a continuous surface of interest. Still, does this methodology reflect physical reality?
Well, the physical world is fundamentally not continuous; when we break down the surface into finer and finer scales, we eventually get to molecules and atoms, so the surface property undergoes wild fluctuations depending on whether a small area contains molecules or not. If that is the level of detail you want to get into, you have to model things at the molecular and atomic levels (or even lower). Figure 9.2 illustrates this idea.
Fortunately for many real-world use-cases, we do not have to go there. Our eyes have a resolution limit, so we cannot resolve the details of a tiny surface area anyway; image sensors also have a resolution limit. The just-resolvable area \(\delta A\), set by the spatial resolution limit of our visual system, is more than large enough that it contains many microfacets, so the aggregated behavior of those microfacets can effectively model the observed scattering of \(\delta A\), which is all that matters to our vision (and to computer graphics and imaging, which is concerned only with satisfying human vision). So effectively what the microfacet theory does is to assume that the small \(\delta A\) (which contains a distribution of microfacets) is just within the range where the surface scattering property is stable. When the microfacet theory says something about a particular point \(p\), it is really saying something about \(\delta A\).
This way of modeling and thinking is pervasive in radiometry, which uses differential and integral equations and thus inherently assumes that the radiation field under modeling is continuous. That is not true. Take irradiance as an example. The average irradiance of a surface changes dramatically at the microscopic level when we initially reduce the surface area, because the photon distribution over a large area is likely very non-uniform. When the surface area is sufficiently small, the number of photons hitting the surface changes proportionally with the surface area, because at that scale the photon distribution is roughly uniform. This is the scale at which irradiance is defined. But if we keep making the area smaller and smaller, the number of photons hitting a tiny area will, again, undergo wild fluctuations depending on whether there happen to be photons in the area or not; photons are, after all, discrete packets of energy. We will see another example shortly in volume scattering, where we use a small volume of discrete particles to build a model for radiative energy transfer, which we then apply to any given point in a continuous volume.
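The irradiance example above can be simulated directly. The sketch below (all numbers are arbitrary and illustrative) models photon arrivals over an area as a Poisson process and shows that the relative fluctuation of the irradiance estimate grows as the area shrinks:

```python
import numpy as np

def irradiance_fluctuation(area, photon_density=1e4, trials=2000, seed=0):
    """Relative std of an irradiance estimate (photon count per unit area)
    when arrivals over the area are Poisson with the given mean density
    (a toy model; the density and trial count are illustrative choices)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(photon_density * area, size=trials)
    estimates = counts / area          # per-trial irradiance estimate
    return estimates.std() / estimates.mean()

# Shrinking the area by 100x roughly grows the relative fluctuation by 10x
# (Poisson statistics: relative fluctuation ~ 1/sqrt(expected count)).
for area in (1.0, 1e-2, 1e-4):
    print(f"area={area:g}  relative fluctuation={irradiance_fluctuation(area):.3f}")
```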
Orthogonal to the discussion above is the limitation that microfacet models do not account for surface roughness on the scale of the light wavelength. In the regime where the size of each microfacet is comparable to the light wavelength, diffraction takes place. As a result, reflection no longer follows the simple law of specular reflection and becomes wavelength dependent. In fact, this is how we get iridescence; in engineering, people build diffraction gratings that exploit this wavelength dependency to disperse light of different wavelengths.
9.6 Measuring Spectral Reflectance and BRDF
This section discusses the principles and practices of measuring spectral reflectance and spectral BRDF. It is important to note that the measured reflectance is not necessarily attributable only to surface scattering, because the measurement setup does not care what the material being measured is. If SSS plays a role (e.g., in translucent materials), the resulting reflectance data will include the contribution from volume scattering, too.
Worse, for these materials not all the SSS contributions are captured by this measurement geometry, since some back-scattered photons will exit at other surface points and thus never reach the detector. So the measurement is neither complete nor sound for materials where back-scattered photons contribute to their reflectance.
9.6.1 Measuring Spectral Reflectance
How do we know the spectral reflectance (or transmittance) of a material? We measure it. This is easier said than done. We will focus on reflectance measurement here; transmittance is measured similarly, except that you measure not from the same side as the illuminant but from the other side. Sharma (2003, Chpt. 1.11.4), Trussell and Vrhel (2008, Chpt. 8.7), and Reinhard et al. (2008, Chpt 6.8) have overviews of various measurement devices that might be helpful.
The Importance of Measurement Geometry
Consider Figure 8.1 (a) again. The illuminant emits light in all directions, but what matters is the light incident on the point \(p\) the viewer is currently gazing at; of course, the incident light could come from anywhere in the scene, not just a particular illuminant. Similarly, \(p\) could potentially scatter light over the entire hemisphere (through surface scattering and/or SSS), but it is the small beam of light entering the viewer’s eye that matters. In order to measure the reflectance relevant to this particular illumination-viewing geometry, we need to 1) measure all the illuminating power that hits \(p\) and 2) measure the scattered light from \(p\) only along the viewing direction.
You can imagine that if we change the illumination to be, say, diffuse lighting with an equal amount of light hitting \(p\) from all directions, the reflectance would be different, and it would be a perfectly relevant reflectance measure to report. If you have not before, next time you visit an art museum, pay attention to how the lighting system is carefully set up to bring out the best viewing experience (while also considering conservation); you ideally want the reflectance measurement of an artifact to simulate the viewing lighting.
Single Reflectance Measurement
In general, there really is no single reflectance number we can associate with a material. There are two ways to approach this. A common approach is to set up the measurement geometry so that it is close to an actual viewing experience. Figure 9.3 (a) shows four common settings. Some illuminate the material from 0\(^{\circ}\) (taking the direction of the surface normal as 0\(^{\circ}\)) and then measure the scattered light at 45\(^{\circ}\); others illuminate the material using diffuse illumination and measure the reflectance at 0\(^{\circ}\) (Judd and Wyszecki 1975, p. 122–25; Reinhard et al. 2008, Chpt. 6.8.2; Li 2003, Chpt. 2.2.2).
To get a reflectance spectrum, we need to know the reflectance at each sampled wavelength. There are multiple ways to go about measuring the spectral information. For instance, we can place a monochromator or a set of optical filters between the illuminant and the material so that we can control the wavelength of the light that is incident on the material.
Alternatively, we can change the detector to measure spectral information. We can use a dispersive medium such as a prism, shown in Figure 9.3 (c), or a diffraction grating, shown in Figure 9.3 (d), to separate the scattered light into different wavelengths and measure them individually. A detector that is capable of measuring the spectral radiometric quantities (e.g., the spectral power distribution) is called a spectroradiometer.
The raw detector readings of a spectroradiometer are usually not the absolute radiometric quantity of interest. Instead, the raw reading is roughly proportional to the radiometric quantity, with a wavelength-dependent scaling factor \(SSF(\lambda)\), usually called the detector’s spectral sensitivity function or responsivity function, which we will study carefully in Section 12.5. \(SSF(\lambda)\) can be calibrated offline, which allows us to turn a detector’s raw reading into the corresponding absolute radiometric quantity.
We take a spectroradiometric measurement of the illumination hitting the material and that of the scattered light of interest; the ratio is the spectral reflectance \(\rho(\lambda)\):
\[ \begin{align} \rho(\lambda) = \frac{\Phi_s(\lambda)SSF(\lambda)}{\Phi_i(\lambda)SSF(\lambda)} = \frac{\Phi_s(\lambda)}{\Phi_i(\lambda)}. \end{align} \]
We can see that for reflectance measurement, the exact values of \(SSF(\lambda)\) are immaterial. A curious question: while the detector measures \(\Phi_s(\lambda)\), what measures \(\Phi_i(\lambda)\)? One strategy is to place the same detector, offline, where the material would be and directly measure \(\Phi_i(\lambda)\) there.
Another, perhaps much more common and standard, way to measure spectral reflectance is to use something called a spectrophotometer. This method does not need to know \(\Phi_i(\lambda)\), but it requires a reference sample with a known spectral reflectance. This is shown in Figure 9.3 (b). It takes two spectroradiometric measurements under identical illumination: one for the test material and the other for the standard/reference sample. The spectral reflectance of the test material \(\rho_t(\lambda)\) is given by:
\[ \begin{align} \rho_t(\lambda) = \frac{m_t(\lambda)}{m_s(\lambda)}\rho_s(\lambda), \end{align} \]
where \(\rho_s(\lambda)\) is the known spectral reflectance of the standard/reference sample, and \(m_s(\lambda)\) and \(m_t(\lambda)\) are the raw detector readings of the standard and the test material at wavelength \(\lambda\), respectively. We can see that the spectrum of the illumination does not matter. Sometimes \(\frac{m_t(\lambda)}{m_s(\lambda)}\) is called the spectral reflectance factor of the test material if the reference material is perfectly diffuse (Judd and Wyszecki 1975, p. 93).
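The cancellation at the heart of the spectrophotometer can be checked numerically. In the sketch below, every spectrum (illuminant, detector sensitivity, reference, and test reflectance) is made up purely for illustration; the point is that both the illuminant SPD and the detector sensitivity drop out of the ratio \(m_t/m_s\):

```python
import numpy as np

# Toy spectra on a coarse wavelength grid (all values hypothetical).
wavelengths = np.arange(400, 701, 50)                      # nm
illum    = np.array([0.6, 0.9, 1.0, 1.1, 1.0, 0.9, 0.7])  # illuminant SPD
ssf      = np.array([0.2, 0.5, 0.8, 1.0, 0.9, 0.6, 0.3])  # detector sensitivity
rho_ref  = np.full_like(illum, 0.98)                       # near-white reference tile
rho_true = np.array([0.1, 0.2, 0.5, 0.8, 0.6, 0.3, 0.1])  # unknown test material

# Raw detector readings under the *same* illumination.
m_ref  = illum * rho_ref  * ssf
m_test = illum * rho_true * ssf

# Recovered reflectance: illumination and sensitivity cancel in the ratio.
rho_est = (m_test / m_ref) * rho_ref
print(np.allclose(rho_est, rho_true))  # True
```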
In practice, the reference measurement can be done separately rather than simultaneously with the test material to reduce the device form factor, and the reference measurement data can be tabulated to save measurement time.
One note on terminology: while a spectroradiometer is used to measure the spectral radiometric quantities (e.g., spectral radiance), a spectrophotometer does not measure the spectral photometric quantities (e.g., spectral luminance); instead, it measures the spectral reflectance. This is standardized in American Society for Testing and Materials (ASTM) E284-13b (ASTM International 2013) (along with other terminologies related to material properties and measurement instruments).
The nice thing about the approach described so far is that you get a single reflectance spectrum, but be careful about the measurement geometry under which the spectrum is obtained. There is no guarantee that a particular measurement geometry corresponds to the illumination/observation geometry of an actual viewing experience, so use the reported reflectance data with that caveat in mind.
Goniometric Measurements
A more general approach is to measure the reflectance at every illumination-viewing direction combination. For that we need what is called a goniospectrophotometer (there are also gonioradiometers, which measure the spectral radiometric quantities from different viewing directions). Figure 9.4 shows one such setup. Light from the illuminant/light source reaches the material through the small aperture \(I\), and the scattered light from the material is captured by a detector (e.g., a photodiode, essentially a single-pixel image sensor) through another aperture \(V\). Transmittance can be similarly measured by placing the detector on the other side of the material.
The idea is to sample, say, \(N\) illumination directions (parameterized by the azimuth \(\phi_i\) and polar angle \(\theta_i\)) and \(M\) scattering directions (parameterized by the azimuth \(\phi_s\) and polar angle \(\theta_s\)), obtaining \(M \times N\) measurements, each of which corresponds to one particular combination of illumination and scattering directions. For convenience, commercial goniometric instruments usually use a beam splitter to simultaneously measure the illumination and scattering flux (Lanevski, Manoocheri, and Ikonen 2022; Rabal et al. 2012).
Denote the area on the material being measured \(A_r\). The size of the area is dictated by the illumination aperture \(I\). Assuming the power received by \(A_r\) from the illuminant through \(I\) is \(\Phi_i(\lambda, A_r, I)\), and the power scattered by \(A_r\) and collected by the detector through the aperture \(V\) is \(\Phi_s(\lambda, A_r, V)\), the reflectance of the small area \(A_r\) is simply given by:
\[ \begin{align} \rho(\lambda, A_r) = \frac{\Phi_s(\lambda, A_r, V)}{\Phi_i(\lambda, A_r, I)}. \end{align} \]
As the two apertures become very small, \(A_r\) becomes very small, and the incident and outgoing solid angles become very small, too. The resulting reflectance measurement can be thought of as estimating the directional-directional reflectance (Section 9.2). But in general you can see how the reflectance number can easily change when we slightly vary the hardware setup. For instance, if we increase the detector aperture \(V\), the detected power will increase, and that would increase the resulting reflectance. If we increase the illumination aperture \(I\), the resulting reflectance would be for a larger material area \(A_r\).
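As a tiny numerical illustration of this aperture dependence (with made-up setup values and assuming a lossless Lambertian sample), doubling the detector solid angle doubles the reported reflectance:

```python
import numpy as np

def measured_reflectance(omega_v, theta_s=0.3, f_r=1/np.pi, phi_i=1.0):
    """Toy model of the goniometric measurement for a Lambertian sample:
    the detected flux, and hence the reflectance ratio the instrument
    reports, grows linearly with the detector solid angle omega_v.
    All parameter values here are hypothetical."""
    phi_s = f_r * phi_i * np.cos(theta_s) * omega_v  # detected flux
    return phi_s / phi_i

for omega_v in (1e-3, 2e-3):
    print(f"detector solid angle {omega_v:g} sr -> "
          f"reflectance {measured_reflectance(omega_v):.6f}")
```

This is exactly why a goniometric reflectance number is meaningless without reporting the measurement geometry alongside it.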
One can also use a reference material (with known reflectance spectra at the same measurement geometries) to avoid measuring \(\Phi_i(\lambda, A_r, I)\), similar to how a spectrophotometer is operated.
9.6.2 Measuring BRDF
Reflectance is integrated from the BRDF, which suggests that the latter is a more fundamental measure of material property. The same setup shown in Figure 9.4 can also be used to measure the BRDF, in which case the setup is called a goniospectroreflectometer. We will take the same measurements, but with a bit more calculation we can estimate the BRDF of the material, rather than just the (goniometric) reflectance spectra.
Let us be precise about the setup (omitting the \(\lambda\) term in all relevant quantities).
- We are illuminating a small area \(A_r\) through the illumination aperture \(I\).
- The center of \(A_r\) is an infinitesimal point \(p\), which along with \(I\) subtends a solid angle \(\Oi(p, I)\).
- \(\oi\) is the direction between \(p\) and the center of \(I\).
- \(A_r\) scatters lights toward the detector through the detector aperture \(V\), which subtends a solid angle of \(\Os (p, V)\) with \(p\).
- \(\os\) is the direction between \(p\) and the center of \(V\).
- The power incident on \(A_r\) is \(\Phi_i(A_r, I)\), and the portion of the power scattered by \(A_r\) and collected by the detector is \(\Phi_s(A_r, V)\).
- We are interested in calculating the BRDF \(f_r(p, \omega_s, \omega_i)\).
Recall that \(f_r(p, \omega_s, \omega_i)\) is defined as the ratio of the differential radiance leaving \(p\) toward \(\os\) to the differential irradiance incident on \(p\) due to light coming from an infinitesimal solid angle \(\doi\) (omitting \(\lambda\) in all equations for simplicity):
\[ \begin{align} f_r(p, \omega_s, \omega_i) = \frac{\text{d}L_s(p, \omega_s)}{\text{d}E_i(p, \omega_i)} \approx \frac{L_s(p, \omega_s)}{E_i(p, \Oi(p, I))}. \end{align} \tag{9.25}\]
There is no way we can illuminate a point \(p\) through an infinitesimal solid angle \(\doi\); all we can do is illuminate a small cone of directions \(\Oi(p, I)\). We can then calculate the average BRDF over all the incident directions in \(\Oi(p, I)\) (i.e., assuming the BRDF is the same for all the incident directions in \(\Oi(p, I)\)) using the approximation in Equation 9.25, which we derived in Section 9.1.1.
How do we calculate \(E_i(p, \Oi(p, I))\)? There is no way we can illuminate and measure the irradiance of an infinitesimal point \(p\); all we can do is illuminate a small area \(A_r\) and assume that the irradiance received is constant everywhere inside \(A_r\), so we have:
\[ \begin{align} E_i(p, \Oi(p, I)) \approx \frac{\Phi_i(A_r, I)}{A_r}. \end{align} \tag{9.26}\]
Now how do we get \(L_s(p, \omega_s)\)? For this we turn to the detector side. Using basic radiometry, \(\Phi_s(A_r, V)\) is expressed in Equation 9.27, where \(p'\) and \(\os'\) are dummy variables, \(\theta_s'\) is associated with \(\os'\), and \(\Os(p', V)\) is associated with \(p'\) (cf. \(p\) refers to a specific point on \(A_r\), and \(\os\) and \(\Os(p, V)\) refer to physical quantities associated specifically with \(p\)):
\[ \begin{align} \Phi_s(A_r, V) = \int^{A_r} \int^{\Os(p', V)} L_s(p', \omega_s') \cos{\theta_s'} \text{d}\os' \text{d}p'. \end{align} \tag{9.27}\]
We assume that the radiance of any ray between \(A_r\) and the detector aperture \(V\) is constant and takes the value of \(L_s(p, \os)\); this gets us Equation 9.28:
\[ \begin{align} \Phi_s(A_r, V) \approx \int^{A_r} \int^{\Os(p, V)} L_s(p, \omega_s) \cos{\theta_s} \text{d}\os' \text{d}p'. \end{align} \tag{9.28}\]
Since \(L_s(p, \os)\) and \(\cos\theta_s\) are invariant to \(\os'\) and \(p'\), they can be taken out of the two integrations, and this gives us Equation 9.29: \[ \begin{align} \Phi_s(A_r, V) \approx L_s(p, \omega_s) \cos{\theta_s} \int^{A_r} \int^{\Os(p, V)} \text{d}\os' \text{d}p'. \end{align} \tag{9.29}\]
Calculating the two integrals in Equation 9.29 gives us Equation 9.30, where \(C_1\) and \(C_2\) are constants. Given the boundary condition that \(\Phi_s(\cdot)\) has to be 0 when \(\Os(\cdot)\) or \(A_r\) is 0 (if the detector aperture is closed or the illuminated area vanishes, no scattered light will be detected), we know \(C_1=C_2=0\).
\[ \begin{align} \Phi_s(A_r, V) \approx L_s(p, \omega_s) \cos{\theta_s} (A_r (\Os(p, V) + C_1) + C_2). \end{align} \tag{9.30}\]
Plugging in Equation 9.25, we get:
\[ \begin{align} \Phi_s(A_r, V) \approx f_r(p, \omega_s, \omega_i)E_i(p, \Oi(p, I)) \cos{\theta_s} A_r \Os(p, V). \end{align} \tag{9.31}\]
Plugging in Equation 9.26, we get:
\[ \begin{align} \Phi_s(A_r, V) \approx f_r(p, \omega_s, \omega_i) \frac{\Phi_i(A_r, I)}{A_r} \cos{\theta_s} A_r \Os(p, V). \end{align} \tag{9.32}\]
Therefore, the final BRDF is given by:
\[ \begin{align} f_r(p, \omega_s, \omega_i) = \frac{\Phi_s(A_r, V)}{\Phi_i(A_r, I) \cos{\theta_s} \Os(p, V)}. \end{align} \tag{9.33}\]
Rearranging the terms, we get a seemingly more complex expression:
\[ \begin{align} f_r(p, \omega_s, \omega_i) = \frac{[\Phi_s(A_r, V)/(A_r \cos\theta_s)]/\Os(p, V)}{\Phi_i(A_r, I)/A_r}. \end{align} \tag{9.34}\]
Equation 9.34 actually admits a simple interpretation. The denominator is the average irradiance incident on \(p\) through a small solid angle \(\Oi(p, I)\) (see Equation 9.26), and the numerator is the average radiance leaving \(p\). Taking the ratio of the two matches our intuition of the average BRDF: radiance over irradiance (received over a small solid angle).
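Equation 9.33 is straightforward to turn into code. The sketch below (with hypothetical flux and aperture values) recovers the BRDF of a synthetic lossless Lambertian sample, for which the answer should be \(1/\pi\) regardless of the viewing angle:

```python
import numpy as np

def brdf_estimate(phi_s, phi_i, theta_s, omega_s):
    """Equation 9.33: average BRDF from a gonioreflectometer measurement.
    phi_s, phi_i: detected/incident flux; theta_s: viewing polar angle;
    omega_s: solid angle subtended by the detector aperture at p."""
    return phi_s / (phi_i * np.cos(theta_s) * omega_s)

# Synthetic check with hypothetical setup values: a lossless Lambertian
# surface (f_r = 1/pi) should be recovered independent of viewing angle.
phi_i, omega_s = 2.5, 1e-3
for theta_s in (0.0, 0.3, 1.0):
    phi_s = (1/np.pi) * phi_i * np.cos(theta_s) * omega_s  # per Equation 9.32
    print(brdf_estimate(phi_s, phi_i, theta_s, omega_s))   # ~1/pi each time
```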
If we assume the surface to be Lambertian, the BRDF is then \(1/\pi\) for any \(\os\) (under a given \(p\) and \(\oi\); see Equation 9.17), assuming no loss of energy. This means:
\[ \begin{align} \Phi_s(A_r, V) \propto \cos\theta_s. \end{align} \]
That is, the flux reading falls off with the viewing polar angle \(\theta_s\) by a factor of \(\cos\theta_s\). Is this surprising? It should not be if you recall our discussion of radiant intensity (Equation 8.7). If we assume that every point on \(A_r\) emits the same amount of flux into the same solid angle (through the aperture \(V\)), the radiant intensity of \(p\) toward \(\os\) is \(\frac{\Phi_s(A_r, V)}{A_r \Os(p, V)}\) and, thus, proportional to \(\cos\theta_s\), which matches our earlier conclusion of how the radiant intensity of a Lambertian emitter/scatterer decays with the polar angle.
Anytime you measure something, the measurement is subject to noise and uncertainty. For instance, in gonioreflectometer measurements, the angular positioning of the illuminant and detector might not be accurate, the detector itself is subject to all sorts of measurement noise (which we will study in the image sensor lecture), and stray light might enter the detector. Quantifying the sources of uncertainty and, even better, correcting for them is an important part of reflectance/BRDF measurement (Lanevski, Manoocheri, and Ikonen 2022; Rabal et al. 2012).
To be more rigorous, the integration in Equation 9.4 evaluates to \(E(p, \Oi) + C\), where \(C\) is a constant. Given the boundary condition that \(L(p, \os) = 0\) when \(E(p, \Oi) = 0\), we know \(C=0\), so \(C\) is omitted.↩︎
“gonio-” comes from the Greek word \(\gamma\omega\nu\iota\alpha\) (gōnía), which means angle.↩︎
\(\Phi_s(A_r, V)/(A_r \cos\theta_s)\) in the numerator gives us the average irradiance leaving \(p\) (note that this irradiance is defined on the surface perpendicular to \(\os\), hence the \(\cos\theta_s\) term), which is divided by \(\Os(p, V)\) to give us the average radiance leaving \(p\).↩︎