Transforming Normals

Eric Lengyel   •   December 12, 2024

If you’ve been working with 3D computer graphics for any amount of time, then you’ve probably heard that normal vectors transform differently than ordinary vectors. That is, if t is an ordinary column vector that’s tangent to some surface, and m is a \(3 \times 3\) matrix that transforms t with the product \(\mathbf t' = \mathbf{mt}\), then calculating \(\mathbf n' = \mathbf{mn}\) for some normal vector n perpendicular to the surface doesn’t necessarily produce the correct results. By correct results, I mean that if the vectors t and n are initially perpendicular to each other, then their transformations \(\mathbf t'\) and \(\mathbf n'\) must also be perpendicular to each other. This fails to hold if the matrix m contains a nonuniform scale, skew, or anything else that makes it nonorthogonal. For example, consider the case in which the triangle below is scaled by a factor of two in the horizontal direction only. If we apply the same matrix m used to transform the sides of the triangle to transform the normal vector n, then the transformed normal vector is no longer perpendicular to the diagonal side.
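This failure is easy to check numerically. The sketch below uses the nonuniform scale from the triangle example (a factor of two in the horizontal direction only), with a tangent along the diagonal side and its perpendicular normal; the specific vectors are illustrative assumptions.

```python
# Nonuniform scale by 2 along x only.
m = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    """Multiply a 3x3 matrix (row-major) by a column vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

t = [-1.0, 1.0, 0.0]  # tangent along the triangle's diagonal side
n = [1.0, 1.0, 0.0]   # normal to that side; perpendicular to t

t_prime = mat_vec(m, t)  # [-2, 1, 0]
n_naive = mat_vec(m, n)  # [2, 1, 0]
print(dot(t, n))             # 0.0 -- perpendicular before the transform
print(dot(t_prime, n_naive)) # -3.0 -- no longer perpendicular after
```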

The usual way to solve this problem is to start with the assumption that \(\mathbf n \cdot \mathbf t = \mathbf n^{\mathrm T}\mathbf t = 0\) and then figure out what matrix x, derived from m, causes the requirement \((\mathbf{xn}) \cdot (\mathbf{mt}) = (\mathbf{xn})^{\mathrm T}\mathbf{mt} = 0\) to be true. This immediately leads to the answer \(\mathbf x = (\mathbf m^{-1})^{\mathrm T}\), which appears throughout the computer graphics literature. Perpendicularity is preserved when normal vectors are transformed with the inverse transpose of the matrix that transforms ordinary vectors. This solution works most of the time, but it does not produce the correct results if the matrix m contains a reflection, and it doesn’t work at all if m is not invertible.
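As a quick sketch of this fix, here is the inverse transpose applied to a shear. The matrix and vectors are illustrative assumptions, and the inverse transpose is written out by hand for this particular m.

```python
def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# A shear along x: nonorthogonal but invertible.
m = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

# x = (m^-1)^T, written out by hand for this particular shear.
x = [[ 1.0, 0.0, 0.0],
     [-1.0, 1.0, 0.0],
     [ 0.0, 0.0, 1.0]]

t = [1.0, 1.0, 0.0]   # tangent
n = [-1.0, 1.0, 0.0]  # normal; dot(t, n) == 0

t_prime = mat_vec(m, t)
print(dot(t_prime, mat_vec(m, n)))  # 1.0 -- transforming n by m fails
print(dot(t_prime, mat_vec(x, n)))  # 0.0 -- the inverse transpose works
```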

A better solution, one that works for any matrix m whatsoever, can be found by enforcing the preservation of cross products. (The following derivation appears in FGED1, Section 3.2, and it’s the very first topic discussed in PGA Illuminated.) If we have two nonparallel tangent vectors s and t at some point on a surface, then the normal at that point is given by \(\mathbf n = \mathbf s \times \mathbf t\). For whatever matrix x we use to transform the normal n, it had better be the case that \(\mathbf{xn} = \mathbf{ms} \times \mathbf{mt}\) so that the transformed normal is given by the cross product of the transformed tangent vectors. Expanding the matrix-vector products \(\mathbf{ms}\) and \(\mathbf{mt}\) by columns, we can write

\(\mathbf{xn} = (s_x \mathbf m_{[0]} + s_y \mathbf m_{[1]} + s_z \mathbf m_{[2]}) \times (t_x \mathbf m_{[0]} + t_y \mathbf m_{[1]} + t_z \mathbf m_{[2]})\),

where the notation \(\mathbf m_{[j]}\) means the zero-based column j of the matrix m. After distributing the cross product to all of the terms and simplifying, we get

\(\begin{split} \mathbf{xn} =\, &(s_yt_z - s_zt_y)(\mathbf m_{[1]} \times \mathbf m_{[2]}) \\[1ex] +\, &(s_zt_x - s_xt_z)(\mathbf m_{[2]} \times \mathbf m_{[0]}) \\[1ex] +\, &(s_xt_y - s_yt_x)(\mathbf m_{[0]} \times \mathbf m_{[1]}). \end{split}\) (1)

The components of the cross product \(\mathbf s \times \mathbf t\) are clearly visible here, and we know they are the components of the original normal n. The three vectors on the right side are the columns of the matrix x, and they are formed by cross products of the columns of the matrix m. The transpose of this matrix has the property that \(\mathbf x^{\mathrm T}\mathbf m = (\det\mathbf m)\mathbf I\) because \((\mathbf m_{[i]} \times \mathbf m_{[j]}) \cdot \mathbf m_{[k]} = \det\mathbf m\) for any \((i, j, k)\) that’s an even permutation of \((0, 1, 2)\). Such a matrix \(\mathbf x^{\mathrm T}\) is called the adjugate of m, denoted by \(\mathrm{adj}(\mathbf m)\), and thus \(\mathbf x = \mathrm{adj}(\mathbf m)^{\mathrm T}\). The correct matrix for transforming normals is the adjugate transpose of the matrix m, and we can write the transformation as \(\mathbf n' = \mathrm{adj}(\mathbf m)^{\mathrm T}\mathbf n\). More generally, the adjugate transpose is the correct matrix to use for transforming any cross product of ordinary vectors.
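The adjugate transpose is straightforward to build directly from the column cross products above. The following sketch (the deliberately singular matrix and the tangent vectors are illustrative assumptions) verifies that \(\mathrm{adj}(\mathbf m)^{\mathrm T}(\mathbf s \times \mathbf t) = \mathbf{ms} \times \mathbf{mt}\) even when m is not invertible, which is exactly where the inverse transpose breaks down.

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def adjugate_transpose(m):
    """Columns of adj(m)^T are cross products of the columns of m."""
    col = [[m[i][j] for i in range(3)] for j in range(3)]  # columns of m
    c = [cross(col[1], col[2]), cross(col[2], col[0]), cross(col[0], col[1])]
    return [[c[j][i] for j in range(3)] for i in range(3)]  # back to row-major

# A singular matrix (second row is twice the first), so (m^-1)^T does not exist.
m = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [1.0, 0.0, 1.0]]

s = [1.0, 0.0, 2.0]
t = [0.0, 1.0, 1.0]
n = cross(s, t)

lhs = mat_vec(adjugate_transpose(m), n)      # adj(m)^T n
rhs = cross(mat_vec(m, s), mat_vec(m, t))    # ms x mt
print(lhs == rhs)  # True: the identity holds even for singular m
```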

The earliest appearance of the adjugate transpose I could find in the graphics literature dates back to a chapter entitled “Matrix Inversion” in the first volume of Graphics Gems from 1990. (Note that it’s called the adjoint matrix \(M^*\) there, and the chapter incorrectly states that \(M^* = (1/\det(M))M^{-1}\) when it should be \(M^* = \det(M)M^{-1}\).) The chapter observes that the adjugate can be used to transform normals, but it does not include any explanation for why it is correct.

There’s a bigger picture to all of this. Normals transform differently than ordinary vectors because they are not vectors at all. They’re antivectors. As soon as we start multiplying vectors together with the cross product, we are actually producing new quantities that lie outside the vector space, and we need to enlarge our world to include the complete exterior algebra. In an n-dimensional exterior algebra based on an n-dimensional vector space, there are \(n + 1\) different kinds of objects having grades between zero and n. Scalars have grade zero, and ordinary vectors have grade one. Normals have grade \(n - 1\), which are the antivectors of the exterior algebra. The cross product, which is limited to three dimensions, is replaced by the wedge product, which applies to all numbers of dimensions. Once we have an \(n \times n\) matrix m that transforms grade-one vectors, we can construct a larger \(2^n \times 2^n\) exomorphism matrix that correctly transforms everything in the exterior algebra. As it turns out, the part of this exomorphism matrix that transforms normals is always the adjugate transpose of m in any number of dimensions. Furthermore, in dimensions greater than three, there are more parts in addition to the original matrix m and its adjugate transpose, and they transform things that are neither vectors nor antivectors. The following description is taken from Section 2.7 of Projective Geometric Algebra Illuminated.

Section 2.7. Exomorphisms

The matrices at the heart of linear algebra perform transformations that move vectors from one coordinate system to another. In n dimensions, the columns of an \(n \times n\) transformation matrix m are exactly the images of the n basis vectors \(\mathbf e_1, \mathbf e_2, \ldots, \mathbf e_n\) after the transformation has been applied through the product \(\mathbf{me}_i\). What we would like to do is extend the matrix m to a larger matrix in such a way that it properly transforms not only the basis vectors having grade one but all \(2^n\) basis elements over all grades in the n-dimensional exterior algebra. We accomplish this by requiring that our extension of m behaves as a homomorphism with respect to the wedge product. That is, given the way in which the grade-one vectors transform, the structure of the rest of the exterior algebra must be the same before and after the extended transformation is applied. Such an extended transformation is called the exomorphism of the linear transformation m, where the prefix exo- comes from its relationship to the exterior product.

We use a capital M to denote the exomorphism matrix, which is the extension of the matrix m to the larger matrix needed to transform quantities of any grade. The matrix M has \(2^n\) columns and \(2^n\) rows corresponding to the \(2^n\) basis elements in the n-dimensional exterior algebra. The generic vectors transformed by M are column matrices with \(2^n\) components, one for each basis element of a complete multivector in n dimensions. For example, in three dimensions, a complete eight-component multivector u can be written as

\(\mathbf u = s\mathbf 1 + v_x\,\mathbf e_1 + v_y\,\mathbf e_2 + v_z\,\mathbf e_3 + b_x\,\mathbf e_{23} + b_y\,\mathbf e_{31} + b_z\,\mathbf e_{12} + t\unicode{x1D7D9}\), (2.47)

and its matrix representation is the column of entries \(s\), \(v_x\), \(v_y\), \(v_z\), \(b_x\), \(b_y\), \(b_z\), and \(t\) in that order from top to bottom. It is sufficient to ensure that M is a homomorphism over the basis elements of the algebra because that property would then extend to all multivectors by the linearity of matrix multiplication. For M to be an exomorphism, we must have

\(\mathbf M(\mathbf a \wedge \mathbf b) = (\mathbf{Ma}) \wedge (\mathbf{Mb})\) (2.48)

for all basis elements a and b. That is, the image of \(\mathbf a \wedge \mathbf b\) under the transformation performed by M must be the wedge product of the images of a and b. This rule allows us to build the full \(2^n \times 2^n\) matrix M solely from the information contained in the original \(n \times n\) matrix m. We already know the image of each basis vector \(\mathbf e_i\) because it is given by \(\mathbf m_{[i]}\), the ith column of m, extended with zeros in the positions for basis elements having grade other than one. For any basis bivector \(\mathbf e_{ij}\), Equation (2.48) requires that

\(\mathbf{Me}_{ij} = (\mathbf{Me}_i) \wedge (\mathbf{Me}_j)\), (2.49)

but the right side of this equation is just the wedge product of \(\mathbf m_{[i]}\) and \(\mathbf m_{[j]}\). We calculate this wedge product by treating the column \(\mathbf m_{[i]}\) with entries \(m_{1i}, m_{2i}, m_{3i}, \ldots\) from top to bottom as the vector \(m_{1i}\mathbf e_1 + m_{2i}\mathbf e_2 + m_{3i}\mathbf e_3 + \cdots\). When applied to all basis bivectors, Equation (2.49) produces \(\binom{n}{2}\) new columns in the matrix M, each containing the image of a single basis bivector. The entries in each column corresponding to other grades are zero, so we obtain a new \(\binom{n}{2} \times \binom{n}{2}\) submatrix of M that tells us specifically how to transform any bivector quantity.

Continuing this process for trivectors and higher grades by multiplying three or more columns of m together with the wedge product fills out the rest of the exomorphism matrix M. The submatrix that transforms quantities of grade k has size \(\binom{n}{k} \times \binom{n}{k}\), so M has the form of a block diagonal matrix with \(n + 1\) submatrices corresponding to the \(n + 1\) possible grades as k ranges from 0 to n. The submatrix corresponding to grade k is known as the kth compound matrix of m, denoted by \(C_k(\mathbf m)\).

We define the grade-zero submatrix \(C_0(\mathbf m)\) that transforms scalars to be the \(1 \times 1\) identity matrix, so the upper-left entry of M is always the number one. The entry in the bottom-right corner represents the \(1 \times 1\) submatrix \(C_n(\mathbf m)\) containing the image of the volume element \(\unicode{x1D7D9}\), and it tells us how antiscalars are transformed. Since the volume element is equal to the wedge product of all n basis vectors \(\mathbf e_1\) through \(\mathbf e_n\), its image must be the wedge product of all n columns of m, which is \((\det \mathbf m)\unicode{x1D7D9}\). Thus, the entry in the bottom-right corner of M is always equal to \(\det \mathbf m\).

The submatrix \(C_{n - 1}(\mathbf m)\) in the penultimate position along the diagonal has size \(n \times n\), and it transforms antivectors. This submatrix is always the adjugate transpose of the matrix m because each of its columns is constructed from the wedge product of all but one column of m.

The 3D Euclidean exterior algebra is simple enough that it enables a fully written out example of the above process. Here, the eight basis elements are \(\mathbf 1\), \(\mathbf e_1\), \(\mathbf e_2\), \(\mathbf e_3\), \(\mathbf e_{23}\), \(\mathbf e_{31}\), \(\mathbf e_{12}\), and \(\unicode{x1D7D9}\), where it’s important that we keep a consistent order. The exomorphism matrix M has eight rows and eight columns that correspond to the basis elements in the same order, and a multivector is expressed as an eight-component column matrix whose entries are the coefficients of the basis elements in the same order. Now suppose that we have a \(3 \times 3\) matrix m that transforms the grade-one vectors from one coordinate system to another. We label the columns of m as the 3D vectors a, b, and c so that m can be written as

\(\mathbf m = \begin{bmatrix}a_x && b_x && c_x \\ a_y && b_y && c_y \\ a_z && b_z && c_z\end{bmatrix}\). (2.50)

The vectors a, b, and c are the images of the basis elements \(\mathbf e_1\), \(\mathbf e_2\), and \(\mathbf e_3\), respectively, when transformed by the matrix m.

The second compound matrix of m transforms bivectors, and it consists of all possible wedge products between pairs of columns of m. We just have to be careful about the order in which we multiply the columns of m and the order in which we write the components of the results. The first column of \(C_2(\mathbf m)\) is given by the wedge product \(\mathbf b \wedge \mathbf c\) because the first bivector basis element in the order we have imposed is \(\mathbf e_{23}\). When we calculate \(\mathbf b \wedge \mathbf c\), we get

\(\mathbf b \wedge \mathbf c = (b_yc_z - b_zc_y)\mathbf e_{23} + (b_zc_x - b_xc_z)\mathbf e_{31} + (b_xc_y - b_yc_x)\mathbf e_{12}\), (2.51)

where we have written the terms in the order imposed on the basis elements. The three coefficients on the right side are the three entries in the first column of \(C_2(\mathbf m)\). The entries of the other two columns are similarly given by \(\mathbf c \wedge \mathbf a\), corresponding to the \(\mathbf e_{31}\) basis element, and \(\mathbf a \wedge \mathbf b\), corresponding to the \(\mathbf e_{12}\) basis element. Altogether, these nine coefficients constitute the entire matrix \(C_2(\mathbf m)\), which we can now write as

\(C_2(\mathbf m) = \begin{bmatrix}b_yc_z - b_zc_y && c_ya_z - c_za_y && a_yb_z - a_zb_y \\ b_zc_x - b_xc_z && c_za_x - c_xa_z && a_zb_x - a_xb_z \\ b_xc_y - b_yc_x && c_xa_y - c_ya_x && a_xb_y - a_yb_x\end{bmatrix}\). (2.52)

The entries of this matrix are the same values appearing in Equation (1) that arose in the transformation of quantities generated by the cross product. The cross product is really a wedge product in disguise, and it produces bivector quantities that must transform in this manner.

The exomorphism matrix M is the \(8 \times 8\) block diagonal matrix whose submatrices are \(C_0(\mathbf m)\), \(C_1(\mathbf m)\), \(C_2(\mathbf m)\), and \(C_3(\mathbf m)\), as illustrated in Figure 2.10. \(C_0(\mathbf m)\) is always the \(1 \times 1\) identity matrix having a single entry filled with the number one. \(C_1(\mathbf m)\) is just the matrix m itself. \(C_2(\mathbf m)\) was derived from m, and its entries are given by Equation (2.52). Finally, \(C_3(\mathbf m)\) is the \(1 \times 1\) matrix whose single entry is the determinant of m. With these submatrices arranged along the diagonal, the exomorphism matrix M correctly transforms any 3D multivector of the form shown in Equation (2.47) given that grade-one vectors are transformed by the matrix m and with the requirement that transformation by M is a homomorphism under the wedge product.
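As a sketch of how the 3D blocks fit together, the following code assembles the \(8 \times 8\) matrix M from \(C_0(\mathbf m)\) through \(C_3(\mathbf m)\) and applies it to a full multivector in the component order of Equation (2.47). The particular matrix and multivector are illustrative assumptions.

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mat_vec(M, v):
    n = len(M)
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def compound2(m):
    """C2(m): columns are b^c, c^a, a^b in the (e23, e31, e12) order."""
    a, b, c = ([m[i][j] for i in range(3)] for j in range(3))
    cols = [cross(b, c), cross(c, a), cross(a, b)]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def det3(m):
    a, b, c = ([m[i][j] for i in range(3)] for j in range(3))
    return dot(cross(a, b), c)  # the wedge of all three columns

def exomorphism8(m):
    """Assemble the 8x8 block diagonal matrix M of Figure 2.10."""
    M = [[0.0] * 8 for _ in range(8)]
    M[0][0] = 1.0                          # C0: scalars
    c2 = compound2(m)
    for i in range(3):
        for j in range(3):
            M[1 + i][1 + j] = m[i][j]      # C1: vectors (m itself)
            M[4 + i][4 + j] = c2[i][j]     # C2: bivectors
    M[7][7] = det3(m)                      # C3: antiscalars
    return M

m = [[2.0, 1.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 3.0]]

# The multivector 1 + e1 + 2 e2 + e12 + antiscalar, ordered as in Equation (2.47).
u = [1.0, 1.0, 2.0, 0.0, 0.0, 0.0, 1.0, 1.0]
Mu = mat_vec(exomorphism8(m), u)
print(Mu)
```

Each grade transforms independently: the scalar is untouched, the vector part is transformed by m, the bivector part by \(C_2(\mathbf m)\), and the antiscalar is multiplied by \(\det \mathbf m\).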

Figure 2.10. The exomorphism matrix M is an \(8 \times 8\) block diagonal matrix that transforms eight-component 3D multivectors from one coordinate system to another. The \(3 \times 3\) matrix m is a given transformation of grade-one vectors, and the rest of the matrix M is derived from it. Each submatrix along the diagonal is a kth compound matrix of m, where \(0 \le k \le 3\). Each column of M corresponds to the image of the basis element shown at the top under the transformation that M performs. Entries in unshaded areas are zero.

Since we have modeled homogeneous points, lines, and planes in a 4D projective space, we are interested in how 4D quantities transform from one coordinate system to another. The process for constructing the exomorphism matrix M is exactly the same as in the 3D case except that we now have a total of 16 basis elements, and we begin with a \(4 \times 4\) matrix m that transforms grade-one vectors. The result is the block diagonal matrix M shown in Figure 2.11 containing five submatrices corresponding to the five different grades that exist in the 4D algebra.

Instead of calculating the compound matrices \(C_2(\mathbf m)\) and \(C_3(\mathbf m)\) for an arbitrary matrix m, which becomes very tedious even if we can assume the fourth row is \([0\ 0\ 0\ 1]\), we look at some specific examples. First, suppose that m is the matrix that performs a translation by the vector t and thus has the form

\(\mathbf m = \begin{bmatrix} 1 && 0 && 0 && t_x \\ 0 && 1 && 0 && t_y \\ 0 && 0 && 1 && t_z \\ 0 && 0 && 0 && 1 \end{bmatrix}\). (2.53)

This matrix directly translates any homogeneous point \((x, y, z, w)\). To translate a bivector representing a line, we need to calculate the \(6 \times 6\) matrix \(C_2(\mathbf m)\) by taking wedge products of the columns of m in the appropriate order. Using the order of components appearing in the definition of a line given by Equation (2.36), we have

\(C_2(\mathbf m) = \begin{bmatrix} 1 && 0 && 0 && 0 && 0 && 0 \\ 0 && 1 && 0 && 0 && 0 && 0 \\ 0 && 0 && 1 && 0 && 0 && 0 \\ 0 && -t_z && t_y && 1 && 0 && 0 \\ t_z && 0 && -t_x && 0 && 1 && 0 \\ -t_y && t_x && 0 && 0 && 0 && 1 \end{bmatrix}\), (2.54)

and this transforms a line with coordinates \((l_{vx}, l_{vy}, l_{vz}, l_{mx}, l_{my}, l_{mz})\). The effect of \(C_2(\mathbf m)\) matches the transformation of Plücker coordinates given by Equation (1.42) for a pure translation where the cross product between t and the line’s direction is added to the line’s moment. Next, to translate a trivector representing a plane, we calculate the \(4 \times 4\) matrix \(C_3(\mathbf m)\) and obtain

\(C_3(\mathbf m) = \begin{bmatrix} 1 && 0 && 0 && 0 \\ 0 && 1 && 0 && 0 \\ 0 && 0 && 1 && 0 \\ -t_x && -t_y && -t_z && 1 \end{bmatrix}\), (2.55)

which transforms a plane with coordinates \((g_x, g_y, g_z, g_w)\). The effect of \(C_3(\mathbf m)\) matches the transformation of the d coordinate of a plane in Equation (1.38) for a pure translation where the dot product between t and the plane’s normal is subtracted from the plane’s distance from the origin.
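The two translation blocks above can be checked directly. The sketch below hardcodes Equations (2.54) and (2.55) as functions of t and confirms the stated effects on a sample line and plane; the sample coordinates are illustrative assumptions.

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def mat_vec(M, v):
    n = len(M)
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def line_translation(t):
    """C2(m) for a translation by t, per Equation (2.54)."""
    tx, ty, tz = t
    return [[1,   0,   0,  0, 0, 0],
            [0,   1,   0,  0, 0, 0],
            [0,   0,   1,  0, 0, 0],
            [0, -tz,  ty,  1, 0, 0],
            [tz,  0, -tx,  0, 1, 0],
            [-ty, tx,  0,  0, 0, 1]]

def plane_translation(t):
    """C3(m) for a translation by t, per Equation (2.55)."""
    tx, ty, tz = t
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [-tx, -ty, -tz, 1]]

t = [1.0, 2.0, 3.0]
v, mom = [0.0, 1.0, 0.0], [4.0, 0.0, 1.0]   # line direction and moment
line2 = mat_vec(line_translation(t), v + mom)
print(line2[:3])  # direction unchanged
print(line2[3:])  # moment + t x v

g = [0.0, 0.0, 1.0, 5.0]                    # plane (normal, w)
plane2 = mat_vec(plane_translation(t), g)
print(plane2)     # normal unchanged, w - t . normal
```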

For another example, suppose that m is the matrix that performs a nonuniform scale along the x, y, and z axes and thus has the form

\(\mathbf m = \begin{bmatrix} s_x && 0 && 0 && 0 \\ 0 && s_y && 0 && 0 \\ 0 && 0 && s_z && 0 \\ 0 && 0 && 0 && 1 \end{bmatrix}\). (2.56)

This is the submatrix shaded green in Figure 2.11. The second and third compound matrices of m that transform lines (bivectors) and planes (trivectors) are given by

\(C_2(\mathbf m) = \begin{bmatrix} s_x && 0 && 0 && 0 && 0 && 0 \\ 0 && s_y && 0 && 0 && 0 && 0 \\ 0 && 0 && s_z && 0 && 0 && 0 \\ 0 && 0 && 0 && s_ys_z && 0 && 0 \\ 0 && 0 && 0 && 0 && s_zs_x && 0 \\ 0 && 0 && 0 && 0 && 0 && s_xs_y \end{bmatrix}\)

and

\(C_3(\mathbf m) = \begin{bmatrix} s_ys_z && 0 && 0 && 0 \\ 0 && s_zs_x && 0 && 0 \\ 0 && 0 && s_xs_y && 0 \\ 0 && 0 && 0 && s_xs_ys_z \end{bmatrix}\). (2.57)

These are the submatrices shaded blue and purple in Figure 2.11. By examining the entries of \(C_2(\mathbf m)\), we can see that the direction of a line scales just like a vector, and this is due to the direction being a length-like quantity. The moment of a line is an area-like quantity, and its scaling transformation therefore involves two factors. The normal of a plane is also an area-like quantity, and the entries of \(C_3(\mathbf m)\) demonstrate that it scales with two factors as well. The w coordinate of a plane is the only component that scales like a volume, as shown by the product \(s_xs_ys_z\) in the bottom-right corner of \(C_3(\mathbf m)\).
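The area-like scaling of a line's moment can be cross-checked against its definition as \(\mathbf p \times \mathbf v\) for a point p on the line: applying the moment entries of \(C_2(\mathbf m)\) to the original moment must agree with recomputing the moment from the scaled point and scaled direction. A sketch under illustrative sample values:

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

sx, sy, sz = 2.0, 3.0, 5.0
scale = lambda p: [sx * p[0], sy * p[1], sz * p[2]]

p = [1.0, 0.0, 2.0]   # a point on the line
v = [0.0, 1.0, 1.0]   # the line's direction
mom = cross(p, v)     # the line's moment

# Per C2(m): the direction scales by (sx, sy, sz) like a vector,
# and the moment scales by (sy*sz, sz*sx, sx*sy) like an area.
v2 = scale(v)
mom2 = [sy*sz * mom[0], sz*sx * mom[1], sx*sy * mom[2]]

# The same line recomputed from the scaled point and direction:
print(mom2 == cross(scale(p), v2))  # True
```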

Figure 2.11. The exomorphism matrix M is a \(16 \times 16\) block diagonal matrix that transforms multivectors belonging to the 4D projective space. The matrix m is a given transformation of grade-one vectors, and each submatrix along the diagonal is a kth compound matrix of m, where \(0 \le k \le 4\). Each column of M corresponds to the image of the basis element shown at the top under the transformation that M performs. Entries in unshaded areas are zero.

Additional Resources