# Poor Foundations in Geometric Algebra

Eric Lengyel • August 23, 2024

*You’ve got to get the fundamentals down, because otherwise the fancy stuff is not going to work.*

—Randy Pausch

I have been involved in exterior algebra and geometric algebra research for about 15 years now. Over that time, I have developed a good intuition for how these subjects are viewed by mathematicians outside the field. Their opinions are often not positive due to (a) a perceived lack of mathematical rigor and (b) a very real toxicity within the geometric algebra community. I can’t do much about (b), but I have a lot to say about (a), and that’s what this post is about. Many of the foundational concepts in geometric algebra (GA) are poorly designed at the defining level and/or hijack the meanings of well-established terminology. This frequently leads to inelegant hacks that I like to call “duct tape and rubber bands” arising elsewhere to compensate for the shaky foundations. I am going to focus on three particular concepts: inner products, contractions, and duals. They are all connected to each other, so if you screw up any one of them, you screw up all of them. I’ll also make some comments about the erroneous performance claims that seem to keep popping up.

Due to its popularity, I’ll be quoting from the book by Dorst, Fontijne, and Mann, *Geometric Algebra for Computer Science* (GA4CS), but it should be understood that the same problems described in this
post exist throughout essentially all of the geometric algebra literature. Later in this post, I’ll be quoting from the follow-up chapter to this book, “A Guided Tour to the Plane-Based Geometric Algebra”.

## Inner Products

You might think something as basic as the dot product would be easy to get right, but then you would be underestimating the ability of geometric algebra authors to make a complete mess of things.
There are several different generalizations of the inner product described in the GA literature (scalar product, fat dot product, Hestenes dot product, etc.), but every one of them is incorrect.
It has been pure guesswork compounded by a long tradition of authors copying bad ideas from other authors. To begin with, an inner product must produce a scalar quantity by definition, so we can
immediately eliminate any inner products that are capable of producing nonscalar results. (Those are interior products, not inner products.) Next, inner products that yield imaginary norms under a positive-semidefinite
metric are out. This happens when one defines the inner product by extracting the scalar part of the geometric product as is. The authors of GA4CS understood that such an inner product would be invalid,
but there is a religious belief that the geometric product is fundamental and that all other products should be derived from it. There is no mathematical justification for that, and it leads to a
hornet’s nest of sign inconsistencies. Nevertheless, the authors of GA4CS defined a separate, entirely superfluous *scalar product* as follows:

**GA4CS, Page 67**

For *k*-blades \(\mathbf A = \mathbf a_1 \wedge \ldots \wedge \mathbf a_k\) and \(\mathbf B = \mathbf b_1 \wedge \ldots \wedge \mathbf b_k\)
and scalars \(\alpha\) and \(\beta,\) the *scalar product* \(\ast : {\Large\wedge}^k \,\mathbb R^n \times {\Large\wedge}^k \,\mathbb R^n \rightarrow \mathbb R\)
is defined as

\(\begin{split} \alpha \ast \beta &= \alpha\beta \\[1ex] \mathbf A \ast \mathbf B &= \begin{vmatrix}\mathbf a_1 \cdot \mathbf b_k & \mathbf a_1 \cdot \mathbf b_{k - 1} & \ldots & \mathbf a_1 \cdot \mathbf b_1 \\ \mathbf a_2 \cdot \mathbf b_k & \mathbf a_2 \cdot \mathbf b_{k - 1} & \ldots & \mathbf a_2 \cdot \mathbf b_1 \\ \vdots & & \ddots & \vdots \\ \mathbf a_k \cdot \mathbf b_k & \mathbf a_k \cdot \mathbf b_{k - 1} & \ldots & \mathbf a_k \cdot \mathbf b_1\end{vmatrix} \\[1ex] \mathbf A \ast \mathbf B &= 0 \;\;\; \text{between blades of unequal grades} \end{split}\)

In this definition, the authors reversed the columns of the Gram determinant without explanation. Had they not done that, this would have been a correct formula for the inner product.
The authors go on to define the inner product between blades **A** and **B** in terms of the scalar product as \(\mathbf A \ast \mathbf{\widetilde B},\) where the reverse operation on **B**
undoes the column reversal in the definition of \(\ast.\) The norm is also defined as \(\Vert \mathbf A \Vert^2 = \mathbf A \ast \mathbf{\widetilde A}.\) These definitions produce the correct results,
but they involve a separate scalar product and additional reverse operations that really shouldn’t be there. These are the duct tape and rubber bands holding together a poorly designed framework.
The reason the scalar product needed to be defined like this has to do with the way contractions were defined. We’ll get to that in a moment after taking a look at a better formulation of the inner product.
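The sign discrepancy caused by the reversed columns is easy to see numerically. The following sketch (my own illustration, not code from either book) computes both determinants for a pair of 2-blades in Euclidean \(\mathbb R^3\) and shows that reversing the factors of **B** repairs the sign:

```python
# Compare the GA4CS "scalar product" (reversed-column Gram determinant)
# with the ordinary Gram-determinant inner product for 2-blades.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inner(A, B):
    # A . B for 2-blades A = a1 ^ a2, B = b1 ^ b2: plain Gram determinant
    (a1, a2), (b1, b2) = A, B
    return det2([[dot(a1, b1), dot(a1, b2)],
                 [dot(a2, b1), dot(a2, b2)]])

def scalar(A, B):
    # GA4CS scalar product: the same determinant with its columns reversed
    (a1, a2), (b1, b2) = A, B
    return det2([[dot(a1, b2), dot(a1, b1)],
                 [dot(a2, b2), dot(a2, b1)]])

A = ([1, 0, 0], [0, 1, 0])   # e1 ^ e2
B = ([1, 2, 0], [0, 1, 3])   # (e1 + 2 e2) ^ (e2 + 3 e3)

print(inner(A, B))               # 1
print(scalar(A, B))              # -1: off by (-1)^(k(k-1)/2) at grade 2
print(scalar(A, (B[1], B[0])))   # 1: the reverse of B repairs the sign
```

The last line is exactly the book's \(\mathbf A \ast \mathbf{\widetilde B}\) construction: swapping the factors of **B** undoes the column reversal baked into \(\ast.\)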

Once a metric tensor has been established, there is exactly one valid inner product on the full exterior algebra, and there can be only one. The correct inner product is actually quite easy to construct.
Given an \(n \times n\) metric tensor \(\mathfrak g\) (or matrix representation of the bilinear form, if you prefer) on an *n*-dimensional vector space, the inner product between vectors **a** and **b**
is given by \(\mathbf a \cdot \mathbf b = \mathbf a^{\mathrm T} \mathfrak g \mathbf b,\) by definition, where all products on the right side are matrix multiplication. The metric tensor \(\mathfrak g\) is a
linear transformation that is extended to a \(2^n \times 2^n\) exomorphism matrix **G** that transforms the entire exterior algebra by constructing the block diagonal matrix of compound matrices of
\(\mathfrak g.\) (This may sound complicated, but it’s very easy to do.) The inner product between two *arbitrary* multivectors **A** and **B** is then given by
\(\mathbf A \mathbin{\small\unicode{x2022}} \mathbf B = \mathbf A^{\mathrm T} \mathbf G \mathbf B,\) where the fat dot \(\small\unicode{x2022}\) indicates that this inner product applies to the entire
multivector space. This definition represents the only mathematically sound way to extend the inner product on the base vector space to the full exterior algebra. There is no choice in the matter,
nor should there be. With the correct definition of the inner product in hand, everything related to it works out very nicely. For example, the induced Euclidean norm is simply
\(\Vert \mathbf A \Vert = \sqrt{\mathbf A \mathbin{\unicode{x2022}} \mathbf A},\) just as it should be. The definition of the scalar product given in GA4CS above disagrees with the inner product when
**A** and **B** have grades 2 or 3 modulo 4 because those are exactly the grades at which reversing the order of the vector factors negates a blade (the geometric product of orthogonal vectors is anticommutative).
Hence, the reverse must be applied in their definitions of the inner product and norm to undo the mistake.
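Here is a minimal sketch of the exomorphism construction. The code and names (`exomorphism`, `inner`) are mine, not from the post; the Laplace-expansion determinant is only meant for the tiny minors involved:

```python
from itertools import combinations

def det(m):
    # Laplace expansion; fine for the small minors needed here
    if not m:
        return 1
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def basis(n):
    # Basis blades of the exterior algebra as index subsets, grade by grade
    return [s for k in range(n + 1) for s in combinations(range(n), k)]

def exomorphism(g):
    # The 2^n x 2^n matrix G: the grade-k block is the k-th compound matrix
    # of g, i.e. G[S][T] = det of the submatrix of g with rows S and
    # columns T (zero between blades of unequal grade)
    blades = basis(len(g))
    return [[det([[g[i][j] for j in T] for i in S]) if len(S) == len(T) else 0
             for T in blades] for S in blades]

def inner(A, B, G):
    # The fat-dot inner product A^T G B on coefficient vectors over 2^n blades
    return sum(A[i] * G[i][j] * B[j] for i in range(len(A)) for j in range(len(B)))

idx = {s: i for i, s in enumerate(basis(3))}

G = exomorphism([[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # Euclidean metric
A = [0.0] * 8
A[idx[()]] = 1.0          # the multivector 1 + e123
A[idx[(0, 1, 2)]] = 1.0
print(inner(A, A, G))     # 2.0, so ||1 + e123|| = sqrt(2)

Gd = exomorphism([[1, 0, 0], [0, 1, 0], [0, 0, 0]])  # degenerate direction
B = [0.0] * 8
B[idx[(1, 2)]] = 1.0      # e2 ^ e3
print(inner(B, B, Gd))    # 0.0: the blade contains the degenerate direction
```

With a Euclidean metric, **G** comes out as the identity, and the fat-dot product reduces to the componentwise dot product of the two coefficient vectors; a degenerate metric simply zeroes out the affected blades rather than breaking anything.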

(Some people may be thinking of the canonical norms in the complex numbers and quaternions right now and asking how the norm of a blade \(\mathbf A\) given by \(\Vert \mathbf A \Vert = \sqrt{\mathbf A \ast \mathbf{\widetilde A}}\)
can be wrong when the norm of a complex number *z* is given by \(\Vert z \Vert = \sqrt{zz^* \vphantom A}\) and the norm of a quaternion **q** is given by \(\Vert \mathbf q \Vert = \sqrt{\mathbf q \mathbf q^* \vphantom A}.\)
After all, the complex numbers can be identified with \(\mathrm{Cl}_{2,0}^+\), and the quaternions can be identified with \(\mathrm{Cl}_{3,0}^+\) where the conjugate operation \(^*\) is equivalent to reversal.
The difference is that the product under the radical is complex multiplication or quaternion multiplication, not the inner product. Using the inner product, the norm of a complex number can also be written as
\(\Vert z \Vert = \sqrt{z \mathbin{\small\unicode{x2022}} z \vphantom A}\) and the norm of a quaternion as \(\Vert \mathbf q \Vert = \sqrt{\mathbf q \mathbin{\small\unicode{x2022}} \mathbf q \vphantom A}.\)
This generalizes to the identity \(\mathbf A \mathbin{\small\unicode{x2022}} \mathbf A = \langle \mathbf A \mathbf{\widetilde A} \rangle_0\) in geometric algebra, where the product on the right is the geometric product.)
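A quick numeric check of the quaternion case (my own code, using the standard Hamilton product): the scalar part of \(\mathbf q \mathbf{\widetilde q}\) equals the componentwise inner product \(\mathbf q \mathbin{\small\unicode{x2022}} \mathbf q.\)

```python
def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)

def reverse(q):
    # Conjugate/reverse: negate the bivector part
    w, x, y, z = q
    return (w, -x, -y, -z)

q = (1.0, 2.0, 3.0, 4.0)
print(qmul(q, reverse(q))[0])   # 30.0, the grade-0 part of q q~
print(sum(c * c for c in q))    # 30.0, the inner product q . q
```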

## Contractions

GA4CS makes heavy use of the contraction product (too much, in my opinion), and the authors decided to define it in such a way that it would reduce to the scalar product when the operands have
the same grade. I assume this is another instance of simply wanting something to be an exact piece of the geometric product without a sound mathematical reason. Contractions are *interior products*,
and interior products reduce to the *inner product*, not the scalar product. Here is the definition given in the book:

**GA4CS, Page 73**

The *contraction* \(\unicode{x230B}\) is a product producing a \((k - l)\)-blade from a \(k\)-blade and an
\(l\)-blade (so it is a mapping \(\unicode{x230B} : {\Large\wedge}^k \,\mathbb R^n \times {\Large\wedge}^l \,\mathbb R^n \rightarrow {\Large\wedge}^{k - l} \,\mathbb R^n\)),
with the following defining properties:

\(\begin{split} \alpha \unicode{x230B} \mathbf B &= \alpha\,\mathbf B \\ \mathbf B \unicode{x230B} \alpha &= 0 \;\;\; \mathrm{if}\ \operatorname{grade}(\mathbf B) > 0 \\ \mathbf a \unicode{x230B} \mathbf b &= \mathbf a \cdot \mathbf b \\ \mathbf a \unicode{x230B} (\mathbf B \wedge \mathbf C) &= (\mathbf a \unicode{x230B} \mathbf B) \wedge \mathbf C + (-1)^{\operatorname{grade}(\mathbf B)}\mathbf B \wedge (\mathbf a \unicode{x230B} \mathbf C) \\ (\mathbf A \wedge \mathbf B) \unicode{x230B} \mathbf C &= \mathbf A \unicode{x230B} (\mathbf B \unicode{x230B} \mathbf C), \end{split}\)

where \(\alpha\) is a scalar, **a** and **b** are vectors, and **A**, **B**, and **C** are blades
(which could be scalars or vectors as well as higher-dimensional blades).

To begin with, this is not a definition! It’s just a list of properties that the left contraction must possess, and the authors are presenting them as an “axiomatic form”.
They explain that these properties should be used to construct a “recursive program” that calculates the contraction product, but that’s unnecessarily difficult and horribly unintuitive.
Furthermore, the fourth property contains a defect, and this is what causes the contraction to reduce to the above-defined scalar product instead of the inner product. (These properties appear to have been
pulled straight from Lounesto, *Clifford Algebras and Spinors*, page 46, which contains the same error.) Under this definition, the contraction \(\mathbf A \unicode{x230B} \mathbf B\) produces results
that can be selected from the geometric product \(\mathbf{AB}\) as the part with grade equal to \(\operatorname{gr}(\mathbf B) - \operatorname{gr}(\mathbf A).\) The authors consider this to be a desirable
property, but again, there is no mathematical basis for preferring such a thing, and it breaks the established meaning of the interior product. The correct equation for the fourth property is

\(\mathbf a \unicode{x230B} (\mathbf B \wedge \mathbf C) = \mathbf B \wedge (\mathbf a \unicode{x230B} \mathbf C) + (-1)^{\operatorname{grade}(\mathbf C)}(\mathbf a \unicode{x230B} \mathbf B) \wedge \mathbf C,\)

and the analogous equation for the right contraction is

\((\mathbf B \wedge \mathbf C) \unicode{x230A} \mathbf a = (\mathbf B \unicode{x230A} \mathbf a) \wedge \mathbf C + (-1)^{\operatorname{grade}(\mathbf B)} \mathbf B \wedge (\mathbf C \unicode{x230A} \mathbf a).\)

(Interior products are called *derivations* because these equations have the form of a graded Leibniz product rule.) With these properties, the contraction properly reduces to the inner product,
but calculating contractions with a list of axiomatic properties is still extremely clunky. Vastly cleaner definitions of the left and right contraction products are given by

\(\begin{split} \mathbf A \unicode{x230B} \mathbf B &= \mathbf A_{\small\unicode{x2605}} \vee \mathbf B \\[1ex] \mathbf A \unicode{x230A} \mathbf B &= \mathbf A \vee \mathbf B^{\small\unicode{x2605}}, \end{split}\)

where \(^{\small\unicode{x2605}}\) is the Hodge dual, but the authors were clearly unaware that these existed. (In fact, the antiwedge product
\(\vee\) doesn’t show up anywhere in the entire book despite being of fundamental importance.) These definitions make the calculation of either contraction explicit and very straightforward. All of the
properties listed by the authors (with the fourth one fixed) are easily derived from these definitions as theorems, not axioms, and they satisfy the requirement of interior products that
\(\mathbf A \unicode{x230B} \mathbf B = \mathbf A \unicode{x230A} \mathbf B = \mathbf A \mathbin{\small\unicode{x2022}} \mathbf B\) whenever **A** and **B** have the same grade. They’re perfect,
and there’s no reason to make contractions any more complicated. (Some knowledgeable readers may be worried that these definitions depend on the orientation chosen for the volume element due to the
dependence of both the antiwedge product and Hodge dual on the complement operation. These two dependencies always cancel out, so neither contraction changes when the orientation of space is negated.)
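These definitions are easy to make concrete. The sketch below (my own encoding, not the post's) specializes to Euclidean \(\mathbb R^3\), where the metric exomorphism is the identity, the Hodge dual reduces to the complement, and the left and right complements coincide:

```python
def sort_sign(seq):
    # Sign of the permutation that sorts seq; 0 if an index repeats
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0, ()
    sign = 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign, tuple(seq)

def wedge(A, B):
    # Exterior product of multivectors stored as {basis tuple: coefficient}
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            s, blade = sort_sign(a + b)
            if s:
                out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

def complement(A, n=3):
    # Right complement on basis blades: blade ^ complement(blade) = e123.
    # With a Euclidean metric this is also the Hodge dual, and in R^3 the
    # left and right complements agree.
    out = {}
    for a, ca in A.items():
        rest = tuple(i for i in range(1, n + 1) if i not in a)
        s, _ = sort_sign(a + rest)
        out[rest] = out.get(rest, 0) + s * ca
    return {k: v for k, v in out.items() if v}

def antiwedge(A, B):
    # A v B = complement^-1(complement(A) ^ complement(B)); in R^3 the
    # complement is its own inverse
    return complement(wedge(complement(A), complement(B)))

def lcontract(A, B):
    # The left contraction A ⌋ B = (Hodge dual of A) v B
    return antiwedge(complement(A), B)

e1, e2, e12 = {(1,): 1}, {(2,): 1}, {(1, 2): 1}
print(lcontract(e12, e12))   # {(): 1}: same grade reduces to the inner product
print(lcontract(e1, e12))    # {(2,): -1}, i.e. -e2
print(lcontract(e2, e12))    # {(1,): 1}, i.e. e1
```

Note that the mixed-grade results differ in sign from the Lounesto-style contraction, consistent with the corrected Leibniz rule above; at equal grades the two conventions agree and both give the inner product.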

## Duals

The authors of GA4CS define the dual as follows:

**GA4CS, Page 80**

Given a *k*-blade \(\mathbf A_k\) in the space \(\mathbb R^n\) with unit pseudoscalar \(\mathbf I_n,\) its *dual*
is obtained by the dualization mapping \(^* : {\Large\wedge}^k \,\mathbb R^n \rightarrow {\Large\wedge}^{n - k} \,\mathbb R^n\) defined by

*dualization:* \(\mathbf A_k^* = \mathbf A_k \unicode{x230B} \mathbf I_n^{-1}.\)

The operation “taking the dual” is linear in \(\mathbf A_k,\) and it results in a blade with the same magnitude
as \(\mathbf A_k\) and a well-defined orientation. The reason for the inverse pseudoscalar is clear when we use it on a
(hyper) volume blade such as \(\mathbf a \wedge \mathbf b \wedge \mathbf c.\) We have seen in the previous chapter how such
an *n*-blade is proportional to the pseudoscalar \(\mathbf I_n\) by a scalar that is the oriented volume. With the
definition of dual, that oriented scalar volume is simply its dual, \((\mathbf a \wedge \mathbf b \wedge \mathbf c)^*,\)
without extraneous signs.

This definition is equivalent to \(\mathbf A_k^* = \mathbf A_k \mathbf I_n^{-1},\) where the geometric product (implied by juxtaposition) takes the place of the left contraction.
Virtually all authors define the dual of **A** by multiplying on the right or left by one of \(\mathbf I_n,\) \(\mathbf I_n^{-1},\) \(\mathbf{\widetilde I}_n,\) or \(-\mathbf I_n,\)
and there seems to be no consensus as to which one is best. It doesn’t matter because every single one of these definitions is crap. They all stem from the observation that multiplying
by \(\mathbf I_n\) under a nondegenerate metric inverts the dimensionality of **A** and produces something in its orthogonal complement. The first problem with this is that the resulting
complement does not have a consistent orientation, matching neither the right complement nor left complement in the exterior algebra. The second problem, pointed out in the above excerpt,
is that the dual of a volume having the same orientation as the pseudoscalar is a negative scalar quantity in both 2D and 3D. The duct tape applied by the authors in this case was to use
\(\mathbf I_n^{-1}\) instead of \(\mathbf I_n,\) but that causes problems if the pseudoscalar isn’t invertible (as in projective GA), so \(\mathbf{\widetilde I}_n\) or \(-\mathbf I_n\)
arguably make better choices. Regardless, these are all just ugly hacks, and there’s a third problem that none of them address. Operators having components of mixed grades, like quaternions,
tend to get inverted due to the inconsistent orientation of the dual, causing them to rotate in the opposite direction.

The correct definition of the dual is given by \(\mathbf A^{\small\unicode{x2605}} = \overline{\mathbf{GA}},\) where **A** is an *arbitrary* multivector, **G** is the same
\(2^n \times 2^n\) extended metric discussed under inner products above, the product \(\mathbf{GA}\) is matrix multiplication, and the overline applies the right complement.
(There is also a left version of the dual defined by \(\mathbf A_{\small\unicode{x2605}} = \underline{\mathbf{GA}},\) where the underline applies the left complement.)
This dual produces results with a consistent orientation, it produces positive scalar magnitudes for volumes matching the orientation of the pseudoscalar in all dimensions, it preserves orthogonal
operators, it has no dependence on the geometric product, and it produces sensible results even when the metric is degenerate. More importantly, it consistently satisfies the identity
\(\mathbf A \wedge \mathbf B^{\small\unicode{x2605}} = (\mathbf A \mathbin{\small\unicode{x2022}} \mathbf B) \mathbf I_n\) when **A** and **B** have the same grade, which is usually stated
as the defining property of the Hodge dual. None of the definitions \(\mathbf{AI}_n,\) \(\mathbf{AI}_n^{-1},\) \(\mathbf{A\widetilde I}_n,\) or \(-\mathbf{AI}_n\) possess these desirable properties
because they are nothing more than random shots in the dark made by people who really had no idea what they were doing. If you want to express the dual in terms of the geometric product, there is an
easily derived identity given by \(\mathbf A^{\small\unicode{x2605}} = \mathbf{\widetilde A I}_n,\) which isn’t far off from all the wrong answers.
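As a sanity check, here is a sketch of \(\mathbf A^{\small\unicode{x2605}} = \overline{\mathbf{GA}}\) for a *diagonal* metric on \(\mathbb R^3\) (my own encoding and names). It verifies the Hodge identity on a basis blade, shows that the dual of the pseudoscalar is \(+1,\) and keeps working when the metric is degenerate:

```python
def sort_sign(seq):
    # Sign of the permutation that sorts seq; 0 if an index repeats
    seq = list(seq)
    if len(set(seq)) != len(seq):
        return 0, ()
    sign = 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign, tuple(seq)

def wedge(A, B):
    # Exterior product of multivectors stored as {basis tuple: coefficient}
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            s, blade = sort_sign(a + b)
            if s:
                out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

def dual(A, g, n=3):
    # complement(G A): for a diagonal metric g, G scales a basis blade by
    # the product of the metric entries of its factors, then we take the
    # right complement (blade ^ complement(blade) = e123)
    out = {}
    for a, ca in A.items():
        w = ca
        for i in a:
            w *= g[i - 1]
        if w:
            rest = tuple(i for i in range(1, n + 1) if i not in a)
            s, _ = sort_sign(a + rest)
            out[rest] = out.get(rest, 0) + s * w
    return out

euclid = [1, 1, 1]
e2, I3 = {(2,): 1}, {(1, 2, 3): 1}
print(dual(e2, euclid))             # {(1, 3): -1}, i.e. e31
print(wedge(e2, dual(e2, euclid)))  # {(1, 2, 3): 1} = (e2 . e2) I3
print(dual(I3, euclid))             # {(): 1}: positive, whereas e123 I3 = -1
degenerate = [1, 1, 0]              # e3 squares to zero, as in projective GA
print(dual(e2, degenerate))         # still {(1, 3): -1}; no I3 inverse needed
print(dual({(3,): 1}, degenerate))  # {}: the degenerate direction has no bulk
```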

The contractions and duals built by the authors of GA4CS (and many others) do exhibit some consistency among themselves, but their flaws become apparent when they are viewed within the larger picture. In particular, the fact that the contractions and duals are tied to a scalar product different from the inner product reveals that something is broken. The correct solution is both self-consistent and fits into the larger picture properly by connecting directly with the inner product, obviating the need for a separate scalar product. The differences are summarized in the following table.

Broken | Correct |
---|---|
A scalar product is defined with a reversed Gram determinant. It satisfies the identity \(\mathbf A \ast \mathbf B = \langle\mathbf{AB}\rangle_0.\) | No scalar product necessary. Get rid of it. |
The inner product is defined as \(\mathbf A \mathbin{\small\unicode{x2022}} \mathbf B = \mathbf A \ast \mathbf{\widetilde B}.\) | The inner product is defined as \(\mathbf A \mathbin{\small\unicode{x2022}} \mathbf B = \mathbf A^{\mathrm T} \mathbf{GB}.\) It satisfies the identity \(\mathbf A \mathbin{\small\unicode{x2022}} \mathbf B = \langle\mathbf{A \widetilde B}\rangle_0 = \langle\mathbf{B \widetilde A}\rangle_0.\) |
The norm of **A** is defined as \(\Vert \mathbf A \Vert = \sqrt{\mathbf A \ast \mathbf{\widetilde A}}.\) | The norm of **A** is defined as \(\Vert \mathbf A \Vert = \sqrt{\mathbf A \mathbin{\small\unicode{x2022}} \mathbf A \vphantom{\mathbf{\widetilde A}}}.\) |
The contractions are implicitly defined by a list of axioms. For blades, \(\begin{split}\mathbf A \unicode{x230B} \mathbf B\ &= \langle\mathbf{AB}\rangle_{\operatorname{gr}(\mathbf B) - \operatorname{gr}(\mathbf A)} \\[1ex] \mathbf A \unicode{x230A} \mathbf B\ &= \langle\mathbf{AB}\rangle_{\operatorname{gr}(\mathbf A) - \operatorname{gr}(\mathbf B)}\end{split}\) | The contractions are explicitly defined by \(\begin{split}\mathbf A \unicode{x230B} \mathbf B &= \mathbf A_{\small\unicode{x2605}} \vee \mathbf B \\[1ex] \mathbf A \unicode{x230A} \mathbf B &= \mathbf A \vee \mathbf B^{\small\unicode{x2605}}\end{split}\) For blades, \(\begin{split}\mathbf A \unicode{x230B} \mathbf B\ &= \langle\mathbf{B \widetilde A}\rangle_{\operatorname{gr}(\mathbf B) - \operatorname{gr}(\mathbf A)} \\[1ex] \mathbf A \unicode{x230A} \mathbf B\ &= \langle\mathbf{\widetilde B A}\rangle_{\operatorname{gr}(\mathbf A) - \operatorname{gr}(\mathbf B)}\end{split}\) |
For **A** and **B** with the same grade, the contractions reduce to the scalar product \(\mathbf A \ast \mathbf B.\) | For **A** and **B** with the same grade, the contractions reduce to the inner product \(\mathbf A \mathbin{\small\unicode{x2022}} \mathbf B.\) |
For a vector **a** and blade **B**, the geometric product satisfies \(\mathbf{aB} = \mathbf a \unicode{x230B} \mathbf B + \mathbf a \wedge \mathbf B.\) | For a vector **a** and blade **B**, the geometric product satisfies \(\mathbf{aB} = \mathbf B \unicode{x230A} \mathbf a + \mathbf a \wedge \mathbf B.\) |
The dual is defined by \(\mathbf A^* = \mathbf A \mathbf I_n^{-1}.\) | The dual is defined by \(\mathbf A^{\small\unicode{x2605}} = \overline{\mathbf{GA}}.\) It satisfies the identity \(\mathbf A^{\small\unicode{x2605}} = \mathbf{\widetilde A}\mathbf I_n.\) |
For **A** and **B** with the same grade, the dual satisfies the identity \(\mathbf A \wedge \mathbf B^* = (\mathbf A \ast \mathbf B) \mathbf I_n^{-1},\) connecting it to the scalar product. | For **A** and **B** with the same grade, the dual satisfies the identity \(\mathbf A \wedge \mathbf B^{\small\unicode{x2605}} = (\mathbf A \mathbin{\small\unicode{x2022}} \mathbf B) \mathbf I_n,\) connecting it to the inner product. |
The definition of the dual fails when the metric is degenerate. | Duals work correctly when the metric is degenerate. This is important in projective GA. |
Duals do not have a consistent orientation. The dual of an orthogonal operator is inverted. | Duals have a consistent orientation. Orthogonal operators and their duals are equivalent. |

## Transform Composition

Many authors like to shout very loudly about how geometric algebra is revolutionary and will make everything faster and more efficient, but they often get the actual numbers wrong. In reality, there are a very limited number of cases in which GA methods are faster than the equivalent matrix methods. So imagine my surprise when I read this excerpt from GA4CS:

**GA4CS, Page 200**

**For the composition of orthogonal transformations, rotors are superior in up to 10 dimensions.** The geometric algebra of an *n*-dimensional space has a general basis of \(2^n\) elements. Rotors, which are only even-dimensional, in principle require \(2^{n - 1}\) parameters for their specification (though typical rotors use only part of this). Linear transformations specified by matrices need \(n \times n\) matrices with \(n^2\) parameters (and typically need them all). ... Composing transformations as rotors takes \(2^n\) operations, and composing them as matrices requires \(n^3\) operations. Therefore in fewer than 10 dimensions, rotors are more efficient than matrices (in the practical dimensions 3, 4, 5 about four times more). ... Unit quaternions are rotors in 3-D, and of course well known to be efficient for the composition of rotations.

This claim is not only dead wrong, but completely absurd. The authors made an error in their calculation of the number of operations needed to compose two rotors.
The correct number is \(2^{n - 1} \cdot 2^{n - 1} = 2^{2n - 2}\), but they thought the number was just \(2^n\). The same mistake is made later on the same page, where
they explicitly write \(2^{n - 1} \times n \times 2^{n - 1} = n 2^n\). This second instance is listed in the errata,
but it doesn’t look like anybody noticed the first one. It’s clear that while one of the authors was writing this section, he had it stuck in his head that multiplying \(2^{n - 1}\)
by itself gave \(2^n\). These kinds of errors happen, and the fact that something was simply miscalculated is not my chief complaint. The real problem here is that none of the three authors
paused for a second and thought to themselves, “Wait a minute. It can't be *that* good because it would tell us there’s something fundamentally inefficient about
linear algebra that we don’t understand.” If the conclusion was correct, then it would open new avenues of research into exactly why orthogonal matrix transformations
are so bad. But that didn’t happen, of course, because the authors are wrong. The correct numbers are listed in the following table up through five dimensions.

\(n\) | Matrix Composition (\(n^3\) operations) | Rotor Composition (\(2^{2n - 2}\) operations) |
---|---|---|
2 | 4 * | 4 |
3 | 24 * | 16 |
4 | 64 | 64 |
5 | 125 | 256 |

Entries with an asterisk under matrix composition have been reduced by exploiting the orthogonality of the matrices. Some GA enthusiasts cry foul when I do this, claiming that it’s somehow not fair that I use the most efficient method of calculation available, but they don’t hesitate to pat themselves on the back whenever they’re able to make use of a similar type of optimization that happens to benefit GA. On multiple occasions, I’ve seen GA authors purposely compare the best possible implementation of a GA method against the worst possible implementation of the equivalent matrix method in an attempt to demonstrate superiority that doesn’t actually exist. An example is highlighted in PGA4CS below.

As evident in the table, rotor composition is faster in exactly one case, three dimensions, and it’s 1.5 times faster than the equivalent matrix composition, not the four times claimed by the authors. Multiplying two quaternions together is indeed faster than multiplying two orthogonal \(3 \times 3\) matrices together. But that’s it. Everywhere else, rotors offer no computational advantage over matrices. The situation gets worse for GA when translations are added to the mix. In projective GA where motors represent all possible combinations of rotations and translations, the composition of equivalent matrices is significantly faster than motor composition in all dimensions.
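The corrected figures are easy to tabulate. (The raw \(n^3\) counts below do not include the orthogonality optimization behind the asterisked table entries, which lowers 8 to 4 and 27 to 24.)

```python
# Corrected operation counts: composing two n x n matrices costs about
# n^3 multiply-adds, while composing two rotors with 2^(n-1) components
# each costs 2^(n-1) * 2^(n-1) = 2^(2n-2), not the 2^n claimed in GA4CS.
counts = [(n, n ** 3, 2 ** (2 * n - 2)) for n in range(2, 6)]
for n, matrix_ops, rotor_ops in counts:
    print(n, matrix_ops, rotor_ops)
# Rotors win only at n = 3 (16 vs 27, or vs 24 with the orthogonality
# optimization); everywhere else they tie or lose.
```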

## Projective Geometric Algebra (PGA)

We’ll now move on to the new chapter entitled “A Guided Tour to the Plane-Based Geometric Algebra” (PGA4CS, version 2.0), which was originally written as an upgrade to Chapter 11 in GA4CS. This chapter takes on a very condescending tone that repeatedly bashes a more fully developed model considered superior by many people. It is not my intention to argue against the so-called “plane-based” model in this post, but instead to point out a few glaring instances of crackpot-level bullshit that fall within this post’s theme of poor foundations.

First, on the topic of norms in Section 4, the authors begin with a restatement of the norm induced by the inner product previously given in GA4CS:

**PGA4CS, Page 34**

The natural concept of the norm of an element *A* in geometric algebras is the usual

\(\Vert A \Vert = \sqrt{A \ast \widetilde A} = \sqrt{\langle A \widetilde A \rangle_0}\), (42)

where \(\ast\) is the scalar product, and \(\langle\rangle_0\) selects the grade-0 part of its argument; the two are equivalent. This definition works well in spaces where the basis vectors do not have negative squares: the reversion always makes similar elements cancel each other with a positive or zero scalar, so the square root exists. This definition therefore seems to suffice for PGA.

Norms in PGA have been completely worked out, and the result is the beautiful concept of a homogeneous magnitude \(\Vert \mathbf A \Vert = \sqrt{\mathbf A \mathbin{\small\unicode{x2022}} \mathbf A} + \sqrt{\mathbf A \mathbin{\small\unicode{x2218}} \mathbf A}\) produced by the combination of two seminorms induced by the two complementary inner products. The authors appear to have no knowledge of this whatsoever and have instead written one of the jankiest sections about norms I have ever seen, claiming that it “seems to suffice”. There is no mention of magnitude, distance from origin, distance between objects, angle between objects, or distance an object is moved by an operator. You know, the things a legitimate norm would actually tell you. They comment on the definition:

**PGA4CS, Page 35**

However, when an element *A* is ‘null’ (i.e., has a zero square), the standard norm eq.(42) is rather useless.

By itself, the norm in Equation (42) is *always* useless due to the homogeneous nature of the geometric representations in PGA. It has no connection to a concrete distance because any object can be
arbitrarily scaled without changing its geometric meaning, so the norm could take on any value for any object. This is a completely solved problem when the two norms in PGA are combined properly. The authors
recognize that a second norm exists, but they clearly have no idea what to do with it, and they never offer a working solution. They define a second “infinity norm” as follows:

**PGA4CS, Page 35**

Algebraically, it can be defined by employing the *reciprocal* \(e_0^r\) of \(e_0\), as a dot product (so that the
parts not containing \(e_0\) automatically do not contribute).

\(\Vert A \Vert_\infty = \Vert e_0^r \cdot A \Vert\). (43)

This is called the *infinity norm*, or *ideal norm*; for us it would be consistent to call it the *vanishing norm*.

What the actual hell? There is no reciprocal of \(e_0\) that can be used in this way. It doesn’t exist, and this entire line of reasoning is crank nonsense. The authors’ own definition
of reciprocal basis vector, given by Equation (3.31) in GA4CS, breaks down when the metric is degenerate. (And if you try to derive a reciprocal of \(e_0\) by applying the metric tensor, you get zero.)
They are trying to strip off a factor of \(e_0\) from *A* here, and that could be done properly by simply calculating \(A \vee \mathbf e_{123}\). But that still isn’t the right way to express
the other norm. The authors do not understand operator duality, which very naturally and elegantly produces the second norm with the complementary inner product. The authors continue with:

**PGA4CS, Page 35**

Note that this definition eq.(43) is not algebraically a part of PGA proper, since we need to invoke the reciprocal \(e_0^r\) of \(e_0\), which is in the dual space of PGA. When in Section 9 we have the Hodge dual, we can employ that to denote the infinity norm as

\(\Vert A \Vert_\infty = \Vert \star A \Vert\), (44)

since the Hodge dual also has the effect of removing the \(e_0\) part from *A* and producing something of which a Euclidean norm
can be taken. (It also adds a multiplication factor \(e_0\) to an already Euclidean part, automatically excluding it from contributing to the norm.)

The Hodge dual discussed in Section 9 is not the Hodge dual. It’s the right complement. If the authors had actually used the Hodge dual here, they would have found that Equation (44) always produces zero. Using the right complement instead in Equation (44) is almost correct, but the authors failed to apply the left complement to the result to produce the correct dual to the norm given by Equation (42). There are two embeddings of the scalar field in PGA. One norm produces scalars in one of those embeddings, and the other norm produces scalars in the other embedding. The sum of those norms gives a homogeneous magnitude that finally satisfies the axioms of a true norm and provides concrete distances with arbitrary weights taken into account.
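As a hedged illustration of how the two seminorms combine (the code and names are mine, and it assumes a homogeneous point written over a basis \(e_1, e_2, e_3, e_4\) with degenerate \(e_4\): the bulk seminorm comes from the metric \(\operatorname{diag}(1,1,1,0)\) and the weight seminorm from the complementary antimetric \(\operatorname{diag}(0,0,0,1)\)):

```python
from math import sqrt

def bulk_norm(p):
    # Seminorm from the metric diag(1,1,1,0): measures the Euclidean part
    x, y, z, w = p
    return sqrt(x * x + y * y + z * z)

def weight_norm(p):
    # Seminorm from the complementary antimetric diag(0,0,0,1)
    x, y, z, w = p
    return sqrt(w * w)

def homogeneous_magnitude(p):
    # Sum of the two seminorms over the homogeneous representation
    return bulk_norm(p) + weight_norm(p)

p = (3.0, 0.0, 4.0, 1.0)   # the point (3, 0, 4) with unit weight
q = (6.0, 0.0, 8.0, 2.0)   # the same point, arbitrarily rescaled

# Neither seminorm alone pins down a distance (rescaling changes both),
# but their ratio is the concrete distance from the origin:
print(bulk_norm(p) / weight_norm(p))   # 5.0
print(bulk_norm(q) / weight_norm(q))   # 5.0, invariant under rescaling
print(homogeneous_magnitude(p))        # 6.0
```

This is why a single norm like Equation (42) can never work in a homogeneous model: it varies under rescaling that leaves the geometry unchanged, while the combination of the two seminorms accounts for arbitrary weights.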

The authors go on to discuss a couple of basic formulas involving their norms, but they do not account for the weights of the inputs properly. At no point in the entire section do they ever deliver on their claim to “enable the measurement of magnitudes of finite and ideal elements” on page 37.

In Section 9, the authors talk about projective duality and the Hodge dual, but they once again demonstrate that they have no clue what they’re talking about. They begin with this statement:

**PGA4CS, Page 81**

Some authors use a form of duality that is related to the duality relationship between the PGA of planes and its
dual space in which vectors represent points. A proper way of defining this duality would be projectively. We could
determine, for a blade *X*, what more we need to span the pseudoscalar \(\mathcal I = e_0 \mathbf I_3\) of PGA:

*‘complement-dual’* \(\overline X\) of \(X\) defined through: \(X \wedge \overline X = \mathcal I\).

(This ‘complement-dual’ is our term, to distinguish it from other forms.) Although mathematically proper, there are two practical problems with this definition:

• The complement dual is not unique. ...

• The complement dual is not linear. ...

The authors are spending a lot of effort attacking an imagined mistake that nobody actually made. In every text that defines a right complement as \(\mathbf x \wedge \overline{\mathbf x} = \mathcal I\),
the value of \(\mathbf x\) is limited to *basis* elements, and the operation is then extended by linearity. The authors were very careless here, and they made yet another condescending argument
against their peers who, as it turns out, knew what they were doing. The authors call the above definition “kludgy” and then provide this definition for the Hodge dual:

**PGA4CS, Page 82**

Thus our Hodge dual \(\star\) is defined for a PGA blade *S* through

\(S(\star S) = (\mathbf S \widetilde{\mathbf S})\, \mathcal I\) for a blade *S*, (125)

and extended to multivectors by viewing them as sums of blades. Here **S** is the Euclidean factor
(i.e., non-*e* factor) of the blade *S*.

So they call a very simple and correct definition of the complement “kludgy” and immediately proceed to give the kludgiest definition of all time for the Hodge dual.
And it’s not even correct. The Hodge dual is a strongly metric operation, but they’ve removed the metric from this equation by arbitrarily dropping factors of \(e_0\) on the right side.
The values shown in their Table 4 are the right complements of the basis elements, not the Hodge duals. The authors don’t appear to know the difference.
They’re just making stuff up at this point, and they found a clunky formula that happens to hold when the star operator is the right complement operation. But it’s not. The correct
identity satisfied by the Hodge star is \((\mathbf a \wedge \star \mathbf b) = (\mathbf a \cdot \mathbf b) \mathcal I\), but even that is still derivable from the more fundamental
definition \(\star S = \overline{\mathbf GS}\) for arbitrary *S* discussed above. The authors then make this comment:

**PGA4CS, Pages 82–83**

For instance, though the Hodge dual is linear and thus satisfies \(\star (X + Y) = \star X + \star Y\), that property does not hold for the specific blade-based definition eq.(125) if you naively set \(S = X + Y\)! We ourselves have fallen into this trap, repeatedly...

The linearity property does not hold because the definition in Equation (125) is complete garbage. The authors have no idea what they’re talking about, they don’t understand what the Hodge dual does, and they’ve scribbled down the same kind of drivel that a student who skipped class would turn in for a homework assignment they didn’t know how to do.
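For what it’s worth, the correct behavior is trivial to check numerically in a purely Euclidean setting. Here is a minimal sketch in \(\mathbb R^3\) (the function names and coefficient orderings are mine, chosen for illustration): the dual is tabulated on basis vectors as \(\star e_1 = e_{23}\), \(\star e_2 = -e_{13}\), \(\star e_3 = e_{12}\) and extended by linearity, exactly the basis-element construction described above, and the identity \(\mathbf a \wedge \star \mathbf b = (\mathbf a \cdot \mathbf b)\, e_{123}\) then holds for *all* vectors, not just basis blades.

```python
def hodge_of_vector(b):
    """Hodge dual of b[0] e1 + b[1] e2 + b[2] e3 in Euclidean R^3, defined
    on basis vectors (e1 -> e23, e2 -> -e13, e3 -> e12) and extended by
    linearity. Returned as bivector coefficients (c23, c13, c12)."""
    return (b[0], -b[1], b[2])

def wedge_vector_bivector(a, B):
    """e123 coefficient of a ^ B for a bivector B = (B23, B13, B12),
    using e1 ^ e23 = e123, e2 ^ e13 = -e123, e3 ^ e12 = e123."""
    return a[0] * B[0] - a[1] * B[1] + a[2] * B[2]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# a ^ (star b) equals (a . b) e123 for arbitrary vectors, not just basis ones.
a, b = (1.0, 2.0, 3.0), (-4.0, 5.0, 0.5)
assert abs(wedge_vector_bivector(a, hodge_of_vector(b)) - dot(a, b)) < 1e-12
```

Because the star is defined on basis blades and extended linearly, it is automatically both unique and linear, and no blade-by-blade formula like Equation (125) is needed.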

Finally, I need to address some comments made by the authors in Section 10 in which they compare the number of operations needed to compose Euclidean motions using motors and the equivalent matrices. Here’s what they had to say:

**PGA4CS, Page 87**

The advantage of embedding in PGA, even in the highly optimized graphics space, is that extra performance gains can be made.

• For the composition of Euclidean motions, the classic 4 × 4 matrix product (requiring 64 multiplications, 48 additions and 16 floats of storage) can be replaced by the geometric product between motors (requiring only 48 multiplications, 40 additions and 8 floats of storage).

The numbers given here are incorrect, and the claim that PGA requires fewer operations is intentionally misleading in an attempt to make PGA operators look better than they actually are. The authors are well aware that the 4 × 4 matrix corresponding to any motor always has an orthogonal upper-left 3 × 3 portion and always has a fourth row of [0 0 0 1], so it requires only 12 floats of storage, not 16. When this form is properly recognized, composing two such matrices requires 33 multiplications and 24 additions, far fewer than the 48 and 40 required for motor composition. Motor composition is not faster.
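For the curious, here is one way to reach those counts (a sketch of the arithmetic, not necessarily the exact scheme intended above; the function names are mine): store the motion as an orthogonal 3 × 3 block plus a translation, compute only two rows of the 3 × 3 product (18 multiplications, 12 additions), recover the third row as the cross product of the first two (6 and 3), and transform the translation (9 multiplications, 9 additions), for a total of 33 multiplications and 24 additions.

```python
def cross(a, b):
    # 6 multiplications, 3 additions (counting subtractions as additions).
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def compose(Ra, ta, Rb, tb):
    """Compose rigid motions (Ra, ta) after (Rb, tb), where Ra and Rb are
    proper orthogonal 3x3 matrices (rows as tuples) and ta, tb are
    translation vectors."""
    # Two rows of Ra @ Rb: 18 multiplications, 12 additions.
    rows = [tuple(sum(Ra[i][k] * Rb[k][j] for k in range(3)) for j in range(3))
            for i in range(2)]
    # The third row of a proper orthogonal matrix is the cross product of
    # the first two: 6 multiplications, 3 additions.
    rows.append(cross(rows[0], rows[1]))
    # Translation Ra @ tb + ta: 9 multiplications, 9 additions.
    t = tuple(sum(Ra[i][k] * tb[k] for k in range(3)) + ta[i] for i in range(3))
    return rows, t
```

Either way, the matrix path needs only 12 floats of storage and comes in well under the 48 multiplications and 40 additions quoted for motor composition.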

## Additional Resources

- Eric Lengyel’s central hub for projective geometric algebra: projectivegeometricalgebra.org.
- *Projective Geometric Algebra Illuminated*. If you want solid foundations, this book is for you.