Complex Eigenvalues in Geometric Algebra


A less well-known feature of Twitter is that, outside of all the angry shouting, occasionally an interesting mathematical discussion breaks out. One of these was started by asking the simple question 'is there a geometric interpretation of complex eigenvalues?'. This question has a simple answer in geometric algebra (GA) that deserves to be better known.

There are two use cases to consider: where the function is a real linear mapping of real vector spaces that happens to have complex eigenvalues; and where the linear function and vector spaces are intrinsically complex. Both have a similar resolution in GA.

As a warm up, consider a rotation in a plane defined by the matrix

\displaystyle  U = \begin{pmatrix} \cos\theta & -\sin\theta \\  \sin\theta & \cos\theta \end{pmatrix}

The characteristic equation for this matrix is

\displaystyle  (\cos \theta - \lambda)^2 + \sin^2\theta = 0

which has complex solutions

\displaystyle  \lambda = \exp(\pm i \theta).
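As a quick numerical sanity check (my own addition, using numpy), we can confirm that these really are the eigenvalues of U:

```python
import numpy as np

theta = 0.7  # arbitrary test angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Sort both sets of eigenvalues so the comparison is order-independent.
eigvals = np.sort_complex(np.linalg.eigvals(U))
expected = np.sort_complex(np.array([np.exp(1j * theta), np.exp(-1j * theta)]))

print(np.allclose(eigvals, expected))  # True
```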

This is eminently reasonable, as the phase factors also encode rotations through \pm \theta, but it appears somehow circular. If we were to write the 2-dimensional rotation in GA terms, for example, we could write

\displaystyle a \mapsto \exp( - \theta e_1 e_2) a  = a \exp(  \theta e_1 e_2).

The complex eigenvalues seem to be capturing aspects of the rotor behaviour, but with the geometry of the e_1 e_2 plane replaced by the generic imaginary i. Is there a way to put the geometry back in?


The key to understanding linear functions in GA is through their extended action on multivectors. Suppose f denotes a linear function on vectors, so

\displaystyle  f(\lambda a + \mu b) = \lambda f(a) + \mu f(b),

where \lambda, \mu are scalars, and a, b are vectors. You can think of f either as an abstract linear function or an n-by-n matrix, where the matrix entries are fixed by choosing a coordinate system.

Given the linear function f, we extend its action to multivectors through the definition

\displaystyle  f(a \wedge b \wedge \cdots \wedge c) = f(a) \wedge f(b) \wedge \cdots \wedge f(c).

This is a grade-preserving mapping by definition, and is easily shown to be linear. One consequence of the grade-preserving property is that the action of f on the pseudoscalar I in an algebra can only be a multiple of the pseudoscalar, allowing us to define

\displaystyle  f(I) = \det(f) I.

This really is the way the determinant should be introduced. The definition includes everything you need to perform a calculation, and immediately suggests an interpretation as the volume scale factor. Indeed, it was first encountering this result that encouraged me to dig deeper into GA.
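This definition is easy to test in two dimensions, where the extended action gives f(e_1) \wedge f(e_2) = \det(f)\, e_1 \wedge e_2, so the single bivector coefficient of the wedge is the determinant. A minimal numpy sketch (the matrix F here is arbitrary, chosen purely for illustration):

```python
import numpy as np

def wedge2(a, b):
    """Coefficient of e1 ^ e2 in the wedge a ^ b of two 2D vectors."""
    return a[0] * b[1] - a[1] * b[0]

F = np.array([[2.0, 1.0],
              [0.5, 3.0]])  # arbitrary linear map; columns are f(e1), f(e2)

print(wedge2(F[:, 0], F[:, 1]))  # 5.5
print(np.isclose(wedge2(F[:, 0], F[:, 1]), np.linalg.det(F)))  # True
```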

The extended function opens up the concept of eigen-multivectors satisfying f(A) = \lambda A. The pseudoscalar is an example of an eigen-multivector, one shared by all linear functions in the same space. More relevant here is the concept of an eigen-bivector, which is what we need to understand complex eigenvalues. Returning to the rotation example, this has

\displaystyle  f(e_1) = \cos\theta e_1 + \sin\theta e_2,  \qquad   f(e_2) = - \sin\theta e_1  + \cos\theta e_2.

If we now extend this to act on the bivector e_1 e_2 we find

\displaystyle  f(e_1 \wedge e_2) = (\cos\theta e_1 + \sin\theta e_2) \wedge (- \sin\theta e_1  + \cos\theta e_2) = e_1 \wedge e_2.

The plane itself is an eigen-bivector, with (real) eigenvalue 1. This is to be expected. If we rotate in a plane, the plane itself is unchanged, but every vector in the plane is transformed. This is the key concept we need to understand complex eigenvalues.
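A short numerical confirmation of this calculation (my own sketch, arbitrary angle):

```python
import numpy as np

def wedge2(a, b):
    # coefficient of e1 ^ e2 in a ^ b
    return a[0] * b[1] - a[1] * b[0]

theta = 1.1
fe1 = np.array([np.cos(theta), np.sin(theta)])   # f(e1)
fe2 = np.array([-np.sin(theta), np.cos(theta)])  # f(e2)

# f(e1 ^ e2) = f(e1) ^ f(e2) should equal e1 ^ e2, i.e. coefficient 1.
print(np.isclose(wedge2(fe1, fe2), 1.0))  # True
```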

Suppose we have a real function over a real space that turns out to have a complex eigenvalue / eigenvector pair, so

\displaystyle  f(u + i v) = (\alpha +i \beta) (u+iv).

This separates into a pair of real equations

\displaystyle f(u) = \alpha u - \beta v, \qquad f(v) = \alpha v + \beta u

and we see immediately that

\displaystyle  f(u \wedge v) = ( \alpha u - \beta v) \wedge ( \alpha v + \beta u) =  (\alpha^2+ \beta^2) u \wedge v.

The eigen-bivector u \wedge v picks up the squared magnitude of the complex eigenvalue. The remaining phase information determines a rotation in the u \wedge v plane, and the combined rotation/dilation can be expressed as a scaled rotor.
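This is easy to check numerically. In the sketch below (my own illustration) a bivector u \wedge v is stored as the antisymmetric matrix u v^T - v u^T, on which the extended action of f is F(\cdot)F^T; the test matrix is a scaled rotation in the e_1 e_2 plane plus a stretch along e_3:

```python
import numpy as np

def wedge(u, v):
    """Antisymmetric-matrix representation of the bivector u ^ v."""
    return np.outer(u, v) - np.outer(v, u)

# Scaled rotation in the e1e2 plane (eigenvalues +/- 1.5i) plus a stretch along e3.
F = np.array([[0.0, -1.5, 0.0],
              [1.5,  0.0, 0.0],
              [0.0,  0.0, 2.0]])

lam, vecs = np.linalg.eig(F)
k = np.argmax(lam.imag)                    # complex eigenvalue with beta > 0
alpha, beta = lam[k].real, lam[k].imag
u, v = vecs[:, k].real, vecs[:, k].imag    # eigenvector is u + i v

# f(u ^ v) = f(u) ^ f(v) corresponds to F (u v^T - v u^T) F^T.
lhs = F @ wedge(u, v) @ F.T
rhs = (alpha**2 + beta**2) * wedge(u, v)
print(np.allclose(lhs, rhs))               # True
```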

Given a real general linear transformation, one way to understand it is to decompose the space on which it acts into eigenvectors (with real eigenvalues) and eigen-bivectors (with a scale and rotation angle). This captures all of the geometry of the linear function / matrix, without ever having to introduce complex numbers. Of course, other representations exist and may be more useful in certain applications (SVD, Jordan form, etc.), but for real functions this answers the question of how to interpret complex eigenvalues.

Complex Functions

The remaining question is how we should view complex functions over complex spaces. This has a very similar answer, but first we need to borrow an idea from the paper Lie groups as spin groups. We represent a complex n-dimensional space as a real space of 2n dimensions together with a complex structure encoded in a bivector J. As a basis we use the orthogonal vectors \{e_i, f_i\}, i=1\ldots n, and define

\displaystyle J = \sum_{i=1}^n e_i \wedge f_i.

The bivector J provides a complex structure through the property

\displaystyle (a \cdot J) \cdot J = -a \quad \forall a.
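With the basis ordered (e_1, \ldots, e_n, f_1, \ldots, f_n), the map a \mapsto a \cdot J sends e_k \mapsto f_k and f_k \mapsto -e_k, so it is represented by a block matrix with -I in the top-right and I in the bottom-left. A quick check (my own sketch) that this squares to minus the identity:

```python
import numpy as np

n = 3
I = np.eye(n)
Z = np.zeros((n, n))

# Matrix of the map a -> a . J : e_k -> f_k, f_k -> -e_k.
Jm = np.block([[Z, -I],
               [I,  Z]])

print(np.allclose(Jm @ Jm, -np.eye(2 * n)))  # True
```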

Complex linear functions act in the same way as real linear functions in our 2n-dimensional space, with the additional constraint that they respect the complex structure,

\displaystyle  f(a \cdot J) = f(a) \cdot J.

Suppose that we have a complex linear function F, with a complex eigenvector u+iv,

\displaystyle  F(u+iv) =  (\alpha +i \beta) (u+iv).

We map the function F to a real function f in our 2n-dimensional space, and we map u+iv onto the real vector

\displaystyle  a = \sum_{k=1}^n \left( u_k e_k + v_k f_k \right)

where the u_k and v_k are the real coefficients of the complex eigenvector. In our 2n-dimensional space the eigenvector equation becomes

\displaystyle f(a) = \alpha a + \beta a \cdot J.

But the complex structure ensures that

\displaystyle  f(a \cdot J) = f(a) \cdot J =  \alpha a \cdot J - \beta a.

We can now define the eigen-bivector a \wedge (a \cdot J) satisfying

\displaystyle  f(a \wedge (a \cdot J)) = (\alpha^2+ \beta^2) a \wedge (a \cdot J).

The same analysis holds as for real functions, except that now the eigen-bivectors have the special structure a \wedge (a \cdot J).
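Numerically, a complex matrix A + iB becomes the real block matrix [[A, -B], [B, A]] acting on coordinates (u; v), the map a \mapsto a \cdot J becomes the block matrix [[0, -I], [I, 0]], and the eigen-bivector relation can be checked directly. A numpy sketch (A and B are arbitrary random matrices):

```python
import numpy as np

def wedge(u, v):
    # antisymmetric-matrix representation of the bivector u ^ v
    return np.outer(u, v) - np.outer(v, u)

rng = np.random.default_rng(0)
n = 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

F = np.block([[A, -B],
              [B,  A]])                     # real 2n x 2n form of A + iB
Jm = np.block([[np.zeros((n, n)), -np.eye(n)],
               [np.eye(n),         np.zeros((n, n))]])  # a -> a . J

lam, vecs = np.linalg.eig(A + 1j * B)
alpha, beta = lam[0].real, lam[0].imag
w = vecs[:, 0]
a = np.concatenate([w.real, w.imag])        # u + i v  ->  (u; v)
aJ = Jm @ a                                 # a . J

lhs = F @ wedge(a, aJ) @ F.T                # f(a ^ (a . J))
rhs = (alpha**2 + beta**2) * wedge(a, aJ)
print(np.allclose(lhs, rhs))                # True
```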

One way to understand this is to think of the complex structure as defining a linear function j(a) via

\displaystyle   j(a)  = a \cdot J.

This extends to act on bivectors, and we find that

\displaystyle  j(a \wedge (a \cdot J)) = (a \cdot J) \wedge ((a \cdot J) \cdot J) = (a \cdot J) \wedge (-a) = a \wedge (a \cdot J).

So the eigen-bivectors of f are also eigen-bivectors of j. This is how the complex structure imposes itself on the eigen-bivectors. The fact that the linear functions f and j commute means they can have simultaneous eigen-bivectors.
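In the same matrix picture, where a complex-linear f has the block form [[A, -B], [B, A]] and the complex structure j has the block form [[0, -I], [I, 0]], the commutation of f and j is a one-line check (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

F = np.block([[A, -B],
              [B,  A]])                     # complex-linear map
Jm = np.block([[np.zeros((n, n)), -np.eye(n)],
               [np.eye(n),         np.zeros((n, n))]])  # complex structure

print(np.allclose(F @ Jm, Jm @ F))  # True
```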

There are many advantages to viewing complex functions as a special case of real functions with a complex structure imposed. For example, all special unitary transformations can be represented with rotors, and the corresponding Lie algebra is represented by an algebra of bivectors. This reverses the usual order of presentation, in which real functions are treated as a special case of complex functions, usually motivated by the argument that we need to consider complex eigenvalues anyway. Once that argument disappears, the route presented here becomes much more natural and geometrically clear.


PDF version, with better typesetting.