
Section 4 Applications

Subsection 4.1 Structure from maximal gradings

In this subsection we show how maximal gradings may be used to extract structural information about Lie algebras. We start by studying how maximal gradings reveal the structure of a direct product. A similar result can be found in 1.6.5 of [14].

Consider the Lie algebra \(L_{6,22}(1)\) in [1] with basis \(\{X_1,\dots,X_6 \}\text{,}\) where the only non-zero bracket relations are

\begin{equation*} [X_1,X_2]=X_5,\;\; [X_1,X_3]=X_6,\;\; [X_2,X_4]=X_6,\;\; [X_3,X_4]=X_5\text{.} \end{equation*}

In a basis \(\{Y_1,\dots,Y_6 \}\) adapted to the maximal grading, the bracket relations are

\begin{equation*} [Y_1,Y_2]=Y_3,\;\;[Y_4,Y_5]=Y_6\text{.} \end{equation*}

From these bracket relations we immediately see that the Lie algebra \(L_{6,22}(1)\) is isomorphic to \(L_{3,2} \times L_{3,2}\text{,}\) where \(L_{3,2}\) is the first Heisenberg Lie algebra.

We say that a split torus \(\mathfrak{t}\subset\der(\mathfrak{g})\) is non-degenerate if the intersection of the kernels of the maps \(D \in \mathfrak{t}\) is trivial. That is, a split torus is non-degenerate if and only if the \(\mathfrak{t}^*\)-grading it induces does not have zero as a weight.

We expect that the following result is known even without the non-degeneracy assumption; however, we have been unable to locate a reference. We will therefore give a direct proof of the simpler claim.

Denoting \(\mathfrak{t} = \mathfrak{t}_1 \times \mathfrak{t}_2\text{,}\) let \(D \in C(\mathfrak{t})\) be a diagonalizable derivation in the centralizer \(C(\mathfrak{t})\text{.}\) To show the maximality of \(\mathfrak{t}\text{,}\) it suffices to show that \(D \in \mathfrak{t}\text{.}\) In a basis adapted to the product we may represent

\begin{equation*} D = \begin{bmatrix} E_1 \amp F_1 \\ F_2 \amp E_2 \end{bmatrix}\text{,} \end{equation*}

where \(E_1\in \der(\mathfrak{g}_1)\text{,}\) \(E_2\in \der(\mathfrak{g}_2)\text{,}\) and \(F_1\colon \mathfrak{g}_2\to\mathfrak{g}_1\) and \(F_2\colon \mathfrak{g}_1\to \mathfrak{g}_2\) are some linear maps. We are going to demonstrate that \(E_1\in \mathfrak t_1\text{,}\) \(E_2\in \mathfrak t_2\) and \(F_1=F_2=0\text{,}\) which would prove that \(D = E_1\times E_2 \in \mathfrak t\text{.}\)

Let \(D_1 \in \mathfrak t_1\text{.}\) By assumption \(D\) commutes with \(D_1\times 0 \in \mathfrak t\text{,}\) so a simple computation shows that \(E_1\) commutes with \(D_1\) and \(D_1 F_1 = 0\text{.}\) Since \(D_1\) is arbitrary, we obtain \(E_1 \in C(\mathfrak t_1) \text{.}\) From the fact that \(D_1F_1 = 0\) for every \(D_1 \in \mathfrak t_1\) we get

\begin{equation*} \operatorname{Im}(F_1) \subset \bigcap_{D_1\in \mathfrak t_1} \ker(D_1) = \{0\}, \end{equation*}

where the last equality follows from the non-degeneracy of \(\mathfrak t_1\text{.}\) Consequently, \(F_1 = 0\text{.}\)

A similar argument shows that \(E_2 \in C(\mathfrak t_2)\) and \(F_2=0\text{.}\) Since \(D\) is assumed diagonalizable, it follows that \(E_1\) and \(E_2\) are diagonalizable. Then by maximality of \(\mathfrak t_1\) and \(\mathfrak t_2\) we have \(E_1\in \mathfrak t_1\) and \(E_2\in \mathfrak t_2\text{,}\) which shows that \(D = E_1\times E_2 \in \mathfrak t\text{.}\)

For gradings, the above lemma implies the following. Suppose \(\mathcal V : \mathfrak g_1 = \bigoplus_{\alpha \in A } V_\alpha\) and \(\mathcal W : \mathfrak g_2 = \bigoplus_{\beta \in B } W_\beta\) are maximal gradings of Lie algebras \(\mathfrak g_1\) and \(\mathfrak g_2\text{,}\) and suppose zero is not a weight for either \(\mathcal{V}\) or \(\mathcal{W}\text{.}\) Then

\begin{equation} \mathcal V \times \mathcal W : \Bigl(\bigoplus_{(\alpha,0) \in A \times B} V_\alpha \times \{0\} \Bigr) \oplus \Bigl( \bigoplus_{(0,\beta) \in A \times B} \{0\} \times W_\beta \Bigr)\label{eq-product-grading}\tag{4.1} \end{equation}

is a maximal grading of \(\mathfrak g = \mathfrak g_1 \times \mathfrak g_2 \text{.}\) Indeed, the gradings \(\mathcal{V}\) and \(\mathcal{W}\) are the universal realizations of gradings induced by the respective maximal split tori \(\mathfrak{t}_1\) and \(\mathfrak{t}_2\) of the Lie algebras \(\mathfrak{g}_1\) and \(\mathfrak{g}_2\text{.}\) By Lemma 4.2, the product torus \(\mathfrak t_1 \times \mathfrak t_2\) is maximal. The universal realization of the grading induced by \(\mathfrak t_1 \times \mathfrak t_2\) is equivalent to the product grading (4.1).

Conversely, we can detect when a grading is a product grading. As in 1.6.4 of [14], for a grading \(\mathcal{V}\colon \mathfrak{g} = \bigoplus_{\alpha \in A} V_\alpha \) with weights \(\Omega\subset A\text{,}\) consider the graph with vertex set \(\Omega\) defined as follows: whenever \([V_\alpha, V_\beta]\neq 0\text{,}\) we add edges between all three of the vertices \(\alpha,\beta,\alpha+\beta\in \Omega\text{.}\) If the graph admits a partition \(\Omega = \Omega_1 \sqcup \Omega_2\) of its vertices such that no edges exist between \(\Omega_1\) and \(\Omega_2\text{,}\) then the Lie algebra \(\mathfrak{g}\) is the direct product of the ideals \(\mathfrak{g}_1 = \bigoplus_{\alpha \in \Omega_1} V_\alpha\) and \(\mathfrak{g}_2 = \bigoplus_{\beta \in \Omega_2} V_\beta\text{.}\) In this situation we say that the grading \(\mathcal{V}\) detects the product structure \(\mathfrak{g}_1 \times \mathfrak{g}_2 \) of the Lie algebra \(\mathfrak{g}\text{.}\) We gather the observations made above into the following proposition.
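Computationally, the criterion above amounts to finding the connected components of the weight graph: each component spans an ideal, and two or more components detect a product structure. A minimal sketch in Python, where the weight and bracket data for \(L_{6,22}(1)\) (matching the adapted basis with \([Y_1,Y_2]=Y_3\) and \([Y_4,Y_5]=Y_6\)) is our own illustrative encoding:

```python
from itertools import combinations

def product_components(weights, bracket_pairs):
    """Connected components of the graph on the weight set, where an edge
    joins a, b and a+b whenever [V_a, V_b] != 0.  Each component spans
    an ideal, so two or more components detect a product structure."""
    adj = {w: set() for w in weights}
    for a, b in bracket_pairs:            # pairs (a, b) with [V_a, V_b] != 0
        c = tuple(x + y for x, y in zip(a, b))
        for u, v in combinations((a, b, c), 2):
            adj[u].add(v)
            adj[v].add(u)
    seen, components = set(), []
    for w in weights:                     # depth-first search over the graph
        if w in seen:
            continue
        stack, comp = [w], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        components.append(comp)
    return components

# L_{6,22}(1) in the adapted basis: one-dimensional layers over Z^4
omega = [(1,0,0,0), (0,1,0,0), (1,1,0,0), (0,0,1,0), (0,0,0,1), (0,0,1,1)]
pairs = [((1,0,0,0), (0,1,0,0)), ((0,0,1,0), (0,0,0,1))]
print(product_components(omega, pairs))   # two components: L_{3,2} x L_{3,2}
```

The two returned components span the two Heisenberg ideals of the product decomposition.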

We remark that while maximal gradings are able to detect product structures as indicated above, they are not able to detect some other algebraic properties. The Lie algebra \(L_{6,24}(1)\) in [1] provides examples of two such phenomena. First, the layers of its maximal grading are not contained in the terms of its lower central series (this behavior can also be achieved by examples where the maximal grading is very coarse). Secondly, this Lie algebra has a nice basis (see [8] for the precise definition and its motivation), but it can be shown that no basis adapted to a maximal grading is nice.

Despite these negative results, maximal gradings have another structural application in simplifying the problem of deciding whether two Lie algebras are isomorphic or not.

Remark 4.4.

If two Lie algebras \(\mathfrak g_1 \) and \(\mathfrak g_2\) are isomorphic, then any isomorphism maps the maximal grading of \(\mathfrak{g}_1\) to a maximal grading of \(\mathfrak{g}_2\text{.}\) Therefore, if the maximal gradings of \(\mathfrak{g}_1\) and \(\mathfrak{g}_2\) are given, then deciding if \(\mathfrak{g}_1\) and \(\mathfrak{g}_2\) are isomorphic reduces to determining the existence of an isomorphism between the maximal gradings. In many cases this is significantly easier than naively solving the original isomorphism problem. For example, in low dimensions, the majority of the layers of the maximal grading are one-dimensional, in which case searching for possible isomorphisms becomes a combinatorial problem.

Subsection 4.2 Classification of gradings in low dimension

Following the strategy outlined in Section 3.7 of [21], we classify universal realizations of torsion-free gradings in nilpotent Lie algebras of dimension up to 7 over \(\mathbb{C}\text{,}\) apart from a few uncountable families of 7-dimensional Lie algebras. Regarding the uncountable families, we follow the study carried out in [23] and focus on those singular values of the complex parameter \(\lambda\) for which either the Lie algebra cohomology or the adjoint cohomology has a different dimension compared to the rest of the Lie algebras in the same family. We also include a few examples corresponding to non-singular values. The complete classification of the gradings can be found in [18]; here we give a brief overview of Lie algebras of dimension up to 6.

The main part of the classification is the construction of a maximal grading (Algorithm 3.10) and the enumeration of torsion-free gradings (Proposition 3.1). Since all the Lie algebras we consider are defined over the algebraic numbers, we are free to make use of our computer algebra implementation, as discussed in Subsection 2.1. As a starting point we used the classifications of nilpotent Lie algebras given in [10] for dimensions less than 6, [1] for dimension 6, and [16] for dimension 7. The classification up to dimension 6 has a pre-existing computer implementation in the GAP package [2]. Since these Lie algebras are not always given in a basis adapted to any maximal grading, we first compute the maximal grading using the methods described in Subsection 3.4 and switch to a basis adapted to the resulting grading.

The presentations we use for the nilpotent Lie algebras up to dimension 6 are listed in Table 4.5. The Lie brackets \([Y_a,Y_b]=Y_c\) are listed in the condensed form \(ab=c\text{.}\) Lie algebras \(\mathfrak{g}\times\mathbb{C}^d\) with abelian factors have the same structure coefficients as the nonabelian factor \(\mathfrak{g}\) and are omitted from the list. For example, \(L_{4,2}=L_{3,2}\times\mathbb{C}\) has the basis \(Y_1,\ldots,Y_4\) with the bracket relation \([Y_1,Y_2]=Y_3\) from \(L_{3,2}\text{.}\)

Table 4.5. Lie algebras of dimension up to 6 over \(\mathbb{C}\) in a basis adapted to a maximal grading.
\(L_{3,2}\) \(12=3\)
\(L_{4,3}\) \(12=3\) \(13=4\)
\(L_{5,4}\) \(41=5\) \(23=5\)
\(L_{5,5}\) \(13=4\) \(14=5\) \(32=5\)
\(L_{5,6}\) \(12=3\) \(13=4\) \(14=5\) \(23=5\)
\(L_{5,7}\) \(12=3\) \(13=4\) \(14=5\)
\(L_{5,8}\) \(12=3\) \(14=5\)
\(L_{5,9}\) \(12=3\) \(23=4\) \(13=5\)
\(L_{6,10}\) \(23=4\) \(51=6\) \(24=6\)
\(L_{6,11}\) \(12=3\) \(13=5\) \(15=6\) \(23=6\) \(24=6\)
\(L_{6,12}\) \(23=4\) \(24=5\) \(31=6\) \(25=6\)
\(L_{6,13}\) \(13=4\) \(14=5\) \(32=5\) \(15=6\) \(42=6\)
\(L_{6,14}\) \(12=3\) \(13=4\) \(14=5\) \(23=5\) \(25=6\) \(43=6\)
\(L_{6,15}\) \(12=3\) \(13=4\) \(14=5\) \(23=5\) \(15=6\) \(24=6\)
\(L_{6,16}\) \(12=3\) \(13=4\) \(14=5\) \(25=6\) \(43=6\)
\(L_{6,17}\) \(21=3\) \(23=4\) \(24=5\) \(13=6\) \(25=6\)
\(L_{6,18}\) \(12=3\) \(13=4\) \(14=5\) \(15=6\)
\(L_{6,19}(-1)\) \(12=3\) \(14=5\) \(25=6\) \(43=6\)
\(L_{6,20}\) \(12=3\) \(14=5\) \(15=6\) \(23=6\)
\(L_{6,21}(-1)\) \(12=3\) \(23=4\) \(13=5\) \(14=6\) \(25=6\)
\(L_{6,22}(0)\) \(24=5\) \(41=6\) \(23=6\)
\(L_{6,22}(1)\) \(12=3\) \(45=6\)
\(L_{6,23}\) \(12=3\) \(14=5\) \(15=6\) \(42=6\)
\(L_{6,24}(0)\) \(13=4\) \(34=5\) \(14=6\) \(32=6\)
\(L_{6,24}(1)\) \(12=3\) \(23=5\) \(24=5\) \(13=6\)
\(L_{6,25}\) \(12=3\) \(13=4\) \(15=6\)
\(L_{6,26}\) \(12=3\) \(24=5\) \(14=6\)
\(L_{6,27}\) \(12=3\) \(13=4\) \(25=6\)
\(L_{6,28}\) \(12=3\) \(23=4\) \(13=5\) \(15=6\)

With all the maximal gradings computed, we enumerate universal realizations of all torsion-free gradings as in Proposition 3.1. For the classification up to equivalence, we first introduce some easy-to-check invariants for gradings. Recall that by Lemma 2.9, the grading groups of the obtained gradings are isomorphic to some groups \(\mathbb{Z}^k\text{.}\) The dimension \(k\) is called the rank of the grading. We recall also an invariant from Section 3.2 of [21]: the type of a grading is the tuple \((n_1,n_2,\ldots,n_k)\text{,}\) where \(k\) is the dimension of the largest layer, and each \(n_i\) is the number of \(i\)-dimensional layers.
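Both invariants can be read off from the layer dimensions alone. A small sketch in Python (the function name is our own; the sample data corresponds to two gradings of \(L_{4,2}\) discussed later in this subsection):

```python
def grading_type(layer_dims):
    """Type of a grading: the tuple (n_1, ..., n_k), where k is the
    dimension of the largest layer and n_i counts i-dimensional layers."""
    k = max(layer_dims)
    return tuple(sum(1 for d in layer_dims if d == i) for i in range(1, k + 1))

# Maximal grading of L_{4,2}: four one-dimensional layers
print(grading_type([1, 1, 1, 1]))  # (4,)
# A Z-grading with layers <Y1,Y2,Y4> and <Y3>
print(grading_type([3, 1]))        # (1, 0, 1)
```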

From the full list of torsion-free gradings, we initially group the gradings together using the following criteria:

  1. The ranks of the gradings are equal.
  2. The types of the gradings are equal.
  3. There exists a homomorphism between the grading groups of the universal realizations mapping layers to layers of equal dimensions.

In this way we get for each Lie algebra families \(I_1,I_2,\ldots,I_k\) of gradings such that the gradings of \(I_i\) and \(I_j\) are not equivalent for \(i\neq j\text{.}\)

To compute the precise equivalence classes, we naively check if the gradings within each family \(I_i\) are equivalent. For each pair of \(\mathbb{Z}^k\)-gradings \(\mathfrak{g}=\bigoplus_{\alpha\in \mathbb{Z}^k} V_\alpha\) and \(\mathfrak{g}=\bigoplus_{\beta\in \mathbb{Z}^k} W_\beta\text{,}\) there are usually only a few homomorphisms \(f\colon \mathbb{Z}^k\to \mathbb{Z}^k\) with \(\dim V_\alpha=\dim W_{f(\alpha)}\) for all weights \(\alpha\text{.}\) For each such homomorphism \(f\text{,}\) we need to check whether there exists an automorphism \(\Phi\in\Aut(\mathfrak{g})\) such that \(\Phi(V_\alpha)=W_{f(\alpha)}\) for all weights \(\alpha\text{.}\) These conditions define a system of quadratic equations over the algebraic numbers. Since we are working over an algebraically closed field, the system has no solution if and only if 1 is contained in the ideal generated by the polynomial equations. The dimensions of the layers are generally quite small in the cases we need to check, so Gröbner basis methods work well.
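The final solvability test can be sketched with a computer algebra system. Assuming SymPy is available, checking whether 1 lies in the ideal reduces to checking whether the reduced Gröbner basis is \(\{1\}\); the systems below are toy examples rather than ones arising from an actual pair of gradings:

```python
from sympy import groebner, symbols

def has_solution(polys, variables):
    """Over an algebraically closed field, polys = 0 has no common zero
    iff 1 lies in the ideal, i.e. the reduced Groebner basis is [1]."""
    G = groebner(polys, *variables, order='lex')
    return list(G.exprs) != [1]

x, y = symbols('x y')
print(has_solution([x - 1, y - 2], (x, y)))  # True: (1, 2) is a common zero
print(has_solution([x, x - 1], (x, y)))      # False: the reduced basis is [1]
```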

For nilpotent Lie algebras of dimension up to 6, an overview of our classification of gradings is compiled in Table 4.6. For each Lie algebra, we list its label in the classification of [1], the rank of its maximal grading (\(k\)), whether it is stratifiable or not (s?), the number of gradings (#), and the number of gradings with a positive realization (\(\#\mathbb{Z}_+\)).

Table 4.6. Gradings of Lie algebras up to dimension 6 over \(\mathbb{C}\)
Name \(k\) s? # #\(\mathbb{Z}_+\)
\(L_{2,1}\) \(2\) \(\checkmark\) \(2\) \(2\)
\(L_{3,1}\) \(3\) \(\checkmark\) \(3\) \(3\)
\(L_{3,2}\) \(2\) \(\checkmark\) \(4\) \(2\)
\(L_{4,1}\) \(4\) \(\checkmark\) \(5\) \(5\)
\(L_{4,2}\) \(3\) \(\checkmark\) \(11\) \(6\)
\(L_{4,3}\) \(2\) \(\checkmark\) \(6\) \(2\)
\(L_{5,1}\) \(5\) \(\checkmark\) \(7\) \(7\)
\(L_{5,2}\) \(4\) \(\checkmark\) \(26\) \(15\)
\(L_{5,3}\) \(3\) \(\checkmark\) \(22\) \(9\)
\(L_{5,4}\) \(3\) \(\checkmark\) \(9\) \(4\)
\(L_{5,5}\) \(2\) \(\) \(7\) \(3\)
\(L_{5,6}\) \(1\) \(\) \(2\) \(1\)
\(L_{5,7}\) \(2\) \(\checkmark\) \(7\) \(2\)
\(L_{5,8}\) \(3\) \(\checkmark\) \(14\) \(6\)
\(L_{5,9}\) \(2\) \(\checkmark\) \(5\) \(2\)
\(L_{6,1}\) \(6\) \(\checkmark\) \(11\) \(11\)
\(L_{6,2}\) \(5\) \(\checkmark\) \(52\) \(31\)
\(L_{6,3}\) \(4\) \(\checkmark\) \(60\) \(27\)
\(L_{6,4}\) \(4\) \(\checkmark\) \(29\) \(13\)
\(L_{6,5}\) \(3\) \(\) \(29\) \(15\)
\(L_{6,6}\) \(2\) \(\) \(8\) \(6\)
\(L_{6,7}\) \(3\) \(\checkmark\) \(31\) \(11\)
\(L_{6,8}\) \(4\) \(\checkmark\) \(52\) \(25\)
\(L_{6,9}\) \(3\) \(\checkmark\) \(17\) \(8\)
\(L_{6,10}\) \(3\) \(\) \(23\) \(8\)
\(L_{6,11}\) \(1\) \(\) \(2\) \(1\)
\(L_{6,12}\) \(2\) \(\) \(9\) \(4\)
\(L_{6,13}\) \(2\) \(\) \(8\) \(3\)
\(L_{6,14}\) \(1\) \(\) \(2\) \(1\)
\(L_{6,15}\) \(1\) \(\) \(2\) \(1\)
\(L_{6,16}\) \(2\) \(\checkmark\) \(8\) \(2\)
\(L_{6,17}\) \(1\) \(\) \(2\) \(1\)
\(L_{6,18}\) \(2\) \(\checkmark\) \(8\) \(2\)
\(L_{6,19}(-1)\) \(3\) \(\checkmark\) \(21\) \(6\)
\(L_{6,20}\) \(2\) \(\checkmark\) \(8\) \(3\)
\(L_{6,21}(-1)\) \(2\) \(\checkmark\) \(6\) \(2\)
\(L_{6,22}(0)\) \(3\) \(\checkmark\) \(18\) \(8\)
\(L_{6,22}(1)\) \(4\) \(\checkmark\) \(32\) \(15\)
\(L_{6,23}\) \(2\) \(\) \(8\) \(4\)
\(L_{6,24}(0)\) \(2\) \(\) \(8\) \(4\)
\(L_{6,24}(1)\) \(2\) \(\) \(5\) \(2\)
\(L_{6,25}\) \(3\) \(\checkmark\) \(29\) \(11\)
\(L_{6,26}\) \(3\) \(\checkmark\) \(10\) \(5\)
\(L_{6,27}\) \(3\) \(\checkmark\) \(32\) \(13\)
\(L_{6,28}\) \(2\) \(\checkmark\) \(8\) \(3\)

We present our method of classifying all the possible gradings explicitly in the simple case of the Lie algebra \(L_{4,2}\) given in the basis \(Y_1,\ldots,Y_4\) with the only nonzero bracket \([Y_1,Y_2]=Y_3\text{.}\) The maximal grading is over \(\mathbb{Z}^3\) with the layers

\begin{align*} V_{(1,0,0)}\amp=\langle Y_1\rangle,\amp V_{(0,1,0)}\amp=\langle Y_2\rangle,\amp V_{(1,1,0)}\amp=\langle Y_3\rangle,\amp V_{(0,0,1)}\amp=\langle Y_4\rangle\text{.} \end{align*}

Ignoring scalar multiples, the difference set \(\Omega-\Omega\) of weights consists of the 6 elements \(e_1\text{,}\) \(e_2\text{,}\) \(e_1-e_2\text{,}\) \(e_1-e_3\text{,}\) \(e_2-e_3\text{,}\) and \(e_1+e_2-e_3\text{,}\) where \(e_1,e_2,e_3\) are the standard basis elements of the lattice \(\mathbb{Z}^3\text{.}\) Subsets of these points span the trivial subspace, 6 one-dimensional subspaces, 7 two-dimensional subspaces \({\langle e_1,e_2\rangle}\text{,}\) \({\langle e_1,e_3 \rangle}\text{,}\) \({\langle e_2,e_3 \rangle}\text{,}\) \({\langle e_1 - e_3,e_2 \rangle}\text{,}\) \({\langle e_1,e_2 - e_3 \rangle}\text{,}\) \({\langle e_1 - e_3,e_2 - e_3 \rangle}\text{,}\) \({\langle 2e_1 - e_3,2e_2 - e_3 \rangle}\text{,}\) and the full space \(\mathbb{Z}^3\text{.}\)
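The difference set is straightforward to compute; a quick sketch in Python, where the sign normalization stands in for "ignoring scalar multiples" (which suffices here because all the differences are primitive vectors):

```python
def difference_set(weights):
    """Pairwise differences of distinct weights, normalized so that the
    first nonzero entry is positive (identifying d with -d)."""
    diffs = set()
    for a in weights:
        for b in weights:
            if a != b:
                d = tuple(x - y for x, y in zip(a, b))
                if next(x for x in d if x != 0) < 0:
                    d = tuple(-x for x in d)
                diffs.add(d)
    return diffs

# Weights of the maximal grading of L_{4,2}
omega = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]
print(sorted(difference_set(omega)))
# 6 elements: e1, e2, e1-e2, e1-e3, e2-e3, e1+e2-e3 (up to sign)
```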

In this case, each of these 15 subspaces \(S\) defines a torsion-free quotient \(\mathbb{Z}^3/S\text{.}\) For instance, parametrizing the quotient \(\pi\colon\mathbb{Z}^3\to\mathbb{Z}^3/\langle e_1 - e_3,e_2-e_3 \rangle\) as \(\mathbb{Z}\) using the complementary line \(\mathbb{Z}e_3\) gives the weights

\begin{equation*} \pi(e_1) = \pi(e_2) = \pi(e_3) = 1,\quad \pi(e_1+e_2) = 2\text{,} \end{equation*}

so a push-forward grading for the quotient \(\mathbb{Z}^3/\langle e_1 - e_3,e_2-e_3 \rangle\) is the \(\mathbb{Z}\)-grading

\begin{equation*} V_1 = \langle Y_1, Y_2, Y_4\rangle, \quad V_2=\langle Y_3\rangle\text{.} \end{equation*}

To determine the distinct equivalence classes out of the 15 gradings, we first consider the simple criteria listed earlier. The trivial grading and the maximal grading are distinguished by the rank. The six \(\mathbb{Z}^2\)-gradings all have 2 one-dimensional layers and 1 two-dimensional layer. There exists a homomorphism that preserves the dimensions of the layers for two pairs of the gradings: one between the quotients by \(\langle e_1 \rangle\) and \(\langle e_2\rangle\text{,}\) and one between the quotients by \(\langle e_1-e_3\rangle \) and \(\langle e_2-e_3\rangle\text{.}\)

Out of the seven \(\mathbb{Z}\)-gradings, the four quotients by

\begin{equation*} {\langle e_1,e_2\rangle}, {\langle e_1 - e_3,e_2 \rangle}, {\langle e_1,e_2 - e_3 \rangle}, {\langle e_1 - e_3,e_2 - e_3 \rangle} \end{equation*}

define gradings with 1 one-dimensional layer and 1 three-dimensional layer, and the three quotients by

\begin{equation*} {\langle e_1,e_3 \rangle}, {\langle e_2,e_3 \rangle}, {\langle 2e_1 - e_3,2e_2 - e_3 \rangle} \end{equation*}

define gradings with 2 two-dimensional layers. In both families there is exactly one pair of gradings admitting a homomorphism: the pair \(\langle e_1 - e_3,e_2 \rangle\) and \(\langle e_1,e_2 - e_3 \rangle\text{,}\) and the pair \(\langle e_1,e_3 \rangle\) and \(\langle e_2,e_3 \rangle\text{.}\)

In all of these cases, the homomorphism between the quotients is induced by the isomorphism \(f\colon\mathbb{Z}^3\to\mathbb{Z}^3\) swapping \(e_1\) and \(e_2\text{.}\) All of the mentioned pairs of \(\mathbb{Z}^2\)- and \(\mathbb{Z}\)-gradings are in fact equivalent, since there is a corresponding Lie algebra automorphism swapping the basis elements \(Y_1\) and \(Y_2\) that preserves the subspaces \(\langle Y_3\rangle\) and \(\langle Y_4\rangle\text{.}\) This reduces the list of 15 gradings down to 11 distinct equivalence classes. Universal realizations for each equivalence class of torsion-free gradings are listed in Table 4.8.

Table 4.8. Gradings of the Lie algebra \(L_{4,2}\)
rank type layers
3 (4) \(V_{1,0,0}\oplus V_{0,1,0}\oplus V_{1,1,0}\oplus V_{0,0,1} = \langle Y_1\rangle\oplus \langle Y_2\rangle\oplus \langle Y_3\rangle\oplus \langle Y_4\rangle\)
2 (2, 1) \(V_{0,0}\oplus V_{1,0}\oplus V_{0,1} = \langle Y_2\rangle\oplus \langle Y_4\rangle\oplus \langle Y_1,Y_3\rangle\)
2 (2, 1) \(V_{1,0}\oplus V_{0,1}\oplus V_{0,2} = \langle Y_4\rangle\oplus \langle Y_1,Y_2\rangle\oplus \langle Y_3\rangle\)
2 (2, 1) \(V_{1,0}\oplus V_{0,1}\oplus V_{1,1} = \langle Y_1,Y_4\rangle\oplus \langle Y_2\rangle\oplus \langle Y_3\rangle\)
2 (2, 1) \(V_{1,-1}\oplus V_{0,1}\oplus V_{1,0} = \langle Y_1\rangle\oplus \langle Y_2\rangle\oplus \langle Y_3,Y_4\rangle\)
1 (0, 2) \(V_{0}\oplus V_{1} = \langle Y_1,Y_4\rangle \oplus \langle Y_2,Y_3\rangle\)
1 (0, 2) \(V_{1}\oplus V_{2} = \langle Y_1,Y_2\rangle\oplus \langle Y_3,Y_4\rangle\)
1 (1, 0, 1) \(V_{1}\oplus V_{2} = \langle Y_1,Y_2,Y_4\rangle\oplus \langle Y_3\rangle\)
1 (1, 0, 1) \(V_{0}\oplus V_{1} = \langle Y_1,Y_2,Y_3\rangle \oplus \langle Y_4\rangle\)
1 (1, 0, 1) \(V_{0}\oplus V_{1} = \langle Y_1\rangle\oplus \langle Y_2,Y_3,Y_4\rangle\)
0 (0, 0, 0, 1) \(V_{0} = \langle Y_1,Y_2,Y_3,Y_4\rangle\)

Subsection 4.3 Enumerating Heintze groups

In this section, we show how a maximal grading of a given nilpotent Lie algebra \(\mathfrak{g}\) can be used to determine a list of Heintze groups over \(\mathfrak{g}\text{.}\) However, note that when working over the non-algebraically closed field \(\mathbb{R}\text{,}\) we cannot in general obtain the maximal grading using Algorithm 3.10; see Remark 3.14.

Definition 4.9.

A Heintze group is a simply connected Lie group over \(\mathbb{R}\) whose Lie algebra is a semidirect product of a nilpotent Lie algebra \(\mathfrak{g}\) and \(\mathbb{R}\) via a derivation \(\alpha \in \der(\mathfrak{g})\) whose eigenvalues have strictly positive real parts.

Positive gradings for a given Lie algebra are naturally identified with diagonalizable derivations with strictly positive eigenvalues, see Subsection 2.4. Hence, to any positively graded Lie algebra \(\mathfrak{g}\) we may associate a Heintze group over \(\mathfrak{g}\text{.}\) We shall call these groups diagonal Heintze groups.
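This identification is easy to verify on structure constants: a grading with weights \(w_i\) on an adapted basis corresponds to the diagonal map \(\delta(Y_i)=w_iY_i\), and the Leibniz rule holds precisely when every bracket relation is compatible with the weights. A minimal check for \(L_{3,2}\) (the bracket encoding is our own and assumes each bracket \([Y_i,Y_j]\) is a single basis vector):

```python
def is_diagonal_derivation(weights, brackets):
    """Leibniz rule for delta(Y_i) = weights[i] * Y_i: with [Y_i, Y_j] = Y_k,
    delta[Y_i, Y_j] = w_k Y_k must equal [delta Y_i, Y_j] + [Y_i, delta Y_j]
    = (w_i + w_j) Y_k, so it suffices to compare the scalars."""
    return all(weights[k] == weights[i] + weights[j]
               for (i, j), k in brackets.items())

# First Heisenberg algebra L_{3,2}: [Y_1, Y_2] = Y_3 (0-indexed)
heisenberg = {(0, 1): 2}
print(is_diagonal_derivation([1, 1, 2], heisenberg))  # True: a positive grading
print(is_diagonal_derivation([1, 2, 2], heisenberg))  # False: 2 != 1 + 2
```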

The quasi-isometric classification of Heintze groups reduces to the study of so-called purely real Heintze groups, for which the associated derivation has real eigenvalues. Purely real Heintze groups are equivalent to diagonal Heintze groups under a slightly weaker notion of equivalence (sublinear biLipschitz equivalence, see Theorem 1.2 of [6] and Theorem 3.2 of [24]). Moreover, by [7], if two diagonal Heintze groups are quasi-isometric, then their associated derivations are proportional. Hence, the quasi-isometric classification problem for diagonal Heintze groups can be approached by treating the algebraic problem of finding all the possible derivations defining non-isomorphic diagonal Heintze groups.

Proposition 4.10 is a tool for tackling the above-mentioned algebraic problem using positive gradings. We will prove this result later in this section after discussing its role in the enumeration of Heintze groups.

The enumeration of positive gradings we have established immediately gives the corresponding enumeration of diagonal Heintze groups over \(\mathfrak{g}\text{.}\) The enumeration of positive gradings can be understood in two different ways, as we discussed in Subsection 3.3. The corresponding enumeration of Heintze groups has a similar character: it is either a parametrization via the projections or a finite list that does not contain all the isomorphism classes of Heintze groups but a representative for each family in terms of the layers.

Considering the parametrization of positive gradings via parametrization of the projections, we note that if one is able to eliminate equivalent gradings from the enumeration of positive gradings, then by Proposition 4.10 the corresponding list of Heintze groups does not contain isomorphic Heintze groups. Notice that already over \(\mathfrak{g}=\mathbb{R}^2\) there are uncountably many isomorphism classes of Heintze groups given by the projections \((1,0) \mapsto 1\) and \((0,1) \mapsto a\) with \(a >0\text{.}\)

Proposition 4.10 will follow by a suitable conjugation of the derivations by the adjoint map. We will first recall some relevant formulas. We thank G. Pallier for helping us improve an early version of Lemma 4.11.

Recall that \(\Ad_{\exp(X)} = e^{\ad({X})}\text{,}\) see Proposition 1.91 of [20], and recall the identity

\begin{equation*} \Ad_{\exp(X)}\circ\ad({Y})\circ\Ad_{\exp(-X)} = \ad({\Ad_{\exp(X)}Y}). \end{equation*}

A consequence of the assumption \([\delta(X),X]=0\) is that if \(P\) is a polynomial and \(P'\) denotes its derivative polynomial, then we have

\begin{equation*} [\delta,P(\ad(X))] = \ad({\delta(X)}) \circ P'(\ad(X))\text{.} \end{equation*}

In the limit we obtain

\begin{equation*} [\delta,e^{-\ad(X)}] = \ad({\delta(-X)}) \circ e^{-\ad(X)}. \end{equation*}

Expanding out the bracket in \(e^{\ad(X)}[\delta,e^{-\ad(X)}]\) and reorganizing terms making use of the assumption \([\delta(X),X]=0\text{,}\) we obtain the desired formula.

For a vector \(X\in\mathfrak{g}\text{,}\) denote by \(C_X\colon\der(\mathfrak{g})\to\der(\mathfrak{g})\) the conjugation map

\begin{equation*} C_X(\eta) = \Ad_{\exp(X)}\circ \eta \circ \Ad_{\exp(-X)}\text{.} \end{equation*}

Let \(X_1,\ldots,X_n\) be a basis of \(\mathfrak{g}\) that diagonalizes \(\delta\text{.}\) Consider the map

\begin{equation*} \Phi\colon \mathbb{R}^n\to \der(\mathfrak{g}),\quad \Phi(x_1,\ldots,x_n)=C_{x_nX_n}\circ\cdots\circ C_{x_1X_1}(\delta)\text{.} \end{equation*}

By repeated application of Lemma 4.11, it follows that \(\Phi(x) = \delta - \ad({\phi(x)})\text{,}\) where \(\phi\colon \mathbb{R}^n\to\mathfrak{g}\) is the map

\begin{align} \phi(x_1,\amp\ldots,x_n) = \delta(x_nX_n) + \Ad_{\exp(x_nX_n)}\delta(x_{n-1}X_{n-1})+\cdots\label{eq-def-adjoint-sum}\tag{4.2}\\ \amp+ \Ad_{\exp(x_nX_n)}\Ad_{\exp(x_{n-1}X_{n-1})}\cdots\Ad_{\exp(x_2X_2)}\delta(x_{1}X_{1})\text{.}\notag \end{align}

Since the composition of conjugations is a conjugation, it suffices to prove that the map \(\phi\) is surjective.

Let \(w_1,\ldots,w_n\gt 0\) be the eigenvalues of the vectors \(X_1,\ldots,X_n\) for the derivation \(\delta\text{.}\) Since the maps \(x_i\mapsto \operatorname{sign}(x_i)\abs{x_i}^{w_i}\) are all invertible, the map \(\phi\colon \mathbb{R}^n\to\mathfrak{g}\) is surjective if and only if the map \(\tilde{\phi}\colon \mathbb{R}^n\to\mathfrak{g}\) defined by

\begin{equation} \tilde{\phi}(x_1,\ldots,x_n)=\phi(\operatorname{sign}(x_1)\abs{x_1}^{w_1},\ldots,\operatorname{sign}(x_n)\abs{x_n}^{w_n})\label{eq-rescaled-adjoint-sum}\tag{4.3} \end{equation}

is surjective.

Let \(D_\lambda\in\Aut(\mathfrak{g})\text{,}\) \(\lambda\gt 0\text{,}\) be the one-parameter family of dilations defined by the derivation \(\delta\text{,}\) i.e., \(D_\lambda = \exp(\delta \log \lambda)\text{.}\) Then for each \(i=1,\ldots,n\) the dilation is given by \(D_\lambda(X_i)=\lambda^{w_i}X_i\) and we have the dilation equivariance

\begin{equation*} \Ad_{\exp(\lambda^{w_i} X_i)}\circ D_\lambda = D_{\lambda}\circ\Ad_{\exp(X_i)}\text{.} \end{equation*}

Applying the above equivariance to the definition (4.3) we find that the map \(\tilde{\phi}\) is \(D_\lambda\)-homogeneous, i.e., \(\tilde{\phi}(\lambda x) = D_\lambda(\tilde{\phi}(x))\) for all \(x\in\mathbb{R}^n\) and \(\lambda\gt 0\text{.}\) Since \(\bigcup_{\lambda > 0} D_\lambda(U) = \mathfrak g\) for any neighborhood \(U\) of zero, it follows that the map \(\tilde{\phi}\) is surjective if and only if it is open at zero. Since the change of parameters in (4.3) is a homeomorphism, the same is true also for the map \(\phi\text{.}\)

By the definition (4.2), the map \(\phi\) is smooth. The derivative of each summand \(\Ad_{\exp(x_nX_n)}\cdots\Ad_{\exp(x_{i+1}X_{i+1})}\delta(x_iX_i)\) at zero is the map \(x\mapsto \delta(x_iX_i)\text{,}\) so the derivative \(D_0\phi\) of the map \(\phi\) at zero is

\begin{equation*} D_0\phi(x_1,\ldots,x_n) = \delta(x_1X_1+\cdots+x_nX_n)\text{.} \end{equation*}

By the strictly positive eigenvalue assumption, the map \(\delta\) is invertible. Since \(X_1,\ldots,X_n\) is a basis of \(\mathfrak{g}\text{,}\) it follows that the map \(\phi\) is open at zero, concluding the proof.

Rescaling the derivations by a scalar, we may assume that the smallest of the eigenvalues for both derivations is 1. Since the Heintze groups are assumed to be isomorphic, it is straightforward to see that there is a vector \(X \in \mathfrak{g}\) such that the derivation \(\alpha\) is conjugate by a Lie algebra automorphism of \(\mathfrak{g}\) to the derivation \(\beta + \ad(X)\text{.}\) By Lemma 4.12, it follows that \(\alpha\) and \(\beta\) are conjugate. Applying Lemma 2.14i to the split tori spanned by \(\alpha\) and \(\beta\) gives the desired result.

Subsection 4.4 Bounds for non-vanishing \(\ell^{q,p}\) cohomology

Knowing all the possible positive gradings of a nilpotent Lie algebra \(\mathfrak{g}\) has one further application in the realm of quasi-isometric classifications. The parametrization of all the possible positive gradings, combined with the technical tools presented in [25], can be used to find improved vanishing estimates for the \(\ell^{q,p}\) cohomology of a nilpotent Lie group, which is a well-known quasi-isometry invariant. In this section we present a systematic way of obtaining these estimates using the theory considered in the previous sections. Note that the methods of this paper for computing the maximal grading require the Lie algebra to be defined over an algebraically closed field; see, however, Remark 3.14.

By definition, the \(\ell^{q,p}\) cohomology of a Riemannian manifold with bounded geometry is the \(\ell^{q,p}\) cohomology of every bounded geometry simplicial complex quasi-isometric to it. A crucial result of [25] shows that in the case of contractible Lie groups, the \(\ell^{q,p}\) cohomology of the manifold is isomorphic to its \(L^{q,p}\) cohomology.

Definition 4.13.

The \(L^{q,p}\) cohomology of a nilpotent Lie group \(G\) is defined as

\begin{equation*} L^{q,p}H^\bullet(G)=\frac{\lbrace \text{closed forms in }L^p\rbrace}{d\big(\lbrace \text{forms in }L^q\rbrace\big)\cap L^p}\text{.} \end{equation*}

In Theorem 1.1 of [25] it is shown that the Rumin complex constructed on a Carnot group allows for sharper computations regarding \(L^{q,p}H^\bullet(G)\) when compared to the usual de Rham complex. Defining and reviewing the properties of the Rumin complex \((E_0^\bullet,d_c)\) goes beyond the scope of this paper. For the following discussion, it is sufficient to know that the space of Rumin \(h\)-forms \(E_0^h\) is a subspace of the space of smooth differential \(h\)-forms of the underlying nilpotent Lie group \(G\text{.}\)

Definition 4.14.

Let us consider a positive grading \(\mathcal{V} : \mathfrak{g}=\bigoplus_{\alpha\in\mathbb{R}}V_\alpha\text{.}\) If \(\theta=X^\ast\) for \(X \in V_\alpha\text{,}\) then we say that the left-invariant 1-form \(\theta\) has weight \(\alpha\) and denote \(w(\theta)=\alpha\text{.}\) In general, given a left-invariant \(h\)-form, we say that it has weight \(p\) if it can be expressed as a linear combination of left-invariant \(h\)-forms \(\theta_{i_1,\ldots,i_h} = \theta_{i_1}\wedge\cdots\wedge\theta_{i_h}\) such that \(w(\theta_{i_1})+\cdots+w(\theta_{i_h})=p\text{.}\)

Given a positive grading \(\mathcal V :\mathfrak{g} = \bigoplus_{\alpha \in \mathbb{R}} V_\alpha\text{,}\) we call the quantity

\begin{equation*} Q = \sum_{\alpha \in \mathbb{R}_+}\alpha \dim V_\alpha \end{equation*}

the homogeneous dimension of \(\mathcal V\text{.}\) We also define for each degree \(h\) the number

\begin{gather*} \delta N_{\min}(h)= \min_{\theta\in E_0^h} w(\theta)-\max_{\tilde{\theta}\in E_0^{h-1}}w(\tilde{\theta})\,. \end{gather*}

The following is Theorem 1.1(ii) of [25].

Moreover, in Theorem 9.2 of the same paper it is shown how the non-vanishing statement has a wider scope, as it can be applied to Carnot groups equipped with a homogeneous structure that comes from a positive grading. This result has been further extended in [30] to arbitrary positively graded nilpotent Lie groups.

A natural question that stems from these considerations is whether it is possible to identify which choice of positive grading yields the best interval for non-vanishing cohomology. This problem can be presented in terms of maximizing the value of the fraction \(\delta N_{\min}(h)/Q\) among all the possible positive gradings of a given Lie group \(G\text{.}\)

Let us describe the maximization procedure in more detail. Recall that positive gradings of \(\mathfrak{g}\) are parametrized by vectors \(\mathbf{a}\in\positiveset\) where \(\positiveset\) is defined by (3.5). Denote by \(w(\theta)_{\mathbf{a}}\) the weight of a one-form \(\theta\) for the positive grading associated with a vector \(\mathbf{a}\in\positiveset\text{.}\) Then we want to find the value of the following expression for each degree \(h\text{:}\)

\begin{gather*} \max_{\mathbf{a}\in\positiveset}\bigg\{\frac{\min_{\theta\in E_0^h}w(\theta)_\mathbf{a}-\max_{\tilde{\theta}\in E_0^{h-1}}w(\tilde{\theta})_{\mathbf{a}}}{Q_{\mathbf{a}}}\bigg\}\text{,} \end{gather*}

where \(Q_{\mathbf{a}}\) is the homogeneous dimension of \(\pi^{\mathbf a}_*(\mathcal W)\text{.}\)

A problem of this form can be converted into a linear optimization problem as follows:

  1. replace \(\min_{\theta\in E_0^h} w(\theta)_\mathbf{a}\) with a new variable \(x\text{,}\) and add the constraint \(x\le w(\theta)_\mathbf{a}\) for each \(\theta\in E_0^h\text{;}\)
  2. replace \(\max_{\tilde{\theta}\in E_0^{h-1}}w(\tilde{\theta})_{\mathbf{a}}\) with a new variable \(y\text{,}\) and add the constraint \(y\ge w(\tilde{\theta})_\mathbf{a}\) for each \(\tilde{\theta}\in E_0^{h-1}\text{;}\)
  3. normalize the expression by imposing \(Q_\mathbf{a}=1.\)

We are then left with the following expression for our original maximization problem

\begin{align*} \text{Maximize}\quad \amp x-y\\ \text{subject to}\quad x\amp\le w(\theta)_\mathbf{a}\quad \forall\,\theta\in E_0^h,\\ y\amp\ge w(\tilde{\theta})_\mathbf{a}\quad \forall \tilde{\theta}\in E_0^{h-1},\\ Q_\mathbf{a}\amp=1,\quad \mathbf{a}\in\positiveset \end{align*}

which can easily be solved by a computer, yielding the optimal bound for non-vanishing cohomology using the method of Theorem 4.15.
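For concreteness, this conversion can be sketched as a small routine built on an off-the-shelf LP solver. The helper below is a hypothetical sketch (the name `optimal_bound`, the encoding of weights as coefficient vectors, and the use of `scipy.optimize.linprog` are our assumptions, not part of the text):

```python
from scipy.optimize import linprog

def optimal_bound(W_h, W_h_minus_1, q, extra_pos=()):
    """Solve the linear program described above.

    Each weight w(theta)_a is linear in a, so it is encoded by a
    coefficient vector c with w(theta)_a = c . a.  The LP variables are
    (a_1, ..., a_n, x, y); strict positivity of the a_i is relaxed to
    a_i >= 0, which is harmless when the optimum is interior.
    """
    n = len(q)
    A_ub, b_ub = [], []
    for c in W_h:            # step 1: x <= w(theta)_a
        A_ub.append([-ci for ci in c] + [1.0, 0.0])
        b_ub.append(0.0)
    for c in W_h_minus_1:    # step 2: y >= w(theta~)_a
        A_ub.append(list(c) + [0.0, -1.0])
        b_ub.append(0.0)
    for c in extra_pos:      # remaining inequalities defining the set A
        A_ub.append([-ci for ci in c] + [0.0, 0.0])
        b_ub.append(0.0)
    A_eq, b_eq = [list(q) + [0.0, 0.0]], [1.0]   # step 3: Q_a = 1
    cost = [0.0] * n + [-1.0, 1.0]               # maximize x - y
    bounds = [(0, None)] * n + [(None, None), (None, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                  b_eq=b_eq, bounds=bounds)
    return -res.fun, list(res.x[:n])
```

One feeds in the coefficient vectors of the weights of the forms in \(E_0^h\) and \(E_0^{h-1}\text{,}\) the coefficient vector of \(Q_{\mathbf{a}}\text{,}\) and any remaining inequalities defining \(\positiveset\text{.}\)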

Let us consider the non-stratifiable Lie group \(G\) of dimension 6, whose Lie algebra is denoted as \(L_{6,10}\) in [10], with the non-trivial brackets

\begin{equation*} [X_1,X_2]=X_3,\;\; [X_1,X_3]=[X_5,X_6]=X_4\text{.} \end{equation*}

The space of Rumin forms in \(G\) is

\begin{align*} E_0^1\amp=\langle\theta_1,\theta_2,\theta_5,\theta_6\rangle;\\ E_0^2\amp=\langle\theta_{5,6}-\theta_{1,3},\theta_{1,5},\theta_{1,6},\theta_{2,3},\theta_{2,5},\theta_{2,6}\rangle;\\ E_0^3\amp=\langle\theta_{2,5,6}+\theta_{1,2,3},\theta_{2,3,5},\theta_{2,3,6},\theta_{1,3,4}-\theta_{4,5,6},\theta_{1,4,5},\theta_{1,4,6}\rangle. \end{align*}

For the Lie algebra \(L_{6,10}\text{,}\) the maximal grading is over \(\mathbb{Z}^3\) with the layers

\begin{align*} V_{(0,1,0)}\amp=\langle X_1\rangle,\amp V_{(0,0,1)}\amp=\langle X_2\rangle,\amp V_{(0,1,1)}\amp=\langle X_3\rangle,\\ V_{(0,2,1)}\amp=\langle X_4\rangle,\amp V_{(1,0,0)}\amp=\langle X_5\rangle,\amp V_{(-1,2,1)}\amp=\langle X_6\rangle\text{.} \end{align*}

The family of projections \(\pi^{\mathbf{a}} \colon \mathbb{Z}^3 \to \mathbb{R}\) giving positive gradings is parametrized by \((a_1,a_2,a_3)=\mathbf{a}\in\positiveset\) as in (3.5). The weights of left-invariant 1-forms are

\begin{align*} w(\theta_1)_{\mathbf{a}}\amp=\pi^{\mathbf{a}}(0,1,0)=a_2;\\ w(\theta_2)_{\mathbf{a}}\amp=\pi^{\mathbf{a}}(0,0,1)=a_3;\\ w(\theta_3)_{\mathbf{a}}\amp=\pi^{\mathbf{a}}(0,1,1)=a_2+a_3;\\ w(\theta_4)_{\mathbf{a}}\amp=\pi^{\mathbf{a}}(0,2,1)=2a_2+a_3;\\ w(\theta_5)_{\mathbf{a}}\amp=\pi^{\mathbf{a}}(1,0,0)=a_1;\\ w(\theta_6)_{\mathbf{a}}\amp=\pi^{\mathbf{a}}(-1,2,1)=2a_2+a_3-a_1\text{.} \end{align*}

From this computation we get the explicit expression

\begin{equation*} \positiveset = \{\mathbf{a}\in\mathbb{R}^3: a_1\gt 0,\,a_2\gt 0,\,a_3\gt 0,\,-a_1+2a_2+a_3\gt 0 \} \end{equation*}

and the homogeneous dimension \(Q_{\mathbf{a}}=6a_2+4a_3\text{.}\)
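As a sanity check (our own, not part of the argument), these weights and the homogeneous dimension can be recomputed mechanically from the \(\mathbb{Z}^3\)-degrees of the adapted basis:

```python
# Z^3-degrees of X_1, ..., X_6 in the maximal grading of L_{6,10}
degrees = {
    1: (0, 1, 0), 2: (0, 0, 1), 3: (0, 1, 1),
    4: (0, 2, 1), 5: (1, 0, 0), 6: (-1, 2, 1),
}

def weight(indices, a):
    """Weight of theta_{i_1,...,i_h} under the projection pi^a."""
    return sum(sum(d * ai for d, ai in zip(degrees[i], a)) for i in indices)

# Summing the degrees of all six basis one-forms gives the coefficient
# vector of Q_a; the a_1-contributions of X_5 and X_6 cancel.
q_coeffs = tuple(sum(degrees[i][k] for i in degrees) for k in range(3))
```

Here `q_coeffs` comes out as \((0,6,4)\text{,}\) matching \(Q_{\mathbf{a}}=6a_2+4a_3\text{.}\)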

Let us first consider the bound for non-vanishing cohomology in degree 1. We express

\begin{gather*} \max_{\mathbf{a}\in\positiveset}\bigg\{\frac{\delta N_{\min}(1)}{Q_{\mathbf{a}}}\bigg\}=\max_{\mathbf{a}\in\positiveset}\bigg\{\frac{\min\{a_1,a_2,a_3,2a_2+a_3-a_1\}}{6a_2+4a_3}\bigg\} \end{gather*}

as the linear optimization problem

\begin{align*} \text{Maximize}\quad \amp x\\ \text{subject to}\quad x\amp\leq a_1,\; x\leq a_2,\; x\leq a_3,\\ x\amp\leq 2a_2+a_3-a_1,\\ 1\amp=6a_2+4a_3,\\ a_1\amp,a_2,a_3\gt 0,\; 2a_2+a_3-a_1\gt 0\text{.} \end{align*}

A solver finds the solution \(\frac{1}{10}\text{,}\) which is obtained by choosing \(a_1=a_2=a_3=\frac{1}{10}\text{.}\) Since the quantity \(\frac{\delta N_{\min}(1)}{Q_{\mathbf{a}}}\) is scaling invariant, we find that the grading defined by \(a_1=a_2=a_3=1\) gives \(\ell^{q,p}H^1(G)\neq 0\) with the optimal bound \(\frac{1}{p}-\frac{1}{q}\lt \frac{1}{10}\text{.}\)
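The optimality can also be spot-checked without a solver: since \(\frac{1}{10}\) is the maximum, every \(\mathbf{a}\in\positiveset\) must give a ratio of at most \(\frac{1}{10}\text{.}\) The snippet below is a hypothetical verification of ours (the function name and the sampling scheme are not from the text):

```python
import random

def ratio(a1, a2, a3):
    """delta N_min(1) / Q_a for L_{6,10}, per the formula above."""
    return min(a1, a2, a3, 2 * a2 + a3 - a1) / (6 * a2 + 4 * a3)

# The grading a = (1, 1, 1) attains the claimed optimum 1/10, and the
# ratio is invariant under rescaling a.
best = ratio(1, 1, 1)

# Random points of the parameter set A should never beat it.
random.seed(0)
samples = []
while len(samples) < 1000:
    a1, a2, a3 = (random.uniform(0.01, 5.0) for _ in range(3))
    if 2 * a2 + a3 - a1 > 0:   # stay inside the set A
        samples.append(ratio(a1, a2, a3))
```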

Similarly, once we re-express

\begin{gather*} \max_{\mathbf{a}\in\positiveset}\bigg\{\frac{\delta N_{\min}(2)}{Q_{\mathbf{a}}}\bigg\} \end{gather*}

as a linear optimization problem and feed it into a solver, we get the result \(\frac{1}{10}\text{,}\) obtained (up to rescaling) by taking \(a_2=a_3=2\) and \(a_1=3\text{.}\) Therefore \(\ell^{q,p}H^2(G)\neq 0\) for \(\frac{1}{p}-\frac{1}{q}\lt\frac{1}{10}\text{.}\)

Likewise, we obtain the optimal bound \(\frac{1}{p}-\frac{1}{q}\lt\frac{1}{10}\) for \(\ell^{q,p}H^3(G)\neq 0\) by taking \(a_1=a_2=a_3=1\text{.}\)
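The values reported for degrees 2 and 3 can likewise be double-checked directly from the weights. This is our own verification sketch; it takes one representative term per spanning Rumin form, which suffices because each spanning form is homogeneous:

```python
# Z^3-degrees of X_1, ..., X_6 in the maximal grading of L_{6,10}
degrees = {1: (0, 1, 0), 2: (0, 0, 1), 3: (0, 1, 1),
           4: (0, 2, 1), 5: (1, 0, 0), 6: (-1, 2, 1)}

def weight(indices, a):
    """Weight of theta_{i_1,...,i_h} under the projection pi^a."""
    return sum(sum(d * ai for d, ai in zip(degrees[i], a)) for i in indices)

# One representative multi-index per spanning Rumin form; the empty
# tuple stands for the weight-zero 0-form.
E0 = {0: [()],
      1: [(1,), (2,), (5,), (6,)],
      2: [(5, 6), (1, 5), (1, 6), (2, 3), (2, 5), (2, 6)],
      3: [(2, 5, 6), (2, 3, 5), (2, 3, 6), (1, 3, 4), (1, 4, 5), (1, 4, 6)]}

def bound(h, a):
    """delta N_min(h) / Q_a for the grading of L_{6,10} given by a."""
    d_n = (min(weight(t, a) for t in E0[h])
           - max(weight(t, a) for t in E0[h - 1]))
    return d_n / (6 * a[1] + 4 * a[2])
```

Evaluating `bound(2, (3, 2, 2))` and `bound(3, (1, 1, 1))` recovers the value \(\frac{1}{10}\) in both degrees.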

Finally, by Hodge duality, see Theorem 7.3 of [30], we obtain the optimal bounds for the \(\ell^{q,p}\) cohomology in the complementary degrees, that is, \(\ell^{q,p}H^4(G)\neq 0\text{,}\) \(\ell^{q,p}H^5(G)\neq 0\text{,}\) and \(\ell^{q,p}H^6(G)\neq 0\) for \(\frac{1}{p}-\frac{1}{q}\lt\frac{1}{10}\text{.}\)

Remark 4.17.

Example 9.5 of [25] describes an explicit positive grading of the Engel group that gives an improved bound for the non-vanishing of the \(L^{q,p}\) cohomology in degree 2. By a computation similar to the one shown in Example 4.16, one can verify that the value given in Example 9.5 of [25] is indeed the optimal bound.