
Section 3 Constructions

Subsection 3.1 Enumeration of torsion-free gradings

Following the method suggested in Section 3.7 of [21], we now give a simple way to enumerate a complete (and finite) set of universal realizations of gradings of a Lie algebra using the maximal grading.

For the rest of this section, let \(\mathfrak g\) be a Lie algebra and let \(\mathcal W : \mathfrak g = \bigoplus_{n\in \mathbb{Z}^k}W_n \) be a maximal grading of \(\mathfrak g\) with weights \(\Omega\text{.}\) Denote by \(\Omega-\Omega\) the difference set \(\Omega- \Omega = \{n-m \,\mid\, n,m\in \Omega\}\text{.}\) For a subset \(I \subset \Omega-\Omega \text{,}\) let

\begin{equation*} \pi_I \colon \mathbb{Z}^k \to \mathbb{Z}^k/\langle I\rangle \end{equation*}

be the canonical projection. We define the finite set

\begin{equation} \Gamma = \{{(\pi_I)_*\mathcal W}\mid I \subset \Omega - \Omega, \;\mathbb{Z}^k/\langle I\rangle\text{ is torsion-free} \}.\label{eq-Gamma}\tag{3.1} \end{equation}

Let \(\mathcal{V}\) be the universal realization of some torsion-free grading. Due to Lemma 2.9, the grading group of \(\mathcal V\) is some \(\mathbb{Z}^m\text{.}\) By Proposition 2.18, there exists a homomorphism \(f\colon \mathbb{Z}^k\to \mathbb{Z}^m \) and an automorphism \(\Phi \in \Aut(\mathfrak g)\) such that \(\mathcal V= f_*\Phi(\mathcal W) \text{.}\) Let

\begin{equation*} I = \ker(f)\cap (\Omega-\Omega). \end{equation*}

We are going to show that \(\mathcal V' = (\pi_I)_*(\mathcal W) \) is equivalent to \(\mathcal{V}\text{.}\) Then, a posteriori, \(\mathbb{Z}^k/\langle I\rangle\) is torsion-free and we have \(\mathcal V' \in \Gamma\text{,}\) proving the claim.

First, since \(\ker(\pi_I) = \langle I\rangle \subseteq \ker(f)\text{,}\) by the universal property of quotients there exists a unique homomorphism \(\phi \colon \mathbb{Z}^k/\langle I\rangle \to \mathbb{Z}^m \) such that \(f = \phi \circ \pi_I\text{.}\) In particular,

\begin{equation*} \mathcal V = f_*\Phi(\mathcal W) = \phi_*(\pi_I)_*\Phi(\mathcal W) = \phi_*\Phi(\mathcal V')\text{,} \end{equation*}

so \(\mathcal V\) is a push-forward grading of \(\mathcal V'\text{.}\)

Secondly, since also \(\ker(f)\cap (\Omega-\Omega) = I \subseteq \ker(\pi_I)\cap (\Omega-\Omega) \text{,}\) we deduce that \(\mathcal V \) and \(\Phi(\mathcal V')\) are realizations of the same grading. Since \(\mathcal V\) is a universal realization, it follows that \(\Phi(\mathcal V')\) is a push-forward grading of \(\mathcal V\text{.}\) Consequently, \(\mathcal V'\) is a push-forward grading of \(\mathcal V\text{.}\) Since the grading group of a universal realization is generated by the weights, we get that the gradings \(\mathcal V\) and \(\mathcal V'\) are equivalent by Lemma 2.6, as wanted.

Notice that some of the \(\mathbb{Z}^k/\langle I\rangle\)-gradings in \(\Gamma\) are typically equivalent to each other. From the classification point of view, a more challenging task is to determine the equivalence classes once the set \(\Gamma\) is obtained. In low dimensions, naive methods are enough to separate non-equivalent gradings, and for equivalent ones the connecting automorphism can be found rather easily.
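Deciding membership in the set (3.1) is a finite computation: \(\mathbb{Z}^k/\langle I\rangle\) is torsion-free exactly when every nonzero invariant factor in the Smith normal form of the integer matrix with rows \(I\) equals \(1\), equivalently when the gcd of its \(r\times r\) minors is \(1\), where \(r\) is the rank. The following pure-Python sketch of the enumeration (function names and data layout are ours, not from the paper) uses this classical criterion and is only meant for the small weight sets arising in low dimensions.

```python
from itertools import combinations
from math import gcd

def det(m):
    # Laplace expansion; adequate for the small matrices occurring here
    if not m:
        return 1
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def quotient_is_torsion_free(I, k):
    """Z^k / <I> is torsion-free iff, in the Smith normal form of the matrix
    with rows I, all nonzero invariant factors are 1; equivalently, the gcd
    of the r x r minors is 1 for r = rank of the matrix."""
    rows = [list(v) for v in I]
    for size in range(min(len(rows), k), 0, -1):
        g = 0
        for rs in combinations(range(len(rows)), size):
            for cs in combinations(range(k), size):
                g = gcd(g, abs(det([[rows[i][j] for j in cs] for i in rs])))
        if g:  # the rank of the matrix is `size`
            return g == 1
    return True  # <I> = 0 and the quotient is Z^k itself

def torsion_free_subsets(omega):
    """Enumerate the subsets I of the difference set with torsion-free
    quotient, as in (3.1).  The zero difference is omitted since it does
    not change the subgroup <I>."""
    diffs = sorted({tuple(x - y for x, y in zip(a, b))
                    for a in omega for b in omega if a != b})
    k = len(omega[0])
    return [I for r in range(len(diffs) + 1)
            for I in combinations(diffs, r)
            if quotient_is_torsion_free(I, k)]
```

For example, for the hypothetical weight set \(\Omega=\{(1,0),(0,1),(1,1)\}\) the subset \(I=\{(1,-1)\}\) passes the test, and the corresponding quotient \(\mathbb{Z}^2/\langle(1,-1)\rangle\cong\mathbb{Z}\) regroups the weights as \(1,1,2\).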

In [18] we give a representative from each equivalence class in \(\Gamma\) for every 6-dimensional nilpotent Lie algebra over \(\mathbb{C}\) and for an extensive class of 7-dimensional Lie algebras over \(\mathbb{C}\text{.}\) The results and the methods for distinguishing the equivalence classes of the obtained gradings are described in more detail in Subsection 4.2.

Subsection 3.2 Stratifications

Definition 3.2.

A stratification (a.k.a. Carnot grading) is a \(\mathbb{Z}\)-grading \(\mathfrak{g}=\bigoplus_{n\in\mathbb{Z}}V_n\) such that \(V_1\) generates \(\mathfrak{g}\) as a Lie algebra. A Lie algebra \(\mathfrak{g}\) is stratifiable if it admits a stratification.

In this section we reduce the construction of a stratification of a Lie algebra (or the determination that none exists) to a linear problem. The method is based on Lemma 3.10 of [4], which gives the following characterization of stratifiable Lie algebras:

The condition of Lemma 3.3 is straightforward to check in a basis adapted to the lower central series.

Definition 3.4.

Recall that the lower central series of a Lie algebra \(\mathfrak{g}\) is the decreasing sequence of ideals

\begin{equation*} \mathfrak{g}=\mathfrak{g}^{(1)}\supset \mathfrak{g}^{(2)}\supset \mathfrak{g}^{(3)}\supset\cdots\text{,} \end{equation*}

where \(\mathfrak{g}^{(i+1)} = [\mathfrak{g},\mathfrak{g}^{(i)}]\text{.}\) A basis \(X_1,\dots,X_n\) of a Lie algebra \(\mathfrak{g}\) is adapted to the lower central series if for every non-zero \(\mathfrak{g}^{(i)}\) there exists an index \(n_i\in\mathbb{N}\) such that \(X_{n_i},\dots,X_n\) is a basis of \(\mathfrak{g}^{(i)}\text{.}\) The degree of the basis element \(X_i\) is the integer \(w_i=\max\{j\in\mathbb{N}: X_i\in\mathfrak{g}^{(j)}\}\text{.}\)
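For a concrete algebra given by structure constants, the lower central series and the degrees \(w_i\) of Definition 3.4 can be computed by repeated bracketing and row reduction. Below is a small sketch over \(\mathbb{Q}\) (helper names are ours); it assumes the algebra is nilpotent and the given basis is already adapted to the lower central series.

```python
from fractions import Fraction

def rref(vectors):
    """Row-reduce a list of coordinate vectors over Q; returns a basis of
    their span together with the pivot index of each basis vector."""
    basis, pivots = [], []
    for v in vectors:
        row = [Fraction(x) for x in v]
        for b, p in zip(basis, pivots):
            if row[p]:
                f = row[p]
                row = [x - f * y for x, y in zip(row, b)]
        p = next((j for j, x in enumerate(row) if x), None)
        if p is not None:
            row = [x / row[p] for x in row]
            basis.append(row)
            pivots.append(p)
    return basis, pivots

def lcs_degrees(c, n):
    """Degrees w_i = max{j : X_i in g^(j)} of Definition 3.4, from structure
    constants c[(i,j)] = {k: c_ij^k} (i < j) of a nilpotent Lie algebra whose
    basis X_0,...,X_{n-1} is adapted to the lower central series (so each
    g^(j) is spanned by a tail of the basis)."""
    def bracket_vec(i, v):
        # coordinates of [X_i, v]
        out = [Fraction(0)] * n
        for j, vj in enumerate(v):
            if not vj or i == j:
                continue
            cc, s = (c.get((i, j), {}), 1) if i < j else (c.get((j, i), {}), -1)
            for k, val in cc.items():
                out[k] += s * vj * val
        return out

    term = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # g^(1) = g
    w = [1] * n
    j = 1
    while True:
        # g^(j+1) = [g, g^(j)]
        basis, pivots = rref([bracket_vec(i, v) for i in range(n) for v in term])
        if not basis:
            return w
        j += 1
        for p in pivots:  # adapted basis: pivots are the indices spanning g^(j)
            w[p] = j
        term = basis
```

For instance, for the 4-dimensional filiform algebra with \([X_1,X_2]=X_3\text{,}\) \([X_1,X_3]=X_4\) this returns the degrees \(1,1,2,3\text{.}\)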

If \(\delta\colon\mathfrak{g}\to\mathfrak{g}\) is a derivation that restricts to the identity on \(\mathfrak{g}/[\mathfrak{g},\mathfrak{g}]\text{,}\) then by Lemma 3.3 \(\mathfrak{g}\) admits a stratification

\begin{equation*} \mathfrak{g}=V_1\oplus\dots\oplus V_s \end{equation*}

such that \(\restr{\delta}{V_i} = i\cdot \operatorname{id}\text{.}\) Since the terms of the lower central series are given in terms of the stratification as \(\mathfrak{g}^{(i)}=V_i\oplus\dots\oplus V_s \text{,}\) it follows that \(\delta(Y)\in i\cdot Y+\mathfrak{g}^{(i+1)}\) for any \(Y\in\mathfrak{g}^{(i)}\text{.}\) That is, a derivation \(\delta\) restricting to the identity on \(\mathfrak{g}/[\mathfrak{g},\mathfrak{g}]\) is of the form (3.2) for some coefficients \(a_{ij}\in F\text{.}\)

It is then enough to show that (3.3) is equivalent to the Leibniz rule

\begin{equation*} \delta([X_i,X_j]) = [\delta(X_i),X_j]+[X_i,\delta(X_j)],\quad \forall i,j\in\{1,\ldots,n\}\text{.} \end{equation*}

Indeed, this would prove that a linear map defined by (3.2) is a derivation if and only if the coefficients \(a_{ij}\) satisfy the linear system (3.3).

Since the basis \(X_i\) is adapted to the lower central series, only the structure coefficients with large enough degrees are non-zero, i.e., we have

\begin{equation} [X_i,X_j] = \sum_{w_k\geq w_i+w_j}c_{ij}^kX_k\text{.}\label{eq-lcs-basis-structure-coefficients}\tag{3.4} \end{equation}

By direct computation using (3.2) and (3.4) we get the expressions

\begin{align*} [\delta(X_i),X_j] & = \sum_{w_k\geq w_i+w_j}c_{ij}^kw_iX_k + \sum_{w_h> w_i}\sum_{w_k\geq w_h+w_j}a_{ih}c_{hj}^kX_k\\ [X_i,\delta(X_j)] & = \sum_{w_k\geq w_i+w_j}c_{ij}^kw_jX_k + \sum_{w_h> w_j}\sum_{w_k\geq w_i+w_h}a_{jh}c_{ih}^kX_k\\ \delta([X_i,X_j]) & = \sum_{w_k\geq w_i+w_j}c_{ij}^kw_kX_k + \sum_{w_h\geq w_i+w_j}\sum_{w_k> w_h}c_{ij}^ha_{hk}X_k\text{.} \end{align*}

Denoting \(\sum_k B_{ij}^kX_k = \delta([X_i,X_j])-[\delta(X_i),X_j]-[X_i,\delta(X_j)]\text{,}\) we find that the equation \(B_{ij}^k=0\) is, up to reorganizing terms, equivalent to (3.3).

Finally, we observe that when \(w_k\leq w_i+w_j\text{,}\) the condition \(B_{ij}^k=0\) is automatically satisfied: for \(w_k< w_i+w_j\) all of the sums are empty, and for \(w_k=w_i+w_j\text{,}\) the only remaining terms from the sums cancel out as

\begin{equation*} B_{ij}^k = c_{ij}^kw_k-c_{ij}^kw_i-c_{ij}^kw_j=0\text{.} \end{equation*}

The concrete criterion of Proposition 3.5 provides an algorithm to construct a stratification. That is, for a nilpotent Lie algebra \(\mathfrak{g}\text{,}\) a stratification, or a certificate that none exists, is found as follows:

  1. Construct a basis \(X_1,\ldots,X_n\) adapted to the lower central series.
  2. Find a derivation \(\delta\) as in (3.2) by solving the linear system (3.3). If the system has no solutions, then \(\mathfrak{g}\) is not stratifiable.
  3. Return the stratification with the layers \(V_i=\ker (\delta-i)\text{.}\)
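Step 2 above can be sketched as follows. Since equations (3.2) and (3.3) are not reproduced in this excerpt, the code imposes the Leibniz rule directly on the ansatz \(\delta(X_i)=w_iX_i+\sum_{w_h>w_i}a_{ih}X_h\) and solves the resulting linear system over \(\mathbb{Q}\text{;}\) all function and variable names are ours.

```python
from fractions import Fraction

def stratifying_derivation(c, w):
    """Look for a derivation with delta(X_i) = w_i X_i + sum_{w_h>w_i} a_ih X_h
    by imposing the Leibniz rule and solving for the a_ih over Q.
    c[(i,j)] = {k: c_ij^k} for i < j; w = degrees of an adapted basis.
    Returns the matrix of delta (delta[h][i] = coeff of X_h in delta(X_i)),
    or None if no such derivation exists."""
    n = len(w)
    unknowns = [(i, h) for i in range(n) for h in range(n) if w[h] > w[i]]
    idx = {u: t for t, u in enumerate(unknowns)}
    m = len(unknowns)

    def br(i, j):
        if i < j:
            return c.get((i, j), {})
        if j < i:
            return {k: -v for k, v in c.get((j, i), {}).items()}
        return {}

    rows = []
    for i in range(n):
        for j in range(i + 1, n):
            # coefficient of X_k in delta([Xi,Xj]) - [delta Xi,Xj] - [Xi,delta Xj]
            eq = [[Fraction(0)] * (m + 1) for _ in range(n)]
            for k, cv in br(i, j).items():
                eq[k][m] += cv * (w[k] - w[i] - w[j])   # diagonal contributions
                for h in range(n):
                    if w[h] > w[k]:
                        eq[h][idx[(k, h)]] += cv        # from delta([Xi,Xj])
            for h in range(n):
                if w[h] > w[i]:
                    for k, cv in br(h, j).items():
                        eq[k][idx[(i, h)]] -= cv        # from -[delta(Xi),Xj]
                if w[h] > w[j]:
                    for k, cv in br(i, h).items():
                        eq[k][idx[(j, h)]] -= cv        # from -[Xi,delta(Xj)]
            rows.extend(r for r in eq if any(r))

    # Gauss-Jordan elimination; a row means  sum coeff*a + constant = 0.
    rank, piv = 0, []
    for col in range(m):
        p = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if p is None:
            continue
        rows[rank], rows[p] = rows[p], rows[rank]
        rows[rank] = [x / rows[rank][col] for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [x - f * y for x, y in zip(rows[r], rows[rank])]
        piv.append(col)
        rank += 1
    if any(rows[r][m] for r in range(rank, len(rows))):
        return None  # inconsistent system: the algebra is not stratifiable

    a = [Fraction(0)] * m            # free unknowns are set to zero
    for r, col in enumerate(piv):
        a[col] = -rows[r][m]
    delta = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        delta[i][i] = Fraction(w[i])
        for h in range(n):
            if (i, h) in idx:
                delta[h][i] = a[idx[(i, h)]]
    return delta
```

For the Heisenberg algebra \([X_1,X_2]=X_3\) with degrees \(1,1,2\text{,}\) the sketch returns \(\delta=\operatorname{diag}(1,1,2)\text{,}\) whose eigenspaces are the layers of the stratification.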
Remark 3.6.

By Theorem 1.4 of [4], the existence of a stratification for a Lie algebra is invariant under base field extensions, so it suffices to work over any field \(F\) over which \(\mathfrak{g}\) is defined.

Subsection 3.3 Positive gradings

Definition 3.7.

An \(\mathbb{R}\)-grading \(\mathcal{V} \colon \mathfrak{g} = \bigoplus_{\alpha \in \mathbb{R}} V_\alpha \) is positive if \(\alpha> 0\) for all the weights of \(\mathcal{V}\text{.}\) If such a grading exists for \(\mathfrak{g}\text{,}\) then \(\mathfrak{g}\) is said to be positively gradable.

In this section, we formulate and prove Algorithm 3.8. Using this algorithm one can decide whether a given grading of a Lie algebra admits a positive realization. If one starts with a Lie algebra with a known maximal grading, one is therefore able to answer the following questions:

  i. Can the Lie algebra be equipped with a positive grading?
  ii. Can one find in some sense all positive gradings of the Lie algebra?

The methods of this article to construct a maximal grading are guaranteed to work only when the Lie algebra is defined over an algebraically closed field, see Subsection 3.4. In this discussion, we shall assume we are given a Lie algebra and a maximal grading for it, but we are not assuming that the field of coefficients is algebraically closed. However, regarding question i, note that the existence of a positive grading for a given Lie algebra is invariant under extension of scalars by Theorem 1.4 of [4], so we may as well work with the algebraic closure.

To answer question i, we observe that a Lie algebra admits a positive grading if and only if its maximal grading admits a positive realization by Proposition 2.18. A maximal grading admits a positive realization exactly when the convex hull of its weights does not contain zero, see Proposition 3.22 of [4]. To concretely find a positive realization, one may use Algorithm 3.8.

Question ii admits two relevant interpretations. First, one may use the enumeration of universal realizations of gradings of the given Lie algebra, as done in Subsection 3.1, and using Algorithm 3.8 construct their positive realizations when such realizations exist. The resulting list of positive gradings is complete in the sense that every positive grading of the given Lie algebra has the same layers as a grading on the list, up to a Lie algebra automorphism.

Question ii may also be interpreted as finding a parametrization of the usually uncountable family of positive gradings. Let \(\mathcal W :\mathfrak{g}=\bigoplus_{n\in \mathbb{Z}^k}W_n\) be a maximal grading of our given Lie algebra \(\mathfrak g\) and let \(\Omega\) be the support of \(\mathcal{W}\text{.}\) For any \(\mathbf{a}=(a_1,\ldots,a_k)\in\mathbb{R}^k\text{,}\) let \(\pi^{\mathbf a} \colon \mathbb{Z}^k \to \mathbb{R}\) be the projection given by \(\pi^{\mathbf a}(e_i)=a_i\) with \(e_i\) denoting the standard basis elements of the lattice \(\mathbb{Z}^k\text{.}\) Let

\begin{equation} \positiveset = \{\mathbf{a}\in\mathbb{R}^k: \pi^{\mathbf a}(n)> 0\,\forall n\in\Omega \}\text{.}\label{eq-positive-set}\tag{3.5} \end{equation}

The push-forward grading \(\pi^{\mathbf a}_*(\mathcal W)\) is a positive grading if and only if \(\mathbf{a}\in\positiveset\text{.}\) Every positive grading of \(\mathfrak{g}\) is equivalent to some grading \(\pi^{\mathbf a}_*(\mathcal W)\) and hence corresponds to an element of the set \(\positiveset\text{.}\) However, a pair of different elements of \(\positiveset\) may correspond to a pair of equivalent gradings.
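Membership in the parameter set \(\positiveset\) of (3.5) is a simple linear test on \(\mathbf a\text{.}\) A minimal sketch (the example weight set is hypothetical, chosen only for illustration):

```python
def in_positive_cone(a, omega):
    """Membership in the set A of (3.5): pi^a(n) > 0 for every weight n."""
    return all(sum(x * y for x, y in zip(a, n)) > 0 for n in omega)

# Example: for Omega = {(1,0), (0,1), (1,1)}, a = (1,1) lies in A and the
# push-forward by pi^a has weights {1, 1, 2}, while a = (1,-1) does not lie in A.
omega = [(1, 0), (0, 1), (1, 1)]
```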

We next present and prove Algorithm 3.8. The idea behind the algorithm is rather simple: finding a positive realization can be seen as a linear programming problem. The purpose of the slightly cumbersome form of the linear programming problem in Algorithm 3.8 is to guarantee that the weights of the positive realization are small. This method works well for problems in small dimensions, but does not scale well to large problems. If one does not care about the resulting positive weights, there is a simpler algorithm, see Remark 3.9.

If the grading \(\mathcal{V}\) has a positive realization, then it is a push-forward grading of the universal realization by some homomorphism \(f\colon \mathbb{Z}^k\to\mathbb{R}\) satisfying the inequalities \(f(\alpha_i)> 0\) and \(f(\alpha_i)\neq f(\alpha_j)\) for all \(i\neq j\text{.}\) Since the inequalities all have integer coefficients, the existence of such a homomorphism is equivalent to the existence of a homomorphism \(f\colon \mathbb{Z}^k\to\mathbb{Z}\) with the same properties. We may always write such a homomorphism in the form \(f(\cdot)=\braket{w,\cdot}\) for some \(w\in\mathbb{Z}^k\text{.}\) To prove the correctness of the algorithm, we need to show that the linear programming problem (3.6)–(3.9) has a solution if and only if there exists \(w\in\mathbb{Z}^k\) such that

\begin{align} \braket{w,\alpha_i}&\geq 1\label{eq-positive}\tag{3.10}\\ \braket{w,\alpha_i-\alpha_j}&\neq 0\label{eq-disjoint}\tag{3.11} \end{align}

and that this solution has the smallest possible \(\max_i \braket{w,\alpha_i}\text{.}\) Furthermore, we claim that if a suitable \(w\in\mathbb{Z}^k\) exists, then there also exists one with

\begin{equation} \abs{\braket{w,\alpha_i-\alpha_j}}\leq C\label{eq-bounded-abs-diff}\tag{3.12} \end{equation}

where \(C\) is the constant defined in step 2. We prove this claim later.

The smallest maximal weight property is equivalent to (3.6) and the first half of (3.7), since a solution will necessarily satisfy \(z = \max_i \braket{w,\alpha_i}\text{.}\) The latter half of (3.7) is exactly the condition (3.10). The inequalities (3.11) and (3.12) are encoded in the inequalities (3.8) and (3.9) using the auxiliary binary variables \(b_{ij}\text{.}\) Indeed, if we have \(b_{ij}=0\text{,}\) then the inequalities reduce to

\begin{equation*} -C\leq \braket{w,\alpha_i-\alpha_j}\leq -1 \end{equation*}

and if \(b_{ij}=1\) then the inequalities reduce to

\begin{equation*} 1\leq \braket{w,\alpha_i-\alpha_j}\leq C\text{.} \end{equation*}

Therefore it remains to prove the claim about the additional condition (3.12).

First, we show that, disregarding the other constraints, the system (3.10) has a solution if and only if there exists a solution with \(\abs{\braket{w,\alpha_i}} \leq 1+N2^N\text{.}\) The normal form of the system (3.10) is given by switching to the variables \(x_i = \braket{w,\alpha_i}-1\text{,}\) resulting in the system

\begin{equation} Ax=d,\quad x\geq 0\text{,}\label{eq-pos-linprog-normal-form}\tag{3.13} \end{equation}

where the matrix \(A\) is the matrix whose rows are \(e_i+e_j-e_k\in\mathbb{Z}^N\) for each linear relation \(\alpha_i+\alpha_j=\alpha_k\) (dropping linearly dependent conditions) and the right-hand-side vector is \(d=(-1,\ldots,-1)\text{.}\)

The non-zero components of the basic feasible solutions of the normal form system (3.13) are determined by \(B^{-1}d\) where \(B\) is some invertible square submatrix of \(A\text{.}\) Writing

\begin{equation*} B^{-1} = \frac{1}{\det B}\operatorname{Adj}(B)\text{,} \end{equation*}

where \(\operatorname{Adj}(B)\) is the adjugate matrix of \(B\text{,}\) we see that integer solutions are determined by the vectors \(\operatorname{Adj}(B)d\text{.}\) Since each row of \(A\) has the norm bound \(\norm{e_i+e_j-e_k}_\infty\leq 2\text{,}\) every minor of \(A\) is bounded by \(2^N\text{.}\) Hence we can bound the norms of integer basic feasible solutions to (3.13) by

\begin{equation*} \norm{x}_\infty = \norm{\operatorname{Adj}(B)d}_\infty \leq N2^N\text{.} \end{equation*}

Consequently the original problem (3.10) has a solution \(w\in\mathbb{Z}^k\) if and only if there exists a solution \(w\) with

\begin{equation*} \abs{\braket{w,\alpha_i}} = \abs{x_i+1} \leq 1+N2^N\text{.} \end{equation*}

Finally, if \(w\in\mathbb{Z}^k\) is as above, we claim that \(\tilde{w} = M^{k}w + (1,M,\ldots,M^{k-1})\) is a solution to (3.10)–(3.12).

To see that \(\tilde{w}\) satisfies (3.10)–(3.12), we consider base-\(M\) expansions of the integers \(\braket{\tilde{w},\alpha_i}\) and \(\braket{\tilde{w},\alpha_i-\alpha_j}\text{.}\) Since \(\norm{\alpha_i}_\infty< M\text{,}\) we have

\begin{equation*} \braket{\tilde{w},\alpha_i} = M^{k}\braket{w,\alpha_i} + \sum_{j=1}^{k}M^{j-1}\braket{e_j,\alpha_i}\geq M^{k} - \sum_{j=1}^{k}M^{j-1}\norm{\alpha_i}_\infty\geq 1. \end{equation*}

A similar computation using \(\norm{\alpha_i-\alpha_j}_\infty< M\) gives the bound

\begin{align*} \abs{\braket{\tilde{w},\alpha_i-\alpha_j}} &\leq M^{k}\abs{\braket{w,\alpha_i}}+M^{k}\abs{\braket{w,\alpha_j}} + \sum_{h=1}^{k}M^{h-1}\norm{\alpha_i-\alpha_j}_\infty\\ &\leq (2+N2^{N+1})M^{k} + M^{k}=C\text{,} \end{align*}

showing (3.12), so it remains to verify (3.11). Expanding in terms of powers of \(M\text{,}\) we have

\begin{equation*} \braket{\tilde{w},\alpha_i-\alpha_j} \equiv \sum_{h=1}^{k}M^{h-1}\braket{e_h,\alpha_i-\alpha_j} \pmod{M^k}\text{.} \end{equation*}

Since \(\abs{\braket{e_h,\alpha_i-\alpha_j}}\leq \norm{\alpha_i-\alpha_j}_\infty< M\) it follows that \(\braket{\tilde{w},\alpha_i-\alpha_j}\neq 0\) as soon as at least one \(\braket{e_h,\alpha_i-\alpha_j}\neq 0\text{.}\) Since \(\alpha_i\neq \alpha_j\text{,}\) this latter condition is always satisfied for some \(h\text{.}\)

Remark 3.9.

To obtain any positive realization, there is a much simpler polynomial time algorithm: Solve the linear programming problem

\begin{equation*} \braket{w,\alpha_i}\geq 1,\quad i=1,\ldots,N \end{equation*}

in the rational variables \(w\in\mathbb{Q}^k\) and rescale and perturb the solution to

\begin{equation*} \tilde{w} = M^kw + (1,M,\ldots,M^{k-1}) \end{equation*}

as in the proof of correctness to guarantee distinct weights. Then the push-forward grading by \(\braket{\tilde{w},\cdot}\colon \mathbb{Z}^k\to\mathbb{Q}\) is again a positive realization of the original grading, but the resulting weights may be quite large.
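The rescale-and-perturb step of Remark 3.9 can be sketched as follows, assuming only that \(M\) is an integer exceeding all \(\norm{\alpha_i}_\infty\) and \(\norm{\alpha_i-\alpha_j}_\infty\) (the precise constants of Algorithm 3.8 are not reproduced in this excerpt, so the choice of \(M\) below is our own safe over-estimate):

```python
from fractions import Fraction
from math import lcm

def perturbed_weights(w, alphas):
    """Clear denominators of a rational solution of <w, alpha_i> >= 1, then
    perturb as in Remark 3.9: w~ = M^k * w + (1, M, ..., M^(k-1)).
    Taking M = 1 + 2*max|coordinate| guarantees M > ||alpha_i||_inf and
    M > ||alpha_i - alpha_j||_inf.  Returns (w~, [<w~, alpha_i>])."""
    k = len(w)
    den = lcm(*(Fraction(x).denominator for x in w))
    wi = [int(Fraction(x) * den) for x in w]  # integer solution, <wi, a_i> >= den >= 1
    M = 1 + 2 * max(abs(x) for a in alphas for x in a)
    wt = [M ** k * x + M ** i for i, x in enumerate(wi)]
    return wt, [sum(x * y for x, y in zip(wt, a)) for a in alphas]
```

For instance, with the hypothetical weights \(\alpha_1=(1,0)\text{,}\) \(\alpha_2=(0,1)\text{,}\) \(\alpha_3=(1,1)\) and \(w=(3/2,3/2)\text{,}\) the original weights \(3/2,3/2,3\) collide, while the perturbed ones are distinct and positive.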

Subsection 3.4 Maximal gradings

In this section we provide an algorithm to construct a maximal grading for a Lie algebra \(\mathfrak{g}\) defined over an algebraically closed field \(F\text{.}\)

Remark 3.11.

If the Lie algebra \(\mathfrak{g}\) is defined over a field \(F\text{,}\) then \(\der(\mathfrak{g})\) has a maximal torus defined over \(F\text{.}\) Hence the base field of \(\mathfrak{g}\) does not play a role in Algorithm 3.10.

The rest of the section is devoted to proving the correctness of Algorithm 3.10 and to explaining the steps in more detail.

Steps 1 and 2 are straightforward linear algebra. Step 3 is the core of the algorithm, where the basis \(B\) is extended until the spanned torus is maximal. Directly by construction each additional element \(S_i\in\der(\mathfrak{g})\) is a semisimple derivation that commutes with all the previous elements of \(B\text{,}\) so \(B\) always spans a torus. The nontrivial part is that this construction guarantees that the resulting torus \(\mathfrak{t}\) spanned by \(B\) is maximal. This is guaranteed by the following lemma.

First, we claim that if \(S_i\in\mathfrak{t}\) for all \(i=1,\ldots,n\text{,}\) then the centralizer \(C(\mathfrak{t})\) is a nilpotent Lie algebra. By Engel's theorem the centralizer is nilpotent if and only if each map \(\ad(A_i)\colon C(\mathfrak{t})\to C(\mathfrak{t})\) is nilpotent. By definition \(\mathfrak{t}\) is central in \(C(\mathfrak{t})\text{,}\) so we have

\begin{equation*} \ad(A_i)=\ad(S_i)+\ad(N_i) = \ad(N_i)\text{.} \end{equation*}

Since each \(N_i\in\der(\mathfrak{g})\) is nilpotent, so is \(\ad(N_i)\) and the claim follows.

Next, we claim that the Jordan decomposition of a sum of basis elements is

\begin{equation} A_i+A_j = (S_i+S_j) + (N_i+N_j)\text{.}\label{eq-jordan-decomp-of-sum}\tag{3.14} \end{equation}

By assumption \(S_i,S_j\in\mathfrak{t}\text{,}\) so also \(S_i+S_j\in\mathfrak{t}\) and hence the sum \(S_i+S_j\) is semisimple. Moreover since \(\mathfrak{t}\) is central, \([S_i+S_j,N_i+N_j]=0\text{,}\) so all that remains is to show that \(N_i+N_j\) is nilpotent.

Since the centralizer \(C(\mathfrak{t})\) is nilpotent, it is also solvable. Since the field \(F\) is an algebraically closed field of characteristic zero, Lie's theorem implies that there exists a basis of \(\mathfrak{g}\) such that all the derivations \(A_i\) are represented by upper triangular matrices. Then \(N_i\) and \(N_j\) are both strictly upper triangular matrices, so also the sum \(N_i+N_j\) is strictly upper triangular, and hence nilpotent.

The Jordan decompositions (3.14) and the assumption that \(S_i\in\mathfrak{t}\) for all \(i=1,\ldots,n\) imply that the semisimple part of every linear combination of the elements \(A_i\) is also contained in \(\mathfrak{t}\text{.}\) Hence there cannot exist any semisimple elements in \(C(\mathfrak{t})\setminus\mathfrak{t}\text{.}\)

Remark 3.13.

The Jordan decompositions required in Step 3 of Algorithm 3.10 can be efficiently computed using the algorithm given in Appendix A.2 of [9].

In Step 4, the grading induced by the torus \(\mathfrak{t}\) has a concrete description in terms of the fixed basis \(B\) of \(\mathfrak{t}\text{.}\) Namely, the basis \(\delta_1,\ldots,\delta_k\) defines an isomorphism \(\mathfrak{t}^*\to F^k\) and hence an equivalent push-forward grading over \(F^k\text{.}\) Expanding out the construction of Lemma 2.12 shows that the push-forward grading has the layers

\begin{equation*} V_\lambda = V_{(\lambda_1,\ldots,\lambda_k)} = \bigcap_{i=1}^k E^{\lambda_i}_{\delta_i}\text{,} \end{equation*}

where \(E^{\lambda_i}_{\delta_i}\) is the (possibly zero) eigenspace for the eigenvalue \(\lambda_i\) of the derivation \(\delta_i\text{.}\)
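In a basis that simultaneously diagonalizes the torus, computing the layers \(V_\lambda\) amounts to grouping basis vectors by their tuple of eigenvalues. A minimal sketch (the example derivations below are a standard choice for the Heisenberg algebra, not taken from the text):

```python
from collections import defaultdict

def layers(diagonals):
    """Group basis vectors by their joint eigenvalue tuple: the layer
    V_lambda is the intersection of the eigenspaces E^{lambda_i}_{delta_i}.
    Assumes the torus basis delta_1,...,delta_k is simultaneously diagonal
    in the chosen basis; diagonals[i] lists the eigenvalues of delta_i."""
    V = defaultdict(list)
    for idx, lam in enumerate(zip(*diagonals)):
        V[lam].append(idx)
    return dict(V)

# Heisenberg algebra [X1,X2] = X3: the commuting semisimple derivations
# diag(1,0,1) and diag(0,1,1) span a maximal torus, giving weights
# (1,0), (0,1), (1,1) over Z^2.
heisenberg_layers = layers([[1, 0, 1], [0, 1, 1]])
```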

The final part of Algorithm 3.10 is Step 5, where we replace the indexing by eigenvalues of the derivations of \(\mathfrak{t}\) with indexing over some \(\mathbb{Z}^k\) given by the universal realization. The precise method was described earlier in Subsection 2.3. Since the construction of the first three steps of Algorithm 3.10 leads to a maximal torus of \(\der(\mathfrak{g})\text{,}\) by Definition 2.16 the output is a maximal grading of \(\mathfrak{g}\text{.}\)

Remark 3.14.

The relevance of the assumption that the field \(F\) is algebraically closed is to guarantee that the constructed tori are split, i.e., the semisimple derivations are diagonalizable. The Jordan decomposition and Lemma 3.12 then give us an efficient method to construct diagonalizable derivations in \(C(\mathfrak{t})\setminus\mathfrak{t}\text{.}\)

When the Lie algebra \(\mathfrak{g}\) is defined over a non-algebraically closed field \(F\text{,}\) it is possible that the computation of a maximal grading over the algebraic closure \(\bar{F}\) using Algorithm 3.10 outputs a grading that is still defined over \(F\text{.}\) Then the output is also a maximal grading of the Lie algebra \(\mathfrak{g}\) over \(F\text{.}\) Consider for example the Lie algebras denoted as \(L_{6,19}(\epsilon)\) in [1], which are 6-dimensional Lie algebras defined by the structure coefficients

\begin{align*} [X_1,X_2] &= X_4 & [X_1,X_3] &= X_5\\ [X_1,X_5] &= [X_2,X_4] = X_6 & [X_3,X_5] &= \epsilon X_6\text{.} \end{align*}

For the Lie algebra \(L_{6,19}(-1)\) the maximal torus computed by Algorithm 3.10 over the algebraic numbers is also defined over \(\mathbb{Q}\text{,}\) but this is not the case with the Lie algebra \(L_{6,19}(1)\text{.}\) Indeed for \(L_{6,19}(1)\text{,}\) the maximal torus over the rationals is 2-dimensional, but the maximal torus over the algebraic numbers is 3-dimensional.