
Appendix A Existence of a positive realization

Example 2.4 motivates an alternative approach for deciding the existence of a positive realization. The grading in the example does not admit a positive realization: suppose towards a contradiction that there is an injection \(\{a,b,c,d\} \rightarrow \mathbb{R}\) giving a positive realization. The bracket relations of the Lie algebra imply the equations \(a+c=d\) and \(b+d=c\text{,}\) which are impossible for strictly positive weights, since adding the two equations gives \(a+b=0\text{.}\) Here the non-existence of a positive realization is detected simply by considering the equations implied by the bracket relations of the layers.

In general, a grading can be realized over some abelian group \(A\) with the set of weights \(\{\lambda_1,\ldots,\lambda_k\}\) if and only if a certain system of equations of the type \(\lambda_i + \lambda_j = \lambda_h\) has a solution \((\lambda_1,\ldots,\lambda_k)\) whose components are all distinct. The system consists of the equations coming from the non-trivial bracket relations among the layers of the grading; see step 2 of Algorithm 2.13. A positive realization exists if and only if there is a solution in the group \(A=\mathbb{R}\) with all weights strictly positive. Indeed, if there is a positive solution, then there is also a positive solution with distinct components, as we shall see in the proof of Algorithm A.4.

Deciding whether a solution with all components strictly positive exists is a classical problem in linear programming. By rescaling, we may replace the open conditions \(\lambda_i >0 \) with the closed conditions \(\lambda_i \geq 1\text{.}\) By the change of variables \(\mu_i=\lambda_i-1\text{,}\) the linear problem for the existence of a positive realization is equivalent to the affine problem of solving the equations \(\mu_{i} + \mu_{j} - \mu_{h} = -1\) with all the components \(\mu_i\) non-negative.
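To make the reduction concrete, the following sketch builds the matrix of the affine problem from a list of index triples \((i,j,h)\text{,}\) one per bracket relation \(\lambda_i+\lambda_j=\lambda_h\text{.}\) The triples and the Python representation are illustrative assumptions, not part of the algorithms in the text.

import numpy as np

def affine_system(triples, k):
    """Build (A, b) for the problem A mu = b with mu >= 0, where each
    equation lambda_i + lambda_j = lambda_h, encoded as the triple
    (i, j, h), becomes mu_i + mu_j - mu_h = -1 after the substitution
    mu = lambda - 1. The rows may still be linearly dependent."""
    A = np.zeros((len(triples), k))
    for row, (i, j, h) in enumerate(triples):
        A[row, i] += 1.0
        A[row, j] += 1.0
        A[row, h] -= 1.0
    b = -np.ones(len(triples))
    return A, b

# The system of Example 2.4: a + c = d and b + d = c,
# with the weights indexed as a = 0, b = 1, c = 2, d = 3.
A, b = affine_system([(0, 2, 3), (1, 3, 2)], k=4)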

Let \(A\) be the \(N \times k \)-matrix of coefficients of the affine problem and let \(\mathbf{b} = (-1,\ldots, -1) \in \mathbb{R}^N\text{.}\) By discarding linearly dependent equations, we may assume that \(\operatorname{rank}(A) = N \le k\text{.}\) Our goal is then an algorithm that either produces an element of the set

\begin{equation} P = \{ \mathbf{x} \in \mathbb{R}^{k} \mid A \mathbf{x} = \mathbf{b} \text{ and } \mathbf{x} \ge 0 \}\label{the-set-of-interest-P}\tag{A.1} \end{equation}

or indicates that the set \(P\) is empty. Here we use the shorthand notation \(\mathbf{x} \ge 0 \) to mean that all the components of the vector \(\mathbf{x} \) are non-negative. Notice that the set \(P\) of solutions is closed and convex.
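Deciding whether \(P\) is empty can also be delegated to an off-the-shelf linear programming solver. The following sketch, assuming SciPy is available and reusing the matrix \(A\) and vector \(\mathbf{b}\) from the previous sketch, poses (A.1) as a pure feasibility problem; it is a sanity check rather than the enumeration developed below.

from scipy.optimize import linprog
import numpy as np

def find_point_in_P(A, b):
    """Return some x with A x = b and x >= 0, or None if P is empty.
    The zero objective makes this a pure feasibility problem, and
    linprog's default bounds already impose x >= 0."""
    result = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=b)
    return result.x if result.success else None

# For Example 2.4 the system is infeasible, so this prints None.
print(find_point_in_P(A, b))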

There is a vast literature on how to solve linear programming problems; we refer the reader to the book [16]. The approach below, and in particular Lemma A.3, is essentially from Section 2.4 of that book; we include it here for completeness.

Definition A.1.

Let \(K \subset \mathbb{R}^n \) be a convex set. We say that a point \(x \in K\) is an extremal point if it cannot be expressed as a non-trivial convex combination of points of \(K\text{,}\) that is, if \(x = ty+(1-t)z\) with \(y,z \in K\) and \(0\lt t\lt 1\text{,}\) then \(y=z=x\text{.}\)

Lemma A.2.

Let \(K \subset \mathbb{R}^n\) be a non-empty closed convex set such that \(\mathbf{x}\geq 0\) for all \(\mathbf{x}\in K\text{.}\) Then \(K\) has an extremal point.

Consider the lexicographic order \(\prec\) on \(\mathbb{R}^n\text{,}\) where \(\mathbf{x}\prec\mathbf{y}\) if there exists an index \(i\in\{1,\ldots,n\}\) such that \(\mathbf{x}_j=\mathbf{y}_j\) for all \(j\lt i\) and \(\mathbf{x}_i\lt \mathbf{y}_i\text{.}\) Observe that if \(\mathbf{x}\prec \mathbf{y}\text{,}\) then

\begin{equation} \mathbf{x}\prec t\mathbf{x}+(1-t)\mathbf{y}\label{eq-lex-order-convex-combination}\tag{A.2} \end{equation}

for every \(0\lt t\lt 1\text{.}\) Since \(K\) is closed and \(\mathbf{x}\geq 0\) for all \(\mathbf{x}\in K\text{,}\) there exists a lexicographic minimum \(\mathbf{x}_{\min}\in K\text{.}\) It follows from (A.2) that the point \(\mathbf{x}_{\min}\) cannot be expressed as a non-trivial convex combination, so it is an extremal point of \(K\text{.}\)
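For the polyhedron \(P\) of (A.1) the lexicographic minimum can be computed explicitly by minimizing the coordinates one at a time, freezing each at its minimum before moving on. The following sketch is a minimal illustration with SciPy, not part of the algorithm in the text; it returns the lexicographic minimum, which by the argument above is an extremal point.

from scipy.optimize import linprog
import numpy as np

def lexicographic_minimum(A, b):
    """Compute the lexicographic minimum of P = {x : A x = b, x >= 0}
    by one linear program per coordinate; return None if P is empty.
    Each minimum exists because the coordinates are bounded below by 0."""
    N, k = A.shape
    A_eq, b_eq = A.copy(), b.copy()
    point = np.empty(k)
    for i in range(k):
        c = np.zeros(k)
        c[i] = 1.0  # minimize the i-th coordinate
        res = linprog(c, A_eq=A_eq, b_eq=b_eq)
        if not res.success:
            return None  # P is empty (or numerically so)
        point[i] = res.x[i]
        # Freeze x_i at its minimum with the extra equality x_i = point[i].
        row = np.zeros((1, k))
        row[0, i] = 1.0
        A_eq = np.vstack([A_eq, row])
        b_eq = np.append(b_eq, point[i])
    return point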

Lemma A.3.

Let \(\mathbf{x}\) be an extremal point of the set \(P\) in (A.1). Then, after permuting the coordinates, \(\mathbf{x} = (\bar{\mathbf{x}},0,\ldots,0)\text{,}\) where \(\bar{\mathbf{x}} = B^{-1}(\mathbf{b})\) for some invertible \(N \times N\) matrix \(B\) whose columns are columns of \(A\text{.}\)

By permuting the basis, we can express \(\mathbf{x} = (\mathbf{x}_+,0,\ldots,0) \text{,}\) where \(\mathbf{x}_+ \in \mathbb{R}^p \) for some \(p \le k\) and \(\mathbf{x}_+ > 0 \text{.}\) Let \(A_+ \) be the matrix consisting of the first \(p\) columns of \(A\text{.}\) First we show that the matrix \(A_+\) has rank \(p\text{.}\) Suppose towards a contradiction that there is a non-zero vector \(\mathbf{w} \in \mathbb{R}^p \) such that \(A_+ \mathbf{w} =0 \text{.}\) Let \(\delta >0 \) be so small that the vectors

\begin{equation*} \mathbf{z}_1 = \mathbf{x}_+ + \delta \mathbf{w} \qquad \mathbf{z}_2 = \mathbf{x}_+ - \delta \mathbf{w} \end{equation*}

both satisfy \(\mathbf{z}_1, \mathbf{z}_2 \ge 0 \text{.}\) For each \(i \in \{1,2\} \text{,}\) let \(\mathbf{u}_i = ( \mathbf{z}_i ,0,\ldots,0) \in \mathbb{R}^k \text{.}\) Then

\begin{equation*} A \mathbf{u}_i = A_+ \mathbf{z}_i = A_+ \mathbf{x}_+ = A \mathbf{x} = \mathbf{b} \text{,} \end{equation*}

so \(\mathbf{u}_1, \mathbf{u}_2 \in P \text{.}\) Now the solution \(\mathbf{x}\) can be represented as the non-trivial convex combination \(\mathbf{x} = \frac{1}{2} \mathbf{u}_1 + \frac{1}{2} \mathbf{u}_2\) of elements of \(P\text{,}\) which contradicts the assumption that \(\mathbf{x}\) is an extremal point. We conclude that the rank of \(A_+\) must be \(p\text{.}\) Since \(A_+\) has only \(N\) rows, we deduce also \(p \le N\text{.}\)

If \(p \lt N\text{,}\) then since \(\operatorname{rank}(A) = N\) it is possible to form an invertible matrix \(B\) by adding some further columns of \(A\) to the matrix \(A_+\text{.}\) If instead \(p=N\text{,}\) then continue with \(B= A_+\text{.}\) Let \(\bar {\mathbf{x}} = (\mathbf{x}_+,0,\ldots,0) \in \mathbb{R}^{N} \text{.}\) Then

\begin{equation*} B \bar {\mathbf{x}} =A_+ \mathbf{x}_+=A \mathbf{x} = \mathbf{b} \end{equation*}

so \(\bar{\mathbf{x}}=B^{-1}(\mathbf{b})\) and the claim follows.

Let \(P\) be as in (A.1). Recall that in step 2 of Algorithm A.4 one computes \(B^{-1}(\mathbf{b})\) for every invertible \(N \times N\) matrix \(B\) formed from columns of \(A\text{;}\) Lemma A.3 implies that this step constructs all the extremal points of \(P\text{.}\) By Lemma A.2, if no extremal points are found, the set \(P\) is empty and no positive realization exists. It remains to argue that if \(P\) is non-empty, then a positive realization exists.
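The extremal-point enumeration can be sketched in a few lines. The following Python fragment is an illustration rather than the authors' implementation; it assumes, as in the text, that \(A\) has full row rank \(N\text{,}\) and runs over all \(N\)-element subsets of its columns.

from itertools import combinations
import numpy as np

def extremal_points(A, b):
    """Enumerate the extremal points of P = {x : A x = b, x >= 0}:
    for each invertible N x N submatrix B of A, solve B xbar = b and,
    if the solution is non-negative, pad it with zeros to a point of P."""
    N, k = A.shape
    points = []
    for subset in combinations(range(k), N):
        cols = list(subset)
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue  # singular submatrix: no candidate here
        xbar = np.linalg.solve(B, b)
        if np.all(xbar >= 0):
            x = np.zeros(k)
            x[cols] = xbar
            points.append(x)
    return points

# By Lemma A.2, P is empty exactly when this list is empty.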

A priori, even if some \(\mathbf{x} = B^{-1}(\mathbf{b}) \ge 0\text{,}\) the corresponding weights \(x_1+1,\ldots,x_k+1\) do not necessarily define a realization of the original grading, since in general these weights need not be distinct and hence define a coarser grading. By Lemma 2.15 this coarser grading is a push-forward grading of the universal realization of the original grading, via a homomorphism from some \(\mathbb{Z}^m\) to \(\mathbb{R}\) mapping the weights to \(x_1+1,\ldots,x_k+1\text{.}\) The homomorphism is realized as a projection to some line of \(\mathbb{R}^m\text{,}\) as in the proof of Lemma 3.7. By perturbing this line, it is always possible to find another homomorphism that is injective on the weights and maps all the weights to strictly positive reals. Hence there is also a positive realization of the original grading.
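The perturbation argument can also be made concrete. Suppose the weights of the universal realization are given as distinct vectors \(w_1,\ldots,w_k \in \mathbb{Z}^m\) (a hypothetical input; in the text they come from Lemma 2.15) and that \(v \in \mathbb{R}^m\) satisfies \(\langle v, w_i \rangle = x_i + 1 > 0\) for all \(i\text{.}\) Since every linear map \(\mathbb{R}^m \to \mathbb{R}\) preserves the additive relations among the \(w_i\text{,}\) any nearby direction with strictly positive and pairwise distinct images again defines a realization; the following sketch finds one by random perturbation.

import numpy as np

def perturb_to_injective(v, W, seed=0):
    """Given a matrix W whose rows are the weight vectors w_1, ..., w_k
    and a direction v with W @ v > 0, return a nearby direction u such
    that the images W @ u are strictly positive and pairwise distinct."""
    rng = np.random.default_rng(seed)
    eps = 1.0
    while True:
        u = v + eps * rng.standard_normal(v.shape)
        images = W @ u
        if np.all(images > 0) and len(np.unique(images)) == len(images):
            return u
        eps /= 2  # shrink until positivity persists; distinctness is generic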