1 Introduction

In linear algebra, considerable work has been done on isometric, self-adjoint, and skew-adjoint endomorphisms with respect to bilinear forms. Canonical matrices for operators of this type on complex inner product vector spaces are provided in [13]. Similar results for more general fields appear in [18] and, more recently, in [8]. In this paper, we will focus on skew-adjoint endomorphisms and their significance as a tool in constructing quadratic algebras.

A Lie algebra L is a vector space over a field equipped with a bilinear product \([x,y]\) that satisfies \([x,x]=0\) and the Jacobi identity (\([[x,y],z]+[[y,z],x]+[[z,x],y]=0\)). L is said to be quadratic if it is endowed with a non-degenerate symmetric bilinear form \(\varphi \) which is invariant, i.e., \(\varphi ([x,y],z)+\varphi (y,[x,z])=0\). If \((L,\varphi )\) is quadratic, the invariance of \(\varphi \) is equivalent to the left multiplication operators \({{\,\textrm{ad}\,}}x\), called adjoint or inner derivations, being \(\varphi \)-skew-adjoint endomorphisms. In fact, the set of inner derivations of L, \({{\,\textrm{Inner}\,}}(L)\), is an ideal of the whole algebra of derivations of L, \({{\,\textrm{Der}\,}}(L)\), and the set of \(\varphi \)-skew-adjoint derivations, \({{\,\textrm{Der}\,}}_\varphi (L, \varphi )=\{d\in {{\,\textrm{Der}\,}}(L): \varphi (d(x),y)+\varphi (x,d(y))=0\}\), is a Lie subalgebra of \({{\,\textrm{Der}\,}}L\) that contains \({{\,\textrm{Inner}\,}}(L)\).

Semisimple Lie algebras under their Killing–Cartan form, defined as \(\kappa (x,y)={{\,\textrm{Tr}\,}}({{\,\textrm{ad}\,}}x{{\,\textrm{ad}\,}}y)\), are nice examples of quadratic algebras. On the opposite structural side, we find abelian Lie algebras; all of them are quadratic under any non-degenerate symmetric form. The orthogonal sum (as ideals) of semisimple and abelian algebras allows us to assert that reductive Lie algebras are also quadratic. According to [16], any non-simple, non-abelian, and indecomposable quadratic Lie algebra (i.e. one that does not break as an orthogonal sum of two regular ideals) is a double extension either by a one-dimensional or by a simple Lie algebra. The main tool enabling this classical procedure is the existence of skew-adjoint derivations. We point out that the class of quadratic Lie algebras is quite large: it contains the reductive Lie algebras and also infinitely many non-semisimple examples. Most of the examples, structures, and constructions on quadratic algebras have been set over fields of characteristic zero (see [17] for a survey-guide). In positive characteristic, they have not been as extensively studied.

Over the reals, the double extension of any Euclidean vector space by a skew-adjoint automorphism yields the class of real oscillator algebras. They are quadratic and solvable Lie algebras of dimension \(2n+2\) endowed with an invariant bilinear form of Lorentzian type (inner product with metric signature \((2n+1,1)\)). The real oscillator class was first introduced and easily described, thanks to the Spectral Theorem on real skew-adjoint operators, by Alberto Medina in [15, Section 4]. The name oscillator comes from quantum mechanics, because these algebras describe the system of a harmonic oscillator in n-dimensional Euclidean space. At the same time, J. Hilgert and K. H. Hofmann arrived at oscillator algebras in their characterization of Lorentzian cones in real Lie algebras [11]. In [12, Definition II.3.6], the authors term them (solvable) Lorentzian algebras. For \(n\ge 2\), the Levi subalgebra of the algebra of skew-derivations of a \((2n+2)\)-dimensional oscillator algebra is the special unitary real Lie algebra \(\mathfrak {su}_n(\mathbb {R})\) (see [6, Theorem 3.2]). Therefore, oscillator algebras can be doubly extended to a countable series of non-semisimple and non-solvable quadratic Lie algebras. Moreover, the study of other non-associative structures on oscillator algebras provides information on connections and metrics on oscillator Lie groups [1, Section 5].

Throughout this paper, we will extend the notion of real oscillator algebras to an arbitrary field \(\mathbb {K}\) of characteristic different from 2. Under the name of generalized \(\mathbb {K}\)-oscillator algebras, we encode the double extensions of any abelian quadratic Lie algebra by an arbitrary skew-adjoint endomorphism. In the particular case \(\mathbb {K}=\mathbb {R}\), extensions through skew-adjoint automorphisms allow us to recover the class of real oscillator algebras [15, Lemme 4.2].

The paper is organized as follows. In Sect. 2, we assemble some basic properties, orthogonal decompositions, and canonical forms of skew-adjoint endomorphisms. Definition 3.1 in Sect. 3 establishes the concept of generalized oscillator algebras over arbitrary fields of characteristic not 2, and Lemma 3.2 reviews some structural properties of this class of quadratic algebras. In Sect. 3, we also classify the nilpotent algebras of this class in Theorem 3.5, and provide a characterization of those with quadratic dimension two. In Sect. 4, we prove that indecomposable quadratic algebras with Witt index 1 are simple or solvable. The solvable ones are precisely the subclass of generalized oscillator algebras constructed as double extensions by skew-adjoint automorphisms. The proof of the assertion is based on the concept of isomaximal ideal introduced in [14]. This result enables us to recover the well-established classification of real oscillator algebras given in [15].

2 Skew-Maps on Orthogonal Subspaces

Let V be a \(\mathbb {K}\)-vector space and \(f:V\rightarrow V\) a \(\mathbb {K}\)-endomorphism. From now on, we denote by \(m_f(x)\in \mathbb {K}[x]\) the (monic) minimal polynomial of f; that is, \(m_f(f)=0\) and any other polynomial q(x) with \(q(f)=0\) is a multiple of \(m_f(x)\). The factorization of \(m_f(x)\) into distinct irreducible polynomials, \(m_f(x)=\pi _1^{k_1}(x)\ldots \pi _r^{k_r}(x)\), with \(\pi _i(x)\) monic and \(k_i\ge 1\), induces a direct sum vector decomposition of V, referred to as the primary decomposition: \(V=V_{\pi _1}\oplus \cdots \oplus V_{\pi _r}=V_0 \oplus \bigl (\oplus _{\pi _i\ne x}V_{\pi _i}\bigr )\), where \(V_{\pi _i}=\{v\in V\mid \pi _i^{k_i}(f)(v)=0\}\). Each primary component is an f-invariant subspace. In the particular case \(\pi _1(x)=x-\lambda \), the scalar \(\lambda \) is an eigenvalue of f, and \(V_{x-\lambda }\) is usually denoted by \(V_{\lambda }\); this subspace is called a generalized \(\lambda \)-eigenspace. Thus, \(V_0\) is the generalized 0-eigenspace, and \(V_0\) is zero if and only if f is a bijective map. Let \(J_n(\lambda )\) denote the \(n\times n\) Jordan canonical block and \(C(\pi (x))\) the companion matrix of a given monic polynomial \(\pi (x)=x^n-a_{n-1}x^{n-1}-\dots -a_1x-a_0\). Therefore,

$$\begin{aligned} J_n(\lambda ):=\left( {\begin{matrix} \lambda &{} 1 &{} &{} 0 \\ 0 &{} \lambda &{} \ddots &{} 0 \\ &{} &{} \ddots &{} 1 \\ 0 &{} 0 &{} &{} \lambda \end{matrix}}\right) \!\!,\quad C(\pi (x)):=\left( {\begin{matrix} 0 &{} &{} 0 &{} a_0 \\ 1 &{} \ddots &{} &{} a_1 \\ &{} \ddots &{} 0 &{} \vdots \\ 0 &{} &{} 1 &{} a_{n-1} \end{matrix}}\right) \end{aligned}$$
(1)
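
The primary decomposition can be computed mechanically. The following Python/sympy sketch illustrates it for a made-up endomorphism; the matrix F and the helper functions are ours and not taken from the paper or from any standard library: it factors the minimal polynomial and recovers each primary component as a nullspace.

```python
import sympy as sp

x = sp.symbols('x')

def eval_poly(q, F):
    """Evaluate the polynomial q(x) at the square matrix F (Horner scheme)."""
    coeffs = sp.Poly(q, x).all_coeffs()
    out = sp.zeros(*F.shape)
    for c in coeffs:
        out = out * F + c * sp.eye(F.shape[0])
    return out

def minimal_polynomial(F):
    """Monic minimal polynomial of F: start from the characteristic polynomial
    and lower each irreducible exponent while F remains a root."""
    _, factors = sp.factor_list(F.charpoly(x).as_expr())
    pis = [p for p, _ in factors]
    exps = [k for _, k in factors]
    for i in range(len(pis)):
        while exps[i] > 1:
            exps[i] -= 1
            trial = sp.prod(p**e for p, e in zip(pis, exps))
            if eval_poly(trial, F) != sp.zeros(*F.shape):
                exps[i] += 1
                break
    return sp.prod(p**e for p, e in zip(pis, exps)), list(zip(pis, exps))

# f acts on K^4 as the block sum of J_2(0) and the companion matrix of x^2 + 1
F = sp.Matrix([[0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, -1],
               [0, 0, 1, 0]])

m_f, factors = minimal_polynomial(F)
print('m_f(x) =', sp.expand(m_f))                  # x**4 + x**2, i.e. x^2 (x^2 + 1)
for pi, k in factors:
    V_pi = eval_poly(pi**k, F).nullspace()         # basis of the primary component V_pi
    print(pi, 'component has dimension', len(V_pi))
```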

Now, let \(\varphi \) be a symmetric bilinear form, and assume that f is \(\varphi \)-skew-adjoint (henceforth, we will use \(\varphi \)-skew to abbreviate this term), which means, \(\varphi (f(x), y)=-\varphi (x, f(y))\). Then, for any \(s\ge 0\), we have

$$\begin{aligned} \varphi (f^s(x),y)=(-1)^s\varphi (x,f^s(y))=\varphi (x,(-f)^s(y)). \end{aligned}$$
(2)

This implies that,

$$\begin{aligned} \varphi (q(f)(x),y)=\varphi (x,q(-f)(y)) \text { for any } q(x)\in \mathbb {K}[x]. \end{aligned}$$
(3)
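
A quick numerical sanity check of Eqs. (2) and (3), with made-up data: the Gram matrix B below encodes \(\varphi \), and \(M=B^{-1}S\) with S skew-symmetric is one standard way to produce a \(\varphi \)-skew endomorphism.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = np.diag([1.0, 2.0, -1.0, 3.0, 1.0])           # Gram matrix of phi: symmetric, non-degenerate
S = rng.normal(size=(n, n)); S = S - S.T          # skew-symmetric
M = np.linalg.inv(B) @ S                          # matrix of f; then M^T B + B M = 0
assert np.allclose(M.T @ B + B @ M, 0)            # f is phi-skew

def phi(u, v):
    return u @ B @ v

def eval_poly(coeffs, A):
    """q(A) for q(x) with coefficients [c_k, ..., c_1, c_0], by Horner's scheme."""
    out = np.zeros_like(A)
    for c in coeffs:
        out = out @ A + c * np.eye(n)
    return out

q = [2.0, 0.0, -1.0, 3.0]                         # q(x) = 2x^3 - x + 3
u, v = rng.normal(size=n), rng.normal(size=n)
lhs = phi(eval_poly(q, M) @ u, v)
rhs = phi(u, eval_poly(q, -M) @ v)
print(np.isclose(lhs, rhs))                       # True, as stated in Eq. (3)
```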

From now on, the pair \((V,\varphi )\), where \(\varphi \) is a symmetric bilinear form, will be called an orthogonal \(\mathbb {K}\)-vector space.

Proposition 2.1

Let \((V, \varphi )\) be an orthogonal \(\mathbb {K}\)-vector space, and f a \(\varphi \)-skew \(\mathbb {K}\)-endomorphism of V with minimal polynomial \(m_f(x)=\pi _1^{k_1}(x)\ldots \pi _r^{k_r}(x)\), and primary decomposition \(V=\bigoplus _{i=1}^rV_{\pi _i}\). The orthogonal subspaces \(V_{\pi _i}^\perp \) are f-invariant, and for any \(\pi _i(x)\), we have two possibilities:

  1. (a)

    \(\pi _i(-x)\ne \pm \pi _j(x)\) for \(1\le j\le r\). Here, \(V_{\pi _i}\subset V^\perp \) and \(\varphi \) is degenerate.

  2. (b)

    There is a unique \(j_i\in \{1, \dots , r\}\) such that \(\pi _i(-x)=(-1)^{\deg \pi _i}\pi _{j_i}(x)\) and then \(V_{\pi _i}^\perp =V_{\pi _{j_i}}\cap V_{\pi _i}^\perp \oplus \bigl (\oplus _{k\ne j_i}V_{\pi _k}\bigr )\). Moreover,

    • for any \(1\le i\le r\), \(V_{\pi _{j_i}}\cap V_{\pi _i}^\perp =V_{\pi _{j_i}}\cap V^\perp \) and

    • if \(i\ne j_i\), the primary components \(V_{\pi _i}, V_{\pi _{j_i}}\) are totally isotropic.

Proof

From Eq. (2), it is easily checked that \(V_{\pi _i}^\perp \) is f-invariant. We point out that \( W=\oplus _{\pi _i\ne x} V_{\pi _i}\subseteq {{\,\textrm{Im}\,}}f\); even more, \(f^s\mid _{V_{\pi _i}}\) is bijective on any direct summand \(V_{\pi _i}\) of W for every \(s\ge 1\). Suppose \(W\ne V\) or, equivalently, that f is not one-to-one. Then, the polynomial x appears in the factorization of \(m_f(x)\). Reordering if necessary, we assume \(x=\pi _1(x)=-\pi _1(-x)\); therefore, \(V_{\pi _1}=V_x=V_0\). For any \(w\in V_{\pi _i}\subseteq W\), there exists \(v\in V_{\pi _i}\) such that \(w=f^{k_1}(v)\). We have \(\varphi (V_0, w)=\varphi (V_0, f^{k_1}(v))=(-1)^{k_1}\varphi (f^{k_1}(V_0),v)=0\). Thus, \(V_0\perp W\), and every assertion regarding the primary component \(V_{\pi _1}\) follows; this is a particular case of item (b). Hence, without loss of generality, we can assume that f is one-to-one. Since \(\pi _i(x)\) is irreducible, so is \(\pi _i(-x)\); thus either \(\gcd (\pi _i(x),\pi _j(-x))=1\) or \(\gcd (\pi _i(x),\pi _j(-x))=\pi _i(x)\), and the second case happens if and only if \(\pi _j(-x)=\pm \pi _i(x)\). First, assume \(\pi _i(-x)\ne \pm \pi _j(x)\) for any \(1\le j\le r\). Then, \(\gcd (\pi _i^{k_i}(-x), \pi _j^{k_j}(x))=1\) and, from Bezout’s identity, there exist a(x), \(b(x)\in \mathbb {K}[x]\) with \(a(x)\pi _i^{k_i}(-x)+b(x)\pi _j^{k_j}(x)=1\). Take \(v\in V_{\pi _i}\), so \(\pi _i^{k_i}(f)(v)=0\). Using Bezout’s identity, for any vector w in a primary component \(V_{\pi _j}\), we have \(w=a(f)\pi _i^{k_i}(-f)(w)\) and, therefore, \(0=\varphi (a(-f)(\pi _i^{k_i}(f)(v)),w)=\varphi (v,a(f)\pi _i^{k_i}(-f)(w))=\varphi (v,w)\). This implies \(0\ne V_{\pi _i}\subset V^\perp \), and (a) follows. Otherwise, the uniqueness of the decomposition into irreducibles of the minimal polynomial ensures that there is a unique \(1\le j_i\le r\) such that \(\pi _i(-x)=(-1)^{\deg \pi _i}\pi _{j_i}(x)\). Thus, \(\pi _i(-x)\) and \(\pi _k(x)\) are coprime for \(k\ne j_i\). Using Bezout’s identity as in the previous reasoning, \(V_{\pi _i}\perp V_{\pi _k}\), and the direct sum decomposition of \(V_{\pi _i}^\perp \) follows from the f-invariance of \(V_{\pi _i}^\perp \). This also shows that \(V_{\pi _i}\subseteq V_{\pi _i}^\perp \) when \(i\ne j_i\). Finally, we prove \(V_{\pi _{j_i}}\cap V_{\pi _i}^\perp =V_{\pi _{j_i}}\cap V^\perp \). From \(V^\perp \subseteq V_{\pi _i}^\perp \) we have \(V_{\pi _{j_i}}\cap V^\perp \subseteq V_{\pi _{j_i}}\cap V_{\pi _i}^\perp \). The equality follows by proving \(V_{\pi _{j_i}}\cap V_{\pi _i}^\perp \subseteq V^\perp \). Let \(v\in V_{\pi _{j_i}}\cap V_{\pi _i}^\perp \), and note that \(\varphi (v,w)=0\) if \(w\in V_{\pi _i}\). If \(w\in V_{\pi _k}\), \(k\ne i\), since \(V_{\pi _k}\perp V_{\pi _{j_i}}\), we also have \(\varphi (v,w)=0\); so \(v\in V^\perp \). \(\square \)

Remark 2.2

For any polynomial \(p(x)=x^n+a_{n-1}x^{n-1}+\dots +a_0\) such that \(a_0\ne 0\), the polynomial \(q(x)=x^n-a_{n-1}x^{n-1}+\dots +(-1)^{n-1}a_1x+(-1)^na_0\) is the unique monic polynomial fulfilling \(p(-x)=(-1)^{\deg p}q(x)\). The set of roots of q(x) is just \(\{-\lambda : p(\lambda )=0\}\). In particular, \(q(x)=p(x)\) if and only if all the monomials of p(x) have even degree; in that case, the roots of p(x) are of the form \(\pm \lambda _1,\dots , \pm \lambda _n\).
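
A short sympy illustration of the remark, with a hypothetical p(x):

```python
import sympy as sp

x = sp.symbols('x')
p = sp.expand((x - 1)*(x + 2)*(x - 3))            # monic, p(0) = 6 != 0
q = sp.expand((-1)**sp.degree(p, x) * p.subs(x, -x))
print(q)                                          # x**3 + 2*x**2 - 5*x - 6
print(sp.roots(p), sp.roots(q))                   # roots of q are the negatives of the roots of p

p_even = x**4 - 5*x**2 + 4                        # only even-degree monomials, roots ±1, ±2
print(sp.expand(p_even.subs(x, -x)) == p_even)    # True: here q(x) = p(x)
```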

From the previous Proposition 2.1, we arrive at the following general orthogonal decomposition of \((V, \varphi )\) through a \(\varphi \)-skew map.

Corollary 2.3

Let \((V,\varphi )\) be an orthogonal \(\mathbb {K}\)-vector space over a field of characteristic not 2, and \(f:V\rightarrow V\) be a \(\varphi \)-skew linear map. Then, up to permutation, the factorization of \(m_f(x)=x^\alpha \pi _1^{k_1}(x)\ldots \pi _r^{k_r}(x)\) into irreducible monic polynomials splits as \(m_f(x)=x^\alpha p(x)q(x)s(x)n(x)\), where \(\alpha \ge 0\) and, whenever the corresponding factor among p, q, s, n has degree \(\ge 1\):

  1. (a)

    \(p(x)=\pi _1^{k_1}(x)\ldots \pi _t^{k_t}(x)\) and \(\pi _i(-x)=\pm \pi _i(x)\),

  2. (b)

    \(q(x) =\pi _{t+1}^{k_{t+1}}(x)\ldots \pi _{t+l}^{k_{t+l}}(x)\) and \(\pi _j(-x)=\pm \pi _{l+j}(x)\) and,

  3. (c)

    \(s(x)=\pi _{t+l+1}^{k_{t+l+1}}(x)\ldots \pi _{t+2\,l}^{k_{t+2\,l}}(x)\) and \( \pi _{l+j}(-x)=\pm \pi _j(x)\),

  4. (d)

    \(n(x)=\pi _{t+2l+1}^{k_{t+2l+1}}(x)\ldots \pi _r^{k_r}(x)\) and \(\pi _j(-x)\notin \pm \{ \pi _1, \dots ,\pi _r\}\).

This yields the orthogonal sum \(V=V_0\perp V^1\perp V^2\perp V^3\), where \(V^1=\oplus _{\pi _i\mid p(x)}V_{\pi _i}\), \(V^2= \oplus _{\pi _j\mid q(x)}\bigl (V_{\pi _j}\oplus V_{\pi _{j+l}}\bigr )\) with \(V_{\pi _j}\) and \(V_{\pi _{j+l}}\) totally isotropic subspaces, and \(V^3= \oplus _{\pi _i\mid n(x)}V_{\pi _i}\subseteq V^\perp \).

Corollary 2.4

Let \((V,\varphi )\) be an orthogonal vector space over a field \(\mathbb {K}\) of characteristic not 2 such that \(\varphi \) is non-degenerate, and denote by \(I_\varphi =\{v\in V: \varphi (v,v)=0 \}\) the set of \(\varphi \)-isotropic vectors. Let \(f:V\rightarrow V\) be a \(\varphi \)-skew linear map with minimal polynomial \(m_f(x)=x^\alpha \pi _1^{k_1}(x)\ldots \pi _r^{k_r}(x)\). Then:

  1. (a)

    If \(I_\varphi =\{0\}\), f is semisimple and \(V=V_0\oplus V^1=\ker f\perp {{\,\textrm{Im}\,}}f\), so \(m_f(x)=x^\alpha \pi _1(x)\dots \pi _t(x)\), \(\alpha =0,1\). Moreover, any irreducible factor \(\pi _i(x)\) is of the form \(\pi _i(x)=x^{2n_i}-a_{n_i-1}x^{2(n_i-1)}-\dots -a_1x^2-a_0\), \(n_i\ge 1\), and \(a_0\ne 0\). In particular, \(f=0\) when the base field is algebraically closed.

  2. (b)

    There exists \(\sigma \in S_r\), \(\sigma ^2=1\), such that \(\pi _i(-x)=(-1)^{\deg \pi _i}\pi _{\sigma (i)}(x)\). In particular, \(m_f(-x)=(-1)^{\deg m_f}m_f(x)\), and either \(m_f(x)=x^\alpha \) or \(m_f(x)=x^\alpha p(x)\), \(p(0)\ne 0\), and the monomials of p(x) are of even degree. So, the nonzero roots of \(m_f(x)\) are of the form \(\pm \lambda _1,\dots , \pm \lambda _n\).

  3. (c)

    \(V=V_0\perp V^1\perp V^2\), \(V_{\pi _{\sigma (i)}}\cap V_{\pi _i}^\perp =0\) and the components \(V_{\pi _i}\) and \(V_{\pi _{\sigma (i)}}\) are equidimensional. The direct sum decomposition in \(V^1\) is orthogonal, and \(\bigl (V_{\pi _i}\oplus V_{\pi _{\sigma (i)}}\bigr )\perp \bigl (V_{\pi _j}\oplus V_{\pi _{\sigma (j)}}\bigr )\) for \(j\ne i, \sigma (i)\).

  4. (d)

    If the base field \(\mathbb {K}\) is algebraically closed, \(V=V_0\perp V^2\), and \(m_f(x)=x^{\alpha }(x-\lambda _1)^{k_1}(x+\lambda _1)^{k_1}\ldots (x-\lambda _r)^{k_r}(x+\lambda _r)^{k_r}\).

Proof

Suppose that there are no nonzero isotropic vectors. Then \(V=V_0\oplus V^1\) follows from Corollary 2.3. If \(\alpha \ge 2\), there is a nonzero vector \(v\in V_0\) such that \(f^\alpha (v)=0\ne f^{\alpha -1}(v)\), and then \(\varphi (f^{\alpha -1}(v), f^{\alpha -1}(v))=-\varphi (f^{\alpha -2}(v), f^{\alpha }(v))=0\). So \(f^{\alpha -1}(v)=0\), a contradiction. Now consider any irreducible factor \(\pi _i(x)\ne x\). From Remark 2.2, \(\pi _i(x)=x^{2n_i}-\sum _{k=0}^{n_i-1} a_{k}x^{2k}\) with \(a_0\ne 0\). Denote by \( {\textbf {E}}_{1,2n_i}\) the \(2n_i\times 2n_i\) elementary matrix (\(a_{r,s}=0\) for \((r,s)\ne (1,2n_i)\) and \(a_{1,2n_i}=1\)), and let \(C(\pi _i(x))\) be the companion matrix described in Eq. (1). If \(k_i\ge 2\), there is a set T of \(4n_i\) linearly independent vectors such that \(U={{\,\textrm{span}\,}}\langle T\rangle \) is f-invariant, and the matrix of \(f|_U\) is of the form

$$\begin{aligned} \left( \begin{array}{c|c} C(\pi _i(x)) &{} \textbf{0}_{2n_i\times 2n_i} \\ \hline {\textbf {E}}_{1,2n_i} &{} C(\pi _i(x)) \end{array}\right) \end{aligned}$$
(4)

Then, we can find \(v,w\in T\) such that \(f^{2n_i}(v)=\sum _{k=0}^{n_i-1} a_{k} f^{2k}(v)+w\) and \(f^{2n_i}(w)=\sum _{k=0}^{n_i-1} a_{k} f^{2k}(w)\). So \(w=\frac{1}{a_0}(-\sum _{k=1}^{n_i-1}a_kf^{2k}(w)+f^{2n_i}(w))\). Using previous equalities, we have that:

$$\begin{aligned} a_0\varphi (v,w)&=\sum \limits _{k=1}^{n_i-1} -a_k\varphi (v,f^{2k}(w))+\varphi (v,f^{2n_i}(w)) \\ &=\sum \limits _{k=1}^{n_i-1} -a_k\varphi (f^{2k}(v),w)+\varphi (f^{2n_i}(v),w) \\ &=\sum \limits _{k=1}^{n_i-1} -a_k\varphi (f^{2k}(v),w)+\sum \limits _{k=0}^{n_i-1} a_k\varphi (f^{2k}(v),w)+\varphi (w,w) \\ &=a_0\varphi (v,w)+\varphi (w,w). \end{aligned}$$

Thus, \(\varphi (w,w)=0\), a contradiction, proving assertion (a). Since \(V^\perp =0\), the decomposition in (c) and assertions (b) and (d) follow from Proposition 2.1 and Remark 2.2. Finally, to get \(\dim V_{\pi _{\sigma (i)}}=\dim V_{\pi _i}\), note that \(V_{\pi _{\sigma (i)}}\cap V_{\pi _i}^\perp =V_{\pi _{i}}\cap V_{\pi _{\sigma (i)}}^\perp =0\) because \(V^\perp =0\); hence, by Proposition 2.1, \(V_{\pi _{\sigma (i)}}^\perp =\oplus _{k\ne i}V_{\pi _k}\) and \(\dim V_{\pi _{\sigma (i)}}^\perp +\dim V_{\pi _{\sigma (i)}}=\dim V=\dim V-\dim V_{\pi _i}+\dim V_{\pi _{\sigma (i)}}\). From the previous equalities, we conclude the equidimensionality. \(\square \)

Example 2.1

Applying previous Corollary 2.4, we get the well-known Spectral Theorem (skew-Hermitian case). Let \((V, \varphi )\) be a real orthogonal vector space without isotropic vectors, and \(f:V\rightarrow V\) a nonzero \(\varphi \)-skew linear map. Then, either \(\varphi (v,v)>0\) for all \(v\in V\) (Euclidean case) or \(\varphi (v,v)<0\) for all \(v\in V\). Moreover, \(V=\ker f\perp {{\,\textrm{Im}\,}}f\), and there are \(0<\lambda _1\le \lambda _2\le \cdots \le \lambda _t\) such that \(m_f(x)=x(x^2+\lambda _1)\ldots (x^2+\lambda _t)\). Therefore, the eigenvalues of f are either 0 or purely imaginary (\(\pm i\sqrt{\lambda _k}, i^2=-1, i\in \mathbb {C}\)). In the Euclidean case, for any nonzero vector \(v\in V_{x^2+\lambda _i}\), \(f^2(v)=-\lambda _i v\). Thus, the vector space \(0\ne U_i(v)={{\,\textrm{span}\,}}\langle v, f(v)\rangle \) is 2-dimensional and f-invariant. Moreover, \(\varphi (v,f(v))=0\) and \(\varphi (f(v),f(v))=\lambda _i\varphi (v,v)>0\). Thus, \(U_i(v)\cap U_i(v)^\perp =0\), and, therefore, \(V=U_i(v)\oplus U_i(v)^\perp \). Even more, assuming \(\varphi (v,v)=1\), \(\{v_{i,1}=v, v_{i,2}=-\frac{f(v)}{\sqrt{\lambda _i}}\}\) is an orthonormal basis of \(U_i(v)\) such that \(f(v_{i,1})=-\sqrt{\lambda _i}v_{i,2}\) and \(f(v_{i,2})=\sqrt{\lambda _i}v_{i,1}\). Since \(U_i(v)\) and \(U_i(v)^\perp \) are f-invariant, reducing dimension by orthogonal decompositions and rescaling, we get an orthonormal basis of V such that the pair \((f, \varphi )\) is represented by the pair of matrices \((A_f, {{\,\textrm{Id}\,}}_{\dim V})\) and \(A_f\) is the matrix that decomposes as diagonal blocks

$$\begin{aligned} A_f={{\,\textrm{diag}\,}}\Bigl (\underbrace{\left( {\begin{matrix} 0 &{} \sqrt{\lambda _1} \\ -\sqrt{\lambda _1} &{} 0 \end{matrix}}\right) ,\dots }_{d_1},\ \dots ,\ \underbrace{\dots ,\left( {\begin{matrix} 0 &{} \sqrt{\lambda _t} \\ -\sqrt{\lambda _t} &{} 0 \end{matrix}}\right) }_{d_t},\ \underbrace{0,\dots ,0}_{d_0}\Bigr ) \end{aligned}$$
(5)

Here, \(d_i=\frac{1}{2}\dim V_{x^2+\lambda _i}\) is the number of \(2\times 2\) blocks attached to \(\lambda _i\), and \(d_0=\dim \ker f\) is the number of \(1\times 1\) zero blocks. For \(f|_{V_0}\), we use any orthonormalization process. This is the Spectral Theorem for real skew-symmetric matrices.

For an arbitrary field, assuming that there are no nonzero isotropic vectors and that we have the irreducible semisimple decomposition \(m_f(x)=x(x^2+\mu _1)\ldots (x^2+\mu _t)\), the pair \((f, \varphi )\) is represented by matrices \((A_f, B_\varphi )\), where \(B_\varphi \) is a diagonal matrix and

(6)
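
For the real Euclidean case of this example, the statement can be checked numerically. The sketch below uses a randomly generated skew-symmetric matrix (scipy is assumed to be available); it verifies that the eigenvalues are zero or purely imaginary and that the real Schur form recovers, up to rounding, the block-diagonal shape of Eq. (5) in an orthonormal basis.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n)); A = A - A.T           # skew-symmetric, i.e. Id-skew

eigvals = np.linalg.eigvals(A)
print(np.allclose(eigvals.real, 0))                # True: eigenvalues are 0 or purely imaginary

# The real Schur form of a skew-symmetric (hence normal) matrix is block diagonal,
# with 2x2 blocks [[0, s], [-s, 0]] and 1x1 zero blocks, in an orthonormal basis:
# numerically this recovers the shape of Eq. (5).
T, Q = schur(A, output='real')
print(np.allclose(Q.T @ Q, np.eye(n)))             # Q is orthogonal (orthonormal basis)
print(np.round(T, 3))                              # approximately block diagonal as in Eq. (5)
```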

Example 2.2

Let \(\mathbb {K}\) be an arbitrary field with \({{\,\textrm{char}\,}}\mathbb {K}\ne 2\) and \(V=\mathbb {K}^{n}\), \(n\ge 2\). Consider \((A_f,{{\,\textrm{Id}\,}}_n)\) as in expression (5) where \(d_0 = n-2\), \(d_1 = 1\) and \(\lambda _1 = 1\). Therefore, \(m_f(x) | (x^2+1)x\) which splits if \(x^2+1\) has roots in \(\mathbb {K}\). This proves the converse of (a) in Corollary 2.4 does not hold because the bilinear form has isotropic elements. This situation happens, for instance, over algebraically closed fields and finite fields with q elements such that \(q-1\) is divisible by 4.
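
The following small sympy computation makes this concrete over the finite field \(\mathbb {F}_5\) (so \(q=5\) and \(q-1=4\) is divisible by 4); the isotropic vector exhibited is just one possible choice.

```python
import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**2 + 1, modulus=5))             # (x - 2)*(x + 2): x^2 + 1 splits over GF(5)

# With phi = Id_n, the vector v = (1, 2, 0, ..., 0) is isotropic: 1^2 + 2^2 = 5 = 0 in GF(5),
# so item (a) of Corollary 2.4 cannot be reversed over this field.
n = 4
v = sp.Matrix([1, 2] + [0]*(n - 2))
print(sp.Mod((v.T * v)[0, 0], 5))                 # 0
```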

Next, we return to item (c) in Corollary 2.4, and consider the decomposition \(V=V_0\perp V^1\perp V^2\). The subspace \(V_0\) and the subspaces of \(V^2\) of the form \(V_\lambda \oplus V_{-\lambda }\) (attached to the primary components \(V_{x\pm \lambda }\)) admit canonical forms determined by the Jordan blocks \(J_n(\lambda )\) described in Eq. (1).

Theorem 2.5

Let \((V, \varphi )\) be an orthogonal vector space over an arbitrary field of characteristic not two such that \(\varphi \) is non-degenerate. Suppose that \(f:V\rightarrow V\) is a \(\varphi \)-skew automorphism, and \(\lambda \) is a nonzero f-eigenvalue. Then \(V=(V_{\lambda }\oplus V_{-\lambda })\oplus (V_{\lambda }\oplus V_{-\lambda })^\perp \) and \((V_{\lambda }\oplus V_{-\lambda })^\perp \) is an f-invariant subspace. The primary components \(V_{\lambda }\), \(V_{-\lambda }\) are equidimensional and totally isotropic subspaces. Even more, \(V_\lambda \oplus V_{-\lambda }\) decomposes as an orthogonal sum of a finite number of f-invariant subspaces \((W_i, \varphi |_{W_i})\) that have a basis in which the pair \((f|_{W_i}, \varphi |_{W_i})\) is expressed by a matrix pair \((A_{f|_{W_i}},B_{\varphi |_{W_i}})\) of the following type:

$$\begin{aligned} \Big ( \left( {\begin{matrix} J_{n_i}(\lambda ) &{} 0 \\ 0 &{} -J_{n_i}(\lambda )^t \end{matrix}}\right) , \left( {\begin{matrix} 0 &{} {{\,\textrm{Id}\,}}_{n_i} \\ {{\,\textrm{Id}\,}}_{n_i} &{} 0 \end{matrix}}\right) \Big ). \end{aligned}$$
(7)

And, for the zero eigenvalue, \(V_0\) is an orthogonal sum of f-invariant subspaces \(U_j\) that have a basis in which the pair \((f|_{U_j}, \varphi |_{U_j})\) is expressed by a matrix pair \((A_{f|_{U_j}},B_{\varphi |_{U_j}})\) as in Eq. (7) with \(\lambda =0\) and \(n_j=2k_j\) or (here \(\textbf{e}_1=(1,0,\dots ,0)\in \mathbb {K}^{n_j}\))

$$\begin{aligned} {\left( \left( \begin{array}{c|c|c} J_{n_j}(0) &{} &{} \\ \hline \textbf{e}_1 &{} 0 &{} \\ \hline &{} -\textbf{e}_1^t &{} -J_{n_j}(0)^t \\ \end{array} \right) , (-1)^{n_j}\mu _j\left( \begin{array}{c|c|c} &{} &{} {{\,\textrm{Id}\,}}_{n_j} \\ \hline &{} 1 &{} \\ \hline {{\,\textrm{Id}\,}}_{n_j} &{} &{} \\ \end{array} \right) \right) .} \end{aligned}$$
(8)

Proof

From Corollaries 2.3 and 2.4, we establish that \(V_{\mu }\cap V_{-\mu }^\perp =0\) and \(V_{\mu }\subseteq V_{\mu }^\perp \) for \(\mu =\lambda , -\lambda \). This readily yields \((V_{\lambda }\oplus V_{-\lambda })^\perp \cap (V_{\lambda }\oplus V_{-\lambda })=0\), and consequently, \(V=(V_{\lambda }\oplus V_{-\lambda })\oplus (V_{\lambda }\oplus V_{-\lambda })^\perp \) due to the non-degeneracy of \(\varphi \). Additionally, \(\dim V_\mu =\dim V_{-\mu }\), and the multiplicities \(k_{\mu }\) of the eigenvalues \(\mu =\pm \lambda \) in the minimum polynomial of f are equal. Therefore, \(V_{\lambda }=\ker (f-\lambda {{\,\textrm{Id}\,}})^{k_{\lambda }}\) and \(V_{-\lambda }=\ker (f+\lambda {{\,\textrm{Id}\,}})^{k_{\lambda }}\). Setting \(k:=k_\lambda -1\), we can take a nonzero \(v\in V_\lambda \) such that \((f-\lambda {{\,\textrm{Id}\,}})^k(v)\ne 0\). Since \((f-\lambda {{\,\textrm{Id}\,}})^k(v)\in V_\lambda \) and \(V_{\lambda }\cap V_{-\lambda }^\perp =0\), there exists \(w\in V_{-\lambda }\) with \(\varphi ((f-\lambda {{\,\textrm{Id}\,}})^k(v),w)=\alpha \ne 0\). For any \(u\in V\), Eq. (3) implies \(\varphi ((f-\lambda {{\,\textrm{Id}\,}})^k(v),u)=(-1)^k\varphi (v,(f+\lambda {{\,\textrm{Id}\,}})^k(u))\), and then \(\varphi (v,(f+\lambda {{\,\textrm{Id}\,}})^k(w))=(-1)^k\alpha \ne 0\), so \((f+\lambda {{\,\textrm{Id}\,}})^k(w)\ne 0\). Rescaling w if necessary, we can suppose \(\alpha =1\).

Next, define \(v:=v_0, w:=w_0\), \(v_r:=(f-\lambda {{\,\textrm{Id}\,}})^r(v_0)\) and \(w_s:=(f+\lambda {{\,\textrm{Id}\,}})^s(w_0)\) for \(1\le r,s\le k\). We point out that \(\varphi (v_r, w_s)=(-1)^{s}\varphi (v_{r+s}, w_0)\) by applying Eq. (3). We now construct a new element \(w'=w'_0\) in the linear span of \(\{w_0 = w, w_1, \dots , w_k\}\), i.e., \(w' = \sum _{i=0}^{k} \alpha _i w_i\). The coefficients \(\alpha _i\) are obtained by solving the system of equations \(\varphi (v_j, w') = \delta _{jk}\), for \(j=0, \dots , k\), where \(\delta _{jk}\) is the Kronecker delta. This is a triangular system whose diagonal coefficients are \(\pm 1\). Therefore, there is a unique solution \(w'\) and, as \(\alpha _0 = 1\), \(w' \in \ker (f+\lambda {{\,\textrm{Id}\,}})^{k+1}\) but \(w' \notin \ker (f+\lambda {{\,\textrm{Id}\,}})^{k}\). Even more, \(\varphi (v_k, w'_0)=1\) and \(\varphi (v_r, w'_0)=0\) otherwise. The subspace \(W_1={{\,\textrm{span}\,}}\langle v_r, w'_s: r,s\ge 0\rangle \), where \(w'_s:=(-1)^s(f+\lambda {{\,\textrm{Id}\,}})^s(w'_0)\), is f-invariant, and the ordered set \(\{v_k,\dots , v_0, w'_0, \dots , w'_k\}\) is a basis in which \((f|_{W_1}, \varphi |_{W_1})\) is expressed by a matrix pair as described in expression (7) with \(n_1=k_\lambda \). Since \(W_1\) is regular, \(V_{\lambda }\oplus V_{-\lambda }=W_1\oplus W_1^\perp \). Note that \(W_1^\perp \) is f-invariant and regular, so recursively we get the desired orthogonal decomposition for the nonzero eigenvalue \(\lambda \).

In the sequel, we assume that \(V_0\ne 0\) or, equivalently, that 0 is an eigenvalue of f. Let \(k_0\) be the multiplicity of this eigenvalue in the minimal polynomial, and \(k:=k_0-1\). Hence, \(V_0=\ker f^{k+1}\) and there exists a nonzero \(v\in V_0\) such that \(f^k(v)\ne 0\). We will consider two different cases depending on the parity of k and, in both cases, we will start with the previous v. Let us first suppose that \(k=2n-1\) is odd and \(k_0=2n\). Then, \(\varphi (v,f^k(v))=(-1)^k\varphi (f^k(v),v)=-\varphi (v,f^k(v))\). Since \(2\ne 0\), \(\varphi (v,f^k(v))=0\) and we can find an element \(w\in V_0\) with \(\varphi (w,f^k(v))\ne 0\) by means of the non-degeneracy of \(\varphi \) and the equality \(\varphi (f^k(v),V)=\varphi (f^k(v),V_0)\). Moreover, from \(\varphi (w,f^k(v))=(-1)^k\varphi (f^k(w),v)\), we get \(f^k(w)\ne 0\). This way, we have \(0\ne v,\, w\in V_0\) such that \(f^k(v)\ne 0\ne f^k(w)\) and \(\varphi (v,f^k(w))\ne 0\). In this context, the previous procedure for the nonzero eigenvalue \(\lambda \) remains valid, allowing us to identify an f-invariant and regular vector space \(U_1\) with a basis in which \((f|_{U_1}, \varphi |_{U_1})\) is expressed by a matrix pair as described in Eq. (7) with \(n_1=k_0=2n\) and \(\lambda =0\).

The other case arises when \(k=2n\) is even, so \(k_0=2n+1\). We can assume that \(\varphi (v,f^k(v))\ne 0\). Otherwise, \(\varphi (v,f^k(v))=0\) and, since \(\varphi \) is non-degenerate, we could find \(w\in V_0\) such that \(\varphi (w,f^k(v))\ne 0\). Consequently, \(f^k(w)\ne 0\). If \(\varphi (w,f^k(w))\ne 0\), we can take w instead of v. If not, consider \(0\ne v'=v+w\in V_0\). We have \(f^k(v')\ne 0\) since \(\varphi (v,f^k(v'))=\varphi (v,f^k(w))\ne 0\). Moreover, using previous assumptions and \(k=2n\), we get \(\varphi (v',f^k(v'))=\varphi (v,f^k(w))+\varphi (w,f^k(v))=2\varphi (v,f^k(w))\ne 0\). We can then replace v with \(v'\) that fulfills the required condition. Starting next from \(v\in V_0\) such that \(0\ne \mu =\varphi (v, f^k(v))\), we define recursively \(w_0=v\) and for \(1\le j\le n\):

$$\begin{aligned} w_{j}=w_{j-1}-\frac{\varphi (w_{j-1},f^{2n-2j}(w_{j-1}))}{2\varphi (w_{j-1}, f^{2n}(w_{j-1}))}f^{2j}(w_{j-1}) \end{aligned}$$

A straightforward computation shows that \(w_n\) satisfies \(\varphi (w_n, f^{2n}(w_n))=\mu \ne 0\), and \(\varphi (w_n, f^t(w_n))=0\) for any \(1\le t<2n\) (for odd t, apply Eq. (2)). Even more, \(\varphi (f^s(w_n),f^t(w_n)) = (-1)^s\mu \) if \(s+t = 2n\) and zero otherwise. Then, the subspace \(U_1={{\,\textrm{span}\,}}\langle w_n, f(w_n), \dots , f^{2n}(w_n)\rangle \) is f-invariant and regular, and the ordered set \(\{v_0, \dots , v_{2n}\}\), where \(v_j:=f^j(w_n)\), is a basis in which \((f|_{U_1}, \varphi |_{U_1})\) is expressed by a matrix pair as in Eq. (9) with \(n_1=n\), so \(k_0=2n_1+1\). By rescaling and reordering as \(v_{n-1}, v_{n-2}, \dots , v_0, v_n, -v_{n+1}, v_{n+2}, \dots , (-1)^nv_{2n}\), we arrive at the matrix pair in Eq. (8).

Concluding the proof, note that \(U_1\) is a regular subspace, thus \(V_0=U_1\oplus (U_1)^\perp \). Since \((U_1)^\perp \) is f-invariant and regular, recursively accounting for the parity in each step yields the desired orthogonal decomposition. \(\square \)
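
As a cross-check, one can verify symbolically that the matrix pairs of Eqs. (7) and (8) indeed satisfy the \(\varphi \)-skew condition \(A^tB+BA=0\). A sympy sketch, with the block size n=3 chosen arbitrarily and the symbols lam, mu standing for \(\lambda \) and \(\mu _j\):

```python
import sympy as sp

lam, mu = sp.symbols('lambda mu')

def jordan(n, ev):
    """Upper-triangular Jordan block J_n(ev) as in Eq. (1)."""
    return sp.Matrix(n, n, lambda i, j: ev if i == j else (1 if j == i + 1 else 0))

n = 3

# Pair of Eq. (7)
J = jordan(n, lam)
A7 = sp.BlockMatrix([[J, sp.zeros(n)], [sp.zeros(n), -J.T]]).as_explicit()
B7 = sp.BlockMatrix([[sp.zeros(n), sp.eye(n)], [sp.eye(n), sp.zeros(n)]]).as_explicit()
print(A7.T * B7 + B7 * A7 == sp.zeros(2*n))       # True

# Pair of Eq. (8)
J0 = jordan(n, 0)
e1 = sp.Matrix([[1] + [0]*(n - 1)])               # row vector e_1
A8 = sp.BlockMatrix([[J0, sp.zeros(n, 1), sp.zeros(n)],
                     [e1, sp.zeros(1, 1), sp.zeros(1, n)],
                     [sp.zeros(n), -e1.T, -J0.T]]).as_explicit()
B8 = ((-1)**n * mu) * sp.BlockMatrix([[sp.zeros(n), sp.zeros(n, 1), sp.eye(n)],
                                      [sp.zeros(1, n), sp.ones(1, 1), sp.zeros(1, n)],
                                      [sp.eye(n), sp.zeros(n, 1), sp.zeros(n)]]).as_explicit()
print(sp.simplify(A8.T * B8 + B8 * A8) == sp.zeros(2*n + 1))   # True
```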

Remark 2.6

For \(\mathbb {K}=\mathbb {C}\), Theorem 2.5 is established in [13, Theorem 2] as a classical characterization of the Jordan canonical forms of complex skew-symmetric matrices. An analogous result for \(\mathbb {K}\) algebraically closed of characteristic not 2 appears in [8, Corollary 1.2]. We provide an alternative proof that highlights a recursive constructive method based on straightforward linear arguments. The canonical form \((A_{f|_{U_j}}, B_{\varphi |_{U_j}})\) proposed in [8] is:

(9)

3 Generalized Oscillator Algebras

Throughout this section, we assume that the base field \(\mathbb {K}\) is of characteristic not 2.

In any quadratic, solvable and non-abelian n-dimensional Lie algebra \((L, \varphi )\) such that \(Z(L)\subseteq L^2=Z(L)^\perp \), the centre is a totally isotropic ideal. According to [9, Proposition 2.9], any solvable quadratic Lie algebra (in characteristic zero) contains a central isotropic element z, and the algebra L can be obtained as a double extension of the \((n-2)\)-dimensional quadratic and solvable Lie algebra \(\frac{(\mathbb {K}\cdot z)^\perp }{\mathbb {K}\cdot z}\). Applying this one-step process iteratively, we get the class of solvable quadratic algebras from the class of quadratic abelian Lie algebras. Let us start this section with this construction.

Example 3.1

(One-dimensional-by-abelian double extension) Let \((V,\varphi )\) be an orthogonal \(\mathbb {K}\)-vector space with \(\varphi \) non-degenerate, and \(\delta \) be any \(\varphi \)-skew map. Denote by \(\delta ^*\) the dual 1-form of \(\delta \), so \(\delta ^*:\mathbb {K}\cdot \delta \rightarrow \mathbb {K}\) and \(\lambda \delta \mapsto \lambda \). On the vector space \(\mathfrak {d}(V, \varphi , \delta ):=\mathbb {K}\cdot \delta \oplus V\oplus \mathbb {K}\cdot \delta ^*\) we introduce the binary product

$$\begin{aligned}{}[t\delta + x+ s\delta ^*,t'\delta + y+ s'\delta ^*]:= t\delta (y)-t'\delta (x) + \varphi (\delta (x), y)\delta ^*, \end{aligned}$$
(10)

for \(t,t',s,s' \in \mathbb {K}\), \(x, y\in V\). It is easily checked that this product is skew and satisfies the Jacobi identity, \(\sum _{\circlearrowright a,b,c}[[a,b],c]=0\). Then, \((\mathfrak {d}(V, \varphi , \delta ), [a,b])\) is a Lie algebra, and the bilinear form

$$\begin{aligned} \varphi _\delta (t\delta + x+ s\delta ^*,t'\delta +y+ s'\delta ^*)=ts'+st'+\varphi (x,y) \end{aligned}$$
(11)

is symmetric, invariant, non-degenerate, and it extends \(\varphi \) by the hyperbolic space \({{\,\textrm{span}\,}}_{\mathbb {K}}\langle \delta , \delta ^*\rangle \). The method of constructing this algebra is known as double extension. Over fields of characteristic zero, this procedure was introduced simultaneously by several authors in the 80s (see [7] and references therein), but starting from any quadratic Lie algebra \((L, \varphi )\) and any \(\delta \in {{\,\textrm{Der}\,}}_\varphi L\). In this case, the bracket \([x,y]_L\) must be added to the binary product given by Eq. (10). To obtain new indecomposable quadratic algebras, it is important to take a non-inner \(\varphi \)-skew derivation \(\delta \) [10, Proposition 5.1]. This one-dimensional construction is also valid in characteristic other than 2 (see [7, Theorem 2.2] for a more detailed explanation).
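
The construction of Eqs. (10) and (11) is easy to test numerically. In the sketch below, everything is made-up data: B is the Gram matrix of \(\varphi \) on \(V=\mathbb {R}^4\) and \(D=B^{-1}S\), with S skew-symmetric, plays the role of the \(\varphi \)-skew map \(\delta \); the code checks the Jacobi identity and the invariance of \(\varphi _\delta \) on random triples.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4
B = np.diag([1.0, 2.0, -1.0, 3.0])                # Gram matrix of phi on V = R^4 (non-degenerate)
S = rng.normal(size=(m, m)); S = S - S.T
D = np.linalg.inv(B) @ S                          # phi-skew map playing the role of delta
assert np.allclose(D.T @ B + B @ D, 0)

def bracket(a, b):
    """Eq. (10) in coordinates (t, x, s) w.r.t. the basis (delta, e_1, ..., e_m, delta^*)."""
    out = np.zeros(m + 2)
    out[1:m + 1] = a[0] * (D @ b[1:m + 1]) - b[0] * (D @ a[1:m + 1])
    out[-1] = (D @ a[1:m + 1]) @ B @ b[1:m + 1]   # phi(delta(x), y) delta^*
    return out

def phi_delta(a, b):
    """Eq. (11): hyperbolic pairing of delta and delta^* plus phi on V."""
    return a[0]*b[-1] + a[-1]*b[0] + a[1:m + 1] @ B @ b[1:m + 1]

a, b, c = (rng.normal(size=m + 2) for _ in range(3))
jacobi = bracket(bracket(a, b), c) + bracket(bracket(b, c), a) + bracket(bracket(c, a), b)
print(np.allclose(jacobi, 0))                                                    # Jacobi identity
print(np.isclose(phi_delta(bracket(a, b), c) + phi_delta(b, bracket(a, c)), 0))  # invariance
```

Feeding the Euclidean data of Example 3.2 into the same routine (in place of B and D) reproduces, in coordinates, the real oscillator algebras discussed next.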

Example 3.2

(Real oscillator algebras) Now fix \(\mathbb {K}=\mathbb {R}\) and consider a Euclidean vector space \((V,\varphi )\). According to Example 2.1, any \(\varphi \)-skew automorphism \(\delta :V\rightarrow V\) is completely determined by an ordered n-tuple of positive real entries, \(\lambda =(\lambda _1, \dots , \lambda _n)\) with \(\lambda _i\le \lambda _{i+1}\), and \(\dim V=2n\). Even more, there is an orthonormal basis \(\{x_1, \dots , x_n, y_1,\dots , y_n\}\) for which \(\delta (x_i)=-\lambda _iy_i\) and \(\delta (y_i)=\lambda _ix_i\). By using the double extension procedure given in Example 3.1, we obtain the series of quadratic real Lie algebras:

$$\begin{aligned} (\mathfrak {d}_{2n+2}(\lambda ),\varphi _\delta )=({{\,\textrm{span}\,}}_{\mathbb {R}}\langle \delta , x_1, \dots x_n, y_1,\dots , y_n, \delta ^*\rangle , \varphi _\delta ). \end{aligned}$$
(12)

The Lie bracket and the invariant bilinear form \(\varphi _\delta \) are given in Eqs. (10) and (11). Real oscillator algebras are quadratic and solvable. The Witt index of \(\varphi _\delta \) is 1, so \(\varphi _\delta \) is a Lorentzian form. This characteristic leads to these algebras also being referred to in the literature as (real) Lorentzian algebras (see [12, Definition II.3.16]). The one-dimensional-by-abelian construction of this class of algebras appears in [12, Proposition II.3.11]. The algebras obtained through this procedure are called standard solvable Lorentzian algebras of dimension \(2n+2\), and they are denoted as \(A_{2n+2}\).

The previous examples serve as an introduction to the definition of oscillator algebras over arbitrary fields of characteristic not 2, which first appears in [6].

Definition 3.1

Let \((V, \varphi )\) be a \(\mathbb {K}\)-vector space such that \(\dim V \ge 2\), endowed with a symmetric and non-degenerate bilinear form \(\varphi \). For any \(\varphi \)-skew map \(\delta :V\rightarrow V\) and its dual 1-form \(\delta ^*:\mathbb {K}\cdot \delta \rightarrow \mathbb {K}\), described as \(\lambda \delta \mapsto \lambda \), the one-dimensional-by-abelian double extension Lie algebra \(\mathfrak {d}(V, \varphi , \delta ):=\mathbb {K}\cdot \delta \oplus V\oplus \mathbb {K}\cdot \delta ^*\) with product as in Eq. (10) and bilinear form as in expression (11) is referred to as a generalized \(\mathbb {K}\)-oscillator algebra, and as a \(\mathbb {K}\)-oscillator algebra in the particular case where \(\delta \) is an automorphism.

In the sequel, we use the following terminology. A Lie algebra L is reduced if \(Z(L)\subseteq L^2\) and local if L has only one maximal ideal [3, Definition 3.1]. The derived series of L is recursively defined as \(L^{(0)}=L\) and \(L^{(k+1)}=[L^{(k)}, L^{(k)}]\). If \(L^{(k)}=0\) for some \(k\ge 1\), L is solvable. The descending central series of L is defined as \(L^1=L\) and \(L^{k+1}=[L^{k}, L]\) and the upper central series as \(Z_1(L)=Z(L)=\{x\in L:[x,L]=0\}\) (centre of L) and \(Z_{k+1}(L)=\{x\in L: [x,L]\subseteq Z_k(L)\}\). L is nilpotent if there exists \(k\ge 2\) such that \(L^k =0\). The smallest k such that \(L^k\ne 0\) and \(L^{k+1}=0\) is the nilpotency index of L. A quadratic algebra is said to be decomposable if it contains a proper ideal I that is non-degenerate (\(I\cap I^\perp =0\), the ideal is also called a regular subspace), and indecomposable otherwise. Equivalently, L is decomposable if and only if \(L=I\oplus I^\perp \). In the literature, the terms reducible and irreducible are also used as synonyms for decomposable and indecomposable.

Remark 3.1

In general, a non-reduced quadratic Lie algebra splits as an orthogonal sum, as ideals, of an abelian quadratic algebra and another reduced quadratic algebra. In characteristic zero, this assertion is just given in [19, Theorem 6.2]. Its proof also works in characteristics other than 2. Therefore, any non-reduced quadratic Lie algebra is decomposable.

Consider now the class of Lie algebras \(\mathfrak {h}\) that satisfy:

$$\begin{aligned} \mathfrak {h}^2=\mathbb {K}\cdot z=Z(\mathfrak {h}). \end{aligned}$$
(13)

Since, for any \(x, y \in \mathfrak {h}\), \([x,y]=\lambda _{x,y}z\) with \(\lambda _{x,y}\in \mathbb {K}\), the structure constants \(\lambda _{x,y}\) allow us to define on \(\mathfrak {h}\) the skew-symmetric form \(\varphi _{\mathfrak {h}}(x,y):= \lambda _{x,y}\). Any vector space complement V of \(Z(\mathfrak {h})\), i.e., \(\mathfrak {h}=V\oplus \mathbb {K}\cdot z\), is a regular subspace. The non-degeneracy of the skew-symmetric form \(\varphi _\mathfrak {h}|_V\) implies that there is a basis of V, \(\{v_1, \dots , v_n, w_1, \dots , w_n\}\), such that \(\varphi _{\mathfrak {h}}(v_i,w_j)=\delta _{ij}\) and \(\varphi _{\mathfrak {h}}(v_i, v_j)=\varphi _{\mathfrak {h}}(w_i, w_j)=0\). Therefore, the algebras that satisfy Eq. (13) have odd dimension and exhibit a basis \(\mathcal {B}=\{v_1, \dots , v_n, w_1, \dots , w_n, z\}\) such that \([v_i,w_j]=\delta _{ij}z\) and all other products are zero. Thus, for any \(n\ge 1\), there is only one algebra of dimension \(2n+1\) that satisfies Eq. (13). In characteristic zero, these algebras are known as Heisenberg algebras. Throughout the paper, we will refer to them as generalized \(\mathbb {K}\)-Heisenberg algebras.
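
A minimal numerical check of Eq. (13) for the generalized Heisenberg algebra over \(\mathbb {R}\); the coordinates and helper below are our own illustration, not part of the paper.

```python
import numpy as np

n = 3
dim = 2*n + 1                                      # basis (v_1,...,v_n, w_1,...,w_n, z)

def bracket(a, b):
    """Only the z-coefficient is nonzero: sum_i (a_i b_{n+i} - a_{n+i} b_i)."""
    out = np.zeros(dim)
    out[-1] = a[:n] @ b[n:2*n] - a[n:2*n] @ b[:n]
    return out

basis = np.eye(dim)
derived = np.array([bracket(a, b) for a in basis for b in basis])
print(np.linalg.matrix_rank(derived))              # 1: h^2 = K z

# x is central iff the z-coefficient of [x, e_j] vanishes for every basis vector e_j
rows = np.vstack([np.array([bracket(e, basis[j]) for e in basis])[:, -1] for j in range(dim)])
print(dim - np.linalg.matrix_rank(rows))           # 1: Z(h) = K z, as required by Eq. (13)
```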

Lemma 3.2

Let \(A=\mathfrak {d}(V, \varphi , \delta )\) be a generalized \(\mathbb {K}\)-oscillator algebra. Then:

  1. (a)

    For any \(k\ge 1\), \(A^{k+1}={{\,\textrm{Im}\,}}\delta ^k \oplus \mathbb {K}\cdot \delta ^*\) if \(\delta ^k\ne 0\) and, in case that \(\delta ^k=0\), \(A^{m}=0\) for any \(m\ge k+1\).

  2. (b)

    For any \(k\ge 1\), \(Z_{k}(A)=\ker \delta ^{k} \oplus \mathbb {K}\cdot \delta ^*\) if \(\delta ^k\ne 0\) and, in case that \(\delta ^k=0\), \(Z_{m}(A)=A\) for any \(m\ge k\).

  3. (c)

    \((Z_k (A))^\perp =A^{k+1}\) for any \(k\ge 1\).

  4. (d)

    A is nilpotent if and only if \(\delta \) is a nilpotent map, and the nilpotency index of A is the degree of the minimal polynomial of \(\delta \). In addition, if \(\delta \ne 0\),

  5. (e)

    \(Z(A)\subseteq A^2\) if and only if \(\ker \delta \subseteq {{\,\textrm{Im}\,}}\delta \).

  6. (f)

    \(A^{(2)}= \mathbb {K}\cdot \delta ^*\) and \(A^{(3)}=0\). So, A is solvable and \(\frac{A^2+Z(A)}{Z(A)}\) is abelian.

  7. (g)

    If \(\delta \) is an automorphism, the dimension of V is even, and the derived algebra \(A^2=V\oplus \mathbb {K}\cdot \delta ^*\) is a generalized \(\mathbb {K}\)-Heisenberg algebra. In particular, V has a basis \(\{v_1, \dots v_n, w_1, \dots , w_n \}\) such that \(\varphi (\delta (v_i), v_j)=\varphi (\delta (w_i), w_j)=0\) and \(\varphi (\delta (v_i), w_j)=\delta _{ij}\).

Proof

Throughout the proof, \(W^\perp =\{x\in A: \varphi _\delta (W,x)=0\}\); thus \(V^\perp ={{\,\textrm{span}\,}}\langle \delta , \delta ^*\rangle \) and \(\dim A=\dim W+\dim W^\perp \).

From Eq. (10), \(t_0\delta +x_0+s_0\delta ^*\in Z(A)\) is equivalent to \(t_0\delta (y)-t'\delta (x_0) + \varphi (\delta (x_0), y)\delta ^*=0\), for all \(t'\in \mathbb {K}\) and \(y\in V\). If \(\delta =0\), \(A=Z(A)\) and then \(A^2=0\). Otherwise, the centre turns out to be \(Z_1(A)=Z(A)=\ker \delta \oplus \mathbb {K}\cdot \delta ^*\) and \(A^2={{\,\textrm{Im}\,}}\delta \oplus {{\,\textrm{span}\,}}\langle \varphi (\delta (x),y)\delta ^*: x,y\in V\rangle ={{\,\textrm{Im}\,}}\delta \oplus \mathbb {K}\cdot \delta ^*\), where the last equality is a consequence of the non-degeneracy of \(\varphi \). This proves (a) and (b) when \(k=1\). Assume \(\delta ^{k-1}\ne 0\) and \(A^k={{\,\textrm{Im}\,}}\delta ^{k-1}\oplus \mathbb {K}\cdot \delta ^*\). For the \((k+1)\)-term of the descending series, we have \(A^{k+1}=[A^k,A]={{\,\textrm{span}\,}}\langle \delta ^k(x),\varphi (\delta ^k(x),y) \delta ^*: x,y\in V\rangle \). If \(\delta ^{k}=0\), that is, \(\delta \) is a nilpotent map, \(A^{k+1}=0=A^m\) for \(m\ge k+1\), and A is a nilpotent algebra. Otherwise, \(\delta ^k\ne 0\) and we can take \(x\in V\) such that \(\delta ^k(x)\ne 0\). By the non-degeneracy of \(\varphi \), there exists \(y\in V\) such that \(\varphi (\delta ^k(x), y)\ne 0\). Then \(A^{k+1}={{\,\textrm{Im}\,}}\delta ^k \oplus \mathbb {K}\cdot \delta ^*\). This proves item (a) and the assertion from right to left in item (d). Suppose now that A is nilpotent and k is its nilpotency index, i.e., \(A^k\ne 0\) and \(A^{k+1}=0\). The assumption implies \(\delta ^{k}=0\ne \delta ^{k-1}\), so k is the degree of the minimal polynomial \(m_\delta (x)\) and (d) follows.

Next, for the upper central series, we assume \(\delta ^{k}\ne 0\) and \(Z_k(A)=\ker \delta ^{k}\oplus \mathbb {K}\cdot \delta ^*\). We check \(x\in Z_{k+1}(A)\), so \([A,x]\subseteq Z_k(A)\). Since \(\delta ^*\in Z_{k+1}(A)\), we can assume \(x\in \mathbb {K}\cdot \delta \oplus V\). If \(\delta ^{k+1}=0\), we have \([V,\delta ]\subseteq \delta (V)\subseteq \ker \delta ^{k}\subseteq Z_k(A)\) and then \(Z_{k+1}(A)=A\) because \([V,V]=\mathbb {K}\cdot \delta ^*\). Let us suppose that \(\delta ^{k+1}\ne 0\) and \([A, t_0\delta +x_0]\in Z_{k}(A)\). Equivalently, \([\delta , t_0\delta +x_0]=\delta (x_0)\) and \([v, t_0\delta +x_0]=t_0\delta (v)+\varphi (\delta (x_0), v)\delta ^*\) are both in \(Z_k(A)\). That is \(\delta (x_0), t_0\delta (v)\in \ker \delta ^{k}\) for all \(v\in V\). Since \(\delta ^{k+1}\ne 0\), the only possibility is \(t_0=0\) and \(x_0\in \ker \delta ^{k+1}\). Therefore, \(Z_{k+1}(A)=\ker \delta ^{k+1}\oplus \mathbb {K}\cdot \delta ^*\) and (b) is proven.

Statement (c), regarding the orthogonality of the terms of the descending and upper central series, follows by using \(\varphi (\delta ^k(x), y)=(-1)^k\varphi (x,\delta ^k(y))\), which gives \(A^{k+1}=\mathbb {K}\cdot \delta ^*\oplus {{\,\textrm{Im}\,}}\delta ^k \subseteq (\ker \delta ^k)^\perp \cap (\mathbb {K}\cdot \delta ^*)^\perp \), together with the dimension formulae \(\dim A=\dim \ker \delta ^k +\dim \, (\ker \delta ^k)^\perp \) and \(\dim V=\dim \ker \delta ^k + \dim {{\,\textrm{Im}\,}}\delta ^k\), and the description of the terms in both series given in items (a) and (b).

Finally, if \(\delta \) is a nonzero map, \(A^2={{\,\textrm{Im}\,}}\delta \oplus \mathbb {K}\cdot \delta ^*\) and \(Z(A)=\ker \delta \oplus \mathbb {K}\cdot \delta ^*\). Therefore, \(A^{(2)}=[A^2,A^2]=\mathbb {K}\cdot \delta ^*\subseteq Z(A)\), and items (e) and (f) follow easily. In the particular case that \({{\,\textrm{Im}\,}}\delta =V\), we have \(A^2=V\oplus \mathbb {K}\cdot \delta ^*\), and looking for \(x\in V\) such that \([y, x]=\varphi (\delta (y), x)\delta ^*=0\) for all \(y\in V\), we get \(\varphi (V, x)=0\). This implies \(x=0\) by the non-degeneracy of \(\varphi \). Then, \(Z(A^2)=\mathbb {K}\cdot \delta ^*=A^{(2)}\) and therefore \(A^2\) is a generalized \(\mathbb {K}\)-Heisenberg algebra. Hence, there is a standard basis \(\{v_1, \dots , v_n, w_1, \dots , w_n, \delta ^*\}\) such that \(V={{\,\textrm{span}\,}}\langle v_1, \dots , v_n, w_1, \dots , w_n\rangle \) and \([v_i, w_j]=\varphi (\delta (v_i), w_j)\delta ^*=\delta _{ij}\delta ^*\), all other products being zero. Then item (g) follows. \(\square \)
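
Item (c) can also be confirmed computationally for a concrete nilpotent \(\delta \). The sympy sketch below takes the \(\varphi \)-skew pair of Eq. (7) with \(\lambda =0\) and a Jordan block of size 3 (a made-up example), uses the descriptions of \(A^{k+1}\) and \(Z_k(A)\) from items (a) and (b), and checks orthogonality together with the dimension count.

```python
import sympy as sp

J = sp.Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])                       # J_3(0)
D = sp.BlockMatrix([[J, sp.zeros(3)], [sp.zeros(3), -J.T]]).as_explicit()
Bv = sp.BlockMatrix([[sp.zeros(3), sp.eye(3)], [sp.eye(3), sp.zeros(3)]]).as_explicit()
m = 6
assert D.T * Bv + Bv * D == sp.zeros(m)                                # delta is phi-skew

# Gram matrix of phi_delta on the basis (delta, e_1, ..., e_6, delta^*), Eq. (11)
Bext = sp.zeros(m + 2)
Bext[0, m + 1] = 1
Bext[m + 1, 0] = 1
Bext[1:m + 1, 1:m + 1] = Bv

def embed(cols):
    """Embed V-columns into coordinates (t, x, s) and append delta^*."""
    vecs = [sp.Matrix([0] + list(c) + [0]) for c in cols]
    vecs.append(sp.Matrix([0]*(m + 1) + [1]))                          # delta^*
    return vecs

for k in (1, 2):                                                       # here delta^k != 0
    Dk = D**k
    Ak1 = embed([Dk.col(j) for j in range(m)])                         # A^{k+1} = Im delta^k + K delta^*
    Zk = embed(Dk.nullspace())                                         # Z_k(A) = ker delta^k + K delta^*
    orth = all((u.T * Bext * w)[0, 0] == 0 for u in Zk for w in Ak1)
    dims = sp.Matrix.hstack(*Zk).rank() + sp.Matrix.hstack(*Ak1).rank() == m + 2
    print(k, orth, dims)                                               # True, True: (Z_k(A))^perp = A^{k+1}
```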

Remark 3.3

Statement (c) in Lemma 3.2, that is, that the orthogonal complements \((A^{i+1})^\perp \) of the terms of the descending central series give us the upper central terms \(Z_i(A)\) and vice versa, is well known for quadratic algebras in characteristic zero [16]. According to assertion (e), generalized \(\mathbb {K}\)-oscillator algebras are reduced if and only if \(\delta \) is an automorphism or the length n of any Jordan block \(J_n(0)\) of \(\delta \) is greater than or equal to 2.

Remark 3.4

Indecomposable real quadratic Lie algebras whose quotient Lie algebra \(\frac{A^2}{Z(A)}\) is abelian have been treated in [14]. In that paper, the authors give the classification of real quadratic Lie algebras with maximal isotropic centre of dimension less than or equal to 2. The method they use is the two-fold extension. In the next section, we will relate \(\mathbb {K}\)-oscillator algebras and quadratic algebras with maximal isotropic centre of dimension 1.

Lemma 3.2 ensures the existence of quadratic nilpotent algebras in characteristic not 2, using one-dimensional-by-abelian double extensions and skew-nilpotent maps. As a corollary of the results in Sect. 2, Theorem 2.5 and Lemma 3.2 we get the complete description of these algebras.

Theorem 3.5

Over fields of characteristic not 2, any nonzero nilpotent generalized \(\mathbb {K}\)-oscillator algebra \((\mathfrak {d}(V, \varphi , \delta ), \varphi _\delta )\) of nilpotency index k is given through a \(\varphi \)-skew nilpotent map \(\delta \) such that \(m_\delta (x)=x^k\). Even more, the orthogonal space \((V,\varphi )\) decomposes as an orthogonal sum of a finite number of \(\delta \)-invariant regular subspaces \((W, \varphi |_{W})\) of dimension \(2n+1\) or 4m with \(n\le \lfloor \frac{1}{2} (k-1)\rfloor \) and \(m\le \lfloor \frac{1}{4}k\rfloor \). And there is a basis of W in which the pair \((\delta |_W, \varphi |_W)\) is expressed by the matrix pair \((A_{\delta |_W}, B_{\varphi |_W})\) as in Eq. (8) if W has odd dimension and as in Eq. (7) with \(\lambda =0\) otherwise.

Example 3.3

In characteristic 0, the smallest non-abelian, nilpotent and indecomposable quadratic Lie algebras are the free nilpotent algebra \(\mathfrak {n}_{2,3}\) on 2 generators, which is 3-step nilpotent (i.e., \(\mathfrak {n}_{2,3}^3\ne 0=\mathfrak {n}_{2,3}^4\)) and 5-dimensional, and the 6-dimensional free nilpotent algebra \(\mathfrak {n}_{3,2}\) on 3 generators, which is 2-step nilpotent. These two algebras are the only free nilpotent algebras that admit a quadratic structure (see [4, Theorem 3.8]). They can be obtained as one-dimensional-by-abelian double extensions of the following orthogonal vector spaces:

  • \(\mathfrak {n}_{2,3}\) is the double extension \(\mathfrak {d}(\mathbb {K}^3, \varphi _1, \delta _1)\) where \((\mathbb {K}^3, \varphi _1)\) and the \(\varphi _1\)-skew map \(\delta _1\) are given by means of the canonical form matrix pair (\(A_{\delta _1}, B_{\varphi _1}\)) as in Eq. (9) with \(n_1=1\) and \(\mu _1=-1\).

  • \(\mathfrak {n}_{3,2}\) is the double extension \(\mathfrak {d}(\mathbb {K}^4, \varphi _2, \delta _2)\) where \((\mathbb {K}^4, \varphi _2)\) and the \(\varphi _2\)-skew map \(\delta _2\) are given by means of the canonical form matrix pair \((A_{\delta _2}, B_{\varphi _2})\) as in Eq. (7) with \(\lambda =0\) and \(n_1=2\).

The algebras in Example 3.3 admit other non-degenerate and invariant bilinear forms. In the case of \(\mathfrak {n}_{3,2}\), all of its quadratic structures are isometrically isomorphic; this is clear from Theorem 2.5. The same is true for \(\mathfrak {n}_{2,3}\) if \(\mathbb {K}\) is algebraically closed, but this is not so clear from the previous theorem. If \(\mathbb {K}=\mathbb {R}\), we have two non-isometrically isomorphic quadratic structures: the one given through \(\mu _1=-1\) and that corresponding to \(\mu _1=1\). This comment leads us to the concept of quadratic dimension.

Example 3.4

For \({{\,\textrm{char}\,}}\mathbb {K}\ne 2\), the double extension \(\mathfrak {d}(\mathbb {K}^n,{{\,\textrm{Id}\,}}_n,f)\) attached to \((A_f,{{\,\textrm{Id}\,}}_n)\) in Example 2.2 produces a generalized oscillator \((n+2)\)-dimensional algebra A which, since \({{\,\textrm{Im}\,}}f={{\,\textrm{Im}\,}}f^2\), satisfies \(A^2 = A^3\) (a 3-dimensional subalgebra), so A is not nilpotent.

The quadratic dimension (see [2]) of a Lie algebra L is defined as \(d_q(L)=\dim B^s_{inv}(L)\), where \(B^s_{inv}(L)\) is the subspace of symmetric invariant bilinear forms of L. Note that L is quadratic if and only if \(d_q(L)\ge 1\). For the quadratic free-nilpotent algebras in Example 3.3, \(d_q(\mathfrak {n}_{2,3})=4\) and \(d_q(\mathfrak {n}_{3,2})=7\). In characteristic zero, any quadratic Lie algebra such that \(d_q(L)=1\) is simple (see [2, Theorem 3.1]), and the converse is also true over algebraically closed fields. The paper [3] is devoted to the structure of quadratic Lie algebras with quadratic dimension 2. According to [3, Lemma 3.1] (characteristic zero), any indecomposable Lie algebra whose quadratic dimension is 2 is local.

Let \(A=\mathfrak {d}(V, \varphi , \delta )\) be a generalized \(\mathbb {K}\)-oscillator algebra. Apart from \(\varphi _\delta \), the bilinear form \(\varphi _{1,0}\), defined as \(\varphi _{1,0}(\delta , \delta )=1\) and \(\varphi _{1,0}(V\oplus \mathbb {K}\cdot \delta ^*, A)=0\), is invariant because \(A^2\subseteq V\oplus \mathbb {K}\cdot \delta ^*\subseteq A^{\perp _{\varphi _{1,0}}}\). So \(d_q(\mathfrak {d}(V, \varphi , \delta ))\ge 2\). Even more, for any \(v\in V\backslash {{\,\textrm{Im}\,}}\delta \), we can split \(V=\mathbb {K}\cdot v \oplus U\) with \(A^2\subseteq \mathbb {K}\cdot \delta ^*\oplus U\). The bilinear form \(T_{v,U}\), defined as \(T_{v,U}(v,v)=1\) and \(T_{v,U}(\mathbb {K}\cdot \delta ^*\oplus U\oplus \mathbb {K}\cdot \delta , A)=0\), is invariant and \(T_{v,U}\notin {{\,\textrm{span}\,}}\langle \varphi _{1,0}, \varphi _\delta \rangle \). And we can define a fourth invariant and linearly independent form \(T'_{v,U}\) by \(T'_{v,U}(\delta ,v)=1\), with \(\delta \) and v isotropic vectors, and \(T'_{v,U}(\mathbb {K}\cdot \delta ^*\oplus U,A)=0\). Thus \(d_q(A)\ge 4\) if \(\delta (V)\ne V\). In this case, \(\ker \delta \ne 0\) and, from (b) in Lemma 3.2, \(Z(A)=\ker \delta \oplus \mathbb {K}\cdot \delta ^*\) if \(\delta \ne 0\); therefore, \(\dim Z(A)\ge 2\). This is a particular case of a more general result.

Lemma 3.6

(Tsou, Walker, 1957) Let \((A,\varphi )\) be a quadratic Lie algebra and \(r=\dim Z(A)\). Then, \(d_q(A)\ge 1+\frac{1}{2}r(r+1)\).

Proof

Let \(W={{\,\textrm{span}\,}}\langle w_1, \dots , w_d\rangle \) be a complement of \(A^2\) in A. Then \(A=W\oplus A^2\) and \(d=\dim A-\dim A^2=r=\dim Z(A)\) because \((A^2)^\perp =Z(A)\). For each pair (ij), define the symmetric form on W as \(T_{i,j}(w_i, w_j)=1=T_{i,j}(w_j, w_i)\) and \(T_{i,j}(w_k, w_s)=0\) for \((k,s)\ne (i,j)\). We extend \(T_{i,j}\) to a symmetric form in A by defining \(T_{i,j}(A^2, A)=0\). Since \(A^2\subset A^{\perp _{T_{i,j}}}\), \(T_{i,j}\) is invariant. So, the vector space \({{\,\textrm{span}\,}}\langle T_{i,j}, \varphi : 1\le i\le j\le d\rangle \subseteq B_{inv}^s(A)\) and, as the generator forms are linearly independent, the result follows. \(\square \)

We recall that a Lie algebra is local if it has a unique maximal ideal. In any quadratic Lie algebra, I is a maximal ideal if and only if \(I^\perp \) is minimal. Therefore, having a unique maximal ideal is equivalent to having a unique minimal ideal in the class of quadratic algebras.

Lemma 3.7

Let \(A=\mathfrak {d}(V, \varphi , \delta )\) be a generalized \(\mathbb {K}\)-oscillator Lie algebra. The following assertions are equivalent:

  1. (a)

    A is local.

  2. (b)

    \(A=A^2\oplus \mathbb {K}\cdot \delta \) and \(\delta \ne 0\).

  3. (c)

    Z(A) is one dimensional.

  4. (d)

    \(\delta \) is an automorphism.

  5. (e)

    The quadratic dimension of A is 2.

Here, \(A^2\) is a generalized \(\mathbb {K}\)-Heisenberg algebra and \(B^s_{inv}(A)={{\,\textrm{span}\,}}\langle \varphi _{1,0}, \varphi _\delta \rangle \), where \(\varphi _{1,0}(A^2, A)=0\) and \(\varphi _{1,0}(\delta ,\delta )=1\).

Proof

Any subspace U containing \(A^2\) is an ideal, and any subspace of Z(A) is an ideal. Assume first that A is local. As \(\dim A\ge 4\), A is not abelian, so \(\delta \ne 0\) and \(Z(A)=\ker \delta \oplus \mathbb {K}\cdot \delta ^*\). Since there is only one minimal ideal, \(\ker \delta =0\), so \({{\,\textrm{Im}\,}}\delta = V\), \(A^2= V\oplus \mathbb {K}\cdot \delta ^*\) and (b) follows. From (b) and \(A^2=(Z(A))^\perp \), we have \(\dim A=\dim A^2+ \dim Z(A)\) and we get (c). If \(Z(A)=\mathbb {K}\cdot \delta ^*\ne A\), using Lemma 3.2 we have \(\delta \ne 0\) and \(Z(A)=\ker \delta \oplus \mathbb {K}\delta ^*\), so \(\ker \delta =0\) and \(\delta \) is an automorphism. Then (c) implies (d). Now we will prove the final comment and (d) \(\Rightarrow \) (e). From Lemma 3.6, \(d_q(A) \ge 2\). In fact, \({{\,\textrm{span}\,}}\langle \varphi _\delta ,\varphi _{1,0}\rangle \subseteq B^s_{inv}(A)\). Let \(\psi \in B^s_{inv}(A)\) and set \(\alpha :=\psi (\delta , \delta )\) and \(\beta :=\psi (\delta , \delta ^*)\). Note that \(\psi (\delta , x)=\psi (\delta ^*, x)=0\) thanks to the invariance of \(\psi \) and \(x=[\delta , \delta ^{-1}(x)]\). Consider \(x_0,y_0\in V\) such that \(0\ne k_0=\varphi (x_0, y_0)=\varphi (\delta (\delta ^{-1}(x_0)), y_0)\). Then \(0=\psi ([\delta ^{-1}x_0,y_0], \delta ^*)=\psi (k_0\delta ^*,\delta ^*)=k_0\psi (\delta ^*, \delta ^*)\). Therefore, \(\psi (\delta ^*, \delta ^*)=0\). Now, for any \(x,y \in A\) and \(y\ne 0\), \(\psi (x,y)=\psi (x, [\delta ,\delta ^{-1}(y)])=\psi ([\delta ^{-1}(y), x],\delta )=\psi (\varphi (y,x)\delta ^*, \delta )\). Then, \(\psi (x,y)=\varphi (y,x)\psi (\delta ^*,\delta )=\beta \varphi _\delta (x,y)\), and the equality also holds for \(y=0\). In this way, we have that \(\psi =\alpha \varphi _{1,0}+\beta \varphi _\delta \). Therefore, \(d_q(A)=2\) and assertion (e) follows. Assume finally (e). From Lemma 3.6, \(r=\dim Z(A)=1\), so Z(A) is a minimal ideal and therefore \(\delta \) is an automorphism and \(A^2\) is a maximal ideal. Let I be a minimal ideal different from Z(A); then \(I\cap Z(A)=0\) and \(A=I^\perp +A^2\). As \(I^\perp \) is an ideal, applying the non-degeneracy of \(\varphi _\delta \), we have \([I,I^\perp ]=0\) and therefore \(0\ne [A,I]=[A^2,I]\subseteq A^2\cap I \subseteq I\). The minimality of I implies \(I= A^2\cap I \subseteq A^2\) and there is a \(0\ne v\in V\) such that \(v+t\delta ^*\in I\), \(t\in \mathbb {K}\). For any \(w\in V\), we have \([v+ t\delta ^*, w]=\varphi (\delta (v), w)\delta ^* \in I\). Our assumption implies \(\varphi (\delta (v), w)=0\) for all \(w\in V\), a contradiction because \(\varphi \) is non-degenerate and \(\delta \) is an automorphism. This proves that Z(A) is the unique minimal ideal and A is a local Lie algebra. \(\square \)
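
The implication (d) \(\Rightarrow \) (e) can be double-checked by brute force for the smallest real oscillator algebra (\(V=\mathbb {R}^2\), \(\varphi ={{\,\textrm{Id}\,}}_2\) and \(\delta \) the rotation generator): the sympy sketch below, with coordinates and helper names of our own, solves the invariance equations for a symmetric matrix and recovers \(d_q(A)=2\).

```python
import sympy as sp

m = 2
B = sp.eye(m)                                        # phi = Id_2 on V = R^2
D = sp.Matrix([[0, 1], [-1, 0]])                     # phi-skew automorphism (rotation generator)
dim = m + 2                                          # basis (delta, e_1, e_2, delta^*)

def bracket(a, b):
    """Eq. (10) in coordinates (t, x, s)."""
    xa, xb = sp.Matrix(a[1:m + 1]), sp.Matrix(b[1:m + 1])
    vpart = a[0]*(D*xb) - b[0]*(D*xa)
    spart = ((D*xa).T * B * xb)[0, 0]
    return sp.Matrix([0] + list(vpart) + [spart])

basis = [sp.eye(dim).col(j) for j in range(dim)]

# unknown symmetric bilinear form psi, encoded by a symmetric matrix of symbols
entries = sp.symbols('p0:%d' % (dim*(dim + 1)//2))
P = sp.zeros(dim)
it = iter(entries)
for i in range(dim):
    for j in range(i, dim):
        P[i, j] = P[j, i] = next(it)

# invariance: psi([a,b],c) + psi(b,[a,c]) = 0 on all basis triples
eqs = [(bracket(a, b).T * P * c + b.T * P * bracket(a, c))[0, 0]
       for a in basis for b in basis for c in basis]
M, _ = sp.linear_eq_to_matrix(eqs, entries)
print(len(entries) - M.rank())                       # 2 = d_q(A), matching Lemma 3.7
```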

Remark 3.8

In characteristic zero, [3, Theorem 3.1] offers a characterization of local algebras that includes the class of \(\mathbb {K}\)-oscillators. The main goal of that paper is to provide examples and characterizations of algebras whose quadratic dimension is 2. In fact, our proof of (d) \(\Rightarrow \) (e) is the one presented in Proposition 4.1, which asserts that the result is true for a double extension of any quadratic Lie algebra by any skew-symmetric derivation.

In the sequel, we will tackle the problem of isomorphisms and isometric isomorphisms for \(\mathbb {K}\)-oscillator algebras. A similar result appears in [9, Proposition 2.11]. By Definition 3.1, \(\mathbb {K}\)-oscillator algebras are one-dimensional-by-abelian double extensions of orthogonal vector spaces via skew-automorphisms.

Theorem 3.9

Let \(A_i=\mathfrak {d}(V_i, \varphi _i,\delta _i)\) be two \(\mathbb {K}\)-oscillator algebras. Then, \(A_1\) and \(A_2\) are isomorphic if and only if there exists an isomorphism \(f:V_1\rightarrow V_2\) and scalars \(\lambda ,\,\mu \in \mathbb {K}\) with \(\lambda \mu \ne 0\) such that:

  1. (a)

    \(\delta _1=\mu f^{-1}\delta _2 f\).

  2. (b)

    \(\lambda \mu \varphi _1=\varphi _2(f(\cdot ), f(\cdot ))\).

They are isometrically isomorphic if and only if (a) and (b) hold with \(\lambda \mu =1\). Also, an isomorphism \(F:A_1 \rightarrow A_2\) is completely determined by the 5-tuple \((f,z,\lambda , \mu , \nu )\), where \(f:V_1\rightarrow V_2\) satisfies (a) and (b), \(\lambda , \mu \in \mathbb {K}^\times \), \(z\in V_2\), and \(\nu \in \mathbb {K}\), in the following way: \(F(\delta _1) =\mu \delta _2+z+\nu \delta _2^*\), \(F(\delta _1^*)=\lambda \delta _2^*\), \(F(x)=f(x)+\varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))\delta _2^*\) for \(x\in V_{1}\). F is an isometry if and only if \(\lambda \mu =1\),

$$\begin{aligned} \mu \varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))+\varphi _2(z,f(x))=0\text { and }2\mu \nu +\varphi _2(z,z)=0. \end{aligned}$$
(14)

(The last conditions are fulfilled by, for instance, setting \(\nu =0, z=0\).)

Proof

Assume first that \(A_1\) and \(A_2\) are isomorphic. The centre of both algebras is one-dimensional, and any isomorphism \(F:A_1\rightarrow A_2\) induces the isomorphism \(F\mid _{Z(A_1)}:Z(A_1)\rightarrow Z(A_2)\). So, there exists \(\lambda \ne 0\) such that \(F(\delta _1^*)=\lambda \delta _2^*\). For the derived algebras \(A_i^2\), we also have the isomorphism \(F\mid _{A_1^2}:A_1^2\rightarrow A_2^2\), which acts as \(F(x)=f(x)+g(x)\delta _2^*\) on the elements \(x\in V_1\), where \(f:V_{1}\rightarrow V_{2}\) and \(g:V_{1}\rightarrow \mathbb {K}\) are linear maps. Furthermore, f must be an isomorphism since it is surjective: given \(y\in V_2\), there exists \(x+k\delta _1^*\in A_1^2\) such that \(y=F(x+k\delta _1^*)=f(x)+(g(x)+k\lambda )\delta _2^*\), implying \(f(x)=y\). Finally, \(F(\delta _1)=\mu \delta _2+z+\nu \delta _2^*\) with \(\mu \ne 0\), because otherwise we would have \(F(A_1)\subset A_2^2\ne A_2\).

Thus, we conclude there are \(\lambda \), \(\mu \), \(\nu \in \mathbb {K}\) with \(\lambda \mu \ne 0\), an isomorphism \(f:V_1\rightarrow V_2\), a linear map \(g:V_1\rightarrow \mathbb {K}\) and \(z\in V_2\) such that, for \(x\in V_1\): \(F(\delta _1)=\mu \delta _2+z+\nu \delta _2^*\), \(F(x)=f(x)+g(x)\delta _2^*\), \( F(\delta _1^*)=\lambda \delta _2^*\). Since F is an isomorphism, \(F([\delta _1,x]_1)=[F(\delta _1),F(x)]_2\) for \(x\in V_1\), so \(F(\delta _1(x))=[\mu \delta _2+z+\nu \delta _2^*,f(x)+g(x)\delta _2^*]_2\) and \(f(\delta _1(x))+g(\delta _1(x))\delta _2^* =\mu \delta _2(f(x))+\varphi _2(\delta _2(z),f(x))\delta _2^*\). Thus, for any \(x\in V_1\), we have that \((f\circ \delta _1)(x)=(\mu \delta _2\circ f)(x)\), so condition (a) follows, and \(g(x)=\varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))\). Taking \(a\in V_1\) such that \(\delta _1(a)=x\) and using again that F is an isomorphism, we also have that \(F([a,y]_1)=[F(a),F(y)]_2\), \(F(\varphi _1(\delta _1(a),y)\delta _1^*) =[f(a)+g(a)\delta _2^*,f(y)+g(y)\delta _2^*]_2\) and

$$\begin{aligned} \lambda \varphi _1(x,y)\delta _2^*&=\varphi _2(\delta _2(f(a)),f(y))\delta _2^*, \\ \lambda \varphi _1(x,y)\delta _2^*&=\varphi _2(f(\delta _1(a))/\mu ,f(y))\delta _2^*, \\ \lambda \varphi _1(x,y)\delta _2^*&=\varphi _2(f(x)/\mu ,f(y))\delta _2^*. \end{aligned}$$

Therefore, \(\lambda \mu \varphi _1(x,y)=\varphi _2(f(x),f(y))\) for \(x,y\in V_{1}\), proving condition (b). The isomorphism is defined by \(F(\delta _1) =\mu \delta _2+z+\nu \delta _2^*\), \(F(\delta _1^*)=\lambda \delta _2^*\) and \(F(x)=f(x)+\varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))\delta _2^*\) for \(x\in V_{1}\). If F is an isometry, then \(\varphi _{1\delta _1}(\delta _1,a) = \varphi _{2\delta _2}(F(\delta _1),F(a))\) for \(a = \delta _1^*, x, \delta _1\); therefore

$$\begin{aligned} 1&=\varphi _{2\delta _2}(\mu \delta _2+z+\nu \delta _2^*,\lambda \delta _2^*) =\lambda \mu ,\\ 0&=\varphi _{2\delta _2}(\mu \delta _2+z+\nu \delta _2^*,f(x)+\varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))\delta _2^*)\\ &=\mu \varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))+\varphi _2(z,f(x)),\\ 0&=\varphi _{2\delta _2}(\mu \delta _2+z+\nu \delta _2^*,\mu \delta _2+z+\nu \delta _2^*)=2\mu \nu +\varphi _2(z,z). \end{aligned}$$

which yields \(\lambda \mu =1\) together with the two conditions in Eq. (14).

For the converse implication, assume that (a) and (b) hold, take any \(z\in V_2\) and \(\nu \in \mathbb {K}\), and define \(F:A_1\rightarrow A_2\) by \(F(\delta _1) =\mu \delta _2+z+\nu \delta _2^*\), \(F(\delta _1^*)=\lambda \delta _2^*\), \(F(x) =f(x)+\varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))\delta _2^*\) if \(x\in V_1\), extended by linearity. This map is linear and bijective. To check that F is an isomorphism, it suffices to verify \(F([a,b]_1)=[F(a), F(b)]_2\) for the brackets \([\delta _1,\delta _1^*]_1\), \([\delta _1, x]_1\), \([x,y]_1\) and \([x,\delta _1^*]_1\), with \(x,y\in V_1\).

$$\begin{aligned} F([c,\delta _1^*]_1)&=F(0)=0=[F(c),\lambda \delta _2^*]_2=[F(c),F(\delta _1^*)]_2,\text { for any }c\in A_1 \\ F([\delta _1,x]_1)&=F(\delta _1(x))=f(\delta _1(x))+\varphi _2(\delta _2(z),f(x))\delta _2^* \\ {}&=\mu \delta _2(f(x))+\varphi _2(\delta _2(z),f(x))\delta _2^* \\ {}&=[\mu \delta _2+z+\nu \delta _2^*,f(x)+\varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))\delta _2^*]_2 =[F(\delta _1),F(x)]_2, \\ F([x,y]_1)&=F(\varphi _1(\delta _1(x),y)\delta _1^*)=\lambda \varphi _1(\delta _1(x),y)\delta _2^* =\textstyle \frac{1}{\mu }\varphi _2(f(\delta _1(x)),f(y))\delta _2^* \\ {}&=\varphi _2(\delta _2(f(x)),f(y))\delta _2^* \\ {}&=[f(x)+\varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))\delta _2^*,f(y) +\varphi _2(\delta _2(z),f(\delta _1^{-1}(y)))\delta _2^*]_2 \\&=[F(x),F(y)]_2. \end{aligned}$$

If we also assume \(\lambda \mu =1\), \(\mu \varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))+\varphi _2(z,f(x))=0\), and \(2\mu \nu +\varphi _2(z,z)=0\), we have that F is also an isometry as:

$$\begin{aligned} \varphi _{1\delta _1}(\delta _1,\delta _1^*)=1&=\lambda \mu =\varphi _{2\delta _2}(\mu \delta _2+z+\nu \delta _2^*,\lambda \delta _2^*)=\varphi _{2\delta _2}(F(\delta _1),F(\delta _1^*)), \\ \varphi _{1\delta _1}(x+k\delta _1^*,\delta _1^*)&=0=\varphi _{2\delta _2}(f(x)+\varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))\delta _2^*+k\lambda \delta _2^*,\lambda \delta _2^*) \\ {}&=\varphi _{2\delta _2}(F(x+k\delta _1^*),F(\delta _1^*)), \\ \varphi _{1\delta _1}(x,y)&=\varphi _1(x,y)=\textstyle \frac{1}{\lambda \mu }\varphi _2(f(x),f(y)) \\ =\varphi _{2\delta _2}(f(x)&+\varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))\delta _2^*,f(y)+\varphi _2(\delta _2(z),f(\delta _1^{-1}(y)))\delta _2^*) \\&=\varphi _{2\delta _2}(F(x),F(y)), \\ \varphi _{1\delta _1}(\delta _1,x)&=0=\mu \varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))+\varphi _2(z,f(x)) \\ {}&=\varphi _{2\delta _2}(\mu \delta _2+z+\nu \delta _2^*,f(x)+\varphi _2(\delta _2(z),f(\delta _1^{-1}(x)))\delta _2^*) \\ {}&=\varphi _{2\delta _2}(F(\delta _1),F(x)), \\ \varphi _{1\delta _1}(\delta _1,\delta _1)&=0=2\mu \nu +\varphi _2(z,z)=\varphi _{2\delta _2}(\mu \delta _2+z+\nu \delta _2^*,\mu \delta _2+z+\nu \delta _2^*) \\ {}&=\varphi _{2\delta _2}(F(\delta _1),F(\delta _1)). \end{aligned}$$

\(\square \)
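The construction of F in Theorem 3.9 can be sanity-checked numerically before turning to an example. The sketch below uses our own toy data (\(V_1=V_2=\mathbb {R}^2\) with the Euclidean form and a rotation as \(\delta \)) and only assumes the bracket of \(\mathfrak {d}(V,\varphi ,\delta )\) as used in the proof, namely \([\delta ,v]=\delta (v)\), \([v,w]=\varphi (\delta (v),w)\delta ^*\) and \(\delta ^*\) central. It builds F from a 5-tuple satisfying (a) and (b) and checks that it is a Lie algebra homomorphism; the chosen z and \(\nu \) do not satisfy Eq. (14), so this F is an isomorphism but not an isometry.

import numpy as np

# elements of d(V, phi, delta) are triples (c, v, cs) standing for c*delta + v + cs*delta*
Phi = np.eye(2)                        # phi on V = R^2
D1 = np.array([[0., -1.], [1., 0.]])   # phi-skew invertible delta_1 (a rotation)
D2 = D1.copy()                         # take delta_2 = delta_1 for simplicity

def bracket(a, b, D):
    c, v, cs = a
    d, w, ds = b
    return (0.0, c * (D @ w) - d * (D @ v), (D @ v) @ Phi @ w)

# a 5-tuple (f, z, lambda, mu, nu) with (a): delta_1 = mu f^{-1} delta_2 f and (b): lambda*mu*phi_1 = phi_2(f., f.)
f, z, lam, mu, nu = np.eye(2), np.array([1.0, 2.0]), 1.0, 1.0, -1.0

def F(a):
    c, v, cs = a
    g = (D2 @ z) @ Phi @ (f @ np.linalg.solve(D1, v))   # phi_2(delta_2(z), f(delta_1^{-1}(v)))
    return (mu * c, c * z + f @ v, nu * c + lam * cs + g)

basis = [(1.0, np.zeros(2), 0.0), (0.0, np.array([1.0, 0.0]), 0.0),
         (0.0, np.array([0.0, 1.0]), 0.0), (0.0, np.zeros(2), 1.0)]
for a in basis:
    for b in basis:
        lhs, rhs = F(bracket(a, b, D1)), bracket(F(a), F(b), D2)
        assert np.isclose(lhs[0], rhs[0]) and np.allclose(lhs[1], rhs[1]) and np.isclose(lhs[2], rhs[2])
print("F([a,b]) = [F(a),F(b)] on a basis: F is a Lie algebra homomorphism")

Replacing z and \(\nu \) by values satisfying Eq. (14) (for instance \(z=0\), \(\nu =0\)) would additionally make F an isometry.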

Example 3.5

Let \(\mathbb {K}\) be an arbitrary field with \({{\,\textrm{char}\,}}\mathbb {K}\ne 2,3\) and let \(V_{1}=V_{2}=\mathbb {K}^{4}\) with basis \(\{e_{1},e_{2}, v_{1}, v_{2}\}\) and bilinear form given by \(\varphi \left( e_{i},e_{j} \right) =1-\delta _{ij}=\varphi \left( v_{i},v_{j} \right) \) and \(\varphi \left( e_{i},v_{j} \right) =0\). Let \(f_{1}\) and \(f_{2}\) be the \(\varphi \)-skew-symmetric maps whose matrices in this basis are the diagonal matrices \(A_{f_{1}}={{\,\textrm{diag}\,}}\left( 2,-2,3,-3 \right) \) and \(A_{f_{2}}={{\,\textrm{diag}\,}}\left( 1,-1,4,-4 \right) \). Consider the algebras \(\mathfrak {d}_{i}=\mathfrak {d}\left( V_{i},\varphi ,f_{i} \right) \) and assume \(\mathfrak {d}_{1}\cong \mathfrak {d}_{2}\). According to (a) of Theorem 3.9, \(f_{1}=\mu f^{-1}f_{2}f\), so \(\alpha \) is an eigenvalue of \(f_{2}\) if and only if \(\mu \alpha \) is an eigenvalue of \(f_{1}\).

In this case the eigenvalues of \(f_{2}\) are \(\pm 1\), \(\pm 4\), and those of \(f_{1}\) are \(\pm 2\), \(\pm 3\). Therefore either \(\mu =\pm 2\) and \(4\mu =\pm 3\), or \(\mu =\pm 3\) and \(4\mu =\pm 2\). In the first case, \(\mu =\pm 2\), so \(\pm 4\mu =\pm 8=\pm 3\), thus \(8=\pm 3\) and \({{\,\textrm{char}\,}}\mathbb {K}=5\) or 11. In the second case, \(\mu =\pm 3\), so \(\pm 4\mu =\pm 12=\pm 2\), thus \(12=\pm 2\) and \({{\,\textrm{char}\,}}\mathbb {K}=5\) or 7. Therefore the algebras are non-isomorphic whenever \({{\,\textrm{char}\,}}\mathbb {K}\ne 5,7,11\). If \({{\,\textrm{char}\,}}\mathbb {K}=5\), taking \(f={{\,\textrm{Id}\,}}\), \(\mu =2\) and \(\lambda =2^{-1}\), the conditions of Theorem 3.9 hold and \(\mathfrak {d}_{1}\cong \mathfrak {d}_{2}\). If \({{\,\textrm{char}\,}}\mathbb {K}=7\), the same is true taking f with \(f\left( e_{1} \right) =v_{2}\), \(f\left( e_{2} \right) =v_{1}\), \(f\left( v_{1} \right) =e_{1}\), \(f\left( v_{2} \right) =e_{2}\), \(\mu =3\) and \(\lambda =3^{-1}\). If \({{\,\textrm{char}\,}}\mathbb {K}=11\), it suffices to take f with \(f\left( e_{i} \right) =e_{i}\), \(f\left( v_{1} \right) =v_{2}\), \(f\left( v_{2} \right) =v_{1}\), \(\mu =2\) and \(\lambda = 2^{-1}\).
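The characteristic-dependent case analysis admits a quick computational double-check. The script below is our own and only uses the necessary spectral condition \(\mu \cdot \{\pm 1,\pm 4\}=\{\pm 2,\pm 3\}\) modulo \({{\,\textrm{char}\,}}\mathbb {K}\); it scans small primes \(p>3\) and lists the scalars \(\mu \) for which the spectra match.

from sympy import primerange

spec1 = {2, -2, 3, -3}   # eigenvalues of f_1
spec2 = {1, -1, 4, -4}   # eigenvalues of f_2

# for which primes p > 3 is there a mu with mu*spec2 = spec1 modulo p?
for p in primerange(5, 100):
    hits = [mu for mu in range(1, p)
            if {(mu * a) % p for a in spec2} == {a % p for a in spec1}]
    if hits:
        print(p, hits)
# output: 5 [2, 3], then 7 [3, 4], then 11 [2, 9], matching characteristics 5, 7 and 11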

4 Isomaximality and Lorentzian Algebras

In this section \({{\,\textrm{char}\,}}\mathbb {K}= 0\). According to [12, Definition II.3.16], a Lorentzian Lie algebra is a pair \((L, \varphi )\) where L is a real Lie algebra and \(\varphi \) is an invariant and non-degenerate Lorentzian form, i.e., a real invariant symmetric bilinear form of signature \((p,1)\), where p is the number of positive eigenvalues and 1 is the number of negative ones. Section 6 of Chapter II in [12] focuses on these algebras and explores the study of Lie semialgebras in them. The section points out that the complete classification of Lorentzian algebras reduces to the indecomposable (termed irreducible by those authors) subclass, see [12, Remark II.6.1], and the classification of this subclass is fully covered by Theorem II.6.14. Throughout this final section, we will prove that, over an arbitrary field \(\mathbb {K}\) of characteristic zero, the class of solvable quadratic indecomposable Lie algebras whose quadratic form has Witt index 1 (only one hyperbolic plane) is exactly the class of one-dimensional-by-abelian double extensions, through skew-automorphisms, of orthogonal subspaces without non-zero isotropic vectors (i.e., the class of \(\mathbb {K}\)-oscillator algebras according to Definition 3.1). The Witt index 1 condition generalizes the (p, 1) or (1, q) signature condition in the real case. To obtain our result, we will use the concept of isomaximal ideal introduced in [14, Definition 2.3] and basic facts on quadratic Lie algebras.

Definition 4.1

A totally isotropic ideal I of a quadratic Lie algebra is called isomaximal if it is not contained in any strictly larger totally isotropic ideal.

Lemma 4.1

(Kath, Olbrich, 2003) Let \((L, \varphi )\) be a quadratic indecomposable Lie algebra over \(\mathbb {K}\) (\({{\,\textrm{char}\,}}\mathbb {K}= 0\)), and R(L) be the solvable radical of L. Then:

  1. (a)

    L has no proper simple ideals and \(R(L)^\perp \subseteq R(L)\).

  2. (b)

    If I is an isomaximal ideal, then \(I\ne L\), \(I^\perp \subseteq R(L)\), and \(\frac{I^\perp }{I} \) is abelian.

Proof

This is [14, Lemmas 2.2 and 2.3]; both proofs remain valid in characteristic zero. \(\square \)

Theorem 4.2

The following assertions are equivalent:

  1. (a)

\((L, \psi )\) is a non-semisimple indecomposable quadratic Lie algebra such that \(\psi \) has Witt index 1.

  2. (b)

\(L=\mathfrak {d}(V,\varphi , \delta )\) is a \(\mathbb {K}\)-oscillator algebra, and \(\varphi (v,v)\ne 0\) for all \(0\ne v\in V\).

Lie algebras satisfying statement (a) or (b) are double extensions of an abelian quadratic algebra \((V,\varphi )\) by a \(\varphi \)-skew semisimple automorphism \(\delta \). Moreover, any irreducible factor of the minimal polynomial \(m_\delta (x)\) is of the form \(\pi (x)=x^{2n}-a_{n-1}x^{2(n-1)}-\dots -a_1x^2-a_0\in \mathbb {K}[x]\), \(n\ge 1\).

Proof

Assume condition (a) and note that the maximal dimension of any totally isotropic subspace is 1. From Lemma 4.1, \(R(L)^\perp \subseteq R(L)\), so \(R(L)^\perp \) is a totally isotropic ideal, and \(d=\dim R(L)^\perp \le 1\). Since \(d=\dim L-\dim R(L)\) is just the dimension of any Levi factor of L, \(d\ge 3\) if \(d\ne 0\). Thus, \(d=0\) and \(L=R(L)\) is a solvable Lie algebra. This implies \(L^2\ne L\), and then \(Z(L)=(L^2)^\perp \ne 0\). From Remark 3.1, \(0\ne Z(L)\subset L^2=Z(L)^\perp \) by indecomposability; therefore, Z(L) is totally isotropic. So, \(Z(L)=\mathbb {K}\cdot z\) is a minimal ideal, \(\psi (z,z)=0\), and \((Z(L))^\perp =L^2\) is a maximal ideal of codimension one. Then \(L=\mathbb {K}\cdot x \oplus L^2\), and from the non-degeneracy of \(\psi \), we can assume without loss of generality \(\psi (x,z)=1\) and \(\psi (x,x)=0\), i.e., \(\langle x,z\rangle \) is a hyperbolic plane. From [9, Lemma 2.7], we get that L is isometrically isomorphic to the double extension of the quadratic algebra \((V=\frac{L^2}{Z(L)}, \varphi )\), \(\varphi (a+Z(L), b+Z(L)):=\psi (a,b)\). But the centre is an isomaximal ideal; thus, \((V=\frac{L^2}{Z(L)}, \varphi )\) is a quadratic abelian Lie algebra by (b) in Lemma 4.1. Since the Witt index of \(\psi \) is one, from the decomposition \(L=\langle x,z\rangle \oplus \langle x,z\rangle ^\perp \) and \(L^2=\mathbb {K}\cdot z\oplus \langle x,z\rangle ^\perp \), it is easy to check that \(\varphi \) has no non-zero isotropic vectors. Note that, as \(L\cong \mathfrak {d}(V,\varphi , \delta )\) and Z(L) is one-dimensional, \(\delta \) is an automorphism according to Lemma 3.7, so (b) follows. For the converse, assume (b) and endow \(L=\mathfrak {d}(V,\varphi , \delta )\) with \(\psi =\varphi _\delta \). Then L is solvable, hence non-semisimple, and \(\psi \) has Witt index 1, since \(\langle \delta ,\delta ^*\rangle \) is a hyperbolic plane and \(\varphi \) is anisotropic on V. Finally, as \(L^2\) is the only maximal ideal of L, if I is any non-zero ideal, then \(I^\perp \) is a proper ideal, so \(I^\perp \subseteq L^2\) and therefore \(Z(L)\subseteq (I^\perp )^\perp =I\). This implies that L is indecomposable. The final assertion on the semisimplicity of \(\delta \) and on the irreducible factors of \(m_\delta (x)\) follows from Corollary 2.4. \(\square \)
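The shape of the irreducible factors in the final assertion can be illustrated with a small symbolic computation. This is our own example over \(\mathbb {Q}\), taking \(\varphi \) to be the standard (anisotropic) inner product so that \(\varphi \)-skew maps are ordinary skew-symmetric matrices; every irreducible factor of the characteristic polynomial, and hence of \(m_\delta (x)\), contains only even powers of x.

from sympy import Matrix, symbols, factor_list

x = symbols('x')
# an invertible skew-symmetric (hence phi-skew for the standard inner product) matrix over Q
A = Matrix([[ 0,  1,  2,  3],
            [-1,  0,  1,  5],
            [-2, -1,  0,  1],
            [-3, -5, -1,  0]])
p = A.charpoly(x).as_expr()
print(p)                                         # x**4 + 41*x**2 + 36, irreducible over Q
for fac, mult in factor_list(p, x)[1]:
    monomials = fac.as_poly(x).monoms()
    assert all(m[0] % 2 == 0 for m in monomials)   # only even powers of x appear in each factor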

Remark 4.3

The class of \(\mathbb {K}\)-oscillator algebras described in Theorem 4.2 is broad. According to Lemma 3.7, their quadratic dimension is 2. Moreover, Theorem 3.9 provides criteria for isometric isomorphisms between two algebras of this class. In the real case, the algebraic structure of oscillator algebras provides information on the geometry of oscillator groups [5, Theorem 5.1].

Remark 4.4

Any indecomposable quadratic Lie algebra with trivial solvable radical is simple. For the algebra \(\mathfrak {sl}(2, \mathbb {K})\) of zero-trace \(2\times 2\) matrices, the bilinear form \(b(A,B)=\frac{1}{2}{{\,\textrm{Tr}\,}}(AB)\) allows the recovery of the Killing form as \(\kappa =8b\). Over the reals, the pairs \((\mathfrak {sl}(2, \mathbb {R}), \lambda \kappa )\) with \(\lambda >0\) are the unique simple Lorentzian algebras. All of them are isomorphic as Lie algebras, but \((\mathfrak {sl}(2, \mathbb {R}), \lambda \kappa )\) is isometrically isomorphic to \((\mathfrak {sl}(2, \mathbb {R}), \kappa )\) only for \(\lambda =1\), since every automorphism of a semisimple Lie algebra preserves its Killing form.
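The identity \(\kappa =8b\) on \(\mathfrak {sl}(2)\) is easy to confirm by a direct computation; the following small check is ours and uses the standard basis e, h, f of \(2\times 2\) matrices.

import numpy as np

e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def coords(M):              # coordinates of a traceless 2x2 matrix in the basis (e, h, f)
    return np.array([M[0, 1], M[0, 0], M[1, 0]])

def ad(X):                  # matrix of ad X in the basis (e, h, f)
    return np.column_stack([coords(X @ B - B @ X) for B in basis])

kappa = np.array([[np.trace(ad(X) @ ad(Y)) for Y in basis] for X in basis])
b = np.array([[0.5 * np.trace(X @ Y) for Y in basis] for X in basis])
print(np.allclose(kappa, 8 * b))   # True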

Corollary 4.5

[15] The 3-dimensional special linear algebra is the unique quadratic non-solvable indecomposable real Lorentzian algebra, and the Killing form is the unique, up to positive scalars, invariant and non-degenerate Lorentzian form. Solvable indecomposable quadratic Lorentzian algebras are the \((2n+2)\)-dimensional algebras \((\mathfrak {d}_{2n+2}(\lambda ),\varphi _\delta )\) from Example 3.2 with \(\mathfrak {d}_{2n+2}(\lambda )={{\,\textrm{span}\,}}_\mathbb {R}\langle \delta , x_i,y_i, \delta ^*\rangle \) and \(\lambda = (1,\lambda _2, \dots , \lambda _n)\), \(0<\lambda _i\le \lambda _{i+1}\).

Proof

For the solvable case, apply Theorems 3.9 and 4.2 to prove that any \(\mathbb {R}\)-oscillator determined by \(\lambda =(\lambda _1, \lambda _2, \dots , \lambda _n)\) is isometrically isomorphic to the one determined by \(\frac{1}{\lambda _1}\lambda \) (take \(f={{\,\textrm{Id}\,}}\), \(\mu =\lambda _1\) and the scalar \(\lambda =\lambda _1^{-1}\) in Theorem 3.9). \(\square \)

Corollary 4.6

The invariant and non-degenerate bilinear forms of the Lorentzian algebra \((\mathfrak {d}_{2n+2}(\lambda ),\varphi _\delta )\), where \(\lambda = (1,\lambda _2, \dots , \lambda _n)\) and \(0<\lambda _i\le \lambda _{i+1}\), are the forms \(\varphi _{t,s}\) with \((t,s)\in \mathbb {R}^2\) and \(s\ne 0\) defined by \(\varphi _{t,s}(\delta ,\delta )=t\), \(\varphi _{t,s}(\delta ,\delta ^*)=s\), \(\varphi _{t,s}(x_i,x_j)=\varphi _{t,s}(y_i,y_j)=\delta _{ij}s\), and \(\varphi _{t,s}(\delta ^*,\delta ^*)=\varphi _{t,s}(x_i,y_j)=\varphi _{t,s}(\delta ,a)=\varphi _{t,s}(\delta ^*,a)=0\) for \(a=x_i,y_j\). In particular, \(\varphi _\delta =\varphi _{0,1}\). Furthermore, \((\mathfrak {d}_{2n+2}(\lambda ), \varphi _{t,s})\) is isometrically isomorphic to \((\mathfrak {d}_{2n+2}(\lambda '), \varphi _{t',s'})\) if and only if \(\lambda =\lambda '\) and \(ss'>0\). Consequently, any \(\mathbb {R}\)-oscillator algebra admits exactly two non-isometrically isomorphic quadratic structures: the one given by the invariant form \(\varphi _{0,1}\) of signature \((2n+1,1)\) and the one of signature \((1,2n+1)\) given by \(\varphi _{0,-1}=-\varphi _{0,1}\).

Proof

The description of the bilinear forms follows from the fact that the quadratic dimension of these algebras is 2. Now, for the isomorphism assertion, let \(\mathfrak {d}_{2n+2}\left( \lambda \right) ={{\,\textrm{span}\,}}_{\mathbb {R}}\langle \delta , x_{i}, y_{i}, \delta ^*\rangle \) and \(\mathfrak {d}_{2n+2}\left( \lambda ' \right) ={{\,\textrm{span}\,}}_{\mathbb {R}}\langle \delta ',x_{i}', y_{i}',\delta '^*\rangle \). By Theorem 3.9, for these algebras to be isomorphic there must exist \(0\ne \mu \in \mathbb {R}\) and an isomorphism \(f:{{\,\textrm{span}\,}}_{\mathbb {R}}\langle x_{i},y_{i}\rangle \rightarrow {{\,\textrm{span}\,}}_{\mathbb {R}} \langle x_{i}',y_{i}'\rangle \) such that \(\delta =\mu f^{-1}\delta ' f\). The characteristic polynomial of the map on the right-hand side is \(q(x)=( x^{2}+\mu ^{2})( x^{2}+( \mu \lambda _{2}')^{2})\ldots ( x^{2}+( \mu \lambda _{n}' ) ^{2} ) \) with \(|\mu | \le |\mu |\lambda _{2}'\le \cdots \le |\mu |\lambda _{n}'\), while the characteristic polynomial of \(\delta \) is \(p(x)=\left( x^{2}+1 \right) \left( x^{2}+\lambda _{2}^{2} \right) \ldots \left( x^{2}+\lambda _{n}^{2} \right) \) with \(1\le \lambda _{2} \le \cdots \le \lambda _{n}\). Since they must be equal, \(\mu ^2=1\) and \(\lambda _{i}^2=(\mu \lambda _{i}')^2\), that is, \(\lambda _{i}=\lambda _{i}'\) and \(\lambda =\lambda '\). For the isometric part, assume that \(F:\left( \mathfrak {d}_{2n+2}\left( \lambda \right) ,\varphi _{t,s}\right) \rightarrow \left( \mathfrak {d}_{2n+2}\left( \lambda \right) ,\varphi _{t',s'}\right) \) is an isometric isomorphism, where F is as in Theorem 3.9. Then \(s=\varphi _{t,s}(\delta ,\delta ^*) =\varphi _{t',s'}(F(\delta ),F(\delta ^*))=\lambda \mu s'\) and, by condition (b), \(\frac{s}{s'}\varphi (x,y) =\varphi (f(x),f(y))\) for \(x,y\in {{\,\textrm{span}\,}}_{\mathbb {R}}\langle x_{i},y_{i}\rangle \). Write \(f(x_{1}) =\sum _{k=1} ^{n}a_{k}x_{k}+b_{k}y_{k}\), where some \(a_{k}\) or \(b_{k}\) is non-zero since f is an isomorphism. Then \(\frac{s}{s'}=\frac{s}{s'}\varphi \left( x_{1},x_{1} \right) =\varphi \left( f\left( x_{1} \right) , f\left( x_{1} \right) \right) =\sum _{k=1}^{n}a_{k}^{2}+b_{k}^{2}>0\), i.e., \(ss'>0\). Conversely, let \(s=\pm 1\), \(t=0\) and \(s'\) be such that \(ss'>0\). Define the map \(F:\left( \mathfrak {d}_{2n+2} \left( \lambda \right) ,\varphi _{0,s}\right) \rightarrow \left( \mathfrak {d}_{2n+2}\left( \lambda \right) ,\varphi _{t',s'} \right) \) by \(F(\delta )=\delta +\nu \delta ^*\), \(F(\delta ^*) =\frac{s}{s'}\delta ^*\) and \(F(x)= \sqrt{\frac{s}{s'}} x\), where \(\nu \) satisfies \(t'+2\nu s'=0\). By Theorem 3.9, F is an isomorphism, and straightforward computations show that it is also an isometry. Therefore, \((\mathfrak {d}_{2n+2}(\lambda ), \varphi _{t',s'})\cong (\mathfrak {d}_{2n+2}(\lambda ), \varphi _{0,1 })\) for \(s'>0\) and \((\mathfrak {d}_{2n+2}(\lambda ), \varphi _{t',s'})\cong (\mathfrak {d}_{2n+2}(\lambda ), \varphi _{0,-1})\) for \(s'<0\). \(\square \)
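The rescaling map used at the end of the proof can be verified numerically. The sketch below uses our own coordinates for the smallest case \(n=1\), \(\lambda =(1)\), with the oscillator bracket coming from the double extension construction (\([\delta ,x_1]=y_1\), \([\delta ,y_1]=-x_1\), \([x_1,y_1]=\delta ^*\)); it checks that \(F(\delta )=\delta +\nu \delta ^*\), \(F(\delta ^*)=\frac{s}{s'}\delta ^*\), \(F(x)=\sqrt{s/s'}\,x\) with \(t'+2\nu s'=0\) is both a Lie algebra homomorphism and an isometry from \(\varphi _{0,s}\) to \(\varphi _{t',s'}\) for a sample choice of parameters.

import numpy as np

# basis order: delta, x1, y1, delta*; structure constants C[i,j,k]: [e_i, e_j] = sum_k C[i,j,k] e_k
n = 4
C = np.zeros((n, n, n))
C[0, 1, 2], C[1, 0, 2] = 1, -1    # [delta, x1] = y1
C[0, 2, 1], C[2, 0, 1] = -1, 1    # [delta, y1] = -x1
C[1, 2, 3], C[2, 1, 3] = 1, -1    # [x1, y1] = delta*

def gram(t, s):                   # Gram matrix of phi_{t,s} in this basis
    return np.array([[t, 0, 0, s],
                     [0, s, 0, 0],
                     [0, 0, s, 0],
                     [s, 0, 0, 0]])

s, tp, sp = 1.0, 3.0, 4.0         # source form phi_{0,s}, target form phi_{t',s'} with s*s' > 0
nu = -tp / (2 * sp)               # so that t' + 2*nu*s' = 0
F = np.zeros((n, n))              # columns = images of the basis vectors
F[:, 0] = [1, 0, 0, nu]           # F(delta)  = delta + nu*delta*
F[1, 1] = F[2, 2] = np.sqrt(s / sp)   # F(x1), F(y1) scaled by sqrt(s/s')
F[3, 3] = s / sp                  # F(delta*) = (s/s')*delta*

# isometry check: phi_{t',s'}(F a, F b) = phi_{0,s}(a, b)
assert np.allclose(F.T @ gram(tp, sp) @ F, gram(0.0, s))
# homomorphism check: F[e_i, e_j] = [F e_i, F e_j]
br = lambda a, b: np.einsum('i,j,ijk->k', a, b, C)
for i in range(n):
    for j in range(n):
        assert np.allclose(F @ br(np.eye(n)[i], np.eye(n)[j]), br(F[:, i], F[:, j]))
print("F is an isometric isomorphism from (d_4, phi_{0,1}) to (d_4, phi_{3,4})")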