1 Introduction

In the portfolio selection problem, two phases are usually involved: the first is devoted to the selection of promising assets, while the second focuses on the allocation of capital among them. In modern portfolio theory, the mean-variance analysis developed by Markowitz (1952) represents a milestone, and it has gained widespread acceptance as a practical tool for portfolio optimization among researchers and practitioners (Guerard 2009). In this model, the mean return of a portfolio represents the profit measure, while the portfolio variance represents the risk. Accordingly, a portfolio is efficient if it provides the maximum return for a given level of risk or, equivalently, if it has the minimum risk for a given level of return. The set of optimal mean-variance tradeoffs in the risk-return space forms the efficient frontier. To guide an investor's choice among these efficient portfolios, Sharpe (1994) introduced a performance measure defined as the ratio between the excess return of an investment with respect to a risk-free asset and its standard deviation. Higher values of the Sharpe ratio correspond to more promising alternatives. In this manner, only the first two moments of the returns distribution are involved in the portfolio selection problem. However, the issue of whether higher moments should be considered to properly represent investors' behaviour has been widely debated in the literature and is still open. The pioneering studies by Arditti (1967) and Samuelson (1970) pointed out the importance of the third-order central moment of the returns distribution in the portfolio allocation process. In the same direction, Scott and Horvath (1980) observed that a positive preference for skewness and a negative preference for kurtosis, known as prudence and temperance respectively (Kimball 1990), properly explain the behaviour of investors. Thus, to take advantage of the potential upside return represented by positive skewness and the possible benefit given by small kurtosis, two performance measures extending the Sharpe ratio within the mean-variance framework have recently been proposed in the literature, namely the adjusted for skewness Sharpe ratio (Zakamuline and Koekebakker 2009) and the adjusted for skewness and kurtosis Sharpe ratio (Pézier and White 2008). The former multiplies the classical Sharpe index by a factor linked to the portfolio skewness, while the latter extends the Sharpe ratio by including a factor that incorporates both skewness and kurtosis.

These two Sharpe ratio-based performance measures have been used for stock performance evaluation (Nagy and Benedek 2021) but, to the best of our knowledge, they have not been applied as objective functions in the portfolio optimization process. Therefore, we propose portfolio selection strategies that maximize the aforementioned performance measures under four types of real-world constraints. A cardinality constraint is used to manage the portfolio size, while a budget constraint ensures that all the available capital is invested. To characterize the investment profile, we introduce a set of bound constraints and a turnover threshold. Thanks to the former constraints, we avoid both the concentration of money in a few assets and its splitting into too many assets, while the turnover bound limits the extent of portfolio changes over time. The proposed asset allocation problem is analyzed from the perspective of an institutional investor who operates in equity markets with hundreds or thousands of constituents and selects a restricted pool of stocks to build up an active portfolio.

The introduction of cardinality constraints in the portfolio design leads to optimization problems for which finding optimal solutions becomes computationally challenging (Moral-Escudero et al. 2006). For this reason, in recent years, swarm optimization algorithms, inspired by the self-organizing interaction among agents, have become popular in this field. In particular, the particle swarm optimization (PSO) algorithm has shown a good capability in solving small and mid-size portfolio allocation models (Cura 2009; Zhu et al. 2011; Kaucic et al. 2020; Corazza et al. 2021). This algorithm, mimicking the swarm behaviour of social animals such as flocking birds, gathers the information about good solutions through the swarm and explores the whole search space to find the global solution of the problem (Wang et al. 2018). However, PSO does not work efficiently in solving high-dimensional optimization problems, due to the so-called curse of dimensionality (Gilli and Schumann 2012; Oldewage et al. 2020). To overcome this issue, Yang et al. (2018) have developed the level-based learning swarm optimizer (LLSO), which achieves higher-quality solutions than other competitors in the literature on large-scale optimization problems. The algorithm is based on the teaching paradigm, in which individuals are divided into levels according to their fitness and treated differently. The best-performing candidates are stored in higher levels and guide the learning of the other particles in the swarm.

In this paper, we adopt a dynamic variant of the LLSO algorithm to solve our portfolio optimization problems, with a specific clamping and reversing procedure in the particle update rule to improve the exploration efficiency. Moreover, since the LLSO is blind to the constraints, we equip it with a novel hybrid constraint-handling technique which works as follows. To deal with the cardinality constraint, we use a projection operator that selects the largest components of the candidate solutions and sets the remaining ones equal to zero. With this technique, we can relax the cardinality equality condition proposed in Kaucic and Piccotto (2022) into an inequality, assuming that the number of stocks included in the portfolio is lower than or equal to a fixed threshold. Furthermore, through this process, we transform the original mixed-integer optimization problem into a problem involving only real variables. Then, buy-in threshold and budget constraints are handled using the repair operator proposed in Meghwani and Thakur (2017). Finally, to control the turnover constraint, an \(\ell _1\)-exact penalty function method is adopted, as in Corazza et al. (2021).

Summing up, the contribution of this work to the current literature is threefold. First, it is the first time that the adjusted for skewness Sharpe ratio and the adjusted for skewness and kurtosis Sharpe ratio are employed as objective functions in the portfolio optimization problem. Second, regarding the algorithmic novelties, we propose an improved variant of the LLSO equipped with a novel ad-hoc constraint-handling procedure which involves a repair operator as well as an \(\ell _1\)-exact penalty function strategy. Third, from a practical point of view, we study the robustness of the proposed multi-moment strategies by comparing their profitability in a long-run setting, with almost 14 years of observations, which also covers the recent phases of large market fluctuation due to the COVID-19 pandemic and the Ukrainian crisis.

The remainder of the paper is organized as follows. In the next section we review some literature related to our work. In Sect. 3 we introduce the investment framework and the objective functions involved. Section 4 presents the improved LLSO algorithm with the novel hybrid constraint-handling technique. Section 5 is devoted to the experimental analysis while the conclusions and future works are reported in Sect. 6.

2 Related works

In this section, we first survey the foremost contributions that have appeared in the literature on multi-moment formulations of the portfolio optimization problem. Since a complete review of the most recent population-based heuristics for large-scale optimization problems can be found in Omidvar et al. (2022a) and Omidvar et al. (2022b), we then focus solely on the papers related to our study and concerning PSO improvements.

2.1 Multi-moment portfolio optimization models

In order to highlight the critical effects of prudence and temperance on investment decisions, and to provide a more complete characterization of investor preferences, many authors have revised the mean-variance framework by incorporating the third and fourth moments when constructing a portfolio (see Jurczenko and Maillet 2006 and references therein). Lai (1991) has introduced a new term in the objective function to include skewness in the asset allocation problem. Konno et al. (1993) have developed a model in which they maximize the third-order moment given thresholds for the portfolio expected return and variance. In addition, Liu et al. (2003) have also employed a transaction cost constraint. Lai et al. (2006) have proposed a multi-objective portfolio optimization problem which involves the first four central moments of the portfolio return distribution.

The direct optimization of the third and fourth moment terms in the problem formulation has been shown to be computationally demanding, due to the difficulty in obtaining reliable estimators for the co-skewness and co-kurtosis matrices, as highlighted in Kim et al. (2014). In some recent contributions, Chaigneau and Eeckhoudt (2020) and Gao et al. (2022) have extended the mean-variance framework by considering alternative measures which incorporate prudence and temperance in risk exposure. Finally, several experimental investigations in the field of behavioural finance have been carried out in recent years, pointing out the importance of prudence, temperance, and higher-order preferences in explaining investors' behaviour (see, among others, Colasante and Riccetti 2020, 2021).

2.2 PSO enhancements for large-scale optimization

In the literature, there are two major algorithmic approaches to address the curse of dimensionality in PSO, namely decomposition-based and non-decomposition-based approaches. The first type of procedure separates a high-dimensional problem into several lower-dimensional instances using a divide-and-conquer strategy to reduce the dimensionality. In this direction, Van den Bergh and Engelbrecht (2004) and Li and Yao (2012) have proposed cooperative co-evolutionary particle swarm optimization algorithms, which randomly divide the decision variables into subgroups and then use PSO to optimize each subgroup separately. The second type of algorithm directly optimizes all the variables at the same time, employing a learning mechanism to properly balance diversity and convergence. In this context, several learning strategies have been proposed recently. For instance, the competitive swarm optimizer (CSO, Cheng and Jin 2015a) compares two randomly chosen particles, and then the superior particle guides the update of the inferior one. Inspired by social animal behaviours, Cheng and Jin (2015b) have proposed the social learning particle swarm optimizer (SL-PSO), which first sorts particles by fitness so that worse individuals learn from the better ones. Following the teaching concept that teachers should treat students in accordance with their abilities, Yang et al. (2018) have developed the so-called level-based learning swarm optimizer (LLSO). The optimal compromise between exploration and exploitation of this learning technique guarantees more accurate solutions than the above-cited CSO and SL-PSO. For this reason, several extensions of LLSO have been developed. The dynamic LLSO (Yang et al. 2018) adjusts the number of groups in which the particle swarm is divided based on the performance of the algorithm over time. Due to the oversensitivity of the standard LLSO to the parameter setting, Song et al. (2021) have proposed an adaptive variant in which the evolution state of the swarm is adjusted based on the information given by the swarm aggregation indicator. Similarly, the reinforcement learning level-based particle swarm optimization algorithm by Wang et al. (2022) introduces a reinforcement learning strategy to control the number of levels and to improve the search efficiency.

3 Investment framework

In this study, we consider a frictionless market where short selling is not allowed and all investors act as price takers. The investable universe is represented by n risky assets. A portfolio is denoted by the vector of its asset weights \(\textbf{x} = (x_1,\ldots ,x_n)\in \mathbb {R}^n\). In our dynamic setting, portfolio weights are periodically rebalanced, with an investment horizon of length h. We observe the market over a time window \(\mathcal {T} = \{0,1,\ldots ,T\}\) and adopt the following two-step scheme for the investment strategy:

  1. at time T the optimal portfolio composition is determined using a scenario-based approach;

  2. the same portfolio composition is retained until time \(T+h\), assuming that the stocks selected are still available at \(T+h\).

The prices of the n risky assets are available over the time window \(\mathcal {T}\). We denote by \(p_{i,t}\) the observed price of asset i at time t, with \(t \in \mathcal {T}\) and \(i = 1,\ldots ,n\), and define the realized rate of return at time t, with \(t \ge 1\), as \(r_{i,t} = \frac{p_{i,t}}{p_{i,t-1}}-1\). Based on this information, we aim to identify the optimal allocation at time T that guarantees the best performance at time \(T+h\).

Let now \((\Omega , \mathcal {F}, P)\) be the probability space on which we assume the random variables are defined. We denote by:

  • \(R_{i}^{(h)}\) the random variable representing the rate of return of asset i at future time \(T+h\), with expected value \(\mu _i\);

  • \(R_{\textbf{x}}^{(h)} = \sum _{i=1}^{n} x_i R_{i}^{(h)}\) the random variable that expresses the rate of return of portfolio \(\textbf{x}\) at future time \(T+h\).

We repeat this investment procedure over time, updating the observation window \(\mathcal {T}\) by eliminating the h oldest observations and adding the h most recent ones. For simplicity of notation, we will indicate the random rate of return of portfolio \(\textbf{x}\) at \(T+h\), \(R_{\textbf{x}}^{(h)}\), by \(R_{\textbf{x}}\).

3.1 Sharpe ratio-based performance measures

In this section, we introduce the three performance measures considered in our portfolio optimization problems.

3.1.1 Sharpe ratio

In the asset allocation problem proposed by Markowitz (1952), portfolio risk is represented by the volatility, given by \(\sigma (R_{\textbf{x}}) = \sqrt{\sum _{i=1}^{n}\sum _{j=1}^{n}c_{ij}x_i x_j}\), where \((C)_{ij} = c_{ij}\) is the covariance between stocks i and j, with \(i,j = 1,\dots ,n\). In this model, the portfolio choice is made solely with respect to the expected rate of return \(\mu\) of the portfolio \(\textbf{x}\) and its risk, where a large portfolio volatility is perceived as damaging by the investors. As mentioned in the introduction, the Sharpe ratio is then defined as

$$\begin{aligned} SR(\textbf{x}) = \frac{\mu -r_f}{\sigma (R_{\textbf{x}})} \end{aligned}$$
(1)

where \(r_f\) is a risk-free rate. This performance measure can be interpreted as the compensation earned by the investor per unit of risk. Thus, higher values of SR indicate more promising portfolios and are preferred by rational investors.

Although it has an easy interpretation, the Sharpe ratio presents some pitfalls. The first is the choice of the risk-free rate used to define the excess rate of return, on which the debate among scholars and practitioners is still open. For instance, Hitaj and Zambruno (2016) consider \(r_f = 0\) a reasonable value. Similarly, Amédée-Manesme and Barthélémy (2022) exogenously set \(r_f = 2\%\). Alternatively, Deguest et al. (2022) suggest the use of 1-month or 3-month maturity US Treasury Bills. Since these sovereign bonds exhibited values close to zero during a large part of the investment period analyzed in this paper, in the empirical part we follow Hitaj and Zambruno (2016) and set \(r_f = 0\).

Another problem related to this indicator is its incoherence with preference relations in periods of market downturn, when the expected excess return of the portfolio is negative. In these cases, a Sharpe ratio-oriented agent could select portfolios with higher volatility. To overcome this issue, when the expected excess rate of return is negative we consider only the first moment, meaning that, between two portfolios with negative expected excess rate of return, we prefer the one with the smaller expected loss.
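For concreteness, a minimal Python sketch of this modified objective follows, assuming \(r_f = 0\) as in the empirical part and taking simulated portfolio returns as input; the function name and interface are ours for illustration.

```python
import numpy as np

def sharpe_objective(portfolio_returns, r_f=0.0):
    # Sharpe ratio with the negative excess return adjustment: when the expected
    # excess return is negative, only the first moment is used, so that the
    # portfolio with the smaller expected loss is preferred.
    excess = np.mean(portfolio_returns) - r_f
    if excess < 0:
        return excess                                   # rank by expected loss only
    return excess / np.std(portfolio_returns, ddof=1)   # classical Sharpe ratio

# toy usage with simulated scenario returns
rng = np.random.default_rng(0)
print(sharpe_objective(rng.normal(0.01, 0.05, size=1000)))
```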

3.1.2 Adjusted for skewness Sharpe ratio

As observed in the introduction, a possible drawback of the Sharpe ratio is that it uses only the first two moments of the portfolio returns distribution, and does not consider the potential upside return represented by positive skewness. For this reason, Zakamuline and Koekebakker (2009) have derived an adjustment to the Sharpe ratio for skewness, which depends on the investor’s utility function.

The proposed alternative performance measure is called adjusted for skewness Sharpe ratio (ASSR), and is given by

$$\begin{aligned} ASSR_b(\textbf{x}) = SR(\textbf{x}) \sqrt{1+ b\frac{S_3(R_{{\textbf {x}}})}{3}SR({\textbf {x}})} \end{aligned}$$
(2)

where b expresses the individual’s relative preference to the third moment of the returns distribution, and \(S_3(R_{{\textbf {x}}})\) is the skewness, defined by \(E \left[ \left( \frac{R_{{\textbf {x}}} - \mu }{\sigma (R_{\textbf{x}})} \right) ^3 \right]\).

The properties of this performance measure have been investigated by Cheridito and Kromer (2013). In particular, we can note that \(ASSR_b\) reduces to the standard Sharpe ratio for zero skewness, while it is higher if the skewness is positive. However, in order to compute \(ASSR_b\) one needs to determine the value of b, which depends on the choice of the utility function. Thus, this performance measure is not unique for all investors, but is rather an individual performance measure. In this work, we will set \(b = 1\), meaning that we consider an investor with exponential utility, and we will denote the resulting measure by ASSR.

3.1.3 Adjusted for skewness and kurtosis Sharpe ratio

In order to account for both skewness and kurtosis, Pézier and White (2008) have proposed the so-called adjusted for skewness and kurtosis Sharpe ratio (AKSR), that is

$$\begin{aligned} AKSR(\textbf{x}) = SR(\textbf{x})\left[ 1 + \frac{S_3(R_{{\textbf {x}}})}{3!}SR(\textbf{x}) - \left( \frac{K_4(R_{{\textbf {x}}})-3}{4!}\right) SR(\textbf{x})^2 \right] \end{aligned}$$
(3)

where \(K_4(R_{{\textbf {x}}})\) is the kurtosis, defined by \(E \left[ \left( \frac{R_{{\textbf {x}}} - \mu }{\sigma (R_{\textbf{x}})} \right) ^4 \right]\).

According to this performance measure, an investor prefers portfolios with higher skewness and dislikes portfolios with kurtosis values higher than 3, meaning that leptokurtic portfolio distributions are penalized in order to avoid extreme events.
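The following Python sketch illustrates how the three ratios could be computed from given sample moments; it assumes \(b = 1\) for the ASSR, as adopted in this work, and the function names are ours.

```python
import numpy as np

def sr(mu, sigma, r_f=0.0):
    # classical Sharpe ratio, Eq. (1)
    return (mu - r_f) / sigma

def assr(mu, sigma, skew, b=1.0, r_f=0.0):
    # adjusted for skewness Sharpe ratio, Eq. (2), with b = 1 (exponential utility)
    s = sr(mu, sigma, r_f)
    return s * np.sqrt(1.0 + b * skew / 3.0 * s)

def aksr(mu, sigma, skew, kurt, r_f=0.0):
    # adjusted for skewness and kurtosis Sharpe ratio, Eq. (3)
    s = sr(mu, sigma, r_f)
    return s * (1.0 + skew / 6.0 * s - (kurt - 3.0) / 24.0 * s**2)

# toy example: slightly right-skewed, leptokurtic portfolio
print(assr(0.02, 0.05, skew=0.3), aksr(0.02, 0.05, skew=0.3, kurt=4.0))
```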

3.2 Portfolio optimization model

Now, we introduce the family of constraints used to define the set of admissible portfolios.

  1) Budget constraint. We require that all the available capital is invested. In terms of portfolio weights this translates into

    $$\begin{aligned} \sum _{i=1}^{n} x_i = 1. \end{aligned}$$
    (4)
  2) Cardinality constraint. We assume that the portfolio includes up to K stocks out of the n available, where \(K < n\) is a predefined number. To model the inclusion or the exclusion of the i-th asset in the portfolio, an auxiliary variable \(\delta _i\) is defined as follows

    $$\begin{aligned} \delta _i = \left\{ \begin{array}{l} 1\text{, } \text{ if } \text{ asset } i\hbox { is included} \\ 0\text{, } \text{ otherwise } \end{array} \right. \end{aligned}$$

    for \(i = 1,\ldots ,n\).

    The resulting vector of selected assets is \(\mathbf {\delta } = (\delta _1,\ldots ,\delta _n)\). We can write the cardinality constraint as

    $$\begin{aligned} \sum _{i=1}^{n} \delta _i \le K. \end{aligned}$$
    (5)
  3) Box constraints. To avoid extreme positions and foster diversification, we introduce a maximum and a minimum limit for the wealth allocation in the i-th stock included in the portfolio. Let \(l_i\) and \(u_i\) be respectively the lower and the upper bound for the weight of the i-th asset, with \(0< l_i < u_i \le 1\); then we can write the box constraints as

    $$\begin{aligned} \delta _i l_i \le x_i \le \delta _i u_i,\quad i = 1,\ldots ,n \, . \end{aligned}$$
    (6)

    Note that if an asset is not included in the portfolio, no capital is invested in it.

  4) Turnover constraint. In every rebalancing phase, the portfolio composition used in the previous investment window, denoted by \(\textbf{x}_0\), is updated. Let \(\textbf{x}\) be the vector of weights in the rebalanced portfolio and \(\widetilde{\textbf{x}}_0\) be the vector of re-normalized weights associated with \(\textbf{x}_0\) (see Shen et al. 2014), which is calculated component-wise as

    $$\begin{aligned} \widetilde{x}_{0,i} = \frac{x_{0,i}(r_{i}+1)}{\sum _{j=1}^{n} x_{0,j}(r_{j}+1)} \end{aligned}$$

    and \(r_i\), \(i =1,\dots ,n\) is the rate of return of the i-th stock in the portfolio at the moment of the rebalancing phase. Then, the portfolio turnover constraint is given by

    $$\begin{aligned} \sum _{i=1}^{n} |x_{i} - \widetilde{x}_{0,i} |\le TR \end{aligned}$$
    (7)

    where TR denotes the maximum turnover rate, which lies between 0 and 1. It can be noted that if \(TR=0\) rebalancing is not allowed, and as TR increases more trades are allowed.

We indicate with \(\mathcal {X}\) the feasible set comprising the pairs \((\delta ,\textbf{x})\) that satisfy (4), (5), (6) and (7).
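As an illustration, a minimal Python sketch of a feasibility check for a candidate pair \((\delta ,\textbf{x})\) against (4)-(7) might look as follows; the tolerance and function name are ours, and \(\delta\) is recovered from the positive components of \(\textbf{x}\).

```python
import numpy as np

def is_feasible(x, x0_tilde, K, l, u, TR, tol=1e-8):
    # checks the budget (4), cardinality (5), box (6) and turnover (7) constraints
    delta = (x > 0).astype(int)                          # auxiliary selection variables
    budget = abs(x.sum() - 1.0) <= tol
    cardinality = delta.sum() <= K
    box = np.all((x >= delta * l - tol) & (x <= delta * u + tol))
    turnover = np.abs(x - x0_tilde).sum() <= TR + tol
    return budget and cardinality and box and turnover
```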

Summing up, our asset allocation model can be written as

$$\begin{aligned} \begin{aligned} \max \quad&\Phi (\textbf{x})\\ \text {s.t.} \quad&\left( \delta ,\textbf{x}\right) \in \mathcal {X} \end{aligned} \end{aligned}$$

where \(\Phi (\textbf{x})\) is one of the three Sharpe ratio-based performance measures introduced in Sect. 3.1. As is customary in the mathematical programming literature, we transform this maximization problem into the equivalent minimization instance

$$\begin{aligned} \begin{aligned} \min \quad&f(\textbf{x})\\ \text {s.t.} \quad&\left( \delta ,\textbf{x}\right) \in \mathcal {X} \end{aligned} \end{aligned}$$
(8)

where \(f(\textbf{x}) = - \Phi (\textbf{x})\).

3.2.1 Scenario generation technique

To solve the optimization problem (8), we use the following scenario generation technique. To estimate the values of the considered performance measures for a given portfolio \(\textbf{x}\), we need to simulate the distribution of the h-step ahead rate of return \(R_{\textbf{x}}\). To this end, we consider the historical rates of return of the n risky assets realized on the time window [0, T], assuming that historical observations are good proxies for the future rates of return. We define a scenario as the set of the joint realizations of the rates of return of the n assets in a given time period. Due to the good performance of block bootstrapping techniques in preserving correlations between time series (see, for instance, Guastaroba et al. 2009), we adopt the so-called stationary bootstrap (Politis and Romano 1994). This technique considers a random block size, that is, a set of consecutive scenarios with variable length, in order to bring some robustness with respect to the standard block bootstrap, which uses a fixed block size. The procedure works as follows. First, we select the optimal average block size \(B^*\) by the procedure developed in Politis and White (2004). Then, we randomly extract from the observed data a block of length \(B^*\). We repeat this step until the extracted sample reaches the desired size h, adjusting the last block length if the procedure exceeds the desired number of periods.
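A minimal Python sketch of the block-sampling step is reported below; it assumes the average block size \(B^*\) has already been selected and, following the original stationary bootstrap, draws blocks with geometrically distributed lengths and circular wrap-around. The function name and interface are ours.

```python
import numpy as np

def stationary_bootstrap_sample(returns, h, avg_block, rng):
    # Draw one bootstrap path of length h from the T x n matrix of historical
    # returns by concatenating blocks with geometrically distributed lengths
    # whose mean is avg_block (stationary bootstrap of Politis and Romano, 1994).
    T = returns.shape[0]
    sample = []
    while len(sample) < h:
        start = rng.integers(T)                         # random block start
        length = rng.geometric(1.0 / avg_block)         # random block length, mean avg_block
        idx = (start + np.arange(length)) % T           # circular wrap-around
        sample.extend(returns[idx][: h - len(sample)])  # trim the last block if needed
    return np.asarray(sample)                           # h x n matrix of rates of return

# toy usage on simulated daily returns for 5 assets
rng = np.random.default_rng(0)
hist = rng.normal(0.0, 0.01, size=(1000, 5))
scenario = stationary_bootstrap_sample(hist, h=21, avg_block=10, rng=rng)
```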

After obtaining a bootstrap sample of h rates of return for each asset i, denoted as \(\widehat{R}_{i,t'}\), with \(i=1,\ldots ,n\) and \(t'=1,\ldots ,h\), we calculate the simulated h-step ahead rate of return of the i-th asset as \(\widehat{R}_{i} = \prod _{t'=1}^{h} \left( 1+\widehat{R}_{i,t'}\right) - 1\) and the simulated h-step ahead rate of return of portfolio \(\textbf{x}\) as \(\widehat{R}_{\textbf{x}} = \sum _{i=1}^{n} x_i \widehat{R}_{i}\). We repeat this procedure S times to obtain an estimate of the empirical distribution of \(R_{\textbf{x}}\). With a slight abuse of notation, let \(\widehat{R}_{\textbf{x}}(s)\) be the s-th simulation of the h-step ahead rate of return of portfolio \(\textbf{x}\). We can then calculate all the quantities used to evaluate the Sharpe ratio-based performance measures. The sample mean of the empirical distribution is given by

$$\begin{aligned} \widehat{\mu }= \frac{1}{S}\sum _{s=1}^{S}\widehat{R}_{\textbf{x}}(s) \end{aligned}$$

and the sample standard deviation is

$$\begin{aligned} \widehat{\sigma }= \sqrt{\frac{1}{S-1}\sum _{s=1}^{S}(\widehat{R}_{\textbf{x}}(s)-\widehat{\mu })^2}\, . \end{aligned}$$

Similarly, we estimate the skewness and kurtosis as follows

$$\begin{aligned} \widehat{S}_3(R_\textbf{x}) = \frac{1}{S-1}\sum _{s=1}^{S} \left( \frac{\widehat{R}_{\textbf{x}}(s)-\widehat{\mu }}{\widehat{\sigma }} \right) ^3 \end{aligned}$$

and

$$\begin{aligned} \widehat{K}_4(R_\textbf{x}) = \frac{1}{S-1}\sum _{s=1}^{S} \left( \frac{\widehat{R}_{\textbf{x}}(s)-\widehat{\mu }}{\widehat{\sigma }} \right) ^4 \, . \end{aligned}$$
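A compact Python sketch of the whole estimation step follows; it takes an \((S, h, n)\) array of bootstrapped one-period returns (e.g., produced by the sampling sketch above) and returns the four sample statistics with the conventions reported above. Names and interface are ours.

```python
import numpy as np

def portfolio_sample_moments(x, scenarios):
    # scenarios: (S, h, n) array of bootstrapped one-period asset rates of return
    asset_h = np.prod(1.0 + scenarios, axis=1) - 1.0   # S x n matrix of h-step asset rates of return
    sims = asset_h @ x                                  # S simulated h-step portfolio rates of return
    S = sims.size
    mu = sims.mean()
    sigma = sims.std(ddof=1)
    z = (sims - mu) / sigma
    skew = (z**3).sum() / (S - 1)                       # sample skewness, 1/(S-1) convention
    kurt = (z**4).sum() / (S - 1)                       # sample kurtosis, 1/(S-1) convention
    return mu, sigma, skew, kurt
```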

4 Optimization algorithm

After introducing the LLSO paradigm, we present the dynamic LLSO with the proposed improvements as well as the hybrid constraint handling technique.

4.1 Level-based learning swarm optimizer

The LLSO algorithm, developed by Yang et al. (2018), evolves a swarm \(\mathcal {P}\) of NP candidate solutions using the so-called level-based population strategy. This process operates in accordance with the following two steps.

  1. First, at each generation g the individuals in \(\mathcal {P}\) are sorted in ascending order according to their fitness and grouped into \(NL_g\) levels. Each level contains \(LP_g = \lfloor \frac{NP}{NL_g}\rfloor\) particles, except the last one, which contains \(\lfloor \frac{NP}{NL_g} \rfloor + NP \% NL_g\) candidate solutions. Better individuals belong to higher levels, and a higher level corresponds to a smaller level index. Therefore, we denote with \(L_1\) the best level and with \(L_{NL_g}\) the worst one.

  2. Individuals belonging to the first level \(L_1\) are not updated and directly enter the next generation, because they represent the most valuable information conveyed by the swarm at the current generation. On the contrary, the p-th particle in level \(L_l\), denoted by \(\textbf{x}^{l,p}(g)\), where \(l = 3,\ldots ,NL_g\) and \(p = 1,\ldots ,LP_g\), is allowed to learn from two particles \(\textbf{x}^{l_1,p_1}(g)\) and \(\textbf{x}^{l_2,p_2}(g)\). These two individuals are randomly extracted from two different higher levels \(L_{l_1}\) and \(L_{l_2}\), with \(l_1 < l_2\), and \(p_1\), \(p_2\) randomly chosen from \(\{1,\ldots ,LP_g\}\). For \(l=2\), we sample two particles from \(L_1\) in such a way that \(\textbf{x}^{l_1,p_1}(g)\) is better than \(\textbf{x}^{l_1,p_2}(g)\) in terms of fitness. Thus, the update rule for particle \(\textbf{x}^{l,p}(g)\) is given by

    $$\begin{aligned}&\textbf{v}^{l,p}(g+1) = r_1 \textbf{v}^{l,p}(g) + r_2 (\textbf{x}^{l_1,p_1}(g)-\textbf{x}^{l,p}(g))+ \psi r_3 (\textbf{x}^{l_2,p_2}(g)-\textbf{x}^{l,p}(g)) \end{aligned}$$
    (9)
    $$\begin{aligned}&\textbf{x}^{l,p}(g+1) = \textbf{x}^{l,p}(g) + \textbf{v}^{l,p}(g+1) \end{aligned}$$
    (10)

    where \(\textbf{v}^{l,p}(g)\) is the so-called velocity of particle p in level \(L_l\) at generation g, and \(r_1\), \(r_2\), \(r_3\) are three real numbers randomly generated within [0, 1]. The initial velocities, at generation 0, are all set equal to the zero vector, that is \(\textbf{v}^{l,p}(0) = \textbf{0}\). The parameter \(\psi \in [0,1]\) controls the influence of the less performing exemplar \(\textbf{x}^{l_2,p_2}(g)\) on \(\textbf{v}^{l,p}(g+1)\).

The algorithm repeats these two steps until a maximum number of generations, \(MAX_{GEN}\), is reached.
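The following Python sketch illustrates one generation of this level-based update for a minimization problem, following Eqs. (9)-(10); it is a simplified illustration with our own naming, not the reference implementation.

```python
import numpy as np

def llso_generation(X, V, fitness, n_levels, psi, rng):
    # One generation of the level-based learning update, Eqs. (9)-(10).
    # X, V: NP x n position and velocity arrays; fitness: callable to minimize.
    NP = X.shape[0]
    order = np.argsort([fitness(x) for x in X])          # best (lowest) fitness first
    X, V = X[order].copy(), V[order].copy()
    lp = NP // n_levels                                   # particles per level (last level keeps the remainder)
    levels = [np.arange(l * lp, (l + 1) * lp) if l < n_levels - 1 else np.arange(l * lp, NP)
              for l in range(n_levels)]
    for l in range(1, n_levels):                          # level L_1 (index 0) is not updated
        for p in levels[l]:
            if l == 1:                                    # special case: both exemplars from L_1
                i1, i2 = sorted(rng.choice(levels[0], size=2, replace=False))
            else:                                         # two distinct higher levels l1 < l2
                l1, l2 = sorted(rng.choice(l, size=2, replace=False))
                i1, i2 = rng.choice(levels[l1]), rng.choice(levels[l2])
            r1, r2, r3 = rng.random(3)
            V[p] = r1 * V[p] + r2 * (X[i1] - X[p]) + psi * r3 * (X[i2] - X[p])  # Eq. (9)
            X[p] = X[p] + V[p]                                                  # Eq. (10)
    return X, V
```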

4.1.1 Dynamic LLSO with clamping and reversion

Following the suggestions in Yang et al. (2018), we adopt a dynamic setting for the number of levels \(NL_g\) by designing a pool \(S = \{ l_1,\dots , l_s \}\) containing s different candidate integers. At each generation g, the algorithm selects one of the elements of S based on their selection probabilities. At the end of the generation, the performance of the algorithm with the current level number is recorded, in order to update the probability of this level number for the selection at the next generation. To compute the probabilities of the elements of S, a record list \(\Upsilon _s = \{ \gamma _1,\dots ,\gamma _s \}\) is defined. Initially, each \(\gamma _i\) is set equal to 1. Then, the element \(\gamma _i\) corresponding to the level number \(l_i\) used in the current generation is updated as follows

$$\begin{aligned} \gamma _i = \frac{|F - \tilde{F} |}{|F |} \end{aligned}$$
(11)

where F is the global best fitness of the last generation, and \(\tilde{F}\) is the global best fitness of the current generation. Then, the i-th element of the probability vector \(P_s = \{p_1,\dots ,p_s \}\) is computed as

$$\begin{aligned} p_i = \frac{e^{7 \cdot \gamma _i}}{\sum _{j=1}^{s} e^{7\cdot \gamma _j}} \end{aligned}$$
(12)

with \(i=1,\dots ,s\). At each generation, based on \(P_s\), an integer from the pool S is selected as the level number following a roulette wheel scheme.
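A minimal Python sketch of this adaptive selection mechanism, based on Eqs. (11)-(12) and a roulette wheel over the pool S, is given below; the variable names and the toy values are ours.

```python
import numpy as np

def update_level_probabilities(gammas, chosen, F_prev, F_curr):
    # Eq. (11): relative improvement of the global best fitness obtained with the
    # chosen level number, followed by the exponential weighting of Eq. (12).
    gammas[chosen] = abs(F_prev - F_curr) / abs(F_prev)
    weights = np.exp(7.0 * gammas)
    return gammas, weights / weights.sum()

# toy usage: pool of candidate level numbers and roulette-wheel selection
rng = np.random.default_rng(1)
pool = np.array([4, 6, 8, 10, 20, 50])
gammas = np.ones(len(pool))                  # initial record list
probs = np.full(len(pool), 1.0 / len(pool))  # Eq. (12) with all gammas equal to 1
nl = rng.choice(pool, p=probs)               # roulette-wheel selection of the level number
gammas, probs = update_level_probabilities(gammas, int(np.where(pool == nl)[0][0]),
                                           F_prev=-1.20, F_curr=-1.25)
```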

From now on, we will omit the dependency on the generation g when it is clear from the context.

Based on the results of a preliminary analysis, we introduce a “clamping and reversion” procedure to increase the exploration capability of the LLSO. The clamping mechanism is applied component-wise to the velocity vector in (9) as follows

$$\begin{aligned} v^{l,p}_i = \min \{\max \{v^{l,p}_i,v_i^{\textrm{min}}\},v_i^{\textrm{max}}\} \end{aligned}$$
(13)

where \(v_i^{\textrm{min}}\) and \(v_i^{\textrm{max}}\) are the minimum and the maximum velocity allowed for component i, with \(i = 1,\ldots ,n\). In the experimental part, recalling (6), we set the maximum velocity as \(v_i^{\textrm{max}} = u_i\) and the minimum velocity as \(v_i^{\textrm{min}} = -v_i^{\textrm{max}}\). Moreover, when \(v^{l,p}_i\) and \(x^{l,p}_i\) are both negative, we reverse and scale \(v^{l,p}_i\) as follows:

$$\begin{aligned} v_i^{l,p} = - r_4 \cdot v_i^{l,p} \end{aligned}$$
(14)

where \(r_4\) is a random number uniformly generated in [0, 1].
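A compact Python sketch of the clamping and reversion step, Eqs. (13)-(14), follows; in this illustration the random scaling factor is drawn independently for each reversed component, which is our reading of (14).

```python
import numpy as np

def clamp_and_reverse(v, x, v_max, rng):
    # velocity clamping, Eq. (13), with v_min = -v_max as in the experiments
    v = np.clip(v, -v_max, v_max)
    # reversion step, Eq. (14): components where both velocity and position are
    # negative are reversed and rescaled by a uniform random factor
    mask = (v < 0) & (x < 0)
    v[mask] = -rng.random(mask.sum()) * v[mask]
    return v

# toy usage on a two-asset example
rng = np.random.default_rng(2)
v_new = clamp_and_reverse(np.array([-0.3, 0.5]), np.array([-0.1, 0.2]),
                          v_max=np.array([0.2, 0.2]), rng=rng)
```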

4.2 Hybrid constraint-handling procedure

In the so-called construction phase, admissible portfolios have to satisfy cardinality, buy-in threshold, budget and turnover constraints. However, the LLSO update procedure is blind to these constraints. To overcome this issue, we equip the solver with a hybrid constraint-handling procedure.

First, in order to ensure the cardinality requirement, for each candidate solution \(\textbf{x}^{p}\) the K largest components enter the corresponding portfolio, while zero weights are assigned to the remaining \(n-K\). In this manner, we implicitly remove the binary variables from problem (8).

To guarantee the feasibility with respect to the bound constraints (6), we consider the following projection

$$\begin{aligned} x_i^{p} = \min \left\{ \max \left\{ x_i^{p},l_i \right\} , u_i \right\} \end{aligned}$$
(15)

where \(p=1,\dots ,NP\) and \(i \in I_{K}^p = \left\{ i=1,\dots ,n :x_i^p > 0 \right\}\). Note that \(|I_{K}^p |\le K\). Then, we use the repair transformations developed in Meghwani and Thakur (2017) to also satisfy the budget constraint (4). More precisely, for each \(p = 1,\dots ,NP\), assuming that \(l_i\) and \(u_i\) are such that \(\sum _{i \in I_K^p} l_i < 1\) and \(\sum _{i \in I_K^p} u_i > 1\), we adjust the candidate solution \(\textbf{x}^p\) component-wise:

$$\begin{aligned} x^p_i = \left\{ \begin{aligned}&l_i + \frac{({x}^p_i - l_i)}{\sum _{j \in I_K^p} ({x}^p_j - l_j)} \left( 1 - \sum _{j \in I_K^p} l_j \right) \, , \text {if }\quad \sum _{j \in I_K^p} {x}^p_j > 1 \\&{x}^p_i, \quad \text {if}\quad \sum _{j \in I_K^p} {x}^p_j = 1\, \\&u_i - \frac{(u_i - {x}^p_i)}{\sum _{j \in I_K^p} (u_j - {x}^p_j)} \left( \sum _{j \in I_K^p} u_j - 1 \right) \, , \text {if}\quad \sum _{j \in I_K^p} {x}^p_j < 1 \end{aligned} \right. \end{aligned}$$
(16)

for all \(i \in I_K^p\). As proved in Meghwani and Thakur (2017), solutions transformed through (16) fulfill at the same time budget and box constraints.
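A Python sketch of the combined cardinality projection and box/budget repair, Eqs. (15)-(16), could read as follows; it assumes \(\sum _{i \in I_K^p} l_i < 1 < \sum _{i \in I_K^p} u_i\), as stated above, and ties among the K largest components are ignored for brevity.

```python
import numpy as np

def repair_portfolio(x, K, l, u):
    # cardinality projection: keep the K largest components, zero out the others
    x = x.copy()
    keep = np.argsort(x)[-K:]
    mask = np.zeros(x.size, dtype=bool)
    mask[keep] = True
    x[~mask] = 0.0
    # projection (15) of the selected components onto the box [l_i, u_i]
    x[keep] = np.clip(x[keep], l[keep], u[keep])
    # budget repair (16): redistribute the surplus or deficit without leaving the box
    total = x[keep].sum()
    if total > 1.0:
        x[keep] = l[keep] + (x[keep] - l[keep]) / (x[keep] - l[keep]).sum() * (1.0 - l[keep].sum())
    elif total < 1.0:
        x[keep] = u[keep] - (u[keep] - x[keep]) / (u[keep] - x[keep]).sum() * (u[keep].sum() - 1.0)
    return x
```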

Finally, to handle the turnover constraint, we use the \(\ell _1\)-exact penalty function approach as in Corazza et al. (2021). In particular, we define the constraint violation of (7) as

$$\begin{aligned} CV = \max \left\{ \sum _{i=1}^{n} |x_i - \tilde{x}_{0,i} |- TR, 0 \right\} . \end{aligned}$$
(17)

Then, we introduce the \(\ell _1\)-exact penalty function

$$\begin{aligned} F_{\ell _1}(\textbf{x} , \varepsilon _0(g), \, \varepsilon _1(g)) = f(\textbf{x})+ \frac{\varepsilon _1(g)}{\varepsilon _0(g)}\, CV \end{aligned}$$
(18)

where \(\varepsilon _0\) and \(\varepsilon _1\) are two positive real numbers, defined adaptively at each generation g. Initially, \(\varepsilon _0(0) = 10^{-4}\) and \(\varepsilon _1(0) = 1\) in order to privilege feasible solutions. These parameters are then updated by checking the decrease of the objective function \(f(\textbf{x})\) and the violation of the constraints. More precisely, every 5 iterations \(\varepsilon _0(g)\) is updated according to the rule

$$\begin{aligned} \varepsilon _0(g+1) = {\left\{ \begin{array}{ll} \min \{3 \cdot \varepsilon _0(g), \, 1 \} \quad \text { if }\quad f(\textbf{x}(g))\ge f(\textbf{x}(g-1))\\ \max \{0.6 \cdot \varepsilon _0(g), \, 10^{-15} \} \quad \text { if }\quad f(\textbf{x}(g))< 0.9 \cdot f(\textbf{x}(g-1))\\ \varepsilon _0(g) \quad \text { otherwise} \end{array}\right. } \end{aligned}$$
(19)

while, every 10 iterations, \(\varepsilon _1(g)\) is updated following the scheme

$$\begin{aligned} \varepsilon _1(g+1)={\left\{ \begin{array}{ll} \min \{2 \cdot \varepsilon _1(g), \, 10^4 \} \quad \text { if }\quad CV(g)> 0.95\cdot CV(g-1)\\ \max \{0.5 \cdot \varepsilon _1(g), \, 10^{-4} \} \quad \text { if }\quad CV(g) < 0.9 \cdot CV(g-1) \\ \varepsilon _1(g) \quad \text { otherwise}. \end{array}\right. } \end{aligned}$$
(20)

With this strategy, we privilege the optimality of solutions, possibly at the expense of their feasibility: since \(\varepsilon _0(g+1)\) in (19) is increased when the objective value \(f(\textbf{x}(g))\) does not decrease, the penalty weight \(\varepsilon _1/\varepsilon _0\) in \(F_{\ell _1}\) becomes smaller. Conversely, to favour the feasibility of solutions, possibly at the expense of their optimality, the penalty parameter \(\varepsilon _1(g+1)\) in (20) is increased when the constraint violation in the g-th generation has not sufficiently decreased with respect to the previous one.

Using the penalty approach, the constrained optimization problem (8) reduces to an unconstrained one in which we minimize \(F_{\ell _1}\), and it can thus be solved by the proposed LLSO variant.
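The penalty evaluation and the adaptive parameter updates of Eqs. (17)-(20) can be sketched in Python as follows; the update frequencies (every 5 and 10 iterations) are assumed to be handled by the caller.

```python
import numpy as np

def penalized_objective(x, x0_tilde, TR, f, eps0, eps1):
    # l1-exact penalty function, Eqs. (17)-(18), for the turnover constraint
    cv = max(np.abs(x - x0_tilde).sum() - TR, 0.0)
    return f(x) + eps1 / eps0 * cv

def update_eps0(eps0, f_curr, f_prev):
    # Eq. (19), applied every 5 iterations
    if f_curr >= f_prev:
        return min(3.0 * eps0, 1.0)
    if f_curr < 0.9 * f_prev:
        return max(0.6 * eps0, 1e-15)
    return eps0

def update_eps1(eps1, cv_curr, cv_prev):
    # Eq. (20), applied every 10 iterations
    if cv_curr > 0.95 * cv_prev:
        return min(2.0 * eps1, 1e4)
    if cv_curr < 0.9 * cv_prev:
        return max(0.5 * eps1, 1e-4)
    return eps1
```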

4.3 Initialisation strategy

Due to the complexity of the problem and to the fact that the search space grows exponentially with the dimension, the common strategy of seeking search-space coverage by initializing the particles uniformly throughout the space is inefficient (see van Zyl and Engelbrecht 2015). In particular, for our portfolio optimization problems, the presence of the turnover constraint further complicates the initialization phase. To address this issue, we directly initialize the candidate solutions in a neighbourhood of \(\textbf{x}_0\), as proposed by Kaucic et al. (2023). A brief description of the procedure follows. Let \(d^{min}_i\) and \(d^{max}_i\) be the minimum and the maximum allowed weight changes for \(\textbf{x}_{0,\, i}\) respectively, with \(i = 1, \ldots , n\). Let \(D^p\) denote the total portfolio weight allowed to be re-allocated in \(\textbf{x}_0\) for defining the p-th candidate solution \(\textbf{x}^p(0)\) at generation 0, with \(p = 1, \ldots , NP\). Then, for each p,

  1. we randomly select \(D^p\) within \(\left[ 0,\, \frac{TR}{2}\right]\);

  2. we select a subset \(J^-\) of \(K'\) assets from the assets with positive weight in \(\textbf{x}_0\), so that

    $$\begin{aligned} x^p_j(0) = x_{0,\, j} - d_j,\, \text { for } j \in J^- \end{aligned}$$

    where \(d_j\) is randomly sampled in \(\left[ d^{min}_j,\, d^{max}_j\right]\) in such a way that \(\sum _{j \in J^-} d_j = D^p\), and \(x^p_j(0) = 0\) or \(l_j \le x^p_j(0) \le u_j\);

  3. we select a subset \(J^+\) of \(K''\) assets from the assets with zero weight in \(\textbf{x}_0\), with \(K'' \le K'\), so that

    $$\begin{aligned} x^p_j(0) = x_{0,\, j} + d_j,\, \text { for } j \in J^+ \end{aligned}$$

    where \(d_j\) is randomly sampled in \(\left[ d^{min},\, d^{max}\right]\) in such a way that \(\sum _{j \in J^+} d_j = D^p\), and \(l_j \le x^p_j(0) \le u_j\);

  4. for \(j \in I \setminus \left( J^- \cup J^+\right)\), we set \(x^p_j(0) = x_{0,\, j}\).

The portfolios assembled using this scheme satisfy the cardinality, buy-in threshold and turnover constraints. This initialization strategy encourages the swarm to focus on exploitation rather than exploration, and allows the identification of promising solutions even in problems with high dimension and small feasible regions.
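A simplified Python sketch of this initialization is reported below; for brevity, the sizes of \(J^-\) and \(J^+\) are fixed parameters, the amounts \(d_j\) are drawn from a scaled Dirichlet distribution rather than from \([d^{min}_j,\, d^{max}_j]\), and the per-asset buy-in bounds and the cardinality check are deferred to the repair step described in Sect. 4.2.

```python
import numpy as np

def initialize_particle(x0, TR, rng, k_minus=5, k_plus=3):
    # Move a total weight D <= TR/2 from k_minus currently held assets to k_plus
    # currently excluded ones: the budget is preserved exactly and the turnover
    # with respect to x0 stays below TR.
    x = x0.copy()
    D = rng.uniform(0.0, TR / 2.0)
    held = np.flatnonzero(x0 > 0)
    empty = np.flatnonzero(x0 == 0)
    j_minus = rng.choice(held, size=min(k_minus, held.size), replace=False)
    j_plus = rng.choice(empty, size=min(k_plus, empty.size), replace=False)
    d_minus = rng.dirichlet(np.ones(j_minus.size)) * D                # random split of D over J^-
    d_minus = np.minimum(d_minus, x[j_minus])                         # never push a weight below zero
    x[j_minus] -= d_minus
    x[j_plus] += rng.dirichlet(np.ones(j_plus.size)) * d_minus.sum()  # re-invest the same amount in J^+
    return x
```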

5 Computational analysis

5.1 Data description and portfolio parameters

We select two equity investment universes that differ in the number of constituents and in the geographic area, to highlight the scalability of the proposed portfolio strategies. The first data set, called Pacific, consists of 323 assets selected among the constituents of the MSCI Pacific Index as of 28/07/2022. For the second data set, called World for short, we consider 1229 assets listed in the MSCI World Index as of 28/07/2022.

We obtain the daily rates of return and the market values for each asset from Bloomberg, covering the period 01/01/2008 to 28/07/2022 for a total of 3803 observations. For comparison purposes, we build up an auxiliary market value-weighted benchmark for each data set.

Table 1 reports a preliminary analysis concerning the normality assumption for the time series of rates of return in the two data sets. In both cases, the fraction of assets exhibiting high skewness is around 20%, while that with large kurtosis is close to 90%. The Jarque-Bera (J-B) test rejects the null hypothesis of normality at the 5% significance level for almost half of both samples.

Table 1 Preliminary analysis on the normality assumption of the assets in the two data sets

The goal of our computational analysis is twofold. On the one hand, we aim at pointing out the capabilities of the proposed dynamic LLSO algorithm in comparison to a state-of-the-art solver, which has been ad-hoc developed to solve cardinality-constrained portfolio optimization problems (see Corazza et al. 2021). On the other hand, we study the profitability of the proposed investment strategies focusing on the impact of both portfolio size and amount of trades.

To this end, we consider an investment plan with monthly portfolio rebalancing. The out-of-sample window is given by 126 months, covering the period from 02/01/2012 to 28/07/2022. For each month in the out-of-sample window, we generate 1000 scenarios of monthly rates of return for the assets in each data set by using the stationary bootstrap technique introduced above. The procedure employs an in-sample window of 1000 days, which is updated monthly by including the daily rates of return of the last month and by removing the information about the oldest month.

For the analysis, we set the buy-in thresholds \(l_i\) and \(u_i\) equal to 0.001 and 0.2 respectively, according to Kaucic and Piccotto (2022). We express the cardinality parameter K as a fraction \(K_{\%}\) of the number of assets in a given data set, that is \(K=\lfloor K_{\%}\cdot n \rfloor\), and pick \(K_{\%}\) from the set \(\{ 15\%,\, 30\%,\, 50\% \}\). The turnover rate TR is in \(\{ 10\%,\, 20\%,\, 40\% \}\). For each data set and each out-of-sample month, an instance of Problem (8) is then obtained by fixing one of the three Sharpe ratio-based performance measures, a value of \(K_{\%}\) and a value of TR, for a total of 27 alternative investment schemes.

Finally, the role of the trades is analyzed ex-post through the cost function introduced in Beraldi et al. (2021), with an initial wealth \(W_0=10,000,000\$ \).

5.2 Algorithm performance evaluation

In this subsection, we compare the performance of the proposed dynamic LLSO algorithm with the PSO developed by Corazza et al. (2021). The latter has been shown to efficiently tackle non-smooth portfolio optimization problems with real-world constraints.

For each data set, the test suite consists of the 27 instances of Problem (8) previously introduced, specified at three dates randomly drawn from the out-of-sample window. For each sampled date and each portfolio optimization problem, we compute the initial portfolio without including the turnover constraint. In the next out-of-sample month, we optimize the portfolio weights, this time accounting for the rebalancing constraint. It is worth noting that we select three different dates for the evaluations in order to avoid a possible time dependence of the results.

The parameter settings for the considered algorithms follow the suggestions in the reference papers. More specifically, for the dynamic LLSO we set \(\psi = 0.4\) and the set of candidate level numbers \(S = \left\{ 4, 6, 8, 10, 20, 50\right\}\), as in Yang et al. (2018). For the PSO variant by Corazza et al. (2021), we consider \(\omega _{min} = 0.4\), \(\omega _{max} = 0.9\), \(c_{1, min} = c_{2, min} = 0.5\), and \(c_{1, max} = c_{2, max} = 2.5\). To guarantee a fair comparison, we set for both solvers the maximum number of generations \(MAX_{GEN} = 1000\), and the swarm size NP equal to 300 for the Pacific data set and 500 for the World one. To obtain more robust results, we run each test instance 30 times. The analyses have been implemented in MATLAB 2023a and carried out on a 3.3 GHz Intel Core i9-7900X workstation with 16 GB of RAM. To prove the efficiency of the proposed algorithm, comparisons are made in terms of run time, capability to identify optimal solutions which satisfy the constraints, and accuracy in solving the optimization problems.

Due to the negligible impact of the turnover rate levels on the results, in the following we show only the findings related to \(TR = 20\%\). Tables 2 and 3 display the average computational time over 30 runs for the dynamic LLSO and the PSO. For both methods, we can observe that the results remain relatively stable when transitioning from one date to another, regardless of the cardinality threshold. However, the outcomes depend on the employed objective function and the number of involved decision variables. In our analysis, the PSO exhibits a slightly lower average computational time. This difference can be attributed to the proposed hybrid constraint-handling technique.

Table 2 Average computational time in seconds on 30 runs for the two compared algorithms, for the Pacific data set and the three ex-post dates, with \(TR = 20\%\) and increasing values of \(K_{\%}\)
Table 3 Average computational time in seconds on 30 runs for the two compared algorithms, for the World data set and the three ex-post dates, with \(TR = 20\%\) and increasing values of \(K_{\%}\)

Tables 4 and 5 show the average percentage of feasible solutions provided by the two solvers over the 30 runs at the final generation. Analyzing the results for the Pacific data set, it is evident that our LLSO is able to identify feasible solutions in almost all the cases. Conversely, the PSO algorithm struggles to properly handle the constraints on the first test date, while performing accurately on the other ones. Furthermore, the PSO is heavily influenced by the complexity of the objective function and by the increase of the portfolio size. As the number of decision variables increases, moving to the World case, the difference between the algorithms becomes even more pronounced. Specifically, for the cardinality threshold of \(50\%\), the dynamic LLSO consistently finds optimal solutions, while the PSO fails to identify feasible portfolios in any of the 30 runs.

Table 4 Percentage of optimal solutions that satisfy all the constraints, over the 30 runs for the two compared algorithms for the Pacific data set and for the three ex-post dates, with \(TR = 20\%\) and increasing values of \(K_{\%}\)
Table 5 Percentage of optimal solutions that satisfy all the constraints, over the 30 runs for the two compared algorithms for the World data set and for the three ex-post dates, with \(TR = 20\%\) and increasing values of \(K_{\%}\)

Moreover, we validate the capabilities of the proposed LLSO over the PSO in solving our test problems through a non-parametric statistical test. In particular, we focus on the average of the best values of the \(\ell _1\)-exact penalty function (18), and we conduct a Wilcoxon signed-rank test to determine if there is a significant difference in the distributions of the values obtained by the two algorithms (Derrac et al. 2011). The results are given in Table 6, where \(R^+\) is the sum of ranks for the problems in which LLSO outperformed PSO, and \(R^-\) denotes the sum of the ranks for the opposite. Based on the p-values reported on the last column of this table, we conclude that our dynamic LLSO outperforms its competitor at the \(5\%\) significance level in all the case studies.

Table 6 Wilcoxon signed-rank test results for the two data sets at different dates

5.3 Long-run sensitivity analysis

5.3.1 Ex-post performance metrics

In this subsection, we present the ex-post performance measures used to assess the profitability of the proposed investment strategies. Let \(r_{p,t}^{out}\) be the ex-post portfolio rate of return in the t-th month of the out-of-sample window. We compute the net wealth at time t as

$$\begin{aligned} W_{t} = (W_{t-1} - c_{t})\left( 1+r_{p,t}^{out} \right) \end{aligned}$$
(21)

where \(c_t\) represents the transaction costs associated to the rebalancing at time t. Given the optimal portfolio at time t, \(\textbf{x}_t\), and the re-normalized portfolio at time \(t-1\), \(\widetilde{\textbf{x}}_{t-1}\), we define the portfolio cost as the sum of the trading costs of each constituent, that is

$$\begin{aligned} c_t = \sum _{i=1}^{n} c(ts_{i,t}) \end{aligned}$$

where \(ts_{i,t} = W_{t-1}|x_{t,i} - \widetilde{x}_{t-1,i}|\) is the so-called trade segment for asset i at time t, with \(i=1,\dots ,n\) and \(t=1,\dots ,126\), and \(c(\cdot )\) is given by

$$\begin{aligned} c(ts_{i,t}) = {\left\{ \begin{array}{ll} 0 &{} ts_{i,t} = 0 \\ 40 &{} 0<ts_{i,t}<8000 \\ 0.05 \times ts_{i,t} &{} 8000\le ts_{i,t}<50,000 \\ 0.04 \times ts_{i,t} &{} 50,000\le ts_{i,t}<100,000 \\ 0.025 \times ts_{i,t} &{} 100,000\le ts_{i,t}<200,000 \\ 400 &{} ts_{i,t} \ge 200,000\, . \end{array}\right. } \end{aligned}$$
(22)

According to (22), we divide the traded monetary amount into non-overlapping intervals and apply a different cost percentage depending on the interval in which the traded capital lies. This transaction cost structure reflects the main configurations proposed nowadays by financial brokers (Beraldi et al. 2021).
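As an illustration, the cost schedule (22) and the resulting rebalancing cost \(c_t\) can be coded directly in Python; the function names are ours and monetary amounts are expressed in dollars.

```python
def trade_cost(ts):
    # transaction cost of a single trade segment ts (in dollars), Eq. (22)
    if ts == 0:
        return 0.0
    if ts < 8_000:
        return 40.0                 # flat fee for small trades
    if ts < 50_000:
        return 0.05 * ts
    if ts < 100_000:
        return 0.04 * ts
    if ts < 200_000:
        return 0.025 * ts
    return 400.0                    # capped fee for large trades

def rebalancing_cost(W_prev, x_new, x_old_tilde):
    # total cost c_t as the sum of the trading costs over all assets
    return sum(trade_cost(W_prev * abs(a - b)) for a, b in zip(x_new, x_old_tilde))
```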

Next, we evaluate the attractiveness of the proposed asset allocation strategies through the so-called compound annual growth rate (shortly CAGR), which is calculated as

$$\begin{aligned} CAGR = \left( \frac{W_{T_{out}}}{W_0}\right) ^{\frac{12}{T_{out}}}-1 \end{aligned}$$
(23)

where \(T_{out}\) denotes the length of the out-of-sample window, while \(W_0\) and \(W_{T_{out}}\) represent the initial wealth and the wealth at the end of the investment period, respectively.

We also compute the monthly ex-post average rate of return and the ex-post standard deviation

$$\begin{aligned} \mu ^{out}&= \frac{1}{T_{out}} \sum _{t=1}^{T_{out}} r_{p,t}^{out} \end{aligned}$$
(24)
$$\begin{aligned} \sigma ^{out}&= \sqrt{\frac{1}{T_{out}-1} \sum _{t=1}^{T_{out}} (r_{p,t}^{out} - \mu ^{out})^2} \, . \end{aligned}$$
(25)

Furthermore, to analyze more precisely the distribution of the ex-post returns, we calculate the ex-post skewness and kurtosis

$$\begin{aligned} S_3^{out}&= \frac{1}{T_{out}-1} \sum _{t=1}^{T_{out}} \left( \frac{r_{p,t}^{out} - \mu ^{out}}{\sigma ^{out}} \right) ^3 \end{aligned}$$
(26)
$$\begin{aligned} K_4^{out}&= \frac{1}{T_{out}-1} \sum _{t=1}^{T_{out}} \left( \frac{r_{p,t}^{out} - \mu ^{out}}{\sigma ^{out}}\right) ^4 \, . \end{aligned}$$
(27)

To evaluate the capability of a strategy to avoid high losses, we introduce the drawdown measure, which is defined as follows (Chekhlov et al. 2005)

$$\begin{aligned} DD_t = \min \left\{ 0,\frac{W_t - W_{peak}}{W_{peak}} \right\} \quad t=1, \dots , T_{out} \end{aligned}$$
(28)

where \(W_{peak}\) is the maximum amount of wealth reached by the strategy until time t. Particularly, we focus on the mean, standard deviation, and maximum value of the drawdown measure over time.

Finally, we measure the effect of the costs on the available capital in the out-of-sample period as in Kaucic et al. (2020) by

$$\begin{aligned} \Lambda _{\%} = \frac{1}{T_{out}}\sum _{t=1}^{T_{out}} \frac{c_t}{W_{t-1}} \cdot 100 \, . \end{aligned}$$
(29)
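The ex-post metrics above can be computed with the following Python sketch, which takes the monthly out-of-sample portfolio returns and transaction costs as inputs; names and interface are ours, and the annualization in (23) assumes monthly periods.

```python
import numpy as np

def ex_post_metrics(returns_out, costs, W0):
    # net wealth path (21), CAGR (23), drawdown series (28) and cost impact (29)
    T_out = len(returns_out)
    W = np.empty(T_out + 1)
    W[0] = W0
    cost_impact = 0.0
    for t in range(1, T_out + 1):
        cost_impact += costs[t - 1] / W[t - 1]
        W[t] = (W[t - 1] - costs[t - 1]) * (1.0 + returns_out[t - 1])   # Eq. (21)
    cagr = (W[-1] / W0) ** (12.0 / T_out) - 1.0                          # Eq. (23), monthly periods
    peaks = np.maximum.accumulate(W)
    drawdowns = np.minimum(0.0, (W - peaks) / peaks)                     # Eq. (28)
    return W, cagr, drawdowns, 100.0 * cost_impact / T_out               # last value is Lambda_% (29)
```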

5.3.2 Results for the two data sets

In this subsection we present the results of the ex-post analysis. We start by highlighting the relation between portfolio size, allowed trades in the rebalancing phases, and the ex-post performance. Tables 7 and 8 report the findings for the three proposed portfolio models on the Pacific and the World data sets, respectively. In the last row, we also include the results for the associated market value-weighted benchmark. Overall, we can observe that the impact of costs, expressed as a percentage of the total wealth, does not exceed \(0.057\%\) for the Pacific data set, whereas it attains \(0.15\%\) in the World case study. Therefore, the influence of transaction costs seems relatively marginal for the smaller data set and becomes more pronounced for the larger one. Furthermore, it is worth noting that increasing the cardinality and turnover thresholds leads to an expected rise in costs. If we fix TR and vary \(K_{\%}\), costs grow quasi-linearly. On the contrary, with \(K_{\%}\) fixed, the cost increase is not proportional to the doubling of the turnover levels, which is coherent with the transaction cost structure defined in (22).

Regarding the Pacific data set, the ASSR and AKSR-based investment plans show similar results in the ex-post analysis for all the choices of \(K_{\%}\) and TR, with the AKSR-based strategy with \(K_{\%}=15\%\) and \(TR=10\%\) outperforming the other tested alternatives.

For the World data set, maximizing the ASSR with \(K_{\%} = 15\%\) and \(TR = 10\%\) gives promising results in terms of ex-post returns, CAGR, and drawdown measures. However, this investment strategy lacks robustness, as it underperforms significantly in all other parameter configurations compared to both the AKSR and SR strategies. As in the Pacific case, the AKSR-based model appears to be the most resilient and viable choice for long-term investments.

Summing up, the optimal combination of the parameters is \(K_{\%}=15\%\) as cardinality threshold and \(TR=10\%\) as turnover rate. The comparison with the buy-and-hold strategy on the auxiliary benchmarks points out that the proposed models, in both investable universes, outperform the benchmark in terms of profits, generating a higher ex-post net wealth in the long run. In addition, the multi-moment strategies show less volatility and lower drawdown than the benchmark for the Pacific data set. However, for the World data set, the ASSR and AKSR-based investments present higher standard deviations and drawdowns.

Table 7 Results of the long-run sensitivity analysis for the three Sharpe ratio-based optimization strategies with the Pacific data set
Table 8 Results of the long-run sensitivity analysis for the three Sharpe ratio-based optimization strategies with the World data set. In the first two columns we report the value of the fraction \(K_{\%}\) of assets making up the portfolio and the value of the turnover rate, TR, respectively. The other columns show the results of the ex-post metrics presented in Sect. 5.3.1
Fig. 1 Evolution of the net wealth for the best three Sharpe ratio-based portfolio strategies (\(K_{\%} = 15\%\) and \(TR = 10\%\)) in comparison to the market value-weighted benchmark using the assets in the two data sets

5.3.3 Pre- and post-COVID analysis

Figure 1 reports the evolution of the net wealth for the best three Sharpe ratio-based portfolio strategies in each data set. The plot confirms the insights previously evidenced. All these investment strategies perform better than the auxiliary benchmarks. In the Pacific case study, we can observe that the SR investment plan is the one with the best performance until February 2020, the date of the pandemic outbreak, at which it realizes a very large drawdown. After this period, only the AKSR strategy seems able to take advantage of the market fluctuations of the last years. Similarly, in the World data set, the three models have a comparable net wealth evolution until February 2020. Then, the two multi-moment strategies perform better than the SR one, with the ASSR being the most profitable.

In Table 9 we highlight the different behaviours of the auxiliary markets and of the proposed strategies before the aforementioned date, namely the pre-COVID period, and after that date, namely the post-COVID period. For both data sets, the distributions of the rates of return in the pre-COVID period have a negative skewness, while the post-COVID rates of return show distributions closer to symmetry, with positive skewness. Conversely, regarding the benchmarks, the distribution of the rates of return presents a lower mean with negative skewness in both market phases. We summarize the results in Figs. 2 and 3: in both data sets, we observe a positive ex-post mean of the rates of return with a reduced dispersion in the pre-COVID period, while in the post-COVID epoch the ex-post mean of the rates of return is lower, with more dispersion and fatter tails.

These findings substantiate that extending the Sharpe ratio model by incorporating higher-order moments can yield financial performance benefits, particularly during periods characterized by market instability.

Table 9 Out-of-sample statistics for the distribution of the ex-post rates of return of the three compared strategies with \(K_{\%} = 15\%\) and \(TR = 10\%\) and the benchmark for the two data sets
Fig. 2 Density functions of the ex-post rates of return of the three strategies proposed in Sect. 3.1 for the Pacific data set in the pre-COVID period (first row) and post-COVID epoch (second row). The vertical blue line represents the median of the distribution (colour figure online)

Fig. 3 Density functions of the ex-post rates of return of the three strategies proposed in Sect. 3.1 for the World data set in the pre-COVID period (first row) and post-COVID epoch (second row). The vertical blue line represents the median of the distribution (colour figure online)

6 Conclusions and future works

In this study, we have proposed a comparison of three long-run investment strategies based on Sharpe ratio-type performance measures on large-scale global market indices. In particular, we have considered the standard Sharpe ratio and two extensions which involve higher-order moments of the returns distribution. Furthermore, we have included four real-world constraints, namely the cardinality, buy-in threshold, budget and turnover constraints, in order to provide complete control over the portfolio composition. To solve this family of optimization problems, we have developed a novel swarm optimization algorithm equipped with an ad-hoc constraint-handling technique combining the global convergence properties of the \(\ell _1\)-exact penalty functions with a repair operator.

The empirical findings are obtained on two large-scale data sets of the Pacific and World areas, which include several hundreds of stocks and cover the last 14 years. We have performed a sensitivity analysis on the portfolio size and on the limit of the trade magnitude, in order to identify the best combination of these parameters in terms of ex-post performance and management cost. Results show that portfolios with a reduced number of constituents (\(15 \%\) of the investment pool) and with a two-sided turnover up to \(10 \%\) provide, on both data sets, better profits which are stable over time.

A more detailed analysis reveals that the inclusion of higher-order moments in the performance measures produces superior results in terms of net wealth with respect to the benchmark and to the portfolio optimized through the standard Sharpe ratio. This is more evident after the pandemic outbreak of 2020, when market fluctuations are larger.

In future works, on the one hand, we plan to extend the above experimental analysis to other markets; on the other hand, we are interested in the possibility of introducing similar multi-moment performance measures in a passive investment framework.