A large portfolio of independent returns is optimized under the variance risk measure with a ban on short positions. The no-short-selling constraint acts as an asymmetric $\ell_1$ regularizer: it sets some of the portfolio weights to zero and keeps the estimation error bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e., the sensitivity of the optimal portfolio weights to changes in the returns, diverges at the critical value $2$ of the ratio $N/T$, where $N$ is the number of different assets in the portfolio and $T$ is the length of the available time series. Thus a ban on short positions does not prevent the phase transition in the optimization problem; it merely shifts the critical point from its non-regularized value of $N/T=1$ to $N/T=2$. We show that this critical value is universal, independent of the distribution of the returns. Beyond the critical value, the variance of the portfolio vanishes for any weight vector constructed as a linear combination of eigenvectors from the null space of the covariance matrix. These linear combinations, however, are not legitimate solutions of the optimization problem: they are infinitely sensitive to any change in the input parameters and, in particular, fluctuate wildly from sample to sample. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation. The analytic calculations are supported by numerical simulations. The analytic and numerical results are in perfect agreement for $N/T<2$, but some numerical solvers keep yielding a stable solution even for $N/T>2$; this is because regularizers built into those solvers stabilize the otherwise freely fluctuating, meaningless solutions.
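For concreteness, the problem described above is, in standard form, the long-only minimum-variance program $\min_{w} w^\top \hat{C} w$ subject to $\sum_{i=1}^N w_i = 1$ and $w_i \ge 0$, where $\hat{C}$ is the $N \times N$ sample covariance matrix estimated from $T$ observations. The following is a minimal numerical sketch of this setup, not the code used in the paper; the solver choice (SLSQP) and all parameter values are illustrative assumptions.

```python
# Minimal sketch: long-only minimum-variance optimization of N i.i.d. returns
# observed over T periods, swept across several values of the ratio N/T.
import numpy as np
from scipy.optimize import minimize

def min_variance_long_only(returns):
    """Minimize w' C w subject to w >= 0 and sum(w) = 1,
    where C is the sample covariance of the T x N return matrix."""
    T, N = returns.shape
    C = np.cov(returns, rowvar=False)          # N x N sample covariance
    w0 = np.full(N, 1.0 / N)                   # start from equal weights
    res = minimize(
        lambda w: w @ C @ w,                   # in-sample portfolio variance
        w0,
        jac=lambda w: 2.0 * C @ w,
        method="SLSQP",
        bounds=[(0.0, None)] * N,              # ban on short positions
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x, res.fun

rng = np.random.default_rng(0)
T = 60                                         # illustrative sample length
for ratio in (0.5, 1.0, 1.5, 1.9, 2.5):        # N/T below and above 2
    N = int(ratio * T)
    w, var = min_variance_long_only(rng.standard_normal((T, N)))
    print(f"N/T = {ratio:.1f}: in-sample variance = {var:.4f}, "
          f"fraction of zero weights = {(w < 1e-8).mean():.2f}")
```

In such an experiment one would expect the in-sample variance to remain positive and stable across samples for $N/T<2$, while for $N/T>2$ the null space of $\hat{C}$ becomes large enough for the constrained variance to be driven to (numerically) zero, with the reported "optimum" varying from sample to sample, in line with the behavior described above.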