
The Quest for Capacity: Optimizing Sharpe Ratio under Varying Capital Levels

With interest rates at all-time lows, the pressure on institutional and private investors to find new harbors for vast sums of capital is high. Quantitative hedge funds are thus racing to increase the capacity of their portfolios of trading algorithms. Despite these market forces, surprisingly little public information is available on estimating and maximizing capacity.

In this blog post we will take a look at the problem of maximizing the Sharpe Ratio[1] of a portfolio of uncorrelated trading algorithms under different capital bases.

What factors limit the capacity of an algorithm?

Fixed transaction costs are usually not a concern at institutional trading levels, as brokers offer very competitive transaction fees. The main headwind when trading larger amounts comes instead from slippage: large orders drive up the price and eat into returns (for more information on the impact of slippage on trading algorithms, see also my other blog post on liquidity).
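For intuition, here is a minimal sketch of how such market impact is often modeled. The square-root impact model below is a common rule of thumb, not the model used in this post, and the coefficient and numbers are purely hypothetical:

import numpy as np

# Hypothetical square-root market-impact model (an illustrative assumption,
# not the model used in this post): slippage in basis points grows with the
# square root of order size relative to average daily volume (ADV).
def estimated_slippage_bps(order_value, adv, impact_coef=10.0):
    participation = order_value / adv  # fraction of daily volume consumed
    return impact_coef * np.sqrt(participation)

# A $10MM order in a name trading $100MM/day costs roughly
# 10 * sqrt(0.1) ~= 3.2 bps under these toy parameters.
print(estimated_slippage_bps(10e6, 100e6))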

In [1]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
import pandas as pd
import seaborn as sns
sns.set_style('white')
sns.set_context('talk')
plt.set_cmap('viridis')

Simulate Sharpe Decay curves

Consider the following three strategies with carefully chosen Sharpe Decay curves. As we deploy more capital to them, their Sharpe Ratio decays because of the increase in slippage cost. Strategy 'A', for example, has a very high Sharpe Ratio at low capital deployments but decays rapidly, even turning negative if more than $50MM is invested. Note that these curves are purely theoretical constructions. In reality, we would estimate the Sharpe Ratio of a strategy at various capital levels and derive an empirical Sharpe Decay curve to use instead.
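Each simulated curve below follows the same parametric form, an exponentially decaying expected return divided by a constant volatility:

SR_i(c) = \frac{a_i \, e^{-c/\tau_i} + b_i}{\sigma_i}

where c is the capital deployed to strategy i in $MM, and the constants a_i, b_i, \tau_i, and \sigma_i are hand-picked per strategy.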

In [3]:
x = np.linspace(1, 200, 100)  # capital levels in $MM

# Expected-return curves as a function of deployed capital
er_funcs = (
    lambda x: np.exp(-x / 50) * 0.5 - 0.2,
    lambda x: np.exp(-x / 40) * 0.2 - 0.02,
    lambda x: np.exp(-x / 70) * 0.15
)

# Volatility of each strategy
vols = (
    0.06,
    0.06,
    0.07
)

sharpe_decay = pd.DataFrame(columns=['A', 'B', 'C'], 
                            index=x, 
                            data=np.asarray([f(x) / v for f, v in zip(er_funcs, vols)]).T)

ax = sharpe_decay.plot()
ax.set(title='Sharpe decay of 3 strategies (simulations)', xlabel='Algorithm Gross Market Value ($MM)', 
       ylabel='Sharpe Ratio');
sns.despine()

Changes in the optimal allocation

We can compute the portfolio Sharpe Ratio at a given capital level from the weighted expected returns and volatilities of the individual algorithms.
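Concretely, for weights w_i summing to one, total capital C, expected-return curves f_i, and volatilities \sigma_i, the zero-correlation assumption gives

SR_p(w, C) = \frac{\sum_i w_i \, f_i(w_i C)}{\sqrt{\sum_i (w_i \sigma_i)^2}}

which is exactly what the function below computes.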

In [15]:
def portfolio_sharpe(weight, total_capital=10):
    """Portfolio Sharpe Ratio for a given weight vector and total capital
    ($MM), assuming zero correlation between strategies."""
    weight = np.array(weight)
    if np.any(weight < 0):
        return np.nan
    capital = total_capital * weight
    # Weighted sum of expected returns, each evaluated at its deployed capital
    numerator = np.sum([f(c) * w for f, c, w in zip(er_funcs, capital, weight)])
    # Portfolio volatility under the zero-correlation assumption
    denominator = np.sqrt(np.sum([(v * w)**2 for v, w in zip(vols, weight)]))
    return numerator / denominator

weight = (1., 0., 0.)
print("Portfolio Sharpe Ratio under $10MM GMV with weight vector {} = {}".format(weight, portfolio_sharpe(weight)))
weight = (0., 1., 0.)
print("Portfolio Sharpe Ratio under $10MM GMV with weight vector {} = {}".format(weight, portfolio_sharpe(weight)))
weight = (0., 0., 1.)
print("Portfolio Sharpe Ratio under $10MM GMV with weight vector {} = {}".format(weight, portfolio_sharpe(weight)))
Portfolio Sharpe Ratio under $10MM GMV with weight vector (1.0, 0.0, 0.0) = 3.48942294232
Portfolio Sharpe Ratio under $10MM GMV with weight vector (0.0, 1.0, 0.0) = 2.2626692769
Portfolio Sharpe Ratio under $10MM GMV with weight vector (0.0, 0.0, 1.0) = 1.85759549946

Next we will look at all possible weightings. Since each weight vector has to sum to one, the weightings all live on a simplex, which can be displayed as a triangle.
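Formally, with three algorithms the weightings form the 2-simplex

\Delta^2 = \{\, w \in \mathbb{R}^3 : w_i \ge 0,\; w_1 + w_2 + w_3 = 1 \,\}

whose three corners correspond to allocating all capital to a single algorithm. The xy2bc helper below maps a 2D point inside the drawn triangle back to its barycentric coordinates, i.e. the weight vector.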

In [5]:
corners = np.array([[0, 0], [1, 0], [0.5, 0.75**0.5]])
triangle = tri.Triangulation(corners[:, 0], corners[:, 1])

# Mid-points of triangle sides opposite of each corner
midpoints = [(corners[(i + 1) % 3] + corners[(i + 2) % 3]) / 2.0 \
             for i in range(3)]
def xy2bc(xy, tol=1.e-3):
    '''Converts 2D Cartesian coordinates to barycentric.'''
    s = [(corners[i] - midpoints[i]).dot(xy - midpoints[i]) / 0.75 \
         for i in range(3)]
    return np.clip(s, tol, 1.0 - tol)

def draw_obj_contours(obj, total_capital=100, nlevels=200, subdiv=8, **kwargs):
    refiner = tri.UniformTriRefiner(triangle)
    trimesh = refiner.refine_triangulation(subdiv=subdiv)
    pvals = [obj(xy2bc(xy), total_capital) for xy in zip(trimesh.x, trimesh.y)]

    tcf = plt.tricontourf(trimesh, pvals, nlevels, **kwargs)
    plt.axis('equal')
    plt.xlim(0, 1)
    plt.ylim(0, 0.75**0.5)
    plt.axis('off')
    cbar = plt.colorbar()
    cbar.set_label('Portfolio Sharpe Ratio')
    plt.text(-.03, 0., 'A')
    plt.text(1, 0, 'B')
    plt.text(0.485, .88, 'C')
    
    return tcf

In [6]:
for cb in [10, 100, 200]:
    plt.figure()
    tcf = draw_obj_contours(portfolio_sharpe, total_capital=cb)
    plt.suptitle('Portfolio Gross Market Value = ${}MM'.format(cb))

We observe that the optimal weighting changes as we deploy more capital. With low capital, a large allocation to algorithm 'A' yields the highest portfolio Sharpe Ratio, while at high levels, a mix between mostly 'B' and some 'C' is best. This makes intuitive sense when looking at the Sharpe Decay curves above [2].

Optimization

At its core, this is a non-linear, constrained optimization problem. scipy provides many different optimization routines, and Sequential Least SQuares Programming (SLSQP) suits this specific type of problem well (thanks to James Christopher for alerting me to this method).

In [7]:
from scipy import optimize

num_algos = sharpe_decay.shape[1]
guess = np.ones(num_algos) / num_algos

def objective(weight, total_capital):
    return -portfolio_sharpe(weight, total_capital=total_capital)

# Set up the equality constraint: weights must sum to one
cons = {'type': 'eq', 
        'fun': lambda x: np.sum(np.abs(x)) - 1} 

# Set up bounds for individual weights
bnds = [(0, 1)] * num_algos

results = optimize.minimize(objective, guess, args=10, 
                            constraints=cons, bounds=bnds, method='SLSQP',
                            options={'disp': True})
results
Optimization terminated successfully.    (Exit mode 0)
            Current function value: -5.47600793014
            Iterations: 6
            Function evaluations: 33
            Gradient evaluations: 6
Out[7]:
     fun: -5.4760079301402325
     jac: array([ 0.66125989,  0.66126466,  0.66127276,  0.        ])
 message: 'Optimization terminated successfully.'
    nfev: 33
     nit: 6
    njev: 6
  status: 0
 success: True
       x: array([ 0.44902592,  0.32466753,  0.22630654])

Looping over different capital levels yields the optimal weights and the portfolio Sharpe Ratio at each allocation size.

In [8]:
weights = pd.DataFrame(index=range(1, 200), columns=['A', 'B', 'C'], dtype=np.float32)
sharpe = pd.Series(index=range(1, 200), dtype=np.float32)
# Re-run the optimization at each capital level
for i in weights.index:
    val = optimize.minimize(objective, guess, args=i, constraints=cons, 
                            bounds=bnds, method='SLSQP').x
    weights.loc[i, :] = val
    sharpe.loc[i] = portfolio_sharpe(weights.loc[i, :].values, i)

This provides us with the Portfolio Sharpe Decay curve.

In [9]:
ax = sharpe.plot()
ax.set(xlabel='Capital in $MM', ylabel='Portfolio Sharpe Ratio', 
       title='Performance under different capital levels')
sns.despine()

At each capital level, we achieve the maximum attainable Sharpe Ratio by finding the weighting that optimizes Sharpe under the external capacity constraints.

In [10]:
ax = weights.plot()
ax.set(xlabel='Capital in $MM', ylabel='weight', title='Allocations under different capital levels')
sns.despine()

It is also instructive to look at the total capital deployed to each strategy as we increase the total portfolio capital:

In [11]:
ax = weights.multiply(pd.Series(weights.index), axis='index').plot()
ax.set(xlabel='Portfolio Gross Market Value $MM', ylabel='Algorithm Gross Market Value $MM', 
       title='Allocations under different capital levels')
sns.despine()

Conclusions

In this blog post we presented a general framework for thinking about capacity at the algorithm and portfolio level. From here, it is straightforward to extend the optimization. For example, we would probably want to consider portfolio volatility when computing our allocations, and we could relax the zero-correlation assumption (see the sketch below). Thanks to Jonathan Larkin for useful feedback on an earlier version of this blog post.
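As a minimal sketch of that last extension (the correlation matrix here is hypothetical, purely for illustration), the denominator of portfolio_sharpe would generalize from \sqrt{\sum_i (w_i \sigma_i)^2} to the full portfolio volatility \sqrt{w^\top \Sigma w}:

import numpy as np

# Sketch only: portfolio volatility with correlated strategies.
# With corr = np.eye(3) this reduces to the uncorrelated case above.
def portfolio_vol(weight, vols, corr):
    weight = np.asarray(weight, dtype=float)
    vols = np.asarray(vols, dtype=float)
    cov = np.outer(vols, vols) * corr  # covariance matrix Sigma
    return np.sqrt(np.dot(weight, np.dot(cov, weight)))

# Hypothetical correlation matrix for strategies A, B, C
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
print(portfolio_vol([0.45, 0.32, 0.23], (0.06, 0.06, 0.07), corr))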

You can also run the notebook underlying this analysis on Quantopian here.

[1] Sharpe Ratio is a statistical measurement of the risk-adjusted performance of a portfolio, calculated by dividing a portfolio's return by the standard deviation of its returns. It shows a portfolio's reward per unit of risk and is useful when comparing two similar portfolios. As the Sharpe Ratio increases, risk-adjusted performance improves.
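In symbols, following the definition above (which omits a risk-free benchmark): SR = \bar{R}_p / \sigma(R_p), where \bar{R}_p is the portfolio's mean return and \sigma(R_p) is the standard deviation of its returns.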

[2] Gross Market Value is the sum of the absolute value of all long positions, short positions and cash held by a portfolio. The hypothetical Sharpe Ratios illustrated are not derived from actual investments. Actual Sharpe Ratios will vary and that difference may be material.

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian.

In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

David

Awesome post. Really enjoyed this.

My only question is, and maybe you have a previous post, but how did you come up with the initial decay rates? Are these an assumption?
