With interest rates at all-time lows, the pressure on institutional and private investors to find new harbors for vast sums of capital is high. Quantitative hedge funds are thus racing to increase the capacity of their portfolios of trading algorithms. Despite these market forces, surprisingly little public information is available on estimating and maximizing capacity.

In this blog post we will take a look at the problem of maximizing the Sharpe Ratio[1] of a portfolio of uncorrelated trading algorithms under different capital bases.

## What are the limiting factors of capacity of an algorithm?

Fixed transaction costs are usually not a concern at institutional trading levels as brokers offer very competitive transaction fees. The main headwind when trading larger amounts comes instead from slippage and the fact that large orders drive up the price (for more information on the impact of slippage on trading algorithms, see also my other blog post on liquidity).
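To build intuition for why large orders are costly, here is a minimal sketch of how slippage might be modeled. The square-root market-impact form and the `impact_coef` value are illustrative assumptions, not the model used later in this post:

```python
import numpy as np

def slippage_bps(order_value, adv, impact_coef=10.0):
    """Toy square-root market-impact model (illustrative assumption).

    order_value: dollar size of the order
    adv: average daily dollar volume traded in the asset
    impact_coef: hypothetical impact coefficient, in basis points
    """
    participation = order_value / adv  # fraction of daily volume we consume
    return impact_coef * np.sqrt(participation)

# Quadrupling the order size only doubles the per-dollar cost...
print(slippage_bps(1e6, 1e8))  # ≈ 1.0 bps
print(slippage_bps(4e6, 1e8))  # ≈ 2.0 bps
# ...but the total dollar cost grows super-linearly, which is what erodes Sharpe.
```

Under this kind of concave impact curve, per-dollar cost rises with order size, so returns (and hence Sharpe) decay as more capital is deployed.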

```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
import pandas as pd
import seaborn as sns
sns.set_style('white')
sns.set_context('talk')
plt.set_cmap('viridis')
```

## Simulate Sharpe Decay curves

Consider, for example, the following three strategies with carefully chosen Sharpe Decay curves. As we deploy more capital to them, their Sharpe Ratios decay because of the increase in slippage cost. Strategy 'A', for example, has a very high Sharpe Ratio at low capital deployments but decays rapidly and even turns negative if more than $50MM is invested. Note that these curves are purely theoretical constructions. In reality, we would estimate the Sharpe Ratio of a strategy at various capital levels and derive an empirical Sharpe Decay curve to use instead.

```
x = np.linspace(1, 200, 100)

# Expected-return decay curves and volatilities for the three strategies
er_funcs = (
    lambda x: np.exp(-x / 50) * 0.5 - 0.2,
    lambda x: np.exp(-x / 40) * 0.2 - 0.02,
    lambda x: np.exp(-x / 70) * 0.15,
)
vols = (
    0.06,
    0.06,
    0.07,
)

sharpe_decay = pd.DataFrame(columns=['A', 'B', 'C'],
                            index=x,
                            data=np.asarray([f(x) / v for f, v in zip(er_funcs, vols)]).T)
ax = sharpe_decay.plot()
ax.set(title='Sharpe decay of 3 strategies (simulations)',
       xlabel='Algorithm Gross Market Value ($MM)',
       ylabel='Sharpe Ratio');
sns.despine()
```

## Changes of optimal allocation

We can compute the portfolio Sharpe Ratio at a given capital level as a simple weighted sum.

```
def portfolio_sharpe(weight, total_capital=10):
    weight = np.array(weight)
    if np.any(weight < 0):
        return np.nan
    capital = total_capital * weight
    numerator = np.sum([f(c) * w for f, c, w in zip(er_funcs, capital, weight)])
    denominator = np.sqrt(np.sum([(v * w)**2 for v, w in zip(vols, weight)]))
    return numerator / denominator

for weight in [(1., 0., 0.), (0., 1., 0.), (0., 0., 1.)]:
    print("Portfolio Sharpe Ratio under $10MM GMV with weight vector {} = {}".format(
        weight, portfolio_sharpe(weight)))
```

Next we will look at all possible weightings. As each weight vector has to sum to one, they all live on a simplex which can be displayed as a triangle.

```
corners = np.array([[0, 0], [1, 0], [0.5, 0.75**0.5]])
triangle = tri.Triangulation(corners[:, 0], corners[:, 1])

# Mid-points of triangle sides opposite of each corner
midpoints = [(corners[(i + 1) % 3] + corners[(i + 2) % 3]) / 2.0
             for i in range(3)]

def xy2bc(xy, tol=1.e-3):
    '''Converts 2D Cartesian coordinates to barycentric.'''
    s = [(corners[i] - midpoints[i]).dot(xy - midpoints[i]) / 0.75
         for i in range(3)]
    return np.clip(s, tol, 1.0 - tol)

def draw_obj_contours(obj, total_capital=100, nlevels=200, subdiv=8, **kwargs):
    refiner = tri.UniformTriRefiner(triangle)
    trimesh = refiner.refine_triangulation(subdiv=subdiv)
    pvals = [obj(xy2bc(xy), total_capital) for xy in zip(trimesh.x, trimesh.y)]
    tcf = plt.tricontourf(trimesh, pvals, nlevels, **kwargs)
    plt.axis('equal')
    plt.xlim(0, 1)
    plt.ylim(0, 0.75**0.5)
    plt.axis('off')
    cbar = plt.colorbar()
    cbar.set_label('Portfolio Sharpe Ratio')
    # Label the three corners with the strategy names
    plt.text(-.03, 0., 'A')
    plt.text(1, 0, 'B')
    plt.text(0.485, .88, 'C')
    return tcf
```

```
for cb in [10, 100, 200]:
    plt.figure()
    tcf = draw_obj_contours(portfolio_sharpe, total_capital=cb)
    plt.suptitle('Portfolio Gross Market Value = ${}MM'.format(cb))
```

We observe that the optimal weighting changes as we deploy more capital. With low capital, a large allocation to algorithm 'A' yields the highest portfolio Sharpe Ratio, while at high levels, a mix between mostly 'B' and some 'C' is best. This makes intuitive sense when looking at the Sharpe Decay curves above [2].

## Optimization

At its core, this is a non-linear, constrained optimization problem. `scipy` provides many different optimization routines, and Sequential Least SQuares Programming (SLSQP) suits this specific type of problem well (thanks to James Christopher for alerting me to this method).

```
from scipy import optimize

num_algos = sharpe_decay.shape[1]
guess = np.ones(num_algos) / num_algos

def objective(weight, total_capital):
    return -portfolio_sharpe(weight, total_capital=total_capital)

# Set up equality constraint: weights must sum to one
cons = {'type': 'eq',
        'fun': lambda x: np.sum(np.abs(x)) - 1}

# Set up bounds for individual weights
bnds = [(0, 1)] * num_algos

results = optimize.minimize(objective, guess, args=(10,),
                            constraints=cons, bounds=bnds, method='SLSQP',
                            options={'disp': True})
results
```

Looping over different capital levels yields optimal weights and portfolio Sharpe Ratio for each allocation size.

```
weights = pd.DataFrame(index=range(1, 200), columns=['A', 'B', 'C'], dtype=np.float32)
sharpe = pd.Series(index=range(1, 200), dtype=np.float32)
for i in weights.index:
    val = optimize.minimize(objective, guess, args=(i,), constraints=cons,
                            bounds=bnds, method='SLSQP').x
    weights.loc[i, :] = val
    sharpe.loc[i] = portfolio_sharpe(weights.loc[i, :].values, i)
```

This provides us with the Portfolio Sharpe Decay curve.

```
ax = sharpe.plot()
ax.set(xlabel='Capital in $MM', ylabel='Portfolio Sharpe Ratio',
title='Performance under different capital levels')
sns.despine()
```

At each capital level, we achieve the maximum attainable Sharpe Ratio by finding the weighting that maximizes Sharpe under the external capacity constraints.

```
ax = weights.plot()
ax.set(xlabel='Capital in $MM', ylabel='weight', title='Allocations under different capital levels')
sns.despine()
```

It is also instructive to look at the total capital deployed to each strategy as we increase the total portfolio capital:

```
ax = weights.multiply(pd.Series(weights.index), axis='index').plot()
ax.set(xlabel='Portfolio Gross Market Value $MM', ylabel='Algorithm Gross Market Value $MM',
title='Allocations under different capital levels')
sns.despine()
```

## Conclusions

In this blog post we presented a general framework to think about capacity at the algorithm and portfolio level. From here, it is trivial to make the optimization more complex. For example, we probably would want to consider portfolio volatility when computing our allocations and we can relax the zero correlation assumption. Thanks to Jonathan Larkin for useful feedback on an earlier version of this blog post.

You can also run the notebook underlying this analysis on Quantopian here.

[1] The Sharpe Ratio is a statistical measure of the risk-adjusted performance of a portfolio, calculated by dividing a portfolio's return by the standard deviation of its returns. It shows a portfolio's reward per unit of risk and is useful when comparing two similar portfolios. As the Sharpe Ratio increases, risk-adjusted performance improves.
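For concreteness, a minimal sketch of this calculation on a series of daily returns (annualizing by the conventional 252 trading days, and assuming a zero risk-free rate) might look like:

```python
import numpy as np

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe Ratio of a return series (risk-free rate assumed zero)."""
    returns = np.asarray(returns)
    return np.mean(returns) / np.std(returns) * np.sqrt(periods_per_year)

# One year of simulated daily returns
rng = np.random.default_rng(0)
daily = rng.normal(loc=0.0005, scale=0.01, size=252)
print(sharpe_ratio(daily))
```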

[2] Gross Market Value is the sum of the absolute value of all long positions, short positions and cash held by a portfolio. The hypothetical Sharpe Ratios illustrated are not derived from actual investments. Actual Sharpe Ratios will vary and that difference may be material.
