Learning Objectives
- Formalize portfolio construction in terms of expected returns, covariance, constraints, leverage, and rebalancing choices
- Identify the allocator-specific evaluation metrics that complement the Chapter 16 backtest report, especially benchmark-relative performance, concentration, diversification, and implementation stability
- Explain why simple baselines such as equal weight, inverse volatility, and related heuristic allocators remain demanding benchmarks
- Apply mean-variance optimization with shrinkage, realistic constraints, and turnover-aware regularization
- Interpret Kelly sizing, especially fractional Kelly, as a log-growth principle for translating signal strength into position size
- Build and evaluate hierarchical allocations that prioritize diversification stability over direct covariance-matrix inversion
- Compare allocators under a common research protocol while limiting allocator-selection bias and other forms of overfitting
From Signals to Positions: Defining the Allocation Problem
Every allocator combines three inputs: a view on expected return (alpha scores from Chapters 11-15), a risk description (covariance matrix), and an institution's definition of admissible risk (leverage caps, position limits, concentration rules). This section frames the allocation problem through the Fundamental Law of Active Management, showing how an IC of 0.03 can still support a useful allocator when effective breadth compensates for weak individual bets. It emphasizes that constraints are modeling choices rather than annoyances, and that the gap between nominal and effective breadth is a primary reason realized portfolio performance falls short of naive signal-quality projections.
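The breadth arithmetic above can be made concrete. The sketch below is a minimal illustration of the Fundamental Law's IR ≈ IC · √breadth approximation; the bet counts (2,500 nominal, 400 effective) are hypothetical numbers chosen only to show how correlation-driven shrinkage of effective breadth erodes the payoff from a weak IC.

```python
import math

def expected_ir(ic: float, breadth: float) -> float:
    """Fundamental Law of Active Management: IR ~ IC * sqrt(breadth)."""
    return ic * math.sqrt(breadth)

# A weak IC of 0.03 over 2,500 nominally independent bets per year
# (hypothetical count) still projects a strong information ratio:
print(expected_ir(0.03, 2500))

# Correlated positions shrink effective breadth. If only 400 bets are
# effectively independent, the same IC supports far less:
print(expected_ir(0.03, 400))
```

The gap between the two numbers is the "nominal vs effective breadth" gap the section identifies as a primary reason realized performance falls short of signal-quality projections.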
A Portfolio Construction Workflow
This section introduces a structured, documented workflow for portfolio construction that parallels the backtesting discipline of Chapter 16. Its central contribution is the allocator term sheet -- a compact record of the objective function, inputs, constraints, rebalancing protocol, and evaluation plan that must be written down before implementation. The section identifies three specific leakage channels in allocation (covariance estimated with future data, mismatched estimation horizons, and repeatedly inspecting the test sample to tune allocator hyperparameters) and argues for strict separation between signal generation and allocation to enable diagnosis when a good IC produces a poor portfolio Sharpe.
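One way to make the term-sheet idea operational is to encode it as a data structure that must be populated before any allocator code runs. The sketch below is purely illustrative; the field names and example values are assumptions, not the book's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AllocatorTermSheet:
    """Illustrative term-sheet record, written down BEFORE implementation.
    All field names here are hypothetical."""
    objective: str          # e.g. "max Sharpe net of costs"
    signal_inputs: list     # which alpha scores feed the allocator
    covariance_model: str   # estimator and estimation window
    constraints: dict       # position caps, leverage, long-only, etc.
    rebalance_rule: str     # frequency and no-trade bands
    evaluation_plan: str    # metrics and benchmark, fixed in advance

sheet = AllocatorTermSheet(
    objective="max Sharpe net of costs",
    signal_inputs=["alpha_scores"],
    covariance_model="shrinkage estimator, 252-day window",
    constraints={"max_weight": 0.05, "gross_leverage": 1.0, "long_only": True},
    rebalance_rule="monthly, 10% no-trade band",
    evaluation_plan="IR vs equal weight; turnover; effective bets",
)
```

Freezing the record (`frozen=True`) mirrors the section's discipline: the evaluation plan cannot be quietly edited after the test sample has been inspected.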
Portfolio Evaluation Metrics
Beyond Chapter 16's core backtest metrics, allocator comparison requires benchmark-relative performance (information ratio, tracking error, active share), concentration measures (HHI, effective number of bets, risk contribution per position), and stability metrics (turnover as an instability indicator, and the leverage path over time). The section argues that the Sharpe ratio alone is insufficient because an allocator can improve Sharpe simply by lowering total risk without adding value relative to a simple benchmark. It also establishes that covariance models should be judged by portfolio outcomes -- realized out-of-sample variance under competing estimates -- rather than by abstract matrix fit.
1 notebook
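Two of the concentration measures named above follow directly from the weight vector. The sketch below computes the HHI and its reciprocal, the effective number of bets; the example portfolios are invented for illustration.

```python
import numpy as np

def hhi(weights: np.ndarray) -> float:
    """Herfindahl-Hirschman Index of portfolio weights. Absolute
    weights are used so long-short portfolios are handled."""
    w = np.abs(weights) / np.abs(weights).sum()
    return float((w ** 2).sum())

def effective_bets(weights: np.ndarray) -> float:
    """Effective number of weight-based bets: 1 / HHI."""
    return 1.0 / hhi(weights)

equal = np.full(10, 0.10)                          # ten equal positions
concentrated = np.array([0.70, 0.10, 0.10, 0.10])  # one dominant position

print(effective_bets(equal))         # close to 10: fully diversified
print(effective_bets(concentrated))  # close to 2: looks like 4 names,
                                     # behaves like about two bets
```

A risk-contribution version (replacing weights with each position's share of portfolio variance) is the natural next step, but requires the covariance matrix as well.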
Baseline Allocators: Simple Heuristics That Are Hard to Beat
This section presents five baseline allocation methods -- equal weight, inverse volatility, volatility targeting, score-weighted, and risk parity -- plus the Kelly criterion as a sizing framework. Citing DeMiguel, Garlappi, and Uppal (2009), it explains why equal-weight portfolios are famously hard to beat: estimation error in optimization inputs often overwhelms the theoretical gains from optimal tilts. The Kelly criterion is developed from its binary-betting origins through the continuous multi-asset case, where it collapses to the tangency portfolio, with the practical recommendation that fractional Kelly (half or quarter sizing) sacrifices expected growth for materially lower leverage and drawdowns.
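Two of the methods above reduce to a few lines. The sketch below shows inverse-volatility weighting and the continuous multi-asset Kelly rule (w* = Σ⁻¹μ, the tangency tilt) with fractional scaling; the input numbers are illustrative, not from the chapter's data.

```python
import numpy as np

def inverse_vol_weights(vols: np.ndarray) -> np.ndarray:
    """Weight each asset proportionally to 1/volatility, normalized."""
    inv = 1.0 / vols
    return inv / inv.sum()

def fractional_kelly(mu: np.ndarray, cov: np.ndarray,
                     fraction: float = 0.5) -> np.ndarray:
    """Continuous multi-asset Kelly: w* = Sigma^{-1} mu, scaled down.
    Half- or quarter-Kelly trades expected log growth for materially
    lower leverage and drawdowns."""
    return fraction * np.linalg.solve(cov, mu)

vols = np.array([0.10, 0.20, 0.40])       # illustrative asset volatilities
print(inverse_vol_weights(vols))          # low-vol assets get more weight

mu = np.array([0.08])                     # one asset: 8% expected return
cov = np.array([[0.04]])                  # 20% volatility
print(fractional_kelly(mu, cov))          # half-Kelly position
```

Full Kelly here would be μ/σ² = 0.08/0.04 = 2.0x leverage; half-Kelly sizes the position at 1.0x.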
Mean-Variance Optimization and the Markowitz Curse
MVO remains the canonical allocation framework, but its practical implementations produce unstable, concentrated portfolios that fail out of sample because the optimizer multiplies noisy expected returns by a noisy covariance inverse. This section presents Ledoit-Wolf shrinkage and factor-based covariance as complementary remedies for numerical instability, and reinterprets common constraints (position caps, long-only rules, turnover penalties) as regularization that prevents estimation error from becoming extreme portfolios. It also covers beta hedging under parameter uncertainty, factor-mimicking portfolios, and the insight that allocation quality depends more on alpha ranking than calibration -- ranking mistakes do more damage than uniform scale errors.
3 notebooks
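The shrinkage-plus-constraints recipe above can be sketched compactly. The example below uses synthetic returns, a fixed shrinkage intensity of 0.2 toward a scaled-identity target (Ledoit-Wolf estimates the optimal intensity from the data; the fixed value is an assumption for determinism), and the global minimum-variance portfolio as the deterministic special case of the optimizer's core Σ⁻¹μ operation. The clip-and-renormalize step is a crude stand-in for a proper constrained solve.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(500, 8))   # synthetic daily returns

# Shrink the sample covariance toward a scaled-identity target to
# stabilize the inverse the optimizer needs.
S = np.cov(returns, rowvar=False)
target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
delta = 0.2                                      # fixed intensity (assumption)
sigma = (1 - delta) * S + delta * target

# MVO's core operation is Sigma^{-1} mu; replacing mu with a vector of
# ones gives the global minimum-variance portfolio shown here.
raw = np.linalg.solve(sigma, np.ones(S.shape[0]))
w = np.clip(raw / raw.sum(), 0.0, 0.25)          # long-only, 25% position cap
w = w / w.sum()                                  # crude renormalization
print(np.round(w, 3))
```

Note how the constraints act as regularization: without the cap and the long-only floor, small estimation errors in `sigma` would show up directly as extreme weights.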
Hierarchical Allocation: HRP as a Stability-First Optimizer
Hierarchical Risk Parity avoids MVO's matrix inversion entirely by using agglomerative clustering to group similar assets and then recursively allocating capital in inverse proportion to cluster variance. The section walks through the four-step algorithm (distance matrix, clustering, quasi-diagonalization, recursive bisection) and explains four sources of stability: no matrix inversion, hierarchical regularization where within-cluster estimation errors partially cancel, stable hierarchical structure across rebalancing dates, and elimination of return-forecast dependence. Limitations include the inability to express directional views, long-only construction by default, and sensitivity to the clustering recipe.
1 notebook
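The four-step algorithm described above can be implemented without ever inverting the covariance matrix. The sketch below is one common way to code HRP (single-linkage clustering and even bisection are implementation choices, not the only valid recipe), and it illustrates the clustering-recipe sensitivity the section flags as a limitation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def _cluster_var(cov, idx):
    """Variance of the inverse-variance portfolio within a cluster."""
    sub = cov[np.ix_(idx, idx)]
    ivp = 1.0 / np.diag(sub)
    ivp /= ivp.sum()
    return float(ivp @ sub @ ivp)

def hrp_weights(cov: np.ndarray) -> np.ndarray:
    """Sketch of Hierarchical Risk Parity: cluster, quasi-diagonalize,
    then recursively split capital by inverse cluster variance."""
    # Step 1: correlation-based distance d_ij = sqrt((1 - rho_ij) / 2)
    vol = np.sqrt(np.diag(cov))
    corr = cov / np.outer(vol, vol)
    dist = np.sqrt(np.clip((1.0 - corr) / 2.0, 0.0, 1.0))
    # Steps 2-3: agglomerative clustering, then reorder assets by the
    # dendrogram leaves (quasi-diagonalization)
    order = leaves_list(linkage(squareform(dist, checks=False),
                                method="single"))
    # Step 4: recursive bisection; the lower-variance side of each
    # split receives the larger share of capital
    w = np.ones(len(cov))
    stack = [list(order)]
    while stack:
        cluster = stack.pop()
        if len(cluster) <= 1:
            continue
        left, right = cluster[:len(cluster) // 2], cluster[len(cluster) // 2:]
        var_l, var_r = _cluster_var(cov, left), _cluster_var(cov, right)
        alpha = var_r / (var_l + var_r)
        w[left] *= alpha
        w[right] *= 1.0 - alpha
        stack += [left, right]
    return w
```

Note that no return forecasts enter anywhere, which is both the source of HRP's stability and the reason it cannot express directional views.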
Regime-Adaptive Allocation Without Discrete Switching
Rather than hard regime switches that detect late and create turnover at the worst times, this section advocates continuous adaptation of risk model inputs, constraint parameters, and sizing rules as functions of observable state variables like realized volatility. It covers conformal sizing (scaling positions by forecast confidence), volatility-targeted leverage caps, and the DeePM architecture which learns a features-to-weights mapping that maximizes pooled Sharpe while penalizing weak rolling windows through a SoftMin term. The practical lesson is that regime awareness becomes convincing when tied to a specific allocation failure mode rather than presented as a generic claim about sophistication.
3 notebooks
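The simplest continuous-adaptation rule named above, volatility-targeted leverage, can be sketched in a few lines; the target and cap values are illustrative assumptions.

```python
import numpy as np

def vol_target_leverage(realized_vol: float,
                        target_vol: float = 0.10,
                        max_leverage: float = 2.0) -> float:
    """Scale gross exposure so forecast portfolio volatility matches
    the target, capped at a maximum leverage. No discrete regime
    switch: leverage moves smoothly with the observable state
    variable (realized volatility)."""
    return min(target_vol / realized_vol, max_leverage)

# Calm, normal, and stressed markets (illustrative volatility levels):
for rv in (0.05, 0.10, 0.30):
    print(rv, vol_target_leverage(rv))
```

Because the rule is a continuous function of realized volatility, it de-levers gradually as stress builds rather than dumping exposure all at once after a late regime detection.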
Comparing Allocators Under a Common Protocol
A controlled comparison of equal weight, inverse volatility, robust MVO, and HRP on identical ETF signals shows that inverse volatility and MVO are nearly tied on Sharpe while equal weight delivers the shallowest drawdown despite the lowest return. Cross-case evidence confirms there is no universal winner: allocator quality depends on the trading environment, covariance structure, and forecast reliability. The section warns about allocator-selection bias when comparing multiple methods and argues that more value is often created by improving signal quality and execution realism than by repeatedly re-parameterizing the allocator itself.
3 notebooks
Related Case Studies
See where these chapter concepts are applied in end-to-end trading workflows.
ETF Cross-Asset Exposures
All six model families compared across 100 ETFs spanning 9 asset classes
Crypto Perpetuals Funding
Alternative data and non-standard frequencies in 24/7 crypto markets
NASDAQ-100 Microstructure
Intraday microstructure signals across 114 stocks at 15-minute frequency
S&P 500 Equity + Option Analytics
Combining options-derived features with equity data for multi-source prediction
US Firm Characteristics
Classic factor investing with ML on monthly fundamental data
FX Spot Pairs
Momentum and carry factors in the world's most liquid market
CME Futures
Carry signals across 30 products -- data quality as the critical variable
S&P 500 Options (Straddles)
Direct options trading and why equity-style cost models fail for options
US Equities Panel
Large-scale cross-sectional prediction across 3,200 stocks with 16 walk-forward folds