Chapter 17: Portfolio Construction

Benchmark-Relative Portfolio Evaluation: Tracking Error, Information Ratio, and Active Share

Once a benchmark exists, the Sharpe ratio stops answering the whole question.

The Intuition

Portfolio construction is usually not judged against cash. It is judged against another allocator.

That benchmark may be:

  • equal weight
  • inverse volatility
  • a policy portfolio
  • last quarter's production allocator

Once that benchmark is explicit, the evaluation problem changes. A portfolio can have a decent Sharpe ratio and still be a bad choice if it adds complexity without adding enough value relative to the benchmark it replaces.

That is why Chapter 17 needs benchmark-relative metrics. They answer different questions:

  • tracking error asks how far the portfolio wanders from the benchmark path
  • information ratio asks whether those deviations were paid
  • active share asks how different the holdings are in weight space

They overlap, but they are not substitutes.

Active Return, Tracking Error, and Information Ratio

Let benchmark-relative return be

$$ r_t^{\text{active}} = r_t^{\text{portfolio}} - r_t^{\text{benchmark}}. $$

Then tracking error is the volatility of that active return:

$$ \text{TE} = \sqrt{\operatorname{Var}(r_t^{\text{active}})}. $$

And the information ratio is

$$ \text{IR} = \frac{\mathbb{E}[r_t^{\text{active}}]}{\text{TE}}. $$

If $r_t^{\text{active}}$ is daily, then these are daily quantities. In practice, tracking error and information ratio are usually annualized using the same square-root-of-time convention as volatility and Sharpe calculations when they are computed ex post from realized active returns. Ex-ante tracking error from a risk model is often already annualized.
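Ex post, both quantities are a few lines of arithmetic. A minimal sketch in Python with NumPy, using hypothetical simulated daily returns and the 252-day square-root-of-time convention described above:

```python
import numpy as np

def tracking_error_and_ir(port_returns, bench_returns, periods_per_year=252):
    """Ex-post annualized tracking error and information ratio
    from realized per-period returns (square-root-of-time convention)."""
    active = np.asarray(port_returns) - np.asarray(bench_returns)
    te = active.std(ddof=1) * np.sqrt(periods_per_year)
    ir = active.mean() * periods_per_year / te
    return te, ir

# Hypothetical daily returns: benchmark plus a small, noisy active overlay.
rng = np.random.default_rng(0)
bench = rng.normal(0.0004, 0.01, size=1000)
port = bench + rng.normal(0.0001, 0.003, size=1000)

te, ir = tracking_error_and_ir(port, bench)
```

Note that the denominator is the standard deviation of the active return, not of the portfolio return; swapping in total volatility is a common implementation bug.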

This is the benchmark-relative analogue of the Sharpe ratio. The numerator is not total excess return. It is the value added over the benchmark. The denominator is not total portfolio volatility. It is the volatility of the deviations from the benchmark.

That distinction matters because an allocator can often improve its Sharpe ratio simply by drifting toward the benchmark and inheriting its risk-return profile, while the information ratio asks the sharper question:

given the benchmark already available, were the active deviations worth taking?

What Active Share Measures

Active share is a holdings-based quantity:

$$ \text{Active Share} = \frac{1}{2}\sum_i \left|w_i^{\text{portfolio}} - w_i^{\text{benchmark}}\right|. $$

For the standard long-only case, it measures how different the weights are from the benchmark in capital space. In long-short portfolios, active share can exceed 100%, so it still measures difference in holdings space but no longer has the same intuitive "fraction of the book that differs from benchmark" interpretation.
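The formula is half the L1 distance between the two weight vectors. A minimal sketch, using hypothetical weights against a four-asset equal-weight benchmark:

```python
import numpy as np

def active_share(w_port, w_bench):
    """Active share: half the L1 distance between weight vectors."""
    return 0.5 * np.abs(np.asarray(w_port) - np.asarray(w_bench)).sum()

# Hypothetical long-only portfolio vs. an equal-weight benchmark.
w_bench = np.array([0.25, 0.25, 0.25, 0.25])
w_port = np.array([0.40, 0.30, 0.20, 0.10])

# Absolute deviations are 0.15, 0.05, 0.05, 0.15; half their sum is 0.20.
a = active_share(w_port, w_bench)
```

In the long-only case this reads directly as "20% of the book differs from the benchmark"; for long-short books the same code runs, but the fraction interpretation no longer holds.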

This is useful because two portfolios can have similar benchmark-relative returns while being constructed very differently:

  • one may hug the benchmark closely with a few small tilts
  • the other may hold a visibly different set of names

Active share captures that difference. But it ignores covariance. Two portfolios can have the same active share while one takes far larger common-factor bets than the other. That is why active share does not replace tracking error.
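The covariance blind spot can be made concrete. In the sketch below, the correlation matrix, volatilities, and tilts are all illustrative assumptions: two tilts with identical active share produce very different ex-ante tracking error, depending on whether they double down on one common factor or offset within factors:

```python
import numpy as np

# Toy two-block covariance: assets 0-1 share one factor, assets 2-3 another.
corr = np.array([
    [1.0, 0.9, 0.1, 0.1],
    [0.9, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.9],
    [0.1, 0.1, 0.9, 1.0],
])
vols = np.full(4, 0.20)              # 20% vol per asset (assumed)
cov = corr * np.outer(vols, vols)

# Both tilts have active share 0.5 * 0.20 = 10%.
tilt_same_factor = np.array([0.05, 0.05, -0.05, -0.05])   # doubles down on one block
tilt_cross_factor = np.array([0.05, -0.05, 0.05, -0.05])  # offsets within each block

def ex_ante_te(active_w, cov):
    """Ex-ante tracking error from active weights and a covariance matrix."""
    return float(np.sqrt(active_w @ cov @ active_w))

te_concentrated = ex_ante_te(tilt_same_factor, cov)
te_diversified = ex_ante_te(tilt_cross_factor, cov)
```

Under these assumed numbers, the concentrated tilt carries several times the active risk of the offsetting tilt, even though a holdings-only view rates them as equally active.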

A Joint Reading

The three metrics are most useful together.

| Pattern | Likely interpretation |
| --- | --- |
| low active share, low tracking error | closet benchmark |
| high active share, low tracking error | different holdings, but offsetting common risks |
| low active share, high tracking error | small weight changes creating large factor deviations |
| high active share, high tracking error | genuinely active portfolio |

The information ratio then tells you whether those deviations were actually rewarded.

The counterintuitive third row is often the one that matters most. Small benchmark-relative tilts in high-beta or high-duration names can create large active risk even when the holdings still look close to the benchmark on paper.

This joint reading is exactly what allocator comparison needs. Looking at only one metric usually creates a false story.

Why Sharpe Ratio Is Not Enough

Suppose allocator A and allocator B both have a Sharpe ratio near 0.8.

That does not imply they are equally useful. One allocator may be almost identical to equal weight, while the other may take meaningful active positions and earn modest but consistent active return. Once a benchmark is explicit, Sharpe alone no longer tells you whether the deviations from that benchmark were worth taking.

Benchmark-relative evaluation is therefore not a nice add-on. It is the actual decision frame once a baseline allocator exists.

A Worked Example

Suppose you compare two long-only ETF allocators against an equal-weight benchmark.

Allocator A

  • Sharpe: 0.82
  • tracking error: 2.5%
  • information ratio: 0.10
  • active share: 18%

Allocator B

  • Sharpe: 0.79
  • tracking error: 6.8%
  • information ratio: 0.42
  • active share: 61%

Allocator A looks slightly better on Sharpe, but it is barely different from the benchmark. Its low active share and low tracking error suggest it hugs equal weight closely, and the low information ratio says those deviations did not add much value.

Allocator B has slightly lower Sharpe but much more meaningful benchmark-relative performance. It is truly active, and the active bets were paid well enough to generate a stronger information ratio.

For intuition, an annualized information ratio above roughly 0.5 is often considered strong, while values below about 0.2 usually suggest the active bets are not paying for themselves consistently enough. But those thresholds are noisy on short samples, so they should be read as rough calibration points, not as hard pass-fail lines.

That does not automatically make B better. Higher tracking error may be unacceptable for the mandate. But now the trade-off is visible.

Common Interpretation Errors

The first mistake is to treat active share as a skill metric. It is not. It measures difference, not success.

The second mistake is to treat tracking error as bad by construction. High tracking error can be appropriate if the allocator is supposed to express strong active views.

The third mistake is to compare information ratios without checking the benchmark definition. An IR against equal weight is answering a different question than an IR against a policy portfolio or a production baseline.

The fourth mistake is to forget that active share is blind to covariance. Two portfolios can look equally active in holdings space while one is taking far more hidden market or sector risk.

What Good Reporting Looks Like

A useful benchmark-relative evaluation block should show:

  • benchmark definition
  • active return
  • tracking error
  • information ratio
  • active share
  • turnover, because some active return is purchased through churn rather than stable conviction

Without the benchmark definition, the rest of the metrics are partly uninterpretable.
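As a sketch, the block above can be assembled in one place; the `benchmark_relative_report` helper and its field names are illustrative, not a standard schema:

```python
import numpy as np

def benchmark_relative_report(w_port, w_bench, r_port, r_bench,
                              benchmark_name, turnover, periods_per_year=252):
    """Assemble the benchmark-relative evaluation block described above.
    Field names are illustrative, not a fixed schema."""
    active = np.asarray(r_port) - np.asarray(r_bench)
    te = active.std(ddof=1) * np.sqrt(periods_per_year)
    return {
        "benchmark": benchmark_name,        # state the benchmark definition first
        "active_return_ann": active.mean() * periods_per_year,
        "tracking_error_ann": te,
        "information_ratio": active.mean() * periods_per_year / te,
        "active_share": 0.5 * np.abs(np.asarray(w_port) - np.asarray(w_bench)).sum(),
        "turnover": turnover,               # supplied by the backtest layer
    }

# Hypothetical inputs for illustration.
rng = np.random.default_rng(1)
r_bench = rng.normal(0.0, 0.01, size=500)
r_port = r_bench + rng.normal(0.0, 0.002, size=500)
report = benchmark_relative_report(
    [0.40, 0.30, 0.20, 0.10], [0.25, 0.25, 0.25, 0.25],
    r_port, r_bench, benchmark_name="equal weight", turnover=0.35,
)
```

Keeping the benchmark name inside the report object makes it harder to compare information ratios computed against different baselines by accident.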

In Practice

Use these rules:

  • define the benchmark before evaluating the allocator
  • read tracking error and active share together, never in isolation
  • use information ratio for the "was the deviation worth it?" question
  • keep Sharpe ratio in the report, but do not let it dominate allocator comparison once a benchmark exists
  • treat low active share plus low IR as a warning that the allocator may be adding complexity without adding value

Common Mistakes

  • Comparing allocators only on Sharpe ratio.
  • Reporting active share without tracking error.
  • Treating high tracking error as automatically undesirable.
  • Comparing information ratios computed against different benchmarks.
  • Ignoring turnover when judging active return.

Connections

This primer supports Chapter 17's allocator-comparison logic. It connects directly to the Fundamental Law bridge, factor-risk attribution, concentration diagnostics, and the backtest report from Chapter 16, which provides the absolute-performance layer beneath these benchmark-relative metrics.
