Event-Driven Architecture and Deterministic Strategy Design
If the same strategy cannot behave the same way under replay and live events, the backtest and the production system are not really the same system.
The Intuition
Notebook strategies often look simple because the market is already packaged into a tidy table. The code loops over rows, computes signals, and writes down trades. Live trading is different. Data arrives one event at a time, orders change state asynchronously, and the strategy must react while carrying forward its own state.
That is why Chapter 25 insists on an event-driven core. The architecture is not an implementation detail. It is the mechanism that lets the same decision logic run in both historical replay and live execution.
The deeper requirement is determinism. Given the same market state and the same internal state, the strategy should produce the same decision. If it depends on wall-clock time, hidden globals, untracked randomness, or broker-side side effects, replay stops being trustworthy.
Backtest/live parity therefore rests on two ideas:
- event-driven execution gives both environments the same control flow
- deterministic strategy logic gives both environments the same decisions
The Core Architecture
The simplest useful trading runtime looks like this:
market event -> handler -> feature update -> signal -> target position -> order intent
In backtest mode, the event stream is replayed from history. In live mode, it arrives from a feed. The same handlers should process both.
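One way to make that concrete is a tiny driver loop that feeds any event source, historical or live, through the same handler. This is a sketch, not a real framework; names like run and echo_close are illustrative:

```python
from collections import namedtuple

Bar = namedtuple("Bar", ["timestamp", "close"])

def run(events, handler, state):
    """Drive the same handler over any event source: replay or live feed."""
    intents = []
    for event in events:
        intent, state = handler(event, state)
        if intent is not None:
            intents.append((event.timestamp, intent))
    return intents, state

def echo_close(event, state):
    # Trivial handler: emit the close as the "intent" just to show the flow.
    return event.close, state

# Backtest mode: events come from a historical list.
# Live mode: `events` would instead be a generator yielding feed messages.
history = [Bar(1, 100.0), Bar(2, 101.0)]
intents, _ = run(history, echo_close, state=None)
# intents == [(1, 100.0), (2, 101.0)]
```

Because run never inspects where the events came from, swapping the historical list for a live feed changes nothing in the strategy path.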
That means the strategy function should be close to:
$$ (\text{signal}, \text{intent}, \text{new state}) = f(\text{event}, \text{old state}, \text{config}) $$
and not:
$$ f(\text{event}, \text{old state}, \text{current system time}, \text{random environment}, \text{hidden globals}). $$
The moment the second form appears, reproducibility weakens.
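The contrast between the two forms is easy to see in code. The function names here (bad_decide, good_decide) are invented for illustration:

```python
import random
from datetime import datetime

def bad_decide(event, state):
    # Replay-hostile: depends on wall-clock time and unseeded randomness.
    if datetime.now().hour < 12 and random.random() > 0.5:
        return 1.0, state
    return 0.0, state

def good_decide(event, state, config):
    # Replay-friendly: depends only on the event, prior state, and config.
    signal = 1.0 if event["close"] > config["threshold"] else 0.0
    return signal, dict(state, last_close=event["close"])

sig, new_state = good_decide({"close": 101.0}, {}, {"threshold": 100.0})
# sig == 1.0, and replaying the same event always yields the same result.
```

Calling good_decide twice with identical inputs is guaranteed to give identical outputs; bad_decide offers no such guarantee.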
Why Event-Driven Matters
An event-driven core solves three trading-specific problems.
Shared logic across modes
If replay and live ingestion both call the same on_bar, on_trade, or on_fill handlers, you do not need one strategy for backtest and another for production. That is the continuity Chapter 25 wants.
Explicit state transitions
Position state, pending orders, cash, and rolling features all evolve through events. An event-driven runtime makes those transitions explicit instead of scattering them across notebook cells or cron jobs.
Replayability
If every important state change is driven by an event log, you can replay a day and ask:
- why did the signal flip here?
- why was an order canceled?
- why did the position differ after restart?
That is the foundation for auditability and parity testing.
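If the event log is persisted as one record per line, replay is just re-driving the handler over the file. A minimal sketch, with flip_on_drop as an invented toy handler:

```python
import json

def replay(log_lines, handler, state):
    """Re-drive a strategy from a persisted event log, one record at a time."""
    decisions = []
    for line in log_lines:
        event = json.loads(line)
        decision, state = handler(event, state)
        decisions.append(decision)
    return decisions

def flip_on_drop(event, state):
    # Toy rule: signal flips to 0 whenever price drops below the prior close.
    prev = state.get("last")
    state["last"] = event["close"]
    return (0 if prev is not None and event["close"] < prev else 1), state

log = ['{"close": 100}', '{"close": 101}', '{"close": 99}']
print(replay(log, flip_on_drop, {}))  # [1, 1, 0]
```

Because every decision is reproduced from the log, "why did the signal flip here?" becomes a question you can answer by inspecting a specific record.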
Determinism Is the Real Constraint
Many systems are event-driven but still not reproducible. The common failure modes are boring:
- using now() inside the strategy instead of event timestamps
- drawing random numbers without a fixed seed or deterministic mode
- mutating module-level globals
- reading unversioned external files mid-run
- depending on broker state that is not persisted and restored
A deterministic strategy should depend only on:
- the event payload
- persisted strategy state
- persisted portfolio and order state
- explicit configuration
Everything else should be treated as infrastructure, not strategy logic.
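One way to enforce that boundary, sketched here with hypothetical dataclass names, is to make every allowed dependency a named field in the handler's signature:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StrategyState:
    prices: List[float] = field(default_factory=list)  # rolling feature buffer
    position: float = 0.0                              # persisted position

@dataclass(frozen=True)
class StrategyConfig:
    fast: int = 10
    slow: int = 30

def decide(event, state: StrategyState, config: StrategyConfig):
    # The signature makes every permitted dependency explicit and auditable.
    state.prices.append(event["close"])
    state.prices = state.prices[-config.slow:]  # keep only what the slow window needs
    return state.position, state

pos, state = decide({"close": 100.0}, StrategyState(), StrategyConfig())
```

Anything the strategy needs that does not fit one of these fields is a signal that infrastructure is leaking into decision logic.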
A Worked Example
Consider a dual moving-average crossover strategy.
In a notebook, a reader might write:
```python
df["fast"] = df.close.rolling(10).mean()
df["slow"] = df.close.rolling(30).mean()
df["signal"] = (df.fast > df.slow).astype(int)
```
That is fine for research, but it hides the runtime assumptions. The event-driven version makes them explicit:
```python
from statistics import mean

def on_bar(bar, state):
    # Update the rolling close buffer carried in strategy state.
    state.prices.append(bar.close)
    if len(state.prices) < 30:
        return None, state  # not enough history for the slow window yet
    fast = mean(state.prices[-10:])
    slow = mean(state.prices[-30:])
    target = 1.0 if fast > slow else 0.0
    return target, state
```
Now the same handler can be called:
- during historical replay with bars from disk
- in paper trading with broker-simulated bars
- in live trading with feed-delivered bars
If the input bar sequence and starting state are identical, the target sequence should be identical. That is the practical meaning of deterministic strategy design.
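That property can be checked directly: run the same handler twice over the same bar sequence from the same starting state and compare the outputs. A self-contained sketch:

```python
from statistics import mean
from types import SimpleNamespace

def on_bar(bar, state):
    state.prices.append(bar.close)
    if len(state.prices) < 30:
        return None, state
    fast = mean(state.prices[-10:])
    slow = mean(state.prices[-30:])
    return (1.0 if fast > slow else 0.0), state

def targets(bars):
    # Fresh starting state on every run, so runs are comparable.
    state = SimpleNamespace(prices=[])
    out = []
    for bar in bars:
        target, state = on_bar(bar, state)
        out.append(target)
    return out

bars = [SimpleNamespace(close=100.0 + i) for i in range(35)]
# Identical bar sequence + identical starting state => identical targets.
assert targets(bars) == targets(bars)
```

The same comparison, with one side driven by a recorded live session and the other by replay, is the skeleton of a parity test.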
What Belongs Outside the Strategy
A useful boundary rule is:
The strategy decides what it wants. The runtime decides how that intent is carried out.
The strategy should not own:
- broker session management
- retries and reconnects
- persistence and recovery
- order-acknowledgment handling
- external monitoring and kill switches
Those belong in the engine and broker adapters. When they leak into the strategy class, replay and live behavior start to diverge because infrastructure concerns become mixed with decision logic.
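A sketch of that division of labor, with invented names (strategy, engine_step): the strategy returns only a target position, and the engine translates the difference into an order intent:

```python
def strategy(bar, state):
    # The strategy only states what it wants: a target position.
    return 1.0, state

def engine_step(bar, state, position, broker_submit):
    # The runtime decides how the intent is carried out.
    target, state = strategy(bar, state)
    delta = target - position
    if delta != 0:
        broker_submit({"side": "buy" if delta > 0 else "sell", "qty": abs(delta)})
    return state, target

orders = []  # stand-in for a broker adapter; in replay this is just a log
state, position = engine_step({"close": 100.0}, {}, 0.0, orders.append)
# orders == [{"side": "buy", "qty": 1.0}]
```

Retries, acknowledgments, and session management all live behind broker_submit, so the strategy never sees them.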
That separation is why Chapter 25 can keep the strategy layer stable while adding live-only pieces around it.
In Practice
Three design checks catch most problems early.
Use event timestamps, not system time
The event already tells you when the market observation belongs. Using wall-clock time inside the strategy makes replay dependent on when the code runs rather than on the market state being replayed.
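A small illustration of the difference, with hypothetical function names: the replay-hostile version reads the system clock, while the replay-friendly one takes time as an input the engine controls:

```python
from datetime import datetime, timedelta, timezone

def stale_bad(bar):
    # Replay-hostile: the answer changes depending on when the code runs.
    return (datetime.now(timezone.utc) - bar["timestamp"]).total_seconds() > 60

def stale_good(bar, engine_clock):
    # Replay-friendly: the engine supplies "now" (live clock or replay clock).
    return (engine_clock - bar["timestamp"]).total_seconds() > 60

t0 = datetime(2024, 1, 2, 9, 30, tzinfo=timezone.utc)
bar = {"timestamp": t0}
print(stale_good(bar, t0))  # False, and deterministically so under replay
```

In replay mode the engine_clock advances with the event stream, so the same bars always produce the same staleness decisions.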
Persist state explicitly
If the process restarts, the engine should restore:
- rolling-feature buffers
- open-order state
- positions and cash
- strategy-specific state variables
Without this, a restart creates a different strategy than the one you backtested.
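A minimal sketch of a restart-safe round trip, assuming the state is JSON-serializable; the field names here are illustrative:

```python
import json

def snapshot(state):
    # Persist everything replay needs: buffers, orders, positions, cash.
    return json.dumps({
        "prices": state["prices"],
        "open_orders": state["open_orders"],
        "position": state["position"],
        "cash": state["cash"],
    })

def restore(blob):
    return json.loads(blob)

state = {"prices": [100.0, 101.0], "open_orders": [], "position": 1.0, "cash": 5000.0}
assert restore(snapshot(state)) == state  # restart-safe round trip
```

If the round trip is lossless, a process restart resumes the same strategy; if any field is missing from the snapshot, the restarted strategy silently diverges from the one you backtested.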
Keep handlers small and pure
A handler that mutates many global objects is hard to replay and hard to test. A handler that takes event plus state and returns updated intent plus state is much easier to audit.
Common Mistakes
- Writing notebook-style batch logic and calling it "live capable."
- Letting strategy decisions depend on wall-clock time or environment state.
- Mixing broker mechanics and strategy rules in the same function.
- Failing to persist and restore strategy state across restarts.
- Treating event-driven design as an engineering preference instead of a parity requirement.
Connections
This primer supports Section 25.1's unified backtest/live framework. It connects directly to order state machines, technical parity testing, reconciliation, and operational controls. Chapter 26 builds on the same idea: governance is only meaningful if the runtime is replayable and the strategy behavior is stable under identical inputs.