Monte Carlo Option Pricing: Paths, Payoffs, & Practical Tricks
Closed‑forms are neat, but sometimes you just want to throw random numbers at the problem and call it a day.
1. Model Assumptions
- Same fairy‑tale setup as Black–Scholes (frictionless market, constant $r$ and $\sigma$, no jumps).
- We’ll discretise time and simulate many paths of $S_t$ to estimate the option value.
2. Discretising the SDE
For a tenor $T$ split into $N$ steps of size $\Delta t = T/N$:

```latex
$$S_{t+\Delta t} = S_t \exp\Bigl[\bigl(r - \tfrac{1}{2}\sigma^2\bigr)\Delta t + \sigma\sqrt{\Delta t}\,Z\Bigr],\qquad Z \sim \mathcal{N}(0,1)$$
```

(Exact scheme for GBM — no discretisation bias.)
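Since the scheme is exact, a European payoff needs only the terminal node; here’s a minimal one‑step sketch of that (NumPy; the helper name `terminal_gbm` is mine):

```python
import numpy as np

def terminal_gbm(S0, r, sigma, T, n_paths, seed=None):
    """One exact step to T: S_T = S0 * exp((r - sigma^2/2)*T + sigma*sqrt(T)*Z)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    return S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
```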
3. Monte Carlo Estimator
European call with strike $K$:

```latex
$$\hat C = e^{-rT}\,\frac{1}{M}\sum_{i=1}^{M}\max\bigl(S_T^{(i)} - K,\,0\bigr),\qquad \widehat{\mathrm{s.e.}} = \frac{s}{\sqrt{M}}$$
```

where $M$ is the number of simulated paths and $s$ is the sample standard deviation of the discounted payoffs. Convergence is $O(1/\sqrt{M})$: halving the confidence interval costs four times as many paths.
4. Greeks via Pathwise Derivative
Delta (pathwise):

```latex
$$\Delta = e^{-rT}\,\mathbb{E}\Bigl[\mathbf{1}_{\{S_T > K\}}\,\frac{S_T}{S_0}\Bigr]$$
```

Requires an (almost everywhere) differentiable payoff; works for vanilla calls/puts, but not for digitals.
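Why it works: under GBM the terminal value is linear in the spot, so differentiating the discounted payoff path by path gives the estimator above:

```latex
$$S_T = S_0\,e^{(r-\sigma^2/2)T+\sigma W_T}\;\Rightarrow\;\frac{\partial}{\partial S_0}\,e^{-rT}(S_T-K)^+ = e^{-rT}\,\mathbf{1}_{\{S_T>K\}}\,\frac{S_T}{S_0}\quad\text{(a.e.)}$$
```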
5. NumPy Implementation
```python
# mc_black_scholes_numpy.py
import numpy as np


def simulate_gbm_paths(S0, r, sigma, T, steps, n_paths, seed=None):
    """Exact GBM paths (log-Euler). Returns array of shape (n_paths, steps+1)."""
    rng = np.random.default_rng(seed)  # default_rng(None) seeds from OS entropy
    dt = T / steps
    nudt = (r - 0.5 * sigma**2) * dt
    sigsdt = sigma * np.sqrt(dt)
    increments = nudt + sigsdt * rng.standard_normal((n_paths, steps))
    log_paths = np.cumsum(np.hstack([np.zeros((n_paths, 1)), increments]), axis=1)
    return S0 * np.exp(log_paths)


def price_call_mc(S0, K, r, sigma, T, steps=100, n_paths=100_000):
    """MC price and standard error of a European call."""
    paths = simulate_gbm_paths(S0, r, sigma, T, steps, n_paths)
    payoffs = np.maximum(paths[:, -1] - K, 0.0)
    disc_payoffs = np.exp(-r * T) * payoffs
    price = disc_payoffs.mean()
    stderr = disc_payoffs.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr


def delta_call_mc(S0, K, r, sigma, T, steps=100, n_paths=100_000):
    """Pathwise delta: mean of 1{S_T > K} * e^{-rT} * S_T / S0."""
    paths = simulate_gbm_paths(S0, r, sigma, T, steps, n_paths)
    indicator = (paths[:, -1] > K).astype(float)
    disc_factor = np.exp(-r * T) * paths[:, -1] / S0
    return (indicator * disc_factor).mean()
```
Quick test
```python
if __name__ == "__main__":
    S0, K, r, sigma, T = 100, 100, 0.05, 0.2, 1.0
    price, se = price_call_mc(S0, K, r, sigma, T)
    print(f"MC Price: {price:.4f} ± {1.96*se:.4f} (95% CI)")
```
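For reference, the closed‑form Black–Scholes price at these parameters is about 10.45, so the MC estimate should land inside the printed CI in roughly 95% of runs.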
6. JAX Autodiff / GPU Variant
python
# mc_black_scholes_jax.pyimport jax.numpy as jnpfrom jax import random, jit, grad
@jitdef simulate_paths_jax(key, S0, r, sigma, T, steps, n_paths): dt = T / steps nudt = (r - 0.5 * sigma**2) * dt sigsdt = sigma * jnp.sqrt(dt)
z = random.normal(key, shape=(n_paths, steps)) increments = nudt + sigsdt * z log_paths = jnp.cumsum(jnp.hstack([jnp.zeros((n_paths, 1)), increments]), axis=1) return S0 * jnp.exp(log_paths)
@jitdef call_price_jax(S0, K, r, sigma, T, steps, n_paths, key): paths = simulate_paths_jax(key, S0, r, sigma, T, steps, n_paths) payoffs = jnp.maximum(paths[:, -1] - K, 0.0) return jnp.exp(-r * T) * payoffs.mean()
# Automatic Delta: differentiate w.r.t. S0delta_jax = jit(grad(call_price_jax, argnums=0))
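A quick usage sketch, with the same test parameters as the NumPy version (in JAX the PRNG key is explicit):

```python
key = random.PRNGKey(42)
price = call_price_jax(100.0, 100.0, 0.05, 0.2, 1.0, 100, 100_000, key)
delta = delta_jax(100.0, 100.0, 0.05, 0.2, 1.0, 100, 100_000, key)
print(f"JAX MC price: {price:.4f}, delta: {delta:.4f}")
```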
7. Variance‑Reduction Goodies
- Antithetic Variates – pair each $Z$ with $-Z$ (sketch after this list).
- Control Variate – add back the gap between the analytic Black–Scholes price and its own MC estimate:

```latex
$$\hat C_{\text{cv}} = \hat C_{\text{MC}} +\bigl(C_{\text{BS}} - \hat C_{\text{BS,MC}}\bigr)$$
```
- Moment‑matching – rescale the $Z$’s so the sample mean is $0$ and the sample variance is $1$.
- Quasi‑random – Sobol / Halton sequences for faster convergence (roughly $O(1/M)$ vs. $O(1/\sqrt{M})$ for plain MC).
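A minimal antithetic sketch for the terminal‑value case (the function name `price_call_antithetic` is mine; payoffs are averaged in pairs before computing statistics so the standard error stays honest):

```python
import numpy as np

def price_call_antithetic(S0, K, r, sigma, T, n_paths=100_000, seed=None):
    """Antithetic variates: reuse each draw Z as -Z, then average payoffs in pairs."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths // 2)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * np.sqrt(T)
    s_up = S0 * np.exp(drift + vol * z)
    s_dn = S0 * np.exp(drift - vol * z)
    # Pair means are i.i.d., so the usual standard-error formula applies to them
    pair = 0.5 * (np.maximum(s_up - K, 0.0) + np.maximum(s_dn - K, 0.0))
    disc = np.exp(-r * T) * pair
    return disc.mean(), disc.std(ddof=1) / np.sqrt(n_paths // 2)
```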
8. Local Vol & Path‑Dependent Payoffs
Monte Carlo shines when:
- Volatility is state‑ or time‑dependent.
- Payoff depends on path history (Asian, Barrier, Cliquet); an Asian sketch follows this list.
- Greeks via likelihood‑ratio / bump‑and‑reprice remain doable.
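For instance, an arithmetic‑average Asian call drops straight into the same machinery; a sketch reusing `simulate_gbm_paths` from above (the name `price_asian_call_mc` is mine):

```python
import numpy as np

def price_asian_call_mc(S0, K, r, sigma, T, steps=100, n_paths=100_000):
    """Arithmetic-average Asian call: payoff max(mean(S) - K, 0) over monitoring dates."""
    paths = simulate_gbm_paths(S0, r, sigma, T, steps, n_paths)
    avg = paths[:, 1:].mean(axis=1)  # average over the steps after t=0
    disc = np.exp(-r * T) * np.maximum(avg - K, 0.0)
    return disc.mean(), disc.std(ddof=1) / np.sqrt(n_paths)
```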
9. Next Steps
- Implement antithetic & control‑variate toggles in code.
- Extend to Greeks via likelihood‑ratio (a.k.a. LR or score‑function).
- Speed‑test JAX vs. NumPy vs. PyTorch on GPU clusters.
Tip: For exotics, store full path arrays; for vanillas, stream the final node to keep memory lean.
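A chunked version of that streaming idea for vanillas (names mine; memory stays bounded by `chunk` no matter how large `n_paths` gets):

```python
import numpy as np

def price_call_streaming(S0, K, r, sigma, T, n_paths=10_000_000,
                         chunk=1_000_000, seed=None):
    """Accumulate discounted payoffs chunk by chunk; never store full paths."""
    rng = np.random.default_rng(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * np.sqrt(T)
    total = 0.0
    done = 0
    while done < n_paths:
        m = min(chunk, n_paths - done)
        s_T = S0 * np.exp(drift + vol * rng.standard_normal(m))
        total += np.maximum(s_T - K, 0.0).sum()
        done += m
    return np.exp(-r * T) * total / n_paths
```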