Monte Carlo Option Pricing: Paths, Payoffs, & Practical Tricks

Sashank Neupane

Closed‑forms are neat, but sometimes you just want to throw random numbers at the problem and call it a day.

1. Model Assumptions

$$\textbf{Risk-neutral GBM:}\quad dS_t = r S_t\,dt + \sigma S_t\,dW_t,\qquad S_0 = S$$
  • Same fairy‑tale setup as Black–Scholes (frictionless, constant $r, \sigma$, no jumps).
  • We’ll discretise time and simulate many paths of $S_t$ to estimate the option value.

2. Discretising the SDE

For a tenor $T$ split into $N$ steps of size $\Delta t = T/N$:

$$S_{t_{k+1}} = S_{t_k}\,\exp\!\Bigl[\bigl(r - \tfrac{1}{2}\sigma^2\bigr)\,\Delta t + \sigma\sqrt{\Delta t}\,Z_k\Bigr],\qquad Z_k \sim \mathcal N(0,1)\ \text{i.i.d.}$$

(This scheme is exact for GBM, so there is no discretisation bias; for payoffs that depend only on $S_T$, a single step $N = 1$ already suffices.)

3. Monte Carlo Estimator

European call with strike $K$:

$$\hat C = e^{-rT}\,\frac{1}{M}\sum_{i=1}^{M} \max\!\bigl(S_T^{(i)} - K,\,0\bigr),\qquad \operatorname{SE} = \frac{\hat\sigma_{\text{payoff}}}{\sqrt{M}}$$

where $M$ is the number of simulated paths and $\hat\sigma_{\text{payoff}}$ is the sample standard deviation of the discounted payoffs. The error shrinks like $1/\sqrt{M}$: quadrupling the number of paths halves the standard error.

4. Greeks via Pathwise Derivative

Delta (pathwise):

$$\Delta \approx e^{-rT}\,\frac{1}{M}\sum_{i=1}^{M} \mathbf 1_{\{S_T^{(i)} > K\}}\,\frac{S_T^{(i)}}{S_0}$$

This follows from $\partial S_T/\partial S_0 = S_T/S_0$ under GBM: differentiating the discounted payoff path by path gives $\mathbf 1_{\{S_T > K\}}\,S_T/S_0$. It requires an almost-everywhere differentiable payoff, which covers vanilla calls and puts; see delta_call_mc below.

5. NumPy Implementation

```python
# mc_black_scholes_numpy.py
import numpy as np

def simulate_gbm_paths(S0, r, sigma, T, steps, n_paths, seed=None):
    """Exact GBM paths in log space. Returns an array of shape (n_paths, steps + 1)."""
    rng = np.random.default_rng(seed)  # seed=None draws fresh OS entropy
    dt = T / steps
    nudt = (r - 0.5 * sigma**2) * dt   # per-step drift of log S
    sigsdt = sigma * np.sqrt(dt)       # per-step volatility of log S
    increments = nudt + sigsdt * rng.standard_normal((n_paths, steps))
    # Prepend a zero column so every path starts exactly at S0.
    log_paths = np.cumsum(np.hstack([np.zeros((n_paths, 1)), increments]), axis=1)
    return S0 * np.exp(log_paths)

def price_call_mc(S0, K, r, sigma, T, steps=100, n_paths=100_000):
    paths = simulate_gbm_paths(S0, r, sigma, T, steps, n_paths)
    payoffs = np.maximum(paths[:, -1] - K, 0.0)
    disc_payoffs = np.exp(-r * T) * payoffs
    price = disc_payoffs.mean()
    stderr = disc_payoffs.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

def delta_call_mc(S0, K, r, sigma, T, steps=100, n_paths=100_000):
    paths = simulate_gbm_paths(S0, r, sigma, T, steps, n_paths)
    indicator = (paths[:, -1] > K).astype(float)      # 1{S_T > K}
    disc_factor = np.exp(-r * T) * paths[:, -1] / S0  # e^{-rT} S_T / S0
    return (indicator * disc_factor).mean()
```

Quick test

```python
if __name__ == "__main__":
    S0, K, r, sigma, T = 100, 100, 0.05, 0.2, 1.0
    price, se = price_call_mc(S0, K, r, sigma, T)
    print(f"MC Price: {price:.4f} ± {1.96 * se:.4f} (95% CI)")
```

6. JAX Autodiff / GPU Variant

```python
# mc_black_scholes_jax.py
from functools import partial

import jax.numpy as jnp
from jax import random, jit, grad

# steps and n_paths determine array shapes, so they must be static under jit.
@partial(jit, static_argnames=("steps", "n_paths"))
def simulate_paths_jax(key, S0, r, sigma, T, steps, n_paths):
    dt = T / steps
    nudt = (r - 0.5 * sigma**2) * dt
    sigsdt = sigma * jnp.sqrt(dt)
    z = random.normal(key, shape=(n_paths, steps))
    increments = nudt + sigsdt * z
    log_paths = jnp.cumsum(jnp.hstack([jnp.zeros((n_paths, 1)), increments]), axis=1)
    return S0 * jnp.exp(log_paths)

@partial(jit, static_argnames=("steps", "n_paths"))
def call_price_jax(S0, K, r, sigma, T, steps, n_paths, key):
    paths = simulate_paths_jax(key, S0, r, sigma, T, steps, n_paths)
    payoffs = jnp.maximum(paths[:, -1] - K, 0.0)
    return jnp.exp(-r * T) * payoffs.mean()

# Automatic Delta: differentiate the price w.r.t. S0.
delta_jax = grad(call_price_jax, argnums=0)
```
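A minimal usage sketch (the parameter values are illustrative, not from the original post): both functions take an explicit PRNG key, so the price and the Delta can be evaluated on exactly the same random draws.

```python
from jax import random

key = random.PRNGKey(0)
price = call_price_jax(100.0, 100.0, 0.05, 0.2, 1.0, steps=100, n_paths=100_000, key=key)
delta = delta_jax(100.0, 100.0, 0.05, 0.2, 1.0, steps=100, n_paths=100_000, key=key)
print(f"JAX MC price: {price:.4f}, autodiff Delta: {delta:.4f}")
```

For the call payoff, differentiating through jnp.maximum reproduces the pathwise estimator of §4 automatically.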

7. Variance‑Reduction Goodies

  • Antithetic Variates – pair each $Z$ with $-Z$ (see the sketch after this list).
  • Control Variate – correct the MC estimate using a payoff whose Black–Scholes price is known in closed form:

$$
\hat C_{\text{cv}} = \hat C_{\text{MC}} +
\bigl(C_{\text{BS}} - \hat C_{\text{BS,MC}}\bigr)
$$
  • Moment‑matching – rescale the $Z$’s so the sample mean is $0$ and the sample variance is $1$.
  • Quasi‑random – Sobol / Halton sequences for faster convergence.
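A minimal antithetic sketch, assuming the same one-step exact GBM as above (the helper name price_call_antithetic is mine, not from the original):

```python
import numpy as np

def price_call_antithetic(S0, K, r, sigma, T, n_pairs=50_000, seed=None):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_pairs)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * np.sqrt(T)
    # Each draw Z and its mirror -Z share randomness, so the pair-averaged
    # payoff typically has lower variance than two independent draws.
    st_up = S0 * np.exp(drift + vol * z)
    st_dn = S0 * np.exp(drift - vol * z)
    pair_payoff = 0.5 * (np.maximum(st_up - K, 0.0) + np.maximum(st_dn - K, 0.0))
    disc = np.exp(-r * T) * pair_payoff
    return disc.mean(), disc.std(ddof=1) / np.sqrt(n_pairs)
```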

8. Local Vol & Path‑Dependent Payoffs

Monte Carlo shines when:

  • Volatility is state‑ or time‑dependent.
  • Payoff depends on path history (Asian, Barrier, Cliquet); an Asian sketch follows this list.
  • Greeks via likelihood‑ratio / bump‑and‑reprice remain doable.
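As a sketch of the path-dependent case, here is an arithmetic-average Asian call reusing simulate_gbm_paths from §5 (the function name price_asian_call_mc is mine); only the payoff line changes:

```python
import numpy as np
from mc_black_scholes_numpy import simulate_gbm_paths

def price_asian_call_mc(S0, K, r, sigma, T, steps=100, n_paths=100_000):
    paths = simulate_gbm_paths(S0, r, sigma, T, steps, n_paths)
    avg = paths[:, 1:].mean(axis=1)  # arithmetic average over the monitoring dates
    disc_payoffs = np.exp(-r * T) * np.maximum(avg - K, 0.0)
    return disc_payoffs.mean(), disc_payoffs.std(ddof=1) / np.sqrt(n_paths)
```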

9. Next Steps

  • Implement antithetic & control‑variate toggles in code.
  • Extend to Greeks via likelihood‑ratio (a.k.a. LR or score‑function); the Delta version is sketched after this list.
  • Speed‑test JAX vs. NumPy vs. PyTorch on GPU clusters.
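For the likelihood-ratio step, the one-step GBM Delta is a standard score-function identity (stated here as a sketch, not derived in the original post):

$$
\Delta_{\text{LR}} \approx e^{-rT}\,\frac{1}{M}\sum_{i=1}^{M}
\max\!\bigl(S_T^{(i)} - K,\,0\bigr)\,\frac{Z^{(i)}}{S_0\,\sigma\sqrt{T}}
$$

where $Z^{(i)}$ is the standard normal that generated $S_T^{(i)}$. Because it never differentiates the payoff, it also covers discontinuous payoffs such as digitals.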

Tip: For exotics, store full path arrays; for vanillas, stream the final node to keep memory lean.
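A minimal sketch of that tip under the same GBM assumptions (terminal_prices is my own helper name): for a vanilla, one exact step straight to maturity avoids storing any (n_paths, steps) array.

```python
import numpy as np

def terminal_prices(S0, r, sigma, T, n_paths, seed=None):
    # Exact GBM jump straight to T: memory stays O(n_paths).
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    return S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
```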
