Fig Volatility Lab — Fig Research Suite
CRD# 315131
SEC 801-121821
Grounded in Bayesian Analysis of Stochastic Volatility Models — Jacquier, Polson & Rossi (Journal of Business & Economic Statistics, 1994; extended 2004) — a landmark contribution to financial econometrics that fundamentally advanced how latent volatility is estimated and understood.
The Fig spirit: no model here predicts tomorrow. We use every tool with caution, as structured reference — and share the science openly, so the mathematics of modern finance belongs to everyone.
Realized volatility module: classroom work by Victor Melfa, portfolio manager · Managing Financial Risk · Johns Hopkins University · Prof. Nicola Fusari
Panel 01 · Intellectual Lineage
The Volatility Model Family Tree
Click any model to expand its equation, plain-English meaning, and honest limitations.
1994 · JP Morgan
RiskMetrics / EWMA
Exponentially weighted moving average. The first practitioner standard.
Details
σ²ₜ = λσ²ₜ₋₁ + (1−λ)r²ₜ₋₁
λ ≈ 0.94 (daily)
✓ Simple. Intuitive. Fast.

✗ One parameter. No mean-reversion. Volatility never "forgets" — a shock today decays at fixed rate forever. Treats all calm periods the same.
Plain English: The previous variance estimate counts for 94%; yesterday's squared return adds the other 6%. Easy math, but it treats volatility like a thermostat set to one speed.
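The EWMA recursion above fits in a few lines. A minimal sketch, assuming synthetic ~1% daily-vol returns and seeding the filter with the first squared return (the seed choice is an illustrative assumption, not part of the RiskMetrics spec):

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """RiskMetrics EWMA: var_t = lam*var_{t-1} + (1-lam)*r_{t-1}^2."""
    r2 = np.asarray(returns) ** 2
    var = np.empty(len(r2))
    var[0] = r2[0]                      # seed with the first squared return (assumption)
    for t in range(1, len(r2)):
        var[t] = lam * var[t - 1] + (1 - lam) * r2[t - 1]
    return var

# Illustrative synthetic returns, not market data
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 500)          # ~1% daily vol
v = ewma_variance(r)
ann_vol = np.sqrt(252 * v[-1])          # annualize the last variance estimate
```

Note how a single shock enters at weight 1−λ and then decays geometrically, the "thermostat set to one speed" behavior described above.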
1982 · Engle → Bollerslev 1986
GARCH(1,1)
Adds mean-reversion. Volatility clusters but eventually returns to average.
Details
σ²ₜ = ω + α·r²ₜ₋₁ + β·σ²ₜ₋₁
Long-run var: ω/(1−α−β)
✓ Mean-reversion. Captures vol clustering. Nobel Prize (Engle, 2003).

✗ Still a single number per period. Assumes symmetric response (up and down shocks treated equally). Underestimates tails. Leverage effect ignored.
Plain English: Volatility today depends on yesterday's shock AND yesterday's volatility, both pulling it back toward a long-run average. Better — but still one answer, not a range of answers.
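The GARCH(1,1) recursion and its long-run variance ω/(1−α−β) can be sketched directly from the equations above. The parameter values here are illustrative assumptions chosen only to satisfy stationarity (α+β < 1):

```python
import numpy as np

def garch_filter(returns, omega, alpha, beta):
    """GARCH(1,1): var_t = omega + alpha*r_{t-1}^2 + beta*var_{t-1}."""
    assert alpha + beta < 1, "stationarity requires alpha + beta < 1"
    long_run = omega / (1 - alpha - beta)   # unconditional (long-run) variance
    r2 = np.asarray(returns) ** 2
    var = np.empty(len(r2))
    var[0] = long_run                        # start at the long-run level (assumption)
    for t in range(1, len(r2)):
        var[t] = omega + alpha * r2[t - 1] + beta * var[t - 1]
    return var, long_run

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, 1000)              # illustrative synthetic returns
var, lr = garch_filter(r, omega=5e-6, alpha=0.08, beta=0.90)
```

Unlike EWMA, every shock here is pulled back toward `long_run`, which is the mean-reversion the card describes.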
~2000s · Andersen, Bollerslev, Diebold
Realized Volatility
Uses intraday high-frequency data to measure vol directly, not estimate it.
Details
RVₜ = Σⁿᵢ₌₁ r²ₜ,ᵢ
Sum of squared intraday returns
✓ Model-free. Uses real tick data. Most accurate when data available.

✗ Requires minute-by-minute data (infrastructure cost). Market microstructure noise at very high frequency. HAR-RV model needed for forecasting.
Plain English: Instead of guessing volatility from daily closes, you measure it by adding up every small move within the day. Like measuring a mountain's height by walking every inch of slope — much more precise, but you need the data.
Reference: Hopkins/Fusari class data — see Panel 04
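The realized-variance sum RVₜ = Σᵢ r²ₜ,ᵢ is one line of code once the intraday returns exist. A sketch, assuming a hypothetical day of 78 five-minute returns (a 6.5-hour session) simulated so the true daily vol is about 1%:

```python
import numpy as np

def realized_variance(intraday_returns):
    """RV_t: sum of squared intraday returns within one trading day."""
    return np.sum(np.asarray(intraday_returns) ** 2)

rng = np.random.default_rng(2)
# Hypothetical tick proxy: 78 five-minute returns, scaled so daily vol ~ 1%
r_intraday = rng.normal(0.0, 0.01 / np.sqrt(78), 78)
rv = realized_variance(r_intraday)
daily_vol = np.sqrt(rv)                 # measured, not model-estimated
ann_vol = daily_vol * np.sqrt(252)
```

The point of the exercise: nothing here is a model parameter. The estimator is the data, which is why RV is called model-free, and why it is unavailable without intraday infrastructure.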
Taylor 1982 · Hull-White 1987
Stochastic Volatility (SV)
Volatility itself is a random process — not a deterministic formula.
Details
rₜ = √hₜ · εₜ
log(hₜ) = μ + φ(log hₜ₋₁ − μ) + σᵥ·νₜ
εₜ, νₜ ~ N(0,1)
✓ Volatility has its own randomness. Naturally fat-tailed returns. More realistic than GARCH.

✗ Latent state — you never observe hₜ directly. Classical estimation (MLE, GMM) is very hard. This is where Bayesian MCMC becomes essential.
Plain English: Volatility doesn't just react to yesterday's news — it has its own personality, its own randomness. Think of it as two invisible processes happening at once: returns, and volatility of returns. You can't see either directly. That's the problem JPR solved.
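The two invisible processes can be simulated directly from the equations above. A sketch with illustrative assumed values (φ and σᵥ match the simulator's seed parameters below; the log-variance level μ = −9 is an assumption chosen so daily vol lands near 1%):

```python
import numpy as np

def simulate_sv(T, mu=-9.0, phi=0.97, sigma_v=0.14, seed=0):
    """r_t = sqrt(h_t) * eps_t, with log(h_t) following an AR(1) process."""
    rng = np.random.default_rng(seed)
    log_h = np.empty(T)
    log_h[0] = mu                                # start at the long-run log-variance
    for t in range(1, T):
        log_h[t] = mu + phi * (log_h[t - 1] - mu) + sigma_v * rng.normal()
    h = np.exp(log_h)                            # latent variance path
    r = np.sqrt(h) * rng.normal(size=T)          # observed returns
    return r, h

r, h = simulate_sv(500)
```

In simulation we get to see `h`. In reality only `r` is observed, which is exactly the estimation problem JPR solved.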
Jacquier, Polson & Rossi · 1994 / 2004 · JBES
Bayesian SV + MCMC
The breakthrough: recover the full posterior distribution of latent volatility using Markov Chain Monte Carlo.
Details
p(h₁:T, μ, φ, σᵥ | r₁:T) ∝ p(r | h) · p(h | μ,φ,σᵥ) · p(μ,φ,σᵥ)
Posterior ∝ Likelihood × Prior (divided by a normalizing constant)
2004 extension: corr(εₜ, νₜ) = ρ ≠ 0 (leverage)
εₜ ~ t(ν) (fat tails)
✓ Full uncertainty quantification. Exact finite-sample inference. Handles leverage + fat tails. Multistep predictive densities include parameter uncertainty.

✗ Computationally intensive. Results sensitive to prior specification. Harder to explain to clients than "yesterday's VaR was X%."
Plain English: Instead of answering "what WAS volatility yesterday?" with a single number, JPR answers: "here is the entire probability distribution of what volatility could have been, given all the data." A cloud of plausible values, not a single point. That honesty about uncertainty is the revolution.
What this lineage achieves
Each generation fixes the previous model's biggest flaw
Bayesian SV recovers the unobservable — the latent volatility path
Uncertainty about volatility is itself quantified
Parameter uncertainty feeds into predictive densities
What no model in this tree does
Predict next period's return with reliability
Tell you when the next crisis begins
Eliminate the need for judgment about model choice
Perform equally well across all market regimes
Panel 02 · Interactive Simulation
The Volatility Simulator
Simulated daily returns with realistic parameters. Hover any point to see which model is catching — or missing — a movement. Jump diffusion is layered in below.
Simulation seed parameters
μ = 0.0350% daily · σ_long ≈ 13% annual · φ = 0.97 · σᵥ = 0.14
Simulated daily returns — 500 trading days · ⚡ jump events marked
Volatility estimates by model (annualized %) · hover to inspect
EWMA (λ=0.94)
GARCH(1,1)
Realized Vol proxy
SV posterior mean
SV 5–95% band
Hover the volatility chart to see how each model is reading that moment — and what it's missing.
Jump Diffusion — Merton (1976) · Extended by Jacquier & Polson
Adding Jump Terms to the Diffusion
Standard SV models assume returns move continuously. Markets don't. Jump diffusion adds a Poisson process for sudden discontinuous moves.
Merton Jump-Diffusion Model
rₜ = μ + √hₜ·εₜ + Jₜ·ηₜ
where Jₜ ~ Bernoulli(λ) — jump arrives with prob λ per day
ηₜ ~ N(μJ, σ²J) — jump size, typically negative skew
λ ≈ 0.01–0.03 (1–3 jumps per ~100 days)
Plain English: Every day, a coin is flipped. With ~98% probability, the return follows the smooth SV process. With ~2% probability, a sudden jump hits — drawn from a separate distribution (usually fat-tailed and negatively skewed — crashes are bigger than spikes). EWMA and GARCH see this as "a big return" and slowly adjust. A jump model knows it came from a different process entirely.
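The daily coin flip is literal in code. A sketch with illustrative assumed parameters (λ = 0.02, a negative mean jump size for the crash asymmetry described above):

```python
import numpy as np

def simulate_jump_diffusion(T, mu=0.0003, sigma=0.01, lam=0.02,
                            mu_j=-0.03, sigma_j=0.02, seed=3):
    """r_t = mu + sigma*eps_t + J_t*eta_t, with J_t ~ Bernoulli(lam)."""
    rng = np.random.default_rng(seed)
    diffusion = mu + sigma * rng.normal(size=T)   # the smooth component
    jumped = rng.random(T) < lam                  # the daily coin flip
    eta = rng.normal(mu_j, sigma_j, T)            # jump sizes, negative mean
    return diffusion + jumped * eta, jumped

r, jumped = simulate_jump_diffusion(1000)
```

With λ = 0.02 you expect roughly 20 jump days per 1000, and because μ_J < 0 the jump days skew the return distribution leftward, the crash asymmetry the overlay illustrates.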
Show jump events on chart Jump markers: ⚡
Jump-diffusion model overlay Off — compare without
Jump event zoom — click a ⚡ marker above or select a jump below
EWMA — misses jump
GARCH — lags badly
Jump-SV — catches it
SV no-jump — underestimates
True jump moment
Select a jump event above to see a close-up of which models catch it and which miss.
The honest conclusion on jump models
Jump model assumption: P(jump | data) = posterior estimate
Reality: jump times are latent — never directly observed
Key limitation: λ̂, μ̂J, σ̂J are estimated from past data
Even jump models miss jumps. They may misclassify a large diffusion move as a jump (false positive), or miss a genuine jump that looks like an extreme diffusion draw (false negative). In-sample parameter estimates of jump frequency λ and jump size distribution depend entirely on the period chosen. A model calibrated on 2003–2007 assigned near-zero probability to the 2008 jump regime. No jump model reliably times the arrival of the next jump. What it does: it honestly widens the predictive distribution during high-λ periods, reflecting that a jump regime is more probable — which is itself the useful signal.
Jump model limitations — overtiming, undertiming, and false positives
True jump (ground truth)
Model posterior P(jump)
False positive — no jump occurred, but the model thinks one did
False negative — real jump, model missed
Orange posterior spikes that align with red true-jump markers = a correct detection. Orange spikes without red = false positives. Red markers without orange = missed jumps (false negatives). No model gets all of them.
Historical Market Data · Model Performance in Real Time
What Each Model Actually Saw
Calibrated historical daily returns from major market events. Each model ran in real time — seeing only past data, never the future. The highlighted regions show where models diverged from reality. This is the exercise: not simulation, but history.
Period:
How to read this chart
Grey bars = actual daily returns (what really happened)
Colored lines = each model's real-time volatility estimate (what it thought was happening)
Red shaded regions = model significantly UNDERESTIMATED risk vs realized vol
Blue shaded regions = model OVERESTIMATED risk (still elevated after vol calmed)
The exercise: EWMA/RiskMetrics lags badly at both the onset and recovery. GARCH catches the persistence better but still misses sharp initial spikes. Jump diffusion reacts fastest to discontinuous moves. No model times the crisis perfectly — but their failure modes are instructive and different.
Historical daily returns — S&P 500 proxy · 2008–2009
Real-time model volatility estimates vs realized vol · ■ under-estimated · ■ over-estimated
Realized vol (ground truth)
EWMA / RiskMetrics
GARCH(1,1)
Jump Diffusion SV
Bayesian SV
Select a period above, then hover the chart to inspect each day.
Key insight — why the SV band matters even with jumps
EWMA and GARCH give you a number. That number carries false precision. The SV credible band is honest: it says "given this data and these priors, volatility is somewhere in this range." Add jump diffusion and the band widens further around jump events — correctly. A narrow GARCH estimate during a crisis is not confidence. It's overconfidence. A jump model's widened predictive density during high-λ periods is the most honest signal available.
Panel 03 · Jacquier, Polson & Rossi · Core Methodology
The Bayesian Lens
Three windows into the JPR framework. Each reveals a different layer of what Bayesian inference adds that classical methods cannot.
JPR Core Equation — Posterior of latent volatility
p(hₜ | r₁:T, θ) ∝ p(rₜ | hₜ) · p(hₜ | hₜ₋₁, θ)
← Obs. likelihood × State transition prior
What this says: The probability distribution of today's hidden volatility hₜ is shaped by two forces — how well it explains today's return (the likelihood), and how consistent it is with yesterday's volatility level given the AR(1) process (the prior). Bayes multiplies these together and normalizes. The result: not a point, but a distribution.
Fat-tailed errors (Student-t) Off — Gaussian errors
Leverage effect (ρ ≠ 0) Off — symmetric shocks
Posterior distribution of √hₜ — mean and 5%, 25%, 75%, 95% quantiles
Toggle fat tails and leverage above to see how the credible band changes.
The fundamental difference
Frequentist (GARCH): σ̂ₜ = single number
Bayesian (JPR): p(hₜ | data) = full distribution

Predictive density (Bayesian):
p(rₜ₊₁ | r₁:T) = ∫ p(rₜ₊₁ | hₜ₊₁) · p(hₜ₊₁ | hₜ) · p(hₜ, θ | r₁:T) dhₜ dθ
The magic: The Bayesian predictive density integrates over ALL uncertainty — both the latent state hₜ and the parameters θ. A frequentist plugs in point estimates and pretends they're known. JPR's predictive distribution is wider and more honest — especially in the tails, exactly where it matters most for risk management.
Side-by-side: GARCH point estimate vs JPR posterior mean + band
GARCH(1,1) — point estimate
JPR — posterior mean
JPR — 90% credible interval
Notice the crisis region (shaded): GARCH gives one number and moves on. The JPR band widens dramatically — not a failure, but an honest signal that the data are less informative in extreme regimes. That widening is the risk signal.
The JPR Algorithm — Cyclic Metropolis / Gibbs Sampler
For t = 1 to T (each latent state hₜ):
Step 1: Sample hₜ | r₁:T, h₋ₜ, θ — Metropolis step
Step 2: Sample μ, φ, σᵥ | h₁:T — Gibbs step (conjugate)
Repeat N times → draws converge to posterior p(h₁:T, θ | r₁:T)
How MCMC works, in plain English: Imagine you're trying to find the shape of a mountain range in thick fog. You can't see the whole landscape, but you can measure the elevation exactly where you're standing. MCMC is a random walk through that fog: each proposed step uphill is always accepted, and each step downhill is accepted with probability equal to the ratio of the two elevations. After thousands of steps, the path you've traced outlines the mountain: you've mapped the posterior distribution without ever seeing it directly.

Why does this matter for volatility? Classical methods give you one answer — a point estimate of how volatile the market was yesterday. MCMC gives you a full distribution of plausible answers, each weighted by how well it fits the data. That distribution is what the chart below is building in real time. The histogram on the right is not a prediction — it is an honest picture of what the data can and cannot tell us about the persistence of volatility.

Watch the chain: It starts far from the true value (left side, faded) — that is the burn-in. As it runs, it gravitates toward the true parameter. The histogram fills in the posterior. This is Bayesian inference made visible.
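The chain animation above can be reproduced in a few dozen lines. This is not the full JPR cyclic sampler (which also draws every latent hₜ); it is a simplified sketch, assuming the log-volatility path is observed, so a random-walk Metropolis sampler targets only the persistence parameter φ with a flat prior on (−1, 1):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an AR(1) log-volatility path with known persistence (illustrative values)
phi_true, mu, sigma_v, T = 0.95, 0.0, 0.2, 1000
x = np.zeros(T)
for t in range(1, T):
    x[t] = mu + phi_true * (x[t - 1] - mu) + sigma_v * rng.normal()

def log_post(phi):
    """Log posterior of phi: Gaussian AR(1) likelihood, flat prior on (-1, 1)."""
    if not -1 < phi < 1:
        return -np.inf
    resid = x[1:] - mu - phi * (x[:-1] - mu)
    return -0.5 * np.sum(resid ** 2) / sigma_v ** 2

# Random-walk Metropolis: accept uphill always, downhill with the posterior ratio
n_draws, step = 6000, 0.02
chain = np.empty(n_draws)
chain[0] = 0.0                        # deliberately far from phi_true: burn-in is visible
lp = log_post(chain[0])
for i in range(1, n_draws):
    prop = chain[i - 1] + step * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        chain[i], lp = prop, lp_prop
    else:
        chain[i] = chain[i - 1]       # rejected: stay put

posterior = chain[1000:]              # discard burn-in draws
```

The histogram of `posterior` is the object the panel is describing: not one φ, but a distribution of plausible φ values centered near the truth.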
MCMC chain — sampler converging on posterior for φ (persistence parameter)
Press Run Sampler. Watch the chain wander — then converge. The burn-in period (first ~200 draws, shown faded) is discarded. What remains are draws from the true posterior.
Panel 04 · The Honest Reckoning
No Model Wins All Regimes
Click a market regime. See which model performs best — and why no single model dominates. Reference: Hopkins/Fusari class data on realized volatility.
🐂 Bull Market
e.g. 2013–2019 · Low, stable vol
🐻 Bear Market
e.g. 2000–2002 · Rising, persistent vol
⚡ Crisis
e.g. 2008–2009 · Vol spike, fat tails
↗ Recovery
e.g. 2009–2011 · Rapidly falling vol
Bayesian Model Comparison — Log Predictive Likelihood
LPL(Mₖ) = Σₜ log p(rₜ | r₁:ₜ₋₁, Mₖ)
= how well model Mₖ predicted each return, one step ahead, summed over all periods

Bayes Factor: BF₁₂ = p(data | M₁) / p(data | M₂)
Plain English: At the end of each day, each model makes a prediction for tomorrow's return. We score it by asking: "How probable was what actually happened, according to your distribution?" Add up those scores over all days. Higher total = better model. This is a proper scoring rule — you can't game it by artificially widening your distribution. JPR's advantage comes in crisis periods when fat tails and uncertainty matter most. Even so, every model here is at best a structured reference — not a crystal ball. No score, however high, translates into reliable prediction of the next day's return. We use all of these tools carefully, as lenses on the data, not answers.
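The scoring rule itself is a one-liner. A sketch, assuming one-step Gaussian predictive densities and a synthetic fat-tailed (Student-t) return series; the comparison of an "honest" variance against an artificially narrow one illustrates why the log score punishes overconfidence:

```python
import numpy as np

def normal_logpdf(x, mean, var):
    """Log density of N(mean, var), written out to avoid extra dependencies."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def log_predictive_likelihood(returns, variances):
    """LPL = sum_t log p(r_t | sigma2_t) under one-step Gaussian forecasts."""
    return np.sum(normal_logpdf(np.asarray(returns), 0.0, np.asarray(variances)))

rng = np.random.default_rng(5)
r = rng.standard_t(4, 1000) * 0.01            # fat-tailed synthetic returns
true_var = (4 / (4 - 2)) * 0.01 ** 2          # variance of the scaled t(4)
lpl_honest = log_predictive_likelihood(r, np.full(1000, true_var))
lpl_narrow = log_predictive_likelihood(r, np.full(1000, 0.25 * true_var))
```

The narrow model looks "precise" but scores worse: extreme days are catastrophically improbable under its density. That is the proper-scoring-rule property described above.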
Hopkins / Fusari Class Reference — Realized Volatility
In Prof. Nicola Fusari's Managing Financial Risk course at Johns Hopkins, the progression from EWMA → GARCH → Realized Volatility was demonstrated using minute-by-minute tick data. Realized Vol (RV = Σᵢ r²ₜ,ᵢ summed over intraday intervals) is the most accurate estimator when available — but requires high-frequency data infrastructure that daily-only investors cannot access. The Fig Volatility Lab uses simulated RV proxies for illustration; actual RV in the regime scorecard above reflects Fusari-class calibrated parameters. Data source: Victor J. Melfa III / Johns Hopkins Managing Financial Risk, Prof. Nicola Fusari.
Panel 05 · The New Frontier
Waves, Wavelets & The Wicked Problem
From Fourier to wavelets to Graph Wavelet Neural Networks — each step adds power, each step adds honest limitations. Interact with every layer.
1
Pure Frequency — The Sine Wave
All signal processing begins here. Drag the sliders to build your own wave — notice how frequency and amplitude are perfectly specified but there is zero time information.
Interactive sine wave · drag sliders
Sine wave
f(t) = A · sin(2πωt + φ) — amplitude A, frequency ω, phase φ
Plain English: Perfect information about frequency — zero information about when. A wave that repeats forever at a fixed rate. Beautiful math. Useless for markets — nothing in finance repeats at a constant rate forever.
2
Fourier Decomposition — Many Frequencies at Once
Click the frequency bars to add or remove components. Watch the reconstructed signal change. This is what Fourier analysis does to a return series.
Interactive Fourier builder · click bars to toggle frequency components
Lower half = power spectrum (click bars). Upper half = reconstructed signal from selected components.
Fourier Transform
X̂(ω) = ∫ x(t) · e^(−2πiωt) dt — frequency content of signal x(t)
Plain English: Like a musical chord — you know which notes are present, not when each was played. The power spectrum tells you which cycles dominate a return series, not when the regime shifted. That's the fundamental limitation Fourier cannot solve.
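The power spectrum the panel describes is a direct FFT computation. A sketch, assuming a hypothetical signal built from a 64-day cycle, a weaker 8-day cycle, and noise:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 512
t = np.arange(n)
# Hypothetical signal: slow 64-day cycle + weaker fast 8-day cycle + noise
signal = (np.sin(2 * np.pi * t / 64)
          + 0.5 * np.sin(2 * np.pi * t / 8)
          + 0.3 * rng.normal(size=n))

spectrum = np.abs(np.fft.rfft(signal)) ** 2    # power at each frequency
freqs = np.fft.rfftfreq(n)                     # cycles per sample
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component
period = 1 / dominant                          # dominant cycle length, in days
```

The spectrum correctly identifies the 64-day cycle as dominant, but notice what it cannot tell you: whether that cycle was present in the first half of the sample, the second half, or both. That is the limitation wavelets address next.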
3
Wavelets — Time AND Frequency Simultaneously
Drag the scale slider to zoom in or out. The heatmap shows which time periods carry the most variance — at each frequency band. Click any region for a probability readout.
Interactive wavelet power spectrum · drag scale · click to inspect
Click any region of the heatmap to see the probabilistic vol inference for that time-scale combination.
Continuous Wavelet Transform
W(a, b) = (1/√a) ∫ x(t) · ψ*((t−b)/a) dt — scale a (zoom), location b (time), mother wavelet ψ
Plain English: A microscope with adjustable zoom AND the ability to move along the time axis. Small scale = zoom in on short-term noise. Large scale = zoom out to long-run cycles. Unlike Fourier, you preserve both when and at what frequency the variance occurred.
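A minimal discrete illustration of the same idea, using the Haar wavelet rather than the continuous transform above (a simplifying assumption; the principle of splitting variance by scale while keeping time location is the same). The synthetic series is calm in its first half and volatile in its second:

```python
import numpy as np

def haar_level(x):
    """One Haar step: smooth (pairwise average) and detail (pairwise difference)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_variance_by_scale(x, levels):
    """Energy of detail coefficients at each scale. Each detail coefficient
    keeps its time location, unlike a Fourier coefficient."""
    energies, smooth = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        smooth, detail = haar_level(smooth)
        energies.append(np.sum(detail ** 2))
    return energies

rng = np.random.default_rng(7)
# Hypothetical series: calm first half, volatile second half
x = np.concatenate([rng.normal(0, 0.5, 256), rng.normal(0, 2.0, 256)])
energies = haar_variance_by_scale(x, levels=4)
```

Plotting the level-1 detail coefficients against time would show the variance jump at the midpoint, exactly the "when AND at what frequency" information Fourier discards.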
4
Probabilistic Inference — Not Point Prediction
Toggle the forecast horizon. Watch how the probability distribution over next-period volatility regime shifts based on wavelet structure. This is the honest use of wavelet analysis in finance.
Conditional regime probability distribution from wavelet structure
Honest probabilistic statement from wavelet analysis
p(Vol regimeₜ₊ₕ | wavelet structureₜ) — not: Volₜ₊ₕ = a single number
What we can honestly say: "Given this wavelet structure, the next-period volatility is more likely to remain in the current regime than transition — but with meaningful probability mass on higher regimes." What we cannot say: "Volatility will be 18.4% in 30 days."
5
Graph Wavelet Neural Networks — The Frontier
GWNNs apply wavelet-style filtering not on the time axis, but on a graph of asset relationships. Each node is an asset; edges represent correlation or co-movement. Click any node to see its wavelet-filtered signal propagate.
Graph Wavelet Neural Network — spectral convolution
H⁽ˡ⁺¹⁾ = σ(Ψₛ H⁽ˡ⁾ W⁽ˡ⁾)
Ψₛ = diag(ψ(s·λ₁), …, ψ(s·λₙ)) — graph wavelet operator at scale s
λ₁, …, λₙ = eigenvalues of graph Laplacian L = D − A
A = adjacency (correlation) matrix · D = degree matrix
Plain English: Instead of applying wavelets to a single time series, a GWNN applies them across a network of assets simultaneously. The graph Laplacian encodes which assets are "neighbors" (correlated). Wavelet filters at different scales capture whether volatility shocks propagate locally (one sector) or globally (systemic). This is the current research frontier — used for systemic risk detection and volatility forecasting across portfolios.
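The local-vs-systemic propagation behavior can be shown without any neural network. A sketch of just the graph wavelet operator Ψₛ, using a heat-kernel filter ψ(λ) = e^(−sλ) (one common choice, assumed here) on a hypothetical 4-asset graph: two tight sector pairs joined by one weak cross-sector link:

```python
import numpy as np

def graph_wavelet_operator(A, s):
    """Psi_s = U diag(exp(-s*lambda_i)) U^T on the Laplacian L = D - A.
    Heat-kernel filter: small s keeps shocks local, large s diffuses them."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    lam, U = np.linalg.eigh(L)                   # spectrum of the graph Laplacian
    return U @ np.diag(np.exp(-s * lam)) @ U.T

# Hypothetical 4-asset correlation graph: assets 0-1 one sector, 2-3 another,
# with a weak (0.2) cross-sector link between assets 1 and 2
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.2, 0.0],
              [0.0, 0.2, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])

shock = np.array([1.0, 0.0, 0.0, 0.0])           # vol shock injected at asset 0
local = graph_wavelet_operator(A, s=0.3) @ shock   # small s: stays in-sector
global_ = graph_wavelet_operator(A, s=5.0) @ shock # large s: spreads system-wide
```

At small s the shock barely reaches asset 3; at large s it diffuses toward an even spread across the graph. That is the local-versus-systemic distinction the interactive demo visualizes.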
Interactive asset graph · click a node to propagate a volatility shock · adjust scale
Small s = local shock (stays in sector). Large s = global propagation (systemic).
Click any asset node to inject a volatility shock. Watch how it propagates through the graph at the current wavelet scale.
GWNN — Honest Limitations
GWNNs are powerful tools for understanding cross-asset volatility structure — but they require a stable graph (the correlation matrix must be estimated from data, itself dependent on period choice). The network changes in crises. The model trained on a calm-period graph will misfire in a crisis when correlations spike and the graph structure changes entirely. Like all models in this lab: informative, not prescriptive.
The Wicked Problem in Finance
Every tool in this lab — EWMA, GARCH, Realized Vol, Bayesian SV, jump diffusion, Fourier, wavelets, GWNNs — eventually returns us to the same set of unanswerable questions. These are not failures of mathematics. They are the structure of the problem itself.
Q1: Which model governs returns? (We never know. We choose. That choice is an assumption.)
Q2: What period do we use for in-sample estimation? (Too short: insufficient data. Too long: the regime has changed.)
Q3: Is the current regime indicative of the future? (The only honest answer: sometimes. We cannot know when.)
Q4: Can wavelets or GWNNs go beyond general probabilistic statements? (No. And claiming otherwise is the most dangerous move in finance.)

The Fig approach: we use these tools not as oracles, but as structured lenses. Each model reveals a different facet of the data. No single facet is the truth. Together, they inform judgment — and judgment, not algorithm, is the irreducible core of investment management.
What wavelets & GWNNs genuinely add
Time-frequency decomposition invisible to GARCH
Cross-asset shock propagation structure
Regime probability distributions from past structure
Complement — never replace — Bayesian SV
What wavelets & GWNNs cannot do
Predict the next day's return with reliability
Identify regime shifts in real time
Eliminate the in-sample / out-of-sample gap
Free you from model choice and period selection
Panel 06 · The Intellectual Lineage
The Academic Network
The web of scholarship connecting Nobel laureates, pioneering researchers, and the work that shaped Fig's approach to portfolio science. Click any node to read more.
Filter:
The Fig Thread
Victor Melfa's graduate work at Johns Hopkins Carey Business School — studying under Prof. Frank Fabozzi and Prof. Nicola Fusari — was a direct application of the theory represented in this network to Fig's live portfolio research. The volatility models in this tool, the factor surface methodology, and the Bayesian inference framework all trace to the scholars shown above. The Centre for Financial History at Cambridge provided a further historical lens on how these mathematical tools evolved from practical market necessity. The lineage is not decorative — it runs through every model Fig uses, and every honest caveat Fig applies.
▼ DISCLOSURES & LEGAL NOTICES
Melfa Wealth Management, Inc. dba Factor Investing Group · CRD# 315131 · SEC 801-121821
8 Lyman St. Suite 204, Westborough MA 01581 · (508) 366-6040 · factorig.com
This tool is for educational and research illustration purposes only. It does not constitute investment advice, a recommendation to buy or sell any security, or a guarantee of future results. All simulations use synthetic data generated from academic calibrations and do not represent actual fund or index performance. Volatility models shown are illustrations of academic methodology; no model reliably predicts future volatility or returns. Realized volatility reference data is based on classroom demonstration parameters from Johns Hopkins University, Managing Financial Risk (Prof. Nicola Fusari); it is not drawn from live market feeds. Factor Investing Group is a trade name of Melfa Wealth Management, Inc., an SEC-registered investment adviser. Past performance is not indicative of future results. Consult your investment adviser before making any financial decisions.