Model Governance and Controls Requirements

What separates a model failure that costs your firm a headline from one that costs it everything? Knight Capital had no change-management gate to stop untested code from reaching production and lost $440 million in 45 minutes. JPMorgan's London Whale VaR model divided by a sum instead of an average, independent validation never caught it, and the bank bled $6.2 billion. The 2021 Archegos collapse exposed stress tests that had underestimated exposure by billions, inflicting $10 billion in combined prime broker losses. Every one of these disasters traces directly to governance breakdowns in model development, validation, or change management, not to a lack of model sophistication. What actually works is not building smarter models; it is enforcing the control framework already codified in SR 11-7 and its global equivalents.
TL;DR: Model governance requires three interlocking pillars—development standards, independent validation, and board-level oversight. Failures in any one pillar create the conditions for catastrophic loss. This article covers the regulatory framework, a worked example of model tiering and validation, and a controls checklist for derivatives operations teams.
What Model Governance Actually Means (The Regulatory Foundation)
The Federal Reserve's SR 11-7, issued April 4, 2011, defines a model as any quantitative method that applies statistical, economic, financial, or mathematical theories to process input data into quantitative estimates. That includes valuation models, risk models, and pricing models across your derivatives business. The FDIC adopted SR 11-7 in June 2017 (FIL-22-2017), extending these standards to all FDIC-supervised institutions.
The OCC's companion guidance (Bulletin 2011-12) establishes three pillars of model risk management:
- Model development and implementation — documentation standards, assumptions testing, data quality controls
- Model validation — independent review by parties with no stake in model outcomes
- Governance, policies, and controls — board oversight, model inventory, risk appetite, change management
The point is: these aren't aspirational guidelines. They are supervisory expectations with enforcement teeth. Model risk arises from two sources: fundamental errors in the model itself and incorrect or inappropriate use of model output. Governance must address both.
The Three Pillars in Practice (How Controls Work Day-to-Day)
Pillar 1: Model Inventory and Tiering
SR 11-7 requires institutions to maintain a firm-wide model inventory — a comprehensive registry of every model in use, including model name, owner, business unit, validation status, risk tier, last validation date, and next scheduled review.
Model tiering classifies each model by materiality, complexity, and usage:
| Tier | Risk Level | Validation Frequency | Typical Scope |
|---|---|---|---|
| Tier 1 | High | Annual or more frequent | Front-office pricing, regulatory capital (VaR, SA-CCR) |
| Tier 2 | Medium | Every 18–24 months | Middle-office risk analytics, collateral valuation |
| Tier 3 | Low | Every 24–36 months | Back-office reconciliation, non-material reporting |
Why this matters: tiering determines resource allocation. A Tier 1 model triggering regulatory capital calculations demands annual independent validation with full documentation review. A Tier 3 reconciliation tool can follow a lighter protocol. Getting the tiering wrong means either wasting validation resources on low-risk models or under-scrutinizing high-risk ones.
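The tiering table above can be sketched as an inventory record whose review cycle follows mechanically from its tier. A minimal illustrative sketch, assuming the upper bound of each tier's cycle (12, 24, and 36 months) as the scheduling rule; the field names are not a mandated SR 11-7 schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Upper bound of each tier's validation cycle, in months (from the table above).
TIER_REVIEW_MONTHS = {1: 12, 2: 24, 3: 36}

@dataclass
class ModelRecord:
    name: str
    owner: str
    business_unit: str
    tier: int               # 1 = high risk, 3 = low risk
    last_validated: date

    def next_review_due(self) -> date:
        """Schedule the next validation from the tier's maximum cycle."""
        months = TIER_REVIEW_MONTHS[self.tier]
        return self.last_validated + timedelta(days=30 * months)

    def is_overdue(self, today: date) -> bool:
        return today > self.next_review_due()

# Hypothetical Tier 1 record: an annual review cycle applies.
record = ModelRecord("SA-CCR engine", "J. Doe", "Derivatives", 1, date(2024, 3, 1))
print(record.next_review_due())
print(record.is_overdue(date(2025, 6, 1)))  # Tier 1 review is past due by mid-2025
```

In practice the 30-days-per-month approximation would be replaced by a calendar-aware rule, but the control logic (tier drives frequency, frequency drives the due date) is the point.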
Pillar 2: Independent Validation and Effective Challenge
SR 11-7 identifies effective challenge as the cornerstone of sound validation. Effective challenge means critical analysis by objective, informed parties who can identify model limitations and produce appropriate changes. "Objective" means independent of model development and model use (not the team that built it, not the desk that profits from it).
Validation activities include:
- Conceptual soundness review — are the model's theoretical foundations appropriate for its intended use?
- Outcomes analysis — backtesting model predictions against actual results
- Benchmarking — comparing outputs against alternative models or industry standards
- Champion-challenger testing — periodically comparing the production model (champion) against alternatives (challengers) to confirm the champion remains the best choice
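Champion-challenger testing, the last item above, reduces to comparing out-of-sample error between the production model and each candidate. A minimal sketch with made-up price series, using mean absolute error as an assumed comparison metric (real programs would use validation-approved metrics and far longer samples):

```python
# Illustrative champion-challenger comparison: retain the production model
# only if no challenger beats it on out-of-sample error. All data are made up.

def mean_abs_error(predictions, actuals):
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

actuals    = [100.2, 99.8, 101.5, 100.9]   # realized prices
champion   = [100.0, 100.1, 101.0, 101.2]  # current production model's outputs
challenger = [100.3, 99.9, 101.6, 100.7]   # candidate alternative's outputs

champ_err = mean_abs_error(champion, actuals)
chall_err = mean_abs_error(challenger, actuals)
print("champion remains best:", champ_err <= chall_err)
```

A "False" here would not trigger an automatic swap; it triggers review, since replacing a model is itself a change subject to the controls in Pillar 3.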
Pillar 3: Change Management and Escalation
Model change management requires a formal process for modifications: documentation of changes, impact assessment, re-validation for material changes, and approval from the model risk management function before deployment. This is where Knight Capital's $440 million loss originated — new market-making software was deployed without adequate testing controls or a kill switch, activating obsolete trading code that hemorrhaged money for 45 minutes before anyone could stop it.
The core principle: deployment controls are governance controls. Code deployment without testing, rollback capability, and kill switches is a model governance failure, not just an IT failure.
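The principle can be expressed as a literal gate: refuse deployment unless every governance control is satisfied. A hypothetical Python sketch; the check names and dictionary keys are illustrative, not a real deployment API:

```python
# Minimal pre-deployment gate in the spirit of the change-management controls
# above. Every check that fails is recorded so the rejection is auditable.

def approve_deployment(change: dict) -> tuple[bool, list[str]]:
    """Block deployment unless every governance control is satisfied."""
    failures = []
    if not change.get("tests_passed"):
        failures.append("pre-deployment testing incomplete")
    if not change.get("rollback_plan"):
        failures.append("no rollback procedure")
    if not change.get("kill_switch"):
        failures.append("no kill switch / circuit breaker")
    if change.get("material") and not change.get("revalidated"):
        failures.append("material change lacks re-validation")
    if not change.get("mrm_signoff"):
        failures.append("no model risk management sign-off")
    return (not failures, failures)

ok, reasons = approve_deployment({"tests_passed": True, "rollback_plan": False,
                                  "kill_switch": False, "material": True,
                                  "revalidated": True, "mrm_signoff": True})
print(ok, reasons)  # blocked: rollback and kill-switch controls are missing
```

Note the gate fails closed: a missing key counts as a missing control, which is the conservative default a Knight-style deployment would have needed.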
Regulatory Reporting Requirements That Drive Model Controls
Model outputs feed directly into clearing, margin, and regulatory reporting. When those outputs are wrong, the downstream consequences cascade.
EMIR Reporting (EU and UK)
Under EMIR Refit (effective April 29, 2024 in the EU; September 30, 2024 in the UK), derivatives reporting expanded from 129 to 203 reportable fields. UK firms had a 180-day transition period ending March 31, 2025 for existing transactions. Model outputs feeding clearing and margin calculations must meet explicit data quality standards — garbage-in from a poorly governed model means reporting failures, regulatory scrutiny, and potential fines.
Dodd-Frank (US)
Title VII of the Dodd-Frank Act requires swap reporting to swap data repositories (SDRs). Derivatives clearing organizations must comply with 15 core principles, including risk management standards and clearing member model governance obligations. If your margin model underestimates exposure, the clearing organization bears the shortfall risk (and the regulator wants to know why).
BCBS 239: Data Aggregation Standards
The Basel Committee's BCBS 239 establishes 14 principles across four categories: overarching governance and infrastructure, risk data aggregation capabilities, risk reporting practices, and supervisory review. These apply to all G-SIBs and are recommended for D-SIBs. The point is: model governance doesn't stop at the model itself. The data feeding the model and the reports consuming its output are equally subject to governance requirements.
FRTB Backtesting: A Worked Example of Quantitative Model Controls
The Fundamental Review of the Trading Book (FRTB) provides the clearest example of how model governance translates into concrete, measurable requirements. EU implementation is set for January 1, 2026; UK implementation follows on January 1, 2027.
The Backtesting Requirement
Under FRTB, each trading desk seeking internal models approach (IMA) approval must conduct daily backtesting comparing 1-day VaR predictions at the 97.5th and 99th percentiles against actual P&L, using the most recent 250 business days of data.
The results fall into a traffic-light classification:
| Zone | Exceptions (99th percentile VaR, 250 days) | Consequence |
|---|---|---|
| Green | 0–4 exceptions | Desk qualifies for IMA; no capital surcharge |
| Amber | 5–9 exceptions | Capital multiplier surcharge: 3.0 + 0.2 per exception above 4 |
| Red | 10+ exceptions | Desk must revert to standardized approach for capital |
Worked Example: Desk-Level Capital Impact
Assume a derivatives trading desk runs its 250-day backtest and records 7 exceptions at the 99th percentile VaR level. Here is the governance and capital impact:
Step 1: Zone classification. 7 exceptions falls in the amber zone (5–9 exceptions).
Step 2: Capital multiplier calculation.
- Base multiplier: 3.0
- Additional surcharge: 7 − 4 = 3 exceptions above threshold × 0.2 = 0.6
- Total multiplier: 3.0 + 0.6 = 3.6
Step 3: Capital impact. If the desk's base VaR-based capital requirement is $50 million:
- Capital charge = $50 million × 3.6 = $180 million
- Compare to green zone: $50 million × 3.0 = $150 million
- Incremental capital cost of model underperformance: $30 million
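Steps 1 through 3 can be verified in a few lines. This sketch implements the zone boundaries and the 3.0 + 0.2-per-exception schedule exactly as stated above (in the red zone the multiplier is moot, since the desk reverts to the standardized approach):

```python
# Reproduces the worked example: zone classification, capital multiplier,
# and capital impact for a desk with 7 backtesting exceptions.

def classify_zone(exceptions: int) -> str:
    if exceptions <= 4:
        return "green"
    if exceptions <= 9:
        return "amber"
    return "red"  # desk loses IMA approval; multiplier no longer applies

def capital_multiplier(exceptions: int) -> float:
    surcharge = 0.2 * max(0, min(exceptions, 9) - 4)
    return 3.0 + surcharge

exceptions = 7
base_capital = 50_000_000          # base VaR-based capital requirement, USD

zone = classify_zone(exceptions)              # amber
mult = capital_multiplier(exceptions)         # 3.0 + 0.6
charge = base_capital * mult                  # ~$180 million
incremental = charge - base_capital * 3.0     # ~$30 million vs. green zone
print(zone, mult, charge, incremental)
```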
Step 4: Governance escalation. The amber result triggers:
- Root cause analysis of the 7 exceptions (required documentation)
- Model risk committee review within the next reporting cycle
- Assessment of whether model modifications are needed
- Potential re-validation if systematic bias is identified
Step 5: P&L Attribution Test. Separately, a quarterly P&L attribution test (using the most recent 250 business days) compares front-office pricing model outputs against risk model outputs using Spearman correlation and Kolmogorov-Smirnov metrics. Results follow the same traffic-light system — a red result forces the desk to standardized capital treatment.
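The two attribution metrics can be illustrated on toy data. A pure-Python sketch comparing a hypothetical front-office P&L series (HPL) against a risk-theoretical P&L series (RTPL); tie handling in the rank computation and the regulatory pass/fail thresholds are deliberately omitted, and both series are made up:

```python
# Spearman rank correlation (alignment of the two P&L series) and the
# two-sample Kolmogorov-Smirnov statistic (distance between distributions).

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)          # ties ignored in this sketch
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def ks_statistic(xs, ys):
    xs, ys = sorted(xs), sorted(ys)
    grid = sorted(set(xs + ys))
    ecdf = lambda sample, t: sum(1 for v in sample if v <= t) / len(sample)
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in grid)

hpl  = [0.5, -1.2, 0.8, 2.1, -0.3, 1.4, -0.9, 0.1]   # front-office P&L (toy)
rtpl = [0.6, -1.0, 0.7, 2.0, -0.4, 1.5, -0.8, 0.2]   # risk-theoretical P&L (toy)

print(round(spearman(hpl, rtpl), 3))   # near 1.0 when the models agree on ranks
print(ks_statistic(hpl, rtpl))         # small when the distributions are close
```

A high Spearman value with a low KS statistic is the pattern an aligned desk should show; a production implementation would run this over the full 250-day window against the applicable thresholds.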
The practical point: backtesting isn't optional analysis — it's a capital allocation mechanism. Three extra exceptions cost this desk $30 million in additional capital. That makes model accuracy a P&L issue, not just a compliance issue.
Case Studies: What Happens When Governance Fails
JPMorgan London Whale (2012)
The Chief Investment Office's Synthetic Credit Portfolio used a VaR model with a calculation error — an Excel spreadsheet divided by a sum instead of an average, cutting reported risk roughly in half. The new model was approved in January 2012 without proper independent validation. Result: $6.2 billion in losses and $920 million in combined fines to US and UK regulators.
The governance failures: no independent validation of the new model, no effective challenge from risk management, and no escalation when traders began manipulating the marks.
Archegos Capital (2021)
Archegos built $36 billion in total exposure through total return swaps across multiple prime brokers. Stress tests and potential future exposure models underestimated losses by billions. Credit Suisse alone lost $5.5 billion and raised $1.9 billion in emergency capital. The PRA fined Credit Suisse £87 million in July 2023 for risk management and governance failures.
The governance failures: counterparty exposure models failed to aggregate cross-product positions, stress testing scenarios were inadequate, and model limitations were not escalated despite warning signals.
Knight Capital (2012)
Knight Capital deployed new market-making software without adequate testing controls or a kill switch. Obsolete trading code activated, generating $440 million in losses in approximately 45 minutes. The firm required a $400 million emergency recapitalization and was subsequently acquired. The SEC fined Knight Capital $12 million.
The governance failure: model change management — no pre-deployment testing protocol, no rollback procedure, no automated circuit breaker.
Model Risk Appetite and Board Oversight
SR 11-7 expects a board-approved model risk appetite statement defining the level and types of model risk the institution is willing to accept. This statement informs model tiering, validation frequency, and exception escalation thresholds.
The test: if your board cannot articulate what level of model risk is acceptable and what triggers escalation, you don't have a governance framework — you have documentation.
Controls Checklist for Derivatives Model Governance
Essential (High ROI — Prevents 80% of Governance Failures)
- Maintain a complete model inventory with owner, business unit, risk tier, validation status, and next review date for every model
- Enforce independent validation — validation teams must be separate from development and front-office usage, with authority to require changes
- Implement model change management — no production deployment without documented impact assessment, testing, re-validation (if material), and sign-off
- Run backtesting on schedule — daily for VaR models, quarterly for P&L attribution, with automated exception flagging
High-Impact (Workflow and Automation)
- Automate model inventory updates — changes in model status, validation results, and exception counts should flow to a central dashboard
- Establish escalation thresholds tied to the board-approved risk appetite — amber/red backtesting results trigger defined review and remediation timelines
- Conduct champion-challenger reviews annually for Tier 1 models to confirm the production model remains the best available option
- Map model outputs to regulatory reports — trace which models feed EMIR, Dodd-Frank, and capital calculations so reporting failures can be traced to model issues
Optional (Valuable for Complex Derivatives Businesses)
- Implement kill switches and rollback procedures for all automated trading and pricing model deployments
- Cross-validate counterparty exposure models against actual margin calls to identify systematic underestimation
- Benchmark internal models against vendor alternatives quarterly for Tier 1 valuations
Your Next Step: Build (or Audit) Your Model Inventory
If you don't have a model inventory, start one today. If you have one, audit it against this minimum field set:
- Model name and version — what is it called, what version is in production?
- Model owner and business unit — who is accountable?
- Risk tier (1, 2, or 3) — based on materiality, complexity, and downstream usage
- Last validation date and next scheduled validation — is it current?
- Validation outcome — green, amber, or red? Any open findings?
- Downstream dependencies — which regulatory reports, margin calculations, or capital charges consume this model's output?
Cross-reference your inventory against your regulatory reporting obligations (EMIR's 203 fields, Dodd-Frank SDR submissions, FRTB capital calculations). Any model feeding a regulatory output without a current validation is an open risk item — escalate it.
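That cross-reference can be automated as a simple filter over the inventory. A hypothetical sketch; the record fields and sample data are illustrative, not a prescribed schema:

```python
from datetime import date

# Flag any model that feeds a regulatory output but whose validation has
# lapsed. Inventory records here are made up for illustration.
inventory = [
    {"name": "IM model",   "feeds": ["EMIR margin"],  "validated_until": date(2025, 1, 31)},
    {"name": "VaR engine", "feeds": ["FRTB capital"], "validated_until": date(2026, 6, 30)},
    {"name": "Recon tool", "feeds": [],               "validated_until": date(2024, 5, 1)},
]

def open_risk_items(models, today):
    """Models feeding regulatory outputs whose validation has lapsed."""
    return [m["name"] for m in models
            if m["feeds"] and m["validated_until"] < today]

print(open_risk_items(inventory, date(2025, 6, 1)))  # flags the lapsed IM model
```

Note that the reconciliation tool, though overdue, is not flagged: it feeds no regulatory output, so it is a scheduling issue rather than an open regulatory risk item.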
For related regulatory context, see "Regulation Best Interest and Derivative Sales" and "Accounting Standards ASC 815 Overview".