Certificate in Quantitative Finance.

Note that workshops will be held to discuss the project topics in detail.

This document outlines each available topic together with submission requirements for the project report. Step-by-step instructions offer a structure, not limitations to what you can implement. The brief should be used together with preparatory reading and review of the relevant CQF lectures.

Exclusively for the use by current CQF delegates. No distribution.


1 Instructions

Assessment for Module Six is carried out by means of a programming project. It is designed to give an opportunity for further study of the numerical methods required to implement and validate a quantitative model. The topics rely on material up to and including Module Five as well as Workshops. Print out and focus on the section that covers your chosen topic.

The following four topics are available for you to choose from:

1. Pricing Basket Credit Default Swap (sampling by copula)

2. Factors in Interest Rate Models (forward curve data)

3. LIBOR Market Model with OIS Discounting (interest rate volatility)

4. Portfolio Construction using Black-Litterman Model (time series)

Each project must include a mandatory element:

CVA Calculation for Interest Rate Swap

To complete the project, you must implement one of the listed topics plus the CVA calculation element. Submit working code together with a well-written report and an originality declaration.

1.1 Submission date is Monday, 13 July 2015

There is no extension to the Final Project. Late submissions will be deferred to the

next cohort by default.

The report must be soft-bound in thesis style. Normal length is 15-30 pages, excluding code. The software must be provided on a flash drive securely attached, or a DVD in a plastic pocket attached to the inside back cover.

One copy must be posted to Fitch Learning, 4 Chiswell Street, London EC1Y 4UP, United

Kingdom. Fitch Learning will keep the submitted reports. E-mailed submissions will not

be accepted.

The report must be postmarked or delivered by hand on the submission date at the latest.

Projects without declaration or code are incomplete and will be returned.

All projects are checked for originality. The program reserves an option to invite a delegate for a viva voce before the CQF qualification can be awarded.


1.2 Project Code

Traditionally, project code is written in VBA or C++. Depending on the topic chosen,

delegates also use Python with libraries, MATLAB with toolboxes or R’s integration with

Wolfram’s Mathematica. Java and C# with .NET Framework are also acceptable.

If you are using other coding environments please consult with a CQF Tutor in advance.

Use of Excel spreadsheets or Mathematica is usually limited to data processing, visualisation, and validation of interim and final results.

The aim of the project is to enable you to code numerical methods and develop model prototypes in a production environment. Therefore, it is recommended that you minimise the use of Excel application functions, which are unstable and computationally inefficient.

It is desirable that you re-code numerical methods that are central to your model. However,

a balanced use of libraries is allowed. When to re-use ready code is ultimately your decision.

When using ready functions, consider their limitations in the report.

Note: submission of a project that is solely based on unchanged code that was provided

in CQF lectures and textbooks would be a serious drawback.

The code must be thoroughly tested and well-documented: each function must be described, and comments must be used. Provide instructions on how to run the code.

It is up to delegates to develop their own test cases, sanity checks, and validation. It is normal to observe irregularities when the model is implemented on real-life data. If in doubt, reflect on this in your project report.

1.3 Project Report

This is a technical report whose main purpose is to facilitate access to the implementation of numerical methods (the code) and the pricing results.

The report must contain a sufficient description of the mathematical model, the numerical methods, and their properties. In-depth study is welcome, but the report should remain relevant.

Include notes on algorithms and code in your report, noting which numerical methods you

have re-coded.

Please give due attention and space for presentation and discussion of your pricing results.

Use charts, test cases and comparison to research results where available.

Mathematical sections of the report can be prepared using LaTeX, Microsoft Equation

Editor (Word) or Mathematica.

The next sections outline requirements and implementation steps. A separate Final Project

Q&A document discusses the most frequent issues encountered.


Fair Spread for Basket CDS

Summary

Price the fair spread for a Basket CDS with at least 5 reference names using the sampling-from-copula algorithm. Input of an appropriate default correlation, preferably estimated from historical CDS data, is required. The fair spread is calculated as an expectation over the joint distribution of default times. In order to convert simulated uniform random variables into default times, you will need their marginal distributions, defined empirically by hazard rates. The hazard rates are bootstrapped from traded credit spreads (CDS) under the assumption of being piecewise constant.
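The piecewise-constant hazard rate bootstrap mentioned above can be sketched with the "credit triangle" approximation λ ≈ s/(1 − R). This is a deliberate simplification (a full bootstrap equates the discounted premium and protection legs); all spread values below are invented for illustration.

```python
# Sketch: piecewise-constant hazard rates from CDS spreads via the
# "credit triangle" approximation: cumulative hazard to T is ~ s * T / (1 - R).
# A full bootstrap would equate discounted premium and protection legs;
# the spread numbers below are invented for illustration only.

def bootstrap_hazards(tenors, spreads, recovery=0.4):
    """Return interval hazard rates lambda_1..lambda_n (piecewise constant)."""
    hazards = []
    prev_t, prev_cum = 0.0, 0.0
    for t, s in zip(tenors, spreads):
        cum = s * t / (1.0 - recovery)            # cumulative hazard to t
        hazards.append((cum - prev_cum) / (t - prev_t))
        prev_t, prev_cum = t, cum
    return hazards

# Example: spreads quoted in decimal (100bp = 0.010)
tenors = [1.0, 2.0, 3.0, 4.0, 5.0]
spreads = [0.010, 0.011, 0.012, 0.013, 0.014]
lambdas = bootstrap_hazards(tenors, spreads)
```

The resulting survival probability Q(t) = exp(−Σ λ_i Δt_i) supplies the marginal distributions needed to convert simulated uniforms into default times.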

A successful project will implement sampling from both Gaussian and Student's t copulae, pricing for 1st-to-default through 5th-to-default basket swaps, as well as elaborate sensitivity analysis for the prices of these contracts. The key numerical aspect of this topic is the use of low-discrepancy sequences (e.g., Halton, Sobol).

Data Requirements

Two separate datasets are required, together with matching discounting curve data for each (which can be as approximate as necessary).

1. A snapshot of credit curves (on a particular day) is required; each reference name (debt issuer) will have its own CDS curve from which a term structure of hazard rates is bootstrapped and utilised to convert simulated correlated u_i into exact default times, u_i → τ_i. If you do not have access to financial data providers, CDS values for 1Y, 2Y, …, 5Y can be visually stripped from plots offered by financial media (eg, the CNBC website).

2. Historical credit spreads data, usually taken at the most liquid tenor 5Y. The five columns of that sample data give the calculation input for a 5×5 default correlation matrix. Corporate credit spreads are not in open access: if interested in that sampling, please obtain them from Bloomberg or Reuters terminals via your firm or a colleague.

For sovereign credit spreads, time series of ready-bootstrapped default probabilities (5Y tenor) can be downloaded from DB Research. Click on the Excel icon to download the file, rename the extension to .xls, open it and check for missing values in rows.

http://www.dbresearch.com/servlet/reweb2.ReWEB?rwnode=DBR_INTERNET_EN-PROD\$EM\&rwobj=

CDS.calias\&rwsite=DBR_INTERNET_EN-PROD

Even if Sovereign CDS5Y and PD5Y series are available with daily frequency, co-movement in daily changes might reflect market noise more than correlation of default events, which are relatively rare to observe. Weekly changes give more appropriate input for estimation of such a fundamental quantity as default correlation; however, that would entail using 2-3 years of historical data, given that we need at least 100 data points to estimate correlation.


As an alternative to credit spreads data, the correlation matrix can be estimated from historical returns (on debt or equity).

Step-by-Step Instructions

1. For each reference name, bootstrap implied default probabilities from quoted CDS and convert them to a term structure of hazard rates, Exp(λ̂_1Y, …, λ̂_5Y).

2. Estimate the default correlation matrix (different for Gaussian and t) and the d.f. parameter (ie, calibrate the copulae). You will need to implement pricing by Gaussian and t copulae separately.

3. Using the sampling-from-copula algorithm, repeat the following routine (simulation):

(a) Generate a vector of correlated uniform random variables.

(b) For each reference name, use its term structure of hazard rates to calculate exact

time of default (or use semi-annual accrual).

(c) Calculate the discounted values of premium and default legs for every instrument

from 1st to 5th-to-default. Conduct MC separately or use one big simulated dataset.

4. Average premium and default legs across simulations separately. Calculate the fair spread.
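The steps above can be sketched as follows for a Gaussian copula, with flat hazard rates per name, flat-rate discounting, and a continuous premium annuity. This is a simplified sketch, not the full required implementation (no Student's t copula, no low-discrepancy sequences, no accrual convention); all parameter values in the usage example are invented.

```python
# Sketch of steps 3-4: Gaussian-copula Monte Carlo for a kth-to-default
# basket CDS. Simplifications: flat hazard per name, flat short rate r,
# continuous premium accrual on the whole notional until the kth default.
import numpy as np
from scipy.stats import norm

def kth_to_default_spread(k, lambdas, corr, T=5.0, r=0.02, rec=0.4,
                          n_sims=20000, seed=42):
    """Fair spread of a kth-to-default basket CDS by Gaussian-copula MC."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_sims, len(lambdas))) @ L.T   # correlated normals
    u = norm.cdf(z)                                          # correlated uniforms
    tau = -np.log(1.0 - u) / np.array(lambdas)               # exponential marginals
    tau_k = np.sort(tau, axis=1)[:, k - 1]                   # kth default time
    hit = tau_k <= T
    t_stop = np.minimum(tau_k, T)
    annuity = (1.0 - np.exp(-r * t_stop)) / r                # int_0^{t_stop} e^{-rt} dt
    protection = (1.0 - rec) * np.exp(-r * tau_k) * hit
    return protection.mean() / annuity.mean()                # fair spread
```

Averaging the protection and premium (annuity) legs separately across simulations and taking their ratio gives the fair spread, as in step 4.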

Model Validation

The fair spread for the kth-to-default Basket CDS should be less than that of the (k−1)th-to-default. Why?

Project Report on this topic should have a section on Risk and Sensitivity Analysis

of the fair spread w.r.t.

1. default correlation among reference names: either stress-test by constant high/low

correlation or percentage change in correlation from the actual estimated levels.

2. credit quality of each individual name (change in credit spread, credit delta) as well

as recovery rate.

Make sure you discuss and compare sensitivities for all five instruments.

Ensure that you explain historical sampling of the default correlation matrix and the copula fit (ie, uniformity of pseudo-samples), that is, the Correlations Experiment and the Distribution Fitting Experiment (described at the Module Five Workshop). Use histograms.

Convergence and robustness of spread pricing depend on correct copula fitting.

Introduction to Copula

MathWorks has tutorials on copula fitting and applications; I can recommend one from their older CD (dated 2005). Please search for Market Risk using GARCH, EVT and Copula.


Resources

The starting source on the sampling-from-copula algorithm is the Monte Carlo Methods in Finance textbook by Peter Jäckel (2002). Please see Chapter 5, particularly pages 46-50.

Most likely, you will need to revisit the CDO Lecture material, particularly slides 48-52, which illustrate elliptical copula densities and discuss factorisation.

Rank correlation coefficients are introduced in the Correlation Sensitivity and State Dependence lecture and in Jäckel (2002) as above. Any good statistical textbook can serve as a reference.

Bootstrapping of survival probabilities is covered in Credit Default Swaps Lecture.


Interest Rate Derivatives and Discounting

Summary

Focus on factorisation of the yield curve and the numerical techniques of Principal Component Analysis. Identify and attribute the factors that determine the evolution of the forward curve. Choose a sampling and differencing approach that contributes to the robustness of the covariance matrix of interest rate changes (differences). The sampling and factorisation efforts will ensure robust calibration of volatility functions and acceptable results from the HJM SDE simulations.

A successful project will provide a. calibration of volatility functions from your original dataset and b. convergent pricing of forward-starting caplets (floorlets) and caps, supported by charts and analysis of Monte-Carlo efficiency. Option pricing examples must offer a view across a range of strikes and tenors, as well as sensitivity analysis w.r.t. market risk factors. The report on this topic should concentrate on numerical techniques rather than generic SDE derivations.

Discounting is applied to maturing payoffs of IR derivatives. Discounting is intimately related to curve construction because cashflows mature at various tenors (becoming arbitrary dates in calendar time T). When pricing LIBOR-linked derivatives, there are two main approaches as to where to take the discount factor from:

a. LIBOR rates, which have the advantage of pricing under the same risk-neutral measure. The Bank of England provides the Bank Liability Curve constructed from LIBOR-linked instruments.

b. OIS spot curve taken in addition to the BLC curve, a practice referred to as dual-curve pricing.

c.,d. Swap curve or ZCB yield from bonds priced by Monte-Carlo using the evolution of r(t)

from one-factor stochastic interest rate models (eg, Hull-White, Vasicek).

There is an argument that discounting must reflect the cost of funding (which is different for each market participant!). LIBOR is a better estimate of the dealer's cost of funding, but collateralised transactions earn the OIS rate. Recall, however, that our SDE models rely on the risk-free rate, the same for all investors, which the OIS rate currently represents. 1

An Overnight Indexed Swap has its floating leg indexed to an overnight interest rate (eg, Fed Funds, SONIA, EONIA). OIS discounting is compatible with the rolling risk-neutral numeraire, where instead of a money market account we use the rate provided by a central bank on a cash-collateralised account. Forward-looking Overnight Indexed Swaps are available for various tenors, not just overnight. Compared to OIS, there is another common kind of spot curve, one built from government bonds (GLC) or interest rate swaps (swap curve).2

1 "The evaluation of an investment should depend on the risk of the investment and not on the way it is funded." See LIBOR vs. OIS: The Derivatives Discounting Dilemma

http://www-2.rotman.utoronto.ca/~hull/downloadablepublications/LIBORvsOIS.pdf

2 Pre-2010, LCH.Clearnet used discount factors from the swap curve.


Forward Rates and Interpolation. It is useful to look up how instantaneous forward rates are explained in the BoE Methodology paper. Forward rates are bootstrapped from spot rates. In practice, government bonds pay semi-annual coupons and have calendar time to expiry in days (not tenors); therefore, it is necessary to strip spot rates from the dirty bond prices.

Please do check the "Yield Curve.xlsm" file for explanatory examples of bootstrapping.

CQF Extra: Valuation framework for interest rate derivatives in today's LIBOR world (2011) by Wojciech Slusarski provides Excel example implementations of the Interpolation Methods for Curve Construction paper by Pat Hagan and Graeme West.

The Bloomberg Curves Toolkit focuses on stripping data and interpolating the curves. A tutorial is available via the platform's Excel Add-in XCTK, with links to White Papers on methodology.

Matlab resources and cases are at http://www.mathworks.co.uk/discovery/yield-curve.html

Sampling for PCA (volatility functions). The covariance matrix is estimated on historical data of changes in forward rates of constant maturity. There is no single `correct' way to go about sampling. The aims are identifying regimes in interest rates and obtaining a robust covariance matrix. It is volatility regimes that are of interest, not simply high rates/low rates. Rate differences are an iid process: we are essentially analysing shocks to interest rates, which is why a decomposed covariance matrix gives volatility functions.

a. A period of 2-3 years is a common choice. A larger sample is possible; however, using very long periods (>5Y) leads to mixing of regimes and difficulty in attribution of principal components (factors).

Given the low-rates regime and lack of liquidity, you will need to remove the rows that have zero rates or missing data for certain tenors.

b. Rates are reported daily, so calculating daily differences is the natural choice. However, using weekly changes is possible and can be advantageous if rates do not move much. PCA also requires subtracting a mean, but differences are likely to be centered around zero.

c. To improve robustness of estimation (smooth variance, flesh out the trend), log-differences can be used instead of simple differences. That would require applying the exponent to volatility functions before using them in the HJM SDE.
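The sampling choices above feed straight into the PCA step; a minimal numpy sketch is below. It assumes daily data (hence the 252 annualisation factor) and simple differences; the data in the test is synthetic, while in practice the input would be BoE forward rates.

```python
# Sketch: PCA of daily forward-rate differences to obtain volatility functions.
# Assumes rows = days, columns = tenors, daily frequency (252 days/year).
import numpy as np

def pca_vol_functions(rates, n_factors=3, periods_per_year=252):
    """Return annualised volatility functions (one column per retained factor)
    and the fraction of total variance each factor explains."""
    diffs = np.diff(rates, axis=0)               # daily changes, ~iid shocks
    cov = np.cov(diffs, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(eigval)[::-1]             # re-sort descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    # scale eigenvectors by sqrt(eigenvalue), annualise by sqrt(252)
    vols = eigvec[:, :n_factors] * np.sqrt(eigval[:n_factors] * periods_per_year)
    explained = eigval[:n_factors] / eigval.sum()
    return vols, explained
```

The explained-variance ratios are the natural diagnostic for how many factors (level, slope, curvature) to retain.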


Data Sources (forward rates)

The Pound Sterling Bank Liability Curve (BLC) is built from LIBOR-linked instruments (LIBOR fixes up to 18M, then FRAs). Forward and spot rate data is available; for PCA analysis, start with the full curve up to 25Y and consider including the short-end data.

http://www.bankofengland.co.uk/statistics/Pages/yieldcurve/archive.aspx

NEW: The Bank of England provides OIS spot rates, to be used to calculate discount factors when pricing caps and floors.

http://www.bankofengland.co.uk/statistics/Documents/yieldcurve/ukois09_mdaily.xls

ECB provides Government Liability instantaneous forward rates, known as Euro area curve

http://www.ecb.europa.eu/stats/money/yc/html/index.en.html

The US Federal Reserve does not provide a forward curve. The FRB H.15 release provides historic data for spot rates at daily and weekly frequency. There are two instruments: Treasuries and Interest Rate Swaps; each instrument gives its own curve.

http://www.federalreserve.gov/releases/h15/data.htm

If one were to bootstrap the USD forward curve from those rates3, that would be a curve construction project on its own, requiring interpolation and smoothing choices. Put simply, bootstrapping a δ = 0.5 forward curve from 1Y, 2Y, 3Y, 5Y, 7Y, 10Y, 20Y and 30Y tenors does not appear reasonable.

Summary. Monte-Carlo simulation within the HJM framework relies on consistent forward curve representation by the PCA factors obtained from historical data of instantaneous forward rates. Caplet pricing is done under the risk-neutral measure. Conversion of a caplet price (cashflow) into implied volatility can be checked with a calculator, such as Bloomberg SWPM. Caplet volatility is the output, suitable for risk management.

3 A conceptual problem with USD spot rates and the ECB forward curve is that they reflect Government Liability and are more suitable to build discounting curves (asset appreciation in the risk-neutral world) rather than curves that give the expectation of forward rates, i.e., what LIBOR will be in the future. Note that the BLC provided by the Bank of England does reflect LIBOR-linked instruments, and most IR derivatives are linked to LIBOR. Further comparison: the ECB uses AA-rated government bonds to build a forward curve, while bonds of AA-rated financial institutions provide input for bootstrapping of the (forward) LIBOR curve.


Step-by-Step Instructions – HJM Model (Yield Curve Data)

Part I: Data and Volatility Estimation

1. Obtain time series data on evolution of instantaneous forward rates.

Include both short and long ends of the yield curve.

Even if you model a market other than sterling, check out data format on the Bank

of England website (Yield Curve Data section).

2. Convert interest rate data into differences and calculate the covariance matrix.

This is `a quest for invariance', as we effectively analyse the covariance matrix of innovations/daily shocks in interest rates.

Part II: Calibration (Factorisation)

3. Conduct Principal Component Analysis on the covariance matrix. Use the sum of eigenvalues to evaluate the contribution of components (factors).

4. Scale principal components with √λ_i and apply a curve-fitting technique to each volatility function. An example of cubic spline (polynomial fitting) using regression is provided, but other techniques can be sourced.

Avoid using more factors (eigenvectors) than you deem necessary. Another kind of over-fitting would be the use of higher-order polynomial fits O(τ^4), O(τ^5), etc. Remember that PCA is data-driven, so the functional forms of the PC_i are not guaranteed.

5. The HJM SDE's drift calculation is done via numerical integration over the fitted volatility functions. The approach allows flexibility over analytical integration of changing fits.
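Step 5 can be sketched as follows, using the standard no-arbitrage HJM drift m(τ) = Σ_k σ_k(τ) ∫_0^τ σ_k(s) ds with the integral done by the trapezium rule on the tenor grid. This is a minimal sketch; the volatility functions would come from your own PCA fits.

```python
# Sketch: HJM risk-neutral drift m(tau) = sum_k sigma_k(tau) * int_0^tau sigma_k(s) ds,
# computed numerically (cumulative trapezium rule) on a tenor grid starting at 0.
import numpy as np

def hjm_drift(vol_funcs, taus):
    """vol_funcs: one callable per PCA factor, mapping a tenor array to vols.
    taus: increasing tenor grid starting at 0. Returns the drift on that grid."""
    drift = np.zeros_like(taus, dtype=float)
    for sigma in vol_funcs:
        vals = sigma(taus)
        # cumulative trapezoidal integral of sigma from 0 to each tau
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (vals[1:] + vals[:-1]) * np.diff(taus))))
        drift += vals * integral
    return drift
```

For a single constant volatility σ the drift reduces to σ²τ, which gives a quick sanity check on the integration.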

Part III: Pricing by Monte-Carlo

4. Calibration and fitting of volatility functions (with numerical integration) enables running HJM SDE simulations and pricing derivatives by Monte-Carlo.

For each round of simulation, generate a vector of Normal random variables for each

factor that will give simulated movement (independent of other factors).

Price convergence might require from 2,000 to 10,000 simulations, but no more. Use of low-discrepancy quasi-random numbers will be advantageous.

5. After each simulation, calculate the average price of caps/floors with parameters of your choice (ie, tenor, strikes and maturities).

That running average is the quantity that must exhibit convergence (usual problems

are too much rounding, data types and their conversion).

Cap/floor pricing examples must offer a view on volatility across a range of strikes and maturities. Convert the cap cashflow into implied volatility using the Black formula.


6. The Results Analysis and Discussion section of the report can cover, but is not limited to:

Caplet price sensitivity to market risk factors (quarterly vs. semi-annual expiries, cap maturity, choice of discounting factor from the model/spot curve/OIS). Testing for the impact of `bucket risks', ie, bumping rates at particular tenors and then re-simulating and re-pricing, would be advanced analytics.

Implementation of OIS discounting (data now available from the Bank of England).

Comparison of prices obtained using simple vs. convexity-preserving interpolation.

Compared to the Excel demonstrations provided with the HJM Lecture, the implementation can be improved, particularly by employing numerical techniques.
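The brief asks for cap cashflows to be converted into implied volatility via the Black formula; a sketch of the Black (1976) caplet price and its inversion by root search is below. The parameter values in the test are invented.

```python
# Sketch: Black (1976) caplet price and implied-volatility inversion.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def black_caplet(F, K, T, sigma, df, delta=0.5, notional=1.0):
    """Black-76 caplet: df discounts the payoff paid at T + delta,
    F is the forward LIBOR fixing at T, K the strike, sigma the Black vol."""
    d1 = (np.log(F / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return notional * delta * df * (F * norm.cdf(d1) - K * norm.cdf(d2))

def implied_caplet_vol(price, F, K, T, df, delta=0.5, notional=1.0):
    """Invert the Black formula for implied volatility by bracketed root search."""
    return brentq(lambda s: black_caplet(F, K, T, s, df, delta, notional) - price,
                  1e-6, 5.0)
```

A useful self-test is the round trip: price at a known volatility, invert, and recover the same volatility.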

Resources

Start with a methodology paper from the Bank of England, Inferring market interest rate expectations, and The yield curve, and spot and forward interest rates by Moorad Choudhry (2008), both provided with the HJM Lecture.

The next step is to review Methods for Constructing a Yield Curve by Pat Hagan and Graeme West; best to start with the version published in WILMOTT (May-June 2008).

The LIBOR Market Model in Practice, a specialised textbook by Gatarek et al. (2006), gives technical detail on calibration from caplets and swaptions (Chapters 7 and 9 respectively) that will be useful to those working with LIBOR derivatives. (Please email the tutor.)


Advanced IR Derivatives – LIBOR Market Model

Summary

This advanced version of the Interest Rate Derivatives topic is for delegates with experience in interest rates who would like a challenge. The topic does require access to caplet and/or swaption data, usually maintained by trading desks and interdealer brokers.

The interim outcome is bootstrapped 3M caplet volatility together with fitted volatility functions.4 While HJM calibration means fitting volatility from historic data of interest rate changes, LMM utilises market data of implied volatility and is therefore deemed to be forward-looking. Both models rest on the no-arbitrage argument: the drift cannot be random and depends on the volatility of forward rates.

There is overlap between this advanced topic and the Interest Rate Derivatives topic, particularly on the matters of discounting, caplet pricing and the Black formula. Therefore, please review the Brief and Q&A documents for that topic.

The final outcomes are a. analysis and discussion of caplet/floorlet pricing and sensitivity to market risks and b. discussion of interest rate swaps and pricing of vanilla and Bermudan swaptions.

Data Sources

(LMM) Caplet or swaption data is usually maintained by trading desks and interdealer

brokers. Data for certain markets is available from Thomson Reuters and Bloomberg.

Here, research papers and textbooks can be used as data sources (cap volatility).

LMM is a discretised market model that requires 3M caplet volatility data on input (we usually need to bootstrap that from traded caps 1Y, 2Y, etc.), to which volatility functions are fitted. LMM can also be calibrated to swaption volatility data, which is achieved by optimisation and called the Rebonato method.

4 The brief provides guidance on the LIBOR Market Model; however, the arbitrage-free SABR offers explicit analytical solutions for volatility fitting.


Step-by-Step Instructions – LIBOR Market Model (Rate Options Data)

Part I: Data

1. You will need market price data for cap and floor options (simply referred to as `caps'). The data can be two columns (series) of cap prices and corresponding discount factors. Caps are quoted in terms of implied volatility σ_cap (notation is σ_i^cap √T_i, or σ_i for a caplet).

(a) The Black formula is the conventional means of converting the cashflow of a cap (caplet) into an implied volatility figure.

2. The second set of data to which model tting can be done is swaptions, for which the

deliverable asset is an interest rate swap.

Part II: Calibration (Volatility Stripping)

3. Strip caplet volatilities σ_i^cap √T_i (or σ_i), as LMM requires finer discretisation than cap prices traded in one-year increments. The algorithm involves calibration of strikes as forward swap rates S(t, T_{i-1}, T_i).

4. Equally, volatilities can be calibrated from vanilla swaptions, as they are options and give another source of implied volatility σ_i (the Rebonato method makes Black (1976) suitable).

5. Fit the abcd instantaneous volatility function σ(·), defined for each tenor as σ_i(t). Coefficients abcd are estimated by optimisation, which can be joint w.r.t. caplet implied volatilities σ_i^cap and swaption implied volatilities σ_i. The goal is to minimise the squared differences between the two implied volatilities (for the same tenor, enumerated i).

(a) The correlation structure ρ_ij is defined parametrically.
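The abcd parameterisation σ_i(t) = (a + b(T_i − t)) e^{−c(T_i − t)} + d and a least-squares fit of its coefficients to stripped caplet vols can be sketched as below. The term caplet vol is obtained by numerical integration of σ_i(t)²; the market vols in the test are invented.

```python
# Sketch: abcd instantaneous volatility and a least-squares fit to caplet vols.
import numpy as np
from scipy.optimize import minimize

def abcd_vol(t, T_i, a, b, c, d):
    """abcd instantaneous volatility of the forward rate maturing at T_i."""
    tau = T_i - t
    return (a + b * tau) * np.exp(-c * tau) + d

def model_caplet_vol(T_i, params, n_steps=401):
    """Term caplet vol sqrt((1/T_i) int_0^{T_i} sigma_i(t)^2 dt), trapezium rule."""
    t = np.linspace(0.0, T_i, n_steps)
    sig2 = abcd_vol(t, T_i, *params) ** 2
    integral = np.sum(0.5 * (sig2[1:] + sig2[:-1])) * (t[1] - t[0])
    return np.sqrt(integral / T_i)

def fit_abcd(expiries, market_vols, x0=(0.05, 0.1, 1.0, 0.1)):
    """Least-squares fit of (a, b, c, d) to stripped caplet implied vols."""
    def sse(p):
        return sum((model_caplet_vol(T, p) - v) ** 2
                   for T, v in zip(expiries, market_vols))
    return minimize(sse, x0, method="Nelder-Mead").x
```

With b = c = 0 the function is constant at a + d, which gives a quick check that the term-vol integration is correct.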

In a sense, pricing of swaptions has already been done in the process of calibration (stripping) of caplet volatilities, because the forward swap rates S(t, T_{i-1}, T_i) have been calculated. Pricing of path-dependent options, such as Bermudans, which give exercise flexibility on some or all payment dates T_i, T_{i+1}, …, T_m, would require the modified Least Squares Monte-Carlo simulation.


Portfolio Construction with Time Series Extensions

Summary

Construct a portfolio of diverse assets using large historic time series and estimate allocations for multiple levels of risk aversion, together with key risk measures such as VaR, Expected Shortfall, Sharpe or other information ratios. These measures are often used to quantify `diversification'. The key point of the Black-Litterman model is the introduction of analysts' recommendations into portfolio construction. You will have to identify sensible recommendations and formalise them in the form of constraints suitable for BL estimation.

The effective implementation will (a) improve on the sample covariance matrix estimator and (b) carefully diagnose and treat the optimisation problem, choosing from mean-variance, mean-TE, or mean-ES optimisation modes.5 It is also necessary to study the robustness of allocations (i.e., what makes the weights change) under different assumptions and constraints, ceteris paribus; this study needs to be detailed and well-illustrated.

A successful project will have matrix-form calculations and other numerical techniques coded (rather than spreadsheet calculations), robust optimisation results for different levels of risk aversion, and extensive graphic presentation. Note: a naive mean-variance optimisation on the sample mean and covariance is of little value. It is recommended to attempt time series analysis, such as variance estimation with GARCH models, returns projection, or cointegration analysis.

Data Requirements (Portfolio Design)

The objective is to come up with either (a) a multi-asset diversified portfolio or (b) a `specialised' portfolio that focuses on an industry, emerging market(s), credit, etc; another example would be a managed futures portfolio. The first kind of portfolio can have under 15 diverse assets and sub-assets, while the specialised portfolio usually includes more.6

Multi-asset includes equity, fixed income, credit and volatility. Commodities, real estate and other assets are optional. ETFs can be used to represent the asset classes for which data is hard to obtain (e.g., bonds, futures).

Replication of broad equity indices is convenient.7 However, the multi-asset approach

comes from the need to account for key factors that drive performance of names selected.

It is possible to include a factor into optimisation as if it were a traded asset.

Mean-variance optimisation is specified for excess linear returns. Technical challenges, such as the risk-free rate changing over time, can be dealt with using simplified assumptions.

5 TE stands for Tracking Error and ES stands for Expected Shortfall, mathematically known as CVaR.

6 If you follow a portfolio compiled in another study, please provide a reference.

7 Because index weights can be converted into equilibrium weights; alternatively, a market-cap approach can be used.


The minimum historical time period is 2-3 years (for daily returns), though you might use shorter periods of 1-3 months for variance estimation; that would require robust estimation and exclusion of stressed periods. Portfolios that are tradeable strategies themselves might require higher-frequency data (1-10 min). A starting source for historical daily close prices of US equities and ETFs is Yahoo!Finance.

Step-by-Step Instructions

Part I: The Black-Litterman Model

1. Construct the prior (reference distribution): equilibrium returns can come from a benchmark index, while covariance is estimated from historical data. Improve robustness.

2. Define input views of both kinds, relative and absolute.

3. Estimate the posterior distribution of excess returns using the Black-Litterman formulae.
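Step 3 can be sketched with the standard Black-Litterman posterior mean, μ_BL = [(τΣ)⁻¹ + Pᵀ Ω⁻¹ P]⁻¹ [(τΣ)⁻¹ π + Pᵀ Ω⁻¹ Q]. This is a minimal sketch; all numbers in the test are invented.

```python
# Sketch: Black-Litterman posterior mean of excess returns.
import numpy as np

def bl_posterior(pi, Sigma, P, Q, Omega, tau=0.05):
    """pi: equilibrium returns (n,), Sigma: covariance (n, n),
    P: view pick matrix (k, n), Q: view returns (k,),
    Omega: view uncertainty covariance (k, k), tau: prior scaling."""
    ts_inv = np.linalg.inv(tau * Sigma)
    om_inv = np.linalg.inv(Omega)
    A = ts_inv + P.T @ om_inv @ P                 # posterior precision
    b = ts_inv @ pi + P.T @ om_inv @ Q
    return np.linalg.solve(A, b)                  # posterior mean
```

A useful sanity check: with very uncertain views (large Ω) the posterior collapses to the equilibrium returns π, while with very confident views it collapses to Q.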

Part II: Robust Allocation

5. Choose at least one or two more optimisation designs in addition to variance minimisation. Formulate reasonable optimisation constraints (if necessary, in addition to views), for example `no short positions in bonds'.

6. Obtain allocations for three levels of risk aversion.

7. Study robustness of allocations (explore optimisation behaviour) and check for common

pitfalls such as `corner solutions’. Can you pinpoint a few factors that drive allocations?

Part III: Time Series Analysis [Two suggestions]

8. Stabilise the covariance matrix with a GARCH model before conducting BL optimisation. Alternatively, you can test the performance and risk of optimised allocations on GARCH-simulated returns (forward projection).

9. Cointegration analysis can be used to identify candidates for long-short and managed futures portfolios. An alternative is to focus on pair-trade design, quality of mean-reversion (OU process fit), and properties of P&L.
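For step 9, the quality of mean-reversion can be checked by fitting an OU process dX = θ(μ − X)dt + σ dW via its exact AR(1) discretisation. A minimal sketch is below; the spread in the test is simulated, not market data.

```python
# Sketch: fit OU parameters (theta, mu, sigma) to a spread series using the
# exact discretisation X_{t+1} = a + b X_t + eps, where b = exp(-theta dt).
import numpy as np

def fit_ou(spread, dt=1.0 / 252.0):
    """Return (theta, mu, sigma) from an AR(1) regression of the spread."""
    x, y = np.asarray(spread[:-1]), np.asarray(spread[1:])
    b, a = np.polyfit(x, y, 1)                    # slope, intercept
    theta = -np.log(b) / dt                       # mean-reversion speed
    mu = a / (1.0 - b)                            # long-run level
    resid = y - (a + b * x)
    sigma = resid.std(ddof=2) * np.sqrt(2.0 * theta / (1.0 - b**2))
    return theta, mu, sigma
```

The half-life of mean-reversion, ln 2 / θ, is a convenient summary statistic for pair-trade design.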

Resources

CQF Lecture on Fundamentals of Optimization and Application to Portfolio Selection

A Step-by-step Guide to The Black-Litterman Model (Incorporating user-specified confidence levels). Thomas Idzorek, 2002

The Black-Litterman Approach: Original Model and Extensions. Attilio Meucci, 2010.

http://ssrn.com/abstract=1117574

http://blacklitterman.org/


CVA Calculation for Interest Rate Swap

Summary

To recognise the importance of credit value adjustments to the derivatives business, we introduce this mandatory component, which must be implemented with each topic.

Calculate the credit valuation adjustment (taken by Counterparty A) to the price of an interest rate swap using the credit spreads for Counterparty B. Produce the Expected Exposure profile using the mean of the exposure distribution (the distribution of Forward LIBORs) at each time T_{i+1}.8 Advanced implementation will also produce Potential Future Exposure, with the simulated L6M taken from the 97.5% or 99% percentile.

Provide a brief discussion of your observations, e.g., exposure over time, location of maximum

exposure. The advanced sensitivity analysis will illustrate the concept of the wrong-way risk.

Inputs and Step-by-Step Instructions

The inputs for CVA calculation are available through other topics. You can make any

assumptions about the required data (credit spreads, discount factors).

Probability of default is bootstrapped from credit spreads for a reference name (any reasonable set of credit spreads can be assumed). Linear interpolation over spreads and use of a ready PD bootstrapping spreadsheet are acceptable; RR = 40%. The CVA LGD is your own choice.

Assume the swap is written on 6M LIBOR L6M expiring in 5Y.

To simulate the future values of L6M at times T_1, T_2, T_3, …, take either the HJM MC spreadsheet or a ready implementation of a calibrated model for r(t), such as Hull & White.

At each time T_{i+1} there is a distribution of Forward LIBORs, but we only require its mean (an average) to calculate the exposure. Notional N = 1 and payment frequency δ = 0.5.

Define the MtM position as Floating Leg − Fixed Leg = (L6M − K), appropriately discounted. For simplicity, it is best to choose the fixed leg (rate) K such that the exposure is positive.

The maths of obtaining Forward LIBOR from a simulated forward curve is illustrated in the Yield Curve spreadsheet. Discount factors are to be taken from the OIS spot curve.
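The pieces above combine into the standard unilateral CVA sum, CVA = LGD Σ_i DF(t_i) · EE(t_i) · [Q(t_{i−1}) − Q(t_i)]. A minimal sketch follows; the exposure, discount-factor and survival numbers in the test are invented, and in the project they would come from your simulated L6M distribution, the OIS curve, and the bootstrapped PDs.

```python
# Sketch: unilateral CVA = LGD * sum_i DF(t_i) * EE(t_i) * [Q(t_{i-1}) - Q(t_i)].
import numpy as np

def cva_swap(ee, df, surv, lgd=0.6):
    """ee:   Expected Exposure at payment dates t_1..t_n
    df:   discount factors DF(t_1)..DF(t_n) (e.g., from the OIS spot curve)
    surv: survival probabilities Q(t_0)=1, Q(t_1), ..., Q(t_n)"""
    dpd = -np.diff(surv)                 # marginal default probabilities
    return lgd * float(np.sum(np.asarray(df) * np.asarray(ee) * dpd))
```

With RR = 40%, LGD = 1 − RR = 0.6 is one natural (though not mandated) choice for the mandatory element.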

Resources

CQF Lecture on Credit Value Adjustment. For an alternative implementation example,

8 Market practice is to use the mean for the positive side of the distribution only. Another EE calculation method uses an average of maximum values from each simulation: (1/N) Σ_{i=1}^{N} max(L^i_{6M}, 0).
