
Backtesting Expected Shortfall

A GARCH-EVT-Copula Approach

©2015 Master's Thesis 98 Pages

Summary

Advanced research on coherent risk measures revealed some weaknesses in the widely used value-at-risk (VaR) model; hence, coherent risk measures such as expected shortfall (ES) became increasingly popular. Consequently, a 2012 publication of the Basel Committee suggested moving from VaR to ES as the new risk measure for the minimum amount of capital to cover potential losses. The backtesting of models plays an important role in the Basel framework because it validates the predictive power of risk models. While backtesting for VaR is simple and established in most financial institutions, backtesting for ES is unexplored and widely unknown. Moreover, the discovery in 2011 that ES is not an elicitable risk measure suggested that backtesting might not be possible. First, this thesis models financial portfolio returns using an approach combining GARCH models, extreme value theory (EVT) and copula functions. This approach is implemented in MATLAB and applied to a sample portfolio of shares from Bayerische Motoren Werke AG, Deutsche Bank AG, Adidas AG and Siemens AG. The theory is based on advanced research on the statistical properties of financial returns, known as stylized facts. Finally, this approach is used to compare the backtesting results of VaR and ES. The results show that ES is backtestable, but the basic idea of its backtesting procedure differs from that of VaR. The statistical tests for backtesting ES require the storage of more information and a large number of scenarios to compute the significance and power of the tests. This poses some technical challenges for applying the backtest to ES.

Excerpt

Table of Contents


List of Abbreviations
ARCH Autoregressive conditional heteroscedasticity
ARMA Autoregressive moving average
BIS Bank for International Settlements
CDF Cumulative distribution function
CMLE Canonical maximum likelihood estimation
EL Expected loss
ES Expected shortfall
EVT Extreme value theory
GARCH Generalized autoregressive conditional heteroscedasticity
GEV Generalized extreme value distribution
GPD Generalized Pareto distribution
i.i.d. Independent and identically distributed
LGD Loss given default
MDA Maximum domain of attraction
MLE Maximum likelihood estimation
PD Probability of default
POT Peak over threshold
SWN Strict white noise
VaR Value-at-risk

List of Tables
3.1 Influence of a dividend payment on financial returns
3.2 Ljung-Box test for 20 lags of daily DAX returns between January 1, 2000 and September 30, 2014 from Yahoo! Finance
3.3 Ljung-Box test for 20 lags of daily squared DAX returns between January 1, 2000 and September 30, 2014 from Yahoo! Finance
3.4 Daily return correlations of Commerzbank AG (CBK), Deutsche Bank AG (DBK), Siemens AG (SIE) and Adidas AG (ADS)
4.1 VaR and ES results of simulated portfolio returns for sample confidence levels
5.1 Decision errors in statistical hypothesis tests
5.2 Traffic light zones of the Basel Committee
5.3 Sample traffic light system of the Basel Committee for T = 250 and α = 1%
6.1 Backtesting parameters of 4 sample portfolios
6.2 Results of VaR and ES backtests with parameters of table 6.1. The first scenario of the M simulations is the realization Z(x). The p-value of the realization Z(x) is simulated over M scenarios

List of Figures
2.1 Historical development of the market risk regulation of the Bank for International Settlements based on the published Basel Accords from the BIS [5] and Nagler & Company [38]
2.2 Application of the Basel II accord, based on Jorion [28, page 59, Figure 3-1]
3.1 Comparison between closing prices and adjusted closing prices of BMW between January 1, 2000 and September 30, 2014
3.2 Returns of the DAX index between January 1, 2000 and October 30, 2014. The red line is a moving average of the unconditional volatility over the most recent 100 trading days.
3.3 Autocorrelation plots for 20 lags of the DAX index between January 1, 2000 and September 30, 2014 from Yahoo! Finance. The blue lines show the 95% confidence interval
3.4 Density and CDF plots of DAX 30 returns between January 1, 2000 and September 30, 2014 from Yahoo! Finance against a fitted normal distribution
3.5 QQ plots of DAX 30 returns between January 1, 2000 and September 30, 2014 from Yahoo! Finance
3.6 Yearly correlations of Commerzbank AG (CBK) against Deutsche Bank AG (DBK), Siemens AG (SIE) and Adidas AG (ADS)
3.7 Distribution functions and densities of the GPD in three cases
3.8 Loss distribution, VaR and ES presented graphically
4.1 Relative adjusted daily closing price movements between January 1, 2010 and December 31, 2012 for BMW AG, Deutsche Bank AG, Adidas AG and Siemens AG
4.2 ACF plots of the standardized residuals
4.3 ACF plots of the squared standardized residuals
4.4 Empirical CDF of the standardized residuals
4.5 Upper tail of the standardized residuals
4.6 CDF plot of simulated 10-day portfolio returns
5.1 Backtesting procedure presented graphically
5.2 Sample traffic light system of the Basel Committee for T = 250 and α = 1%
6.1 Observed loss, ES and VaR estimates for each backtest iteration of four sample portfolios with parameters of table 6.1. A violation occurs if a blue line touches a VaR estimate (green or black line)
6.2 Probability density function of Z_1(L_H) for each sample portfolio backtest with parameters of table 6.1
6.3 CDF of Z_1(L_H) for each sample portfolio with parameters of table 6.1
6.4 Probability density function of Z_2(L_H) for each sample portfolio backtest with parameters of table 6.1
6.5 CDF of Z_2(L_H) for each sample portfolio backtest with parameters of table 6.1

1 Introduction
Since the publication of the market risk amendment in 1996, value-at-risk (VaR) has become the most widely used risk measure in risk management and the standard for evaluating the risk reserve in the Basel framework. The research of Artzner et al. (1998) [4] on coherent risk measures reveals some weaknesses of the VaR approach. VaR lacks the property of subadditivity, which means VaR models can be manipulated, because diversification effects are not necessarily rewarded. Furthermore, VaR does not detect the problem of fat tails in financial returns. For that reason, the Basel Committee published the Fundamental review of the trading book in May 2012. This consultative paper suggests a shift from VaR to expected shortfall (ES) as the new risk measure for the risk reserve. The backtesting of risk models plays an important role in the Basel framework and is one of the major strengths of the VaR model. In contrast, the backtesting of ES is largely unexplored, and the research on elicitable risk measures conducted by Gneiting (2010) [25] led to the assumption that ES is not backtestable. Finally, the research of Acerbi and Szekely (2014) [2] shows that elicitability is not necessary for backtesting. Furthermore, they suggest three methods to backtest ES.
The objective of this thesis is to show the difference between the current backtesting procedure for VaR and the backtesting methods suggested by Acerbi and Szekely for ES through a quantitative analysis of stock portfolios. A backtesting procedure is a statistical method to validate the predictive power of risk models. Hence, the first step of this thesis is to find a good model to simulate financial portfolio returns. The progress of research on financial returns has revealed some important properties, which are known as the stylized facts of financial returns. The most important stylized fact is the fat tail property, which means that large positive or negative returns are observed. This is why the normal distribution is unsuitable for modeling financial returns. Extreme value theory (EVT) is used to model these extreme events. It requires independent and identically distributed (i.i.d.) random variables. A further stylized fact is that financial returns show little serial correlation while volatility shows pronounced serial correlation, which means that financial returns do not fit the assumption of i.i.d. variables. Therefore, a combined AR(1) and GJR-GARCH(1, 1) process is used to produce an

i.i.d. process of random variables. The last stylized fact is nonlinear dependence. This property states that multivariate correlation depends on the market situation. For example, bear markets exhibit higher correlation than bull markets. Hence, the dependence structure of a portfolio is simulated with copulas. This approach to modeling financial returns is called the GARCH-EVT-Copula approach.
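The filtering step of this approach can be sketched in a few lines. The thesis implements the full methodology in MATLAB; the snippet below is only a simplified Python illustration of the GJR-GARCH(1,1) volatility recursion that produces approximately i.i.d. standardized residuals, using a constant mean for brevity (the thesis uses an AR(1) mean equation). All parameter values are hypothetical, not estimates from the thesis.

```python
import math
import random

def gjr_garch_filter(returns, mu, omega, alpha, gamma, beta):
    """Filter returns through a GJR-GARCH(1,1) recursion and return the
    standardized residuals z_t = eps_t / sigma_t, where
    sigma_t^2 = omega + (alpha + gamma * 1{eps_{t-1} < 0}) * eps_{t-1}^2
                + beta * sigma_{t-1}^2.
    """
    # start the recursion at the unconditional variance of the process
    var_prev = omega / (1.0 - alpha - gamma / 2.0 - beta)
    eps_prev = 0.0
    std_resid = []
    for r in returns:
        leverage = gamma if eps_prev < 0 else 0.0   # asymmetry term
        var_t = omega + (alpha + leverage) * eps_prev ** 2 + beta * var_prev
        eps = r - mu                                # demeaned return
        std_resid.append(eps / math.sqrt(var_t))
        eps_prev, var_prev = eps, var_t
    return std_resid

# illustrative synthetic returns and made-up parameters
random.seed(0)
sample = [random.gauss(0.0, 0.01) for _ in range(500)]
z = gjr_garch_filter(sample, mu=0.0, omega=1e-6,
                     alpha=0.05, gamma=0.1, beta=0.85)
```

In the full approach, EVT (a GPD fit to the tails) and a copula would then be applied to these standardized residuals rather than to the raw returns.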
This thesis starts with a non-mathematical introduction to risk and randomness in financial markets and an overview of market risk regulation in the Basel frameworks, which forms the background for this thesis. Chapter 3 provides a theoretical background on financial returns, their stylized facts, the GARCH-EVT-Copula approach and the risk measures VaR and ES. This is followed by a short description of the data provider and the programming language MATLAB, which is used in this thesis. Chapter 4 presents the application of the GARCH-EVT-Copula methodology to a sample portfolio and its implementation in MATLAB. Finally, the backtesting procedure for VaR developed by the Basel Committee and the methods for ES suggested by Acerbi and Szekely are described and tested with a sample of stock portfolios in chapter 5. The results show that ES is backtestable, but the basic ideas of the backtests differ between the current Basel approach and the suggested tests of Acerbi and Szekely. The computation of significance and power requires the storage of more information and the simulation of a large number of scenarios, which poses some technical challenges for applying the backtest to ES.

2 Risk in Perspective
2.1 Risk and Randomness
There is no general definition of risk. The Oxford English Dictionary [48, page 1313] defines risk as the possibility of something bad happening at some time in the future. Most often, only the downside of risk is mentioned, but risk also includes a potential for gain. This thesis focuses on the financial risk of assets and portfolios. In this context, a definition such as "volatility of unexpected outcomes, which can represent the value of assets" or "the quantifiable likelihood of loss or expected return" provides a good basis. These definitions capture some of the important elements of risk, but no single one-sentence definition is entirely satisfactory in all contexts (see McNeil et al. (2005) [37, page 1]).
Since 1970, the volatility of financial markets has increased, a development that has been accentuated by automated high-frequency trading systems. While the average holding period of a stock was seven years in 1940, it decreased to two years in 1987. Today, the average holding period of a stock is five days.¹ This downward trend is attributable to the increasing frequency of crises such as the oil price shock in 1973, Black Monday in 1987, the Japanese stock-price bubble in 1989, the terrorist attacks of September 11, 2001, the financial crisis in 2008 and the beginning of the debt crisis in the southern European countries in 2011. These are just a few examples of recent crises; Jorion [28, chapter 1] provides a full historical review since 1970. This development of the financial markets has led to the recent growth in quantitative risk management.
The only constant across these financial events is their unpredictability. Investors were aghast at the rapidity with which high volatility entered the markets, which created substantial losses. Financial risk management provides partial protection against such sources of risk. It is therefore necessary to build a mathematical environment to quantify financial risk. A mathematical description of unpredictable financial events is the notion of randomness. The Russian mathematician Kolmogorov [31] offered

¹ A complete review of this development is presented in the article by Jeff Kleintop published by LPL Financial [30]

an axiomatic definition of randomness and probability in 1933. Kolmogorov defines a probabilistic model by a triplet (Ω, F, P). The set Ω describes the possible outcomes of an experiment, and an element of Ω is a realization of the experiment. F is the set of all possible events and P denotes the probability measure for these events. Hence, if A is an element of F, the statement P(A) is the probability that the event A occurs. It is not the object of this thesis to define a probabilistic model with all mathematical proofs, but Kolmogorov translates this train of thought into a definition with clear rules. Consider, once again, the case of investors who hold a portfolio of stocks. Investors today hold some assets with an uncertain future value. The development of single stocks is risky and unpredictable.² To model this situation, it is necessary to define a random variable for the risky position X as a function on the probability space (Ω, F, P). A distribution function F_X(x) = P(X ≤ x) can then be modeled by observing the historical risk of the portfolio. The random variable of this example is the change in portfolio value and is better known as the rate of return or financial return. With concepts such as VaR and ES, the future risk of a portfolio can be calculated. All necessary definitions for modeling risk using ES will be introduced in chapter 3.
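A minimal illustration of the distribution function F_X(x) = P(X ≤ x) is the empirical CDF of a sample of observed returns, estimated as a step function. The toy return values below are hypothetical and serve only to show the construction.

```python
import bisect

def empirical_cdf(sample):
    """Return a step-function estimate of F_X(x) = P(X <= x)
    built from an observed sample."""
    ordered = sorted(sample)
    n = len(ordered)
    def F(x):
        # fraction of observations not exceeding x
        return bisect.bisect_right(ordered, x) / n
    return F

returns = [-0.02, 0.01, 0.005, -0.01, 0.03]   # toy daily returns
F = empirical_cdf(returns)
print(F(0.0))   # → 0.4 (two of the five returns are <= 0)
```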
2.2 Financial Risk
Financial risks are classified into four categories: market risk, liquidity risk, credit risk and operational risk. This thesis deals with market risk, but some risks interact with each other. Understanding the risks and their effect on the market is important for quantifying them later. The following sections are based on the research of Jorion [28, chapter 1.4] and Wolke [49, chapter 4].
2.2.1 Market Risk
Market risk itself covers a broad spectrum. It can take two forms: absolute risk and relative risk. Absolute risk measures risk in the relevant currency; an example is the volatility of returns. Relative risk is measured relative to a benchmark index; this method often uses the tracking error or a deviation from the index. Furthermore, market risk is caused by the direction of movements in financial

² The message of this sentence is important. A world in which one or all persons can look into the future does not work. No person is in a position to predict actual future market movements. Imagine that every person could predict the future and play lotto. Everyone would then win the jackpot and, after discounting the cost of lotto, everyone would lose. The same applies to a single person. If a single person were to win all the time, other people would stop playing lotto because of the obvious disadvantage. This example can be transferred to equity trading. There are no options or financial models that generate profit without risk. Risk management and other financial models can only support the investor and reduce the risk.

variables, such as stock prices, interest rates, exchange rates and commodity prices. The focus of this thesis is on changes in stock prices, hereinafter also referred to as market price risk for stocks (see Wolke [49, page 145]). For this reason, the risks caused by interest rates, exchange rates and commodity prices are not specified further, but they behave similarly. The following definition defines risk for equity portfolios; chapter 4 of this thesis focuses on how to model it accurately. It follows from the definition that the financial return is the random variable of market risk for stocks.
2.1 Definition of Market Price Risk for Stocks: The negative deviation from a planned target size (assets, profit) caused by uncertain future developments in stocks is defined as the market price risk for stocks.
2.2.2 Liquidity Risk
Liquidity risk can take two forms: asset liquidity risk and funding liquidity risk. Asset liquidity risk, also known as market/product liquidity risk, plays an equally important role in the application of VaR and ES. It arises when a transaction cannot be conducted at prevailing market prices owing to the large size of the portfolio position. This risk varies across categories of assets. Major currencies, treasury bonds and equities of big companies have deep markets where most positions can be liquidated easily. In contrast, exotic derivatives contracts or emerging-market equities are not traded often. Their liquidation can take some time and any transaction can quickly affect prices (see Jorion [28, page 23]). This risk is modeled by a parameter, the time horizon H, also named the liquidation period or holding period, for VaR and ES. Funding liquidity risk, also known as cash flow risk, refers to the inability to meet payment obligations, caused by bad or changed liquidity planning. This is also an issue for portfolios that are leveraged or hold a short sales position. An example of the second effect is the short squeeze, which is very common among hedge funds.³
2.2.3 Credit Risk
Credit risk deals with the fact that counterparties may be unwilling or unable to repay credits or bank loans. If a counterparty defaults, the whole payment is most often not lost. For example, if a private individual buys a house and defaults, the bank can
³ A popular example of a short squeeze is Volkswagen. In 2008, the share price of Volkswagen rose from €200 to €1,000 in two days. Several hedge funds were holding a short position on Volkswagen and, on October 26, 2008, Porsche revealed that it had increased its shares. There were not enough shares to buy back the short positions of the hedge funds and the stock price exploded (see Reuters [41])

sell the house and generate income. Hence, the result of a credit risk analysis is the expected loss (EL) of a loan. The EL is calculated as EL = PD · EaD · LGD. PD is the probability of default, that is, the probability that a client defaults. EaD is the exposure at default; this is the mark-to-market value of the loan object, for example, the price of a house as determined by an expert. LGD is the loss given default, in other words, the expected fraction that a bank loses in a default (for example, selling a house generates costs that the bank must pay). The uncertainty is thus captured by three statistical quantities. When there is a change in a counterparty's ability to meet its obligations, it has implications for the PD.
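The formula EL = PD · EaD · LGD can be illustrated with a small worked example; all figures below are invented for illustration and are not taken from the thesis.

```python
# Hypothetical loan: illustrative numbers only.
PD  = 0.02      # probability of default: 2% chance the client defaults
EaD = 200_000   # exposure at default: mark-to-market value of the loan object
LGD = 0.40      # loss given default: expected fraction lost if default occurs

EL = PD * EaD * LGD   # expected loss of the loan
print(EL)             # → 1600.0 currency units
```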
2.2.4 Operational Risk
Operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. The most important field of operational risk for this thesis is model risk, which is part of inadequate internal processes. Model risk refers to situations in which the model and the data do not fit together. This applies to VaR and ES models if an incorrect distribution, such as the normal distribution, is chosen. Furthermore, there is a risk associated with fitting the parameters of the distribution. The correct time period for the historical data is the significant variable here, but it is not easy to define accurately. Unfortunately, model risk is insidious; the appropriate time period is examined during backtesting via the variable named the historical data window.
2.3 Measurement and Management
Much of this thesis is concerned with techniques for the measurement of market price risk. In section 2.1, a simple example of a portfolio is described. The first step in financial risk management begins with the identification of the relevant risk factors. The next step is to find a practical model for the risk factors and the historical data. The research in market risk is far advanced; it is well known that ES is a good measure for that kind of risk. The major difficulty in measuring market price risk is to find a fitted distribution for the returns and combine it with the ES. The last step of managing risk is to validate and backtest this model on historical data. Each time series of portfolio returns generates a different empirical distribution. Therefore, it is difficult to find a general model for all market data. Due to these problems, a standard regulation to quantify risk has been developed by the Basel Committee over recent decades.

2.4 Regulation
Most models used by banks, insurance companies and other institutions for risk management are based on the Basel regulatory framework. The target of this chapter is to show the development of market risk regulation and the current quantitative requirements. This development is displayed graphically in figure 2.1 and explained in the following sections. A full overview of all regulations and a historical overview are provided on the website of the Bank for International Settlements (BIS) [5], and a detailed summary can be found in Jorion's study [28, chapter 3].
[Figure 2.1 is a timeline of the Basel Accords: Basel I (Jul. 1988: minimum capital standard, focus on credit risk, few rules for market risk); the Market Risk Amendment (Jan. 1996: equity capital rules for market price risks, quantitative measurement of market risk with the value-at-risk approach, standardized method or internal models approach); Basel II (Jun. 2004: three-pillar system of minimum capital requirements, supervisory review and market discipline; few changes to the market risk amendment; operational risk charge); Basel 2.5 (Jul. 2009: stressed value-at-risk, incremental risk capital charge, comprehensive risk measure charge); Basel III (Dec. 2010: credit valuation adjustment charges, higher standards for regulatory capital and for minimum capital for market and credit risk); and the Fundamental Review of the Trading Book (May 2012 / Oct. 2013: new definition of the trading book, stressed calibration, move from value-at-risk (VaR) to expected shortfall (ES), new confidence level of 97.5%, quantitative impact studies (QIS) since 2014).]

Figure 2.1: Historical development of the market risk regulation of the Bank for International Settlements, based on the published Basel Accords from the BIS [5] and Nagler & Company [38]

2.4.1 Road to Regulation
Regulations go back a long way. The Basel Committee on Banking Supervision generated much of the regulatory drive. It was established by the central bank governors of the Group of Ten (G-10) at the end of 1974 with the aim of creating a level of standardization in financial sector supervision. The development of the Basel Committee was stimulated by the low equity capital of financial institutions around the world. However, the Basel Committee does not have the right to enact laws. It formulates only broad supervisory standards and guidelines, and recommends statements of best practice. The individual authorities of each national system select suitable models and implement them in their national laws. The supervisory requirements in Germany are implemented through the Kreditwesengesetz (KWG), the Solvabilitätsverordnung (SolvV) and the Mindestanforderungen an das Risikomanagement (MaRisk). In general, the national laws and the guidelines of the Basel Committee are very similar. For this reason, only the Basel concepts are discussed in this thesis.
The first Basel Accord on Banking Supervision was released in 1988. It is named Basel I and included mainly a minimum capital standard for credit risk, which was the most important source of risk in the banking industry at that time. With the development of derivatives and higher-frequency trading, some financial institutions saw a new need for risk management. J.P. Morgan published the paper RiskMetrics [29] and the market risk measure VaR was born. In a highly dynamic world with round-the-clock market activity, instant market valuation of trading positions became a necessity. In 1996, the important amendment to Basel I was released.⁴ The Basel I framework was expanded to cover market risk. For the quantification of market risk, VaR was proposed.
2.4.2 Basel II
In June 2004, the second Basel Accord was published by the G-10 central bank governors and heads of supervision. The implementation date for the new rules was set to the end of 2006 and several changes were made. A full list of changes is available in the original Basel II paper [6]; the most important changes are set out below. Basel II is based on a three-pillar system, which is very popular in the financial industry and is illustrated in figure 2.2.

⁴ It was originally released in January 1996 and modified in 1997. The amendment was further revised in June 2004 and published in the paper Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework [7]

[Figure 2.2 shows the Basel II three-pillar structure: Pillar 1, minimum capital requirements, covering credit risk (standardized approach, foundation internal ratings based (IRB), advanced internal ratings based (IRB)), market risk (standardized approach, internal models approach (IMA)) and operational risk (basic indicator approach, standardized approach, advanced measurement approach (AMA)); Pillar 2, supervisory review; Pillar 3, market discipline.]

Figure 2.2: Application of the Basel II accord, based on Jorion [28, page 59, Figure 3-1]

The first pillar consists of risk-based capital requirements for credit risk, market risk and operational risk. For each risk type, the mathematical models are defined and default input parameters are set. The level of capital in the global banking system is determined at eight per cent of risk-weighted assets. The second pillar describes the role of regulators for banks. The supervisor has to ensure that banks operate above the minimum regulatory capital ratios. Supervisors have also taken on responsibility for checking that banks have a process for assessing their risks and that a correct problem management procedure is in place if failures occur. The third pillar focuses on market discipline. This pillar offers an incentive for banks to conduct their business in a safe and efficient manner. Moreover, banks are expected to publicize information about their exposures, risk profiles and capital cushion to ensure better information content. The market risk charge was added in the amendment of 1996. Since the amendment, the market risk framework separates a bank's assets into two categories: trading book and banking book. The trading book includes the bank portfolio of financial instruments that are held intentionally for short-term resale and are typically marked to market. The banking book consists of other instruments, mainly loans (see Jorion [28, page 60]). Basel II allows either of two approaches for quantifying risk: the standardized method or the internal models approach. The standardized method is suitable for small financial institutions due to its low costs and low complexity. First, the market risk of each type (interest rate risk, exchange rate risk, share price risk and commodity risk) is calculated for the portfolio using specific guidelines. The market risk results as the summation of all risk charges across the four categories. This method is simple but ignores diversification between risk factors. Hence, this capital charge will be much too high. Jorion [28, page 62, Box 3-2] presents an example of this circumstance. An alternative is the internal models approach. This was developed in response to industry criticism of the standardized method and marked the first time that banks were allowed to use their own risk measurement models to determine their capital charge. By this time, institutions had developed their own ambitious risk systems. Banks and insurance companies earn money through risk; hence, they are interested in managing and measuring it. Consequently, the financial institutions' own risk systems were more complex than regulators could dictate. But, first, banks must meet some qualitative criteria. To begin with, they must demonstrate sound risk management that is integrated into management decisions. Second, the risk systems must be subject to rigorous regulatory stress tests. The last requirement is an independent risk-control unit, including external audits. When these criteria are satisfied, a bank's own risk system can be used to determine market risk, subject to the quantitative requirements set out in the following definition.
2.2 Definition of the Quantitative Parameters of Market Risk Instituted by the Basel Committee: The qualitative and quantitative standards are listed in the Basel II paper [6, page 191-197]. The following points are an abstract of the minimum requirements. Banks have flexibility in devising the precise nature of their models.
- VaR must be computed on a daily basis
- A horizon of 10 trading days or two calendar weeks
- VaR can be scaled by the square-root-of-time method
- A 99 percent one-tailed confidence interval
- A historical data period of at least one year, updated at least once a quarter
- No particular type of model is prescribed, for example, historical simulation or Monte Carlo simulation
- Banks can use their own approach to calculate empirical correlations; the bank's system for measuring correlation must be sound and implemented with integrity
- The risk measurement system must be subject to a regular backtesting program against actual daily changes in portfolio value over longer periods of time; the supervisor sets a plus factor for the risk measure reflecting the result of this backtesting
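The square-root-of-time scaling permitted above can be sketched in a few lines; the one-day VaR figure used here is hypothetical.

```python
import math

def scale_var(var_1d, horizon_days=10):
    """Scale a one-day VaR to a longer horizon by the square-root-of-time
    rule permitted in the Basel II quantitative standards."""
    return var_1d * math.sqrt(horizon_days)

# hypothetical one-day 99% VaR of 1.5% of portfolio value
var_10d = scale_var(0.015, 10)
print(round(var_10d, 4))   # → 0.0474, i.e. roughly 4.74% over 10 days
```

Note that this rule is exact only for i.i.d. returns; under volatility clustering, as described by the stylized facts, it is an approximation.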

2.4.3 Fundamental Review of the Trading Book: A Revised Market Risk Framework
The financial crisis revealed some weaknesses of the Basel Committee's regulatory framework. In May 2012, the banking supervisors published a consultative paper entitled Fundamental review of the trading book: A revised market risk framework [8]. A second version of this consultative paper was published in October 2013 [9]. In January 2014, the commenting period expired and the quantitative impact studies (QIS) began thereafter. The fundamental review of the trading book provides significant changes to the measurement of market risk. The changes most important for this thesis are the transition from the risk measure VaR to ES and the reduction of the confidence level from 99% to 97.5%. The Basel Committee on Banking Supervision attributes this to the inability of the VaR method to capture tail risk. The list of further changes is available in the original consultative papers; the changes are also discussed and summarized by consulting companies such as Deloitte [19]. ES has some important advantages over its counterpart VaR, but an unsolved issue is the backtesting of ES. While VaR can easily be backtested with a Bernoulli sampling scheme, ES is an expected value of a tail distribution and does not fulfill the conditions for independent Bernoulli sampling. Nevertheless, the Basel Committee proposes the VaR backtesting method for ES. This circumstance is discussed and criticized by prestigious risk magazines and finance professors such as John Hull, Alan White [27], Paul Embrechts, Laurie Carver [14] and Duncan Wood [50] on Risk.Net [42]. The authors argue that backtesting ES is not impossible but much harder than for VaR. They expect several publications by researchers on this topic, from which the Basel Committee will recommend one of the methods. It is undisputed that the methodology of the new backtesting differs from the current idea of backtesting VaR.

3 Basic Concepts in Risk Management
3.1 Trading Days and Asset Prices
In business, the trading day is the time span during which a particular stock exchange is open; for example, the Frankfurter Wertpapierbörse [13] is open from 9:00 AM to 5:30 PM. When a trading day ends, all share trading ends and prices are frozen until the next trading day begins. Trading days are usually Monday to Friday, with the exception of non-business days. There are, on average, 252 trading days in a year.
3.1 Definition of Asset Price: The final price at which a security is traded on a given trading day is defined as the asset price or closing price, denoted by P_t. If there is a portfolio with many assets, each asset is indicated by P_{t,k} = P_{time,asset} and the portfolio price by the subscript port.
3.2 Returns
3.2.1 Discrete Returns
Investors are interested not in the asset price itself, but in the rate of return generated by the investment. Furthermore, returns have more attractive statistical properties than prices, such as stationarity. There are two accepted types of returns in science: discrete and continuous. The research of Daníelsson [17, chapter 1] and Schmid [44, chapters 1-3] provides a good overview of returns and their statistical properties. For now, technical components such as dividends or stock splits are ignored for simplicity.
3.2 Definition of Discrete Return: Given are the asset prices P for the time points t and t-n. The discrete return is the percentage change in asset prices for period n, indicated by R_t(n). If there is a portfolio with many assets, each asset is indicated by R_{t,k}(n) = R_{time,asset}(n) and the portfolio return by the subscript port:

    R_t(n) = (P_t - P_{t-n}) / P_{t-n} = P_t / P_{t-n} - 1        (3.1)
This is a general definition for an n-period return. For simplicity, the period of daily returns is not indicated: R_t = (P_t - P_{t-1}) / P_{t-1}. Many applications need to convert daily returns to another period such as monthly or annual returns. Therefore, formula (3.1) can be transformed so that only daily returns are required for the calculation and they can be converted to any period.
    R_t(n) = P_t / P_{t-n} - 1
           = (P_t · P_{t-1} · ... · P_{t-n+1}) / (P_{t-1} · ... · P_{t-n+1} · P_{t-n}) - 1
           = ∏_{i=t-n+1}^{t} P_i / P_{i-1} - 1
           = ∏_{i=t-n+1}^{t} (R_i + 1) - 1

Thus, the return of the period n is not simply the sum of the daily returns. Consequently, the average return of a discrete time series is calculated by the geometric average:

    R_t(n) = ( ∏_{i=t-n+1}^{t} (R_i + 1) )^{1/n} - 1        (3.2)
A specific advantage of discrete returns is that the return of a portfolio R_{t,port} is simply the weighted sum of the returns of the individual assets. The portfolio weight is the market value of the holding in a given security divided by the total market value of the portfolio.
3.3 Definition of Portfolio Weight: Given are N asset prices P_{t,k} with k = 1, ..., N of a portfolio at time t. The number of stock shares of asset k is given by n_k. The portfolio weight for asset k is given by:

    w_{t,k} = n_k P_{t,k} / ∑_{i=1}^{N} n_i P_{t,i}        (3.3)

Obviously, the sum of all weights is equal to one, and the value of the portfolio at time t is P_{t,port} = ∑_{i=1}^{N} n_i P_{t,i}.
3.4 Theorem of the Discrete Return of a Portfolio: Given are the weights w for time t-1 and the discrete one-day returns R of N assets for time t. The one-day return of the portfolio for time t is:

    R_{t,port} = ∑_{i=1}^{N} w_{t-1,i} R_{t,i}        (3.4)
Proof:

    R_{t,port} = ∑_{i=1}^{N} n_i P_{t,i} / ∑_{j=1}^{N} n_j P_{t-1,j} - 1
               = ∑_{i=1}^{N} (P_{t,i} / P_{t-1,i}) n_i P_{t-1,i} / ∑_{j=1}^{N} n_j P_{t-1,j} - 1
               = ∑_{i=1}^{N} (1 + R_{t,i}) w_{t-1,i} ∑_{j=1}^{N} n_j P_{t-1,j} / ∑_{j=1}^{N} n_j P_{t-1,j} - 1
               = ∑_{i=1}^{N} (1 + R_{t,i}) w_{t-1,i} - 1
               = ∑_{i=1}^{N} w_{t-1,i} R_{t,i}

where the last step uses ∑_{i=1}^{N} w_{t-1,i} = 1.
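Theorem 3.4 can be checked numerically. The following is a minimal sketch in Python (the computations in this thesis itself are done in MATLAB); the share counts and prices of the three-asset portfolio are hypothetical:

```python
import numpy as np

# Hypothetical 3-asset portfolio: share counts and closing prices
n_shares = np.array([10.0, 5.0, 20.0])     # shares held per asset
P_prev = np.array([100.0, 50.0, 25.0])     # closing prices at t-1
P_now = np.array([102.0, 49.0, 26.0])      # closing prices at t

# Portfolio weights at t-1, formula (3.3)
w_prev = n_shares * P_prev / np.sum(n_shares * P_prev)

# Portfolio return computed directly from portfolio values ...
R_direct = np.sum(n_shares * P_now) / np.sum(n_shares * P_prev) - 1

# ... and as the weighted sum of individual discrete returns, formula (3.4)
R_weighted = np.sum(w_prev * (P_now / P_prev - 1))

print(R_direct, R_weighted)   # both approximately 0.02
```

Both computations agree to machine precision, which is exactly the statement of the theorem.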
For statistical applications, it is often necessary to calculate or estimate the average return of a period. Here, a weakness of the discrete return definition is revealed. Most statistical methods and tests use the arithmetic average for estimating the mean. As established above, however, the geometric average is the correct method for calculating the average return of a period (see formula (3.2)). Consequently, many implemented functions of statistical software cannot be applied directly to discrete returns, and mathematical proofs and applications may become complicated.
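The point can be illustrated with a small numerical sketch (Python is used here for illustration; the return values are hypothetical). The arithmetic mean of discrete returns can even have the wrong sign:

```python
import numpy as np

# Four daily discrete returns; the arithmetic mean is positive even though
# the investment actually loses money over the period.
R = np.array([0.50, -0.50, 0.20, -0.10])
n = len(R)

total_return = np.prod(R + 1) - 1                 # n-period return, about -0.19
geometric_avg = np.prod(R + 1) ** (1 / n) - 1     # formula (3.2), about -0.051
arithmetic_avg = R.mean()                         # +0.025, misleading sign

print(total_return, geometric_avg, arithmetic_avg)
```

The geometric average correctly reflects the loss, while the arithmetic average suggests a gain.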
3.2.2 Continuous Returns
The definition of the discrete rate of return is based on discrete compounding, whereby interest is calculated and added to the principal at certain set points in time, such as daily, weekly, monthly or yearly. Continuous compounding is the opposite of discrete compounding: interest is earned constantly and immediately begins earning on itself. A detailed description and derivation of the compounding formulas can be found in Ortmann [39, chapter 2.1].
3.5 Definition of Continuous Return: Given are the asset prices P for times t and t-n. The continuous return is the percentage change in asset prices for period n, indicated by r_t(n). If there is a portfolio with many assets, each asset is indicated by r_{t,k}(n) = r_{time,asset}(n) and the portfolio return by the subscript port:

    r_t(n) = ln(P_t / P_{t-n}) = ln P_t - ln P_{t-n}        (3.5)
For simplicity, the period of daily returns is not indicated: r_t = ln P_t - ln P_{t-1}. Between formulas (3.1) and (3.5), the following relations apply:

    r_t(n) = ln(R_t(n) + 1),        (3.6)
    R_t(n) = e^{r_t(n)} - 1.        (3.7)
Hence, each type of return can easily be converted into the other one. The main advantage over the discrete return is that the n-period return is simply the sum of the daily returns:

    r_t(n) = ln(P_t / P_{t-n})
           = ln((P_t · ... · P_{t-n+1}) / (P_{t-1} · ... · P_{t-n}))
           = ln ∏_{i=t-n+1}^{t} P_i / P_{i-1}
           = ∑_{i=t-n+1}^{t} ln(P_i / P_{i-1})
           = ∑_{i=t-n+1}^{t} r_i        (3.8)
Consequently, the average n-period return is calculated by the arithmetic mean:

    r_t(n) = (1/n) ∑_{i=t-n+1}^{t} r_i        (3.9)

Based on this additivity property, the continuous return is the preferred type of definition for financial and time series applications (see Schmid [44, page 6]). On the other hand, a disadvantage of the continuous return is that the calculation of the portfolio return is more complicated; in today's computer age, however, this hardly matters.
3.6 Theorem of Continuous Return of a Portfolio: Given are the weights w for time t-1 and the continuous one-day returns r of N assets for time t. The one-day return of the portfolio for time t is:

    r_{t,port} = ln( ∑_{i=1}^{N} w_{t-1,i} e^{r_{t,i}} )        (3.10)
Proof: With equations (3.6) and (3.7), the proof is very simple. Furthermore, formula (3.4) and the fact that ∑_i w_{t-1,i} = 1 are used.

    r_{t,port} = ln(R_{t,port} + 1)
               = ln( ∑_{i=1}^{N} w_{t-1,i} R_{t,i} + 1 )
               = ln( ∑_{i=1}^{N} w_{t-1,i} (R_{t,i} + 1) )
               = ln( ∑_{i=1}^{N} w_{t-1,i} e^{r_{t,i}} )
Discrete and continuous returns follow two different types of definitions. Investors are primarily interested in discrete returns because they show the actual investment return. But there are good reasons to prefer continuous returns. First, the continuous return is additive for an n-period return. Second, continuous returns are symmetric, while discrete returns are not. This means that an investment of €100 that yields a discrete return of 50% followed by -50% in the next period will result in €75. The same investment with continuous returns results in €100 (see Daníelsson [17, page 4]). For these reasons, the definition of continuous returns is used in this thesis, and they are referred to simply as returns.
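The symmetry argument can be verified directly (a minimal Python sketch; the €100 example follows the text above):

```python
import math

# Discrete returns of +50% then -50% do not cancel: EUR 100 -> EUR 75.
value_discrete = 100 * (1 + 0.50) * (1 - 0.50)    # 75.0

# The equivalent continuous returns, via r = ln(R + 1), sum to ln(0.75):
r_sum = math.log(1 + 0.50) + math.log(1 - 0.50)   # about -0.2877

# Continuous returns of +50% and -50%, in contrast, are symmetric:
value_continuous = 100 * math.exp(0.50) * math.exp(-0.50)   # about 100.0

print(value_discrete, r_sum, value_continuous)
```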
3.3 Adjusted Asset Prices
It has been specified that the return of a portfolio is the important variable for measuring quantitative risk. For the calculation of the return, it is necessary to obtain the asset prices of the stocks. But it is usually incorrect to calculate the return with closing prices. Over time, actions can be observed that change the prices of equities, such as stock splits and dividends, without affecting the value of the firm. Table 3.1 shows the effect of receiving a dividend payment. On day 3, the value of the observed return is -1.98%. But on this day the investor receives a dividend payment of €3.00 and, consequently, the realized return is +0.98%. For this reason, it has become accepted in quantitative risk management to use adjusted asset prices or, as they are better known, adjusted closing prices. The idea behind this is as follows: create a multiplier for closing prices so that the formulas of the previous chapter for the return remain valid. This procedure is shown in an example in table 3.1. The dividend multiplier is calculated from the dividend as a percentage of the closing price. For example, on day 3, the multiplier is calculated as 100/(100+3). All closing prices before the payment of the dividend are adjusted with that multiplier. As a result, the calculated returns of the adjusted closing prices are equal to the realized returns. This method is known as retrograde adjusting. It is also possible to adjust progressively, which means that the closing prices after the payment of the dividend are adjusted. Retrograde adjusting is more established because the current closing prices and the adjusted closing prices are equal. Figure 3.1 shows the difference between the closing prices and the retrograde adjusted prices of BMW in the period from January 1, 2000 to September 30, 2014. BMW is a typical dividend stock with an average of two dividend payments per year. Accordingly, the realized continuous return of 163.53% is significantly higher than the return of the closing prices of 105.88%. Furthermore, there are more corporate actions affecting returns, such as stock splits, capital increases or capital decreases. A complete list and detailed description of the multipliers can be found in Schmid [44, page 89]. The website of Yahoo!Finance [51] provides adjusted closing prices and uses the retrograde method for stock splits and dividend actions. Other corporate actions are not supported in the adjustment (see Yahoo!Finance [52]). This corresponds to the standards of the Center for Research in Security Prices [15].
Day | Asset price | Dividend | Crude return | Realized return | Multiplier  | Adjusted asset price | Return adjusted asset price
1   | 100         | -        | -            | -               | 100/(100+3) | 97.087               | -
2   | 102         | -        | +1.98%       | +1.98%          | 100/(100+3) | 99.029               | +1.98%
3   | 100         | 3.00     | -1.98%       | +0.98%          | -           | 100.000              | +0.98%
4   | 103         | -        | +2.96%       | +2.96%          | -           | 103.000              | +2.96%

Table 3.1: Influence of a dividend payment on financial returns
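The retrograde adjustment of table 3.1 can be sketched as follows (a minimal Python illustration; the price and dividend figures are those of the hypothetical example above):

```python
def adjust_retrograde(prices, dividends):
    """Retrograde adjustment: every closing price before a dividend payment
    is scaled by close / (close + dividend) of the payment day."""
    adj = list(prices)
    for t, d in enumerate(dividends):
        if d > 0:
            m = prices[t] / (prices[t] + d)   # e.g. 100 / (100 + 3) on day 3
            for s in range(t):                # only days before the payment
                adj[s] *= m
    return adj

# Data of Table 3.1: a dividend of 3.00 is paid on day 3
prices = [100.0, 102.0, 100.0, 103.0]
dividends = [0.0, 0.0, 3.0, 0.0]
adjusted = adjust_retrograde(prices, dividends)
print([round(p, 3) for p in adjusted])   # [97.087, 99.029, 100.0, 103.0]
```

With several dividends, the multipliers compose multiplicatively, which is why the current closing price always remains unadjusted under this scheme.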

Figure 3.1: Comparison between closing prices and adjusted closing prices of BMW between January 1, 2000 and September 30, 2014 (closing price vs. adjusted closing price, in euro)
3.4 Stylized Facts of Returns
Research on the properties of financial returns has progressed in recent years. Four statistical properties have become apparent. These are often called the stylized facts of financial returns:

Fat tails
Volatility clusters
Returns show little serial correlation, while squared returns show profound serial correlation
Non-linear dependence

Periods of high volatility can be observed, followed by periods of low volatility (see figure 3.2). The tendency of returns to cluster together is known as a volatility cluster. Furthermore, financial returns occasionally take very large positive or negative values. This is very unlikely to be observed if returns are modeled by a normal distribution; hence, the tails do not fit the normal distribution. This is called fat tails and plays an important role in this thesis. Moreover, returns are not i.i.d.: they show little serial correlation, but squared returns show profound serial correlation. This is a significant issue for modeling financial returns because most known concepts need i.i.d. random variables. The last property, non-linear dependence, addresses how multivariate returns in a portfolio relate to each other. Non-linear dependence means that the correlations between returns depend on the individual market situation. Lower correlations are observed in bull markets than in bear markets, whereas during financial crises, the correlation tends to 100% (see Daníelsson [17, page 9]). These aspects are discussed briefly in the following sections. The studies of Schmid [44] and McNeil et al. (2005) [37, chapter 4] provide a good statistical analysis of financial returns. The chosen GARCH-EVT-Copula approach is based on these scientific findings and is standard in modeling financial returns (see Rachev [40]).
3.4.1 Volatility and Clustering
The most popular characteristic factor of market risk is volatility.
3.7 Definition of Volatility:
Volatility is defined as the standard deviation of financial returns. In general, the time period for volatility is annual.
Figure 3.2: Returns of the DAX index between January 1, 2000 and October 30, 2014, with low- and high-volatility phases marked. The red line is a moving average process of the unconditional volatility over the most recent 100 trading days.
There are two concepts of volatility: conditional and unconditional. Conditional volatility is defined as the volatility in a given time period indicated by t, while unconditional volatility is defined for an entire time period. Figure 3.2 shows the returns of the DAX index between January 1, 2000 and September 30, 2014. The unconditional volatility, i.e. the volatility over the total time period, is 1.55%. It is standard to scale the daily volatility to an annual period using the square-root-of-time adjustment^1. This results in an annual volatility of √252 · 1.55% = 24.62%. Conditional volatility, in contrast, cannot be captured by a single number. It is marked with a red line, calculated using a moving average method with the time period set to five months, which equates to 100 trading days. The mentioned clustering of volatility is easily seen. The figure shows only one example, but all financial returns exhibit volatility clusters. Sub-populations having different variabilities from others in a collection of random variables is known in statistics as heteroscedasticity. This property of volatility is widely known and discussed in the recent literature and, therefore, this thesis does not seek to prove this property using a statistical test. The White test and Levene's test are statistical tests for detecting heteroscedasticity.
An important question is whether the volatility on one day correlates with the volatility on the previous day. The clustering may present evidence for this property. Furthermore, it is interesting whether financial returns have this property, too. A standard graphical method for exploring predictability in statistical data is the autocorrelation function. A detailed description of this topic can be found in chapter 3.5.
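The two volatility concepts can be contrasted on a simulated series (a Python sketch with artificial i.i.d. blocks standing in for a volatility cluster; all parameter values are illustrative):

```python
import numpy as np

# Toy return series with an artificial high-volatility cluster in the middle
rng = np.random.default_rng(0)
returns = np.concatenate([rng.normal(0, 0.01, 300),
                          rng.normal(0, 0.03, 100),   # volatility cluster
                          rng.normal(0, 0.01, 300)])

# Unconditional volatility: a single number for the whole sample,
# annualized with the square-root-of-time adjustment
uncond_vol = returns.std(ddof=1)
annual_vol = np.sqrt(252) * uncond_vol

# Conditional volatility proxy: moving 100-day standard deviation
window = 100
moving_vol = np.array([returns[t - window:t].std(ddof=1)
                       for t in range(window, len(returns) + 1)])

print(annual_vol, moving_vol.min(), moving_vol.max())
```

The moving estimate rises sharply over the cluster while the unconditional figure averages it away, which is exactly why a single number cannot describe conditional volatility.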
3.8 Definition of Empirical Autocorrelation Function (ACF): The empirical autocorrelation function (ACF), denoted by r_τ, of a time series (y_1, ..., y_N) and number of lags τ is defined as

    c_τ = (1/N) ∑_{t=1}^{N-τ} (y_{t+τ} - ȳ)(y_t - ȳ),        (τ ≥ 0)

    r_τ = c_τ / c_0.
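Definition 3.8 translates directly into code. A minimal Python sketch (with the lag written as tau):

```python
import numpy as np

def empirical_acf(y, max_lag):
    """Empirical autocorrelation r_tau = c_tau / c_0 of Definition 3.8."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    ybar = y.mean()
    c = np.array([np.sum((y[tau:] - ybar) * (y[:N - tau] - ybar)) / N
                  for tau in range(max_lag + 1)])
    return c / c[0]

# For white noise the ACF is ~0 at all lags tau > 0 and exactly 1 at tau = 0
rng = np.random.default_rng(1)
r = empirical_acf(rng.normal(size=5000), max_lag=5)
print(r)
```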
The ACF measures how a random variable on one day depends linearly on the random variable of a previous point in time. The significance of autocorrelation over several lags can be tested by using the Ljung-Box test. The number of lags is set to 20 for the daily returns of the DAX between January 1, 2000 and September 30, 2014, and the confidence level is set to 95%. Figure 3.3 shows the ACF plot, while tables 3.2 and 3.3 present the Ljung-Box test results for the described data and parameters. The same procedure is repeated for squared returns. In the financial literature, squared returns are used to approximate volatility (see Daníelsson [17, page 13]). Table 3.2 shows that the test is significant for the full sample, but not for the most recent 100 days. The results for the squared returns stand in contrast to this: all p-values of the squared returns are much lower than the corresponding values for the returns, and all tests are significant. The test results demonstrate that it is easier to predict volatility than the expected financial return. But some degree of autocorrelation can be observed for financial returns.

^1 See Jorion [28, page 98].
Figure 3.3: Autocorrelation plots for 20 lags of the DAX index between January 1, 2000 and September 30, 2014 from Yahoo!Finance: (a) financial returns, (b) squared financial returns. The blue lines show the 95% confidence interval.
T     | Ljung-Box test, 20 lags | p-value
3,761 | 46.48                   | 6.9024e-16
1,000 | 33.34                   | 0.0310
100   | 18.73                   | 0.5398

Table 3.2: Ljung-Box test for 20 lags of daily DAX returns between January 1, 2000 and September 30, 2014 from Yahoo!Finance
T     | Ljung-Box test, 20 lags | p-value
3,761 | 3.6432e3                | 0
1,000 | 866.50                  | 0
100   | 40.72                   | 0.0040

Table 3.3: Ljung-Box test for 20 lags of daily squared DAX returns between January 1, 2000 and September 30, 2014 from Yahoo!Finance
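The Ljung-Box statistic underlying tables 3.2 and 3.3 can be sketched as follows (a Python illustration on simulated data, not the DAX series; in practice a library routine such as statsmodels' `acorr_ljungbox`, which also reports p-values, would be used):

```python
import numpy as np

def ljung_box_q(y, lags):
    """Ljung-Box Q statistic; under the null of no autocorrelation it is
    approximately chi-square distributed with `lags` degrees of freedom."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    ybar = y.mean()
    c0 = np.sum((y - ybar) ** 2) / N
    q = 0.0
    for k in range(1, lags + 1):
        r_k = np.sum((y[k:] - ybar) * (y[:N - k] - ybar)) / (N * c0)
        q += r_k ** 2 / (N - k)
    return N * (N + 2) * q

rng = np.random.default_rng(42)
iid = rng.standard_t(df=4, size=2000)          # no volatility clustering
vol = np.where(np.arange(2000) % 400 < 200, 0.01, 0.03)
clustered = rng.normal(size=2000) * vol        # alternating volatility regimes

# Squared clustered returns are strongly autocorrelated -> a very large Q
print(ljung_box_q(iid ** 2, 20), ljung_box_q(clustered ** 2, 20))
```

The squared clustered series yields a Q statistic far above any chi-square critical value, mirroring the near-zero p-values in table 3.3.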

3.4.2 Non-normality and Fat Tails
Many applications and mathematical models assume that financial returns are normally distributed. The reason for this assumption is simple. All statistical models in portfolio theory and derivative pricing need a distribution of the data. If the researcher cannot identify a known distribution, such as the normal distribution or t-distribution, an analytic calculation formula cannot be derived. Furthermore, the normal distribution simplifies the modeling. Alan Greenspan (1997) [26] sums it up well:
". . . the biggest problems we now have with the whole evaluation of risk is the fat-tail problem, which is really creating very large conceptual difficulties. Because as we all know, the assumption of normality enables us to drop off the huge amount of complexity of our equations very much to the right of the equal sign. Because once you start putting in non-normality assumptions, which unfortunately is what characterizes the real world, then these issues become extremely difficult."
Figure 3.4: Density and CDF plots of DAX 30 returns between January 1, 2000 and September 30, 2014 from Yahoo!Finance against a fitted normal distribution: (a) density of DAX against normal, (b) fat tail in density, (c) CDF of DAX against normal, (d) fat tail in CDF.

3.9 Definition of Fat Tails:
A random variable is said to have fat tails if it exhibits more extreme outcomes than a normally distributed random variable with the same mean and variance.
The fat-tailed property of returns has been known since Fama (1963) [23] and is illustrated in figure 3.4. It shows a density plot and cumulative distribution function (CDF) of the DAX returns from January 1, 2000 to September 30, 2014 in comparison to a normal distribution with the same mean and variance as the returns. Hence, the assumption of a normal distribution (whether univariate or multivariate) for risk calculations leads to a gross underestimation of risk. This fact is widely known and discussed by many researchers. There are statistical tests, such as the Jarque-Bera test, which are based on empirical skewness and kurtosis measures and reject the normality hypothesis. Univariate and multivariate results for that test can be found in McNeil et al. (2005) [37, chapter 4]. In the statistical literature, the fat-tail issue is known as leptokurtosis. This property of returns leads to the application of EVT for modeling market risk.
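The Jarque-Bera statistic mentioned above is easy to compute from the empirical moments. A minimal Python sketch on simulated data (the DAX sample itself is not reproduced here):

```python
import numpy as np

def jarque_bera_stat(x):
    """Jarque-Bera statistic JB = n/6 * (S^2 + (K - 3)^2 / 4), built from the
    empirical skewness S and kurtosis K of the standardized sample."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = (x - x.mean()) / x.std()
    S = np.mean(z ** 3)
    K = np.mean(z ** 4)
    return n / 6 * (S ** 2 + (K - 3) ** 2 / 4)

rng = np.random.default_rng(7)
jb_normal = jarque_bera_stat(rng.normal(size=5000))
jb_fat = jarque_bera_stat(rng.standard_t(df=3, size=5000))   # fat-tailed t(3)
print(jb_normal, jb_fat)   # the t(3) statistic is larger by orders of magnitude
```

Under normality, JB is approximately chi-square with two degrees of freedom, so the huge value for the fat-tailed sample corresponds to an overwhelming rejection of the normality hypothesis.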
Figure 3.5: QQ plots of DAX 30 returns between January 1, 2000 and September 30, 2014 from Yahoo!Finance: (a) normal, (b) Student t(3), (c) Student t(4), (d) Student t(5).

There are also graphical methods for fat-tail analysis. The most commonly used method is the QQ plot (quantile-quantile plot). It compares the quantiles of the sample data against the quantiles of a reference distribution. If the observations fit the straight line of the distribution, the data sample can be explained by the tested distribution. Figure 3.5 shows QQ plots of the DAX returns from January 1, 2000 to September 30, 2014 against the normal and several t-distributions. The fat-tail problem is easily seen: the kernel fits the straight line but the tails differ a lot. Hence, the kernel of the return distribution is well described by the normal distribution (see figure 3.5a). This knowledge will be used later in this thesis for modeling a suitable distribution of financial returns. Furthermore, the t-distribution QQ plots show a less pronounced fat-tail problem. For this reason, it is standard in financial risk forecasting to fit the distribution tails of financial returns with a t-distribution.
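The visual QQ comparison can be made numerical by comparing individual quantiles (a Python sketch on a simulated, standardized t(3) sample; the DAX data are not reproduced here):

```python
import numpy as np
from statistics import NormalDist

# Standardized fat-tailed sample (Student t(3)) vs. standard normal quantiles
rng = np.random.default_rng(3)
sample = rng.standard_t(df=3, size=10000)
sample = (sample - sample.mean()) / sample.std()

# Central quantile (0.25) lies close to the normal reference line,
# extreme quantile (0.001) reaches far beyond it: the fat tail.
for p in (0.25, 0.001):
    q_normal = NormalDist().inv_cdf(p)
    q_sample = np.quantile(sample, p)
    print(p, q_normal, q_sample)
```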
3.4.3 Non-linear Dependence
The stylized fact of non-linear dependence describes the property that the dependence between the different return series of a portfolio varies according to changing market conditions. This effect can especially be observed in times of crisis. While some return series move relatively independently of each other in normal times, in times of crisis they all drop together without exception. Several statistical models assume linear correlation between returns. If they are linearly dependent, the strength of their dependence can be measured using correlations, such as Pearson's correlation coefficient. Table 3.4 shows the correlations of Commerzbank AG (CBK), Deutsche Bank AG (DBK), Siemens AG (SIE) and Adidas AG (ADS) during an economic recovery from 2003 to 2007 and during the financial crisis in 2008. It is widely known that correlations are usually lower in bull markets than in bear markets. Furthermore, figure 3.6 shows the progress of the yearly correlation of Commerzbank AG against Deutsche Bank AG, Siemens AG and Adidas AG. A high fluctuation of the correlation is observed.
This example illustrates non-linear dependence with long-run correlations but is not a mathematical proof of the hypothesis. It is widely believed that correlations change, but the proof is less straightforward to demonstrate (see McNeil et al. (2005) [37, page 124]). Furthermore, this example shows a so-called leverage effect. This means that assets tend to have a larger correlation with each other on negative shocks than on positive shocks of the same magnitude. The next step is to handle non-linear dependence in modeling financial risk. One approach is a multivariate volatility model. This approach is well discussed in the works of McNeil et al. (2005) [37, chapter 4] and Daníelsson [17, chapter 3], who suggest a multivariate GARCH model. An alternative method for modeling non-linear dependence is using copulas. This approach is described in chapter 3.6 and is used to handle dependence in financial returns within this thesis.
(a) Daily return correlations (January 1, 2003 to January 1, 2007) during economic recovery:

    |  CBK   |  DBK   |  SIE
DBK | 0.6358 |        |
SIE | 0.4831 | 0.6430 |
ADS | 0.3559 | 0.4393 | 0.3785

(b) Daily return correlations (January 1, 2007 to January 1, 2009) during the financial crisis:

    |  CBK   |  DBK   |  SIE
DBK | 0.7976 |        |
SIE | 0.5821 | 0.6994 |
ADS | 0.5706 | 0.6721 | 0.6347

Table 3.4: Daily return correlations of Commerzbank AG (CBK), Deutsche Bank AG (DBK), Siemens AG (SIE) and Adidas AG (ADS)
Figure 3.6: Yearly correlations of Commerzbank AG (CBK) against Deutsche Bank AG (DBK), Siemens AG (SIE) and Adidas AG (ADS), 2000 to 2014
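The regime dependence of correlations can be reproduced on simulated data (a Python sketch with a hypothetical one-factor model; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)
T = 4 * 252                                   # four "years" of trading days
# Hypothetical factor model: the common factor's volatility triples in the
# second half of the sample, mimicking a crisis regime.
factor_vol = np.where(np.arange(T) < T // 2, 0.01, 0.03)
common = rng.normal(size=T) * factor_vol
a = common + 0.01 * rng.normal(size=T)        # return series 1
b = common + 0.01 * rng.normal(size=T)        # return series 2

# Yearly Pearson correlations on non-overlapping 252-day windows
yearly = [np.corrcoef(a[y * 252:(y + 1) * 252], b[y * 252:(y + 1) * 252])[0, 1]
          for y in range(4)]
print([round(r, 2) for r in yearly])   # correlation jumps in years 3 and 4
```

Even though the idiosyncratic noise is unchanged, the measured correlation rises sharply when the common factor dominates, which is the mechanism behind the crisis correlations in table 3.4.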

3.5 Time Series Analysis
This section provides a short summary of univariate time series analysis with a focus on what is relevant for modeling market risk with a GARCH-EVT-Copula approach. This is the first application where the continuous return definition is needed; otherwise, the properties of the following models are not fulfilled. It starts with the basic definitions of stochastic processes and their characteristics, white noise processes and ARMA processes, and ends with definitions of GARCH-family models. This section is based on McNeil et al. (2005) [37, chapters 4.2-4.3].
3.5.1 Basic Definitions and Properties
The theory of stochastic processes forms the basis of time series analysis. In the application of modeling market risk, the series of random variables are the continuous returns calculated from adjusted asset prices. Stochastic processes and their moments are defined below.
3.10 Definition of Stochastic Process: A series of random variables (X_t)_{t∈Z}, defined on the same probability space (Ω, F, P), is called a stochastic process with discrete time parameter.
3.11 Definition of Moments of a Time Series: Given is a stochastic process (X_t)_{t∈Z}. The following variables are defined as moment functions of the stochastic process, where s, t ∈ Z:

    Mean function:            μ_t = μ(t) = E(X_t),
    Variance function:        σ²_t = σ²(t) = Var(X_t),
    Autocovariance function:  γ_{s,t} = γ(s, t) = Cov(X_s, X_t),
    Autocorrelation function: ρ_{s,t} = ρ(s, t) = γ_{s,t} / √(σ²_s σ²_t).
One of the most important properties of stochastic processes is stationarity. A strictly stationary process is a stochastic process whose joint probability distribution does not change when shifted in time. The result of this property is that the moments do not change over time and do not follow any trend.
3.12 Definition of Strict Stationarity: The time series (X_t)_{t∈Z} is strictly stationary if

    F_{X_{t_1},...,X_{t_n}}(x_1, ..., x_n) = F_{X_{t_1+s},...,X_{t_n+s}}(x_1, ..., x_n)

for all t_1, ..., t_n, s ∈ Z and for all n ∈ N.
3.13 Definition of Covariance Stationarity: The time series (X_t)_{t∈Z} is covariance (or weakly) stationary if the first two moments exist and satisfy

    μ(t) = μ,                   t ∈ Z,
    γ(s, t) = γ(s + k, t + k),  s, t, k ∈ Z.

The covariance between X_t and X_s for all s, t thus depends only on their temporal separation |s - t|, which is known as the lag. This follows from the definition of covariance stationarity, where it is essential that γ(s - t, 0) = γ(s, t) = γ(t, s) = γ(t - s, 0). Useful time series for models in finance are stationary processes without serial correlation. Those processes are known as white noise processes and are defined below.
3.14 Definition of White Noise: The time series (X_t)_{t∈Z} is a white noise (WN) process if it is covariance stationary with autocorrelation function

    ρ(h) = 1 for h = 0,
    ρ(h) = 0 for h ≠ 0.

Definitions 3.13 and 3.14 imply that a white noise process is centered to have a mean of zero with variance σ² = Var(X_t) and will be denoted by WN(0, σ²).
3.15 Definition of Strict White Noise: The time series (X_t)_{t∈Z} is a strict white noise (SWN) process if it is a series of i.i.d., finite-variance random variables.
The SWN process is a simple example of a white noise process. It is centered to have a mean of zero with variance σ² = Var(X_t) and will be denoted by SWN(0, σ²).

3.5.2 ARMA Process
The family of autoregressive moving average (ARMA) models are widely used in ap-
plications of time series analysis. The basic building block is a white noise process.
Thus, the ARMA process is at least covariance stationary. The ARMA model is a
tool for understanding and predicting future values in this time series. It consist of
two parts: an autoregressive (AR) process and a moving average (MA) process.
3.16 Definition of ARMA(p, q) Process: Let (ε_t)_{t∈Z} be WN(0, σ²). The process (X_t)_{t∈Z} is a zero-mean ARMA(p, q) process if it is a covariance-stationary process satisfying difference equations of the form

    X_t - φ_1 X_{t-1} - ... - φ_p X_{t-p} = ε_t + θ_1 ε_{t-1} + ... + θ_q ε_{t-q},    t ∈ Z.

The stationarity property depends on the exact nature of the driving white noise, also known as the innovations of the process. The ARMA process is strictly stationary if the innovations are i.i.d., or themselves form a strictly stationary process (see McNeil et al. (2005) [37, page 128]).
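A short simulation illustrates Definition 3.16 (a Python sketch; the values of φ, θ and the sample size are illustrative, and the lag-1 autocorrelation formula is the standard ARMA(1,1) result):

```python
import numpy as np

# Simulate a zero-mean ARMA(1,1): X_t = phi*X_{t-1} + eps_t + theta*eps_{t-1}
rng = np.random.default_rng(5)
phi, theta = 0.5, 0.3
T = 10000
eps = rng.normal(size=T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]

# Theoretical lag-1 autocorrelation of an ARMA(1,1) process
rho1_theory = (1 + phi * theta) * (phi + theta) / (1 + theta ** 2 + 2 * phi * theta)
rho1_sample = np.corrcoef(x[1:], x[:-1])[0, 1]
print(rho1_theory, rho1_sample)   # both close to 0.66
```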
3.5.3 GARCH Models
This section describes ARCH and GARCH models, the common methods for volatility forecasting. ARCH and GARCH models are time series models that are able to explain a number of the mentioned properties of financial returns, including:

Autocorrelation: predictability in financial returns and squared financial returns (explained in chapter 3.4.1)
Leptokurtosis: the tendency of financial returns to have fat tails (explained in chapter 3.4.2)
Heteroscedasticity: the tendency of volatility to appear in clusters (explained in chapter 3.4.1)
Leverage effects: the tendency of negative shocks to have a larger impact on volatility than positive shocks (see Glosten [24])
The main object of interest is the conditional volatility σ_t of the return series. The ARCH model was the first volatility forecasting model and was proposed by Engle (1982) [22]. The autoregressive conditional heteroscedasticity (ARCH) method models the conditional variance as a linear function of past squared innovations and is defined in 3.17. It is supposed that E(X_t) = 0 and that innovations in returns are driven by random shocks, denoted by {Z_t}. The random shocks are a sequence of i.i.d. random variables with mean zero and variance one. No further assumption about the distribution of Z_t is necessary; in general, it is normal or t-distributed. The return on day t can then be written as (see Schlittgen [45, pages 225-226]):

    Y_t = σ_t Z_t.        (3.11)
3.17 Definition of the ARCH(p) Process: Let (Z_t)_{t∈Z} be SWN(0, 1) and p the number of lags. The process (X_t)_{t∈Z} is an ARCH(p) process if it is strictly stationary and satisfies, for all t ∈ Z and some strictly positive-valued process (σ_t)_{t∈Z}, the equations

    X_t = σ_t Z_t,
    σ²_t = α_0 + ∑_{i=1}^{p} α_i X²_{t-i},

where α_0 > 0 and α_i ≥ 0, i = 1, ..., p.
The standard approach for estimating the parameters is the maximum likelihood method. The calculation of an ARCH(p) process requires the estimation of p + 1 parameters; thus, considerable effort is needed to fit the data to the ARCH model. This results from the stationarity and non-negativity properties of the ARCH model (see Schlittgen [45, page 233]). An alternative way to model volatility without estimating a large number of parameters is the generalized autoregressive conditional heteroscedasticity (GARCH) model, proposed by Bollerslev (1986) [12]. The basic idea of GARCH is to include lagged volatility in the ARCH model to incorporate the impact of historical returns and decrease the number of parameters. This results in the GARCH definition.
3.18 Definition of the GARCH(p, q) Process: Let $(Z_t)_{t \in \mathbb{Z}}$ be $SWN(0, 1)$ and $p$, $q$ the numbers of lags. The process $(X_t)_{t \in \mathbb{Z}}$ is a GARCH(p, q) process if it is strictly stationary and satisfies, for all $t \in \mathbb{Z}$ and some strictly positive-valued process $(\sigma_t)_{t \in \mathbb{Z}}$, the equations

$$X_t = \sigma_t Z_t, \qquad \sigma_t^2 = \alpha_0 + \sum_{i=1}^{p} \alpha_i X_{t-i}^2 + \sum_{j=1}^{q} \beta_j \sigma_{t-j}^2$$

where $\alpha_0 > 0$, $\alpha_i \geq 0$, $i = 1, \ldots, p$ and $\beta_j \geq 0$, $j = 1, \ldots, q$.
The ARCH and GARCH models are symmetric in the sense that negative and positive shocks have the same effect on volatility. The research of Glosten [24] shows that negative shocks tend to have a larger impact on volatility than positive shocks of the same magnitude (leverage effect). Thus, the GARCH model presented above is not optimal for modeling returns. The GJR-GARCH(p, q) process is an asymmetric model of the GARCH family and is defined in 3.19. In practice, low-order GARCH models have become established.
3.19 Definition of the GJR-GARCH(p, q) Process: Let $(Z_t)_{t \in \mathbb{Z}}$ be $SWN(0, 1)$ and $p$, $q$ the numbers of lags. The process $(X_t)_{t \in \mathbb{Z}}$ is a GJR-GARCH(p, q) process if it is strictly stationary and satisfies, for all $t \in \mathbb{Z}$ and some strictly positive-valued process $(\sigma_t)_{t \in \mathbb{Z}}$, the equations

$$X_t = \sigma_t Z_t, \qquad \sigma_t^2 = \alpha_0 + \sum_{i=1}^{p} (\alpha_i + \gamma_i I_{t-i}) X_{t-i}^2 + \sum_{j=1}^{q} \beta_j \sigma_{t-j}^2,$$

$$I_{t-i} = \begin{cases} 0 & X_{t-i} \geq 0 \\ 1 & X_{t-i} < 0 \end{cases}$$

where $\alpha_0 > 0$, $\alpha_i \geq 0$, $\gamma_i \geq 0$, $i = 1, \ldots, p$ and $\beta_j \geq 0$, $j = 1, \ldots, q$.
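The GJR-GARCH recursion can be made concrete with a small simulation. The following sketch is illustrative only (the thesis itself works in MATLAB; Python is used here purely for exposition), and the parameter values are hypothetical choices satisfying the stationarity and positivity constraints:

```python
import math
import random

def next_variance(x_prev, var_prev, alpha0=0.05, alpha1=0.08, gamma1=0.10, beta1=0.85):
    """One step of the GJR-GARCH(1,1) variance recursion:
    sigma_t^2 = alpha0 + (alpha1 + gamma1 * I[X_{t-1} < 0]) * X_{t-1}^2 + beta1 * sigma_{t-1}^2.
    The parameter values are hypothetical."""
    indicator = 1.0 if x_prev < 0 else 0.0
    return alpha0 + (alpha1 + gamma1 * indicator) * x_prev ** 2 + beta1 * var_prev

def simulate_gjr_garch(n, seed=42):
    """Simulate X_t = sigma_t * Z_t with standard normal shocks Z_t."""
    rng = random.Random(seed)
    var, x = 0.05 / (1 - 0.08 - 0.10 / 2 - 0.85), 0.0  # start at unconditional variance
    xs, variances = [], []
    for _ in range(n):
        var = next_variance(x, var)
        x = math.sqrt(var) * rng.gauss(0.0, 1.0)
        xs.append(x)
        variances.append(var)
    return xs, variances

# leverage effect: a negative shock raises next-period variance more than a
# positive shock of the same magnitude
assert next_variance(-1.0, 1.0) > next_variance(1.0, 1.0)
```

The indicator term is what breaks the symmetry of the plain GARCH recursion; setting gamma1 = 0 recovers GARCH(1, 1).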
3.6 Copulas
The problem of the non-linear dependence of financial returns was discussed in chapter 3.4.3. This section provides an overview of modeling the dependence among components of a random vector of financial returns using the concept of the copula. The theory of copulas was developed by Sklar [46] in 1959. It is a powerful tool, as it does not require any assumptions about the selected distribution functions, and any dependence structure can be modeled using a copula. This section is based on Cottin [16, chapter 5.3] and McNeil et al. (2005) [37, chapter 5].
3.6.1 Basic Definitions and Properties
Copula theory is only one way of treating dependence in multivariate risk models, but this approach offers a new view on the dependence structure. While multivariate volatility forecasting models focus on correlation, copulas express dependence on a quantile scale, which is useful for describing extreme outcomes (see McNeil et al. (2005) [37, page 184]). For this reason, the copula approach has become established in risk management. A copula is a function that links the univariate marginal distributions to their multivariate distribution.
3.20 Definition of Marginal Distribution: Let $F : \mathbb{R}^d \to [0, 1]$ be the distribution function of a random vector $X$, that is, $F(x_1, \ldots, x_d) = P(X_1 \leq x_1, \ldots, X_d \leq x_d)$. The distribution functions $F_i(x_i) = P(X_i \leq x_i)$ are defined as the marginal distributions for $i = 1, \ldots, d$.
3.21 Definition of Copula: A d-dimensional copula is a distribution function $C$ on $[0, 1]^d$ with standard uniform marginal distributions.
The standard uniform distribution plays an important role in copula theory. Properties of the continuous uniform distribution are displayed in appendix A. The standard uniform distribution is defined by the choice of parameters $a = 0$, $b = 1$ and is denoted by $U(0, 1)$. In working with copulas, the reader must be familiar with the operations of probability and quantile transformation. Some important properties are defined below and can be found (including the proofs) in many probability texts, such as McNeil et al. (2005) [37, pages 186, 494].
3.22 Definition of the Generalized Inverse and Quantile Function:
(i) Given an increasing function $T : \mathbb{R} \to \mathbb{R}$, the generalized inverse of $T$ is defined by $T^{\leftarrow}(y) := \inf\{x \in \mathbb{R} : T(x) \geq y\}$.
(ii) Let $F$ be a distribution function. $F^{\leftarrow}$ denotes its generalized inverse and is called the quantile function of $F$. For $c \in (0, 1)$ the $c$-quantile of $F$ is given by

$$q_c(F) := F^{\leftarrow}(c) = \inf\{x \in \mathbb{R} : F(x) \geq c\}. \qquad (3.12)$$

If $F$ is continuous, the quantile function is given by the ordinary inverse; thus, $q_c(F) = F^{-1}(c)$.
3.23 Proposition: Let $F$ be a distribution function on $\mathbb{R}$.
(i) $F$ is strictly increasing $\Rightarrow$ $F^{\leftarrow}$ is continuous.
(ii) If $F$ is continuous, then $F(F^{\leftarrow}(x)) = x$.
(iii) Quantile Transformation: If $U \sim U(0, 1)$ has a standard uniform distribution, then $P(F^{\leftarrow}(U) \leq x) = F(x)$.
(iv) Probability Transformation: If $Y$ has a continuous univariate distribution function $F$, then $F(Y) \sim U(0, 1)$.
Propositions 3.23 (iii) and 3.23 (iv) are the keys of copula theory. Taken together, they imply that any continuous risk distribution can be transformed into any other continuous distribution. The following theorem of Sklar summarizes the importance of copulas. It shows that any multivariate distribution can be decomposed into its marginal distributions and its copula, and that the copula contains the dependence structure.
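The two transformations can be checked numerically. This Python sketch is illustrative only (the thesis implements everything in MATLAB) and uses the standard normal distribution as an example of a continuous $F$:

```python
import random
from statistics import NormalDist

rng = random.Random(0)
nd = NormalDist()  # standard normal as an example of a continuous F

# quantile transformation (3.23 iii): F^{-1}(U) has distribution F
uniforms = [rng.random() for _ in range(20_000)]
normals = [nd.inv_cdf(u) for u in uniforms]

# probability transformation (3.23 iv): F(Y) is standard uniform again
back_to_uniform = [nd.cdf(y) for y in normals]

# sample means are close to the theoretical values 0 and 0.5
assert abs(sum(normals) / len(normals)) < 0.05
assert abs(sum(back_to_uniform) / len(back_to_uniform) - 0.5) < 0.02
```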
3.24 Theorem of Sklar:
(i) Let $F$ be a joint distribution function with margins $F_1, \ldots, F_d$. Then there exists a copula $C : [0, 1]^d \to [0, 1]$ such that for all $x_1, \ldots, x_d$ in $\bar{\mathbb{R}} = [-\infty, \infty]$,

$$C(F_1(x_1), \ldots, F_d(x_d)) = F(x_1, \ldots, x_d). \qquad (3.13)$$

If the margins are continuous, then the copula $C$ is unique.
(ii) Conversely, let $C$ be a copula and $F_1, \ldots, F_d$ univariate distribution functions. Then the function

$$F(x_1, \ldots, x_d) = C(F_1(x_1), \ldots, F_d(x_d)) \qquad (3.14)$$

is a joint distribution function with margins $F_1, \ldots, F_d$.
Proof: For a full proof see Schweizer and Sklar [43].
The next step is to find the copula for the margins $F_1, \ldots, F_d$. For any $x_1, \ldots, x_d \in \mathbb{R}$, if $X$ has the distribution function $F$, then

$$F(x_1, \ldots, x_d) = P(F_1(X_1) \leq F_1(x_1), \ldots, F_d(X_d) \leq F_d(x_d)).$$

Since the marginal distributions are continuous, proposition 3.23 (iv) and definition 3.21 imply that the distribution function of $(F_1(X_1), \ldots, F_d(X_d))$ is a copula, and this copula joins the marginal distributions because of theorem 3.24. Setting the arguments to the generalized inverses, so that $x_i = F_i^{\leftarrow}(u_i)$, $0 \leq u_i \leq 1$, $i = 1, \ldots, d$, and using proposition 3.23 (i) leads to the formula (see McNeil et al. (2005) [37, page 187]):

$$C(u_1, \ldots, u_d) = F(F_1^{\leftarrow}(u_1), \ldots, F_d^{\leftarrow}(u_d)). \qquad (3.15)$$

This means that a copula is obtained by applying the generalized inverse to each individual risk of a portfolio. This is the fundamental knowledge for dealing with copulas and leads to the notion of the copula of a distribution.
3.25 Definition of Copula of F: If the random vector $X = (X_1, \ldots, X_d)$ has a joint distribution function $F$ with continuous marginal distributions $F_1, \ldots, F_d$, then the copula of $F$ is the distribution function $C$ of $(F_1(X_1), \ldots, F_d(X_d))$.
An important application is fitting a copula to data by the maximum likelihood method. This method needs the density function of the copula, defined below.
3.26 Definition of Copula Density Function: The density $c(u_1, \ldots, u_d)$ associated with the copula $C(u_1, \ldots, u_d)$ is defined as:

$$c(u_1, \ldots, u_d) = \frac{\partial^d C(u_1, \ldots, u_d)}{\partial u_1 \cdots \partial u_d}. \qquad (3.16)$$
3.6.2 Examples of Copula
The theory provides a number of examples of copulas. Copulas are categorized into fundamental copulas, implicit copulas and explicit copulas. The first type represents a number of special dependence structures, such as the independence copula and the comonotonicity copula (see McNeil et al. (2005) [37, page 189]). By using Sklar's theorem, implicit copulas can be extracted from known multivariate distributions, but they often have no closed-form expression. The most popular examples are the Gaussian copula and the Student's t-copula. In contrast, explicit copulas have a closed-form expression. The Clayton copula and the Gumbel copula are well-known examples (see McNeil et al. (2005) [37, page 192]).
3.27 Example of a Gaussian Copula: Let $X = (X_1, \ldots, X_d) \sim N_d(0, \Sigma)$ with linear correlation matrix $\Sigma$ of $X$, and let the univariate standard normal distribution function be denoted by $\Phi$. The Gaussian copula $C^{Ga}_{\Sigma}$ of a d-dimensional standard normal distribution is the distribution function of the random vector $(\Phi(X_1), \ldots, \Phi(X_d))$ and is given by definition 3.25:

$$C^{Ga}_{\Sigma}(u_1, \ldots, u_d) = P(\Phi(X_1) \leq u_1, \ldots, \Phi(X_d) \leq u_d) = \Phi_d(\Phi^{-1}(u_1), \ldots, \Phi^{-1}(u_d)), \qquad (3.17)$$

where $\Phi_d$ denotes the joint distribution function of $X$.
3.28 Example of Student's t-Copula: Let $X = (X_1, \ldots, X_d) \sim t_d(\nu, 0, \Sigma)$ with linear correlation matrix $\Sigma$ of $X$, and let the univariate standard Student's t-distribution function be denoted by $t_{\nu}$. The Student's t-copula $C^{t}_{\nu,\Sigma}$ of a d-dimensional Student's t-distribution is the distribution function of the random vector $(t_{\nu}(X_1), \ldots, t_{\nu}(X_d))$ and is given by definition 3.25:

$$C^{t}_{\nu,\Sigma}(u_1, \ldots, u_d) = P(t_{\nu}(X_1) \leq u_1, \ldots, t_{\nu}(X_d) \leq u_d) = t_{d,\nu,\Sigma}(t_{\nu}^{-1}(u_1), \ldots, t_{\nu}^{-1}(u_d)), \qquad (3.18)$$

where $t_{d,\nu,\Sigma}$ denotes the joint distribution function of $X$.
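Definition 3.25 together with example 3.27 suggests a direct way to sample from a Gaussian copula: draw a correlated normal vector and push each coordinate through $\Phi$. A bivariate Python sketch, illustrative only (the correlation value is a hypothetical choice; the thesis works in MATLAB):

```python
import random
from math import sqrt
from statistics import NormalDist

def gaussian_copula_sample(n, rho, seed=1):
    """Sample (U1, U2) from a bivariate Gaussian copula with correlation rho:
    U_i = Phi(X_i) for (X1, X2) bivariate standard normal (definition 3.25)."""
    rng = random.Random(seed)
    phi = NormalDist().cdf
    sample = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x1, x2 = z1, rho * z1 + sqrt(1 - rho ** 2) * z2  # 2x2 Cholesky factor by hand
        sample.append((phi(x1), phi(x2)))
    return sample

sample = gaussian_copula_sample(20_000, rho=0.7)
# copula margins are standard uniform, so each coordinate's mean is near 0.5
assert abs(sum(u for u, _ in sample) / len(sample) - 0.5) < 0.02
assert abs(sum(v for _, v in sample) / len(sample) - 0.5) < 0.02
```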
3.6.3 Dependence Measures
As has been pointed out, the most important strength of copulas is dependence modeling. There are three kinds of common dependence measures in copula theory: the usual Pearson linear correlation, rank correlation and coefficients of tail dependence. Correlation plays an important role in financial modeling, but it is important to realize that the concept is only applicable to multivariate elliptical distributions, such as the normal distribution or Student's t-distribution. McNeil et al. (2005) [37, pages 202-205] show with examples the limited interpretation of correlation when moving away from elliptical models. As an alternative measure of dependence they suggest rank correlation; common methods are Kendall's tau and Spearman's rho. These methods and applications can be found in McNeil et al. (2005) [37, pages 206-233]. This thesis focuses on fitting the data with the Student's t-distribution. Therefore, the usual Pearson linear correlation is a simple and adequate method. The linear correlation matrix is calculated by

$$\rho_{i,j} = \rho(X_i, X_j) = \frac{\mathrm{cov}(X_i, X_j)}{\sqrt{\mathrm{Var}(X_i)\,\mathrm{Var}(X_j)}}. \qquad (3.19)$$
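Formula (3.19) translates directly into code. A minimal Python sketch on sample data (illustrative only; the thesis computes the correlation matrix in MATLAB):

```python
from math import sqrt

def pearson_corr(x, y):
    """Sample version of (3.19): cov(X, Y) / sqrt(Var(X) * Var(Y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / sqrt(vx * vy)

assert abs(pearson_corr([1, 2, 3, 4], [2, 4, 6, 8]) - 1.0) < 1e-12   # perfectly linear
assert abs(pearson_corr([1, 2, 3, 4], [-1, -2, -3, -4]) + 1.0) < 1e-12
```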
3.6.4 Fitting Data
This section is devoted to the problem of estimating the parameters $\theta$ of a parametric copula $C_{\theta}$. The main method for estimating these parameters is canonical maximum likelihood estimation (CMLE). While general maximum likelihood estimation evaluates margins and copula in a single optimization, CMLE splits the estimation into two steps. The idea behind this method is that the copula parameters can be estimated without specifying the marginal distributions. Therefore, the margins are transformed to uniform variates by the semi-parametric empirical CDF. Let $X_{1,i}, \ldots, X_{n,i}$ be the univariate samples and $\hat{F}_1, \ldots, \hat{F}_d$ denote estimates of the marginal CDFs. The pseudo-sample from the copula consists of the vectors $\hat{U}_1, \ldots, \hat{U}_n$, where

$$\hat{U}_t = (\hat{U}_{t,1}, \ldots, \hat{U}_{t,d}) = (\hat{F}_1(X_{t,1}), \ldots, \hat{F}_d(X_{t,d})).$$
This thesis uses the common method of EVT for the tails to estimate the marginal CDFs $\hat{F}_i$. Alternative methods are described in McNeil et al. (2005) [37, page 233]. The next step is to estimate the copula parameters with maximum likelihood estimation (MLE). Example 3.30 shows the MLE for fitting a t-copula to data.
3.29 Definition of Maximum Likelihood Estimation: Let $C_{\theta}$ denote a parametric copula and $\theta$ the vector of parameters to be estimated. The MLE is obtained by maximizing

$$\ln L(\theta; \hat{U}_1, \ldots, \hat{U}_n) = \sum_{t=1}^{n} \ln c_{\theta}(\hat{U}_t) \qquad (3.20)$$

where $c_{\theta}$ denotes the copula density as in (3.16) and $\hat{U}_t$ denotes a pseudo-observation from the copula.
3.30 Example of Fitting the t-Copula: Let $f_{\nu,\Sigma}$ denote the joint density of a random vector with $t_d(\nu, 0, \Sigma)$ distribution, where $\Sigma$ is a linear correlation matrix, $f_{\nu}$ is the density of a univariate $t_1(\nu, 0, 1)$ distribution and $t_{\nu}^{-1}$ is the corresponding inverse function. The log-likelihood in the case of a t-copula is given by:

$$\ln L(\nu, \Sigma; \hat{U}_1, \ldots, \hat{U}_n) = \sum_{t=1}^{n} \ln f_{\nu,\Sigma}(t_{\nu}^{-1}(\hat{U}_{t,1}), \ldots, t_{\nu}^{-1}(\hat{U}_{t,d})) - \sum_{t=1}^{n} \sum_{j=1}^{d} \ln f_{\nu}(t_{\nu}^{-1}(\hat{U}_{t,j})) \qquad (3.21)$$
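The first CMLE step, mapping each margin to approximately uniform variates, is often implemented via ranks when no parametric marginal model is wanted. A Python sketch of this rank-based variant (the thesis itself uses EVT-based semi-parametric marginal CDFs in MATLAB; the rescaling by $n + 1$ keeps the values strictly inside $(0, 1)$):

```python
def pseudo_observations(samples):
    """First step of CMLE: map each margin to approximately U(0,1) variates
    via the rescaled empirical CDF, rank_t / (n + 1). `samples` is a list of
    columns, one column of observations per asset."""
    n = len(samples[0])
    pseudo = []
    for column in samples:
        order = sorted(range(n), key=lambda t: column[t])
        ranks = [0] * n
        for r, t in enumerate(order, start=1):
            ranks[t] = r
        pseudo.append([r / (n + 1) for r in ranks])
    return pseudo

cols = [[0.3, -1.2, 0.7, 0.1], [2.0, 0.5, -0.4, 1.1]]  # made-up return columns
u = pseudo_observations(cols)
assert u[0] == [3 / 5, 1 / 5, 4 / 5, 2 / 5]  # ranks 3, 1, 4, 2 over n + 1 = 5
assert all(0 < v < 1 for col in u for v in col)
```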
3.7 Extreme Value Theory
This section provides a short summary of EVT, focusing on the methods used in this thesis. From a mathematical point of view, EVT deals with models and distributions that describe extreme events, such as the fat tails observed in financial returns. There are two main kinds of models for extreme values: block maxima models and the peak-over-threshold (POT) method. The block maxima model works with the largest observations collected from large samples of identically distributed observations. McNeil et al. (2005) [37, page 275] acknowledge that this method wastes data. Furthermore, the interior of the financial return distribution can be modeled by a normal distribution. For this reason, POT is the method chosen for this thesis. POT considers only the extreme values that exceed a threshold. This section is based on the works of Embrechts [37, chapter 7] and Cottin [16, chapters 2.5.2 and 6.3.2.3].
3.7.1 Peak over Threshold (POT)
This section begins with the natural distribution model of EVT, which is the generalized extreme value (GEV) distribution. The importance of the GEV in the theory of extremes is analogous to that of the normal distribution in the central limit theory for sums of random variables.
3.31 Definition of the Generalized Extreme Value (GEV) Distribution: The distribution function of the GEV distribution is given by

$$H_{\xi}(x) = \begin{cases} \exp(-(1 + \xi x)^{-1/\xi}) & \xi \neq 0, \\ \exp(-e^{-x}) & \xi = 0. \end{cases} \qquad (3.22)$$
Let us consider a sequence of i.i.d. random variables $X_1, \ldots, X_n$ and $M_n = \max(X_1, \ldots, X_n)$. The theorem of Fisher and Tippett provides the central result of EVT, an asymptotic distribution for the normalized maxima $\frac{M_n - d_n}{c_n}$, together with the definition of a maximum domain of attraction (MDA). A more detailed description can be found in McNeil et al. (2005) [37, chapter 7.1].
3.32 Definition of the Maximum Domain of Attraction (MDA): The distribution function $F$ is said to be in the MDA of $H$, written $F \in \mathrm{MDA}(H)$, if for some non-degenerate distribution function $H$,

$$\lim_{n \to \infty} P\left(\frac{M_n - d_n}{c_n} \leq x\right) = \lim_{n \to \infty} F^n(c_n x + d_n) = H(x) \qquad (3.23)$$

where $d_n, c_n \in \mathbb{R}$ and $c_n > 0$ for all $n$.
3.33 Theorem of Fisher-Tippett and Gnedenko: If $F \in \mathrm{MDA}(H)$ for some non-degenerate distribution function $H$, then $H$ must be a distribution of type $H_{\xi}$, i.e. a GEV distribution.
The POT theory focuses on the conditional distribution of the exceedances $X - u$ over a threshold $u$. This type of function is also known as the excess distribution and builds the basis of EVT. If the mean of $X$ exists, $e(u)$ denotes the mean excess function; it expresses the mean of $F_u$ as a function of $u$.
3.34 Definition of Excess Distribution over Threshold: Let $X$ be a random variable with distribution function $F$. The excess distribution over the threshold $u$ has the distribution function:

$$F_u(x) = P(X - u \leq x \mid X > u) = \frac{P(X \leq x + u, X > u)}{P(X > u)} = \frac{F(x + u) - F(u)}{1 - F(u)} \qquad (3.24)$$

for $0 \leq x < x_F - u$, where $x_F$ is the right endpoint of $F$.
3.35 Definition of Mean Excess Function: The mean excess function of a random variable $X$ with a finite mean is given by

$$e(u) = E(X - u \mid X > u). \qquad (3.25)$$
The main distributional model for exceedances over thresholds is the generalized Pareto distribution (GPD). The GPD contains a number of special cases; three cases are displayed in figure 3.7. When $\xi = 0$, the GPD is an exponential distribution; when $\xi < 0$, it is a short-tailed Pareto type II distribution; finally, when $\xi > 0$, the distribution function $G_{\xi,\beta}$ is an ordinary Pareto distribution. A more detailed analysis of the GPD can be found in McNeil et al. (2005) [37, pages 275-276].
3.36 Definition of Generalized Pareto Distribution: Let $X$ be a random variable, and let the parameters $\xi$ and $\beta$ be referred to as the shape and scale parameters. The distribution function of the GPD is given by:

$$G_{\xi,\beta}(x) = \begin{cases} 1 - (1 + \xi x / \beta)^{-1/\xi} & \xi \neq 0, \\ 1 - \exp(-x/\beta) & \xi = 0, \end{cases} \qquad (3.26)$$

where $\beta > 0$, and $x \geq 0$ when $\xi \geq 0$ and $0 \leq x \leq -\beta/\xi$ when $\xi < 0$.
The importance of the GPD for the POT method is summarized in the theorem of Pickands, Balkema and de Haan. This theorem implies that the GPD is the natural model for an excess distribution over a threshold. If $F \in \mathrm{MDA}(H_{\xi})$, an approximating function $G_{\xi,\beta(u)}(x)$ meets the criteria of an excess distribution of definition 3.34.
Figure 3.7: Distribution functions (a) and densities (b) of the GPD in three cases: $\xi = -0.25$, $\xi = 0$ and $\xi = 1$, each with $\beta = 1$.
3.37 Theorem of Pickands, Balkema and de Haan: It is possible to find a positive measurable function $\beta(u)$ such that

$$\lim_{u \to x_F} \sup_{0 \leq x < x_F - u} |F_u(x) - G_{\xi,\beta(u)}(x)| = 0 \qquad (3.27)$$

if and only if $F \in \mathrm{MDA}(H_{\xi})$, $\xi \in \mathbb{R}$.
The excess distribution and mean excess function of the GPD are given by explicit formulas (see Cottin [16, page 82]):

$$F_u(x) = G_{\xi,\beta(u)}(x), \qquad \beta(u) = \beta + \xi u, \qquad (3.28)$$

$$e(u) = \frac{\beta(u)}{1 - \xi} = \frac{\beta + \xi u}{1 - \xi}. \qquad (3.29)$$
3.7.2 Modeling with POT
Given is a series $X_1, \ldots, X_n$ of i.i.d. random variables representing continuous financial returns from equation (3.5), with an unknown distribution function $F \in \mathrm{MDA}(H_{\xi})$ and an upper endpoint $x_F = \sup\{x \in \mathbb{R} : F(x) < 1\}$. Let $u$ be a certain threshold and $N_u = \#\{i : X_i > u\}$ the number of exceedances of $u$ by $X_1, \ldots, X_n$. The observed exceedances of the threshold are re-labeled as $\tilde{X}_1, \ldots, \tilde{X}_{N_u}$. For each of the exceedances, $Y_j = \tilde{X}_j - u$ is calculated as the excess loss. By definition 3.34 and theorem 3.37, the excess distribution is given by $F_u(x) = G_{\xi,\beta+\xi u}(x)$ for $0 \leq x < x_F - u$ and some $\xi \in \mathbb{R}$ and $\beta > 0$. The common method for approximating the parameters of the GPD is the maximum likelihood method. With the assumption of i.i.d. random variables, the joint density is a product of marginal GPD densities. If $g_{\xi,\beta}$ is the density of the GPD, the log-likelihood is given by (see McNeil et al. (2005) [37, page 278] or Cottin [16, page 350]):

$$\ln L(\xi, \beta; Y_1, \ldots, Y_{N_u}) = \sum_{j=1}^{N_u} \ln g_{\xi,\beta}(Y_j) = -N_u \ln \beta - \left(1 + \frac{1}{\xi}\right) \sum_{j=1}^{N_u} \ln\left(1 + \frac{\xi Y_j}{\beta}\right) \qquad (3.30)$$

This function must be maximized subject to the parameter constraints $\beta > 0$ and $1 + \xi Y_j / \beta > 0$ for all $j$. The result of the maximization is a GPD model $G_{\hat{\xi},\hat{\beta}}$ for the excess distribution $F_u$.
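The log-likelihood (3.30) can be written down directly and handed to any numerical optimizer. A Python sketch with a crude grid search standing in for a proper optimizer such as the one the thesis uses in MATLAB (the excess values are made-up illustration data):

```python
import math

def gpd_log_likelihood(xi, beta, excesses):
    """Log-likelihood (3.30) of the GPD for excess losses Y_j = X_j - u.
    Returns -inf outside the constraints beta > 0 and 1 + xi*y/beta > 0."""
    n = len(excesses)
    if beta <= 0 or any(1 + xi * y / beta <= 0 for y in excesses):
        return -math.inf
    if xi == 0:  # exponential limit of the GPD
        return -n * math.log(beta) - sum(y / beta for y in excesses)
    return -n * math.log(beta) - (1 + 1 / xi) * sum(
        math.log(1 + xi * y / beta) for y in excesses)

# crude grid search over (xi, beta) as a stand-in for a proper optimizer
excesses = [0.2, 0.5, 1.3, 0.1, 2.4, 0.7, 0.3, 1.8]  # made-up excess losses
best = max(((xi / 20, b / 10) for xi in range(-5, 21) for b in range(1, 41)),
           key=lambda p: gpd_log_likelihood(p[0], p[1], excesses))
assert gpd_log_likelihood(best[0], best[1], excesses) >= gpd_log_likelihood(0.0, 1.0, excesses)
```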
3.8 Value-at-Risk
VaR is one of the most widely used risk measures in the field of risk management and has been the quantitative standard for market risk in the Basel framework since the market risk amendment of 1996. It was invented by the investment bank J.P. Morgan and published in the popular technical document RiskMetrics [29] in 1996. The most important properties of VaR are that it summarizes the risk of a multivariate portfolio in a single, easily understood statistical number and is distribution independent. In the Basel frameworks, VaR is understood as the risk reserve and, therefore, as a positive number; in the literature, VaR is found as a negative number as well. The calculation rule depends on the chosen type of distribution. This thesis assumes a loss distribution, which means a loss is interpreted as a positive number and a gain as a negative number. This is the general assumption in the research literature, such as the studies of Cottin [16], Jorion [28] and McNeil et al. (2005) [37].
3.38 Definition of Value-at-Risk (VaR): Given is the loss distribution $L$, while $H$ denotes the holding or liquidity period of the portfolio. The distribution function of the loss in the period $H$ is denoted by $F_{L_H}(x) = P(L_H \leq x)$. The VaR of the portfolio at a confidence level $c = 1 - \alpha \in (0, 1)$ is given by the smallest number $x$ such that the probability that the loss $L_H$ exceeds $x$ is no larger than $\alpha$. Formally, VaR is:

$$VaR^H_c = q_c(L_H) = \inf\{x \in \mathbb{R} : P(L_H > x) \leq \alpha\} = \inf\{x \in \mathbb{R} : F_{L_H}(x) \geq c\}.$$
In statistical terms, VaR is simply a quantile of the loss distribution. The loss distribution and the VaR are illustrated in figure 3.8. Typical values for the confidence level are $c = 99\%$ or $c = 95\%$. In general, the loss distribution describes an absolute loss of the portfolio for the time horizon $H$. On the other hand, the loss distribution can be given in relative values. In the example of equity portfolios, the loss distribution can be observed via expected continuous financial returns. With the information of the current portfolio value, the relative values can be transformed into absolute values by multiplying the return $r$ by the current portfolio value and, reversed, the absolute distribution can be transformed into a relative distribution.
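Since VaR is just a quantile, an empirical estimate is a single order statistic of the loss sample. A Python sketch on toy data (illustrative only; the thesis computes VaR in MATLAB):

```python
import math

def empirical_var(losses, c):
    """Empirical VaR at confidence level c: the smallest observed loss x
    with F_n(x) >= c, i.e. the ceil(n*c)-th smallest loss."""
    xs = sorted(losses)
    return xs[math.ceil(len(xs) * c) - 1]

losses = list(range(1, 101))  # toy loss sample 1, 2, ..., 100
assert empirical_var(losses, 0.95) == 95
assert empirical_var(losses, 0.99) == 99
```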
Figure 3.8: Loss distribution, VaR and ES presented graphically.
3.8.1 Criticism of Value-at-Risk
This subsection provides a short summary of the research on issues in applying VaR and the reason why the Basel Committee moved from VaR to ES. The criticism of VaR is based on the work of Artzner et al. (1998) [4]. They identify four axioms that a risk measure ideally should have. A risk measure that satisfies these four axioms is termed coherent.
3.39 Definition of Coherent Risk Measures: Consider two real-valued random variables $X$ and $Y$. A function $\rho(\cdot) : V \to \mathbb{R}$ is called a coherent risk measure if it satisfies the following for $X$, $Y$ and a constant $c$.
1. Monotonicity: $X, Y \in V$, $X \leq Y \Rightarrow \rho(X) \leq \rho(Y)$.
If portfolio X never exceeds the value of portfolio Y, the risk of Y should never exceed the risk of X.
2. Subadditivity: $X, Y, X + Y \in V \Rightarrow \rho(X + Y) \leq \rho(X) + \rho(Y)$.
The risk of the combined portfolio of X and Y cannot be worse than the sum of the two individual risks.
3. Positive homogeneity: $X \in V$, $c > 0 \Rightarrow \rho(cX) = c\rho(X)$.
Risk increases in proportion to the positive factor c.
4. Translation invariance: $X \in V$, $c \in \mathbb{R} \Rightarrow \rho(X + c) = \rho(X) - c$.
Adding cash c to the portfolio reduces the risk of the portfolio by the amount of cash c.
It is widely known that VaR is not a coherent risk measure, because it violates the property of subadditivity; the other three axioms are satisfied. The proof is simple with an example such as in McNeil et al. (2005) [37, Example 6.7]. Subadditivity is a manifestation of the diversification principle, so non-subadditivity means that the VaR of a merged portfolio is not necessarily bounded by the sum of the VaRs of the individual portfolios. This contradicts the general assumption of risk management that merging portfolios results in a diversification benefit. McNeil et al. (2005) [37, chapter 2.2.3] and Daníelsson [17, chapter 4.4] emphasize that VaR hinders the decentralization of risk management, since no bound for the overall risk of an enterprise can be obtained. Furthermore, VaR can easily be manipulated, because it is only a quantile of the loss distribution; Daníelsson provides some examples of how banks can manipulate VaR with options. On the other hand, the research of Daníelsson et al. [18] shows that VaR is subadditive in the special case of normally distributed financial returns and is therefore a coherent risk measure in that case. They reveal that the property of subadditivity is violated only when the tails are super fat. In practice, most assets, exchange rates or commodities do not have such fat tails that subadditivity may be violated; hence, the issue of subadditivity is more theoretical than practical. A further important weakness of VaR is the interpretation of this risk measure. VaR describes the maximal potential loss for a given confidence level but provides no information about what happens beyond the VaR. Consequently, it can be said that VaR gives the best worst-case scenario and, in the special case of fat tails, it can underestimate the potential loss. This raises the question of whether VaR is a good choice for the risk reserve of risk portfolios in the Basel frameworks.
3.9 Expected Shortfall
Artzner et al. (1998) [4] propose ES as an alternative coherent risk measure. It is closely related to VaR but satisfies the property of subadditivity. Expected shortfall describes the expected loss when losses exceed VaR. Figure 3.8 shows the relation of ES and VaR for a continuous distribution. In this special case, the ES is simply the conditional mean of the losses that exceed VaR; otherwise, the calculation is more complicated.
3.40 Definition of Expected Shortfall (ES): Given is the loss distribution $L$, while $H$ denotes the holding or liquidity period of the portfolio. The distribution function of the loss in the period $H$ is denoted by $F_{L_H}(x) = P(L_H \leq x)$. The VaR of the portfolio at a confidence level $c = 1 - \alpha \in (0, 1)$ is denoted by $VaR^H_c$.

$$ES^H_c = \frac{1}{1 - c} \int_c^1 q_u(F_{L_H})\,du = \frac{1}{1 - c} \int_c^1 VaR^H_u(L_H)\,du$$

where $q_u(F_{L_H}) = F^{\leftarrow}_{L_H}(u)$ is the quantile function of $F_{L_H}$.
3.41 Lemma: For an integrable loss $L_H$ with continuous distribution function $F_{L_H}$ and any $c \in (0, 1)$, ES is given by

$$ES^H_c = \frac{E(L_H \cdot I_{\{L_H \geq q_c(L_H)\}})}{1 - c} = E(L_H \mid L_H \geq VaR^H_c) \qquad (3.31)$$

Proof: See Lemma 2.16 of McNeil et al. (2005) [37, page 45].
3.42 Lemma: For a discontinuous loss distribution function $F_{L_H}$ and any $c \in (0, 1)$, ES is given by

$$ES^H_c = \frac{1}{1 - c}\left[ E(L_H \cdot I_{\{L_H \geq q_c(F_{L_H})\}}) + q_c(F_{L_H})\left(1 - c - P(L_H \geq q_c(F_{L_H}))\right) \right]$$

Proof: See Proposition 3.2 of Acerbi and Tasche (2002) [1, page 6].
Obviously, $ES_c \geq VaR_c$, and $ES_c$ depends only on the loss distribution. ES solves the addressed issues of VaR as a risk measure, and this led the Basel Committee to move from VaR to ES as the measure for the risk reserve.
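For an empirical sample, the continuous-case formula of lemma 3.41 gives ES as the average of the losses at or beyond the VaR. A Python sketch on toy data, illustrative only, which also shows $ES_c \geq VaR_c$:

```python
import math

def empirical_var(losses, c):
    """Empirical c-quantile of the losses (the empirical VaR)."""
    xs = sorted(losses)
    return xs[math.ceil(len(xs) * c) - 1]

def empirical_es(losses, c):
    """Empirical ES at level c: mean of the losses >= empirical VaR
    (the continuous-distribution case of lemma 3.41)."""
    var = empirical_var(losses, c)
    tail = [x for x in losses if x >= var]
    return sum(tail) / len(tail)

losses = list(range(1, 101))  # toy loss sample 1, 2, ..., 100
assert empirical_var(losses, 0.95) == 95
assert empirical_es(losses, 0.95) == 97.5  # mean of 95..100
assert empirical_es(losses, 0.95) >= empirical_var(losses, 0.95)
```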
4 Methodology and Implementation of Modeling Financial Returns
4.1 MATLAB
Developed by Mathworks [32], MATLAB is a powerful software package and language for technical and numerical computing. The most important functions are matrix manipulation, plotting of functions and data, implementation of algorithms, creation of user interfaces and interfacing with programs written in other computer languages, including C++, Java, Microsoft Excel and Microsoft Access. It is a popular tool in applied mathematics, statistics, engineering, finance and economics. The extensive set of online and printed documentation is one of the main strengths of MATLAB, and there are also plenty of books on the subject. For specific problems in a field of work, MATLAB comes with an extensive family of toolboxes.
The key functions of MATLAB and the reasons for choosing this computer language for the present thesis are matrix manipulation for portfolio theory, easy interfacing with databases (e.g. Microsoft Access), good access to market and stock exchange data, strong toolboxes for important and difficult-to-implement methods and the simplicity of a scripting language. Every basic concept of risk management, including the VaR and ES calculations, is accompanied by a short MATLAB code example and application. The focus is on the implementation of the backtesting routine for ES. Some MATLAB codes of this thesis are highly technical and not described in the chapters; this includes all interactions with databases.
4.2 Yahoo!Finance
Yahoo!Finance is selected as the data provider. It is a Yahoo! website that provides financial information such as stock quotes, stock exchange rates, corporate press releases and financial reports. Mathworks provides some implemented functions to load Yahoo!Finance data into the MATLAB environment. An active internet connection is required for downloading time series from Yahoo!Finance. The MATLAB code listing below presents an example of how to get closing prices of BMW in the period of January 1, 2000 to September 30, 2014. Yahoo!Finance has defined a unique ticker for each stock, such as BMW.DE; the ticker is found on the website. The result of this MATLAB statement is a matrix: the first column contains the date for each trading day and the second column the closing price. Both value types are numerical.

connection = yahoo('http://download.finance.yahoo.com');
closeprice = fetch(connection, 'BMW.DE', 'close', '01/01/2000', '09/30/2014');
4.3 Sample Portfolio
This chapter presents a common method for implementing and modeling the market risk of an exemplary equity portfolio in MATLAB with a GARCH-EVT-copula approach. The theoretical basis is built on the knowledge and research of the stylized facts of financial returns, as described in chapter 3.4. The sample portfolio consists of 774 observations of adjusted daily closing prices of the DAX equities Bayrische Motoren Werke AG, Deutsche Bank AG, Adidas AG and Siemens AG in the period between January 1, 2010 and December 31, 2012. This example downloads the data directly from Yahoo!Finance, a process that is very time intensive. Therefore, all financial data are stored in a database, and the backtest procedures will use the data from the database instead of downloading it every time from Yahoo!Finance.
% connection to yahoo
connection = yahoo('http://download.finance.yahoo.com');
% define period of historical data
startdate = '01/01/2010';
enddate = '12/31/2012';
% import adjusted asset prices and reverse sorting of data
adjclose.BMW = flipud(fetch(connection,'BMW.DE','adj close',startdate,enddate));
adjclose.DBK = flipud(fetch(connection,'DBK.DE','adj close',startdate,enddate));
adjclose.ADS = flipud(fetch(connection,'ADS.DE','adj close',startdate,enddate));
adjclose.SIE = flipud(fetch(connection,'SIE.DE','adj close',startdate,enddate));
The relative price movement of each equity is displayed in figure 4.1. Stock splits and dividend actions are included in the adjusted asset prices. For a better comparison of relative performance, the initial level of each equity is standardized to one.
Figure 4.1: Relative adjusted daily closing price movements between January 1, 2010 and December 31, 2012 for BMW AG, Deutsche Bank AG, Adidas AG and Siemens AG.
The first step of evaluating market risk is converting the adjusted asset prices of each equity to daily logarithmic returns.

% calculate log returns
returns = price2ret([adjclose.BMW(:,2) adjclose.DBK(:,2) ...
    adjclose.ADS(:,2) adjclose.SIE(:,2)]);
% nTradingdays = # trading days ; nIndices = # equities
[nTradingdays, nIndices] = size(returns);
4.4 Filter the Financial Returns for Each Equity
Modeling a semi-parametric CDF with EVT requires the observations to be approximately i.i.d. Chapter 3.4 showed that financial returns have the properties of leptokurtosis (fat tails), significant autocorrelation, heteroscedasticity (volatility clusters) and a leverage effect in volatility. To produce a series of i.i.d. observations, fit a first-order autoregressive model AR(1) to the conditional mean of the returns of each equity (see definition 3.16)

$$r_t = \phi r_{t-1} + \epsilon_t$$

and a first-order asymmetric GARCH model (GJR-GARCH(1, 1)) to the conditional variance (see definition 3.19)

$$\sigma_t^2 = \alpha_0 + (\alpha_1 + \gamma_1 I[\epsilon_{t-1} < 0])\,\epsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2$$

As described in chapter 3.5, the asymmetric GARCH model compensates for the heteroscedasticity and incorporates the leverage effect of volatility. The first-order autoregressive model compensates for the autocorrelation in financial returns. It is common practice to model the standardized residuals of each equity with a standardized Student's t-distribution to compensate for the leptokurtosis (see Mathworks [36]). Formally, that is

$$z_t = \frac{\epsilon_t}{\sigma_t} \quad \text{i.i.d. distributed } t(\nu).$$
% fit data with first-order AR model and asymmetric GJR-GARCH model
model = arima('AR',NaN,'Distribution','t','Variance',gjr(1,1));
% preallocate storage
residuals = NaN(nTradingdays,nIndices);
variances = NaN(nTradingdays,nIndices);
% set options: fmincon - finds the minimum of a constrained problem
% algorithm sqp - sequential quadratic programming
options = optimset('fmincon');
options = optimset(options,'Display','off','Diagnostics','off',...
    'Algorithm','sqp','TolCon',1e-7);
for i = 1:nIndices
    fit{i} = estimate(model,returns(:,i),'print',false,'options',options);
    [residuals(:,i), variances(:,i)] = infer(fit{i},returns(:,i));
end
% standardized Student's t i.i.d. residuals
residuals = residuals ./ sqrt(variances);

Chapter 4. Methodology and Implementation of Modeling Financial Returns
56
The Econometrics Toolbox [33] provides time series regression models with the functions arima, estimate and infer. The first line of the code segment specifies an AR(1) and GJR-GARCH(1, 1) model with a Student's t-distribution. The estimate function of MATLAB uses the numerical fmincon algorithm of the Optimization Toolbox [34] to estimate the parameters of the regression model with time series errors and stores them in the object fit. The function infer infers the residuals of a univariate regression with time series errors fitted to the financial returns of each equity.
The ACFs of a sample financial return series were displayed and explained in figure 3.3. The ACFs of BMW, Deutsche Bank, Adidas and Siemens are very similar and are displayed in appendix D. To conclude this section, the ACFs of the standardized residuals and of the squared standardized residuals are displayed in figures 4.2 and 4.3. Comparing the ACFs of the standardized residuals to the corresponding ACFs of the raw financial returns reveals that the standardized residuals are approximately i.i.d.
Figure 4.2: ACF plots of the standardized residuals of (a) BMW, (b) Deutsche Bank, (c) Adidas and (d) Siemens

Figure 4.3: ACF plots of the squared standardized residuals of (a) BMW, (b) Deutsche Bank, (c) Adidas and (d) Siemens
4.5 Estimate Semi-Parametric CDFs with EVT
The next step is to estimate the CDF of each equity from the standardized i.i.d. residuals. Chapter 3.4.2 showed that financial returns have fat tails and that the normal distribution is not effective for modeling market risk. On the other hand, the interior of the CDF is approximately normally distributed. For this reason, the approach uses a kernel-smoothed estimate for the interior of the distribution and EVT for the fat tails. The result is a semi-parametric CDF for each equity. The following MATLAB code uses the POT method with threshold u = 10%. The approach and assumptions of this method were described in chapter 3.7.2. Given the exceedances in each tail, the MATLAB function paretotails of the Statistics Toolbox [35] optimizes the negative log-likelihood function to estimate the parameters of the GPD (see equation (3.30)). The result of this method, the semi-parametric CDF for each equity, is displayed in figure 4.4. To visualize the quality of the GPD fit, figure 4.5 shows the empirical CDF of the upper 10% tail exceedances of the residuals along with the CDF fitted by the GPD. The fitted distribution of each equity follows the exceedance data closely; hence, the GPD model seems to be a good choice.
% decimal fraction of residuals allocated to each tail
tailFraction = 0.1;
% preallocate storage for cell array of Pareto tail objects
tails = cell(nIndices, 1);
% estimate the GPD for each equity by maximum likelihood;
% each paretotails object consists of three parts: the GPD-fitted lower
% tail, the kernel-fitted interior and the GPD-fitted upper tail
for i = 1:nIndices
    tails{i} = paretotails(residuals(:,i), tailFraction, 1 - tailFraction, 'kernel');
end
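The POT idea behind paretotails can be sketched in a few lines of Python. This fragment only extracts the exceedances over an empirical 90% threshold on toy data; the GPD likelihood fit itself, which the toolbox performs, is omitted.

```python
# Minimal sketch of the peaks-over-threshold (POT) selection step: keep the
# amounts by which observations exceed an empirical 90% threshold. A GPD
# would then be fitted to these excesses by maximum likelihood (omitted).

def upper_tail_exceedances(sample, tail_fraction=0.1):
    """Return (threshold u, excesses over u) for the upper tail."""
    s = sorted(sample)
    u = s[int(len(s) * (1 - tail_fraction))]  # empirical 90% quantile
    return u, [x - u for x in s if x > u]

residuals = [(-1) ** i * (i % 7) / 3.0 for i in range(100)]  # toy data only
u, excesses = upper_tail_exceedances(residuals)
# roughly the top decile of the sample ends up in the tail; all excesses
# are strictly positive by construction
```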
Figure 4.4: Empirical CDF of the standardized residuals (Pareto lower tail, kernel smoothed interior, Pareto upper tail) of (a) BMW, (b) Deutsche Bank, (c) Adidas and (d) Siemens

Figure 4.5: Upper tail of the standardized residuals: empirical CDF and fitted generalized Pareto CDF of the exceedances of (a) BMW, (b) Deutsche Bank, (c) Adidas and (d) Siemens
4.6 Calibrate the t-Copula
Given the standardized residuals and the tails fitted with the GPD, the next step is to estimate the scalar degrees of freedom parameter (DoF) and the linear correlation matrix (R) of the Student's t-copula. The MATLAB code below uses the canonical maximum likelihood method described in chapter 3.6.4 to estimate these parameters. This method starts by transforming the standardized residuals to uniform variates using the semi-parametric CDFs of the previous step. Then the t-copula is fitted to the transformed data using the copulafit function of the Statistics Toolbox. The function copulafit uses the maximum likelihood approach for a t-copula to estimate the parameters of the copula, as described in chapter 3.6.4 in example 3.30.

% preallocate storage
U = zeros(size(residuals));
% transform each margin to uniform (for each equity)
for i = 1:nIndices
    U(:,i) = cdf(tails{i}, residuals(:,i));
end
% fit the copula
[R, DoF] = copulafit('t', U, 'Method', 'ML');
The result of that function is the linear correlation matrix
R =
1.0000 0.5459 0.6010 0.6338
0.5459 1.0000 0.4915 0.6462
0.6010 0.4915 1.0000 0.5675
0.6338 0.6462 0.5675 1.0000
and the degrees of freedom
DoF = 10.2050.
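The canonical maximum likelihood method first maps each margin to uniform (0, 1) variates. As an aside, a common distribution-free variant of this transformation, which is not the semi-parametric CDF used above, builds pseudo-observations from ranks:

```python
# Rank-based pseudo-observations: an alternative, distribution-free way to
# map a margin into (0, 1) before fitting a copula. This is a sketch of a
# standard technique, not the transformation used in the thesis.

def pseudo_observations(sample):
    """Map a sample to (0, 1) via scaled ranks: rank / (n + 1)."""
    n = len(sample)
    order = sorted(range(n), key=lambda i: sample[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1)
    return u

u = pseudo_observations([0.3, -1.2, 0.7, 0.1])
# values lie strictly inside (0, 1) and preserve the ordering of the data
```

Dividing by n + 1 rather than n keeps the pseudo-observations away from the boundary 1, where many copula densities diverge.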
4.7 Simulate Portfolio Returns with a t-Copula
Given the parameters of the t-copula, the next step is to simulate dependent financial returns for each equity corresponding to the dependence structure of the standardized residuals. The MATLAB function copularnd of the Statistics Toolbox provides this functionality; it requires the correlation matrix and the degrees of freedom. The theory of this function was described in chapter 3.6 in example 3.28. The MATLAB code below simulates 10,000 independent random trials of dependent standardized equity residuals over a horizon of 10 trading days.
nTrials = 10000;   % number of independent random trials
horizon = 10;      % time horizon for VaR and ES forecast
% preallocate storage for standardized residuals array
Z = zeros(horizon, nTrials, nIndices);
% t-copula simulation
% R - linear correlation matrix, DoF - degrees of freedom
U = copularnd('t', R, DoF, horizon * nTrials);

So, the object U holds 100,000 simulated uniform variates for each equity. Each column of the simulated standardized residuals array represents an i.i.d. stochastic process when viewed in isolation, whereas each row shares the rank correlation induced by the copula. The next step is to transform the random sample back to the original scale using the MATLAB function icdf and to reshape the result into a 10 × 10,000 × 4 array, where the first index represents the time point within the simulated horizon, the second index represents the independent random trials for one point in time and the last index represents the equities.
% transform random trials to the original scale
% and reshape the 100,000-by-4 matrix into a 10-by-10,000-by-4 array
for j = 1:nIndices
    Z(:,:,j) = reshape(icdf(tails{j}, U(:,j)), horizon, nTrials);
end
The next step is to reintroduce the autocorrelation and heteroskedasticity that were removed with the AR(1) and GJR-GARCH(1,1) models to satisfy the i.i.d. assumption of EVT. The MATLAB function filter of the Econometrics Toolbox provides this functionality; it expects as input parameters the presample financial returns, the calculated variances, the standardized residuals and the fit object of the AR and GARCH modeling.
Y0 = returns(end,:);     % presample returns
Z0 = residuals(end,:);   % presample standardized residuals
V0 = variances(end,:);   % presample variances
% preallocate storage
simulatedReturns = zeros(horizon, nTrials, nIndices);
% reintroduce autocorrelation and heteroskedasticity
for i = 1:nIndices
    simulatedReturns(:,:,i) = filter(fit{i}, Z(:,:,i), ...
        'Y0', Y0(i), 'Z0', Z0(i), 'V0', V0(i));
end
The last step of modeling financial returns is calculating the portfolio return over the given horizon. To ensure better technical handling, the matrix simulatedReturns is reshaped in such a way that each query simulatedReturns(:,:,i), i = 1, ..., 10,000, represents a single trial of a multivariate return series, where each row represents one date of the time horizon H and the columns represent the equities. The portfolio return at each time point can then be calculated by log(exp(simulatedReturns(:,:,i)) * weights) (see the formula for the continuous portfolio return (3.10)). The continuous return over the given time horizon is then the sum of this vector (see the property of n-period continuous returns in formula (3.8)). The final result of the modeling are the portfolio returns, displayed in a CDF plot in figure 4.6.
% reshape array so that each page is a single trial (horizon-by-equities)
simulatedReturns = permute(simulatedReturns, [1 3 2]);
% preallocate storage
PortfolioReturns = zeros(nTrials, 1);
% equally weighted portfolio
weights = repmat(1/nIndices, nIndices, 1);
% calculate cumulative portfolio returns
for i = 1:nTrials
    PortfolioReturns(i) = sum(log(exp(simulatedReturns(:,:,i)) * weights));
end
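The per-trial aggregation in this loop can be sketched in Python, assuming each trial is a list of days, each day holding the per-asset log returns:

```python
# Sketch of the per-trial aggregation: convert each asset's log return to a
# simple return, take the weighted sum, convert back to a log return, and
# sum over the horizon (log returns add across time).

from math import exp, log

def portfolio_log_return(trial, weights):
    """trial: list of days, each day a list of per-asset log returns."""
    total = 0.0
    for day in trial:
        simple = sum(w * exp(r) for w, r in zip(weights, day))
        total += log(simple)   # one-day portfolio log return
    return total

trial = [[0.01, -0.02], [0.005, 0.0]]   # 2 days, 2 assets (invented numbers)
r_h = portfolio_log_return(trial, [0.5, 0.5])
```

The conversion through exp and log is needed because simple returns, not log returns, aggregate linearly across assets.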
Figure 4.6: CDF plot of the simulated 10-day portfolio returns (logarithmic return against probability)
4.8 Evaluate Market Risk with VaR and ES
Finally, this thesis measures market risk with the ES. The formulas shown for VaR and ES are based on a given loss distribution. So, the first step of evaluating market risk is to transform the given distribution into a loss distribution. Then the VaR is calculated as the quantile of the portfolio returns at the given confidence level (see definition 3.38). The ES can then be evaluated simply with formula (3.31) as a conditional expectation. The results for VaR and ES are displayed in table 4.1 for sample confidence levels.
% transform profit/loss distribution to loss distribution (loss = positive)
PortfolioReturns = (-1) * PortfolioReturns;
% confidence level
c = 0.99;
% calculate Value-at-Risk
VaR = quantile(PortfolioReturns, c)
% calculate Expected Shortfall
ES = mean(PortfolioReturns(find(PortfolioReturns >= VaR)))
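A hedged Python equivalent of this computation, using a simple empirical quantile instead of MATLAB's interpolating quantile function:

```python
# Empirical VaR and ES from a sample of losses (losses counted positive).
# The quantile here is a plain order statistic; MATLAB's quantile
# interpolates, so results can differ slightly.

def var_es(losses, c=0.99):
    """Return (VaR, ES) at confidence level c."""
    s = sorted(losses)
    var = s[int(c * len(s))]            # simple empirical c-quantile
    tail = [x for x in s if x >= var]   # losses at or beyond VaR
    return var, sum(tail) / len(tail)

losses = [i / 100 for i in range(100)]  # toy losses 0.00, 0.01, ..., 0.99
var, es = var_es(losses, c=0.95)
# var = 0.95; es = mean(0.95, ..., 0.99) = 0.97
```

As the table below illustrates, ES always lies above VaR at the same level, since it averages the losses beyond the quantile.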
                     c = 90%   c = 95%   c = 99%
Value-at-Risk          3.81%     5.24%     8.41%
Expected Shortfall     5.81%     7.17%    10.25%

Table 4.1: VaR and ES results of the simulated portfolio returns for sample confidence levels

5 Backtesting Methodology
Risk measures such as VaR and ES are useful only if they predict future risk accurately. Backtesting is a statistical method for proving the predictive ability of these risk models by comparing observed losses to the corresponding risk estimates. Let T denote the number of observations in a period and H the holding period of the portfolio. Suppose the period is one year; thus, T = 252. For each point in time, the risk measures VaR and ES are estimated and denoted by VaR_c^{t,H} and ES_c^{t,H}. For simplicity, H = 1, and the measured values VaR_c^{t,1} and ES_c^{t,1} are compared with the observed loss at date t + 1. This procedure is implemented for each day and is known as backtesting. The backtesting procedure for VaR is summarized in the following subsection. In contrast, backtesting the evaluation of ES is more complicated and intensively discussed in recent research. The Basel Committee does not advise any method for backtesting ES. This thesis implements and analyzes two of the methods suggested by Acerbi and Szekely (2014) [3]. This chapter uses the following notation, defined in the next sections.
W_BT         Window of the backtesting sample
#W_BT        Number of backtesting trading days
W_HT         Window of the historical data sample
#W_HT        Number of historical trading days
Î^H_{t+H}    Indicator of whether a VaR violation occurs
Î_{W_BT}     Number of violations in the whole backtesting sample
T            Number of iterations in the backtesting period
5.1 VaR Basel Backtesting Framework
The backtesting method for VaR plays a key role in the Basel framework. Financial institutions must backtest their VaR models annually. Supervisors set a plus factor for the risk measure reflecting the result of the backtesting; a bad result leads to a higher market risk capital charge for the financial institution. A backtesting procedure is characterized by the input parameters: the backtesting window W_BT, the number of historical data samples #W_HT for each estimation, the holding period H and the confidence level c = 1 − α of the VaR estimation.
5.1 Definition of Backtesting Window: The backtesting window W_BT is the period of the backtesting procedure, defined by a first date and a last date. The declaration is denoted by:

W_BT = [startdate; enddate],   startdate < enddate

The number of trading days in this period is denoted by #W_BT.
5.2 Definition of Historical Data Window: The historical data window W_HT is the period of the data sample over which risk is forecast, defined by the number of trading days in this period, #W_HT.
5.3 Definition of the Number of Backtest Iterations: Given a backtesting window W_BT and a holding period H, the number of backtest iterations T is

T = #W_BT − H

Typically, the backtesting window is one year, for example W_BT = [January 1, 2013; December 31, 2013]. Suppose this period has 250 trading days; thus, #W_BT = 250. The holding period H is one trading day. With these parameters, the backtesting procedure has T = 249 iterations. For each iteration, the VaR is estimated with the given confidence level and the data sample of the observation window of one year, using the methodology of chapter 4. The backtesting procedure involving these parameters is illustrated in figure 5.1. It can be seen that the historical data window moves along with each iteration step and in this way influences the estimation of the risk measure. In each iteration, the observed loss and the estimated VaR are compared. If the observed loss exceeds the VaR forecast, the VaR estimation is said to have been violated.

Figure 5.1: Backtesting procedure graphically: over the backtesting window of 250 trading days (January 1, 2013 to December 31, 2013), the historical data window of 250 trading days shifts forward one day per iteration t = 1, ..., 249; in each iteration VaR_c^{t,1} is estimated (H = 1) and compared with the observed loss of the following day.
5.4 Definition of VaR Violation: Let L_{t+H} denote the observed loss at time t + H and VaR_c^{t,H} the estimated VaR for time t + H. A VaR violation is defined as:

Î^H_{t+H} = 1 if L_{t+H} ≥ VaR_c^{t,H}, and Î^H_{t+H} = 0 if L_{t+H} < VaR_c^{t,H}

The number of violations in the backtesting period W_BT is given by:

Î_{W_BT} = Σ_{t=1}^{T} Î^H_{t+H}
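The violation indicator and its sum translate directly into code; a minimal Python sketch with invented numbers:

```python
# Count VaR violations: a violation occurs whenever the observed loss
# reaches or exceeds the VaR forecast for that day (definition 5.4).

def count_violations(losses, var_forecasts):
    """Number of days on which the observed loss reaches the VaR forecast."""
    return sum(1 for loss, var in zip(losses, var_forecasts) if loss >= var)

losses = [0.010, 0.031, 0.005, 0.042]       # invented observed losses
var_forecasts = [0.030, 0.030, 0.030, 0.030]  # invented VaR forecasts
k = count_violations(losses, var_forecasts)   # violations on days 2 and 4
```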
5.1.1 Test and Significance
This section uses the notation of the previous section together with the random variable L^H = {L^H_t}, which represents the vector of portfolio losses L^H_t for every day t = 1, ..., T in the backtesting window W_BT. In a backtest procedure, whether a violation occurred on day t is indicated by Î^H_{t+H}. This variable takes the value one for a violation and zero for no violation. This means that (Î^H_{t+H})_{t=1,...,T} is a Bernoulli process of i.i.d. Bernoulli variables with success probability α (see McNeil et al. (2005) [37]). The observed sum of violations is therefore expected to be binomially distributed. The backtest with i.i.d. Bernoulli violations has two aspects: checking that the number of violations is correct on average and checking that the i.i.d. property is satisfied. Only the first aspect is requested by the Basel Committee and is a component of this thesis. Jorion [28, chapter 6] and Daníelsson [17, chapter 8.3] provide further tests, including tests for the independence of violations. That the number of violations is correct on average can be checked by a standard one-sided binomial test. Therefore, an observed significance level is defined:

α̂ = (observed number of violations) / (number of trading days in the backtest period) = Î_{W_BT} / T.
It is expected that α̂ = α:

E(α̂) = E(Î_{W_BT}) / T = (α · T) / T = α

This builds the H_0 hypothesis of the binomial test

H_0 : α̂ ≤ α.
The test statistic is given by the number of violations. If H_0 is satisfied, then the number of violations in the backtest period is binomially distributed; thus

Z_0(L^H) = Î_{W_BT} = Σ_{t=1}^{T} Î^H_{t+H} ~ B(T, α).

The probability of getting exactly k violations in T trials is given by the density of the binomial distribution:

f(k; T, α) = C(T, k) α^k (1 − α)^{T−k},   where C(T, k) = T! / (k! (T − k)!).
The alternative hypothesis for the upper one-sided test is given by:

H_1 : α̂ > α.

Departures from the null hypothesis suggest either systematic underestimation or overestimation of VaR. But a departure α̂ ≠ α does not by itself mean that the model is bad; statistical deviations in the estimate α̂ are normal. Hence, the next step is to assess the significance of the departure. For this purpose, the Basel Committee set up a traffic light system. It is oriented toward Type 1 errors of statistical tests. A Type 1 error is the error of rejecting a null hypothesis when it is actually true; thus, the error of accepting an alternative hypothesis when the results can be attributed to chance. In that case the departures are said to be significant. Table 5.1 illustrates the Type 1 error.

Decision        Model correct    Model incorrect
Accept H_0      OK               Type 2 error
Reject H_0      Type 1 error     OK

Table 5.1: Decision errors in statistical hypothesis tests
Type 1 errors can be calculated using the CDF. The CDF of the binomial distribution is given by

F(k; T, α) = P(X ≤ k) = Σ_{i=0}^{k} C(T, i) α^i (1 − α)^{T−i}.
The Basel Committee has defined that a risk model is in the red zone when k violations produce a Type 1 error with a probability of at most 0.01%. This is exactly the case when the probability of at most k violations amounts to at least 99.99%. The departure of the model is then said to be significant and its use must be rejected. On the other hand, a model is in the green zone when the probability of at most k violations amounts to less than 95%. The departure of the model is then not significant and the model can be used further. The yellow zone lies between the green and red zones; models in this zone are calibrated with a plus factor for further calculations. Detailed information about these Basel rules can be found in Jorion [28, chapter 6.2.2]. The definitions of the Basel zones are shown in table 5.2. It is easy to see that the Basel zones depend on the parameters confidence level c = 1 − α and number of backtest iterations T. The traffic light system and its graphical interpretation for the parameter choice T = 250 and α = 1% are displayed in table 5.3 and figure 5.2.
Basel zone     Probability zone
Red zone       P(X ≤ k) = Σ_{i=0}^{k} C(T, i) α^i (1 − α)^{T−i} ≥ 99.99%
Yellow zone    95% ≤ P(X ≤ k) < 99.99%
Green zone     P(X ≤ k) < 95%

Table 5.2: Traffic light zones of the Basel Committee

Violations   Probability for   Probability for up   Probability of Type 1 error if     Basel zone
k            k violations      to k violations      rejected for k or more violations
0             8.106%            8.106%              100.000%                           Green
1            20.469%           28.575%               91.894%                           Green
2            25.742%           54.317%               71.425%                           Green
3            21.495%           75.812%               45.683%                           Green
4            13.407%           89.219%               24.188%                           Green
5             6.663%           95.882%               10.781%                           Yellow
6             2.748%           98.630%                4.118%                           Yellow
7             0.968%           99.597%                1.370%                           Yellow
8             0.297%           99.894%                0.403%                           Yellow
9             0.081%           99.975%                0.106%                           Yellow
10            0.020%           99.995%                0.025%                           Red

Table 5.3: Sample traffic light system of the Basel Committee for T = 250 and α = 1%
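The probabilities in table 5.3 follow directly from the binomial CDF; the following standard-library Python sketch reproduces them and the zone boundaries for T = 250 and α = 1%:

```python
# Reproduce the Basel traffic light zones from the binomial CDF.

from math import comb

def binom_cdf(k, T, alpha):
    """P(X <= k) for X ~ B(T, alpha)."""
    return sum(comb(T, i) * alpha**i * (1 - alpha)**(T - i)
               for i in range(k + 1))

def basel_zone(k, T=250, alpha=0.01):
    """Zone of a model with k violations, per the thresholds of table 5.2."""
    p = binom_cdf(k, T, alpha)
    if p >= 0.9999:
        return "red"
    return "yellow" if p >= 0.95 else "green"

# k = 4 is still green, k = 5 enters the yellow zone, k = 10 is red
```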
Figure 5.2: Sample traffic light system of the Basel Committee graphically for T = 250 and α = 1% (probability for k or fewer violations, probability of a Type 1 error, yellow and red zone thresholds)

5.2 Backtesting Expected Shortfall
The theory and implementation of VaR backtesting in the previous section is simple and intuitive. In contrast, backtesting ES is much more complicated. The ES is a conditional mean of the losses that exceed the VaR level, and because of this conditional mean it is not possible to construct a Bernoulli trial as the test statistic. Furthermore, recent research on elicitable risk measures led to the assumption that ES is not backtestable. In particular, elicitability started a confusing debate.
5.5 Definition of Elicitability: A statistical functional ν(Y) of a random variable Y is defined as elicitable if it minimizes the expected value of a scoring function S:

ν(Y) = argmin_x E[S(x, Y)]
In the context of the Basel backtesting framework, it has been argued that elicitability is a key property for a risk measure to be backtestable. Several authors support this statement, for example Embrechts and Hofert (2013) [21] and Tasche (2013) [47]. If ν is elicitable, then given historical point predictions x_t for the statistic and realizations y_t of the random variable, the natural way to perform backtesting is via the average realized score

Ŝ = (1/T) Σ_{t=1}^{T} S(x_t, y_t).

It is a well-known fact from quantile estimation that the quantile function, and thus the VaR, is elicitable with the score function S(x, y) = (I(x > y) − α)(x − y). Further research has shown that ES is not elicitable, which sounded to some researchers and supervisors like a formal proof that it is not backtestable. An advanced description of this topic can be found, for example, in Gneiting (2010) [25] and in Bellini and Bignozzi [10].
The problem of backtesting ES was widely discussed in the popular risk management literature, for example in the article Back-testing Expected Shortfall: Mission Possible? [14] published on Risk.net. Finally, the research of Acerbi and Szekely [2] in December 2014 showed that elicitability is not necessary for backtesting. Furthermore, they suggest three methods to backtest ES. This thesis focuses on the two tests that are based on and similar to the current Basel VaR backtesting framework.

5.2.1 Test 1: Testing ES after VaR
The first test is inspired by the conditional definition of ES (3.31):

ES_c^{t,H} = E(L^H | L^H ≥ VaR_c^{t,H})
1 = E(L^H / ES_c^{t,H} | L^H ≥ VaR_c^{t,H})
E(L^H / ES_c^{t,H} − 1 | L^H ≥ VaR_c^{t,H}) = 0        (5.1)
Acerbi and Szekely suggest testing the magnitude of the realized exceedances against the model exceedances, provided the VaR backtest of the Basel Committee has been passed. Using the indicator function Î^H_{t+H}, the test statistic follows from equation (5.1):

Z_1(L^H) = ( Σ_{t=1}^{T} (L^H_t · Î^H_{t+H}) / ES_c^{t,H} ) / Î_{W_BT} − 1        (5.2)

if Î_{W_BT} = Σ_{t=1}^{T} Î^H_{t+H} > 0.
As the null hypothesis, they choose

H_0 : P_t^{[α]} = F_t^{[α]} for all t,

where P_t^{[α]} = min(1, P_t(x)/α) is the distribution tail for x > VaR_c^{t,H}. The alternatives are

H_1 : ES_{c,F}^{t,H} ≥ ES_c^{t,H} for all t, and ES_{c,F}^{t,H} > ES_c^{t,H} for some t;
      VaR_{c,F}^{t,H} = VaR_c^{t,H} for all t.
Under the assumption that the VaR has been successfully tested, it is easy to see that the predicted VaR is still correct under H_1. The realized value Z_1(L^H) is expected to be zero, and the null hypothesis is rejected when it is negative; that is, E_{H_0}(Z_1 | Î_{W_BT} > 0) = 0 and E_{H_1}(Z_1 | Î_{W_BT} > 0) < 0 (proof in proposition A.2 of Acerbi and Szekely [2]).
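A minimal Python sketch of the statistic Z_1 in equation (5.2), with invented forecast numbers:

```python
# Z_1: average the ratio of realized exceedance losses to the ES forecast
# over the violation days, then subtract one (equation (5.2)).

def z1_statistic(losses, var_forecasts, es_forecasts):
    """Z_1 = (1/N) * sum over violation days of L_t / ES_t - 1."""
    terms = [loss / es
             for loss, var, es in zip(losses, var_forecasts, es_forecasts)
             if loss >= var]
    if not terms:
        return None          # the test is undefined without violations
    return sum(terms) / len(terms) - 1.0

losses = [0.01, 0.06, 0.005, 0.08]   # invented observed losses
var_f = [0.05] * 4                   # invented VaR forecasts
es_f = [0.07] * 4                    # invented ES forecasts
z1 = z1_statistic(losses, var_f, es_f)
# the two exceedances 0.06 and 0.08 average exactly to the forecast 0.07,
# so Z_1 is zero; larger realized exceedances would push Z_1 negative only
# after the sign convention of losses is fixed (here: loss / ES above one)
```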

5.2.2 Test 2: Testing ES Directly
The second test is inspired by the unconditional definition of ES (3.31):

ES_c^{t,H} = E(L^H_t · Î^H_{t+H}) / α
1 = E(L^H_t · Î^H_{t+H} / (α · ES_c^{t,H}))
E(L^H_t · Î^H_{t+H} / (α · ES_c^{t,H})) − 1 = 0        (5.3)

Using the indicator function Î^H_{t+H}, the test statistic follows from equation (5.3):

Z_2(L^H) = Σ_{t=1}^{T} (L^H_t · Î^H_{t+H}) / (T · α · ES_c^{t,H}) − 1        (5.4)
Acerbi and Szekely suggest test hypotheses similar to those of Test 1:

H_0 : P_t^{[α]} = F_t^{[α]} for all t
H_1 : ES_{c,F}^{t,H} ≥ ES_c^{t,H} for all t, and ES_{c,F}^{t,H} > ES_c^{t,H} for some t;
      VaR_{c,F}^{t,H} ≥ VaR_c^{t,H} for all t
This test can be used for ES directly, without testing VaR first. Again, E_{H_0}(Z_2) = 0 and E_{H_1}(Z_2) < 0 (proof in proposition A.3 of Acerbi and Szekely [2]), which means that the expected value of Z_2(L^H) is zero and the test rejects the null hypothesis when it is negative.
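A corresponding sketch of Z_2 from equation (5.4); unlike Z_1, it is defined even without a prior VaR test. The numbers are invented so that the statistic is zero by construction:

```python
# Z_2: average L_t * I_t / (alpha * ES_t) over all T days, minus one
# (equation (5.4)). No separate VaR backtest is required beforehand.

def z2_statistic(losses, var_forecasts, es_forecasts, alpha=0.025):
    T = len(losses)
    total = sum(loss / (alpha * es)
                for loss, var, es in zip(losses, var_forecasts, es_forecasts)
                if loss >= var)   # indicator I_t via the violation condition
    return total / T - 1.0

# with exactly alpha * T violations, each of size ES, Z_2 is zero:
losses = [0.07 if i < 5 else 0.0 for i in range(200)]  # 5 = 0.025 * 200
z2 = z2_statistic(losses, [0.05] * 200, [0.07] * 200)
```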
5.2.3 Significance and Power
As with the VaR backtest approach of the Basel Committee, a deviation from the expected values E_{H_0}(Z_1 | Î_{W_BT} > 0) = 0 and E_{H_0}(Z_2) = 0 does not mean that the model is bad. Statistical deviations are normal, and the next step assesses the significance of the deviation. This step was very easy for the Basel backtest because the distribution of the test statistic under H_0 was known: the binomial distribution. This property is not known for the presented ES backtests; consequently, the significance must be simulated. Therefore, the distribution P_Z of the statistic is simulated under H_0 and compared with the realization Z(x). The p-value can be computed as p = P_Z(Z(x)):

simulate independent   L^i_t ~ P_t   for all t, i = 1, ..., M
compute                Z^i = Z(L^i)
estimate               p = (1/M) Σ_{i=1}^{M} I(Z^i < Z(x))

where M denotes a large number of scenarios. Finally, the test is accepted if p > α and rejected if p ≤ α. The storage of more information and the simulation of a large number of scenarios are the major differences between backtesting VaR and ES. This leads to some technical issues, which are discussed in the next chapter. The power of the tests is computed similarly to the significance, with the difference that the distribution P_Z is simulated under H_1.
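The Monte Carlo estimation of the p-value can be sketched as follows. The loss simulator here is a stand-in (the thesis draws the losses from the fitted GARCH-EVT-copula model under H_0), and the sample-mean statistic is likewise only illustrative:

```python
# Monte Carlo p-value: simulate the statistic M times under H_0 and count
# how often it falls below the observed realization. The simulator and the
# statistic below are invented stand-ins for illustration.

import random

def simulate_p_value(z_observed, statistic, simulate_losses, M=1000):
    """Fraction of simulated statistics below the observed one."""
    count = sum(1 for _ in range(M)
                if statistic(simulate_losses()) < z_observed)
    return count / M

random.seed(0)
p = simulate_p_value(
    0.0,
    statistic=lambda xs: sum(xs) / len(xs),            # stand-in statistic
    simulate_losses=lambda: [random.gauss(0, 1) for _ in range(50)],
    M=500)
# with the observed value at the centre of P_Z, p lands near 0.5 and the
# test is accepted for any usual significance level
```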

6 Backtesting Results
This chapter applies the backtesting methodology for VaR and the two ES tests of the previous chapter to four sample portfolios. Table 6.1 gives the portfolios and parameters according to the Basel framework standard. For example, the backtesting window is typically one financial year, and the liquidation period is 1 or 10 trading days. Since the market risk amendment, the significance level for a VaR estimation is α_VaR = 1%. With the movement to ES, the Basel supervisors suggest a significance level of α_ES = 2.5% for ES as an estimation equivalent to VaR at 1%. Acerbi and Szekely [2, page 3] confirm this choice of significance level. The parameter n_i is the number of stock shares of asset i (see definition 3.3). For simplicity, it is supposed that the numbers of stock shares do not change over time; this is a realistic and practicable assumption. The implementation of the backtesting procedure is very technical and is shown in appendix C.
Parameter   Portfolio 1       Portfolio 2       Portfolio 3        Portfolio 4
Stocks      BMW               SAP               Daimler            Commerzbank
            Deutsche Bank     Volkswagen        Münchener Rück     Volkswagen
            Adidas            Bayer             RWE                Lanxess
            Siemens           Continental       Deutsche Bank      Allianz
            E.On SE           Deutsche Börse    Merck              Infineon
            Deutsche Post
W_BT        [Jan. 1, 2013;    [Jan. 1, 2013;    [Jan. 1, 2009;     [Jan. 1, 2008;
            Dec. 31, 2013]    Dec. 31, 2013]    Dec. 31, 2009]     Dec. 31, 2008]
H           1                 10                1                  10
#W_HT       500               500               500                500
α_VaR       1%                1%                1%                 1%
α_ES        2.5%              2.5%              2.5%               2.5%
n_i         5000              5000              5000               5000
M           1000              1000              1000               1000

Table 6.1: Backtesting parameters of the 4 sample portfolios
Figure 6.1 shows the observed loss and the estimates of ES_{2.5%}^{t,H}, VaR_{2.5%}^{t,H} and VaR_{1%}^{t,H} for each iteration of the backtesting. If an observed loss (blue line) touches the line of the VaR estimation, a violation occurs. It can be seen that the choice of the 2.5% significance level for ES is a sound decision by the Basel Committee: the estimates of ES_{2.5%}^{t,H} and VaR_{1%}^{t,H} are very similar across all iterations and examples. Table 6.2 shows a full listing of the backtesting results. The test statistics Z_1 and Z_2 are simulated M = 1000 times. Figures 6.2 and 6.4 show the probability density functions and figures 6.3 and 6.5 the CDFs of the test statistics Z_1 and Z_2.
The first finding is that all example backtests have been successful in the sense of the Basel traffic light concept, even the last example. Portfolio four demonstrates how ES and VaR work in extreme bear markets: the backtesting period of this example covers the biggest financial crisis since 1929. The VaR backtest signals problems through the yellow traffic light zone; the risk measurement should be calibrated, but the model is accepted. The second ES test agrees with that. The expected value E(Z_2) deviates widely from 0, which signals a discrepancy between forecast and observed loss (see figure 6.1d). Example two shows a comparable effect for the ES test. These findings suggest that the ES tests have problems with large liquidation periods H. Examples one and three show significant results. Under H_0, all realizations Z(x) of all examples are significant. With the simulated CDFs of the test statistics Z_1 and Z_2 and a predefined significance level of the tests, for example α_{Z_1} = α_{Z_2} = 5%, a traffic light system similar to the Basel concept can be created. The red line in figures 6.3 and 6.5 indicates the significance level of 5%, which equates to the lower limit of the green zone. The examples demonstrate that the distributions of Z_1 differ significantly for each portfolio and parameter choice; the same applies to Z_2. For all practical purposes this means that parameters such as the p-value, power, Basel zone and distribution of the test statistics can only be estimated with a large number of observations. While the distribution of the test statistic for the Basel traffic light concept was known, the statistics Z_1 and Z_2 include the random loss variables L^H_t and the estimates of ES_c^{t,H}. This combination leads to an unknown and unpredictable distribution of the test statistics.
The next important nding is the large calculation time of the backtests. The shown
sample portfolio backtests have been implemented with M = 1, 000 scenarios. It is rec-
ommended to increase the number of simulations to at least M = 10, 000. To compute
the power it is necessary to run again a large number of experiments for each example
portfolio. That strategy results in a time problem, especially if nancial institutes
have many portfolios. MATLAB is already a fast numerical computer language and

Chapter 6. Backtesting Results
The code is performance-optimized by using matrix and vector operations instead of for loops and by reading from a database instead of downloading the Yahoo data in every loop. The running time could be improved further by using faster machines and by implementing parallel computing in MATLAB. I expect that these improvements could reduce the running time by a factor of 10 with current technology. The problem of long calculation times should not be underestimated by the Basel Committee.
Figure 6.1: Observed loss, ES and VaR estimations for each backtest iteration of four sample portfolios with parameters of table 6.1. A violation occurs if the observed loss (blue line) touches a VaR estimation (green or black line). [Plot data omitted; panels (a)-(d) show Portfolios 1-4, plotting relative loss against backtesting iteration for the observed loss, ES_{t,H} at 2.5%, and VaR_{t,H} at 1% and 2.5%.]

| Parameter | Portfolio 1 | Portfolio 2 | Portfolio 3 | Portfolio 4 |
|---|---|---|---|---|
| T | 260 | 251 | 256 | 247 |
| Calculation time | 17 hours | 70 hours | 17 hours | 70 hours |
| Value-at-Risk Basel Traffic Light Concept Results | | | | |
| Z0(L^H) = number of 1%-VaR violations | 3 | 0 | 2 | 7 |
| Observed violation rate (1%) | 1.15% | 0% | 0.78% | 2.99% |
| Probability for up to k violations | 73.64% | 8.03% | 52.78% | 99.73% |
| Cumulative type 1 error for up to k violations | 48.23% | 100% | 72.64% | 0.98% |
| Basel zone | Green | Green | Green | Yellow |
| Expected Shortfall Test 1 Results | | | | |
| Z1(L^H) | 0.1405 | -0.11741 | -0.1088 | 0.17434 |
| E(Z1(L^H)) | 0.0920 | -0.7187 | -0.1522 | 0.1641 |
| Number of 2.5%-VaR violations | 4 | 2 | 6 | 10 |
| Simulated p-value of realization Z(x) | 89.39% | 23.23% | 41.64% | 88.89% |
| Test result with M simulations and alpha_Z1 = 5% | accept H0 | accept H0 | accept H0 | accept H0 |
| Expected Shortfall Test 2 Results | | | | |
| Z2(L^H) | -0.2982 | -0.7187 | -0.2088 | 1.0074 |
| E(Z2(L^H)) | -0.2133 | -0.7159 | -0.1796 | 0.9977 |
| Number of 2.5%-VaR violations | 4 | 2 | 6 | 10 |
| Simulated p-value of realization Z(x) | 24.63% | 23.23% | 29.13% | 84.85% |
| Test result with M simulations and alpha_Z2 = 5% | accept H0 | accept H0 | accept H0 | accept H0 |

Table 6.2: Results of the VaR and ES backtests with parameters of table 6.1. The first scenario of the M simulations is the realization Z(x); the p-value of the realization Z(x) is simulated over M scenarios.

Figure 6.2: Probability density function of Z_1(L^H) of each sample portfolio backtesting with parameters of table 6.1. [Plot data omitted; panels (a)-(d) show Portfolios 1-4, plotting the density f(Z1) under H_0 against Z1.]
Figure 6.3: CDF of Z_1(L^H) of each sample portfolio with parameters of table 6.1. [Plot data omitted; panels (a)-(d) show Portfolios 1-4, plotting F(Z1) under H_0 against Z1, with the 5% significance level marked.]

Figure 6.4: Probability density function of Z_2(L^H) of each example portfolio backtesting with parameters of table 6.1. [Plot data omitted; panels (a)-(d) show Portfolios 1-4, plotting the density f(Z2) under H_0 against Z2.]
Figure 6.5: CDF of Z_2(L^H) of each sample portfolio backtesting with parameters of table 6.1. [Plot data omitted; panels (a)-(d) show Portfolios 1-4, plotting F(Z2) under H_0 against Z2, with the 5% significance level marked.]

7 Summary and Conclusion
The Basel Committee has opened a new era of risk measurement with the move from VaR to ES. This step was necessary because of the weaknesses of VaR, which is not subadditive and provides no information about the loss that exceeds the VaR level. With the publication of the consultative paper, the Basel Committee provided no advice on how to backtest ES. Backtests of risk models play a key role in the Basel framework because they provide statistical information about the significance of a model. Furthermore, risk models can be compared with each other through their backtest results. While the backtest procedure for VaR is simple and established in the financial industry, the backtesting of ES is unexplored and widely unknown. Building on the work of Acerbi and Szekely [2], this thesis shows how to implement a backtest for market price risk. A backtest procedure tests the underlying model of the risk measure; hence, the first step of this thesis was to find an accurate approach to model financial returns. The advanced research on the statistical properties of financial returns, known as stylized facts, leads to the GARCH-EVT-Copula approach presented in this thesis. The approach starts by extracting the filtered residuals from each return series with an AR(1) and an asymmetric GJR-GARCH(1,1) model to compensate for the autocorrelation and heteroscedasticity of returns. The marginal CDF of each asset is modeled semi-parametrically, using EVT to capture the fat-tail property. The dependence among the simulated residuals is modeled by fitting a Student's t-copula to the data. Thereafter, the autocorrelation and heteroscedasticity are reintroduced into the residuals. The result of this process is 10,000 simulated portfolio returns for a given time horizon. Finally, the VaR and ES of the portfolio are assessed over this horizon. Accurate VaR and ES estimates for portfolios, such as those produced by this approach, are essential for banks, insurers, hedge funds and other financial institutions.
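The final step of the pipeline, estimating VaR and ES from the simulated portfolio returns, can be sketched in a few lines. This is an illustrative Python sketch (the thesis uses MATLAB); the Student-t placeholder draws stand in for the actual GARCH-EVT-Copula output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder: 10,000 simulated portfolio losses over the horizon H.
# In the thesis these come from the GARCH-EVT-Copula simulation.
losses = rng.standard_t(df=4, size=10_000) * 0.02

alpha = 0.025                                # ES confidence level used in the thesis
var = np.quantile(losses, 1 - alpha)         # VaR: (1 - alpha)-quantile of the loss
es = losses[losses >= var].mean()            # ES: mean loss beyond the VaR level
```

Because ES averages the losses beyond the VaR level, it always dominates VaR at the same confidence level; this is the extra tail information that motivates the move from VaR to ES.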
The results of the backtests show that the basic ideas of the backtests differ between VaR and ES, but the important finding of this thesis is that ES is backtestable. These ES tests, however, reveal new technical challenges for financial institutions. On the one hand, the significance and power of the tests can only be estimated from a large number of observations; on the other hand, the test statistics require the storage of more information. The additional storage of information plays an insignificant role for current computing environments. In contrast, the results show that the large number of scenarios required to compute the significance and power of the tests creates a runtime issue, which is harder to handle than the storage problem. Financial institutions with a large number of portfolios need considerable processing power to handle the annual backtests. In summary, the Basel Committee's move from VaR to ES is the right approach for risk management. The change of risk measure will create substantial effort for financial institutions because of the implementation of the new risk measure and its backtest procedure. I expect that the Basel Committee will adopt one of the methods suggested by Acerbi and Szekely in a future framework. This move requires advanced research on the significance and power of these tests beforehand. Other approaches, for example with other types of copulas or with historical simulation, should be tested as well.

Bibliography
[1] Acerbi, Carlo, Dirk Tasche, 2002, http://arxiv.org/pdf/cond-mat/0104295.pdf [visited on January 8, 2015]
[2] Acerbi, Carlo, Balazs Szekely, 2014, Backtesting Expected Shortfall, technical document, http://www.msci.com/resources/research/articles/2014/Research_Insight_Backtesting_Expected_Shortfall_December_2014.pdf [visited on January 15, 2015], mimeo
[3] Acerbi, Carlo, Balazs Szekely, 2014, https://www.parmenides-foundation.org/fileadmin/redakteure/events/Workshop_Kondor_02._03.06.2014/Acerbi_ESBacktest_reduced_size.pdf [visited on January 9, 2015]
[4] Artzner, Philippe, Freddy Delbaen, Jean-Marc Eber, David Heath,
1998, http://www.math.ethz.ch/~delbaen/ftp/preprints/CoherentMF.pdf
[visited on January 6, 2015]
[5] Bank for International Settlements, 2014, http://www.bis.org/index.htm
[visited on November 5, 2014]
[6] Bank for International Settlements, 2006, Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework, Comprehensive Version, http://www.bis.org/publ/bcbs128.htm [visited on November 5, 2014]
[7] Bank for International Settlements, 2004, International Convergence of Capital Measurement and Capital Standards, http://www.bis.org/publ/bcbs107.pdf [visited on November 5, 2014]
[8] Bank for International Settlements, 2014, Fundamental Review of the Trading Book: A Revised Market Risk Framework, http://www.bis.org/publ/bcbs219.pdf [visited on November 5, 2014]
[9] Bank for International Settlements, 2014, Fundamental Review of the Trading Book: A Revised Market Risk Framework, http://www.bis.org/publ/bcbs265.pdf [visited on November 5, 2014]
[10] Bellini, Fabio, Valeria Bignozzi, 2013, Elicitable Risk Measures, https://stat.ethz.ch/u/bvaleria/elicitability [visited on January 16, 2015]
[11] Bob, Ngoga Kirabo, 2013, Value at Risk Estimation: A GARCH-EVT-Copula Approach, http://www2.math.su.se/matstat/reports/master/2013/rep6/report.pdf [visited on January 1, 2015]
[12] Bollerslev, T., 1986, Generalized Autoregressive Conditional Heteroskedasticity, Journal of Econometrics 31, 307-327
[13] Börse Frankfurt, 2014, Handelszeiten, http://www.boerse-frankfurt.de/de/wissen/marktplaetze/handelszeiten [visited on December 27, 2014]
[14] Carver, Laurie, 2014, Back-testing Expected Shortfall: Mission Possible?, http://www.risk.net/risk-magazine/feature/2375204/back-testing-expected-shortfall-mission-possible [visited on December 10, 2014], mimeo
[15] Center for Research in Security Prices, 2014, http://www.crsp.com/ [visited on October 28, 2014]
[16] Cottin, Claudia, Sebastian Döhler, 2013, Risikoanalyse, Springer Fachmedien Wiesbaden, Wiesbaden
[17] Daníelsson, Jón, 2012, Financial Risk Forecasting, John Wiley & Sons Ltd.,
Chichester
[18] Daníelsson, Jón, Bjorn Jorgensen, Mandira Sarma, Gennady Samorodnitsky, Casper de Vries, 2010, Fat Tails, VaR and Subadditivity, http://www.riskresearch.org/index.php?paperid=35#papercontent
[19] Deloitte, 2014, Fundamental Review of the Trading Book: Überblick und Neuerungen, http://www.deloitte.com/assets/Dcom-Germany/LocalAssets/Documents/09_Finanzdienstleister/2013/FSI_FRS_White_Paper_62_Final_review_of_the_trading_book_2014.pdf [visited on October 28, 2014]
[20] Deutsch, Hans-Peter, 2004, Derivate und Interne Modelle, Schäffer-Poeschel Verlag, Stuttgart

[21] Embrechts, Paul, Marius Hofert, 2013, Statistics and Quantitative Risk Management for Banking and Insurance, http://www.math.ethz.ch/~embrecht/ftp/qrm_stat_review.pdf [visited on December 15, 2014]
[22] Engle, Robert, 1982, Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation, Econometrica 50, 987-1008
[23] Fama, Eugene F., 1963, Mandelbrot and the Stable Paretian Hypothesis, http://web.williams.edu/Mathematics/sjmiller/public_html/341/handouts/Fama_MandelbroitAndStableParetianHypothesis.pdf [visited on November 15, 2014]
[24] Glosten, Lawrence R., Ravi Jagannathan, David E. Runkle, 1993, On the Relation between the Expected Value and the Volatility of the Nominal Excess Return on Stocks, The Journal of Finance, Vol. 48, No. 5, pp. 1779-1801
[25] Gneiting, Tilmann, 2011, Making and Evaluating Point Forecasts, http://arxiv.org/pdf/0912.0902.pdf [visited on January 18, 2015]
[26] Greenspan, Alan, 1997, General Discussion: Why Is Financial Stability a Goal of Public Policy?, page 54, http://www.kansascityfed.org/publicat/sympos/1997/pdf/s97disc1.pdf [visited on November 12, 2014]
[27] Hull, John, Alan White, 2014, Hull and White on the Pros and Cons of Expected Shortfall, http://www.risk.net/risk-magazine/opinion/2375185/hull-and-white-on-the-pros-and-cons-of-expected-shortfall [visited on December 10, 2014], mimeo
[28] Jorion, Philippe, 2007, Value at Risk: The New Benchmark for Managing Financial Risk, 3rd ed., McGraw-Hill, New York
[29] J.P.Morgan/Reuters, 2014, RiskMetrics: Technical Document, http://pascal.iseg.utl.pt/~aafonso/eif/rm/TD4ePt_2.pdf [visited on November 5, 2014]
[30] Kleintop, Jeff, 2012, Stock Market Investors Have Become Absurdly Impatient, http://www.businessinsider.com/stock-investor-holding-period-2012-8 [visited on October 29, 2014]
[31] Kolmogorov, Andrej N., 1933, Grundbegriffe der Wahrscheinlichkeitsrechnung, Springer, Berlin
[32] Mathworks, 2014, http://www.mathworks.de/ [visited on November 14, 2014]

[33] Mathworks, 2014, Econometrics Toolbox, http://de.mathworks.com/help/econ/index.html [visited on December 28, 2014]
[34] Mathworks, 2014, Optimization Toolbox, http://de.mathworks.com/products/optimization/index.html [visited on December 28, 2014]
[35] Mathworks, 2014, Statistics Toolbox, http://de.mathworks.com/help/stats/index.html [visited on December 22, 2014]
[36] Mathworks, 2014, Using Extreme Value Theory and Copulas to Evaluate Market Risk, http://de.mathworks.com/help/econ/examples/using-extreme-value-theory-and-copulas-to-evaluate-market-risk.html?prodcode=ET&language=en [visited on December 22, 2014]
[37] McNeil, Alexander, Rüdiger Frey, Paul Embrechts, 2005, Quantitative
Risk Management, Princeton University Press, Princeton
[38] Nagler & Company, 2013, Fundamental Review of the Trading Book, http://www.nagler-company.com/fileadmin/Unternehmen/Themen/Fundamental_DE.pdf [visited on November 5, 2014]
[39] Ortmann, Karl Michael, 2009, Lebensversicherungsmathematik, Vieweg +
Teubner, Wiesbaden
[40] Rachev, Zari, Boryana Racheva-Iotova, Stoyan Stoyanov, 2010, Capturing Fat Tails, http://www.risk.net/risk-magazine/analysis/1603830/capturing-fat-tails [visited on November 12, 2014], mimeo
[41] Reuters, 2014, Porsche Prevails at U.S. Appeals Court over VW Squeeze, http://www.reuters.com/article/2014/08/15/us-porsche-volkswagen-idUSKBN0GF1KJ20140815?feedType=RSS&feedName=businessNews [visited on November 2, 2014]
[42] Risk.net, 2014, http://www.risk.net/ [visited on December 10, 2014]
[43] Schweizer, B., A. Sklar, 1983, Probabilistic Metric Spaces, North-Holland/Elsevier, New York
[44] Schmid, Friedrich, Mark Trede, 2006, Finanzmarktstatistik, Springer-Verlag
Berlin Heidelberg, Heidelberg
[45] Schlittgen, Rainer, 1989, Angewandte Zeitreihenanalyse mit R, Oldenbourg
Wissenschaftsverlag GmbH, Munich

[46] Sklar, A., 1959, Fonctions de répartition à n dimensions et leurs marges, Publications de l'Institut de Statistique de l'Université de Paris, 8, 229-231
[47] Tasche, Dirk, 2013, Risk Measures: Yet Another Search of a Holy Grail, http://www-old.newton.ac.uk/programmes/TGM/seminars/2013032809301.pdf [visited on January 5, 2015]
[48] University of Oxford, 2005, Oxford Advanced Learner's Dictionary, Oxford
University Press, Oxford
[49] Wolke, Thomas, 2008, Risikomanagement, Oldenbourg Wissenschaftsverlag
GmbH, Munich
[50] Wood, Duncan, 2014, In-depth Introduction: Expected Shortfall, http://www.risk.net/risk-magazine/opinion/2378490/in-depth-introduction-expected-shortfall [visited on December 12, 2014], mimeo
[51] Yahoo! Finance, 2014, https://de.finance.yahoo.com/ [visited on October 28, 2014]
[52] Yahoo! Finance, 2014, About Historical Prices: The Adjusted Close, https://help.yahoo.com/kb/finance/historical-prices-sln2311.html [visited on October 28, 2014]

Appendices

A Continuous Uniform Distribution
Notation: U(a, b)
Parameters: −∞ < a < b < ∞
Probability density function:
    f(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise
Cumulative distribution function:
    F(x) = 0 for x < a, (x − a)/(b − a) for a ≤ x < b, 1 for x ≥ b
Inverse:
    F^(−1)(p) = a + p(b − a) for 0 < p < 1
First moment of the distribution:
    E(X) = (a + b)/2
Second centralized moment (or variance):
    Var(X) = (b − a)² / 12
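The inverse-CDF relation above is the building block of the inverse-transform sampling used throughout copula simulation. A quick numerical check (an illustrative Python sketch with assumed bounds a = 2, b = 5):

```python
import numpy as np

a, b = 2.0, 5.0   # assumed example bounds

def cdf(x):
    # Piecewise CDF of U(a, b): 0 below a, linear on [a, b), 1 at and above b
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def inv_cdf(p):
    # Inverse CDF: F^-1(p) = a + p * (b - a) for 0 < p < 1
    return a + p * (b - a)

# Inverse-transform check: F(F^-1(p)) recovers p on (0, 1)
p = np.linspace(0.01, 0.99, 99)
ok = np.allclose(cdf(inv_cdf(p)), p)

# Moment check against E(X) = (a + b)/2 and Var(X) = (b - a)^2 / 12
u = np.random.default_rng(2).uniform(a, b, 200_000)
```

With a = 2 and b = 5 the sample mean and variance should be close to 3.5 and 0.75.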

B Student's t-Distribution
Notation: t(ν)
Parameters: −∞ < t < ∞, degrees of freedom ν > 0
Probability density function:
    f(t; ν) = Γ((ν + 1)/2) / ( √(νπ) Γ(ν/2) ) · (1 + t²/ν)^(−(ν + 1)/2),
    where Γ is the gamma function
Cumulative distribution function:
    F(t; ν) = ∫_{−∞}^{t} f(u; ν) du = 1 − (1/2) I_{x(t)}(ν/2, 1/2),
    where x(t) = ν/(t² + ν) and I is the regularized incomplete beta function
First moment of the distribution:
    E(X) = 0 for ν > 1, otherwise undefined
Second centralized moment (or variance):
    Var(X) = ν/(ν − 2) for ν > 2, ∞ for 1 < ν ≤ 2, otherwise undefined
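The density and the variance formula above can be verified numerically. This is an illustrative Python sketch using crude Riemann integration; ν = 5 is an assumed example value.

```python
import math

def t_pdf(t, nu):
    # Student's t density with nu degrees of freedom
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1 + t * t / nu) ** (-(nu + 1) / 2)

nu = 5
dx = 0.01
xs = [-50 + i * dx for i in range(10001)]   # grid from -50 to 50

# Total probability mass (should be close to 1) and second moment
mass = sum(t_pdf(x, nu) for x in xs) * dx
var = sum(x * x * t_pdf(x, nu) for x in xs) * dx
```

For ν = 5 the integrated variance should be close to ν/(ν − 2) = 5/3, matching the formula above.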

C Backtesting Implementation
function [Backtest, Portfolio] = BacktestESandVaR(connection, yahooID,
BT_startdate, BT_enddate, n_hist_data, H, nTrials, M, minasset, maxasset )
%Description: Backtests value-at-risk and expected shortfall with the Basel
%traffic light concept for VaR and two tests of Acerbi for ES, as explained in the thesis
%
%INPUTS:
%connection...    Connection to a database
%yahooID...       List of Yahoo tickers of the portfolio
%BT_startdate...  First date of the backtesting window
%BT_enddate...    Last date of the backtesting window
%n_hist_data...   Number of recent historical data points used for estimation
%H...             Liquidation horizon (holding period) in trading days
%nTrials...       Number of simulated returns per scenario
%M...             Number of Monte Carlo scenarios
%minasset...      Minimal number of shares of an asset
%maxasset...      Maximal number of shares of an asset
%
%OUTPUTS:
%Portfolio...     Portfolio object (see below)
%Backtest...      Backtest object with the following variables:
%  Basel_VaR...   Value-at-risk estimation with Basel alpha 1%
%  Basel_I...     Indicator vector of Basel VaR violations
%  VaR...         VaR with ES alpha 2.5%
%  ES...          ES with ES alpha 2.5%
%  Loss...        Observed loss of the portfolio
%  I...           Indicator vector of VaR violations with ES alpha
%  Z0...          Test statistic Z0
%  Z0_p...        p-value for Z0

%  Z0_Zone...     Basel zone for Z0
%  Z1...          Test statistic Z1
%  Z1_p...        p-value for Z1
%  Z2...          Test statistic Z2
%  Z2_p...        p-value for Z2
%  Error1Z0...    Type 1 error for Z0
%  Error1Z1...    Type 1 error for Z1
%  Error1Z2...    Type 1 error for Z2
%  T...           Number of backtest iterations
%
% Author:
% Marcel Jäger (B. Sc. Applied Mathematics)
% marceljaeger89@web.de
% @Xing
%###### START ######
%Basel parameter alpha:
alpha_ES=0.025;
alpha_VaR=0.01;
% calculate the first date of the historical data window with a buffer
O_startdate=datestr(datenum(BT_startdate,'mm/dd/yyyy')- ...
    n_hist_data/250*365*1.2,'mm/dd/yyyy');
% get Portfolio object from the database (self-programmed function)
Portfolio=CreateRandomPortfolioList(connection,0, minasset, maxasset ...
    , O_startdate, BT_enddate, yahooID);
% The Portfolio object consists of the following calculated data:
% Portfolio.YahooID - vector(nx1) of Yahoo IDs
% Portfolio.SharesofaAsset - vector(nx1) number of shares of each asset
%   (random number between minasset and maxasset)
% Portfolio.NumberAssets = n (1x1): number of assets
% Portfolio.AdjClose - matrix(mxn) of each stock's adjusted closing prices
% Portfolio.TradingDays = m (1x1): number of trading days
% Portfolio.Weight - matrix(mxn) weights of each stock on every trading day
% Portfolio.StockReturn - matrix(mxn) continuous return of each equity
% Portfolio.Return - matrix(mx1) portfolio return of each trading day
%calc first and last index of the Portfolio object for the backtesting window
lastindex=max(find(Portfolio.Date <= datenum(BT_enddate,'mm/dd/yyyy')));
firstindex=min(find(Portfolio.Date >= datenum(BT_startdate,'mm/dd/yyyy')));
%number of backtest iterations
T=lastindex-firstindex-H+1;
%preallocate storage
Basel_VaR=nan(T,M); %VaR calculation with 1% alpha (VaR backtest)
Basel_I=nan(T,M);   %Violations for VaR with 1% alpha (VaR backtest)
VaR=nan(T,M);       %VaR calculation with 2.5% alpha (ES backtest)
ES=nan(T,M);        %ES calculation with 2.5% alpha (ES backtest)
Loss=nan(T,M);      %Observed loss compared to the risk measure (ES backtest)
I=nan(T,M);         %Violations for VaR with 2.5% alpha - needed for Z1 and Z2
%###### start backtesting ######
%for each index in the backtesting window
for i=firstindex:(lastindex-H)
    clear SimPortReturns; % clear old variable
    % Simulate GARCH-EVT-Copula approach for <M> scenarios and the last
    % <n_hist_data> trading days
    % SimPortReturns = (nTrials x M); each column represents one loss scenario
    SimPortReturns=SimulatePortfolioReturns(Portfolio. ...
        StockReturn(i-n_hist_data:i,:), Portfolio.Weight(i-1,:)',H,nTrials,M);
    for j=1:M
        %Backtest ES calculations (alpha = 2.5%)
        VaR(i-firstindex+1,j)=quantile(SimPortReturns(:,j), 1-alpha_ES);
        ES(i-firstindex+1,j)=mean(SimPortReturns(find(SimPortReturns(:,j) >= ...
            VaR(i-firstindex+1,j)),j));
        %Loss := (-1) * observed return over period H (sum of H returns)
        Loss(i-firstindex+1,j)=(-1)*sum(Portfolio.Return(i+1:i+H,1));
        %calc indicator function (VaR violation)
        if Loss(i-firstindex+1,j) >= VaR(i-firstindex+1,j)
            I(i-firstindex+1,j)=1;
        else
            I(i-firstindex+1,j)=0;
        end
        %Backtest VaR calculations (alpha = 1%)
        Basel_VaR(i-firstindex+1,j)=quantile(SimPortReturns(:,j), 1-alpha_VaR);
        %calc indicator function (VaR violation)
        if Loss(i-firstindex+1,j) >= Basel_VaR(i-firstindex+1,j)
            Basel_I(i-firstindex+1,j)=1;
        else
            Basel_I(i-firstindex+1,j)=0;
        end
    end
end

%###### VaR Backtest results ######
Z0=sum(Basel_I(:,1)); % test statistic
p0=binocdf(Z0,T,alpha_VaR); % p-value for the test
% type 1 error for at least k violations
if Z0 > 0
    Error1_Z0=1-binocdf(Z0-1,T,alpha_VaR);
else
    Error1_Z0=1;
end
%Basel zone for the test (self-programmed function)
Z0_zone=Baselzone(Error1_Z0);
%###### ES Backtest results ######
%Assumption:
%The scenario with index 1 (for example I(:,1)) is the realization Z(x)!
%calculate the test statistics Z1 and Z2 for the M scenarios
Z1=nan(1,M); Z2=nan(1,M); %preallocate
for i=1:M
    Z1(i)=sum(I(:,i).*Loss(:,i)./ES(:,i))/sum(I(:,i))-1;
    Z2(i)=sum(I(:,i).*Loss(:,i)./ES(:,i))/(alpha_ES*T)-1;
end
%simulate TYPE 1 ERROR; expected value: P(p <= alpha) = alpha
%p1(1) and p2(1) are the p-values of the realization Z(x)
p1=nan(1,M); p2=nan(1,M); temp_Z1=nan(1,M); temp_Z2=nan(1,M); %preallocate
for index=1:M
    for i=1:M
        if Z1(i) < Z1(index)
            temp_Z1(i)=1;
        else
            temp_Z1(i)=0;
        end
        if Z2(i) < Z2(index)
            temp_Z2(i)=1;
        else
            temp_Z2(i)=0;
        end
    end
    p1(index)=sum(temp_Z1)/(M-1);
    p2(index)=sum(temp_Z2)/(M-1);
end
% estimate error type 1 = alpha
Error1_Z1=sum(p1 <= alpha_ES)/(M-1);
Error1_Z2=sum(p2 <= alpha_ES)/(M-1);

%prepare output object Backtest
Backtest.Basel_VaR=Basel_VaR(:,1);
Backtest.Basel_I=Basel_I(:,1);
Backtest.VaR=VaR(:,1);
Backtest.ES=ES(:,1);
Backtest.Loss=Loss(:,1);
Backtest.I=I(:,1);
Backtest.Z0=Z0;
Backtest.Z0_p=p0;
Backtest.Z0_Zone=Z0_zone;
Backtest.Z1=Z1;
Backtest.Z1_p=p1(1);
Backtest.Z2=Z2;
Backtest.Z2_p=p2(1);
Backtest.Error1Z0=Error1_Z0;
Backtest.Error1Z1=Error1_Z1;
Backtest.Error1Z2=Error1_Z2;
Backtest.T=T;
end
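For readers who do not use MATLAB, the two ES test statistics from the listing above can be sketched in a few lines of Python. The arrays below are placeholder data, not output of the thesis model; only the Z1 and Z2 formulas themselves follow the listing.

```python
import numpy as np

rng = np.random.default_rng(3)

alpha = 0.025          # ES confidence level, as in the thesis
T = 250                # number of backtest iterations (placeholder)

# Placeholder backtest data: observed losses, ES forecasts, VaR violations.
loss = np.abs(rng.standard_t(df=4, size=T)) * 0.02
es = np.full(T, 0.05)                       # constant ES forecast for illustration
violation = (loss >= 0.04).astype(float)    # indicator of a 2.5%-VaR violation

# Test statistics of Acerbi and Szekely [2], as in the MATLAB listing:
#   Z1 = sum(I_t * L_t / ES_t) / sum(I_t)     - 1
#   Z2 = sum(I_t * L_t / ES_t) / (alpha * T)  - 1
ratio = violation * loss / es
z1 = ratio.sum() / violation.sum() - 1 if violation.sum() > 0 else float("nan")
z2 = ratio.sum() / (alpha * T) - 1
```

Z1 conditions on the observed violations, while Z2 normalizes by the expected number of violations alpha * T; this is why Z2 reacts to both the number and the size of the exceedances.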

D ACF plots BMW, DBK, ADS, SIE
[Plot data omitted. Panels: (a) ACF plot of financial returns of BMW; (b) ACF plot of financial returns of DBK; (c) ACF plot of financial returns of ADS; (d) ACF plot of financial returns of SIE; (e)-(h) ACF plots of squared returns of BMW, DBK, ADS and SIE. Axes: lag (0-20) vs. sample autocorrelation.]

Selbstständigkeitserklärung (Declaration of Authorship)

I hereby declare that I have written this Master's thesis independently and without outside help, and that I have used no sources other than those indicated. All passages taken verbatim from other authors, as well as passages that closely follow the lines of thought of other authors, are specially marked. This thesis has not been submitted in the same or a similar form to any other examination authority and has not been published.

Place, Date
Marcel Jäger

Details

Pages: 98
Type of Edition: Originalausgabe
Year: 2015
ISBN (PDF): 9783954899753
File size: 3.6 MB
Language: English
Institution / College: University of Applied Sciences Berlin
Publication date: 2015 (September)
Grade: 1,3
Keywords: Backtesting Expected Shortfall, Value at Risk, Expected Shortfall, Backtesting, Marktpreisrisiko, Copula, Extreme value theory, GARCH, Basel, VaR, ES, Conditional Value at Risk