Quantitative Risk Management: Value-at-risk for discrete random variables

Insolvency occurs when the total value of all assets falls below the value of liabilities. Since assets and liabilities deviate from their expected values, every financial institution faces some risk of insolvency. The purpose of risk management is to correctly estimate this risk and to adjust the portfolio composition so that it matches the company’s risk appetite. Here, we will derive the definition of value-at-risk – presumably the most important risk measure at present – and apply it to an exemplary setting with discrete random variables.

1 Measuring risk of insolvency

When introducing the definition of value-at-risk, it is helpful to first look at a different but strongly related problem: measuring the risk of insolvency for a newly founded insurance company. We assume that the portfolio of the insurance company consists only of insurance policies together with the capital raised by the company owners. In this setting, insolvency occurs whenever premium payments and capital are lower than the losses incurred in the insurance business. Defining the end-of-period wealth W,

\displaystyle W:=n\cdot \text{premiums} + C - \text{claimsize}\cdot S_{n},

where S_{n} denotes the number of loss events among the n policy holders, we are interested in the probability

\displaystyle \mathbb{P}(W<0).

This probability of insolvency (also called probability of default) shall now be determined for a simplified exemplary setup. We stipulate the following simplifications:

  • fixed number of policy holders n
  • deterministic claim size
  • independent loss events
  • known loss event probability p
  • given capital level, zero other assets
  • given premium payments
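
Under these simplifications, the number of loss events S_{n} follows a binomial distribution with parameters n and p, so that the probability of insolvency reduces to a finite sum of binomial probabilities:

\displaystyle \mathbb{P}(W<0)=\sum_{k:\ n\cdot \text{premiums}+C-\text{claimsize}\cdot k<0}\binom{n}{k}p^{k}(1-p)^{n-k}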

We therefore first specify values for all relevant quantities, in order to fully determine the distribution of the end-of-period wealth.

##############################################
## visualize VaR for wealth and loss events ##
##############################################

## specify settings
nPersons <- 20
claimSize <- 20
p <- 0.6
premiums <- 14
capital <- 20

## get profit distribution
lossEvents <- rev(0:nPersons)
profitEvents <- nPersons*premiums - (lossEvents*claimSize)
wealthEvents <- profitEvents + capital

## get probabilities
eventProbs <- dbinom(lossEvents, size = nPersons, prob = p)

## get expected profit
expProfit <- sum(profitEvents*eventProbs)

## get wealth distribution
wealthDistr <- data.frame(xVals = wealthEvents,
                          pVals = eventProbs)

library(ggplot2)

## create profit plot
pp1 <- ggplot() +
    geom_bar(aes(x=xVals, y=pVals), width=5, stat="identity",
             data=wealthDistr) +
    xlab("wealth") + ylab("probability")

For the chosen settings, the end-of-period wealth distribution is visualized in the following plot.

[Figure discrete_var-1.png: end-of-period wealth distribution]

Now, in order to derive the probability of default (PD),

\displaystyle \text{PD}:=\mathbb{P}(W<0),

we need to sum up the probabilities of all wealth outcomes with negative value. Decomposed into smaller steps, this amounts to

  • identifying all negative outcomes
  • extracting the associated probabilities
  • summing up the probabilities

(pd <- sum(eventProbs[wealthEvents < 0]))

For the example, we get a probability of default of

[1] 0.05095195
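
As a plausibility check, the same value can also be obtained directly from the binomial distribution function: wealth is negative exactly when total claim payments exceed premium income plus capital. A minimal sketch (the helper variable maxLossesCovered is introduced only for this check):

## cross-check via the binomial cdf: wealth is negative whenever the number of
## loss events exceeds the number of claims covered by premiums plus capital
maxLossesCovered <- floor((nPersons*premiums + capital)/claimSize)
1 - pbinom(maxLossesCovered, size = nPersons, prob = p)  ## should equal pd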

Let’s now put these results into context. So far, we have been confronted with a given risk setting and a pre-set level of capital. For this fixed setting, all that was left for us to do was to measure the risk that we had already committed to taking. In reality, however, measuring risk is only half the task – what we really want is to adapt our risk so that it complies with our risk appetite.

Accordingly, we now extend the setup by allowing adjustments to the level of risk that we take: we allow ourselves to vary the level of capital, in order to set the risk of insolvency according to our own preferences.

2 Adjusting risk to our preferences: Value-at-risk

Let’s now assume that our risk appetite is such that we want to limit our risk of insolvency to at most 2 percent. Hence, the probability of wealth W being smaller than zero must not exceed 0.02:

\displaystyle \mathbb{P}(W<0)\ \overset{!}{\leq}\ 0.02

Recall that wealth was simply defined as the sum of the profit that we make (revenues minus expenditures) and the capital C that we chose to set aside beforehand: W=P+C. Hence, the distribution of wealth varies with the level of capital, shifting to the right for increasing levels of C. Therefore, we first need to relate the problem to a distribution that is fixed and independent of the actual level of C. We have three options:

  • the distribution of profits
  • the distribution of losses
  • the distribution of loss events

We now plot all three distributions, with the worst outcomes shown in color in order to highlight, for each distribution, the region that is relevant for the risk of insolvency. Looking at the five smallest profit outcomes, we get the following probabilities and cumulated probabilities:

## display first 5 entries of distribution together with cumulated
## values 
(tailDistr <- data.frame(lossEvents = lossEvents[1:5],
                         profits = profitEvents[1:5],
                         probs = round(eventProbs, 4)[1:5],
                         cumProbs =
                         round(cumsum(eventProbs), 4)[1:5]))

  lossEvents profits  probs cumProbs
1         20    -120 0.0000   0.0000
2         19    -100 0.0005   0.0005
3         18     -80 0.0031   0.0036
4         17     -60 0.0123   0.0160
5         16     -40 0.0350   0.0510

As one can see from the tabulated values, the cumulated probability of the profit distribution exceeds 2 percent for the first time at the fifth smallest outcome. For the other two distributions (the loss distribution and the distribution of loss events), this same outcome is the fifth largest one. Note, however, that the probability of the single worst outcome is too small to be clearly visible in the graphics.

## number of worst outcomes to be colored red (the next outcome marks the threshold)
nTail <- 4

## create profit distribution with adjustment for colored regions
profitDistr <- data.frame(xVals = profitEvents,
                          pVals = eventProbs)
profitTail <- profitDistr[1:nTail, ]
profitThreshold <- profitDistr[nTail+1, ]

## create loss event distribution with adjustment for colored regions
eventDistr <- data.frame(xVals = lossEvents,
                         pVals = eventProbs)
eventTail <- eventDistr[1:nTail, ]
eventThreshold <- eventDistr[nTail+1, ]


source("./rlib/multiplot.R")
## create profit plot
pp1 <- ggplot() +
    geom_bar(aes(x=xVals, y=pVals), width=5, stat="identity",
             data=profitDistr) +
    geom_bar(aes(x=xVals, y=pVals), width=5, stat="identity",
             data=profitTail, fill="red") +
    geom_bar(aes(x=xVals, y=pVals), width=5, stat="identity",
             data=profitThreshold, fill="green") +
    xlab("profit") + ylab("probability")

## create loss plot
pp2 <- ggplot() +
    geom_bar(aes(x=-xVals, y=pVals), width=5, stat="identity",
             data=profitDistr) +
    geom_bar(aes(x=-xVals, y=pVals), width=5, stat="identity",
             data=profitTail, fill="red") +
    geom_bar(aes(x=-xVals, y=pVals), width=5, stat="identity",
             data=profitThreshold, fill="green") +
    xlab("losses") + ylab("probability")

## create loss event plot
pp3 <- ggplot() +
    geom_bar(aes(x=xVals, y=pVals), width=0.3, stat="identity",
             data=eventDistr) +
    geom_bar(aes(x=xVals, y=pVals), width=0.3, stat="identity",
             data=eventTail, fill="red") +
    geom_bar(aes(x=xVals, y=pVals), width=0.3, stat="identity",
             data=eventThreshold, fill="green") +
    xlab("loss events") + ylab("probability")

## combine plots
multiplot(pp1, pp2, pp3, cols = 1)

[Figure discrete_var-2.png: profit, loss and loss event distributions, worst outcomes in red, threshold outcome in green]

2.1 Derivation based on the profit distribution

Let’s first derive the appropriate capital level based on the profit distribution. To this end, we need to express the default event in terms of the profit distribution. We get:

\displaystyle \begin{aligned} \mathbb{P}(W<0)&=\mathbb{P}(P+C<0)\\ &=\mathbb{P}(P<-C) \end{aligned}

Based on the profit distribution, we hence need to determine a capital level fulfilling the following constraint:

\displaystyle \begin{aligned} \mathbb{P}(W<0)=\mathbb{P}(P<-C)& \overset{!}{\leq}0.02 \end{aligned}

However, we do not want to choose just any capital level that fulfills the requirement – otherwise, we could simply pick some exaggeratedly high value. Since holding capital reserves is costly, what we want is the smallest capital level that still fulfills the constraint. Making C as small as possible amounts to making -C as large as possible. Hence, in the graphics, -C has to lie as far to the right as possible.

Let’s try some values for -C. As a first guess, we choose the fourth worst outcome: -C=-60. Checking the constraint, we get:

\displaystyle \begin{aligned} \mathbb{P}(W<0)&=\mathbb{P}(P<-C) \\ &=\mathbb{P}(P<-60) \\ &=0.0036 \\ &< 0.02 \end{aligned}

Hence, a capital level of C=60 would be sufficient to reduce the probability of default to the required level. Still, we need to check whether it is also the smallest capital level that fulfills the constraint. Therefore, we additionally check a smaller capital level, equal to the fifth worst outcome: -C=-40. Here, we get:

\displaystyle \begin{aligned} \mathbb{P}(W<0)&=\mathbb{P}(P<-C) \\ &=\mathbb{P}(P<-40) \\ &=0.0160 \\ &< 0.02 \end{aligned}

Hence, we have found an even better capital level. And this time, any further reduction of the capital level to \hat{C}=C-\epsilon, \epsilon>0, leads to a probability of default that exceeds the required safety level:

\displaystyle \begin{aligned} \mathbb{P}(W<0)&=\mathbb{P}(P < -\hat{C}) \\ &=\mathbb{P}(P < -(C-\epsilon)) \\ &=\mathbb{P}(P < -C + \epsilon) \\ &=0.0510, \end{aligned}

\displaystyle \Rightarrow 0.02 < \mathbb{P}(P < -\hat{C}).
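
These probabilities can be verified directly with the objects defined above; the values correspond to the cumulated probabilities in the table:

## check the probabilities used in the manual derivation
sum(eventProbs[profitEvents < -60])   ## approx. 0.0036
sum(eventProbs[profitEvents < -40])   ## approx. 0.0160
sum(eventProbs[profitEvents <= -40])  ## approx. 0.0510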

Hence, the optimal capital level in this case is C=40: it is the smallest amount of capital that suffices to restrict the probability of default to the pre-set maximum of 2 percent. Or, viewed from a different perspective, it is the minimal capital level that guarantees the required survival probability. The capital level obtained this way is generally called value-at-risk, and it plays an important role in the measurement of the risk of financial institutions.

The value-at-risk (VaR) at confidence level \alpha associated with a given profit distribution P is defined as the smallest value C such that the profit falls below -C with probability of at most 1-\alpha. That is,

\displaystyle \begin{aligned} \text{VaR}_{\alpha}&:=\inf\{C\in \mathbb{R}:\mathbb{P}(P<-C)\leq 1-\alpha\} \end{aligned}
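
For a discrete distribution given by its outcomes and probabilities, this definition translates directly into a few lines of R. The following helper is a minimal sketch (the function name varFromProfits and its arguments are chosen here purely for illustration): it evaluates the exceedance probability \mathbb{P}(P<-C) at each candidate capital level -x, where x runs over the profit outcomes, and returns the smallest admissible one.

## value-at-risk from a discrete profit distribution (illustrative sketch)
varFromProfits <- function(profitVals, probs, alpha) {
    ## candidate capital levels: negated profit outcomes, in increasing order
    candidates <- sort(-profitVals)
    ## exceedance probability P(P < -C) for each candidate capital level C
    exceedProbs <- sapply(candidates,
                          function(C) sum(probs[profitVals < -C]))
    ## smallest candidate fulfilling the constraint
    min(candidates[exceedProbs <= 1 - alpha])
}

## applied to the example: should reproduce the capital level of 40 derived above
varFromProfits(profitEvents, eventProbs, 0.98)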

Equivalently, \text{VaR}_{\alpha} could also be expressed in terms of the cumulative distribution function of profits. Generally, the cumulative distribution function is defined as:

\displaystyle F_{X}(k):=\mathbb{P}(X\leq k)

Hence, expressing the probability of X < k in terms of F_{X} requires some slight adjustment:

\displaystyle \begin{aligned} \mathbb{P}(X < k)&= \underset{\epsilon\rightarrow 0}{\lim}\ F_{X}(k-\epsilon) \end{aligned}

This way, the value-at-risk is defined as:

\displaystyle \begin{aligned} \text{VaR}_{\alpha}&=\inf \{C\in \mathbb{R}: \underset{\epsilon\rightarrow 0}{\lim }\ F_{P}(-C-\epsilon)\leq 1-\alpha\} \end{aligned}

In our example, this amounts to finding the smallest value of C that fulfills

\displaystyle \begin{aligned} \mathbb{P}(P<-C)&= \underset{\epsilon\rightarrow 0}{\lim}\ F_{P}(-C-\epsilon)  \overset{!}{\leq} 0.02 \end{aligned}

Therefore, we first determine the set of all points x at which the cdf itself does not exceed the bound of 0.02:

\displaystyle \{x\in \mathbb{R}: F_{P}(x)\leq 0.02\}

## get cumulative distribution function of profits
profitCdf <- data.frame(xVals = profitDistr$xVals,
                        pVals = cumsum(profitDistr$pVals))

## plot cdf
pp1 <- ggplot() + geom_step(aes(x=xVals, y=pVals), data=profitCdf)

## include line for open interval of admissible points
lowerPlotLimit <- min(profitCdf$xVals)

shiftDown <- -0.0008
interval <- data.frame(xVals=c(lowerPlotLimit, -40),
                       yVals=shiftDown*c(1, 1)) 
alphaLine <- data.frame(xVals=c(lowerPlotLimit, -40),
                        yVals=c(0.02, 0.02))
openInterval <- data.frame(xVals=-40, yVals=shiftDown)


pp2 <- pp1 + geom_line(aes(x=xVals, y=yVals),
                color="red", data=interval,
                size=1) + 
    geom_line(aes(x=xVals, y=yVals),
              color="red", linetype="dashed",
              size=1,
              data=alphaLine) +
    coord_cartesian(xlim = c(lowerPlotLimit, 30),
                    ylim = c( -0.01, 0.4)) +
    geom_point(aes(x=xVals, y=yVals), shape=21, fill="white", color="red",
               data=openInterval, size=5) +
    xlab("profits") + ylab("cumulated probability")

[Figure discrete_var-3.png: cumulative distribution function of profits with the open set of admissible points in red]

Note that this set is open: since F_{P}(-40)>0.02, the boundary point x=-40 on the right-hand side is not part of the set itself. The value-at-risk constraint, however, only involves the limit of F_{P} from the left, and this limit still respects the bound at x=-40:

\displaystyle \begin{aligned} \mathbb{P}(P<-40)&= \underset{\epsilon \rightarrow 0}{\lim }\ F_{P}(-40 -\epsilon)\leq 0.02 \end{aligned}

Hence, we get \text{VaR}_{0.98}=40.

2.2 Derivation based on the loss distribution

As losses are just the negative of profits, L=-P, the probability of default can also be expressed in terms of the loss distribution. Replacing P with -L, we get:

\displaystyle \begin{aligned} \mathbb{P}(W<0)&=\mathbb{P}(P<-C)\\ &=\mathbb{P}(-L<-C)\\ &=\mathbb{P}(C<L) \end{aligned}

Moreover, we can rewrite the constraint on the capital level so that it is expressed through the cumulative distribution function of losses:

\displaystyle \begin{aligned} \mathbb{P}(W<0)& \leq 1-\alpha & \Leftrightarrow\\ \mathbb{P}(C<L)& \leq 1-\alpha & \Leftrightarrow\\ 1- \mathbb{P}(L\leq C)&\leq 1-\alpha & \Leftrightarrow\\ \alpha &\leq\mathbb{P}(L\leq C)& \Leftrightarrow\\ \alpha &\leq F_{L}(C) \end{aligned}

Through this, we get some alternative representations of \text{VaR}_{\alpha}:

\displaystyle \begin{aligned} \text{VaR}_{\alpha}&=\inf\{C\in \mathbb{R}:\mathbb{P}(P<-C)\leq 1-\alpha\} \\ &=\inf\{C\in \mathbb{R}:\mathbb{P}(C<L)\leq 1-\alpha\}\\ &=\inf\{C\in \mathbb{R}:\mathbb{P}(L \leq C)\geq \alpha\}\\ &=\inf\{C\in \mathbb{R}:F_{L}(C) \geq \alpha \} \end{aligned}
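
The last representation is just the generalized inverse of the loss cdf evaluated at \alpha – in other words, the value-at-risk is the \alpha-quantile of the loss distribution:

\displaystyle \text{VaR}_{\alpha}=\inf\{C\in \mathbb{R}:F_{L}(C) \geq \alpha \}=:F_{L}^{-1}(\alpha)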

Applying this to our example, we first need to determine the cumulative distribution function of losses L. Afterwards, we need to find the set of points defined through

\displaystyle \{x\in \mathbb{R}:F_{L}(x)\geq 0.98\}

## get loss distribution
lossDistr <- data.frame(xVals=rev(-profitDistr$xVals),
                        pVals=rev(profitDistr$pVals))

## get loss distribution cdf
lossCdf <- data.frame(xVals=lossDistr$xVals,
                      pVals=cumsum(lossDistr$pVals))

## get value-at-risk
valRisk <- lossCdf$xVals[lossCdf$pVals >= 0.98][1]

## visualize derivation

pp1 <- ggplot() + geom_step(aes(x=xVals, y=pVals), data=lossCdf)

## plot limits from the data range
upperPlotLimit <- max(lossCdf$xVals)
lowerPlotLimit <- min(lossCdf$xVals)

shiftDown <- -0.0008
alphaLine <- data.frame(xVals = c(lowerPlotLimit, valRisk),
                        yVals = c(0.98, 0.98)) 
interval <- data.frame(xVals=c(valRisk, upperPlotLimit),
                       yVals=shiftDown*c(1, 1)) 
closedInterval <- data.frame(xVals=valRisk, yVals=shiftDown)


pp2 <- pp1 + geom_line(aes(xVals, yVals), data=alphaLine, color="red",
                linetype="dashed", size=1) +
    geom_line(aes(x=xVals, y=yVals),
              color="red",
              size=1,
              data=interval) +
    geom_point(aes(x=xVals, y=yVals), shape=16, color="red",
               data=closedInterval, size=5) +
    xlab("losses") + ylab("cumulated probability")

[Figure discrete_var-4.png: cumulative distribution function of losses with the closed set of admissible points in red]

This time, the set of points fulfilling the constraint is closed on the left, and we can directly take its left endpoint as the value-at-risk.

As a general rule, you can find the value-at-risk through the intersection of the cumulative distribution function with a horizontal line. For the profit distribution, the horizontal line lies at height 1-\alpha, while for the loss distribution it lies at height \alpha.

However, there is an exception to this rule whenever the horizontal line coincides exactly with one of the flat segments of the cumulative distribution function, i.e. when it intersects the cdf in a corner point. In this case, the capital level can be reduced further, since -C can be pushed to the right up to the next discrete profit outcome, as the following example illustrates.

## determine alpha for visualization
cornerAlpha <- 0.95

## set profits
cornerProfits <- c(-60, -40, -20, 40, 60)

## set probabilities
cornerProbs <- c(0.02, 0.03, 0.55, 0.3, 0.1)

## profit and loss cdf
cornerProfitCdf <- data.frame(xVals=c(-80, cornerProfits),
                              yVals=c(0, cumsum(cornerProbs)))
cornerLossCdf <- data.frame(xVals=c(-80, rev(-cornerProfits)),
                            yVals=c(0, cumsum(rev(cornerProbs))))

###################################
## visualize profit distribution ##
###################################

pp1 <- ggplot() + geom_step(aes(x=xVals, y=yVals), data=cornerProfitCdf)

## get plot limits from the data range
lowerPlotLimit <- min(cornerProfitCdf$xVals)
upperPlotLimit <- max(cornerProfitCdf$xVals)

## get alpha lines for profit distribution
alphaLine <- data.frame(xVals = c(lowerPlotLimit, -20),
                        yVals = c(1-cornerAlpha, 1-cornerAlpha)) 
alphaLine2 <- data.frame(xVals = c(-20, -20),
                         yVals = c(0, 1-cornerAlpha)) 

pp2 <- pp1 + geom_line(aes(xVals, yVals), data=alphaLine, color="red",
                       linetype="dashed", size=1) +
    geom_line(aes(xVals, yVals), data=alphaLine2, color="red",
              linetype="dashed", size=1) +
    xlab("profits") + ylab("cumulated probability")

#################################
## visualize loss distribution ##
#################################

pp3 <- ggplot() + geom_step(aes(x=xVals, y=yVals), data=cornerLossCdf)

## get plot limits from the data range
lowerPlotLimit <- min(cornerLossCdf$xVals)
upperPlotLimit <- max(cornerLossCdf$xVals)

## get alpha lines for loss distribution
alphaLine <- data.frame(xVals = c(lowerPlotLimit, 20),
                        yVals = c(cornerAlpha, cornerAlpha)) 
alphaLine2 <- data.frame(xVals = c(20, 20),
                         yVals = c(0, cornerAlpha)) 

pp4 <- pp3 + geom_line(aes(xVals, yVals), data=alphaLine, color="red",
                       linetype="dashed", size=1) +
    geom_line(aes(xVals, yVals), data=alphaLine2, color="red",
              linetype="dashed", size=1) +
    xlab("losses") + ylab("cumulated probability")

[Figure discrete_var-5.png: profit and loss cdfs for the corner-point case, with horizontal lines at 1-\alpha and \alpha]
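
For this corner-point example, the rule can be checked numerically. The following sketch reuses cornerProfits, cornerProbs and cornerAlpha from above, together with the illustrative helper from section 2.1 (the objects cornerLosses and cornerLossProbs are introduced only for this check); both representations should yield a capital level of 20, corresponding to the next discrete profit outcome of -20:

## value-at-risk for the corner example via the profit-based definition
varFromProfits(cornerProfits, cornerProbs, cornerAlpha)

## and via the loss cdf: smallest loss value with cumulated probability >= alpha
cornerLosses <- rev(-cornerProfits)
cornerLossProbs <- rev(cornerProbs)
min(cornerLosses[cumsum(cornerLossProbs) >= cornerAlpha])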

2.3 Derivation based on the loss event distribution

Losses, in turn, can easily be related to the underlying number of loss events S_{n} among the policy holders:

\displaystyle L=S_{n}\cdot \text{claimsize}-n\cdot \text{premium}

Hence, the value-at-risk can also be derived from the cumulative distribution function of the loss events, F_{S_{n}}. Reformulating the constraint, we get:

\displaystyle \begin{aligned} \mathbb{P}(W<0)& \leq0.02 & \Leftrightarrow\\ 0.98 &\leq\mathbb{P}(L\leq C)& \Leftrightarrow\\ 0.98 &\leq\mathbb{P}(S_{n}\cdot \text{claimsize} - n\cdot \text{premium} \leq C)& \Leftrightarrow\\ 0.98 &\leq\mathbb{P}(S_{n}\cdot \text{claimsize}  \leq C + n\cdot \text{premium})& \Leftrightarrow\\ 0.98 &\leq\mathbb{P}\left(S_{n}   \leq \frac{C + n\cdot \text{premium}}{\text{claimsize}}\right)& \Leftrightarrow\\ 0.98 &\leq F_{S_{n}}\left( \frac{C + n\cdot \text{premium}}{\text{claimsize}}\right)& \Leftrightarrow\\ F_{S_{n}}^{-1}(0.98) &\leq \frac{C + n\cdot \text{premium}}{\text{claimsize}} & \Leftrightarrow\\ F_{S_{n}}^{-1}(0.98)\cdot \text{claimsize}-  n\cdot \text{premium} &\leq C  \end{aligned}

Again, we want to find the smallest capital level fulfilling the constraint, in order not to set aside any unnecessary capital. Therefore, we can simply turn the inequality into an equality and choose the capital level as:

\displaystyle C=F_{S_{n}}^{-1}(0.98)\cdot \text{claimsize}- n\cdot \text{premium}

#######################
## get value-at-risk ##
#######################

## determine confidence level for value-at-risk
alpha <- 0.98

## get number of losses that one needs to be able to withstand
numberOfLossesToSecure <- qbinom(p = alpha, size = nPersons, prob = p)

## get value-at-risk
(valRisk <- numberOfLossesToSecure * claimSize - nPersons*premiums)

[1] 40
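
As a final consistency check, the illustrative profit-based helper sketched in section 2.1 should return the same value for \alpha=0.98:

## cross-check against the profit-based definition (should also give 40)
varFromProfits(profitEvents, eventProbs, alpha)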
