Value-at-Risk (VaR) is a short-term risk statistic used to forecast downside losses. The **traditional VaR methodology** rests on many assumptions, the most noteworthy being that continuously compounded returns are normally distributed, which is rarely the case. The growing complexity of global economies and financial instruments has made forecasting risk far more difficult than in the past.

The use of and accessibility to derivatives, fixed income securities, and global markets have only increased over the years. The **parametric approaches** used to model the risk profile of many of these products tend to generalize their true risk profile, whereas **non-parametric approaches** capture and retain detailed information about price movements.

The two most common white noise distributions used in the parametric approaches are the normal and the Student's t *probability density functions* (PDFs). For more volatile positions, such as **equities** and **derivatives**, the t-distributed PDF is preferred because its fatter tails **better model tail risk** than the normal PDF. Both methodologies are widely used to model risk, even though they have serious limitations and rest on stretched assumptions.
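As a minimal sketch of the two parametric approaches, the code below fits both PDFs to a simulated return series and reads off the 95% one-day VaR from each fitted quantile. The return series and all parameter values are illustrative assumptions, not data from this article:

```python
import numpy as np
from scipy.stats import norm, t

# Hypothetical fat-tailed daily "returns" (illustrative only)
rng = np.random.default_rng(42)
returns = rng.standard_t(df=5, size=2_000) * 0.01

alpha = 0.05  # 95% confidence level
mu, sigma = returns.mean(), returns.std(ddof=1)

# Normal parametric VaR: the alpha-quantile of a fitted normal
var_normal = -(mu + sigma * norm.ppf(alpha))

# t-distributed parametric VaR: fit degrees of freedom, location, scale,
# then take the alpha-quantile of the fitted t
df_, loc, scale = t.fit(returns)
var_t = -(loc + scale * t.ppf(alpha, df_))
```

Because the t fit assigns more mass to the tails, the two estimates diverge most at high confidence levels, which is exactly where tail risk matters.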

More accurate methods exist to forecast risk.

However, while the goal is to maximize forecast accuracy, a balance must also be struck: *accuracy vs. resources*. Given the exponential growth in computational power, server networks, and the vast resources available to major financial institutions, this balance is now within reach. Historically, resource constraints were the main barrier to upgrading to more complex methods, such as the non-parametric approach.

The non-parametric approach accounts for **key properties** of financial products: volatility clustering, asymmetric risk profiles, and autocorrelation. To do this, the historical data is fit to an **ARMA model** with **its volatility specified** as an EGARCH model. Combined, the two models capture the dynamic behavior of the data over time, self-adjusting for increased clustering or changing regimes. The residuals of this model can be standardized and bootstrapped, making no parametric assumptions about the security's risk profile.
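The standardize-and-bootstrap step can be sketched as follows. This assumes an ARMA-EGARCH model has already been fitted elsewhere; `residuals` and `cond_vol` below are hypothetical stand-ins for its fitted residuals and conditional volatilities, and the flat volatility forecast is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
# Stand-ins for a fitted model's conditional volatility path and residuals
cond_vol = 0.01 * np.exp(0.3 * np.sin(np.arange(n) / 50))
residuals = rng.standard_t(df=6, size=n) * cond_vol

# 1. Standardize: strip the fitted volatility from each residual
z = residuals / cond_vol

# 2. Bootstrap: resample standardized residuals with replacement to build
#    many 10-day shock paths, making no distributional assumption
horizon, n_paths = 10, 5_000
shocks = rng.choice(z, size=(n_paths, horizon), replace=True)

# 3. Rescale by a volatility forecast (here: flat at the last fitted value)
sigma_fcst = cond_vol[-1]
paths = (shocks * sigma_fcst).sum(axis=1)  # simulated 10-day returns

# 10-day 95% VaR read directly from the simulated distribution
var_95 = -np.quantile(paths, 0.05)
```

In practice the volatility forecast would itself come from the EGARCH recursion rather than being held flat, but the resampling logic is the same.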

Each approach has its limitations. For example, the non-parametric model needs **extensive historical data** and more computational power to achieve its accuracy. Below we compare the normal and t-distributed parametric approaches against the non-parametric approach. Each entry in the table is the breach percentage: how often the realized return breached the 10-day forecasted confidence-interval value. As expected, the t-distributed model performed better than the normal model because its fatter tails better resemble financial securities. However, the **non-parametric model performs better** than both parametric models.

| Probability | VaR (normal) | VaR (t-distribution) | VaR (non-parametric) | CVaR (normal) | CVaR (t-distribution) | CVaR (non-parametric) |
|---|---|---|---|---|---|---|
| 90% | 9.72% | 8.74% | 7.27% | 4.18% | 1.22% | 1.04% |
| 95% | 4.85% | 3.36% | 2.38% | 2.02% | 0.30% | 0.18% |
| 99% | 1.08% | 0.12% | 0% | 0.59% | 0.06% | 0% |
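A breach percentage like those above can be computed by counting how often realized returns fell below the forecasted VaR. The sketch below uses simulated data, not the series behind the table; the normal model here is correctly specified by construction, so its breach rate lands near the nominal 5%:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical realized 10-day returns (illustrative only)
returns_10d = rng.normal(0.0, 0.02, size=10_000)

# Forecasted 95% VaR from a normal model: -(mu + sigma * z_0.05)
var_95 = -(0.0 + 0.02 * -1.6449)

# A breach occurs when the realized loss exceeds the forecast
breaches = returns_10d < -var_95
breach_pct = breaches.mean()  # close to 5% when the model matches the data
```

When the model understates tail risk, as the normal model does on real returns, the breach rate overshoots the nominal level, which is the pattern the table shows.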