
Interest Rate Volatility

The Choice of Models and Considerations When Developing a Rate for Your BV Report (Part I of II)

In this article, the author discusses various models and the importance of the rate developed for a BV report.

Why all the excitement about interest rate volatility? Can’t we just look at a multi-year average and use that in our calculations? Well, we could, but considering recent and expected volatility, that might lead to some big errors. There are several reasons. Here are two.

First, interest rates contain a lot of information. They embody assumptions about risk, time, inflation, opportunity cost, solvency, future Federal Reserve actions, and recession projections. No other parameter is that versatile.

Second, present value calculations tend to be more sensitive to hurdle rates than to any other input. In my experience, sensitivity analyses run on simulation results almost always place hurdle rates at the top of the list.
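To make that sensitivity concrete, here is a minimal Python sketch (the same arithmetic works in a spreadsheet) valuing a level $100 annual cash flow over ten years at three hurdle rates; the figures are made up for illustration.

```python
# A minimal sketch: present value of a level $100 annual cash flow over
# ten years at three hurdle rates. A two-point rate move shifts PV by
# roughly 8-9% here, more than comparable tweaks to most other inputs.
def present_value(cash_flow, rate, years):
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

for r in (0.08, 0.10, 0.12):
    print(f"rate {r:.0%}: PV = {present_value(100, r, 10):.2f}")
# rate 8%: PV = 671.01
# rate 10%: PV = 614.46
# rate 12%: PV = 565.02
```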

If we do not want to rely on averages, how about extrapolation? Just assume that a body in motion tends to stay in motion and continue the line. It might work; there have been long periods when interest rates moved in what could be smoothed to a straight line. However, if you are doing a present value (PV) calculation over five years or more, do you really want to assume that a trend line will continue that long? Or do you want to extrapolate for a while and then allow the rate to level off? That could make the math easier, but we have no recent history of rates behaving that way.

Even if we could control the time frame to be consistent with the lengths of historic trend lines, there is another potential problem. Nassim Taleb, writing in “Fooled By Randomness,” teaches us that when we are in the middle of events, it is difficult to tell the signal from the noise. Given recency bias, we are too quick to extrapolate from the noise. We may think we are being true to historic trends while extending the wrong vector.

So, if we are going to model interest rates, how precise do we need to be? George Box is credited with the observation that all models are wrong, but some are useful. What are the parameters to ensure that ours are useful?

Notwithstanding constant reminders that past performance does not guarantee similar results, the past is data rich. We cannot ignore the past data. We can learn from it, especially as we build our models. We can also agree that an approximation that attempts to be right beats one that is precisely wrong. Keep the past in context.

Next, we must deal with some statistics. Is the magnitude of interest rate variability a function of the current rate, or a truly random variable? Is there mean reversion? If so, over what period?

Finally, there is no getting around the fact that a random variable in continuous time entails differential equations. Besides being challenging to solve, differential equations do not fit comfortably in our spreadsheet models. Instead of diving headfirst into that math, let us look at some of the best-known equations to see what we can learn from them. Then apply that knowledge to our spreadsheets.

The relevant equations tend to come in one of two flavors. The first flavor is known as equilibrium term structure, which attempts to model changes in the rate of interest. There are variations of this model, but essentially one solves for what interest rates ought to be, then compares the result to actual rates. If there is a difference, there could be an opportunity for arbitrage.

The other form, aptly named arbitrage free, starts with observed prices and assumes reasonably efficient markets. Both model categories assume that the term structure can be reasonably predicted from the current rate. Today’s inverted yield curve begs to differ. Another reminder that we are striving for useful, not perfect.

Let’s start with the Vasicek model: dr = a (b – r) dt + σ dz. The first term captures mean reversion, where b is the long-term rate and a is the speed of reversion. The second term represents the “noise”. The parameters need to be specified by the user. Get them wrong, and you could solve the equation perfectly, with those wrong coefficients, and still get the wrong answer.

A similar model, Cox-Ingersoll-Ross (CIR), is dr = a (b – r) dt + σ √r dz. It resembles Vasicek, but the square root factor scales volatility with the level of the rate and blocks out negative rates. The benefits and issues are otherwise the same.
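Neither model needs a closed-form solution to be useful in practice; a simple Euler discretization lets a spreadsheet, or a few lines of code, simulate paths. Here is a minimal Python sketch of one path under each model, where all parameter values (a, b, σ, the starting rate) are illustrative assumptions rather than calibrated figures.

```python
import math
import random

# One simulated short-rate path under Vasicek and under CIR, using a
# simple Euler discretization. All parameter values here (a, b, sigma,
# r0) are illustrative assumptions, not calibrated figures.
def simulate(model, r0=0.04, a=0.30, b=0.05, sigma=0.02, dt=1/12, steps=120):
    r, path = r0, [r0]
    for _ in range(steps):
        dz = random.gauss(0.0, math.sqrt(dt))  # Brownian increment
        if model == "vasicek":
            dr = a * (b - r) * dt + sigma * dz
        else:  # CIR: the sqrt(r) term damps volatility as rates near zero
            dr = a * (b - r) * dt + sigma * math.sqrt(max(r, 0.0)) * dz
        r += dr
        path.append(r)
    return path

vasicek_path = simulate("vasicek")
cir_path = simulate("cir")
```

The point is not any single path; averaging many simulated paths is the spirit in which these models earn their keep.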

The arbitrage free team is represented by the Ho-Lee model: drₜ = θₜ dt + σ dzₜ. Rather than trying to solve for the perfect interest rate, you infer θₜ from market prices. The math can be reduced to a binomial lattice model. At each node, the yield curve can move up or down with equal probability. It might be easier math—I prefer arbitrage free—but it still relies on assumptions and can be a bear to solve.
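To give the lattice some texture, here is a short Python sketch that builds a recombining Ho-Lee tree of short rates. In practice θₜ is inferred from market prices so the tree reprices observed bonds; the θ values below are assumed purely for illustration.

```python
import math

# A recombining Ho-Lee lattice of short rates. In practice, the theta
# values are calibrated so the lattice reprices observed bonds; the
# figures below are assumed purely for illustration.
def ho_lee_lattice(r0=0.04, sigma=0.01, dt=1.0, thetas=(0.002, 0.003, 0.001)):
    lattice = [[r0]]
    for theta in thetas:
        drift, step = theta * dt, sigma * math.sqrt(dt)
        prev = lattice[-1]
        # recombining tree: one new top node, then every prior node shifted down
        lattice.append([prev[0] + drift + step] + [r + drift - step for r in prev])
    return lattice

for t, level in enumerate(ho_lee_lattice()):
    print(f"t={t}: " + ", ".join(f"{r:.4f}" for r in level))
```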

An interesting sidebar to solving differential equations comes from Fischer Black’s biography. Best known for the famous Black-Scholes option pricing model, Fischer somehow gleaned that options could be described with the same math as heat exchangers. However, that flash of brilliance remained on his desk for months because he did not know how to solve differential equations. The punchline is that he was, at the time, a professor at MIT. Apparently, it did not occur to him to walk down the hall to the math department and ask if anyone knew how to solve those things. Obviously, it is not trivial.

If we are not going to solve the differential equations directly, what can we learn from them? A few things come to mind.

The models discussed in this article assume movement about, and regression to, something. I suspect that the various authors settled on regression to the mean. A lot of variables in life behave that way, so it is not crazy. Yet, considering our discussion thus far, I would argue for regression to a medium-term or long-term trend line. We still need to stipulate how far we are from the trend line and how fast we might get back to it (the a and b from the equations). Historical data might help us there.
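How might historical data help? One standard approach: under the discretized Vasicek dynamics, the change in the rate is linear in its level, so an ordinary regression backs out a and b. A minimal Python sketch, using a made-up rate series as a placeholder:

```python
# Backing out a (speed) and b (level) from history: under discretized
# Vasicek dynamics, dr = a*b*dt - a*dt*r + noise, so regressing the
# period-to-period change on the rate level gives a = -slope/dt and
# b = -intercept/slope. The series below is a made-up placeholder.
def fit_mean_reversion(rates, dt=1/12):
    x = rates[:-1]                                   # rate levels
    y = [r1 - r0 for r0, r1 in zip(rates, rates[1:])]  # period changes
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return -slope / dt, -intercept / slope  # (a, b)

a, b = fit_mean_reversion([0.045, 0.046, 0.044, 0.047, 0.049, 0.048, 0.050, 0.049])
```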

The models also build on a distribution around the trend line, as well as a distribution for the random noise. A normal distribution is an easy default; after all, the central limit theorem tends to hold for decent-sized data sets. I would modify that default by substituting a lognormal distribution. We would still avoid deriving an odd and possibly difficult-to-defend curve, but we would capture the very reasonable premise that changes in interest rates are a function of current rates. For example, a 1% increase in the 10-year treasury is more likely when the current rate is 10% than when it is 1%. Fed actions probably respond to those same considerations. Lognormal replaces absolute movements with percentage movements.
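In model terms, a lognormal assumption simply swaps additive shocks for multiplicative ones, so absolute moves scale with the level of the rate. A minimal Python sketch, with an assumed annualized σ of 15% and drift omitted:

```python
import math
import random

# Lognormal shocks are multiplicative: each period the rate is scaled by
# exp(sigma*sqrt(dt)*Z), so absolute moves grow with the level of the
# rate. The annualized sigma here is an assumed figure; drift is omitted.
def lognormal_path(r0=0.04, sigma=0.15, dt=1/12, steps=60):
    path = [r0]
    for _ in range(steps):
        shock = math.exp(sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
        path.append(path[-1] * shock)
    return path

# Starting at 10%, the same percentage shock moves the rate ten times as
# far in absolute terms as it would starting at 1%.
path_high = lognormal_path(r0=0.10)
path_low = lognormal_path(r0=0.01)
```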

As for the noise component, let us ignore it. We will thus steer clear of Taleb’s warning about extrapolating noise. More to the point, noise tends to disappear when we model long-term trends. The positives and negatives cancel, or close to it.

One last issue that haunts our simplified model is whether the cause of interest rate volatility needs to be considered. An extreme case, in which the cause is likely to disappear and leave us with zero volatility, would certainly be a welcome addition to the model. That is not likely, so we can dispense with the idea and move on.

Step one is to blame inflation itself, and reactions to it, as the culprit for interest rate gyrations. Step two is to figure out inflation’s causes. It is not easy. For a few years we could not agree on whether inflation existed and, if so, whether it was transitory. Now the discussion has evolved to the point of dropping the word “transitory.” Debates about causes and cures continue.

Milton Friedman won a Nobel prize in part for explaining inflation as a monetary phenomenon: more M2 than the economy needs. That certainly seems to be the case after the COVID-19 spending. Yet Ben Bernanke and Olivier Blanchard, in a recent well-circulated Brookings article, see it as more nuanced. They blame post-COVID-19 demand exceeding supply, followed by sticky wages. I am not sure that is really different from too much M2, especially if we remember that Friedman’s thesis depends on the presence of demand.

Further complicating the Friedman model, we had worldwide supply shocks and shortages, monetary growth in most of the developed world, and inflation around the globe. Whether it is Friedman, Bernanke, and Blanchard, or exogenous factors, the question at hand is whether those conditions are so strong that known Fed cures will not work the way we expect. If they do not, does it complicate our ability to model and forecast?

Right now, for the U.S. anyway, the Fed’s medicine seems to be having the desired effect. Year-over-year CPI growth was 3.1% at the end of June; a year before, the reading was 9.1%. Also, M2, which had grown roughly 40% during the two worst years of the pandemic, compared to a routine 6.2% per annum, contracted 3.6% over the year to June. Good news, but more volatility to consider.

The bottom line here is that the cause does matter. Volatility and uncertainty remain. The data suggests that the lognormal distribution we postulate should embody a higher standard deviation than one might have deployed a few years ago.

Putting it all together, the analysis points to the importance of interest rates, the need for models rather than points or extrapolations, and the importance of understanding the causes of volatility. We can see our way to a simplified approach that can enhance our spreadsheets. More on that in a subsequent article.


Alan E. Gorlick is CEO of Gorlick Financial Strategies, in Venice, Florida. He has been a part-time college professor since 1989, teaching MBA and undergraduate finance, economics, and accounting. Securities and investment advisory services are offered through NEXT Financial Group, Inc., member FINRA/SIPC. Gorlick Financial Strategies is not an affiliate of NEXT Financial Group.

Mr. Gorlick can be contacted at 941-303-6921 or by e-mail to alan@gorlickfinancialstrategies.com.

The National Association of Certified Valuators and Analysts (NACVA) supports the users of business and intangible asset valuation services and financial forensic services, including damages determinations of all kinds and fraud detection and prevention, by training and certifying financial professionals in these disciplines.
