Predicting Net Discount Rates: A Comparison of Professional Forecasts, Time-series Forecasts and Traditional Methods

Article Category: Research Article
Journal of Forensic Economics 21(2): 147–171
DOI: 10.5085/jfe.21.2.147
Online Publication Date: 01 Jan 2010

Abstract

Previous research proposed two future net discount rate estimators that improved on naïve long-term average and random walk estimators. The proposed estimators were superior in the class of estimators that used only current and past observations on net discount rates. In this paper we consider two extensions. First, we examine whether professional forecasts perform significantly better than the two alternatives. Second, we examine the properties and performance of multivariate estimators that account for the potentially differing time-series behaviors of the underlying wage growth and interest rate series.

I. Introduction

In personal injury and wrongful death cases, forensic economists estimate future lost earnings or earning capacity and discount them back to calculate their present value. Two important components of such a calculation are the expected growth rate in earnings or earning capacity and the expected interest rate. Adjusting for future earnings growth and discounting to the present can be combined into one step by using the expected net discount rate.
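To make the one-step calculation concrete, the following sketch (our own illustration with hypothetical numbers, not taken from the paper) shows that discounting earnings grown at rate g by an interest rate r is equivalent to discounting the base earnings at the net discount rate defined by 1 + ndr = (1 + r)/(1 + g); the simple difference r − g used later in the paper is the standard approximation to this rate.

```python
# Illustrative only: base_earnings, r and g are hypothetical values.
base_earnings = 50_000.0          # assumed current annual earnings
r, g, years = 0.05, 0.03, 20      # assumed interest and wage growth rates

# Two-step approach: grow earnings at g, then discount at r.
pv_two_step = sum(base_earnings * (1 + g) ** t / (1 + r) ** t
                  for t in range(1, years + 1))

# One-step approach: discount the base earnings at the net discount rate.
ndr = (1 + r) / (1 + g) - 1       # r - g = 0.02 is the usual approximation
pv_one_step = sum(base_earnings / (1 + ndr) ** t for t in range(1, years + 1))

print(pv_two_step, pv_one_step)   # equal up to floating-point error
```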

Obtaining reliable estimates of the future net discount rate is of considerable interest to applied forensic economists. A recent survey by Brookshire et al. (2006) suggests that forensic economists use a variety of techniques to estimate net discount rates. A majority use either a long-term average of historical rates or the current prevailing rate to estimate future net discount rates. Other forensic economists use commercially and/or publicly available forecasts of interest and wage growth rates to estimate future net discount rates. This includes forecasts by the Congressional Budget Office or the Social Security Administration, as well as forecasts from private companies such as Blue Chip or Livingston. Another alternative not contained in the Brookshire et al. (2006) survey is to produce independent forecasts using univariate and/or multivariate forecasting strategies. A natural question arises: which is the most accurate method for estimating future net discount rates?

Cushing and Rosenbaum (2006, 2007) recently examined the performance of estimators based on a long-term average of historical rates and current rate estimators.1 In addition, they proposed an alternative estimator that optimally combined prevailing and average rates to forecast future net discount rates. They showed with both benchmark and empirical estimates that their optimal estimator outperformed the extreme alternatives. Further, they showed that for U.S. data, the optimal estimator could be approximated by the simple average of the current rate and the long-term average. They showed that this “compromise” estimator matched the performance of the optimal estimator.

In this paper we extend that analysis by comparing the performance of a variety of estimators derived from historical data to estimators developed from either professional forecasts or time-series techniques. In section II, we look at net discount rates generated from professional forecasts of interest and wage growth rates. The only readily available long-term wage forecasts are produced by the Social Security Administration (SSA). Interest rate forecasts are available from a number of sources, but the choice of interest rates is limited to either the three-month Treasury bill rate or the 10-year Treasury bond rate. We generate net discount rate estimates by combining interest rate forecasts of the Congressional Budget Office (CBO) with wage forecasts of the SSA. We examine five-year-ahead forecasts made in each of the years from 1978 to 2003 and find that although the professional forecasts outperform the long-term average and current period estimators, they perform worse than the compromise estimator of Cushing and Rosenbaum (2006).

In section III, we look at time-series forecasting methods that use more than just the history of net discount rates. Two interesting results appear from this analysis. First, in principle, using the separate histories of interest rates and wage growth could yield better forecasts of net discount rates than using only the history of net discount rates. In practice, we found little or no gain. This result is consistent with our finding that interest rates and earnings growth are cointegrated. Second, in principle, there are a large number of variables that could potentially be useful in predicting net discount rates. We evaluate forecasts for a three-variable system that includes short-term interest rates, earnings growth and a long-term interest rate. Simulations suggest that multivariate predictions from this system could, depending on the underlying stochastic structure and specification, yield significantly better predictions. However, for five-year-ahead forecasts made in the years 1978 through 2003, multivariate predictors would have performed worse than the simple compromise estimator.

In section IV, we draw two conclusions. First, based on performance tests, the compromise estimator proposed by Cushing and Rosenbaum (2006) seems a better choice for estimating net discount rates than estimates based on professional forecasts. Second, if a multivariate approach is used, a trivariate model that incorporates estimates of interest rates and earnings growth seems to perform best among the multivariate approaches. However, in empirical testing it is not clear that a multivariate approach is preferred to the compromise estimator. Further analysis is needed before one could conclusively argue that multivariate techniques are the preferred approaches to estimating net discount rates.

II. Testing Against Professional Forecasts

The Forecasts

There are advantages and disadvantages to estimating net discount rates using professional forecasts of interest and wage inflation rates. One advantage is the reputation of the forecasting entity. Forecasts from the Congressional Budget Office (CBO), Social Security Administration (SSA), or private entities such as the Livingston Survey carry a certain seal of acceptability. Another advantage is the ease of use. Public data from the CBO or SSA can be downloaded from their web sites. Data from private vendors are typically easy to use as well. The disadvantages include a potential lack of availability for specific variables, the costs of buying data from private vendors, and limited availability of the underlying properties of the estimators such as standard errors or confidence intervals.

Another disadvantage with using forecast data is that long-term forecasts of key variables may not be publicly available. For example, there are no forecasts of net discount rates. As an alternative, it may be possible to use forecasts of wage or earnings growth and interest rates to calculate net discount rates. Here again, however, the paucity of data is evident.

Wage or earnings forecasts are the most difficult to obtain. The Livingston Survey provides one- and two-year-ahead forecasts of average weekly earnings in manufacturing, with the forecasts starting in 1974. These horizons would appear far too short for forensic applications. The CBO has published long-term forecasts of an employment cost index since 2002. However, in calculating net discount rates economists typically are interested in the worker's pay rather than the employer's cost. In addition, the series is too short to evaluate its forecast accuracy.

The only publicly available source of long-term wage forecasts is the Social Security Administration. The SSA provides very long-term forecasts of the growth in "wages in covered employment," the average wage of those covered by social security, with forecasting starting in 1960. The SSA has a strong incentive to forecast wages accurately, as covered wages are ultimately the source of its revenue. In describing its forecasting, the SSA writes in the 2008 OASDI Trustees Report:

The basic economic assumptions are embodied in three alternatives that are designed to provide a reasonable range of effects on Social Security's financial status. The intermediate assumptions reflect the Trustees' consensus expectation of moderate economic growth throughout the projection period. The low cost assumptions represent a more optimistic outlook, with relatively strong economic growth. The high cost assumptions represent a relatively pessimistic scenario, with weak economic growth and two recessions in the short-range period. (SSA (2008) section V.B)

We use the projections derived from the intermediate assumptions. The Trustees Report then goes on to describe earnings as:

Average U.S. earnings is defined as the ratio of the sum of total U.S. wage and salary disbursements and proprietor income to the sum of total U.S. military and total civilian (household) employment. The growth rate in average U.S. earnings for any period is equal to the combined growth rates for total U.S. economy productivity, average hours worked, the ratio of earnings to compensation (which includes fringe benefits), the ratio of compensation to GDP, and the GDP deflator. Assumed future growth rates in productivity and the GDP deflator are discussed in the previous two sections. … Over long periods of time the average annual growth rates in average U.S. earnings and average earnings in OASDI covered employment are expected to be very close to the average annual growth rates in the average wage in OASDI covered employment (henceforth the average covered wage). (SSA (2008), Section V.B.3)

Interest rate forecasts are easier to obtain. The Congressional Budget Office forecasts interest rates to estimate the federal budget. The CBO interest rate forecasts cover the period 1978 through 2002. The CBO even provides documentary evidence of how well it does, with the evidence starting in 1999 and updated every year since (see CBO, 2007). However, CBO cautiously writes:

Apart from the general caveat that should attend any statistical conclusions, several other reasons argue for viewing any evaluation of CBO's forecasts with particular caution. First, the procedures and purposes of CBO's and the Administration's forecasts have changed over the past 20 years and may change again in the future. For example, in the late 1970s, CBO characterized its long term projections as a goal for the economy; it now considers its projections to be what will prevail, on average, if the economy continues to reflect historical trends. Unlike CBO's projections, the Administration's have always included the projected economic effects of their own policy proposals. Second, an institution's track record in forecasting may not be indicative of its future abilities because of changes in personnel or methods. Finally, errors in a forecast increase when the economy is more volatile. All three groups of forecasters—CBO, the Administration, and the Blue Chip survey—made exceptionally large errors when forecasting for periods that included turning points in the business cycle. (p. 2)

Blue Chip provides private estimates of interest rates. Blue Chip does, however, make disclaimers as to the accuracy of its long-term forecasts that may prove problematic in a forensic setting.2 Both the CBO and Blue Chip projections are annual average rates for three-month treasuries.

Testing Forecast Accuracy

In this section we compare the performance of net discount rates constructed from CBO interest rate forecasts and SSA wage growth forecasts to net discount rate forecasts using standard methods.3 We first describe our criterion for forecast comparisons and then present the empirical results.

An Index of Future Net Discount Rate Forecasts

Because forecasts will generally consist of an entire path of estimated future net discount rates that must be compared to a path of realized net discount rates, it is convenient to summarize the path of both estimated and realized net discount rates in a single index. This also facilitates comparisons with traditional estimates of future net discount rates—the long-term average and current value estimators—as well as the compromise estimator proposed in Cushing and Rosenbaum (2006, 2007). Those latter three estimators produce a single forecast value.

Since net discount rate forecasts will ultimately be used to discount a stream of future payments, the weights on such an index of future net discount rates should reflect how they affect the present value of the relevant stream of future payments. We define the Present Value Prediction Error (PVPE) as the difference between the realized present value of a stream of payments $(A_{t+1}, A_{t+2}, \ldots, A_{t+m})$ discounted by the actual future net discount rates, and the estimated present value of the same stream of payments discounted by the estimates of future net discount rates, $(\widehat{ndr}_{t+1}, \ldots, \widehat{ndr}_{t+m})$,

$$PVPE_t = \sum_{i=1}^{m} A_{t+i} \prod_{j=1}^{i} \left(1 + ndr_{t+j}\right)^{-1} - \sum_{i=1}^{m} A_{t+i} \prod_{j=1}^{i} \left(1 + \widehat{ndr}_{t+j}\right)^{-1}. \qquad (1)$$

Two properties of the PVPE criterion make it somewhat inconvenient to use as a criterion for comparing estimators. First, it depends on the particular path of payments. Second, it is a highly nonlinear function of the underlying annual net discount rates. To address the first problem and provide a benchmark case, we allow the stream of payments to be constant and normalize these payments to unity. To address the second, we take a linear approximation of (1) about the average value of the net discount rate, which we denote $ndr$. The linear approximation can be expressed as

$$PVPE_t \approx \sum_{i=1}^{m} \left[ \sum_{k=i}^{m} \left(1 + ndr\right)^{-(k+1)} \right] \left( \widehat{ndr}_{t+i} - ndr_{t+i} \right). \qquad (2)$$

Notice that the weights attached to future prediction errors are declining over the horizon of the award. Prediction errors in the near future receive greater weight, reflecting the fact that those rates influence the present value of all future payments whereas the prediction error on the mth net discount rate affects only the valuation of the last payment.

The somewhat unwieldy formula for the weights attached to the forecast errors in equation (2) turns out to be well approximated by a simple linearly declining scheme. In our previous work, following Shiller (1979), we approximated the weights in equation (2) with a geometrically declining series. That approximation works well for large values of the forecast horizon and moderately large values of the net discount rate. However, because net discount rates tend to be close to zero (typically close to .02), a better approximation to the weights in equation (2) is available. Taking the limit of (2) as $ndr \to 0$ we have

$$\lim_{ndr \to 0} PVPE_t = \sum_{i=1}^{m} \left(m - i + 1\right) \left( \widehat{ndr}_{t+i} - ndr_{t+i} \right). \qquad (3)$$

Equation (3) suggests that, for small values of the net discount rate, the error in estimating the present value of a constant real award is a linearly declining weighted average of the errors in forecasting future net discount rates.
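The approximation can be checked numerically. The sketch below (our own check with hypothetical rate paths, not values from the paper) compares the exact prediction error from equation (1), for a constant unit payment stream, against the linearly weighted sum in equation (3).

```python
import numpy as np

m = 5
actual = np.array([0.018, 0.022, 0.020, 0.025, 0.015])  # hypothetical ndr path
predicted = np.full(m, 0.020)                           # hypothetical forecast

def present_value(ndr_path):
    # Present value of a constant unit payment stream, as in equation (1).
    return float(np.sum(1.0 / np.cumprod(1.0 + ndr_path)))

pvpe_exact = present_value(actual) - present_value(predicted)

# Equation (3): weights decline linearly from m down to 1.
weights = np.arange(m, 0, -1)
pvpe_linear = float(np.sum(weights * (predicted - actual)))

print(pvpe_exact, pvpe_linear)    # nearly identical for rates near zero
```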

Equation (3) suggests that an appropriate index of net discount rate forecasts and realizations has weights that decline linearly with the time horizon of the projected payout schedule. Thus, we define the Future Net Discount Rate (FNDR) at time t over a time horizon m as the weighted average of actual future net discount rates,

$$FNDR_t = \sum_{i=1}^{m} w_i \, ndr_{t+i}, \qquad w_i = \frac{m - i + 1}{\sum_{j=1}^{m} \left(m - j + 1\right)} = \frac{2\left(m - i + 1\right)}{m\left(m + 1\right)}, \qquad (4)$$

where the weights are normalized so that they sum to one. Similarly, the Predicted Future Net Discount Rate (PFNDR) is the weighted average of future net discount rate predictions,

$$PFNDR_t = \sum_{i=1}^{m} w_i \, \widehat{ndr}_{t+i}. \qquad (5)$$

The results of this paper, it should be noted, are not particularly sensitive to the linearly declining weighting scheme. In an earlier version of this paper we used Shiller's (1979) geometrically declining weighting scheme and the empirical results using the two weighting schemes are virtually indistinguishable.
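For reference, a minimal implementation of the index in equations (4) and (5); the function name and example values are ours, not the paper's.

```python
import numpy as np

def fndr(ndr_path):
    """Linearly declining weighted average of a path of net discount
    rates, with weights normalized to sum to one (equations (4)-(5))."""
    m = len(ndr_path)
    w = np.arange(m, 0, -1, dtype=float)   # m, m-1, ..., 1
    return float(np.dot(w / w.sum(), ndr_path))

# The same function applied to predicted rates yields the PFNDR:
realized = [0.018, 0.022, 0.020, 0.025, 0.015]   # hypothetical values
predicted = [0.020] * 5
print(fndr(realized), fndr(predicted))
```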

Empirical Results

The wage growth forecasts consist of annual SSA projections of the year-over-year growth rate of the Social Security wage. These projections are reported in the Annual Report of the Board of Trustees, typically published in March of each year, and a complete series of past forecasts is available starting in 1974. The interest rate forecasts consist of CBO annual projections of the average interest rate on three-month Treasury bills, published in The Budget and Economic Outlook each January; a complete series is available starting in 1978. These data are used to create net discount rate forecasts for a five-year horizon.4,5 We construct and evaluate forecasts for the period 1978 to 2003, leaving the last five years of data to construct the realizations of net discount rates.

For comparison with the professional forecasts, we also computed traditional estimates of the future net discount rate. We formed the long-term average from SSA wage data and average three-month Treasury bill rates, with the average beginning in 1960, the first year that SSA wage data are reported. We also report the "unit root" estimator, which consists of the current observation, and the compromise estimator, which is the simple average of the unit root and long-term average estimators. The historical data on the SSA wage have undergone some revisions over time, so the currently reported values would not have been available to forecasters at the time they made their forecasts. To provide a fair assessment of forecasting performance, we used "real time" data to form the traditional estimates; that is, we computed the estimates from the data as reported in the year of forecast.
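A hedged sketch of the three traditional estimators and the two accuracy criteria used in Table 1; `ndr_history` stands for the real-time net discount rate series from 1960 through the forecast year, and all names are ours.

```python
import numpy as np

def traditional_estimates(ndr_history):
    long_term_average = float(np.mean(ndr_history))  # average since 1960
    unit_root = float(ndr_history[-1])               # current observation
    compromise = 0.5 * (long_term_average + unit_root)
    return long_term_average, unit_root, compromise

def rmse(errors):
    # Root Mean Squared Error of the forecast-minus-realization deviations.
    return float(np.sqrt(np.mean(np.square(errors))))

def mad(errors):
    # Mean Absolute Deviation of the same errors.
    return float(np.mean(np.abs(errors)))
```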

Table 1 presents the comparisons of forecast performance. For the CBO/SSA forecasts, we computed the linearly declining weighted average suggested by equation (5), and for the actual values of future net discount rates we computed the linearly declining weighted average suggested by equation (4). This allows us to determine an error (or deviation) of the estimate from the realization in each year. Table 1 reports the Root Mean Squared Error (RMSE) and Mean Absolute Deviation (MAD) for each estimator. We report the results for five-year-ahead forecasts made in each of the years 1978 to 2003, and we also report the average performance over the sub-periods 1978–1989 and 1990–2003. We split the sample to account for the fact that forecasting methodology at the CBO and SSA may have changed over time. To the extent that forecasting methods have improved, we should attach greater weight to the more recent performance.

Table 1 Prediction Errors

For forecasts made in the years 1978 to 2003 as a whole, on average, the professional forecasts performed better than the simple long-term average and the current period estimator. However, the compromise estimator performed better than all three. Over the earlier sub-period, 1978 to 1989, the CBO/SSA forecasts outperformed the current value and long-term average estimators but were only slightly better than the compromise estimator. For forecasts made over the more recent period, 1990 to 2003, the CBO/SSA forecasts fared somewhat worse: they were just slightly better than the current value estimator but worse than both the long-term average and the compromise estimator, and the compromise estimator outperformed all three alternatives. The results in Table 1 show that, historically, the CBO/SSA professional forecasts have performed no better than a simple combination of the current value and long-term average estimators.

III. Testing Against Multivariate Forecasts

Using only past values of the net discount rate to forecast the future clearly discards a wealth of potentially useful information. One obvious source of information is the separate histories of interest rates and earnings growth. We may be able to improve our forecasts of the net discount rate by forecasting interest rates and earnings growth jointly and then combining these forecasts. Going one step further, we could forecast with multivariate Vector Autoregression (VAR) models that include additional variables that might help predict earnings and interest rates. We could, of course, consider even more elaborate and sophisticated models such as those used in the modern finance literature to forecast interest rates. However, Cochrane (2007) notes that underlying the elaborate statistical models is often a simple VAR using data on interest rates of various maturities.6

We begin by investigating whether using the separate history of interest rates and earnings growth yields significant improvements in forecasting net discount rates. The general statistical issue here is whether it is better to form an aggregate and forecast from the history of the aggregate or to forecast the disaggregated series and then combine them to form the forecast of the aggregate. This issue crops up frequently in applied work and has been addressed in some generality in the literature.7 Unfortunately, the literature does not provide a clear-cut recommendation.

In the case of very large samples, when the time-series process of the underlying variables can be taken as known, the statistical issue is unambiguous. Multivariate forecasts using the history of the components will be at least as good as univariate forecasts using only the history of the aggregate. Using only the history of the aggregate will be efficient only under certain restrictions on the underlying process of the components.

For smaller samples, when coefficients must be estimated, it is not so clear which approach is preferable. Lütkepohl (1984) writes:

For short lead times aggregating predictions of the disaggregate variables seems preferable to forecasting the aggregate directly, whereas a univariate model implied by a multivariate model for the generation process of the disaggregate data seems to be advantageous for longer lead times. (p. 213)

In forensic applications, where long-term prediction with moderate sample sizes is the norm, this suggests that using multivariate prediction models may not be preferable. To examine this issue we provide Monte Carlo evidence on the performance of alternative estimators in samples typically used in forensic settings.

Finally, the practical issue may be even more complicated. Not only are the underlying coefficients unknown, but the underlying model is unknown and potentially changing. Hendry and Hubrich (2005) write:

… including disaggregate variables in an aggregate model should outperform in terms of predictability an aggregate model which only includes lags of the aggregate. However, we find that it does not always do so when forecasting euro area inflation. There are many steps between predictability in population and “forecastability” where the forecast model might differ from the data generation process. Recall that the predictability concept that we consider in this paper refers to a property of the variable of interest in relation to the information set considered. In contrast, forecastability refers to the improvement in forecast accuracy given the unconditional moments of a variable based on the information set available. The predictive value of disaggregate information can be off-set by estimation uncertainty; model selection; changing collinearity, as measured by the ratio of eigenvalues; and unmodeled breaks. (p. 31)

To address this final concern, we check the actual forecast performance of our alternative estimators.

In addition to the history of the components, one might suspect that other variables might be useful in predicting net discount rates. A common empirical finding is that long-term interest rates help predict future short-term interest rates. As such we also consider a three-variable system with wage growth, short-term interest rates and a long-term interest rate.

Time-series Properties of the Variables

As Cochrane (2007) points out in a related context, how one specifies the order of integration and cointegration of the underlying variables is of overwhelming importance in making and properly interpreting time-series forecasts. Unfortunately, as Cochrane also points out, we know very little about the long-term properties of the data. This section provides a battery of statistical tests on the individual and joint time-series properties of interest rates, earnings growth and net discount rates. These tests largely document the difficulties in discriminating among competing hypotheses concerning the long run properties of these variables.

We choose as our interest rate measure the one-year U.S. Treasury bill rate reported in January of each year and as our measure of the growth rate of earnings the annual growth rate of average weekly earnings of production workers as measured from January to January. These choices allow us to match the maturity of the interest rate variable with the growth rate of the earnings variable. We take the difference between these variables as the net discount rate. We also use as an additional predictor variable the five-year U.S. Treasury bill rate, again as reported in January. All data run from 1964 to 2008.

Figure 1 depicts the behavior of net discount rates over the last 45 years. This measure of the net discount rate remained relatively low in the 1960s and 1970s, rose dramatically during the 1980s, and has since returned close to its earlier levels.

Figure 1. Net Discount Rates

Table 2 presents formal statistical tests for stationarity and integration of the underlying variables. Using the Augmented Dickey-Fuller (ADF) test and the Elliott-Rothenberg-Stock (1996) (ERS) test, the null hypothesis of a unit root cannot be rejected for the two interest rate series and the earnings growth series. Both tests reject the null hypothesis of a unit root in the net discount rate and the term premium (five-year rates minus one-year rates). The Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests cannot reject stationarity for any of the series.8

Table 2 Tests for Stationarity in Processes: 1964–2008

For the three underlying series, one-year treasuries, five-year treasuries and earnings growth, neither the hypothesis of a unit root nor the hypothesis of stationarity can be rejected with confidence. The diagnostic tests on the two interest rate series are consistent with Cochrane's (2007) characterization of treasury interest rates as “near unit root” processes. Failure to reject either hypothesis is robust to lag length, kernel and bandwidth selections.

For the difference between one- and five-year treasury yields (a simple measure of the “term premium”), the hypothesis of a unit root is strongly rejected. This is consistent with the standard practice in the finance literature of treating the term premium as a stationary process. The rejection of a unit root in Table 2 is robust to lag length, kernel and bandwidth selections.

Table 2 reports that for the difference between short-term treasuries and earnings growth (net discount rates) the hypothesis of a unit root can be rejected at the .05 level. This rejection of a unit root in the net discount rate series deserves some discussion, given the extensive literature on this issue and the significance of this result for forecasting. Payne's (2007) recent review shows that prior statistical results have been mixed, with some studies finding evidence of stationarity and others finding evidence of a unit root. One reason for the mixed results is that these unit root tests are well known to be sensitive to nuisance parameters such as the lag length (in ADF tests) and bandwidth parameters (in ERS tests). The ADF test reported in Table 2 rejects a unit root using a lag length of zero, the choice indicated by the Akaike and Schwarz criteria. However, the Modified Akaike Criterion (Ng and Perron, 2001) and the Modified Schwarz Criterion (Liu et al., 1997) choose a lag length of four. For this choice of lag length, the null hypothesis of a unit root cannot be rejected. Thus the rejection of a unit root in net discount rates reported in Table 2 should be treated with some caution.
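The lag-length sensitivity just described can be reproduced along the following lines (a sketch, assuming `ndr` holds the 1964–2008 net discount rate series as a one-dimensional array; statsmodels' `adfuller` stands in for whatever software the authors used).

```python
from statsmodels.tsa.stattools import adfuller

# Schwarz (BIC) selection picks lag 0, which rejects a unit root ...
stat, pval, lags, *_ = adfuller(ndr, autolag="BIC")
print(f"ADF with BIC lag={lags}: stat={stat:.3f}, p={pval:.4f}")

# ... while fixing the lag length at four, as the modified criteria
# suggest, fails to reject (autolag=None uses maxlag directly).
stat4, pval4, *_ = adfuller(ndr, maxlag=4, autolag=None)
print(f"ADF with lag=4: stat={stat4:.3f}, p={pval4:.4f}")
```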

Our use of data through 2008 may explain why we reject a unit root whereas earlier studies failed to reject a unit root in the net discount rate series. To investigate the sensitivity of our results to the sample period, we recursively applied the ADF test (using the Schwarz lag length criterion) to samples beginning in 1964 and ending in each year from 1990 to 2008. For the sample ending in 1990, the hypothesis of a unit root could not be rejected at the 10% level. As the sample period is extended, the evidence for a unit root becomes less favorable. For samples ending after 1996, the unit root hypothesis can be rejected at the .05 level. The ADF statistic has continued to fall, so that for the data through 2008 the p-value has fallen to .0124.

We also consider the possibility of structural shifts in the net discount rate process. Some previous studies, for example Horvath and Sattler (1997) and Braun et al. (2005), find evidence suggesting that there have been significant shifts in the time-series process generating net discount rates. However, we find little evidence of such structural breaks. In particular, when we model net discount rates as a simple autoregressive process and test for structural change using the MaxF, ExpF and AveF statistics proposed in Andrews (1993) and Andrews and Ploberger (1994),9 the null hypothesis of no structural change could not be rejected at any reasonable significance level for either the constant term or the autoregressive coefficients.

The lack of evidence for a structural shift may be attributed to our use of more recent data through 2008. To investigate this hypothesis, we calculated the Andrews and Ploberger (1994) ExpF statistic recursively for samples starting in 1964 and ending in every year from 1990 through 2008. For subsamples ending in the early 1990s, the null hypothesis of no structural change in the constant term is rejected decisively. Starting in 1996, the null hypothesis could not be rejected at the .05 level. For the sample through 2008, the ExpF statistic has declined to 0.499, with Hansen's (1997) approximate asymptotic p-value rising to 0.441. It appears that the high net discount rates of the early 1980s that, when viewed from 1990, appeared to signal a long-term break, now seem (both to the eye in Figure 1 and to formal statistical tests) to be simply transitory shocks to a stationary series.

Braun et al. (2005) argue that neither the Andrews and Ploberger (1994) structural break tests, which assume stationarity of the underlying series, nor the ADF unit root test, which assumes no structural breaks, is appropriate. The Lumsdaine and Papell (1997) unit root test allows for structural breaks under the alternative hypothesis, but assumes no structural breaks under the null. Instead, Braun et al. use recently developed tests (Lee and Strazicich, 2003, 2004) that allow for both unit roots and structural breaks under the null hypothesis. To allow for this possibility, we calculated both the one-break and two-break versions of the Lee and Strazicich LMρ statistic, obtaining values of −4.985 and −6.228, respectively. Both are below their respective .01 critical values, so the unit root hypothesis is rejected.10 Thus, failing to allow for structural breaks does not account for our rejection of the unit root hypothesis in net discount rates.

Collectively, the results from Table 2 suggest restrictions on the multivariate process generating the three underlying variables. Stationarity of the term premium suggests that the three-variable system contains at least one cointegrating vector. If net discount rates are also stationary there could be two cointegrating vectors. Finally, if all three variables are stationary, the multivariate process is a vector autoregression in levels. For the bivariate system containing one-year rates and earnings growth, stationarity of net discount rates suggests at least one cointegrating vector. If both series are stationary, the system is a VAR in levels. If neither is stationary and they do not cointegrate, the system is a bivariate VAR in differences.

Formally, Table 3 presents the results of Johansen's cointegration tests for the trivariate system and the three bivariate systems. The columns refer to the number of cointegrating vectors under the null hypothesis. For all four systems the lag length parameter was selected by the Schwarz criterion. p-values are shown in parentheses below each test statistic.

Table 3 Tests for Cointegration: 1964–2008

For the three-variable system, the hypothesis of no cointegration is rejected at high significance levels by the Trace Statistic but cannot be rejected by the Maximum Eigenvalue statistic. Similarly, for the bivariate system including earnings growth and one-year treasuries, the Trace Statistic rejects no cointegration but the hypothesis cannot be rejected at the .05 level by the Maximum Eigenvalue statistic. For the bivariate system including earnings growth and five-year treasury interest rates, the hypothesis of no cointegration cannot be rejected by either statistic. For the bivariate system including one- and five-year treasury interest rates, the hypothesis of no cointegration is decisively rejected by both statistics.

As Emerson (2007) recently documents, conclusions concerning the number of cointegrating vectors can be sensitive to lag selection criteria. The results in Table 3 are similarly sensitive to the lag length criterion. Using the Akaike criterion rather than the Schwarz criterion, both the Trace and Maximum Eigenvalue statistics reject no cointegration for the three-variable system. Using the Akaike criterion, neither the Trace nor the Maximum Eigenvalue statistic can reject lack of cointegration for earnings and one-year treasuries. On the other hand, the rejection of zero cointegrating vectors for one- and five-year treasury interest rates is robust to a variety of lag length criteria and specifications.
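A sketch of how the Table 3 tests, and their sensitivity to the lag criterion, might be reproduced; it assumes `data` is a (T × 3) array with columns [one-year rate, five-year rate, earnings growth], and it is our reconstruction with statsmodels rather than the authors' code.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import coint_johansen

for criterion in ("bic", "aic"):          # Schwarz vs. Akaike
    lags = getattr(VAR(data).select_order(maxlags=4), criterion)
    result = coint_johansen(data, det_order=0, k_ar_diff=lags)
    print(f"{criterion}: lags={lags}")
    print("  trace statistics:  ", np.round(result.lr1, 2))  # H0: rank <= r
    print("  max-eig statistics:", np.round(result.lr2, 2))
```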

As stated in the introduction to this section, determining the order of integration and cointegration is problematic in samples typically observed in economics. In this section we found that the evidence was insufficient to determine whether or not the underlying interest and earnings growth processes contain a unit root. Similarly, though we found evidence that supports modeling net discount rates as a stationary process, we cannot conclusively rule out a unit root. The only strong finding is that the difference between one- and five-year treasury rates is stationary. We conclude that when modeling a three-variable system containing one-year treasuries, five-year treasuries and earnings growth we should consider three possibilities. First, all three variables may be stationary and the system can be modeled as a VAR in levels. Second, there may be one unit root in the system, so the system could be an Error Correction (EC) model with two cointegrating vectors. Finally, there may be two unit roots in the system so that the EC model contains only one cointegrating vector. In what follows we consider all three possibilities.

In the next three sub-sections we consider the forecasting performance of multivariate and traditional forecasting methods from three perspectives. First we consider the large sample case, where we can treat the model and coefficients as known. Next we examine how alternative estimation strategies perform in moderate samples, when the model must be specified and coefficients must be estimated. Finally, we evaluate how the alternative estimation methods perform on actual data from the U.S. economy.

Large Sample Theory

When forecasting linear combinations of underlying variables, should one perform multivariate forecasts of the underlying variables and then take the linear combinations of the resulting forecasts, or can one simply form the linear combinations and adopt a univariate approach, that is, forecast based on the history of the linear combinations? This question has been the subject of an extensive literature summarized and expanded upon in Lütkepohl (1987). He shows, in the context of vector autoregressive moving-average processes, that when forecasting arbitrary linear combinations of the underlying variables with the model and coefficients known, the multivariate approach is at least as good as the univariate approach. Under certain restrictions on the underlying multivariate model, however, the two may be equivalent.

In this section, we analyze a special case of this general problem. We derive restrictions under which forecasts of net discount rates based only on their own past are equivalent to multivariate forecasts that use the underlying interest rate and earnings data. Suppose interest rates and earnings growth are generated by a known, nth-order bivariate VAR model,

$$\begin{aligned} r_t &= \sum_{i=1}^{n} a_{11,i}\, r_{t-i} + \sum_{i=1}^{n} a_{12,i}\, g_{t-i} + v_{1,t}, \\ g_t &= \sum_{i=1}^{n} a_{21,i}\, r_{t-i} + \sum_{i=1}^{n} a_{22,i}\, g_{t-i} + v_{2,t}, \end{aligned} \qquad (6)$$

where the fundamental error terms, $v_{1,t}$ and $v_{2,t}$, are serially uncorrelated. It can be shown that the one-step-ahead forecast error variance of the optimal univariate estimator (using only past values of $ndr_t = r_t - g_t$) is equal to the one-step-ahead forecast error variance from the system (6) if and only if

$$a_{11,i} - a_{21,i} = a_{22,i} - a_{12,i}, \qquad i = 1, \ldots, n. \qquad (7)$$

The term on the LHS of (7) is the effect of an increase in past interest rates on current interest rates minus the effect of past interest rates on current earnings growth; that is, the effect of past interest rates on current net discount rates. The term on the RHS of (7) is the effect of past earnings growth on current earnings growth minus its effect on current interest rates; this is the effect of a decrease in past earnings growth on net discount rates. In other words, the effect of an increase in interest rates on the net discount rate must be the same as the effect of a decrease in wage growth on net discount rates.

If the restrictions in (7) are satisfied, the correctly specified univariate autoregression on net discount rates will forecast as well as the correctly specified bivariate predictor. Justifying the use of traditional estimators requires additional restrictions. The long-term average will be optimal only if the net discount rate process is white noise. The current value estimator requires the autoregression to be a pure random walk. The compromise estimator requires the process to be a stationary first-order AR with a particular autoregressive coefficient.
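To make the last requirement concrete, recall the standard result (stated here for reference, not derived in the paper) that for a stationary first-order autoregression the optimal k-step-ahead forecast is a weighted average of the current value and the long-run mean:

$$ndr_t = \mu + \rho\,(ndr_{t-1} - \mu) + \varepsilon_t \;\Longrightarrow\; E_t\left[ndr_{t+k}\right] = \rho^{k}\, ndr_t + \left(1 - \rho^{k}\right)\mu.$$

The equally weighted compromise estimator therefore corresponds to values of $\rho$ and forecast horizons for which the payout-weighted average of $\rho^{k}$ is close to one half.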

Now consider the case where an additional variable, the long-term interest rate ($rb_t$), is included in the statistical model. Let the three-variable system be represented by an nth-order trivariate VAR:

$$\begin{aligned} r_t &= \sum_{i=1}^{n} \left( a_{11,i}\, r_{t-i} + a_{12,i}\, g_{t-i} + a_{13,i}\, rb_{t-i} \right) + v_{1,t}, \\ g_t &= \sum_{i=1}^{n} \left( a_{21,i}\, r_{t-i} + a_{22,i}\, g_{t-i} + a_{23,i}\, rb_{t-i} \right) + v_{2,t}, \\ rb_t &= \sum_{i=1}^{n} \left( a_{31,i}\, r_{t-i} + a_{32,i}\, g_{t-i} + a_{33,i}\, rb_{t-i} \right) + v_{3,t}. \end{aligned} \qquad (8)$$

The restriction that univariate prediction is equal to multivariate prediction can be shown to be

$$a_{11,i} - a_{21,i} = a_{22,i} - a_{12,i} \quad \text{and} \quad a_{13,i} = a_{23,i}, \qquad i = 1, \ldots, n. \qquad (9)$$

The restrictions from the bivariate system remain, and we have an additional set of restrictions: the effect of long-term bond rates on short-term rates must be the same as their effect on earnings growth. In that case, the third variable has no effect on their difference, the net discount rate. These restrictions are necessary for univariate estimators using only net discount rates to be as efficient as predictions from the trivariate system. As discussed above, traditional estimators require even further restrictions on the univariate system.

In principle, the restrictions embodied in (7) or (9) could be tested. However, such tests would not likely be terribly informative. First, if the systems (6) or (8) contain one or more unit roots, the set of restrictions will generally include coefficients with non-standard distributions. The outcome and interpretation of these tests will depend critically on choosing the exact orders of integration and cointegration among the underlying variables. As discussed in the previous section, establishing these properties is problematic.

More importantly, it is not clear that tests of these restrictions would provide useful guides to prediction in moderately sized samples. If the restrictions cannot be rejected, we may be tempted to impose the restrictions. However, the large sample results do not provide any information as to any benefits of imposing the restrictions, when true. Rejection of the restrictions would suggest that multivariate methods may be more efficient. However, these large sample results may not carry over to moderate samples and, even if they do, standard test statistics would not provide us with a useful measure of the gain in forecast efficiency afforded by multivariate methods.

In the next section, we provide evidence of the relative efficiency of multivariate and traditional estimators in moderate samples.

Performance in Moderate Samples

In this subsection we perform a simulation study to examine how multivariate forecasting techniques perform relative to traditional estimators in moderate samples. Unlike the previous section, in which the model and coefficients are taken to be known with certainty, we assume that researchers select the model using a specification search and then estimate the parameters using standard estimation methods. Our simulations allow us to measure the potential gains in forecast efficiency and to examine the robustness of multivariate forecasting techniques relative to traditional estimators.

We examine the performance of alternative estimating strategies in samples from populations similar to the one that generated the U.S. data. Of course, the performance of alternative estimation strategies will depend on the underlying data generation process. As we showed earlier, we are unable to determine with high confidence the orders of integration and cointegration in the data. Therefore, we conduct our simulation using three different test populations, allowing the joint process for short-term interest rates, long-term interest rates and earnings growth to contain either zero, one or two unit roots. These processes span the possibilities identified earlier in this paper.

To construct our first test population, we estimate an unconstrained three-lag VAR in levels consisting of the one-year and five-year Treasury bill rates and earnings growth, using data from 1964 to 2008. The estimated VAR has (inverse) characteristic roots close to, but less than, unity. In this population, short-term interest rates, long-term interest rates and earnings growth are jointly and individually stationary series. To construct our second test population, we estimate a three-variable error correction (EC) model with two lagged differences and two cointegrating vectors. We impose the restriction that short- and long-term interest rates are cointegrated, with cointegrating vector (1,–1), and that short-term interest rates and earnings growth are cointegrated, with cointegrating vector (1,–1).11 In this population, all three variables are individually difference stationary, but net discount rates are stationary, as is the difference between short- and long-term rates. Finally, to construct our third test population we estimate an EC model with two lagged differences and one (unconstrained) cointegrating vector. In this population, all three variables are individually difference stationary and, although earnings growth and short- and long-term interest rates jointly cointegrate, net discount rates are not stationary.

For each of the three test populations, we simulate 10,000 series of length 55 using bootstrapped errors.12 We discard the first eight observations to minimize the dependence of our simulations on initial conditions. We use the next 42 observations to construct six different estimates of the Future Net Discount Rate, equation (4), leaving the last five observations of each simulation to form the actual (or realized) Future Net Discount Rate. Our six estimators consist of a trivariate estimation strategy, a bivariate estimation strategy, a univariate estimation strategy, the long-term average of net discount rates, the current net discount rate, and the compromise estimator—the simple average of the long-term average and the current value.
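A simplified sketch of the simulation design (our own reconstruction under stated assumptions): each replication draws bootstrapped errors from the fitted model's residuals, simulates a series of length 55, and discards the first eight observations. Here `coefs` (p × k × k lag matrices), `intercept` (length k) and `residuals` (T × k) are assumed to come from one of the three fitted test-population models; the names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_series(coefs, intercept, residuals, total=55, burn=8):
    """Simulate `total` observations from a VAR with bootstrapped errors,
    dropping the first `burn` to limit dependence on initial conditions.
    The 47 remaining observations split into 42 for estimation and 5 for
    forming the realized Future Net Discount Rate."""
    p, k, _ = coefs.shape
    history = np.zeros((p, k))               # flat initial conditions
    draws = []
    for _ in range(total):
        shock = residuals[rng.integers(len(residuals))]  # bootstrap draw
        y = intercept + shock
        for lag in range(p):
            y = y + coefs[lag] @ history[lag]
        draws.append(y)
        history = np.vstack([y[None, :], history[:-1]])  # shift lag window
    return np.asarray(draws)[burn:]
```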

The trivariate, bivariate and univariate strategies employ data-dependent specification search processes. For the trivariate strategy, we first determine a lag length from a VAR in differences using the Akaike Information Criterion (AIC). The order of cointegration is then chosen as the minimum number of cointegrating vectors that Johansen's Trace Statistic fails to reject at the .05 level. We then estimate an error correction model with the chosen order of cointegration.13 (If the number of cointegrating vectors chosen is three, we estimate a VAR in levels.) Finally, we use this model to forecast net discount rates over the next five years and, using equation (4), form an estimate of the Future Net Discount Rate (FNDR). The strategy for the bivariate approach, using only short-term interest rates and earnings growth, is similar: we first select a lag length according to the AIC, determine the order of cointegration, estimate the model and finally form an estimate of the FNDR. For the univariate approach, using only the history of net discount rates, we follow the same basic strategy. First, we select a lag length from an autoregression in differences. Using this lag length, we test for unit roots with the ADF test. If the ADF test rejects a unit root at the .05 level we estimate an autoregression in levels; otherwise we estimate an autoregression in differences. We use the resulting model to form an estimate of the FNDR.
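The trivariate search-and-estimate strategy might be sketched as follows, assuming `sample` is a (T × 3) array ordered [short rate, long rate, earnings growth] and reusing the `fndr` function sketched earlier; this is our reconstruction with statsmodels, not the authors' code.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

def trivariate_fndr_forecast(sample, horizon=5):
    # 1. Lag length by AIC from a VAR in first differences.
    lags = VAR(np.diff(sample, axis=0)).select_order(maxlags=4).aic
    # 2. Cointegration rank: the smallest rank the Johansen trace test
    #    fails to reject at the .05 level.
    rank = select_coint_rank(sample, det_order=0, k_ar_diff=lags,
                             method="trace", signif=0.05).rank
    if rank == sample.shape[1]:
        # Full rank: treat all series as stationary, VAR in levels.
        fitted = VAR(sample).fit(lags + 1)
        path = fitted.forecast(sample[-fitted.k_ar:], steps=horizon)
    else:
        fitted = VECM(sample, k_ar_diff=lags, coint_rank=rank,
                      deterministic="co").fit()
        path = fitted.predict(steps=horizon)
    ndr_path = path[:, 0] - path[:, 2]   # short rate minus earnings growth
    return fndr(ndr_path)                # weight with equation (4)
```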

Table 4 presents the results of our simulation for three test populations and six estimators.14 For each estimator and test population we present the RMSE and MAD. These are calculated from the difference between the “actual” Future Net Discount Rate and the rate forecast by the respective estimator for each of the 10,000 simulations.

Table 4 Forecast Performance on Test Data

Comparing first the trivariate and bivariate estimation strategies, we see that the trivariate system performs somewhat better in the first and third populations and slightly worse in the second. In terms of relative RMSE, the gain or loss in forecast efficiency from moving to a trivariate system is less than 4%. The univariate approach performed worse than either the trivariate or the bivariate approach for all three populations. Although the loss in efficiency is minor for the second population (2%), the loss is more considerable in populations one (24%) and three (13%). We conclude that for populations close to these, there can be significant advantages to using multivariate rather than univariate forecasting techniques, though the advantage of moving to trivariate systems (including long-term interest rates) is minor.

Comparing the three traditional estimators, we observe that the current value estimator performs worse than all three time-series estimators in all three populations. The comparative loss in efficiency ranges from 15% in the third population to over 37% in the first. Similarly, the long-term average performs worse in all three populations, with the comparative loss ranging from 9% in population two to over 80% in population three. These results suggest that there are considerable efficiency gains to be had by moving away from traditional methods to multivariate forecasting techniques.

The results for the compromise estimator (the average of the current value and long-term average) are mixed. The compromise estimator works considerably better than the long-term average for all three populations. It outperforms the current value by a considerable margin for populations one and two and is only slightly worse in population three. Given the uncertainty about which population best represents the true underlying population, these results suggest that the compromise estimator is the better choice within the group of traditional estimators. Comparing the compromise estimator to the time-series estimators, we find that the compromise estimator performs better than the univariate estimator in populations one and two, but slightly worse in population three. The compromise estimator actually outperforms the bivariate and trivariate estimators in population two, but performs somewhat worse in populations one and three.

Taken together, these results suggest that there are gains from employing multivariate techniques over univariate prediction, though the largest gains come from moving from univariate to bivariate methods. The traditional current value and long-term average estimators fared poorly in these simulations, performing uniformly worse than the time-series estimators. The efficiency loss from using the current value estimator rather than optimal time-series estimators could be close to 40%, and the loss from using the long-term average could be over 80%. Compared to the multivariate approaches, the compromise estimator performed worse in two of the test populations and somewhat better in one. In the best case, the gain in efficiency from moving from the compromise estimator to multivariate methods is about 20%; in the worst case, the multivariate estimators are less efficient by about 6%.

Actual Prediction Performance (Forecastability)

Results in the previous section are predicated upon the assumption that the time-series process can be described by simple statistical models. These models reflect the features of the historical process, but the actual process generating these variables is certainly more complex than the theoretical populations described above. Hendry and Hubrich (2005) note that sophisticated statistical techniques often appear to work well in theoretical populations but, because forecasting can be affected by model selection, changing collinearity, breaks in the data and other problems, they may perform poorly with actual data. As such, they make a distinction between the concept of predictability, which applies to populations, and "forecastability," which applies to performance in actual data. In this section we examine how well our estimation strategies work with the historical data.

To examine the performance of the estimators on historical data, we use actual wage growth and interest rate data from 1964 through 2003 as a basis for forecasting future net discount rates. For each estimator we form predictions for the five-year-ahead Future Net Discount Rate for each year from 1978 to 2003. We stop in 2003 to leave five future years of data with which to evaluate the final five-year-ahead Future Net Discount Rate.15

For the three time-series estimators, we employ the data dependent specification search process described in the previous section. The specification search and estimation procedure is recomputed for every sample period beginning in 1964 and ending in the years 1978 through 2003. This gives us 26 predictions and realizations. It should be noted, however, that because the predictions are for overlapping five-year intervals, we do not have 26 independent observations.
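The rolling evaluation can be expressed as a short loop (a sketch; `data_through` and `realized_fndr` are hypothetical helpers returning the sample ending in a given year and the realized index for that year, and `trivariate_fndr_forecast` is the sketch above).

```python
import numpy as np

errors = []
for year in range(1978, 2004):                      # 26 forecast years
    sample = data_through(year)                     # data from 1964 to `year`
    prediction = trivariate_fndr_forecast(sample)   # see sketch above
    errors.append(prediction - realized_fndr(year))

errors = np.asarray(errors)
print("RMSE:", np.sqrt(np.mean(errors ** 2)),
      "MAD:", np.mean(np.abs(errors)))
```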

Table 5 reports RMSEs and MADs for the entire set of five-year-ahead forecasts for the years 1978 to 2003 and also for two sub-periods, 1978–1989 and 1990–2003. We find examining the performance over the two sub-periods to be of interest for two reasons. First, net discount rates exhibited far greater volatility in the earlier sub-period, and we wish to see how robust the techniques are across the two sub-periods. Second, the later period may be of interest simply because it is more recent.

Table 5 Historical Forecast Performance: 1978–2003

For the overall sample, the multivariate approaches perform relatively poorly. The RMSEs for the trivariate and bivariate systems are worse than the RMSE achieved by the univariate approach, but the MAD is worst for the univariate estimator. Using either criterion, all of the time-series approaches under-perform the three traditional approaches. Of the traditional estimators, the compromise estimator works best overall. Similar results hold over the volatile 1980s. However, for this sub-sample, the univariate estimator always performs best among the time-series techniques and the compromise estimator once again performs best among all six alternatives.

The more recent period is a different story. Here the multivariate approaches worked considerably better. The trivariate and bivariate estimators perform better than the traditional estimators using either error criterion. Of the three traditional measures, the long-term average is just slightly better than the compromise estimator.

IV. Conclusion

There are a variety of methods available to estimate net discount rates. Classes of estimators include those generated from current or historical data, those based on professional forecasts and those derived from univariate and/or multivariate forecasting methods such as VAR and error correction models. In an earlier paper, Cushing and Rosenbaum (2006) showed that among the estimators based on current and historical data, a “compromise” estimator that equally weighted current and long-term average net discount rates was the best estimator. In this paper we extend their analysis, comparing the compromise estimator to published forecasts and to time-series forecasts.

We found no generally available professional forecasts of net discount rates. However, there are both private and public forecasts of interest rates and public forecasts of wage growth rates. SSA wage growth forecasts and CBO interest rate forecasts were used to generate net discount rate forecasts, whose performance was compared against realized net discount rates. Tests using both root mean squared errors and mean absolute deviations showed that net discount rates based on professional forecasts were no better than forecasts from the simple compromise estimator.

We then evaluated the performance of multivariate time-series forecasting techniques relative to the estimators traditionally used in forensic applications. We undertook this comparison from three different perspectives. First, we evaluated the theoretical advantages of multivariate techniques when samples are sufficiently large that the model and parameters can be taken as known. We showed that multivariate techniques would generally improve forecasts over univariate techniques that used only the history of net discount rates. In certain special cases, we demonstrated that univariate methods could perform as well.

Next, we evaluated the relative forecasting performance of time-series methods in test populations designed to be similar to those experienced in the U.S. economy over the last 30 years. We simulated 10,000 series for each of three test populations. The first population was generated from an unconstrained three-lag VAR in levels consisting of the one-year and five-year Treasury bill rates and an earnings growth rate. The second test population came from a three-variable error correction model in which short- and long-term interest rates are cointegrated and short-term interest rates and earnings growth are cointegrated. The third test population came from an error correction model with two lagged differences and a single cointegrating vector; in this population, the two interest rates and the earnings growth rate were jointly cointegrated.

For each of the three test populations, we constructed three multivariate, two traditional and the compromise estimators of the future net discount rate. For each, we compared predicted to realized values to calculate root mean squared errors and mean absolute deviations. Among the three multivariate estimators, the trivariate system generally performed the best in all three populations using either test criterion. Among the others, the compromise estimator generally outperformed the long-term average and current value estimators. Comparing the two best, the compromise estimator outperformed the trivariate system in population two by about 7%, but performed from 4% to 22% worse in populations one and three.

These results suggest that in our test populations, there may be gains from employing multivariate techniques over the traditional estimators. However, this conclusion is based upon the assumption that the time-series process can be described by simple statistical models. This led us to examine the performance of the estimators using actual wage growth and interest rate data for forecasting future net discount rates.

For the three multivariate and the three traditional estimators, we used historical data to form rolling predictions of the five-year-ahead Future Net Discount Rate for each year from 1978 to 2003 (leaving data through 2008 for construction of the realizations), and for two sub-periods, 1978–1989 and 1990–2003. For the entire sample and the period covering the 1980s, the multivariate approaches performed worse than the three traditional approaches. Of the traditional estimators, the compromise estimator worked best overall. The more recent period, however, saw a reversal in performance. Here the multivariate approaches worked considerably better, with the trivariate estimator performing best. Of the three traditional measures, the long-term average was just slightly better than the compromise estimator.

Taken as a whole, these results suggest two conclusions. First, if forensic economists are going to use traditional methods to forecast net discount rates, the compromise estimator proposed by Cushing and Rosenbaum (2006) seems a better choice than net discount rates based on professional forecasts. The second conclusion concerns the applicability of multivariate analysis. The empirical results make clear that if a multivariate approach is to be used, a trivariate model that incorporates estimates of interest rates and wage growth is the preferred specification. However, it is not clear that a multivariate approach is preferred to the compromise estimator. We did derive restrictions under which forecasts of net discount rates using historical data are equivalent to multivariate predictions, but we also argued that testing these restrictions may be problematic when the underlying distributions are unknown and the sample sizes are moderate. In the moderate-sample and forecastability tests, the trivariate estimator was not consistently better than the compromise estimator. Given the compromise estimator's performance across the full battery of tests, it may remain the more appealing choice until more conclusive evidence favors a multivariate alternative.

References

  • Andrews, D. W. K. "Tests for Parameter Instability and Structural Change With Unknown Change Point," Econometrica, 1993, 61(4): 821–856.
  • Andrews, D. W. K., and W. Ploberger. "Optimal Tests When a Nuisance Parameter Is Present Only Under the Alternative," Econometrica, 1994, 62: 1383–1414.
  • Ang, A., and M. Piazzesi. "A No-arbitrage Vector Autoregression of Term Structure Dynamics with Macroeconomic and Latent Variables," Journal of Monetary Economics, 2003, 50: 745–787.
  • Blue Chip. "Blue Chip Economic Indicators: Top Analysts' Forecasts of the U.S. Economic Outlook for the Year Ahead, March 1982–2006," Aspen Publishers, 2007, 32(3).
  • Braun, B., J. Lee, and M. C. Strazicich. "Historical Net Discount Rates and Future Economic Losses: Refuting the Common Practice," in Economic Foundations of Injury and Death Damages, edited by R. T. Kaufman, J. D. Rodgers, and G. D. Martin, Edward Elgar Publishing Ltd., 2005, 468–491.
  • Brookshire, M. L., M. R. Luthy, and F. L. Slesnick. "2006 Survey of Forensic Economists: Their Methods, Estimates, and Perspectives," Journal of Forensic Economics, Winter 2006, 19(1): 29–60.
  • Congressional Budget Office. "Evaluating CBO's Record of Economic Forecasts: Update," Washington, DC: Government Printing Office, 2007.
  • Congressional Budget Office. The Budget and Economic Outlook, Washington, DC: Government Printing Office, 2009. Retrieved from http://www.cbo.gov/ftpdocs/99xx/doc9957/01-07-Outlook.pdf.
  • Cochrane, J. H. "Commentary," Federal Reserve Bank of St. Louis Review, July/August 2007, 89(4): 271–282.
  • Cushing, M. J., and D. I. Rosenbaum. "Historical Averages, Unit Roots and Future Net Discount Rates: A Comprehensive Estimator," Journal of Forensic Economics, Spring/Summer 2006, 19(2): 139–159.
  • Cushing, M. J., and D. I. Rosenbaum. "How Much Confidence Do We Have in Future Net Discount Rates?," Journal of Forensic Economics, 2007, 20(1): 1–14.
  • Efron, B., and R. J. Tibshirani. An Introduction to the Bootstrap, New York: Kluwer Academic Publishers, 1993.
  • Elliott, G., T. J. Rothenberg, and J. H. Stock. "Efficient Tests for an Autoregressive Unit Root," Econometrica, 1996, 64(4): 813–836.
  • Emerson, J. "Cointegration Analysis and the Choice of Lag Length," Applied Economics Letters, 2007, 14: 881–885.
  • Ewing, B. T., J. T. Payne, and M. J. Piette. "Stationarity of the Net Discount Rate: Additional Evidence," Litigation Economics Digest, 1998, 3(1): 27–32.
  • Ewing, B. T., J. T. Payne, and M. J. Piette. "Time-series Behavior of Medical Cost Net Discount Rates: Implications for Total Offsetting and Forecasting," Journal of Forensic Economics, 2001, 14(1): 53–61.
  • Ewing, B. T., J. T. Payne, and M. J. Piette. "Forecasting Medical Net Discount Rates," Journal of Risk and Insurance, 2004, 70(1): 85–95.
  • Ewing, B. T., J. T. Payne, M. J. Piette, and M. A. Thompson. "Unit Roots and Asymmetric Adjustment: Implications for Valuing Fringe Benefits," Journal of Forensic Economics, 2002, 15(2): 173–179.
  • Fair, R. C., and R. J. Shiller. "Comparing Information in Forecasts from Econometric Models," American Economic Review, 1990, 80(3): 375–389.
  • Gamber, E. N., and R. L. Sorensen. "On Testing for the Stability of the Net Discount Rate," Journal of Forensic Economics, 1993, 7: 69–79.
  • Gamber, E. N., and R. L. Sorensen. "Are Net Discount Rates Stationary? The Implications for Present Value Calculations: Comment," Journal of Risk and Insurance, 1994, 61: 503–512.
  • Greene, W. H. Econometric Analysis, 5th Edition, New Jersey: Pearson Education, Inc., 2003.
  • Hansen, B. E. "Approximate Asymptotic P Values for Structural-Change Tests," Journal of Business & Economic Statistics, 1997, 15(1): 60–67.
  • Haslag, J. H., M. Nieswiadomy, and D. J. Slottje. "Are Net Discount Rates Stationary? The Implications for Present Value Calculations," Journal of Risk and Insurance, 1991, 58: 505–512.
  • Haslag, J. H., M. Nieswiadomy, and D. J. Slottje. "Are Net Discount Rates Stationary? Some Further Evidence," Journal of Risk and Insurance, 1994, 61(3): 513–518.
  • Hays, P., M. Schreiber, J. E. Payne, B. T. Ewing, and M. J. Piette. "Are Net Discount Ratios Stationary? Evidence of Mean Reversion and Persistence," Journal of Risk and Insurance, 2000, 67(3): 439–449.
  • Hendry, D. F., and K. Hubrich. Forecasting Aggregates by Disaggregates, Department of Economics, Oxford University and Research Department, European Central Bank, July 2005.
  • Horvath, P. A., and E. L. Sattler. "Calculating Net Discount Rates–It's Time to Recognize Structural Changes: A Comment and Extension," Journal of Forensic Economics, 1997, 10: 327–332.
  • Johnson, W. D., and G. M. Gelles. "Calculating Net Discount Rates–It's Time to Recognize Structural Change," Journal of Forensic Economics, 1996, 9: 119–129.
  • Kwiatkowski, D., P. C. B. Phillips, P. Schmidt, and Y. Shin. "Testing the Null Hypothesis of Stationarity Against the Alternative of a Unit Root: How Sure Are We That Economic Time-series Have a Unit Root?," Journal of Econometrics, 1992, 54(1–3): 159–178.
  • Lee, J., and M. C. Strazicich. "Minimum LM Unit Root Test with Two Structural Breaks," Review of Economics and Statistics, 2003, 85(4): 1082–1089.
  • Lee, J., and M. C. Strazicich. "Minimum LM Unit Root Test with One Structural Break," Working Paper, Department of Economics, Appalachian State University, 2004.
  • Liu, J., S. Wu, and J. V. Zidek. "On Segmented Multivariate Regression," Statistica Sinica, 1997, 7: 497–525.
  • Lütkepohl, H. "Forecasting Contemporaneously Aggregated Vector ARMA Processes," Journal of Business & Economic Statistics, July 1984, 2(3): 201–214.
  • Lütkepohl, H. Forecasting Aggregated Vector ARMA Processes, Berlin: Springer-Verlag, 1987.
  • Lumsdaine, R., and D. Papell. "Multiple Trend Breaks and the Unit-Root Hypothesis," Review of Economics and Statistics, 1997, 79(2): 212–218.
  • Mentz, R. P., P. A. Morettin, and F. A. Pino. "Modelling and Forecasting Linear Combinations of Time-series," International Statistical Review/Revue Internationale de Statistique, 1987, 55(3): 295–313.
  • Ng, S., and P. Perron. "Lag Length Selection and the Construction of Unit Root Tests with Good Size and Power," Econometrica, 2001, 69: 1519–1554.
  • Payne, J. E. "Testing for a Unit Root in the Net Discount Rate: A Survey of the Empirical Results," Journal of Business Valuation and Economic Loss Analysis, 2007, 2(2).
  • Payne, J. E., B. T. Ewing, and M. J. Piette. "An Inquiry Into the Time-series Properties of Net Discount Rates," Journal of Forensic Economics, 1999a, 12(3): 215–223.
  • Payne, J. E., B. T. Ewing, and M. J. Piette. "Mean Reversion in Net Discount Rates," Journal of Legal Economics, Spring/Summer 1999b, 9(1): 69–80.
  • Payne, J. E., and H. Mohammadi. "Time-series Properties of Capitalization Rates: A Comment and Extension," Journal of Forensic Economics, 2006, 19(1): 215–223.
  • Sen, A., G. M. Gelles, and W. D. Johnson. "A Further Examination Regarding the Stability of the Net Discount Rate," Journal of Forensic Economics, Winter 2000, 13(1): 23–28.
  • Sen, A., G. M. Gelles, and W. D. Johnson. "Structural Instability in the Net Discount Rate Series Based on High Grade Municipal Bond Yields," Journal of Legal Economics, Fall 2002, 12(2): 87–100.
  • Shiller, R. J. "The Volatility of Long-term Interest Rates and Expectations Models of the Term Structure," Journal of Political Economy, December 1979, 87(6): 1190–1219.
  • Social Security Administration. Annual Report of the Board of Trustees of the Federal Old-Age and Survivors Insurance and Federal Disability Insurance Trust Funds, Washington, DC: Government Printing Office, 2008. Retrieved from http://www.ssa.gov/OACT/TR/TR08/trTOC.html.
Notes

1. For a sample of applications and analyses, see Braun et al., (2005); Ewing et al., (1998, 2001, 2003); Ewing et al., (2002); Gamber and Sorensen (1993, 1994); Haslag et al., (1991, 1994); Hays et al., (2000); Horvath and Sattler (1997); Johnson and Gelles (1996); Payne et al., (1999a, 1999b); Payne and Mohammadi (2006); and Sen et al., (2000, 2002).

2. In the notes to a table showing interest rate projections, Blue Chip (2007) writes "[t]he table below shows the latest U.S. Blue Chip Consensus projections by years for 2009 through 2013, …. Apply these projections cautiously. For the most part economic and political forces cannot be evaluated over such long time spans." (p. 14)

3. Estimates using Blue Chip interest rate forecasts (reported in an earlier version of this paper) yielded essentially the same results.

4. Results for a 10-year horizon, not reported here, are similar to those for our five-year horizon. We would like to use the much longer horizons typically encountered in forensic applications, but doing so severely limits the sample size and restricts us to evaluating forecasts made in the distant past.

5. We define the net discount rate as the difference between the interest rate and the growth rate of wages. Defining the net discount rate as (1+i)/(1+g) − 1 yields very similar conclusions.

6. Ang and Piazzesi (2003) add macroeconomic variables to their VAR interest rate forecasting model.

7. See, for example, Lütkepohl (1984, 1987), Mentz et al., (1987), and Hendry and Hubrich (2005).

8. See Kwiatkowski et al., (1992).

9. Following standard recommendations, we excluded breaks within 15% of either end of the series. The failure to reject the hypothesis of no structural change is robust to one-, two- and three-lag specifications for the autoregression.

10. The results reported are for Lee and Strazicich's "Model A," which allows for a break in the intercept. Allowing for both a trend and a unit root does not seem plausible for the net discount rate series. Following their procedure, the lag length cutoff criterion is the 10% critical value of the t-statistic on the longest lag. The Gauss program for computing these statistics is kindly provided on Junsoo Lee's website: http://cba.ua.edu/~jlee/gauss/index.htm.

11. We include a constant in all cointegrating vectors. The joint restrictions imposed on the cointegrating vectors could not be rejected at the .05 level.

12. See Efron and Tibshirani (1993).

13. This specification search process can certainly be criticized. However, we believe it represents a standard procedure and is a fair characterization of how applied econometricians might produce forecasts from a three-variable system.

14. We also evaluated the performance of the three estimators that correspond exactly to the specifications of the underlying populations. Not surprisingly, the correctly specified estimator had the lowest RMSE in the corresponding population.

15. Results for the 10-year forecast horizon, using forecasts from 1978 to 1998, are similar.
Copyright: © 2010 National Association of Forensic Economics
Figure 1. Net Discount Rates
Contributor Notes

*Professor of Economics, Department of Economics, University of Nebraska-Lincoln, Lincoln NE.

**Corresponding author. Professor of Economics, Department of Economics, University of Nebraska-Lincoln, Lincoln NE. The authors thank the participants at the National Association of Forensic Economists session at the Western Economic Association meeting in the summer of 2008 and three anonymous referees for their valuable comments. We also thank Mary McGarvey for help with the Gauss estimation.