Heteroskedasticity

The classical statistical assumptions underlying econometric analysis are a set of requirements that must hold for ordinary least squares (OLS) to yield the best linear unbiased estimator of a regression model. Heteroskedasticity violates the classical assumption that the observations of the error term are drawn from a distribution with a constant variance. Homoskedasticity, the assumption of a constant variance across observations of the error term, is not always realistic: often, the larger the value of an independent variable, the larger the variance of the associated disturbance.
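
In symbols, for a simple regression model the two assumptions can be stated as follows; this is a minimal restatement in standard textbook notation, and the specific form σi² = σ²xi² is only one common illustration:

```latex
% Simple regression model with disturbance \varepsilon_i
y_i = \beta_0 + \beta_1 x_i + \varepsilon_i , \qquad i = 1, \dots, n

% Homoskedasticity: the same variance for every observation
\operatorname{Var}(\varepsilon_i) = \sigma^2 \quad \text{for all } i

% Heteroskedasticity: the variance differs across observations,
% for example \sigma_i^2 = \sigma^2 x_i^2 when it grows with the regressor
\operatorname{Var}(\varepsilon_i) = \sigma_i^2
```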

In general, heteroskedasticity is more likely to arise in cross-sectional models than in time-series models, although it can occur in time-series models in which the dependent variable changes significantly. Heteroskedasticity can also occur in any model in which the quality of data collection changes dramatically within the sample, or it can be caused by a specification error in the model.

When homoskedasticity is violated, the ordinary least squares estimator of the regression coefficients (βOLS) remains unbiased, but it no longer has the minimum variance among all linear unbiased estimators. Heteroskedasticity causes OLS to tend to underestimate the variances (and standard errors) of the coefficients. As a result, tests of statistical significance, such as the t-statistic and the F-statistic, cannot be relied on in the face of uncorrected heteroskedasticity. In practice, OLS usually produces higher t-scores than would be obtained if the error terms were homoskedastic, leading researchers to reject null hypotheses that should not be rejected.
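
A small simulation can illustrate this tendency. The sketch below assumes numpy and statsmodels are available and uses an invented data-generating process purely for illustration; it compares the conventional OLS standard error and t-score of a slope with heteroskedasticity-robust (HC1) counterparts.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, size=n)
# Disturbance variance grows with the regressor: a textbook form of heteroskedasticity
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=n) * x

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()                   # conventional OLS standard errors
robust_fit = sm.OLS(y, X).fit(cov_type="HC1")  # White/HC1 robust standard errors

# The slope estimate is identical, but the conventional standard error is
# typically too small here, so the conventional t-score is typically inflated.
print("slope estimate:", ols_fit.params[1])
print("conventional SE and t:", ols_fit.bse[1], ols_fit.tvalues[1])
print("robust (HC1) SE and t:", robust_fit.bse[1], robust_fit.tvalues[1])
```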

There is no universally agreed-upon method of testing for heteroskedasticity; econometric textbooks list as many as eight different methods. However, a visual inspection of the residuals plotted against the suspected independent variable provides the first step in detecting the problem, and many computer packages can produce this graph. Commonly used detection tests include the Goldfeld-Quandt test, the Glejser test, the Maximum Likelihood technique, the Park test, the White test, and Bartlett's test. The majority of these tests use the residuals of an equation to test for the possibility of heteroskedasticity in the error terms. Every detection test has certain disadvantages, such as its computational cost (the Maximum Likelihood technique) or the need to identify a proper proportionality factor representing the most likely form of heteroskedasticity (the Park test). It is worth mentioning, however, that an extensive Monte Carlo study of these techniques found the Maximum Likelihood approach to be the most desirable.
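
As an illustration, the following sketch produces the residual plot and runs the White and Goldfeld-Quandt tests with the statsmodels diagnostics functions; the simulated data and the choice of these two tests are assumptions made for the example, not prescriptions from the article.

```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white, het_goldfeldquandt

rng = np.random.default_rng(1)
n = 500
# Observations sorted by the suspected variable, as the Goldfeld-Quandt split requires
x = np.sort(rng.uniform(1, 10, size=n))
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=n) * x  # error variance grows with x
X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid

# Step 1: visual inspection of squared residuals against the suspected variable
plt.scatter(x, resid ** 2)
plt.xlabel("suspected independent variable")
plt.ylabel("squared residuals")
plt.show()

# White test: regress squared residuals on the regressors, their squares, and
# cross-products; the LM statistic is chi-squared under homoskedasticity
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, X)
print("White test p-value:", lm_pvalue)

# Goldfeld-Quandt test: compare residual variances in two subsamples
gq_stat, gq_pvalue, _ = het_goldfeldquandt(y, X)
print("Goldfeld-Quandt p-value:", gq_pvalue)
```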

The first step in correcting heteroskedasticity is to check for an omitted variable that might be causing impure heteroskedasticity. If the specification is as good as possible, then remedies such as Weighted Least Squares (WLS) or Heteroskedasticity-Corrected Standard Errors (HCSE) should be considered. WLS involves dividing the main equation by whatever factor will make the error term homoskedastic and then rerunning the regression on the transformed variables; a disadvantage of this method is identifying that proportionality factor. HCSE is the most popular remedy for heteroskedasticity, and it takes a completely different approach to the problem: it focuses on improving the standard errors of the coefficients without changing the parameter estimates. Among its disadvantages are that it works best on large samples and that not all computer regression software packages calculate HCSE.
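
Both remedies can be sketched with standard software. In the example below, the proportionality factor is assumed to be the regressor itself (so the error standard deviation is taken as proportional to x and the WLS weights are 1/x²), which is only one common textbook choice, and the HCSE variant shown is White's HC1 correction as implemented in statsmodels.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(1, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=n) * x  # Var(eps_i) proportional to x_i**2
X = sm.add_constant(x)

# Weighted Least Squares: divide the equation through by the proportionality factor.
# With Var(eps_i) = sigma**2 * x_i**2, the appropriate weights are 1 / x_i**2.
wls_fit = sm.WLS(y, X, weights=1.0 / x ** 2).fit()
print("WLS coefficients:", wls_fit.params, "standard errors:", wls_fit.bse)

# Heteroskedasticity-corrected standard errors: the coefficient estimates are the
# ordinary OLS ones; only the standard errors (and hence t-scores) change.
hc_fit = sm.OLS(y, X).fit(cov_type="HC1")
print("OLS coefficients:", hc_fit.params, "HC1 standard errors:", hc_fit.bse)
```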

BIBLIOGRAPHY

Kennedy, Peter. 1987. A Guide to Econometrics. Cambridge, MA: MIT Press.

Studenmund, A. H. 2001. Using Econometrics: A Practical Guide. 4th ed. New York: Addison Wesley Longman.

Persefoni Tsaliki
