
# Analysis of time series

Module by: Christopher Curran

Summary: This module provides a cursory introduction to the techniques used in the analysis of time series that is intended for advanced undergraduates.

## Analysis of Time-Series

### Introduction

This module offers a brief introduction to some of the issues that arise in the analysis of time-series. Most of the topics covered are those that were attacked first by statisticians and economists. As such, they do not demand the more sophisticated tools used in more modern approaches to time-series. In spite of these shortcomings, they should give you some understanding of the issues that arise with the use of time-series in econometric analyses. One final note of explanation is necessary. These notes are designed to give you a brief introduction to how Stata handles time-series data. They are not a substitute for reading the Stata manual, completing a forecasting course, or reading standard texts on this rather complicated field.

### Time-series analysis in Stata

Throughout this module we work with US macroeconomic data included in the MS Excel file Macro data.xls. The variables are the real level of investment (RINV), real gross national product (RGNP), and the real interest rate (RINTRATE). The real interest rate is approximated by the difference between the nominal interest rate and the rate of change of the price index from the previous year. The data are for the years 1963 to 1982. You can replicate the analysis done here by copying this data set into a Stata file.

The first step after entering the data set into Stata is to declare that the data set is a time-series. The command to do this is:

. tsset year

The data set can be declared with any of several time frequencies, including daily, weekly, monthly, quarterly, half-yearly, yearly, and generic.1

Assume that we want to estimate the following regression:

$$RINV_t = \beta_0 + \beta_1 RGNP_t + \beta_2 RINTRATE_t + \varepsilon_t \qquad (1)$$

using the data set in the appendix. Figure 1 shows this regression command and the resultant output.

On the surface the estimates seem "reasonable" because the signs on the two explanatory variables are what theory predicts they should be and the parameter for real GNP is statistically different from zero. However, an examination of the residuals shown in Figure 2 suggests that the error terms might exhibit autocorrelation.

There are several issues that arise here. First, what sort of models can we use to account for autocorrelation? Second, what sorts of tests exist for detecting the existence of autocorrelation? We begin with the first of these questions by introducing the concept of first-order autocorrelation. Consider the following model:

$$y_t = \beta_0 + \beta_1 x_t + \varepsilon_t. \qquad (2)$$

We say that this model exhibits first-order autocorrelation if the error terms can be written as:

$$\varepsilon_t = \rho \varepsilon_{t-1} + \mu_t, \qquad (3)$$

where $\mu_t \sim N(0, \sigma^2)$. Equation (3) implies that the error terms in (2) are correlated with each other. It is rather easy to show that, while the estimates of the unknown parameters are unbiased, the estimates of the standard errors are biased: downward if $1 > \rho > 0$ and upward if $-1 < \rho < 0$. This conclusion holds as long as the source of the autocorrelation is (3). If, on the other hand, the source of the autocorrelation among the error terms in (2) is omitted explanatory variables (whose effects are absorbed in the error term), we have a potentially more serious problem. In particular, if the omitted explanatory variables are correlated with the included explanatory variables (as is often true in time-series), then the estimates of the unknown slope parameters are also biased.
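The downward bias of the reported standard errors under positive autocorrelation is easy to check by simulation. The following is a minimal pure-Python sketch (not part of the module's Stata workflow; the function name, parameter values, and data-generating process are our own illustration):

```python
import random
import statistics

def simulate_ols_se_bias(rho=0.8, n=50, reps=2000, seed=1):
    """Monte Carlo check: with AR(1) errors (rho > 0) and a trending
    regressor, the usual OLS standard-error formula understates the
    true sampling variability of the slope estimate."""
    rng = random.Random(seed)
    x = list(range(n))                      # trending regressor
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slopes, reported_ses = [], []
    for _ in range(reps):
        # AR(1) errors: eps_t = rho * eps_{t-1} + mu_t, mu_t ~ N(0, 1)
        eps, prev = [], 0.0
        for _ in range(n):
            prev = rho * prev + rng.gauss(0.0, 1.0)
            eps.append(prev)
        y = [1.0 + 2.0 * xi + ei for xi, ei in zip(x, eps)]
        ybar = sum(y) / n
        b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
        b0 = ybar - b1 * xbar
        resid = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]
        s2 = sum(r * r for r in resid) / (n - 2)
        slopes.append(b1)
        reported_ses.append((s2 / sxx) ** 0.5)
    # true sampling spread of b1 vs. the average OLS-reported SE
    return statistics.stdev(slopes), statistics.mean(reported_ses)

true_sd, avg_reported_se = simulate_ols_se_bias()
```

With $\rho = 0.8$ the average reported standard error comes out well below the empirical standard deviation of the slope estimates, in line with the downward-bias claim.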

For the moment we will assume that Equations (2) and (3) are true representations of the world. What then can we do to estimate (2)? We need to find a way to transform (2) so that the error term of whatever regression we estimate does not exhibit autocorrelation. In time period $t-1$ we have:

$$y_{t-1} = \beta_0 + \beta_1 x_{t-1} + \varepsilon_{t-1}. \qquad (4)$$

Multiply (4) by $\rho$ to get:

$$\rho y_{t-1} = \rho \beta_0 + \rho \beta_1 x_{t-1} + \rho \varepsilon_{t-1}. \qquad (5)$$

Now subtracting (5) from (2) gives:

$$y_t - \rho y_{t-1} = \beta_0 + \beta_1 x_t + \varepsilon_t - \left( \rho \beta_0 + \rho \beta_1 x_{t-1} + \rho \varepsilon_{t-1} \right)$$

or, equivalently,

$$\left( y_t - \rho y_{t-1} \right) = \beta_0 \left( 1 - \rho \right) + \beta_1 \left( x_t - \rho x_{t-1} \right) + \left( \varepsilon_t - \rho \varepsilon_{t-1} \right).$$

Let

$$y_t^* = y_t - \rho y_{t-1},$$

$$\beta_0^* = \beta_0 \left( 1 - \rho \right),$$

and

$$x_t^* = x_t - \rho x_{t-1}.$$

Remember that (3) implies that $\mu_t = \varepsilon_t - \rho \varepsilon_{t-1}$. Thus, we have:

$$y_t^* = \beta_0^* + \beta_1 x_t^* + \mu_t, \qquad (6)$$

where $\mu_t \sim N(0, \sigma^2)$. Thus, we have a regression for which the OLS estimates would be BLUE (Best Linear Unbiased Estimator), if only we knew the true value of $\rho$.
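Given a known $\rho$, the quasi-differencing transform behind Equation (6) takes one line of code. A minimal sketch in Python rather than Stata (the helper name and example values are our own):

```python
def quasi_difference(series, rho):
    """Return z*_t = z_t - rho * z_{t-1} for t = 2..T;
    the first observation is dropped."""
    return [series[t] - rho * series[t - 1] for t in range(1, len(series))]

rho = 0.5  # assumed known here; in practice rho must be estimated
y_star = quasi_difference([2.0, 2.5, 3.1, 3.4, 4.0], rho)
x_star = quasi_difference([1.0, 1.2, 1.5, 1.6, 1.9], rho)
# Regressing y_star on x_star recovers beta_1 directly; the estimated
# intercept is beta_0 * (1 - rho), per Equation (6).
```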

We now turn to the issue of detecting the existence of autocorrelation. In what follows we focus mainly on the detection of first-order autocorrelation as shown in Equation (3). We can use the Durbin-Watson test to see if our suspicions are correct. The Durbin-Watson statistic tests the hypothesis:

$$H_0: \rho = 0 \qquad H_1: \rho \neq 0 \qquad (7)$$

The details of the test statistic can be found in any econometrics textbook and need not detain us here. What you need to know about the DW-statistic is that (1) it has a mean value of 2, and (2) because its distribution lies between two limiting distributions, we need to look at two critical values, one for each of the limiting distributions. Figure 3 illustrates the probability density function (pdf) of the Durbin-Watson statistic. The true pdf lies somewhere between the blue pdf and the red pdf. What is shown in the figure is the point below which, say, 5 percent of each distribution lies. The true critical point lies somewhere between $d_L$ and $d_U$. These values are relevant to testing the null hypothesis of no autocorrelation against the alternative hypothesis of positive autocorrelation (i.e., $\rho > 0$).

If $d < d_L$, we can reject the null hypothesis of no autocorrelation; if $d_U < d < 4 - d_U$, we cannot reject the null hypothesis; and if $d_L < d < d_U$, the results of the test are inconclusive. Moreover, since the distribution is symmetric around 2 and bounded between 0 and 4, the critical values for the alternative hypothesis of negative autocorrelation (i.e., $\rho < 0$) are 4 minus the upper and lower critical values, as shown in Figure 3. Critical values for the Durbin-Watson statistic can be found in the appendices of most econometrics textbooks.

The command for the test and the resultant DW-statistic for the estimate of Equation (1) are shown in Figure 4. The 5 percent critical values for the Durbin-Watson statistic for a sample size of 19 with two parameters (besides the intercept) estimated are 1.074 and 1.536; if the observed value of the DW-statistic is between 1.536 and 2.464, we cannot reject the null hypothesis that the residuals do not exhibit autocorrelation. Our value of 1.32 falls in the inconclusive region, where we cannot be sure whether or not to reject the null hypothesis.
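For intuition, the DW-statistic is simple to compute from a residual series. A pure-Python sketch (our own helper, not Stata's implementation):

```python
def durbin_watson(resid):
    """d = sum_{t=2}^{T} (e_t - e_{t-1})^2 / sum_{t=1}^{T} e_t^2.
    d is near 2 when residuals are uncorrelated, near 0 under strong
    positive autocorrelation, and near 4 under strong negative
    autocorrelation."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e * e for e in resid)
    return num / den

smooth = [1.0, 1.1, 0.9, 1.0, 1.1, 0.9]      # persistent, same-signed residuals
alternating = [1.0, -1.0, 1.0, -1.0, 1.0]    # sign flips every period
# durbin_watson(smooth) is close to 0; durbin_watson(alternating) is close to 4.
```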

At this point we can try the Cochrane-Orcutt estimator. Figure 5 reports the results of using the Cochrane-Orcutt estimation procedure. Notice that it took 7 iterations for the estimate of $\rho$ to converge. If we use the Prais-Winsten estimation technique, we get the results shown in Figure 6. It is reassuring to see that the two estimation techniques do not yield estimates of the standard errors that are substantially different from each other.

Using either the Cochrane-Orcutt or the Prais-Winsten estimator depends on the assumption that the error terms exhibit first-order autocorrelation. Unfortunately, there is no particular theoretical reason to believe this assumption. Why, for instance, couldn't the error terms of Equation (2) exhibit second-order autocorrelation of the form:

$$\varepsilon_t = \rho_1 \varepsilon_{t-1} + \rho_2 \varepsilon_{t-2} + \mu_t? \qquad (8)$$

There is a more troubling possible explanation for the low Durbin-Watson statistic: the model may be misspecified. In particular, there may be important explanatory variables omitted from the regression. These omitted explanatory variables may exhibit autocorrelation and, thus, may be the source of the autocorrelation in the error term. If the omitted explanatory variables are correlated with the included explanatory variables, then the parameter estimates are biased. The large difference between the estimates of the parameter on the real interest rate from the OLS regression and from the Cochrane-Orcutt estimation is suggestive of model misspecification.2

### More modern time-series models

#### ARMA models

The model we described above is assumed to have first-order autoregressive error disturbances. Such a process is referred to as AR(1). The error structure in (8) is AR(2). If we apply this concept to a data series, we would call the following an AR(p) process:

$$y_t = \alpha_0 + \sum_{i=1}^{p} \beta_i y_{t-i}. \qquad (9)$$

Another approach available to us is to think of a data series as a weighted average of error terms that are assumed to have a mean of zero, a fixed variance, and no correlation over time3:

$$y_t = \sum_{i=0}^{q} \beta_i \varepsilon_{t-i}. \qquad (10)$$

A data series exhibiting this pattern is called a moving average process, or MA(q). The error term is known in the literature as white noise. A data series that has both autoregressive and moving average characteristics is called an autoregressive moving average (ARMA) series; an ARMA(p, q) series is:

$$y_t = \alpha_0 + \sum_{i=1}^{p} \beta_i y_{t-i} + \sum_{i=0}^{q} \beta_i \varepsilon_{t-i}. \qquad (11)$$

It may help to show two series constructed to have different ARMA patterns. Figure 7 shows one potential time series generated by the ARMA(2,1) process:

$$y_t = 0.67 y_{t-1} + 0.33 y_{t-2} + 0.1 \varepsilon_t + 0.05 \varepsilon_{t-1}. \qquad (12)$$

Figure 8 shows one potential time series generated by the ARMA(1,1) process:

$$y_t = 0.67 y_{t-1} + 0.1 \varepsilon_t + 0.05 \varepsilon_{t-1}. \qquad (13)$$

#### Stationarity

Consider the time-series $y_t$. We define this stochastic process as covariance stationary if:

$$E\left[ y_t \right] = E\left[ y_{t-s} \right] = \mu, \qquad (14)$$
$$E\left[ \left( y_t - \mu \right)^2 \right] = E\left[ \left( y_{t-s} - \mu \right)^2 \right] = \sigma^2, \text{ and} \qquad (15)$$
$$E\left[ \left( y_t - \mu \right)\left( y_{t-s} - \mu \right) \right] = E\left[ \left( y_{t-j} - \mu \right)\left( y_{t-j-s} - \mu \right) \right] = \gamma_s. \qquad (16)$$

The last term, $\gamma_s$, is known as the autocovariance. A time-series is defined to be covariance stationary if its mean and all its autocovariances are unaffected by a change of time origin. We define the autocorrelation between $y_t$ and $y_{t-s}$ as:

$$\rho_s = \frac{\gamma_s}{\gamma_0}. \qquad (17)$$

Quite often you can create a stationary time-series from a non-stationary one by taking first differences of the non-stationary series. If the first difference does not produce a stationary series, you continue taking first differences until you find a stationary series. For instance, the time-series shown in Figure 7 appears to be non-stationary. The first differences of this series are shown in Figure 9. To the eye, the first differences of (12) appear to be stationary. However, we really cannot tell anything for sure from the graph of a data set; we need to use the parameter restrictions derived in advanced texts to determine whether a data set is stationary.4
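First-differencing is the simplest of these transformations. Sketched in Python (the helper name and example values are our own):

```python
def first_difference(series):
    """Return the differenced series dy_t = y_t - y_{t-1}
    (one observation shorter than the input)."""
    return [series[t] - series[t - 1] for t in range(1, len(series))]

# A random-walk-like level series trends upward, but its first
# differences are just the period-to-period increments.
levels = [1.0, 2.0, 2.5, 4.0, 5.5]
increments = first_difference(levels)   # [1.0, 0.5, 1.5, 1.5]
```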

#### The autocorrelation function

One of the major ways to identify the structure of a time series is to look at its autocorrelation function. The autocorrelation function, $\rho_s$, is the correlation between $y_t$ and $y_{t-s}$. Stata estimates it for a time-series using the following formula [StataCorp: p. 60]:

$$\hat{\rho}_s = \frac{\sum_{t=s+1}^{T} \left( y_t - \bar{y} \right)\left( y_{t-s} - \bar{y} \right)}{\sum_{t=1}^{T} \left( y_t - \bar{y} \right)^2}.$$
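A sample autocorrelation function of this kind can be computed directly. A pure-Python sketch (our own helper, not Stata's implementation):

```python
def sample_acf(y, max_lag):
    """Sample autocorrelations rho_hat_s = gamma_hat_s / gamma_hat_0, with
    gamma_hat_s = (1/T) * sum_{t=s+1}^{T} (y_t - ybar) * (y_{t-s} - ybar)."""
    T = len(y)
    ybar = sum(y) / T
    dev = [v - ybar for v in y]
    gamma0 = sum(d * d for d in dev) / T
    return [sum(dev[t] * dev[t - s] for t in range(s, T)) / T / gamma0
            for s in range(1, max_lag + 1)]

# A strongly trending series has autocorrelations that start near 1 and
# decay slowly; for white noise they hover near 0 at every lag.
```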

The researcher then has to compare the estimated autocorrelation function with the theoretical autocorrelation function of a comparable data series. To see how to use the autocorrelation function, consider the following five time series5:

(18)
(19)
(20)
$$\text{MA(1)}: \quad y_t = \varepsilon_t - 0.7 \varepsilon_{t-1}, \qquad (21)$$
$$\text{ARMA}(2, 1): \quad y_t = 0.7 y_{t-1} - 0.49 y_{t-2} + \varepsilon_t, \text{ and} \qquad (22)$$
ARMA( 1, 2 ): y t =0.7 y t1 + ε t 0.7 ε t1 . ARMA( 1, 2 ): y t =0.7 y t1 + ε t 0.7 ε t1 . MathType@MTEF@5@5@+=feaagyart1ev2aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLnhiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr4rNCHbGeaGqiVu0Je9sqqrpepC0xbbL8F4rqqrFfpeea0xe9Lq=Jc9vqaqpepm0xbba9pwe9Q8fs0=yqaqpepae9pg0FirpepeKkFr0xfr=xfr=xb9adbaqaaeGaciGaaiaabeqaamaabaabaaGcbaGaaeyqaiaabkfacaqGnbGaaeyqamaabmaabaGaaeymaiaabYcacaqGGaGaaeOmaaGaayjkaiaawMcaaiaabQdacaqGGaGaamyEamaaBaaaleaacaWG0baabeaakiabg2da9iabgkHiTiaaicdacaGGUaGaaG4naiaadMhadaWgaaWcbaGaamiDaiabgkHiTiaaigdaaeqaaOGaey4kaSIaeqyTdu2aaSbaaSqaaiaadshaaeqaaOGaeyOeI0IaaGimaiaac6cacaaI3aGaeqyTdu2aaSbaaSqaaiaadshacqGHsislcaaIXaaabeaakiaac6caaaa@54E8@
(23)

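Each of these processes has a known theoretical ACF. For instance, an AR(1) with coefficient φ has ρ_s = φ^s, while an MA(1) written y_t = ε_t + θε_{t−1} has ρ_1 = θ/(1 + θ²) and ρ_s = 0 for s > 1. A small pure-Python sketch of these two cases (function names are illustrative; note that the −0.7 coefficient in equation (21) corresponds to θ = −0.7 in this parameterization):

```python
def ar1_theoretical_acf(phi, max_lag):
    # AR(1) y_t = phi*y_{t-1} + eps_t: rho_s = phi**s, geometric decay
    return [phi ** s for s in range(1, max_lag + 1)]

def ma1_theoretical_acf(theta, max_lag):
    # MA(1) y_t = eps_t + theta*eps_{t-1}: rho_1 = theta/(1 + theta**2),
    # then the ACF cuts off to exactly zero after lag 1
    return [theta / (1 + theta ** 2)] + [0.0] * (max_lag - 1)
```

For process (19), ar1_theoretical_acf(0.7, 3) decays geometrically toward zero, while for process (21), ma1_theoretical_acf(-0.7, 3) has a single negative spike at lag 1. This cut-off-versus-decay contrast is what the theoretical graphs are used to detect.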
Each of these functions has a theoretical autocorrelation function; graphs of these autocorrelation functions are shown in the left column of Figure 10.6

There is an additional function we can use to help identify the nature of a time-series. Consider the following sequence of regressions:

y*_t = φ_11 y*_{t−1} + e_t,
y*_t = φ_21 y*_{t−1} + φ_22 y*_{t−2} + e_t, etc.,   (24)

where y*_t = y_t − ȳ.

Our interpretation of the φ_ii parameters is that they are the correlation between y_t and y_{t−i} controlling for all of the y_{t−j} with j = 1, …, i − 1. Because these correlation coefficients control for the values of y observed between y_t and y_{t−i}, they are known as the partial autocorrelations. The theoretical partial autocorrelations are shown in the right column of Figure 10. Stata uses the command .corrgram varname to calculate the autocorrelations and partial autocorrelations for the time-series varname. Figure 11 shows the output when using this command on the real levels of investment. The autocorrelation function for this data set looks like the theoretical one for an AR(1) process. However, the partial autocorrelation function does not look like any of the theoretical partial autocorrelation functions shown in Figure 10. Thus, it would not be safe to assume that real investment follows an AR(1) process.
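These regressions do not have to be run explicitly: the Durbin–Levinson recursion recovers each φ_ii from the autocorrelations alone. A pure-Python sketch of that recursion (an illustration of the algebra, not Stata's corrgram implementation):

```python
def pacf_from_acf(rho, max_lag):
    """Durbin-Levinson recursion. rho[s-1] is the autocorrelation at lag s.
    Returns the partial autocorrelations [phi_11, phi_22, ..., phi_kk]."""
    pacf = [rho[0]]
    phi = [rho[0]]  # coefficients of the order-1 autoregression
    for k in range(2, max_lag + 1):
        # phi_kk = (rho_k - sum phi_{k-1,j} rho_{k-j}) / (1 - sum phi_{k-1,j} rho_j)
        num = rho[k - 1] - sum(phi[j] * rho[k - 2 - j] for j in range(k - 1))
        den = 1 - sum(phi[j] * rho[j] for j in range(k - 1))
        phi_kk = num / den
        # update the order-k autoregression coefficients
        phi = [phi[j] - phi_kk * phi[k - 2 - j] for j in range(k - 1)] + [phi_kk]
        pacf.append(phi_kk)
    return pacf
```

For an AR(1) with coefficient 0.7 the ACF is 0.7^s, and the recursion returns a PACF with a single spike of 0.7 at lag 1 and (numerically) zero afterwards — exactly the AR(1) signature described in the text.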

You can generate prettier graphs of the autocorrelation functions using the .ac varname command. For instance, the command .ac rinv generates the graph shown in Figure 12. The .pac varname command generates a graph of the partial autocorrelations, as shown in Figure 13.

There are several generalizations one can use to help identify the process underlying a data series. Table 1 [Enders (1995): p. 85] offers a brief summary of these properties of the autocorrelation and partial autocorrelation functions.

#### Estimation of ARMA models

The estimation of ARMA models is relatively easy in Stata. The basic command to estimate an ARMA model is: .arima depvar [varlist], ar(numlist) ma(numlist).7 The first thing to notice is that this command can apply either to a single variable or to an equation. If [varlist] is omitted, Stata will produce an estimate of the ARMA model for that variable; if the list is included, it will estimate the model with the disturbances allowed to have the ARMA structure specified in the command. Figure 14 reports the estimation of an ARMA model for real investment levels. Notice that we write ar(1/2) so that Stata knows to include both the first and second autoregressive terms. A command of ar(2) would include only the second autoregressive term. In Figure 15 we report the ARMA(2, 1) estimation of (1).
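Stata's arima command fits these models by maximum likelihood. To see the mechanics in the simplest case, the AR(1) coefficient can be approximated by conditional least squares, i.e., an OLS regression of y_t on y_{t−1} dropping the first observation (a rough pure-Python sketch, not Stata's estimator; names are illustrative):

```python
def ar1_conditional_ls(y):
    """Estimate (c, phi) in y_t = c + phi*y_{t-1} + e_t by OLS on the
    pairs (y_{t-1}, y_t), conditioning on the first observation."""
    x, z = y[:-1], y[1:]
    n = len(x)
    xbar, zbar = sum(x) / n, sum(z) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxz = sum((xi - xbar) * (zi - zbar) for xi, zi in zip(x, z))
    phi = sxz / sxx
    return zbar - phi * xbar, phi
```

On a noiseless series generated from y_t = 1 + 0.5 y_{t−1} the estimator recovers c ≈ 1 and φ ≈ 0.5; with real data the ML and conditional-least-squares estimates differ, which is one reason to rely on Stata's arima rather than a hand-rolled regression.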

Table 2. ARMA models of the real level of investment (t-statistics in parentheses).

|  | ARMA(1, 1) | ARMA(2, 1) | AR(1) | AR(2) | MA(1) |
|---|---|---|---|---|---|
| Intercept | 185.307 (10.06) | 185.6556 (10.83) | 184.8208 (9.27) | 185.2092 (10.25) | 189.373 (18.09) |
| AR (L1) | 0.70936 (3.12) | 1.76342 (5.27) | 0.80307 (5.51) | 0.95257 (4.47) | — |
| AR (L2) | — | −0.81715 (−3.21) | — | −0.18963 (−0.91) | — |
| MA (L1) | 0.26236 (0.90) | −0.99998 (−0.00) | — | — | 0.87262 (2.97) |
| Log likelihood | −86.1791 | −85.8702 | −86.47780 | −86.21224 | −88.48713 |
| Wald χ² | 26.96 | 422.60 | 30.36 | 31.65 | 8.81 |
| Probability > χ² | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Sample size | 19 | 19 | 19 | 19 | 19 |
| Sample period | 1964-1982 | 1964-1982 | 1964-1982 | 1964-1982 | 1964-1982 |

The interpretation of these results is not obvious. We check the sensitivity of these results by estimating some other models. The results of these estimations are reported in Table 2 and Table 3. Based purely on ML tests, it would appear that the AR(1) model in Table 2 is as good as any of the models describing the ARMA structure of real investments. On the other hand, the results reported in Table 3 suggest that the ARMA(2, 1) specification appears to be the best model to assume for the disturbance term in the estimates of Equation (1).
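The "ML tests" referred to here are likelihood-ratio tests: twice the difference in log likelihoods between a restricted and an unrestricted model is compared with a χ² critical value whose degrees of freedom equal the number of restrictions. A quick check using the log likelihoods from Tables 2 and 3 (a sketch; 5.991 is the standard χ²(2) critical value at the 5% level, and AR(1) is nested in ARMA(2, 1) by two restrictions):

```python
def lr_statistic(ll_restricted, ll_unrestricted):
    # LR = 2*(logL_U - logL_R), asymptotically chi-square(df = restrictions)
    return 2.0 * (ll_unrestricted - ll_restricted)

CHI2_2DF_5PCT = 5.991  # chi-square(2) critical value, 5% level

# Table 2: AR(1) vs ARMA(2,1) for real investment
lr_inv = lr_statistic(-86.47780, -85.8702)   # about 1.22, AR(1) not rejected

# Table 3: AR(1) vs ARMA(2,1) disturbances in Equation (1)
lr_eq1 = lr_statistic(-78.7868, -72.94569)   # about 11.68, AR(1) rejected

print(lr_inv > CHI2_2DF_5PCT, lr_eq1 > CHI2_2DF_5PCT)  # False True
```

This reproduces the text's reading of the tables: the AR(1) structure is adequate for real investment itself, but the ARMA(2, 1) disturbance structure is a significant improvement in Equation (1).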

Table 3. Estimates of Equation (1) with alternative ARMA disturbance structures (t-statistics in parentheses).

|  | AR(1) | ARMA(1, 1) | ARMA(2, 1) |
|---|---|---|---|
| Intercept | −14.49489 (−0.26) | −13.37455 (−0.23) | −16.89182 (−1.68) |
| Real GNP | 0.17006 (3.96) | 0.16912 (3.78) | 0.17253 (20.18) |
| Real interest rate | −0.82517 (−0.46) | −0.92007 (−0.33) | −0.33692 (−0.25) |
| AR (L1) | 0.27953 (0.60) | −0.02028 (−0.02) | 0.85619 (1.46) |
| AR (L2) | — | — | −0.70702 (−2.64) |
| MA (L1) | — | 0.41151 (0.42) | −1.00000 (−2.98) |
| Log likelihood | −78.7868 | −78.4279 | −72.94569 |
| Wald χ² | 26.30 | 31.86 | 980.18 |
| Probability > χ² | 0.0000 | 0.0000 | 0.0000 |
| Sample size | 19 | 19 | 19 |
| Sample period | 1964-1982 | 1964-1982 | 1964-1982 |

### Other time-series concepts

There are a large number of additional time-series methods and issues that are not discussed in this module. These topics include, among others, ARCH and GARCH estimators, unit roots, the Dickey-Fuller test, and vector autoregression (VAR) models. There is no way to do justice to these topics in notes as short as these. Moreover, it is necessary to discuss difference equations (the discrete analogue of differential equations) if one wants to understand many of these topics at anything more than an intuitive level. Those interested in these topics should enroll in the forecasting course (Economics 422) or, failing that, plan to read several textbooks on whatever econometric tool they need to understand.

### Exercise

#### Exercise 1

This exercise is designed to be sure you know how to use Stata in analyzing time-series data sets; there is no economic content in the exercise. The MS Excel file Rabun County Temperature Data reports the morning temperature (MornTemp) observed in Rabun County, Georgia for every day from March 15, 2005 to November 2, 2008. The data set includes a variable "edate" that is the daily date in Stata notation. The data set also includes dummy variables for the season, the month, and the year of each observation (with the Winter, December, and 2008 dummy variables omitted).

a. Create a graph of (1) the data set morntemp, (2) the autocorrelations of morntemp, and (3) the partial autocorrelations of morntemp (you will have to set the matrix size to some number greater than 43 using the command .set matsize #).

b. Estimate the following models:

1. ARMA(2,2) for morntemp.
2. ARMA(2,2) for morntemp as a function of the season dummy variables.
3. ARMA(2,2) for morntemp as a function of the monthly dummy variables.
4. ARMA(2,2) for morntemp as a function of the monthly dummy variables and the annual dummy variables.
5. ARMA(1,2) for morntemp as a function of the monthly dummy variables and the annual dummy variables.
6. ARMA(1,1) for morntemp as a function of the monthly dummy variables and the annual dummy variables.

c. Arrange the parameter estimates in a table and comment on them. Include the results of estimating model (6) using OLS; what is the Durbin-Watson statistic for this regression?

### References

Cochrane, D. and G. Orcutt (1949). Application of Least Squares Regression to Relationships Containing Autocorrelated Error Terms. Journal of the American Statistical Association 44: 32-61.

Enders, Walter (1995). Applied Econometric Time Series (New York: John Wiley & Sons, Inc.).

Greene, William H. (1990). Econometric Analysis (New York: Macmillan Publishing Company).

StataCorp (2003). Stata Statistical Software: Release 8.0: Stata Time-Series Reference Manual (College Station, TX: Stata Corporation).

## Footnotes

1. See StataCorp [2003:119-130] for more detail on this command.
2. If the OLS parameter estimates are unbiased but the standard error estimates are biased, then applying the Cochrane-Orcutt adjustment should change the estimates of the standard errors without substantially changing the estimates of the equation parameters.
3. That is, we assume ε_t ~ (0, σ²), where the distribution is not specified, and E(ε_i ε_j) = 0 for all i ≠ j.
4. These methods make use of the mathematics of difference equations. See advanced texts like Enders (1995: pp. 68-77) for examples of the derivation of the conditions necessary for an ARMA(p, q) time-series to be stationary.
5. AR(1) is the same as ARMA(1, 0).
6. This set of graphs is from Enders (1995: p. 79).
7. ARIMA means AutoRegressive Integrated Moving Average. See Enders (1995: 67) for a discussion of what integrated means. We can ignore it given our limited purposes.
