## Forecast performance
We can evaluate forecast performance either by forecasting a held-out test data set or by using all the data with time-series cross-validation.
Let's start with the former.
```{r echo=FALSE}
traindat <- window(anchovyts,1964,1987)
testdat <- window(anchovyts,1988,1989)
```
### Test forecast performance
#### Test against a test data set {-}
We will fit an exponential smoothing model with trend to the training data and make a forecast for the years that we 'held out'.
```{r fig.height=4}
fit1 <- forecast::ets(traindat, model="AAN")
h=length(testdat)
fr <- forecast::forecast(fit1, h=h)
plot(fr)
points(testdat, pch=2, col="red")
legend("topleft", c("forecast","actual"), pch=c(20,2), col=c("blue","red"))
```
We can calculate a variety of forecast error metrics with
```{r}
forecast::accuracy(fr, testdat)
```
We would now repeat this for all the models in our candidate set and choose the model with the best forecast performance.
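For example, a second candidate might be an exponential smoothing model without a trend (`model="ANN"`); the choice of candidate here is just illustrative. A minimal sketch of the comparison:

```{r}
# Sketch: fit a second candidate model (no trend) to the same training data
# and compare its test-set accuracy to the AAN model above
fit2 <- forecast::ets(traindat, model="ANN")
fr2 <- forecast::forecast(fit2, h=h)
forecast::accuracy(fr2, testdat)
```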
#### Test using time-series cross-validation {-}
Another approach is to use all the data and test a series of forecasts made by fitting the model to training sets of increasing length.
In this approach, we don't hold out a separate test data set; all the data are used both for fitting and for forecast testing.
We will redefine `traindat` as all of our anchovy data.
```{r subset.anch, echo=FALSE}
traindat <- window(anchovyts,1964,1989)
```
#### tsCV() function {-}
We will use the `tsCV()` function from the forecast package. It requires a function that takes a data set and a forecast horizon and returns a forecast object, so we define one first.
```{r def.far2}
far2 <- function(x, h, model){
  # fit the specified ets model to x and return an h-step-ahead forecast
  fit <- forecast::ets(x, model=model)
  forecast::forecast(fit, h=h)
}
```
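To see what `far2()` returns, we can call it once ourselves on the training data (just an illustration; `tsCV()` will do this for us repeatedly below):

```{r}
# Illustration only: a single call to far2() returns a forecast object
far2(traindat, h=1, model="AAN")
```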
Now we can use `tsCV()` to apply our `far2()` function to a series of training data sets. We will specify that a *new* ets model be estimated for each training set; we are not using the weighting estimated for the whole data set, but rather re-estimating the weighting for each training set.
The `e` returned are our forecast errors for all of the forecasts made along the way.
```{r}
e <- forecast::tsCV(traindat, far2, h=1, model="AAN")
e
```
Let's look at the first few values of `e` so we can see exactly what `tsCV()` is doing.
```{r}
e[2]
```
`e[2]` uses training data from $t=1$ to $t=2$, so it fits an ets model to the first two data points alone. It then creates a forecast for $t=3$ and compares that forecast to the value actually observed at $t=3$. We can check this by hand:
```{r}
TT <- 2 # end of the temp training data
temp <- traindat[1:TT]
fit.temp <- forecast::ets(temp, model="AAN")
fr.temp <- forecast::forecast(fit.temp, h=1)
traindat[TT+1] - fr.temp$mean
```
```{r}
e[3]
```
`e[3]` uses training data from $t=1$ to $t=3$, so it fits an ets model to the first three data points. It then creates a forecast for $t=4$ and compares that forecast to the value actually observed at $t=4$. Again we can check this by hand:
```{r}
TT <- 3 # end of the temp training data
temp <- traindat[1:TT]
fit.temp <- forecast::ets(temp, model="AAN")
fr.temp <- forecast::forecast(fit.temp, h=1)
traindat[TT+1] - fr.temp$mean
```
#### Forecast accuracy metrics {-}
Once we have the errors from `tsCV()`, we can compute forecast accuracy metrics.
RMSE: root mean squared error
```{r rmse}
rmse <- sqrt(mean(e^2, na.rm=TRUE))
```
MAE: mean absolute error
```{r mae}
mae <- mean(abs(e), na.rm=TRUE)
```
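Written out, the two metrics computed above, in terms of the cross-validation errors $e_t$ (with missing values dropped), are

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t} e_t^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{t} \left| e_t \right|,$$

where $n$ is the number of non-missing errors.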
### Testing a specific ets model
By specifying `model="AAN"`, we estimated a new ets model (meaning new weighting) for each training set used. We might instead want to use only the weighting we estimated for the full data set.
We do this by passing a fitted model to the `model` argument.
`fit1` below is the ets model estimated from all the data, 1964 to 1989, and `e` again holds our forecast errors for all the forecasts made with the data. Note that the code will produce a warning (suppressed in the output here) that it is estimating the initial value and just using the weighting; that is what we want.
```{r, message=FALSE, warning=FALSE}
fit1 <- forecast::ets(traindat, model="AAN")
e <- forecast::tsCV(traindat, far2, h=1, model=fit1)
e
```
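As before, we can summarize these errors, for example with RMSE, and compare against the value we obtained when the weighting was re-estimated for each training set (the name `rmse.fixed` is just illustrative):

```{r}
# RMSE of the cross-validation errors with the weighting fixed at fit1's values,
# alongside the RMSE from re-estimating the weighting for each training set
rmse.fixed <- sqrt(mean(e^2, na.rm=TRUE))
c(rmse.refit = rmse, rmse.fixed = rmse.fixed)
```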