title: "Setup and use onlineforecast models"
author: "Peder Bacher"
date: "`r Sys.Date()`"
output:
rmarkdown::html_vignette:
toc: true
toc_depth: 3
vignette: >
%\VignetteIndexEntry{Setup and use onlineforecast models}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
# Have to load knitr to use hooks
library(knitr)
# This vignette's name
vignettename <- "setup-and-use-model"
# REMEMBER: IF CHANGING IN THE shared-init (next block), then copy to the others!
# Width will scale all
figwidth <- 12
# Scale the wide figures (100% out.width)
figheight <- 4
# Heights for stacked time series plots
figheight1 <- 5
figheight2 <- 6.5
figheight3 <- 8
figheight4 <- 9.5
figheight5 <- 11
# Set the size of squared figures (same height as full: figheight/figwidth)
owsval <- 0.35
ows <- paste0(owsval*100,"%")
ows2 <- paste0(2*owsval*100,"%")
#
fhs <- figwidth * owsval
# Set for square fig: fig.width=fhs, fig.height=fhs, out.width=ows
# If two squared then: fig.width=2*fhs, fig.height=fhs, out.width=ows2
# Check this: https://bookdown.org/yihui/rmarkdown-cookbook/chunk-styling.html
# Set the knitr options
knitr::opts_chunk$set(
collapse = TRUE,
comment = "## ",
prompt = FALSE,
cache = TRUE,
cache.path = paste0("../tmp/vignettes/tmp-",vignettename,"/"),
fig.align="center",
fig.path = paste0("../tmp/vignettes/tmp-",vignettename,"/"),
fig.height = figheight,
fig.width = figwidth,
out.width = "100%"
)
options(digits=3)
hook_output <- knit_hooks$get("output")
knit_hooks$set(output = function(x, options) {
lines <- options$output.lines
if (is.null(lines)) {
return(hook_output(x, options)) # pass to default hook
}
x <- unlist(strsplit(x, "\n"))
more <- "## ...output cropped"
if (length(lines)==1) { # first n lines
if (length(x) > lines) {
# truncate the output, but add ....
x <- c(head(x, lines), more)
}
} else {
x <- c(more, x[lines], more)
}
# paste these lines together
x <- paste(c(x, ""), collapse = "\n")
hook_output(x, options)
})
Intro
This vignette explains how to set up and use an onlineforecast model. It builds on the example of building heat load forecasting and assumes that the data is set up correctly, as explained in the setup-data vignette. The R code is available here. More information on onlineforecasting.org.
Start by loading the package:
# Load the package
library(onlineforecast)
# Set the data in D to simplify notation
D <- Dbuilding
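Before building a model it can be useful to check which variables the data holds (just base R, nothing package specific):
# See the variables available in the data.list
names(D)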
Score period
Set the scoreperiod as a logical vector with the same length as t. It controls which points are included in score calculations in functions for optimization etc. It must be set.
Use it to exclude a burn-in period of one week:
# Print the first time point
D$t[1]
# Set the score period
D$scoreperiod <- in_range("2010-12-22", D$t)
# Plot to see it
plot(D$t, D$scoreperiod, xlab="Time", ylab="Scoreperiod")
Other periods that should be excluded from score calculations can simply also be set to FALSE, e.g.:
# Exclude other points example
scoreperiod2 <- D$scoreperiod
scoreperiod2[in_range("2010-12-30",D$t,"2011-01-02")] <- FALSE
would exclude the days around New Year (it must of course be set in D$scoreperiod, not in scoreperiod2, to have an effect).
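For completeness, a sketch of the assignment with effect (not run here, so the burn-in-only scoreperiod set above is kept for the examples below):
# Not run: exclude the days around New Year from score calculations
# D$scoreperiod[in_range("2010-12-30", D$t, "2011-01-02")] <- FALSE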
Setting up a model
A simple onlineforecast model can be set up by:
# Generate new object (R6 class)
model <- forecastmodel$new()
# Set the model output
model$output = "heatload"
# Inputs (transformation step)
model$add_inputs(Ta = "Ta",
mu = "one()")
# Regression step parameters
model$add_regprm("rls_prm(lambda=0.9)")
# Optimization bounds for parameters
model$add_prmbounds(lambda = c(0.9, 0.99, 0.9999))
# Set the horizons for which the model will be fitted
model$kseq <- c(3,18)
Steps in setting up a model
Let's go through the steps of setting up the model.
First, a new forecastmodel object is generated and the model output is set (by default it is "y"):
# Generate new object
model <- forecastmodel$new()
# Set the model output
model$output = "heatload"
The output is simply the variable name from D that we want to forecast.
The model inputs are defined by:
# Inputs (transformation step)
model$add_inputs(Ta = "Ta",
mu = "one()")
So this is really where the structure of the model is specified. The inputs are each given a name (Ta and mu) and set as an R expression (given as a string). The expressions define the transformation step: they will each be evaluated in an environment holding a given data.list, so the variables from the data can be used in the expressions (e.g. Ta is in D). The mu input uses one(), which simply generates a constant input serving as the intercept. Below in [Input transformations] we detail this evaluation.
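To get a feeling for this evaluation, an input expression can be evaluated "manually" with the data as the evaluation environment (a rough sketch of the idea, not the package's exact internal mechanism):
# Evaluate the expression "Ta" with the data.list D as the environment
# (returns the Ta forecasts from D)
head(eval(parse(text = "Ta"), D))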
The next step in setting up the model is to set the parameters for the regression step by providing an expression which returns the regression parameter values. In the present case we will use Recursive Least Squares (RLS) for the regression, hence we need to set the forgetting factor lambda by:
# Regression step parameters
model$add_regprm("rls_prm(lambda=0.9)")
The expression is simply a function call which returns a list, in this case with the value of lambda (see onlineforecasting). The result of it being evaluated is kept in:
# The evaluation happens with
eval(parse(text="rls_prm(lambda=0.9)"))
# and the result is stored in
model$regprm
We will tune the parameters (for this model it's only the forgetting factor), so we set the parameter bounds (lower, init, upper) for it by:
# Optimization bounds for parameters
model$add_prmbounds(lambda = c(0.9, 0.99, 0.9999))
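The bounds are stored in the model object. Assuming the current implementation of the forecastmodel class, they can be inspected by:
# The parameter bounds kept in the model (lower, init, upper)
model$prmbounds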
Finally, we set the horizons for which to fit:
# Set the horizons for which the model will be fitted
model$kseq <- c(3,18)
The horizons to fit for are actually not directly related to the model, but rather to the fitting of the model. In principle, it would be "cleaner" if the model, data and fit were kept separate, however for recursive fitting this becomes infeasible.
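Note that kseq can be changed at any time, e.g. to fit for all horizons after tuning. A small example (assuming forecasts up to 36 steps are wanted; not run here, the tuning below uses the two horizons set above):
# Not run: fit for all horizons up to 36 steps
# model$kseq <- 1:36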
Tune the parameters
We have set up the model and can now tune lambda with rls_optim(), which is a wrapper for the optim() function:
# Call the optim() wrapper
model$prm <- rls_optim(model, D)$par
Note how it only calculated a score for the 3 and 18 step horizons, as we specified with model$kseq above. The parameters could be optimized separately for each horizon; for example, it is often the case that for the first horizons a very low forgetting factor is optimal (e.g. 0.9). Currently, however, the parameters can only be optimized together. By optimizing for a short (3 steps) and a long (18 steps) horizon, we obtain a balance, using fewer computations than optimizing over all horizons.
The optimization converges and the tuned parameter becomes:
# Optimized lambda
model$prm
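If separate tuning per horizon is wanted anyway, a workaround is to loop over the horizons and optimize for one at a time. A hypothetical sketch, not a package feature (note that forecastmodel is an R6 class, so assigning to model$kseq changes the model in place):
# Hypothetical sketch: tune lambda separately for each horizon
prmk <- sapply(c(3, 18), function(k){
    model$kseq <- k           # optimize for a single horizon
    rls_optim(model, D)$par   # return the tuned parameter
})
# Restore the horizons from above
model$kseq <- c(3, 18)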