#### 2020-02-08

ALM stands for “Advanced Linear Model”. It is not as advanced as it sounds, but it has some advantages over the basic LM, while retaining its basic features. In some sense alm() resembles the glm() function from the stats package, but with a higher focus on forecasting rather than on hypothesis testing. You will not get p-values anywhere from the alm() function and will not see $$R^2$$ in the outputs. The most you can count on is having confidence intervals for the parameters or for the regression line. The other important difference from glm() is the availability of distributions that are not supported by glm() (for example, Folded Normal or Chi-Squared distributions).

The core of the function is the likelihood approach. The parameters of the model are estimated via the maximisation of the likelihood function of a selected distribution. The standard errors are calculated based on the Hessian of the log-likelihood. And at the centre of all of that are information criteria, which can be used for model comparison.

All the supported distributions have specific functions, which form the following four groups for the distribution parameter in alm():

1. Density functions of continuous distributions;
2. Density functions for continuous positive data;
3. Density functions of discrete distributions;
4. Cumulative functions for binary variables.

All of them rely on the respective d- and p- functions in R. For example, the Log-Normal distribution uses the dlnorm() function from the stats package.

The alm() function also supports the occurrence parameter, which allows modelling the non-zero values and the occurrence of non-zeroes as two different models. The combination of any distribution from (1) - (3) for the non-zero values and a distribution from (4) for the occurrence results in a mixture distribution model, e.g. a mixture of Log-Normal and Cumulative Logistic, or a Hurdle Poisson (with Cumulative Normal for the occurrence part).
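As a sketch of how such a mixture can be specified (assuming a hypothetical data frame mixtureData with a non-negative response y that contains zeroes; the variable names here are illustrative):

```r
# Non-zero values follow Log-Normal, while the occurrence of non-zeroes is
# modelled with Cumulative Logistic, giving a mixture distribution model
mixtureModel <- alm(y~x1+x2, data=mixtureData, distribution="dlnorm",
                    occurrence="plogis")
```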

Every model produced using alm() can be represented as: $$$\label{eq:basicALM} y_t = f(\mu_t, \epsilon_t) = f(x_t' B, \epsilon_t) ,$$$ where $$y_t$$ is the value of the response variable, $$x_t$$ is the vector of exogenous variables, $$B$$ is the vector of the parameters, $$\mu_t$$ is the conditional mean (produced based on the exogenous variables and the parameters of the model), $$\epsilon_t$$ is the error term on the observation $$t$$ and $$f(\cdot)$$ is the distribution function that does a transformation of the inputs into the output. In case of a mixture distribution the model becomes slightly more complicated: $$$\label{eq:basicALMMixture} \begin{matrix} y_t = o_t f(x_t' B, \epsilon_t) \\ o_t \sim \text{Bernoulli}(p_t) \\ p_t = g(z_t' A) \end{matrix},$$$ where $$o_t$$ is the binary variable, $$p_t$$ is the probability of occurrence, $$z_t$$ is the vector of exogenous variables and $$A$$ is the vector of parameters for the $$p_t$$.

The alm() function returns, along with the set of variables common for lm() (such as coefficients and fitted.values), the variable mu, which corresponds to the conditional mean used inside the distribution, and scale – the second parameter, which usually corresponds to the standard error or the dispersion parameter. The exact meaning of these two variables varies from distribution to distribution. Note, however, that the model variable returned by the lm() function was renamed into data in alm(), and that alm() does not return terms and the QR decomposition.

Given that the parameters of any model in alm() are estimated via likelihood, it can be assumed that they are asymptotically normally distributed. Thus the confidence intervals for any model rely on normality and are constructed based on the unbiased estimate of the variance, extracted using the sigma() function.

In almost all the cases the covariance matrix of the parameters is calculated as the inverse of the Hessian of the respective distribution function. The exceptions are the Normal, Log-Normal, Poisson, Cumulative Logistic and Cumulative Normal distributions, which use analytical solutions.

The alm() function also supports factors in the explanatory variables, creating a set of dummy variables from them. In the case of ordered variables (ordinal scale, is.ordered()), the ordering is removed and the set of dummies is produced. This is done in order to avoid the built-in behaviour of R, which creates linear, quadratic, cubic etc. contrasts for ordered variables, making the interpretation of the parameters difficult.

Although the basic principles of estimating the models and producing predictions from them are the same for all the distributions, each of the distributions has its own features. So it makes sense to discuss them individually. We discuss the distributions in the groups mentioned above.

## Density functions of continuous distributions

This group of functions includes:

1. Normal distribution;
2. Laplace distribution;
3. Asymmetric Laplace distribution;
4. Logistic distribution;
5. S distribution;
6. Student t distribution.

For all the functions in this category, the resid() method returns $$e_t = y_t - \mu_t$$.

### Normal distribution

The density of normal distribution is: $$$\label{eq:Normal} f(y_t) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp \left( -\frac{\left(y_t - \mu_t \right)^2}{2 \sigma^2} \right) ,$$$ where $$\sigma^2$$ is the variance of the error term.

alm() with the Normal distribution (distribution="dnorm") is equivalent to the lm() function from the stats package and returns roughly the same estimates of parameters, so if you are concerned with computation time, I would recommend reverting to lm().

Maximising the likelihood of the model is equivalent to the estimation of the basic linear regression using Least Squares method: $$$\label{eq:linearModel} y_t = \mu_t + \epsilon_t = x_t' B + \epsilon_t,$$$ where $$\epsilon_t \sim \mathcal{N}(0, \sigma^2)$$.

The variance $$\sigma^2$$ is estimated in alm() based on likelihood: $$$\label{eq:sigmaNormal} \hat{\sigma}^2 = \frac{1}{T} \sum_{t=1}^T \left(y_t - \mu_t \right)^2 ,$$$ where $$T$$ is the sample size. Its square root (the standard deviation) is used in the calculations of the dnorm() function, and the value is then returned via the scale variable. This value does not include the bias correction. However, the sigma() method applied to the resulting model returns the bias-corrected version of the standard deviation. And vcov(), confint(), summary() and predict() rely on the value extracted by sigma().
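The difference between the two estimates can be checked directly; a small sketch, assuming a model fitted with distribution="dnorm" and stored in ourModel:

```r
# Likelihood-based estimate (no bias correction), as stored in scale:
sqrt(sum(resid(ourModel)^2) / nobs(ourModel))
# Bias-corrected estimate, as returned by the sigma() method:
sigma(ourModel)
```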

$$\mu_t$$ is returned as is in mu variable, and the fitted values are set equivalent to mu.

In order to produce confidence intervals for the mean (predict(model, newdata, interval="c")) the conditional variance of the model is calculated using: $$$\label{eq:varianceNormalForCI} V({\mu_t}) = x_t V(B) x_t',$$$ where $$V(B)$$ is the covariance matrix of the parameters returned by the function vcov(). This variance is then used for the construction of the confidence intervals of a necessary level $$\alpha$$ using Student's distribution: $$$\label{eq:intervalsNormal} y_t \in \left(\mu_t \pm \tau_{df,\frac{1+\alpha}{2}} \sqrt{V(\mu_t)} \right),$$$ where $$\tau_{df,\frac{1+\alpha}{2}}$$ is the upper $${\frac{1+\alpha}{2}}$$-th quantile of Student's distribution with $$df$$ degrees of freedom (e.g. with $$\alpha=0.95$$ it will be the 0.975-th quantile, which, for example, for 100 degrees of freedom is $$\approx 1.984$$).
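For example, the quantile mentioned above can be obtained directly in R:

```r
# 0.975-th quantile of Student's distribution with 100 degrees of freedom
qt(0.975, df=100)
# ≈ 1.984
```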

Similarly, for the prediction intervals (predict(model, newdata, interval="p")) the conditional variance of $$y_t$$ is calculated: $$$\label{eq:varianceNormalForPI} V(y_t) = V(\mu_t) + s^2 ,$$$ where $$s^2$$ is the bias-corrected variance of the error term, calculated using: $$$\label{eq:varianceNormalUnbiased} s^2 = \frac{1}{T-k} \sum_{t=1}^T \left(y_t - \mu_t \right)^2 ,$$$ where $$k$$ is the number of estimated parameters (including the variance itself). This value is then used for the construction of the prediction intervals of a specified level, also using Student's distribution, in a similar manner as with the confidence intervals.

### Laplace distribution

Laplace distribution has some similarities with the Normal one: $$$\label{eq:Laplace} f(y_t) = \frac{1}{2 s} \exp \left( -\frac{\left| y_t - \mu_t \right|}{s} \right) ,$$$ where $$s$$ is the scale parameter, which, when estimated via likelihood, is equal to the mean absolute error: $$$\label{eq:bLaplace} s = \frac{1}{T} \sum_{t=1}^T \left| y_t - \mu_t \right| .$$$ So maximising the likelihood is equivalent to estimating the linear regression via the minimisation of $$s$$, and the assumption imposed on the error term in this case is $$\epsilon_t \sim \text{Laplace}(0, s)$$. The main difference of Laplace from the Normal distribution is its fatter tails.

alm() function with distribution="dlaplace" returns mu equal to $$\mu_t$$ and the fitted values equal to mu. $$s$$ is returned in the scale variable. The prediction intervals are derived from the quantiles of Laplace distribution after transforming the conditional variance into the conditional scale parameter $$s$$ using the connection between the two in Laplace distribution: $$$\label{eq:bLaplaceAndSigma} s = \sqrt{\frac{\sigma^2}{2}},$$$ where $$\sigma^2$$ is substituted either by the conditional variance of $$\mu_t$$ or $$y_t$$.
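A sketch of this transformation, assuming that sigmaValue and muValue contain the conditional standard deviation and mean for a specific observation (the names are illustrative):

```r
# Convert the conditional variance into the scale of Laplace distribution
s <- sqrt(sigmaValue^2 / 2)
# Bounds of the 95% interval from the quantiles of Laplace distribution,
# using qlaplace() from the greybox package
qlaplace(c(0.025, 0.975), mu=muValue, scale=s)
```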

The kurtosis of Laplace distribution is 6, making it suitable for modelling rarely occurring events.

### Asymmetric Laplace distribution

Asymmetric Laplace distribution can be considered as two Laplace distributions with different scale parameters for the left and the right sides. There are several ways to summarise the probability density function; the one used in alm() relies on the asymmetry parameter $$\alpha$$ (Yu and Zhang 2005): $$$\label{eq:ALaplace} f(y_t) = \frac{\alpha (1- \alpha)}{s} \exp \left( -\frac{y_t - \mu_t}{s} (\alpha - I(y_t \leq \mu_t)) \right) ,$$$ where $$s$$ is the scale parameter, $$\alpha$$ is the skewness parameter and $$I(y_t \leq \mu_t)$$ is the indicator function, which is equal to one when the condition is satisfied and to zero otherwise. The scale parameter $$s$$ estimated via likelihood is equal to the quantile loss: $$$\label{eq:bALaplace} s = \frac{1}{T} \sum_{t=1}^T \left(y_t - \mu_t \right)(\alpha - I(y_t \leq \mu_t)) .$$$ Thus maximising the likelihood is equivalent to estimating the linear regression via the minimisation of the $$\alpha$$ quantile loss, making this equivalent to quantile regression. So quantile regression models indirectly assume that the error term is $$\epsilon_t \sim \text{ALaplace}(0, s, \alpha)$$ (Geraci and Bottai 2007). The advantage of using alm() in this case is in having the full distribution, which allows doing all the fancy things you can do when you have likelihood.

In case of $$\alpha=0.5$$ the function reverts to the symmetric Laplace where $$s=\frac{1}{2}\text{MAE}$$.

The alm() function with distribution="dalaplace" accepts an additional parameter alpha in ellipsis, which defines the quantile $$\alpha$$. If it is not provided, then the function will estimate it by maximising the likelihood and return it as the first coefficient. alm() returns mu equal to $$\mu_t$$ and the fitted values equal to mu. $$s$$ is returned in the scale variable. The parameter $$\alpha$$ is returned in the variable other of the final model. The prediction intervals are produced using the qalaplace() function. In order to find the values of $$s$$ for the holdout, the following connection between the variance of the variable and the scale of the Asymmetric Laplace distribution is used: $$$\label{eq:bALaplaceAndSigma} s = \sqrt{\sigma^2 \frac{\alpha^2 (1-\alpha)^2}{(1-\alpha)^2 + \alpha^2}},$$$ where $$\sigma^2$$ is substituted either by the conditional variance of $$\mu_t$$ or $$y_t$$.

### Logistic distribution

The density function of Logistic distribution is: $$$\label{eq:Logistic} f(y_t) = \frac{\exp \left(- \frac{y_t - \mu_t}{s} \right)} {s \left( 1 + \exp \left(- \frac{y_t - \mu_t}{s} \right) \right)^{2}},$$$ where $$s$$ is the scale parameter, which is estimated in alm() based on the connection between the parameter and the variance in the Logistic distribution: $$$\label{eq:sLogisticAndSigma} s = \sigma \sqrt{\frac{3}{\pi^2}}.$$$ Once again, the maximisation of the likelihood implies the estimation of the linear model, where $$\epsilon_t \sim \text{Logistic}(0, s)$$.

Logistic is considered a fat-tailed distribution, but its tails are not as fat as in Laplace: the kurtosis of the standard Logistic is 4.2.

alm() function with distribution="dlogis" returns $$\mu_t$$ in mu and in fitted.values variables, and $$s$$ in the scale variable. Similar to Laplace distribution, the prediction intervals use the connection between the variance and scale, and rely on the qlogis function.

### S distribution

The S distribution has the following density function: $$$\label{eq:S} f(y_t) = \frac{1}{4s^2} \exp \left( -\frac{\sqrt{|y_t - \mu_t|}}{s} \right) ,$$$ where $$s$$ is the scale parameter. If estimated via maximum likelihood, the scale parameter is equal to: $$$\label{eq:bS} s = \frac{1}{2T} \sum_{t=1}^T \sqrt{\left| y_t - \mu_t \right|} ,$$$ which corresponds to the minimisation of the “Half Absolute Error” or “Half Absolute Moment” $$\frac{1}{T} \sum_{t=1}^T \sqrt{|y_t - \mu_t|}$$, which is equal to $$2s$$.

S distribution has a kurtosis of 25.2, which makes it an “extreme excess” distribution. It might be useful in cases of randomly occurring incidents and extreme values (Black Swans?).

alm() function with distribution="ds" returns $$\mu_t$$ in the same variables mu and fitted.values, and $$s$$ in the scale variable. Similarly to the previous functions, the prediction intervals are based on the qs() function from greybox package and use the connection between the scale and the variance: $$$\label{eq:bSAndSigma} s = \left( \frac{\sigma^2}{120} \right) ^{\frac{1}{4}},$$$ where once again $$\sigma^2$$ is substituted either by the conditional variance of $$\mu_t$$ or $$y_t$$.

### Student t distribution

The Student t distribution has a more complicated density function: $$$\label{eq:T} f(y_t) = \frac{\Gamma\left(\frac{d+1}{2}\right)}{\sqrt{d \pi} \Gamma\left(\frac{d}{2}\right)} \left( 1 + \frac{\left(y_t - \mu_t\right)^2}{d} \right)^{-\frac{d+1}{2}} ,$$$ where $$d$$ is the number of degrees of freedom, which can also be considered as the scale parameter of the distribution. It has the following connection with the in-sample variance of the error (but only for the case when $$d>2$$): $$$\label{eq:scaleOfT} d = \frac{2}{1-\sigma^{-2}}.$$$
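This connection can be derived from the variance of the Student t distribution, which for $$d>2$$ is equal to $$\frac{d}{d-2}$$: $$$\sigma^2 = \frac{d}{d-2} \implies d \left(\sigma^2 - 1 \right) = 2 \sigma^2 \implies d = \frac{2 \sigma^2}{\sigma^2 - 1} = \frac{2}{1 - \sigma^{-2}}.$$$ For example, $$\sigma^2 = 1.5$$ gives $$d = \frac{2 \times 1.5}{0.5} = 6$$.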

The kurtosis of the Student t distribution depends on the value of $$d$$: for $$d>4$$ the excess kurtosis is equal to $$\frac{6}{d-4}$$.

The alm() function with distribution="dt" estimates the parameters of the model along with $$d$$ (if it is not provided by the user via the df parameter) and returns $$\mu_t$$ in the variables mu and fitted.values, and $$d$$ in the scale variable. Both prediction and confidence intervals use the qt() function from the stats package and rely on the conventional number of degrees of freedom, $$T-k$$. The intervals are constructed similarly to how it is done for the Normal distribution.

### An example of application

In order to see how this works, we will create the following data:

# rs() and alm() used below come from the greybox package
library(greybox)

xreg <- cbind(rnorm(100,10,3),rnorm(100,50,5))
xreg <- cbind(500+0.5*xreg[,1]-0.75*xreg[,2]+rs(100,0,3),xreg,rnorm(100,300,10))
colnames(xreg) <- c("y","x1","x2","Noise")

inSample <- xreg[1:80,]
outSample <- xreg[-c(1:80),]

ALM can be run with either a data frame or a matrix. Here’s an example with the Normal distribution:

ourModel <- alm(y~x1+x2, data=inSample, distribution="dnorm")
summary(ourModel)
#> Response variable: y
#> Distribution used in the estimation: Normal
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%
#> (Intercept) 431.0804   113.7390   204.5496    657.6113
#> x1           -0.2466     3.4176    -7.0534      6.5602
#> x2            1.0788     2.1069    -3.1175      5.2751
#>
#> Error standard deviation: 100.2078
#> Sample size: 80
#> Number of estimated parameters: 4
#> Number of degrees of freedom: 76
#> Information criteria:
#>      AIC     AICc      BIC     BICc
#> 972.1896 972.7229 981.7177 982.8862
plot(predict(ourModel,outSample,interval="p"))

And here’s an example with Asymmetric Laplace and predefined $$\alpha=0.95$$:

ourModel <- alm(y~x1+x2, data=inSample, distribution="dalaplace",alpha=0.95)
summary(ourModel)
#> Response variable: y
#> Distribution used in the estimation: Asymmetric Laplace with alpha=0.95
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%
#> (Intercept) 538.8506     7.1450   524.6200    553.0811
#> x1           -0.8018     0.1370    -1.0746     -0.5291
#> x2            1.0863     0.0575     0.9717      1.2009
#>
#> Error standard deviation: 143.4691
#> Sample size: 80
#> Number of estimated parameters: 4
#> Number of degrees of freedom: 76
#> Information criteria:
#>      AIC     AICc      BIC     BICc
#> 1098.564 1099.098 1108.092 1109.261
plot(predict(ourModel,outSample))

## Density functions for continuous positive data

This group includes:

1. Log-Normal distribution;
2. Box-Cox Normal distribution;
3. Inverse Gaussian distribution;
4. Folded Normal distribution;
5. Noncentral Chi-Squared distribution.

Although (2) and (3) in theory allow having zeroes in the data, the density function is equal to zero at that point, so the likelihood will be zero in these cases as well. So alm() will return some solutions for these distributions, but don’t expect anything good. As for (1), it supports strictly positive data only.

### Log Normal distribution

Log Normal distribution appears when a normally distributed variable is exponentiated. This means that if $$x \sim \mathcal{N}(\mu, \sigma^2)$$, then $$\exp x \sim \text{log}\mathcal{N}(\mu, \sigma^2)$$. The density function of Log Normal distribution is: $$$\label{eq:LogNormal} f(y_t) = \frac{1}{y_t \sqrt{2 \pi \sigma^2}} \exp \left( -\frac{\left(\log y_t - \mu_t \right)^2}{2 \sigma^2} \right) ,$$$ where the variance estimated using likelihood is: $$$\label{eq:sigmaLogNormal} \hat{\sigma}^2 = \frac{1}{T} \sum_{t=1}^T \left(\log y_t - \mu_t \right)^2 .$$$ Estimating the model with Log Normal distribution is equivalent to estimating the parameters of log-linear model: $$$\label{eq:logLinearModel} \log y_t = \mu_t + \epsilon_t,$$$ where $$\epsilon_t \sim \mathcal{N}(0, \sigma^2)$$ or: $$$\label{eq:logLinearModelExp} y_t = \exp(\mu_t + \epsilon_t).$$$

alm() with distribution="dlnorm" does not transform the provided data and estimates the density directly, using the dlnorm() function with the estimated mean $$\mu_t$$ and the variance $$\hat{\sigma}^2$$. If you need a log-log model, then you would need to take logarithms of the external variables. The $$\mu_t$$ is returned in the variable mu, the $$\sigma^2$$ is in the variable scale, while fitted.values contains the exponent of $$\mu_t$$, which, given the connection between the Normal and Log-Normal distributions, corresponds to the median of the distribution rather than the mean. Finally, the resid() method returns $$e_t = \log y_t - \mu_t$$.
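For instance, a log-log model can be sketched as follows (assuming a hypothetical data frame positiveData, in which both the response and the explanatory variables are strictly positive; the names are illustrative):

```r
# Logarithms of the explanatory variables are taken manually, while the
# logarithm of the response is handled by the Log-Normal distribution itself
logLogModel <- alm(y~log(x1)+log(x2), data=positiveData, distribution="dlnorm")
# fitted(logLogModel) returns exp(mu), i.e. the conditional median of y
```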

### Box-Cox Normal distribution

Box-Cox Normal distribution used in the greybox package is defined as a distribution that becomes normal after the Box-Cox transformation. This means that if $$x=\frac{y^\lambda-1}{\lambda}$$ and $$x \sim \mathcal{N}(\mu, \sigma^2)$$, then $$y \sim \text{BC}\mathcal{N}(\mu, \sigma^2)$$. The density function of the Box-Cox Normal distribution in this case is: $$$\label{eq:BCNormal} f(y_t) = \frac{y_t^{\lambda-1}} {\sqrt{2 \pi \sigma^2}} \exp \left( -\frac{\left(\frac{y_t^{\lambda}-1}{\lambda} - \mu_t \right)^2}{2 \sigma^2} \right) ,$$$ where the variance estimated using likelihood is: $$$\label{eq:sigmaBCNormal} \hat{\sigma}^2 = \frac{1}{T} \sum_{t=1}^T \left(\frac{y_t^{\lambda}-1}{\lambda} - \mu_t \right)^2 .$$$ Estimating the model with Box-Cox Normal distribution is equivalent to estimating the parameters of a linear model after the Box-Cox transform: $$$\label{eq:BCLinearModel} \frac{y_t^{\lambda}-1}{\lambda} = \mu_t + \epsilon_t,$$$ where $$\epsilon_t \sim \mathcal{N}(0, \sigma^2)$$ or: $$$\label{eq:BCLinearModelExp} y_t = \left((\mu_t + \epsilon_t) \lambda +1 \right)^{\frac{1}{\lambda}}.$$$

alm() with distribution="dbcnorm" does not transform the provided data and estimates the density directly, using the dbcnorm() function from greybox with the estimated mean $$\mu_t$$ and the variance $$\hat{\sigma}^2$$. The $$\mu_t$$ is returned in the variable mu, the $$\sigma^2$$ is in the variable scale, while fitted.values contains the inverse Box-Cox transform of $$\mu_t$$, which, given the connection between the Normal and Box-Cox Normal distributions, corresponds to the median of the distribution rather than the mean. Finally, the resid() method returns $$e_t = \frac{y_t^{\lambda}-1}{\lambda} - \mu_t$$.

### Inverse Gaussian distribution

Inverse Gaussian distribution is an interesting distribution, which is defined for positive values only and has some properties similar to the properties of the Normal distribution. It has two parameters: location $$\mu_t$$ and scale $$\phi$$ (aka “dispersion”). There are different ways to parameterise this distribution; we use the dispersion-based one. The important thing that distinguishes the implementation in alm() from the one in glm() or in any other function is that we assume that the model has the following form: $$$\label{eq:InverseGaussianModel} y_t = \mu_t \times \epsilon_t$$$ and that $$\epsilon_t \sim \mathcal{IG}(1, \phi)$$. This means that $$y_t \sim \mathcal{IG}\left(\mu_t, \frac{\phi}{\mu_t} \right)$$, implying that the dispersion of the model changes together with the expectation. The density function for the error term in this case is: $$$\label{eq:InverseGaussian} f(\epsilon_t) = \frac{1}{\sqrt{2 \pi \phi \epsilon_t^3}} \exp \left( -\frac{\left(\epsilon_t - 1 \right)^2}{2 \phi \epsilon_t} \right) ,$$$ where the dispersion parameter is estimated via maximising the likelihood and is calculated using: $$$\label{eq:InverseGaussianDispersion} \hat{\phi} = \frac{1}{T} \sum_{t=1}^T \frac{\left(\epsilon_t - 1 \right)^2}{\epsilon_t} .$$$ One of the main issues in this formulation is that we are dealing with a mixed model, where $$\mu_t$$ is an additive term and $$\epsilon_t$$ is the multiplicative one. In many cases this should be okay, but in theory the model might fail in the holdout sample, when unobserved values of the explanatory variables are used. In addition, we assume that $$\mu_t$$ is just a scale for the distribution, otherwise $$y_t$$ would not follow the Inverse Gaussian distribution.

alm() with distribution="dinvgauss" estimates the density for $$y_t$$ using dinvgauss() function from statmod package. The $$\mu_t$$ is returned in the variables mu and fitted.values, the dispersion $$\phi$$ is in the variable scale. resid() method returns $$e_t = \frac{y_t}{\mu_t}$$. Finally, the prediction and confidence intervals for the regression model are generated using qinvgauss() function from the statmod package.
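Continuing with the inSample and outSample data generated earlier, a sketch of how this distribution can be applied:

```r
# Inverse Gaussian regression; the response must be strictly positive
ourModelIG <- alm(y~x1+x2, data=inSample, distribution="dinvgauss")
# Prediction intervals rely on qinvgauss() from the statmod package
plot(predict(ourModelIG, outSample, interval="p"))
```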

### Folded Normal distribution

Folded Normal distribution is obtained when the absolute value of a normally distributed variable is taken: if $$x \sim \mathcal{N}(\mu, \sigma^2)$$, then $$|x| \sim \text{folded }\mathcal{N}(\mu, \sigma^2)$$. The density function is: $$$\label{eq:foldedNormal} f(y_t) = \frac{1}{\sqrt{2 \pi \sigma^2}} \left( \exp \left( -\frac{\left(y_t - \mu_t \right)^2}{2 \sigma^2} \right) + \exp \left( -\frac{\left(y_t + \mu_t \right)^2}{2 \sigma^2} \right) \right).$$$ The conditional mean and variance of the Folded Normal distribution are estimated in alm() (with distribution="dfnorm") similarly to how this is done for the Normal distribution. They are returned in the variables mu and scale respectively. In order to produce the fitted value (which is returned in fitted.values), the following correction is done: $$$\label{eq:foldedNormalFitted} \hat{y}_t = \sqrt{\frac{2}{\pi}} \sigma \exp \left( -\frac{\mu_t^2}{2 \sigma^2} \right) + \mu_t \left(1 - 2 \Phi \left(-\frac{\mu_t}{\sigma} \right) \right),$$$ where $$\Phi(\cdot)$$ is the CDF of the Normal distribution.

The model that is assumed in the case of Folded Normal distribution can be summarised as: $$$\label{eq:foldedNormalModel} y_t = \left| \mu_t + \epsilon_t \right|.$$$

The conditional variance of the forecasts is calculated based on the elements of vcov() (as in all the other functions), the predicted values are corrected in the same way as the fitted values, and the prediction intervals are generated from the qfnorm() function of the greybox package. As for the residuals, the resid() method returns $$e_t = y_t - \mu_t$$.

### Noncentral Chi Squared distribution

Noncentral Chi Squared distribution arises when a normally distributed variable with a unity variance is squared and summed up: if $$x_i \sim \mathcal{N}(\mu_i, 1)$$, then $$\sum_{i=1}^k x_i^2 \sim \chi^2(k, \lambda)$$, where $$k$$ is the number of degrees of freedom and $$\lambda = \sum_{i=1}^k \mu_i^2$$. In the case of non-unity variance, with $$z_i \sim \mathcal{N}(\mu_i, \sigma^2)$$, the variable can also be represented as $$z_i = \sigma x_i$$, and then it can be assumed that $$\sum_{i=1}^k z_i^2 \sim \chi^2(k \sigma, \lambda)$$. In the perfect world, $$\lambda_t$$ would correspond to the location of the original distribution of $$z_i$$, while $$k$$ would need to be time varying and would need to include both the number of elements $$k$$ and the individual variances $$\sigma^2_i$$ for each of the elements, depending on the values of the external variables. However, given that the squares of the normal data are used, it is not possible to disaggregate the values into the original two parts. Thus we assume that the variance is constant for all the cases, and estimate it via likelihood. As a result the non-centrality parameter covers the two parts that would be split in the ideal world.

The density function of Noncentral Chi Squared distribution is quite complicated. alm() uses the dchisq() function from the stats package, assuming a constant number of degrees of freedom $$k$$ and a time varying noncentrality parameter $$\lambda_t$$: $$$\label{eq:NCChiSquared} f(y_t) = \frac{1}{2} \exp \left( -\frac{y_t + \lambda_t}{2} \right) \left(\frac{y_t}{\lambda_t} \right)^{\frac{k}{4}-0.5} I_{\frac{k}{2}-1}(\sqrt{\lambda_t y_t}),$$$ where $$I_k(x)$$ is the Bessel function of the first kind. The $$\lambda_t$$ parameter is estimated from a regression with exogenous variables: $$$\label{eq:lambdaValue} \lambda_t = ( x_t' B )^2 ,$$$ where the square is taken in order to make $$\lambda_t$$ strictly positive, while $$k$$ is estimated directly by maximising the likelihood. In order to avoid negative values of $$k$$, its absolute value is used.

The model that is assumed in the case of Noncentral Chi Squared distribution is: $$$\label{eq:chiSquaredModel} y_t = \left( \mu_t + \epsilon_t \right)^2.$$$ Given that square function is not monotonic, there are always two sets of parameters that give exactly the same $$\lambda_t$$ with positive and negative $$\mu_t$$. The alm() function returns the positive one (due to the restrictions imposed on the solver).

$$\lambda_t$$ is returned in the variable mu, while $$k$$ is returned in scale. Finally, fitted.values returns $$\lambda_t + k$$. A similar correction is done in the predict() function. As for the prediction intervals, they are generated using the qchisq() function from the stats package. Last but not least, the resid() method returns $$e_t = \sqrt{y_t} - \sqrt{\mu_t}$$.

### An example of application

Square the response variable for the next example:

xreg[,1] <- xreg[,1]^2
inSample <- xreg[1:80,]
outSample <- xreg[-c(1:80),]

An example with the exotic Chi-Squared distribution and a predefined number of degrees of freedom:

ourModel <- alm(y~x1+x2, data=inSample, distribution="dchisq",df=1)
summary(ourModel)
#> Warning: Choleski decomposition of hessian failed, so we had to revert to the simple inversion.
#> The estimate of the covariance matrix of parameters might be inaccurate.
#> Response variable: y
#> Distribution used in the estimation: Chi-Squared with df=1
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%
#> (Intercept) 431.0850     0.1854   430.7158    431.4543
#> x1            0.2988     0.0055     0.2879      0.3097
#> x2            0.8186     0.0022     0.8143      0.8229
#>
#> Error standard deviation: 100.5276
#> Sample size: 80
#> Number of estimated parameters: 3
#> Number of degrees of freedom: 77
#> Information criteria:
#>      AIC     AICc      BIC     BICc
#> 706102.1 706102.4 706109.2 706109.9
plot(predict(ourModel,outSample))
#> Warning: Choleski decomposition of hessian failed, so we had to revert to the simple inversion.
#> The estimate of the covariance matrix of parameters might be inaccurate.

## Continuous distributions on a specific interval

There is currently only one distribution in this group:

1. Beta distribution.

Beta distribution is a distribution for a continuous variable that is defined on the interval of $$(0, 1)$$. Note that the bounds are not included here, because the probability density function is not well defined on them. If the provided data contains either zeroes or ones, the function will modify the values using: $$$\label{eq:BetaWarning} y^\prime_t = y_t (1 - 2 \cdot 10^{-10}),$$$ and it will warn the user about this modification. This correction makes sure that there are no boundary values in the data, and it is quite artificial and needed for estimation purposes only.

The density function of Beta distribution has the form: $$$\label{eq:Beta} f(y_t) = \frac{y_t^{\alpha_t-1}(1-y_t)^{\beta_t-1}}{B(\alpha_t, \beta_t)} ,$$$ where $$\alpha_t$$ is the first shape parameter and $$\beta_t$$ is the second one. Note the time indices of both shape parameters. This is what makes the alm() implementation of Beta distribution different from any other. We assume that both of them have underlying deterministic models, so that: $$$\label{eq:BetaAt} \alpha_t = \exp(x_t' A) ,$$$ and $$$\label{eq:BetaBt} \beta_t = \exp(x_t' B),$$$ where $$A$$ and $$B$$ are the vectors of parameters for the respective shape variables. This allows the function to model any shapes depending on the values of the exogenous variables. The conditional expectation of the model is calculated using: $$$\label{eq:BetaExpectation} \hat{y}_t = \frac{\alpha_t}{\alpha_t + \beta_t} ,$$$ while the conditional variance is: $$$\label{eq:BetaVariance} \text{V}({y}_t) = \frac{\alpha_t \beta_t}{(\alpha_t + \beta_t)^2 (\alpha_t + \beta_t + 1)} .$$$

The alm() function with distribution="dbeta" returns $$\hat{y}_t$$ in the variables mu and fitted.values, and $$\text{V}({y}_t)$$ in the scale variable. The shape parameters are returned in the respective variables other$shape1 and other$shape2. You will notice that the output of the model contains twice as many parameters as the number of variables in the model. This is because two models are estimated, one for $$\alpha_t$$ and one for $$\beta_t$$, instead of one.
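As an illustration, a sketch with an artificial variable lying in $$(0, 1)$$ (the names are illustrative):

```r
# Artificial proportions data
betaData <- data.frame(y=rbeta(100, shape1=3, shape2=2),
                       x1=rnorm(100, 10, 3))
betaModel <- alm(y~x1, data=betaData, distribution="dbeta")
# The coefficients come in pairs: one set for alpha_t, another for beta_t
coef(betaModel)
```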

Respectively, when the predict() function is used for an alm model with Beta distribution, the two models are used in order to produce the predicted values for $$\alpha_t$$ and $$\beta_t$$. After that, the conditional mean mu and the conditional variance are produced using the formulae above. The prediction intervals are generated using the qbeta() function with the provided shape parameters for the holdout. As for the confidence intervals, they are produced assuming normality for the parameters of the model and using the estimate of the variance of the mean based on the variances (which is weird and probably wrong).

## Density functions of discrete distributions

This group includes:

1. Poisson distribution;
2. Negative Binomial distribution.

These distributions should be used in cases of count data.

### Poisson distribution

Poisson distribution used in ALM has the following standard probability mass function: $$$\label{eq:Poisson} P(X=y_t) = \frac{\lambda_t^{y_t} \exp(-\lambda_t)}{y_t!},$$$ where $$\lambda_t = \mu_t = \sigma^2_t = \exp(x_t' B)$$. As can be seen, here we assume that the variance of the model varies in time and depends on the values of the exogenous variables, which is a specific case of heteroscedasticity. The exponent of $$x_t' B$$ is needed in order to avoid negative values in $$\lambda_t$$.

alm() with distribution="dpois" returns mu, fitted.values and scale equal to $$\lambda_t$$. The quantiles of distribution in predict() method are generated using qpois() function from stats package. Finally, the returned residuals correspond to $$y_t - \mu_t$$, which is not really helpful or meaningful…
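A sketch of the usage of this distribution on artificial count data (the names are illustrative):

```r
# Poisson regression; the response must contain non-negative integers
poissonData <- data.frame(y=rpois(100, lambda=5), x1=rnorm(100, 10, 3))
poissonModel <- alm(y~x1, data=poissonData, distribution="dpois")
# Prediction intervals are generated using qpois() from stats
predict(poissonModel, poissonData[1:5,], interval="p")
```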

### Negative Binomial distribution

Negative Binomial distribution implemented in alm() is parameterised in terms of the mean and the variance: $$$\label{eq:NegBin} P(X=y_t) = \binom{y_t+\frac{\mu_t^2}{\sigma^2-\mu_t}-1}{y_t} \left( \frac{\sigma^2 - \mu_t}{\sigma^2} \right)^{y_t} \left( \frac{\mu_t}{\sigma^2} \right)^{\frac{\mu_t^2}{\sigma^2 - \mu_t}},$$$ where $$\mu_t = \exp(x_t' B)$$ and $$\sigma^2$$ is estimated separately in the optimisation process. These values are then used in the dnbinom() function in order to calculate the log-likelihood based on the distribution function.

alm() with distribution="dnbinom" returns $$\mu_t$$ in mu and fitted.values and $$\sigma^2$$ in scale. The prediction intervals are produced using the qnbinom() function. Similarly to the Poisson distribution, the resid() method returns $$y_t - \mu_t$$.
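The link between the mean/variance parameterisation and the classical (size, prob) one used by dnbinom() can be sketched as follows. This is an illustrative Python translation under the assumption $$\sigma^2 > \mu_t$$ (overdispersion); the function name negbin_logpmf is hypothetical:

```python
import math

def negbin_logpmf(y, mu, sigma2):
    """Negative Binomial log-pmf in the mean/variance parameterisation.

    Converts to the classical (size, prob) form, as in R's dnbinom();
    requires sigma2 > mu (overdispersion).
    """
    size = mu ** 2 / (sigma2 - mu)  # mu^2 / (sigma^2 - mu)
    prob = mu / sigma2              # mu / sigma^2
    # log C(y + size - 1, y) + size*log(prob) + y*log(1 - prob)
    return (math.lgamma(y + size) - math.lgamma(size) - math.lgamma(y + 1)
            + size * math.log(prob) + y * math.log(1 - prob))
```

For instance, mu = 2 and sigma2 = 4 map to size = 2 and prob = 0.5, and the implied distribution has mean 2, as it should.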

### An example of application

Round up the response variable for the next example:

xreg[,1] <- round(sqrt(xreg[,1]))
inSample <- xreg[1:80,]
outSample <- xreg[-c(1:80),]

Negative Binomial distribution:

ourModel <- alm(y~x1+x2, data=inSample, distribution="dnbinom")
summary(ourModel)
#> Response variable: y
#> Distribution used in the estimation: Negative Binomial with size=32.34
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%
#> (Intercept)   6.1206     0.2047     5.7128      6.5283
#> x1           -0.0010     0.0062    -0.0133      0.0113
#> x2            0.0014     0.0038    -0.0062      0.0089
#>
#> Error standard deviation: 100.2622
#> Sample size: 80
#> Number of estimated parameters: 4
#> Number of degrees of freedom: 76
#> Information criteria:
#>      AIC     AICc      BIC     BICc
#> 948.8615 949.3948 958.3896 959.5581

And an example with predefined size:

ourModel <- alm(y~x1+x2, data=inSample, distribution="dnbinom", size=30)
summary(ourModel)
#> Response variable: y
#> Distribution used in the estimation: Negative Binomial with size=30
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%
#> (Intercept)   6.0748     0.2121     5.6526      6.4971
#> x1           -0.0005     0.0064    -0.0133      0.0123
#> x2            0.0022     0.0039    -0.0056      0.0100
#>
#> Error standard deviation: 100.2332
#> Sample size: 80
#> Number of estimated parameters: 3
#> Number of degrees of freedom: 77
#> Information criteria:
#>      AIC     AICc      BIC     BICc
#> 947.0081 947.3239 954.1542 954.8461

## Cumulative functions for binary variables

The final class of models includes two cases:

- Cumulative Logistic distribution
- Cumulative Normal distribution

In both of them it is assumed that the response variable is binary and can be either zero or one. The main idea for this class of models is to use a transformation of the original data and link a continuous latent variable with the binary one. As a reminder, all the models eventually assume that: $$$\label{eq:basicALMCumulative} \begin{matrix} o_t \sim \text{Bernoulli}(p_t) \\ p_t = g(x_t' A) \end{matrix},$$$ where $$o_t$$ is the binary response variable and $$g(\cdot)$$ is the cumulative distribution function. Given that we work with the probability of occurrence, the predict() method produces forecasts for the probability of occurrence rather than the binary variable itself. Finally, although many other cumulative distribution functions can be used for this transformation (e.g. plaplace() or plnorm()), the most popular ones are logistic and normal CDFs.

Given that the binary variable has Bernoulli distribution, its log-likelihood is: $$$\label{eq:BernoulliLikelihood} \ell(p_t | o_t) = \sum_{o_t=1} \log p_t + \sum_{o_t=0} \log(1 - p_t),$$$ so the estimation of parameters for all the CDFs can be done by maximising this likelihood.
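The log-likelihood above is simple enough to sketch directly. This is an illustrative Python version (the name bernoulli_loglik is hypothetical, not a greybox function):

```python
import math

def bernoulli_loglik(p, o):
    """Bernoulli log-likelihood: sum of log(p_t) where o_t = 1
    and log(1 - p_t) where o_t = 0."""
    return sum(math.log(p_t) if o_t == 1 else math.log(1.0 - p_t)
               for p_t, o_t in zip(p, o))
```

Plugging in any of the CDFs below for the probabilities p turns this into the objective function that the optimiser maximises.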

In all the functions it is assumed that the probability $$p_t$$ corresponds to some unobservable ‘level’ $$q_t = x_t' A$$, and that there is no randomness in this level. So the aim of all the functions is to estimate this level correctly and then produce an estimate of the probability based on it.

The error of the model is calculated using the observed occurrence variable and the estimated probability $$\hat{p}_t$$. In a way, in this calculation we assume that $$o_t=1$$ happens mainly when the respective estimated probability $$\hat{p}_t$$ is close to one. So the error can be calculated as: $$$\label{eq:BinaryError} u_t' = o_t - \hat{p}_t .$$$ However, this error is not useful on its own and should be transformed into the original scale of $$q_t$$. Given that $$o_t \in \{0, 1\}$$ and $$\hat{p}_t \in (0, 1)$$, the error lies in $$(-1, 1)$$. We therefore standardise it so that it lies in the region of $$(0, 1)$$: $$$\label{eq:BinaryErrorBounded} u_t = \frac{u_t' + 1}{2} = \frac{o_t - \hat{p}_t + 1}{2}.$$$

This transformation means that, when $$o_t=\hat{p}_t$$, the error is $$u_t=0.5$$; when $$o_t=1$$ and $$\hat{p}_t=0$$, $$u_t=1$$; and in the opposite case of $$o_t=0$$ and $$\hat{p}_t=1$$, $$u_t=0$$. After that this error is transformed using either the Logistic or the Normal quantile function into the scale of $$q_t$$, making sure that $$u_t=0.5$$ corresponds to zero, $$u_t>0.5$$ corresponds to positive and $$u_t<0.5$$ to negative errors. The distribution of the error term is unknown, but it is in general bimodal.

### Cumulative Logistic distribution

We have previously discussed the density function of logistic distribution. The standardised cumulative distribution function used in alm() is: $$$\label{eq:LogisticCDFALM} \hat{p}_t = \frac{1}{1+\exp(-\hat{q}_t)},$$$ where $$\hat{q}_t = x_t' A$$ is the conditional mean of the level, underlying the probability. This value is then used in the likelihood in order to estimate the parameters of the model. The error term of the model is calculated using the formula: $$$\label{eq:LogisticError} e_t = \log \left( \frac{u_t}{1 - u_t} \right) = \log \left( \frac{1 + o_t (1 + \exp(\hat{q}_t))}{1 + \exp(\hat{q}_t) (2 - o_t) - o_t} \right).$$$ This way the error varies from $$-\infty$$ to $$\infty$$ and is equal to zero, when $$u_t=0.5$$.
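The two steps (logistic CDF, then the logit of the standardised error) can be sketched in a few lines. This is an illustrative Python version, not the greybox implementation; the name logistic_error is hypothetical:

```python
import math

def logistic_error(o_t, q_hat):
    """Error term of the cumulative Logistic model: e_t = log(u_t / (1 - u_t))."""
    p_hat = 1.0 / (1.0 + math.exp(-q_hat))  # logistic CDF
    u = (o_t - p_hat + 1.0) / 2.0           # standardised error in (0, 1)
    return math.log(u / (1.0 - u))
```

As a sanity check, for $$o_t=1$$ and $$\hat{q}_t=0$$ we get $$\hat{p}_t=0.5$$, $$u_t=0.75$$ and $$e_t=\log 3$$, which agrees with the expanded formula above.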

The alm() function with distribution="plogis" returns $$q_t$$ in mu, the standard deviation calculated using the respective errors in scale, and the probability $$\hat{p}_t$$ in fitted.values. The resid() method returns the errors discussed above. The predict() method produces point forecasts and the intervals for the probability of occurrence. The intervals use the assumption of normality of the error term, generating respective quantiles (based on the estimated $$q_t$$ and variance of the error) and then transforming them into the scale of probability using the Logistic CDF. This method for intervals calculation is approximate and should not be considered as a final solution!

### Cumulative Normal distribution

The case of cumulative Normal distribution is quite similar to the cumulative Logistic one. The transformation is done using the standard Normal CDF: $$$\label{eq:NormalCDFALM} \hat{p}_t = \Phi(q_t) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{q_t} \exp \left(-\frac{1}{2}x^2 \right) dx ,$$$ where $$q_t = x_t' A$$. Similarly to the Logistic CDF, the estimated probability is used in the likelihood in order to estimate the parameters of the model. The error term is calculated using the standardised quantile function of the Normal distribution: $$$\label{eq:NormalError} e_t = \Phi^{-1} \left(\frac{o_t - \hat{p}_t + 1}{2}\right) .$$$ It acts similarly to the error from the Logistic distribution, but is based on a different function.

Similar to the Logistic CDF, the alm() function with distribution="pnorm" returns $$q_t$$ in mu, the standard deviation calculated based on the errors in scale, and the probability $$\hat{p}_t$$ in fitted.values. The resid() method returns the errors discussed above. The predict() method produces point forecasts and the intervals for the probability of occurrence. The intervals are also approximate and use the same principle as in the Logistic CDF.
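The probit counterpart of the error sketch uses $$\Phi$$ and $$\Phi^{-1}$$ from the standard Normal distribution. This is again an illustrative Python version (probit_error is a hypothetical name), relying on the standard library's NormalDist:

```python
from statistics import NormalDist

def probit_error(o_t, q_t):
    """Error term of the cumulative Normal model:
    e_t = Phi^{-1}((o_t - p_hat + 1) / 2)."""
    nd = NormalDist()
    p_hat = nd.cdf(q_t)            # Phi(q_t)
    u = (o_t - p_hat + 1.0) / 2.0  # standardised error in (0, 1)
    return nd.inv_cdf(u)           # quantile function maps back to the q_t scale
```

For $$o_t=1$$ and $$q_t=0$$ this gives $$\Phi^{-1}(0.75) \approx 0.6745$$, a positive error as expected.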

## Mixture distribution models

Finally, mixture distribution models can be used in alm() by defining the distribution and occurrence parameters. Currently only plogis() and pnorm() are supported for the occurrence variable, but all the other distributions discussed above can be used for the modelling of the non-zero values. If occurrence="plogis" or occurrence="pnorm", then alm() is fitted twice: first on the non-zero data only (defining the subset) and second, using the same data but with the response variable substituted by the binary occurrence variable and distribution=occurrence. As an alternative option, an occurrence alm() model can be estimated separately and then provided via the occurrence parameter.
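To make the mixture concrete, a point forecast can be sketched as the probability of occurrence times the conditional mean of the non-zero part. The following illustrative Python snippet assumes a Log-Normal / Cumulative Logistic mixture (the name mixture_forecast is hypothetical, and this is only one reasonable way to combine the two parts, not necessarily the exact rule used internally):

```python
import math

def mixture_forecast(q_t, mu_t, sigma):
    """Point forecast of a Log-Normal / Cumulative Logistic mixture:
    occurrence probability times the conditional mean of the non-zero part."""
    p_t = 1.0 / (1.0 + math.exp(-q_t))              # P(o_t = 1) via logistic CDF
    nonzero_mean = math.exp(mu_t + sigma ** 2 / 2)  # mean of the Log-Normal part
    return p_t * nonzero_mean
```

When the occurrence level is zero (probability 0.5) and the Log-Normal part has mean one, the forecast is simply 0.5.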

As an example of mixture model, let’s generate some data:

xreg[,1] <- round(exp(xreg[,1]-400) / (1 + exp(xreg[,1]-400)),0) * xreg[,1]
# Sometimes the generated data contains huge values
xreg[is.nan(xreg[,1]),1] <- 0;
inSample <- xreg[1:80,]
outSample <- xreg[-c(1:80),]

First, we estimate the occurrence model (it will complain that the response variable is not binary, but it will work):

modelOccurrence <- alm(y~x1+x2+Noise, inSample, distribution="plogis")

And then use it for the mixture model:

modelMixture <- alm(y~x1+x2+Noise, inSample, distribution="dlnorm", occurrence=modelOccurrence)

The occurrence model is returned in the respective variable:

summary(modelMixture)
#> Response variable: y
#> Distribution used in the estimation: Mixture of Log Normal and Cumulative logistic
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%
#> (Intercept)   5.7632     0.3464     5.0732      6.4533
#> x1            0.0034     0.0033    -0.0033      0.0101
#> x2           -0.0012     0.0021    -0.0053      0.0030
#> Noise         0.0015     0.0011    -0.0006      0.0036
#>
#> Error standard deviation: 0.0934
#> Sample size: 80
#> Number of estimated parameters: 5
#> Number of degrees of freedom: 75
#> Information criteria:
#>      AIC     AICc      BIC     BICc
#> 829.7300 820.5408 853.5503 833.4166
summary(modelMixture$occurrence)
#> Response variable: y
#> Distribution used in the estimation: Cumulative logistic
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%
#> (Intercept) -31.6466    13.4436   -58.4276     -4.8655
#> x1            0.0354     0.1301    -0.2238      0.2946
#> x2            0.0848     0.0742    -0.0629      0.2325
#> Noise         0.1004     0.0418     0.0171      0.1838
#>
#> Error standard deviation: 1.0562
#> Sample size: 80
#> Number of estimated parameters: 5
#> Number of degrees of freedom: 75
#> Information criteria:
#>     AIC    AICc     BIC    BICc
#> 63.0333 63.8441 74.9434 76.7199

After that we can produce forecasts using the data from the holdout sample:

predict(modelMixture,outSample,interval="p")
#>        Mean Lower 2.7% Upper 97.3%
#> 1  480.1003   401.3033    585.9184
#> 2  477.1054   403.0376    586.3250
#> 3  460.9731   401.6249    580.0017
#> 4  488.7810   410.4608    596.2257
#> 5  442.8176   402.4783    582.9329
#> 6  517.9526   425.9223    635.2728
#> 7  437.6745   397.8660    571.6003
#> 8  443.4254   402.5668    579.4297
#> 9  491.2455   411.4031    600.4858
#> 10 383.7664   400.3252    571.6373
#> 11 474.0977   409.9974    598.3672
#> 12 465.0254   403.1471    582.2193
#> 13 451.7959   404.8365    582.8949
#> 14 415.7800   398.8727    572.0124
#> 15 454.3449   395.0784    571.2612
#> 16 429.5019   401.5893    576.2267
#> 17 465.5562   403.6381    581.9467
#> 18 484.1937   411.9493    597.6677
#> 19 420.7776   392.3497    564.1094
#> 20 437.5001   404.9372    582.2778

If you expect autoregressive elements in the data, then you can specify the order of AR via the respective parameter:

modelMixtureAR <- alm(y~x1+x2+Noise, inSample, distribution="dlnorm", occurrence=modelOccurrence, ar=1)
summary(modelMixtureAR)
#> Response variable: y
#> Distribution used in the estimation: Mixture of Log Normal and Cumulative logistic
#> ARIMA(1,0,0) components were included in the model
#> Coefficients:
#>             Estimate Std. Error Lower 2.5% Upper 97.5%
#> (Intercept)   5.1215     0.7809     3.5655      6.6774
#> x1            0.0036     0.0033    -0.0030      0.0103
#> x2           -0.0011     0.0021    -0.0052      0.0030
#> Noise         0.0015     0.0011    -0.0007      0.0036
#> yLag1         0.1044     0.1133    -0.1214      0.3302
#>
#> Error standard deviation: 0.0929
#> Sample size: 80
#> Number of estimated parameters: 6
#> Number of degrees of freedom: 74
#> Information criteria:
#>      AIC     AICc      BIC     BICc
#> 830.8243 821.9750 857.0266 837.6377
plot(predict(modelMixtureAR,outSample,interval="p",side="u"))

If the explanatory variables are not available for the holdout sample, the forecast() function can be used:

plot(forecast(modelMixtureAR, h=10, interval="p",side="u"))
#> Warning: No newdata provided, the values will be forecasted

It will produce forecasts for each of the explanatory variables based on the available data using the es() function from the smooth package (if it is available; otherwise it will use Naive) and use those values as the new data.
