# Linear Gaussian state space model

A linear Gaussian state space model can be written in the following form (Durbin and Koopman, 2012):

The measurement or observation equation with $y_{t}\in\mathbb{R}^{N}$ for $t=1,...,T$ is given by
$y_{t}=d_{t}+Z_{t}\alpha_{t}+\epsilon_{t},\ \ \ \ \epsilon_{t}\sim N(0, H_{t})$.
The state equation (transition dynamics) with $\alpha_{t}\in\mathbb{R}^{n}$, for $t=1,...,T-1$, is given by
$\alpha_{t+1}=c_{t}+T_{t}\alpha_{t}+R_{t}\eta_{t},\ \ \ \ \eta_{t}\sim N(0, Q_{t})$,
with initialisation
$\alpha_{1}\sim N(a_{1}, P_{1})$.
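As a concrete illustration (not part of the reference above), the local level model is a minimal special case of this form with scalar state, $Z_{t}=T_{t}=R_{t}=1$ and $c_{t}=d_{t}=0$. A short simulation sketch in NumPy, with illustrative variance values:

```python
import numpy as np

# Local level model, a special case of the linear Gaussian state space form:
#   y_t       = alpha_t + eps_t,  eps_t ~ N(0, H)   (Z_t = 1, d_t = 0)
#   alpha_{t+1} = alpha_t + eta_t, eta_t ~ N(0, Q)  (T_t = 1, c_t = 0, R_t = 1)
# The values of T, H, Q and the initial moments below are arbitrary choices.
rng = np.random.default_rng(0)
T, H, Q = 100, 0.5, 0.1

alpha = np.empty(T)
y = np.empty(T)
alpha[0] = rng.normal(0.0, 1.0)  # alpha_1 ~ N(a_1, P_1) with a_1 = 0, P_1 = 1
for t in range(T):
    # measurement equation
    y[t] = alpha[t] + rng.normal(0.0, np.sqrt(H))
    # state equation (for t = 1, ..., T-1)
    if t < T - 1:
        alpha[t + 1] = alpha[t] + rng.normal(0.0, np.sqrt(Q))
```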

Note:

1. The system matrices $c_{t}$, $d_{t}$, $Z_{t}$, $T_{t}$, $R_{t}$, $H_{t}$ and $Q_{t}$ are predetermined. Usually they are time-invariant and depend on a vector of unknown parameters.
2. Initialisation often plays an important role in inference. For non-stationary components of $\alpha_{t}$, say $\alpha_{tj}$, diffuse initialisation is conventionally used. In that case,
$a_{1j}=0$ and $P_{1}$ is diagonal with $P_{1,jj}$ set to a large number. For stationary components of $\alpha_{t}$, say $\alpha_{t}^{*}$, the unconditional distribution is used for initialisation. It follows that
$a_{1}^{*}=(I_{n^{*}}-T_{1}^{*})^{-1}c_{1}^{*}$
and $P_{1}^{*}$ is such that
$vec(P_{1}^{*})=(I_{{n^{*}}^{2}}-T_{1}^{*}\otimes T_{1}^{*})^{-1}vec\big((R_{1}Q_{1}R_{1}')^{*}\big)$,
where $n^{*}\leq n$ is the dimension of the stationary components of $\alpha_{t}$.
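The stationary initialisation above can be computed directly with the vec/Kronecker identity. A minimal sketch for an illustrative two-dimensional stationary block (the matrices $T_{1}^{*}$, $c_{1}^{*}$ and $(R_{1}Q_{1}R_{1}')^{*}$ below are made-up example values, not from any particular model):

```python
import numpy as np

# Stationary initialisation of a state block:
#   a_1*      = (I - T*)^{-1} c*
#   vec(P_1*) = (I - T* kron T*)^{-1} vec(RQR'*)
n_star = 2
T_star = np.array([[0.5, 0.1],
                   [0.0, 0.3]])          # stable: eigenvalues inside unit circle
c_star = np.array([1.0, -0.5])
RQR_star = np.array([[0.20, 0.05],
                     [0.05, 0.10]])      # (R Q R')* block, symmetric PSD

# Unconditional mean of the stationary block
a1_star = np.linalg.solve(np.eye(n_star) - T_star, c_star)

# Unconditional variance via the vec identity; vec stacks columns,
# hence Fortran ("F") ordering in flatten/reshape
vec_P1 = np.linalg.solve(np.eye(n_star**2) - np.kron(T_star, T_star),
                         RQR_star.flatten(order="F"))
P1_star = vec_P1.reshape(n_star, n_star, order="F")
```

By construction, `P1_star` solves the discrete Lyapunov equation $P_{1}^{*}=T_{1}^{*}P_{1}^{*}T_{1}^{*\prime}+(R_{1}Q_{1}R_{1}')^{*}$, which is a quick way to verify the result.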

The Kalman filter computes $a_{t}=E(\alpha_{t}|y_{1:t-1})$ and $P_{t}=\text{Var}(\alpha_{t}|y_{1:t-1})$ via a forward recursion; prediction errors are produced as a by-product. It follows