Updates for the wind fields

In this section we obtain the coefficients for the online update of the vectorial GP from Section 5.4. For this we need a single likelihood term, indexed $t$, and the vector GP marginal at time $t-1$, denoted $q_{t-1}(\boldsymbol{z})$, where $\boldsymbol{z}$ is the two-dimensional wind vector. The single ``likelihood'' term has a mixture of four Gaussians (following the description from [22]), $p_m(\boldsymbol{z}_t|\boldsymbol{\omega},\sigma_t^0) = \sum_{j=1}^{4}\beta_{tj}\,\phi(\boldsymbol{z}_t|\boldsymbol{c}_{tj},\boldsymbol{A}_{tj})$, in the numerator and the GP marginal at $\boldsymbol{x}_t$, a two-dimensional Gaussian denoted $q_0(\boldsymbol{z}|\boldsymbol{\mu}_0,\boldsymbol{W}_0)$, in the denominator:

$$\frac{p_m(\boldsymbol{z}_t|\boldsymbol{\omega},\sigma_t^0)}{q_0(\boldsymbol{z}|\boldsymbol{\mu}_0,\boldsymbol{W}_0)} = \sum_{j=1}^{4}\beta_{tj}\,\frac{\phi(\boldsymbol{z}_t|\boldsymbol{c}_{tj},\boldsymbol{A}_{tj})}{q_0(\boldsymbol{z}|\boldsymbol{\mu}_0,\boldsymbol{W}_0)}$$ (220)

with the mixture coefficients satisfying $\sum_j \beta_{tj} = 1$, and $\phi(\boldsymbol{z}_t|\boldsymbol{c}_{tj},\boldsymbol{A}_{tj})$ one component of the Gaussian mixture: a spherical Gaussian centred at $\boldsymbol{c}_{tj}$ with spherical covariance $\boldsymbol{A}_{tj} = \sigma_{tj}\boldsymbol{I}_2$. We use zero prior mean functions, thus we do not write $\boldsymbol{\mu}_0$ in what follows.
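As a concrete sketch, the likelihood ratio of eq. (220) can be evaluated numerically. The mixture parameters and prior covariance below are hypothetical placeholders, not values from the text:

```python
import numpy as np

def gauss_pdf(z, mean, cov):
    """Density of a 2-D Gaussian N(z | mean, cov)."""
    d = z - mean
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

# hypothetical single likelihood term: 4 spherical components with
# weights beta_j, centres c_j and A_j = sigma_j * I_2
betas = np.array([0.4, 0.3, 0.2, 0.1])            # must sum to 1
centres = np.array([[1.0, 0.5], [-0.8, 1.2], [0.3, -1.0], [2.0, 0.0]])
sigmas = np.array([0.5, 0.7, 0.4, 0.6])
W0 = np.array([[1.5, 0.2], [0.2, 1.0]])           # prior marginal covariance

def likelihood_ratio(z):
    """Mixture of 4 Gaussians divided by the zero-mean prior marginal q0."""
    num = sum(b * gauss_pdf(z, c, s * np.eye(2))
              for b, c, s in zip(betas, centres, sigmas))
    return num / gauss_pdf(z, np.zeros(2), W0)

print(likelihood_ratio(np.array([0.5, 0.3])))
```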

We need to compute the average of the likelihood in eq. (220) with respect to the Gaussian $q_{t-1}(\boldsymbol{z}|\boldsymbol{\mu}_t,\boldsymbol{W}_t)$, where $(\boldsymbol{\mu}_t,\boldsymbol{W}_t)$ are the mean and covariance of the GP marginal at $\boldsymbol{x}_t$. Using this notation we write the required average as:

$$g(\langle\boldsymbol{z}\rangle_t) = g(\boldsymbol{\mu}_t) = \int d\boldsymbol{z}\;\sum_{j=1}^{4}\beta_{tj}\,\frac{\phi(\boldsymbol{z}|\boldsymbol{c}_{tj},\boldsymbol{A}_{tj})}{q_0(\boldsymbol{z}|\boldsymbol{W}_0)}\;q_{t-1}(\boldsymbol{z}|\boldsymbol{\mu}_t,\boldsymbol{W}_t)$$ (221)

where the dependence on the mean $\boldsymbol{\mu}_t$ of the GP marginal is written explicitly. We decompose eq. (221) into the sum:

$$g(\boldsymbol{\mu}_t) = \sum_{j=1}^{4}\beta_{tj}\int d\boldsymbol{z}\;\frac{\phi(\boldsymbol{z}|\boldsymbol{c}_{tj},\boldsymbol{A}_{tj})}{q_0(\boldsymbol{z}|\boldsymbol{W}_0)}\;q_{t-1}(\boldsymbol{z}|\boldsymbol{\mu}_t,\boldsymbol{W}_t) = \sum_{j=1}^{4}\beta_{tj}\,s_{tj}(\boldsymbol{\mu}_t)$$ (222)

and in the following we compute $s_{tj}(\boldsymbol{\mu}_t)$. Since the integral is the same for each $s_{tj}(\boldsymbol{\mu}_t)$, we drop the indices and compute the generic

$$s(\boldsymbol{\mu}) = \int d\boldsymbol{z}\;\frac{\phi(\boldsymbol{z}|\boldsymbol{c},\boldsymbol{A})}{q_0(\boldsymbol{z}|\boldsymbol{W}_0)}\;q_{t-1}(\boldsymbol{z}|\boldsymbol{\mu},\boldsymbol{W}).$$ (223)

All distributions involved are Gaussian; the result is thus also Gaussian, with the general form:

$$s = K\exp\!\left(-\tfrac{1}{2}\mathcal{F}\right)$$

with the quadratic term

$$\mathcal{F} = \boldsymbol{c}^T\boldsymbol{A}^{-1}\boldsymbol{c} + \boldsymbol{\mu}^T\boldsymbol{W}^{-1}\boldsymbol{\mu} - \left(\boldsymbol{A}^{-1}\boldsymbol{c} + \boldsymbol{W}^{-1}\boldsymbol{\mu}\right)^T\left(\boldsymbol{A}^{-1} + \boldsymbol{W}^{-1} - \boldsymbol{W}_0^{-1}\right)^{-1}\left(\boldsymbol{A}^{-1}\boldsymbol{c} + \boldsymbol{W}^{-1}\boldsymbol{\mu}\right)$$ (224)

or equivalently (using the matrix inversions from eq. (181)):

$$\begin{split}\mathcal{F} &= \boldsymbol{c}^T\left(\boldsymbol{A} + \left(\boldsymbol{W}^{-1} - \boldsymbol{W}_0^{-1}\right)^{-1}\right)^{-1}\boldsymbol{c} + \boldsymbol{\mu}^T\left(\boldsymbol{W} + \left(\boldsymbol{A}^{-1} - \boldsymbol{W}_0^{-1}\right)^{-1}\right)^{-1}\boldsymbol{\mu} \\ &\quad - 2\,\boldsymbol{\mu}^T\left(\boldsymbol{A} + \boldsymbol{W} - \boldsymbol{A}\boldsymbol{W}_0^{-1}\boldsymbol{W}\right)^{-1}\boldsymbol{c}\end{split}$$ (225)
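The equivalence of eqs. (224) and (225) can be checked numerically. The following is a sketch with assumed random symmetric positive definite matrices; the prior covariance $\boldsymbol{W}_0$ is taken broad so that all the inverses involved exist:

```python
import numpy as np

rng = np.random.default_rng(0)

def spd(scale):
    """A random symmetric positive definite 2x2 matrix."""
    m = rng.normal(size=(2, 2))
    return scale * (m @ m.T + 2 * np.eye(2))

A, W = spd(1.0), spd(1.0)
W0 = spd(10.0)   # broad prior keeps A^-1 + W^-1 - W0^-1 positive definite
c = rng.normal(size=2)
mu = rng.normal(size=2)
inv = np.linalg.inv

# eq. (224): completing the square directly
b = inv(A) @ c + inv(W) @ mu
P = inv(A) + inv(W) - inv(W0)
F1 = c @ inv(A) @ c + mu @ inv(W) @ mu - b @ inv(P) @ b

# eq. (225): the form obtained via the matrix inversion lemma
F2 = (c @ inv(A + inv(inv(W) - inv(W0))) @ c
      + mu @ inv(W + inv(inv(A) - inv(W0))) @ mu
      - 2 * mu @ inv(A + W - A @ inv(W0) @ W) @ c)

print(F1, F2)  # the two forms should agree
```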

and the multiplying constant

$$K^2 = \frac{|\boldsymbol{W}_0|}{|\boldsymbol{A} + \boldsymbol{W} - \boldsymbol{A}\boldsymbol{W}_0^{-1}\boldsymbol{W}|}.$$ (226)
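As a sanity check, the closed form $s = K\exp(-\mathcal{F}/2)$ built from eqs. (224) and (226) can be compared against brute-force two-dimensional quadrature of eq. (223). The matrices below are illustrative assumptions, with $\boldsymbol{W}_0$ chosen large enough that the integrand is normalisable:

```python
import numpy as np

inv, det = np.linalg.inv, np.linalg.det

# hypothetical test values (not from the text)
A = 0.5 * np.eye(2)
W = np.array([[0.8, 0.1], [0.1, 0.6]])
W0 = 4.0 * np.eye(2)
c = np.array([0.4, -0.2])
mu = np.array([0.1, 0.3])

# closed form: s = K * exp(-F/2), eqs. (224) and (226)
b = inv(A) @ c + inv(W) @ mu
P = inv(A) + inv(W) - inv(W0)
F = c @ inv(A) @ c + mu @ inv(W) @ mu - b @ inv(P) @ b
K = np.sqrt(det(W0) / det(A + W - A @ inv(W0) @ W))
s_closed = K * np.exp(-0.5 * F)

# brute-force quadrature of eq. (223) on a grid
def gauss_all(Z, mean, cov):
    """Gaussian density at every row of Z."""
    D = Z - mean
    q = np.einsum('ni,ij,nj->n', D, inv(cov), D)
    return np.exp(-0.5 * q) / (2 * np.pi * np.sqrt(det(cov)))

xs, ys = np.meshgrid(np.linspace(-5, 5, 241), np.linspace(-5, 5, 241))
Z = np.stack([xs.ravel(), ys.ravel()], axis=1)
h = 10 / 240
s_num = np.sum(gauss_all(Z, c, A) / gauss_all(Z, np.zeros(2), W0)
               * gauss_all(Z, mu, W)) * h * h

print(s_closed, s_num)  # should agree to a few decimal places
```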

The first- and second-order differentials of $\mathcal{F}$ with respect to $\boldsymbol{\mu}$ are:

$$\begin{split}\frac{1}{2}\,\partial_{\boldsymbol{\mu}}\mathcal{F} &= \left(\boldsymbol{W} + \left(\boldsymbol{A}^{-1} - \boldsymbol{W}_0^{-1}\right)^{-1}\right)^{-1}\boldsymbol{\mu} - \left(\boldsymbol{A} + \boldsymbol{W} - \boldsymbol{A}\boldsymbol{W}_0^{-1}\boldsymbol{W}\right)^{-1}\boldsymbol{c} \\ \frac{1}{2}\,\partial^2_{\boldsymbol{\mu}\boldsymbol{\mu}}\mathcal{F} &= \left[\boldsymbol{W} + \left(\boldsymbol{A}^{-1} - \boldsymbol{W}_0^{-1}\right)^{-1}\right]^{-1}.\end{split}$$ (227)
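The first differential in eq. (227) can be verified against a finite-difference derivative of the quadratic form in eq. (224); the matrices below are again hypothetical test values:

```python
import numpy as np

inv = np.linalg.inv

# hypothetical test matrices (broad W0 so the inverses below exist)
A = 0.5 * np.eye(2)
W = np.array([[0.8, 0.1], [0.1, 0.6]])
W0 = 4.0 * np.eye(2)
c = np.array([0.4, -0.2])

def F(mu):
    """Quadratic form of eq. (224)."""
    b = inv(A) @ c + inv(W) @ mu
    P = inv(A) + inv(W) - inv(W0)
    return c @ inv(A) @ c + mu @ inv(W) @ mu - b @ inv(P) @ b

mu = np.array([0.1, 0.3])

# closed-form 0.5*dF/dmu and 0.5*d2F/dmu2, eq. (227)
grad = inv(W + inv(inv(A) - inv(W0))) @ mu - inv(A + W - A @ inv(W0) @ W) @ c
hess = inv(W + inv(inv(A) - inv(W0)))

# central differences of F, halved to match the 1/2 factor in eq. (227)
eps = 1e-6
fd = np.array([(F(mu + eps * e) - F(mu - eps * e)) / (4 * eps)
               for e in np.eye(2)])
print(fd, grad)
```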

We can substitute back each $s_{tj}(\boldsymbol{\mu}_t) = K_{tj}\exp(-\mathcal{F}_{tj}/2)$ and differentiate $\ln g(\boldsymbol{\mu}_t)$ with respect to $\boldsymbol{\mu}_t$ to get the quantities required for the updates of the vector GP in eq. (175):

$$\begin{split}\partial_{\boldsymbol{\mu}}\ln g(\boldsymbol{\mu}_t) &= -\,\frac{\sum_j \beta_{tj}\,s_{tj}\;\tfrac{1}{2}\partial_{\boldsymbol{\mu}}\mathcal{F}_{tj}}{\sum_j \beta_{tj}\,s_{tj}} \\ \partial^2_{\boldsymbol{\mu}\boldsymbol{\mu}}\ln g(\boldsymbol{\mu}_t) &= \frac{\sum_j \beta_{tj}\,s_{tj}\left(\tfrac{1}{4}\,\partial_{\boldsymbol{\mu}}\mathcal{F}_{tj}\left(\partial_{\boldsymbol{\mu}}\mathcal{F}_{tj}\right)^T - \tfrac{1}{2}\,\partial^2_{\boldsymbol{\mu}\boldsymbol{\mu}}\mathcal{F}_{tj}\right)}{\sum_j \beta_{tj}\,s_{tj}} - \frac{\left(\sum_j \beta_{tj}\,s_{tj}\,\tfrac{1}{2}\partial_{\boldsymbol{\mu}}\mathcal{F}_{tj}\right)\left(\sum_j \beta_{tj}\,s_{tj}\,\tfrac{1}{2}\partial_{\boldsymbol{\mu}}\mathcal{F}_{tj}\right)^T}{\left(\sum_j \beta_{tj}\,s_{tj}\right)^2}\end{split}$$ (228)
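The differentials of $\ln g$ in eq. (228) can likewise be verified by finite differences. The sketch below assembles $s_{tj}$, $\tfrac{1}{2}\partial\mathcal{F}_{tj}$ and $\tfrac{1}{2}\partial^2\mathcal{F}_{tj}$ from the earlier equations for a hypothetical four-component mixture (all numbers are placeholders):

```python
import numpy as np

inv, det = np.linalg.inv, np.linalg.det

# hypothetical mixture term and covariances (not values from the text)
betas = np.array([0.4, 0.3, 0.2, 0.1])
centres = np.array([[1.0, 0.5], [-0.8, 1.2], [0.3, -1.0], [2.0, 0.0]])
sigmas = np.array([0.5, 0.7, 0.4, 0.6])
W = np.array([[0.8, 0.1], [0.1, 0.6]])   # GP marginal covariance W_t
W0 = 4.0 * np.eye(2)                      # prior marginal covariance

def s_terms(mu):
    """Per-component s_tj, 0.5*dF_tj, 0.5*d2F_tj (eqs. 224, 226, 227)."""
    out = []
    for cj, sj in zip(centres, sigmas):
        Aj = sj * np.eye(2)
        b = inv(Aj) @ cj + inv(W) @ mu
        P = inv(Aj) + inv(W) - inv(W0)
        F = cj @ inv(Aj) @ cj + mu @ inv(W) @ mu - b @ inv(P) @ b
        K = np.sqrt(det(W0) / det(Aj + W - Aj @ inv(W0) @ W))
        dF = inv(W + inv(inv(Aj) - inv(W0))) @ mu \
             - inv(Aj + W - Aj @ inv(W0) @ W) @ cj
        ddF = inv(W + inv(inv(Aj) - inv(W0)))
        out.append((K * np.exp(-0.5 * F), dF, ddF))
    return out

def ln_g(mu):
    return np.log(sum(b * s for b, (s, _, _) in zip(betas, s_terms(mu))))

mu = np.array([0.1, 0.3])
terms = s_terms(mu)
g = sum(b * s for b, (s, _, _) in zip(betas, terms))
num = sum(b * s * dF for b, (s, dF, _) in zip(betas, terms))

# eq. (228): gradient and Hessian of ln g
grad = -num / g
hess = (sum(b * s * (np.outer(dF, dF) - ddF)
            for b, (s, dF, ddF) in zip(betas, terms)) / g
        - np.outer(num, num) / g**2)

# finite-difference verification of both differentials
eps = 1e-5
def grad_num(m):
    return np.array([(ln_g(m + eps * e) - ln_g(m - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
fd_grad = grad_num(mu)
eps2 = 1e-4
fd_hess = np.array([(grad_num(mu + eps2 * e) - grad_num(mu - eps2 * e))
                    / (2 * eps2) for e in np.eye(2)])
print(fd_grad, grad)
print(fd_hess, hess)
```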

where $\beta_{tj}\,s_{tj}$ is the (unnormalised) responsibility of the $j$-th component of the mixture for generating the datum at $\boldsymbol{x}_t$.