Sets the gpopt structure to default values.

	gpopt = defoptions

initialises gpopt, the structure that controls the training procedure
ogptrain. Below is a description of the gpopt structure.
The fields in gpopt are divided into options related to the
approximation of the posterior process (gpopt.postopt); options related
to the optimisation of the covariance function parameters
(gpopt.covopt); and parameters returned by the optimisation procedure,
such as the test or training errors during training or the value of the
predictive log-likelihood. The structure gpopt also has options to
compute the test/training error, the marginal likelihood for the
test/training data, etc.
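As a minimal usage sketch (the particular fields overridden here are chosen for illustration only; all of them are documented below):

```matlab
% Obtain the default option structure, then override individual fields.
gpopt = defoptions;     % all fields set to default values
gpopt.disperr = 1;      % display the errors during training
gpopt.pavg    = 1;      % store the sequence of log-averages in gpopt.logavg
```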
Fields of gpopt:
postopt    - substructure with parameters related to the computation of
             the posterior process, keeping the covariance and
             likelihood parameters fixed.
covopt     - substructure grouping parameters for the optimisation of
             the covariance parameters.
pavg       - boolean indicator for storing (or not) the log-averages.
             If nonzero, logavg stores the sequence of log-averages for
             each training input.
disperr    - boolean indicator whether or not to display the errors
             during training.
erraddr    - handle of the function that computes the test error. It
             should have four inputs: net, the desired outputs y, the
             predictive means m_x, and the predictive variances var_x,
             and it returns a (user-specified) measure of error. If no
             function is given, the weighted quadratic error
             (implemented in err_mse) is used. See also that function
             on how to implement new error functions.
ptest      - indicator to store test errors. Evaluating the test error
             can be expensive; gpopt.freq specifies the delay between
             successive test error computations (0=1, i.e. the test
             error is computed at each online step).
x_test,y_test - the test inputs and outputs.
testerror  - the returned test errors.
ptrain     - indicator whether or not to compute the training errors.
             If nonzero then, similarly to the test errors, the
             training errors are computed at every gpopt.freq-th step.
trainerror - the returned training errors.
The structure gpopt.postopt stores options driving the computation of
the posterior process:
postopt.itn     - number of online sweeps through the data (default 1).
postopt.shuffle - if nonzero (the default), the inputs are shuffled at
                  each iteration; this attempts to make the posterior
                  independent of the data ordering.
postopt.isep    - indicator whether or not to perform the TAP/EP
                  learning procedure. This requires additional values
                  to be kept for further processing.
postopt.fixitn  - keeps the basis vectors fixed and performs the TAP/EP
                  iteration. This eliminates one source of
                  fluctuations, making the TAP/EP parameters stable.
If gpopt.postopt.isep is nonzero, the inference uses the TAP/EP
iterative approach, which is more time-consuming and also requires
additional information to be stored. These values are kept in the
substructure gpopt.ep with the following fields:

ep.X     - location of the training inputs.
ep.aP    - means of the site distributions corresponding to the
           likelihood.
ep.lamP  - site variances corresponding to the likelihood.
ep.projP - coefficients of the projection.
The substructure gpopt.covopt contains the fields related to the
optimisation of the covariance function parameters. The optimisation
relies on the NETLAB optimisation routines; the default is 'conjgrad',
the string stored in the field fnopt. Additional options for the
respective optimisation routine are provided via the field opt. If this
is a scalar, it specifies the number of iterations; otherwise it has to
conform to the NETLAB specifications (see netopt from the NETLAB
package).
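The two ways of passing options to the NETLAB optimiser described above can be sketched as follows (the field values are illustrative; the options-vector conventions are those of NETLAB's foptions-style vector):

```matlab
% Either a scalar, interpreted as the number of iterations ...
gpopt.covopt.fnopt = 'conjgrad';   % NETLAB optimiser name (the default)
gpopt.covopt.opt   = 20;           % 20 iterations of conjugate gradients

% ... or a full NETLAB options vector (see netopt in the NETLAB package):
opt     = zeros(1, 18);
opt(1)  = 1;    % verbose output
opt(14) = 20;   % maximum number of iterations
gpopt.covopt.opt = opt;
```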
Copyright (c) Lehel Csató (2001-2004)