Returning to the example above, when the covariance is zero it is trivial to determine the location of the object after it moves according to an arbitrary nonlinear function f(x, y): just apply the function to the mean vector.
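A minimal sketch of this special case: with zero covariance the distribution collapses to a single point, so the transformed distribution is exactly the function applied to the mean. The motion function f and the state values below are illustrative assumptions, not from the original example.

```python
import numpy as np

# Hypothetical nonlinear motion function (range/bearing to Cartesian),
# chosen only to illustrate the zero-covariance case.
def f(x, y):
    return np.array([x * np.cos(y), x * np.sin(y)])

mean = np.array([2.0, np.pi / 2])  # assumed object state
new_mean = f(*mean)                # exact: no uncertainty to propagate
```

With nonzero covariance this shortcut no longer gives the exact transformed distribution, which is why linearization (as in the next item) or sampling is needed.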
12.
where $n$ is the length (dimension) of $x$ and $\frac{\partial g(\mu)}{\partial x_i}$ is the partial derivative of $g$ at the mean vector $\mu$ with respect to the $i$-th entry of $x$.
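The partial derivatives above are the ingredients of the first-order (delta-method) approximation $\mathrm{Var}[g(x)] \approx \nabla g(\mu)^\top \Sigma\, \nabla g(\mu)$. A sketch under assumed choices of $g$, $\mu$, and $\Sigma$ (none of which come from the text):

```python
import numpy as np

# Illustrative scalar function of a 2-D random vector.
def g(x):
    return x[0] ** 2 + np.sin(x[1])

def grad(fn, mu, eps=1e-6):
    """Central-difference estimate of the partial derivatives of fn at mu."""
    n = len(mu)
    out = np.zeros(n)
    for i in range(n):
        step = np.zeros(n)
        step[i] = eps
        out[i] = (fn(mu + step) - fn(mu - step)) / (2 * eps)
    return out

mu = np.array([1.0, 0.0])                      # assumed mean vector
Sigma = np.array([[0.1, 0.0], [0.0, 0.2]])     # assumed covariance
gr = grad(g, mu)                               # analytic: [2*mu0, cos(mu1)]
var_g = gr @ Sigma @ gr                        # first-order variance estimate
```

Here the gradient is estimated numerically; in practice it is often available in closed form, in which case the same quadratic form applies.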
13.
If leads I and III are equal in magnitude (they need not have the same sign), then the mean vector can be at 150° or −30°, but in those cases lead II would be smallest, so the only possibilities left are +60° or −120°, depending on the sign of the lead II result.
14.
The mean vector and covariance matrix of the Gaussian distribution completely specify the GP. GPs are usually used as a prior distribution over functions, and as such the mean vector and covariance matrix can be viewed as functions, where the covariance function is also called the "kernel" of the GP. Let a function f follow a Gaussian process with mean function m and kernel function k,
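A minimal sketch of how the mean and kernel functions reduce to a mean vector and covariance matrix: evaluating m and k on a finite grid of inputs gives the parameters of an ordinary multivariate Gaussian, and a draw from that Gaussian is a draw from the GP prior at those inputs. The zero mean and squared-exponential kernel below are common illustrative choices, not specified by the text.

```python
import numpy as np

def m(x):
    # Assumed mean function: identically zero.
    return np.zeros_like(x)

def k(x1, x2, length=1.0):
    # Assumed kernel: squared-exponential (RBF).
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / length) ** 2)

xs = np.linspace(0.0, 5.0, 50)                  # finite grid of inputs
mean_vec = m(xs)                                # mean vector
cov_mat = k(xs, xs) + 1e-9 * np.eye(len(xs))    # covariance matrix (+ jitter)

rng = np.random.default_rng(0)
sample = rng.multivariate_normal(mean_vec, cov_mat)  # one draw of f at xs
```

The small jitter on the diagonal is a standard numerical safeguard that keeps the covariance matrix positive definite.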