aspcore.correlation.SampleCorrelation

class aspcore.correlation.SampleCorrelation(forget_factor, size, delay=0, estimate_mean=False)

Bases: object

Estimates the correlation matrix between two random vectors v(n) and w(n) by iteratively computing (1/N) sum_{n=0}^{N-1} v(n) w^T(n). The goal is to estimate R = E[v(n) w^T(n)].

If delay is supplied, it will calculate E[v(n) w(n-delay)^T]

Calling update() only changes the internal state. In order to update self.corr_mat, get_corr() must be called.

size : scalar integer or tuple of length 2
    If a scalar integer, the correlation matrix is size x size. If a tuple of length 2, the correlation matrix has shape size.

forget_factor : scalar between 0 and 1
    1 is straight averaging with an increasing time window; 0 makes the matrix dependent only on the last sample.
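A minimal usage sketch, assuming only the constructor and method signatures documented here (the dimensions and data are illustrative):

    import numpy as np
    from aspcore.correlation import SampleCorrelation

    rng = np.random.default_rng(0)
    vec_dim = 4

    # forget_factor=1 gives straight averaging of E[v(n) w(n)^T]
    estimator = SampleCorrelation(forget_factor=1, size=vec_dim)

    for _ in range(1000):
        v = rng.normal(size=(vec_dim, 1))
        w = rng.normal(size=(vec_dim, 1))
        estimator.update(v, w)       # only updates the internal state

    corr = estimator.get_corr()      # computes and stores self.corr_mat
    print(corr.shape)                # (4, 4), since size was a scalar integer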

__init__(forget_factor, size, delay=0, estimate_mean=False)
size : scalar integer or tuple of length 2
    If a scalar integer, the correlation matrix is size x size. If a tuple of length 2, the correlation matrix has shape size.

forget_factor : scalar between 0 and 1
    1 is straight averaging with an increasing time window; 0 makes the matrix dependent only on the last sample.

Methods

__init__(forget_factor, size[, delay, ...])

size : scalar integer or tuple of length 2, giving the shape of the correlation matrix.

get_corr([autocorr, est_method, pos_def])

Returns the correlation matrix and stores it in self.corr_mat

update(vec1[, vec2])

Updates the correlation matrix with a new sample vector. If only one is provided, the autocorrelation is computed; if two are provided, their cross-correlation is computed.

get_corr(autocorr=False, est_method='scm', pos_def=False)

Returns the correlation matrix and stores it in self.corr_mat

Will ensure positive semi-definiteness and Hermitian symmetry if autocorr is True. If pos_def=True, it will additionally ensure that the matrix is positive definite.

est_method can be ‘scm’, ‘oas’ or ‘qis’
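As a sketch (continuing the illustrative setup from the usage example above), an autocorrelation estimate with a guaranteed positive definite result could be requested as follows; the choice of 'oas' here is only an example:

    # Single-argument update() calls accumulate an autocorrelation estimate
    auto_est = SampleCorrelation(forget_factor=1, size=vec_dim)
    for _ in range(100):
        auto_est.update(rng.normal(size=(vec_dim, 1)))

    # 'oas' and 'qis' are alternatives to the plain sample correlation matrix ('scm')
    R = auto_est.get_corr(autocorr=True, est_method='oas', pos_def=True)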

update(vec1, vec2=None)

Updates the correlation matrix with a new sample vector. If only one is provided, the autocorrelation is computed; if two are provided, their cross-correlation is computed.

Both vectors are ndarrays of shape (vec_dim, 1).

For the recursive definition of covariance with sample mean, take a look at ‘Computing (co)variances recursively’ - Thijs Knaap.

Without mean removal we calculate (1/N) sum_{n=1}^{N} x_n y_n^*.

With mean removal we calculate (1/N) sum_{n=1}^{N} (x_n - xbar_n)(y_n - ybar_n)^*, where the sample mean is xbar_n = (1/n) sum_{i=1}^{n} x_i. The recursive calculation is exact (apart from possible numerical differences); there are no additional assumptions.

Bessel's correction (normalizing by 1/(N-1)) is used, but is not necessarily desirable, since it does not give the lowest MSE (although it is unbiased). For a normalization of 1/N, use the following code (the first index must be handled separately in this case as well):

    np.matmul(vec1 - self.mean.state, (vec2 - self.mean2.state).T, out=self._preallocated_update)
    self._preallocated_update *= 1 / self.n
    self.corr *= self.n / (self.n + 1)
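For reference, the kind of recursive mean-removed computation referenced above can be written as a standalone Welford-style update and checked against the batch formula. This sketch is independent of the class internals (all names are illustrative) and uses Bessel's correction; replacing (num_samples - 1) with num_samples in both normalizations gives the 1/N variant.

    import numpy as np

    rng = np.random.default_rng(0)
    dim, num_samples = 3, 200
    x_all = rng.normal(size=(dim, num_samples))
    y_all = rng.normal(size=(dim, num_samples))

    # Recursive (Welford-style) update of the mean-removed cross-covariance
    mean_x = np.zeros((dim, 1))
    mean_y = np.zeros((dim, 1))
    comoment = np.zeros((dim, dim))
    for n in range(num_samples):
        x = x_all[:, n:n+1]
        y = y_all[:, n:n+1]
        delta_x = x - mean_x                  # deviation from the previous mean of x
        mean_x += delta_x / (n + 1)
        mean_y += (y - mean_y) / (n + 1)
        comoment += delta_x @ (y - mean_y).T  # previous x-deviation times updated y-deviation
    cov_recursive = comoment / (num_samples - 1)   # Bessel's correction

    # Batch reference: 1/(N-1) sum_n (x_n - xbar_N)(y_n - ybar_N)^T
    x_centered = x_all - x_all.mean(axis=1, keepdims=True)
    y_centered = y_all - y_all.mean(axis=1, keepdims=True)
    cov_batch = x_centered @ y_centered.T / (num_samples - 1)

    assert np.allclose(cov_recursive, cov_batch)   # the recursion matches the batch result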