
Whitening transformation

Decorrelation method that converts a covariance matrix of a set of samples into an identity matrix


A whitening transformation or sphering transformation is a linear transformation that transforms a vector of random variables with a known covariance matrix into a set of new variables whose covariance is the identity matrix, meaning that they are uncorrelated and each have variance 1.[1] The transformation is called "whitening" because it changes the input vector into a white noise vector.

Several other transformations are closely related to whitening:

  1. the decorrelation transform removes only the correlations but leaves variances intact,
  2. the standardization transform sets variances to 1 but leaves correlations intact,
  3. a coloring transformation transforms a vector of white random variables into a random vector with a specified covariance matrix.[2]

Definition

Suppose X is a random (column) vector with non-singular covariance matrix Σ and mean 0. Then the transformation Y = W X with a whitening matrix W satisfying the condition W^T W = Σ^(-1) yields the whitened random vector Y with unit diagonal covariance.

If X has non-zero mean μ, then whitening can be performed by Y = W (X - μ).

There are infinitely many possible whitening matrices W that all satisfy the above condition. Commonly used choices are W = Σ^(-1/2) (Mahalanobis or ZCA whitening), W = L^T where L is the Cholesky decomposition of Σ^(-1) (Cholesky whitening),[3] or W = Λ^(-1/2) U^T obtained from the eigen-system Σ = U Λ U^T of Σ (PCA whitening).[4]
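
For illustration, the following base-R sketch (the covariance matrix Sigma and all variable names are made up for this example, not taken from the article) constructs the ZCA, Cholesky and PCA whitening matrices for a small covariance matrix and verifies the defining condition W^T W = Σ^(-1):

```r
# Illustrative sketch: three whitening matrices for an example covariance
# matrix Sigma, each satisfying t(W) %*% W = solve(Sigma).

Sigma <- matrix(c(4,   1.2,
                  1.2, 1), nrow = 2)      # example covariance matrix

es     <- eigen(Sigma, symmetric = TRUE)  # Sigma = U %*% diag(lambda) %*% t(U)
U      <- es$vectors
lambda <- es$values

# ZCA (Mahalanobis) whitening: W = Sigma^(-1/2)
W_zca <- U %*% diag(1 / sqrt(lambda)) %*% t(U)

# Cholesky whitening: W = t(L), where L is the lower-triangular Cholesky
# factor of Sigma^(-1); chol() returns the upper-triangular factor t(L)
W_chol <- chol(solve(Sigma))

# PCA whitening: W = Lambda^(-1/2) t(U) from the eigen-system of Sigma
W_pca <- diag(1 / sqrt(lambda)) %*% t(U)

# all three satisfy the whitening condition
stopifnot(isTRUE(all.equal(t(W_zca)  %*% W_zca,  solve(Sigma))),
          isTRUE(all.equal(t(W_chol) %*% W_chol, solve(Sigma))),
          isTRUE(all.equal(t(W_pca)  %*% W_pca,  solve(Sigma))))
```

Any two such matrices differ only by left-multiplication with an orthogonal matrix, which is why ZCA, Cholesky and PCA whitening all satisfy the same condition while producing different whitened coordinates.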

Optimal whitening transforms can be singled out by investigating the cross-covariance and cross-correlation of X and Y.[3] For example, the unique optimal whitening transformation achieving maximal component-wise correlation between the original X and the whitened Y is produced by the whitening matrix W = P^(-1/2) V^(-1/2), where P is the correlation matrix and V the diagonal variance matrix.
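
A minimal sketch of this choice, again with an illustrative covariance matrix, builds W = P^(-1/2) V^(-1/2) in base R and checks that it is still a valid whitening matrix:

```r
# Illustrative sketch of the correlation-based choice W = P^(-1/2) V^(-1/2);
# Sigma is the same made-up covariance matrix as in the previous sketch.

Sigma <- matrix(c(4,   1.2,
                  1.2, 1), nrow = 2)
P <- cov2cor(Sigma)                      # correlation matrix
V <- diag(diag(Sigma))                   # diagonal variance matrix

ep        <- eigen(P, symmetric = TRUE)
P_invsqrt <- ep$vectors %*% diag(1 / sqrt(ep$values)) %*% t(ep$vectors)
W_cor     <- P_invsqrt %*% diag(1 / sqrt(diag(V)))   # P^(-1/2) V^(-1/2)

# the defining whitening condition holds for this choice as well
stopifnot(isTRUE(all.equal(t(W_cor) %*% W_cor, solve(Sigma))))
```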


Whitening a data matrix

Whitening a data matrix follows the same transformation as for random variables. An empirical whitening transform is obtained by estimating the covariance (e.g. by maximum likelihood) and subsequently constructing a corresponding estimated whitening matrix (e.g. by Cholesky decomposition).
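
As a sketch of the empirical procedure (sample size, dimension and the mixing matrix A are made up for illustration), one can centre the data matrix, estimate the covariance with cov(), and whiten with the Cholesky factor of its inverse:

```r
# Illustrative sketch: empirical whitening of a data matrix X whose rows are
# observations and whose columns are variables.

set.seed(1)
n <- 500
A <- matrix(c(2,   0,   0,
              1,   1,   0,
              0.5, 0.3, 0.7), nrow = 3, byrow = TRUE)
X <- matrix(rnorm(n * 3), n, 3) %*% t(A)         # correlated example data

Xc    <- scale(X, center = TRUE, scale = FALSE)  # centre the columns
S_hat <- cov(Xc)                                 # estimated covariance matrix
W_hat <- chol(solve(S_hat))                      # estimated whitening matrix
Z     <- Xc %*% t(W_hat)                         # whitened data matrix

round(cov(Z), 10)   # numerically the 3 x 3 identity matrix
```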

High-dimensional whitening

This modality generalizes the pre-whitening procedure to more general spaces, where X is usually assumed to be a random function or another random object in a Hilbert space H. One of the main issues of extending whitening to infinite dimensions is that the covariance operator has an unbounded inverse in H. Nevertheless, if one assumes that the Picard condition holds for X in the range space of the covariance operator, whitening becomes possible.[5] A whitening operator can then be defined from the factorization of the Moore–Penrose inverse of the covariance operator, which acts effectively on Karhunen–Loève-type expansions of X. The advantage of these whitening transformations is that they can be optimized according to the underlying topological properties of the data, thus producing more robust whitening representations. High-dimensional features of the data can be exploited through kernel regressors or basis function systems.[6]
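
A rough finite-dimensional sketch of this idea (the grid, the basis functions and the truncation level K are illustrative assumptions, not the construction of [5] or [6]) is to truncate the empirical Karhunen–Loève expansion of discretized curves and whiten only the retained scores, thereby avoiding the ill-conditioned tail of the covariance spectrum:

```r
# Rough sketch: whitening of discretized random curves by truncating the
# empirical Karhunen-Loeve expansion to K leading components, which sidesteps
# the ill-conditioned inverse of the covariance in the tail of the spectrum.

set.seed(2)
n <- 200; m <- 101                       # n curves observed on m grid points
t_grid <- seq(0, 1, length.out = m)
B <- cbind(1, sin(2 * pi * t_grid), cos(2 * pi * t_grid), sin(4 * pi * t_grid))
X <- matrix(rnorm(n * ncol(B)), n) %*% t(B) +      # smooth random curves
     matrix(rnorm(n * m, sd = 0.05), n, m)         # plus small noise

Xc <- scale(X, center = TRUE, scale = FALSE)
S  <- cov(Xc)                            # discretized covariance "operator"
es <- eigen(S, symmetric = TRUE)

K   <- 4                                 # truncation level (illustrative)
U_K <- es$vectors[, 1:K]
l_K <- es$values[1:K]

scores   <- Xc %*% U_K                        # Karhunen-Loeve scores
Z_scores <- scores %*% diag(1 / sqrt(l_K))    # whitened scores

round(cov(Z_scores), 8)                  # numerically the K x K identity
```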

R implementation

An implementation of several whitening procedures in R, including ZCA whitening and PCA whitening as well as CCA whitening, is available in the "whitening" R package[7] published on CRAN. The R package "pfica"[8] allows the computation of high-dimensional whitening representations using basis function systems (B-splines, Fourier basis, etc.).
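
A hedged usage sketch follows, assuming the "whitening" package exposes a whiten() function with a method argument as described in its CRAN documentation (the exact interface should be checked there):

```r
# Hedged usage sketch: assumes the CRAN "whitening" package provides a
# whiten() function with a 'method' argument; verify against the package
# documentation before relying on this interface.
# install.packages("whitening")
library(whitening)

set.seed(3)
X <- matrix(rnorm(200 * 5), 200, 5) %*% matrix(rnorm(25), 5, 5)  # toy data

Z <- whiten(X, method = "ZCA")   # ZCA-whitened data matrix
round(cov(Z), 2)                 # expected to be close to the identity matrix
```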
