At instance $t$, the $M$ (complex-valued) output signals (measurements) $y_m[t]$, $m = 1, \ldots, M$, of the system are related to the $K$ (complex-valued) input signals $x_k[t]$, $k = 1, \ldots, K$, as

$$y_m[t] = \sum_{k=1}^{K} a_{m,k}\, x_k[t] + n_m[t],$$

where $n_m[t]$ denotes the noise added by the system. The one-dimensional form of ESPRIT can be applied if the weights have the form $a_{m,k} = e^{j(m-1)\omega_k}$, whose phases are integer multiples of some radial frequency $\omega_k$. This frequency only depends on the index of the system's input, i.e., $k$. The goal of ESPRIT is to estimate the $\omega_k$'s, given the outputs $y_m[t]$ and the number of input signals, $K$. Since the radial frequencies are the actual objectives, $a_{m,k}$ is denoted as $a_m(\omega_k)$.
Collating the weights $a_m(\omega_k)$ as $\mathbf{a}(\omega_k) = [\,1 \ \ e^{j\omega_k} \ \ e^{j2\omega_k} \ \cdots \ e^{j(M-1)\omega_k}\,]^\mathsf{T}$ and the $M$ output signals at instance $t$ as $\mathbf{y}[t] = [\,y_1[t] \ \cdots \ y_M[t]\,]^\mathsf{T}$,

$$\mathbf{y}[t] = \sum_{k=1}^{K} \mathbf{a}(\omega_k)\, x_k[t] + \mathbf{n}[t],$$

where $\mathbf{n}[t] = [\,n_1[t] \ \cdots \ n_M[t]\,]^\mathsf{T}$. Further, when the weight vectors $\mathbf{a}(\omega_1), \ldots, \mathbf{a}(\omega_K)$ are put into a Vandermonde matrix $\mathbf{A} = [\,\mathbf{a}(\omega_1) \ \cdots \ \mathbf{a}(\omega_K)\,]$, and the $K$ inputs at instance $t$ into a vector $\mathbf{x}[t] = [\,x_1[t] \ \cdots \ x_K[t]\,]^\mathsf{T}$, we can write

$$\mathbf{y}[t] = \mathbf{A}\,\mathbf{x}[t] + \mathbf{n}[t].$$

With several measurements at instances $t = 1, \ldots, T$ and the notations $\mathbf{Y} = [\,\mathbf{y}[1] \ \cdots \ \mathbf{y}[T]\,]$, $\mathbf{X} = [\,\mathbf{x}[1] \ \cdots \ \mathbf{x}[T]\,]$ and $\mathbf{N} = [\,\mathbf{n}[1] \ \cdots \ \mathbf{n}[T]\,]$, the model equation becomes

$$\mathbf{Y} = \mathbf{A}\mathbf{X} + \mathbf{N}.$$
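The data model above can be assembled numerically. The following is a minimal NumPy sketch; the array size `M`, the frequencies in `omega`, the snapshot count `T`, and the noise level are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K, T = 8, 2, 100              # outputs, inputs, snapshots (example values)
omega = np.array([0.7, 1.9])     # assumed "true" radial frequencies for illustration

# Vandermonde steering matrix A with entries a_m(omega_k) = exp(j*(m-1)*omega_k)
m = np.arange(M)[:, None]
A = np.exp(1j * m * omega[None, :])                                  # shape (M, K)

# Random complex inputs X and additive noise N, then Y = A X + N
X = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
N = 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = A @ X + N
print(Y.shape)   # (8, 100)
```

Note that the first row of $\mathbf{A}$ is all ones ($m - 1 = 0$), matching the leading $1$ in each weight vector $\mathbf{a}(\omega_k)$.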
Signal subspace
The singular value decomposition (SVD) of $\mathbf{Y}$ is given as

$$\mathbf{Y} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^\dagger,$$

where $\mathbf{U} \in \mathbb{C}^{M \times M}$ and $\mathbf{V} \in \mathbb{C}^{T \times T}$ are unitary matrices and $\mathbf{\Sigma}$ is a diagonal matrix of size $M \times T$, that holds the singular values from the largest (top left) in descending order. The operator $\dagger$ denotes the complex-conjugate transpose (Hermitian transpose).
Let us assume that $T \geq M$. Notice that we have $K$ input signals. If there were no noise, there would only be $K$ non-zero singular values. We assume that the $K$ largest singular values stem from these input signals, and the remaining singular values are presumed to stem from noise. The matrices in the SVD of $\mathbf{Y}$ can be partitioned into submatrices, where some submatrices correspond to the signal subspace and some correspond to the noise subspace:

$$\mathbf{U} = [\,\mathbf{U}_\mathrm{S} \ \ \mathbf{U}_\mathrm{N}\,], \qquad \mathbf{\Sigma} = \begin{bmatrix} \mathbf{\Sigma}_\mathrm{S} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{\Sigma}_\mathrm{N} & \mathbf{0} \end{bmatrix}, \qquad \mathbf{V} = [\,\mathbf{V}_\mathrm{S} \ \ \mathbf{V}_\mathrm{N} \ \ \mathbf{V}_0\,],$$

where $\mathbf{U}_\mathrm{S} \in \mathbb{C}^{M \times K}$ and $\mathbf{V}_\mathrm{S} \in \mathbb{C}^{T \times K}$ contain the first $K$ columns of $\mathbf{U}$ and $\mathbf{V}$, respectively, and $\mathbf{\Sigma}_\mathrm{S} \in \mathbb{C}^{K \times K}$ is a diagonal matrix comprising the $K$ largest singular values.
Thus, the SVD can be written as

$$\mathbf{Y} = \mathbf{U}_\mathrm{S}\mathbf{\Sigma}_\mathrm{S}\mathbf{V}_\mathrm{S}^\dagger + \mathbf{U}_\mathrm{N}\mathbf{\Sigma}_\mathrm{N}\mathbf{V}_\mathrm{N}^\dagger,$$

where $\mathbf{U}_\mathrm{S}$, $\mathbf{\Sigma}_\mathrm{S}$, and $\mathbf{V}_\mathrm{S}$ represent the contribution of the input signals $x_k[t]$ to $\mathbf{Y}$. We term $\mathbf{U}_\mathrm{S}$ the signal subspace. In contrast, $\mathbf{U}_\mathrm{N}$, $\mathbf{\Sigma}_\mathrm{N}$, and $\mathbf{V}_\mathrm{N}$ represent the contribution of the noise $n_m[t]$ to $\mathbf{Y}$.
Hence, from the system model, we can write $\mathbf{A}\mathbf{X} = \mathbf{U}_\mathrm{S}\mathbf{\Sigma}_\mathrm{S}\mathbf{V}_\mathrm{S}^\dagger$ and $\mathbf{N} = \mathbf{U}_\mathrm{N}\mathbf{\Sigma}_\mathrm{N}\mathbf{V}_\mathrm{N}^\dagger$. Also, from the former, we can write

$$\mathbf{U}_\mathrm{S} = \mathbf{A}\mathbf{F},$$

where $\mathbf{F} = \mathbf{X}\mathbf{V}_\mathrm{S}\mathbf{\Sigma}_\mathrm{S}^{-1}$. In the sequel, it is only important that there exists such an invertible matrix $\mathbf{F}$; its actual content will not be important.
Note: The signal subspace can also be extracted from the spectral decomposition of the auto-correlation matrix of the measurements, which is estimated as

$$\mathbf{R}_\mathrm{YY} = \frac{1}{T}\sum_{t=1}^{T} \mathbf{y}[t]\,\mathbf{y}[t]^\dagger = \frac{1}{T}\mathbf{Y}\mathbf{Y}^\dagger = \frac{1}{T}\mathbf{U}\,\mathbf{\Sigma}\mathbf{\Sigma}^\dagger\,\mathbf{U}^\dagger = \frac{1}{T}\mathbf{U}_\mathrm{S}\mathbf{\Sigma}_\mathrm{S}^{2}\mathbf{U}_\mathrm{S}^\dagger + \frac{1}{T}\mathbf{U}_\mathrm{N}\mathbf{\Sigma}_\mathrm{N}^{2}\mathbf{U}_\mathrm{N}^\dagger.$$
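The signal-subspace extraction can be sketched in NumPy as follows. The synthetic measurement matrix and its dimensions are illustrative assumptions; the sanity check at the end verifies that the steering matrix lies (nearly) in the span of the first $K$ left singular vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, T = 8, 2, 200                 # example dimensions (assumed)
omega = np.array([0.7, 1.9])        # assumed true radial frequencies
A = np.exp(1j * np.arange(M)[:, None] * omega[None, :])
X = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
N = 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = A @ X + N

# Economy SVD; NumPy returns singular values in descending order.
U, s, Vh = np.linalg.svd(Y, full_matrices=False)

# The first K left singular vectors span (approximately) the signal subspace.
U_S = U[:, :K]

# Sanity check: projecting A onto span(U_S) should nearly reproduce A,
# since range(A) and the signal subspace coincide up to the noise level.
residual = A - U_S @ (U_S.conj().T @ A)
print(np.linalg.norm(residual) / np.linalg.norm(A))   # small (noise-level)
```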
Estimation of radial frequencies
We have established two expressions so far: $\mathbf{U}_\mathrm{S} = \mathbf{A}\mathbf{F}$ and the rotational invariance $\mathbf{A}_\downarrow \mathbf{H} = \mathbf{A}_\uparrow$, where $\mathbf{A}_\downarrow$ and $\mathbf{A}_\uparrow$ are obtained from $\mathbf{A}$ by deleting its last and its first row, respectively, and $\mathbf{H} = \operatorname{diag}(e^{j\omega_1}, \ldots, e^{j\omega_K})$. Now,

$$\mathbf{U}_{\mathrm{S}\downarrow}\,\mathbf{P} = \mathbf{U}_{\mathrm{S}\uparrow},$$

where $\mathbf{U}_{\mathrm{S}\downarrow} = \mathbf{A}_\downarrow\mathbf{F}$ and $\mathbf{U}_{\mathrm{S}\uparrow} = \mathbf{A}_\uparrow\mathbf{F}$ denote the truncated signal subspaces, and

$$\mathbf{P} = \mathbf{F}^{-1}\mathbf{H}\mathbf{F}.$$

The above equation has the form of an eigenvalue decomposition, and the phases of the eigenvalues in the diagonal matrix $\mathbf{H}$ are used to estimate the radial frequencies.
Thus, after solving for $\mathbf{P}$ in the relation $\mathbf{U}_{\mathrm{S}\downarrow}\,\mathbf{P} = \mathbf{U}_{\mathrm{S}\uparrow}$, we would find the eigenvalues $\lambda_1, \ldots, \lambda_K$ of $\mathbf{P}$, where $\lambda_k = \alpha_k e^{j\omega_k}$, and the radial frequencies $\omega_1, \ldots, \omega_K$ are estimated as the phases (arguments) of the eigenvalues.
Remark: In general, $\mathbf{U}_{\mathrm{S}\downarrow}$ is not invertible. One can use the least squares estimate $\mathbf{P} = \mathbf{U}_{\mathrm{S}\downarrow}^{+}\,\mathbf{U}_{\mathrm{S}\uparrow}$, where $(\cdot)^{+}$ denotes the Moore–Penrose pseudoinverse. An alternative would be the total least squares estimate.
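The least squares step and the eigenvalue-phase extraction can be sketched as follows (a minimal example on synthetic data; the dimensions, frequencies, and noise level are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
M, K, T = 8, 2, 200                 # example dimensions (assumed)
omega = np.array([0.7, 1.9])        # assumed true radial frequencies
A = np.exp(1j * np.arange(M)[:, None] * omega[None, :])
X = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
Y = A @ X + 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

U_S = np.linalg.svd(Y, full_matrices=False)[0][:, :K]   # signal subspace

# Truncated signal subspaces: drop the last row (down) / the first row (up).
U_down, U_up = U_S[:-1, :], U_S[1:, :]

# Least squares (pseudoinverse) solution of  U_down P = U_up.
P, *_ = np.linalg.lstsq(U_down, U_up, rcond=None)

# Phases of the eigenvalues of P estimate the radial frequencies.
omega_est = np.sort(np.angle(np.linalg.eigvals(P)))
print(omega_est)   # close to [0.7, 1.9]
```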
Algorithm summary
Input: Measurements $\mathbf{Y} := [\,\mathbf{y}[1] \ \ \mathbf{y}[2] \ \dots \ \mathbf{y}[T]\,]$, the number of input signals $K$ (estimate if not already known).
- Compute the singular value decomposition (SVD) of $\mathbf{Y}$ and extract the signal subspace $\mathbf{U}_\mathrm{S}$ as the first $K$ columns of $\mathbf{U}$.
- Compute $\mathbf{U}_{\mathrm{S}\downarrow}$ and $\mathbf{U}_{\mathrm{S}\uparrow}$, where $\mathbf{U}_{\mathrm{S}\downarrow} = [\,\mathbf{I}_{M-1} \ \ \mathbf{0}\,]\,\mathbf{U}_\mathrm{S}$ and $\mathbf{U}_{\mathrm{S}\uparrow} = [\,\mathbf{0} \ \ \mathbf{I}_{M-1}\,]\,\mathbf{U}_\mathrm{S}$.
- Solve for $\mathbf{P}$ in $\mathbf{U}_{\mathrm{S}\downarrow}\,\mathbf{P} = \mathbf{U}_{\mathrm{S}\uparrow}$ (see the remark above).
- Compute the eigenvalues $\lambda_1, \ldots, \lambda_K$ of $\mathbf{P}$.
- The phases of the eigenvalues $\lambda_k$ provide the radial frequencies $\omega_k$, i.e., $\omega_k = \arg \lambda_k$.