This article is within the scope of WikiProject Signal Processing, a collaborative effort to improve the coverage of signal processing on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has been rated as Low-priority on the project's priority scale.

This article is within the scope of WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has been rated as Low-importance on the importance scale.

It is valuable to give a derivation of this useful expression.
However, in this part, the expression that is defined as "the power spectral density"
is actually the expectation of the "energy spectral density".
There is much confusion about this in Wikipedia related articles, especially in the time domain,
but the literature generally agrees on the following definitions in the frequency domain:
If we consider a real signal x(t) in the time domain,
its Fourier transform is X(f) = FT{x(t)};
then, its "energy spectral density" (ESD) is:
Sxx(f) = |X(f)|^2
(Some people use another letter than S).
It is easy to check that this expression has the dimension of an energy (x squared) divided by a frequency, as befits a "spectral density" of energy.
The integral of Sxx(f) = |X(f)|^2 over all frequencies is indeed the total energy of the signal (i.e. the integral of |x(t)|^2 over time), according to the Parseval-Plancherel theorem.
Taking its expectation E{Sxx(f)} (i.e. its statistical average over all the possibilities of the signal x(t)) does make it a power (i.e. an energy per time unit).
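The Parseval-Plancherel check described above is easy to reproduce numerically. Here is a small sketch of my own (not from any cited source), using NumPy's unnormalised-DFT convention, under which the 1/N factor appears on the frequency side:

```python
import numpy as np

# Sanity check of the Parseval-Plancherel relation on a random signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)          # a real signal x(t), discretised
X = np.fft.fft(x)                      # X(f) = FT{x(t)}

energy_time = np.sum(np.abs(x) ** 2)           # "integral" of |x(t)|^2 over time
energy_freq = np.sum(np.abs(X) ** 2) / len(x)  # "integral" of the ESD |X(f)|^2
# (the 1/N factor comes from NumPy's unnormalised-DFT convention)

assert np.allclose(energy_time, energy_freq)
```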
The "power spectral density" (PSD) is:
Pxx(f) = FT{Cxx(T)}
with:
Cxx(T) = Cov{x(t), x(t+T)} = E{[x(t) - mx]·[x(t+T) - mx]}
where:
mx = E{x}
E is the expectation operator.
I assume that x(t) is stationary up to the second order.
(Many -but not all- call the expression Cxx(T) above the autocorrelation function).
This expression Pxx(f) has the dimension of a power (energy per unit of time) divided by a frequency, as a "spectral density" of power.
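The PSD-as-FT-of-the-autocovariance relation used above (the Wiener-Khinchin theorem) can also be checked numerically. The sketch below is my own; it uses a zero-mean signal (so covariance and correlation coincide) and the circular, biased estimator, and confirms that the FT of the sample autocorrelation matches the periodogram |X(f)|^2 / N:

```python
import numpy as np

# Check that the FT of the (circular, biased) sample autocorrelation of a
# zero-mean signal equals its periodogram |X(f)|^2 / N (Wiener-Khinchin).
rng = np.random.default_rng(2)
x = rng.standard_normal(512)
x -= x.mean()                          # enforce m_x = 0, so Cxx is a covariance
X = np.fft.fft(x)

periodogram = np.abs(X) ** 2 / len(x)
# Circular autocorrelation C[k] = (1/N) * sum_t x[t] * x[(t+k) mod N],
# computed via the FFT identity ifft(|X|^2) = circular autocorrelation.
C = np.fft.ifft(np.abs(X) ** 2).real / len(x)
psd_from_C = np.fft.fft(C).real

assert np.allclose(periodogram, psd_from_C)
```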
The usual expression with the PSD looks the same (the SNR is a dimensionless ratio, usually between powers, here between energies), but it is more difficult to demonstrate.
This is why the more accessible literature uses the energy rather than the power, but this should be clearly stated.
Almipa (talk) 15:45, 6 May 2010 (UTC)
the wiener filter expression is incorrect ... ref. gonzalez and woods book p263 —The preceding unsigned comment was added by 67.87.222.13 (talk • contribs) 03:48, 19 March 2007 (UTC)
- In my copy of this book it appears on page 170, and while it does not have exactly the same symbols, it is mathematically equivalent. Just below it is written that |H(u,v)|^2 is H*(u,v)H(u,v), which allows an H(u,v) to cancel on the top and bottom. There are many subtly different ways to write this expression, but I believe the one we have is correct. (Test it out on a deconvolution problem and see, if you're not sure.) - Rainwarrior 04:27, 19 March 2007 (UTC)
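Rainwarrior's suggestion to test it out on a deconvolution problem takes only a few lines. The sketch below is my own illustration (the 1-D setup, the moving-average blur, and the assumed noise-to-signal ratio `nsr` are all illustrative choices, not from the book); it applies the common form G = H* / (|H|^2 + N/S) and checks that the recovery error is small:

```python
import numpy as np

# Blur a signal, add a little noise, then deconvolve with a Wiener filter.
rng = np.random.default_rng(1)
n = 256
x = np.sin(2 * np.pi * 5 * np.arange(n) / n)      # original signal
h = np.zeros(n)
h[:8] = 1.0 / 8.0                                 # blur kernel (moving average)
noise = 1e-3 * rng.standard_normal(n)
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real + noise  # observed signal

H = np.fft.fft(h)
Y = np.fft.fft(y)
nsr = 1e-4                                        # assumed noise-to-signal ratio N/S
G = np.conj(H) / (np.abs(H) ** 2 + nsr)           # Wiener filter, G = H*/(|H|^2 + N/S)
x_hat = np.fft.ifft(G * Y).real

err = np.mean((x_hat - x) ** 2)                   # mean squared recovery error
```

Note that a naive inverse filter 1/H would blow up at the zeros of H (every multiple of n/8 here), while the `nsr` term keeps G bounded.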
In the Definition section there are some functions that appear related to other functions, but the connection isn't stated:
- How is related to ? Should the in the "Our goal..." sentence be an instead? Is the just our best estimate of what the original should have been?
- How is related to ?
- How is related to ? (I assume the same way is related to .)
- Why is defined when it doesn't appear in the text above it? Is that meant to define as the complex conjugate of ?
In the Interpretation section, everything makes sense, but it might be clearer to split the first sentence and rephrase the second half of it as something like, "Given the signal function and the noise function , the signal-to-noise ratio function is ." (SNR does remain a function, right? That seems right, since a received signal might be noisy in some frequencies and clean in others.) It appears that the SNR is a result of interest, but the wording doesn't make that very clear. On the other hand, making the explanation wordier (as I've suggested) might make the whole passage less clear, if the reader is assumed to know the notation well.
The Derivation section seems to have the most hand-waving, at least by my reading. I'm sure some of the notation is fairly standard, but the article would stand better on its own if more of the notation were explained in place.
- Again we have , which isn't defined in terms of , but the Definition section can cover it.
- In the expansion of the quadratic, I don't see where some of the terms come from.
- The term follows regular algebraic rules and makes perfectly good sense, with the exception of the notation, which would be covered by my second Definition section question.
- Neither the term nor the term is obvious as the result of the factors. By regular algebra (which I realize doesn't apply, since these are operations on functions rather than variables), it looks like they'd be something like (one term instead of the two, that is).
- The term follows regular algebraic rules and makes perfectly good sense, again with the exception of the notation.
- The connection between the assumption that signal and noise are independent and the equation makes perfectly good sense, and helps make some other points clearer too. However, understanding the and still requires an answer to my second question in the Definition section.
- The "Also, ..." and "Therefore, ..." steps make sense by simple substitution.
- The differentiation step is a mix of clear and not so clear.
- I'm assuming that setting the derivative to zero finds the minimum of the noise-squared function, and that because it's a quadratic the point where the derivative is zero is the only minimum. That probably doesn't need to be spelled out though, although it wouldn't hurt if it can be done without unnecessary wordiness. Maybe "To find the minimum error-squared value, ..." would do the job without wordiness, assuming that's an accurate statement.
- What is the significance of the "noting that this is a complex value" observation? Does that change the way the minimization or differentiation is done?
- The term made sense once I recalled that the derivative of a Fourier transform is the Fourier transform itself times a constant.
- The term lost me. More steps might help, unless understanding the meaning of the notation makes it obvious.
I'm sure a lot of this is likely to be obvious to anyone who understands this branch of mathematics more clearly. But it's right on the edge of my comprehension, so I hope I'm at the right level of knowledge to point out what would make the article clearer and stand on its own better. -- Steve Schonberger 19:33, 14 August 2007 (UTC)
- Hi, thanks for your comments. I wrote most of this material, and it's sometimes difficult to know what "level" to pitch it at, and assume that some things are obvious! I've made several changes to the article text that hopefully address most of your concerns, please let me know if it helps, or if I've missed anything.
- The two points I haven't addressed are:
- How the quadratic expansion of the expectation works. This is straightforward given a knowledge of how expectation works (that deterministic variables can be moved outside the expectation operator). I don't think it's the place of this article to explain that.
- Similarly, I don't think that it's necessary to explain how/why setting the derivative equal to zero finds a minimum/maximum of a quadratic function; it's outside of this article's scope.
- Oli Filth 20:06, 14 August 2007 (UTC)
- Nice work. Explaining the complex conjugate notation in place definitely makes a lot of the page clearer. It's unfortunate that the centered-infix and superscript-postfix uses share the same character for unrelated operators. I see it's explained on the complex conjugate page as a typical notation, but since this page defines a different use for the same operator it's good to have both uses in the same Definition section.
- I see how deterministic variables can be pulled outside the expectation operator; I saw that without even clicking through on the expectation article. The part of the quadratic expansion that puzzled me was why pulling the deterministic variables out of the gives the variable times its conjugate. In other words, why does the term in the come out as and not just ? Is that a property of Fourier transforms, expectation, or something else? I assume it's something elementary, but I don't see it. (My differential equations class covered Laplace transforms, which I never quite followed, but didn't cover Fourier transforms, which are fairly similar in a lot of ways but make intuitive sense to me because of the physical model of converting time-domain to frequency-domain. Maybe that's why the article falls right at the edge of my understanding.) Anyway, if it is something elementary, it's reasonable to leave it as-is. But if it's more complex (pun intended) it might deserve at least a link to clarify.
- As for minimizing the quadratic, you're probably right; anyone who looks at the Derivation section without the ability to follow that step will be long-lost before they get anywhere near it.
- The language of the differentiation step might be clearer as three short sentences than one long one with a parenthesized note in the middle. (There's a typo in there too: "emembering".) Do you think this would be clearer and also correct?
- Next [or "Finally"?] we differentiate with respect to . Since these are complex values, acts as a constant. To find the minimum error-squared value, we set the differential to zero:
- [The equation.]
- (This all started because I was amazed by the results I got with the Unshake program mentioned on the Deconvolution page, and was curious about how it worked. My curiosity sure got me into a lot of brain-stretching effort here! And I'm not even sure whether Unshake uses Wiener deconvolution or some other sort of image-processing deconvolution.)
- -- Steve Schonberger 20:24, 16 August 2007 (UTC)
- By definition (almost), |X|^2 = X·X* for any complex X (or even real X). I chose to express it in terms of its factors in readiness for the differentiation stage of the proof.
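For what it's worth, the identity is trivial to sanity-check numerically (a one-liner of my own):

```python
# Numeric illustration of |X|^2 = X * conj(X) for a complex number.
z = 3 - 4j
assert abs(z) ** 2 == (z * z.conjugate()).real == 25.0
```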
- I've refactored the differentiation explanation sentence slightly. I think it's important to start with our aim ("To find the minimum value..."), rather than how we intend to achieve it ("We differentiate..."); hopefully it's clearer now. Thanks for spotting the typo!
- Oli Filth 20:46, 16 August 2007 (UTC)
- I don't know how I missed that in school (or more likely, learned it and forgot), but it's certainly an elementary point.
- The rewording is good. I see the point about starting with the intention. My thought in the wording I proposed was that it maintained symmetry with the rest of the section, where all the other equations are presented with a colon. That may be desirable as a matter of style, but I think the article works in terms of clarity.
- -- Steve Schonberger 23:46, 16 August 2007 (UTC)
- I believe there are still some parts which are not very clear:
- From context I assume that is the Fourier transform of the additive noise . I think it would be better to mention this explicitly, as there is no alphabetical connection between and
- I do not see why and being independent implies that . In my understanding it only implies that . Are you maybe using the implicit assumption that , or am I missing some fundamental property of expectation and complex-valued functions?
- I do not understand the statement: "As this is a complex value, acts as a constant." I would expect that if is not constant with respect to differentiating, nor is .
- 193.190.187.220 (talk) 14:39, 3 January 2013 (UTC)
- The sentence "...As this is a complex value, acts as a constant..." does not make any sense and is misleading at best, if not completely wrong. This last step, involving the differentiation of the quantity , should be corrected and made clear, since the argument used now is not valid, although the resulting expression for G is correct.
- As a counterexample to the current argument, just suppose that both the image and the convolution kernel are centrally symmetric, and the noise is white. Then we know that the Fourier transforms and will be real (imaginary part equal to zero), thus will also be centrally symmetric and real. In such a case the quantity would reduce to , whose derivative is , and clearly in this case does not act as a constant, as claimed by the author.
- A more reasonable argument could be that, if we assume that the Wiener filter in time domain is real, then we have that in Fourier domain , which would indeed justify treating "as a constant" when differentiating w.r.t. . However I don't know if the Wiener filter must be strictly real (if not, then the above argument is not valid either).
- PS: furthermore, all the instances of should be replaced with for consistency, and it should be specified that noise is assumed to have zero mean, otherwise .
91.145.120.246 (talk) 18:14, 2 September 2013 (UTC)Normand
EDIT: The mistake can be corrected by simply using the definition of the Wirtinger derivative instead of that of the standard complex derivative, which is what the author claimed to use! I corrected the wrong statement.
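The Wirtinger-derivative result can also be verified numerically without any complex-differentiability assumptions. The following sketch is my own (the sample values of H, S and N are arbitrary); it checks that G = H* S / (|H|^2 S + N) minimises the per-frequency cost |1 - G H|^2 S + |G|^2 N over complex G, as the corrected argument predicts:

```python
import numpy as np

# Check that the stated optimum minimises the per-frequency Wiener cost.
H = 0.7 - 0.3j   # arbitrary sample of the frequency response
S, N = 2.0, 0.1  # signal and noise power spectral densities at this frequency

def J(G):
    """Expected squared error at one frequency: |1 - G*H|^2 * S + |G|^2 * N."""
    return abs(1 - G * H) ** 2 * S + abs(G) ** 2 * N

G_opt = np.conj(H) * S / (abs(H) ** 2 * S + N)

# The cost must not decrease in any direction around G_opt in the complex plane.
for d in [1, -1, 1j, -1j, 1 + 1j]:
    assert J(G_opt) <= J(G_opt + 1e-4 * d)
```

Since J is a positive-definite quadratic in (Re G, Im G), the zero of the Wirtinger derivative is its unique minimum, which is what the perturbation test confirms.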