James–Stein estimator
Biased estimator for Gaussian random vectors, better than ordinary least-squares estimation / From Wikipedia, the free encyclopedia
The James–Stein estimator is a biased estimator of the mean, θ, of (possibly) correlated Gaussian distributed random variables with unknown means θ₁, …, θₘ.
It arose sequentially in two main published papers. The earlier version of the estimator was developed in 1956,[1] when Charles Stein reached the relatively shocking conclusion that while the then-usual estimate of the mean, the sample mean, is admissible when m ≤ 2, it is inadmissible when m ≥ 3. Stein proposed a possible improvement to the estimator that shrinks the sample means towards a more central mean vector (which can be chosen a priori or, commonly, as the "average of averages" of the sample means, given that all samples share the same size). This observation is commonly referred to as Stein's example or paradox. In 1961, Willard James and Charles Stein simplified the original process.[2]
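The shrinkage idea described above can be made concrete in the simplest setting. As a sketch, assuming a single observation y ~ N_m(θ, σ²I) with σ² known and shrinkage toward the origin (rather than a general central mean vector), the estimator James and Stein arrived at takes the form:

```latex
\hat{\theta}_{\mathrm{JS}} = \left(1 - \frac{(m-2)\,\sigma^2}{\|y\|^2}\right) y
```

Each component of y is thus pulled toward 0 by a common data-dependent factor: the larger ‖y‖² is relative to the noise level, the less shrinkage is applied.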
It can be shown that the James–Stein estimator dominates the "ordinary" least squares approach, meaning that for every θ its mean squared error is no greater than that of the "ordinary" least squares estimator, and for some θ it is strictly lower.
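The dominance claim can be checked numerically. The following is a minimal Monte Carlo sketch, assuming the canonical setting of a single observation y ~ N(θ, I_m) with known unit variance and shrinkage toward the origin; the true mean vector and trial count are arbitrary choices for illustration:

```python
import numpy as np

# Monte Carlo comparison of the sample-mean (least squares / MLE) estimator
# and the James-Stein estimator for y ~ N(theta, I_m), sigma^2 = 1 known.
rng = np.random.default_rng(0)
m = 10                      # dimension; the JS estimator dominates for m >= 3
theta = rng.normal(size=m)  # an arbitrary fixed true mean vector
n_trials = 20000

mse_mle = 0.0
mse_js = 0.0
for _ in range(n_trials):
    y = theta + rng.standard_normal(m)       # one observation of N(theta, I)
    shrink = 1.0 - (m - 2) / np.dot(y, y)    # James-Stein shrinkage factor
    js = shrink * y
    mse_mle += np.sum((y - theta) ** 2)
    mse_js += np.sum((js - theta) ** 2)

mse_mle /= n_trials   # should be close to m (the risk of the sample mean)
mse_js /= n_trials    # strictly smaller: the JS estimator dominates
print(mse_mle, mse_js)
```

Running the simulation for any fixed θ shows the James–Stein risk falling below that of the ordinary estimator, with the gap largest when ‖θ‖ is small.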
Similar to Hodges' estimator, the James–Stein estimator is superefficient and non-regular at θ = 0.[3]