If X is a nonnegative random variable and a > 0, then the probability
that X is at least a is at most the expectation of X divided by a:[1]

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.$$

When $\operatorname{E}(X) > 0$, we can take $a = \tilde{a} \cdot \operatorname{E}(X)$ for $\tilde{a} > 0$ to rewrite the previous inequality as

$$\operatorname{P}(X \geq \tilde{a} \cdot \operatorname{E}(X)) \leq \frac{1}{\tilde{a}}.$$
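The bound can be checked numerically. The following is a minimal Monte Carlo sketch; the Exponential(1) distribution, the sample size, and the thresholds are arbitrary choices, not part of the statement:

```python
import numpy as np

# Minimal numerical sketch of Markov's inequality; the Exponential(1)
# distribution and the sample size are arbitrary illustrative choices.
rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=1_000_000)  # nonnegative, E[X] = 1

for a in [0.5, 1.0, 2.0, 5.0]:
    empirical = np.mean(X >= a)   # Monte Carlo estimate of P(X >= a)
    bound = X.mean() / a          # Markov bound E[X]/a
    print(f"a={a}: P(X >= a) ~ {empirical:.4f} <= bound {bound:.4f}")
```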
In the language of measure theory, Markov's inequality states that if $(X, \Sigma, \mu)$ is a measure space, $f$ is a measurable extended real-valued function, and $\varepsilon > 0$, then

$$\mu(\{x \in X : |f(x)| \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X |f| \, d\mu.$$
This measure-theoretic definition is sometimes referred to as Chebyshev's inequality.[2]
Extended version for nondecreasing functions
If $\varphi$ is a nondecreasing nonnegative function, X is a (not necessarily nonnegative) random variable, and $\varphi(a) > 0$, then[3]

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(\varphi(X))}{\varphi(a)}.$$
An immediate corollary, using higher moments of X supported on values larger than 0, is

$$\operatorname{P}(|X| \geq a) \leq \frac{\operatorname{E}(|X|^n)}{a^n}.$$
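This corollary can be illustrated numerically. In the sketch below, the standard normal distribution, the threshold a = 2, and the exponents are arbitrary choices:

```python
import numpy as np

# Sketch of the moment corollary P(|X| >= a) <= E(|X|^n) / a^n; the
# standard normal X, threshold a, and exponents n are arbitrary choices.
rng = np.random.default_rng(1)
X = rng.standard_normal(1_000_000)  # not necessarily nonnegative
a = 2.0
for n in [1, 2, 4]:
    empirical = np.mean(np.abs(X) >= a)
    bound = np.mean(np.abs(X) ** n) / a**n
    print(f"n={n}: P(|X| >= a) ~ {empirical:.4f} <= bound {bound:.4f}")
```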
Uniformly randomized Markov's inequality

If X is a nonnegative random variable and a > 0, and U is a uniformly distributed random variable on $[0, 1]$ that is independent of X, then[4]

$$\operatorname{P}(X \geq Ua) \leq \frac{\operatorname{E}(X)}{a}.$$
Since U is almost surely smaller than one, this bound is strictly stronger than Markov's inequality. Remarkably, U cannot be replaced by any constant smaller than one, meaning that deterministic improvements to Markov's inequality cannot exist in general. While Markov's inequality holds with equality for distributions supported on $\{0, a\}$, the above randomized variant holds with equality for any distribution that is bounded on $[0, a]$.
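The randomized variant can likewise be checked by simulation; the Exponential(1) distribution and a = 3 in this sketch are arbitrary choices:

```python
import numpy as np

# Sketch of the randomized variant P(X >= U a) <= E(X)/a, where
# U ~ Uniform[0, 1] is independent of X; distribution choices are arbitrary.
rng = np.random.default_rng(2)
n = 1_000_000
X = rng.exponential(scale=1.0, size=n)
U = rng.uniform(size=n)
a = 3.0
print(np.mean(X >= U * a), "<=", X.mean() / a)  # both sides of the bound
```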
Proofs

We separate the case in which the measure space is a probability space from the more general case because the probability case is more accessible for the general reader.
Intuition
By the law of total expectation,

$$\operatorname{E}(X) = \operatorname{P}(X < a) \cdot \operatorname{E}(X \mid X < a) + \operatorname{P}(X \geq a) \cdot \operatorname{E}(X \mid X \geq a),$$

where $\operatorname{E}(X \mid X < a)$ is larger than or equal to 0 because the random variable is non-negative, and $\operatorname{E}(X \mid X \geq a)$ is larger than or equal to $a$ because the conditional expectation only takes into account values larger than or equal to $a$, which the random variable can take.
Property 1: $\operatorname{P}(X < a) \cdot \operatorname{E}(X \mid X < a) \geq 0$
Given a non-negative random variable $X$, the conditional expectation $\operatorname{E}(X \mid X < a) \geq 0$ because $X \geq 0$. Also, probabilities are always non-negative, i.e., $\operatorname{P}(X < a) \geq 0$. Thus, the product

$$\operatorname{P}(X < a) \cdot \operatorname{E}(X \mid X < a) \geq 0.$$

This is intuitive since conditioning on $X < a$ still results in non-negative values, ensuring the product remains non-negative.
Property 2: $\operatorname{P}(X \geq a) \cdot \operatorname{E}(X \mid X \geq a) \geq a \cdot \operatorname{P}(X \geq a)$
For $X \geq a$, the expected value given $X \geq a$ is at least $a$, i.e., $\operatorname{E}(X \mid X \geq a) \geq a$. Multiplying both sides by $\operatorname{P}(X \geq a)$, we get

$$\operatorname{P}(X \geq a) \cdot \operatorname{E}(X \mid X \geq a) \geq a \cdot \operatorname{P}(X \geq a).$$

This is intuitive since all values considered are at least $a$, making their average also greater than or equal to $a$.
Hence intuitively,

$$\operatorname{E}(X) \geq \operatorname{P}(X \geq a) \cdot \operatorname{E}(X \mid X \geq a) \geq a \cdot \operatorname{P}(X \geq a),$$

which directly leads to $\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}$.
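The decomposition above can be checked numerically; the Exponential(1) sample and a = 2 in this sketch are arbitrary choices:

```python
import numpy as np

# Numerical sketch of the intuition: E(X) splits into the two conditional
# pieces, and the piece above a alone already dominates a * P(X >= a).
rng = np.random.default_rng(3)
X = rng.exponential(size=1_000_000)
a = 2.0
below, above = X[X < a], X[X >= a]
p_below, p_above = len(below) / len(X), len(above) / len(X)
decomposed = p_below * below.mean() + p_above * above.mean()
print(decomposed, "~", X.mean())    # law of total expectation
print(X.mean(), ">=", a * p_above)  # hence Markov's bound
```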
Probability-theoretic proof
Method 1:
From the definition of expectation:

$$\operatorname{E}(X) = \int_{-\infty}^{\infty} x f(x) \, dx.$$

However, X is a non-negative random variable; thus,

$$\operatorname{E}(X) = \int_{0}^{\infty} x f(x) \, dx.$$

From this we can derive

$$\operatorname{E}(X) = \int_{0}^{a} x f(x) \, dx + \int_{a}^{\infty} x f(x) \, dx \geq \int_{a}^{\infty} x f(x) \, dx \geq \int_{a}^{\infty} a f(x) \, dx = a \int_{a}^{\infty} f(x) \, dx = a \operatorname{P}(X \geq a).$$

From here, dividing through by $a$ allows us to see that

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.$$
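The chain of integrals can be verified for a concrete density. This sketch uses SciPy's quad with the Exponential(1) density $f(x) = e^{-x}$ and a = 2, both arbitrary choices:

```python
import numpy as np
from scipy.integrate import quad

# Sketch of Method 1's integral chain for an arbitrary concrete density.
f = lambda x: np.exp(-x)  # Exponential(1) density
a = 2.0
ex, _ = quad(lambda x: x * f(x), 0, np.inf)         # E(X)
tail_part, _ = quad(lambda x: x * f(x), a, np.inf)  # integral of x f(x) over [a, inf)
tail_mass, _ = quad(f, a, np.inf)                   # P(X >= a)
print(ex, ">=", tail_part, ">=", a * tail_mass)     # E(X) >= ... >= a P(X >= a)
```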
Method 2:
For any event $E$, let $I_E$ be the indicator random variable of $E$, that is, $I_E = 1$ if $E$ occurs and $I_E = 0$ otherwise.
Using this notation, we have $I_{(X \geq a)} = 1$ if the event $X \geq a$ occurs, and $I_{(X \geq a)} = 0$ if $X < a$. Then, given $a > 0$,

$$a I_{(X \geq a)} \leq X,$$

which is clear if we consider the two possible values of $I_{(X \geq a)}$. If $X < a$, then $I_{(X \geq a)} = 0$, and so $a I_{(X \geq a)} = 0 \leq X$. Otherwise, we have $X \geq a$, for which $I_{(X \geq a)} = 1$ and so $a I_{(X \geq a)} = a \leq X$.
Since $\operatorname{E}$ is a monotonically increasing function, taking the expectation of both sides of an inequality cannot reverse it. Therefore,

$$\operatorname{E}(a I_{(X \geq a)}) \leq \operatorname{E}(X).$$
Now, using linearity of expectations, the left side of this inequality is the same as

$$\operatorname{E}(a I_{(X \geq a)}) = a \operatorname{E}(I_{(X \geq a)}) = a \left(1 \cdot \operatorname{P}(X \geq a) + 0 \cdot \operatorname{P}(X < a)\right) = a \operatorname{P}(X \geq a).$$

Thus we have

$$a \operatorname{P}(X \geq a) \leq \operatorname{E}(X),$$
and since a > 0, we can divide both sides by a.
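The pointwise inequality and its expectation can be checked directly; the Exponential(1) sample and a = 2 in this sketch are arbitrary choices:

```python
import numpy as np

# Sketch of Method 2: the pointwise inequality a * 1{X >= a} <= X, and
# its expected form a * P(X >= a) <= E(X).
rng = np.random.default_rng(4)
X = rng.exponential(size=1_000_000)
a = 2.0
indicator = (X >= a).astype(float)  # I_(X >= a)
assert np.all(a * indicator <= X)   # holds for every sample point
print(a * indicator.mean(), "<=", X.mean())
```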
Measure-theoretic proof
We may assume that the function $f$ is non-negative, since only its absolute value enters the inequality. Now, consider the real-valued function s on X given by

$$s(x) = \begin{cases} \varepsilon, & \text{if } f(x) \geq \varepsilon \\ 0, & \text{if } f(x) < \varepsilon. \end{cases}$$

Then $0 \leq s(x) \leq f(x)$. By the definition of the Lebesgue integral

$$\int_X f(x) \, d\mu \geq \int_X s(x) \, d\mu = \varepsilon \, \mu(\{x \in X : f(x) \geq \varepsilon\}),$$

and since $\varepsilon > 0$, both sides can be divided by $\varepsilon$, obtaining

$$\mu(\{x \in X : f(x) \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X f \, d\mu.$$
Chebyshev's inequality
Chebyshev's inequality uses the variance to bound the probability that a random variable deviates far from the mean. Specifically,

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq a) \leq \frac{\operatorname{Var}(X)}{a^2}$$

for any a > 0.[3] Here $\operatorname{Var}(X)$ is the variance of X, defined as:

$$\operatorname{Var}(X) = \operatorname{E}\left[(X - \operatorname{E}(X))^2\right].$$
Chebyshev's inequality follows from Markov's inequality by considering the random variable

$$(X - \operatorname{E}(X))^2$$

and the constant $a^2$, for which Markov's inequality reads

$$\operatorname{P}\left((X - \operatorname{E}(X))^2 \geq a^2\right) \leq \frac{\operatorname{Var}(X)}{a^2}.$$

This argument can be summarized (where "MI" indicates use of Markov's inequality):

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq a) = \operatorname{P}\left((X - \operatorname{E}(X))^2 \geq a^2\right) \overset{\text{MI}}{\leq} \frac{\operatorname{E}\left[(X - \operatorname{E}(X))^2\right]}{a^2} = \frac{\operatorname{Var}(X)}{a^2}.$$
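The reduction can be illustrated numerically; the normal distribution with standard deviation 2 and a = 3 in this sketch are arbitrary choices:

```python
import numpy as np

# Sketch of the reduction: Markov applied to (X - E(X))^2 with constant
# a^2 yields Chebyshev's bound.
rng = np.random.default_rng(5)
X = rng.normal(loc=0.0, scale=2.0, size=1_000_000)
a = 3.0
dev2 = (X - X.mean()) ** 2                 # nonnegative variable fed to Markov
print(np.mean(np.abs(X - X.mean()) >= a),  # P(|X - E(X)| >= a)
      "<=", dev2.mean() / a**2)            # Var(X) / a^2
```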
Other corollaries
- The "monotonic" result can be demonstrated by:
- The result that, for a nonnegative random variable X, the quantile function $Q_X$ of X satisfies

  $$Q_X(1 - p) \leq \frac{\operatorname{E}(X)}{p},$$

  the proof using

  $$p \leq \operatorname{P}\left(X \geq Q_X(1 - p)\right) \overset{\text{MI}}{\leq} \frac{\operatorname{E}(X)}{Q_X(1 - p)}$$

  (a numerical sketch of this bound follows the list).
- Let $M \succeq 0$ be a self-adjoint matrix-valued random variable and $A \succ 0$. Then

  $$\operatorname{P}(M \npreceq A) \leq \operatorname{tr}\left(\operatorname{E}(M) A^{-1}\right),$$

  which can be proved similarly.[5]
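As referenced above, here is a numerical sketch of the quantile bound; the Exponential(1) distribution and the grid of p values are arbitrary choices:

```python
import numpy as np

# Sketch of the quantile corollary Q_X(1 - p) <= E(X)/p for nonnegative X.
rng = np.random.default_rng(6)
X = rng.exponential(size=1_000_000)
for p in [0.5, 0.1, 0.01]:
    q = np.quantile(X, 1 - p)  # empirical quantile Q_X(1 - p)
    print(f"p={p}: Q_X(1-p) ~ {q:.3f} <= {X.mean() / p:.3f}")
```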
Examples

Assuming no income is negative, Markov's inequality shows that no more than 10% (1/10) of the population can have more than 10 times the average income.[6]
Another simple example is as follows: Andrew makes 4 mistakes on average on his Statistics course tests. The best upper bound on the probability that Andrew will make at least 10 mistakes is 0.4, since

$$\operatorname{P}(X \geq 10) \leq \frac{\operatorname{E}(X)}{10} = \frac{4}{10}.$$

Note that Andrew might make exactly 10 mistakes with probability 0.4 and make no mistakes with probability 0.6; the expectation is then exactly 4 mistakes.
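A short arithmetic check of this extremal distribution, using the numbers from the example above:

```python
# Sketch of the extremal distribution in Andrew's example:
# X = 10 with probability 0.4 and X = 0 with probability 0.6.
p, a = 0.4, 10
expectation = p * a + (1 - p) * 0  # = 4, Andrew's average number of mistakes
print(expectation / a)             # Markov bound E(X)/10 = 0.4 = P(X >= 10)
```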