Almost sure convergence
Almost sure convergence is one of the four main modes of stochastic convergence. It may be viewed as a notion of convergence for random variables that is similar to, but not the same as, the notion of pointwise convergence for real functions.
Definition
In this section, a formal definition of almost sure convergence is given for complex vector-valued random variables; a more general definition can be given for random variables taking values in more abstract topological spaces. To this end, let <math>(\Omega,\mathcal{F},P)</math> be a probability space (in particular, <math>(\Omega,\mathcal{F})</math> is a measurable space). A (<math>\mathbb{C}^n</math>-valued) random variable is defined to be any measurable function <math>X:(\Omega,\mathcal{F}) \rightarrow (\mathbb{C}^n,\mathcal{B}(\mathbb{C}^n))</math>, where <math>\mathcal{B}(\mathbb{C}^n)</math> is the sigma algebra of Borel sets of <math>\mathbb{C}^n</math>. A formal definition of almost sure convergence can be stated as follows:
A sequence <math>X_1,X_2,\ldots,X_n,\ldots</math> of random variables is said to '''converge almost surely''' to a random variable <math>Y</math> if <math>\lim_{k \rightarrow \infty}X_k(\omega)=Y(\omega)</math> for all <math>\omega \in \Lambda</math>, where <math>\Lambda \subset \Omega</math> is some measurable set satisfying <math>P(\Lambda)=1</math>. An equivalent definition is that the sequence <math>X_1,X_2,\ldots,X_n,\ldots</math> converges almost surely to <math>Y</math> if <math>\lim_{k \rightarrow \infty}X_k(\omega)=Y(\omega)</math> for all <math>\omega \in \Omega \setminus \Lambda'</math>, where <math>\Lambda'</math> is some measurable set with <math>P(\Lambda')=0</math>. This convergence is often expressed as:

<math>\lim_{k \rightarrow \infty} X_k = Y \,\, P\text{-a.s.},</math>

or

<math>\lim_{k \rightarrow \infty} X_k = Y \,\, \text{a.s.}</math>
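As a simple illustration of the role of the null set <math>\Lambda'</math> (an example added here for concreteness, not taken from the original text), let <math>\Omega=[0,1]</math> with <math>P</math> the Lebesgue measure, and set <math>X_k(\omega)=\omega^k</math>. Then

<math>\lim_{k \rightarrow \infty} X_k(\omega) = 0 \quad \text{for all } \omega \in [0,1),</math>

while <math>X_k(1)=1</math> for every <math>k</math>. Taking <math>\Lambda'=\{1\}</math>, which satisfies <math>P(\Lambda')=0</math>, shows that <math>X_k</math> converges almost surely to <math>Y=0</math>, even though the sequence does not converge to <math>0</math> pointwise on all of <math>\Omega</math>.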
Important cases of almost sure convergence
If we flip a fair coin <math>n</math> times and record the percentage of times it comes up heads, the result will almost surely approach 50% as <math>n \rightarrow \infty</math>.
This is an example of the strong law of large numbers.
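This behavior can be checked numerically. The following is a minimal simulation sketch (illustrative only; it assumes NumPy is available and inspects a single simulated sample path):

<source lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed so the run is reproducible

n = 100_000
flips = rng.integers(0, 2, size=n)  # one simulated path: 1 = heads, 0 = tails
running = np.cumsum(flips) / np.arange(1, n + 1)  # running proportion of heads

# The running proportion settles near 0.5 along this path; the strong law
# of large numbers says this happens for almost every sample path.
for k in (10, 100, 1_000, 10_000, 100_000):
    print(f"after {k:>6} flips: proportion of heads = {running[k - 1]:.4f}")
</source>

Each run of the script corresponds to one outcome <math>\omega</math>; the strong law asserts that the set of outcomes whose running proportion fails to converge to 1/2 has probability zero.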
See also
- Stochastic convergence
- Convergence in distribution
- Convergence in probability
- Convergence in r-th order mean