Probability/Probability Spaces


Template:Nav

Terminologies

The name of this chapter, Template:Colored em, is a mathematical construct that models a random experiment. To be more precise: Template:Colored definition Let us first give the definitions related to sample space. The definitions of event space and probability will be discussed in later sections. Template:Colored definition Template:Colored remark Template:Colored example Template:Colored definition Template:Colored remark Template:Colored definition Template:Colored example

Probability interpretations

In this chapter, we will discuss probability mathematically, and we will give an axiomatic and abstract definition of probability (function). By axiomatic definition, we mean defining probability to be a function satisfying some axioms, called probability axioms. But such an axiomatic definition does not tell us how we should interpret the term "probability", so the definition is said to be Template:Colored em from the interpretation of probability. Such independence makes the formal definition always applicable, no matter how you interpret probability.

However, the axiomatic definition does not suggest a way to construct a probability measure (i.e., to assign probabilities to events): it just states that probability is a function satisfying certain axioms, but how can we construct such a function in the first place? In this section, we will discuss two main types of probability interpretations, subjectivism and frequentism, each of which suggests a method of assigning probabilities to events.

Subjectivism

Intuitively and naturally, Template:Colored em of an event is often regarded as a numerical measure of the "chance" of the occurrence of the event (that is, how likely the event is to occur). So, it is natural for us to assign a probability to an event based on our own assessment of the "chance". (In order for the probability to be valid according to the axiomatic definition, the assignment needs to satisfy the Template:Colored em.) But different people may have different assessments of the "chance", depending on their personal opinions. So, we can see that such an interpretation of probability is somewhat Template:Colored em, since different people may assign different probabilities to the same event. Hence, we call this probability interpretation Template:Colored em (also known as Template:Colored em). Template:Colored example The main issue with subjectivism is the lack of objectivity, since different probabilities can be assigned to the same event based on personal opinion. Then, we may have difficulty choosing which of the probabilities should be used for that event. To mitigate this lack of objectivity, we may adjust our degree of belief in an event from time to time as more data are observed, through Template:Colored em, which will be discussed in a later chapter, so that the value is assigned in a more objective way. However, even after the adjustment, the assignment of the value is still not Template:Colored em objective, since the adjusted value (known as the Template:Colored em) still depends on the initial value (known as the Template:Colored em), which is assigned subjectively.

Frequentism

Another probability interpretation, which is objective, is called Template:Colored em. We denote by $n(E)$ the number of occurrences of an event $E$ in $n$ repetitions of an experiment. (An Template:Colored em is any action or process with an Template:Colored em that is subject to uncertainty or randomness.) Then, we call $n(E)/n$ the Template:Colored em of the event $E$. Intuitively, we Template:Colored em that the relative frequency fluctuates less and less as $n$ gets larger and larger, and approaches a constant limiting value (we call this the Template:Colored em) as $n$ tends to infinity, i.e., the limiting relative frequency is $\lim_{n\to\infty} \frac{n(E)}{n}$. It is thus natural to take the limiting relative frequency as the probability of the event $E$. This is exactly the definition of probability in frequentism. In particular, the Template:Colored em of such a limiting relative frequency is an assumption or Template:Colored em in frequentism. (As a side result, when $n$ is large enough, the relative frequency of the event $E$ may be used to approximate the probability of the event $E$.)
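The stabilization of the relative frequency can be seen numerically by simulation. The following is a minimal sketch in Python; the fair coin and the particular sample sizes are assumed choices for illustration only:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def relative_frequency(num_tosses):
    """Toss a fair coin num_tosses times and return the relative
    frequency n(E)/n of the event E = "heads"."""
    heads = sum(random.random() < 0.5 for _ in range(num_tosses))
    return heads / num_tosses

# The relative frequency fluctuates less and less as n grows,
# approaching the limiting value 1/2.
for n in (10, 100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

Running this shows the fluctuations shrinking as $n$ increases, which is the behaviour that frequentism postulates.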

However, an issue with frequentism is that it may be infeasible to conduct an experiment many times for some events. Hence, no probability can be assigned to those events, and this is clearly a limitation of frequentism. Template:Colored example Template:Colored example Because of these issues, we will instead use a modern axiomatic and abstract approach to define probability, which was suggested by the Russian mathematician Andrey Nikolaevich Kolmogorov in 1933. By Template:Colored em, we mean defining probability quite broadly and abstractly as something that satisfies certain axioms (called Template:Colored em). These probability axioms are the mathematical foundation and basis of modern probability theory.

Probability axioms

Since we want to use the probability measure ℙ to assign a probability ℙ(E) to every event E in the sample space, it seems natural for us to set the Template:Colored em of the probability measure ℙ to be the set containing Template:Colored em subsets of Ω, i.e., the power set of Ω, 𝒫(Ω). Unfortunately, the situation is not that simple, and there are some technical difficulties if we set the domain like this when the sample space Ω is Template:Colored em. Template:Colored remark This is because the power set of such an uncountable sample space includes some "badly behaved" sets, which cause problems when assigning probabilities to them. (Here, we will not discuss those sets and these technical difficulties in detail.) Thus, instead of setting the domain of the probability measure to be 𝒫(Ω), we set the domain to be a Template:Colored em (sigma-algebra) containing some "sufficiently well-behaved" events: Template:Colored definition Template:Colored remark Template:Colored proposition

Proof.

Property 1: By the closure under complementation, since $S \in \Sigma$, it follows that $\emptyset = S^c \in \Sigma$.

Property 2: By the closure under countable unions, we have for Template:Colored em infinite sequence of sets $A_1, A_2, \ldots$, if $A_1, A_2, \ldots \in \Sigma$, then $\bigcup_{i=1}^{\infty} A_i \in \Sigma$. So, in particular, we can choose the sequence to be $A_1, A_2, \ldots, A_n, \emptyset, \emptyset, \ldots$ (recall that $\emptyset \in \Sigma$), where $A_1, A_2, \ldots, A_n$ is an arbitrary finite sequence such that $A_1, A_2, \ldots, A_n \in \Sigma$. Then, $\bigcup_{i=1}^{n} A_i = \bigcup_{i=1}^{\infty} A_i \in \Sigma$. Thus, we have the desired result.

Property 3: For every infinite sequence of sets $A_1, A_2, \ldots \in \Sigma$, by the closure under complementation, we have $A_1^c, A_2^c, \ldots \in \Sigma$. Then, by the closure under countable unions, we have $\bigcup_{i=1}^{\infty} A_i^c \in \Sigma$. After that, we use De Morgan's law: $\left(\bigcap_{i=1}^{\infty} A_i\right)^c = \bigcup_{i=1}^{\infty} A_i^c \in \Sigma$. Using the closure under complementation property again, we have $\bigcap_{i=1}^{\infty} A_i \in \Sigma$ as desired.

Property 4: The proof is similar to that of property 2, and hence left as an exercise.

Template:Colored remark Template:Colored exercise Template:Colored example Template:Colored exercise We have seen two examples of σ-algebra in the example above. Often, the "smallest" σ-algebra is not chosen to be the domain of the probability measure, since we are usually interested in events Template:Colored em and $\Omega$.

The "largest" σ-algebra, on the other hand, contains every event, but we may not be interested in some of them. In particular, we are usually interested in events that are "well-behaved", rather than "badly behaved" events (indeed, it may even be impossible to assign probabilities to the latter properly; such events are called Template:Colored em).

Fortunately, when the sample space Ω is Template:Colored em, every set in 𝒫(Ω) is "well-behaved", so we can take this power set to be the σ-algebra serving as the domain of the probability measure.

However, when the sample space Ω is Template:Colored em, even though the power set 𝒫(Ω) is a σ-algebra, it contains "too many" events; in particular, it even includes some "badly behaved" events. Therefore, we will not choose the power set as the domain of the probability measure. Instead, we just choose a σ-algebra that includes the "well-behaved" events to be the domain, so that we are able to assign a probability properly to every event in the σ-algebra of the domain. In particular, those "well-behaved" events are often the events of interest, so all events of interest are contained in that σ-algebra, that is, the domain of the probability measure.

To motivate the probability axioms, we consider some properties that the "probability" in frequentism (as a limiting relative frequency) possesses:

  1. The limiting relative frequency must be nonnegative. (We call this property Template:Colored em.)
  2. The limiting relative frequency of the whole sample space $\Omega$ ($\Omega$ is also an event) must be 1 (since by definition $\Omega$ contains all sample points, this event must occur in every repetition). (We call this property Template:Colored em.)
  3. If the events $E_1, E_2, \ldots$ are pairwise disjoint (i.e., $E_i \cap E_j = \emptyset$ for every $i, j$ with $i \ne j$), then the limiting relative frequency of the event $\bigcup_{i=1}^{\infty} E_i \overset{\text{def}}{=} E_1 \cup E_2 \cup \cdots$ (a union of subsets of $\Omega$ is a subset of $\Omega$, so it can be called an event) is

$$\lim_{n\to\infty} \frac{n\left(\bigcup_{i=1}^{\infty} E_i\right)}{n} = \lim_{n\to\infty} \frac{n(E_1) + n(E_2) + \cdots}{n} \qquad \text{(the events are pairwise disjoint)}$$
$$= \lim_{n\to\infty} \frac{n(E_1)}{n} + \lim_{n\to\infty} \frac{n(E_2)}{n} + \cdots \qquad \text{(every limit exists by the axiom in frequentism)}$$
$$= \sum_{i=1}^{\infty} \lim_{n\to\infty} \frac{n(E_i)}{n},$$

which is the sum of the limiting relative frequencies of the events $E_1, E_2, \ldots$. (We call this property Template:Colored em.)

It is thus very natural to set the probability axioms to be the three properties mentioned above: Template:Colored definition Template:Colored remark Using the probability axioms alone, we can prove many well-known properties of probability.
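For a finite sample space, these axioms can be checked mechanically. Below is a minimal sketch in Python; the biased six-sided die and its singleton weights are an assumed example measure, not anything fixed by the text:

```python
# Assumed example: a biased six-sided die.
omega = {1, 2, 3, 4, 5, 6}
weights = {1: 0.1, 2: 0.1, 3: 0.15, 4: 0.15, 5: 0.2, 6: 0.3}

def P(event):
    """Probability measure built from the singleton weights."""
    return sum(weights[w] for w in event)

# Axiom 1 (nonnegativity) and Axiom 2 (P(omega) = 1):
assert all(P({w}) >= 0 for w in omega)
assert abs(P(omega) - 1) < 1e-9

# Additivity on disjoint events:
A, B = {1, 2}, {5, 6}
assert abs(P(A | B) - (P(A) + P(B))) < 1e-9
print(P(A), P(B), P(A | B))
```

Any assignment of nonnegative singleton weights summing to one passes these checks, which foreshadows the construction of probability measures on countable sample spaces later in this chapter.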

Basic properties of probability

Let us start the discussion with some simple properties of probability. Template:Colored theorem

Proof. Consider the infinite sequence of events $\Omega, \emptyset, \emptyset, \ldots$ (recall that $\emptyset$ and $\Omega$ must be in the σ-algebra $\mathcal{F}$). We can see that the events are pairwise disjoint. Also, the union of these events is $\Omega$. Hence, by the countable additivity of probability, we have $$1 = \mathbb{P}(\Omega) = \mathbb{P}(\Omega) + \mathbb{P}(\emptyset) + \mathbb{P}(\emptyset) + \cdots = 1 + \sum_{i=1}^{\infty} \mathbb{P}(\emptyset) \implies \sum_{i=1}^{\infty} \mathbb{P}(\emptyset) = 1 - 1 = 0.$$ It can then be shown that $\mathbb{P}(\emptyset) = 0$. [1]

Using this result, we can obtain Template:Colored em from the countable additivity of probability: Template:Colored theorem

Proof. Consider the infinite sequence of events $A_1, A_2, \ldots, A_n, \emptyset, \emptyset, \ldots$ (recall that $\emptyset \in \mathcal{F}$ always). Then, $$\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) = \mathbb{P}\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} \mathbb{P}(A_i) \quad \text{(countable additivity)} = \sum_{i=1}^{n} \mathbb{P}(A_i) + \sum_{i=n+1}^{\infty} \mathbb{P}(\emptyset) = \sum_{i=1}^{n} \mathbb{P}(A_i).$$ (The last equality follows since $\mathbb{P}(\emptyset) = 0$, and it can be shown that $\sum_{i=n+1}^{\infty} \mathbb{P}(\emptyset) = 0$ using some concepts about limits, to be mathematically rigorous.)

Finite additivity makes the proofs of some of the following results simpler. Template:Colored theorem

Proof.

Property 1:

First, notice that by definition $\Omega = A \cup A^c$. Furthermore, since $A \in \mathcal{F}$, we have $A^c \in \mathcal{F}$ by the closure under complementation of a σ-algebra. Also, the sets $A$ and $A^c$ are disjoint. Thus, by the finite additivity, we have $\mathbb{P}(A \cup A^c) = \mathbb{P}(A) + \mathbb{P}(A^c)$. On the other hand, $\mathbb{P}(A \cup A^c) = \mathbb{P}(\Omega) \overset{\text{(P2)}}{=} 1$. Thus, we have the desired result.

Property 2: By property 1, we have $\mathbb{P}(A) = 1 - \mathbb{P}(A^c) \le 1$, since $\mathbb{P}(A^c) \ge 0$ by P1. We then have the desired numeric bound on $\mathbb{P}(A)$ since $\mathbb{P}(A) \ge 0$, also by the nonnegativity of probability.

Property 3: $$\mathbb{P}(B) = \mathbb{P}(B \cap \Omega) \quad (B \subseteq \Omega, \text{ so } B = B \cap \Omega) = \mathbb{P}\big(B \cap (A \cup A^c)\big) \quad \text{(definition)} = \mathbb{P}\big((B \cap A) \cup (B \cap A^c)\big) \quad \text{(distributive law)} = \mathbb{P}(B \cap A) + \mathbb{P}(B \setminus A) \quad (B \cap A, B \cap A^c \in \mathcal{F} \text{ and are disjoint; also, } B \setminus A = B \cap A^c)$$

Property 4: By property 3, we have $$\mathbb{P}(A \cup B) = \mathbb{P}\big((A \cup B) \cap A\big) + \mathbb{P}\big((A \cup B) \setminus A\big) \quad \text{(property 3)} = \mathbb{P}(A) + \mathbb{P}(B \setminus A) \quad \text{(seen, possibly informally, through a Venn diagram)} = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(B \cap A). \quad \text{(property 3)}$$

Property 5: Assume that $A \subseteq B$. Then, $B \cap A = A$. Hence, by property 3, $$\mathbb{P}(B) = \mathbb{P}(B \cap A) + \underbrace{\mathbb{P}(B \setminus A)}_{\ge\, 0 \text{ by P1}} \ge \mathbb{P}(A).$$
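All five properties can also be verified numerically on a finite probability measure. A small sketch in Python, using an assumed equiprobable four-point sample space and exact rational arithmetic so the identities hold exactly:

```python
from fractions import Fraction

# Assumed example: equiprobable space with four outcomes.
omega = frozenset({1, 2, 3, 4})

def P(event):
    """Classical probability |E|/|Omega| on the assumed space."""
    return Fraction(len(event), len(omega))

A, B = frozenset({1, 2}), frozenset({2, 3, 4})

assert P(omega - A) == 1 - P(A)            # property 1: P(A^c) = 1 - P(A)
assert 0 <= P(A) <= 1                      # property 2: bounds
assert P(B) == P(B & A) + P(B - A)         # property 3: partition by A
assert P(A | B) == P(A) + P(B) - P(A & B)  # property 4: two-event formula
assert P(A) <= P(A | B)                    # property 5: A is a subset of A ∪ B
print("all five properties hold on this example")
```

The checks are of course not a proof, but they make the statements concrete for a specific measure.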

Template:Colored remark Template:Colored example Template:Colored example Template:Colored exercise Template:Colored example Template:Colored remark Template:Colored example Template:Colored exercise Template:Colored example

Constructing a probability measure

As we have said, the axiomatic definition does not suggest a way to construct a probability measure. Actually, even for the same experiment, there can be many ways to construct a probability measure that satisfies the above probability axioms if insufficient information is provided: Template:Colored example However, we have previously mentioned that we may assign probabilities to events subjectively (as in subjectivism), or according to their limiting relative frequencies (as in frequentism). Through these two probability interpretations, we may provide some background information for a random experiment, by assigning probabilities to some of the events before constructing the probability measure, to the extent that there is Template:Colored em way to construct a probability measure. Consider the coin tossing example again: Template:Colored example In general, it is not necessary to assign a probability to Template:Colored em event in the event space in the background information for us to be able to construct the probability measure in exactly one way. Consider the following example. Template:Colored example We can see from this example that to provide background information sufficient for the probability measure to be constructed in exactly one way, we just need the probability of each of the singleton events (which should be nonnegative and sum to one to satisfy the probability axioms). After that, we can calculate the probability of every other event in the event space, and hence construct the only possible probability measure.

In general, this is true whenever the sample space is countable: Template:Colored theorem

Proof.

Case 1: Ω is finite. Then, we can write $\Omega = \{\omega_1, \omega_2, \ldots, \omega_n\}$. It follows that every event $E \in \mathcal{F}$ can be expressed as $E = \bigcup_i \{\omega_i\}$ (which $i$'s the union is taken over depends on the event $E$). Notice also that the sets $\{\omega_i\}$ are disjoint. (Each set contains a different sample point, so the intersection of any pair of them is the empty set.) Then, by the finite additivity of probability, we have for every event $E \in \mathcal{F}$, $$\mathbb{P}(E) = \sum_i \mathbb{P}(\{\omega_i\}) = \sum_{\omega \in E} \mathbb{P}(\{\omega\}).$$

Case 2: Ω is countably infinite. Then, we can write $\Omega = \{\omega_1, \omega_2, \ldots\}$. It follows that every event $E \in \mathcal{F}$ can be expressed as $E = \bigcup_i \{\omega_i\}$ (which $i$'s the union is taken over depends on the event $E$). Notice also that the sets $\{\omega_i\}$ are disjoint. Then, by the countable (or finite) additivity of probability, we have for every event $E \in \mathcal{F}$, $$\mathbb{P}(E) = \sum_i \mathbb{P}(\{\omega_i\}) = \sum_{\omega \in E} \mathbb{P}(\{\omega\}).$$

Template:Colored example The following is an important special case for the above theorem. Template:Colored corollary

Proof. Under the assumptions, the probability of every singleton event is nonnegative. Also, the sum of the probabilities is $$\underbrace{\frac{1}{|\Omega|} + \frac{1}{|\Omega|} + \cdots + \frac{1}{|\Omega|}}_{|\Omega| \text{ times}} = 1.$$ Thus, for every event $E$, we have by the previous theorem $$\mathbb{P}(E) = \sum_{\omega \in E} \mathbb{P}(\{\omega\}) = \underbrace{\frac{1}{|\Omega|} + \frac{1}{|\Omega|} + \cdots + \frac{1}{|\Omega|}}_{|E| \text{ times}} = \frac{|E|}{|\Omega|}.$$
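This $|E|/|\Omega|$ formula is the basis of classical counting problems. As a sketch, under the assumption of two fair six-sided dice (so all 36 outcomes are equally likely), one can count the event that the faces sum to 7:

```python
from fractions import Fraction
from itertools import product

# Assumed experiment: roll two fair six-sided dice; all 36
# ordered outcomes are equally likely, so P(E) = |E| / |Omega|.
omega = list(product(range(1, 7), repeat=2))
event = [(a, b) for a, b in omega if a + b == 7]

prob = Fraction(len(event), len(omega))
print(prob)  # 1/6
```

The six favourable outcomes are (1,6), (2,5), (3,4), (4,3), (5,2), (6,1), giving $6/36 = 1/6$.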

Template:Colored remark Template:Colored example Template:Colored example Template:Colored exercise Template:Colored example Template:Colored example Template:Colored example Template:Colored example Template:Colored example Template:Colored example Template:Colored example Template:Colored example Template:Colored example

More advanced properties of probability

Recall the Template:Colored em in combinatorics. We have similar results for probability: Template:Colored theorem

Proof. We can prove this by mathematical induction.

Let $P(n)$ be the statement $$\mathbb{P}(E_1 \cup E_2 \cup \cdots \cup E_n) = \sum_{i_1} \mathbb{P}(E_{i_1}) - \sum_{i_1 < i_2} \mathbb{P}(E_{i_1} \cap E_{i_2}) + \sum_{i_1 < i_2 < i_3} \mathbb{P}(E_{i_1} \cap E_{i_2} \cap E_{i_3}) - \cdots + (-1)^{n+1} \sum_{i_1 < i_2 < \cdots < i_n} \mathbb{P}(E_{i_1} \cap E_{i_2} \cap \cdots \cap E_{i_n}).$$ We wish to prove that $P(n)$ is true for every positive integer $n$.

Basis Step: When $n = 1$, $P(n)$ is clearly true, since it merely states that $\mathbb{P}(E_1) = \mathbb{P}(E_1)$.

Inductive Hypothesis: Assume that P(k) is true for an arbitrary positive integer k.

Inductive Step:

Case 1: $k = 1$. Then, $P(k+1) = P(2)$ is true by a property of probability (recall that $\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B)$).

Case 2: $k \ge 2$. We wish to prove that $P(k+1)$ is true. The main idea is to regard $E_1 \cup E_2 \cup \cdots \cup E_k \cup E_{k+1}$ as $(E_1 \cup E_2 \cup \cdots \cup E_k) \cup E_{k+1}$, apply the above property of probability, and then apply the inductive hypothesis twice, on two probabilities involving unions of $k$ events. Ultimately, through some (somewhat complicated) algebraic manipulations, we get the desired result. The details are as follows (may be omitted):

$$\mathbb{P}(E_1 \cup \cdots \cup E_k \cup E_{k+1}) = \mathbb{P}\big((E_1 \cup \cdots \cup E_k) \cup E_{k+1}\big)$$
$$= \mathbb{P}(E_1 \cup \cdots \cup E_k) + \mathbb{P}(E_{k+1}) - \mathbb{P}\big((E_1 \cup \cdots \cup E_k) \cap E_{k+1}\big) \qquad \text{(using the above property of probability again)}$$
$$= \mathbb{P}(E_1 \cup \cdots \cup E_k) + \mathbb{P}(E_{k+1}) - \mathbb{P}\big((E_1 \cap E_{k+1}) \cup (E_2 \cap E_{k+1}) \cup \cdots \cup (E_k \cap E_{k+1})\big) \qquad \text{(distributive law)}$$

Applying the inductive hypothesis to each of the two unions of $k$ events:

$$= \sum_{i_1} \mathbb{P}(E_{i_1}) - \sum_{i_1 < i_2} \mathbb{P}(E_{i_1} \cap E_{i_2}) + \cdots + (-1)^{k+1} \sum_{i_1 < \cdots < i_k} \mathbb{P}(E_{i_1} \cap \cdots \cap E_{i_k}) + \mathbb{P}(E_{k+1})$$
$$\quad - \left[ \sum_{i_1} \mathbb{P}(E_{i_1} \cap E_{k+1}) - \sum_{i_1 < i_2} \mathbb{P}(E_{i_1} \cap E_{i_2} \cap E_{k+1}) + \cdots + (-1)^{k+1} \sum_{i_1 < \cdots < i_k} \mathbb{P}(E_{i_1} \cap \cdots \cap E_{i_k} \cap E_{k+1}) \right]$$

(here all indices run over $1, \ldots, k$). The terms not involving $E_{k+1}$, together with $\mathbb{P}(E_{k+1})$, give the sums over index sets from $\{1, \ldots, k+1\}$ that either avoid $k+1$ or consist of $k+1$ alone; the bracketed terms supply precisely the remaining sums whose largest index is $k+1$, with the correct alternating signs. Collecting terms, we obtain

$$= \sum_{i_1} \mathbb{P}(E_{i_1}) - \sum_{i_1 < i_2} \mathbb{P}(E_{i_1} \cap E_{i_2}) + \cdots + (-1)^{k+1} \sum_{i_1 < \cdots < i_k} \mathbb{P}(E_{i_1} \cap \cdots \cap E_{i_k}) + (-1)^{k+2} \sum_{i_1 < \cdots < i_{k+1}} \mathbb{P}(E_{i_1} \cap \cdots \cap E_{i_{k+1}}),$$

where the sums are now with respect to the $k+1$ case, involving $E_1, E_2, \ldots, E_{k+1}$. So, $P(k+1)$ is true.

Hence, by the principle of mathematical induction, P(n) is true for every positive integer n.

Template:Colored remark Template:Colored example Template:Colored example The following is a classical example for demonstrating the application of inclusion-exclusion principle. Template:Colored example Template:Colored theorem

Proof. Assume that $E_n \uparrow E$. Now, define $F_1 = E_1,\; F_2 = E_2 \setminus E_1,\; F_3 = E_3 \setminus E_2,\; \ldots$. [2] Claim: $F_1, F_2, \ldots \in \mathcal{F}$ are pairwise disjoint.

Proof. We wish to prove that $F_i \cap F_j = \emptyset$ for every $i, j$ such that $i \ne j$. $$F_i \cap F_j = (E_i \cap E_{i-1}^c) \cap (E_j \cap E_{j-1}^c) = (E_i \cap E_j) \cap (E_{i-1} \cup E_{j-1})^c. \qquad \text{(De Morgan's law)}$$ Case 1: $i < j$. Then, $E_i \cap E_j = E_i$ (since $E_i \subseteq E_j$) and $E_{i-1} \cup E_{j-1} = E_{j-1}$ (since $E_{i-1} \subseteq E_{j-1}$). Hence, $F_i \cap F_j = E_i \setminus E_{j-1} = \emptyset$. (Since $i < j$, we have $E_i \subseteq E_{j-1}$, as $i$ can be at most $j-1$.)

Case 2: $i > j$. Then, $E_i \cap E_j = E_j$ (since $E_j \subseteq E_i$) and $E_{i-1} \cup E_{j-1} = E_{i-1}$. Hence, $F_i \cap F_j = E_j \setminus E_{i-1} = \emptyset$ similarly.

Also, we have $\bigcup_{i=1}^{n} F_i = E_n$ and $\bigcup_{i=1}^{\infty} F_i = \bigcup_{i=1}^{\infty} E_i \in \mathcal{F}$. Then, we have $$\lim_{n\to\infty} \mathbb{P}(E_n) = \lim_{n\to\infty} \mathbb{P}\left(\bigcup_{i=1}^{n} F_i\right) \quad \text{(above)} = \lim_{n\to\infty} \sum_{i=1}^{n} \mathbb{P}(F_i) \quad \text{(finite additivity)} = \sum_{i=1}^{\infty} \mathbb{P}(F_i) \quad \text{(definition of an infinite series)} = \mathbb{P}\left(\bigcup_{i=1}^{\infty} F_i\right) \quad \text{(countable additivity)} = \mathbb{P}\left(\bigcup_{i=1}^{\infty} E_i\right) \quad \text{(above)} = \mathbb{P}(E).$$
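Continuity from below can be illustrated numerically. Take an assumed geometric-type measure $\mathbb{P}(\{k\}) = (1/2)^k$ on the positive integers and the increasing events $E_n = \{1, \ldots, n\}$; then $\mathbb{P}(E_n) = 1 - 2^{-n}$ climbs toward $\mathbb{P}(E) = 1$:

```python
from fractions import Fraction

def P_geometric(event):
    """Assumed measure: P({k}) = (1/2)^k on the positive integers."""
    return sum(Fraction(1, 2 ** k) for k in event)

# Increasing events E_n = {1, ..., n}, with E_n increasing to
# E = the set of all positive integers, where P(E) = 1.
for n in (1, 2, 5, 10, 20):
    En = range(1, n + 1)
    print(n, float(P_geometric(En)))  # approaches P(E) = 1
```

Each printed value is $1 - 2^{-n}$, so the sequence $\mathbb{P}(E_n)$ visibly converges to $\mathbb{P}(E)$, exactly as the theorem asserts.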

Template:Colored corollary

Proof. Assume that $E_n \downarrow E$ as $n \to \infty$. Then, by De Morgan's law and a property of subsets, $E_n^c \uparrow E^c$. So, $$\lim_{n\to\infty} \mathbb{P}(E_n) = 1 - \lim_{n\to\infty} \mathbb{P}(E_n^c) \quad \text{(complementary event property)} = 1 - \mathbb{P}(E^c) \quad \text{(continuity from below)} = 1 - (1 - \mathbb{P}(E)) \quad \text{(complementary event property)} = \mathbb{P}(E).$$

Template:Colored example Template:Colored exercise Template:Colored example Template:Colored theorem

Proof. Define $F_1 = E_1,\; F_2 = E_2 \setminus E_1,\; F_3 = E_3 \setminus (E_2 \cup E_1),\; \ldots,\; F_k = E_k \setminus (E_{k-1} \cup E_{k-2} \cup \cdots \cup E_2 \cup E_1),\; \ldots$. Claim: $F_1, F_2, F_3, \ldots \in \mathcal{F}$ are pairwise disjoint.

Proof. We wish to prove that $F_i \cap F_j = \emptyset$ for every $i, j$ such that $i \ne j$. $$F_i \cap F_j = \big(E_i \cap (E_{i-1} \cup E_{i-2} \cup \cdots \cup E_1)^c\big) \cap \big(E_j \cap (E_{j-1} \cup E_{j-2} \cup \cdots \cup E_1)^c\big) = (E_i \cap E_{i-1}^c \cap E_{i-2}^c \cap \cdots \cap E_1^c) \cap (E_j \cap E_{j-1}^c \cap E_{j-2}^c \cap \cdots \cap E_1^c). \qquad \text{(De Morgan's law)}$$ Case 1: $i < j$. Then, $E_j \cap E_{j-1}^c \cap E_{j-2}^c \cap \cdots \cap E_1^c = E_j \cap E_{j-1}^c \cap \cdots \cap E_i^c \cap \cdots \cap E_1^c \subseteq E_i^c$. So, it follows that $(E_i \cap E_{i-1}^c \cap \cdots \cap E_1^c) \cap (E_j \cap E_{j-1}^c \cap \cdots \cap E_1^c) \subseteq (E_i \cap E_{i-1}^c \cap \cdots \cap E_1^c) \cap E_i^c = \emptyset$, which means $F_i \cap F_j = \emptyset$ (since the only subset of $\emptyset$ is $\emptyset$).

Case 2: $i > j$. Then, $E_i \cap E_{i-1}^c \cap E_{i-2}^c \cap \cdots \cap E_1^c = E_i \cap E_{i-1}^c \cap \cdots \cap E_j^c \cap \cdots \cap E_1^c \subseteq E_j^c$. So, it follows that $(E_i \cap E_{i-1}^c \cap \cdots \cap E_1^c) \cap (E_j \cap E_{j-1}^c \cap \cdots \cap E_1^c) \subseteq E_j^c \cap (E_j \cap E_{j-1}^c \cap \cdots \cap E_1^c) = \emptyset$, which means $F_i \cap F_j = \emptyset$ (since the only subset of $\emptyset$ is $\emptyset$).

Furthermore, we have $F_i \subseteq E_i$ for every $i = 1, 2, \ldots$, and $\bigcup_{j=1}^{\infty} F_j = \bigcup_{j=1}^{\infty} E_j \in \mathcal{F}$. Hence, $$\mathbb{P}\left(\bigcup_{i=1}^{\infty} E_i\right) = \mathbb{P}\left(\bigcup_{i=1}^{\infty} F_i\right) = \sum_{i=1}^{\infty} \mathbb{P}(F_i) \quad \text{(countable additivity)} \le \sum_{i=1}^{\infty} \mathbb{P}(E_i). \quad \text{(monotonicity)}$$
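This subadditivity inequality is easy to check on finite examples, where overlapping events make it strict. A sketch with assumed events in an equiprobable eight-point space:

```python
from fractions import Fraction

omega = frozenset(range(8))  # assumed equiprobable sample space

def P(event):
    return Fraction(len(event), len(omega))

events = [frozenset({0, 1, 2}), frozenset({1, 2, 3}), frozenset({3, 4})]
union = frozenset.union(*events)

lhs = P(union)                    # P(E1 ∪ E2 ∪ E3) = 5/8
rhs = sum(P(E) for E in events)   # 3/8 + 3/8 + 2/8 = 1
assert lhs <= rhs                 # subadditivity holds (strictly here)
print(lhs, rhs)
```

Because the outcomes 1, 2, and 3 are double-counted on the right-hand side, the inequality is strict in this example.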

Template:Colored remark Template:Colored example Template:Nav

Template:BookCat

  1. ↑ One may prove this by Template:Colored em: Assume that $\mathbb{P}(\emptyset) \ne 0$. Then, by the nonnegativity of probability, this means $\mathbb{P}(\emptyset) > 0$. Then, $\sum_{i=1}^{\infty} \mathbb{P}(\emptyset) = \lim_{n\to\infty} \underbrace{n\,\mathbb{P}(\emptyset)}_{>\,0} = \infty$ (that is, the sum Template:Colored em). So, $\sum_{i=1}^{\infty} \mathbb{P}(\emptyset) \ne 0$, contradicting the earlier result that $\sum_{i=1}^{\infty} \mathbb{P}(\emptyset) = 0$.
  2. ↑ Graphically,
          .           .
          .         .
          .       .
    *-----------*
    |###########|
    *--------*##|
    |////////|##|
    *----*///|##|  ...
    |....|///|##|
    |....|///|##|
    *----*---*--*
      E_1
        E_2
          E_3 ...
    ..
    .. : F_1  
    
    //
    // : F_2
    
    ##
    ## : F_3