## Thursday, 7 August 2014

### Chapter 13: More on Events

Until now, what we have been doing is simple: to evaluate the probability of any event $E$ in a sample space $S$, we find the total number of outcomes, and the number of outcomes favorable to $E$, and we then have
 $P\left( E \right) = \dfrac{{n\left( E \right)}}{{n\left( S \right)}}$ $\ldots(1)$
You must not forget that this holds only if all the outcomes are equally likely, that is, only when we have no reason to suspect that any particular outcome will be more or less likely than another. For example, we saw that the sample space of tossing a fair coin or rolling a fair die consists of equally likely outcomes. (Note that two outcomes cannot be proved mathematically to be equally likely. We either assume beforehand the equal likelihood of outcomes, or we repeat the experiment an indefinitely large number of times and thus show empirically, rather than mathematically, that the relative frequencies of the various outcomes approach the same value.)
Now, coming back to $(1)$, we said that it will not hold if the various outcomes are not equally likely. For example, suppose that a die is constructed (using careful loading) such that
 $P\left( 1 \right) = P\left( 2 \right) = P\left( 3 \right) = \dfrac{1}{6},\;P\left( 4 \right) = \dfrac{1}{3},\;P\left( 5 \right) = \dfrac{1}{8},\;P\left( 6 \right) = \dfrac{1}{{24}}$
For such a die, the probability of rolling an odd number will be
 $P\left( {{\rm{odd}}} \right) = P\left( 1 \right) + P\left( 3 \right) + P\left( 5 \right) = \dfrac{1}{6} + \dfrac{1}{6} + \dfrac{1}{8} = \dfrac{{11}}{{24}}$
rather than $\dfrac{1}{2}$, which is what you would get by naively computing (number of odd outcomes) / (total number of outcomes). This point is easy to understand, yet mistakes are still made!
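The arithmetic above can be checked exactly with a short sketch; the dictionary `pmf` below simply encodes the loaded-die probabilities stated in the text (the name `pmf` is my own choice):

```python
from fractions import Fraction

# Probability mass function of the loaded die described in the text
pmf = {
    1: Fraction(1, 6), 2: Fraction(1, 6), 3: Fraction(1, 6),
    4: Fraction(1, 3), 5: Fraction(1, 8), 6: Fraction(1, 24),
}

# P(odd) is the SUM of the probabilities of the odd faces,
# not (number of odd faces) / (number of faces)
p_odd = sum(pmf[k] for k in (1, 3, 5))
print(p_odd)  # 11/24
```

Using `Fraction` instead of floats keeps the arithmetic exact, so the result is $\dfrac{11}{24}$ with no rounding. Note also that the six probabilities sum to $1$, as Axiom 2 (discussed later) demands.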
A curious reader might have a further issue. She might say, “You just talked about making a die with outcomes of unequal probabilities. For example, you said that $P\left( 5 \right) = \dfrac{1}{8}.$ What is the basis for saying so? I understood the case of equally likely outcomes, where all probabilities are the same, but how did this figure of $\dfrac{1}{8}$ come about?” Well, this number comes about by using a relative frequency approach to probability. When the die-maker says that the probability of a $5$ coming up is $\dfrac{1}{8}$, what he must have done (either actually, or through a sophisticated computer simulation) is roll the die a very large number of times and observe that $5$ comes up (about) one-eighth of the time. Thus the assertion.
To summarize, there are two ways we’ve discussed to evaluate probabilities:

**Classical approach:** This ‘works’ when all the outcomes are equally likely. If our event can happen in $n$ ways out of a total of $N$ possible outcomes, the required probability is $n/N$.

**Frequency approach:** This ‘works’ in general. To find the probability of an event, we repeat the experiment a very large number of times, say $M$, and observe how many times that particular event occurs, say $m$; the ratio $m/M$ then gives us the empirical probability of the event. In fact, we should be using this relation:
 $P\left( {{\rm{event}}} \right) = \mathop {\lim }\limits_{M \to \infty } \dfrac{m}{M}$
that is, we should be using the value of empirical probability only if the experiment is repeated an indefinitely large number of times.
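The frequency approach can be sketched as a simulation of the loaded die from earlier. The weights below are the stated probabilities scaled by $24$, and the choice of $M = 100{,}000$ rolls is an arbitrary illustration of “a very large number of times”:

```python
import random

random.seed(0)  # fixed seed so this sketch is reproducible

faces = [1, 2, 3, 4, 5, 6]
weights = [4, 4, 4, 8, 3, 1]  # 24 * (1/6, 1/6, 1/6, 1/3, 1/8, 1/24)

M = 100_000                   # number of repetitions of the experiment
rolls = random.choices(faces, weights=weights, k=M)

m = rolls.count(5)            # how many times the event "a 5 comes up" occurred
print(m / M)                  # relative frequency; approaches 1/8 = 0.125 as M grows
```

For any finite $M$ the relative frequency only hovers near $\dfrac{1}{8}$; the limit in the relation above is exactly the idealization that no finite simulation can reach.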
Finally, it must be said that both approaches fail to stand up to the rigors of mathematics, because the former uses the vague phrase “equally likely”, for which we can give no mathematical justification, while in the latter, we have no way to prove that the limit $\mathop {\lim }\limits_{M \to \,\,\infty } \dfrac{m}{M}$ will actually converge to some value, because no experiment can be repeated an infinite number of times.
Mathematicians therefore, being very finicky about rigor, define probability as a function on events that satisfies three axioms:
Axiom 1: For any event $E$,
 $0 \le P\left( E \right) \le 1$
Axiom 2: For the entire sample space $S$ (that is, for the sure event),
 $P\left( S \right) = 1$
Axiom 3: For mutually exclusive events ${A_i},\,\,i = 1,\,\,2,\ldots,$
 $P\left( {{A_1} \cup {A_2} \cup \ldots } \right) = P\left( {{A_1}} \right) + P\left( {{A_2}} \right) + \ldots$
Thus, what we have here are three axioms that the probability of any event must satisfy, but these axioms in no way tell us how to actually measure the probability associated with an event. Those interested in knowing more deeply about these axioms and the interpretation of probability will find plenty of resources on the World Wide Web. For the present, this much background should suffice.
Before closing this section, let us see some more examples of how events are treated as subsets of a universal set of outcomes, the sample space. Events are denoted by $A$, $B$, $C$, etc., and the sample space by $S$. The complementary event of any event $A$ is denoted by $\bar A$.
Relation (1) $P\left( A \right) + P\left( {\bar A} \right) = P\left( S \right) = 1$
Relation (2) $P\left( {A \cap \bar B} \right) = P\left( A \right) - P\left( {A \cap B} \right)$
Relation (3) $P\left( {A \cup B} \right) = P\left( A \right) + P\left( B \right) - P\left( {A \cap B} \right)$
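Relations (1)–(3) can be verified on a concrete example. The sketch below models a fair die, where $P(E) = \dfrac{n(E)}{n(S)}$ applies, with the events $A$ (“even”) and $B$ (“multiple of 3”) chosen purely for illustration:

```python
from fractions import Fraction

S = set(range(1, 7))  # fair die: six equally likely outcomes

def P(E):
    """Classical probability: favorable outcomes over total outcomes."""
    return Fraction(len(E), len(S))

A = {2, 4, 6}          # "even"
B = {3, 6}             # "multiple of 3"
A_bar, B_bar = S - A, S - B   # complementary events

print(P(A) + P(A_bar) == 1)                      # Relation (1): True
print(P(A & B_bar) == P(A) - P(A & B))           # Relation (2): True
print(P(A | B) == P(A) + P(B) - P(A & B))        # Relation (3): True
```

Set operations (`&`, `|`, `-`) map directly onto intersection, union, and complement, which is what makes the subset view of events so convenient.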
Generalising this gives
 $P\left( {{A_1} \cup {A_2} \cup \ldots \cup {A_n}} \right) = \sum\limits_{i = 1}^n {P\left( {{A_i}} \right)} - \sum\limits_{i < j} {P\left( {{A_i} \cap {A_j}} \right)}$ $+ \sum\limits_{i < j < k} {P\left( {{A_i} \cap {A_j} \cap {A_k}} \right)} - \ldots + {\left( { - 1} \right)^{n - 1}}P\left( {{A_1} \cap {A_2}\ldots \cap {A_n}} \right)$
For example,
 $P\left( {{A_1} \cup {A_2} \cup {A_3}} \right) = P\left( {{A_1}} \right) + P\left( {{A_2}} \right) + P\left( {{A_3}} \right) - P\left( {{A_1} \cap {A_2}} \right) - P\left( {{A_2} \cap {A_3}} \right) - P\left( {{A_3} \cap {A_1}} \right) + P\left( {{A_1} \cap {A_2} \cap {A_3}} \right)$
Try proving this relation for three events using a Venn diagram.
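Alongside the Venn-diagram proof, the three-event expansion can be checked numerically. The events below are illustrative choices on a fair die (any three subsets of $S$ would do):

```python
from fractions import Fraction

S = set(range(1, 7))  # fair die sample space

def P(E):
    """Classical probability on a fair die."""
    return Fraction(len(E), len(S))

A1 = {1, 2, 3}   # "at most 3"
A2 = {2, 4, 6}   # "even"
A3 = {3, 6}      # "multiple of 3"

lhs = P(A1 | A2 | A3)
rhs = (P(A1) + P(A2) + P(A3)
       - P(A1 & A2) - P(A2 & A3) - P(A3 & A1)
       + P(A1 & A2 & A3))
print(lhs == rhs)  # True
```

Here $A_1 \cup A_2 \cup A_3 = \{1,2,3,4,6\}$, so both sides equal $\dfrac{5}{6}$: the pairwise intersections over-corrected by exactly the (empty) triple intersection.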
Relation (4) $P\left( {{A_1} \cup {A_2}} \right) \le P\left( {{A_1}} \right) + P\left( {{A_2}} \right)$
This should be obvious: on the right side of the inequality, there is an extra contribution to the sum from ${A_1} \cap {A_2}$, which gets counted twice. The equality holds only for mutually exclusive events.
This also generalises obviously to $n$ events.
Relation (5) $P\left( {{{\bar A}_1}} \right) + P\left( {{{\bar A}_2}} \right) \ge 1 - P\left( {{A_1} \cap {A_2}} \right)$
Try to figure this out on your own. Using a Venn diagram would be a good idea.
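As a sanity check (not a proof), Relation (5) can be tested on one concrete instance; the events are again illustrative choices on a fair die:

```python
from fractions import Fraction

S = set(range(1, 7))  # fair die sample space

def P(E):
    """Classical probability on a fair die."""
    return Fraction(len(E), len(S))

A1 = {1, 2, 3}
A2 = {2, 4, 6}

lhs = P(S - A1) + P(S - A2)   # P(A1-bar) + P(A2-bar)
rhs = 1 - P(A1 & A2)
print(lhs >= rhs)  # True
```

In this instance the left side is $\dfrac{1}{2} + \dfrac{1}{2} = 1$ while the right side is $1 - \dfrac{1}{6} = \dfrac{5}{6}$, consistent with the inequality; a single example, of course, does not replace the Venn-diagram argument you are asked to construct.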