In most cases it is very useful to talk about the mean of a random variable $X$. For example, in the experiment above of four tosses of a coin, someone might want to know the average number of heads obtained. Now the reader may wonder what meaning to attach to the phrase “average number of heads”. After all, we are doing the experiment only once, and we’ll obtain one particular value for the number of Heads, say 0 or 1 or 2 or 3 or 4; so what, then, is this “average number of heads”?
By the average number of heads we mean this: repeat the experiment an indefinitely large number of times. Each time you’ll get a certain number of Heads. Take the average of the numbers of Heads obtained over all the repetitions of the experiment. For example, if in $n$ repetitions of this experiment you obtain $h_1, h_2, \ldots, h_n$ Heads respectively, the average number of Heads would be $\frac{h_1 + h_2 + \cdots + h_n}{n}$. This average will generally not be a natural number, which shouldn’t worry you, since it is an average over the repetitions. To calculate the true average, you have to repeat the experiment an indefinitely large number of times.
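To make the idea of a long-run average concrete, here is a small Python sketch (an illustration, not part of the original example) that repeats the four-toss experiment many times, assuming a fair coin, and averages the number of Heads obtained:

```python
import random

def heads_in_four_tosses():
    """Simulate one repetition: toss a fair coin four times and count the Heads."""
    return sum(random.randint(0, 1) for _ in range(4))

# Repeat the experiment a large number of times and average the Head counts.
n_repetitions = 100_000
total_heads = sum(heads_in_four_tosses() for _ in range(n_repetitions))
print(total_heads / n_repetitions)  # settles near 2 as n_repetitions grows
```

The larger you make `n_repetitions`, the closer the printed average gets to the true average discussed below.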
An alert reader might have realized that the average value of a random variable is easily calculable through its probability distribution. For example, let us calculate the true average number of heads in the four-toss experiment of the earlier example (taking the coin to be fair). The distribution of $X$, the number of Heads, is reproduced below:

$P(X=0)=\tfrac{1}{16}$, $\ P(X=1)=\tfrac{4}{16}$, $\ P(X=2)=\tfrac{6}{16}$, $\ P(X=3)=\tfrac{4}{16}$, $\ P(X=4)=\tfrac{1}{16}$
Thus, for example, $P(X = 1) = \tfrac{4}{16}$, which means that if the experiment is repeated an indefinitely large number of times, we’ll obtain Heads exactly once in (about) $\tfrac{4}{16}$ of the repetitions. Similarly, in (about) $\tfrac{6}{16}$ of the repetitions, Heads will be obtained exactly twice, and so on. Let us denote the number of repetitions of the experiment by $n$, where $n$ is indefinitely large. Thus, the average number of Heads per repetition would be ($\bar{X}$ denotes the average)

$$\bar{X} = \frac{0\cdot\frac{n}{16} + 1\cdot\frac{4n}{16} + 2\cdot\frac{6n}{16} + 3\cdot\frac{4n}{16} + 4\cdot\frac{n}{16}}{n} = 0\cdot\tfrac{1}{16} + 1\cdot\tfrac{4}{16} + 2\cdot\tfrac{6}{16} + 3\cdot\tfrac{4}{16} + 4\cdot\tfrac{1}{16} = 2$$
In other words,

$$\bar{X} = \sum \big(\text{Value of } X\big) \times \big(\text{Corresponding probability of this value}\big)$$
Thus, we see that if a random variable $X$ has possible values $x_1, x_2, \ldots, x_n$ with respective probabilities $p_1, p_2, \ldots, p_n$, the mean of $X$, denoted by $\bar{X}$, is simply given by

$$\bar{X} = x_1 p_1 + x_2 p_2 + \cdots + x_n p_n = \sum_{i=1}^{n} x_i\, p_i$$
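As a quick check of this formula, the sketch below computes $\sum_i x_i p_i$ directly for the fair-coin distribution tabulated above:

```python
# Distribution of the number of Heads in four tosses of a fair coin.
values        = [0, 1, 2, 3, 4]
probabilities = [1/16, 4/16, 6/16, 4/16, 1/16]

# Mean of the random variable: sum of (value) x (probability of that value).
mean = sum(x * p for x, p in zip(values, probabilities))
print(mean)  # 2.0
```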
As another example, recall the experiment of rolling two dice, where the random variable $X$ was the sum of the numbers on the two dice. The distribution of $X$ was given in an earlier table, and the average value of $X$ is

$$\bar{X} = 2\cdot\tfrac{1}{36} + 3\cdot\tfrac{2}{36} + 4\cdot\tfrac{3}{36} + \cdots + 11\cdot\tfrac{2}{36} + 12\cdot\tfrac{1}{36} = 7$$
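The same value can be recovered by brute force: the sketch below (assuming two fair dice) enumerates all 36 equally likely outcomes and averages the sums.

```python
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

# The random variable is the sum of the two numbers; each outcome has probability 1/36.
mean_sum = sum(a + b for a, b in outcomes) / 36
print(mean_sum)  # 7.0
```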
The average value is also called the expected value, which signifies that it is what we can expect to obtain by averaging the random variable’s values over a large number of repetitions of the experiment. Note that the expected value may not be “expected” in the everyday sense: it may itself be an unlikely or even impossible outcome. For example, in the rolling of a fair die, the expected value of the number that shows up is $3.5$ (verify), which in itself can never be a possible outcome. Thus, you must take care while interpreting the expected value: see it as an average of the random variable’s values when the experiment is repeated indefinitely.
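The die example can be simulated in the same spirit (again only a sketch, assuming a fair six-sided die): the long-run average settles near 3.5 even though no single roll can ever produce 3.5.

```python
import random

n_rolls = 100_000
rolls = [random.randint(1, 6) for _ in range(n_rolls)]

# The average of the observed rolls approaches the expected value 3.5,
# even though 3.5 is never a possible outcome of any single roll.
print(sum(rolls) / n_rolls)
```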
Another quantity of great significance associated with any random variable is its variance, denoted by $\operatorname{Var}(X)$. To understand this properly, consider two random variables $X$ and $Y$ and their distributions, shown in graphical form in the accompanying figure.
Both the distributions have the same expected value (verify), but it is obvious that there is a significant difference between the two distributions. What is this difference? Can you put it into words? And more importantly, can you quantify it?
It turns out that we can, in a way that is very simple to understand. The ‘data’, that is, the distribution of $X$, is more widely spread than that of $Y$. This much is obvious visually, but we must now assign a numerical value to this spread. So what we’ll do is measure the spread of each distribution about its mean. For both $X$ and $Y$ the mean is the same, but the distribution of $X$ is spread more widely about the mean than that of $Y$. We now quantify the spread in $X$.
Observe that the various values of $x_i - \bar{X}$ tell us how far the corresponding values $x_i$ of $X$ are from the mean $\bar{X}$ (which is fixed). One way that may come to your mind to measure the spread is to sum all these deviations, i.e.

$$\text{Spread} \;\overset{?}{=}\; (x_1 - \bar{X}) + (x_2 - \bar{X}) + \cdots + (x_n - \bar{X})$$
However, a little thinking should immediately make it obvious to you that this cannot work: the data is spread around the mean in such a way that the positive contributions from values greater than $\bar{X}$ and the negative contributions from values smaller than $\bar{X}$ cancel out. Indeed, once each deviation is counted in proportion to how often its value occurs, the cancellation is exact, since $\sum_i p_i (x_i - \bar{X}) = \sum_i p_i x_i - \bar{X} \sum_i p_i = \bar{X} - \bar{X} = 0$. Work it out yourself.
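A quick numerical check of this cancellation, reusing the fair-coin distribution from before (purely an illustration):

```python
values        = [0, 1, 2, 3, 4]
probabilities = [1/16, 4/16, 6/16, 4/16, 1/16]

mean = sum(x * p for x, p in zip(values, probabilities))

# Probability-weighted deviations from the mean cancel out exactly.
signed_spread = sum(p * (x - mean) for x, p in zip(values, probabilities))
print(signed_spread)  # 0.0
```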
So what we do instead is use the sum of the squares of these deviations, which can no longer cancel since every term is non-negative:

$$\text{Spread} \;\overset{?}{=}\; (x_1 - \bar{X})^2 + (x_2 - \bar{X})^2 + \cdots + (x_n - \bar{X})^2$$
However, there is still something missing. To understand what, consider a distribution in which some values lie very far from the mean but occur with extremely low probabilities. Plotted, such a distribution may look widely spread, yet those far-away values hardly ever occur, so their contribution to the spread must take into account how probable they are. This is simply accomplished by multiplying each squared deviation $(x_i - \bar{X})^2$ by the probability of the corresponding value of $X$.
Thus, if $X$ can take the values $x_1, x_2, \ldots, x_n$ with probabilities $p_1, p_2, \ldots, p_n$, the spread in the distribution of $X$ can be appropriately represented by
$$\text{Spread} = p_1 (x_1 - \bar{X})^2 + p_2 (x_2 - \bar{X})^2 + \cdots + p_n (x_n - \bar{X})^2 = \sum_{i=1}^{n} p_i\,(x_i - \bar{X})^2$$
This definition of spread is termed the variance of $X$, and is denoted by $\operatorname{Var}(X)$. Statisticians define another quantity for spread, called the standard deviation, denoted by $\sigma_X$, and related to the variance by

$$\sigma_X = \sqrt{\operatorname{Var}(X)}$$
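Putting the pieces together, here is a minimal sketch (reusing the fair-coin distribution as an example) that computes the variance and the standard deviation exactly as defined above:

```python
from math import sqrt

values        = [0, 1, 2, 3, 4]
probabilities = [1/16, 4/16, 6/16, 4/16, 1/16]

mean = sum(x * p for x, p in zip(values, probabilities))

# Variance: probability-weighted sum of squared deviations from the mean.
variance = sum(p * (x - mean) ** 2 for x, p in zip(values, probabilities))

# Standard deviation: square root of the variance.
std_dev = sqrt(variance)
print(variance, std_dev)  # 1.0 1.0
```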
Note that the expected value of $X$ was

$$E(X) = \bar{X} = \sum_{i=1}^{n} x_i\, p_i$$
Similarly, the variance is nothing but the expected value of the squared deviation $(X - \bar{X})^2$:

$$\operatorname{Var}(X) = E\big[(X - \bar{X})^2\big] = \sum_{i=1}^{n} p_i\,(x_i - \bar{X})^2$$
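This point of view translates directly into code: write one generic expectation helper and feed it the squared deviations (the helper name `expected_value` is just an illustrative choice):

```python
def expected_value(values, probabilities):
    """Expected value of a random variable given its distribution."""
    return sum(x * p for x, p in zip(values, probabilities))

values        = [0, 1, 2, 3, 4]
probabilities = [1/16, 4/16, 6/16, 4/16, 1/16]

mean = expected_value(values, probabilities)

# Variance = expected value of the squared deviation (X - mean)^2.
squared_deviations = [(x - mean) ** 2 for x in values]
variance = expected_value(squared_deviations, probabilities)
print(variance)  # 1.0
```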
Coming back to the figure comparing $X$ and $Y$, the variance of $X$ and the variance of $Y$ can each be computed from their distributions using this formula, and doing so confirms our visual observation that the distribution of $X$ is more widely spread than that of $Y$, because $\operatorname{Var}(X) > \operatorname{Var}(Y)$.
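Since the distributions from the figure are not reproduced here, the sketch below uses two made-up distributions with the same mean to illustrate the comparison; the one with more probability far from the mean ends up with the larger variance.

```python
def mean_and_variance(values, probabilities):
    """Mean and variance of a random variable from its distribution."""
    m = sum(x * p for x, p in zip(values, probabilities))
    v = sum(p * (x - m) ** 2 for x, p in zip(values, probabilities))
    return m, v

# Two illustrative distributions with the same mean (3) but different spreads.
values   = [1, 2, 3, 4, 5]
p_wide   = [0.30, 0.10, 0.20, 0.10, 0.30]  # more probability far from the mean
p_narrow = [0.05, 0.20, 0.50, 0.20, 0.05]  # probability concentrated near the mean

print(mean_and_variance(values, p_wide))    # roughly (3.0, 2.6)
print(mean_and_variance(values, p_narrow))  # roughly (3.0, 0.8)
```

The larger variance of the first distribution is exactly the numerical statement of “more widely spread about the mean”.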