Suppose that a random experiment consists of tossing two dice, and the quantity of interest is the sum of the numbers on the two dice. Let us denote this sum by X.
X will be termed a random variable of this experiment, and it can take one of these possible values: 2, 3, 4, ..., 12. Consider another random experiment wherein we toss a coin n times and we are interested in the number of Heads obtained, which is again a random variable of this experiment, with one of these possible values: 0, 1, 2, ..., n.
Of course, an experiment can have many random variables associated with it. For example, for the coin-tossing experiment above, we can define many different random variables:
| Random variable | Possible values |
|---|---|
| Y: No. of Tails | 0, 1, 2, ..., n |
| Z: No. of Heads − No. of Tails | −n, ..., n |
| W: No. of tosses in the longest consecutive sequence of Heads | 0, 1, 2, ..., n |

and so on.
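To make this concrete, here is a short Python sketch (the function names are my own, not from the text) that computes each of these random variables for one outcome of the coin-tossing experiment, written as a string of 'H's and 'T's:

```python
# A random variable is just a rule that maps each outcome of the
# experiment to a single real number.

def num_tails(outcome):
    # Random variable: number of Tails in the outcome.
    return outcome.count('T')

def heads_minus_tails(outcome):
    # Random variable: No. of Heads minus No. of Tails.
    return outcome.count('H') - outcome.count('T')

def longest_run_of_heads(outcome):
    # Random variable: length of the longest consecutive sequence of Heads.
    best = current = 0
    for toss in outcome:
        current = current + 1 if toss == 'H' else 0
        best = max(best, current)
    return best

outcome = 'HHTHHHT'                   # one possible outcome for n = 7
print(num_tails(outcome))             # 2
print(heads_minus_tails(outcome))     # 3
print(longest_run_of_heads(outcome))  # 3
```

The same outcome yields a different number under each rule, which is exactly why one experiment can carry many random variables.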
Thus, we see that a random variable is a way of assigning to each outcome of the experiment a single real number, which will vary with different outcomes of the experiment. So far, so good.
Now, we will try to understand what the probability distribution of a random variable means.
Consider once again the random experiment of rolling two dice and observing the sum of the numbers on the two dice, which we denoted by X. X can take each of its values 2, 3, ..., 12 in the following ways:
| Value of X | Outcome(s) which give this value of X |
|---|---|
| 2 | (1, 1) |
| 3 | (1, 2), (2, 1) |
| 4 | (1, 3), (2, 2), (3, 1) |
| 5 | (1, 4), (2, 3), (3, 2), (4, 1) |
| 6 | (1, 5), (2, 4), (3, 3), (4, 2), (5, 1) |
| 7 | (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1) |
| 8 | (2, 6), (3, 5), (4, 4), (5, 3), (6, 2) |
| 9 | (3, 6), (4, 5), (5, 4), (6, 3) |
| 10 | (4, 6), (5, 5), (6, 4) |
| 11 | (5, 6), (6, 5) |
| 12 | (6, 6) |
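This enumeration of outcomes by their sum can be reproduced by a short brute-force sketch in Python (nothing beyond the standard library is assumed):

```python
from collections import defaultdict

# Enumerate all 36 equally likely outcomes of rolling two dice and
# group them by the value of X, the sum of the two numbers.
outcomes_for = defaultdict(list)
for first in range(1, 7):
    for second in range(1, 7):
        outcomes_for[first + second].append((first, second))

for x in sorted(outcomes_for):
    print(x, outcomes_for[x])
```

Running this lists, for each value of X from 2 to 12, exactly the outcomes tabulated above, 36 in all.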
Each value of X has a certain probability of being obtained. For example, P(X = 2) = 1/36, since exactly one of the 36 equally likely outcomes, namely (1, 1), gives a sum of 2, while P(X = 7) = 6/36 = 1/6.
Let us write down the probabilities for each value of X:

| X | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Probability | 1/36 | 2/36 | 3/36 | 4/36 | 5/36 | 6/36 | 5/36 | 4/36 | 3/36 | 2/36 | 1/36 |
This table gives us what is known as the probability distribution of X; that is, it is a description of how the “probability is distributed” across the different values of the random variable. In simple words, the probability distribution of any random variable tells us how probable each value of that random variable is.
The sum of the various probabilities in a probability distribution must be 1, as should be obvious. You are urged to confirm this fact for the table above.
Let us write down another probability distribution as an example.
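You can confirm this in Python as well. The sketch below builds the whole distribution of X by counting outcomes, using exact fractions so that no rounding obscures the check:

```python
from fractions import Fraction
from collections import Counter

# Build the probability distribution of X (the sum of two dice) by
# counting how many of the 36 equally likely outcomes give each value.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
distribution = {x: Fraction(c, 36) for x, c in sorted(counts.items())}

for x, p in distribution.items():
    print(f'P(X = {x}) = {p}')

# The probabilities in a probability distribution must sum to 1.
print(sum(distribution.values()))  # 1
```

Using `Fraction` rather than floating-point numbers makes the final sum come out as exactly 1, not 0.9999....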