Suppose that a random experiment consists of tossing two dice, and the quantity of interest is the sum of the numbers on the two dice. Let us denote this sum by X.
Of course, an experiment can have many random variables associated with it. For example, for the coin tossing experiment above, we can define several random variables: the number of heads, the number of tails, and so on.
Thus, we see that a random variable is a way of assigning to each outcome of the experiment a single real number, which will vary with different outcomes of the experiment. So far, so good.
Now, we will try to understand what the probability distribution of a random variable means.
Consider once again the random experiment of rolling two dice and observing the sum of the numbers on the two dice, which we denoted by X. Now, X can take a multitude of values in the following ways:

| Value of X | Outcome(s) which give this value |
|---|---|
| 2 | (1, 1) |
| 3 | (1, 2), (2, 1) |
| 4 | (1, 3), (2, 2), (3, 1) |
| 5 | (1, 4), (2, 3), (3, 2), (4, 1) |
| 6 | (1, 5), (2, 4), (3, 3), (4, 2), (5, 1) |
| 7 | (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1) |
| 8 | (2, 6), (3, 5), (4, 4), (5, 3), (6, 2) |
| 9 | (3, 6), (4, 5), (5, 4), (6, 3) |
| 10 | (4, 6), (5, 5), (6, 4) |
| 11 | (5, 6), (6, 5) |
| 12 | (6, 6) |
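If you would like to check this enumeration yourself, a short script can list all 36 equally likely outcomes and group them by the value of X (the variable and function names here are just illustrative):

```python
from collections import Counter
from itertools import product

# All 36 equally likely ordered outcomes (a, b) of rolling two dice
outcomes = list(product(range(1, 7), repeat=2))

# The random variable X maps each outcome (a, b) to the sum a + b;
# count how many outcomes give each value of X
counts = Counter(a + b for a, b in outcomes)

for value in sorted(counts):
    print(value, counts[value])
```

Running this prints each value of X from 2 to 12 together with the number of outcomes that produce it, matching the table above.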
Each value of X has a certain probability of being obtained. For example, P(X = 2) = 1/36, since only one of the 36 equally likely outcomes, namely (1, 1), gives a sum of 2. Let us tabulate the probabilities for each value of X:

| X | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| P(X) | 1/36 | 2/36 | 3/36 | 4/36 | 5/36 | 6/36 | 5/36 | 4/36 | 3/36 | 2/36 | 1/36 |
This table gives us what is known as the probability distribution of X, that is, it is a description of how the “probability is distributed” across different values of the random variable. In simple words, the probability distribution of any random variable tells us how probable each value of that variable is.
The sum of the various probabilities in a probability distribution must be 1, as should be obvious: the random variable must take some value, so the probabilities of all its values together account for the entire sample space. You are urged to confirm this fact for the last table.
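The confirmation suggested above can also be done in code. The sketch below (names are illustrative) builds the distribution as exact fractions and checks that the probabilities sum to 1:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Count the outcomes giving each value of X, the sum of two dice;
# each of the 36 ordered outcomes has probability 1/36
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
distribution = {x: Fraction(n, 36) for x, n in counts.items()}

# The probabilities across all values of X must add up to exactly 1
total = sum(distribution.values())
print(total)  # 1
```

Using `Fraction` rather than floating-point numbers makes the check exact: the sum comes out as precisely 1, not 0.9999….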
Let us write down another probability distribution as an example.
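One such distribution (an illustrative choice; any random variable would do) is the number of heads, say Y, in two tosses of a fair coin. A short sketch computes it the same way as before:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Y = number of heads in two tosses of a fair coin;
# the 4 ordered outcomes HH, HT, TH, TT are equally likely
counts = Counter(t.count("H") for t in product("HT", repeat=2))
distribution = {y: Fraction(n, 4) for y, n in counts.items()}

for y in sorted(distribution):
    print(y, distribution[y])
```

This gives P(Y = 0) = 1/4, P(Y = 1) = 1/2, and P(Y = 2) = 1/4, which again sum to 1.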