Mean and Variance of Binomial Distribution
The mean, or expected value, of a binomial distribution is given by the following formula:
Mean = μ = np
and the variance, which measures the spread of a binomial distribution, is given by the following formula:
Variance = σ² = np(1 − p)
where,
- n is Total Number of Trials
- p is Probability of Success
- q = 1 − p is Probability of Failure
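The two formulas above can be checked numerically. Below is a minimal sketch (the values n = 10 and p = 0.3 are illustrative, not from the article) that computes the closed-form mean and variance and compares the mean against a simple simulation:

```python
import random

# Illustrative parameters (not from the article): n = 10 trials, p = 0.3
n, p = 10, 0.3

# Closed-form mean and variance of Binomial(n, p)
mean = n * p                 # μ = np
variance = n * p * (1 - p)   # σ² = np(1 − p)

# Sanity check by simulation: count successes in n trials, many times over
samples = [sum(random.random() < p for _ in range(n)) for _ in range(100_000)]
sample_mean = sum(samples) / len(samples)   # should be close to n * p
```

With these parameters the closed-form values are μ = 3 and σ² = 2.1, and the simulated mean lands near 3.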
Important Things to Remember about Binomial Distribution
Some important properties of the binomial distribution that deserve close attention are as follows:
- Binomial distribution is a legitimate probability distribution since
[Tex]\bold{\sum_{r=0}^n P(X=r)=\sum_{r=0}^n {}^{n}C_{r}\,q^{n-r}p^r=(q+p)^n=1}[/Tex]
- Mean of the Binomial Distribution is given by:
[Tex]\bold{E(x)=\sum_{r} x_{r}p_{r}=np}[/Tex]
and
[Tex]\bold{E(x^2)=\sum_{r}x_{r}^2p_{r}}[/Tex]
- Variance of the Binomial Distribution is given by:
[Tex]\bold{Var(x)=E(x^2)-{{E(x)}}^2=npq}[/Tex]
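All three properties above (the pmf sums to 1, E(X) = np, and Var(X) = E(X²) − E(X)² = npq) can be verified by direct summation over the pmf. A short sketch, with illustrative parameters n = 8 and p = 0.4:

```python
from math import comb

# Illustrative parameters: n = 8 trials, success probability p = 0.4
n, p = 8, 0.4
q = 1 - p

# Binomial pmf: P(X = r) = nCr * q^(n-r) * p^r
pmf = [comb(n, r) * q**(n - r) * p**r for r in range(n + 1)]

total = sum(pmf)                                 # = (q + p)^n = 1
ex = sum(r * pmf[r] for r in range(n + 1))       # E(X) = np
ex2 = sum(r * r * pmf[r] for r in range(n + 1))  # E(X²)
var = ex2 - ex**2                                # Var(X) = E(X²) − E(X)² = npq
```

Here `total` comes out to 1, `ex` to np = 3.2, and `var` to npq = 1.92, matching the formulas.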
Generalization of Bernoulli’s Distribution: Multinomial Distribution
If A1, A2, . . ., Ak are exhaustive and mutually exclusive events associated with a random experiment such that P(Ai occurs) = pi, where
p1 + p2 + . . . + pk = 1, and if the experiment is repeated n times, then the probability that A1 occurs r1 times, A2 occurs r2 times, . . ., Ak occurs rk times is given by:
Pn(r1, r2, . . . , rk) = [Tex]\frac{n!}{r_{1}! r_{2}! . . . r_{k}!} \ p_{1}^{r_{1}}\times p_{2}^{r_{2}}\times. . .\times p_{k}^{r_{k}}[/Tex]
where,
- r1 + r2 + …+ rk = n
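The multinomial formula above translates directly into a few lines of code. The sketch below (the helper name `multinomial_prob` and the dice example are illustrative, not from the article) computes Pn(r1, . . ., rk) from the counts and probabilities:

```python
from math import factorial, prod

def multinomial_prob(rs, ps):
    # Pn(r1,...,rk) = n!/(r1!...rk!) * p1^r1 * ... * pk^rk
    n = sum(rs)
    coeff = factorial(n)
    for r in rs:
        coeff //= factorial(r)
    return coeff * prod(p**r for p, r in zip(ps, rs))

# Example: roll a fair die 6 times; probability each face appears exactly once
prob = multinomial_prob([1] * 6, [1 / 6] * 6)   # = 6! / 6^6
```

Note that with k = 2 the formula reduces to the ordinary binomial probability, which is a useful sanity check.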
Proof:
r1 trials in which the event A1 occurs can be chosen from the n trials in nCr1 ways. The remaining (n − r1) trials are left over for the other events.
r2 trials in which the event A2 occurs can be chosen from the remaining (n − r1) trials in (n − r1)Cr2 ways.
r3 trials in which the event A3 occurs can be chosen from the remaining (n − r1 − r2) trials in (n − r1 − r2)Cr3 ways, and so on.
Therefore, the number of ways in which the events A1, A2, …, Ak can happen:
[Tex]{}^{n}C_{r_1}\times {}^{(n-r_1)}C_{r_2}\times {}^{(n-r_1-r_2)}C_{r_3}\times\dots\times {}^{(n-r_1-r_2-\dots-r_{k-1})}C_{r_k}=\frac{n!}{r_1!\,r_2!\dots r_k!}[/Tex]
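The telescoping product of binomial coefficients in this counting step collapses to the multinomial coefficient n!/(r1! r2! . . . rk!). A quick numerical check, using illustrative counts r = (3, 2, 4):

```python
from math import comb, factorial

# Illustrative counts: r1, r2, r3 with n = r1 + r2 + r3 = 9
rs = [3, 2, 4]
n = sum(rs)

# Step-by-step choice: nCr1 * (n-r1)Cr2 * (n-r1-r2)Cr3
ways, remaining = 1, n
for r in rs:
    ways *= comb(remaining, r)
    remaining -= r

# Closed form: n! / (r1! r2! r3!)
closed = factorial(n)
for r in rs:
    closed //= factorial(r)
```

Both computations give 9!/(3! 2! 4!) = 1260 ways.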
Consider any one of the above ways in which the events A1, A2, . . ., Ak occur.
Since the n trials are independent, the probability of this particular arrangement is the product of the probabilities of its individual outcomes.
∴ P(A1 occurs r1 times, A2 occurs r2 times, . . ., Ak occurs rk times) = [Tex]p_{1}^{r_{1}}\times p_{2}^{r_{2}}\times\dots\times p_{k}^{r_{k}}[/Tex]
Since the ways in which the events happen are mutually exclusive, the required probability is given by
Pn (r1 , r2 , . . . , rk ) =[Tex] \frac{n!}{r_{1}! r_{2}! . . . r_{k}!}\times \ p_{1} ^{r_{1}}\times p_{2}^{r_{2}}\times… \times p_{k}^{r_{k}}[/Tex]
Bernoulli Trials and Binomial Distribution
Bernoulli Trials and Binomial Distribution are fundamental topics in the study of probability and probability distributions. A Bernoulli trial is a random experiment with exactly two possible outcomes, conventionally labelled Success and Failure (or True and False). Because it has only two possible outcomes, it is also called a binomial trial.
The binomial distribution describes the number of successes in a fixed number of independent Bernoulli trials. In this article, we discuss Bernoulli trials in detail along with the related theorems, and then study the binomial distribution built on them.
Table of Content
- Bernoulli’s Trials Definition
- Examples of Bernoulli’s Trials
- Bernoulli’s Trials Theorem
- Binomial Distribution Definition
- Examples of Binomial Distribution
- Formula for Probability in Binomial Distribution
- Mean and Variance of Binomial Distribution
- Important Things to Remember about Binomial Distribution
- Generalization of Bernoulli’s Distribution: Multinomial Distribution