JOINT PROBABILITY DISTRIBUTION

Probability distributions can be applied to grouped random variables, which gives rise to joint probability distributions. Essentially, a joint probability distribution describes situations in which the outcomes represented by two or more random variables occur together. Whereas earlier a single random variable X was considered, we now have a pair of random variables X and Y. Consider the experiment of throwing two unbiased dice.

If X denotes the number on the first die and Y denotes the number on the second die, then X and Y are random variables having a joint probability distribution. When the number of random variables is two, it is called a bivariate probability distribution, and if the number of random variables becomes more than two, the distribution is termed a multivariate probability distribution.

Let the random variable X take values X1, X2, ...... Xm and Y take values Y1, Y2, ...... Yn. Further, let pij be the joint probability that X takes the value Xi and Y takes the value Yj, i.e., P[X = Xi and Y = Yj] = pij (i = 1 to m and j = 1 to n). This bivariate probability distribution can be written in a tabular form as follows:

            Y1      Y2     ......    Yn     Marginal Prob. of X
X1          p11     p12    ......    p1n    P1
X2          p21     p22    ......    p2n    P2
......      ......  ...... ......    ...... ......
Xm          pm1     pm2    ......    pmn    Pm
Marginal
Prob. of Y  P'1     P'2    ......    P'n    1

Marginal Probability Distribution

Given a joint probability distribution f(x1, x2, ...... xn), the marginal distribution of one of the variables is the probability distribution of that variable considered by itself. It is called "marginal" because, for a discrete distribution of two variables presented in a table, it may be found by summing the values along rows or columns and writing the sums in the margins of the table; for a continuous distribution, it is found by integration.

In the above table, the probabilities given in each row are added and shown in the last column. Similarly, the sum of the probabilities in each column is shown in the last row of the table. The last column of the table gives the marginal probabilities for the various values of the random variable X. The set of all possible values of the random variable X along with their respective marginal probabilities is termed the marginal probability distribution of X.

Similarly, the marginal probabilities of the random variable Y are given in the last row of the above table.

Remarks: If X and Y are independent random variables, then by the multiplication theorem of probability we have P(X = Xi and Y = Yj) = P(X = Xi).P(Y = Yj) for all i and j.
Using notations, we can write pij = Pi.P'j.

The above relation is similar to the relation between the relative frequencies of independent attributes.
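These relations are easy to check numerically. The following is a minimal sketch in Python (assuming NumPy is available) that recovers the marginal distributions of the two-dice experiment by row and column sums, and tests the independence relation pij = Pi.P'j:

import numpy as np

# Joint distribution of two unbiased dice: X = first die, Y = second die.
# Each of the 36 outcomes is equally likely, so every cell is 1/36.
joint = np.full((6, 6), 1 / 36)       # rows: X = 1..6, columns: Y = 1..6

p_x = joint.sum(axis=1)               # row sums -> marginal distribution of X
p_y = joint.sum(axis=0)               # column sums -> marginal distribution of Y
print(p_x, p_y)                       # each entry is 1/6 (up to rounding)

# X and Y are independent iff the joint table equals the outer
# product of the marginals, i.e. p_ij = P_i * P'_j for every cell.
print(np.allclose(joint, np.outer(p_x, p_y)))   # True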

Conditional Probability Distribution

Conditional probability distributions arise from joint probability distributions when we need the probability of one event given that another event has happened, and the random variables behind these events are jointly distributed. Each column of the above table gives the probabilities for the various values of the random variable X for the value of Y represented by that column.

For example, column 1 of the table represents that P(X1,Y1) = p11, P(X2,Y1) = p21, ...... P(Xm,Y1) = pm1, where P(Xi,Y1) = pi1 denotes the probability of the event that X = Xi (i = 1 to m) and Y = Y1. From the conditional probability theorem, we can write

P(X = Xi / Y = Y1) = P(X = Xi and Y = Y1) / P(Y = Y1) = pi1 / P'1   (i = 1 to m)

This gives us a conditional probability distribution of X given that Y = Y1. This distribution can be written in a tabular form as shown below:

X         :  X1        X2        ......   Xm        Total
P(X/Y1)   :  p11/P'1   p21/P'1   ......   pm1/P'1   1

The conditional distribution of X given some other value of Y can be constructed in a similar way. Further, we can construct the conditional distributions of Y for various given values of X.
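As a small illustration (the 2 × 3 joint table below is hypothetical, chosen only for the example), the conditional distribution of X given Y = Yj is column j of the joint table divided by its column total:

import numpy as np

def conditional_of_x_given_y(joint, j):
    # P(X = Xi / Y = Yj) = p_ij / P'_j for all i
    column = joint[:, j]
    return column / column.sum()      # divide by the marginal P(Y = Yj)

joint = np.array([[0.10, 0.20, 0.10],
                  [0.15, 0.25, 0.20]])        # hypothetical probabilities
print(conditional_of_x_given_y(joint, 0))     # [0.4, 0.6]; sums to 1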

Remarks: It can be shown that if the conditional distribution of a random variable is the same as its marginal distribution, the two random variables are independent. Thus, if for the conditional distribution of X given Y1 we have pi1/P'1 = Pi for all i, then X and Y are independent. It should be noted here that if one conditional distribution satisfies the condition of independence of the random variables, then all the conditional distributions would also satisfy this condition.

Example: Let two unbiased dice be tossed. Let a random variable X take the value 1 if the first die shows 1 or 2, the value 2 if the first die shows 3 or 4 and the value 3 if the first die shows 5 or 6. Further, let Y be a random variable which denotes the number obtained on the second die. Construct a joint probability distribution of X and Y. Also determine their marginal probability distributions and find E(X) and E(Y).

Determine the conditional distribution of X given Y = 5 and of Y given X = 2. Find the expected values of these conditional distributions. Determine whether X and Y are independent.

Solution: For the given random experiment, the random variable X takes values 1, 2 and 3 and the random variable Y takes values 1, 2, 3, 4, 5 and 6. Their joint probability distribution is shown in the following table:

            Y = 1   Y = 2   Y = 3   Y = 4   Y = 5   Y = 6   Marginal Prob. of X
X = 1       1/18    1/18    1/18    1/18    1/18    1/18    1/3
X = 2       1/18    1/18    1/18    1/18    1/18    1/18    1/3
X = 3       1/18    1/18    1/18    1/18    1/18    1/18    1/3
Marginal
Prob. of Y  1/6     1/6     1/6     1/6     1/6     1/6     1

From the marginal distributions, E(X) = 1×1/3 + 2×1/3 + 3×1/3 = 2 and E(Y) = (1 + 2 + 3 + 4 + 5 + 6)×1/6 = 3.5.

The conditional distribution of X when Y = 5 is

X          :  1     2     3     Total
P(X/Y = 5) :  1/3   1/3   1/3   1

∴ E(X/Y = 5) = 1×1/3 + 2×1/3 + 3×1/3 = 2

Similarly, the conditional distribution of Y when X = 2 is

Y          :  1     2     3     4     5     6     Total
P(Y/X = 2) :  1/6   1/6   1/6   1/6   1/6   1/6   1

∴ E(Y/X = 2) = (1 + 2 + 3 + 4 + 5 + 6)×1/6 = 3.5

Since the conditional distribution of X is same as its marginal distribution (or equivalently the conditional distribution of Y is same as its marginal distribution), X and Y are independent random variables.
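The whole example can be verified numerically; the following sketch (again assuming NumPy) rebuilds the 3 × 6 table and reproduces E(X) = 2, E(Y) = 3.5 and the independence conclusion:

import numpy as np

joint = np.full((3, 6), 1 / 18)       # every cell is 1/3 * 1/6 = 1/18
x_vals = np.array([1, 2, 3])
y_vals = np.arange(1, 7)

p_x = joint.sum(axis=1)               # [1/3, 1/3, 1/3]
p_y = joint.sum(axis=0)               # [1/6, ......, 1/6]
print(x_vals @ p_x)                   # E(X) = 2.0
print(y_vals @ p_y)                   # E(Y) = 3.5

cond = joint[:, 4] / joint[:, 4].sum()          # conditional of X given Y = 5
print(cond)                           # [1/3, 1/3, 1/3], same as the marginal
print(np.allclose(joint, np.outer(p_x, p_y)))   # True: X and Y independent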

Example: Two unbiased coins are tossed. Let X be a random variable which denotes the total number of heads obtained on a toss and Y be a random variable which takes a value 1 if head occurs on the first coin and takes a value 0 if tail occurs on it. Construct the joint probability distribution of X and Y. Find the conditional distribution of X when Y = 0.
Are X and Y independent random variables?

Solution: There are 4 elements in the sample space of the random experiment. The possible values that X can take are 0, 1 and 2 and the possible values of Y are 0 and 1. The joint probability distribution of X and Y can be written in a tabular form as follows:

            Y = 0   Y = 1   Marginal Prob. of X
X = 0       1/4     0       1/4
X = 1       1/4     1/4     1/2
X = 2       0       1/4     1/4
Marginal
Prob. of Y  1/2     1/2     1

The conditional distribution of X when Y = 0 is

X          :  0     1     2     Total
P(X/Y = 0) :  1/2   1/2   0     1

Since the conditional and the marginal distributions are different, X and Y are not independent random variables.
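A short sketch using only the Python standard library can enumerate the four outcomes, rebuild this table and confirm that the conditional distribution of X given Y = 0 differs from the marginal distribution (1/4, 1/2, 1/4):

from itertools import product
from fractions import Fraction

joint = {}                                       # (x, y) -> probability
for first, second in product("HT", repeat=2):    # 4 equally likely outcomes
    x = (first == "H") + (second == "H")         # total number of heads
    y = 1 if first == "H" else 0
    joint[(x, y)] = joint.get((x, y), Fraction(0)) + Fraction(1, 4)
print(joint)    # four cells, each 1/4: (2,1), (1,1), (1,0), (0,0)

p_y0 = sum(p for (x, y), p in joint.items() if y == 0)       # P(Y = 0) = 1/2
cond = {x: p / p_y0 for (x, y), p in joint.items() if y == 0}
print(cond)     # {1: 1/2, 0: 1/2} -- differs from the marginal of X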

Expectation of the Sum or Product of Two Random Variables

Theorem: If X and Y are two random variables, then E(X + Y) = E(X) + E(Y).

Proof: Let the random variable X take values X1, X2, ...... Xm and the random variable Y take values Y1, Y2, ...... Yn such that P(X = Xi and Y = Yj) = pij (i = 1 to m, j = 1 to n).

By definition of expectation, we can write

E(X + Y) = ΣiΣj (Xi + Yj)pij = ΣiΣj Xi pij + ΣiΣj Yj pij

         = Σi Xi (Σj pij) + Σj Yj (Σi pij) = Σi Xi Pi + Σj Yj P'j = E(X) + E(Y)

The above result can be generalised. If there are k random variables X1, X2, ...... Xk, then E(X1 + X2 + ...... + Xk) = E(X1) + E(X2) + ...... + E(Xk).

Remarks: The above result holds irrespective of whether X1, X2, ...... Xk are independent or not.
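This remark can be checked directly on a small joint table that is deliberately not an outer product of its marginals (the numbers below are illustrative only):

import numpy as np

x_vals = np.array([0.0, 1.0, 2.0])
y_vals = np.array([10.0, 20.0])
joint = np.array([[0.10, 0.05],       # not an outer product of marginals,
                  [0.30, 0.15],       # so X and Y are dependent here
                  [0.05, 0.35]])

# E(X + Y) computed cell by cell over the joint table
e_sum = sum(joint[i, j] * (x_vals[i] + y_vals[j])
            for i in range(3) for j in range(2))
e_x = x_vals @ joint.sum(axis=1)
e_y = y_vals @ joint.sum(axis=0)
print(e_sum, e_x + e_y)               # equal, despite the dependence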

Theorem: If X and Y are two independent random variables, then E(X.Y) = E(X).E(Y)

Proof: Let the random variable X take values X1, X2, ...... Xm and the random variable Y take values Y1, Y2, ...... Yn such that P(X = Xi and Y = Yj) = pij (i = 1 to m, j = 1 to n).

E(X.Y) = ΣiΣj XiYj pij = ΣiΣj XiYj Pi.P'j   (since X and Y are independent, pij = Pi.P'j)

       = (Σi Xi Pi)(Σj Yj P'j) = E(X).E(Y)

Now consider the expression E[{X - E(X)}{Y - E(Y)}].

The above expression, which is the mean of the product of deviations of values from their respective means, is known as the Covariance of X and Y, denoted as Cov(X, Y) or σXY. Thus,

Cov(X, Y) = E[{X - E(X)}{Y - E(Y)}] = E(X.Y) - E(X).E(Y)

Note that E[{Y - E(Y)}] = 0, since the mean of the deviations of values from their arithmetic mean is zero.

Remarks: If X and Y are independent random variables, the right hand side of the above equation will be zero. Thus, covariance between independent variables is always equal to zero.
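A sketch of the computation Cov(X, Y) = E(X.Y) - E(X).E(Y) on a joint table, confirming that the covariance vanishes when the table is the outer product of its marginals (the dependent table is made up for illustration):

import numpy as np

def covariance(x_vals, y_vals, joint):
    p_x, p_y = joint.sum(axis=1), joint.sum(axis=0)
    e_xy = x_vals @ joint @ y_vals    # E(XY) = sum over i, j of Xi Yj pij
    return e_xy - (x_vals @ p_x) * (y_vals @ p_y)

x_vals = np.array([1.0, 2.0, 3.0])
y_vals = np.array([0.0, 1.0])

independent = np.outer([1/3, 1/3, 1/3], [1/2, 1/2])   # p_ij = P_i * P'_j
print(covariance(x_vals, y_vals, independent))        # 0.0 (up to rounding)

dependent = np.array([[0.30, 0.05],
                      [0.10, 0.20],
                      [0.05, 0.30]])
print(covariance(x_vals, y_vals, dependent))          # 0.25, non-zero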

Mean and Variance of a Linear Combination

Let Z = Φ(X, Y) = aX + bY be a linear combination of the two random variables X and Y. Then, using the theorem of addition of expectation, we can write

µz = E (Z) = E (aX + bY) = aE(X) + bE(Y) = aµx + bµy

Further, the variance of Z is given by

σ²z = E[Z - E(Z)]² = E[aX + bY - aµx - bµy]² = E[a(X - µx) + b(Y - µy)]²

= a²E(X - µx)² + b²E(Y - µy)² + 2abE[(X - µx)(Y - µy)]

= a²σ²x + b²σ²y + 2abσxy
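The identity can be verified by brute force: compute Var(Z) directly from the cell-by-cell values of Z = aX + bY and compare it with a²σ²x + b²σ²y + 2abσxy (the joint table below is illustrative):

import numpy as np

a, b = 2.0, 3.0
x_vals = np.array([1.0, 2.0, 3.0])
y_vals = np.array([0.0, 1.0])
joint = np.array([[0.30, 0.05],
                  [0.10, 0.20],
                  [0.05, 0.30]])

p_x, p_y = joint.sum(axis=1), joint.sum(axis=0)
mu_x, mu_y = x_vals @ p_x, y_vals @ p_y
var_x = (x_vals - mu_x) ** 2 @ p_x
var_y = (y_vals - mu_y) ** 2 @ p_y
cov_xy = (x_vals - mu_x) @ joint @ (y_vals - mu_y)

z = a * x_vals[:, None] + b * y_vals[None, :]    # value of Z in each cell
var_z = ((z - (a * mu_x + b * mu_y)) ** 2 * joint).sum()
print(var_z, a**2 * var_x + b**2 * var_y + 2 * a * b * cov_xy)   # equal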

Remarks:

1. The above results indicate that any function of random variables is also a random variable.

2. If X and Y are independent, then σxy = 0, ∴ σ²z = a²σ²x + b²σ²y

3. If Z = aX - bY, then σ²z = a²σ²x + b²σ²y - 2abσxy, which reduces to a²σ²x + b²σ²y if X and Y are independent.

4. The above results can be generalised. If X1, X2, ...... Xk are k independent random variables with means µ1, µ2, ...... µk and variances σ²1, σ²2, ...... σ²k respectively, then

E(X1 ± X2 ± ...... ± Xk) = µ1 ± µ2 ± ...... ± µk

and Var(X1 ± X2 ± ...... ± Xk) = σ²1 + σ²2 + ...... + σ²k

Notes:

1. The general result on expectation of the sum or difference will hold even if the random variables are not independent.
2. The result can also be proved for continuous random variables.

Example: A random variable X has the following probability distribution :

X           :  -2     -1     0      1      2
Probability :  1/6    p      1/4    p      1/6

(i) Find the value of p.
(ii) Calculate E(X + 2) and E(2X² + 3X + 5).

Solution: Since the total probability under a probability distribution is equal to unity, the value of p should be such that 1/6 + p + 1/4 + p + 1/6 = 1.

This condition gives p = 5/24.

Further, E(X) = (-2)×1/6 + (-1)×5/24 + 0×1/4 + 1×5/24 + 2×1/6 = 0,

E(X²) = 4×1/6 + 1×5/24 + 0×1/4 + 1×5/24 + 4×1/6 = 7/4

E(X + 2) = E(X) + 2 = 0 + 2 = 2 and E(2X² + 3X + 5) = 2E(X²) + 3E(X) + 5 = 2×7/4 + 0 + 5 = 8.5
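The arithmetic is easy to confirm with exact fractions; a minimal sketch:

from fractions import Fraction as F

p = F(5, 24)
x_vals = [-2, -1, 0, 1, 2]
probs = [F(1, 6), p, F(1, 4), p, F(1, 6)]
assert sum(probs) == 1                   # total probability is unity

e_x = sum(x * pr for x, pr in zip(x_vals, probs))
e_x2 = sum(x * x * pr for x, pr in zip(x_vals, probs))
print(e_x, e_x2)                         # 0 and 7/4
print(e_x + 2, 2 * e_x2 + 3 * e_x + 5)   # 2 and 17/2 = 8.5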

Example: A person applies for equity shares of Rs 10 each to be issued at a premium of Rs 6 per share, Rs 8 per share being payable along with the application and the balance at the time of allotment. The issuing company may issue 50 or 100 shares to those who apply for 200 shares, the probability of issuing 50 shares being 0.4 and that of issuing 100 shares being 0.6.

In either case, the probability of an application being selected for allotment of any shares is 0.2. The allotment usually takes three months, and the market price per share is expected to be Rs 25 at the time of allotment. Find the expected rate of return of the person per month.

Solution: Let A be the event that the application of the person is considered for allotment, B1 be the event that he is allotted 50 shares and B2 be the event that he is allotted 100 shares. Further, let R1 denote the rate of return (per month) when 50 shares are allotted, R2 be the rate of return when 100 shares are allotted and R = R1 + R2 be the combined rate of return.

We are given that P(A) = 0.2, P(B1/A) = 0.4 and P(B2/A) = 0.6.

(a) When 50 shares are allotted

The return on investment in 3 months = (25 - 16) × 50 = Rs 450

∴ Monthly rate of return = 450/3 = Rs 150

The probability that he is allotted 50 shares

= P(A ∩ B1) = P(A).P(B1/A) = 0.2 × 0.4 = 0.08

Thus, the random variable R1 takes a value 150 with probability 0.08 and it takes a value 0 with probability 1 - 0.08 = 0.92

∴ E(R1) = 150 × 0.08 + 0 × 0.92 = 12

(b) When 100 shares are allotted

The return on investment in 3 months = (25 - 16) × 100 = Rs 900

∴ Monthly rate of return = 900/3 = Rs 300

The probability that he is allotted 100 shares

= P(A ∩ B2) = P(A).P(B2/A) = 0.2 × 0.6 = 0.12

Thus, the random variable R2 takes a value 300 with probability 0.12 and it takes a value 0 with probability 1 - 0.12 = 0.88

∴ E(R2) = 300 × 0.12 + 0 × 0.88 = 36

Hence, E(R) = E(R1 + R2) = E(R1) + E(R2) = 12 + 36 = 48
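The same calculation as a short sketch, with all figures taken from the example (cost Rs 16 per share, expected price Rs 25, allotment after 3 months):

gain_per_share = 25 - 16                 # Rs 9 per share at allotment
p_application_selected = 0.2

def expected_monthly_return(shares, p_issue):
    monthly = gain_per_share * shares / 3               # return per month if allotted
    return monthly * p_application_selected * p_issue   # zero return otherwise

e_r1 = expected_monthly_return(50, 0.4)   # 150 * 0.08 = 12.0
e_r2 = expected_monthly_return(100, 0.6)  # 300 * 0.12 = 36.0
print(e_r1, e_r2, e_r1 + e_r2)            # 12.0 36.0 48.0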

Example: If X and Y are two independent random variables with means 50 and 120 and variances 10 and 12 respectively, find the mean and variance of Z = 4X + 3Y.

Solution: E(Z) = E(4X + 3Y) = 4E(X) + 3E(Y) = 4 × 50 + 3 × 120 = 560

Since X and Y are independent, we can write

Var(Z) = Var(4X + 3Y) = 16Var(X) + 9Var(Y) = 16 × 10 + 9 × 12 = 268
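A quick check of the two formulas used above:

a, b = 4, 3
mean_x, mean_y = 50, 120
var_x, var_y = 10, 12
print(a * mean_x + b * mean_y)           # E(Z) = 560
print(a**2 * var_x + b**2 * var_y)       # Var(Z) = 268; the cross term is
                                         # zero since X and Y are independent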

Example: It costs Rs 600 to test a machine. If a defective machine is installed, it costs Rs 12,000 to repair the resulting damage. Is it more profitable to install the machine without testing, if it is known that 3% of all the machines produced are defective? Show by calculations.

Solution: Let X be the random variable denoting the repair cost. X takes the value 12,000 with probability 0.03 and the value 0 with probability 0.97.

∴ E(X) = 12,000 × 0.03 + 0 × 0.97 = Rs 360.

Since E(X) is less than Rs 600, the cost of testing the machine, it is more profitable to install the machine without testing.
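The decision rule reduces to comparing the expected repair cost of an untested machine with the fixed testing cost, as this sketch shows:

test_cost = 600                          # Rs, cost of testing one machine
repair_cost = 12_000                     # Rs, damage if a defective is installed
p_defective = 0.03

expected_repair = repair_cost * p_defective      # Rs 360
print("install without testing" if expected_repair < test_cost
      else "test first")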

