Tutorial 5, STAT1301 Fall 2010, 26OCT2010, MB103@HKU
By Joseph Dong

NUMERICAL CHARACTERISTICS OF A RANDOM VARIABLE · GENERATING FUNCTIONS · STRICTLY MONOTONIC TRANSFORMATION OF A RANDOM VARIABLE · EXPECTATION AS INTEGRATION · MARKOV’S INEQUALITY



Page 1

Tutorial 5, STAT1301 Fall 2010, 26OCT2010, MB103@HKU
By Joseph Dong

NUMERICAL CHARACTERISTICS OF A RANDOM VARIABLE

GENERATING FUNCTIONS

STRICTLY MONOTONIC TRANSFORMATION OF A RANDOM VARIABLE

EXPECTATION AS INTEGRATION

MARKOV’S INEQUALITY

Page 2

Recall: What is a Random Variable?

• A Random Variable is a function defined on a sample space.
• The sample space contains randomness.
• The state space is accordingly random.
• The Random Variable itself is deterministic.
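As a concrete sketch of this idea (the coin-toss example and all names are mine, not from the slides): a random variable is an ordinary, deterministic function from outcomes to numbers; the randomness lies only in which outcome occurs.

```python
import random

# Sample space of a fair coin tossed twice: outcomes are fixed labels.
sample_space = ["HH", "HT", "TH", "TT"]

def X(outcome):
    """A random variable: a deterministic map from outcomes to numbers.
    Here X counts the number of heads in the outcome."""
    return outcome.count("H")

# The randomness lives in which outcome occurs, not in the function X itself.
omega = random.choice(sample_space)
value = X(omega)

# The state space (range of X) follows by applying X to every outcome.
state_space = sorted({X(w) for w in sample_space})
print(state_space)  # [0, 1, 2]
```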

Page 3

Recall: What have we done about RVs?

• We have defined the Random Variable as a function (with a special measurability restriction we don’t want to discuss in this course) from a given sample space (the total set of outcomes of a random experiment) to a state space, usually a subset of ℝ.
• In symbols: X : Ω → ℝ.
• The sample space is the platform where we adopt the notion “variable”.

Page 4

Recall: What have we done about RVs?

• We have studied the probability distribution of a random variable: the law governing the random variable’s dance in the sample space.
• Two equivalent ways of describing the law:
  • By a probability measure on the sample space (takes in a set as argument):
    • by listing the probability measure for all atoms of the sample space, which is equivalent to defining a PDF or PMF, or a general probability function.
  • By the distribution function F(x) = P(X ≤ x) (takes in a number as argument):
    • the distribution function is never decreasing;
    • F(−∞) = 0, F(+∞) = 1;
    • the distribution function is right-continuous.
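The three listed properties can be checked mechanically for any concrete PMF. A minimal sketch, using a fair die (my choice of example, not the slides’):

```python
from fractions import Fraction

# PMF of a fair six-sided die: atoms and their probabilities.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def F(x):
    """Distribution function F(x) = P(X <= x), built from the PMF."""
    return sum(p for k, p in pmf.items() if k <= x)

# Never decreasing: F(x1) <= F(x2) whenever x1 <= x2.
xs = [0, 0.5, 1, 2.5, 3, 6, 10]
assert all(F(a) <= F(b) for a, b in zip(xs, xs[1:]))

# Limit behavior: 0 below the support, 1 above it.
assert F(0) == 0 and F(6) == 1

# Right-continuity at an atom: F(3 + eps) equals F(3) for small eps > 0.
assert F(3 + 1e-12) == F(3)
print(F(3))  # 1/2
```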

Page 5

Numerical Characteristics of a Random Variable and Related Topics

• Workplace = a numerical sample space (a subset of ℝ).
• Expectation: E[X] = Σ_x x·p(x) (discrete) or ∫ x f(x) dx (continuous).
• Law Of The Unconscious Statistician: E[g(X)] = ∫ g(x) f(x) dx.
• Moments = expectations of positive integer powers: E[X^k].
• Variance = 2nd-order central moment: Var(X) = E[(X − E[X])²].
• Compute moments using the Moment Generating Function M_X(t) = E[e^{tX}].
• Markov & Chebyshev Inequalities: P(X ≥ a) ≤ E[X]/a for X ≥ 0, a > 0; P(|X − μ| ≥ a) ≤ Var(X)/a².
• Strictly monotonic transformation of an R.V. & an invariant differential: when g is strictly increasing and Y = g(X), then f_Y(y) dy = f_X(x) dx.

Side notes (from the slide’s callouts):
• What’s the integrand? What’s the bedrock for integration?
• Are you conscious about what they are unconscious of? Expectation is a moment. Variance is a moment. Moment is the most general concept among the three.
• Generating Function is a trick. Here we apply the trick to the problem of finding moments, and we get a huge bonus (in Ch4).
• Chebyshev was Markov’s teacher, but the relationship is reversed for the two inequalities. Markov’s Inequality has a physical meaning.
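The first three characteristics above (expectation, moments, variance) can be computed directly from a PMF. A small sketch with a fair die as the example (my choice, not the slides’):

```python
# Expectation, variance, and raw moments of a fair die, computed from its PMF.
pmf = {k: 1 / 6 for k in range(1, 7)}

def moment(r):
    """r-th raw moment E[X^r] = sum over x of x^r * p(x)."""
    return sum(x ** r * p for x, p in pmf.items())

mean = moment(1)                 # E[X]
var = moment(2) - mean ** 2      # Var(X) = E[X^2] - (E[X])^2
print(mean, var)                 # ≈ 3.5 and ≈ 2.9167 (= 35/12)
```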

Page 6

Linearity of Expectation

E[aX + bY] = a E[X] + b E[Y], where a and b can be any real constants.

Simple cases: E[aX] = a E[X];  E[X + Y] = E[X] + E[Y].
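Linearity holds with no independence assumption, which is easy to verify by brute force over a finite joint sample space (the two-dice setup below is my own illustration):

```python
import itertools

# Two dice rolled together: the joint sample space, all outcomes equally likely.
outcomes = list(itertools.product(range(1, 7), repeat=2))
p = 1 / len(outcomes)

E = lambda f: sum(f(w) * p for w in outcomes)   # expectation over the joint space

X = lambda w: w[0]          # first die
Y = lambda w: w[1]          # second die
a, b = 2.0, -3.0

# Linearity: E[aX + bY] = a E[X] + b E[Y].
lhs = E(lambda w: a * X(w) + b * Y(w))
rhs = a * E(X) + b * E(Y)
print(lhs, rhs)  # both ≈ -3.5
```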

Page 7

Technical Exercises

• Handout Problems 1, 2, and 3.
• This is the level that you had already mastered before yesterday’s midterm.

Page 8

A Closer Look at Expectation

• Expectation is a generalized integral.
• Let’s forget about probability theory for a few minutes and go back to calculus.
• Usually we use a homogeneous horizontal axis for integration; the density everywhere is the same, as in ∫_a^b h(x) dx.
• But we can generalize by allowing the density to vary from place to place on the horizontal axis.
• To take care of the density, we introduce a density function ρ(x) into the integral, as: ∫_a^b h(x) ρ(x) dx.
• (Of course the integral will now change value, except when ρ(x) = 1 everywhere.)
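To make the contrast tangible, here is a small numerical sketch; the integrand h and the density ρ are illustrative choices of mine, not from the slides:

```python
# Midpoint-rule comparison of a plain integral with a density-weighted one.
def riemann(f, a, b, n=200000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

h = lambda x: x            # the integrand
rho = lambda x: 2 * x      # a non-uniform density on [0, 1]

plain = riemann(h, 0.0, 1.0)                           # integral of x dx = 1/2
weighted = riemann(lambda x: h(x) * rho(x), 0.0, 1.0)  # integral of x * 2x dx = 2/3
print(plain, weighted)  # ≈ 0.5 and ≈ 0.6667
```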

Page 9

Center of Mass and Expectation

• For now let’s forget about the curve and focus on the x-axis.
• If we treat the segment [a, b] on the horizontal axis as a massed segment with linear mass density ρ(x), we can compute the coordinate of its center of mass, x̄, according to the formula:
  x̄ = ∫_a^b x ρ(x) dx / ∫_a^b ρ(x) dx.
• One more step:
  x̄ = ∫_a^b x [ρ(x) / ∫_a^b ρ(u) du] dx.
• Note that 1/∫_a^b ρ(u) du can be regarded as a normalizing constant, and the whole bracketed quantity could be some real probability density!
• Now suppose the x-axis is the state space of some random variable X, and the normalized ρ(x) is actually f_X(x), the probability density; then x̄ and E[X] are the same thing, both conceptually and technically.
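The identity can be checked numerically. In this sketch (my own example) the rod’s density 2x on [0, 1] already has total mass 1, so it is itself a probability density, and the center of mass coincides with the expectation:

```python
# Center of mass of a rod on [0, 1] with linear mass density rho(x) = 2x,
# computed by the formula xbar = (integral of x*rho) / (integral of rho).
def integrate(f, a, b, n=200000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

rho = lambda x: 2 * x
total_mass = integrate(rho, 0.0, 1.0)   # = 1, so rho is a probability density
xbar = integrate(lambda x: x * rho(x), 0.0, 1.0) / total_mass

print(xbar)  # ≈ 2/3: the center of mass, and also E[X] for density 2x on [0, 1]
```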

Page 10

Exercises: Handout Problems 4 & 5

Page 11

Law of the Unconscious Statistician

• We go one step forward to find the expectation of any function of X, such as X², e^X, etc., that is, E[g(X)].
• Go back to the previous unresolved integral ∫ h(x) ρ(x) dx and, without loss of generality, assume the density here is a probabilistic one, f_X(x).
• Obs 1: If two r.v.’s share the same sample space and the same distribution, then they must have the same expectation.
  • Therefore E[g(X)] depends only on g and the distribution of X.
• Obs 2: If two values, say x₁ and x₂, are mapped to the same value by g, that is, if g(x₁) = g(x₂), then their probabilities pool together on that common image.
  • Therefore E[g(X)] = ∫ g(x) f_X(x) dx.
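The pooling in Obs 2 is easy to see in the discrete case. This sketch (the PMF is an arbitrary example of mine) computes E[g(X)] two ways: the “unconscious” way, summing over the x-axis, and the definitional way, first building the distribution of Y = g(X):

```python
from collections import defaultdict

pmf_X = {-2: 0.2, -1: 0.3, 1: 0.3, 2: 0.2}
g = lambda x: x * x   # maps -2 and 2 (and -1 and 1) to the same value

# (1) LOTUS: sum g(x) p_X(x) over the x-axis, never leaving X's sample space.
lotus = sum(g(x) * p for x, p in pmf_X.items())

# (2) Definition: probabilities of x's with the same image pool together into p_Y.
pmf_Y = defaultdict(float)
for x, p in pmf_X.items():
    pmf_Y[g(x)] += p
direct = sum(y * p for y, p in pmf_Y.items())

print(lotus, direct)  # both ≈ 2.2
```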

Page 12

A New Level of Understanding

• Now we understand the meaning of the new integral ∫ g(x) f(x) dx, where f is a probability density on the x-axis: it is the expectation of g(X):
  E[g(X)] = ∫ g(x) f(x) dx.
• Expectation is an integration of the general kind.
• They are unconscious of the fact that g(X), as a random variable, has a different sample space than X has. Hence the definition of E[g(X)], written more explicitly with Y = g(X), should be
  E[Y] = ∫ y f_Y(y) dy,
  and it takes some reasoning to establish the equality of this integral with the one used in LOTUS.

Page 13

Markov’s Inequality

P(X ≥ a) ≤ E[X] / a, for any a > 0.

Caution: Markov’s Inequality only works for non-negative r.v.’s.
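An empirical sanity check of the inequality; the exponential sample below is an arbitrary non-negative r.v. of my choosing:

```python
import random

random.seed(0)
# Check P(X >= a) <= E[X]/a on a sample from a non-negative r.v.
n = 100000
xs = [random.expovariate(1.0) for _ in range(n)]  # X >= 0, with E[X] = 1

mean = sum(xs) / n
for a in [1.0, 2.0, 4.0]:
    tail = sum(x >= a for x in xs) / n   # empirical P(X >= a)
    bound = mean / a                     # Markov bound
    assert tail <= bound, (a, tail, bound)
    print(a, tail, bound)
```

For Exp(1) the true tail is e^{-a}, well under the Markov bound 1/a, so the bound holds with plenty of slack here.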

Page 14

Generating Function

• Generating Function is a general math technique.
• Whenever you have a function whose value set (range) is a countable set {a₀, a₁, a₂, …}, you can embed these values in a power series as:
  G(t) = Σ_k a_k t^k,
  where {a_k} is the range of the function. In specific cases, the power series will converge (sum) to a compact form, but it will still be a function of t.
• Question: how do we get the a_k’s back when we are directly given G(t)?
• One widely used way is to differentiate G with respect to t multiple times, evaluate the derivative at t = 0, and divide by a constant.
  • For example, to get back a_k, the procedure is a_k = G^(k)(0) / k!.
• Often, to remove the division step, we adopt the exponential form G(t) = Σ_k a_k t^k / k!, so that a_k = G^(k)(0) directly.
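The recovery procedure can be demonstrated exactly on a series stored as its coefficient list, using formal differentiation (the example series 1/(1 − 2t) with a_k = 2^k is my choice):

```python
from math import factorial

# A power series stored as its coefficient list [a_0, a_1, a_2, ...].
def diff(coeffs):
    """Formal derivative of a power series given by its coefficients."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def coefficient(coeffs, k):
    """Recover a_k: differentiate k times, evaluate at t = 0, divide by k!."""
    for _ in range(k):
        coeffs = diff(coeffs)
    g_k_at_0 = coeffs[0] if coeffs else 0   # evaluating a power series at t = 0
    return g_k_at_0 // factorial(k)

# G(t) = 1/(1 - 2t) = sum of 2^k t^k: embed a_k = 2^k, then read them back out.
series = [2 ** k for k in range(10)]
print([coefficient(series, k) for k in range(5)])  # [1, 2, 4, 8, 16]
```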

Page 15

Moment Generating Function

• Recall: the r-th moment of a random variable X is E[X^r], where r is a non-negative integer (r = 0, 1, 2, …).
• If we regard the moment as a function whose value is indexed by r, then the value set is a countable set: {E[X^0], E[X^1], E[X^2], …}.
• Then we can embed all the moments in a generating function/power series known as the Moment Generating Function:
  M_X(t) = Σ_r E[X^r] t^r / r! = E[e^{tX}].
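A numerical sketch of reading moments off the MGF; the PMF is an arbitrary example, and the finite-difference step h is a tuning choice of mine:

```python
from math import exp

# MGF of a discrete r.v., with moments read off by numerical differentiation at t = 0.
pmf = {0: 0.5, 1: 0.3, 2: 0.2}

def M(t):
    """M_X(t) = E[e^{tX}] for the discrete PMF above."""
    return sum(exp(t * x) * p for x, p in pmf.items())

h = 1e-3
first = (M(h) - M(-h)) / (2 * h)               # central difference: ≈ M'(0) = E[X]
second = (M(h) - 2 * M(0) + M(-h)) / h ** 2    # ≈ M''(0) = E[X^2]

exact_1 = sum(x * p for x, p in pmf.items())       # 0.7
exact_2 = sum(x * x * p for x, p in pmf.items())   # 1.1
print(first, second)  # ≈ 0.7 and ≈ 1.1
```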

Page 16

Strictly Monotonic Transformation of an R.V.

• Strictly Monotonic Transformation (Function):
  • Strictly Increasing Transformation
  • Strictly Decreasing Transformation
• Consider a strictly increasing function g and let Y = g(X). For simplicity, use y to denote g(x), and hence f_Y(y) to denote the density of Y. The following equality between the two probability differentials must hold:
  f_Y(y) dy = f_X(x) dx.
• Reason: f_Y(y) dy = P(y < Y ≤ y + dy) and f_X(x) dx = P(x < X ≤ x + dx).
  • The equality is therefore equivalent to claiming P(y < Y ≤ y + dy) = P(x < X ≤ x + dx).
  • But since g is strictly monotonic, the event {y < Y ≤ y + dy} is exactly the same one as {x < X ≤ x + dx}.
• For strictly decreasing functions, absolute values are needed: f_Y(y) |dy| = f_X(x) |dx|.

Page 17

Consequence of f_Y(y) dy = f_X(x) dx

• Caution: always remember this equality holds under the strictly monotonic transformation condition.
• Consequence:
  f_Y(y) = f_X(g⁻¹(y)) · |d g⁻¹(y) / dy|.
• Caution: the absolute values here are always needed, for some very mysterious reason in the general theory of calculus (consult Loomis’s Advanced Calculus if you are interested).
• This is the standard way of finding the (strictly monotonically) transformed density function.
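The standard recipe can be checked exactly on a worked example of my own choosing: if X ~ Uniform(0, 1) and Y = g(X) = −ln X (strictly decreasing on (0, 1)), the change-of-variables formula should reproduce the Exp(1) density e^{−y}.

```python
from math import exp

# Change-of-variables check: f_Y(y) = f_X(g^{-1}(y)) * |d g^{-1}(y) / dy|.
f_X = lambda x: 1.0 if 0.0 < x < 1.0 else 0.0   # Uniform(0, 1) density
g_inv = lambda y: exp(-y)                        # inverse of g(x) = -ln x
abs_dginv = lambda y: exp(-y)                    # |d/dy e^{-y}| = e^{-y}

def f_Y(y):
    """Transformed density obtained from the change-of-variables formula."""
    return f_X(g_inv(y)) * abs_dginv(y)

for y in [0.1, 0.5, 1.0, 3.0]:
    assert abs(f_Y(y) - exp(-y)) < 1e-12   # matches the Exp(1) density
print(f_Y(1.0))  # ≈ 0.3679, i.e. e^{-1}
```

Note that the absolute value matters here: g is decreasing, so d g⁻¹/dy = −e^{−y} is negative, and dropping the absolute value would yield a negative “density”.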