
Page 1: Exam 2: Rules

Exam 2: Rules Section 2.1

Bring a cheat sheet: one page, two sides.

Bring a calculator.

Bring your book to use the tables in the back.

Page 2: Exam 2: Rules

Exam 2: Rules Section 2.1

Five questions:

One fill-in-the-blanks

One multiple choice

Three to solve

One of those three is from your homework

Show your work; if you do, you might get partial credit

Page 3: Exam 2: Rules

In studying uncertainty:

1) Identify the experiment of interest and understand it well (including the associated population)

2) Identify the sample space (all possible outcomes)

3) Identify an appropriate random variable that reflects what you are studying (and simple events based on this random variable)

4) Construct the probability distribution associated with the simple events based on the random variable

Page 4: Exam 2: Rules

Ch3:

3.1 Random Variables
3.2 Probability Distributions for Discrete Random Variables
3.3 Expected Values
3.4 The Binomial Distribution
3.5 Negative Binomial and Hypergeometric
3.6 The Poisson

Page 5: Exam 2: Rules

Random Variables Section 3.1

A random variable is a function on the outcomes of an experiment, i.e., a function on the outcomes in S.

A discrete random variable is one with a sample space that is finite or countably infinite. (Countably infinite means infinite yet able to be matched one-to-one with the integers.)

A continuous random variable is one with a continuous sample space.

Page 6: Exam 2: Rules

For discrete random variables, we call P(X = x) = P(x) the probability mass function (pmf).

From the axioms of probability, we can show that:

1. $p(x) \ge 0$ for all $x$

2. $\sum_x p(x) = 1$

The CDF, $F(x)$, is defined to be $F(x) = P(X \le x) = \sum_{y \le x} p(y)$.

Probability distributions for discrete rvs Section 3.2
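A minimal numeric check of these two conditions and of the CDF definition, sketched in Python with numpy; the fair six-sided die is a made-up example:

```python
import numpy as np

x = np.arange(1, 7)            # support of X: faces of a fair die
p = np.full(6, 1 / 6)          # pmf: p(x) = 1/6 for each face

assert np.all(p >= 0)          # condition 1: p(x) >= 0
assert np.isclose(p.sum(), 1)  # condition 2: the pmf sums to 1

F = np.cumsum(p)               # CDF: F(x) = P(X <= x)
print(dict(zip(x, F)))         # e.g. F(3) = 0.5, F(6) = 1.0
```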

Page 7: Exam 2: Rules

We can also find the pmf using the CDF if we note that $p(x) = F(x) - F(x^-)$, where $x^-$ is the largest possible $X$ value strictly less than $x$ (for integer-valued $X$, $p(x) = F(x) - F(x-1)$).

Probability distributions for discrete rvs Section 3.2

So, for any two numbers $a$, $b$ where $a < b$, $P(a \le X \le b) = F(b) - F(a^-)$.

Page 8: Exam 2: Rules

Expected values Section 3.3

The expected value $E(X)$ of a discrete random variable is the weighted average, or mean, of that random variable: $\mu = E(X) = \sum_x x\, p(x)$.

The variance of a discrete random variable is the weighted average of the squared distance from the mean: $\sigma^2 = V(X) = \sum_x (x - \mu)^2\, p(x) = E[(X - \mu)^2]$.

The standard deviation is $\sigma = \sqrt{V(X)}$.

Let $h(X)$ be a function and $a$ and $b$ be constants; then $E[h(X)] = \sum_x h(x)\, p(x)$, $E(aX + b) = aE(X) + b$, and $V(aX + b) = a^2 V(X)$.
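A short sketch of these formulas in Python; the support and probabilities below are purely made-up illustration values:

```python
import numpy as np

x = np.array([0, 1, 2])            # made-up support
p = np.array([0.25, 0.50, 0.25])   # made-up pmf

mean = np.sum(x * p)               # E(X) = sum of x * p(x)        -> 1.0
var = np.sum((x - mean)**2 * p)    # V(X) = sum of (x-mu)^2 p(x)   -> 0.5
sd = np.sqrt(var)                  # standard deviation            -> ~0.707

# Linearity: E(aX + b) = a E(X) + b and V(aX + b) = a^2 V(X)
a, b = 3, 2
assert np.isclose(np.sum((a*x + b) * p), a*mean + b)
assert np.isclose(np.sum((a*x + b - (a*mean + b))**2 * p), a**2 * var)
```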

Page 9: Exam 2: Rules

Discrete Probability & Expected values Section 3.2-6

Talked about the following distributions:

Bernoulli

Binomial

Hypergeometric

Geometric

Negative Binomial

Poisson

Page 10: Exam 2: Rules

Discrete Probability & Expected values Section 3.2-6

Bernoulli

Two possible outcomes, S and F, with probability of success = p.

S = {S, F}

Page 11: Exam 2: Rules

Discrete Probability & Expected values Section 3.2-6

Binomial

The experiment consists of a group of n independent Bernoulli sub-experiments, where n is fixed in advance of the experiment and the probability of a success is p.

What we are interested in studying is the number of successes that we may observe in any run of such an experiment.

Page 12: Exam 2: Rules

Discrete Probability & Expected values Section 3.2-6

Binomial

The binomial random variable X = the number of successes (S’s) among n Bernoulli trials or sub-experiments.

We say X is distributed binomial with parameters n and p: $X \sim \mathrm{Bin}(n, p)$.

The pmf can be written (depending on the book) as $b(x; n, p) = \binom{n}{x} p^x (1-p)^{n-x}$ for $x = 0, 1, \ldots, n$.

Page 13: Exam 2: Rules

The CDF can be written (also depending on the book) as $B(x; n, p) = P(X \le x) = \sum_{y=0}^{x} b(y; n, p)$.

Discrete Probability & Expected values Section 3.2-6

Binomial

Tabulated in Table A.1, pages 664-666
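For a concrete check of the pmf and CDF, scipy.stats can stand in for Table A.1; n = 10 and p = 0.3 are made-up numbers:

```python
from scipy.stats import binom

n, p = 10, 0.3                 # made-up parameters

print(binom.pmf(3, n, p))      # b(3; 10, 0.3) = C(10,3) 0.3^3 0.7^7 ~ 0.2668
print(binom.cdf(3, n, p))      # B(3; 10, 0.3) = P(X <= 3) ~ 0.6496
print(binom.mean(n, p))        # E(X) = np = 3.0
print(binom.var(n, p))         # V(X) = np(1 - p) = 2.1
```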

Page 14: Exam 2: Rules

The Binomial Distribution Section 3.4

When to use the binomial distribution?

1. When we have n independent Bernoulli trials

2. When each Bernoulli trial is formed from a sample of n individuals (parts, animals, …) drawn from a population with replacement.

3. When each Bernoulli trial is formed from a sample of n individuals (parts, animals, …) from a population of size N without replacement if n/N < 5%.

Page 15: Exam 2: Rules

Hypergeometric

Discrete Probability & Expected values Section 3.2-6

The experiment consists of a group of n dependent Bernoulli sub-experiments, where n is fixed in advance of the experiment and the probability of a success is p.

What we are interested in studying is the number of successes that we may observe in any run of such an experiment.

Page 16: Exam 2: Rules

The hypergeometric random variable X = the number of successes (S’s) among n trials or sub-experiments.

We say X is distributed hypergeometric with parameters N, M, and n.

Discrete Probability & Expected values Section 3.2-6

Hypergeometric

Page 17: Exam 2: Rules

The pmf can be written (depending on the book) as $h(x; n, M, N) = \dfrac{\binom{M}{x}\binom{N-M}{n-x}}{\binom{N}{n}}$ for $\max(0,\, n - N + M) \le x \le \min(n, M)$.

The CDF is $H(x; n, M, N) = \sum_{y \le x} h(y; n, M, N)$.

Discrete Probability & Expected values Section 3.2-6

Hypergeometric
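A sketch with scipy.stats; the population and sample sizes are made up, and note that scipy orders the hypergeometric parameters differently from the book:

```python
from scipy.stats import hypergeom

N, M, n = 20, 7, 5    # made-up: population 20, 7 successes in it, sample of 5

# scipy's argument order is (population size, successes in population, draws)
rv = hypergeom(N, M, n)
print(rv.pmf(2))      # P(X = 2) = C(7,2) C(13,3) / C(20,5) ~ 0.387
print(rv.cdf(2))      # P(X <= 2)
print(rv.mean())      # E(X) = n * M / N = 1.75
```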

Page 18: Exam 2: Rules

Discrete Probability & Expected values Section 3.2-6

Hypergeometric

Page 19: Exam 2: Rules

Discrete Probability & Expected values Section 3.2-6

Negative Binomial

The experiment consists of a group of independent Bernoulli sub-experiments, where r (not n), the number of successes we are looking to observe, is fixed in advance of the experiment and the probability of a success is p.

What we are interested in studying is the number of failures that precede the rth success.

Called negative binomial because, instead of fixing the number of trials n, we fix the number of successes r.

Page 20: Exam 2: Rules

The negative binomial random variable X = the number of failures (F’s) until the rth success.

We say X is distributed negative binomial with parameters r and p: $X \sim \mathrm{NB}(r, p)$.

Discrete Probability & Expected values Section 3.2-6

Negative Binomial

The pmf is $nb(x; r, p) = \binom{x + r - 1}{r - 1} p^r (1-p)^x$ for $x = 0, 1, 2, \ldots$

Page 21: Exam 2: Rules

The CDF is $NB(x; r, p) = \sum_{y=0}^{x} nb(y; r, p)$.

Discrete Probability & Expected values Section 3.2-6

Negative Binomial
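A sketch with scipy.stats, whose nbinom also counts failures before the r-th success; r = 3 and p = 0.4 are made-up numbers:

```python
from scipy.stats import nbinom

r, p = 3, 0.4                 # made-up parameters

print(nbinom.pmf(4, r, p))    # nb(4; 3, 0.4) = C(6, 2) 0.4^3 0.6^4
print(nbinom.cdf(4, r, p))    # P(X <= 4)
print(nbinom.mean(r, p))      # E(X) = r(1 - p)/p = 4.5
```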

Page 22: Exam 2: Rules

Discrete Probability & Expected values Section 3.2-6

Geometric

A special case of the negative binomial is when r = 1; then we call the distribution geometric.

The geometric random variable X = the number of failures (F's) until the 1st success.

We say X is distributed geometric with parameter p.

The pmf is $p(x) = p(1-p)^x$ for $x = 0, 1, 2, \ldots$

Page 23: Exam 2: Rules

Discrete Probability & Expected values Section 3.2-6

Geometric

The CDF is $F(x) = P(X \le x) = 1 - (1-p)^{x+1}$ for $x = 0, 1, 2, \ldots$

Page 24: Exam 2: Rules

Discrete Probability & Expected values Section 3.2-6

Poisson

We can get to the Poisson model in two ways:

1. As an approximation of the Binomial distribution

2. As a model describing the Poisson process

Page 25: Exam 2: Rules

1. Approximating the Binomial distribution

Rules for approximation:

The math ones are: if $n \to \infty$, $p \to 0$, and $np \to \lambda > 0$, then $b(x; n, p) \to p(x; \lambda)$.

In practice:

If $n$ is large ($> 50$) and $p$ is small, such that $np < 5$, then we can approximate $\mathrm{Bin}(n, p)$ with $\mathrm{Poisson}(\lambda)$, where $\lambda = np$.

Discrete Probability & Expected values Section 3.2-6

Poisson
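A quick numeric look at how good the approximation is, with made-up numbers that satisfy the rule of thumb (n = 100 > 50, np = 2 < 5):

```python
from scipy.stats import binom, poisson

n, p = 100, 0.02          # made-up numbers meeting the rule of thumb
lam = n * p               # lambda = np = 2

for x in range(5):
    print(x, binom.pmf(x, n, p), poisson.pmf(x, lam))
# The binomial and Poisson columns agree to roughly two decimal places.
```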

Page 26: Exam 2: Rules

pmf: $p(x; \lambda) = \dfrac{e^{-\lambda} \lambda^x}{x!}$ for $x = 0, 1, 2, \ldots$

Poisson random variable X = the number of successes (S).

We say X is distributed Poisson with parameter $\lambda$: $X \sim \mathrm{Poisson}(\lambda)$.

1. Approximating the Binomial distribution

Discrete Probability & Expected values Section 3.2-6

Poisson

Page 27: Exam 2: Rules

CDF: $F(x; \lambda) = \sum_{y=0}^{x} \dfrac{e^{-\lambda} \lambda^y}{y!}$

Tabulated in Table A.2, page 667

1. Approximating the Binomial distribution

Discrete Probability & Expected values Section 3.2-6

Poisson

Page 28: Exam 2: Rules

2. As a model describing the Poisson process

This is a process of counting events, usually over time.

Assumptions:

a. There exists a parameter $\alpha > 0$ such that, for any short time interval $\Delta t$, $P(\text{exactly one event in } \Delta t) = \alpha \Delta t + o(\Delta t)$.

b. There is a very small chance that 2 or more events will occur in $\Delta t$: $P(\text{two or more events in } \Delta t) = o(\Delta t)$.

c. The number of events observed in $\Delta t$ is independent of the number occurring in any other non-overlapping interval.

Discrete Probability & Expected values Section 3.2-6

Poisson

Page 29: Exam 2: Rules

pmf: $P(X = x) = \dfrac{e^{-\alpha t} (\alpha t)^x}{x!}$ for $x = 0, 1, 2, \ldots$

Poisson random variable X = the number of successes (S) within time period t.

We say X is distributed Poisson with parameter $\alpha t$: $X \sim \mathrm{Poisson}(\alpha t)$.

2. As a model describing the Poisson process

Discrete Probability & Expected values Section 3.2-6

Poisson
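A sketch of the process version with made-up numbers: events at rate α = 2 per hour, observed for t = 3 hours:

```python
from scipy.stats import poisson

alpha, t = 2.0, 3.0          # made-up rate and time window
lam = alpha * t              # Poisson parameter: alpha * t = 6

print(poisson.pmf(4, lam))   # P(exactly 4 events in 3 hours)
print(poisson.cdf(4, lam))   # P(at most 4 events in 3 hours)
print(poisson.mean(lam))     # E(X) = alpha * t = 6.0
```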

Page 30: Exam 2: Rules

CDF: $F(x) = \sum_{y=0}^{x} \dfrac{e^{-\alpha t} (\alpha t)^y}{y!}$, i.e., Poisson with $\lambda = \alpha t$.

Tabulated in Table A.2, page 667

Discrete Probability & Expected values Section 3.2-6

Poisson

Page 31: Exam 2: Rules

Ch4:

4.1 Probability Density Functions
4.2 CDFs and Expected Values
4.3 The Normal Distribution
4.4 The Exponential Distribution

Page 32: Exam 2: Rules

Continuous pdfs, CDFs and Expectation Section 4.1-2

For continuous random variables, we call f(x) the probability density function (pdf).

From the axioms of probability, we can show that:

1. $f(x) \ge 0$ for all $x$

2. $\int_{-\infty}^{\infty} f(x)\, dx = 1$

The CDF is $F(x) = P(X \le x) = \int_{-\infty}^{x} f(y)\, dy$.
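A minimal sketch of these conditions by numeric integration, using a made-up pdf f(x) = 2x on [0, 1]:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 2 * x               # made-up pdf, supported on [0, 1]

area, _ = quad(f, 0, 1)           # condition 2: total area under f
assert np.isclose(area, 1.0)

F = lambda x: quad(f, 0, x)[0]    # CDF: integral of f up to x
print(F(0.5))                     # F(0.5) = 0.25
```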

Page 33: Exam 2: Rules

We can also find the pdf using the CDF if we note that $f(x) = F'(x)$ wherever the derivative exists.

Continuous pdfs, CDFs and Expectation Section 4.1-2

So, for any two numbers $a$, $b$ where $a < b$, $P(a \le X \le b) = F(b) - F(a)$.

Page 34: Exam 2: Rules

Expected values Section 4.2

For a continuous random variable, $E(X) = \int_{-\infty}^{\infty} x f(x)\, dx$ and $V(X) = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, dx = E(X^2) - [E(X)]^2$.

Page 35: Exam 2: Rules

Continuous random variables Section 4.2-6

Talked about the following distributions:

Uniform

Normal

Exponential

Page 36: Exam 2: Rules

Uniform

Continuous random variables Section 4.2-6

A uniform random variable on $[A, B]$ has pdf $f(x; A, B) = \dfrac{1}{B - A}$ for $A \le x \le B$ (and $0$ otherwise), with CDF $F(x) = \dfrac{x - A}{B - A}$ on $[A, B]$.

Page 37: Exam 2: Rules

Normal

Continuous random variables Section 4.2-6

The most important distribution of classical and applied statistics. We say $X \sim N(\mu, \sigma^2)$, with pdf $f(x; \mu, \sigma) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-(x - \mu)^2 / (2\sigma^2)}$.

CDF: $F(x) = P(X \le x)$ has no closed form; it is evaluated by standardizing and using the standard normal table (Table A.3).

Expectation: $E(X) = \mu$ and $V(X) = \sigma^2$.

Page 38: Exam 2: Rules

The standard Normal

Z is said to have a standard normal distribution with mean μ = 0 and standard deviation σ = 1: $Z \sim N(0, 1)$.

pdf: $\phi(z) = \dfrac{1}{\sqrt{2\pi}}\, e^{-z^2/2}$

The CDF, $\Phi(z) = P(Z \le z)$, is provided by Table A.3, pages 668-669.

Normal

Continuous random variables Section 4.2-6
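A sketch of standard normal probabilities with scipy.stats, where norm.cdf plays the role of Table A.3; the N(10, 2²) example is made up:

```python
from scipy.stats import norm

print(norm.cdf(1.96))                 # Phi(1.96) ~ 0.9750
print(norm.cdf(1) - norm.cdf(-1))     # P(-1 < Z < 1) ~ 0.6827

mu, sigma = 10, 2                     # made-up non-standard normal
print(norm.cdf((13 - mu) / sigma))    # P(X <= 13) = Phi(1.5) ~ 0.9332
```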

Page 39: Exam 2: Rules

Percentiles (Normal)

Continuous random variables Section 4.2-6

$z_\alpha = x_{(1-\alpha)}$ = the $100(1-\alpha)$th percentile of the standard normal distribution; equivalently, the area under the standard normal curve to the right of $z_\alpha$ is $\alpha$.

If $X \sim \mathrm{Bin}(n, p)$ with $np \ge 10$ and $n(1-p) \ge 10$, then we can use the normal distribution to approximate this distribution as follows: $X$ is approximately normal with $\mu = np$ and $\sigma = \sqrt{np(1-p)}$, so $P(X \le x) \approx \Phi\!\left(\dfrac{x + 0.5 - np}{\sqrt{np(1-p)}}\right)$.
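A sketch of both ideas, assuming the standard continuity-corrected form of the approximation; n = 100 and p = 0.4 are made-up numbers with np ≥ 10 and n(1 − p) ≥ 10:

```python
import math
from scipy.stats import norm, binom

print(norm.ppf(0.95))        # z_0.05 ~ 1.645 (the 95th percentile)

n, p = 100, 0.4              # made-up numbers satisfying np, n(1-p) >= 10
mu = n * p
sigma = math.sqrt(n * p * (1 - p))
approx = norm.cdf((45 + 0.5 - mu) / sigma)  # continuity correction
print(approx, binom.cdf(45, n, p))          # ~0.869 vs ~0.869
```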

Page 40: Exam 2: Rules

Exponential

Continuous random variables Section 4.2-6

Commonly used to model component lifetime (if that component can be assumed not to change over time) and times between occurrences of events in a Poisson process. It is also a good continuous approximation to the geometric distribution.

Page 41: Exam 2: Rules

We say that a random variable is exponentially distributed, $X \sim \mathrm{Exp}(\lambda)$, governed by parameter $\lambda$, if the pdf of its distribution is $f(x; \lambda) = \lambda e^{-\lambda x}$ for $x \ge 0$ (and $0$ otherwise).

CDF: $F(x; \lambda) = 1 - e^{-\lambda x}$ for $x \ge 0$

Expectation: $E(X) = 1/\lambda$ and $V(X) = 1/\lambda^2$

Exponential

Continuous random variables Section 4.2-6
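A sketch with scipy.stats, which parameterizes the exponential by scale = 1/λ; the rate λ = 0.5 is a made-up number:

```python
from scipy.stats import expon

lam = 0.5                    # made-up rate
rv = expon(scale=1 / lam)    # scipy uses scale = 1/lambda

print(rv.cdf(2))             # F(2) = 1 - e^{-0.5 * 2} ~ 0.6321
print(rv.mean())             # E(X) = 1/lambda = 2.0
print(rv.var())              # V(X) = 1/lambda^2 = 4.0
```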

Page 42: Exam 2: Rules

Chebyshev’s rule:

Says that, no matter what probability distribution you are looking at, the chance that an observed simple event of an experiment (from now on we will hand-wave it and call it an outcome) falls within k standard deviations of the mean of the distribution is at least $1 - 1/k^2$.

In simple math: $P(\mu - k\sigma < X < \mu + k\sigma) \ge 1 - \dfrac{1}{k^2}$, for any $k \ge 1$.

This holds for any type of random variable, discrete or continuous.
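A quick numeric comparison of Chebyshev's bound against one exact distribution (the standard normal, chosen just for illustration) at k = 2:

```python
from scipy.stats import norm

k = 2
bound = 1 - 1 / k**2                 # Chebyshev: at least 0.75
exact = norm.cdf(k) - norm.cdf(-k)   # exact for N(0,1): ~0.9545
print(bound, exact)                  # the bound holds: 0.75 <= 0.9545
```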