
MATLAB: An Introduction with Applications
6th Edition
ISBN: 9781119256830
Author: Amos Gilat
Publisher: John Wiley & Sons Inc
Question
*9.93 Let Y₁, Y₂, ..., Yₙ be a random sample from a population with density function

f(y) = 2θ²/y³ for θ < y < ∞, and f(y) = 0 elsewhere.

In Exercise 9.53, you showed that Y₍₁₎ = min(Y₁, Y₂, ..., Yₙ) is sufficient for θ.

a. Find the MLE for θ. [Hint: See Example 9.16.]
b. Find a function of the MLE in part (a) that is a pivotal quantity.
c. Use the pivotal quantity from part (b) to find a 100(1 − α)% confidence interval for θ.
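A short simulation can sanity-check the usual answers to all three parts. For θ < y₍₁₎ the likelihood L(θ) = 2ⁿθ²ⁿ/(y₁ ⋯ yₙ)³ is increasing in θ, so by the same boundary argument as in Example 9.16 the MLE is θ̂ = Y₍₁₎; the ratio T = Y₍₁₎/θ satisfies P(T > t) = t^(−2n) for t > 1, free of θ, so it is pivotal; and inverting P(a ≤ T ≤ b) = 1 − α gives the interval [Y₍₁₎·(α/2)^(1/(2n)), Y₍₁₎·(1 − α/2)^(1/(2n))]. The sketch below is illustrative only (it is not the posted solution, and all names are mine); it samples via the inverse CDF Y = θ/√(1 − U), since F(y) = 1 − θ²/y² for y > θ.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(theta, n, size):
    """Draw `size` samples of size n from f(y) = 2*theta**2 / y**3, y > theta,
    by inverse transform: F(y) = 1 - theta**2/y**2  =>  Y = theta/sqrt(1 - U)."""
    u = rng.uniform(size=(size, n))
    return theta / np.sqrt(1.0 - u)

theta, n, alpha, reps = 3.0, 10, 0.05, 100_000
y = sample(theta, n, reps)
mle = y.min(axis=1)                       # part (a): theta_hat = Y_(1)

# part (b): T = Y_(1)/theta should satisfy P(T > t) = t**(-2n), free of theta
t = mle / theta
print("P(T > 1.1):", np.mean(t > 1.1), "vs theory:", 1.1 ** (-2 * n))

# part (c): [Y_(1)*(alpha/2)**(1/(2n)), Y_(1)*(1 - alpha/2)**(1/(2n))]
lo = mle * (alpha / 2) ** (1 / (2 * n))
hi = mle * (1 - alpha / 2) ** (1 / (2 * n))
print("coverage:", np.mean((lo <= theta) & (theta <= hi)))  # should be near 0.95
```

Rerunning with a different θ leaves both printed values essentially unchanged, which is exactly what it means for T to be pivotal.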
EXAMPLE 9.16 Let Y₁, Y₂, ..., Yₙ be a random sample of observations from a uniform distribution with probability density function f(yᵢ|θ) = 1/θ, for 0 ≤ yᵢ ≤ θ and i = 1, 2, ..., n. Find the MLE of θ.

Solution In this case, the likelihood is given by

L(θ) = f(y₁, y₂, ..., yₙ | θ) = f(y₁|θ) × f(y₂|θ) × ⋯ × f(yₙ|θ)
     = 1/θⁿ, if 0 ≤ yᵢ ≤ θ for i = 1, 2, ..., n,
     = 0, otherwise.

Obviously, L(θ) is not maximized when L(θ) = 0. You will notice that 1/θⁿ is a monotonically decreasing function of θ. Hence, nowhere in the interval 0 < θ < ∞ is d[1/θⁿ]/dθ equal to zero. However, 1/θⁿ increases as θ decreases, and 1/θⁿ is maximized by selecting θ to be as small as possible, subject to the constraint that all of the y values are between zero and θ. The smallest value of θ that satisfies this constraint is the maximum observation in the set y₁, y₂, ..., yₙ. That is, θ̂ = Y₍ₙ₎ = max(Y₁, Y₂, ..., Yₙ) is the MLE for θ. This MLE for θ is not an unbiased estimator of θ, but it can be adjusted to be unbiased, as shown in Example 9.1.
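A quick numerical check of this example (a sketch of my own, not from the text): since E[Y₍ₙ₎] = nθ/(n + 1) for the uniform on [0, θ], the MLE max(Y) underestimates θ on average, and scaling by (n + 1)/n removes the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 5.0, 8, 200_000

y = rng.uniform(0.0, theta, size=(reps, n))
mle = y.max(axis=1)                      # theta_hat = Y_(n) = max(Y_1, ..., Y_n)

print("mean of MLE:     ", mle.mean())                    # ~ n*theta/(n+1) = 4.44
print("mean of adjusted:", ((n + 1) / n * mle).mean())    # ~ theta = 5.0
```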
We have seen that sufficient statistics that best summarize the data have desirable properties and often can be used to find an MVUE for parameters of interest. If U is any sufficient statistic for the estimation of a parameter θ, including the sufficient statistic obtained from the optimal use of the factorization criterion, the MLE is always some function of U. That is, the MLE depends on the sample observations only through the value of a sufficient statistic. To show this, we need only observe that if U is a sufficient statistic for θ, the factorization criterion (Theorem 9.4) implies that the likelihood can be factored as

L(θ) = L(y₁, y₂, ..., yₙ | θ) = g(u, θ)h(y₁, y₂, ..., yₙ),

where g(u, θ) is a function of only u and θ and h(y₁, y₂, ..., yₙ) does not depend on θ. Therefore, it follows that

ln[L(θ)] = ln[g(u, θ)] + ln[h(y₁, y₂, ..., yₙ)].

Notice that ln[h(y₁, y₂, ..., yₙ)] does not depend on θ, and therefore maximizing ln[L(θ)] relative to θ is equivalent to maximizing ln[g(u, θ)] relative to θ. Because ln[g(u, θ)] depends on the data only through the value of the sufficient statistic U, the MLE for θ is always some function of U. Consequently, if an MLE for a parameter can be found and then adjusted to be unbiased, the resulting estimator often is an MVUE of the parameter in question.
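To make the factorization concrete, take the uniform likelihood from Example 9.16: with u = max(y₁, ..., yₙ) we may take g(u, θ) = θ⁻ⁿ · I(u ≤ θ) and h(y₁, ..., yₙ) = 1, so the likelihood depends on the data only through u. A minimal sketch (function names are mine, not the text's):

```python
import numpy as np

def loglik_full(theta, y):
    """ln L(theta) from the full sample: -n*ln(theta) if all y_i lie in [0, theta]."""
    return -len(y) * np.log(theta) if np.all((0 <= y) & (y <= theta)) else -np.inf

def loglik_from_u(theta, u, n):
    """ln g(u, theta): the same value, computed from only u = max(y) and n."""
    return -n * np.log(theta) if u <= theta else -np.inf

y = np.array([1.2, 0.7, 3.4, 2.9])
for theta in (3.0, 3.4, 4.0, 6.0):
    assert loglik_full(theta, y) == loglik_from_u(theta, y.max(), len(y))
```

Maximizing either function over θ gives θ̂ = u = 3.4, illustrating that the MLE is a function of the sufficient statistic alone.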
MLEs have some additional properties that make this method of estimation particularly attractive. In Example 9.9, we considered estimation of θ², a function of the parameter θ. Functions of other parameters may also be of interest. For example, the variance of a binomial random variable is np(1 − p), a function of the parameter p. If Y has a Poisson distribution with mean λ, it follows that P(Y = 0) = e^(−λ); we may wish to estimate this function of λ. Generally, if θ is the parameter associated with a distribution, we are sometimes interested in estimating some function of θ, say t(θ), rather than θ itself. In Exercise 9.94, you will prove that if t(θ) is a one-to-one function of θ and θ̂ is the MLE for θ, then the MLE of t(θ) is given by t(θ̂). This result, sometimes referred to as the invariance property of MLEs, also holds for any function of a parameter of interest (not just one-to-one functions). See Casella and Berger (2002) for details.
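As a concrete instance of invariance (an illustrative sketch of mine, not from the text): for a Poisson sample the MLE of λ is the sample mean ȳ, so the MLE of t(λ) = P(Y = 0) = e^(−λ) is e^(−ȳ). The check below confirms this against a brute-force grid maximization of the log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.poisson(lam=2.5, size=50)

lam_hat = y.mean()                        # MLE of lambda for a Poisson sample
t_hat = np.exp(-lam_hat)                  # invariance: MLE of P(Y=0) = e^(-lambda)

# Brute force: maximize ln L(lambda) = sum(y)*ln(lambda) - n*lambda + const
grid = np.linspace(0.01, 10.0, 200_000)
loglik = y.sum() * np.log(grid) - len(y) * grid
lam_grid = grid[loglik.argmax()]

print(t_hat, np.exp(-lam_grid))           # agree to grid resolution
```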