
Linear Algebra: A Modern Introduction
4th Edition
ISBN: 9781285463247
Author: David Poole
Publisher: Cengage Learning
Chapter 7: Distance and Approximation
Section 7.3: Least Squares Approximation
Problem 31EQ

Question

1.  Is this null hypothesis always testable? Why or why not?

2. Consider the case that this null hypothesis is testable. Construct a statistical test and its rejection region for H0.

Consider the multiple regression model for the n y-data y1, ..., yn (n is the sample size):

y = X1 β1 + X2 β2 + ε

where y = (y1, ..., yn)', and X1 and X2 are random except for the intercept term (i.e., the vector of 1's) included in X1. Conditional on X1 and X2, the random error vector ε is jointly normal with zero expectation and variance-covariance matrix V, which does not depend on X1 and X2. V is not a diagonal matrix (i.e., some off-diagonal elements are nonzero). β1 and β2 are vectors of two different sets of regression coefficients; β1 has two regression coefficients and β2 has four regression coefficients. β = (β1', β2')'; that is, β is a column vector of six regression coefficients.

a) V is completely known (i.e., the values of all elements of V are given). Let W be a matrix of k rows (k > 1) and four columns of given real numbers. Of interest are the hypotheses

H0: Wβ2 = 0 versus H1: Wβ2 ≠ 0.
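For part (a), since V is fully known, the natural estimator is generalized least squares (GLS), and H0: Wβ2 = 0 can be checked with a Wald-type chi-square statistic. The sketch below is illustrative only: the data, the AR(1) choice of V, and the choice W = I4 are all made up for the example, not given in the problem.

```python
# Hypothetical sketch of a GLS-based Wald test for H0: W beta2 = 0
# when V is completely known. All data below are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + 1 covariate (2 cols)
X2 = rng.normal(size=(n, 4))                            # 4 covariates
X = np.hstack([X1, X2])                                 # n x 6 design
beta_true = np.array([1.0, 2.0, 0.0, 0.0, 0.0, 0.0])    # beta2 = 0, so H0 holds here

# An illustrative known, non-diagonal V (AR(1)-type correlation)
rho = 0.5
idx = np.arange(n)
V = rho ** np.abs(idx[:, None] - idx[None, :])

# Generate y with errors having covariance V
L = np.linalg.cholesky(V)
y = X @ beta_true + L @ rng.normal(size=n)

# GLS estimator: beta_hat = (X' V^{-1} X)^{-1} X' V^{-1} y
Vinv = np.linalg.inv(V)
A = X.T @ Vinv @ X
beta_hat = np.linalg.solve(A, X.T @ Vinv @ y)

# Wald statistic for H0: W beta2 = 0, taking W = I_4 (so k = 4) as an example
W = np.eye(4)
C = np.hstack([np.zeros((4, 2)), W])        # C beta = W beta2, a k x 6 matrix
est = C @ beta_hat
cov = C @ np.linalg.inv(A) @ C.T            # exact Cov(C beta_hat), since V is known
chi2_stat = est @ np.linalg.solve(cov, est)
```

Because V is known (no variance has to be estimated), chi2_stat has an exact chi-square distribution with k degrees of freedom under H0, so the rejection region is {chi2_stat > chi-square upper-alpha quantile with k df}; for k = 4 and alpha = 0.05 that cutoff is about 9.49.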