week 13_Grace (pdf, 71 pages)
School: University of Illinois, Urbana-Champaign
Course: 440
Subject: Statistics
Date: Jan 9, 2024
Uploaded by dfhuber

Week 13 Grace
Homework Feedback
- Model assumptions
- Group names vs. numbers in your csv file
- Tukey's W: W = q(α; k, dfError) · √(MSE/n), where q is the studentized range critical value, k is the number of groups, and n is the per-group sample size
- You need to include a table with group letters for post hoc testing, as well as a short explanation
Design and blocking:
  Latin square: two blocking effects
  RCBD: one blocking effect
  1-way ANOVA: no blocking effect
1.) If columns and rows had a significant effect for a Latin square design, then would we expect the MSE to be larger or smaller if we dropped columns to create an RCBD?

Larger. The variability contained in columns (which was significant) is now redistributed into the error term. Specifically, SSColumn is added to SSError, and dfColumn is added to dfError. Since SSColumn was large (because the column effect was significant), the MSE (calculated as SSE/dfE) increases substantially. The gain in dfE in the denominator is not enough to balance the large gain in SSE. With a large increase in the numerator and a small increase in the denominator, the MSE increases as a result.
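A quick numeric sketch makes the pooling argument concrete. The sums of squares below are made-up illustrative values for a hypothetical 5×5 Latin square, not from any dataset:

```python
# Hypothetical 5x5 Latin square: rows, columns, and treatments each use 4 df,
# leaving (5 - 1) * (5 - 2) = 12 df for error.
ss_error, df_error = 100.0, 12   # made-up error SS and df for the Latin square
ss_col,   df_col   = 400.0, 4    # made-up (large, "significant") column SS and df

mse_latin = ss_error / df_error  # 100 / 12, about 8.33

# Dropping columns (Latin square -> RCBD) pools the column SS and df into error:
mse_rcbd = (ss_error + ss_col) / (df_error + df_col)  # 500 / 16 = 31.25

# The numerator grew 5x while the denominator grew only 1.33x, so MSE jumps.
print(mse_latin, mse_rcbd)
```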
2.) If we dropped rows and columns to end up with a CRD, how would the MSE compare with the previously described Latin square and RCBD? As part of this, please rank the MSEs for each design from largest to smallest.

CRD > RCBD > Latin square. When we remove one blocking factor from the Latin square, it becomes an RCBD and the MSE increases: the SS from the blocking effect is added to the error SS, and the df from the blocking effect is added to the error df, but the increase in the numerator of the MSE is larger than the increase in the denominator, so the MSE increases. When we remove the other blocking factor from the RCBD, the MSE increases again for the same reason.
3.) What is the difference between fixed and random effects?

Fixed: we are interested in ONLY the particular levels of the effect used in the experiment. The treatment levels are selected intentionally and are reproducible.
Random: we are interested in an entire population of effect levels. We think of the particular levels used in the experiment as a random sample from that population, so the effect levels can reasonably be assumed to follow a probability distribution.
Fixed example: Suppose we want to make inferences about the effect of three brands of fertilizer on the height of tomato plants and account for the differences in soil composition only across the one field where the plants will be planted.
Random example: Suppose we want to make inferences about the effect of three brands of fertilizer on the height of tomato plants and account for the temperature in any number of fields.
4.) A researcher is interested in assessing drought tolerance in 517 different lines of a species. To account for differences in soil composition across the field in Illinois where the lines will be planted, this researcher is using an RCBD. Suppose that the researcher is interested in making inferences for only the one location in Illinois where these lines are being planted. Should the statistical model used to analyze these data include blocks as a fixed or random effect?

Fixed. Why? Because they are only interested in the performance of their lines in that field and don't want to infer beyond this inference space. If their interest were in how these lines might respond in any number of fields (in Illinois, or anywhere), then the blocks would be random effects.
5.) Give an example of a fixed effect and a random effect from your research or a scientific topic of your choice.
Quiz Time
Linear Regression
Example: https://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704-EP713_MultivariableMethods/#:~:text=We%20could%20use%20the%20equation,independent%20variable%20on%20the%20outcome.
So we need an equation to work with. And it comes from one you've known for a long time!

Slope equation: y = mx + b
Linear regression equation: y_i = β0 + β1·x_i + ε_i

* If β1 is not equal to zero, it is likely that there is a linear relationship between x and y.
6) An instructor is interested in the degree to which students' final grades from her calculus III course associate with their performance in her introductory probability course. Therefore, the final grades from 15 randomly selected students who took both of these courses were collected.
Here are the data:

Student_ID  Calc_3_Grade  Intro_Prob_Grade
 1          83.52          80.21
 2          92.71          77.93
 3          90.72          62.78
 4          78.96          75.59
 5          86.01          81.76
 6          87.63          85.76
 7          83.00          71.81
 8          95.06          92.46
 9          81.44          73.84
10          67.85          66.32
11          74.12          69.22
12          88.42         102.95
13          85.09          77.84
14          91.26          91.03
15          90.06          66.18
a) Fit a simple linear regression model by hand. Let the performance in the introductory probability course be the dependent variable (Y) and the performance in calculus III be the independent variable (X).
(The same data table as above, with Calc_3_Grade as our X variable and Intro_Prob_Grade as our Y variable.)
Please write down the model, and show all of your work. You can use R to do the calculations and fit the model!
Please write down the model, and show all of your work.

y_i = β0 + β1·x_i + ε_i

Degrees of freedom: Total = n = 15; Intercept = 1 (always); Slope = 1 (always); Error = n − 2 = 13

* Note that n is the number of paired data points (x, y), i.e. the number of students (15), not the number of x and y observations added together (30).
y_i = β0 + β1·x_i + ε_i

y_i = the introductory probability final grade of the i-th student
β0 = the intercept parameter
β1 = the slope parameter
ε_i = the error term, representing the variability in the probability final grade that is not explained by the linear relationship with the calculus III grade of the i-th student; assumed ~ NID(0, σ²)
Show the null and alternative hypotheses for the Model term in the ANOVA table.
H0: β1 = 0
HA: β1 ≠ 0
Worksheet columns: x | y | x − x̄ | y − ȳ | (x − x̄)(y − ȳ) | (x − x̄)² | ŷ | (y − ŷ)² | (ŷ − ȳ)² | (y − ȳ)²

The x and y columns are just our 15 original data points. Here, x is the calculus III grade, and y is the introductory probability grade.
x − x̄ and y − ȳ: for these next two columns, you just take the original data point (x or y) and subtract the respective average.
(x − x̄)(y − ȳ): for the next column, just multiply column #3 and column #4. The sum of the 15 numbers in this column will be referred to as "Sxy". We will use it for future calculations.
(x − x̄)²: now, square column #3. The sum of the 15 numbers in this column will be referred to as "Sxx". We will use it for future calculations.
Now we have to calculate β 0 and β 1 so that we can calculate the rest of the columns.
β̂1 = Sxy / Sxx = 0.70
(Sxy = sum of the (x − x̄)(y − ȳ) column; Sxx = sum of the (x − x̄)² column.)
β̂1 = Sxy / Sxx = 0.70
β̂0 = ȳ − β̂1·x̄ = 78.38 − (0.70)(85.06) = 18.84
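As a cross-check, the hand calculations can be reproduced from the data table. This is a Python sketch (the course materials use R), with variable names of our own choosing:

```python
# Reproduce the least-squares estimates by hand.
# x = Calc III grades, y = Intro Prob grades (the 15 students from the table).
x = [83.52, 92.71, 90.72, 78.96, 86.01, 87.63, 83.00, 95.06,
     81.44, 67.85, 74.12, 88.42, 85.09, 91.26, 90.06]
y = [80.21, 77.93, 62.78, 75.59, 81.76, 85.76, 71.81, 92.46,
     73.84, 66.32, 69.22, 102.95, 77.84, 91.03, 66.18]

n = len(x)
xbar = sum(x) / n   # about 85.06
ybar = sum(y) / n   # about 78.38

# Sum of the (x - xbar)(y - ybar) column and of the (x - xbar)^2 column:
Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
Sxx = sum((xi - xbar) ** 2 for xi in x)   # about 745.59

b1 = Sxy / Sxx          # about 0.70
b0 = ybar - b1 * xbar   # about 18.70 unrounded; the slides' 18.84 uses b1 rounded to 0.70
print(round(b1, 2), round(b0, 2))
```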
ŷ: the ŷ column is calculated with the β̂0 and β̂1 that we just calculated, using the equation ŷ = β̂0 + β̂1·x_i (just plug in each x value to find ŷ).
(y − ŷ)²: this column is calculated by taking each y value, subtracting ŷ, and then squaring it.
(ŷ − ȳ)²: this column is calculated by taking each ŷ value, subtracting ȳ, and then squaring it.
(y − ȳ)²: this column is calculated by taking each y value, subtracting ȳ, and then squaring it.
Now let's set up our R document. This will help us make the calculations that we can use with different hypothesis tests. These will allow us to answer questions about our linear regression.
F test for slope ( β 1 )
ANOVA table for Regression
Please write down the model, and show all of your work.

y_i = β0 + β1·x_i + ε_i   (df: Total = 15, Intercept = 1, Slope = 1, Error = 13)

We will start by filling in the degrees of freedom column. We already figured out what these were when we wrote the model.
Sums of squares:
SSTotal = Σ(y − ȳ)²
SSRegression = Σ(ŷ − ȳ)²
SSError = Σ(y − ŷ)²
ANOVA table for Regression

F_crit(α = 0.05, dfReg = 1, dfError = 13) = 4.67
ANOVA table for Regression

F_crit(α = 0.05, dfReg = 1, dfError = 13) = 4.67

Now it's time to make our decision and conclusion. Let's recall our hypotheses:
H0: β1 = 0
HA: β1 ≠ 0
ANOVA table for Regression

F_crit(α = 0.05, dfReg = 1, dfError = 13) = 4.67
H0: β1 = 0
HA: β1 ≠ 0

Because F_calc < F_crit, we fail to reject the null hypothesis and conclude there is insufficient evidence to say the slope is not equal to 0. In other words, the data do not show a significant linear relationship.
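The F statistic can be verified numerically from the raw grades. This Python sketch (the course uses R) rebuilds the sums of squares from the fitted line:

```python
# Recompute the regression ANOVA from the raw data.
# x = Calc III grades, y = Intro Prob grades.
x = [83.52, 92.71, 90.72, 78.96, 86.01, 87.63, 83.00, 95.06,
     81.44, 67.85, 74.12, 88.42, 85.09, 91.26, 90.06]
y = [80.21, 77.93, 62.78, 75.59, 81.76, 85.76, 71.81, 92.46,
     73.84, 66.32, 69.22, 102.95, 77.84, 91.03, 66.18]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / \
     sum((a - xbar) ** 2 for a in x)
b0 = ybar - b1 * xbar
yhat = [b0 + b1 * a for a in x]

ss_total = sum((b - ybar) ** 2 for b in y)            # sum of (y - ybar)^2
ss_reg = sum((h - ybar) ** 2 for h in yhat)           # sum of (yhat - ybar)^2, df = 1
ss_err = sum((b - h) ** 2 for b, h in zip(y, yhat))   # sum of (y - yhat)^2, df = n - 2

mse = ss_err / (n - 2)        # about 104.44
f_calc = (ss_reg / 1) / mse   # about 3.51, below F_crit = 4.67
print(round(mse, 2), round(f_calc, 2))
```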
T tests for slope ( β 1 ) and intercept ( β 0 )
Hypotheses

In the intercept line we test: H0: β0 = 0 vs. HA: β0 ≠ 0
In the slope line we test: H0: β1 = 0 vs. HA: β1 ≠ 0

*NOTE: the slope hypotheses are exactly the same as our hypotheses for the regression model ANOVA.
* With a t test, we can test whether the intercept or slope is equal to any number, not just zero.
Parameter estimates table (β̂0 and β̂1)
Recall the parameter estimates from the hand calculations:
β̂1 = Sxy / Sxx = 0.70
β̂0 = ȳ − β̂1·x̄ = 78.38 − (0.70)(85.06) = 18.84
Standard error of the intercept:
s_{β̂0} = √(MSE·(1/n + x̄²/Sxx)) = 31.94
Standard error of the slope:
s_{β̂1} = √(MSE / Σ(x − x̄)²) = √(104.44 / 745.59) = 0.37
t test for the intercept:
t_calc = (β̂0 − 0) / s_{β̂0} = (18.84 − 0) / 31.94 = 0.59
t test for the slope:
t_calc = (β̂1 − 0) / s_{β̂1} = (0.70 − 0) / 0.37 = 1.87
(1.87 uses the unrounded standard error 0.374; with the rounded 0.37 you would get about 1.89.)
t_crit = t(α/2 = 0.025, dfError = 13) = ±2.16
* Note that this is a two-sided t test.
Intercept: H0: β0 = 0, HA: β0 ≠ 0. Since |t_calc| < t_crit, at α = 0.05 we fail to reject the null hypothesis and conclude we have insufficient evidence to say the intercept is not equal to 0.

Slope: H0: β1 = 0, HA: β1 ≠ 0. Since |t_calc| < t_crit, at α = 0.05 we fail to reject the null hypothesis and conclude we have insufficient evidence to say the slope is not equal to 0.

* Note that this is the same result that we found using the F test in the ANOVA table.
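The standard errors and t statistics follow directly from MSE, Sxx, n, and x̄. A Python sketch using the rounded slide values (the intercept SE formula is the standard simple-linear-regression one, written out here because the slide's symbols did not extract cleanly):

```python
import math

# Quantities from the hand calculations above (rounded as on the slides):
n, xbar = 15, 85.06
Sxx = 745.59            # sum of (x - xbar)^2
mse = 104.44            # SSError / dfError
b0, b1 = 18.84, 0.70    # parameter estimates

se_b1 = math.sqrt(mse / Sxx)                        # about 0.37
se_b0 = math.sqrt(mse * (1 / n + xbar**2 / Sxx))    # about 31.94

t_b1 = (b1 - 0) / se_b1   # about 1.87
t_b0 = (b0 - 0) / se_b0   # about 0.59

# Both |t_calc| values fall below t_crit = 2.16, so neither null is rejected.
print(round(se_b0, 2), round(se_b1, 2), round(t_b0, 2), round(t_b1, 2))
```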
Why would we want to do this?
- to find out if a linear relationship exists between the variables (F test and t test for the slope, t test for the intercept)
- to find out if the intercept is significantly different from a given number (here we used zero as our value of interest, but in theory we could test any number)
Confidence Intervals
b) Calculate a 95% confidence interval for the intercept and slope. Please show all of your work.
95% confidence interval for the intercept:
β̂0 − t_{α/2, dfError}·s_{β̂0} ≤ β0 ≤ β̂0 + t_{α/2, dfError}·s_{β̂0}
95% confidence interval for the intercept:
β̂0 − t_{α/2, dfError}·s_{β̂0} ≤ β0 ≤ β̂0 + t_{α/2, dfError}·s_{β̂0}
18.84 − (2.16)(31.94) ≤ β0 ≤ 18.84 + (2.16)(31.94)
−50.15 ≤ β0 ≤ 87.83
We are 95% confident that the true β0 lies between −50.15 and 87.83. In repeated sampling, 95% of the resulting CIs will contain the true intercept.
95% confidence interval for the slope:
β̂1 − t_{α/2, dfError}·s_{β̂1} ≤ β1 ≤ β̂1 + t_{α/2, dfError}·s_{β̂1}
95% confidence interval for the slope:
β̂1 − t_{α/2, dfError}·s_{β̂1} ≤ β1 ≤ β̂1 + t_{α/2, dfError}·s_{β̂1}
0.70 − (2.16)(0.37) ≤ β1 ≤ 0.70 + (2.16)(0.37)
−0.10 ≤ β1 ≤ 1.50
We are 95% confident that the true β1 lies between −0.10 and 1.50. In repeated sampling, 95% of the resulting CIs will contain the true slope.
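Both intervals can be checked in a few lines of Python, using the rounded estimates and standard errors from the slides:

```python
# Rounded estimates and standard errors from the hand calculations:
b0, se_b0 = 18.84, 31.94
b1, se_b1 = 0.70, 0.37
t_crit = 2.16  # t(alpha/2 = 0.025, df = 13)

ci_b0 = (b0 - t_crit * se_b0, b0 + t_crit * se_b0)  # about (-50.15, 87.83)
ci_b1 = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)  # about (-0.10, 1.50)

# Both intervals contain 0, matching the non-significant t tests above.
print([round(v, 2) for v in ci_b0], [round(v, 2) for v in ci_b1])
```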
Predict the estimated final grade in the introductory probability course for a student who had a final grade of 65 in calculus III.
Predict the estimated final grade in the introductory probability course for a student who had a final grade of 65 in calculus III.

x0 = 65
ŷ = β̂0 + β̂1·x0 = 18.84 + (0.70)(65) = 64.34
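The prediction is a single plug-in calculation on the fitted line, easy to confirm:

```python
# Plug x0 = 65 into the fitted line, using the slides' rounded estimates.
b0, b1 = 18.84, 0.70
x0 = 65
y_pred = b0 + b1 * x0   # 18.84 + 45.5 = 64.34
print(round(y_pred, 2))
```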
ICES forms
#Set your variables before fitting your model
Y <- data$YourYVariable
X <- data$YourXVariable

#Fit a simple linear regression model
model <- lm(Y ~ X)

#Look at the output from the model fit
anova(model)
summary(model)

#Intermediate step: calculate the residuals and predicted values
residuals <- model$residuals
predicted.values <- model$fitted.values

#Make a residual plot
plot(residuals ~ predicted.values)
abline(h = 0, lty = 2)

#Calculate the confidence intervals for the slope and intercept
confint(model)

#Calculate a 95% CI for the expected value of y at X = 22.4
predict(model, data.frame(X = 22.4), interval = "confidence", level = 0.95)

#Calculate a 95% prediction interval for y at X = 22.4
predict(model, data.frame(X = 22.4), interval = "prediction", level = 0.95)