
To assess the precision of a laboratory scale, we measure a block known to have a mass of 1 gram. We measure the block n times and record the mean x¯ of the measurements. Suppose the scale readings are normally distributed with unknown mean µ and standard deviation σ = 0.001 g. How large should n be so that a 95% confidence interval for µ has a margin of error of ± 0.0001 g?

Answers (1)
  1. 19 March, 17:15
    n = 385

    The formula for the confidence interval (CI) is

    CI = m ± z*σ/sqrt (n)

    where m = mean, z = the z value from the standard normal table for the desired confidence level, σ = standard deviation, and n = number of samples.

    Since we want a 95% confidence interval, half of the central 95% of the distribution lies on each side of the mean, so we divide 95/2 = 47.5. Looking up 0.475 in a standard normal table gives a z value of 1.96.

    Since we want the margin of error to be ± 0.0001, we set the expression ± z*σ/sqrt (n) equal to ± 0.0001. Omitting the ± for simplicity:

    0.0001 = z*σ/sqrt (n)

    Substitute the z value we looked up and the standard deviation we were given:

    0.0001 = 1.96*0.001/sqrt (n)
    0.0001 = 0.00196/sqrt (n)

    Solve for n:

    0.0001*sqrt (n) = 0.00196
    sqrt (n) = 19.6
    n = 19.6² = 384.16

    Since you can't take a fractional number of measurements, round up: n should be at least 385 for a 95% confidence interval with a margin of error of ± 0.0001 g.
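    As a sanity check, here is a minimal Python sketch of the same computation. The variable names and the use of the standard library's NormalDist are illustrative choices, not part of the original problem:

    import math
    from statistics import NormalDist

    # Givens from the problem statement
    sigma = 0.001      # standard deviation of the scale readings (grams)
    E = 0.0001         # desired margin of error (grams)
    confidence = 0.95

    # Two-sided critical value z*: P(-z* <= Z <= z*) = 0.95, so we need
    # the 97.5th percentile of the standard normal distribution.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # ~1.96

    # Margin of error E = z*sigma/sqrt(n)  =>  n = (z*sigma/E)^2
    n_exact = (z * sigma / E) ** 2
    n = math.ceil(n_exact)   # n must be a whole number of measurements

    print(f"z = {z:.4f}, exact n = {n_exact:.2f}, required n = {n}")
    # prints: z = 1.9600, exact n = 384.15, required n = 385

    Rounding up (rather than to the nearest integer) matters here: 384 measurements would leave the margin of error slightly above the 0.0001 g target.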