20 June, 06:51

Note that f (x) is defined for every real x, but it has no roots. That is, there is no x* such that f (x*) = 0. Nonetheless, we can find an interval [a, b] such that f (a) < 0 < f (b): just choose a = -1, b = 1. Why can't we use the intermediate value theorem to conclude that f has a zero in the interval [-1, 1]?

Answers (1)
  1. 20 June, 07:14
    Answer: Hello there!

    Things that we know here:

    f (x) is defined for every real x

    f (a) < 0 < f (b), where we assume a = -1 and b = 1

    and the problem asks: "Why can't we use the intermediate value theorem to conclude that f has a zero in the interval [-1, 1]?"

    The theorem says:

    if f is continuous on the interval [a, b], and f (a) < u < f (b), then there exists a number c in the interval [a, b] such that f (c) = u.

    Notice that the theorem requires the function to be continuous on the interval, and here we are not told that f (x) is continuous, so we cannot apply it. In fact, since f has no roots at all, f cannot be continuous on [-1, 1]: if it were, the theorem (with u = 0) would force a zero somewhere in the interval, contradicting the assumption.
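
    For intuition, here is a minimal Python sketch. The question never writes out f, so the function g below is only a hypothetical stand-in with the same stated properties: defined for every real x, never zero, yet negative at -1 and positive at 1. Because it jumps over 0, the continuity hypothesis of the theorem fails and no root needs to exist.

    # Hypothetical example only: the original f is not shown, so g is an
    # assumed function with the same stated properties.
    def g(x):
        # Jump discontinuity at x = 0: g never takes the value 0.
        return -1.0 if x < 0 else 1.0

    # The sign change across [-1, 1] is there ...
    assert g(-1) < 0 < g(1)

    # ... but a fine scan of [-1, 1] never finds a zero, because g jumps
    # from -1 to 1 without passing through 0 (it is not continuous at 0).
    xs = [-1 + i * 0.0001 for i in range(20001)]
    print(any(g(x) == 0 for x in xs))   # prints False: no root in [-1, 1]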