Dan Asimov writes:
... This strategy, however, would certainly benefit from a theorem in between the above definition of integrals and the calculation thereof, stating that a large class of functions (e.g., piecewise continuous ones) actually *are* integrable in the sense mentioned.
Stewart includes exactly such a theorem.

Andy Latto writes:
Pedagogically, I think it makes sense to do things the other way around. This is a class for people who do not have a lot of experience with mathematics, and the problem people have understanding mathematics is the high level of abstraction involved. The way to combat this is by being as concrete as possible at first, and only introducing abstraction as needed.
I'm in agreement with this.

Dan asks:
Jim did, by the way, say this is an honors course. (Is it honors for anyone, or mainly honors math majors?)
Most of the students have declared an intention to major in physics or engineering. My goal is to get at least a few of them to decide that math is cooler.
Pedagogy aside, certainly one could cook up a function having the value 1 on all points of the form k/n for integers k,n such that 0 <= k/n <= 1, and the value 2 at other points.
In fact, on the homework I define the function f(x) that's 1 when x is rational and 2 when x is irrational, and ask them to show that the function isn't integrable on [0,1].
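To see concretely why this function can't be integrable, one can compute two tagged Riemann sums over the same equal-width partition, one with rational tags and one with irrational tags. A minimal Python sketch (my illustration, not part of the homework; since a float can't certify irrationality, the demo encodes rational tags as exact `Fraction`s and irrational tags as floats built from sqrt(2)):

```python
import math
from fractions import Fraction

def f(x):
    # f(x) = 1 if x is rational, 2 if x is irrational.  In this demo,
    # rational tags are exact Fractions and irrational tags are floats
    # constructed from sqrt(2), so the type records rationality.
    return 1 if isinstance(x, Fraction) else 2

def riemann_sum(tags, width):
    # Tagged Riemann sum over an equal-width partition, one tag per cell.
    return sum(f(t) * width for t in tags)

n = 1000
width = Fraction(1, n)
# Tag each cell [k/n, (k+1)/n] at its rational left endpoint...
rational_tags = [Fraction(k, n) for k in range(n)]
# ...or at the irrational point k/n + (sqrt(2)/2)/n inside the same cell.
irrational_tags = [k / n + math.sqrt(2) / (2 * n) for k in range(n)]

print(riemann_sum(rational_tags, width))    # 1
print(riemann_sum(irrational_tags, width))  # 2
```

No matter how fine the partition, the two choices of tags give sums pinned at 1 and 2, so the Riemann sums have no common limit.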
Let's call this method EWAP. So is EWAP integrability equivalent to APAP integrability, and do the two yield the same values?
That's a good, concise statement of the mathematical issue (as opposed to the pedagogical issue) that I'm raising.
Conversely, there is a chance that a strange singularity near 0, say, could make a function f:[0,1] -> R EWAP integrable but not APAP integrable.
I suspect not, but I don't see a proof off-hand.
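Numerically, at least, the two flavors of Riemann sum agree on nice functions: a quick Python check (my own sketch, using f(x) = x^2 on [0,1] as an example, with left-endpoint tags) compares an equal-width partition against a random uneven one whose mesh also shrinks:

```python
import random

def left_riemann_sum(f, cuts):
    # Riemann sum tagged at left endpoints, over an arbitrary
    # partition given by sorted cut points 0 = x0 < ... < xN = 1.
    return sum(f(a) * (b - a) for a, b in zip(cuts, cuts[1:]))

f = lambda x: x * x   # integral over [0,1] is 1/3
n = 100_000

# Equal-width partition (the EWAP flavor)...
equal = [k / n for k in range(n + 1)]
# ...and an uneven random partition whose mesh still shrinks as n grows.
random.seed(0)
uneven = sorted({0.0, 1.0, *(random.random() for _ in range(n))})

print(abs(left_riemann_sum(f, equal) - 1 / 3))   # small
print(abs(left_riemann_sum(f, uneven) - 1 / 3))  # also small
```

Both sums land within about one mesh-width of 1/3, which is of course only evidence, not a proof, that the two notions coincide.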
Good question.
Thanks!

Stephen Gray writes:
When you go from equal intervals to unequal ones, it's important to specify exactly how all the intervals approach zero width.
Stewart's definition requires that the width of the widest interval (aka the "mesh" of the partition) must go to zero.
I imagine that the limit is affected by how the limit is reached.
One might worry that using a partition in which some intervals are much wider than others would mess up the answer, but this isn't the case, in the limit where the mesh goes to zero.

For a situation in which Stephen's qualms *do* apply, see the "cylinder area paradox": one can define a sequence of polyhedral approximations to the cylinder whose surface areas do not converge to the true surface area of the cylinder, because the triangular faces of the polyhedra get too long and skinny, even though their diameters shrink to zero. (Does anyone know a good web reference for this? Frieda Zames' article "Surface Area and the Cylinder Area Paradox" is only available via JSTOR, as far as I can tell.)

Jim Propp
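[A numerical footnote on the cylinder area paradox: the construction usually attributed to Schwarz (the "Schwarz lantern") can be checked directly. The sketch below, my own illustration, cuts a cylinder of radius r and height h into m horizontal bands, each triangulated by 2n congruent triangles with vertices on the cylinder; each triangle has base 2r sin(pi/n), a chord, and height sqrt((h/m)^2 + (r - r cos(pi/n))^2).]

```python
import math

def lantern_area(r, h, n, m):
    # Total area of the Schwarz lantern inscribed in a cylinder of
    # radius r and height h: m bands of 2n triangles, each triangle
    # having base 2*r*sin(pi/n) and height hypot(h/m, r*(1 - cos(pi/n))).
    base = 2 * r * math.sin(math.pi / n)
    height = math.hypot(h / m, r * (1 - math.cos(math.pi / n)))
    return 2 * n * m * 0.5 * base * height

r, h = 1.0, 1.0
print(2 * math.pi * r * h)               # true lateral area, about 6.283
print(lantern_area(r, h, 100, 100))      # m = n: close to the true area
print(lantern_area(r, h, 100, 100**3))   # m = n^3: area blows up
```

With m = n the lantern area converges to 2*pi*r*h, but with m growing much faster than n the triangles become long and skinny and the total area diverges, even though every triangle's diameter goes to zero.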