In playing with my GPS unit (a Garmin GPS V) and its mapping functions, I became curious about how to minimize the data stored for the various streets. In particular, I was thinking only about the paths of the streets themselves, not ancillary data such as street names. I noticed that the encoded streets do not always follow the actual streets, and I understand this to be a data compression problem, not a data gathering problem: Navteq, which gathers the data, drives all the streets and collects positions to within the error of the GPS satellite system, which in most cases is 20-30 feet. While that is good enough to distinguish the two sides of a freeway, it isn't good enough to distinguish one lane of the freeway from the next.

Nevertheless, the streets encoded in Garmin's digital maps are not stored even to this degree of precision, and in several cases the unit thinks you have left the road when you haven't. I have compared the "bread crumb" trails of GPS points against an actual paper map and found that the paper map (and the trail of bread crumbs) was more accurate than the encoded streets.

The problem is very similar to encoding a waveform (Shannon, etc.), but it isn't exactly the same, since the goal in encoding a road is to keep the error distance from the actual road within a bound rather than to minimize "bandwidth". In particular, I would imagine that the usual "ripple" problems of polynomial approximation won't matter, so long as the absolute value of the error stays within bounds. Chebyshev approximations can be very high quality, but I'm not aware of any _information theoretic_ analysis of them. That is, is anyone aware of any theory or theorems governing the number of bits required to stay within a particular error bound of a given continuous function?
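For concreteness, here is the kind of compression I have in mind, as a rough Python sketch. It is not a claim about how Garmin or Navteq actually encode their maps; it just simplifies a surveyed "bread crumb" trail with the Ramer-Douglas-Peucker algorithm so every dropped point stays within a chosen error bound of the stored polyline, then quantizes the surviving points to a grid comparable to that bound. The trail data, epsilon, and grid step are made up for illustration, and coordinates are assumed to already be in a planar projection (meters), not raw lat/lon.

    import math

    def perpendicular_distance(pt, a, b):
        """Distance from pt to the segment a-b."""
        (px, py), (ax, ay), (bx, by) = pt, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        # Project pt onto the segment, clamped to its endpoints.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        cx, cy = ax + t * dx, ay + t * dy
        return math.hypot(px - cx, py - cy)

    def rdp(points, epsilon):
        """Ramer-Douglas-Peucker: keep a subset of points such that every
        discarded point lies within epsilon of the simplified polyline."""
        if len(points) < 3:
            return list(points)
        # Find the point farthest from the chord between the endpoints.
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = perpendicular_distance(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax > epsilon:
            # The farthest point violates the bound: recurse on both halves.
            left = rdp(points[:index + 1], epsilon)
            right = rdp(points[index:], epsilon)
            return left[:-1] + right
        # Every intermediate point is within epsilon of the chord: drop them all.
        return [points[0], points[-1]]

    def quantize(points, step):
        """Snap coordinates to a grid of the given step (meters), keeping the
        quantization error small relative to the simplification budget."""
        return [(round(x / step) * step, round(y / step) * step) for x, y in points]

    if __name__ == "__main__":
        # Hypothetical surveyed trail in local meters: a straight stretch,
        # a turn, then another straight stretch, with a little jitter.
        trail = [(0, 0), (10, 1), (20, -1), (30, 0), (40, 12), (50, 25),
                 (60, 24), (70, 26), (80, 25), (90, 25), (100, 25)]
        simplified = rdp(trail, epsilon=5.0)
        stored = quantize(simplified, step=2.0)
        print(f"{len(trail)} points -> {len(stored)} points: {stored}")

With the retained coordinates quantized to a step comparable to the error bound and delta-encoded between consecutive points, the cost is roughly log2(range/step) bits per coordinate per retained point. What I am asking is whether there is a theoretical handle on how small that total bit count can be made for a given error bound.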