By critical points I assume that you mean points at which both partial derivatives vanish (that's compatible with your remark about the discriminant).

Let R be a disk centered at the origin of radius 2. Let f(x,y) = (x^2 + y^2 - 1)((x-5)^2 + (y-5)^2 - 1). Then the locus of f(x,y) = 0 is the union of a circle of radius 1 centered at the origin and a circle of radius 1 centered at (5,5). This clearly avoids the boundary of R and has no critical points in R.

Victor

On Fri, May 30, 2014 at 11:34 AM, Fred Lunnon <fred.lunnon@gmail.com> wrote:
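A quick numerical spot-check of Victor's example above, sketched in NumPy (the helper names f and grad_f and the sampling resolution are illustrative choices, not notation from the thread): it confirms that f vanishes on the inner unit circle, that the gradient of f is nonzero all along that circle, and that f is nonzero everywhere on the boundary circle of R (radius 2).

    import numpy as np

    def f(x, y):
        return (x**2 + y**2 - 1) * ((x - 5)**2 + (y - 5)**2 - 1)

    def grad_f(x, y):
        g = x**2 + y**2 - 1
        h = (x - 5)**2 + (y - 5)**2 - 1
        return 2*x*h + 2*(x - 5)*g, 2*y*h + 2*(y - 5)*g

    t = np.linspace(0.0, 2*np.pi, 10000)

    # f vanishes on the unit circle, which lies well inside R ...
    print(np.max(np.abs(f(np.cos(t), np.sin(t)))))         # ~0 (round-off only)

    # ... the gradient of f never vanishes there, so that circle is smooth ...
    fx, fy = grad_f(np.cos(t), np.sin(t))
    print(np.min(np.hypot(fx, fy)))                        # comfortably positive

    # ... and f never vanishes on the boundary circle of R (radius 2).
    print(np.min(np.abs(f(2*np.cos(t), 2*np.sin(t)))))     # comfortably positive

(Under the alternative reading used elsewhere in the thread, f = df/dx = 0, the two points (0, 1) and (0, -1) on the inner circle do count as critical points lying inside R.)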
<< for otherwise you could just take a product of a bunch of such and your locus would be the union of all the curves >>
Don't follow --- why does that affect things? WFL
On 5/30/14, Victor Miller <victorsmiller@gmail.com> wrote:
Fred, I took your reference to "critical points" as meaning singularities, especially when you mentioned that one way to find them was to find the discriminant, and then find the roots of that. However, on second thought, if you mean that the discriminant has no real roots, then the real locus of an elliptic curve will have only one component, and thus not be a counter-example to what you hoped for.

Now, of course, you need your f(x,y) irreducible over the reals, for otherwise you could just take a product of a bunch of such and your locus would be the union of all the curves.
Victor
On Fri, May 30, 2014 at 10:52 AM, Fred Lunnon <fred.lunnon@gmail.com> wrote:
(1) Dan has admonished me in private, and perfectly correctly, for confusing surfaces and curves with functions. In future I shall try to restrain my wayward imagination: fundamentally, it's functions we're discussing.
In particular, my "critical points" --- where some relevant combination of differentials vanishes --- depend essentially on the coordinate frame (x,y) , the particular polynomial function f(x,y) , and whether I'm discussing: z = f(x,y) in 3-space, with CR's at df/dx = df/dy = 0 ; or y(x) or x(y) defined implicitly (and locally) by f = 0 in 2-space, with CR's at f = df/x = 0 , etc.
(2) That confusion misled me into proposing a bogus counter-example to my 3-space algorithm, based on a "tilting cigar" surface meeting the plane z = 0 in the boundary of the 2-space region of interest R , but rising to a maximum z at a point P which projects down to a point outside R .
Intuitively (aha!) this is impossible for simply-connected R , since the surface would have to buckle under itself, and would no longer represent a single-valued function.
(3) But as Warren's example of a circle of zeros within annular R shows, simple-connectivity is necessary (the surface can be a paraboloid; a small numerical check of this example is sketched after this message). This means that any proof is going to involve more topological nous than I can currently muster.
(4) Furthermore a "pedestrian" 2-space algorithm actually provides more information about what region R can be guaranteed free of zeros.
(5) Andy's counter-example --- a straight line of zeros within an infinite strip R --- I did foresee, but ignored on the grounds that it can be fixed by compactification. Adjoin a complex point or projective line at infinity: the boundary of R then includes points at infinity where the line meets it.
(6) I didn't follow Victor's reasoning concerning elliptic curves, I'm afraid; which may well be a problem of communication, given my own earlier muddle. In particular, at least two points on his interior oval will satisfy f = df/dx = 0 .
(7) But it's probably more constructive to work through an actual case illustrating my proposal in action: so I shall go prepare an example.
Fred Lunnon
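A small numerical illustration of point (3) above (a sketch in NumPy; the concrete choices, the paraboloid f = x^2 + y^2 - 1 and the annulus 1/2 <= r <= 3/2, are one way of realising Warren's example rather than necessarily his exact one): f is nonzero on both boundary circles of the annulus, its only critical point (the origin) lies outside the annulus, and yet its whole zero set, the circle r = 1, lies inside the annulus.

    import numpy as np

    def f(x, y):
        return x**2 + y**2 - 1      # surface z = f(x,y) is a paraboloid, zero on r = 1

    t = np.linspace(0.0, 2*np.pi, 1000)

    # no zeros on either boundary circle of the annulus 1/2 <= r <= 3/2 ...
    print(np.min(np.abs(f(0.5*np.cos(t), 0.5*np.sin(t)))))   # 0.75
    print(np.min(np.abs(f(1.5*np.cos(t), 1.5*np.sin(t)))))   # 1.25

    # ... df/dx = 2x and df/dy = 2y vanish only at the origin, outside the
    # annulus, yet f = 0 on the whole circle r = 1 inside it.
    print(np.max(np.abs(f(np.cos(t), np.sin(t)))))           # ~0 (round-off only)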
On 5/30/14, Victor Miller <victorsmiller@gmail.com> wrote:
Fred, you need some other hypothesis. For example, the elliptic curve y^2 = x^3 - x has as its real locus two disconnected ovals (one of them passes through infinity, so looks like an open oval). Since they're disconnected, you can surround one by a circle not encroaching on the other.
Victor
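A one-line check of the two components (a sketch using SymPy, assuming only the curve y^2 = x^3 - x quoted above): real points (x, y) exist exactly where x^3 - x >= 0, and SymPy reports that set as the union of [-1, 0] (the bounded oval) and [1, oo) (the branch through infinity).

    import sympy as sp

    x = sp.symbols('x', real=True)
    # y is real exactly where x^3 - x >= 0
    print(sp.solveset(x**3 - x >= 0, x, domain=sp.S.Reals))
    # Union(Interval(-1, 0), Interval(1, oo))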
On Thu, May 29, 2014 at 1:01 PM, Fred Lunnon <fred.lunnon@gmail.com> wrote:
Given a plane curve C defined by f = 0 with f(x, y) polynomial, and a region R of the plane (possibly extending to infinity), I assert that C avoids R provided that (a) C avoids the boundary of R , and (b) C has no critical points within R .
There must be a well-known theorem to this effect (unless, of course, it's actually false --- a situation by no means previously unknown). But I don't know a reference (or a counter-example) --- anybody?
A straightforward way to locate the critical points seems to be to compute the discriminant g of f with respect to (say) x , then find the roots of g(y) = 0 . Is there a more respectable alternative?
WFL
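For concreteness, here is a sketch of that recipe in SymPy; the test polynomial, Victor's curve y^2 = x^3 - x from the reply above, is a choice of example rather than anything prescribed in the thread. It computes the discriminant of f with respect to x, finds the real roots of the resulting polynomial in y, and recovers the points where f = df/dx = 0.

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    f = y**2 - x**3 + x                  # test curve: y^2 = x^3 - x

    g = sp.discriminant(f, x)            # a polynomial in y alone
    print(g)                             # equals 4 - 27*y**4

    fx = sp.diff(f, x)
    for y0 in [r for r in sp.solve(g, y) if r.is_real]:
        # x-values where df/dx vanishes at height y0, kept only if f = 0 there too
        pts = [x0 for x0 in sp.solve(fx.subs(y, y0), x)
               if sp.simplify(f.subs({x: x0, y: y0})) == 0]
        print(y0, pts)

    # Both real roots of the discriminant give x = -sqrt(3)/3: these are the
    # two points on the bounded oval satisfying f = df/dx = 0, as noted in
    # point (6) above.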