Marius Overholt (not verified), 4 years 28 weeks ago: Smoothing sums
It was actually Waclaw Sierpinski who obtained the exponent 1/3 in the Gauss circle problem, as part of his doctoral dissertation; he published the result in 1906 (in Polish). His adviser Georgy Voronoi had obtained the same exponent in the Dirichlet divisor problem in 1903. There is an article by Andrzej Schinzel about Sierpinski's papers in number theory in volume XXI of Acta Mathematica (1972). There is also supposed to exist a description of Sierpinski's proof in German (though I have never seen it), possibly in some Jahresbericht der Deutschen Mathematiker-Vereinigung from those years. Schinzel states that the proof is by Voronoi's geometric method. That would mean that the circle is approximated by a polygon, and Euler-Maclaurin summation is applied on each piece.
I would suggest using the less eponymous term "Shatter function lemma" (that Matousek uses in his Lectures on Discrete Geometry) in addition to "Sauer-Shelah lemma". I personally have known this lemma for a long time, but always under the more descriptive name "Shatter function lemma".
It should be Acta Arithmetica, of course!
In Proof of Theorem 2:
... use Theorem 1 to identify \mathrm{Hom}_{k(G)}(U,U \otimes_k V) with V (not U),
we see that we may regard S as a subspace of V (not U).
Beyond this, a very nice article.
Very good, but it would be nice if the method of separation of variables were included.
I don't find the article unintelligible at all; perhaps you could clarify what you find problematic.
Also, editing a page to correct typos (like the missing coefficient you spotted) is very simple: just click on edit at the top of the article. I've corrected it here.
This article is unintelligible.
How can it be possible to turn the method of completing the square into such gibberish?
By the way, a(x+x_0)^2 = ax^2 + 2ax x_0 + x_0^2 is false.
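For the record, the correct expansion (my own addition, just to pin down the error) is

```latex
a(x + x_0)^2 = a x^2 + 2 a x_0 x + a x_0^2 ,
```

so the formula quoted above is missing a factor of a on the x_0^2 term.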
This should probably be restructured to match the form of the articles under "What kind of problem am I trying to solve?". I might even do so myself if I have the time and energy...
This describes one of the most fundamental and general mathematical techniques there is, one so basic that probably most math majors find it second nature already. As such, the description of it as a general technique may be most useful for lower-level pedagogy; failing to grasp this general technique can make math very hard for primary and secondary school students. However, I would love to have examples of its use at all levels and fields, since it is so general-purpose.
Blech, yes, I got stuff reversed. Fixing.
It's a Wiki. Usual advice is to rewrite the style yourself. :-)
My personal favorite way of understanding cubics is the following. In analogy to quadratic equations, try a solution of the form u + v, figuring once we get a single root we can get the other two in short order. Then get rid of the bx^2 term like you did, obtaining x^3 + cx + d = 0. The idea now is to cancel the cx term too. Namely, x^3 = u^3 + 3uv(u + v) + v^3, and cx = c(u + v), so to cancel the cx
term and the 3uv(u + v) term at the same time one stipulates that 3uv = -c.
As a result, one has two equations now: u^3 + v^3 + d = 0, and 3uv = -c. Plugging
v = -c/(3u) into the first equation gives a quadratic equation in u^3, which yields
six values of u (two roots for u^3, each with three cube roots). To each of these there corresponds a single v = -c/(3u) that works, and one then obtains six solutions u + v. But of course u and v are symmetric here, so we really just have three solutions.
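The recipe above can be sketched in code. This is my own illustrative implementation, not from the comment: the function name is made up, it uses Python's cmath to allow complex intermediate values, and it assumes c is nonzero so that u never vanishes.

```python
import cmath

def depressed_cubic_roots(c, d):
    """Roots of x^3 + c*x + d = 0 via the substitution x = u + v with 3uv = -c."""
    # u^3 and v^3 are the two roots of t^2 + d*t - (c/3)^3 = 0, since
    # u^3 + v^3 = -d and (uv)^3 = (-c/3)^3.
    u3 = -d / 2 + cmath.sqrt(d * d / 4 + (c / 3) ** 3)
    roots = []
    for k in range(3):
        # the three cube roots of u^3; the other three u's give the same x's
        u = u3 ** (1 / 3) * cmath.exp(2j * cmath.pi * k / 3)
        v = -c / (3 * u)  # the matching v, enforcing 3uv = -c (assumes c != 0)
        roots.append(u + v)
    return roots
```

For instance, x^3 - 7x + 6 = 0 has roots 1, 2, -3, and the six u's pair up into exactly these three values of u + v.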
For consistency I've now decided to use \chi throughout that section.
Before (1), the equation for the Fourier transform has \chi when it should have \xi.
I don't think one really needs to apply the axiom of choice in this context: since the multifunction takes values in the positive integers, we can just define the corresponding function by taking its value to be the least member of the set of values of the multifunction. (This process will then, I guess, secretly take us back to the original function construction, where we write each rational in lowest terms, but we don't need to pay attention to this if we don't want to.)
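The "least member" selection can be made concrete. Here is a small sketch of my own (the function name is invented), using the rationals example: among all positive n with n*q an integer, take the least one.

```python
from fractions import Fraction

def least_multiplier(q: Fraction) -> int:
    """Least positive integer n such that n*q is an integer (choice-free selection)."""
    n = 1
    while (n * q).denominator != 1:
        n += 1
    return n
```

As the comment predicts, this secretly recovers the lowest-terms denominator: for 6/4 it returns 2, not 4.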
This article should not be part of the Tricki.
I like how you snuck in the Axiom of Choice here, and hid it under the rug since its use is "clear".
Should "difficult" be changed to "easy"?
There is a typo in the second paragraph of section Example 1, continued: one should cancel the symbol in the two occurrences of
the expected value of given that
And, reading the last paragraph of this section, I wondered why you picked as an example of extremely unlikely deviation. First, would already yield a rather respectable (i.e., small) bound, namely, something like . Second, there is nothing specific about here, although the formulation seems to indicate otherwise: you might want to add a "for example" somewhere.
Thank you for this article.
Does it mean the re-write needed to see "variance" is small? Or is it because "partition" naturally suggests "statistics"?
There are reasons to prefer habitually testing for Cauchy-Schwarz, or even Jensen's inequality: for instance, they are both more flexible than "variance".
What is a "smart exhaust"?
This is a nice comment of yours, and it does seem to me that the best thing to do Tricki-wise is what you more or less suggest. That is, in this article one could say something like, "It is not hard to prove that the numbers , and cannot be factorized any further" and then say "For a proof, see ..." Then there could be a separate article on this norm trick. Or perhaps the norm trick could be thought of as an example of the more general trick, which actually came up in a discussion about proving facts about groups, of finding a homomorphism to some structure that's simpler to analyse.
Is it clear to the intended reader that none of those numbers can be factored any further? Of course one does something like trial division, but it's not obvious what "something like trial division" is.
One possible proof that 2, 3, 1 + \sqrt{-5}, and 1 - \sqrt{-5} can't be factored any further is as follows. Let N(a + b\sqrt{-5}) = a^2 + 5b^2 (I use N because this is a "norm"). This norm is multiplicative, since it's the square of the complex absolute value. Then if, say, there exist non-unit elements a + b\sqrt{-5} and c + d\sqrt{-5} of Z[\sqrt{-5}] such that
(a + b \sqrt{-5}) (c + d \sqrt{-5}) = 3
then they have norms either 1 and 9 or 3 and 3, respectively. There are no elements of the ring with norm 3, and the only elements of norm 1 are units. Similar proofs work for 2, 1 + \sqrt{-5}, and 1 - \sqrt{-5}; in general, if an element of Z[\sqrt{-5}] can be factored, then its norm can be factored, and the factorization of a + b\sqrt{-5} will be a "lifting" of the factorization of the norm a^2 + 5b^2.
("Take norms" seems like its own trick, and perhaps this is better placed in an article on that trick.)
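The key finite check (no element of norm 2 or 3) can be verified mechanically. This is my own illustrative sketch, with an invented function name; the search is finite because a^2 + 5b^2 = m forces |a| <= sqrt(m) and |b| <= sqrt(m/5).

```python
import math

def is_norm(m: int) -> bool:
    """Does m = a^2 + 5*b^2 have a solution in integers a, b?"""
    for b in range(math.isqrt(m // 5) + 1):
        rest = m - 5 * b * b  # nonnegative by the bound on b
        if math.isqrt(rest) ** 2 == rest:  # is rest a perfect square a^2?
            return True
    return False
```

Since 2 and 3 are not norms while 6 = N(1 + \sqrt{-5}) and 9 = N(3) are, any factorization of 3 into two non-units would need two factors of norm 3, which don't exist.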
Closely related is Perron's expression for the solution of the Dirichlet problem for harmonic functions, as an infimum of superharmonic majorants.
I'm not really sure about the etiquette for this site, so I hope this comment isn't too low-level.
In computer science, at least, pretty much the first example of greedy algorithms is for the problem of finding a minimum weight spanning tree in a graph (and then the generalization to the characterization of matroids by greedy algorithms).
Also, sometimes when greedy algorithms don't work optimally, you can still get something useful, like finding a linear-sized independent set in a planar graph or approximating set cover. (There are many examples of this nature.)
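For the minimum spanning tree example, the greedy algorithm (Kruskal's) can be sketched in a few lines. This is my own minimal illustration, not code from the comment; it uses a simple union-find to detect cycles.

```python
def kruskal_mst(n, edges):
    """edges: list of (weight, u, v) on vertices 0..n-1; returns a list of MST edges."""
    parent = list(range(n))

    def find(x):  # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):  # greedily consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:  # the edge joins two components, so it creates no cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```

On a 4-cycle with a heavy diagonal, the greedy choice keeps the three cheapest edges and correctly rejects the two that would close cycles.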