I am not quite sure where to put this article. There is a 'use convexity' link in the 'estimating integrals' section but also one for Hölder's inequality.
I guess the best would be a link from both articles (if they exist one day) to this one?
Is there an analogue of this trick for the multidimensional Fourier transform?
Angela, I think you missed some coefficients; the correct expression should be:
And I'm sorry, but it's really difficult to understand the formula you wrote for the derivative. I think the expression you wanted to write is the following:
"It's pretty obvious that you can't cover an open disc with finitely many points and lines, but it is not quite trivial, so here's an easy proof: ..."
A simpler proof: it is impossible to cover even a circle with finitely many lines, since a line can intersect a circle in at most two points!
Only if the squares are of the same size, no? It would also work for rectangles of the same size, right?
In the example proof, instead of squares, isn't it meant to be rectangles equal to each other? If squares are allowed to have different sizes, I don't think it is true that each neighbour of a square must overlap with one of its vertices.
A question: is there some reference for this problem and proof? I realize this problem asks for bounds on m in terms of n, where n=R(m,m) for intersection graph of rectangles. Was this studied for other special graphs?
1. (Approximate counting of problems in #P)
Let #F be the number of satisfying assignments of a Boolean formula F. Using the leftover hash lemma, there exists a probabilistic polynomial-time algorithm with an oracle for SAT that, given a formula F, with high probability outputs a number a such that #F/2 ≤ a ≤ 2·#F,
i.e. a approximates #F within a factor of 2.
By replacing F with its "tensor power" F^k (the conjunction of k copies of F on disjoint sets of variables), the number of assignments rises to (#F)^k. Running the algorithm on this formula,
and taking the kth root, we get #F within a factor of 2^{1/k}; putting k = O(1/ε) we get an approximation within a factor of 1+ε in time polynomial in 1/ε and the length of F. The algorithm is due to Stockmeyer.
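The amplification step can be sketched numerically. The numbers below are hypothetical, and the SAT oracle is abstracted into a worst-case factor-2 answer:

```python
# Sketch of the tensor-power amplification: a factor-2 estimate of (#F)^k
# turns into a factor-2^(1/k) estimate of #F after taking a kth root.
true_count = 1000            # hypothetical #F
k = 20                       # tensor power F^k
exact_power = true_count ** k           # #(F^k) = (#F)^k
rough = 2 * exact_power                 # worst-case oracle answer, off by a factor of 2
estimate = rough ** (1.0 / k)           # kth root recovers #F approximately
ratio = estimate / true_count
print(ratio)                            # at most 2**(1/k), roughly 1.035
```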
2. (Independent set cannot be approximated to a constant factor)
For any constant c > 1, c-approximating the maximum independent set in a graph is NP-hard. (Taking complements, the same argument applies to clique.)
Proof idea:
The PCP theorem implies that there is a constant δ > 0 such that MAX-3SAT cannot be approximated within a factor of 1+δ. The usual reduction from 3SAT to independent set takes a formula with m clauses and gives a graph with 7m vertices, where each vertex corresponds to one of the 7 ways a clause can be satisfied, and edges correspond to conflicting assignments. The approximation gap is therefore preserved, so independent set cannot be approximated within some constant factor. Applying the "tensor square" (a graph product) repeatedly amplifies the gap, ruling out approximation within any constant factor.
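The amplification fact behind the last step - an independent set S in G gives the independent set S×S in the strong product of G with itself, so α of the product is at least α(G)² - can be checked on a toy graph. This is only an illustrative sketch on a 5-cycle, not the reduction itself:

```python
from itertools import combinations

def is_independent(S, edge_set):
    """True if no two vertices of S are joined by an edge."""
    return all(frozenset(p) not in edge_set for p in combinations(S, 2))

def product_adjacent(u, v, edge_set):
    """Strong-product adjacency: distinct vertex pairs whose coordinates
    are, in each position, either equal or adjacent."""
    return u != v and all(a == b or frozenset((a, b)) in edge_set
                          for a, b in zip(u, v))

# C5, the 5-cycle: alpha(C5) = 2.
edges = set(frozenset((i, (i + 1) % 5)) for i in range(5))
S = (0, 2)                                   # a maximum independent set in C5
assert is_independent(S, edges)

# S x S is independent in the strong product, so alpha of the product
# is >= alpha(G)^2: a c-approximation on the product gives sqrt(c) on G.
SxS = [(i, j) for i in S for j in S]
assert all(not product_adjacent(u, v, edges) for u, v in combinations(SxS, 2))
print(len(S), len(SxS))                      # 2 4
```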
Use any value δ, which can be made small enough to approximate the derivative to an arbitrary number of decimal places.
f'(x) ≈ (f(x+δ) - f(x))/δ
f''(x) ≈ ((f(x+2δ) - f(x+δ)) - (f(x+δ) - f(x)))/δ^2 = (f(x+2δ) - 2f(x+δ) + f(x))/δ^2
f'''(x) ≈ (f(x+3δ) - 3f(x+2δ) + 3f(x+δ) - f(x))/δ^3
The pattern continues. The nth derivative is
f^(n)(x) ≈ Σ_{k=0}^{n} (-1)^k C(n,k) f(x+(n-k)δ) / δ^n
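As a sanity check of the pattern, here is a short numerical sketch of the forward-difference formula (the test function and step size are just illustrative choices):

```python
from math import comb, exp

def nth_derivative(f, x, n, delta=1e-3):
    """Approximate the nth derivative of f at x by forward differences:
    sum_{k=0}^{n} (-1)^k C(n,k) f(x + (n-k)*delta) / delta^n."""
    return sum((-1) ** k * comb(n, k) * f(x + (n - k) * delta)
               for k in range(n + 1)) / delta ** n

# All derivatives of exp at 0 equal 1; the estimates approach 1 as delta shrinks.
for n in (1, 2, 3):
    print(n, nth_derivative(exp, 0.0, n))
```

Note that for large n or very small δ the differences suffer catastrophic cancellation in floating point, so δ cannot be shrunk arbitrarily in practice.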
...but I can't understand why "the proportion of x such that | f(x) - delta N | \geq (delta N / 2)" has delta N instead of delta^2 N, which is the average value of f(x) as stated previously. And I haven't been able to turn on inline commenting in Google Chrome; my apologies.
A few other articles had also been mysteriously truncated, but I have now restored them (and this one). If anyone spots another page that does not look right then please let us know, for example via the forums.
For some strange reason, all revisions of this article have been truncated at the first paragraph, and I cannot find a way to retrieve any subsequent portion of the text. Has this also occurred for other articles? Is there any way to recover some copy of the full article?
That's a great explanation. Thanks.
f(n) is independent of n if the value of f(n) does not actually depend on n, i.e. it is constant as a function of n. For instance f(n) = sin^2(n) + cos^2(n) is independent of n, as it is always 1.
Equivalently, f(n) is independent of n if one has f(m) = f(n) for all m and n.
Proving the consequence first is a great way to simplify a problem.
But I don't understand the example. What exactly does "independent" mean. How can f(n) be independent of n?
Sorry if this is a stupid question, but I would really like to understand this.
Suppose first that X is semialgebraic. Consider the function f : C → R defined by
f(z)= dist(z, X)
Then for all but finitely many positive numbers t the level set C_t={ f=t} is a smooth curve. For t sufficiently small C_t will be a curve that contains X in its interior {f< t}. As an aside, for t small, the set {f\leq t} is homotopy equivalent to X.
If X is not semialgebraic, then the pixellation Y that you constructed is compact, semialgebraic, contains X and is contained in D if the size of the squares is small enough. Now apply the previous construction to Y.
PS My PhD student has been investigating approximations of planar sets by grids of squares (or pixellations). He has obtained quite interesting results. If P_r is the pixellation of a semialgebraic set X with squares of size r, then one can algorithmically associate to P_r a PL set S_r that converges in a rather strong sense to X as r → 0. In particular, many geometric invariants of S_r (such as area, perimeter, curvature measures, Betti numbers) converge to the corresponding invariants of X.
Isn't that always the case?
There are good reasons to study not just the fixed point property (fpp) but the common generalization of fpp and the covering dimension theory, namely the universal functions, and even the universal (and couniversal) morphisms for general categories. (Since monoids are 1-object categories, one may also focus on the universal elements of monoids).
Hi,
First of all, an excellent site! It's great to have such a compendium of mathematical knowledge online, accessible to all. So a great deal of thanks goes to the creators of this site!
I'm in Year 12 in Australia and I just stumbled upon this page [on functional analysis], and I'd love to see it expanded. It looks to be an interesting field! I assume that capital Omega here is a space/topology, maybe even a set being integrated over? Could an explanation be given of the theory behind L^p(Omega) spaces and how it ties in more broadly to this topic?
I look forward to hearing further (and just as a side note, I believe my mathematical background to be good so far - I have a good understanding of convergence of sequences, Maclaurin series, complex numbers, derivatives of inverse trigonometric functions, etc). I have read a little analysis, which has impressed me so far, though I have only skimmed it broadly.
Thanks once again.
Davin
I am sorry. I know that the definition is not *wrong* given a correct development of set theory (which is usually not given in the books that use the above definition of ordered pair), so I reverted it. I just think that it is crazy to use that as a definition unnecessarily, but it seems that many people love it.
The use of this method for Zeta(3) is more involved but goes deeper into the principles of this method. I suggest that someone try to write up a description of this as Example 3.
This is indeed an instance of one of the most general methods useful in mathematics and all the sciences. Z. A. Melzak has devoted a whole book to this idea: "Bypasses" (Wiley, 1983), showing how it encompasses geometry, algebra, analysis, physics, chemistry, information theory, and technology in general.
In this tricki entry, contributors have shown classical instances of how to find and build T (a transformation toward another form) motivated by the fact we know how to do U (the transformation/projection of the target form into needed information, here its solutions), mainly with algebraic objects.
One of the most important aspects of this method or trick is that it can be nested at will.
e+e^(-1) has the sequence of approximations 2(1 + 1/2! + 1/4! + ... + 1/(2n)!) and no immediately obvious (to me) formula for its continued-fraction expansion.
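For the curious, the first few partial quotients can be inspected numerically (a float-precision sketch; terms beyond the first dozen or so are unreliable):

```python
from math import e, floor

def continued_fraction(x, n_terms=8):
    """First n_terms partial quotients of the simple continued fraction of x."""
    terms = []
    for _ in range(n_terms):
        a = floor(x)
        terms.append(a)
        frac = x - a
        if frac < 1e-12:           # x is (numerically) an integer: stop
            break
        x = 1.0 / frac
    return terms

print(continued_fraction(e + 1 / e))   # starts [3, 11, ...]
```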
"Think about the converse" should be added as a link in this article.
Here is a point which I feel needs to be emphasized more in Example 3.
If we try to make the argument work without leaving a parameter to be optimized later, but by, say, fixing it initially, the argument will fail: we will get a useless bound. The reason: we need to pick a function which will grow rapidly while will grow slowly. In this case, grows pretty fast with .
What kind of functions work for this? Well, works: out of the terms in , almost all have expectation . Similarly, works fine as well: in the sum , we will be able to easily cancel many terms because they have expectation .
The central insight behind the proof in Example 3 is that picking for really, really small works. The reason: the average of and is really close to . That is: near one, inversion is about the same as subtraction. This insight can be pushed a little further to argue that if has mean , the quantity should be pretty close to . Thus one might hope that grows slowly - after all, it is "approximately" .
Maybe it'd be nice to mention that if your space is decomposed into U and V and these have non-empty intersection, then the end terms (H_0 and such) can be taken to be reduced homology instead of ordinary homology, simplifying some calculations.