To be pedantic, the bound would be O(n log n), because if the symbols are not all distinct you can "compress" the representation size down, though how much depends on the distribution of symbols; e.g. use a Huffman encoding of the symbols.
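To illustrate the compression point, here is a small sketch of my own (the helper names `huffman_code_lengths` and `encoded_length` are made up for this example): for a skewed symbol distribution, a Huffman code spends well under log n bits per symbol on average.

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return {symbol: code length in bits} for a Huffman code
    built from the empirical symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:  # a single distinct symbol still needs 1 bit per occurrence
        return {next(iter(freq)): 1}
    # Heap entries: (subtree frequency, unique tiebreaker, {symbol: depth so far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def encoded_length(symbols):
    """Total number of bits to Huffman-encode the given sequence."""
    lengths = huffman_code_lengths(symbols)
    freq = Counter(symbols)
    return sum(freq[s] * lengths[s] for s in freq)
```

For example, the 16-symbol string "aaaaaaaaaaaabbcd" over a 4-symbol alphabet needs 16 * log2(4) = 32 bits with fixed-length codes, but only 22 bits with the Huffman code, because "a" dominates the distribution.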
I feel like the article is vague around what I consider the main reason for n log n popping up so often – the exponential structure of any tree, and the depth of the tree being given by a logarithm.
That is, considering an algorithm on n elements where some operation occurs n times, things are often structured in a tree such that each operation potentially reaches the maximal depth of the tree, so one simply multiplies n operations by the depth of the tree.
There's got to be an even clearer way of expressing that, though.
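The standard instance of that picture is probably mergesort; here is a sketch of my own (the `depths` parameter is an instrumentation hook I added, not part of the usual algorithm) where the recursion tree has depth about log2(n) and each level does about n total merging work.

```python
def merge_sort(a, depth=0, depths=None):
    """Mergesort, optionally recording the recursion depth of each call.
    The ~log2(n) levels each do O(n) merging work, giving O(n log n)."""
    if depths is not None:
        depths.append(depth)
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], depth + 1, depths)
    right = merge_sort(a[mid:], depth + 1, depths)
    # Merge the two sorted halves: O(len(a)) work at this node.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

On 16 elements the deepest recursive call sits at depth 4 = log2(16), matching the "n operations times tree depth" picture.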
A picture of selfsame would also help here.
This could be the subject of another, more detailed, article.
It would be great to include an example to illustrate that. Can you suggest a good one?
Often n log n pops up for the following reason. If I have a set of n elements, and I want to represent the set as a binary string, the natural thing to do is to represent each element as a binary string of length log n. n such strings then require total length n log n to represent.
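Concretely, using ceilings since code lengths are integers (a tiny illustration of my own, with a made-up function name):

```python
import math

def naive_encoding_bits(n):
    """Total bits to write down n distinct elements when each is
    given a fixed-length binary label of ceil(log2 n) bits."""
    assert n >= 2
    bits_per_element = math.ceil(math.log2(n))
    return n * bits_per_element
```

So a set of 1024 elements takes 1024 * 10 = 10240 bits under this fixed-length scheme.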
I wrote something on this (the "dyadic pigeonhole principle") at
http://www.math.ucla.edu/~tao/preprints/Expository/pigeonhole.dvi
I may try to convert this to the wiki format, but probably not soon.
I've now turned the headings into ordinary text-style links and removed the artificial links in the succeeding paragraphs, for stylistic consistency. And I think it's clearer too, though the headings do look a little small.
Since it doesn't show up directly in the article history, I should mention that this article is basically written by Ben Green. I've moved it over from the page How to use compactness, which is now a navigation page, and added one example (the one on the complex plane).
Perhaps there could be entries here on estimating probabilities and expectations.
Some estimation methods for probabilities and expectations:
The claim here is that the element is a minimal element of not just with respect to the ordering on but also with respect to the ordering on
There is no assumption here that is an initial segment, but is an initial segment, and if is non-empty then must contain the minimal element.
"This must be minimal in V as well " instead of "This must be minimal in U as well".
If V is any non-empty subset of U, does it follow that it must be an initial segment of some set W in the chain? This does not seem necessary. Does any non-empty subset of U have to contain the minimal element?
The following seems like a natural addition to this article: is every finitely additive probability measure on the integers countably additive? A possible answer would be a proof via the ultrafilter lemma (itself obtained by appealing to Zorn's lemma), which shows that the answer is no.
Similar functions, such as (which can also be thought of as "half-way" between polynomial and exponential), arise as the running times of various algorithms for factoring integers.
In the first half of my last comment I was thinking of , which can be thought of as "half-way" between polynomials and exponentials like .
I believe it should say: is thought of as being "half-way" between *exponential* functions and polynomials, in a similar manner to how is thought of as being "half-way" between *constants* and polynomials.
In the first case, I believe it was just a typo. In the second case, "bounded function" could be a bit misleading. For example, I don't think many people would consider as "half-way" between and , even if they would consider it "half-way" between a constant function and .
Thanks – I've changed "bounded" to "constant".
It would be nice if there were an article on using various graph decompositions. For example, it is very frequently useful to decompose a graph into one or more of the following:
- connected components (or strongly connected components if dealing with directed graphs)
- 2-connected components
- a tree decomposition (a la treewidth)
- a clique decomposition (a la clique width)
- some sort of product of other graphs (Cartesian product is particularly nice because it satisfies unique factorization)
- others?
I am not experienced enough to write such an article, but thought I'd throw the idea out there.
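To make the first item on that list concrete, here is a minimal sketch of my own (the edge-list format and function name are made up for illustration) of splitting an undirected graph into connected components by breadth-first search:

```python
from collections import defaultdict, deque

def connected_components(n, edges):
    """Return the connected components of an undirected graph on
    vertices 0..n-1, given as a list of (u, v) edges."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        # BFS from each not-yet-visited vertex collects one component.
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        components.append(sorted(comp))
    return components
```

Strongly connected components of a directed graph need a bit more machinery (e.g. Tarjan's or Kosaraju's algorithm), which is part of why an article surveying these decompositions could be worthwhile.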
That's a good point – one should be able to find out who wrote an article. I'll put this on the to-do list.
So it's not possible to find out who wrote a new article, without editing it first? In this case I think I have a good guess ;) but it might be harder later in the project, when anyone can make a new article.
Not too much is lost, hopefully, since the link to each page also occurs in the paragraph immediately following the heading.
It's because there have been no revisions to the article. Perhaps we should display the tab regardless, to avoid confusion.
I agree, and this dates from when I used to write articles differently. I'll try to do something about it soon.
Why don't I have access to the "Revisions" page for this article?
Though it doesn't really belong in this article, there's a very clever proof of the fourth example (and many similar problems) which should certainly go somewhere in the Tricki. I believe it's called the garden-path argument, though I can't find who invented it.
Given such a tiling of R, create a graph G whose vertices are the corners of the tiles and whose edges connect the corners along the sides of integer length. For tiles with four sides of integer length, just include two parallel sides (it doesn't matter which) as edges in G rather than all four. Now the degree of each vertex is the number of tiles for which it's a corner. It's easy to see that this is 1 for the four corners of R, and 2 or 4 for all other vertices.
If we arbitrarily make a path starting at one of these four corners which repeats no edges, we can only get stuck at another corner, since all other vertices have even degree. So there exists a path between two distinct corners of R which only moves parallel to the sides of R in integral steps, and so at least one side of R has integral length.
One can also use this to prove, for instance, that an m-by-n rectangle can be tiled with 1-by-k polyominoes if and only if k divides m or n.
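As a quick sanity check on that last claim, here is a brute-force sketch of my own (only practical for small boards, and not a substitute for the coloring or graph argument): a backtracking search over placements of 1-by-k pieces.

```python
def can_tile(m, n, k):
    """Decide by backtracking whether an m-by-n rectangle can be
    tiled by 1-by-k pieces, placed horizontally or vertically."""
    if (m * n) % k != 0:  # area must be divisible by the piece size
        return False
    grid = [[False] * n for _ in range(m)]

    def solve():
        # Find the first empty cell in row-major order; any tiling
        # must cover it with a piece starting there.
        for r in range(m):
            for c in range(n):
                if not grid[r][c]:
                    # Try a horizontal piece starting at (r, c).
                    if c + k <= n and all(not grid[r][c + i] for i in range(k)):
                        for i in range(k):
                            grid[r][c + i] = True
                        if solve():
                            return True
                        for i in range(k):
                            grid[r][c + i] = False
                    # Try a vertical piece starting at (r, c).
                    if r + k <= m and all(not grid[r + i][c] for i in range(k)):
                        for i in range(k):
                            grid[r + i][c] = True
                        if solve():
                            return True
                        for i in range(k):
                            grid[r + i][c] = False
                    return False  # the first empty cell cannot be covered
        return True  # no empty cell left: fully tiled
    return solve()
```

For instance, a 6-by-6 board has area 36 divisible by 4, yet cannot be tiled by 1-by-4 pieces, since 4 divides neither side – so the divisibility of the area alone is not enough.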