Blow-up lemma

From Wikipedia, the free encyclopedia

The blow-up lemma, proved by János Komlós, Gábor N. Sárközy, and Endre Szemerédi in 1997,[1][2] is an important result in extremal graph theory, particularly within the context of the regularity method. It states that the regular pairs in the statement of Szemerédi's regularity lemma behave like complete bipartite graphs in the context of embedding spanning graphs of bounded degree.

Definitions and Statement


To formally state the blow-up lemma, we first need to define the notion of a super-regular pair.

Super-regular pairs


A pair (X, Y) of disjoint subsets of the vertex set is called (ε, δ)-super-regular if for every A ⊆ X and B ⊆ Y satisfying

|A| > ε|X| and |B| > ε|Y|

we have

e(A, B) > δ|A||B|,

and furthermore,

deg(x) > δ|Y| for all x ∈ X and deg(y) > δ|X| for all y ∈ Y.[1]

Here e(A, B) denotes the number of pairs (a, b) with a ∈ A and b ∈ B such that ab is an edge.
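The definition can be verified by brute force on very small examples. The following sketch (illustrative only; the function name and edge-set representation are assumptions, not from the source) tests the degree condition and then all sufficiently large subset pairs directly, which is exponential and only feasible for tiny pairs:

```python
from itertools import combinations

def is_super_regular(adj, X, Y, eps, delta):
    """Brute-force check that the pair (X, Y) is (eps, delta)-super-regular.

    `adj` is a set of frozensets {u, v} giving the edges of the graph.
    Only feasible for very small X and Y, since all subset pairs are tested.
    """
    def e(A, B):
        # number of pairs (a, b) with a in A, b in B forming an edge
        return sum(1 for a in A for b in B if frozenset((a, b)) in adj)

    # degree condition: deg(x) > delta*|Y| for all x in X, and symmetrically
    if any(e([x], Y) <= delta * len(Y) for x in X):
        return False
    if any(e(X, [y]) <= delta * len(X) for y in Y):
        return False

    # density condition for every pair of sufficiently large subsets
    for ka in range(1, len(X) + 1):
        for kb in range(1, len(Y) + 1):
            if ka <= eps * len(X) or kb <= eps * len(Y):
                continue  # subsets this small are exempt from the condition
            for A in combinations(X, ka):
                for B in combinations(Y, kb):
                    if e(A, B) <= delta * ka * kb:
                        return False
    return True
```

A complete bipartite pair passes for any ε, δ < 1, while a pair with no edges fails the degree condition immediately.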

Statement of the Blow-up Lemma


Given a graph R of order r and positive parameters δ and Δ, there exists a positive ε = ε(δ, Δ, r) such that the following holds. Let n₁, n₂, …, n_r be arbitrary positive integers and let us replace the vertices v₁, v₂, …, v_r of R with pairwise disjoint sets V₁, V₂, …, V_r of sizes n₁, n₂, …, n_r (blowing up). We construct two graphs on the same vertex set V = V₁ ∪ ⋯ ∪ V_r. The first graph R(n₁, …, n_r) is obtained by replacing each edge {v_i, v_j} of R with the complete bipartite graph between the corresponding vertex sets V_i and V_j. A sparser graph G is constructed by replacing each edge {v_i, v_j} with an (ε, δ)-super-regular pair between V_i and V_j. If a graph H with maximum degree Δ(H) ≤ Δ is embeddable into R(n₁, …, n_r), then it is already embeddable into G.[1]
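As a toy illustration of the blowing-up construction (the function name and vertex encoding are illustrative assumptions, not from the source), the following sketch builds the dense graph R(n₁, …, n_r); the graph G of the lemma would instead place an (ε, δ)-super-regular pair between the clusters of each edge:

```python
def blow_up(R_edges, sizes):
    """Complete blow-up R(n_1, ..., n_r) of a graph R on vertices 0..r-1.

    Each vertex i of R is replaced by a cluster V_i of sizes[i] fresh
    vertices, and each edge {i, j} of R by the complete bipartite graph
    between V_i and V_j.
    """
    clusters = [[(i, k) for k in range(n)] for i, n in enumerate(sizes)]
    edges = {frozenset((u, v))
             for i, j in R_edges
             for u in clusters[i]
             for v in clusters[j]}
    return clusters, edges
```

Blowing up a triangle with cluster sizes (2, 2, 2), for instance, yields the complete 3-partite graph K_{2,2,2} with 12 edges.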

Proof Sketch


The proof of the blow-up lemma is based on using a randomized greedy algorithm (RGA) to embed the vertices of H into G sequentially. The argument then proceeds by bounding the failure rate of the algorithm so that it is less than 1 (and in fact o(1)) for an appropriate choice of parameters. This means that there is a non-zero chance for the algorithm to succeed, so an embedding must exist.

Attempting to directly embed all the vertices of H in this manner does not work because the algorithm may get stuck when only a small number of vertices are left. Instead, we set aside a small fraction of the vertex set, called buffer vertices, and attempt to embed the rest of the vertices first. The buffer vertices are subsequently embedded by using Hall's marriage theorem to find a perfect matching between them and the remaining unoccupied vertices of G.

Notation


We borrow all notation introduced in previous sections. Let H be the graph to be embedded. Since H can be embedded into R(n₁, …, n_r), we can write V(H) = X₁ ∪ X₂ ∪ ⋯ ∪ X_r with |X_i| ≤ n_i for all i, where each X_i is the set of vertices to be mapped into V_i. For a vertex x ∈ X_i, let V_x denote V_i. For i ≠ j,

d_{ij} = e(V_i, V_j) / (n_i n_j)

denotes the density of edges between the corresponding vertex sets of G. φ is the embedding that we wish to construct. T is the final time after which the algorithm concludes.

Outline of the algorithm


Phase 0: Initialization

  1. Greedily choose the set B of buffer vertices from the vertices of H as a maximal set of vertices at pairwise distance at least 4 from each other.
  2. Order the remaining vertices (those in V(H) ∖ B) in a list L, placing the neighbors of B first.
  3. Declare a queue Q of presently prioritized vertices, which is initially empty.
  4. Declare an array of sets F_x indexed by the vertices of H, representing the set of all "free spots" of x, that is, the set of unoccupied vertices in V_x the vertex x could be mapped to without violating any of the adjacency conditions from the already-embedded neighbors of x in H. F_x is initialized to V_x.

Phase 1: Randomized Greedy Embedding

  1. Choose a vertex x to embed from the set of remaining vertices as follows:
    1. If the queue Q of prioritized vertices is non-empty, then choose the vertex from Q.
    2. Otherwise, choose the next vertex from the list L of remaining vertices.
  2. Choose the image φ(x) in V_x for the vertex x randomly from the set of "good" choices, where a choice is good iff none of the new free sets F_y differ too much in size from their expected value.
  3. Update the free sets F_y, and put vertices whose free sets have become too small with respect to their size at the last update into the queue Q of prioritized vertices.
  4. Abort if the queue contains a sufficiently large fraction of any of the sets X_i.
  5. If there are non-buffer vertices left to be embedded in either Q or L, update the time t and go back to step 1; otherwise move on to phase 2.
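The steps above can be sketched as follows (a heavily simplified illustration, not the source's algorithm: the buffer handling, the "good choice" test, the queue, and the abort logic are omitted, and all names are illustrative assumptions):

```python
import random

def greedy_embed(H_adj, clusters, G_adj, assignment, rng=random.Random(0)):
    """Simplified randomized greedy embedding of H into G.

    H_adj:      adjacency dict of H, vertex -> set of neighbours.
    clusters:   dict i -> list of the vertices of V_i in G.
    G_adj:      adjacency dict of G, vertex -> set of neighbours.
    assignment: dict mapping each vertex x of H to its cluster index i
                (so x must land in V_i).
    Returns an embedding dict, or None if the greedy run gets stuck.
    """
    # free[x] = spots in V_x still compatible with embedded neighbours of x
    free = {x: set(clusters[assignment[x]]) for x in H_adj}
    used = set()
    phi = {}
    for x in H_adj:                      # a fixed order stands in for the list L
        choices = list(free[x] - used)
        if not choices:
            return None                  # where the real proof needs buffer vertices
        phi[x] = rng.choice(choices)
        used.add(phi[x])
        # shrink the free sets of not-yet-embedded neighbours of x
        for y in H_adj[x]:
            if y not in phi:
                free[y] &= G_adj[phi[x]]
    return phi
```

On dense, well-connected instances the loop typically finishes; the point of the blow-up lemma's analysis is to show that with buffers and prioritization it succeeds with positive probability on every super-regular instance.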

Phase 2: Kőnig-Hall matching for remaining vertices


Consider the set of vertices left to be embedded, which is precisely the buffer B, and the set of remaining free spots ⋃_{x ∈ B} F_x. Form a bipartite graph between these two sets, joining each x ∈ B to the vertices of F_x, and find a perfect matching in this bipartite graph. Embed B according to this matching.
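The matching step can be carried out with the classic augmenting-path algorithm; a sketch (function name and input format are illustrative assumptions, not from the source):

```python
def hall_matching(candidates):
    """Find a perfect matching from each left vertex to one of its candidates.

    candidates: dict x -> iterable of allowed right vertices (here, the free
    spots F_x of each buffer vertex x). Returns a dict x -> spot, or None if
    Hall's condition is violated. Classic augmenting-path algorithm.
    """
    match = {}                        # right vertex -> left vertex

    def try_assign(x, seen):
        for v in candidates[x]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current owner can be re-routed elsewhere
            if v not in match or try_assign(match[v], seen):
                match[v] = x
                return True
        return False

    for x in candidates:
        if not try_assign(x, set()):
            return None               # some set S has fewer than |S| free spots
    return {x: v for v, x in match.items()}
```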

Proof of correctness


The proof of correctness is technical and quite involved, so we omit the details. The core argument proceeds as follows:

Step 1: most vertices are good, and enough vertices are free


Prove simultaneously by induction on t that if x is the vertex embedded at time t, then

  1. only a small fraction of the choices in F_x are bad
  2. all of the free sets F_y are fairly large for unembedded vertices y

Step 2: the "main lemma"


Consider an index i, a set A ⊆ V_i, and a set D ⊆ X_i such that |D| is not too small. Consider the event E where

  1. no vertices are embedded in A during the first phase
  2. for every y ∈ D there is a time t_y such that the fraction of free vertices of y in A at time t_y was small.

Then, we prove that the probability of E happening is low.

Step 3: phase 1 succeeds with high probability


The only way that the first phase could fail is if it aborts, since by the first step we know that there is always a sufficient supply of good choices. The algorithm aborts only when the queue is too long. The argument then proceeds by union-bounding over all modes of failure, noting that for any particular choice of i, A, and D, with D representing a subset of the queue that caused the failure, the triple (i, A, D) satisfies the conditions of the "main lemma", and thus has a low probability of occurring.

Step 4: no queue in initial phase


Recall that the list L was set up so that neighbors of vertices in the buffer B get embedded first. The time until all of these vertices get embedded is called the initial phase. Prove by induction on t that no vertices get added to the queue Q during the initial phase. It follows that all of the neighbors of the buffer vertices get embedded before the rest of the vertices.

Step 5: buffer vertices have enough free spots


For any x ∈ B and v ∈ V_x, we can find a sufficiently large lower bound on the probability that v ∈ F_x at time T, conditional on the assumption that v was free before any of the vertices in N(x) were embedded.

Step 6: phase 2 succeeds with high probability


By Hall's marriage theorem, phase 2 fails if and only if Hall's condition is violated. For this to happen, there must be some i and some S ⊆ B ∩ X_i such that |⋃_{x ∈ S} F_x| < |S|. S cannot be too small by the largeness of the free sets (step 1). If S is too large, then with high probability ⋃_{x ∈ S} F_x covers all of the unoccupied vertices of V_i, so the probability of failure in such a case would be low. If S is neither too small nor too large, then noting that V_i ∖ ⋃_{x ∈ S} F_x is a large set of unused vertices, we can use the main lemma and union-bound the failure probability.[1][2][3]

Applications


The blow-up lemma has a number of applications in embedding dense graphs.

Pósa-Seymour Conjecture


In 1962, Lajos Pósa conjectured that every n-vertex graph with minimum degree at least 2n/3 contains the square of a Hamiltonian cycle,[4] generalizing Dirac's theorem. The conjecture was further extended by Paul Seymour in 1974 to the following:

Every graph on n vertices with minimum degree at least kn/(k + 1) contains the k-th power of a Hamiltonian cycle.
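For concreteness (an illustrative sketch, not from the source), the k-th power of the cycle C_n joins two vertices whenever their distance along the cycle is at most k; k = 1 gives the Hamiltonian cycle itself and k = 2 its square, the object of Pósa's conjecture:

```python
def cycle_power(n, k):
    """Edge set of the k-th power of the cycle C_n on vertices 0..n-1:
    i and j are adjacent iff their circular distance is between 1 and k."""
    return {frozenset((i, j))
            for i in range(n) for j in range(n)
            if i != j and min((i - j) % n, (j - i) % n) <= k}
```

For n > 2k + 1 every vertex of this graph has degree exactly 2k.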

The blow-up lemma was used by Komlós, Sárközy, and Szemerédi to prove the conjecture for all sufficiently large values of n (for a fixed k) in 1998.[5]

Alon-Yuster Conjecture


In 1995, Noga Alon and Raphael Yuster considered the generalization of the well-known Hajnal–Szemerédi theorem to arbitrary H-factors (instead of just complete graphs), and proved the following statement:

For every fixed graph H with h vertices, any graph G with n vertices and with minimum degree at least (1 − 1/χ(H))n + o(n) contains ⌊n/h⌋ vertex disjoint copies of H.

They also conjectured that the result holds with only a constant (instead of linear) error:

For every integer h there exists a constant c = c(h) such that for every graph H with h vertices, any graph G with n vertices and with minimum degree at least (1 − 1/χ(H))n + c contains at least ⌊n/h⌋ vertex disjoint copies of H.[6]

This conjecture was proven by Komlós, Sárközy, and Szemerédi in 2001 using the blow-up lemma.[7]

History and Variants


The blow-up lemma, first published in 1997 by Komlós, Sárközy, and Szemerédi,[1] emerged as a refinement of existing proof techniques that used the regularity method to embed spanning graphs, as in the proof of the Bollobás conjecture on spanning trees,[8] the work on the Pósa–Seymour conjecture about the minimum degree necessary to contain the k-th graph power of a Hamiltonian cycle,[9][4] and the proof of the Alon–Yuster conjecture on the minimum degree needed for a graph to have a perfect H-factor.[7] The proofs of all of these theorems relied on using a randomized greedy algorithm to embed the majority of the vertices, followed by a Kőnig–Hall-type argument to find an embedding for the remaining vertices.[1] The first proof of the blow-up lemma used a similar argument; shortly afterwards, the same authors published a second paper that improved the randomized algorithm to make it deterministic.[2]

Peter Keevash found a generalization of the blow-up lemma to hypergraphs in 2010.[3]

Stefan Glock and Felix Joos discovered a variant of the blow-up lemma for rainbow graphs in 2018.[10]

In 2019, Peter Allen, Julia Böttcher, Hiep Hàn, Yoshiharu Kohayakawa, and Yury Person found sparse analogues of the blow-up lemma for embedding bounded-degree graphs into random and pseudorandom graphs.[11]

References

  1. ^ a b c d e f Komlós, János; Sárközy, Gábor N.; Szemerédi, Endre (1997), "Blow-up lemma", Combinatorica, 17 (1): 109–123, doi:10.1007/BF01196135, MR 1466579, S2CID 6720143
  2. ^ a b c Komlós, János; Sárközy, Gábor N.; Szemerédi, Endre (1998), "An algorithmic version of the blow-up lemma", Random Structures & Algorithms, 12 (3): 297–312, arXiv:math/9612213, doi:10.1002/(SICI)1098-2418(199805)12:3<297::AID-RSA5>3.3.CO;2-W, MR 1635264
  3. ^ a b Keevash, Peter (2011-05-10). "A hypergraph blow-up lemma". Random Structures & Algorithms. 39 (3): 275–367. arXiv:1011.1355. doi:10.1002/rsa.20362. S2CID 1395608.
  4. ^ a b Komlós, János; Sárközy, Gábor N.; Szemerédi, Endre (1996). "On the square of a Hamiltonian cycle in dense graphs". Random Structures & Algorithms. 9 (1–2): 193–211. doi:10.1002/(SICI)1098-2418(199608/09)9:1/2<193::AID-RSA12>3.0.CO;2-P. ISSN 1098-2418.
  5. ^ Komlós, János; Sárközy, Gábor N.; Szemerédi, Endre (1998-03-01). "Proof of the Seymour conjecture for large graphs". Annals of Combinatorics. 2 (1): 43–60. doi:10.1007/BF01626028. ISSN 0219-3094. S2CID 9802487.
  6. ^ Alon, Noga; Yuster, Raphael (1996-03-01). "H-Factors in Dense Graphs". Journal of Combinatorial Theory, Series B. 66 (2): 269–282. doi:10.1006/jctb.1996.0020. ISSN 0095-8956.
  7. ^ a b Komlós, János; Sárközy, Gábor; Szemerédi, Endre (2001-05-28). "Proof of the Alon–Yuster conjecture". Discrete Mathematics. 235 (1): 255–269. doi:10.1016/S0012-365X(00)00279-X. ISSN 0012-365X.
  8. ^ Komlós, János; Sárközy, Gábor N.; Szemerédi, Endre (1995). "Proof of a Packing Conjecture of Bollobás". Combinatorics, Probability and Computing. 4 (3): 241–255. doi:10.1017/S0963548300001620. ISSN 1469-2163. S2CID 27736891.
  9. ^ Komlós, János; Sárközy, Gábor N.; Szemerédi, Endre (1998). "On the Pósa-Seymour conjecture". Journal of Graph Theory. 29 (3): 167–176. doi:10.1002/(SICI)1097-0118(199811)29:3<167::AID-JGT4>3.0.CO;2-O. ISSN 1097-0118.
  10. ^ Glock, Stefan; Joos, Felix (2020-02-20). "A rainbow blow-up lemma". Random Structures & Algorithms. 56 (4): 1031–1069. doi:10.1002/rsa.20907. S2CID 119737272.
  11. ^ Allen, Peter; Böttcher, Julia; Hàn, Hiep; Kohayakawa, Yoshiharu; Person, Yury (2019-03-19). "Blow-up lemmas for sparse graphs". arXiv:1612.00622 [math.CO].