
RANDOM CLIQUE COMPLEX PROCESS INSIDE THE CRITICAL WINDOW

AGNIVA ROY AND D. YOGESHWARAN


arXiv:2303.17535v1 [math.PR] 30 Mar 2023

Abstract. We consider the random clique complex process, i.e., the process of clique complexes induced by the complete graph with i.i.d. Uniform edge weights. We investigate the evolution of the Betti numbers of the clique complex process in the critical window and, in particular, show a process-level convergence of the Betti numbers to a Poisson process. Our proof technique easily gives a hitting time result, i.e., with high probability, the kth cohomology becomes trivial exactly when there are no more isolated k-faces. Our results imply that the threshold for vanishing of cohomology of the clique complex process coincides with the threshold for vanishing of ‘instantaneous’ homology determined by Kahle [16]. We also give a lower bound for the probability that the clique complex process has Kazhdan’s property (T). These results show a different behaviour for the clique complex process compared to the Čech complex process investigated in the geometric setting by Bobrowski [2].

1. Introduction
The success of the Erdös-Rényi random graph model [10] and the need for higher-dimensional analogues of random graphs in topological data analysis have driven studies of various models
of weighted random (simplicial) complexes [3, 17]. A crucial question here is determining the
threshold for homological connectivity in random complexes. In this article, we investigate
the behaviour of cohomology of the random clique complex process close to the threshold
for vanishing of cohomology. To the best of our knowledge, process-level convergence for this model is not straightforward to deduce from existing results, due to the lack of monotonicity in homology.
Organization of the paper. In the rest of the introduction, we introduce the model, state
our main results and also place them in the context of existing literature. In Section 2, we
briefly recall some basic notions in combinatorial topology and state simple lemmas for later
use. We give proofs of our main theorems in Section 3. To keep our exposition brief, we
shall not describe the background and related literature in detail but provide pointers to the
same; we refer the interested reader to the introduction in Bobrowski [2].
Set-up. We first define the random graph process. Let U(i, j), 1 ≤ i < j ≤ n, be independent and identically distributed Uniform[0, 1] random variables. We set U(i, j) = U(j, i) for i > j.
Let t ∈ [0, 1]. Setting E(n, t) := {(i, j) : i ≠ j, U(i, j) ≤ t}, we define the graph G(n, t) := ([n], E(n, t)), where [n] := {1, . . . , n}. The graph G(n, t) has the same distribution as the well-known Erdös-Rényi random graph with parameters n, t. For convenience, we shall refer to G(n, t) as the Erdös-Rényi random graph.

Date: 2023-03-31.
2020 Mathematics Subject Classification. 60B99, 55U10.
Key words and phrases. random topology, hitting times, clique complexes, Poisson convergence, Betti numbers.
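For intuition, the coupling of all the graphs G(n, t), t ∈ [0, 1], through a single array of edge weights can be sketched as follows (an illustrative Python snippet, not part of the paper; the function names are ours):

```python
import random
from itertools import combinations

def sample_weights(n, rng):
    """i.i.d. Uniform[0, 1] edge weights U(i, j) = U(j, i), stored on pairs i < j."""
    return {(i, j): rng.random() for i, j in combinations(range(n), 2)}

def edges_at(U, t):
    """Edge set E(n, t) = {(i, j) : U(i, j) <= t} of G(n, t)."""
    return [e for e, w in U.items() if w <= t]

rng = random.Random(0)
U = sample_weights(5, rng)
# The graphs are nested: G(n, s) is a subgraph of G(n, t) whenever s <= t.
assert set(edges_at(U, 0.3)) <= set(edges_at(U, 0.7))
```

A single draw of U couples the whole graph process, which is what makes the process-level questions studied below meaningful.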
Associated to a graph G, one can build a (simplicial) complex called the clique complex
(also known as Vietoris-Rips complex) by considering k-cliques as (k − 1)-faces of the (sim-
plicial) complex. We refer the reader to Section 2 for more detailed definitions of various
notions used here. The (Erdös-Rényi) random clique complex process X(n, t), t ≥ 0, is the process of clique complexes associated to the random graph process G(n, t).
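The clique complex construction itself is purely combinatorial and can be sketched as follows (again an illustrative snippet with helper names of our own choosing):

```python
from itertools import combinations

def clique_complex_faces(vertices, edges, dim):
    """Faces of the clique complex up to dimension `dim`:
    a (k+1)-clique of the graph is a k-face of the complex."""
    edge_set = {frozenset(e) for e in edges}
    faces = {0: [frozenset([v]) for v in vertices]}
    for k in range(1, dim + 1):
        faces[k] = [frozenset(c) for c in combinations(vertices, k + 1)
                    if all(frozenset(p) in edge_set for p in combinations(c, 2))]
    return faces

# A triangle: three 0-faces, three 1-faces, and one (filled-in) 2-face.
faces = clique_complex_faces([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 2)
assert len(faces[1]) == 3 and len(faces[2]) == 1
```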
This random clique complex was introduced in [15], and the threshold for homological connectivity was investigated in [15, 16, 9]. Denoting by H^k(·) the kth cohomology group of a complex, triviality of H^k(X(n, t)) is referred to as homological connectivity. Triviality of H^0(X(n, t)) corresponds to connectivity of the graph G(n, t), and this has been studied in detail; see for example [10]. The crucial difference between k = 0 and k ≥ 1 is monotonicity: if H^0(X(n, t)) is trivial (i.e., G(n, t) is connected), then H^0(X(n, s)) is also trivial for all s ≥ t. However, this is not true for k ≥ 1, and this naturally leads to the main question of this article. From now on, we shall assume that k ≥ 1.

Main results. The above discussion leads us to consider the random clique complex process
X(n, t), t ∈ [0, 1] and in particular, the Betti number process βk (X(n, t)), t ∈ [0, 1] where
β_k is the rank of the cohomology group H^k with rational coefficients. Triviality of H^k is equivalent to β_k = 0. With this brief background, our main question is the scaling and asymptotic distribution of the vanishing threshold T_{n,k} at which H^k(X(n, t)) becomes trivial.
More formally, set
(1.1) T_{n,k} := inf{t : β_k(X(n, s)) = 0 ∀ s ≥ t}.
To relate T_{n,k} to triviality of H^k, note that
{T_{n,k} ≥ t} = {β_k(X(n, s)) ≠ 0 for some s ≥ t}.
Going further, we shall investigate the asymptotics of the process βk (t) := βk (X(n, t)), t ∈
[0, 1] in the critical window (i.e., close to its vanishing threshold). We shall now state our
theorem determining the scaling and asymptotic distribution of the process.
Theorem 1.1. Let k ≥ 1. For c ∈ R, we set

t_c := t_c(k, n) = ( ((k/2 + 1) log n + (k/2) log log n + c) / n )^{1/(k+1)}.

Then we have that

{β_k(t_c)}_{c∈R} →^d {P(c)}_{c∈R},

where {P(c)}_{c∈R} is a Poisson process on R with intensity measure μ(k, x) dx,

μ(k, x) := ( (k/2 + 1)^{k/2} / (k+1)! ) e^{−x},

and →^d denotes weak convergence in the space D[0, ∞) of right-continuous functions with left limits.

The Poisson process P(c) is a non-increasing pure jump process defined by two properties: (i) for a < b, P(a) − P(b) has a Poisson(μ(k, a) − μ(k, b)) distribution, and (ii) the increments P(a_{2j−1}) − P(a_{2j}), j = 1, . . . , m, are independent whenever −∞ < a_1 < a_2 < . . . < a_{2m} < ∞.
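For concreteness, one can simulate the limiting process {P(c)} directly from its defining properties: the intensity μ(k, x) dx restricted to (c_0, ∞) has total mass μ(k, c_0), and the normalized density there is e^{−(x − c_0)}, i.e., a shifted Exp(1). A sketch under these observations (the helper names are ours):

```python
import math
import random

def mu(k, c):
    """mu(k, c) = (k/2 + 1)^{k/2} / (k+1)! * e^{-c}: mean number of points right of c."""
    return (k / 2 + 1) ** (k / 2) / math.factorial(k + 1) * math.exp(-c)

def poisson(mean, rng):
    """Poisson sample by Knuth's product method (fine for small means)."""
    threshold, count, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return count
        count += 1

def sample_points(k, c0, rng):
    """Points of the process with intensity mu(k, x) dx on (c0, inf):
    the count is Poisson(mu(k, c0)); given the count, each point is c0 + Exp(1)."""
    return [c0 + rng.expovariate(1.0) for _ in range(poisson(mu(k, c0), rng))]

rng = random.Random(1)
pts = sample_points(1, -2.0, rng)
P = lambda c: sum(1 for x in pts if x > c)  # P(c) = number of points right of c
assert P(-1.0) >= P(0.0) >= P(1.0)          # P is non-increasing in c
```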
The asymptotic distribution of β_k(X(n, t_c)) for a fixed c was shown in [9, Theorem 1.3]. We feel that there is a gap in the arguments therein, and our proof of the above theorem fixes this gap; see Remark 3.7. Even so, it is not possible to deduce the asymptotic distribution of T_{n,k} from the asymptotic distribution of β_k(X(n, t_c)) alone. But we can easily conclude from Theorem 1.1 and the non-increasing property of the Poisson process that
lim_{n→∞} P(T_{n,k} ≤ t_c) = lim_{n→∞} P( ∩_{t ≥ t_c} {β_k(t) = 0} ) = P(P(c) = 0) = e^{−∫_c^∞ μ(k,x) dx} = e^{−μ(k,c)},

i.e., n T_{n,k}^{k+1} − (k/2 + 1) log n − (k/2) log log n converges in distribution to a Gumbel-type distribution with distribution function c ↦ e^{−μ(k,c)}. This also immediately implies that there are no ‘exceptional times’ for triviality of H^k(X(n, t)), i.e., for k ≥ 1 and a sequence w(n) such that w(n) → ∞, it holds that

(1.2) P( ∩_{t ≥ t_n} {H^k(X(n, t)) = 0} ) → 1, for t_n := ( ((k/2 + 1) log n + (k/2) log log n + w(n)) / n )^{1/(k+1)}.
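A quick numerical illustration of the scaling above, assuming only the formulas for t_c and μ(k, c) stated in Theorem 1.1:

```python
import math

def t_c(k, n, c):
    """Critical threshold t_c(k, n) from Theorem 1.1."""
    num = (k / 2 + 1) * math.log(n) + (k / 2) * math.log(math.log(n)) + c
    return (num / n) ** (1 / (k + 1))

def mu(k, c):
    """mu(k, c) = (k/2 + 1)^{k/2} / (k+1)! * e^{-c}."""
    return (k / 2 + 1) ** (k / 2) / math.factorial(k + 1) * math.exp(-c)

# lim_n P(T_{n,k} <= t_c) = exp(-mu(k, c)): increasing in c, strictly between 0 and 1.
k = 1
probs = [math.exp(-mu(k, c)) for c in (-2.0, 0.0, 2.0)]
assert all(0 < p < 1 for p in probs) and probs == sorted(probs)
# The whole critical window sits at scale (log n / n)^{1/(k+1)}, shrinking in n.
assert t_c(1, 10**6, 0.0) < t_c(1, 10**3, 0.0) < 1
```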

As an immediate corollary, we recover [16, Theorem 1.1]: for k, w(n), t_n as above, we have that

P(H^k(X(n, t)) = 0) → 1, if t ≥ t_n.

Thus the thresholds for vanishing of ‘instantaneous cohomology’ and vanishing of cohomology coincide. This is not the case for the random geometric Čech complex (see [2, Theorem 3.1 and Corollary 7.14]). Also, part of our proof involves showing that isolated faces exhibit similar threshold behaviour (Proposition 3.1); such a phenomenon has been shown to hold true for 1-faces in geometric clique complexes as well, but fails for 1-faces in random Čech complexes [14, Propositions 1.4 and 1.5]. On the other hand, as with connectivity in the Erdös-Rényi random graph, the thresholds for vanishing of ‘instantaneous cohomology’ and vanishing of cohomology coincide trivially for the random k-complex (see [19, 20]) due to monotonicity. Process-level convergence inside the critical window was shown for this model in [25, Theorem 7], building upon the marginal distribution convergence proven in [18, Theorem 1.10].
Another easy consequence of our proof technique is that, apart from process-level convergence, we also obtain a hitting time result. Even for random k-complexes such a result is known only in the cases k = 1, 2, which were proven respectively in [4, Theorem 4] and [18, Theorem 1.11]. Hitting time results for the random Čech complex were shown in [2] (see the discussion below (3.4) therein), with the result for the 0th homology of the random Čech complex (or connectivity of the random geometric graph) proven in [23]. We now state such a result for all k for random clique complexes.
Let N_k(t) := N_k(X(n, t)) denote the number of isolated k-faces in X(n, t), which is the same as the number of maximal (k + 1)-cliques in G(n, t). That this approximates β_k(t) very well for many values of t has been the driving force behind the results on random clique complexes [16, 9], including ours. Also note that N_k(t) is not monotonic. Analogous to the

vanishing threshold T_{n,k} for β_k, we may define T̄_{n,k} as the vanishing threshold for N_k(t), i.e.,

(1.3) T̄_{n,k} := inf{t : N_k(X(n, s)) = 0 ∀ s ≥ t}.
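A brute-force count of N_k, directly from its description as the number of maximal (k + 1)-cliques, can be sketched as follows (illustration only; this enumeration is exponential in k and not how one would compute at scale):

```python
from itertools import combinations

def isolated_k_faces(n, edges, k):
    """N_k: number of maximal (k+1)-cliques of the graph ([n], edges), i.e.
    isolated k-faces of its clique complex (brute force)."""
    adj = {v: set() for v in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    count = 0
    for c in combinations(range(n), k + 1):
        cs = set(c)
        if any(not (cs - {u}) <= adj[u] for u in c):
            continue  # not a clique
        if not any(cs <= adj[v] for v in set(range(n)) - cs):
            count += 1  # maximal: no outside vertex is adjacent to all of c
    return count

# A triangle plus an isolated vertex: every edge sits inside the triangle,
# so N_1 = 0, while the triangle itself is an isolated 2-face, so N_2 = 1.
tri = [(0, 1), (1, 2), (0, 2)]
assert isolated_k_faces(4, tri, 1) == 0
assert isolated_k_faces(4, tri, 2) == 1
```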

Informally, the hitting time result shows that the kth cohomology group of the random clique complex process vanishes when the last isolated k-face disappears. We state it formally now and, after the proof, mention a strengthening of the same.

Theorem 1.2. Let T_{n,k}, T̄_{n,k} be the vanishing thresholds of the kth Betti number and of isolated k-faces, respectively. Then we have that, as n → ∞,

P(T_{n,k} = T̄_{n,k}) → 1.

Our third and final result is less precise but nevertheless gives a lower bound for the
probability that the random clique complex process has Kazhdan’s property (T).

Theorem 1.3. Let k, c, t_c(k, n), μ(k, c) be as in the above theorem. Then,

lim sup_{n→∞} P( ∪_{t ≥ t_c(1,n)} {π_1(X(n, t)) does not have property (T)} ) ≤ 1 − e^{−μ(1,c)},

where π_1(X(n, t)) is the fundamental group of X(n, t).

As with Theorem 1.1, we can also deduce an analogue of (1.2) and a vanishing threshold for Kazhdan’s property (T), as in [16, Theorem 1.2], from Theorem 1.3.

Proof Outline: The proof of [16, Theorem 1.1] uses Garland’s method (Theorem 2.3), which reduces vanishing of β_k to verifying vanishing of isolated k-faces and a large spectral gap on the links of (k − 1)-faces. In [16], the former is verified by counting arguments and Markov’s inequality, and the latter via the powerful spectral gap result of [13, Theorem 1.1]. A similar approach is used in [9, Theorem 1.3] to prove Poisson convergence in the critical window. Our proof involves showing an analogue of Theorem 1.1 for isolated k-faces (Theorem 3.2) and then showing that the process of isolated k-faces and the Betti number process coincide starting anywhere in the critical window (Theorem 3.3). The latter result is the main technical contribution of this paper and allows us to translate results about isolated k-faces to the kth Betti number at a process level. To prove this result, we use that β_k(X(n, t)) is a jump process, along with a Borel-Cantelli argument, to reduce the proof to obtaining good estimates on the probabilities P(β_k(X(n, t)) ≠ N_k(X(n, t))). These estimates need to satisfy suitable integrability in t and summability in n. For the same, we quantify the probability estimates in [16, 9] much more carefully, to enable us to derive the desired bounds via Garland’s method and the spectral gap bound of [13]. Lastly, the same method with Żuk’s criterion (Theorem 2.4) replacing Garland’s method yields Theorem 1.3. We make a more precise comparison after Remark 3.5. Our proofs would simplify significantly if one were interested in the above results only for t ≫ t_c; see Remark 3.7. It should be possible to extend our methods to the multi-parameter random clique complex model as in [9, 5, 6, 7].
2. Preliminaries
In this section we recall some basic definitions regarding (simplicial) complexes as well
as collect some topological results that are used later. The reader can turn to the books of Edelsbrunner and Harer [8] and Munkres [21] for more background and details on complexes.
We shall assume familiarity with basic algebraic topology notions.

Definition 2.1. A simplicial complex is a family S of finite sets such that if A ∈ S, and
B ⊆ A, then B ∈ S.
• The vertex set of S is the union of all its constituent sets: V(S) = ∪_{A∈S} A.
• Every set A ∈ S with |A| = k + 1 is called a k-face of S.
• A ∈ S is called isolated or maximal if there is no B ∈ S such that A ⊊ B.
• The k-skeleton of S is defined to be the sub-simplicial complex consisting of all the faces of S that have size at most (k + 1). The 1-skeleton is also referred to as the underlying graph.

Every simplicial complex has well-defined topological invariants associated to it, namely its homology and cohomology groups. The reader can refer to [8] for background on how they are defined. The kth cohomology group of a space X with coefficients in a field F will be denoted H^k(X, F). The rank of H^k(X, Q), called the kth Betti number, will be denoted β_k(X).
We now recall some very useful criteria for verifying homological connectivity of complexes, given in terms of the spectral gaps of certain graphs.

Definition 2.2. The spectral gap of a graph, denoted λ_2, is defined to be the second smallest eigenvalue of the symmetric normalized Laplacian of the graph, L = I − D^{−1/2} A D^{−1/2}, where D is the degree matrix and A is the adjacency matrix.
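A direct computation of λ_2 from this definition (a sketch; for graphs with isolated vertices the normalization needs extra care, which we ignore here):

```python
import numpy as np

def lambda2(A):
    """Second smallest eigenvalue of L = I - D^{-1/2} A D^{-1/2} for an
    adjacency matrix A without isolated vertices."""
    d = A.sum(axis=1)
    inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(A)) - inv_sqrt[:, None] * A * inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))[1]

# Complete graph K_4: the nonzero eigenvalues of L all equal n/(n-1) = 4/3,
# so the spectral gap is 4/3.
K4 = np.ones((4, 4)) - np.eye(4)
assert abs(lambda2(K4) - 4 / 3) < 1e-9
```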

Theorem 2.3. (Garland [11], Ballmann and Światkowski [1, Theorem 2.5]) Let X be a pure (k + 1)-dimensional finite simplicial complex. If, for every (k − 1)-face σ, the link lk_X(σ) is connected and has spectral gap λ_2[lk_X(σ)] > 1 − 1/(k + 1), then H^k(X, Q) = 0.

Theorem 2.4. (Żuk [28, Theorem 1]) If X is a pure 2-dimensional locally-finite simplicial
complex such that for every vertex v, the vertex link lk(v) is connected and the normalized
Laplacian L = L[lk(v)] satisfies λ2 (L) > 1/2, then π1 (X) has property (T).

Now we shall state and prove some lemmas which will help us to understand how removal
of isolated faces affects Betti numbers. Though these are implicitly used in [9, Theorem
1.2], we explicitly state and prove them here for convenience as well as to delineate the
deterministic parts of our proofs from the probabilistic parts.
Given a simplicial complex X, define V_k ⊂ V as the set of vertices which are isolated vertices in the link of some (k − 1)-face of X. Define Σ as the set of maximal k-faces, say Σ = {σ_1, . . . , σ_m}. Define X′ = X \ Σ, the simplicial complex obtained from X by deleting all the maximal k-faces.
Lemma 2.5. A vertex v ∈ V_k if and only if v is a vertex of some k-face in Σ. It follows that
(1) V_k is empty iff Σ is empty.
(2) Let τ be a (k − 1)-face of X′. Then lk_{X′}(τ) is connected if and only if lk_X(τ) is a graph with one non-trivial component (i.e., a component with at least two vertices) and the rest are isolated vertices.

Proof. We prove (1) first. Suppose v ∈ V_k. Then there exists a (k − 1)-face τ with vertices a_1, . . . , a_k in X such that v ∈ lk_X(τ), i.e., {a_1, . . . , a_k, v} is a k-face of X. Further, v is isolated in lk_X(τ), which means that there is no vertex y such that a_1, . . . , a_k, v, y are the vertices of a (k + 1)-face of X. This means that σ := {a_1, . . . , a_k, v} is a maximal k-face, i.e., σ ∈ Σ. Conversely, suppose v is a vertex of σ ∈ Σ, say with vertices a_1, . . . , a_k, v. Call the (k − 1)-face with vertices a_1, . . . , a_k as τ. Then v ∈ lk_X(τ). Also, since there is no (k + 1)-face containing σ as a subface, v is an isolated vertex in lk_X(τ), which means v ∈ V_k.
(2) follows by observing that lk_{X′}(τ) is obtained from lk_X(τ) by removing the vertices corresponding to maximal k-faces containing τ. □

Lemma 2.6. Let X be a simplicial complex as above with exactly m maximal k-faces, and suppose X′ is obtained from X by removing all the maximal k-faces of X. Then β_k(X) > m implies β_k(X′) ≥ 1.

Proof. Suppose β_k(X) > m. Thus H^k(X) is an abelian group with at least (m + 1) infinite-order generators. Consider a generating set of H^k(X) that contains [φ_1], . . . , [φ_m], the cohomology classes corresponding to the characteristic functions of the m maximal k-faces σ_1, . . . , σ_m. As the σ_i are maximal, it is clear that δ^k(φ_i) = 0, so they are well-defined cohomology classes. Whether or not the [φ_i] are linearly independent, the generating set for H^k(X) must have at least one other infinite-order generator. The group H^k(X′) is obtained from H^k(X) with the same generating set, but by adding the relations [φ_1] = 0, . . . , [φ_m] = 0. However, as there is at least one more generator not involving any of the [φ_i], H^k(X′) has at least one infinite-order generator. Thus β_k(X′) ≥ 1. □

Lemma 2.7. Let X be a simplicial complex as above with exactly m maximal k-faces, and suppose X′ is obtained from X by removing all the maximal k-faces of X. Then β_k(X) < m implies β_{k−1}(X′) ≥ 1.

Proof. Call the characteristic functions of the m maximal k-faces φ_1, . . . , φ_m. These are k-cocycles, as they correspond to maximal faces. Then, if β_k(X) < m, the [φ_i] must be linearly dependent in H^k(X), i.e., there must be a (k − 1)-cochain λ such that δ^{k−1}(λ) = Σ_{i=1}^m a_i φ_i with not all a_i zero. It follows that λ is not a (k − 1)-coboundary, since otherwise δ^{k−1}(λ) = 0. Now consider X′ = X − {σ_1, . . . , σ_m}. In X′ we have δ^{k−1}(λ) = 0, and λ is not a coboundary in X or X′, which means λ gives a nontrivial element of H^{k−1}(X′). □
3. Proofs
We give the proofs of our main theorems in this section. The analogue of Theorem 1.1 for isolated faces is proved in Section 3.1. We relate the process of isolated faces to the Betti number process in Section 3.2, with the proofs of some technical lemmas postponed to Section 3.4. In Section 3.3, we complete the proofs of our main theorems.

3.1. Poisson process convergence of isolated faces - Theorem 3.2. Recall that N_k(t) is the number of maximal (k + 1)-cliques in G(n, t), or equivalently the number of isolated k-faces in X(n, t). We can define this formally as

N_k(t) := (1/(k+1)!) Σ^{≠}_{i_1,...,i_{k+1}} 1[i_1, . . . , i_{k+1} form a maximal clique in G(n, t)],

where Σ^{≠} denotes that the sum is over distinct indices. Since we are interested in persistent homology, we need to understand maximal cliques that exist for some s > t. We define these as

N_k^*(t) := (1/(k+1)!) Σ^{≠}_{i_1,...,i_{k+1}} 1[i_1, . . . , i_{k+1} form a maximal clique in G(n, s) for some s ≥ t].

Note that while N_k(t) is non-monotonic in t, N_k^*(t) is monotonic. Trivially, we have that N_k(t) ≤ N_k^*(t), but we show that both are asymptotically equal from t_c onwards.
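The relation N_k(t) ≤ N_k^*(t) and the monotonicity of N_k^* can be checked empirically on a small instance, with s ≥ t approximated by a finite grid (an illustrative sketch, not part of the proof):

```python
import random
from itertools import combinations

def maximal_cliques(n, U, t, k):
    """Vertex sets of maximal (k+1)-cliques of G(n, t), where U maps pairs i < j
    to their Uniform[0, 1] weight (brute force, illustration only)."""
    out = []
    for c in combinations(range(n), k + 1):
        if any(U[p] > t for p in combinations(c, 2)):
            continue  # not a clique at time t
        if all(any(U[tuple(sorted((v, i)))] > t for i in c)
               for v in set(range(n)) - set(c)):
            out.append(frozenset(c))  # no outside vertex extends the clique
    return out

rng = random.Random(3)
n, k = 8, 1
U = {p: rng.random() for p in combinations(range(n), 2)}
ts = [i / 40 for i in range(2, 41)]
cliques = {t: maximal_cliques(n, U, t, k) for t in ts}
N = {t: len(cliques[t]) for t in ts}
# N_k^*(t): cliques maximal at some s >= t (here s runs over the finite grid).
Nstar = {t: len({q for s in ts if s >= t for q in cliques[s]}) for t in ts}
assert all(N[t] <= Nstar[t] for t in ts)
assert all(Nstar[a] >= Nstar[b] for a, b in zip(ts, ts[1:]))
```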
Proposition 3.1. Let k ≥ 1 and t_c(k, n) = ( ((k/2 + 1) log n + (k/2) log log n + c) / n )^{1/(k+1)} for n ≥ 3. Then we have that as n → ∞,

(3.1) E[N_k^*(t_c(k, n)) − N_k(t_c(k, n))] → 0.

Consequently, N_k^*(t_c(k, n)) converges in distribution to a Poisson random variable with mean μ(k, c) as in Theorem 1.1.

Proof. We shall only prove (3.1), as the Poisson convergence follows from (3.1), the Poisson convergence of N_k(t_c(k, n)) [16, Theorem 2.3] (see also [9, Lemma 7.1]) and Slutsky’s theorem. From now on, we abbreviate t_c(k, n) by t_c and drop the subscript k for convenience. Let t ≥ t_c. Further set N̂(t) := N^*(t) − N(t). Thus, to prove (3.1), it suffices to show that

(3.2) lim_{n→∞} E[N̂(t_c)] = 0.

We can represent N̂(t) as

N̂(t) := (1/(k+1)!) Σ^{≠}_{i_1,...,i_{k+1}} 1[i_1, . . . , i_{k+1} do not form a clique in G(n, t)] × 1[i_1, . . . , i_{k+1} form a maximal clique in G(n, s) for some s > t].
By the construction of G(n, t) via the weights U(i, j), we re-write the above events. We will do so assuming i_1 = 1, . . . , i_{k+1} = k + 1 for notational convenience.

{1, . . . , k + 1 do not form a clique in G(n, t)} = { max_{l,m=1,...,k+1} U(l, m) > t},

{1, . . . , k + 1 form a maximal clique in G(n, s)} = { max_{l,m=1,...,k+1} U(l, m) ≤ s} ∩ ( ∩_{j>k+1} { max_{1≤l≤k+1} U(j, l) > s} ).

From the above identities, note that if 1, . . . , k + 1 form a maximal clique in G(n, s) for some s, then necessarily they form a maximal clique in G(n, s′) for s′ = max_{l,m=1,...,k+1} U(l, m). Using this observation and combining the above events, we have that

{1, . . . , k + 1 do not form a clique in G(n, t) but form a maximal clique in G(n, s) for some s > t}

= ∪_{1≤i≠j≤k+1} ( {U(i, j) > t, max_{l,m=1,...,k+1} U(l, m) ≤ U(i, j)} ∩ ( ∩_{p>k+1} { max_{l=1,...,k+1} U(p, l) > U(i, j)} ) )

=: ∪_{1≤i≠j≤k+1} A(i, j).

We have, from the definition of N̂(t), the above identity, the union bound and exchangeability, that

(3.3) E[N̂(t)] ≤ (n choose k+1) · ((k+1) choose 2) · P(A(1, 2)).

By conditioning on U(1, 2) and then using the independence of the U(·, ·)’s, we have that

P(A(1, 2)) = ∫_t^1 s^{((k+1) choose 2) − 1} (1 − s^{k+1})^{n−k−1} ds.

Now, by substituting the above probability in (3.3) and using some elementary bounds, we obtain that for a constant C depending on k,

E[N̂(t)] ≤ C n^{k+1} ∫_t^1 s^{((k+1) choose 2) − 1} e^{−n s^{k+1}} ds.

Now, using that s^{((k+1) choose 2)} = s^{k(k+1)/2} is increasing in s and s^{−1} e^{−n s^{k+1}} is decreasing in s, we can derive the following bounds. Let c_n → ∞ be such that c_n = o(log log n), and set t′_n := ( (k+2) log n / n )^{1/(k+1)}. Then

E[N̂(t_c)] ≤ C n^{k/2+1} ∫_{t_c}^{t_{c_n}} (n s^{k+1})^{k/2} s^{−1} e^{−n s^{k+1}} ds + C n^{k/2+1} ∫_{t_{c_n}}^{t′_n} (n s^{k+1})^{k/2} s^{−1} e^{−n s^{k+1}} ds + C n^{k+1} ∫_{t′_n}^1 e^{−n s^{k+1}} ds

≤ C n^{k/2+1} (n t_{c_n}^{k+1})^{k/2} e^{−n t_c^{k+1}} ∫_{t_c}^{t_{c_n}} s^{−1} ds + C n^{k/2+1} ((k+2) log n)^{k/2} e^{−n t_{c_n}^{k+1}} ∫_{t_{c_n}}^{t′_n} s^{−1} ds + C n^{−1}

≤ C e^{−c} ( k/2 + 1 + (k/2)(log log n / log n) + c_n / log n )^{k/2} log(t_{c_n}/t_c) + C e^{−c_n} (k+2)^{k/2} log(t′_n/t_{c_n}) + C n^{−1}

≤ C_{k,c} ( log(t_{c_n}/t_c) + e^{−c_n} log(t′_n/t_{c_n}) ) + C n^{−1}.

Now, as n → ∞, t_{c_n}/t_c → 1, c_n → ∞, t′_n/t_{c_n} → 2^{1/(k+1)}, and so E[N̂(t_c)] → 0. □
Theorem 3.2. Let k ≥ 1. For c ∈ R, we set t_c := t_c(k, n) = ( ((k/2 + 1) log n + (k/2) log log n + c) / n )^{1/(k+1)}. Then we have that

{N_k(t_c)}_{c∈R} →^d {P(c)}_{c∈R},

where {P(c)}_{c∈R} is a Poisson process on R with intensity measure μ(k, x) dx, and the convergence →^d is as in Theorem 1.1.

Proof. Firstly, we explain that it suffices to prove convergence of the associated point processes as random counting measures, and then reduce this to Poisson convergence for suitable random variables. Lastly, we shall establish Poisson convergence by a factorial moment argument. We shall outline the main steps but will exclude some derivations, instead pointing the reader to suitable references for similar calculations. The first step is standard in the weak convergence of pure jump processes; the second step will be based on well-known criteria for convergence of point processes and the approximation in Proposition 3.1. The last step involves computing factorial moments similar to those in [25, Proposition 33] and [9, Lemma 7.1]; here we shall only mention the key points of the computation.
Let M_p(R) be the space of locally-finite (Radon) counting measures on R equipped with the topology of vague convergence; see [24, Chapter 3] for details. Since N_k(t_c) is a pure jump process in c, we can represent N_k(·) as an element of M_p(R) by considering the times of jumps, i.e., for a Borel measurable set A ⊂ R,

N_k(A) = Σ_{c∈A} 1[N_k(t_c) ≠ N_k(t_c −)],

and similarly P(·) can also be represented as a random Radon counting measure. Due to the continuous mapping theorem (see the proof of Part I of Theorem 2.2 in [22]), it suffices to prove weak convergence of N_k in M_p(R), i.e.,

N_k ⇒ P in M_p(R).
From the above representation, N_k((c, ∞)) counts the number of isolated faces at t_c, and the isolated faces that are formed after t_c are counted twice - once when they are created and once when they become non-isolated. Thus we have that

N_k((c, ∞)) = N_k(t_c) + 2 N̂_k(t_c).

Since [9, Lemma 7.1] gives that E[N_k(t_c)] → E[P(c)] for all c ∈ R, we can immediately obtain from Proposition 3.1 that E[N_k(I)] → E[P(I)] for all I which are finite unions of disjoint intervals. Since the intensity measure of P is non-atomic, to complete the proof of weak convergence in M_p(R) it suffices to show that

(3.4) P(N_k(I) = 0) → P(P(I) = 0) = e^{−μ(k,I)} as n → ∞,

for all I which are finite unions of disjoint intervals, where μ(k, I) := ∫_I μ(k, x) dx; see [24, Proposition 3.22].
Let I = ∪_{j=1}^l (a_{2j−1}, a_{2j}] for −∞ < a_1 < . . . < a_{2l} < ∞. Fix c ∈ (−∞, a_1). Define ū = n u^{k+1} − (k/2 + 1) log n − (k/2) log log n for u ∈ [0, 1] and n ≥ 3; we omit n from the argument of ū for convenience. Define

N′(I) := (1/(k+1)!) Σ^{≠}_{i_1,...,i_{k+1}} 1[i_1, . . . , i_{k+1} form a maximal clique in G(n, t_c)] × 1[ min_{j≠i_1,...,i_{k+1}} max{Ū(j, i_1), . . . , Ū(j, i_{k+1})} ∈ I ],

which is nothing but the number of maximal cliques in G(n, t_c) that vanish in I. Setting Ū(j, σ) = max{Ū(j, i) : i ∈ σ}, we re-write N′(I) as

N′(I) = Σ_{σ∈K_{k−1}(n)} 1[σ is a clique in G(n, t_c)] 1[ min_{j∉σ} Ū(j, σ) ∈ I ],

where we are using that if min_{j∉σ} Ū(j, σ) ∈ I then σ has to be maximal in G(n, t_c). Since N̂(t_c) is asymptotically 0 by Proposition 3.1, asymptotically there are no new maximal cliques created, and hence the only change in the process N_k is due to the vanishing of maximal cliques of G(n, t_c). Thus (3.4) follows if we show that

N′(I) →^d P(I) as n → ∞,

where →^d is the standard convergence in distribution for random variables. The same can be done via a factorial moment computation [26, Theorems 2.4 and 2.5]. For I = (c, ∞), this follows from [9, Lemma 7.1] and Proposition 3.1. We only sketch the details of the generic case here.
For notational convenience, we represent (k + 1)-tuples, i.e., k-faces, by σ, and we shall fix the standard total ordering on [n]. Also set k^{(r)} = k(k − 1) · · · (k − r + 1), the rth factorial power of k for k > r. Under such notation, the rth factorial power of N′(I) can be written as

N′(I)^{(r)} = Σ^{≠}_{σ_1,...,σ_r} Π_{i=1}^r 1[σ_i is a clique in G(n, t_c)] 1[ min_{j∉σ_i} Ū(j, σ_i) ∈ I ],
and so its rth factorial moment is

E[N′(I)^{(r)}] = Σ^{≠}_{σ_1,...,σ_r} P( ∩_{i=1}^r ( {σ_i is a clique in G(n, t_c)} ∩ { min_{j∉σ_i} Ū(j, σ_i) ∈ I} ) ).

We need to show that, as n → ∞, E[N′(I)^{(r)}] → E[P(I)^{(r)}] = μ(k, I)^r. This is similar to the proofs of [25, Proposition 33] and [9, Lemma 7.1]. If we abbreviate 1[σ_i is a clique in G(n, t_c)] 1[min_{j∉σ_i} Ū(j, σ_i) ∈ I] by 1[σ_i; I] for any set I, then observe that

Π_{j=1}^r 1[σ_j; I] = Σ_{(α_1,...,α_r)∈{1,...,2l}^r} Π_{j=1}^r (−1)^{α_j+1} 1[σ_j; (a_{α_j}, ∞)].
We use Σ^*_{σ_1,...,σ_r} to denote that the σ_i’s are disjoint, i.e., have no common vertices. In this case, we can derive, using the independence of the indicators inside the sum, that

Σ^*_{σ_1,...,σ_r} E[ Π_{j=1}^r 1[σ_j; (a_{α_j}, ∞)] ] = Π_{j=1}^r ( (n − (j−1)(k+1)) choose (k+1) ) t_c^{((k+1) choose 2)} (1 − P(Ū ≤ a_{α_j})^{k+1})^{n−k−1} (1 + o(1)),

where U is a Uniform[0, 1] random variable (and Ū its transform as above), and the o(1) term is due to the fact that the edge weights between the vertices of the σ_i’s also have to satisfy the requisite condition. From this one can derive that

lim_{n→∞} Σ^*_{σ_1,...,σ_r} E[ Π_{j=1}^r 1[σ_j; (a_{α_j}, ∞)] ] = Π_{j=1}^r μ(k, (a_{α_j}, ∞)),

and now, summing over the α_j’s, we derive that

lim_{n→∞} Σ^*_{σ_1,...,σ_r} E[ Π_{j=1}^r 1[σ_j; I] ] = μ(k, I)^r.

To complete the proof, one needs to show that the summation over σ_i’s that are not disjoint does not contribute asymptotically. We skip this part as it is similar to that in the proof of [9, Lemma 7.1]. □


3.2. From isolated faces to Betti numbers - Theorem 3.3. The bulk of our work lies in relating the process of isolated faces to that of Betti numbers, and the following is the main theorem that helps us do so.

Theorem 3.3. Let k ≥ 1 and t_c := t_c(k, n) = ( ((k/2 + 1) log n + (k/2) log log n + c) / n )^{1/(k+1)} for n ≥ 3, c ∈ R. Then we have that

(3.5) lim_{n→∞} P( ∪_{t_c ≤ t ≤ 1} {β_k(X(n, t)) ≠ N_k(X(n, t))} ) = 0.

From Proposition 3.1, we know that, with probability tending to one, X(n, t) is a pure (k + 1)-dimensional complex for all t ≥ t_c apart from finitely many (though randomly many) isolated k-faces. So, thanks to Lemmas 2.6 and 2.7, to prove Theorem 1.1 we need to show triviality of the random complexes obtained by removing these finitely many isolated faces. After removing these isolated faces, the random complex is a pure (k + 1)-dimensional complex, and we shall aim to show triviality of these random complexes via Garland’s method (Theorem 2.3). This requires verification of a certain spectral gap condition. An important ingredient for such a verification is the following spectral gap result for Erdös-Rényi random graphs.
Theorem 3.4. ([13, Theorem 1.1]) Fix δ > 0 and let p > (1/2 + δ) log n / n. Let d = p(n − 1) denote the expected degree of a vertex. For every ε > 0, there is a constant C = C(δ, ε) so that

P( |λ_2(G̃(n, p)) − 1| > C/√d ) ≤ C n exp(−(2 − ε)d) + C exp(−d^{1/4} log n),

where G̃ represents the giant component of the graph G, i.e., its largest component.

The largest component is well-defined for G(n, p) with p as above; for example, see [13,
Lemma 5.8].
Remark 3.5. The above bound can be interpreted as saying that if p > (α + 1) log n / n, then for some C = C(α) we have that

P( |λ_2(G̃(n, p)) − 1| > C/√d ) ≤ C n^{−2α−1/2}.

This is because n exp(−(2 − ε)d) dominates exp(−d^{1/4} log n), and n exp(−(2 − ε)d) < n exp(−(2 − ε)(α + 1) log n) = n^{1−(2−ε)(1+α)} = n^{−2α−1+ε(1+α)}. Choosing ε = 1/(2(1 + α)) yields the inequality above.
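The exponent arithmetic in this remark can be checked mechanically with exact rationals (a small sanity check):

```python
from fractions import Fraction

def exponent(alpha, eps):
    """Exponent of n in n * exp(-(2 - eps)(alpha + 1) log n) = n^{1 - (2 - eps)(1 + alpha)}."""
    return 1 - (2 - eps) * (1 + alpha)

for alpha in (Fraction(1), Fraction(3, 2), Fraction(5)):
    eps = 1 / (2 * (1 + alpha))  # the choice made in the remark
    # 1 - (2 - eps)(1 + alpha) = -2*alpha - 1 + eps*(1 + alpha) = -2*alpha - 1/2
    assert exponent(alpha, eps) == -2 * alpha - Fraction(1, 2)
```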

In [16] and [9], a different version of Theorem 3.4 is used, where the probability estimate for G(n, p) to be connected and have a large spectral gap is at most C n^{−α}. Such a bound doesn’t suffice for us; instead, we use the above bound and separately estimate the probability of an Erdös-Rényi random graph consisting of a ‘giant component’ and isolated vertices. This decays sufficiently fast close to the connectivity regime; see Steps 2, 3 and 4 of the proof. We remark again on the proof simplifications for t ≫ t_c and the necessity of the remaining steps in Remark 3.7.
We first state a technical lemma necessary for the proof of Theorem 3.3, but defer its proof to the end. For m ≥ 1, set

q(α, m) = (α + 1) log m / m.

Lemma 3.6. Let α = k(k+3)/2 − ρ for ρ > ρ_0, for ρ_0 small. Set μ(t) := (n − k)t^k and μ̄(t) := μ(t) − (μ(t))^{3/5}. Then ∃ n_0 ∈ N such that for all n ≥ n_0, m ≥ 1 and t ≥ t_c, if t < q(α, m) then m < μ̄(t).

Proof. (Proof of Theorem 3.3) Recall that we set β_k(t) = β_k(X(n, t)) for notational convenience. We break the proof into six steps so that it can be followed easily. In the first step, we reduce the proof to deriving quantitative bounds for certain probabilities at a fixed time t. At the end of Step 1, we explain what is done in each of the remaining steps. We shall be using some topological lemmas from Section 2 and probabilistic estimates from Section 3.4.

Step 1: Reduction to estimates at a fixed time t. Observe that

P( ∪_{t_c ≤ t} {β_k(t) ≠ N_k(t)} ) ≤ P( ∪_{t_c ≤ t} {β_k(t) ≠ N_k(t)} ∩ {N_k(t_c) ≤ m} ) + P( N_k(t_c) ≥ m ).

Since N_k(t_c) converges in distribution (see Proposition 3.1), we can choose m large to make the second term small, and so it suffices to show that, for all m ≥ 1,

(3.6) lim_{n→∞} P( ∪_{t_c ≤ t} {β_k(t) ≠ N_k(t)} ∩ {N_k(t_c) = m} ) = 0.

From the proof of Proposition 3.1, we know that E[N̂_k(t_c)] → 0. Thus (3.6) follows if we show that, for all m ≥ 1,

(3.7) lim_{n→∞} P( ∪_{t_c ≤ t} {β_k(t) ≠ N_k(t), N_k(t_c) = m, N̂_k(t_c) = 0} ) = 0.


We now define Rk−1,m (t), Rk−1,m (t) motivated by similar definitions in the proof of [9, The-
orem 1.3].
6=
∗ 1 X
(3.8) Rk−1,m (t) := 1[i1 , . . . , ik form a (k − 1)-face which
k! i ,...,i
1 k

is a subface of m or fewer k-faces in X(s) for some s ≥ t. ]


6=
1 X
(3.9) Rk−1,m (t) := 1[i1 , . . . , ik form a (k − 1)-face which
k! i ,...,i
1 k

is a subface of m or fewer k-faces in X(t) ]

We can break up the relevant probability into the following terms:

P( ∪_{t_c ≤ t} {β_k(t) ≠ N_k(t), N_k(t_c) = m, N̂_k(t_c) = 0} ) ≤ P( R^*_{k−1,m}(t_c) > 0 )

+ P( ∪_{t_c ≤ t} {β_k(t) > N_k(t), N_k(t_c) = m, N̂_k(t_c) = 0, R^*_{k−1,m}(t_c) = 0} )

(3.10) + P( ∪_{t_c ≤ t} {β_k(t) < N_k(t), N_k(t_c) = m, N̂_k(t_c) = 0, R^*_{k−1,m}(t_c) = 0} ).

From the upcoming Lemma 3.8, we have that the first term in the inequality (3.10) converges to 0 as n → ∞. To complete the proof, we need to show that the second and third terms in the inequality (3.10) vanish asymptotically. To prove the same, we shall show that

(3.11) Σ_{n≥1} E[Z_n] < ∞,
where
Z 1

Zn := 1[βk (t) > Nk (t), Nk (tc ) = m, N̂k (tc ) = 0, Rk−1,m (tc ) = 0]dt
tc
Z 1

+ 1[βk (t) < Nk (t), Nk (tc ) = m, N̂k (tc ) = 0, Rk−1,m (tc ) = 0]dt.
tc

Since βk (t), Nk (t) are pure jump-processes in t for every fixed n, Zn 6= 0 iff the event An :=

S
tc ≤t {βk (t) 6= Nk (t), Nk (tc ) = m, N̂k (tc ) = 0, Rk−1,m (tc ) = 0} occurs. So, we have that

1[Zn > 0] = 1[An ].


Thus, using (3.11) and Markov's inequality, we derive that for all $\epsilon>0$,
\[
\sum_{n\ge1}\mathbb{P}(Z_n>\epsilon)<\infty,
\]
and then by the Borel-Cantelli lemma we have that $Z_n\to0$ a.s. as $n\to\infty$. This yields that $\mathbb{1}[A_n]\to0$ a.s. as $n\to\infty$, and from the bounded convergence theorem we obtain that $\mathbb{P}(A_n)\to0$ as $n\to\infty$. So we have argued that (3.11) implies that the second and third terms in the inequality (3.10) vanish asymptotically, and as argued above (3.11), this proves (3.7).
Setting
\begin{align*}
P(t) &:= \mathbb{P}\big(\beta_k(X(n,t))>N_k(t),\, N_k(t_c)=m,\, \hat N_k(t_c)=0,\, R^*_{k-1,m}(t_c)=0\big),\\
Q(t) &:= \mathbb{P}\big(\beta_k(X(n,t))<N_k(t),\, N_k(t_c)=m,\, \hat N_k(t_c)=0,\, R^*_{k-1,m}(t_c)=0\big),
\end{align*}
we will prove (3.11) by showing that
\[
(3.12)\qquad \sum_{n\ge1}\int_{t_c}^1 P(t)\,dt<\infty \quad\text{and}\quad \sum_{n\ge1}\int_{t_c}^1 Q(t)\,dt<\infty.
\]

We now give a roadmap for the rest of the proof. In STEP 2, we shall break $P(t)$ into two terms $P_1(t)$ and $P_2(t)$, and then bound each of them in STEP 3 and STEP 4 respectively. In STEP 5, we break $Q(t)$ into three terms and bound two of them; the remaining term is bounded in STEP 6.

Step 2: Breaking P(t) into P1(t) and P2(t). Define $X'(t)=X(t)\setminus\Sigma_k(t)$, where $\Sigma_k(t)$ is the set of isolated faces. From Lemma 2.6,
\[
\{\beta_k(X(n,t))>N_k(t),\, \hat N_k(t_c)=0,\, R^*_{k-1,m}(t)=0\}\subset\{\beta_k(X'(n,t))\ge1,\, \hat N_k(t_c)=0,\, R^*_{k-1,m}(t)=0\}.
\]
For $X'(n,t)$ as above and $\sigma\in K_{k-1}(n)$, define
\begin{align*}
B_1(\sigma,t) &:= \{\mathrm{lk}_\sigma(X(n,t)) \text{ has a component of size in } [2,N_\sigma(t)/2]\},\\
B_2(\sigma,t) &:= \Big\{\lambda_2\big(\widetilde{\mathrm{lk}}_\sigma(X(n,t))\big)\le\frac{k}{k+1}\Big\},
\end{align*}
where $N_\sigma(t)$ is the number of vertices in $\mathrm{lk}_\sigma(X(n,t))$ and $\widetilde{\mathrm{lk}}_\sigma(X(n,t))$ is the giant component of $\mathrm{lk}_\sigma(X(n,t))$. The giant component (i.e., the largest component) is well-defined for large $n$ (see [13, Lemma 5.8] for example). Note that $\mathrm{lk}_\sigma(X(n,t))$ is also an Erdős-Rényi random graph $G(N_\sigma(t),t)$. Observe that by Lemma 2.5, we have
\[
(3.13)\qquad \{\mathrm{lk}_\sigma(X'(n,t))\ne\widetilde{\mathrm{lk}}_\sigma(X(n,t))\}=\{\mathrm{lk}_\sigma(X'(n,t)) \text{ is not connected}\}\subset B_1(\sigma,t).
\]
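The Erdős-Rényi description of the link above can be checked by a quick simulation. The following sketch is ours and not part of the proof; the parameter values are arbitrary. It estimates the mean number of vertices in the link of a fixed $(k-1)$-face and compares it with the $\mathrm{Binomial}(n-k,t^k)$ mean:

```python
import random

# Monte Carlo sanity check (ours, not part of the proof; parameters arbitrary):
# a vertex v outside a fixed (k-1)-face sigma lies in lk_sigma(X(n, t)) iff all
# k edge weights from v to sigma are at most t, so N_sigma(t) ~ Binomial(n-k, t^k).
def sample_link_size(n, k, t, rng):
    # each of the n - k outside vertices joins the link independently
    return sum(all(rng.random() <= t for _ in range(k)) for _ in range(n - k))

rng = random.Random(0)
n, k, t, trials = 30, 2, 0.5, 4000
mean = sum(sample_link_size(n, k, t, rng) for _ in range(trials)) / trials
print(round(mean, 2))  # should be close to (n - k) * t**k = 7.0
```

Conditioned on its vertex set, the edges of the link are again i.i.d. present with probability $t$, which is the $G(N_\sigma(t),t)$ description used repeatedly below.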
Thus, using the above observations, we derive that
\begin{align*}
P(t) &\le \mathbb{P}\big(\beta_k(X'(n,t))\ge1,\, \hat N_k(t_c)=0,\, R^*_{k-1,m}(t)=0\big)\\
&\le \mathbb{P}\Big(\bigcup_{\sigma\in X_k(n,t)}\{\mathrm{lk}_\sigma(X'(n,t)) \text{ is not connected}\}\Big)\\
&\quad+\mathbb{P}\Big(\bigcup_{\sigma\in X_k(n,t)}\Big\{\lambda_2\big(\mathrm{lk}_\sigma(X'(n,t))\big)\le\frac{k}{k+1}\Big\}\cap\{\mathrm{lk}_\sigma(X'(n,t)) \text{ is connected}\}\Big)\qquad\text{(using Garland's theorem, Theorem 2.3)}\\
&\le \mathbb{P}\Big(\bigcup_{\sigma\in X_k(n,t)}B_1(\sigma,t)\Big)+\mathbb{P}\Big(\bigcup_{\sigma\in X_k(n,t)}B_2(\sigma,t)\Big)\qquad\text{(using (3.13))}\\
&\le \sum_{\sigma\in K_{k-1}(n)}\mathbb{P}\big(\{\sigma\in X_k(n,t)\}\cap B_1(\sigma,t)\big)+\sum_{\sigma\in K_{k-1}(n)}\mathbb{P}\big(\{\sigma\in X_k(n,t)\}\cap B_2(\sigma,t)\big)\\
(3.14)\qquad &\le n^k t^{\binom{k}{2}}\big[P_1(t)+P_2(t)\big]
\end{align*}
(since $\{\sigma\in X_k(n,t)\}$ and $B_i(\sigma,t)$ are independent, and the latter are identically distributed), where $P_1(t)=\mathbb{P}(B_1(\sigma,t))$ and $P_2(t)=\mathbb{P}(B_2(\sigma,t))$ for some $\sigma\in K_{k-1}(n)$. Thus, to prove $\sum_{n\ge1}\int_{t_c}^1 P(t)\,dt<\infty$, it suffices to show that $\sum_{n\ge1}\int_{t_c}^1 n^kt^{\binom{k}{2}}P_i(t)\,dt<\infty$ for $i=1,2$.

Step 3: Bounding P1(t). Recall $\mu(t)=(n-k)t^k$ and $\underline\mu(t)=\mu(t)-\mu(t)^{3/5}$, and set $\overline\mu(t):=\mu(t)+\mu(t)^{3/5}$. As for $P_1(t)$, observe that
\begin{align*}
P_1(t) &\le \mathbb{P}\big(\{\mathrm{lk}_\sigma(X(n,t)) \text{ has a component of size in } [2,\overline\mu(t)/2]\}\cap\{N_\sigma(t)\in(\underline\mu(t),\overline\mu(t))\}\big)\\
&\quad+\mathbb{P}\big(|N_\sigma(t)-\mu(t)|\ge\mu(t)^{3/5}\big)\\
(3.15)\qquad &=: T_1(t)+T_2(t).
\end{align*}
We now try to bound $T_1(t)$, which is a computation about an Erdős-Rényi random graph similar to $G(nt^k,t)$. Since we are close to the connectivity regime for $G(nt^k,t)$, with very high probability $G(nt^k,t)$ is connected except for finitely many isolated vertices. Thus, we expect $P_1(t)$ to decay sufficiently fast.

Let $C_j(\sigma,t)$ be the number of components in $\mathrm{lk}_\sigma(X(n,t))$ with $j$ vertices. Set $u_j(t)=\mathbb{E}\big[C_j(\sigma,t)\mathbb{1}[N_\sigma(t)\in(\underline\mu(t),\overline\mu(t))]\big]$. Trivially, we have that
\[
T_1(t)\le\sum_{j=2}^{\lfloor\overline\mu(t)/2\rfloor}u_j(t).
\]
Since $\mathrm{lk}_\sigma(X(n,t))\stackrel{d}{=}G(N_\sigma(t),t)$, by using the standard spanning-tree counting estimates (see for example [10, Section 4.1]) and conditioning on $N_\sigma(t)$, we have that
\[
u_j(t)\le\frac{\overline\mu(t)^j}{j!}\,j^{j-2}\,t^{j-1}(1-t)^{j(\underline\mu(t)-j)}\le\frac{1}{tj^2}\big(e\overline\mu(t)t\big)^j e^{-tj(\underline\mu(t)-j)}.
\]
j! tj
For $n$ large and $j\le\overline\mu(t)/2$, it holds that $\overline\mu(t)\le 5nt^k/4$ and $\underline\mu(t)-\overline\mu(t)/2\ge 2nt^k/5$, and hence we derive that
\[
n^k t^{\binom{k}{2}}u_j(t)\le\frac{(n^2t^{k+1})^{k/2}}{tj^2}\Big(\frac{5}{4}ent^{k+1}e^{-2nt^{k+1}/5}\Big)^j.
\]
We can choose $\kappa$ large, depending on $k$, such that for $t\ge\kappa t_c$ we have $2nt^{k+1}/5\ge(\frac{k}{2}+5)\log n$, and hence we obtain that
\[
n^kt^{\binom{k}{2}}u_j(t)\le j^{-2}\Big(\frac{5}{4}en^{-3/2}\Big)^j.
\]
This gives that $\sum_{j=2}^{\lfloor\overline\mu(t)/2\rfloor}n^kt^{\binom{k}{2}}u_j(t)\le Cn^{-5/4}$ for $t\ge\kappa t_c$ and some constant $C$. So
\[
(3.16)\qquad \sum_{n\ge1}\int_{\kappa t_c}^1 n^kt^{\binom{k}{2}}T_1(t)\,dt<\infty.
\]

Now consider $t\in[t_c,\kappa t_c)$. Then, again using the above bounds on $u_j(t)$ along with the choice of $\underline\mu(t)$ and $\overline\mu(t)$, we obtain that for large enough $n$,
\begin{align*}
n^kt^{\binom{k}{2}}u_j(t)&\le\frac{1}{t_cj^2}(2k\kappa n\log n)^{k/2}\Big(\frac{5}{4}e\kappa^{k+1}nt_c^{k+1}n^{-2(k/2+1)/5}\Big)^j\\
&\le\frac{1}{t_cj^2}\,n^{\frac{k}{2}(1-2j/5)-\frac{19}{20}}(2k\kappa\log n)^{k/2}\Big(\frac{5}{4}e\kappa^{k+1}nt_c^{k+1}n^{-1/20}\Big)^j.
\end{align*}
Since $\frac{k}{2}(1-2j/5)-\frac{19}{20}<-1$ for $k\ge1$, $j\ge3$, this suffices to derive that
\[
\sum_{n\ge1}\int_{t_c}^{\kappa t_c}\sum_{j=3}^{\lfloor\overline\mu(t)/2\rfloor}n^kt^{\binom{k}{2}}u_j(t)\,dt<\infty.
\]
For $j=2$, one can obtain a more exact bound as
\[
n^kt^{\binom{k}{2}}u_2(t)\le\frac{1}{4t_c}(2k\kappa n\log n)^{k/2}\Big(\frac{5}{4}e\kappa^{k+1}nt_c^{k+1}e^{-t(\underline\mu(t)-2)}\Big)^2.
\]
Substituting this into the above bounds, we have that
\[
\sum_{n\ge1}\int_{t_c}^{\kappa t_c}\sum_{j=2}^{\lfloor\overline\mu(t)/2\rfloor}n^kt^{\binom{k}{2}}u_j(t)\,dt<\infty,
\]
and hence, combining with (3.16), we have shown that
\[
(3.17)\qquad \sum_{n\ge1}\int_{t_c}^1 n^kt^{\binom{k}{2}}T_1(t)\,dt<\infty.
\]
Now we bound $T_2(t)$. Since $N_\sigma(t)$ has binomial distribution with parameters $n-k$ and $t^k$, using Bernstein's inequality [27, Theorem 2.8.4] for binomial random variables, we can derive that
\begin{align*}
T_2(t)=\mathbb{P}\big(|N_\sigma(t)-\mu(t)|\ge\mu(t)^{3/5}\big)&\le 2\exp\Big(-\frac{(n-k)^{1/5}t^{6k/5}}{2\big(t^k(1-t^k)+\frac{1}{3}(t^{3k}/(n-k)^2)^{1/5}\big)}\Big)\\
&\le 2\exp\Big(-\frac{(n-k)^{1/5}t^{6k/5}}{2\big(t^k+\frac{1}{3}(t^{3k}/(n-k)^2)^{1/5}\big)}\Big)\\
&\le 2\exp\Big(-\frac{n^{1/5}t_c^{6k/5}}{2\big(t_c^k+(t_c^{3k}/n^2)^{1/5}\big)}\Big),
\end{align*}
as the term inside the exponential is increasing in $t$.
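The form of Bernstein's inequality used here can be sanity-checked numerically. The sketch below is ours (the parameters $n$, $p$ are arbitrary); it compares the exact two-sided binomial tail at deviation $\mu^{3/5}$ with the bound $2\exp(-s^2/(2(\mathrm{Var}+s/3)))$:

```python
import math

# Numeric illustration (ours; n, p arbitrary): Bernstein's inequality for
# N ~ Binomial(n, p) with mean mu bounds the two-sided tail at s = mu**0.6 by
# 2 * exp(-s**2 / (2 * (Var(N) + s / 3))).
def binom_two_sided_tail(n, p, s):
    mu = n * p
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(n + 1) if abs(j - mu) >= s)

n, p = 400, 0.1
s = (n * p) ** 0.6
exact = binom_two_sided_tail(n, p, s)
bernstein = 2 * math.exp(-s**2 / (2 * (n * p * (1 - p) + s / 3)))
print(exact <= bernstein)  # the Bernstein bound dominates the exact tail
```

The bound is loose at fixed $n$, but it decays exponentially in the deviation, which is all the summability arguments above require.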

Now substituting the above bound, we obtain that
\begin{align*}
\int_{t_c}^1 n^kt^{\binom{k}{2}}T_2(t)\,dt&\le 2\int_{t_c}^1 n^kt^{\binom{k}{2}}\exp\Big(-\frac{n^{1/5}t_c^{6k/5}}{2\big(t_c^k+(t_c^{3k}/n^2)^{1/5}\big)}\Big)\,dt\\
&\le 2\int_{t_c}^1 n^kt^{\binom{k}{2}}\exp\big(-n^{\frac{3}{5(k+1)}}(\log n)^{\frac{3k}{5(k+1)}}\big)\,dt\qquad\text{(using the value of } t_c)\\
&\le 2n^k\exp\big(-n^{\frac{3}{5(k+1)}}(\log n)^{\frac{3k}{5(k+1)}}\big).
\end{align*}
Hence, we have that
\[
(3.18)\qquad \sum_{n\ge1}\int_{t_c}^1 n^kt^{\binom{k}{2}}T_2(t)\,dt<\infty.
\]
So, combining (3.18) with (3.17) and (3.15), we obtain that
\[
(3.19)\qquad \sum_{n\ge1}\int_{t_c}^1 n^kt^{\binom{k}{2}}P_1(t)\,dt<\infty.
\]

Step 4: Bounding P2 (t). We will estimate P2 (t) using Theorem 3.4.


Let $\alpha=\frac{k(k+3)}{2}-\rho$ for $\rho$ to be chosen later, and recall $\mu(t)=(n-k)t^k$ and $\underline\mu(t):=\mu(t)-\mu(t)^{3/5}$ as in Lemma 3.6. Note that $t(\underline\mu(t)-1)$ is increasing in both $n$ and $t$ for $t\ge t_c$. Also, since $t(\underline\mu(t)-1)\to\infty$ as $n\to\infty$, we can pick $n_0$ large enough so that Lemma 3.6 applies. Also, assume that for all $n\ge n_0$ and for $C=C(\alpha)$ as in Remark 3.5, it holds that
\[
(3.20)\qquad \frac{C}{\sqrt{(\underline\mu(t)-1)t}}\le\frac{C}{\sqrt{(\underline\mu(t_c)-1)t_c}}\le\frac{1}{k+1},\qquad t\ge t_c.
\]
Recall the notation $q(\alpha,m)$ from before Lemma 3.6, and also that $N_\sigma(t)$ is the number of vertices in $\mathrm{lk}_\sigma(X(n,t))$. Then
\begin{align*}
P_2(t)&\le \mathbb{P}\big(B_2(\sigma,t)\cap\{N_\sigma(t)\ge\underline\mu(t)\}\big)+\mathbb{P}\big(N_\sigma(t)<\underline\mu(t)\big)\\
&\le \mathbb{P}\big(B_2(\sigma,t)\cap\{t\ge q(\alpha,N_\sigma(t)),\,N_\sigma(t)\ge\underline\mu(t)\}\big)+\mathbb{P}\big(N_\sigma(t)<\underline\mu(t)\big)\qquad\text{(using } n\ge n_0 \text{ and Lemma 3.6)}\\
&\le \mathbb{P}\big(B_2(\sigma,t)\cap\{t\ge q(\alpha,N_\sigma(t)),\,N_\sigma(t)\ge\underline\mu(t)\}\big)+\mathbb{P}\big(|N_\sigma(t)-\mu(t)|\ge\mu(t)^{3/5}\big)\\
(3.21)\qquad &=: T_3(t)+T_2(t),
\end{align*}

where recall that $T_2(t)$ is defined in (3.15). Now, we shall evaluate $T_3(t)$ by conditioning on $N_\sigma(t)$. Observe that conditioned on $N_\sigma(t)$, $\mathrm{lk}_\sigma(X(n,t))$ has the same distribution as $G(N_\sigma(t),t)$. Thus, given $N_\sigma(t)$,
\[
B_2(\sigma,t)\subset\Big\{|1-\lambda_2(\tilde G(N_\sigma(t),t))|>\frac{1}{k+1}\Big\}.
\]
So, we derive that
\begin{align*}
T_3(t)&=\mathbb{E}\big[\mathbb{1}[t\ge q(\alpha,N_\sigma(t)),\,N_\sigma(t)\ge\underline\mu(t)]\,\mathbb{P}(B_2(\sigma,t)\mid N_\sigma(t))\big]\\
&\le\mathbb{E}\Big[\mathbb{1}[t\ge q(\alpha,N_\sigma(t)),\,N_\sigma(t)\ge\underline\mu(t)]\,\mathbb{P}\Big(|1-\lambda_2(\tilde G(N_\sigma(t),t))|>\frac{1}{k+1}\,\Big|\,N_\sigma(t)\Big)\Big]\\
&\le\mathbb{E}\big[\mathbb{1}[t\ge q(\alpha,N_\sigma(t)),\,N_\sigma(t)\ge\underline\mu(t)]\,C_\alpha N_\sigma(t)^{-2\alpha-1/2}\big]\qquad\text{(from Remark 3.5 with } d=(N_\sigma(t)-1)t \text{ and (3.20), for } n\ge n_0)\\
&\le C_\alpha\underline\mu(t)^{-2\alpha-1/2}\qquad\text{(since } N_\sigma(t)\ge\underline\mu(t)).
\end{align*}

Without loss of generality, we assume that for $n\ge n_0$, $(1-\mu(t_c)^{-2/5})^{2\alpha+1/2}\ge1/2$. As $\mu(t)\ge\mu(t_c)$, we have that for $n\ge n_0$, $(1-\mu(t)^{-2/5})^{2\alpha+1/2}\ge1/2$ for all $t\ge t_c$, and hence $\underline\mu(t)^{-2\alpha-1/2}\le2\mu(t)^{-2\alpha-1/2}$. Thus, we can derive that
\begin{align*}
\int_{t_c}^1 n^kt^{\binom{k}{2}}T_3(t)\,dt&\le C_\alpha\int_{t_c}^1 n^kt^{\binom{k}{2}}\underline\mu(t)^{-2\alpha-1/2}\,dt\le 2C_\alpha\int_{t_c}^1 n^kt^{\binom{k}{2}}\big((n-k)t^k\big)^{-2\alpha-1/2}\,dt\\
&\le C'_\alpha\, n^{k-2\alpha-1/2}\int_{t_c}^1 t^{k\left(\frac{k+1}{2}-(2\alpha+1/2)\right)}\,dt\\
&\le C'_\alpha\, n^{k-2\alpha-1/2}\,t_c^{k\left(\frac{k+1}{2}-(2\alpha+1/2)\right)}\qquad(t \text{ has a negative exponent, as } 2\alpha=k(k+3)-2\rho \text{ with } \rho \text{ small})\\
&=C'_\alpha\Big(\big(\tfrac{k}{2}+1\big)\log n+\tfrac{k}{2}\log\log n+c\Big)^{\frac{k}{k+1}\left(\frac{k+1}{2}-(2\alpha+1/2)\right)}n^{\frac{k}{2}-\frac{2\alpha+1/2}{k+1}}\qquad\text{(substituting the value of } t_c)\\
&=C'_\alpha\Big(\big(\tfrac{k}{2}+1\big)\log n+\tfrac{k}{2}\log\log n+c\Big)^{\frac{k}{k+1}\left(\frac{k+1}{2}-(2\alpha+1/2)\right)}n^{-\frac{k^2+5k+1-2\rho}{2(k+1)}},
\end{align*}
where in the last line we have used that $2\alpha=k(k+3)-2\rho$, and we can choose a smaller $\rho$ if needed; $C'_\alpha$ is a constant depending only on $\alpha$ and $k$. Since the exponent of $n$ is less than $-1$, we have from the above bound that
\[
(3.22)\qquad \sum_{n\ge1}\int_{t_c}^1 n^kt^{\binom{k}{2}}T_3(t)\,dt<\infty.
\]

This, along with (3.18) and (3.21), gives
\[
(3.23)\qquad \sum_{n\ge1}\int_{t_c}^1 n^kt^{\binom{k}{2}}P_2(t)\,dt<\infty.
\]

Step 5: Bounding Q(t). Recall that $\Sigma_k(t)$ is the set of isolated faces in $X(n,t)$ and $X'(n,t)=X(n,t)\setminus\Sigma_k(t)$. From Lemma 2.7, we have that $Q(t)$ can be bounded as follows:
\[
Q(t)\le\mathbb{P}\big(\beta_{k-1}(X'(t))\ne0,\,N_k(t_c)=m,\,\hat N_k(t_c)=0,\,R^*_{k-1,m}(t_c)=0\big).
\]
Thus, we can further bound this by using Theorem 2.3 again as follows:
\[
(3.24)\qquad Q(t)\le Q_1(t)+Q_2(t)+Q_3(t),
\]
where
\begin{align*}
Q_1(t)&:=\mathbb{P}\Big(\bigcup_{\sigma\in K_{k-2}(n)}\{\mathrm{lk}_\sigma(X'(n,t)) \text{ is not connected}\}\cap\{N_k(t_c)=m,\,\hat N_k(t_c)=0,\,R^*_{k-1,m}(t_c)=0\}\Big),\\
Q_2(t)&:=\mathbb{P}\Big(\bigcup_{\sigma\in K_{k-2}(n)}\Big\{\big|\lambda_2\big(\mathrm{lk}_\sigma(X'(n,t))\big)-\lambda_2\big(\mathrm{lk}_\sigma(X(n,t))\big)\big|\ge\frac{1}{2(k+1)}\Big\}\Big),\\
Q_3(t)&:=\mathbb{P}\Big(\bigcup_{\sigma\in K_{k-2}(n)}\Big\{\big|1-\lambda_2\big(\mathrm{lk}_\sigma(X(n,t))\big)\big|\ge\frac{1}{2(k+1)}\Big\}\Big).
\end{align*}
Using the spectral gap theorem (Theorem 3.4) as in STEP 4 while bounding $P_2(t)$, one can show that
\[
(3.25)\qquad \sum_{n\ge1}\int_{t_c}^1 Q_3(t)\,dt<\infty.
\]

One can use the Hoffman-Wielandt theorem [12] to bound $Q_2(t)$ as follows. It allows us to control the spectral gap of $\mathrm{lk}_\sigma(X'(n,t))$ in terms of the spectral gap of $\mathrm{lk}_\sigma(X(n,t))$. The theorem says the following: let $A$ and $B$ be normal matrices, and let their eigenvalues $a_i$ and $b_i$ be ordered such that $\sum_i|a_i-b_i|^2$ is minimised. Then
\[
(3.26)\qquad \sum_i|a_i-b_i|^2\le\|A-B\|^2,
\]
where $\|\cdot\|$ denotes the Frobenius matrix norm. Thus, we have
\begin{align*}
Q_2(t)&=\mathbb{P}\Big(\bigcup_{\sigma\in K_{k-2}(n)}\Big\{\big|\lambda_2\big(\mathrm{lk}_\sigma(X'(n,t))\big)-\lambda_2\big(\mathrm{lk}_\sigma(X(n,t))\big)\big|\ge\frac{1}{2(k+1)}\Big\}\Big)\\
&\le\mathbb{P}\Big(\bigcup_{\sigma\in K_{k-2}(n)}\Big\{\big\|L_{\mathrm{lk}_\sigma(X'(n,t))}-L_{\mathrm{lk}_\sigma(X(n,t))}\big\|\ge\frac{1}{2(k+1)}\Big\}\Big)\qquad\text{(by (3.26))}.
\end{align*}
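As an illustration (ours, not from [12]), the sorted-spectrum form of the Hoffman-Wielandt inequality for symmetric matrices can be verified numerically; for $2\times2$ symmetric matrices the eigenvalues have a closed form, so no linear-algebra library is needed:

```python
import math
import random

# Numeric illustration (ours, not from [12]): for real symmetric A and B with
# eigenvalues sorted in increasing order, sum_i |a_i - b_i|^2 <= ||A - B||_F^2.
# For 2x2 symmetric [[a, b], [b, c]] the eigenvalues are
# (a + c)/2 +/- sqrt(((a - c)/2)^2 + b^2).
def eigs2(a, b, c):
    mid, rad = (a + c) / 2, math.hypot((a - c) / 2, b)
    return (mid - rad, mid + rad)

rng = random.Random(0)
ok = True
for _ in range(1000):
    A = [rng.uniform(-1, 1) for _ in range(3)]  # (a, b, c) of [[a, b], [b, c]]
    B = [rng.uniform(-1, 1) for _ in range(3)]
    lhs = sum((x - y) ** 2 for x, y in zip(eigs2(*A), eigs2(*B)))
    frob_sq = (A[0] - B[0]) ** 2 + 2 * (A[1] - B[1]) ** 2 + (A[2] - B[2]) ** 2
    ok = ok and lhs <= frob_sq + 1e-12
print(ok)  # the inequality holds in every trial
```

In particular, a small perturbation of the Laplacian in Frobenius norm can only move $\lambda_2$ by a small amount, which is exactly how (3.26) is used above.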

Since the entries of $L_{\mathrm{lk}_\sigma(X'(n,t))}$ and $L_{\mathrm{lk}_\sigma(X(n,t))}$ differ by the degrees, in $\mathrm{lk}_\sigma$, of those vertices that form a maximal $k$-face with $\sigma$, we will overcount and bound the probability that some vertex outside $\sigma$ has low connectivity with the vertices in $\mathrm{lk}_\sigma(X(n,t))$. Note that for any vertex $v\notin\sigma$, the number of edges from $v$ to vertices in $\mathrm{lk}_\sigma(X(n,t))$ has binomial distribution with parameters $n-k$ and $t^k$. Call this quantity $\deg'(v)$. Then, continuing from above,
\begin{align*}
Q_2(t)&\le\mathbb{P}\Big(\bigcup_{\sigma\in K_{k-2}(n)}\Big\{\big\|L_{\mathrm{lk}_\sigma(X'(n,t))}-L_{\mathrm{lk}_\sigma(X(n,t))}\big\|\ge\frac{1}{2(k+1)}\Big\}\Big)\\
&\le\mathbb{P}\Big(\bigcup_{\sigma\in K_{k-2}(n)}\bigcup_{v\notin\sigma}\Big\{\deg'(v)\le3(k+1)\binom{m}{2}\Big\}\Big)\\
&\le(n-k+1)\binom{n}{k-1}\exp\Big(-\frac{\big(3(k+1)\binom{m}{2}-(n-k)t^k\big)^2}{2(n-k)t^k(1-t^k)+\frac{1}{3}\big(3(k+1)\binom{m}{2}-(n-k)t^k\big)}\Big)\qquad\text{(by Bernstein's inequality [27, Theorem 2.8.4])}\\
&\le n^k\exp\Big(-\frac{\big(3(k+1)\binom{m}{2}-nt^k\big)^2}{2nt^k+\frac{1}{3}\big(3(k+1)\binom{m}{2}-nt^k\big)}\Big)\\
&\le n^k\exp\Big(-\frac{\big(3(k+1)\binom{m}{2}-nt_c^k\big)^2}{2nt_c^k+\frac{1}{3}\big(3(k+1)\binom{m}{2}-nt_c^k\big)}\Big)\\
&\sim n^k\exp\big(-n^{1/(k+1)}\big).
\end{align*}
Thus, we have
\[
(3.27)\qquad \sum_{n\ge1}\int_{t_c}^1 Q_2(t)\,dt<\infty.
\]

The next and last step of the proof is focussed on bounding $Q_1(t)$.

Step 6: Bounding Q1(t). Under the event in consideration in $Q_1(t)$, $\Sigma_k(t)$ has cardinality $m$, and since removal of the faces in $\Sigma_k(t)$ disconnects $\mathrm{lk}_\sigma(X'(n,t))$, this implies that $\mathrm{lk}_\sigma(X(n,t))$ is not $m$-connected, i.e., removal of $m$ edges disconnects $\mathrm{lk}_\sigma(X(n,t))$, and so does removal of $m$ vertices. This is because removal of every $k$-face can delete at most a single edge in $\mathrm{lk}_\sigma(X'(n,t))$. Since $\mathrm{lk}_\sigma(X(n,t))$ is again distributed as $G(N_\sigma(t),t)$, where $N_\sigma(t)$ has $\mathrm{Binomial}(n,t^{k-1})$ distribution, using Markov's inequality, we have that
\[
(3.28)\qquad Q_1(t)\le\sum_{\sigma\in K_{k-2}(n)}\mathbb{P}\big(\mathrm{lk}_\sigma(X(n,t)) \text{ is not } m\text{-connected}\big),
\]
and to bound this we argue as in [10, Theorem 4.3]. Thus, in analogy with STEP 3, this is a computation about an Erdős-Rényi random graph similar to $G(nt^{k-1},t)$. Since we are well above the connectivity regime for $G(nt^{k-1},t)$, with very high probability $G(nt^{k-1},t)$ is $m$-connected for any $m$. Thus, we expect $Q_1(t)$ to decay sufficiently fast. We now formalize this.

Fix $\sigma\in K_{k-2}(n)$, and set $n(t)=nt^{k-1}$ and $N=N_\sigma(t)$. Again using Bernstein's inequality as in the proof of (3.18), we can show that
\[
\sum_{n\ge1}\int_{t_c}^1 n^{k-1}t^{\binom{k-1}{2}}\,\mathbb{P}\big(|N-n(t)|\ge n(t)^{3/5}\big)\,dt<\infty.
\]
Set $\underline n(t):=n(t)-n(t)^{3/5}$ and $\overline n(t):=n(t)+n(t)^{3/5}$. Thus, to show summability of $Q_1(t)$, it suffices to show that
\[
(3.29)\qquad \sum_{n\ge1}\int_{t_c}^1 n^{k-1}t^{\binom{k-1}{2}}\,\mathbb{P}\big(\{\mathrm{lk}_\sigma(X(n,t)) \text{ is not } m\text{-connected}\}\cap\{N\in[\underline n(t),\overline n(t)]\}\big)\,dt<\infty.
\]

For a graph $G$ on $N$ vertices (say with $N>2m$) to be not $m$-connected, one of the following two events has to happen: (i) the graph has a vertex of degree at most $m$, or (ii) there exist disjoint subsets of vertices $S,T$ such that $|S|<m+1$, $m-|S|+2\le|T|\le\frac{N-|S|}{2}$, and $T$ is a component of $G-S$. In (ii), the assumption on the cardinality of $T$ comes from the fact that if (i) does not happen and $G$ is not $m$-connected, then $G-S$ has two components, and so one of them must have size at most $\frac{N-|S|}{2}$. Also, since $T$ is a component of $G-S$, the vertices in $T$ have edges only to $S$ or within $T$. Since $|S|<m+1$, a vertex in $T$ must have at least $m+1-|S|$ neighbours in $T$, and this gives the lower bound on $|T|$.

Thus, abbreviating notation, we have that
\[
(3.30)\qquad \{\mathrm{lk}_\sigma(X(n,t)) \text{ is not } m\text{-connected}\}\subset\{\exists\,\deg\le m\}\cup\{\exists\,S,T\}.
\]
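This decomposition can be verified by brute force on small graphs. The following sketch is ours (the graph size, $m$ and the edge probability are arbitrary); it checks that every sampled graph that is not $m$-connected satisfies (i) or (ii):

```python
import random
from itertools import combinations

# Brute-force check (ours; N, m and the edge probability are arbitrary): every
# sampled graph that is not m-connected has (i) a vertex of degree at most m,
# or (ii) disjoint S, T with |S| <= m, m - |S| + 2 <= |T| <= (N - |S|)/2 and
# T a component of G - S.
def components(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                seen.add(u)
                stack.extend(adj[u] - comp)
        comps.append(comp)
    return comps

def decomposition_holds(N, m, edges):
    V = frozenset(range(N))
    deg = {v: 0 for v in V}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    subsets = [S for r in range(m + 1) for S in combinations(V, r)]
    if all(len(components(V - set(S), edges)) == 1 for S in subsets):
        return True  # m-connected: nothing to verify
    if min(deg.values()) <= m:
        return True  # case (i)
    for S in subsets:  # case (ii)
        for T in components(V - set(S), edges):
            if m - len(S) + 2 <= len(T) <= (N - len(S)) / 2:
                return True
    return False

rng = random.Random(1)
N, m = 8, 2
ok = all(decomposition_holds(
             N, m, [e for e in combinations(range(N), 2) if rng.random() < 0.35])
         for _ in range(200))
print(ok)  # the decomposition held for every sampled graph
```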

We will first bound the second term. Now, by Markov's inequality as well as a spanning-tree argument as in the bound on $T_1(t)$ in STEP 3 (see also [10, Theorem 4.3]), and conditioning on $N=N_\sigma(t)$, we have that
\begin{align*}
\mathbb{P}\big(\exists\,S,T\mid N\big)&\le\sum_{j=1}^{m}\sum_{l=\max\{2,m-j+2\}}^{\frac{N-j}{2}}\binom{N}{j}\binom{N}{l}l^{l-2}t^{l-1}t^{jl}(1-t)^{l(N-j-l)}\\
&\le\sum_{j=1}^{m}\sum_{l=\max\{2,m-j+2\}}^{\frac{N-j}{2}}\Big(\frac{Ne}{j}\Big)^j t^{-1}l^{-2}\big(Net\,e^{-(N-l-j)t}\big)^l\qquad\text{(using } j!\ge(j/e)^j,\ 1-t\le e^{-t})\\
&\le\sum_{j=1}^{m}\sum_{l=\max\{2,m-j+2\}}^{\frac{N-j}{2}}\Big(\frac{Ne}{j}\Big)^j t^{-1}l^{-2}\big(Net\,e^{-(N-j)t/2}\big)^l\qquad\text{(since } j+l\le(N+j)/2).
\end{align*}
Thus, we can derive that for $n$ large, there is a constant $C$ (depending on $m$) such that
\[
\mathbb{P}\big(\exists\,S,T\mid N\in[\underline n(t),\overline n(t)]\big)\le Ct^{-1}\overline n(t)^{2m}\sum_{l=2}^{\frac{\underline n(t)-1}{2}}l^{-2}\big(e\overline n(t)t\,e^{-(\underline n(t)-m)t/2}\big)^l.
\]
Since $t\ge t_c$, we have that $(\underline n(t)-m+1)t\ge\frac{n^{1/(k+1)}}{2}$ for large $n$, and so, using $m,k\ge1$, we obtain that
\[
\mathbb{P}\big(\exists\,S,T\mid N\in[\underline n(t),\overline n(t)]\big)\le Ce^{-\frac{n^{1/(k+1)}}{4}}n^{2(m-1)}\sum_{l=2}^{\frac{\underline n(t)-1}{2}}l^{-2}\big(en\,e^{-\frac{n^{1/(k+1)}}{8}}\big)^l.
\]
Because of the exponentially fast decaying term, one can show that
\[
(3.31)\qquad \sum_{n\ge1}\int_{t_c}^1 n^{k-1}t^{\binom{k-1}{2}}\,\mathbb{P}\big(\{\exists\,S,T\}\cap\{N\in[\underline n(t),\overline n(t)]\}\big)\,dt<\infty.
\]

We will now bound the probability of the first term. Given $N$, the degree of a vertex in $\mathrm{lk}_\sigma(X(n,t))$ has $\mathrm{Binomial}(N,t)$ distribution, and so
\[
\mathbb{P}\big(\exists\,\deg\le m\mid N\big)\le N\sum_{j=0}^{m}\binom{N}{j}t^j(1-t)^{N-j}.
\]
Now, using the bounds on $N$ as above, we derive that for a constant $C$ (depending on $m$),
\begin{align*}
\mathbb{P}\big(\exists\,\deg\le m\mid N\in[\underline n(t),\overline n(t)]\big)&\le C\sum_{j=0}^{m}\overline n(t)^{j+1}t^j e^{-(\underline n(t)-j)t}\\
&\le Cne^{-\frac{n^{1/(k+1)}}{2}}.
\end{align*}
Thus, we can immediately obtain that
\[
(3.32)\qquad \sum_{n\ge1}\int_{t_c}^1 n^{k-1}t^{\binom{k-1}{2}}\,\mathbb{P}\big(\{\exists\,\deg\le m\}\cap\{N\in[\underline n(t),\overline n(t)]\}\big)\,dt<\infty.
\]

Now, from (3.29), (3.30), (3.31) and (3.32), we have that
\[
(3.33)\qquad \sum_{n\ge1}\int_{t_c}^1 Q_1(t)\,dt<\infty,
\]
and hence, combining (3.33) with (3.24), (3.25) and (3.27), we obtain that
\[
(3.34)\qquad \sum_{n\ge1}\int_{t_c}^1 Q(t)\,dt<\infty.
\]
Thus the proof of Theorem 3.3 is complete by combining (3.11), (3.12), (3.14), (3.19), (3.23) and (3.34). □
Remark 3.7. If we wanted to prove Theorem 3.3 only for $t\gg t_c$, then $N_k^*(t)=0$ with high probability, and this yields simplifications to STEPS 2, 3 and 4 in the above proof and makes STEPS 5 and 6 redundant. Furthermore, though a weaker bound than that for $Q_1(t)$ is claimed in the proof of [9, Theorem 1.3], that proof seems to assume that $\mathrm{lk}_\sigma(X'(n,t))$ is distributed as an Erdős-Rényi random graph (first paragraph of page 114 therein), which may not be true. As shown in our STEP 6, this can be done via $m$-connectivity results for Erdős-Rényi random graphs, but it does need additional work.
3.3. Proof of Main theorems - Theorems 1.1, 1.2 and 1.3.

Proof. (Proof of Theorem 1.1) The proof follows from Theorems 3.2 and 3.3, and Slutsky's lemma. □


Proof. (Proof of Theorem 1.2) Observe that
\begin{align*}
\mathbb{P}(T_{n,k}\ne T'_{n,k})&\le\mathbb{P}(T_{n,k}\le t_c)+\mathbb{P}(T'_{n,k}\le t_c)+\mathbb{P}(T_{n,k}\ne T'_{n,k},\,T_{n,k},T'_{n,k}\ge t_c)\\
&\le\mathbb{P}(T_{n,k}\le t_c)+\mathbb{P}(T'_{n,k}\le t_c)+\mathbb{P}\Big(\bigcup_{t_c\le t\le1}\{\beta_k(X(n,t))\ne N_k(X(n,t))\}\Big),
\end{align*}
where the last inequality follows because if $T_{n,k}\ne T'_{n,k}$ when both are larger than $t_c$, then there exists a $t\ge t_c$ such that $\beta_k(X(n,t))>0=N_k(X(n,t))$ or $\beta_k(X(n,t))=0<N_k(X(n,t))$. Now, letting $n\to\infty$ and using Theorems 3.3, 1.1 and 3.2, we obtain that
\[
\lim_{n\to\infty}\mathbb{P}(T_{n,k}\ne T'_{n,k})\le2e^{-\mu(k,c)}.
\]
Now the proof is complete by letting $c\to-\infty$ and noting that $\mu(k,c)\to\infty$. □

It is possible to extend the above argument to prove the following: for $m\ge0$, let $T_{n,k}(m)=\inf\{t:\beta_k(X(n,s))\le m,\ \forall s\ge t\}$, and similarly define $T'_{n,k}(m)$ with respect to $N_k(X(n,t))$. Then we have that
\[
\lim_{n\to\infty}\mathbb{P}\big(T_{n,k}(m)\ne T'_{n,k}(m)\big)=0.
\]
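The hitting-time phenomenon behind Theorem 1.2 already appears one dimension down, for connectivity of the weighted Erdős-Rényi graph process: the time at which the graph becomes connected typically equals the time at which its last isolated vertex disappears. A small simulation sketch of this classical analogue (ours, illustrative only; it is not the $k\ge1$ setting of the theorem):

```python
import random

# Simulation sketch (ours, illustrative only) of the classical one-dimensional
# analogue: in the Erdos-Renyi graph process with i.i.d. Uniform(0,1) edge
# weights, the time the graph becomes connected typically coincides with the
# time the last isolated vertex disappears.
def hitting_times(n, rng):
    edges = sorted((rng.random(), u, v) for u in range(n) for v in range(u + 1, n))
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    deg = [0] * n
    comps, isolated = n, n
    t_conn = t_isol = None
    for w, u, v in edges:  # add edges in increasing weight order
        for x in (u, v):
            deg[x] += 1
            if deg[x] == 1:
                isolated -= 1
                if isolated == 0:
                    t_isol = w
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
            if comps == 1 and t_conn is None:
                t_conn = w
    return t_conn, t_isol

rng = random.Random(42)
agree = sum(t1 == t2 for t1, t2 in (hitting_times(200, rng) for _ in range(50)))
print(agree, "out of 50 runs agreed")  # agreement holds in the vast majority
```

Here the two hitting times agree exactly when the edge completing connectivity is the one removing the last isolated vertex, mirroring the statement that, with high probability, $T_{n,k}=T'_{n,k}$.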

Proof. (Proof of Theorem 1.3) Again, set $t_c=t_c(1,n)$. Using Theorem 2.4, we derive that
\begin{align*}
\mathbb{P}\Big(\bigcup_{t\ge t_c}\{\Pi_1(X(n,t)) \text{ does not have property (T)}\}\Big)&\le\mathbb{P}\Big(\bigcup_{t\ge t_c}\Big(\{X(n,t) \text{ has an isolated 2-face}\}\\
&\quad\cup\bigcup_{i=1}^n\{\mathrm{lk}_i(X(t)) \text{ is not connected}\}\cup\bigcup_{i=1}^n\Big\{\lambda_2\big(\mathrm{lk}_i(X(t))\big)<\frac{1}{2}\Big\}\Big)\Big).
\end{align*}
By a union bound, the above can be split into a sum of three terms, where the first can be bounded above by $1-e^{-\mu(1,c)}$, which follows from Theorem 3.2. In STEPS 2 and 3 of the proof of Theorem 3.3, we respectively showed that the probability that, for some $t\ge t_c(k,n)$, the link of a $(k-1)$-face in $X(n,t)$ is not connected, or has spectral gap $\lambda_2$ less than $\frac{k}{k+1}$, is asymptotically zero. Applying these bounds with $k=1$, we have that the second and third terms above vanish. □

3.4. A probabilistic lemma and proof of Lemma 3.6. Recall the quantities $R_{k-1}(t)$ and $R^*_{k-1}(t)$ from (3.9) and (3.8) respectively. Here, we show that $\mathbb{E}[R^*_{k-1}(t_c)]$ goes to $0$.

Lemma 3.8. $\lim_{n\to\infty}\mathbb{E}[R^*_{k-1}(t_c)]=0$.

Proof. Define $\hat R_{k-1}(t):=R^*_{k-1}(t)-R_{k-1}(t)$. It was shown in [9, Section 7.2] that $\lim_{n\to\infty}\mathbb{E}[R_{k-1}(t_c)]=0$, and so, to complete our proof, we will show convergence of $\mathbb{E}[\hat R_{k-1}(t_c)]$ to $0$ by computations similar to those in Proposition 3.1.
\begin{align*}
\mathbb{E}[\hat R_{k-1}(t_c)]&\le\sum_{l=0}^{m}\mathbb{E}\Big[\frac{1}{k!}\sum^{\ne}_{i_1,\ldots,i_k}\mathbb{1}\big[i_1,\ldots,i_k \text{ do not form a } k\text{-clique in } G(n,t_c)\big]\\
&\qquad\times\mathbb{1}\big[i_1,\ldots,i_k \text{ form a } k\text{-clique in } G(n,s) \text{ which is a subclique of exactly } l\ (k+1)\text{-cliques in } G(n,s) \text{ for some } s>t_c\big]\Big]\\
&=\sum_{l=0}^{m}\binom{n}{k}\int_{t_c}^1\binom{k}{2}x^{\binom{k}{2}-1}\binom{n-k}{l}x^{kl}(1-x^k)^{n-k-l}\,dx\\
&\le\sum_{l=0}^{m}\int_{t_c}^1 n^{k+l}\binom{k}{2}x^{\binom{k}{2}-1}x^{kl}(1-x^k)^{n-k-l}\,dx\\
&\le\sum_{l=0}^{m}\int_{t_c}^1 n^{k+l}\binom{k}{2}x^{\binom{k}{2}-1}x^{kl}e^{-x^k(n-k-l)}\,dx\\
&\le\sum_{l=0}^{m}\int_{t_c}^1 n^{k+l}\binom{k}{2}x^{\binom{k}{2}-1}x^{kl}e^{-t_c^k(n-k-l)}\,dx\\
&\le C\sum_{l=0}^{m}\int_{t_c}^1 n^{k+l}x^{\binom{k}{2}-1}x^{kl}e^{-\left(\left(\frac{k}{2}+1\right)\log n\right)^{\frac{k}{k+1}}n^{\frac{1}{k+1}}}\,dx\\
&\le C(m+1)n^{k+m}e^{-\left(\left(\frac{k}{2}+1\right)\log n\right)^{\frac{k}{k+1}}n^{\frac{1}{k+1}}}.
\end{align*}
As the exponential term dominates, the above tends to $0$ as $n\to\infty$. □
The above computation is simpler than that in Proposition 3.1 since we are considering
(k − 1)-faces here.
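The density $\binom{k}{2}x^{\binom{k}{2}-1}$ appearing in the computation above is that of the maximum of the $\binom{k}{2}$ i.i.d. Uniform$(0,1)$ edge weights of the clique, whose CDF is $x^{\binom{k}{2}}$. A quick Monte Carlo check of its mean (ours; the choice $k=4$ is arbitrary):

```python
import math
import random

# Monte Carlo check (ours; k = 4 arbitrary): a k-clique appears at the maximum
# of its C(k,2) i.i.d. Uniform(0,1) edge weights, which has CDF x^{C(k,2)},
# density C(k,2) x^{C(k,2)-1} and mean C(k,2) / (C(k,2) + 1).
k = 4
j = math.comb(k, 2)  # 6 edge weights for a 4-clique
rng = random.Random(7)
trials = 200_000
mean = sum(max(rng.random() for _ in range(j)) for _ in range(trials)) / trials
print(round(mean, 3))  # should be close to 6/7 = 0.857...
```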
Proof. (Proof of Lemma 3.6) As $q(\alpha,m)=\frac{(\alpha+1)\log m}{m}$ is decreasing in $m$ for $m>1$, and $\underline\mu(t)$ increases with $t$, it is enough to show that $t_c>q(\alpha,\underline\mu(t_c))$. Now,
\begin{align*}
t_c-q(\alpha,\underline\mu(t_c))&=t_c-\frac{(\alpha+1)\log\big(nt_c^k-(nt_c^k)^{\frac35}\big)}{nt_c^k-(nt_c^k)^{\frac35}}\ge t_c-\frac{(\alpha+1)\log(nt_c^k)}{nt_c^k-(nt_c^k)^{\frac35}}\\
&=\frac{1}{nt_c^k-(nt_c^k)^{\frac35}}\Big(nt_c^{k+1}-n^{\frac35}t_c^{\frac{3k}{5}+1}-(\alpha+1)\log(nt_c^k)\Big).
\end{align*}
Note that the denominator in the above is positive. Plugging in the value of $t_c$, we get the numerator as
\begin{align*}
&\Big(\big(\tfrac{k}{2}+1\big)\log n+\tfrac{k}{2}\log\log n+c\Big)-n^{-\frac{2}{5(k+1)}}\Big(\big(\tfrac{k}{2}+1\big)\log n+\tfrac{k}{2}\log\log n+c\Big)^{\frac{3k+5}{5(k+1)}}\\
&\qquad-\frac{\alpha+1}{k+1}\log n-\frac{k(\alpha+1)}{k+1}\log\Big(\big(\tfrac{k}{2}+1\big)\log n+\tfrac{k}{2}\log\log n+c\Big)\\
&=\Big(\frac{k}{2}+1-\frac{\alpha+1}{k+1}\Big)\log n+O(\log\log n).
\end{align*}
Plugging in $\alpha=\frac{k(k+3)}{2}-\rho$ with $\rho$ small, the coefficient of $\log n$ in the above is positive. This means that for big enough $n$, we have the required inequality. □
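Two ingredients of this proof are easy to check numerically; the sketch below is ours, with arbitrary sample values of $\alpha$, $k$ and $\rho$. It checks the monotonicity of $q(\alpha,m)$ in $m$ and the identity $\frac{k}{2}+1-\frac{\alpha+1}{k+1}=\frac{\rho}{k+1}$ for the coefficient of $\log n$:

```python
import math

# Numeric check (ours; alpha, k, rho below are arbitrary sample values):
# q(alpha, m) = (alpha + 1) * log(m) / m is decreasing in m for m >= 3, and the
# coefficient of log n computed in the proof, k/2 + 1 - (alpha + 1)/(k + 1)
# with alpha = k(k + 3)/2 - rho, equals rho/(k + 1) > 0.
def q(alpha, m):
    return (alpha + 1) * math.log(m) / m

decreasing = all(q(2.0, m) > q(2.0, m + 1) for m in range(3, 10_000))
k, rho = 3, 0.01
alpha = k * (k + 3) / 2 - rho
coeff = k / 2 + 1 - (alpha + 1) / (k + 1)
print(decreasing, abs(coeff - rho / (k + 1)) < 1e-12)
```

The coefficient being exactly $\rho/(k+1)$ explains why any fixed $\rho>0$ suffices, at the cost of a larger threshold on $n$.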

Acknowledgements
DY’s research was funded by CPDA from the Indian Statistical Institute, DST-INSPIRE
Faculty award and SERB-MATRICS grant. AR was partially supported by NSF grant DMS-
1906414. This project originally started as part of AR's M.Math project in 2017 at the Indian Statistical Institute, Bangalore. AR would also like to thank ISI Bangalore for the opportunity to visit and work on this problem.

References
[1] Werner Ballmann and Jacek Świątkowski. On L2-cohomology and property (T) for automorphism groups of polyhedral cell complexes. Geometric and Functional Analysis, 7:615–645, 1997.
[2] Omer Bobrowski. Homological connectivity in random Čech complexes. Probability Theory and Related Fields, 183(3-4):715–788, 2022.
[3] Omer Bobrowski and Dmitri Krioukov. Random simplicial complexes: models and phenomena. In Higher-Order Systems, pages 59–96. Springer, 2022.
[4] Béla Bollobás and Andrew Thomason. Random graphs of small order. In Michał Karoński and Andrzej Ruciński, editors, Random Graphs '83, volume 118 of North-Holland Mathematics Studies, pages 47–97. North-Holland, 1985.
[5] Armindo E. Costa and Michael Farber. Large random simplicial complexes, I. Journal of Topology and Analysis, 8(03):399–429, 2016.
[6] Armindo E. Costa and Michael Farber. Large random simplicial complexes, II: the fundamental group. Journal of Topology and Analysis, 9(03):441–483, 2017.
[7] Armindo E. Costa and Michael Farber. Large random simplicial complexes, III: the critical dimension. Journal of Knot Theory and Its Ramifications, 26(02):1740010, 2017.
[8] Herbert Edelsbrunner and John L. Harer. Computational Topology: An Introduction. American Mathematical Society, 2022.
[9] Christopher F. Fowler. Homology of multi-parameter random simplicial complexes. Discrete & Computational Geometry, 62(1):87–127, 2019.
[10] Alan Frieze and Michał Karoński. Introduction to Random Graphs. Cambridge University Press, 2016.
[11] Howard Garland. p-adic curvature and the cohomology of discrete subgroups of p-adic groups. Annals of Mathematics, 97(3):375–423, 1973.
[12] Alan J. Hoffman and Helmut W. Wielandt. The variation of the spectrum of a normal matrix. In Selected Papers of Alan J. Hoffman: With Commentary, pages 118–120. World Scientific, 2003.
[13] Christopher Hoffman, Matthew Kahle, and Elliot Paquette. Spectral gaps of random graphs and applications. International Mathematics Research Notices, 2021(11):8353–8404, 2021.
[14] Srikanth K. Iyer and D. Yogeshwaran. Thresholds for vanishing of 'isolated' faces in random Čech and Vietoris–Rips complexes. Annales de l'Institut Henri Poincaré Probabilités et Statistiques, 56(3):1869–1897, 2020.
[15] Matthew Kahle. Topology of random clique complexes. Discrete Mathematics, 309(6):1658–1671, 2009.
[16] Matthew Kahle. Sharp vanishing thresholds for cohomology of random flag complexes. Annals of Mathematics, pages 1085–1107, 2014.
[17] Matthew Kahle. Topology of random simplicial complexes: a survey. AMS Contemp. Math., 620:201–222, 2014.
[18] Matthew Kahle and Boris Pittel. Inside the critical window for cohomology of random k-complexes. Random Structures & Algorithms, 48(1):102–124, 2016.
[19] Nathan Linial and Roy Meshulam. Homological connectivity of random 2-complexes. Combinatorica, 26(4):475–487, 2006.
[20] Roy Meshulam and Nathan Wallach. Homological connectivity of random k-dimensional complexes. Random Structures & Algorithms, 34(3):408–417, 2009.
[21] James R. Munkres. Topology. Second edition. Prentice Hall, Upper Saddle River, NJ, 2000.
[22] Takashi Owada. Limit theorems for Betti numbers of extreme sample clouds with application to persistence barcodes. The Annals of Applied Probability, 28(5):2814–2854, 2018.
[23] Mathew D. Penrose. The longest edge of the random minimal spanning tree. The Annals of Applied Probability, 7(2):340–361, 1997.
[24] Sidney I. Resnick. Extreme Values, Regular Variation, and Point Processes, volume 4. Springer Science & Business Media, 2008.
[25] Primoz Skraba, Gugan Thoppe, and D. Yogeshwaran. Randomly weighted d-complexes: minimal spanning acycles and persistence diagrams. Electronic Journal of Combinatorics, 27(2), 2020.
[26] Remco van der Hofstad. Random Graphs and Complex Networks, volume 43. Cambridge University Press, 2016.
[27] Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science, volume 47. Cambridge University Press, 2018.
[28] Andrzej Żuk. Property (T) and Kazhdan constants for discrete groups. Geometric & Functional Analysis, 13:643–670, 2003.
School of Mathematics, Georgia Tech, Atlanta, USA.
Email address: [email protected]

Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, Bangalore.


Email address: [email protected]
