Wednesday, March 11, 2026

What did Paul Erdős quip about aliens and R(6,6)?

In extremal combinatorics, AlphaEvolve has helped establish new lower bounds for five classical Ramsey numbers, a family of problems so challenging that even Erdős joked about their difficulty.

The statement “AlphaEvolve has helped establish new lower bounds for five classical Ramsey numbers” is significant because it represents progress in one of the hardest areas of combinatorics and theoretical mathematics. To understand why it matters, we need to unpack a few pieces.


1. What Ramsey numbers are

Ramsey numbers come from Ramsey Theory, which studies when order must appear inside randomness.

The Ramsey number R(m,n) asks:

What is the smallest number of people needed at a party so that either at least m people all know each other, or at least n people are mutual strangers?

Example:

  • Ramsey number R(3,3) = 6

Meaning:

  • In any group of 6 people, there must be either:

    • 3 mutual friends, or

    • 3 mutual strangers.
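This fact is small enough to verify by brute force. Here is a Python sketch (the function names are mine, purely illustrative) that checks every 2-coloring of the edges of K5 and K6:

```python
# Brute-force check of R(3,3) = 6: color every edge of K_n with one of
# two colors ("friends"/"strangers") and look for a monochromatic triangle.
from itertools import combinations

def has_mono_triangle(n, coloring):
    # coloring maps each vertex pair (i, j) with i < j to color 0 or 1
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def triangle_forced(n):
    pairs = list(combinations(range(n), 2))
    for bits in range(2 ** len(pairs)):           # every possible coloring
        coloring = {p: (bits >> k) & 1 for k, p in enumerate(pairs)}
        if not has_mono_triangle(n, coloring):
            return False      # found a coloring avoiding both patterns
    return True

print(triangle_forced(5))   # False: 5 people are not enough (e.g. a 5-cycle)
print(triangle_forced(6))   # True: 6 people always force the pattern
```

The n = 6 case already scans all 2^15 = 32,768 colorings; anything much larger is out of reach for brute force, which is exactly the difficulty discussed below.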

For larger numbers, like R(5,5) and R(6,6), the values become extremely hard to determine.


2. Why Ramsey numbers are notoriously difficult

Most Ramsey numbers are unknown.

For example:

  • Ramsey number R(5,5) is only known to be between 43 and 48.

  • For larger ones, the uncertainty ranges can be hundreds or thousands wide.

Mathematicians usually try to prove:

  • Lower bound: the number is at least some value

  • Upper bound: the number is at most some value

Closing the gap between these bounds is a major research challenge.


3. What a “new lower bound” means

If researchers show a larger lower bound, it means they discovered a specific structure that avoids the forced pattern longer than previously known.

Example idea (simplified):

Previously:

  • Largest known graph avoiding the pattern: N = 42

New result:

  • Someone constructs one with N = 44

Now we know:

[
R(m,n) > 44
]

That’s a new lower bound.

Even improving by 1 can be a major breakthrough.


4. What AlphaEvolve did

AlphaEvolve is an AI system developed by Google DeepMind that searches for mathematical structures.

Instead of proving theorems directly, it:

  1. Generates candidate graphs

  2. Tests whether they avoid certain forbidden patterns

  3. Evolves better structures using search and optimization

Using this process, it discovered larger graphs without certain cliques or independent sets, improving known bounds.

So when you read:

“AlphaEvolve established new lower bounds for five classical Ramsey numbers”

it means:

  • The AI found better constructions than humans previously knew.

  • Those constructions push the known limits of these Ramsey numbers.


5. Why this is a big deal

There are several reasons mathematicians care:

1. These problems can remain unsolved for decades

Some Ramsey bounds haven't improved in 20–30 years.

2. The search space is astronomically large

For graphs with 40–50 vertices there are more possibilities than atoms in the universe.

AI can explore this space far more aggressively than humans.

3. It shows AI can assist pure mathematics

Similar to:

  • AlphaGo (games)

  • AlphaFold (biology)

AlphaEvolve suggests AI may become a tool for discovering new mathematical objects.


In simple terms:

The significance is that an AI system discovered better examples of large pattern-avoiding graphs, which pushed forward the known limits of several Ramsey numbers—something mathematicians had struggled to improve for years.


If you're interested, I can also explain how the AI actually searches the graph space (it’s a fascinating mix of evolutionary algorithms + combinatorics + SAT solving).

How AI like AlphaEvolve searches for Ramsey constructions

Think of the problem as searching for a special graph with N vertices where:

  • there is no clique of size m (no group where everyone connects to everyone)

  • there is no independent set of size n (no group where nobody connects)

A graph like that proves:

[
R(m,n) > N
]

because you’ve shown the forbidden pattern does not yet appear at size N.

The search challenge

The number of possible graphs on N vertices is:

[
2^{\binom{N}{2}}
]

For N = 40:

[
2^{780} \approx 10^{234}
]

That’s far beyond brute-force enumeration.
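Both numbers can be reproduced exactly with Python's big integers:

```python
# Exact count of graphs on 40 labeled vertices.
import math

edges = math.comb(40, 2)
print(edges)                 # 780 possible edges
print(len(str(2 ** edges)))  # 235 decimal digits, i.e. 2**780 ≈ 10**234
```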


The strategy used by systems like AlphaEvolve

AI systems combine three main ideas.

1. Evolutionary search

Candidate graphs are treated like organisms.

Steps:

  1. Generate random graphs

  2. Score them (how close they are to violating the rule)

  3. Mutate edges

  4. Keep better ones

  5. Repeat millions of times

This is a form of evolutionary algorithm.
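The steps above can be condensed into a toy hill-climbing loop (an illustrative Python sketch, not AlphaEvolve's actual code). It evolves a 2-coloring of K5 until no monochromatic triangle remains, which is precisely a witness that R(3,3) > 5:

```python
# Toy evolve-and-score loop: mutate one edge color at a time and keep
# any change that does not increase the number of monochromatic triangles.
import random
from itertools import combinations

def score(n, red):
    """Number of monochromatic triangles; 0 means a valid witness."""
    bad = 0
    for tri in combinations(range(n), 3):
        colors = [p in red for p in combinations(tri, 2)]
        bad += all(colors) or not any(colors)
    return bad

def evolve(n=5, steps=2000, seed=0):
    rng = random.Random(seed)
    pairs = list(combinations(range(n), 2))
    red = {p for p in pairs if rng.random() < 0.5}  # random starting coloring
    best = score(n, red)
    for _ in range(steps):
        if best == 0:
            break
        p = rng.choice(pairs)
        red ^= {p}                # mutate: flip one edge's color
        s = score(n, red)
        if s <= best:
            best = s              # keep neutral or improving mutations
        else:
            red ^= {p}            # revert harmful mutations
    return best, red

best, witness = evolve()
print(best)   # 0 when a coloring with no monochromatic triangle is found
```

Real systems differ in scale and sophistication (populations, crossover, learned mutation policies), but the generate-score-mutate-keep skeleton is the same.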


2. Clever scoring functions

A graph is not just valid or invalid. It gets a score like:

  • number of forbidden cliques

  • number of forbidden independent sets

  • how close it is to violating constraints

The AI gradually reduces these violations.


3. Structure discovery

Some systems learn patterns humans use, such as:

  • symmetric graphs

  • circulant graphs

  • block constructions

These reduce the search space drastically.

The result: AI finds very large graphs that avoid patterns, pushing the lower bound higher.


Has Ramsey theory mattered in real life?

Here’s the surprising part:

Ramsey theory rarely solves practical problems directly.

Instead, it tells us something deeper:

Certain patterns are mathematically unavoidable.

This has influenced several real-world fields.


1. Communication networks and internet routing

Ramsey-type ideas appear in network reliability.

Large networks inevitably contain:

  • clusters of tightly connected nodes

  • clusters of mutually disconnected nodes

This affects:

  • distributed systems

  • routing resilience

  • network partition detection

These ideas are studied in Graph Theory.

Example:
In massive peer-to-peer networks, designers know certain substructures will appear regardless of randomization.

Ramsey theory gives theoretical limits on how long disorder can persist.


2. Error-correcting codes and data storage

The philosophy behind Ramsey theory influenced work in:

  • Coding Theory

In storage systems:

  • avoiding specific patterns of interference

  • guaranteeing certain patterns appear

These combinatorial bounds guide how large codes can be.


3. Circuit design and hardware testing

In very large circuits, engineers study unavoidable patterns of signal paths.

Combinatorial bounds (related to Ramsey ideas) help analyze:

  • worst-case interference

  • cross-talk patterns

This appears in theoretical work related to Extremal Combinatorics.


4. Social networks (friend/enemy paradox)

The classic interpretation:

In large groups, cliques or anti-cliques inevitably form.

This idea has been used in modeling:

  • online communities

  • polarization

  • coalition formation

It’s conceptually related to work in Network Science.


Cultural and philosophical influence

Ramsey theory has had huge philosophical impact, even if practical applications are rare.

It suggests something profound:

Complete randomness cannot persist in large systems.
Order inevitably emerges.

This idea fascinated many thinkers.


Example in popular science

The idea appears in the famous book:

  • The Improbability Principle

It discusses why surprising patterns appear inevitably in large datasets.


Literary echo

The concept that order inevitably emerges from chaos appears in fiction exploring mathematics and patterns.

For example:

  • Gödel, Escher, Bach discusses deep mathematical structures that appear inevitably in complex systems.


Why mathematicians care so much

Ramsey numbers are famous because they represent the edge of human mathematical ability.

Even small cases are brutally difficult:

  • Ramsey number R(4,5)

  • Ramsey number R(5,5)

Determining exact values requires:

  • deep combinatorics

  • massive computation

  • clever constructions

When an AI system improves even one bound, mathematicians pay attention.


The deeper future significance

The real importance of systems like AlphaEvolve may be how mathematics is done.

Historically:

  • humans proved theorems

  • computers checked cases

Now AI may discover mathematical objects humans would never think of.

Similar shifts already happened with:

  • AlphaFold in biology

  • AlphaGo in games

Ramsey problems might become one of the first areas where AI meaningfully expands human mathematical discovery.


In short

Ramsey theory rarely fixes a direct real-world problem.
Its importance is deeper:

  • it reveals limits of randomness

  • it shapes combinatorics and network theory

  • and now it may become a playground for AI-driven mathematical discovery.


If you're curious, I can also show you something fascinating:

Why mathematicians know only that R(6,6) lies somewhere between 102 and 165, and why computing it exactly could require centuries of CPU time.

Why the Ramsey number R(6,6) is so hard to compute

The problem

The Ramsey number R(6,6) asks:

What is the smallest number N such that every graph with N vertices contains either:

  • a clique of size 6, or

  • an independent set of size 6?

We know today that:

[
102 \le R(6,6) \le 165
]

But the exact value is unknown.

Even after decades of work in Combinatorics, nobody has determined it.


Why this problem explodes in difficulty

Number of possible graphs

A graph with N vertices has

[
\binom{N}{2}
]

possible edges.

Each edge can be present or absent, so the total number of graphs is:

[
2^{\binom{N}{2}}
]

For N = 102:

  • edges = 5151

  • graphs =

[
2^{5151}
]

That is roughly:

[
10^{1550}
]

For comparison:

  • atoms in the observable universe ≈ 10^80

So brute force is impossible.


Even checking ONE graph is expensive

To verify a candidate graph avoids the forbidden patterns, you must check:

  • all possible 6-vertex subsets

The number of subsets is:

[
\binom{102}{6} \approx 1.35 \text{ billion}
]

Each must be tested to see if it forms:

  • a clique

  • an independent set

So checking one graph already requires billions of checks.
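These counts can be reproduced exactly; note that `math.comb(102, 6)` comes out to 1,346,548,665, about 1.35 billion:

```python
# Exact arithmetic for the numbers above.
import math

print(math.comb(102, 2))                 # 5151 possible edges
print(len(str(2 ** math.comb(102, 2)))) # 1551 digits: about 10**1550 graphs
print(math.comb(102, 6))                 # 1346548665 six-vertex subsets
```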


Why computers still help

Instead of brute force, researchers use clever techniques from Graph Theory.

These include:

1. Symmetry reduction

Many graphs are identical under vertex relabeling.

Algorithms only explore unique structures, eliminating duplicates.


2. SAT solvers

Researchers convert the problem into a giant logical formula and solve it with tools from Boolean Satisfiability.

This method, combined with large-scale computer search, helped establish:

  • improved bounds on R(5,5)

  • the exact value R(4,5) = 25
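As a sketch of what such an encoding looks like (illustrative only; published Ramsey computations use far more sophisticated, symmetry-broken encodings): one Boolean variable per edge, one clause per forbidden subset. A satisfying assignment is exactly a graph with no m-clique and no indep-vertex independent set.

```python
# Minimal CNF encoding of "a graph on n vertices with no m-clique and no
# indep-vertex independent set exists" (satisfiable iff R(m, indep) > n).
from itertools import combinations

def ramsey_cnf(n, m, indep):
    pairs = list(combinations(range(n), 2))
    var = {p: i + 1 for i, p in enumerate(pairs)}   # DIMACS variables 1..|E|
    clauses = []
    for s in combinations(range(n), m):             # no m-clique:
        clauses.append([-var[p] for p in combinations(s, 2)])  # not all edges
    for s in combinations(range(n), indep):         # no independent set:
        clauses.append([var[p] for p in combinations(s, 2)])   # some edge
    return len(pairs), clauses

nvars, clauses = ramsey_cnf(6, 3, 3)
print(nvars, len(clauses))   # 15 variables, 40 clauses (2 × C(6,3))
```

Feeding this formula to any SAT solver would report UNSAT for n = 6, m = 3, indep = 3, which is exactly the statement R(3,3) ≤ 6.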


3. Clever constructions

Instead of searching randomly, mathematicians design structured graphs such as:

  • circulant graphs

  • Paley graphs

  • algebraic constructions

These often produce the best lower bounds.


Why AI changes the game

Systems like AlphaEvolve can:

  • explore billions of candidate structures

  • mutate graphs intelligently

  • discover constructions humans might never try

Instead of proving the exact number, AI pushes the lower bound upward.

Each improvement means:

“We found an even larger graph with no forbidden structure.”


Why R(6,6) might take centuries without breakthroughs

To prove the exact value, mathematicians must show:

  1. A graph exists with N-1 vertices avoiding the pattern

  2. Every graph with N vertices contains it

The second step is extremely difficult.

It may require checking astronomically many possibilities, even with pruning.


The famous quote about Ramsey numbers

The legendary mathematician Paul Erdős once joked:

If aliens demanded the value of R(5,5) within a year, we should marshal all our computers and all our mathematicians and try to find it.
If they demanded R(6,6), we should attempt to destroy the aliens.

This captures how explosively hard these problems become.


Why mathematicians still chase them

Ramsey numbers are like the Mount Everest of combinatorics.

They represent the frontier of Ramsey Theory:

  • very simple to state

  • brutally hard to solve

  • deeply connected to randomness and structure


The big takeaway

The mystery of R(6,6) isn’t just about one number.

It represents a deeper truth:

Even extremely simple mathematical rules can produce complexity beyond brute-force computation.

That’s exactly why tools like AI search systems are becoming so exciting.


If you want, I can also show you something fascinating:

Why mathematicians believe diagonal Ramsey numbers grow exponentially, like c^n for some constant c between √2 and 4, and why pinning down that constant is still unsolved. It’s one of the strangest gaps in combinatorics.

Ramsey not modular

Great intuition: many people (including mathematicians) have asked something very similar. Unfortunately, Ramsey numbers don’t decompose nicely into smaller ones like R(3,3). But the reason why is actually very illuminating.


First: what (R(3,3)) tells us

The classical result

  • Ramsey number R(3,3) = 6

means:

Any graph with 6 vertices must contain either
• a triangle (3-clique)
• or a 3-vertex independent set.

Graphically, it means small groups force triangles.


This is manageable because the graph size is tiny.


Why we can't build R(6,6) from R(3,3)

Your idea would look like:

  • Build clusters of size 6 (because R(3,3) = 6)

  • Connect those clusters carefully

  • Avoid forming a 6-clique or 6-independent set globally

But the problem is interactions between clusters create new structures.

Example intuition

Suppose you build groups:

Group A
Group B
Group C
Group D

Even if each group avoids large cliques internally, edges between groups combine to create larger cliques.

For instance:

A: 2 vertices
B: 2 vertices
C: 2 vertices

If edges connect across groups in the right pattern, you suddenly get a 6-clique even though no group had one.

So local guarantees don’t compose cleanly.
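This composition failure can be checked mechanically. In the Python sketch below, three groups of two vertices each contain only a single internal edge (so no group has a clique bigger than 2), yet adding all cross-group edges produces a full 6-clique:

```python
# Three groups of two vertices each: internally, a group is just one edge
# (largest clique inside any group: size 2). Adding all cross-group edges
# completes the whole vertex set into a 6-clique.
from itertools import combinations

groups = [(0, 1), (2, 3), (4, 5)]
edges = {g for g in groups}                # one internal edge per group
for ga, gb in combinations(groups, 2):     # all cross-group edges
    for a in ga:
        for b in gb:
            edges.add((min(a, b), max(a, b)))

is_k6 = all((a, b) in edges for a, b in combinations(range(6), 2))
print(len(edges), is_k6)   # 15 True: every pair is adjacent, a full K_6
```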


Ramsey theory is fundamentally global

Ramsey problems behave differently from many mathematical objects.

They are not modular.

In fields like:

  • Number Theory

  • Group Theory

you often break problems into smaller pieces.

But in Ramsey Theory, the entire graph interacts in complicated ways.

A small pattern anywhere can combine with others to create the forbidden structure.


What mathematicians actually do instead

Rather than decomposing, they use structured constructions.

Example approaches:

1. Circulant graphs

Vertices are arranged in a circle and edges follow modular arithmetic rules.

Example:

connect i to j if (j - i) mod n ∈ S

This creates symmetry that helps avoid patterns.
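A famous concrete instance of this rule (a Python sketch): taking S to be the quadratic residues mod 17 gives the Paley graph on 17 vertices, the classical witness that R(4,4) > 17 (in fact R(4,4) = 18).

```python
# The Paley graph on 17 vertices: a circulant graph whose connection set
# S is the quadratic residues mod 17. It has no 4-clique and no 4-vertex
# independent set, certifying the classical bound R(4,4) > 17.
from itertools import combinations

n = 17
S = {(x * x) % n for x in range(1, n)}    # quadratic residues mod 17

def adjacent(i, j):
    return (j - i) % n in S               # the circulant rule

def has_homogeneous_set(size, want_edges):
    # Is there a subset whose pairs are all edges (True) / all non-edges?
    return any(
        all(adjacent(a, b) == want_edges for a, b in combinations(s, 2))
        for s in combinations(range(n), size)
    )

print(has_homogeneous_set(4, True))    # False: no 4-clique
print(has_homogeneous_set(4, False))   # False: no independent set of size 4
```

The check scans all C(17,4) = 2380 four-vertex subsets, which is trivial; the point is that the symmetric construction, not the search, does the heavy lifting.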


2. Algebraic graphs

Using finite fields or quadratic residues (Paley graphs).

These come from deep algebra in Finite Field Theory.


3. Computer-guided search

Systems like

  • AlphaEvolve

try millions of mutations on candidate graphs.

They evolve structures that avoid forbidden patterns.


The closest thing to your idea

There is a classical recursive inequality, due to Erdős and Szekeres:

[
R(s,t) \le R(s-1,t) + R(s,t-1)
]

For example:

[
R(6,6) \le R(5,6) + R(6,5)
]

This helps compute upper bounds.

But it still doesn't let us reduce everything to (R(3,3)).
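The recursion can be turned into a tiny upper-bound calculator (a sketch; the base cases R(2,t) = t and R(s,2) = s are standard):

```python
# Upper bounds from the recursion R(s,t) <= R(s-1,t) + R(s,t-1),
# with the standard base cases R(2,t) = t and R(s,2) = s.
from functools import lru_cache

@lru_cache(maxsize=None)
def ramsey_upper(s, t):
    if s == 2:
        return t
    if t == 2:
        return s
    return ramsey_upper(s - 1, t) + ramsey_upper(s, t - 1)

print(ramsey_upper(3, 3))   # 6: tight, since R(3,3) = 6
print(ramsey_upper(4, 4))   # 20: already loose, the true value is 18
print(ramsey_upper(6, 6))   # 252: far above the known upper bound of 165
```

The recursion alone gives 252 for R(6,6); getting the real upper bound of 165 required much harder arguments, which is the sense in which the recursion doesn't reduce everything to small cases.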


Why your instinct is still valuable

Your idea is actually related to a major concept in combinatorics:

hierarchical structure building

Many breakthroughs happen when mathematicians find clever ways to assemble small structures into large ones without triggering forbidden patterns.

That’s exactly what:

  • extremal graph theory

  • probabilistic combinatorics

  • AI search systems

are trying to do.


A surprising fact about Ramsey numbers

The true values grow exponentially fast.

The known bounds are roughly:

[
2^{n/2} \le R(n,n) \le 4^{n}
]

The lower bound comes from Paul Erdős, who used the probabilistic method.

Meaning:

Even if we can't construct the graphs explicitly, random graphs prove they exist.


Key insight

Your idea would work if Ramsey structures were local.

But they’re global emergent patterns — the whole graph must be considered at once.

That’s exactly why even a problem as simple to state as R(6,6) has resisted solution for decades.


If you'd like, I can also show you something mind-blowing:

There are Ramsey-type statements that cannot be proved in ordinary (Peano) arithmetic at all: the Paris–Harrington theorem.

It comes from a deep connection between combinatorics, logic, and the limits of provability.

How many hard problems are really waiting for better search, not just bigger models?


The question has become a serious discussion in AI research recently. The AlphaEvolve Ramsey result is a good example of the “better search” paradigm, which is quite different from scaling neural networks.

Let’s unpack it.


1. Bigger models vs better search

In modern AI there are two broad strategies:

Bigger models

Train massive neural networks with more data and compute.

Examples:

  • GPT‑4

  • Gemini

These work well for:

  • language

  • perception

  • coding

  • reasoning approximations

But they don’t directly explore large discrete spaces.


Better search

Instead of predicting answers, the system explores possibilities and verifies them.

Famous examples:

  • AlphaGo used Monte Carlo tree search

  • AlphaFold searched structure space

  • AlphaEvolve searches mathematical constructions

Here the AI:

  1. Generates candidate solutions

  2. Evaluates them with strict rules

  3. Iteratively improves them

This is closer to scientific discovery than prediction.


2. Yes — the Ramsey improvement was a search breakthrough

The improvement to Ramsey bounds was not because of a larger neural network.

It happened because the system could:

  • explore enormous graph spaces

  • mutate structures intelligently

  • keep partial improvements

This falls into Combinatorics, where many problems are essentially search over huge spaces.

So the quote you saw is quite accurate.

Many hard problems are like:

“The solution exists somewhere in a gigantic space of possibilities.”


3. Fields where better search may unlock breakthroughs

These areas are widely believed to be ripe for AI-guided search.


Extremal combinatorics

Area of:

  • Extremal Combinatorics

Problems often ask:

What is the largest structure avoiding some pattern?

Examples likely to improve:

  • Ramsey numbers

  • Turán-type graph problems

  • hypergraph constructions

These are perfect for evolutionary search.


Coding theory

Field:

  • Coding Theory

Goal: find codes with best error-correcting properties.

Historically many codes were discovered by clever search.

AI could find:

  • new record-breaking codes

  • improved storage systems

  • better satellite communication schemes
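A classic illustration of "codes found by search" is the greedy lexicode construction (a sketch, not a production code design): scan binary words in increasing order and keep every word at Hamming distance at least 3 from all words kept so far. For length 7 this greedy search recovers a code with the parameters of the Hamming(7,4) code.

```python
# Greedy "lexicode" search: scan 7-bit words in increasing order and keep
# each word whose Hamming distance to every kept word is at least 3.
def lexicode(n=7, d=3):
    code = []
    for w in range(2 ** n):
        if all(bin(w ^ c).count("1") >= d for c in code):
            code.append(w)
    return code

code = lexicode()
print(len(code))   # 16: the same size as the classical Hamming(7,4) code
```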


Materials science

Huge design space for molecules and crystals.

AlphaFold already showed the power of AI in biology.

Next targets include:

  • superconductors

  • catalysts

  • battery materials

These are essentially search in chemical configuration space.


Algorithms

AI could discover new algorithms.

Some early examples already happened with:

  • AlphaTensor

AlphaTensor discovered faster matrix multiplication algorithms for certain sizes.

That shocked many mathematicians.


4. Why search is suddenly becoming powerful

Three changes happened recently:

Massive compute

We can evaluate billions of candidates.

Hybrid AI systems

Neural networks guide the search.

Fast verification

Computers can instantly check if a candidate is valid.

That combination is extremely powerful.


5. Problems people think AI search may solve next

Researchers often mention:

Small Ramsey numbers

Especially:

  • Ramsey number R(6,6) bounds

  • hypergraph Ramsey numbers


Sphere packing variations

Related to:

  • Sphere Packing Problem

New constructions might appear.


New error-correcting codes

Important for:

  • deep space communication

  • storage systems

  • quantum computing


Circuit optimization

Finding minimal circuits for cryptographic operations.

Huge search space.


6. About SolveAll.org

Yes — SolveAll has been discussed in the AI research community.

The idea behind it is roughly:

Build AI systems specifically designed to attack unsolved scientific and mathematical problems.

Their philosophy aligns with the search paradigm:

  • structured problem databases

  • automated conjecture generation

  • algorithmic exploration

  • verification loops

It’s similar in spirit to projects like:

  • Lean

  • Coq

but aimed at problem-solving pipelines rather than proof verification.

The general concept is gaining traction because of results like:

  • AlphaFold (biology)

  • AlphaTensor (algorithms)

  • AlphaEvolve (combinatorics)


7. The deeper idea: “scientific search engines”

Some researchers think future AI systems will look like:

generate hypothesis
→ test
→ refine
→ repeat

This mirrors the scientific method.

Instead of asking AI for answers, we ask it to explore possibility space.


8. A provocative prediction

Many scientists now think the first major math breakthrough by AI will not be a proof of a famous conjecture such as:

  • Riemann Hypothesis

Instead it will likely be something like:

  • discovering a new combinatorial construction

  • finding new algorithms

  • optimizing structures humans overlooked

Exactly the type of problem that better search excels at.


The key insight from that quote

Some problems are hard because:

  • we don’t understand them

But many are hard simply because:

The search space is enormous and humans can only explore tiny parts of it.

AI is beginning to change that.


If you're interested, I can also show you 5 famous open problems that are almost purely search problems (and therefore extremely likely to fall to AI in the next 10–20 years). They’re fascinating because they look deceptively simple.

“search-dominated” problems

 Here are 5 famous open problems that are widely believed to be “search-dominated” problems—meaning the main barrier is exploring enormous spaces of possibilities rather than inventing completely new mathematics. These are exactly the types of problems where AI-guided search systems (like AlphaEvolve-style systems) could realistically make breakthroughs.


1. Determining the exact value of R(6,6)

  • Ramsey number R(6,6)

  • Field: Ramsey Theory

We currently know:

[
102 \le R(6,6) \le 165
]

Meaning:

  • A graph with 101 vertices exists with no 6-clique or 6-independent set.

  • But by 165 vertices, such a structure must exist.

The challenge:

  • The number of graphs on ~100 vertices is astronomically large.

  • But many are structurally similar.

AI systems could:

  • search for larger constructions (raise lower bounds)

  • help prove, with SAT solvers, that no larger construction exists

  • guide symmetry discovery

Many combinatorialists think this is a prime candidate for AI progress.


2. Better sphere packing constructions in high dimensions

  • Sphere Packing Problem

The problem asks:

What is the densest way to pack spheres in n dimensions?


Known spectacular result:

  • E8 lattice

But for many dimensions (9–30 especially):

  • best packings are unknown

  • improvements usually come from clever constructions

AI could search:

  • lattice structures

  • algebraic symmetries

  • packing configurations

This is extremely similar to Ramsey searches.


3. Better error-correcting codes

  • Coding Theory

Goal:

Maximize how many errors a code can detect or correct.

These are used in:

  • deep space communication

  • SSD storage

  • satellite transmission

  • QR codes


Most record-holding codes historically came from:

  • clever search

  • algebraic construction

AI could discover:

  • new record codes

  • shorter codes with same error tolerance

Even small improvements have massive engineering impact.


4. Faster matrix multiplication algorithms

Matrix multiplication speed is governed by an exponent ω: the smallest number such that two n×n matrices can be multiplied in roughly n^ω operations.

Current best:

[
\omega < 2.372
]

This matters everywhere:

  • graphics

  • machine learning

  • simulations

  • cryptography

An AI already surprised mathematicians here:

  • AlphaTensor

AlphaTensor discovered new multiplication algorithms for specific matrix sizes.

These were not known before.

Researchers suspect AI could:

  • find better algorithms

  • push the theoretical exponent lower

Because the problem is fundamentally search over algebraic decompositions.
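The classical seed of this search space is Strassen's algorithm, which multiplies 2x2 matrices with 7 scalar multiplications instead of 8; AlphaTensor searched for decompositions of exactly this kind in larger formats. A Python sketch, checked against the ordinary product:

```python
# Strassen's decomposition: the product of two 2x2 matrices via 7 scalar
# multiplications instead of the naive 8.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]], matching the ordinary row-by-column product
```

Applied recursively to blocks, the 7-multiplication trick gives an n^(log2 7) ≈ n^2.81 algorithm; every later improvement to ω comes from finding a better decomposition of the same algebraic kind.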


5. Minimal circuits for cryptography

Cryptographic primitives often depend on:

  • Boolean circuits

  • logic gate networks

Example fields:

  • Cryptography

  • Circuit Complexity

Goal:

Implement operations with the fewest gates possible.

Why it matters:

  • faster encryption

  • lower power hardware

  • smaller chips

But the design space is enormous.

AI-guided search could explore:

  • millions of circuit configurations

  • optimized gate layouts


Why these problems are ideal for AI

They share 4 properties:

  1. Huge search spaces

  2. Clear objective functions

  3. Fast verification

  4. Structured symmetries

This is exactly the environment where systems like:

  • AlphaGo

  • AlphaFold

  • AlphaEvolve

perform extremely well.


The bigger shift happening in science

Historically:

human intuition → theorem

Now we increasingly see:

AI search → candidate structure
human → explanation/proof

AI becomes a discovery engine, not just a reasoning tool.

Some researchers call this the rise of computational discovery.


My personal prediction

In the next 10–20 years, AI search systems will likely:

  • improve several Ramsey bounds

  • discover new codes

  • find better matrix multiplication algorithms

  • design novel materials

But the biggest impact may be discovering mathematical objects humans didn’t know to look for.


If you want, there’s an even deeper idea behind this trend:

Some mathematicians think many unsolved problems are really “compression problems” rather than reasoning problems.

That idea connects Ramsey theory, AI search, and even physics—and it’s pretty mind-bending.