The statement “AlphaEvolve has helped establish new lower bounds for five classical Ramsey numbers” is significant because it represents progress in one of the hardest areas of combinatorics and theoretical mathematics. To understand why it matters, we need to unpack a few pieces.
Ramsey numbers come from Ramsey Theory, which studies when order must appear inside randomness.
A Ramsey number R(m,n) asks:
What is the smallest number of people needed at a party so that either at least m people all know each other or at least n people are all strangers?
Example:
Ramsey number R(3,3) = 6
Meaning:
In any group of 6 people, there must be either:
3 mutual friends, or
3 mutual strangers.
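For a case this small, the claim can be checked by brute force. The sketch below (plain Python, variable names are mine) verifies both directions of R(3,3) = 6: the 5-cycle avoids both patterns, and all 2^15 graphs on 6 vertices contain one of them.

```python
from itertools import combinations

def mono_triangles(adj, n):
    """Count 3-vertex subsets that are all-connected (a clique)
    or all-disconnected (an independent set)."""
    bad = 0
    for a, b, c in combinations(range(n), 3):
        e = adj[a][b] + adj[a][c] + adj[b][c]
        if e == 3 or e == 0:
            bad += 1
    return bad

def bitmask_to_adj(mask, n):
    """Decode an integer bitmask (one bit per vertex pair) into an adjacency matrix."""
    adj = [[0] * n for _ in range(n)]
    for idx, (i, j) in enumerate(combinations(range(n), 2)):
        if mask >> idx & 1:
            adj[i][j] = adj[j][i] = 1
    return adj

# The 5-cycle has no 3 mutual friends and no 3 mutual strangers, so R(3,3) > 5 ...
c5 = [[0] * 5 for _ in range(5)]
for i in range(5):
    j = (i + 1) % 5
    c5[i][j] = c5[j][i] = 1
assert mono_triangles(c5, 5) == 0

# ... and every one of the 2^15 graphs on 6 vertices contains one pattern
# or the other, so R(3,3) = 6.
assert all(mono_triangles(bitmask_to_adj(m, 6), 6) > 0 for m in range(2 ** 15))
```

This exhaustive check is exactly what stops working at larger sizes, which is the point of the rest of this discussion.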
For larger numbers, like R(5,5) or R(6,6), the values become extremely hard to determine.
Most Ramsey numbers are unknown.
For example:
Ramsey number R(5,5) is only known to be between 43 and 48.
For larger ones, the uncertainty ranges can be hundreds or thousands wide.
Mathematicians usually try to prove:
Lower bound: the number is at least some value
Upper bound: the number is at most some value
Closing the gap between these bounds is a major research challenge.
If researchers show a larger lower bound, it means they discovered a specific structure that avoids the forced pattern longer than previously known.
Example idea (simplified):
Previously:
Largest known graph avoiding the pattern: N = 42
New result:
Someone constructs one with N = 44
Now we know:
R(m,n) > 44
That’s a new lower bound.
Even improving by 1 can be a major breakthrough.
AlphaEvolve is an AI system developed by Google DeepMind that searches for mathematical structures.
Instead of proving theorems directly, it:
Generates candidate graphs
Tests whether they avoid certain forbidden patterns
Evolves better structures using search and optimization
Using this process, it discovered larger graphs without certain cliques or independent sets, improving known bounds.
So when you read:
“AlphaEvolve established new lower bounds for five classical Ramsey numbers”
it means:
The AI found better constructions than humans previously knew.
Those constructions push the known limits of these Ramsey numbers.
There are several reasons mathematicians care:
Some Ramsey bounds haven't improved in 20–30 years.
For graphs with 40–50 vertices there are more possibilities than atoms in the universe.
AI can explore this space far more aggressively than humans.
Similar to:
AlphaGo (games)
AlphaFold (biology)
AlphaEvolve suggests AI may become a tool for discovering new mathematical objects.
✅ In simple terms:
The significance is that an AI system discovered better examples of large pattern-avoiding graphs, which pushed forward the known limits of several Ramsey numbers—something mathematicians had struggled to improve for years.
If you're interested, I can also explain how the AI actually searches the graph space (it’s a fascinating mix of evolutionary algorithms + combinatorics + SAT solving).
Think of the problem as searching for a special graph with N vertices where:
there is no clique of size m (no group where everyone connects to everyone)
there is no independent set of size n (no group where nobody connects)
A graph like that proves:
R(m,n) > N
because you’ve shown the forbidden pattern does not yet appear at size N.
The number of possible graphs on N vertices is:
2^{\binom{N}{2}}
For N = 40:
2^{780} \approx 10^{234}
That’s far beyond brute-force enumeration.
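These counts are easy to sanity-check with Python's exact big-integer arithmetic:

```python
import math

edges = math.comb(40, 2)      # unordered vertex pairs in a 40-vertex graph
assert edges == 780

graphs = 2 ** edges           # each pair is either an edge or not
digits = len(str(graphs))     # number of decimal digits
assert digits == 235          # i.e. 2^780 is on the order of 10^234
```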
AI systems combine three main ideas.
Candidate graphs are treated like organisms.
Steps:
Generate random graphs
Score them (how close they are to violating the rule)
Mutate edges
Keep better ones
Repeat millions of times
This is essentially an evolutionary algorithm.
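The loop above can be sketched in a few dozen lines. This is a toy illustration of the generate / score / mutate / keep-better idea, not AlphaEvolve's actual algorithm, and the restart and step budgets are arbitrary choices of mine. It evolves an 8-vertex graph with no 3-clique and no independent 4-set, which certifies R(3,4) > 8 (the true value is R(3,4) = 9).

```python
import random
from itertools import combinations

def make_scorer(n, m, k):
    """Return a function counting forbidden patterns: m-cliques plus
    k-independent sets. Zero means the graph is a valid witness."""
    m_sets = [list(combinations(s, 2)) for s in combinations(range(n), m)]
    k_sets = [list(combinations(s, 2)) for s in combinations(range(n), k)]
    def score(adj):
        bad = sum(1 for ps in m_sets if all(adj[a][b] for a, b in ps))
        bad += sum(1 for ps in k_sets if not any(adj[a][b] for a, b in ps))
        return bad
    return score

def evolve(n=8, m=3, k=4, restarts=300, steps=500):
    """Random-restart hill climbing with single-edge mutations."""
    score, pairs = make_scorer(n, m, k), list(combinations(range(n), 2))
    for _ in range(restarts):
        adj = [[0] * n for _ in range(n)]        # random starting graph
        for i, j in pairs:
            adj[i][j] = adj[j][i] = random.randint(0, 1)
        best = score(adj)
        for _ in range(steps):
            if best == 0:
                return adj                       # success: no forbidden pattern
            i, j = random.choice(pairs)          # mutate one random edge
            adj[i][j] ^= 1
            adj[j][i] = adj[i][j]
            new = score(adj)
            if new <= best:                      # keep if no worse
                best = new
            else:                                # otherwise undo the flip
                adj[i][j] ^= 1
                adj[j][i] = adj[i][j]
    return None

graph = evolve()
assert graph is not None and make_scorer(8, 3, 4)(graph) == 0
```

Real systems add populations, crossover, and learned mutation policies, but the skeleton is the same.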
A graph is not just valid or invalid. It gets a score like:
number of forbidden cliques
number of forbidden independent sets
how close it is to violating constraints
The AI gradually reduces these violations.
Some systems learn patterns humans use, such as:
symmetric graphs
circulant graphs
block constructions
These reduce the search space drastically.
The result: AI finds very large graphs that avoid patterns, pushing the lower bound higher.
Here’s the surprising part:
Ramsey theory rarely solves practical problems directly.
Instead, it tells us something deeper:
Certain patterns are mathematically unavoidable.
This has influenced several real-world fields.
Ramsey-type ideas appear in network reliability.
Large networks inevitably contain:
clusters of tightly connected nodes
clusters of mutually disconnected nodes
This affects:
distributed systems
routing resilience
network partition detection
These ideas are studied in Graph Theory.
Example:
In massive peer-to-peer networks, designers know certain substructures will appear regardless of randomization.
Ramsey theory gives theoretical limits on how long disorder can persist.
The philosophy behind Ramsey theory influenced work in:
Coding Theory
In storage systems:
avoiding specific patterns of interference
guaranteeing certain patterns appear
These combinatorial bounds guide how large codes can be.
In very large circuits, engineers study unavoidable patterns of signal paths.
Combinatorial bounds (related to Ramsey ideas) help analyze:
worst-case interference
cross-talk patterns
This appears in theoretical work related to Extremal Combinatorics.
The classic interpretation:
In large groups, cliques or anti-cliques inevitably form.
This idea has been used in modeling:
online communities
polarization
coalition formation
It’s conceptually related to work in Network Science.
Ramsey theory has had huge philosophical impact, even if practical applications are rare.
It suggests something profound:
Complete randomness cannot persist in large systems.
Order inevitably emerges.
This idea fascinated many thinkers.
The idea appears in the famous book:
The Improbability Principle
It discusses why surprising patterns appear inevitably in large datasets.
The concept that order inevitably emerges from chaos also appears in popular writing about mathematics and patterns.
For example:
Gödel, Escher, Bach discusses deep mathematical structures that appear inevitably in complex systems.
Ramsey numbers are famous because they represent the edge of human mathematical ability.
Even small cases are brutally difficult:
Ramsey number R(4,5)
Ramsey number R(5,5)
Determining exact values requires:
deep combinatorics
massive computation
clever constructions
When an AI system improves even one bound, mathematicians pay attention.
The real importance of systems like AlphaEvolve may be how mathematics is done.
Historically:
humans proved theorems
computers checked cases
Now AI may discover mathematical objects humans would never think of.
Similar shifts already happened with:
AlphaFold in biology
AlphaGo in games
Ramsey problems might become one of the first areas where AI meaningfully expands human mathematical discovery.
✅ In short
Ramsey theory rarely fixes a direct real-world problem.
Its importance is deeper:
it reveals limits of randomness
it shapes combinatorics and network theory
and now it may become a playground for AI-driven mathematical discovery.
If you're curious, I can also show you something fascinating:
Why mathematicians know R(6,6) lies between 102 and 165 — and why computing it exactly could require centuries of CPU time.
The Ramsey number R(6,6) asks:
What is the smallest number N such that every graph with N vertices contains either:
a clique of size 6, or
an independent set of size 6?
We know today that:
102 \le R(6,6) \le 165
But the exact value is unknown.
Even after decades of work in Combinatorics, nobody has determined it.
A graph with N vertices has
\binom{N}{2}
possible edges.
Each edge can be present or absent, so the total number of graphs is:
2^{\binom{N}{2}}
For N = 102:
edges = 5151
graphs = 2^{5151}
That is roughly:
10^{1550}
For comparison:
atoms in the observable universe ≈ 10^{80}
So brute force is impossible.
To verify a candidate graph avoids the forbidden patterns, you must check:
all possible 6-vertex subsets
The number of subsets is:
\binom{102}{6} \approx 1.35 \text{ billion}
Each must be tested to see if it forms:
a clique
an independent set
So checking one graph already requires billions of checks.
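All of these figures can be reproduced exactly:

```python
import math

edges = math.comb(102, 2)
assert edges == 5151                   # possible edges on 102 vertices

digits = len(str(2 ** edges))          # decimal digits of 2^5151
assert digits == 1551                  # ~10^1550 possible graphs

subsets = math.comb(102, 6)            # 6-vertex subsets to check per graph
assert subsets == 1_346_548_665        # about 1.35 billion
```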
Instead of brute force, researchers use clever techniques from Graph Theory.
These include:
Symmetry reduction: many graphs are identical under vertex relabeling, so algorithms explore only unique structures, eliminating duplicates.
Researchers convert the problem into a giant logical formula and solve it with tools from Boolean Satisfiability.
This method helped compute:
Ramsey number R(5,5) bounds
Ramsey number R(4,5) exactly
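The SAT encoding can be sketched directly: one Boolean variable per edge of K_N; for every m-subset a clause saying "not all of these edges are present" (no clique), and for every n-subset a clause saying "at least one of these edges is present" (no independent set). A real effort would hand the formula to a SAT solver; for the tiny R(3,3) case below, brute force over all assignments suffices and confirms the formula is unsatisfiable at N = 6, which is the upper-bound half of R(3,3) = 6. This is a minimal sketch, with DIMACS-style variable numbering.

```python
from itertools import combinations

def ramsey_cnf(n, m, k):
    """CNF with one variable per edge of K_n (numbered from 1, DIMACS-style).
    Satisfiable iff some graph on n vertices has no m-clique and no
    k-vertex independent set."""
    var = {p: i + 1 for i, p in enumerate(combinations(range(n), 2))}
    clauses = []
    for s in combinations(range(n), m):   # no clique: some edge must be absent
        clauses.append([-var[p] for p in combinations(s, 2)])
    for s in combinations(range(n), k):   # no independent set: some edge present
        clauses.append([var[p] for p in combinations(s, 2)])
    return len(var), clauses

nvars, clauses = ramsey_cnf(6, 3, 3)
assert nvars == 15            # C(6,2) edge variables
assert len(clauses) == 40     # 2 * C(6,3) constraints

# Brute force over all 2^15 assignments: none satisfies every clause.
sat = any(
    all(any((a >> (abs(l) - 1) & 1) == (l > 0) for l in c) for c in clauses)
    for a in range(2 ** nvars)
)
assert not sat
```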
Instead of searching randomly, mathematicians design structured graphs such as:
circulant graphs
Paley graphs
algebraic constructions
These often produce the best lower bounds.
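A classical example of such a construction can be verified in a few lines: the Paley graph on 17 vertices (connect i and j when i − j is a nonzero square mod 17) has no 4-clique and no 4-vertex independent set, which certifies R(4,4) > 17. Since R(4,4) = 18, this witness is optimal. The helper name below is mine.

```python
from itertools import combinations

# Paley graph on GF(17): i ~ j when i - j is a nonzero quadratic residue mod 17.
squares = {pow(x, 2, 17) for x in range(1, 17)}
adj = [[(i - j) % 17 in squares for j in range(17)] for i in range(17)]

def has_pattern(adj, n, size, want_edges):
    """Does any size-subset form a clique (want_edges=True)
    or an independent set (want_edges=False)?"""
    return any(
        all(adj[a][b] == want_edges for a, b in combinations(s, 2))
        for s in combinations(range(n), size)
    )

# No 4-clique and no 4-independent set on 17 vertices: R(4,4) > 17.
assert not has_pattern(adj, 17, 4, True)
assert not has_pattern(adj, 17, 4, False)
```

Note that 17 ≡ 1 (mod 4), so −1 is a square mod 17 and the adjacency relation is symmetric.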
Systems like AlphaEvolve can:
explore billions of candidate structures
mutate graphs intelligently
discover constructions humans might never try
Instead of proving the exact number, AI pushes the lower bound upward.
Each improvement means:
“We found an even larger graph with no forbidden structure.”
To prove the exact value, mathematicians must show:
A graph exists with N-1 vertices avoiding the pattern
Every graph with N vertices contains it
The second step is extremely difficult.
It may require checking astronomically many possibilities, even with pruning.
The legendary mathematician Paul Erdős once joked:
If aliens demanded the value of R(5,5), we should mobilize all computers.
If they demanded R(6,6), we should try to destroy the aliens.
This captures how explosively hard these problems become.
Ramsey numbers are like the Mount Everest of combinatorics.
They represent the frontier of Ramsey Theory:
very simple to state
brutally hard to solve
deeply connected to randomness and structure
✅ The big takeaway
The mystery of R(6,6) isn’t just about one number.
It represents a deeper truth:
Even extremely simple mathematical rules can produce complexity beyond brute-force computation.
That’s exactly why tools like AI search systems are becoming so exciting.
If you want, I can also show you something fascinating:
Why diagonal Ramsey numbers grow exponentially (somewhere between roughly 2^{n/2} and 4^n), and why pinning down the exact growth rate is still unsolved. It’s one of the strangest patterns in combinatorics.
Great intuition — many people (including mathematicians) have asked something very similar. Unfortunately, Ramsey numbers don’t decompose nicely into smaller ones like R(3,3). But the reason why is actually very illuminating.
The classical result
Ramsey number R(3,3) = 6
means:
Any graph with 6 vertices must contain either
• a triangle (3-clique)
• or a 3-vertex independent set.
Graphically, it means small groups force triangles.
This is manageable because the graph size is tiny.
Your idea would look like:
Build clusters of size 6 (because R(3,3) = 6)
Connect those clusters carefully
Avoid forming a 6-clique or 6-independent set globally
But the problem is interactions between clusters create new structures.
Suppose you build groups:
Group A
Group B
Group C
Group D
Even if each group avoids large cliques internally, edges between groups combine to create larger cliques.
For instance, take 2 vertices from each of three groups:
A: 2 vertices
B: 2 vertices
C: 2 vertices
If the edges inside each pair and across the groups line up in the right pattern, you suddenly get a 6-clique even though no single group had one.
So local guarantees don’t compose cleanly.
Ramsey problems behave differently from many mathematical objects.
They are not modular.
In fields like:
Number Theory
Group Theory
you often break problems into smaller pieces.
But in Ramsey Theory, the entire graph interacts in complicated ways.
A small pattern anywhere can combine with others to create the forbidden structure.
Rather than decomposing, they use structured constructions.
Example approaches:
Vertices are arranged in a circle and edges follow modular arithmetic rules.
Example:
connect i to j if (j − i) mod n ∈ S
This creates symmetry that helps avoid patterns.
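The rule above can be instantiated directly. With S = {1, 5, 8, 12} (the cubic residues mod 13, a set closed under negation mod 13), the resulting circulant graph is the classical witness that R(3,5) > 13; the true value is R(3,5) = 14. A quick brute-force check:

```python
from itertools import combinations

n, S = 13, {1, 5, 8, 12}   # cubic residues mod 13, closed under negation
adj = [[(j - i) % n in S for j in range(n)] for i in range(n)]

# No triangle among the C(13,3) = 286 vertex triples ...
has_triangle = any(adj[a][b] and adj[a][c] and adj[b][c]
                   for a, b, c in combinations(range(n), 3))

# ... and no independent set among the C(13,5) = 1287 five-vertex subsets.
has_indep5 = any(not any(adj[a][b] for a, b in combinations(s, 2))
                 for s in combinations(range(n), 5))

# Together these certify R(3,5) > 13.
assert not has_triangle and not has_indep5
```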
Using finite fields or quadratic residues (Paley graphs).
These come from deep algebra in Finite Field Theory.
Systems like AlphaEvolve try millions of mutations on candidate graphs.
They evolve structures that avoid forbidden patterns.
There is a recursive inequality, proved by Erdős and Szekeres:
R(s,t) \le R(s-1,t) + R(s,t-1)
For example:
R(6,6) \le R(5,6) + R(6,5)
This helps compute upper bounds.
But it still doesn't let us reduce everything to R(3,3).
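The recurrence is easy to run mechanically. Starting from the base cases R(2,t) = t and R(s,2) = s, it produces concrete (if loose) upper bounds:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def upper(s, t):
    """Upper bound on R(s,t) from the Erdos-Szekeres recurrence
    R(s,t) <= R(s-1,t) + R(s,t-1), with base cases R(2,t) = t, R(s,2) = s."""
    if s == 2:
        return t
    if t == 2:
        return s
    return upper(s - 1, t) + upper(s, t - 1)

assert upper(3, 3) == 6     # tight: R(3,3) = 6
assert upper(4, 4) == 20    # the true value is 18
assert upper(6, 6) == 252   # far above the best known upper bound of 165
```

The gap at (6,6) shows why the recurrence alone is nowhere near enough.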
Your idea is actually related to a major concept in combinatorics:
hierarchical structure building
Many breakthroughs happen when mathematicians find clever ways to assemble small structures into large ones without triggering forbidden patterns.
That’s exactly what:
extremal graph theory
probabilistic combinatorics
AI search systems
are trying to do.
The true values grow extremely fast.
The best known bounds are roughly:
2^{n/2} \lesssim R(n,n) \lesssim 4^n
The lower bound comes from work by Paul Erdős using the probabilistic method.
Meaning:
Even if we can't construct the graphs explicitly, random graphs prove they exist.
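Erdős's argument is short enough to compute: if the expected number of monochromatic K_k in a random 2-coloring of K_n is below 1, some coloring has none, so R(k,k) > n. The function name below is mine.

```python
import math

def erdos_lower_bound(k):
    """Largest n with C(n,k) * 2^(1 - C(k,2)) < 1.  A uniformly random
    2-coloring of K_n then has, on average, fewer than one monochromatic
    K_k, so at least one coloring has none at all, and R(k,k) > n."""
    n = k
    while math.comb(n + 1, k) * 2 < 2 ** math.comb(k, 2):
        n += 1
    return n

# Far below the true R(5,5) >= 43, but the proof is one line of counting.
assert erdos_lower_bound(5) == 11
```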
✅ Key insight
Your idea would work if Ramsey structures were local.
But they’re global emergent patterns — the whole graph must be considered at once.
That’s exactly why even a problem as simple as R(6,6) has resisted solution for decades.
If you'd like, I can also show you something mind-blowing:
There are Ramsey numbers that are literally impossible to compute even with all computers in the universe.
It comes from a connection to logic and the limits of provability.
How many hard problems are really waiting for better search, not just bigger models?
The question has become a serious discussion in AI research recently. The AlphaEvolve Ramsey result is a good example of the “better search” paradigm, which is quite different from scaling neural networks.
Let’s unpack it.
In modern AI there are two broad strategies:
Train massive neural networks with more data and compute.
Examples:
GPT‑4
Gemini
These work well for:
language
perception
coding
reasoning approximations
But they don’t directly explore large discrete spaces.
Instead of predicting answers, the system explores possibilities and verifies them.
Famous examples:
AlphaGo used Monte Carlo tree search
AlphaFold searched structure space
AlphaEvolve searches mathematical constructions
Here the AI:
Generates candidate solutions
Evaluates them with strict rules
Iteratively improves them
This is closer to scientific discovery than prediction.
The improvement to Ramsey bounds was not because of a larger neural network.
It happened because the system could:
explore enormous graph spaces
mutate structures intelligently
keep partial improvements
This falls into Combinatorics, where many problems are essentially search over huge spaces.
So the quote you saw is quite accurate.
Many hard problems are like:
“The solution exists somewhere in a gigantic space of possibilities.”
These areas are widely believed to be ripe for AI-guided search.
Area: Extremal Combinatorics
Problems often ask:
What is the largest structure avoiding some pattern?
Examples likely to improve:
Ramsey numbers
Turán-type graph problems
hypergraph constructions
These are perfect for evolutionary search.
Field: Coding Theory
Goal: find codes with best error-correcting properties.
Historically many codes were discovered by clever search.
AI could find:
new record-breaking codes
improved storage systems
better satellite communication schemes
Huge design space for molecules and crystals.
AlphaFold already showed the power of AI in biology.
Next targets include:
superconductors
catalysts
battery materials
These are essentially search in chemical configuration space.
AI could discover new algorithms.
Some early examples already happened with AlphaTensor, which discovered faster matrix multiplication algorithms for certain sizes.
That shocked many mathematicians.
Three changes happened recently:
We can evaluate billions of candidates.
Neural networks guide the search.
Computers can instantly check if a candidate is valid.
That combination is extremely powerful.
Researchers often mention several targets:
Ramsey bounds: especially R(6,6) and hypergraph Ramsey numbers.
Sphere packing: related to the Sphere Packing Problem; new constructions might appear.
Error-correcting codes: important for deep space communication, storage systems, and quantum computing.
Circuit optimization: finding minimal circuits for cryptographic operations, with a huge search space.
Yes — SolveAll has been discussed in the AI research community.
The idea behind it is roughly:
Build AI systems specifically designed to attack unsolved scientific and mathematical problems.
Their philosophy aligns with the search paradigm:
structured problem databases
automated conjecture generation
algorithmic exploration
verification loops
It’s similar in spirit to projects like:
Lean
Coq
but aimed at problem-solving pipelines rather than proof verification.
The general concept is gaining traction because of results like:
AlphaFold (biology)
AlphaTensor (algorithms)
AlphaEvolve (combinatorics)
Some researchers think future AI systems will look like:
generate hypothesis
→ test
→ refine
→ repeat
This mirrors the scientific method.
Instead of asking AI for answers, we ask it to explore possibility space.
Many scientists now think the first major math breakthrough by AI will not be a proof of something like the Riemann Hypothesis.
Instead it will likely be something like:
discovering a new combinatorial construction
finding new algorithms
optimizing structures humans overlooked
Exactly the type of problem that better search excels at.
✅ The key insight from that quote
Some problems are hard because:
we don’t understand them
But many are hard simply because:
The search space is enormous and humans can only explore tiny parts of it.
AI is beginning to change that.
If you're interested, I can also show you 5 famous open problems that are almost purely search problems (and therefore extremely likely to fall to AI in the next 10–20 years). They’re fascinating because they look deceptively simple.
Here are 5 famous open problems that are widely believed to be “search-dominated” problems—meaning the main barrier is exploring enormous spaces of possibilities rather than inventing completely new mathematics. These are exactly the types of problems where AI-guided search systems (like AlphaEvolve-style systems) could realistically make breakthroughs.
Ramsey number R(6,6)
Field: Ramsey Theory
We currently know:
102 \le R(6,6) \le 165
Meaning:
A graph with 101 vertices exists with no 6-clique and no 6-independent set.
But in every graph with 165 vertices, one of the two must appear.
The challenge:
The number of graphs on ~100 vertices is astronomically large.
But many are structurally similar.
AI systems could:
search for larger constructions (raise lower bounds)
help prove the absence of such structures with SAT solvers
guide symmetry discovery
Many combinatorialists think this is a prime candidate for AI progress.
Sphere Packing Problem
The problem asks:
What is the densest way to pack spheres in (n) dimensions?
Known spectacular result: the E8 lattice, proved optimal in dimension 8.
But for many dimensions (9–30 especially):
best packings are unknown
improvements usually come from clever constructions
AI could search:
lattice structures
algebraic symmetries
packing configurations
This is extremely similar to Ramsey searches.
Coding Theory
Goal:
Maximize how many errors a code can detect or correct.
These are used in:
deep space communication
SSD storage
satellite transmission
QR codes
Most record-holding codes historically came from:
clever search
algebraic construction
AI could discover:
new record codes
shorter codes with same error tolerance
Even small improvements have massive engineering impact.
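Even a naive greedy search already rediscovers a classical code. The "lexicode" construction below (a real classical technique; the parameters are my choice for illustration) scans binary words in numeric order and keeps every word at Hamming distance at least 3 from all words kept so far. At length 7 this yields a 16-word single-error-correcting code, matching the parameters of the [7,4] Hamming code.

```python
def lexicode(length=7, min_dist=3):
    """Greedy code search: scan words in numeric order, keep those at
    Hamming distance >= min_dist from every word kept so far."""
    code = []
    for w in range(2 ** length):
        if all(bin(w ^ c).count("1") >= min_dist for c in code):
            code.append(w)
    return code

code = lexicode()
assert len(code) == 16   # 16 codewords, as in the [7,4] Hamming code

# Every pair of codewords really is at distance >= 3.
assert all(bin(a ^ b).count("1") >= 3
           for i, a in enumerate(code) for b in code[i + 1:])
```

Modern record-hunting replaces the fixed scan order with smarter search, which is exactly where AI comes in.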
Matrix multiplication speed is determined by an exponent \omega.
Current best:
\omega < 2.372
This matters everywhere:
graphics
machine learning
simulations
cryptography
An AI already surprised mathematicians here:
AlphaTensor
AlphaTensor discovered new multiplication algorithms for specific matrix sizes.
These were not known before.
Researchers suspect AI could:
find better algorithms
push the theoretical exponent lower
Because the problem is fundamentally search over algebraic decompositions.
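The kind of decomposition being searched for can be made concrete with the classical example: Strassen's scheme multiplies two 2×2 matrices with 7 scalar multiplications instead of 8. AlphaTensor searched the space of exactly such schemes for larger sizes. A quick check against the naive product (variable names are mine):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
naive = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
assert strassen_2x2(A, B) == naive   # both give [[19, 22], [43, 50]]
```

Applied recursively to block matrices, the 7-multiplication trick is what pushes the exponent below 3; finding decompositions with even fewer multiplications is the search problem.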
Cryptographic primitives often depend on:
Boolean circuits
logic gate networks
Example fields:
Cryptography
Circuit Complexity
Goal:
Implement operations with the fewest gates possible.
Why it matters:
faster encryption
lower power hardware
smaller chips
But the design space is enormous.
AI-guided search could explore:
millions of circuit configurations
optimized gate layouts
These five problems share 4 properties:
Huge search spaces
Clear objective functions
Fast verification
Structured symmetries
This is exactly the environment where systems like:
AlphaGo
AlphaFold
AlphaEvolve
perform extremely well.
Historically:
human intuition → theorem
Now we increasingly see:
AI search → candidate structure
human → explanation/proof
AI becomes a discovery engine, not just a reasoning tool.
Some researchers call this the rise of computational discovery.
✅ My personal prediction
In the next 10–20 years, AI search systems will likely:
improve several Ramsey bounds
discover new codes
find better matrix multiplication algorithms
design novel materials
But the biggest impact may be discovering mathematical objects humans didn’t know to look for.
If you want, there’s an even deeper idea behind this trend:
Some mathematicians think many unsolved problems are really “compression problems” rather than reasoning problems.
That idea connects Ramsey theory, AI search, and even physics—and it’s pretty mind-bending.