Is the destiny of logic immoral or amoral?

N. Lygeros

Translated from the French by Grok

To Roland FRAÏSSE (my oldest friend!)

Will the computer become the workhorse* of logic?

I feel a bit ashamed writing an article about logic without even deeply knowing the work of the great Bertrand Russell. It seems as ridiculous as trying to talk about theoretical physics without having first studied Einstein’s work. My only defense against this self-criticism is that my article doesn’t pretend to be a proper logic thesis — it just tries to do its job as a manifesto.

This manifesto is necessary; otherwise, some branches of mathematics that we currently call “discrete” will become so truly discreet (in the everyday sense: inconspicuous) that, in the future, epistemologists (since mathematicians will have forgotten they exist) will have to rename them “obsolete mathematics” or “extinct mathematics.” This manifesto aims to be critical, because logicians have no excuse.

The fact is, logic no longer has a choice: either it evolves or it will be swept along by evolution. And its future is already clearly mapped out by a tool that represents the pinnacle of automated thought: the computer.

But careful: let us avoid sloppy thinking. The point of this piece is **not** to chant the slogan “Logicians, become computer scientists!” Only idiots would understand it that way. We have no desire for logic to become some “old, dusty branch” of computer science (even though most logicians are already serious competitors for the half-century mark! Fortunately, the same cannot yet be said of their brains, and that is exactly why we hope to convince at least a few of them).

So the goal isn’t to magically turn the classical logician into a computer scientist. We simply want the leading figures in logic to encourage their collaborators — and above all their students — to use the computer in a **heuristic** way.

Even if at first this usage remains mostly computational (including what we call “automated reasoning”), that alone would already be huge progress. Because most logicians — and this extends to mathematicians in general — not only don’t know how to program a personal computer, but they can’t even use a workstation, even though powerful software exists that could help them in their own research.

Of course we have to be a little realistic. We don’t want readers to think logic will suddenly rise from its ashes overnight thanks to computers. True, some areas of logic are still particularly impenetrable to computers — at least so far. But even there, any research that will allow computing power to enter those zones must necessarily start from the work of specialists in those very fields.

Still, in areas like combinatorics, for example, it is high time researchers got real help from electronic computers. It is pathetic to watch the waste of intellect inflicted on rare young logicians who, for lack of proper guidance, spend most of their time on calculations that any decent computer would finish in minutes, and would probably push far beyond human limits, lending much more weight to the thinker's arguments.

We believe the most fundamental lesson from this whole issue is that **the computer can transform a qualitative question into a quantitative conjecture**.
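To make this concrete, here is a minimal sketch; the example is ours, not the article's, and the problem chosen (monotone subsequences in permutations, a classic combinatorial question) is purely illustrative. An exhaustive check over small cases turns the qualitative question, “must every long enough sequence of distinct numbers contain a monotone subsequence of length k?”, into quantitative data from which a conjecture can be read off:

```python
# Illustrative sketch: brute force over small cases turns a qualitative
# question into a quantitative conjecture. (Our example, not the article's.)
from itertools import permutations

def longest_monotone(seq):
    """Length of the longest increasing or decreasing subsequence (O(n^2) DP)."""
    n = len(seq)
    inc = [1] * n  # inc[i]: longest increasing subsequence ending at i
    dec = [1] * n  # dec[i]: longest decreasing subsequence ending at i
    for i in range(n):
        for j in range(i):
            if seq[j] < seq[i]:
                inc[i] = max(inc[i], inc[j] + 1)
            if seq[j] > seq[i]:
                dec[i] = max(dec[i], dec[j] + 1)
    return max(max(inc), max(dec))

def minimal_forcing_length(k, limit=8):
    """Smallest n such that EVERY permutation of n distinct numbers contains
    a monotone subsequence of length k; exhaustive search up to `limit`."""
    for n in range(1, limit + 1):
        if all(longest_monotone(p) >= k for p in permutations(range(n))):
            return n
    return None

# Quantitative data for a conjecture about the threshold:
print([minimal_forcing_length(k) for k in (2, 3)])  # [2, 5]
```

The computed thresholds suggest the conjecture n = (k-1)^2 + 1, which the Erdős–Szekeres theorem confirms: the machine supplies the quantitative evidence, the logician the proof.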

Now let’s go deeper into analyzing the current state of logic. To illustrate our point, consider this question:

Why didn’t Kurt Gödel receive a Fields Medal?

First, a reminder for the few readers who still don’t know: the Fields Medal is awarded every four years to mathematicians under 40 to honor outstanding mathematical work.

Kurt Gödel was born in Brno in 1906. His first major result came in 1929: the completeness theorem, which earned him his doctorate in 1930. But the work that made him famous appeared in 1931; it now bears his name and concerns **incompleteness**. That is not just exceptional work, it is one of the greatest triumphs of the human mind, an honor to humanity.

So how do we explain that neither of the first two Fields Medals, awarded in 1936, went to Gödel? What makes the decision even more regrettable is that even a mathematician would struggle to name who actually received them. Of course we do not want to downplay the work of Lars Ahlfors or Jesse Douglas, but there is simply no comparison with Gödel's achievement.

True, Gödel's reputation grew further in 1938 with his proof of the consistency of Zermelo-Fraenkel set theory (ZF) + the Continuum Hypothesis (CH) + the Axiom of Choice (AC), assuming the consistency of ZF; and again in 1963, with Paul Cohen's work showing the independence of AC from ZF and of CH (in its generalized form) from ZF + AC. It is also true that only specialists, and not even all of them, could really understand Gödel's paper when it first appeared.

Yet none of these excuses can change one hard fact: by 1933, Gödel was already a famous mathematician — three years before the 1936 Fields Medals!

We think the simplest explanation for this misjudgment of Gödel’s contribution is that, from the very beginning, **logic has always been a minority affair**. To confirm this, just look at Hilbert’s program. While mathematics was in the middle of a foundational crisis and logic was the only field that could do something about it, Hilbert gave it only **two** out of his famous 23 problems — and he thought both were provable. Those two? The Continuum Hypothesis and the consistency of arithmetic — both of which turned out to be **undecidable**!

And do you really think Alfred Tarski, probably one of the greatest logicians ever, owes his fame to his work in logic? He wrote “The Concept of Truth in Formalized Languages” (1931/1935), Introduction to Mathematical Logic (1937), “The Semantic Conception of Truth” (1944), Logic, Semantics, Metamathematics (1956)… yet what made him truly famous is a piece of pure mathematics: the Banach–Tarski paradox!

Our ignorance and your patience force us to stop with these examples. But there are certainly many more throughout the history of logic.

So it seems logicians are used to being treated as an intellectual minority within mathematics. As M. Pouzet pointed out to us, the problem isn’t really about numbers — it’s about the **ratio** of logicians to mathematicians. The number of mathematicians has grown exponentially compared to the (much slower) growth of logicians. If things continue this way, this minority will become **negligible** — in some universities, it already is!

It would be truly regrettable if the expansion of the mathematical universe ended up endangering the very existence of logicians, precisely when their field is still capable of spawning many new branches!

* In Greek, the word for “horse” (άλογο) literally means “without logos”: without reason, without logic!
