Sunday 12 April 2020

Does the universe use irrational numbers?

I've been increasingly drawn to an area that I always thought of as being pretty much as esoteric as they come: Brouwer's theory of "intuitionistic" mathematics.  Although this has very little to do with distributed computing, it seems so interesting that I thought I might share it here.

If you've ever read the wonderful illustrated historical novel by Christos Papadimitriou, Logicomix, then you probably know that logic has struggled to deal with the kind of mathematics we learn in high school and that also underlies classical physics.  Logicomix is presented a bit like a comic book, but you'll find it serious in tone -- more or less a European "bande dessinée", meaning "illustrated story".  I highly recommend it.

Christos describes the history leading up to the introduction of computer science.  The early years of this history centered on logic, algebra, calculus and insights into what Wigner would much later characterize as the "unreasonable effectiveness of mathematics in the natural sciences".  Christos centers his tale on the giants of mathematical logic and their struggle to deal with ideas like completeness, undecidability and higher-order statements, and notions like empty sets and infinite ones.  Logic appealed to its early practitioners for its elegance and clarity, yet turned out to be exceptionally hard to extend so that it could apply to real-world mathematical and physical questions.

The early period of logic coincided with a tremendous revolution in physics.  Thus even as generation after generation of logicians lost their minds, abandoned mathematics to join monasteries, and generally flaked out, physics proved the existence of molecules, developed theories explaining the behavior of light and electromagnetic fields, uncovered the structure of the atom, and discovered radioactivity and explained it.  Quantum mechanics and general relativity were discovered.  This led to a situation in which physicists (and many mathematicians) became dismissive of logic, which seemed mired in the past, stuck on obscure absurdities and increasingly incapable of expressing the questions that matter.

Yet the rapid advance of physics relative to logic has left a worrying legacy.  For example, have you ever seen Ramanujan's elegant little argument that the infinite sum 1+2+3+4+... is equal to -1/12?  He produced it when trying to show that there was an inconsistency in the definitions he was using to study Euler's zeta function (an important tool in physics).  In effect, Ramanujan was concerned that when working algebraically, you run a risk of serious errors if you use algebraic variables to represent infinite quantities or even divergent infinite series.  Yet this occurs all the time: physical theories make constant use of infinity, pi, e, α and so forth.

In the 1930's, the Bourbaki consortium of mathematicians tried to save the day: they launched a grand project to axiomatize all of mathematics (but without treating logic in any special way).  Nearly a century and many volumes later, their axiomatic work is still far from completion.

Brouwer, an early-20th-century logician, proposed a different approach: he offered a mix of theorems and conjectures that added up to an area he called "intuitionist mathematics", in which the goal was to rederive all of algebra and calculus from the ground up using a logically sound algebra: a fully self-consistent description that we could understand as a higher-order logic over the rational numbers.  Brouwer made huge strides, but his project, like the Bourbaki one, was unfinished when he passed away.  The big unfinished question centered on how to treat irrational and transcendental values.  But this open question now seems to be getting answered!

My colleague Bob Constable leads a group called the NuPRL project.  Bob is a close friend and we have done some work together: many years ago, he and Robbert van Renesse met with Nancy Lynch and, at her urging, decided to see if we could formalize the virtual synchrony model of distributed computing using NuPRL.  I won't discuss that work here, except to say that it was remarkably successful, and ended up also including proofs of versions of Paxos and even a constructive version of the famous Fischer, Lynch and Paterson impossibility result for fault-tolerant consensus.  But I do want to tell you a little about NuPRL itself and how it differs from other logic tools you may already be familiar with, like the ones used for program verification -- NuPRL has a grander goal.

The system implements a "program refinement logic", meaning that it establishes a formal correspondence between logical statements and executable programs.  In NuPRL, when you prove that for every integer x there exists an integer y such that x < y, you can also extract a function that carries out your proof, generating this larger value y from a given x (for example, by adding 1, or whatever technique you used in your proof).
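To make the correspondence concrete, here is a toy sketch in Python -- my own illustration of the idea, not NuPRL's actual machinery (NuPRL extracts OCaml terms from its proof objects).  The constructive content of the proof is simply a witness-producing function:

```python
# Toy illustration of proof extraction (a sketch of the idea only).
# A constructive proof of "for every integer x there exists an integer y
# with x < y" carries a recipe for building y; extraction turns that
# recipe into a runnable program.

def witness(x: int) -> int:
    """The extracted program: produces the y whose existence was proved."""
    return x + 1  # the "add 1" step of the proof, now executable

# The extracted program satisfies the proved property on every input:
for x in (-5, 0, 1000):
    assert x < witness(x)
```

The key point is that the proof and the program are the same object seen two ways: change the proof technique, and the extracted code changes with it.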

More broadly, in NuPRL every proof is automatically checked for correctness, and the semantics of any provable statement map to a piece of code (in OCaml, which can then cross-compile to C or C++) that will perform the computation the proof describes.

As a fancier example, back when the 4-coloring theorem for planar graphs was first proved, the proof itself didn't tell us how to generate a coloring.  It simply showed that if a certain collection of "embedded subgraphs" is 4-colorable, then any planar graph can be 4-colored.  But had that proof been carried out in NuPRL, it would have come with an executable artifact: a program that 4-colors any planar graph.

Central to NuPRL are two aspects.  First, the system never permits a proof by contradiction, which is important because the logic it uses is strong enough to encode undecidable statements.  Thus if we have some formula F and show that assuming F to be false leads to a contradiction, we cannot safely conclude that F is true: F might also be undecidable.  Bob has sometimes pointed out that there are proof tools out there today that overlook this issue, and hence are fully capable of verifying incorrect proofs.

Second, NuPRL is a higher-order logic.  Thus if we were to write down the famous inconsistent statement defining S to be "the set of all sets that are not members of themselves", we would find that in NuPRL the statement isn't inconsistent at all.  In a first-order logic, we wouldn't be able to describe a set of sets.  In a higher-order logic, we couldn't write this statement at all: the type of S (the new higher-order set) is distinct from the type of its elements (its lower-order members).  This prevents the NuPRL user from expressing an ill-formed assertion.
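To illustrate the stratification idea, here is a toy of my own in Python -- far cruder than NuPRL's actual type theory, but it captures the flavor: give every set an "order", and only permit membership questions one level down.  Russell's question then cannot even be posed:

```python
# Toy model of type stratification (my own sketch, not NuPRL's theory).
# A set of order n may only contain objects of order n-1, so the
# question "is S a member of S?" is rejected before it can be asked.

class StratifiedSet:
    def __init__(self, order: int, elements):
        for e in elements:
            e_order = e.order if isinstance(e, StratifiedSet) else 0
            if e_order != order - 1:
                raise TypeError("element order must be one below the set's")
        self.order = order
        self.elements = list(elements)

    def __contains__(self, item):
        item_order = item.order if isinstance(item, StratifiedSet) else 0
        if item_order != self.order - 1:
            raise TypeError("membership question is ill-typed")
        return item in self.elements

s = StratifiedSet(1, [1, 2, 3])  # a set of numbers: fine
t = StratifiedSet(2, [s])        # a set of sets: also fine
try:
    s in s                       # Russell's question: a type error
except TypeError:
    print("rejected: a set cannot be asked about itself")
```

A real type theory does this statically, at proof-checking time, rather than with runtime checks; the sketch only shows why the paradoxical sentence never gets off the ground.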

The exciting new development is recent: over the past few years, the NuPRL project has tackled Brouwer's theory with tremendous results.  Bob and his main collaborator, a remarkable "constructive logician" named Mark Bickford, recently proved a series of open conjectures that Brouwer had left on the table at the time of his death, and even more recently gave a talk at a major European logic and programming-languages conference on the rapid progress of their broader effort, which by now seeks to complete Brouwer's agenda and fully axiomatize algebra and calculus.  Theorems like the chain rule and the central limit theorem turn out to have elegant, much shorter proofs in NuPRL, because by defining them over a logically sound foundation, Bob and Mark don't need to state all sorts of assumptions that one finds in any standard calculus textbook: the definitions used for continuous functions and algebraic objects turn out to carry all the needed baggage, and don't need to be separately restated.  They are in the middle of writing a book on the work, but some papers can already be found on the NuPRL website at Cornell.

The fascinating aspect of Brouwer's approach centers on the elimination of irrational numbers and transcendental constants.  In effect, Brouwer argued that rather than think of real numbers as infinite sequences of digits, we would do better to focus on algebraic definitions.  For irrational algebraic numbers, this entails representing the value as the root of some polynomial equation.  Pi and other transcendental numbers cannot be so expressed, but they can be captured using a formal notion of convergence that defines them as limits of Taylor-series approximations or other similar recursive expressions.  Brouwer's agenda was to show that this approach can accomplish everything we do in normal calculus, including such puzzles as continuity (which defeated the logicians of the 18th and 19th centuries).  Moreover, we avoid the inconsistencies that sloppier treatments can introduce -- the same ones that puzzled and frustrated Ramanujan!  The NuPRL twist is that an infinite sum, such as the one Ramanujan looked at, is a higher-order object, like our set S was earlier.  One can define it, and use it in equations, but the type-checking rules don't allow such an object to be "misused" as if it were a first-order construction.  In Ramanujan's original letter observing the inconsistency of Euler's definitions, that sort of abuse of notation was at the root of the issue.
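A toy Python rendering of this idea (a simplified Cauchy-style encoding of my own, not the NuPRL formalization): a "real number" becomes a program that, given n, returns a rational within 1/2^n of the value.  The square root of 2 comes from bisecting its defining polynomial; pi comes from a rapidly converging arctangent series (Machin's formula, my choice of approximation here):

```python
from fractions import Fraction

# A "constructive real" as a program: approx(n) returns a rational
# within 1/2**n of the value.  (A simplified sketch, not NuPRL's encoding.)

def sqrt2(n: int) -> Fraction:
    """Root of x**2 - 2, located by interval bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

def arctan_recip(m: int, terms: int) -> Fraction:
    """Partial sum of arctan(1/m) = sum_k (-1)^k / ((2k+1) * m^(2k+1))."""
    return sum(Fraction((-1) ** k, (2 * k + 1) * m ** (2 * k + 1))
               for k in range(terms))

def pi_approx(n: int) -> Fraction:
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    terms = n // 4 + 2  # enough terms to push the error below 1/2**n
    return 16 * arctan_recip(5, terms) - 4 * arctan_recip(239, terms)

print(float(sqrt2(30)))      # a rational approximation of sqrt(2)
print(float(pi_approx(30)))  # a rational approximation of pi
```

Nothing here ever manipulates an "infinite sequence of digits": every value that appears is an exact rational, and the irrational number exists only as the procedure that generates better and better rationals on demand.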

So, why am I sharing all of this?  Well, in addition to being a fan of NuPRL, I've always been fascinated by work on the foundations of quantum mechanics (I was a physics undergraduate before abandoning the area to switch into computer science), and I recently found myself reading an article about a new movement to redefine physics using Brouwer's intuitionism.  So here we have a movement within the physics community trying to fix these inconsistencies in the mathematical formulation of physical laws, side by side with a huge advance by the intuitionistic mathematics community, offering just the tools they will need!  (Not to mention a pretty nifty theorem prover they can work with.)

Why are the physicists suddenly worried about inconsistencies?  One reason is that with the advent of quantum computing, physicists have become fascinated with the computable, and with an "information"-based understanding of the most basic levels of the universe itself.  A natural question this raises is whether it makes sense to presume that the universal laws that define reality really entail doing mathematics on irrational numbers, computing with transcendental constants, and so forth.

Part of the agenda is to try to fix some of the issues seen when quantum mechanics and general relativity are merged into a single theory.  This new intuitionistic community argues that the inconsistencies seen in previous efforts to carry out such a merger are very similar to the issue Ramanujan noted in Euler's work.  Indeed, there are even physical theories in which the "fact" that the infinite sum mentioned earlier is equal to -1/12 seems to play a role.  They aren't claiming such work is wrong, but simply that it is strong evidence that sloppy mathematics may be limiting progress in physics.

And the cool thing is that there is even computational support for their view.  For example, one recent study pointed out that the equations of motion for three non-rotating black holes are chaotic to such a degree that no rational approximation of their positions and trajectories can fully predict their future paths (or backtrack to their past).  That particular piece of research was focused on a possible explanation for the arrow of time: it suggests that this observation can help explain why time flows from past to future.
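The underlying sensitivity is easy to demonstrate with a far simpler chaotic system.  Here is a sketch using the logistic map -- my stand-in for illustration; the study itself concerns three-body gravitational dynamics -- in which two starting points agreeing to twelve decimal places quickly lose all resemblance:

```python
# Sensitivity to initial precision, illustrated with the logistic map,
# a standard toy chaotic system (a stand-in here for the vastly harder
# three-black-hole dynamics discussed above).

def iterate(x: float, steps: int) -> float:
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)  # the fully chaotic regime of the map
    return x

a = iterate(0.123456789012, 100)
b = iterate(0.123456789013, 100)  # differs only in the 12th decimal place
print(abs(a - b))  # after 100 steps the trajectories bear no resemblance
```

Each iteration roughly doubles the initial discrepancy, so any fixed-precision rational starting point, however fine, exhausts its predictive power after finitely many steps.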

In an intuitionist model of physics, expressed over a granular space-time in which distances smaller than the Planck length have no meaning, all sorts of puzzles vanish.  For example, in quantum mechanics one sees fields that seem to be arbitrarily small.  We hear about the idea that a photon might be here in Ithaca... but with some low probability might actually be somewhere out near Alpha Centauri, light-years away.  In an intuitionistic formulation, particularly one over a quantized model, such probabilities shrink to zero -- not to a small value, but to zero.  Locations become pixelated, just like in any graphics system.  And because we no longer need to worry about the "effect" of photons over in Alpha Centauri on events right here and now in Ithaca, the universal computation (the one the universe itself does, to compute its own next state from its current state) requires only a bounded calculation, not an infinite one.
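As a cartoon of that last point -- purely illustrative, and in no sense a physical model -- snap probabilities to a finite grid, and anything below one grid cell becomes exactly zero, so the state update only has to visit the locations that survive:

```python
# A cartoon of "pixelated" probabilities (purely illustrative, not a
# physical model).  Values are snapped to a finite grid; anything below
# one grid cell becomes exactly zero and drops out of the computation.

GRID = 1_000_000  # hypothetical granularity of the toy model

def pixelate(p: float) -> float:
    return round(p * GRID) / GRID

probabilities = {
    "Ithaca":         0.9999969,
    "Boston":         0.0000030,
    "Alpha Centauri": 0.0000001,  # far below one grid cell
}
state = {place: pixelate(p) for place, p in probabilities.items()}
support = {place for place, p in state.items() if p > 0}
print(support)  # only the surviving locations need to be computed at all
```

The computation over the surviving support is bounded by the grid resolution, which is the cartoon version of the claim that a granular universe needs only a bounded calculation per step.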

Hopefully, I've interested you in this enough to start a small reading project on the topic... I find it fascinating.  Wouldn't it be amazing if 2020 turns out to be the year when computer scientists at Cornell -- logicians, in fact -- helped the physics community put this most basic of sciences on a sound footing that could eliminate all those pesky inconsistencies and absurdities?  What a great step that would be, in a story that drove generations of mathematicians and logicians out of their minds...