Do Mathematicians Think Like Computers?


Here’s a very interesting philosophical question: How can our brain understand mathematics? Is it like a computer, or does it work in a different way? Understanding the answer points to the most effective way to learn mathematics, because if you know how your brain works, you know the best way to teach it.

Neurophysiologist Warren McCulloch believed that the brain is a “logical machine”, based on what he found in a 1943 paper written with the logician Walter Pitts. Together they built a mathematical model of neurons that stripped away the complex biological reactions and reduced each neuron to the bare minimum, giving it only two options: to fire or not to fire.

In this model, a neuron gets little pushes from other neurons. You count how many pushes it received: if the count is high enough, it fires; if the count is too low, it stays silent. For example, say a neuron has a threshold of 2. If only one input is active, the total push is 1, which is below 2, so it doesn’t fire. If two inputs are active together, the total push is 2, which reaches the threshold, so it fires.
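The counting rule above is simple enough to capture in a few lines of code. Here is a minimal sketch (my own illustration, not McCulloch’s original notation):

```python
# A minimal McCulloch-Pitts-style unit: inputs are 0 or 1, and the
# neuron fires (outputs 1) exactly when enough inputs are active.

def neuron(inputs, threshold):
    """Return 1 if the total 'push' reaches the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# The neuron from the example above, with a threshold of 2:
print(neuron([1, 0], threshold=2))  # one active input  -> prints 0 (silent)
print(neuron([1, 1], threshold=2))  # two active inputs -> prints 1 (fires)

# With the right thresholds, the same rule computes Boolean functions:
AND = lambda a, b: neuron([a, b], threshold=2)
OR  = lambda a, b: neuron([a, b], threshold=1)
```

That last step, getting Boolean gates out of pure threshold counting, is exactly what makes networks of these units computationally interesting.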

McCulloch asked the following question: can large networks of these simple on/off units still do serious computation? The answer is yes. In computer jargon, we would say that such a network has the power of a Turing machine. In a nutshell, he concluded that “a nervous system can compute any computable number.”

In doing so, McCulloch followed in the footsteps of the mathematician George Boole, who invented “Boolean” logic, which describes how the binary values 0 and 1 (that is, true and false) can be combined in logical computations.

The metaphor of the brain as a computer is very popular, not only among the general public but also among some cognitive scientists. On this view, it doesn’t matter whether the computer runs on silicon or on nerve cells. Alonzo Church and Alan Turing both held that any function the human mind can compute can also be computed by a Turing machine, that is, by a computer.

So, is the brain really nothing more than a “logical machine”? And does this innate logical capability explain why we have mathematical abilities?

Stanislas Dehaene, the author of The Number Sense, would say that this explanation is far too simple, and what we will show you today comes from his book. He argues that rigorous calculation does not come easily to humans. As a simple example, someone who has been doing calculations for years still takes tens of seconds to multiply two numbers of six digits or more. That’s something the most basic computer in your home can do in milliseconds.

On a series of logical steps, the computer’s performance is flawless, while our own brain is extremely slow. But on something like recognizing shapes or attributing meaning, the brain is far better.

So, to make their position more subtle, many psychologists (a school known as ‘functionalists’) would not reduce it to brain = computer, but would treat the brain as an information-processing device. Even though the brain is made of ‘wet stuff’, that wet stuff doesn’t matter, fundamentally speaking: everything can be reduced to logical processes and discrete functions, to inputs and outputs.

Studying the brain in that way can still teach us a lot. But it still isn’t quite right, because, well… it doesn’t take emotion into consideration. And Dehaene argues that logic and emotion are strongly connected, which has a huge influence on how we make decisions and how we reason.

And John von Neumann would agree. In his own words, from his book The Computer and the Brain: “the language of the brain [is] not the language of mathematics”, and “When we talk about mathematics, we may be discussing a secondary language, built upon the primary language truly used by the central nervous system. Thus, the outward forms of our mathematics are not absolutely relevant from the point of view of evaluating what is the mathematical or logical language truly used by the central nervous system.”

Basically, when we do mathematics, we express it in symbols, rules, proofs, equations, and so on. But von Neumann is saying that this is just the outer translation of a deeper internal process: the brain has its own internal language that lets it process things in a different way.

For example, an interesting demonstration of how the brain processes numbers differently from a computer is something called the “distance effect”.

In one study, Arabic digits or number words were flashed on a screen, and participants were asked to press one button if the number was larger than 5, and another button if it was smaller than 5. Their brain activity was recorded with electrodes placed on the scalp.

Though this study has many interesting nuances, for our purposes there’s one thing to highlight: it systematically takes us more time to compare numbers that are close to each other, like 1 and 2, than numbers far apart, like 1 and 9.

But for computers, the comparison time is constant, no matter what the numbers are. 
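To make the contrast concrete, here is a toy sketch. The digital comparison costs the same no matter which numbers it gets, while `human_rt_model` is a made-up illustrative formula (not taken from the study) in which response time shrinks as the numerical distance grows:

```python
def computer_compare(a, b):
    # One machine operation: same cost whether the numbers
    # differ by 1 or by 8.
    return a > b

def human_rt_model(a, b, base=0.4, scale=0.3):
    # Hypothetical distance-effect model (illustrative only):
    # the closer the two numbers, the slower the response.
    return base + scale / abs(a - b)

print(computer_compare(6, 5), computer_compare(9, 5))  # both answered instantly
print(human_rt_model(6, 5))  # close pair: larger value (slower response)
print(human_rt_model(9, 5))  # distant pair: smaller value (faster response)
```

Any decreasing function of the distance would make the same point; the shape of the curve is not the claim, only that human response time depends on distance while the machine’s does not.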

Programming a computer to reproduce this behavior would be much harder than the straightforward comparison it performs now. So the distance effect is fundamental to the human brain, but it is not a property of digital computers. A good question is: are there other kinds of machines that behave like the brain here? The answer is yes, analog machines.

A familiar analog machine is a balance scale. If you place a one-pound weight on the left pan and a nine-pound weight on the right, then as soon as you let go, the scale immediately tips to the right; obviously, because 9 is larger than 1.

But if you were to replace the nine-pound weight with a two-pound one, the scale would take noticeably longer to come down on the right side. In fact, the time it takes to tip is inversely proportional to the square root of the difference in weight, which fits nicely with our brains, which take longer to compare 2 and 1 than 9 and 1. So the way we compare numbers is actually pretty weird, and though you could, technically speaking, simulate the behavior of an analog computer on a digital one, the principle on which each machine runs is fundamentally different.
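Putting numbers on the scale analogy (the proportionality constant `k` is arbitrary; only the ratio between the two times matters):

```python
import math

def tipping_time(w_left, w_right, k=1.0):
    # Time for the balance to tip: inversely proportional to the
    # square root of the weight difference (k is an arbitrary constant).
    return k / math.sqrt(abs(w_left - w_right))

t_close = tipping_time(1, 2)  # weight difference 1
t_far   = tipping_time(1, 9)  # weight difference 8
print(t_close / t_far)        # sqrt(8), about 2.83: the close pair takes ~3x longer
```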

And it’s not only that. There’s another very strong argument that our brains don’t work like “logical machines”. Around the turn of the twentieth century, logicians such as Dedekind, Peano, Frege, Russell, and Whitehead tried to found arithmetic on purely formal rules.

So for example, take Peano’s axioms, which can basically be reduced to these five statements:

• 1 is a number.

• Every number has a successor, denoted as Sn or simply as n+1.

• Every number but 1 has a predecessor (assuming that we consider only the positive integers).

• Two different numbers cannot have the same successor.

• Axiom of recurrence: If a property is verified for number 1, and if the fact that it is verified for n implies that it is also verified for its successor n+1, then the property is true of any number n.

Did you figure out what they’re describing? They may sound complex, but all they actually do is formalize a chain of numbers, 1, 2, 3, 4, and so on, and say that the chain never ends. They also allow a very simple definition of addition and multiplication.
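To see how little machinery this takes, here is a sketch of Peano-style arithmetic in code: the only primitives are the number 1 and the successor operation, and addition and multiplication fall out by recursion. (Representing numbers as nested tuples is just one convenient choice for illustration.)

```python
# Numbers as iterated successors: ONE is the base, S(n) wraps n once more.
ONE = ()

def S(n):
    # The successor operation: n -> n + 1.
    return (n,)

def add(a, b):
    # a + 1 = S(a);  a + S(b) = S(a + b)
    if b == ONE:
        return S(a)
    return S(add(a, b[0]))

def mul(a, b):
    # a * 1 = a;  a * S(b) = a*b + a
    if b == ONE:
        return a
    return add(mul(a, b[0]), a)

def to_int(n):
    # Convert back to an ordinary integer for display.
    count = 1
    while n != ONE:
        n = n[0]
        count += 1
    return count

two   = S(ONE)
three = S(two)
print(to_int(add(two, three)))  # 2 + 3 -> 5
print(to_int(mul(two, three)))  # 2 * 3 -> 6
```

Notice that nothing in the code says what a number *is*; it only says how successors chain together, which is exactly the spirit of the axioms, and exactly why the “monsters” below can sneak in.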

But there’s a very big problem with Peano’s axioms. Even though they successfully describe the properties of the integers, they also allow “monster” objects, which we wouldn’t dare call numbers, but which satisfy all the axioms in every sense.

These are formally called “nonstandard models of arithmetic”. They’re really hard to describe, but the point is that the axioms admit models other than the standard natural numbers. Here’s a good question, though: couldn’t we just keep adding axioms to Peano’s list until they describe only the “true” numbers, and nothing else? This is the core of the paradox.

There’s a theorem in mathematical logic, first proved by the logician Thoralf Skolem, showing that “the addition of new axioms can never abolish nonstandard models.” Basically, however far mathematicians push axiomatic formalism, they will keep meeting new “monsters” that satisfy all the formal definitions without being identical to the natural numbers.

Maybe, as the philosopher Edmund Husserl suggested, numbers are so primitive that we can’t fully define them in more basic formal terms at all. This may be hard to believe, because the integers are so clear and so natural – they make so much sense in our heads that it seems strange that defining them rigorously should be such a big problem. But we keep failing to describe a system that captures ONLY the natural numbers; there are always some unintended sidekicks.

Suppose we tried a simplistic approach and said that we get the natural numbers by counting: just start with 1 and repeat the ‘successor’ operation as many times as you need. But what does “as many times as needed” mean? Formally, we can only allow the repetition to happen finitely many times, yet the set of numbers is infinite, so logic tells us we will inevitably meet other strange models, different from the one we initially had in mind.

Our brain just gets what numbers are; it doesn’t rely on axioms at all.

And here’s why this point is so important: it bears on how mathematics is taught. In the 1970s came a reform in the way mathematics was taught, influenced by the computer–brain analogy, in which children were viewed as “little information processing devices”. A group of mathematicians writing under the collective pseudonym “Bourbaki” believed that teachers should start right away by teaching students the most fundamental formal bases of mathematics. By that logic, to teach functions, for example, you would have to start with the most general, monstrous case.

Basically: why have students solve simple arithmetic problems when abstract group theory summarizes absolutely everything in a very concise and rigorous way? Let’s start with that instead; it’s much more rigorous and complete, right?

Well, the thing is, immediately bombarding the brain with a pile of abstract axioms is not very useful. A more reasonable strategy follows how our brain actually operates: first lean on the intuitive side (for children, that would be counting), then expand it into puzzles and little problems, and only afterwards, little by little, introduce symbolic notation and, eventually, axioms in their abstract form.

If you’re serious about learning and expanding your knowledge, using an intuitive approach, check out our catalogue of PDFs. Each of them comes with a YouTube video, and they are built on intuition, concrete examples, rigorous explanations, and (only then) exercises with detailed solutions.

Let us know what you think: is the brain just a logical machine? And we didn’t even mention AI in this video, because, well, it’s a ‘mind’ that works differently from both our own and from computers, but that’s an interesting thought as well.

Posted by DiBeo

2 responses to “Do Mathematicians Think Like Computers?”

  1. cbryant1000

    Terence Tao posted this video on YouTube, which I basically agree with. I believe that “intuition” is an important aspect of doing mathematics; part of which is being able to visualize a problem in your mind.

    Tao’s 3 steps are:

    1. Calculation (brute force practice, repetition) to ingrain the techniques to become second nature.

    2. Rigorous phase: painstakingly constructing rigorous proofs of the methods learned in Step 1.

    And finally, 3. Post-rigorous phase: letting go of the rigorous thinking, and letting your mind wander, so to speak. This is where intuition comes in. You may have a hunch about some idea; if you follow up and it turns into something meaningful or significant, then you know how to make it into a rigorous argument.


  2. Vivek

    Hello, I appreciate your efforts in math education and research. While learning Euclidean geometry I was stuck because of the self-evident axioms. Actually, the careful, precise, flawless foundations mathematicians are trying to build came to them after years of accumulating wisdom and organizing and re-organizing the whole body of knowledge. You realize something today, something a few days later, and something years later. That is fine; we can’t do anything if all of humanity doesn’t yet understand something. Some day one human or a group of humans gets some insight and shows others the way. All this is natural evolution in our understanding.

    But the problem is that these learned people then throw this fine-tuned knowledge, acquired through years of work, at a learner who has just picked up the field. This makes the learner doubt himself and, more worryingly, doubt the abilities of his gifted brain, which is much more capable than the poor guy assumes. He ends up completely dependent on the instructor to show him the way. This is how our education system works: by suppressing your natural intuition and your natural will to explore the unknown and ask amazing intuitive questions, it turns you into a dull, suppressed guy who becomes silent over the years, because his zeal to explore was killed by being told what is right and what is wrong every time.

    I would like to say: let humans be humans; don’t make them behave like machines until all that stuff is intuitively clear to them, and only then show them how they would instruct a dumb machine that works purely on logic, without emotions. Only then can we make sense of these axioms and the truths based on them.

    I also thought about why the self-evident truths are being proved, and why it causes so much trouble for a human to digest. The reason is that these axioms are self-evident to us because of our remarkable brains, which subconsciously compute for us and make our daily life and movements possible. Since the brain has already sensed the truth of those self-evident truths, it hates being taught them back.

    When teaching proof techniques, I think we should tell students that we are making an effort to convince a dumb machine that works on very elementary logic and has no other help (unlike AI, which has lots of data and learning algorithms built in).

    Also, the human brain has difficulty analyzing complex things, and that is why we humans make math: to help us penetrate them. Math can deeply increase the abilities of the human mind. A philosopher can be more intelligent than a mathematician, but he can’t solve a relatively simple math problem disguised in everyday language without the math tools humans have invented. If he is able to solve it, he has basically discovered the underlying math.

