This post discusses the widespread notion that the mind is some kind of computer, that the computer is able to represent knowledge, and that this knowledge can be about the world. As we shall see, this notion is quite silly, although people who are not physicists, mathematicians, or computer engineers (or who happen to hold an academic title without an understanding of these subjects) tend to profess it over and over. This post explores the multiple, successive levels at which this notion is flawed, and why fixing it has proven so hard so far. The post ends by commenting on whether it can ever be fixed.
Let’s Define Knowledge
Let’s begin by defining what we mean by knowledge. Don’t worry, I’m not about to get into some esoteric philosophical idea about what it is to know the truth. I will restrict myself to the much more pedestrian notion that the length of my bedroom is 15 feet and I want to express this through a computer. Even to represent such a simple idea in a computer involves many assumptions—choices—which are naively taken for granted by the “mind is a computer” proponent.
These choices are made in the process of designing a computer, but those who don’t understand how these choices have been made don’t see that the technologist has already chosen for them. That these assumptions could be different and are often different illustrates why they are choices. How we explain these choices remains unknown today, which is the primary reason why the reduction of the mind to computers has thus far been unsuccessful, despite the best minds working on it for the last 50 years.
From Ambiguous States to Definite States
Everyone knows that computers work by operating on digits, 1s and 0s. The physical world itself, however, is not 1s and 0s. At the level of complexity at which computers deal with the physical world, it is electrons, protons, and neutrons. Physicists will tell you that the protons and neutrons can further be broken down into quarks, but whether the quarks reduce to something even more fundamental is not yet known. Never mind; that is not of key importance here. What is important is that there are some fundamental entities of which the world is composed. We can make that assumption, and proceed with today's best understanding of these entities.
The world of subatomic particles is described by quantum theory, which states that an electron can be in more than one state at any time, and that its state is not known until a measurement is performed. This state must be known before it can be used to represent a digit, the fundamental unit of all subsequent operations.
For example, the spin of an electron can point upwards or downwards, and quantum theory stipulates that an electron can simultaneously be in the up state and the down state. Only when a measurement is performed do we truly know whether the state is up or down. Converting the ambiguous electron state into a "real" state (either up or down) therefore involves a choice in measurement, which resolves and fixes the ambiguity to either up or down. This choice presents a great unsolved problem in modern physics, called the Quantum Measurement Problem. How the ambiguous state of the electron becomes a definite state is unknown today, despite nearly 100 years of speculation on how this problem could be solved. And yet we know that this choice must be made if the physical state of the electron is to become a digit in a computer. If the state cannot be fixed, then the electron simultaneously represents both up and down. The electron's state must therefore be fixed through a choice, but we don't know how that occurs.
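The gap between an ambiguous state and a definite outcome can be caricatured in a few lines of code. This is a purely classical sketch, not quantum mechanics: the `measure` function and its `amplitude_up` parameter are hypothetical names of my own, and the random draw merely stands in for whatever it is that nature does during measurement.

```python
import random

def measure(amplitude_up: float) -> str:
    """Classically simulate measuring a spin that is 'both' up and down.

    Before this call, the state is only a probability (ambiguous).
    The call returns one definite outcome. How nature makes this
    selection is the unsolved measurement problem; random.random()
    here is a stand-in, not an explanation.
    """
    return "up" if random.random() < amplitude_up ** 2 else "down"

# Only after the "measurement" is there a definite state to digitize.
outcome = measure(amplitude_up=0.6)
```

The point of the sketch is only that some selection mechanism must exist before there is a definite state at all; the code assumes one (a random draw) without explaining it.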
From Definite States to Digits
Assuming that someday we solve the problem of how an electron state becomes definite, we would face a new problem: how to map the up or down state to a digit, i.e. 1 or 0. You could say that up represents 1 and down represents 0. You could also say the reverse. The up or down is a physical state, and 1 or 0 is a digit. To use a physical state to denote a digit involves a choice in which we uniformly convert the up state to 1 and the down state to 0, or vice versa. That convention is arbitrary, and therefore represents a choice. The computer manufacturer makes that choice for you, but there is no principled way of choosing, since either mapping works equally well.
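The arbitrariness of the state-to-digit convention is easy to make concrete. The two mapping tables below are illustrative names of my own; nothing in the physics prefers one over the other.

```python
# Two equally valid, mutually incompatible conventions for turning a
# definite spin state into a digit. The hardware designer simply picks one.
CONVENTION_A = {"up": 1, "down": 0}
CONVENTION_B = {"up": 0, "down": 1}

state = "up"            # the (already fixed) physical state
bit_a = CONVENTION_A[state]   # reads the same state as 1
bit_b = CONVENTION_B[state]   # reads the same state as 0
```

Both conventions encode exactly the same physics; the digit exists only relative to the chosen table.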
Many scientists currently suppose that choices appear only as a higher function of the brain (e.g. in making moral decisions about right and wrong) once the brain becomes highly complex, and that choice must therefore be explained through an understanding of brain complexity. This view is flawed because choice is needed for far simpler matters, such as fixing the physical state of an electron, followed by another choice fixing the digit based on that physical state. Note that the computer isn't functional at this stage because it isn't yet capable of running any program. In fact, we haven't yet obtained the ability to encode a program in the computer, because we are only trying to give it the ability to denote 1s and 0s, and we have already made two kinds of choices: that of the electron state, and that of using this state to denote a 1 or a 0. We therefore cannot suppose that choice appears only after a complex program is running, because a choice is needed even to represent 1s and 0s, the constituents of a program.
From Digits to Numbers
If you were shown some digits, e.g. 101, you would have no way of saying whether they represent "one hundred and one" or "five". The digits by themselves are not numbers, because we haven't yet chosen a basis for representing numbers as digits. If 101 is read in the decimal number system, it is "one hundred and one". If, however, 101 is read in the binary number system, it is "five". Similarly, the digits 10 denote "ten" in decimal and "sixteen" in hexadecimal.
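This base-dependence is visible in any programming language. In Python, for instance, the same digit string produces different numbers depending on which base you choose when parsing it:

```python
# The same digit string yields different numbers under different bases;
# nothing in the digits themselves dictates which reading is correct.
digits = "101"
as_binary = int(digits, 2)    # read in base 2  -> 5
as_decimal = int(digits, 10)  # read in base 10 -> 101
as_hex_ten = int("10", 16)    # "10" in base 16 -> 16
```

The parser must be told the base; the digits alone never determine the number.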
Therefore, even if you got your computer to represent digits (which already involves two choices: fixing the electron state, followed by fixing the up/down convention), the digits still don't indicate a number unless you now choose a basis of counting, e.g. binary, ternary, octal, decimal, or hexadecimal. The same digits mean a very different number depending on which basis of counting you choose. Since there is no preferred way of converting digits into numbers, you must again make a choice.
Modern computers count in binary, and their designers and programmers typically write those binary values in hexadecimal, while most humans count in decimal. Hexadecimal makes for a more convenient notation because 16 is a power of 2, so each hexadecimal digit corresponds to exactly four binary digits. Humans, however, learn to count with their fingers, and given that most of us have 10 fingers, a decimal counting system appears more convenient. There is quite clearly no preferred reason to count in decimal rather than binary or hexadecimal; either basis of counting is a choice. This choice must be made before the digits can be interpreted as numbers. And since we are starting from the ground up, we cannot rely on anything else to determine the basis.
From Numbers to Properties
We have by now made three kinds of choices: fixing the physical state, the digit, and the number. But a number by itself is meaningless unless we know what property it quantifies. For example, the number ten can denote height, weight, time, temperature, or money. How do we know what 10 refers to? To make the computer a bit more literate, we must give the number 10 a dimension, e.g. length, weight, time, or temperature.
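To make the point concrete, here is a minimal sketch in Python. The `Quantity` class is a hypothetical construct of my own: it pairs a bare number with a chosen dimension, and only the pair, not the number alone, behaves like a property.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A bare number plus the chosen interpretation that makes it a property."""
    value: float
    dimension: str  # e.g. "length" or "time" -- the extra choice

room_length = Quantity(15, "length")  # say, feet
nap_time = Quantity(15, "time")       # say, minutes

# The numbers are equal; the properties are not.
same_number = room_length.value == nap_time.value
same_property = room_length == nap_time
```

The dimension string is pure interpretation layered on top of the number; nothing in the value 15 carries it.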
The idea of a dimension follows from our ordinary ability to draw diagrams and charts in which the different axes denote properties. In one chart the horizontal axis can denote color, while in another it can denote time; sometimes the vertical axis denotes years, at other times distance. There is no fixed convention for how we interpret these axes. Therefore, a point on an axis is not by itself a property unless the axis is given the interpretation of denoting a property, e.g. time vs. money.
Mathematicians have often argued that a line is nothing other than a continuous succession of points, which is quite contrary to how we use such lines to denote knowledge: we treat the line itself as a property and a point on that line as a value. If we were reductionists and claimed that a line is nothing other than a collection of points, then we would also imply that temperature is nothing but numbers. And yet this view fails to explain why the same line can sometimes be called time and sometimes money. How can the same number sometimes be one property and at other times another?
There is no answer to this problem, except to say that numbers are not properties; numbers are values, but they are given an interpretation by placing them along a dimension, which is then taken to denote a property such as length or time. Since the same line could be given another interpretation, such interpretations are choices which have to be made before the electron state (which first became a digit, and then became a number) can now become a property. The choice of a property is unlike all the other choices we have seen thus far: we are now not interpreting the electron by itself, but interpreting the location at which it exists. In other words, we have now proceeded to interpret the space in which electrons exist in new ways.
Such interpretations are disallowed in modern physics. You cannot call one line in space money while another line denotes color. All lines in physics are just length. As a result, you also cannot give a property qualification to a number, which we obtained through three successive choices: resolving the ambiguous state into a fixed state, converting the state into a digit, and converting the digit into a number. All computers at present therefore go only so far as to denote numbers (after the above three choices have been made), but no further. Whether a number denotes length or time or color or money is entirely a programmer's choice.
And you cannot fix this problem unless you allow lines in space to represent arbitrary ideas such as color, time, or money. Since physics forbids you from doing that, you cannot have a physical system that ever gives a number a property. That some variable "X" denotes weight while "Y" denotes age is entirely a programmer's interpretation. A computer at best manipulates numbers, without understanding.
From Properties to References
Our minds have many successive layers of sophistication that can never be incorporated in a computer, where the ability to make choices terminates even before our ability to reinterpret space. For example, I can know that the length of 15 feet is a property of my room, and not a property of my brain, even though the idea of 15 feet exists in my brain. This is ordinarily called aboutness: the brain has a physical state, which becomes a digit via a choice, which becomes a number through another choice, which becomes a property after yet another choice, and which then becomes knowledge about the world after a further kind of choice.
Just as modern physics forbids us to reinterpret space to denote properties, it similarly disallows an object to represent another object. The table is a thing-in-itself, not a thing-about-other-things. All particles and waves are objects, and their physical states are the states of those objects, not descriptions of the states of other objects.
Anybody who claims that the physical states in the brain can become knowledge of the world should go back to high school, because that claim arises only if one wasn't paying attention in physics class. Whether or not you had a good teacher, you should know that physics disallows one object's state from referring to another object. When you measure the mass of an object to be 5 kilograms, it is the mass of that object, not a description of the mass of another object. All physical properties pertain to the objects being measured; there is no concept of reference.
The only way to explain how the brain knows the world is to begin by rejecting that fundamental idea in physics, just as the only way to explain how the brain knows a property is to begin by rejecting the stricture that space cannot be reinterpreted. In other words, my ability to know the world violates current physics. Clearly, if you design a computer based on current physics, and that physics forbids knowledge representation and reference to objects, your computer cannot achieve these either. The limitation, however, doesn't arise from computing theory; it arises from physics. The design of a sophisticated thinking machine is therefore not a project in engineering new kinds of computers; it is a problem of fundamental physics.
From References to Truths
Human thinking goes far beyond knowledge representation. Once we represent knowledge, we must judge something to be true or false. Not everything that exists is true, because there can be false ideas or mistaken theories. But if you take the world to be mathematical in just the sense that we do today, then these falsities must also be produced according to mathematical laws.
Assume that there is some true mathematical law of nature X, but you think that the law is Y. According to the current picture in science, Y must itself be produced by X. In other words, if you begin from the premise X, you must arrive at a conclusion Y, which may even contradict X. In short, a world governed by mathematics must allow logical contradictions if there can be even a single wrong idea.
This problem arises when we imagine that the judgment of true and false is not a choice but always a logical conclusion. The moment you make that mistake, you preclude the ability to explain the human mind through logic and mathematics alone. Our minds can make mistakes, but if the world is logical and mathematical, then it must be free of any contradictions. In other words, even our false ideas must be logically derived from the true ideas; in summary, there could never be a false idea, because the existence of a single such idea would make the universe inconsistent and hence non-existent.
This problem can only be addressed in one way—which is to say that the judgment of true or false is a choice. This choice in turn is made by selecting some axioms or beliefs. Given a set of axioms, X, Y, and Z, some propositions can be theorems of these axioms. Given a different set of axioms A, B, and C, a different set of propositions will be theorems. The truth of a proposition (i.e. whether it is a theorem or not) depends on the choice of axioms. Logical reasoning is itself not the decider of truth; truth is decided by a choice.
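This axiom-relativity can be sketched as a toy program. The propositions X, Y, P, Q and the `theorems` function below are illustrative inventions of my own: the code simply closes a chosen axiom set under if-then rules, showing that what counts as a "theorem" changes when the axioms change.

```python
def theorems(axioms: set, rules: list) -> set:
    """Close a set of axioms under simple if-then rules (modus ponens).

    A proposition is a 'theorem' only relative to the chosen axioms:
    change the axioms and the set of truths changes with them.
    """
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [("X", "P"), ("Y", "Q")]
under_x = theorems({"X"}, rules)  # P is a theorem here
under_y = theorems({"Y"}, rules)  # ...but not here
```

The derivation machinery is identical in both runs; only the chosen starting axioms differ, and with them the resulting "truths".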
One common mistake people often make is to think that our beliefs or axioms are simply tested against facts, and that science tests these theories to arrive at the correct set of beliefs. That is only part of the story. The other part is that we choose how to describe the facts we observe. For example, current physics assumes that lines in space cannot be interpreted as properties, and that objects in space do not refer. Hence we describe our brain as an object that has physical states, but not meanings that refer to the world.
Clearly, these assumptions are inconsistent with the facts, and yet correcting them would involve rejecting some fundamental ideas in modern physics. Attachment to those physical ideas, however, leads us to reject the facts themselves. Under that attachment to false beliefs, we reject the observation that choices are so fundamental to everything in nature that an ambiguous physical state could not become a definite state, a definite state a digit, a digit a number, a number a property, a property a reference, or a reference a truth, unless choices existed before any physical state, digit, number, property, reference, or truth existed.
That choices are tested by facts is not incorrect. But it is also a fact that we choose our axioms even before we test them. Indeed, if the modern conundrums in science indicate anything, it is this: we have made a choice about what nature is, and we are trying to force-fit nature into that choice, whether or not it works. When the choice works, we claim it to be true. When it fails, we still insist that the facts (such as the ability to make a choice in converting a physical state into a digit) must be wrong.
From Truth to Rightness and Beyond
The human mind goes far beyond truth determination. As we have seen, to decide truth, we must decide some axioms. We can test the axioms against facts, but which facts must we test them against? Should we test particle physics in an accelerator, or should we test the implications of particle physics in biology, neuroscience, or cosmology? What about biology and neuroscience? Can we test their implications in a particle accelerator?
Scientists will insist that each theory is created relative to a particular problem. We choose some axioms to solve a particular problem, and the validation of those axioms depends only on that problem. In other words, the truth of axioms can only be tested in relation to a certain chosen problem. The axioms are not universally true, although they can be sufficiently useful for a chosen set of problems. There you go again: axioms can be validated in science only once you have chosen some problem to solve.
Every scientist is therefore free to formulate some axioms or theories, because they are only trying to solve a chosen problem. The truth of your axioms is again relative to your choices. You could say that the Earth is flat, and it would work for all practical purposes if you were only traveling within a city. The assumption fails only when you broaden the problem. Similarly, you can say that the world is governed by classical physics if you are only trying to design a bicycle. That assumption fails only when you broaden the problem to include radiation and cosmology. If axioms are decided by problems, then how are problems decided? Why do you think that a problem is worth solving?
That question in turn leads to more choices, e.g. into the question of what is right and wrong action, and what is good or bad in this world. I will not elaborate on them here, because I believe you have already gotten the drift of the problem.
Is the Mind Really a Computer?
If you have followed the post closely (and I acknowledge that it may not have been easy), you will see that there is a single problem we are dealing with at many levels in trying to bridge the divide between the mind and the computer. This is the problem of choice. It appears in converting an ambiguous state into a definite state. It appears in converting a definite state into a digit. It appears in converting a digit into a number. It appears in converting a number into a property. It appears in converting a property into a reference. It appears in converting a reference into a truth. It appears in converting truths into goals. And it appears in converting goals into what we consider morally good.
Anyone who thinks that choices are "higher" functions of the brain doesn't understand science, because choices are involved at every stage of material complexity. The issue is not merely whether we seem to have free will: many people believe that there is no free will, only an illusion of it. They should read this post carefully to understand how choice is essential at every level of fundamental physics, mathematics, computing, and logic. If they rely on mathematics, computing, logic, or physics to explain away their free will, they would be better off first understanding what they take to explain their own existence before they assume (i.e. choose to believe) that they have explained free will.