The Church-Turing Thesis as an Immature Form of the Zuse-Fredkin Thesis (More Arguments in Support of the "Universe as a Cellular Automaton" Idea)*

Plamen Petrov

Abstract: In [Pet02a1] we presented a strong argument in support of the "Universe as a computer" idea. In the current work, we continue our exposition with more arguments that reveal why our Universe is not only "some kind of computer", but an instance of a concrete computational model known as a "cellular automaton".

Last revised: September 6, 2003, 1:35 AM

Contents

1. Introduction
2. The golden thirties
3. Some major issues with regard to the Zuse-Fredkin thesis
4. The Zuse-Fredkin thesis explained by the Church-Turing thesis
5. Space-time continuum or space-time discretuum?
6. Cellular Automata and EPR
7. The Fredkin-Petrov ("3D UCA") thesis
8. Parallelism vs. serialism; society of minds vs. egoism of a single mind (An AI point of view)
9. Quantum mechanics is merely... mechanics!
10. Zeno's paradoxes resolved
11. Conclusion
Acknowledgments
References
Footnotes

1. Introduction

This paper is essentially an addition to, and continuation of, the main idea described previously in [Pet02a1].

In fact, it will be best if [Pet02a1], the current work, and the forthcoming [Pet02a3] are read in rapid succession. These three works may be considered something like a "triad of papers" that essentially describes a single topic, but with gradually increasing depth of understanding.

While we are certain that the first article of the "triad" [Pet02a1] will be easily accepted by most readers as a thought-provoking argument in support of the "Universe as a computer" idea, it is our anticipation that many may have substantial difficulties grasping the much deeper and more diverse ideas behind the present work.

As for [Pet02a3], our only hope is that at least some readers will be able to understand it. In fact, what we are going to present in the last article will be so far outside any analogy known from any other science (or simply "nondescript") that it may require a complete overhaul of the reader's thinking in order to comprehend it, even partially.

To summarize our basic plan very briefly, we could say that the main idea of [Pet02a1] is to convince the reader that the Universe may indeed be some kind of universal computational device, or, at the very least, that there may be some advantage in looking at the Universe as if it were a computer.

The idea of the present work is to show that our Universe is not just "some kind of computational machine", but an instance of a concrete model of computation known as a "cellular automaton" (CA). The idea of the Universe as a CA is known as the "Zuse-Fredkin thesis".

And, finally, believe it or not, the idea of [Pet02a3] is to show that the Zuse-Fredkin thesis alone, in some sense, is a self-sufficient statement, i.e. all we need to know in order to derive (mathematically!) the exact form of the CA that is supposed to be isomorphic to our Universe is the Zuse-Fredkin thesis itself, and nothing else!1

But, for the moment, let us get back to our historical notes from [Pet02a1] and the remarkable period of time when the overall idea behind the so-called "Church-Turing thesis" emerged.

It is hard to believe, but most of these incredibly important notions and results appeared in just under a decade, when a number of different researchers came to essentially one and the same idea independently of each other. And that golden decade started reshaping the very way we look at things in science as a whole, ranging from philosophy and quantum physics to mathematics and theoretical computer science (CS).

2. The golden thirties

"It is not once nor twice but times without number that the same ideas make their appearance in the world."
(Aristotle, "On the Heavens")
"We so readily assume that discovering, like seeing or touching, should be unequivocally attributable to an individual and to a moment in time. But the latter attribution is always impossible, and the former often is as well. [...] discovering [...] involves recognizing both that something is and what it is."
(Thomas S. Kuhn, "The Structure of Scientific Revolutions", 2nd Ed., 1970)

The 1930s must have been a remarkable time...

Kazimierz Kuratowski (1896-1980)
In 1930, the Polish mathematician Kazimierz Kuratowski proved his theorem on planar graphs [Kur30].

The theorem provides a criterion for deciding whether a graph can be drawn in the plane, i.e. in two dimensions, without edges crossing, or whether a third dimension is required (see Fig. 1). Even from today's perspective this remains an important result that lies at the very foundations of modern graph theory.

(What is less known, however, is that this theorem had also been proven earlier by Pontryagin (in unpublished work from around 1927), and independently by Frink and Smith (1930) — for a detailed history of the theorem see [KQS85].)

Figure 1: Kuratowski's theorem

Figure 1: According to Kuratowski's theorem, every nonplanar graph is a supergraph of an expansion of the so-called complete graph K5 or the utility graph K3,3. The K5 graph consists of five vertices, each connected to all the others; K3,3 has two groups of 3 vertices each, where every vertex of one group is connected to every vertex of the other group.

This theorem gives a necessary and sufficient condition that lets us distinguish graphs that can be drawn in 2D (planar graphs) from those that require 3D.

The connection between what is now known as "Kuratowski's theorem" and cellular automata will be studied later in Section 7.

Kurt Gödel (1906-1978)
In 1931, the Austrian mathematician Kurt Gödel published his work [Goe31], which is considered to be one of the greatest and most surprising results in the whole history of mathematics.

This work is best known for two of its theorems, now known as "Gödel's incompleteness theorems". (Gödel's results, however, had also been achieved — but not published — by Zermelo and Post.)

Somewhat simplified, the first incompleteness theorem states that in any sufficiently complex axiomatic system that allows one to do basic arithmetic, one can construct a statement that:

(1) can be neither proven nor disproven within that system;

or

(2) can be both proven and disproven within that system.

In the first case, we call the axiomatic system incomplete; in the second case, we call it inconsistent. A statement of the above type is called undecidable in the system.

The second incompleteness theorem states that a sufficiently strong consistent system cannot prove its own consistency. The importance of Gödel's incompleteness theorems with regard to the Zuse-Fredkin thesis will be studied later in [Pet02i].

In the years 1932-1936, a number of influential papers were published2:

A growing interest in defining the intuitive notion of "mathematical function" led to the appearance of notions like "lambda-definable functions", introduced by Alonzo Church [Chu32, Chu36b] and Stephen Kleene [Kle35], and "recursive functions", studied by Jacques Herbrand [Her32] and Kurt Gödel [Goe34].

These two formalisms describe the same set of functions, as was shown in the case of functions of positive integers by Church [Chu36a] and Kleene [Kle36].

Alan Turing (1912-1954)
In his paper [Tur36] Alan Turing tried to capture the same notion formally with the introduction of Turing machines (TMs).

TMs are abstract models of computation, defined by Turing to give a mathematically precise definition of algorithm or "mechanical procedure".

A TM can be realized in the form of a mechanical machine that consists of (see Fig. 2):

  1. A tape that is assumed to be divided into cells, each containing a symbol from some finite alphabet. The alphabet consists of a special symbol, known as the "blank symbol" (often denoted as '0'), and one or more other symbols. The tape is assumed to be arbitrarily extendible, i.e. the Turing machine is always supplied with as much tape as it needs for its computation. Initially, the tape contains a finite number of non-blank symbols; the remaining (extendible) part of the tape that has not been written before is assumed to be filled with the blank symbol.
  2. A head that can read and write symbols on the tape and (possibly) move one step left or right.
  3. A state register (known also as "internal state") that stores the state of the Turing machine. The number of different states is finite and there is one special start state with which the state register is initialized.
  4. An action table that tells the machine what symbol to write, how to move the head (one step left, one step right, or stay in place) and what its new state will be, given the symbol it has just read on the tape and the state it is currently in. If there is no entry in the table for the current combination of symbol and state then the machine will halt.

Note that every part of the machine is finite, but there is a potentially unlimited amount of tape that gives it an unbounded amount of storage space ("external memory").


Figure 2: A theoretical model of computation, described by Alan Turing in 1936, now known as a "Turing machine" (TM).

Typically, a TM consists of an infinite tape that can store symbols from some finite alphabet, a head that can read/write symbols on the tape and (possibly) move left or right to the next symbol on the tape, a state register with finite memory, and a set of instructions (usually presented in the form of a table) that consists of commands of the type:

“if the current symbol read from the tape is '0' and the internal state is '3', then write '1' on the tape, move 'left', and change the internal state to '4'”, etc.

If no instruction exists for a particular combination of symbol and internal state, then the TM halts its work, and what is written on the tape is the result of its work.

Another important notion is the so-called "universal Turing machine" (UTM). A UTM is able to simulate ("emulate") the work of any other TM, and, as Turing showed, such UTMs are indeed possible.
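To make the action-table mechanics concrete, here is a minimal sketch of a TM stepping loop (our own illustrative Python, not Turing's original formalism; the sample action table, a unary incrementer, is likewise a hypothetical example):

```python
# A minimal Turing machine simulator (an illustrative sketch).
# The tape is a dict from cell index to symbol; unwritten cells read as the blank '0'.

def run_tm(table, tape, state="start", max_steps=1000):
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, "0")               # blank symbol is '0'
        if (state, symbol) not in table:           # no entry: the machine halts
            break
        write, move, state = table[(state, symbol)]
        tape[head] = write                         # write the new symbol
        head += {"L": -1, "R": +1, "S": 0}[move]   # move left, right, or stay
    return tape

# Example action table: append one '1' to a block of '1's (unary increment).
# Format: (state, read_symbol) -> (symbol_to_write, head_move, new_state)
table = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing '1's
    ("start", "0"): ("1", "S", "done"),   # first blank: write '1' and halt
}

tape = {0: "1", 1: "1", 2: "1"}  # the number 3 in unary
result = run_tm(table, tape)
print("".join(result[i] for i in sorted(result)))  # -> 1111
```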

Sometime in 1936-37, the so-called Church-Turing thesis emerged and was almost instantly recognized to be the rigorous mathematical description of the intuitive notion of "mathematical function", "algorithm", or "computation".

In its most common form, the Church-Turing thesis states that every effective computation or algorithm can be carried out by a UTM. The thesis might be rephrased as saying that the notion of an effective or mechanical method in logic and mathematics is captured by Turing machines.

It is interesting to mention that Alonzo Church had published his work [Chu36b] a few months earlier than [Tur36], but he used the notions of recursive and lambda-definable functions to formally describe effective computability. When hearing of Church's proposal, Turing was quickly able to show that his Turing machines in fact describe the same set of functions [Tur36].

Since that time many other formalisms for describing effective computability have been proposed such as register machines, Emil Post's systems, combinatory definability, Markov algorithms [Mar54], etc. Because all these systems have been shown to describe essentially the same set of functions, it is now generally assumed that the Church-Turing thesis is correct.

Along this line of scientific development, yet another interesting series of events took its roots in the same period of time...

Edward Fredkin (1934-)
In 1934, Edward Fredkin, considered by the author of these lines to be one of the most original thinkers of all time, was born. Later (in the sixties and seventies) he had the unique chance to be among the first modern computer programmers and hackers.

The opportunity to be among the first software and hardware developers led Fredkin to a quite non-traditional style of thinking that distinguishes him considerably from all accepted academic norms for reasoning and behavior in science as a whole. Despite this, Fredkin must have been a bright enough personality to be invited to become a professor at MIT in the sixties, even without holding a degree.

Fredkin had the opportunity to share some of his numerous ideas with Richard Feynman, a Nobel-prize winner in physics and one of the founders of modern quantum mechanics (QM). The life and ideas of Ed Fredkin are described in a fascinating way by Robert Wright in [Wri88a, Wri88b].

Among many of the original ideas of Fredkin, however, there is a special one that makes him peculiarly popular and, depending on the source, greatly adored or reviled:

Fredkin is the originator of the incredible idea that our Universe is a cellular automaton (CA).

Cellular automata are abstract mathematical models of computation that, unlike TMs, operate in full parallelism. In essence, a CA can be described as an infinite set of uniform parallel processors, arranged uniformly throughout space.

Every such "processor" (called a "cell") is in fact a Finite State Machine (FSM) that operates by exchanging "signals" with its "near-by" processors using a finite set of internal memory states and a uniform deterministic function called a "rule". It is important also to add that all updates of all FSMs/cells occur in an "instant" of time called "time step" or "tick".

According to his own reminiscences, Fredkin came to the conclusion that our Universe is actually a CA sometime around 1960. However, this remained unpublished until 1971, when Martin Gardner announced it on the pages of Scientific American [Gar71] as part of a series of publications devoted to the Game of Life.

The Game of Life itself, a chance idea by the British mathematician John Conway, is nothing but an example of a cellular automaton on an infinite two dimensional square lattice (see Fig. 3). And although the Theory of Cellular Automata now has a long history of development, this "game" probably still remains one of the best known and well-studied CAs.


Figure 3: The Game of Life, invented by John Conway in 1970. The game is played on a 2-dimensional grid. Each square, or "cell", can be either "on" (black) or "off" (white). The rules of Life are delightfully simple:
  1. If a cell is off and has exactly 3 live neighbors (out of 8), it becomes alive at the next clock tick;
  2. If a cell is on and has 2 or 3 live neighbors, it survives;
  3. Otherwise, it dies (or stays off) at the next clock tick.
Life is complex enough to give birth to many strange and beautiful structures. The three common patterns shown above are called "block" (a), "blinker" (b), and "glider" (c); they are examples of still, periodic, and moving configurations, respectively.
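For readers who wish to experiment, a minimal sketch of one Life generation (our own illustrative Python; live cells are stored as a set of (x, y) coordinates on an unbounded grid, and the final assertion checks the well-known fact that the glider of Fig. 3c reappears shifted one cell diagonally every 4 ticks):

```python
from collections import Counter

def life_step(live):
    """One tick of Conway's Game of Life on an unbounded grid.
    `live` is a set of (x, y) coordinates of 'on' cells."""
    # Count, for every cell, how many live neighbors it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth: exactly 3 neighbors; survival: 2 or 3 neighbors.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the glider of Fig. 3c
g = glider
for _ in range(4):          # after 4 ticks the glider reappears...
    g = life_step(g)
assert g == {(x + 1, y + 1) for (x, y) in glider}  # ...shifted one cell diagonally
```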

Konrad Zuse (1910-1995)
As often happens in science, the "Universe as a CA" idea was independently found and published by Konrad Zuse in the late sixties [Zus67, Zus69].

Zuse was a German engineer who is now recognized to be the inventor of the first programmable computer and of the first high-level programming language. It is curious to note that Konrad Zuse built a working model of a computer (the so-called Z1) at practically the same time that Alan Turing came up with the theoretical idea of his famous UTM. In 1936 Zuse applied for a patent [Zus36], which he did not get. This presented him with a dilemma: on the one hand, the fight for the patent would be very expensive; on the other hand, he wanted to build a computer. Zuse decided to build the machines.

Karl Popper (1902-1994)
In 1934, Karl Popper published his first major work [Pop34]. Popper was considered by many to be the premier philosopher of science of the twentieth century. In his book, he criticized the then-popular school of logical positivism and favored an approach to science called falsificationism, based on criticism rather than verification. This work gained much attention.

His scientific work was influenced by his study of Albert Einstein's theory of relativity, which he used to exemplify the difference between a truly scientific theory and pseudo-scientific theories. In Popper's view, the difference was that theories such as Einstein's could be readily falsified by simple experiments. This criterion of falsifiability, and the practice of using experiments not to verify but to criticize scientific theories, are the cornerstones of true science in his view, in contrast to the common belief at the time (first proposed by Francis Bacon) that science was based on inductive reasoning, and experimental verification.

Albert Einstein (1879-1955)
In 1935, at approximately the same time that Alan Turing started dreaming about his "universal computing machine" (in its essence — a completely mechanical device), Albert Einstein, together with his colleagues Boris Podolsky and Nathan Rosen, developed a thought experiment to demonstrate a lack of completeness in quantum mechanics [EPR35]. This so-called "EPR paradox" has led to much subsequent, and still ongoing, research.

It would not be an exaggeration to say that the EPR paradox was (and still is) considered by many leading physicists to be the major problem with regard to acceptance of the Zuse-Fredkin thesis; in fact, it was the main reason the idea of "The Universe as a CA" was not taken seriously. Instead, later (in the nineties) so-called "quantum computation" was vigorously advanced and developed.

The period between 1930 and 1937 was a remarkable time, indeed... A golden period of time that initiated a major thought revolution and traced the scientific development of the rest of the twentieth century.

3. Some major issues with regard to the Zuse-Fredkin thesis

Unlike the Church-Turing thesis, which was readily accepted in all major scientific circles in the mid-thirties (especially among mathematicians and the emerging branch of theoretical computer scientists), the Zuse-Fredkin thesis has been less fortunate [Pet02a1].

In fact, even now the Zuse-Fredkin thesis remains largely unknown and unaccepted.

In our work [Pet02a1] we have advanced a simple, but very suggestive argument in support of the Zuse-Fredkin thesis.

A careful reader of [Pet02a1], however, will note that our argument does not show in any way that the Universe is exactly a cellular automaton; it merely argues that our Universe is some kind of computational device.

In the present work we shall remedy that omission by studying all major counter-arguments that have been proposed against the Zuse-Fredkin thesis and, in defense of the thesis, giving a concrete answer to each of them.

As we shall see, surprisingly, while the objections to the thesis vary in nature and origin, all of them can be divided into two major categories, according to the explanations that can be given in response.

All explanations that can be given in defense of the "Universe as a CA" idea fall into two large groups:

(A) Explanations using the Church-Turing thesis in its classical form;

(B) Explanations using the new idea that the Church-Turing thesis is actually an immature form of the Zuse-Fredkin thesis, as proposed by the present paper.

It is somewhat unexpected that most of the well-known counter-arguments against the Zuse-Fredkin thesis actually fall into the first category (A). For example, the often restated and endlessly repeated arguments that our Universe is actually fundamentally continuous and probabilistic in nature (rather than discrete and deterministic) turn out to be easily answerable using the Church-Turing thesis alone.

Notions like "(actual) infinity", "continuity", "probability", various kinds of "dualisms" (take the "wave-particle duality", for example), and also various "multisms" and multi-path theories (think of the so-called MWI, the "Many Worlds Interpretation", now popular in quantum physics) has been taken for granted by contemporary physicists for decades. But all these turn out to be just "mind monsters" easily dispersible into pieces using the Church-Turing thesis alone!

A special category of problems, usually raised in mathematical circles, involves the second incompleteness theorem of Gödel. As we shall see later in this work, and in more detail in [Pet02i], all such counter-arguments against the Zuse-Fredkin thesis (and against any well-definable TOE) can likewise be answered within the boundaries of the Church-Turing thesis.

Among the counter-arguments of the second type (those that cannot be answered using the C-T thesis), special consideration should be given to the objections related to EPR, Bell's theorem [Bel64], and quantum computation; this issue is actually the main reason why the Zuse-Fredkin thesis is currently not widely accepted among physicists. In fact, no other counter-argument should be seen as a serious threat to the idea that the Universe is a CA.

While a serious study of this problem goes beyond the limited boundaries of the present article, we shall still outline here our basic idea, presented in much greater detail in [Pet02m].

In order to resolve this problem, we also have to tackle some other problems of general CA theory, introduced here and presented later as the "emulation issue" [Pet02g] and the "synchronization issue" [Pet02f].

For example, it is well known that some "intuitive" tasks suggested by physics turn out to be hard tasks for cellular automata. Of all these probably the most famous is the task of showing circular/spherical wave propagation on a 2D/3D CA, respectively (see Fig. 4 and Fig. 5).


Figure 4: Demonstration of circular wave propagation in a 2D CA.

The initial configuration consists of 7 adjacent cells. The rule used is B3/S01234, i.e. a new cell is born when an empty cell has exactly 3 live neighbors, and a cell survives to the next generation if it has 0, 1, 2, 3, or 4 live neighbors.

This rule was discovered by Norman Packard and Stephen Wolfram [Wol02].
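In the same set-based representation as the Life sketch of Fig. 3, the wave rule differs from Life only in its birth/survival sets. A generalized stepper (again our own illustrative code; the 7-cell seed below is one possible choice, not necessarily the exact configuration used in Fig. 4) makes the whole B/S family explicit:

```python
from collections import Counter

def ca_step(live, birth={3}, survive={2, 3}):
    """One tick of an outer-totalistic 2D CA in B/S notation.
    The defaults give Conway's Life (B3/S23)."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    born = {c for c, n in counts.items() if n in birth and c not in live}
    kept = {c for c in live if counts.get(c, 0) in survive}  # count 0 if isolated
    return born | kept

# The circular-wave rule of Fig. 4 is B3/S01234.
wave = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)}  # a 7-cell seed
for _ in range(50):
    wave = ca_step(wave, birth={3}, survive={0, 1, 2, 3, 4})
print(len(wave))  # the live set grows outward, approximating a circular front
```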


Figure 5: Demonstration of spherical wave propagation in a 3D CA.

The rule was discovered by Joel Dobrzelewski.

The difficulty of solving this task suggests that there is some subtle issue about the notion of computation that has not been "captured" by classical computation on a UTM. And, therefore, UTMs should be replaced with a more developed computational model.

This model is computation on a three-dimensional universal cellular automaton (3D UCA). As we shall see, computation on a 3D UCA has certain advantages over the computation carried out on a sequential machine like a UTM.

It is well known that UTMs can be regarded as 1D UCAs [Gut96]. But it turns out that no 1D UCA can solve the problem of circular wave propagation as speedily as any 2D UCA can.

Likewise, it also turns out that spherical wave propagation can be emulated at full speed only on 3D UCAs, and not on any universal CA of lower dimension (1D or 2D UCAs).

This is what the so-called "emulation issue" [Pet02g] is all about; it leads to a new idea to replace the classical "UTM thesis" (i.e. the Church-Turing thesis) with a "3D UCA thesis" ("Fredkin-Petrov thesis"), to be presented here for the first time — see Section 7.

The problem of constructing a physically realizable CA system that runs in full parallelism will be presented as the "synchronization issue". It turns out that CAs — regardless of their dimensionality — are not only abstract models of computation, but also physical devices that can be constructed in principle using mechanical parts only [Pet02f].

We shall conclude our exposition by presenting a simple, yet very impressive explanation of the classical paradox of Zeno. This explanation may be regarded as a highly suggestive argument in favor of the main idea that we actually live in a Universe that is ultimately discrete and deterministic in nature.

4. The Zuse-Fredkin thesis explained by the Church-Turing thesis

"All creations of the human mind are:
(1) finite in precision and resolution
(2) local and deterministic in interaction"
(Joel Dobrzelewski)

Any scientific knowledge (or "theory") can be represented in the form of a scientific "paper" that contains a finite number of symbols from some finite alphabet. Regardless of the alphabet used and the overall abstract meaning of the paper, if it provides us with mathematical statements ("formulas") and some finite mathematical apparatus behind them, then it is perfectly "discretizable" and, hence, representable on a UTM.

The absence of continuity, probability, the so-called "actual infinity", or doubtful mind-provoking notions like "dualism" or even "multism" (think about "particle-wave duality", "Schrödinger's cat that can be both dead and alive" [Sch35], Everett's "Many Worlds Interpretation", and other conceptual twists extensively used in modern quantum mechanics) has nothing to do with any experiments that can "prove" or "disprove" them.

And that is one of the major misunderstandings in modern science.

The "absence" of continuity, probability, actual infinity, or dualisms and multisms of any kind in our real world, however, has something to do with the Church-Turing thesis; in fact, it is a direct corollary of it.

To be more precise, what we mean here by "absence" is not that these notions are "falsifiable" or "missing" in Reality, but merely that we are aware of a more minimalistic model (namely, a UTM) that is able to represent (and compute) all mathematical derivations of these notions, without exception.

For example, every continuous notion in mathematics is represented by a variable (i.e. some finite symbol from a finite alphabet of symbols). The same is true for probability or any kind of dualism/multism as well.

Regardless of how complex these notions (and the theories associated with them) may look in our imagination, the final (and simple) truth is that we cannot even begin to imagine what a notion or theory looks like if it is not(?) representable on a UTM.

The crucial argument in favor of discreteness and determinism, proposed by the Zuse-Fredkin thesis, is Occam's razor principle.

Definition (Occam's razor): If we have two (or more) competing theories that essentially explain one and the same (physical) phenomenon, we must choose to use the simplest theory possible.

Occam's razor, although frequently cited by physicists, may prove to be one of the most undervalued concepts in modern physics.

So, in this sense, what we mean here by "absence" is actually yet another way to say that we have another, more minimalistic representation of all these notions that allows us to look at the Universe as if it is fundamentally discrete and deterministic in nature.

Otherwise stated:

UTMs (and universal computers, in general) help us to "map" all other "vague" notions into a set of finite, well-definable notions like "bit" and "program instructions" (i.e. finite set of "commands", or cellular automata "rules" — essentially, a computer science equivalent of the mathematical notion of function over a finite set of elements that returns a finite set of bits).

For better understanding, the following mapping table could be used:

Table 1: Translating notions from "continuous" mathematics/physics into their equivalents from discrete mathematics and digital physics

"Continuous" mathematics and physics Discrete mathematics and digital physics
actual infinity potential infinity (in a form of infinitely extendible 'external memory', e.g. 'tape', if we talk about UTM)
continuity finitely describable algorithm (i.e. "computer program") that is able to run indefinitely by computing any "real number" to any specified precision
(true) probability and randomness pseudo-probability and pseudo-randomness (generated algorithmically by a pseudo-random number generator)
dualism/multism algorithmic "threads" that run in parallelism
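As a concrete illustration of the "continuity" and "probability" rows of Table 1, here is a short sketch (our own illustrative code): a finitely describable program that computes the "real number" √2 to any requested precision using integer arithmetic only, and a classic linear congruential generator whose deterministic state nevertheless produces "random-looking" output:

```python
def sqrt2_digits(precision):
    """Compute floor(sqrt(2) * 10**precision) with integer arithmetic only:
    a finite program standing in for the 'continuous' number sqrt(2)."""
    target = 2 * 10 ** (2 * precision)
    lo, hi = 0, 2 * 10 ** precision
    while lo < hi:                      # integer binary search
        mid = (lo + hi + 1) // 2
        if mid * mid <= target:
            lo = mid
        else:
            hi = mid - 1
    return lo

print(sqrt2_digits(10))  # -> 14142135623, i.e. sqrt(2) = 1.4142135623...

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A classic linear congruential generator: fully deterministic state,
    yet statistically 'random-looking' output (pseudo-randomness)."""
    while True:
        seed = (a * seed + c) % m
        yield seed / m                  # a 'pseudo-probability' in [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])  # same seed -> same 'random' run
```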

Please note now that concepts like "discreteness" (i.e. everything "still" as a combination of bits) and "determinism" (everything "in motion" as an operation over a finite set of elementary "still objects", i.e. bits) cannot be reduced further:

Indeed, there is no way one can minimize "further" the notions of "bit" and "instruction", especially if the "instruction" is a single one, just like in the case of a cellular automaton rule!

The C-T thesis causes the "amorphous" concepts familiar from "continuous" mathematics and physics to finally take shape as crystal-clear concepts from discrete mathematics and CS, namely: discreteness, determinism, and finiteness (to be exact: "extendible finiteness", i.e. "potential infinity").

For example, various kinds of probabilistic or non-deterministic3 TMs can be built, but they can all be shown to have computational power that does not exceed that of classical (deterministic) UTMs; the mathematical proof of this fact can be found in almost any CS textbook.

Likewise, the computational clarity of the CA model certainly outperforms the ambiguity of the so-called "Heisenberg interpretation" in physics and its modern successors like MWI, for example4. Moreover, any kind of MWI-like theory can certainly be shown to be equivalent to various multi-path computational models, and these in turn can be shown to be fully implementable on a UCA or UTM.

5. Space-time continuum or space-time discretuum?

John Conway, the inventor of the famous "Game of Life", recently raised the interesting question of why (if the Universe is indeed a CA) we are not able to "feel" its granularity/discreteness [CFM02].

The answer to this question, as well as to all questions regarding the "continuity of space/time" used in contemporary physics, is quite simple, and it falls into the first category (A).

Indeed, if the C-T thesis is a correct assumption, then all mathematical functions/theories, including any possible TOE that is or will ever be proposed, can be represented (and computed) on a UTM.

From a certain point of view, that is a truly remarkable fact: it turns out that every TOE (or proposal for a TOE) taken from contemporary physics, with its seemingly continuous apparatus, can be "transformed" (or "mapped", "encoded") into the obviously one-dimensional discrete space of some "tape" and computed using a read/write "head" that operates in discrete time intervals, moving left or right, step by step, on a UTM.

In other words, it turns out that the whole dispute regarding the "true nature" of space and time (i.e. whether space/time is fundamentally discrete or continuous) is largely "relativistic": surprisingly, everything depends on the point of view!

Please note now that our physical "feeling" that the Universe is continuous in nature is nothing more than an educational issue: we have been taught continuous mathematics and physics at school; that is why we started to believe the Universe is fundamentally continuous!5

On the other hand, there is a serious reason why all theories concerning the discrete/deterministic nature of the Universe should be preferred over all other theories suggesting the opposite — Occam's razor principle:

If space is simpler(?) than a discrete grid and time can be presented as something simpler(?) than discrete time steps, then, by all means, please tell us what your theory looks like!

Obviously, the process of "simplification" cannot be carried any further.

Still, in order to understand why space is not a 1D but a 3D grid, one has to use arguments of form (B). Namely, one has to accept that the Universe is a 3D UCA rather than a 1D UTM, as the Church-Turing thesis seemingly suggests (see Section 7).

Actually, the general idea behind the Church-Turing thesis was never explicitly based on the idea of a 1D space ("tape") and a "single observer" ("head") — the overall idea of the thesis has to be studied in a much wider context (Section 8).

The idea was to show a concrete example of a mechanically constructible machine that is able to capture the intuitive concept of universality. In this sense, there is nothing special about a UTM: it is merely one of many possible ways to build a mechanical machine that implements computational universality.

Beyond that, a UTM is certainly not the best possible example of a universal computing machine.

Applying Occam's razor argument, it is easy to conclude that every possible theory that implies homogeneity of space/time should be preferred over any other theory that suggests there exist(s) some "special" point(s) in the space/time construction where the laws of physics are different. In this sense, if we want to construct not only a universal, but also a minimalistic computational model, we have to use some identical storage-device elements ("cells") that store a finite number of elements ("states") and update them in parallel steps ("clock cycles") using uniform local transformations ("rules").

But that's essentially what a cellular automaton is!

So, it is no surprise that if we believe in the Church-Turing thesis, we are inclined also to believe in the Zuse-Fredkin thesis: cellular automata are our best choice if we want a computational model that is able to "capture" all possible "intuitively computable mathematical functions" while also being in concordance with the Occam's razor principle. So to speak, CAs, which may initially seem a great surprise as a proposed model of the Universe, are actually not only extremely "natural" models, but the only possible models, if we want to build a theory that takes into account both the universality and the minimality (Occam's razor) principles.

At the end of this article, we shall not miss the chance to reply to John Conway by putting forward a simple illustrative "Game of Life" argument, related to the old paradox of Zeno about Achilles and the Tortoise (Section 10).

We would probably be able to draw our conclusion right away, were it not for one particular detail that nowadays so troubles the best minds in both theoretical computer science and theoretical physics...

6. Cellular automata and EPR

"...there is no problem, only solutions..."
(John Lennon, from his song "Watching the Wheels")

Alan Turing's representation of the class of all "mental functions" on a mechanical machine (like a UTM) still leaves room for one to speculate about possible quantum-mechanical, "non-mechanical", or even "super-mechanical" computation, if you like. But that is only one possible way to interpret the Church-Turing thesis.

What Turing actually had in mind (and he states this, one way or another, a number of times in his paper [Tur36]) is in fact an unambiguous way of computation, i.e. a purely "mental" way to describe computation as a "mind, pencil and paper" procedure. In this sense, "mind" should be considered to be equivalent to the "internal state" of a UTM, "pencil" — equivalent to the "head" that moves left and right and writes symbols, and "paper" is — of course — equivalent to the "tape" used for storing a possibly infinite amount of "external information" for that UTM.

Now, if one makes a certain effort to stop thinking about a UTM as a "mechanical machine" and switches instead to the viewpoint presented above, namely, that a UTM is simply an exact and unambiguous way to think about the scientific process as a "mind, pencil and paper" procedure, one can certainly reach the point of making an amazing discovery:

It really does not matter what one can observe in a laboratory or the external world; the only thing that really matters is what one can achieve using his/her own mind only!

In other words:

If something cannot be achieved using "thought experiments" only, then it cannot be achieved using any other kind of experiments as well.

This point is really very interesting, and in fact that is the main idea behind Digital Physics (DP), a new scientific field proposed by the author of these lines. Despite its name, DP is not "yet another theory of physics, as we know it", but rather a theory of how physics should look if we adopt the Zuse-Fredkin thesis and try to concentrate on the main problem:

"If our Universe is indeed a CA, then what CA (exactly) should it be?"

The main idea here is not to study physics and try to "embed" its properties into some CA model6, but merely to try to "compress" or "represent" all possible algorithms into a single CA rule and discrete space topology [Pet02b].

Just like the C-T thesis is a kind of "compression" of all the abstract notions like "continuity", "(actual) infinity", or "dualism" into a smaller set of notions like "discreteness", "potential infinity" and "determinism" (see table above), DP as a science, as we see it, is a kind of "compression", but this time on a cellular automata level.

As we shall demonstrate in future articles of this series [Pet02], this nearly fantastic approach is not just a fantasy. It is an approach that really works and produces results, in terms of mathematical restrictions that must be satisfied by "The CA" that is supposed to be isomorphic to our Universe.

---

The only serious objection to the Church-Turing thesis (and, consequently, the Zuse-Fredkin thesis as well) comes from the so-called "quantum computation" (QC) field.

This is what we call the "speed of computation" counter-argument:

"Even if quantum computers are not able to solve any other problems that classical (mechanical) computing machines cannot, they (quantum computers) may still be able to solve some tasks faster than the usual ones"

All other arguments (about the seemingly "continuous" nature of space/time, various probabilistic issues, etc.) should not be regarded as a real threat to the Zuse-Fredkin thesis, but (usually) merely betray some fundamental misunderstanding of what the Church-Turing thesis (and theoretical computer science) is all about...

But this objection alone is a serious issue that really deserves our attention.

In fact (as we shall see below), the Church-Turing thesis really is an immature form of the Zuse-Fredkin thesis, simply because 2D and 3D cellular automata are able to solve some computational tasks faster than the classical UTM (which, in turn, may be regarded as a 1D CA).

In this sense, not only the notion of computation, but quite likely also the notion of universality, should be redefined in a way that "captures" not only the computability (in principle) of some mathematical function, but also the time needed to compute that function.

"Quantum computation" supporters may prove to be overly optimistic in their statements, but they have noticed quite an interesting "flaw" in the definition of the Church-Turing thesis that certainly has not been discussed in the mid 1930's. And, quite likely, has never been discussed at all.

The whole notion of the "speed of computation" of some mathematical function makes little sense in the classical understanding of mathematics: the element of "speed" was only recognized as important with the advent of modern computers in the sixties, and of computational complexity theory.

And even with the appearance of modern electronic computers, the "speed of computation" issue of the Church-Turing thesis still remains in the shadows, for two main reasons:

  1. Most of the theoretical papers written in the field of so-called algorithmic and computational complexity theory, even in modern times, still discuss the obviously outdated, but accepted as "standard", UTM model;
  2. The vast majority of contemporary computers are still sequential machines; the theoretical difficulties toward the implementation and use of truly parallel computers and languages are formidable.

As David Pearson describes it in [Pea96]:

"[&] if we had a crystal-lattice computer today, we simply wouldn't know how to use it. Programming is so well-developed under the Von Neumann model that is virtually impossible to remove it from our thinking. Obviously we can simply program a CA to emulate a standard Von Neumann machine [&] that will certainly be done [&]. But that does not take advantage of the added power the CA has. To use it efficiently, we must be prepared to change our programming model.
What cellular automata array offers us is a parallelism, the ability to perform lots of computations at the same time. Parallel machines promise to be at the forefront of high-performance computing, and virtually every major player in the supercomputer field offers some massively parallel machine. Unfortunately, this promise has been made for at least 30 years, since the ILLIAC IV, and it never been fulfilled. There are significant problems to be overcome in the development of parallel computers — synchronization, programming constructs, data locality, interconnection networks, distribution memory, and instruction set model are only the few of them. Conventional machines have always overtaken the parallel computers in speed before these problems could be overcome."

And also:

"The difficulty of writing software for parallel machines is well-known. [&] Unfortunately, the automatic techniques have generally resulted in, say, 5-way parallelism, and this small constant factor is not enough to sustain the design effort — sequential machines catch up very soon, particularly because the parallel machines require a longer design time.[&] Because of these formidable difficulties, because parallel computers have failed to really show their promise for some 30 years, and after several bankruptcies and failed projects, some observers have declared parallelism dead. Looking far enough ahead, a resurrection is clearly on the way, but the challenge remain, and we must tackle them before the crystal-lattice computer can become reality."

---

In our work [Pet02f] we describe a schema that turns any abstract CA into a parallel mechanical machine that can be constructed in principle in our real world.

Although this mechanical machine (unlike the abstract notion of a CA) does not update the state of all its cells instantly, but rather asynchronously, it operates in such a way that, for every local observer living within the machine, everything looks as though he/she lives in a Universe/CA that works in perfect parallelism.

Just like the classical UTM, described by Alan Turing [Tur36], our parallel machine is a purely mechanical device that does not use any QM effects.

Also, in [Pet02m] we demonstrate a 1D CA that exhibits truly QM (EPR) effects without using any "quantum computation" magic.

Thus, we are inclined to believe that the now fashionable "quantum computation" (QC) may turn out to be one of the most remarkable fallacies of all time: there is no "speedup" of computation using QM, simply because the same effect can be obtained on a classical, i.e. purely mechanical, CA.

---

At this point, a clever reader may pose the following tricky question:

"OK, you say a UTM can compute everything conceivable by the human mind. You say also it is not important to observe the 'external' world or to do experiments in a laboratory; the only important thing is to sit down in a chair, take out a pen and a sheet of paper, and concentrate on thought experiments using the only 'laboratory' everyone has — his/her own mind.
OK, that sounds pretty interesting and thought-provoking... But still, why do you say the Church-Turing thesis is actually an immature form of the Zuse-Fredkin thesis? Where does the boundary between the two lie?
For if one thinks there is no boundary at all, one can easily imagine that our entire Universe is not a 3D UCA, but merely a pitiful one-dimensional tape that also 'operates' in a sequential way, just like a UTM does.
After all, for God's sake, why should we leave the well-established 'standard' notion of computation on a UTM and replace it with the notion of computation on a UCA? Why is the Universe not a sequential one-dimensional UTM, but a 3D parallel computer, isomorphic to a CA?"

That is a pretty good question, indeed! (And if the reader has been able to come up with such a question, he/she certainly deserves our admiration.)

This question really has an answer, but the answer itself is lengthy enough to deserve yet another article [Pet02g].

With a title like that ("What is Emulation?"), one can hardly expect anything new to be presented there. After all, isn't "emulation" a well-established term in CS?

But it turns out it is not!

Surprisingly, 2D and 3D CAs can solve certain computational problems faster than 1D CAs, and this simple fact has enormous consequences that, quite likely, have never been mentioned or analyzed in theoretical computer science.

For if the C-T thesis really were a final form of "capturing" all the subtle issues of notions like "the class of all intuitively computable mathematical functions/theories", then it really would follow that our Universe is indeed a 1D automaton (i.e., a UTM)!7 But, obviously, this is not the case: we live in a Universe that simply happens to be three-dimensional, at least spatially.

Therefore, something is missing!

Obviously, the C-T thesis has been unable to "capture" some subtle issue of the notion of "computation", and that issue has remained hidden until now.

This issue is the "speed of computation" issue.

7. The Fredkin-Petrov ("3D UCA") thesis

Now we are ready to make an important statement that we shall present below in the following three essentially equivalent forms:

Fredkin-Petrov thesis (or "3D UCA thesis"): The Universe is a three-dimensional (computationally) universal cellular automaton (3D UCA).

Fredkin-Petrov thesis in a mathematical form (or "3D UCA thesis in a mathematical form"): The class of all intuitively computable mathematical functions coincides with the class of all functions computable on a 3D UCA.

Fredkin-Petrov thesis in a CS form (or "3D UCA thesis in a CS form"): The class of all intuitively definable algorithms coincides with the class of all algorithms computable on a 3D UCA.

A few important remarks:

  1. Fredkin himself supports the idea of the Universe being a 3D UCA. However, his explanation is based not on the idea of redefining the notion of universality in order to capture the "speed of computation" issue, but merely on the fact that our physical experience strongly suggests we live in a world that, no doubt, is fundamentally three-dimensional in space [FP97]. Please note now that the author's treatment of the basis of 3D space rests not on any physical observations or experiments with the "external" world, but rather on the fact that any abstract structure serving as a "backbone" of a CA is actually a graph, and any graph can be represented in no more than three dimensions. (This is a well-known "minimax" statement from graph theory: "The minimum number of spatial dimensions that is sufficient to represent any possible graph is equal to 3." See also Kuratowski's theorem for planar graphs.)
  2. The proposed Fredkin-Petrov (F-P) thesis is actually a direct corollary of the Zuse-Fredkin thesis. Still, the explicit inclusion of the "three-dimensionality" of the CA space is an important addition, paying due regard to the "speed of computation" issue.
  3. The reason to "christen" these reformulations ("refinements") of the Church-Turing thesis using the authors names is not based on any egoistic motivations, but merely tries to follow the long-term tradition in mathematics and computer science to associate such definitions with the names of their authors (rather than using some "descriptive" notions like "UTM thesis" for "Church-Turing thesis", for example). Anyway, we propose alternative "descriptive" names (in parentheses) for the new theses as well, so the reader is free to choose.

8. Parallelism vs. serialism; society of minds vs. egoism of a single mind (An AI point of view)

An interesting way to look at these things is to think of Turing's approach (i.e. a single "head", or lonely "observer", that "travels" in a 1D space, changing its "internal state" or "mind") as an "egoistic" approach, compared to a CA, which may be thought of as a combination of many cells ("minds") that operate in full parallelism, computing some unknown algorithm cooperatively ("society of minds").

This AI-like point of view goes slightly outside the scope of the present article and Digital Physics in general, but if it can be of some value for the reader, it should be pursued to some extent.

For example, one can think that the Z-F thesis actually replaces the C-T thesis by assuming things about our real world that are more "natural" and "well suited" to the current scope and interest of areas like AI and ALife, for example:

"The Universe does not contain a single mind/observer that is doomed to do a computation in a space that looks very much like a 1D 'tape' or 'paper', but is actually composed of small living elements ('many observers'), each of them communicating with all others by exchanging 'ideas' or 'information'."

9. Quantum mechanics is merely... mechanics!

"QM is all there is!"
(Richard Feynman to Edward Fredkin about cellular automata, as communicated in [Fre01])

One may ask himself/herself:

"How long do we have to wait for the best theoretical minds in contemporary physics finally to settle down and recognize the obvious elegance and power of the Zuse-Fredkin thesis? Or, maybe just like in the case of Church-Turing-Post-Markov etc. (see our historical notes above) we are doomed to wait for "yet another quantum-mechanical hocus-pocus" to appear in "yet another respectful physics journal", in order to disprove it by showing its mutual equivalence to some well-known purely mechanical computational model?"8

The good news is:

"No, we don't have to go through this vicious cycle forever: if the Universe is indeed a mechanical computational device and we are able to build a TOE based on this idea, all other threats will be easily checked and overthrown."

The bad news is, however:

"Until we have the TOE based on the Zuse-Fredkin thesis, i.e. until we have the exact CA that is supposed to be isomorphic to our Universe, the dialog of 'QM as merely mechanics' will continue."

As Yogi Berra put it: "The game isn't over until it's over."

Anyway, it is our expectation that the idea behind the Zuse-Fredkin thesis will be easily accepted once the right CA is found; our bet is that, after that, in just a year or two, the idea will be accepted in all major scientific circles, much like the Church-Turing thesis was proposed and accepted in 1936-37.

The big obstacle to any major acceptance of the Zuse-Fredkin thesis lies in its purely abstract form. Unlike UTMs, which can easily be constructed and demonstrated to be "universal" (in all possible ways and common senses that "computational universality" may hold for any individual researcher), the Zuse-Fredkin thesis does not provide us with a concrete example of such a CA, but merely predicts its existence.

But all this is quite understandable, indeed: one can provide many examples of UTMs, but a UCA that also happens to be a TOE would certainly be a single one!9

---

If we agree that the Church-Turing thesis strongly suggests that all mathematical functions are mechanically computable10, then the only "flaw" in the Zuse-Fredkin thesis available to the QC lovers is the "speed of computation" issue.

To emphasize how important this point is in our exposition, we would like to repeat the above statement in a somewhat enlarged form:

If QC proponents really want to defend their counter-argument (the "speed of computation" issue), the only path (literally!) they can take is to "invent" more EPR-like correlations, to observe them experimentally, and to prove some Bell-like theorems, but this time respecting the fact that our Universe may turn out to be a completely deterministic process.

But we doubt anything like this will ever be found.

Everything else should be left to rest in peace, merely as "historical notes" in the development of the idea of the long-awaited TOE...

---

This article is written from the perspective of the theoretical computer scientist.

Its main audience is theoretical computer scientists, AI researchers, and perhaps simply any computer programmers who still like their old-fashioned (in someone's eyes) computers — the "simple" toys of the present day that (oh, miracle!) do not contain any quantum-mechanical magic, but are still useful, somehow.

Our main goal is to convey a simple message to all the people who still wonder what quantum computing, and all the fuss around it, is all about:

Well, believe it or not, it's about nothing. Literally!

We know there is currently a lot of passion around quantum computation, and we know that making statements like the one above will inevitably draw ire from the physics community.

Still, we are happy to make such a statement. We have nothing to fear: after years of quiet research, we finally have a model [Pet02m] that shows how EPR works in cellular automata. And please note that our model is a concrete CA, not just a bunch of prose about how one "should eventually be able to construct such a CA".

From our point of view, QC is merely a "paper dragon"11 that is currently used (albeit subconsciously) as a bugaboo to "carry away" theoretical computer scientists from the right path to follow.

And, believe us, the right way to go is not called "quantum computation" or "quantum physics"; it is called "digital physics", and we hope we shall have enough time to describe it in detail in our future papers from this "Series..." [Pet02].

To all others who still believe "there is some chance for quantum computing" finally to deliver on its "promises", we can only say that we are certain there is no chance at all.

What we have learned over these past years is that there is virtually no way one can convince everybody of the plain, simple things we are trying to convey here with plain, simple arguments...

About all we can do for those who are "still not convinced" is to sit comfortably out here and remind them of the wise words of a lovely song:

"People asking questions lost in confusion
Well I tell them there's no problem, only solutions
Well they shake their heads and look at me as if I've lost my mind
I tell them there's no hurry...
I'm just sitting here doing time
I'm just sitting here watching the wheels go round and round..."
(John Lennon, "Watching the Wheels")

So, how long will we have to wait for the physics community to recognize these simple things we are trying to communicate with the present article?

Well, we don't know... We just don't care too much about it. (As long as the "mainstream" does not carry away too many of the theoretical CS minds...)

We are in a simple state of mind and soul; we are in a no-hurry situation. We can just sit here, taking our time, waiting patiently and confidently for the change to come, simply "watching the wheels go round and round"...

10. Zeno's paradoxes resolved

"Zeno elaborated forty different paradoxes following from the assumption of plurality and motion, all of them apparently based on the difficulties deriving from an analysis of the continuum."
(Proclus)
"In this capricious world nothing is more capricious than posthumous fame. One of the most notable victims of posterity's lack of judgment is the Eleatic Zeno. Having invented four arguments all immeasurably subtle and profound, the grossness of subsequent philosophers pronounced him to be a mere ingenious juggler, and his arguments to be one and all sophisms. After two thousand years of continual refutation, these sophisms were reinstated, and made the foundation of a mathematical renaissance..."
(Bertrand Russell)
"Although they have often been dismissed as logical nonsense, many attempts have also been made to dispose of them by means of mathematical theorems, such as the theory of convergent series or the theory of sets. In the end, however, the difficulties inherent in his arguments have always come back with a vengeance, for the human mind is so constructed that it can look at a continuum in two ways that are not quite reconcilable."
(Encyclopedia Britannica about Zeno's paradoxes)

Almost everything that we know about Zeno of Elea (about 490 BC - 425 BC) is to be found in the opening pages of Plato's "Parmenides"12.

According to Plato and Proclus, in his youth Zeno elaborated forty different paradoxes13, written in a book. Unfortunately, this work has been lost, so most of them did not survive.

The paradoxes that Zeno gave regarding motion are the most well known and most perplexing ones. Aristotle, in his work "Physics", gives four of Zeno's arguments, known as "The Dichotomy", "Achilles and the Tortoise", "The Arrow", and "The Stadium".

10.1. Dichotomy

For the dichotomy, Aristotle describes Zeno's argument (in Heath's translation [Hea21]) as follows:

"There is no motion because that which is moved must arrive at the middle of its course before it arrives at the end."

In order to traverse a line segment it is necessary to reach its midpoint. After this one must reach the 1/4 point of the rest, after this one must reach the 1/8 point and so on ad infinitum. Hence motion can never get finished.

The argument here is not answered by the well-known infinite sum:

1/2 + 1/4 + 1/8 + ... = 1.

On the one hand, Zeno can argue that the sum 1/2 + 1/4 + 1/8 + ... never actually reaches 1; but even more perplexing to the human mind are the attempts to sum 1/2 + 1/4 + 1/8 + ... backwards. Before traversing a unit distance we must get to the middle, but before getting to the middle we must get 1/4 of the way, and before we get 1/4 of the way we must reach 1/8 of the way, etc. This argument makes us realize that we can actually never get started(!), since we are trying to build up this infinite sum from the "wrong" end. Indeed, this is a clever argument, which still puzzles the human mind today.
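For reference, the standard computation behind this infinite sum can be written out explicitly (in LaTeX notation): the partial sums approach 1, but no finite partial sum ever equals 1, and this is precisely the loophole Zeno exploits:

    S_n = \sum_{k=1}^{n} \frac{1}{2^k} = 1 - \frac{1}{2^n},
    \qquad S_n < 1 \ \text{for every finite } n,
    \qquad \lim_{n \to \infty} S_n = 1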

In other words, the argument was that:

"&traveling an infinite number of finite distances, which, Zeno would have us conclude, must take an infinite time, which is to say it is never completed. And since the argument does not depend on the distance or who or what the mover is, it follows that no finite distance can ever be traveled, which is to say that all motion is impossible." [Hug02]

However, this paradox is easily resolved if we adopt the cellular automaton point of view:

"An electron, in Fredkin's universe, is nothing more than a pattern of information, and an orbiting electron is nothing more than that pattern moving. Indeed, even this motion is in some sense illusory: the bits of information that constitute the pattern never move [&]. Each bit stays put and confines its activity to blinking on and off." [Wri88a]

Please note now that CAs, with their intrinsic ability to contain "patterns" that move without actually moving (think of a glider in the Game of Life, for example), and with their absolutely discrete space-time construction, seem like a perfect proposal for an abstract mathematical model that resolves both issues of the paradox (a minimal sketch in Python follows the list below):

  1. Surprisingly, Zeno is in a sense quite correct in pointing out that "actual motion" does not exist(!); it is merely a sensual perception;
  2. If time and space are discrete, then traversing a distance requires only a finite number of finite steps, which takes a finite amount of time; hence no paradox arises!
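To make the "motion without motion" point concrete, here is a minimal Python sketch (our own illustration, not code from any of the cited works) that steps a standard glider through Conway's Game of Life. The update rule only switches cell states on and off in place; nevertheless, after four generations the same five-cell shape reappears, displaced one cell diagonally:

    from collections import Counter

    def step(alive, width, height):
        # One Life generation; "alive" is a set of (x, y) cells on a torus.
        counts = Counter(
            ((x + dx) % width, (y + dy) % height)
            for (x, y) in alive
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is on in the next generation iff it has exactly 3 live
        # neighbors, or it is already on and has exactly 2 live neighbors.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in alive)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for generation in range(4):
        cells = step(cells, 20, 20)

    # No cell has "moved": states were merely switched on and off. Yet the
    # pattern as a whole has shifted one cell diagonally.
    assert cells == {((x + 1) % 20, (y + 1) % 20) for (x, y) in glider}

Note that this displacement of one cell per four generations is exactly the glider speed of c/4 mentioned below.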

10.2. Achilles and the Tortoise

The most famous of Zeno's arguments against the continuity of space/time is undoubtedly that of Achilles. Heath's translation [Hea21] from Aristotle's "Physics" is:

"... the slower when running will never be overtaken by the quicker; for that which is pursuing must first reach the point from which that which is fleeing started, so that the slower must necessarily always be some distance ahead."

The traditional telling of this paradox depicts the Greek hero Achilles chasing a tortoise that is much slower than he is, yet he never manages to catch it: our hero runs much faster than the tortoise, but every time he reaches the place where the tortoise was, the tortoise has used that time to move some (small) distance ahead, and so on ad infinitum.

Again, in a discrete space-time no such paradox occurs:

Think about two patterns in the Game of Life, say two "spaceships" traveling along the same line in the same direction. Suppose the first one is a "light" (and "slow") spaceship and the other one is a "heavy" (and "fast") spaceship. Will the second spaceship ever catch up with the first one?!

Absolutely!

(Even if we apply the procedure described by Zeno — see Fig. 6)


Figure 6: In the Game of Life, we call all moving configurations that preserve their shape "spaceships"; the glider configuration from Fig. 3c is an example of the "lightest" (i.e. smallest in terms of cell count) possible spaceship.

In Life, the speed of a chess king (one cell per generation, in any of the eight directions) is called the speed of light (Conway chose the phrase because it is the highest speed at which any kind of movement can occur on the board). Usually the speed of light is denoted by c, and all spaceship speeds are measured relative to this constant. For example, a glider moves diagonally with one fourth the speed of light, i.e. c/4.

Two other spaceships are shown above. The first ("lighter") one, on the bottom, is called the "turtle" and was found by Dean Hickerson. It moves orthogonally with speed c/3, where c is the speed of light.

The next ("heavier") spaceship on the top was found by Hartmut Holzwart, but was never given any specific name. For our particular purposes let us call this configuration "Achilles", since it somewhat resembles a muscular human body. The "Achilles" moves orthogonally with speed c/2, i.e. is faster than the "turtle".

Let us now organize a race between Achilles and the turtle in the digital realm. Initially, we give the turtle a head start of 20 cells. Will the second (faster) configuration ever catch up with the first (slower) one?!

Absolutely!

(Discrete space is not infinitely divisible, so even if we decide to apply the procedure suggested by Zeno, no paradoxes will occur!)
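The arithmetic behind this is elementary: "Achilles" gains c/2 - c/3 = c/6 on the "turtle" every generation, so a head start of 20 cells is closed after 20 / (1/6) = 120 generations. A minimal Python sketch (our own idealization: the positions advance at the ships' average rates, rather than by running the full CA) shows that the "rendezvous" is reached after a finite number of discrete steps:

    from fractions import Fraction

    def race(head_start=20):
        # Positions in cells; speeds in cells per generation (c = 1).
        achilles = Fraction(0)
        turtle = Fraction(head_start)
        generation = 0
        while achilles < turtle:
            achilles += Fraction(1, 2)  # "Achilles" moves at c/2
            turtle += Fraction(1, 3)    # the "turtle" moves at c/3
            generation += 1
        return generation

    print(race())  # -> 120: the faster ship catches the slower one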

Please note now that all the "explanations" of Zeno's paradox offered by "continuous" mathematics/physics do not actually explain it. They merely compute the "rendezvous" point, but they certainly do not explain the paradox, i.e. why the procedure suggested by Zeno for computing the space/time position of the "rendezvous" leads to an "infinite algorithm"14.

Very simply put: they have no explanation at all.
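For completeness, here is all that those "explanations" amount to (in LaTeX notation). With a head start d and speeds v_A > v_T, Zeno's stages form a geometric series whose sum is the familiar rendezvous time:

    t = \frac{d}{v_A} \sum_{k=0}^{\infty} \left( \frac{v_T}{v_A} \right)^{k}
      = \frac{d}{v_A - v_T}

With d = 20, v_A = c/2, and v_T = c/3, this gives t = 120 generations, the same answer as the discrete race above. The computation is correct, but it says nothing about why Zeno's stage-by-stage procedure never terminates.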

Similarly, QM does not explain EPR or any other "quantum mechanical" effect or paradox; QM merely computes the result of the experiment, but it does not give any meaningful interpretation of it.

11. Conclusion

As we have already mentioned in [Pet02a1], regardless of all the explanations in defense of the Zuse-Fredkin thesis that could be given, many will tend to refuse to recognize its validity, even without listening to the arguments proposed.

Some will tend to reject the thesis simply because it seems so "unnatural" or "weird"; others will tend to ask an endless chain of questions that reveal either how ill-informed they are about recent scientific developments or gaps in their education, often including a serious misunderstanding of what mathematics and theoretical computer science are all about.

The paradoxes of Zeno, Schrödinger's cat, and Einstein-Podolsky-Rosen are essentially arguments against the "fundamental" continuity of space/time and the "truly" probabilistic nature of our Universe. While all these arguments are subtle and profound, conventional physics does not seem to listen to them carefully enough.

QM merely ignores them, or even dodges them by regarding them as "arguments" in its favor(?):

Zeno has been seen merely as a "juggler"; there are some physicists who really believe that "Schrödinger's cat can be both dead and alive"; Einstein's arguments against the "completeness" of quantum mechanics have been used as a good opportunity to introduce notions like "fundamental non-locality" (essentially a "spooky action at a distance") into serious scientific literature.

And, finally, Bell's theorem has been "interpreted" as a "mathematical proof"(?) that local models like cellular automata are unable(?) to show truly non-local quantum-mechanical (EPR) effects: an "argument" that can easily be shown to be simply nonsense [Pet02m].

While debunking pseudo-science is certainly possible in a step-by-step manner, it is still sad to see the human intellect involved in producing mountains of "scientific" papers with no scientific value at all. Certainly, deflating the enormous egos of their authors (egos which blind them into searching for "quantum ghosts" and "true miracles", rather than seeking explanations of the newly discovered phenomena) would help a lot.

(But egoism is usually regarded as a spiritual, rather than a scientific issue, so we shall cease our thoughts here.)

And again, the only reason we started to "believe" in this nonsense is an educational one:

For a long time, we had no reasonable alternative (i.e. some other mathematical paradigm that resolves all the "paradoxes" typical of QM), and that is why we started to believe this has to be the case!

(And many started teaching the nonsense to other students, rather than clearly identifying the problem and looking for some other solution...)

After all, in their essence, all "paradoxes" are actually clear mathematical indicators that something is fundamentally wrong with a theory (whether it be QM or something else).

All paradoxes can be reduced to statements of this form:

'A' and 'not A' are both true.

But, as is well known from elementary logic, one can derive anything one likes starting from "statements" like the one above. Therefore, such paradoxes are not just "some teeny little paradoxes that we can observe here and there"; they are something that ruins any scientific theory, and ruins it completely...
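This is the classical principle of explosion (ex falso quodlibet). A one-line, machine-checkable rendering (a sketch in Lean; the theorem name is ours) shows how little is needed to derive an arbitrary proposition B from a contradiction:

    -- Principle of explosion: from "A and not A", any proposition B follows.
    theorem explosion (A B : Prop) (h : A ∧ ¬A) : B :=
      absurd h.left h.right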

In the author's eyes, what is now called "standard QM" has already begun to look much like Swiss cheese: "there are holes everywhere", but no one seems to care about it.

From this perspective, the "development" of physics in the twentieth century can be seen as a sad story that constantly "chooses" the wrong way to go. A really sad story, indeed...

This is especially true for most physicists, who are so accustomed to thinking of Reality as something that can be studied using "external" experiments only that the idea of using computer experiments as the only source of "physics" seems like plain madness to them.

For them, we would like once again to repeat our main argument:

If something cannot be shown to work using the "mind, pencil and paper" method, it will not work in the laboratory either.

While the Church-Turing thesis itself is a sufficiently strong argument in support of the idea of the Universe as a CA, with the "efforts" of the physicists (and, especially, of the quantum computation proponents) even this argument may not seem as sound as it really is.

But the overall feeling of ecstasy about quantum computation during the nineties is now far from its initial promises. For the last five years, little progress has been made. With a little sense of humor, one can add that much of the initial enthusiasm can now hardly be sustained by a bunch of papers full of promises and somewhat "elementary" results, like factoring 15 into 5*3.

While that last achievement may still be regarded as a "major scientific breakthrough", the truth is that most of the problems that initially seemed solvable "faster" on quantum computers turned out to belong to the same well-known classes of problems, solvable in the same amount of time on the good old (non-quantum) computers as well.

The classical Church-Turing thesis is in pretty good shape: while it certainly needs some "revision", it is not a revision in the sense that everything should be abandoned and we should start thinking of the world as something that is not liable to any explanation at all.

Quantum world phenomena may be explained in a way that requires less magic than they are currently supposed to contain. While from some point of view this may look "disappointing", from a scientific point of view we can certainly be happy to know the truth, whatever it is.

Unfortunately, we should also add that there is a special category of well-educated opponents who will tend to refuse anything related to "discreteness" or "determinism" in physics and in science as a whole; they merely regard such ideas as "absurd" or "heresy".

Actually, we do not beg for their endorsement: quite likely, our theoretical computer science arguments will seem too abstract to them, and they should be redirected to read once again the classical works of Fredkin [Fre90] or the recently published book by Wolfram [Wol02]. For them, the more "illustrative" approach, with its myriad of examples ranging from politics to seashells, communication with extraterrestrial life, and beautiful snowflake simulations, may serve better.

Still, little more can be done for them: we can only hope that, after we reveal some of the results of our general theory, the whole thing will stop looking like "pure fantasy" and will start taking the shape of an elegant and beautiful theory, as we see it.

Our hope is also that the new generation of scientists, born in the computer era, will not be as tightly bound to "continuous" mathematics and physics as the scientists of the older generation, unfortunately, are.

As Edward Fredkin put it once:

"The problem in teaching Digital Physics is not how much has to be learned, rather how much has to be unlearned"

And also:

"There are amazingly few physicists that understand anything about theoretical computer science".

Still, the other unfortunate thing is that the enormous power locked up within an exquisite statement like "the Universe is a CA" has never really been realized by its author (Ed Fredkin) [FP97].

That in turn leaves the door open for our next paper [Pet02a3]. But that is another story...

Of all the articles in the "Series..." [Pet02], this will be the only one that tries to convince the reader, one way or another, that our Universe is exactly a CA. In our future articles we shall concentrate not on giving more arguments in support of the Zuse-Fredkin thesis, but rather on our general theory, namely:

If our Universe is indeed a CA, then what kind of CA (exactly) should it be?

After all, the Zuse-Fredkin thesis is just what it says it is: a thesis, i.e. a mathematically unverifiable statement that should simply be either accepted or rejected.

The main thesis of this work is that Alan Turing actually almost succeeded in guessing the whole thing: his famous UTM from the mid-1930s is, in fact, nothing more or less than (a somewhat restricted form of) a universal one-dimensional CA.

Still, in order for one to start perceiving the complete picture, one has to "expand" the computational model into three spatial dimensions:

The Universe is really a UCA. But it is a 3D UCA (F-P thesis), not a 1D UCA or UTM (C-T thesis).

For the author of these lines this is not even a point of discussion; it is merely a fact.

A mathematically unverifiable fact, but still... a fact.


Acknowledgments

The author would like to thank Joel Dobrzelewski for editing the text of this article and for his graphics used in Fig. 2 and Fig. 5.

Footnotes

* This article is a part of [Pet02].

1 To be exact: the Zuse-Fredkin thesis plus Occam's razor. However, Occam's razor is often considered so "evident" as a metaphysical statement that we can even neglect it, by saying that it is merely "tacitly embedded" in the very foundations of any kind of science.

2 The historical notes below are largely taken from Wikipedia, the free online encyclopedia and an excellent online resource for mathematics, computer science, and physics related materials. In particular, check out:
http://www.wikipedia.com/wiki/The+Church-Turing+thesis
for an in-depth review of the historical background of the development of the Church-Turing thesis.

3 A non-deterministic TM is a TM-based model of computation whose "action table" allows more than one possible transition from a given configuration, so that its next "internal state" and behavior are not uniquely determined.

4 Actually, MWI, super-strings, and various theories of this kind, albeit largely fashionable in modern times, are not even examples of "rigorous mathematical theories" (strictly speaking), but rather proposals for theories. It is our opinion that physics-like fantasies should be kept separate from anything that really has a well-defined mathematical apparatus behind it.

5 About all we can do for anyone who believes the opposite is to recommend that he/she start writing more computer programs; computer programming greatly helps one shift one's viewpoint from the one view to the other.

6 As Fredkin's DM does — cf. [Fre90].

7 Or, at least, a direct corollary would be that there exists some TOE that represents our Universe as if it is fundamentally one-dimensional in space.

8 Just as we did in [Pet02m] for EPR.

9 Actually, this "simple" statement is far from obvious and will be studied later in [Pet02j].

10 For the author anyone who seriously believes that "there is some chance for QM to demonstrate non-UTM computable mathematical functions" is merely a science-fiction writer. In this sense, our position is that the good old Church-Turing thesis is still in a pretty good shape, despite its honorable age.

11 For the sake of the joke only, we can add that this "paper dragon" is probably made of the physics papers published in a rush in a lot of respectable scientific journals since 1985.

12 Written 370 BC. Online version (translated by Benjamin Jowett) available at: http://classics.mit.edu/Plato/parmenides.html

13 A good online source on Zeno's paradoxes is [Hug02].

14 This is also known as a "supertask" — for an enlightening overview check out [Lar99].

References

[Bel64] Bell, John, "On the Einstein Podolsky Rosen paradox", Physics 1 #3 (1964) 195-200.

[Chu32] Church, Alonzo, "A Set of Postulates for the Foundation of Logic", Annals of Mathematics, second series, 33 (1932) 346-366.

[Chu36a] Church, Alonzo, "An Unsolvable Problem of Elementary Number Theory", American Journal of Mathematics 58 (1936) 345-363.

[Chu36b] Church, Alonzo, "A Note on the Entscheidungsproblem", Journal of Symbolic Logic 1 (1936) 40-41.

[CFM02] Conway, John, Edward Fredkin, Norman Margolus, Plamen Petrov, Tommaso Toffoli, et al., private e-mail discussion (2002).

[EPR35] Einstein, A., B. Podolsky and N. Rosen, "Can quantum-mechanical description of physical reality be considered complete?", Physical Review 47 (1935) 777-780.

[FP97] Fredkin, Edward and Plamen Petrov, private communications, Brookline, MA (1996, 1997).

[Fre90] Fredkin, Edward, "Digital Mechanics: An Informational Process Based on Reversible Universal CA", Physica D 45 (1990) 254-270. Online version: http://digitalphysics.org/dm_paper.htm

[Fre01] Fredkin, Edward, Introduction to Digital Philosophy, online document (2001).
See chapter 14, footnote 1 at: http://digitalphilosophy.org

[Gar71] Gardner, Martin, "Mathematical Games: On Cellular Automata, Self-Reproduction, the Garden of Eden, and the Game 'Life' ", Scientific American 224 (February 1971) 112-117.

[Goe31] Gödel, Kurt, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On formally undecidable propositions of Principia Mathematica and related systems I"), Monatshefte für Mathematik und Physik 38 (1931) 173-198. Translated in: Solomon Feferman (ed.), "Kurt Gödel: Collected Works", Oxford University Press, volume 1 (1986). Includes the German text and parallel English translation.
Online version (translation by B. Meltzer): http://www.ddc.net/ygg/etext/godel/
PDF version (translation by M. Hirzel): http://nago.cs.colorado.edu/~hirzel/papers/canon00-goedel.pdf

[Goe34] Gödel, Kurt, On Undecidable Propositions of Formal Mathematical Systems (1934), Lecture notes taken by Kleene and Rosser at the Institute for Advanced Study. Reprinted in Davis, M. (ed.), "The Undecidable", New York: Raven (1965).

[Gut96] Gutowitz, Howard (ed.), Frequently Asked Questions About Cellular Automata (CA FAQ), online document (1996). See: "Is there a universal 1D CA?". Available online at: http://cafaq.com/properties/index.shtml#SECTION00029000000000000000

[Hea21] Heath, Thomas, A History of Greek Mathematics 1, Clarendon Press, Oxford (1921). Reprint: Dover Publ., New York (1981).

[Her32] Herbrand, Jacques, "Sur la non-contradiction de l'arithmétique", Journal für die reine und angewandte Mathematik 166 (1932) 1-8.

[Hug02] Huggett, Nick, "Zeno's Paradoxes". In: Stanford Encyclopedia of Philosophy, edited by Edward Zalta (2002). Available online at: http://plato.stanford.edu/entries/paradox-zeno/

[KQS85] Kennedy, J. W., Quintas, L. V. and Syslo, M. M., "The Theorem on Planar Graphs", Historia Math. 12 (1985) 356-368.

[Kle35] Kleene, Stephen, "A Theory of Positive Integers in Formal Logic", American Journal of Mathematics 57 (1935) 153-173, 219-244.

[Kle36] Kleene, Stephen, "Lambda-Definability and Recursiveness", Duke Mathematical Journal 2 (1936) 340-353.

[Kur30] Kuratowski, C., "Sur le problème des courbes gauches en topologie", Fund. Math. 15 (1930) 271-283.

[Lar99] Laraudogoitia, Jon, "Supertasks". In: Stanford Encyclopedia of Philosophy, edited by Edward Zalta (1999). Available online at: http://plato.stanford.edu/entries/spacetime-supertasks/

[Mar54] Markov, A., "Teoriya algoritmov" ("The Theory of Algorithms"), Trudy MIAN USSR 42, Izdatelstvo AN, Moscow (1954). Translated in: American Mathematical Society Translations, series 2, 15 (1960) 1-14.

[Pea96] Pearson, David, "Programming a crystal-lattice computer", Proceedings of the Fourth Workshop on Physics and Computation. In: T. Toffoli, M. Biafore, and J. Leão (ed.), PhysComp96, New England Complex Systems Institute (1996). Also online in the InterJournal: http://www.interjournal.org/cgi-bin/manuscript_abstract.cgi?540078425

[Pet02] Petrov, Plamen, Series of Brief Articles about Digital Physics (DP), in preparation. For the latest update, check out: https://digitalphysics.org/Publications/Petrov/DPSeries/

[Pet02a1] Petrov, Plamen, "Church-Turing Thesis is Almost Equivalent to Zuse-Fredkin Thesis (An Argument in Support of Zuse-Fredkin Thesis)", 3rd WSEAS International Conference on Systems Theory and Scientific Computation, Special Session on Cellular Automata and Applications, to be held on 15-17 November 2003 at Rhodes Island, Greece, submitted (August 30, 2003). This article is a part of [Pet02].
Available online at: https://digitalphysics.org/Publications/Petrov/Pet02a1/Pet02a1.htm

[Pet02a3] Petrov, Plamen, "The Zuse-Fredkin Thesis is the Ultimate Form of the Church-Turing Thesis", in preparation.

[Pet02b] Petrov, Plamen, "The Game (Introduction to Digital Physics)", InterJournal of Complex Systems, submitted, 604 (2002). This article is a part of [Pet02].
Available online at: https://digitalphysics.org/Publications/Petrov/Pet02b/Pet02b.htm

[Pet02f] Petrov, Plamen, "Synchronization without Synchronization", draft (2002). This article is a part of [Pet02].
Available online at: https://digitalphysics.org/Publications/Petrov/Pet02f/Pet02f.htm

[Pet02g] Petrov, Plamen, "What is Emulation?", in preparation.

[Pet02i] Petrov, Plamen, "Goedel's Incompleteness Theorems and How Digital Physics Interprets Them", in preparation.

[Pet02j] Petrov, Plamen, "Petrov's Hypothesis is (almost) a Theorem", in preparation.

[Pet02m] Petrov, Plamen, "Bell's Theorem is Invalid for Cellular Automata, or Demonstrating EPR effects on a One-Dimensional (Non-Quantum) CA", InterJournal of Complex Systems, submitted, 612 (2002). This article is a part of [Pet02].
Available online at: https://digitalphysics.org/Publications/Petrov/Pet02m/Pet02m.htm

[Pop34] Popper, Karl, Logik der Forschung, J.C.B. Mohr, Tübingen (1934). Translated into English as: "The logic of scientific discovery", New York: Basic Books (1959).

[Sch35] Schrödinger, Erwin, "Die gegenwärtige Situation in der Quantenmechanik", Naturwissenschaften 23 (1935) 807-812, 823-828, 844-849. English translation: John D. Trimmer, Proceedings of the American Philosophical Society 124 (1980) 323-338. Reprinted in "Quantum Theory and Measurement" (1983) 152.

[Tur36] Turing, Alan, "On Computable Numbers, With an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, Series 2, Volume 42 (1936-37) 230-265; Corrections, Ibid., vol. 43 (1937) 544-546. Reprinted in Davis, M. (ed.), "The Undecidable", Hewlett, NY: Raven Press (1965).
Online version: http://www.abelard.org/turpap2/tp2-ie.asp

[Wol02] Wolfram, Stephen, A New Kind of Science, Wolfram Media, Inc., Champaign IL (2002), 1192 pp. Home page: http://www.wolframscience.com

[Wri88a] Wright, Robert, "Did the Universe Just Happen?", Atlantic Monthly (April 1988) 29-44. Translated into Bulgarian as: "Kompjutar li e Vselenata?" ("Is the Universe a Computer?"), Spectar 66 (1989) 42-51.
Online version (in English): https://digitalphysics.org/Publications/Wri88a/html/
Online version (in Bulgarian): https://digitalphysics.org/Publications/Wri88a-Bul/html/

[Wri88b] Wright, Robert, Three Scientists and Their Gods: Looking for Meaning in an Age of Information, Times Books (1988). See: "Part One: Edward Fredkin".

[Zus36] Zuse, Konrad, "Verfahren zur selbsttätigen Durchführung von Rechnungen mit Hilfe von Rechenmaschinen", Patentanmeldung Z 23 139 / GMD Nr. 005/021 (1936).

[Zus67] Zuse, Konrad, "Rechnender Raum", Elektronische Datenverarbeitung 8 (1967) 336-344.

[Zus69] Zuse, Konrad, "Rechnender Raum", Schriften zur Datenverarbeitung, Band 1, Friedrich Vieweg & Sohn, Braunschweig (1969). English translation: "Calculating Space", MIT Technical Translation AZT-70-164-GEMIT, MIT (Proj. MAC), Cambridge, Mass. 02139, Feb. 1970.
