Conclusions

 

Gottlob Frege

 

Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build arithmetic.

~Frege, in a letter to Russell, 1902

Is logicism true?

Frege's Begriffsschrift.

When we last spoke of Frege, we had mentioned that his goal in creating the world's first logical formal language was to show that logicism was true. Logicism, recall, is the theory that mathematics can be derived from logic. Thus, Frege believed that logic is the root and that mathematics is one of the branches. And so Frege labored furiously first in formalizing logic, giving us symbolic logic, and then in systematically defining all the concepts and relations of arithmetic in his "concept notation". The goal was that, once he was done, proving logicism to be true would be a purely mechanical task.

You might remember that this goal of Frege's was shared with David Hilbert. Hilbert, much like Frege, wanted to show that mathematics could be given a complete and consistent formalization. If you recall, Hilbert popularized this issue in 1900, at the International Congress of Mathematicians in Paris, by framing it as a problem that 20th century mathematicians should attempt to solve (see Rules of Inference (Pt. III)). If logicism could be proved, Hilbert reasoned, then this problem would be that much closer to getting crossed off his list.

Frege's work was obviously very esoteric, as we've mentioned before, but at least one important thinker eventually paid close attention to the project Frege had begun with his 1879 Begriffsschrift. This person was none other than Bertrand Russell. In 1902, while the second volume of his Grundgesetze der Arithmetik (The Basic Laws of Arithmetic, 1893, 1903) was in press, Frege received a letter from Russell, who claimed to have found an inconsistency in one of Frege's axioms.

Paradoxes entail a contradiction, and a contradiction could doom Frege's whole enterprise of proving logicism. The paradox, though, didn't appear in Frege's formalization of logic (Lucky us!), but it was clearly present in his attempted formalization of arithmetic. In a kind letter, Frege responded to Russell, a portion of which can be seen in the epigraph above.

What was this paradox? It is a technical detail which is difficult to summarize. Suffice it to say that Frege wanted to define arithmetical numbers in terms of classes (or sets). But this raises the possibility of there being sets that contain themselves. So the paradox is this: does the set of all sets that do not contain themselves as members contain itself? This is much easier to understand when explained by Russell through his barber paradox (see also Irvine 2016, section 2), but here's what you need to know: Frege lost hope.
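For readers who would like to see the paradox itself, here is a minimal sketch in modern set notation (today's shorthand, not Frege's own concept notation):

```latex
% Russell's paradox, stated in modern set-builder notation.
% Frege's Basic Law V, in effect, licenses forming a class for any condition,
% so in his system the class R below is a perfectly legitimate object:
\[
  R \;=\; \{\, x \mid x \notin x \,\}
\]
% Now ask whether R is a member of itself. By R's own defining condition,
\[
  R \in R \;\Longleftrightarrow\; R \notin R ,
\]
% so either answer entails the other, and the system proves a contradiction.
```

And a system that proves a contradiction can prove anything at all, which is why, as noted above, Frege lost hope.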

But Russell didn't. In fact, Russell revered Frege's work, and so he set out to make the necessary amendments and complete the task himself. He eventually, along with Alfred North Whitehead, published Principia Mathematica, the first volume of which appeared in 1910. Although Principia had problems, Russell and Whitehead made a heroic attempt to continue Frege's work. In fact, the language that we used in this course is closely related to the one used by Russell and Whitehead. Logicism seemed to have hope.

And then, in 1931, Gödel (pictured below) published On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Although Gödel's incompleteness theorems cannot be done justice here, we can summarize his main findings. Gödel's first incompleteness theorem states that:

  • in any consistent formal system F within which a certain amount of arithmetic can be carried out,
  • there are statements of the language of F which can neither be proved nor disproved in F.

In other words, no formal system that can do a certain amount of arithmetic can be both complete and consistent. Hofstadter summarizes it this way:2

 

 

If no consistent logical system that performs arithmetic can be complete, then Frege's and Russell's attempts will always be futile (see Shapiro 2005, chapter 5 and Nagel & Newman 2001). Most mathematicians and logicians agree. Logicism is false.
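For the curious, here is a rough sketch of the trick behind the first theorem (a simplification on my part; Gödel's original argument needs a slightly stronger assumption than bare consistency for one half):

```latex
% Sketch of the first incompleteness theorem, in modern notation.
% Via Goedel numbering, claims about provability in F can be encoded as
% claims of arithmetic, so F can "talk about" its own proofs. One then
% constructs a sentence G_F that, in effect, says "I am not provable in F"
% (the corner brackets denote the Goedel number, i.e., the code, of G_F):
\[
  F \vdash \; G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\big(\ulcorner G_F \urcorner\big)
\]
% If F is consistent, F cannot prove G_F; and, under a mild further
% assumption, F cannot disprove it either. So G_F is undecidable in F,
% and F is incomplete.
```

The self-reference here is no accident: Gödel's construction is a disciplined cousin of the same diagonal trick that lurks behind Russell's paradox.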

Does this mean it was all for nothing? Was Frege's life work pointless? Not at all. Frege's work is an essential component of our shared intellectual history. Without him, logic and mathematics wouldn't have been the same.

"The formalization of arithmetic—and other branches of mathematics too—still rests largely on techniques that Russell and Frege had forged" (Shenefelt and White 2013: 143).

But that's not all...

 

Kurt Gödel (1906-1978)

 

 

The Digital Revolution

Geography and ideas

An interesting approach to the writing of history that has become increasingly popular is to emphasize the role of geography in historical events. For example, Jared Diamond's (2017; originally published in 1997) Guns, Germs, and Steel makes the case that what enabled some Eurasian and North African civilizations to thrive and conquer other less-developed civilizations has nothing to do with greater intellectual prowess or inherent genetic superiority (as much as some white supremacists would love to believe). Rather, the gap between these civilizations, which is manifested in more complex organizational structures and advanced technology and weaponry, originated mostly from environmental differences. The interested student can refer to Diamond's book or see the documentary series that was inspired by the book.

Of course, Diamond isn't without his critics. But many of the criticisms of his view also emphasize the role of environmental factors. In other words, they are disagreeing with his findings but not with his method. For example, in chapter 1 of their If A then B, Shenefelt and White make the case that, more than anything, it was ubiquitous access to water routes, which allowed easy travel from city-state to city-state, that led to the flourishing of Greek philosophy. This is because the ease of travel served as a means of escape should a thinker's ideas become too heterodox. In short, thinkers were allowed to push the envelope, challenge authority, explore new ideas—all of which are essential ingredients in intellectual breakthroughs—with more peace of mind than elsewhere. They could just leave if things were getting too hot for them!

Politics and ideas

Representatives of Athens and Corinth at the Court of Archidamas, King of Sparta.

But it's not only geography that influences the history of ideas; politics and war are influential in subtle ways that are difficult to see at first. In the first lesson of this course, in a footnote, I argued that Aristotle truly did develop a field that was unique and could not be found in other civilizations in the same form. For example, Indian logic was preoccupied with rhetorical force, i.e., the persuasive characteristic of argumentation. It featured repetition and multiple examples of the same principle. But Aristotle's logic focused on logical force, the study of validity itself. The Greeks wanted to know why an argument's premises force the conclusion on you. Interestingly, Chinese logic apparently did study some fallacious reasoning, but never its counterpart (i.e., validity). In short, it was only Aristotle who studied logical force unadulterated by rhetorical considerations.

But this was not enough. In order for an idea, like the idea that studying validity is a worthwhile endeavor, to take hold, you need not only an innovative thinker, i.e., Aristotle, but you also need a receptive audience. So here's the puzzle: Why was Aristotle’s audience so receptive to the study of validity? Because 5th century BCE Athenians made two grave errors that 4th century BCE Athenians hadn't forgotten and would never forgive:

  1. They lost the Second Peloponnesian War (431-404 BCE); and
  2. They fell for “deceptive public speaking.”

In other words, 4th century BCE Athenians knew that the people from the century before had really messed up. And Plato, for one, blamed both of those errors on the sophists. The sophists were paid teachers of philosophy and rhetoric (or persuasion). Plato in particular makes them seem like intellectual mercenaries who would argue for any position if you paid them. In truth, the sophists were probably more scholarly and less mercenary than Plato makes them out to be. For instance, sophists generally preferred natural explanations over supernatural explanations, and this preference might've been an early impetus for the development of what would eventually be science. Nonetheless, sophists would often argue that matters of right or wrong are simply custom (nomos). Moreover, these first teachers were more prevalent in Athens than elsewhere due to the opulence of the city. And so, it was easy to place the blame on them. Once public opinion turned on the sophists, things got violent. One person accused of sophistry (whose name was Socrates) was even put to death.

And so it was for these reasons—remembering the mistakes of the past and the violent turn against the sophists—that the populace was ready to restore the distinction between merely persuasive arguments and truly rational ones. The study of the elenchus (argument by contradiction) was divided into rhetoric (persuasion) and rational argumentation, thanks to Plato. From here, it was just one step further to begin to standardize rational argumentation and study its logical form. Enter Aristotle. But without the preceding turmoil, it is possible that Aristotle's logic would've fallen on deaf ears (see Shenefelt and White 2013, chapter 2).

Technology and ideas

In a previous lesson, we looked at how the rules of logic, in particular Boole's algebra, were realized physically in computer circuits. For a refresher on the connection between Boolean algebra and computer circuits, see the slideshow below (see also this helpful video).
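If you don't have the slideshow handy, here is a minimal sketch of the same idea in Python (a toy of my own, not the circuits from the slides): the Boolean operations AND, OR, and NOT, combined exactly the way physical gates are wired together, already suffice to add binary digits.

```python
# A minimal sketch: Boolean operations as "gates," combined into a half adder.
# This mirrors how circuits realize Boole's algebra: each gate takes truth
# values (0/1) in and puts a truth value out.

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

def XOR(a, b):
    # "a or b, but not both" -- built only from the gates above.
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two one-bit numbers; returns (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

# The full truth table: pure logic doing arithmetic.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```

The half adder is the smallest arithmetic circuit there is; chaining such adders together is, at bottom, how a processor adds whole numbers.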

 

 

A Roberts loom in a weaving shed in 1835.

It turns out that even the 19th century logicians that we covered, George Boole and Augustus De Morgan, were inspired by their environment. Let me explain. The Industrial Revolution began in the late 1700s in Britain. This was a time when mechanics were developing a new generation of steam engines (fired by coal), there was a boom in the construction of public works, clothing began to be manufactured on large industrial machines, and, interestingly enough, these new developments were themselves stimulated by improvements in agricultural science. It was also during this period that diesel power, photography, telegraphs, telephones, and petroleum-based chemicals were invented. In sum, mechanization was in full swing.

Enter George Boole and Augustus De Morgan. When the Industrial Revolution was fully underway, Boole and De Morgan published important treatises on mathematical logic. In 1847, both published what were arguably the first major contributions to logic since Aristotle; this is when Boole published his Mathematical Analysis of Logic, which we discussed briefly back in Unit I. According to Shenefelt and White, it was the Industrial Revolution itself that inspired these two thinkers and many others:

“The Industrial Revolution convinced large numbers of logicians of the immense power of mechanical operations. Industrial machines are complicated and difficult to construct, and unless used to manufacture on a large scale, they are hardly worth the trouble of building. Much the same can be said of modern symbolic logic. Symbolic logic relies on abstract principles that are difficult, at first, to connect with common sense, and they are particularly complicated. Nevertheless, once a system of symbolic logic has been constructed, it can embrace a vast array of theorems, all derived from only a few basic assumptions, and suitably elaborated, it can supply a clear, unequivocal, and mechanical way of determining what counts as a proof within the system and what doesn't. The Industrial Revolution convinced many logicians from the nineteenth century onward that complicated mechanical systems are truly worth building” (Shenefelt and White 2013: 205-206).

Hints of the complicated story of computation

It appears that the history of computation, just like logic itself, is influenced by environment, politics, and technology.

First off, as we've seen, logic itself was integral to computer science:

“Just as machines first gave rise to symbolic logic in the nineteenth century, so, in the twentieth and twenty-first centuries, symbolic logic has given rise to a new generation of machines: digital computers, which are essentially logic machines whose programming languages are analogous to the symbolic languages invented by logicians” (Shenefelt and White 2013: 206).

One individual responsible for radical new developments in the theory of computation was Alan Turing. Turing's contributions to mathematics, logic, and computer science are difficult to summarize, and I mentioned only a few of them in Lesson 3.2. Nonetheless, here's a refresher of Turing's insights:
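Since that refresher can't be reproduced here, the following is a minimal sketch in Python (a toy of my own devising, not a reconstruction of Turing's 1936 machines) of the kind of device Turing imagined: a finite table of rules that reads and writes symbols on an unbounded tape. This particular machine adds one to a binary number.

```python
# A minimal Turing machine: a finite rule table plus an unbounded tape.
# This toy machine adds 1 to a binary number written on the tape.

BLANK = " "

# (state, symbol read) -> (symbol to write, head movement, next state)
RULES = {
    ("inc", "1"): ("0", -1, "inc"),    # 1 plus a carry is 0; keep carrying left
    ("inc", "0"): ("1", 0, "halt"),    # 0 plus a carry is 1; done
    ("inc", BLANK): ("1", 0, "halt"),  # ran past the leftmost digit: new digit
}

def run(tape_str):
    tape = dict(enumerate(tape_str))   # a sparse, effectively unbounded tape
    head = len(tape_str) - 1           # start at the rightmost digit
    state = "inc"
    while state != "halt":
        symbol = tape.get(head, BLANK)
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape.get(i, BLANK) for i in range(min(tape), max(tape) + 1)).strip()

print(run("1011"))   # 11 in decimal -> prints 1100 (i.e., 12)
print(run("111"))    # 7 in decimal  -> prints 1000 (i.e., 8)
```

The striking part of Turing's insight is that a single machine of this kind, handed the right rule table as part of its input, can simulate any other; that universal machine is, conceptually, the general-purpose computer.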

 

 

Colossus, the world's first large-scale electronic computer.

Turing's codebreaking work at Bletchley Park was, of course, part of the effort that produced the world's first large-scale electronic computer, Colossus, in 1944 (the machine itself was chiefly designed by the engineer Tommy Flowers). However, this computer was special-purpose—it was for cryptanalysis. But once World War II was over, theorists wanted to make a general-purpose electronic computer.

Enter John von Neumann. Von Neumann was a mathematician, physicist, and computer scientist who was, by all accounts, (frighteningly) brilliant. As a child, he would memorize entire pages of the telephone directory, to the amusement of his parents' houseguests. As the first general-purpose computers were being built, other researchers would ensure that the computer's calculations were correct by checking with von Neumann (who would do the calculations in his head!). In fact, he was so brilliant that we might actually give him too much credit for some of his ideas (since others were also involved; see Ceruzzi 2003: 50).3

In any case, whether he deserves the whole credit or not, we know that von Neumann was important to the development of the stored program principle. The general idea is to store both the programs and data in a common high-speed memory (or storage). This allowed programs to be executed at electronic speeds, and it allowed programs to be operated on as if they were data. This second innovation is what allowed the blossoming of high-level languages like Python and Java. Computers that follow these general architectural guidelines are sometimes called "von Neumann machines."
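To make the stored-program idea concrete, here is a minimal sketch (an invented toy, not von Neumann's actual design) in which one memory holds both the instructions and the data they operate on, and a simple fetch-decode-execute loop runs the program:

```python
# A toy stored-program machine: a single memory array holds both the
# instructions and the data, and a fetch-decode-execute loop walks through it.

# Each instruction is an (operation, operand address) pair.
memory = [
    ("LOAD", 7),       # 0: put the value at address 7 into the accumulator
    ("ADD", 8),        # 1: add the value at address 8 to the accumulator
    ("STORE", 9),      # 2: write the accumulator back to address 9
    ("HALT", 0),       # 3: stop
    None, None, None,  # 4-6: unused
    5,                 # 7: data
    37,                # 8: data
    0,                 # 9: the result will go here
]

def run(memory):
    acc = 0                    # accumulator register
    pc = 0                     # program counter
    while True:
        op, addr = memory[pc]  # fetch and decode
        pc += 1
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            # A program could just as easily STORE over its own instructions:
            # code is simply data living in the same memory.
            memory[addr] = acc
        elif op == "HALT":
            return

run(memory)
print(memory[9])   # prints 42
```

Because instructions live in the same memory as data, programs can read, generate, and rewrite other programs, which is exactly what assemblers, compilers, and high-level languages such as Python and Java depend on.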

Was von Neumann connected at all with the field of logic? Yes—and in a stunning way. He had been a research assistant to David Hilbert, the very man who posed the problem that Frege had already been working on, that Russell continued to work on, and that inspired Turing to turn his attention to mathematical logic. It's all connected.4

Also, I would be remiss if I didn't mention the pioneering work of Claude Shannon, whose master's thesis(!) laid the groundwork for digital circuit design (he would go on to found information theory a decade later). In the resulting 1938 paper, he showed that switching circuits could compute anything that Boolean algebra could express. The interested student can check out this helpful video.

Left to right: Julian Bigelow, Herman Goldstine, J. Robert Oppenheimer, and John von Neumann at the Princeton Institute for Advanced Study, in front of MANIAC I.

And so this is how logic, which itself was influenced by environment, politics, and technology, helped to bring about the digital revolution. But logic alone did not make the digital revolution possible. It seems that politics and technology also played a role. With regard to politics, Jim Holt reminds us that the digital universe and the hydrogen bomb were brought into existence at the same time, by the same people. In fact, the first task of MANIAC I was to do the calculations for the hydrogen bomb (see Holt 2018, chapter 9). War, an extension of politics according to Carl von Clausewitz, was the catalyst. Moreover, it appears that without this heavy government investment (which founded and continually funded the research of countless computer science departments across the country, etc.), the computing revolution would not have happened (see Mazzucato 2015, chapter 3). Lastly, with regard to technology, developments in lithography (which is a printing process) enabled computer scientists to develop better integrated circuits, which affects everything from computational power to the competitiveness of the computing industry.5

It boggles the mind that so many things had to fall into place for the computing revolution, each with its own story. You at least now know one of those stories: the story of logic. Having said that, there's no guarantee that this story has a happy ending...

 

 

 

The Threat

In his 2014 book Superintelligence, Nick Bostrom lays out his case for why AI might be an existential risk to humankind. He does this by describing several problems that might arise and for which we are not preparing. For example:

  1. The value alignment problem has to do with ensuring that when an AI obeys one of our commands, it does not do so by exploiting a loophole or otherwise accomplishing the task in a way that goes against its human master's values.
  2. The motivation selection problem involves the worry about what sort of motivational states the AI will have; the upshot is that we should begin attempting to build "friendly" AI.

To get into the issue of AI as an existential risk is far beyond the scope of this course. This is lamentable since global catastrophic risks are some of my favorite things to think about—weird, I know. Nonetheless, here I'll simply say: 1. This is a real problem; and 2. The clock is ticking. Enter logic. In his chapter of Possible Minds (2020), Stuart Russell, who has worked on the synthesis of first-order logic and probability theory, discusses the value alignment problem. After discussing the risks behind AI, he gives a possible solution: expressing human problems in a purely formal way and then designing AI to be solvers of these carefully specified problems. This means, of course, re-defining what AI is; but Russell thinks this is necessary for our survival. In short, we must not merely focus on problem-solving but on building machines that are provably in the interest of humans. And the primary tool for this would be—you guessed it—a formal language.6

 

 

 

That's it!

You've made it. This puzzle is solved. Go solve some more.


 

Footnotes

1. We also mentioned that Frege's work, along with that of Russell and Hilbert, was what inspired Alan Turing to study mathematics more rigorously.

2. Interestingly, Hofstadter believes that our cognitive capacity to construct categories and the capacity for self-reference are what make consciousness possible; see his (2007) I Am a Strange Loop.

3. For a great overview of the time period and von Neumann's multiple roles in it, from the birth of general-purpose computers to nuclear strategizing, see Poundstone (1993).

4. Turing almost became a research assistant to von Neumann, who was nearly a decade Turing's elder. Turing, however, opted to do the patriotic thing and went back to the UK in 1938. There was, of course, a world war brewing, and Turing played an important role in ensuring his country was on the winning side.

5. I hasten to add that even simple AND and OR logic is useful in exploitation, i.e., hacking, and that hacking (in a way) boosts software security. This is because hackers find holes and vulnerabilities in existing software, which prompts firms to patch those holes and fortify their products. This creates a cycle of tit-for-tat with one side improving their exploitation techniques and the other their software (see Erickson 2008: 451-52).

6. The interested student can find Stuart Russell's essay in Brockman (2020, chapter 3).