
The AI Debate: A Primer

Artificial intelligence (AI) is now unavoidable.

If it isn’t someone telling you what great investment opportunities the recent advances in AI have opened up, it’s someone else warning you of the serious threats posed by this technology. The Terminator (1984), starring Arnold Schwarzenegger, is almost 40 years old. But the idea of computers running out of control, or even of AI-powered robots wiping out the human species, has now acquired new urgency.

Then, there are all the actual news reports about AI. For example, during just the past few years, such major corporations and hi-tech players as IBM, Microsoft, Apple, Google, Toyota, and Elon Musk have all announced major investments in AI.[1] Several national governments—above all, China, but also the US, the UK, Canada, and others—have followed suit.[2]

This year alone (2023), news of the stunningly successful writing program ChatGPT broke in the early spring. A flood of new AI-related investment then overtook Wall Street, which in turn produced a sharp spike in the S&P 500 by the summer.[3] And then, to cap things off, in the fall the Biden administration issued a voluminous executive order creating a massive new regulatory regime governing all aspects of AI.[4]

Clearly, something new and important is afoot. For this reason, Expensivity felt this was an opportune moment to provide our readers with an introductory but relatively comprehensive guide to the claims and counter-claims surrounding AI today.

The purpose of this primer is to help you separate the wheat of outstanding investment opportunities from the chaff of media-driven hyperbole.

Above all, my aim in this primer is not to tell you what to think about the public debate concerning AI. Rather, it is to provide you with enough information about the various significant aspects of this controversy to enable you to make up your own mind.

A Bit of History

It is usually a good idea to have some grasp of the history of any subject matter you are attempting to master. So, let us begin with a thumbnail sketch of the main phases in the development of the technology that is creating such a stir today.

The history of AI may be usefully divided into three distinct phases.

First was the attempt in the early 1950s to write programs that mimicked the aspect of human intelligence we think of as “deductive reasoning” or “logic.”

Using the physical states of individual loci on a computer chip corresponding to settings of “on” and “off,” low-level “machine language” programs could be written (in binary notation) to represent the natural numbers, as well as Boolean algebra functions. Then, with the help of assembly and other, higher-level languages, scientists could program digital computers to simulate aspects of human reasoning.
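
To make this concrete, here is a minimal sketch (in Python, purely for illustration, not the machine languages of the 1950s) of the underlying idea: natural numbers are just patterns of on/off bits, and Boolean functions are operations on those patterns.

```python
# A minimal sketch: numbers as bit patterns, Boolean functions as bit operations.

def AND(a: int, b: int) -> int:
    return a & b                       # bitwise AND on the underlying bits

def OR(a: int, b: int) -> int:
    return a | b                       # bitwise OR

def NOT(a: int, width: int = 4) -> int:
    return ~a & ((1 << width) - 1)     # bitwise NOT, masked to a fixed word width

x, y = 0b0110, 0b0011                  # the numbers 6 and 3, written in binary
print(bin(AND(x, y)))                  # 0b10   (6 AND 3 = 2)
print(bin(OR(x, y)))                   # 0b111  (6 OR 3 = 7)
print(bin(NOT(x)))                     # 0b1001 (NOT 6, in a 4-bit word, = 9)
```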

The developers of this first phase of AI assumed that the human brain is a “symbol system” just like the digital computer. According to this view—known as the doctrine of “computationalism”—the brain is thought of as the “hardware” that runs the “software” of the mind. Later, this initial phase came to be known as “Good Old-Fashioned Artificial Intelligence” (GOFAI).

One mark of GOFAI was the effort to reduce all of human behavior to a sequence of deductive inferences. Deduction is a truth-preserving argument that moves from a set of statements called “premises” to a statement called the “conclusion,” guaranteeing the truth of the conclusion if the premises are true.

If the movement from statement to statement is indeed truth-preserving, then the argument is said to be “valid.” If, in addition, all the premises are true, then the argument is called “sound.” (Example of a valid argument that is not sound: If all swans grow feathers; and Taylor Swift is a swan; then Taylor Swift grows feathers.)
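
To see what “truth-preserving” means in practice, here is a minimal sketch (in Python, purely illustrative) that checks a simple propositional argument form by brute force: the form is valid just in case no assignment of truth values makes every premise true and the conclusion false. Soundness is then the further, factual question of whether the premises really are true.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q                # material implication: "if p then q"

def is_valid(premises, conclusion) -> bool:
    """Valid iff no truth assignment makes all premises true and the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# Modus ponens: premises "if P then Q" and "P"; conclusion "Q".
premises = [lambda p, q: implies(p, q), lambda p, q: p]
conclusion = lambda p, q: q
print(is_valid(premises, conclusion))  # True: the form is truth-preserving
# Whether a particular argument of this form is *sound* depends on the facts.
```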

The second phase of AI development, known variously as “parallel distributed processing” (PDP), “connectionism,” or “artificial neural networks (ANNs),” had its roots in the 1960s, but only really got under way in the late 1980s.

In this phase, the top-down attempt to explicitly represent numbers and logic gates by means of symbols was abandoned in favor of a bottom-up, statistical approach. To arrive at their conclusions, ANNs (the preferred term today) employ induction instead of deduction.

Inductive reasoning moves from similar individual cases to some general rule or category which represents what the individual cases have in common. Induction can be used to formulate a general rule governing the individual cases and to correctly classify future cases according to the rule. (Example: A farmer fed his chicken yesterday and the day before that and the day before that . . . Therefore, the farmer always feeds his chicken—which is why, when the chicken sees the farmer today, it goes running to meet him, thinking it will be fed.)

Artificial neural networks specialize in pattern recognition. In a nutshell, they are “trained” by presenting them with a large number of inputs that are each slightly different but, from the human point of view, belong together in some category. A properly designed ANN has the ability to formulate general rules and to correctly classify new input on the basis of this “training.”

The training also involves a human observer’s providing feedback to the machine upon its performance—basically telling it how and where it has gone wrong, until it gets it right.

While this feedback can itself be largely automated (by the “back-propagation” algorithm), everything ultimately depends upon the human being’s selecting a target for the ANN and supplying the labeled examples that enable it to home in on that target. The human being is an indispensable element of this process, at least with respect to the kinds of machines we have today.
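
The following is a minimal sketch (in Python with NumPy, not any particular historical system) of the training loop just described: a single artificial “neuron” learns the logical OR pattern from labeled examples. The human contribution is the choice of target labels; the repeated weight adjustments are the automated part of the error correction.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
y = np.array([0, 1, 1, 1], dtype=float)                      # human-chosen targets (OR)

rng = np.random.default_rng(0)
w = rng.normal(size=2)       # connection weights, initially random
b = 0.0                      # bias
lr = 0.5                     # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    pred = sigmoid(X @ w + b)            # the network's current guesses
    error = pred - y                     # how far each guess is from its target
    w -= lr * (X.T @ error) / len(y)     # nudge the weights to shrink the error
    b -= lr * error.mean()

print(np.round(sigmoid(X @ w + b), 2))   # approaches [0, 1, 1, 1] after training
```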

Artificial neural networks began to replace GOFAI systems during the 1990s. However, by the second decade of the twenty-first century they seemed to be bumping up against inherent limitations, both in processing speed and in the amount of raw data required for training.

After roughly 2015, however, we entered the third phase of AI development, which essentially consists of the marriage of ANNs with “Big Data.”

Technological improvements, from “supercomputers” to “deep learning” (multi-layer) ANNs, have contributed to the success of this third wave of AI. However, the most important factor has been the ease with which enormous amounts of data can now be made available to train ANNs with the help of the Internet, Wikipedia, and other vast databases.

As it has turned out, this large increase in the size of the training sets makes a huge difference to performance. Nevertheless, the question remains: Is third-wave AI, with its “large language models” and “deep learning” capabilities, the key to automating authentic, humanlike intelligence?

And even if it isn’t quite that, may it not still be the key to the hi-tech based economic future we have all been waiting for?

In the sequel, I will address the prospects of AI from four separate points of view: commercial/economic, technical/scientific, moral/political, and philosophical.

While these viewpoints are all interrelated—and, as I hope to show, the last one has a crucial contribution to make to each of the other three—I believe the public discussion of AI will be best served by keeping them as distinct as possible.

Commercial/Economic Point of View

“We believe AI has the potential to be among the most disruptive secular growth trends of all time.” So writes Bryan Wong, a Vice President with Oberweis Capital Investment, in a recent online article that strikes a neat balance between enthusiasm and prudence.[5]

The main reason Wong cites for his bullishness on AI is his conviction that the new technology will not just lead to a string of specific inventions, however useful and even awe-inspiring. Rather, he is convinced it will call into being an entirely new commercial world.

Within that world, Wong believes, technologies will arise with the potential to change our everyday lives beyond recognition, in a way comparable to the impact that the Internet, social media, and smartphones have already had.

Obviously, we are talking here about more than just a sweet investment opportunity. If Wong is right, what is at stake is the wholesale transformation of the economy (yet again).

The author goes on to explain that in the twenty-first century economic growth is no longer driven by labor (increased skill or manpower) or even by capital (increased investment) nearly so much as it is by the progress of technology. He writes, “The potential of AI is enormous because it promises to enhance productivity significantly.”[6]

In summary, Wong argues that “this major technological shift offers the potential for significant value creation for companies and investors.”[7]

However, I would be remiss to quote Wong only in his enthusiastic, salesman mode. Throughout the piece, he hedges his bets in various ways, repeatedly noting the need for caution.

For example, in one passage, he characterizes the effusions of an AI entrepreneur as “a bit grandiose.”[8] Elsewhere, he admits it is probably “too soon to evaluate [AI’s] real economic impact.”[9]

It is interesting that Wong’s actual advice to his readers is on the modest side. He implicitly acknowledges that AI’s near-term impact is unlikely to be as game-changing as some people imagine. Nonetheless, he has some useful advice to give.

Two of his suggestions are to “avoid businesses that are easily replaced by AI” and “avoid short-term hype.”[10]

Of course, one might be forgiven for thinking that what an investor wants more than anything from an expert like Wong is detailed assistance in distinguishing at-risk businesses from the rest and in telling the difference between hype and the truth. It goes without saying that investors don’t want to throw away their money. But neither do they want to miss out on the chance of a lifetime. So, it is a fine line that we must walk here.

And to walk it successfully, we need to dig deeper into the underlying factors driving the AI bandwagon, which can best be done, I believe, by looking at the problem from the other points of view mentioned above.

But first—as a prelude to my discussion of the technical/scientific side of AI—I would like briefly to rehearse the background in popular culture against which the scientists’ words and work have been interpreted by the public.

Turning to Hollywood as a mirror of public sentiment, let us take the depiction of the friendly and adorable robots, R2-D2 and C-3PO, in George Lucas’s 1977 film, Star Wars, and the rebellious, ultimately malevolent computer, HAL, in Stanley Kubrick’s 1968 film, 2001: A Space Odyssey, as culminations of two related streams of thought.

Stream 1: the idea of a manmade, humanoid servant or slave.

The oldest form taken by this idea is probably the Golem, a Jewish oral legend that dates back to Antiquity in the Middle East and was committed to writing during the sixteenth century in Eastern Europe. The Golem is a creature that a sorcerer conjures by shaping clay (as God did) into a thing made in his own image.

In this line of descent, we may mention the creature imagined by the nineteen-year-old Mary Shelley in her novel, Frankenstein, published in 1818, and Karel Čapek’s “robots” from his 1920 play, R.U.R. (Rossumovi Univerzální Roboti) [Rossum’s Universal Robots; the Czech word “robot” deriving from the pan-Slavic root meaning “to work”—ed.].

Stream 2: the idea that human thinking is at bottom a kind of computation.

This doctrine is the assumption upon which the concept of AI is based, namely, the thought that human intelligence can in principle be reduced to algorithmic form and ultimately mechanized.

This vision was already present in the work of various Enlightenment thinkers, notably the German Gottfried Wilhelm Leibniz in the early eighteenth century. However, it did not begin to take concrete shape until the work of the Englishman Charles Babbage, who designed a prototype of what he called a “difference engine” during the 1820s.

This second lineage, equating human thought with computation, reached its culmination in 1936 in a paper by Alan Turing proving the concept of what later came to be known as a “universal Turing machine.”

Or, perhaps, one should rather cite Turing’s 1950 paper in which he advanced the idea of what he called “the imitation game” and which is now known as “the Turing test.”

In a nutshell, the Turing test pits a human against a computer in an open-ended conversation. If the human believes the entity that he or she is conversing with (which is screened from view) is another human being, whereas it is really a computer program, then the program is said to have “passed the Turing test.”

For the past 70-odd years, the Turing test has been in the back of the minds of many, if not most, computer scientists—and not infrequently in the forefront of technical discussion, as well.

There is a long history of scientists predicting that the programs they were working on would pass the Turing test in the not-too-distant future.

For example, to focus just on recent predictions, the text and speech recognition software pioneer Ray Kurzweil claimed in 2002 that computers would surpass human beings in intelligence by 2029.[11]

Kurzweil has still got six years left to go, but somehow I doubt there are many bets on his prediction still being placed with Ladbrokes.

Similarly, Ilya Sutskever—co-founder of OpenAI, the company that developed ChatGPT—has opined that there is a serious possibility that AGI (“artificial general intelligence,” meaning human-like intelligence) will be invented in the “near term.”[12]

In the same vein, Geoffrey Hinton—who won the 2018 Turing Award (the so-called “Nobel Prize for Computing”) for his early work on the development of ANNs—has advanced the view that society ought to stop training human radiologists, given the recent advances in pattern recognition software.[13]

Finally, Sundar Pichai, CEO of Google, has stated that AI will turn out to be a more significant invention for humanity than the generation of electricity or—going all the way back to Homo erectus here—the mastery of fire![14]

Next, consider the string of high-profile contests which, while not even close to Turing tests, nevertheless have generated tremendous favorable publicity for AI.

I am thinking especially of IBM’s Deep Blue program, which in 1997 beat the reigning world chess champion Garry Kasparov; IBM’s Watson, which came in first in a 2011 competition with two human champion players of the TV game show Jeopardy!; and DeepMind’s AlphaGo program, which in 2016 defeated the South Korean Go master Lee Sedol.

As for the film 2001: A Space Odyssey, Director Stanley Kubrick interviewed IBM engineers to make sure HAL was plausible. Just think of the optimism—perhaps “hubris” is a better word—that those IBM experts must have communicated to Kubrick for him to have named the 1968 film “2001”!

Star Wars, unlike 2001, was not, of course, intended to be taken seriously. Nevertheless, audiences in 1977 accepted the premise of those cute, all-too-human androids without batting an eyelash.

And think of where we are today, some 55 and 46 years, respectively, after 2001 and Star Wars were made. It is hardly surprising that the person in the street in 2023 is willing to take the prospect of human-like artificial intelligence perfectly seriously.

However, against this backdrop of gung-ho optimism, the reality has been decidedly more modest. In fact, a number of recent efforts to develop advanced AI programs have begun to be quietly shelved.

For example, in 2018 Facebook abandoned a major project to develop a new, AI-based, personal assistant.[15] Most embarrassingly, IBM’s Watson Health—basically Watson repurposed from a Jeopardy!-playing device to a personal assistant for physicians, designed to peruse the medical literature and make diagnoses—was reported to have missed several cancer cases and generally to have worked poorly and been unsafe.[16] As a result, IBM lost a number of high-profile contracts.

So, what gives? Why are these state-of-the-art computer programs being pulled from the market and their inventors back-pedaling on their bullish claims?

We will look at some of the reasons for believing in the probable inherent limitations on AI in the last section, devoted to philosophical analysis. But for the present, let us listen to what some of the scientists and engineers involved in AI’s recent impressive achievements have had to say.

Technical/Scientific Point of View

This would be a good place to introduce some useful terminology to help us to think about the complexities of the technical side of AI.

First, there is a widely recognized distinction, which goes back decades, between “strong” and “weak” AI. Here, “strong AI” refers to the instantiation (full realization) of human-like cognitive states—including conscious states—in a non-biological medium, meaning, in practice, a digital computer.

“Weak AI,” on the other hand, refers to the simulation (mere imitation) of human-like behavior by a computer.

Next, we need to make a second distinction within weak AI—recently suggested by a Norwegian physicist—between “artificial general intelligence” (AGI) and “artificial narrow intelligence” (ANI).[17]

Here, AGI denotes a program (currently non-existent) which would be able in theory to simulate human-like intelligence in all its sophistication and complexity.

ANI refers to machines that are highly tuned to some particular activity analogous to a specific human ability. They may far exceed human capabilities in this one, circumscribed domain, while wholly lacking any ability to perform even the simplest tasks outside of it.

Nearly everyone agrees that current AI programs are endowed with ANI. The main question up for debate is whether this is only an inessential, temporary limitation, or whether there is some deeper reason why the gap between the two realms may be permanent.

The first distinction between strong and weak AI can only be justified by means of philosophical argument. For this reason, I will postpone discussion of it until the last section. The second distinction, then—between AGI and ANI—will occupy us for the rest of this section.

Many scientists are willing to admit that, while the current ANN plus Big Data approach to AI has racked up many recent successes, still it faces inherent limitations. Most of the remaining problems boil down to the question of how to instill a machine with common sense.

More specifically, even programs that are performing with a high degree of reliability may still fall victim to gross errors that exhibit a type of “stupidity” that no human being, even a small child, would ever be guilty of.

Many examples of such mistakes could be adduced. Below are two of my favorites.

First, there is the case of pattern recognition software developed by the Chinese government to monitor pedestrians and issue citations for jaywalking. A famous businesswoman was surprised to receive a citation in the mail, even though she had never jaywalked. Upon investigation, it turned out that the program had confused the businesswoman herself with a large poster of her face on the side of a bus![18]

Second, even for the relatively simple problem of chess, the distinguished British physicist Roger Penrose recently demonstrated similar limitations exhibited by the world’s premier chess program (as of 2022), called “Fritz.”[19]

Penrose devised an endgame position that—to a human player—is an obvious draw. Although the position is a possible one, in the sense that it can be attained without violating any rule of chess, it is rather peculiar and not one that would be likely to occur in real play. Presented with this novel position, Fritz reverts to the level of patzer (in chess parlance, a poorly playing amateur). Here is what happened.

Penrose (playing white) played to maintain the draw, but Fritz, not knowing what to do, blundered, sacrificing a bishop necessary to maintain the drawn position for black. In so doing, Fritz lost an essentially drawn game.

In light of these examples, it seems obvious that even the best current AI systems are highly brittle. Behind their seemingly brilliant performance lurks a surprising intellectual dimness.

In other words, our current crop of AI programs are so many idiots savants. They know how to do one thing, under the usual conditions. And that’s it.

They are prime examples of ANI, and many experts now argue that there is little reason to believe we are ever going to get to AGI by traveling farther down the path we are on.

What would it take to make a program more flexible? Not just more reliable, but more adaptable, more able to change its behavior appropriately when it encounters unexpected difficulties? In short, more able to understand what it is doing?

There are roughly three theories on the table.

First, there are those who believe that something like human common sense can still be imparted to systems broadly similar to the ones we have today. Their idea is that to get flexible common sense, all we need is to get past what has been called computer science’s “blank slate” problem.[20]

This means that AI programs should not have to learn everything from scratch. If the goal is to simulate human intelligence, and human beings come into the world already equipped with a fair amount of innate knowledge of the world (which they do), then computers ought to be able to be given that advantage, as well.

More generally, humans rely on a great deal of background information and situational context, whether innate or learned. Computers will need to know a lot more than they do at present about seemingly unrelated aspects of the world if they are to break out of ANI into the more flexible realm of AGI. Moreover, an important part of that contextual information is going to be knowledge of causation, that is, of what sorts of things influence which other things.[21]

This brings us to the second important proposal on the table for fixing AI. In a book that is a must-read for anyone interested in this topic, computer scientist and entrepreneur Erik J. Larson has expressed grave doubt about our ever being able to ramp up current systems to the level of general human intelligence.[22]

Larson’s basic argument is simple. Computer programs as we currently know them simulate two aspects of human intelligence. GOFAI mimicked deductive reasoning, while artificial neural networks imitate inductive reasoning. However, as Larson demonstrates at length, human beings are in fact far more reliant upon a third type of reasoning, namely, “inference to the best explanation (IBE),” or, as it is also known, “abduction” (the latter term is due to Charles S. Peirce). (Example: If I come across a wolf’s track in the woods, I search for the most likely cause that would explain this effect—a wolf passed this way some time ago.)
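
For flavor only, here is a deliberately crude sketch (in Python, with invented numbers) of abduction treated as hypothesis scoring: rank candidate explanations of an observation by their prior plausibility and how well they would account for it, then pick the “best.” Larson’s point, of course, is that genuine abduction resists exactly this sort of reduction; the sketch shows what the reductive caricature looks like, not the real thing.

```python
observation = "a wolf-like track in the woods"

candidates = {
    # hypothesis: (prior plausibility, how well it would explain the observation)
    "a wolf passed this way":          (0.30, 0.95),
    "a large dog passed this way":     (0.40, 0.60),
    "someone faked the track by hand": (0.05, 0.90),
}

def score(prior: float, explanatory_fit: float) -> float:
    return prior * explanatory_fit     # a simplistic stand-in for "best"

best = max(candidates, key=lambda h: score(*candidates[h]))
print(best)   # "a wolf passed this way", on these made-up numbers
```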

Although Larson argues persuasively for the central importance of this third form of inference as the foundation of common sense, he does not hold out much hope of our being able to reduce abduction to algorithmic form.

If his argument is sound—and, as we shall see below, he is not alone: other philosophers have long been arguing in a similar fashion—then the prospects of our ever closing the gap between ANI and AGI are slim to none.

Finally, the third proposal on the table is that computers be completely redesigned in such a way as to incorporate more realistic models of how brains really work. What does this mean? It may mean at least two different things.

First, it may mean that we need to move beyond both the GOFAI and the ANN approaches altogether, instead modeling the time evolution of the brain’s measurable functional activity (using EEG data as a proxy) by means of systems of differential equations, just as we do in other areas of science. This is often called the “dynamical systems theory” (DST) approach.[23]

First and foremost, DST is a branch of mathematics. It is descended from the “qualitative dynamics” developed by Henri Poincaré around the turn of the twentieth century to handle the “three-body problem,” which standard Newtonian celestial mechanics could not model with sufficient exactitude.[24]
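
For readers who have not met DST before, here is a minimal, generic sketch (in Python with NumPy) of the kind of object it studies: a nonlinear system of differential equations, here the classic Lorenz system, chosen only as a familiar example of complex dynamics and not as a model of the brain, stepped forward in time with a simple Euler rule.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x),      # dx/dt
                     x * (rho - z) - y,    # dy/dt
                     x * y - beta * z])    # dz/dt

dt = 0.005
state = np.array([1.0, 1.0, 1.0])
for _ in range(10_000):                    # 50 units of simulated time
    state = state + dt * lorenz(state)     # Euler step: follow the local flow

print(state)   # the trajectory wanders chaotically but stays bounded
```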

DST is the basic approach used today whenever cognitive scientists find themselves confronted with systems exhibiting complex, nonlinear (“chaotic”) behavior. As a prominent psychologist and philosopher team have recently pointed out:

Nature . . . is much less attached to simple linear and nonlinear functions than are the researchers trying to make sense of nature. It presents us with numerous phenomena in which the changes of state are nonlinear in complex ways.[25]

To be sure, DST is controversial. One of the main complaints leveled against the DST approach by proponents of GOFAI and ANNs is that the cognitive functioning it models is generally located only at the level of surface EEG activity. Critics argue that DST provides no account of the underlying neural mechanisms causing that activity. For this reason, DST models are often derogated as “phenomenological,” as opposed to “explanatory.”

The other approach to a more brain-based AI is even newer and more controversial. However, it possesses the virtue of responding directly to the charge that DST provides merely phenomenological models.

I am thinking of certain recent attempts to model cognition via physical theories of the collective action of extended nerve cell assemblies—groups of neurons whose coordinated activity may extend over broad regions of the cerebral cortex. Once again, this approach comes in two flavors, classical and quantum, based on nonequilibrium thermodynamics[26] and quantum field theory,[27] respectively.

Now, one might be forgiven for wondering how on earth modeling the brain’s activity using differential equations—as though that organ were just another, ordinary physical system—might throw any light on cognition.

After all, the philosophical doctrine of computationalism was originally introduced precisely because it appeared to many investigators that the brain produces knowledge by means of processing symbols of one sort or another. And for that, we need digital computers—whether GOFAI-inspired or in the form of ANNs—don’t we?

The answer is: No, we don’t.

And the reason why, according to this last group of researchers, is because the brain is not in the business of “processing information” or “data” or anything else, because it does not “represent” the world in symbolic form in the first place.

But, as one investigator has poignantly asked, “What Might Cognition Be, If Not Computation?”[28] His answer: an “epistemic engine,” dynamically coupled to its environment.

According to this idea, the brain interacts directly with the environment by establishing causal relations between external features of the environment and the activity of its own, internal nerve cell assemblies.

If you like, the rallying cry of this approach to cognitive science might be:

Down with the Turing machine! Up with the Watt governor!

Obviously, there is no space here to go into all these proposals in any detail. Instead, in the final section, I will discuss some of the philosophical considerations that might be advanced in favor of some of them and against others.

First, though, let us take a quick look at the moral/political viewpoint on AI.

Moral/Political Point of View

Moral and political concerns related to AI may be divided into two broad categories.

First, there are risks and other concerns that are clearly relevant to the kinds of AI machines we have today or may reasonably expect to have in the near future.

Second, there are concerns that may putatively be raised by forms of AI that do not yet exist and whose future existence cannot be predicted with any degree of confidence.

Examples of the first category might involve discussions about how liability ought to be apportioned for, say, traffic accidents caused by self-driving vehicles, or about how automated drones capable of deciding on their own when to pull the trigger on their target should be held accountable.

Such devices, with their attendant problems, are already here. They present lawyers, insurance adjustors, generals, and just-war theorists with legal and moral dilemmas never faced before.

To be sure, as the drone example already implies, some issues in this category are far more complex, with wider social and political ramifications, than others. An especially concerning example with the broadest imaginable consequences is the presidential executive order mentioned above.

In this document, President Biden writes that:

Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights. My Administration cannot—and will not—tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice.[29]

The red flags raised by this ukase are manifold. To begin with, the language in the document simply assumes the danger of “systemic racism” in AI programs already in use throughout American society. It makes no effort to persuade anyone of the actual existence of this danger. It simply imposes a new, highly intrusive regime of government surveillance upon the country because it can.

For many, the real purpose of the President’s executive order will appear to be to put in place a recipe for the alignment of AI with “progressive” values and agendas.[30] However, setting aside such (one hopes) outlier cases as the recent presidential executive order—and while there will surely be disagreement about precisely how and to what extent AI ought to cause us to revise our liability laws, rules of engagement, and so on—I do not think there will be much dispute about the legitimacy of the questions themselves.

AI as it already exists clearly calls for rethinking many of the moral and political assumptions we have been comfortable with up until now.

I suspect that where moral intuitions are going to differ will be regarding the second category—how seriously we ought to take threats supposedly posed by forms of AI that do not yet exist and perhaps never will.

I am going to call these the “alarmist” scenarios. There are two different sorts of alarmist scenarios, as well.

First, there is the fear that AI may come to be a threat to human beings in a direct physical sense—that the machines might take it into their heads to enslave the human race, if not exterminate it.

There is a whole sub-genre of science fiction literature devoted to this theme, of course. Unfortunately, it may also be found in the thinking of more exalted persons, such as an Oxford philosopher[31] and even a recently deceased, internationally respected American statesman, who publicly wondered:

Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them?[32]

Finally, in the second category, one finds the additional fear, not that the machines will mistreat us, but that we will mistreat the machines!

For example, one professor of communications has recently argued in favor of extending full legal rights to robots,[33] while an edited volume of essays by legal scholars and philosophers raises the question—with a straight face—how robots can best be protected from sexual exploitation by humans.[34]

To be sure, it is easy to make fun of such academic vaporings. Indeed, it is far too easy. These ideas are not really a laughing matter, because they tacitly demean human beings by reducing them to the ontological and moral status of machines.

Certainly, the perception that there is no significant difference between humans and machines seems to be what is driving the enthusiasm in certain quarters for such troubling ideas—too tedious to explain here—as “transhumanism,”[35] “the Singularity,”[36] and “longtermism.”[37]

But if we wish to successfully resist such preposterous notions—and defend the traditional understanding of the specialness (if not sacredness) of human beings—then we will need to do a lot more than mock them. We must pinpoint the precise reasons why they are foolish and worthy of rejection.

And to do that, we will need to stand back and look at the whole subject of AI from a wider and higher perspective—that of philosophy.

Philosophical Point of View

Arguably, the concept that human = machine is the beating heart of AI, in all its guises. You might look far and wide, though, without finding much actual evidence advanced in support of this proposition.

At best, one might hear some variation on the question, “What else could a human being be, if not a machine?,” often accompanied by the sneering addendum, “A soul?,” muttered under the breath.

This form of reasoning has been well described by a former hi-tech industry insider turned renegade:

Standing amazed before this human-created machine, the computer scientist declares it to be our very identity; then, to learn who and what we are, he advises that we study . . . the machine.

This circular idea—the mind is like a computer; study the computer to learn about the mind—has infected decades of thinking in the computer and cognitive sciences.[38]

In short, the equivalence between man and machine is an article of faith, with very little in the way of evidence or sound argument to support it. In contrast, there are many good reasons to question it.

First, there is the issue of context, or background information, already touched upon above. This point has been emphasized by philosopher Hubert L. Dreyfus, both in his 1972 book[39] (with several later editions) and in more recent work.[40]

Dreyfus draws upon Martin Heidegger’s account of how the intelligence of our conscious self (Dasein) grows out of our lived experience of inhabiting a “world” (in a semi-ecological sense) that is centered upon our body.

Dreyfus makes the point that the most fundamental form of human intelligence is what he calls “skillful coping” within this world of lived experience. Higher-level capabilities, he believes, like language, logic, and math, derive from this lower-level, skillful but inarticulable coping. And what cannot be articulated cannot be reduced to an algorithm.

Our lived world is not articulable, either, since it largely consists of our skilled coping. Hence, it, too, is irreducible to algorithmic form. Therefore, according to Dreyfus’s argument, tacit human “background” experience can never be made explicit, and so cannot be rendered into the form of a computer program.

And if all that is so, then the hope that ANI programs may be boosted over the hump into the Promised Land of AGI by adding background information is vain.

I would not want to claim that Dreyfus’s argument from the phenomenology of experience to the impossibility of AGI is rock solid. But I do think it is worth taking seriously. And, to that degree, it is certainly superior to the circular thinking of many of AI’s proponents.

The greatest weakness in Dreyfus’s argument, as I see it, is that it takes no heed of the twofold distinction between strong and weak AI, on the one hand, and AGI and ANI (within weak AI), on the other, which we have already discussed.

For this reason, it seems to me that it is open to a proponent of weak AI to push back against Dreyfus as follows:

Dreyfus, you are right—about strong AI. But I merely seek to simulate human behavior—that is, to implement AGI within weak AI. I am not trying to instantiate Dasein!

Fair enough. So, let us now briefly pursue the prospects for AGI before returning to the question of strong AI.

Basically, three improvements have been proposed as the key to transforming ANI into AGI: background information (part of which might be made innate), knowledge of causation, and abduction (or IBE).

Let us set aside Dreyfus’s strictures against background information as applying mainly, if not exclusively, to strong AI. Once we are clear about what is being claimed—namely, simulation à la AGI, not instantiation—we might allow that providing the present generation of deep learning devices with a good deal more contextual knowledge might indeed improve their performance greatly. That seems a safe enough bet. But would that be enough, all by itself, to get us to AGI?

After everything that has been discussed above, one may be permitted to doubt it. More specifically, it seems questionable whether feeding more information into essentially the same kind of machine would be likely to produce a drastically different result.

What if we added one or both of the other two improvements mentioned a moment ago—knowledge of causation and abductive inference?

Adding knowledge of causation to an ANN’s repertoire does sound quite promising. Is it feasible? It may be.

Judea Pearl, who won the 2011 Turing Award for his pioneering work on Bayesian networks and probabilistic reasoning, has developed over the past two or three decades a new and powerful formal framework which he claims captures the essence of causal and counterfactual reasoning, including reasoning from effects backwards to causes.[41] To many, this sounds a lot like abduction.
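
To give a feel for what Pearl’s framework buys you, here is a toy sketch (in Python, with made-up probabilities) of one of its basic ingredients, the back-door adjustment formula, which estimates the effect of setting X rather than merely observing it by averaging over a confounder Z: P(Y=1 | do(X=x)) = Σz P(Y=1 | X=x, Z=z) P(Z=z).

```python
P_Z = {0: 0.6, 1: 0.4}                     # distribution of the confounder Z

# P(Y = 1 | X = x, Z = z), tabulated for each (x, z) pair
P_Y1_given = {
    (0, 0): 0.10, (0, 1): 0.40,
    (1, 0): 0.30, (1, 1): 0.70,
}

def p_y1_do_x(x: int) -> float:
    """Interventional probability P(Y=1 | do(X=x)) via back-door adjustment."""
    return sum(P_Y1_given[(x, z)] * P_Z[z] for z in P_Z)

effect = p_y1_do_x(1) - p_y1_do_x(0)
print(round(p_y1_do_x(1), 3), round(p_y1_do_x(0), 3), round(effect, 3))
# 0.46 0.22 0.24 -> the confounder-adjusted causal effect on these numbers
```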

And yet, Pearl vigorously denies having any truck with abduction. In fact, he ridicules the very idea of IBE in no uncertain terms. Writing in response to a paper[42] calling for greater attention to abduction in epidemiology, he says:

Another conceptual paradigm which the authors hope would liberate us from the tyranny of DAGs [directed acyclic graphs—ed.] and counterfactuals is [Peter] Lipton’s romantic aspiration for “Inference to the Best Explanation (IBE).” IBE is a compelling mantra, going back at least to Charles Peirce’s theory of abduction which, unfortunately, has never operationalized its key terms: “inference-to,” “best,” and “explanation.”[43]

In short, although Pearl has made what he believes to be the key contribution to the push for AGI—namely, the ability to reduce causal and counterfactual reasoning to algorithmic form—he is the first to admit that what he has achieved is by no means equivalent to abduction in Charles Peirce’s (and Erik Larson’s) sense.

To an outsider, this is beginning to sound like quibbling about words.

On the other hand, Larson has argued powerfully that what is needed is precisely abduction—that nothing less than IBE will suffice to get us all the way to AGI.

Who is right? Is abductive reasoning both sui generis and necessary to human reasoning? Or is it, pace Peirce and Larson, unnecessary? Or is what Pearl has achieved tantamount to abduction, after all, despite his own dismissal of that claim? It is not obvious how to adjudicate among these competing interpretations of the importance of abduction for AGI. However, I would like to make two points.

First, I believe there is a strong case to be made that abduction is a part of what Dreyfus calls our “lived experience” in the world (i.e., the “background”). Why do I say this? Because it would appear to be an essential aspect of what he means by “skillful coping.”

Moreover, it is not clear that anything approximating AGI—which you may recall means behavior indistinguishable from that of a normal human being—is achievable in the absence of conscious experience.

At the very least, I think the burden of proof ought to rest on those who claim that “philosophical zombies” might exist (for that is what a machine that lacked consciousness but simulated human behavior perfectly would be).[44]

The more one thinks about these various ideas, the more intuitively clear it becomes that skillful coping in the lived world, IBE, and our subjectivity are all part of a package deal. Or, perhaps, it would be more correct to say that abduction simply is an aspect of skillful coping—possibly the main aspect—and that consciousness is essential to skillful coping-cum-abduction.

If that is so, then the prospects for AGI are not so rosy, after all—at least, not unless a case can be made out for strong AI, as well, since it is now looking like the two are intimately connected.

The other point I wanted to make was raised recently by the Italian physicist Giuseppe Vitiello, who begins by pointing out that all forms of human reasoning are fallible.

That is:

  • Deduction may rest upon false premises: not all swans are, in fact, white; some are black.
  • Induction may fail if the world surprises us—like Bertrand Russell’s chicken, which runs to the farmer every morning expecting to be fed, until one day the farmer wrings its neck instead.[45]
  • Abduction may also fail if the world surprises us—as in the Indian story of a man who, to please his lover, makes a figure like a wolf’s footprint with his fingers in the dust, only to find the whole village terrified of the wolf.[46]

Since human reasoning ability is essentially fallible, says Vitiello, human intelligence is not formalizable, for an essentially fallible computer program would not be human-like—it would be broken.

Vitiello concludes that, to build a machine with human-like general intelligence, we would need to know how to build a machine with an inherent capability for making mistakes.[47]

Common sense is not about our possessing certain knowledge or about regularities in the world being 100-percent reliable. Rather, it is about our ability to behave reasonably when the world surprises us.

These reflections bring us to the threshold of strong AI.

The locus classicus of this discussion, of course, is John Searle’s “Chinese Room” argument,[48] which is essentially a modern updating of “Leibniz’s mill.”[49]

Leibniz reasoned as follows: If you imagine the brain to be the size of a mill and imagine yourself walking among its turning parts, nowhere will you see anything capable of thinking or feeling. Assuming the brain to be composed of inert matter only, like the parts of a mill, Leibniz believed he had shown that the mind must be immaterial.

Searle took Leibniz’s strategy a step farther. He imagined a sealed room with himself inside. Though knowing no Chinese himself, nevertheless, with the help of a set of books containing Chinese characters, each indexed by an Arabic numeral, together with another set of books containing elaborate rules about how the indexed figures may be combined with each other, Searle might in principle provide written answers in Chinese to written questions submitted to him in the same form.

How? By receiving into the room one set of cards with sentences in Chinese printed upon them as input, consulting the rule books, and writing down the corresponding strings of Chinese characters as indicated by the rule books upon a second set of cards, which he then sends back out of the room as output.
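
As a minimal sketch of what such pure symbol manipulation looks like (in Python, with an invented “rule book” of only two entries), consider the following. The program, like Searle in the room, matches shapes and copies out the paired response; it attaches no meaning to either.

```python
# The "rule book": input strings paired with output strings, nothing more.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",             # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",       # "How's the weather?" -> "The weather is nice."
}

def chinese_room(card_in: str) -> str:
    # Look the incoming card up in the rule book; if nothing matches,
    # hand back a stock string ("Please say that again.").
    return RULE_BOOK.get(card_in, "请再说一遍。")

print(chinese_room("你好吗？"))   # looks fluent from outside the room
```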

To a Chinese-speaker located outside the room, it would seem as though the person inside the room understood Chinese. But Searle does not understand Chinese. To him, the characters on the cards he receives as input and issues as output are entirely meaningless.

For this reason, Searle has no understanding of the conversation that is taking place outside the room, which he is facilitating.

The reader will not be surprised to learn that Searle maintains that computers are like the Chinese room. The whole point of his famous thought experiment was to show that a computer program that functioned the same way he did inside the Chinese room—say, a Chinese version of ChatGPT—would have no more understanding of what it was doing than he did. Which is to say, none at all.

Therefore, while a program might give the appearance of understanding Chinese, in truth it would simply be juggling symbols according to the rules of that language without having the least glimmering of the meaning of those symbols, either separately as individual words or grouped together in sentences.

I believe that, as “intuition pumps,” Leibniz’s mill and Searle’s Chinese room remain as powerful today as they were more than 40—and 300—years ago. However, many people report finding these arguments inconclusive. So, here is one more.

Increasingly, neuroscientists are coming to the conclusion that the brain is far from a passive transformer of stimuli into responses. Rather, according to some, it is better thought of as a “dynamically active organ” that takes pains to seek out information about its environment even when no stimulus is present.[50]

Some investigators are even extending this idea beyond the brain to encompass all the levels of the organism—from macromolecules to cells, tissues, and organs. Everywhere they look, they are finding what one team has called “endogenous activity.”[51]

Now, assuming these observations are correct, the question arises: Could we program a mere machine to mimic such spontaneous behavior?

No doubt, we might be able to devise a machine that could simulate endogenous activity. But here, it seems to me we are up against a fundamental limitation of simulation.

Intuitively, there is a world of difference between simulating fear (which human beings can also do) and actually feeling fear.

By parity of reasoning, it seems to me that there is a crucial difference between simulating endogenous activity in the absence of stimuli and possessing the real capacity for such activity.

But this difference—like the difference between simulating fear and feeling real fear—becomes apparent only from the far side of the weak AI/strong AI divide.

Once again, I must acknowledge that nothing I have said here is conclusive. Those with a prior commitment to the view that they themselves are nothing but machines will be unmoved by my arguments.

But in light of everything that has been discussed above, I think a disinterested observer might well conclude that intelligent human behavior is likely to be too closely associated with conscious experience for us ever to succeed in prizing them apart.

And if AGI really did require the success of the strong AI program—that is, the successful instantiation of human understanding in metallic form, as opposed to its simulation—then it is tolerably clear that strong AI, and so AGI, are simply not in the cards.

Put another way, it is arguable that no machine will ever be able to be taught to act reasonably—by which I mean with common sense—until machines have an existential stake in their own, ongoing existence.

It just seems that circumventing the brittleness of ANI by instilling our computers with the capacity for skillful coping and abduction—not to mention giving them the ability to make mistakes and to generate their own activity endogenously—would require us to endow our machines with genuine concern for their own existence. And that, I submit, is simply not in the cards. Not now. Not ever.

Why?

Because the matter out of which something is made matters to the causal powers the thing possesses. And if that is so, then the doctrine of computationalism is flat-out wrong.[52]

In that case, then we must admit that the brain is simply not a machine—that, in fact, organisms and machines belong to disjoint metaphysical categories.

And I submit that only biological matter possesses the causal powers necessary to give rise, under the right circumstances, to systems that care about their own existence. We call such systems “agents,” and agency—in a broad, biological sense of the term—appears to be another package deal, essentially involving purpose, value, meaning, striving, and other intentional and normative properties.[53]

That is why, in my estimation, no mere machine with nothing at stake existentially—which does not truly care one way or the other about getting things right or even whether it goes on existing—will ever be capable of acting reasonably.

The main reason I cite for this conviction is that nothing made out of inert matter such as silicon, steel, and plastic can care about anything—which, if true, casts grave doubt on the possibility of strong AI and hence (if my arguments above are correct) even on that of AGI.

Call it the “Rhett Butler Problem”:

The trouble with machines is that they just don’t give a damn.

Conclusion

We have covered a lot of ground in this primer. What moral should the practically minded businessman take away from this quick Cook’s Tour of the AI debate?

I think the upshot of our discussion may be summarized in three main points:

  1. The short-term prospects for AI (that is, ANI) are excellent. ANI will likely create many new applications, leading to better and better business opportunities, for some time to come.
  2. The longer-term prospects for genuine AGI—for something that would truly transform human life in unrecognizable ways—are much less certain. So, buyer beware.
  3. Strong AI is almost certainly going to remain in the realm of science fiction.

These conclusions may sound disappointing to some. However, in the long run facing up to the facts is always best for the bottom line.

And, besides, the last point, especially, is not necessarily bad news. For it means, among other things, that we can all rest easy at night without worrying about violating our laptop’s “rights” or how we can survive the “robot apocalypse.”


[1] Hector J. Levesque, Common Sense, the Turing Test, and the Quest for Real AI. Cambridge, MA: MIT Press, 2017; p. 1.

[2] These governments have recently spent approximately the following annual amounts on AI research: China, $13.4 billion (2022); US, $3.3 billion (2022); UK, $1.3 billion (2022); Canada, $344 million (2021). Figures taken from various online sources.

[3] Bryan Wong, “Artificial Intelligence: A Seismic Secular Growth Opportunity,” August 31, 2023.

[4] “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” October 30, 2023.

[5] Bryan Wong, op. cit.

[6] Ibid.

[7] Ibid.

[8] Ibid.

[9] Ibid.

[10] Ibid.

[11] Cited in Gary Marcus and Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust. New York: Pantheon Books, 2019; p. 3.

[12] Ibid.

[13] Marcus and Davis, op. cit.; p. 4.

[14] Marcus and Davis, op. cit.; p. 5.

[15] Ibid.

[16] Ibid.

[17] Ragnar Fjelland, “Why General Artificial Intelligence Will Not Be Realized,” Humanities and Social Sciences Communications, 2020, 7: 10.

[18] Marcus and Davis, op. cit.; p. 27.

[19] Roger Penrose and Emanuele Severino, “A Dialogue on Artificial Intelligence Versus Natural Intelligence,” in Fabio Scardigli, ed., Artificial Intelligence Versus Natural Intelligence. Cham, Switzerland: Springer, 2022; pp. 27–70; the chess problem is described on pp. 28–32.

[20] Marcus and Davis, op. cit.; p. 144.

[21] Marcus and Davis, op. cit.; p. 142.

[22] Erik J. Larson, The Myth of Artificial Intelligence. Cambridge, MA: Harvard UP, 2021.

[23] Tim van Gelder, “The Dynamical Hypothesis in Cognitive Science,” Behavioral and Brain Sciences, 1998, 21: 615–665.

[24] June Barrow-Green, Poincaré and the Three-Body Problem. Providence, RI/London: American Mathematical Society/London Mathematical Society, 1997.

[25] Adele Abrahamsen and William Bechtel, “Phenomena and Mechanisms: Putting the Symbolic, Connectionist, and Dynamical Systems Debate in Broader Perspective,” in Robert J. Stainton, ed., Contemporary Debates in Cognitive Science. Oxford: Blackwell Publications, 2006; pp. 159–185; p. 173.

[26] Walter J. Freeman, How the Brain Makes Up Its Mind. New York: Columbia UP, 2001.

[27] Giuseppe Vitiello, My Double Unveiled. Amsterdam: John Benjamins Publishing Company, 2001.

[28] Tim van Gelder, “What Might Cognition Be, If Not Computation?,” Journal of Philosophy, 1995, 92: 345–381.

[29] “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” October 30, 2023.

[30] For a different wrinkle on the progressive critique of AI, see the following book by an avowedly anti-capitalist “Professor of AI and Justice”: Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale UP, 2021.

[31] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford UP, 2014.

[32] Henry A. Kissinger, “How the Enlightenment Ends,” The Atlantic, June 2018 issue.

[33] David J. Gunkel, Robot Rights. Cambridge, MA: MIT Press, 2018.

[34] John Danaher and Neil McArthur, eds., Robot Sex: Social and Ethical Implications. Cambridge, MA: MIT Press, 2017.

[35] Max More and Natasha Vita-More, eds., The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. Hoboken, NJ: Wiley-Blackwell, 2013.

[36] Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology. New York: Viking Press, 2005.

[37] William MacAskill, What We Owe the Future. New York: Basic Books, 2022.

[38] Ellen Ullman, Life in Code: A Personal History of Technology. New York: MDC, 2017.

[39] Hubert L. Dreyfus, What Computers Can’t Do: The Limits of Artificial Intelligence. New York: Harper & Row, 1972.

[40] See, especially, Hubert L. Dreyfus and Stuart E. Dreyfus, “Making a Mind versus Modeling the Brain: Artificial Intelligence Back at a Branch Point,” in Hubert L. Dreyfus, Skillful Coping: Essays on the Phenomenology of Everyday Perception and Action, ed. Mark A. Wrathall. Oxford: Oxford University Press, 2014; 205–230. (Originally published in 1988.)

[41] Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect. New York: Basic Books, 2018.

[42] Nancy Krieger and George Davey Smith, “The Tale Wagged by the DAG: Broadening the Scope of Causal Inference and Explanation in Epidemiology,” International Journal of Epidemiology, 2016, 45: 1787–1808.

[43] Judea Pearl, “Comments on: The Tale Wagged by the DAG,” International Journal of Epidemiology, 2018, 47: 1002–1004.

[44] A so-called “philosophical zombie” is defined as a creature that looks and acts exactly like a normal human being, but completely lacks “interiority” (internal illumination), “subjective experience,” or “consciousness.”

[45] Bertrand Russell, The Problems of Philosophy. Oxford: Oxford University Press, 1959; p. 63. (Reprinted in 2001; originally published in 1912.)

[46] See Ramkrishna Bhattacharya, “Haribhadra’s Șaḍdarśanasamuccaya, Verses 81–84: A Study,” in Idem, Studies on the Cārvāka/Lokāyata. London: Anthem Press, 2011; 175–186.

[47] Giuseppe Vitiello, “The Brain is Not a Stupid Star,” in Fabio Scardigli, ed., Artificial Intelligence Versus Natural Intelligence. Cham, Switzerland: Springer, 2022; 107–144; see, especially, pp. 131–134.

[48] John R. Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences, 1980, 3: 417–457.

[49] Gottfried Wilhelm Leibniz, Monadology, Section 17 (1714); in, e.g., Lloyd Strickland, ed., Leibniz’s Monadology: A New Translation and Guide. Edinburgh: Edinburgh UP, 2014.

[50] Björn Brembs, “The Brain as a Dynamically Active Organ,” Biochemical and Biophysical Research Communications, 2021, 564: 55–69.

[51] Henry D. Potter and Kevin J. Mitchell, “Naturalising Agent Causation,” Entropy, 2022, 24(4): number 472; see section 2.3, pp. 5–6.

[52] Many philosophers will reject this claim out of hand because computationalism is a subset of a broader doctrine known as “functionalism,” and they are committed functionalists. Functionalists maintain that only structure, not material composition, matters to the causal powers a thing has. They adduce as evidence the alleged fact that the same function may be instantiated (or “realized,” as they say) in quite different material substrates. Whence, the idea that it makes no difference whether pain, say, is instantiated in a carbon-based or a silicon-based system, so long as the system’s component parts are arranged in the right way. But this so-called “fact” is empirically mistaken. If it were correct, why would neuroscientists study the neurons of the sea hare and the squid to learn about human neural functioning?

[53] For more on this topic, see Chapter 4: “What Might an Organism Be, If Not a Machine?” of my 2011 University of Notre Dame dissertation, Teleological Realism in Biology.