The Real Function Of Artificial Intelligence Is To Extend Human Cognition

This essay argues for a categorical deduction first presented in Essay 3, How Consciousness and AI Differ: Gödel-Completeness. The idea is that we know at least one thing about AI: it is an algorithm. Algorithms are systems of logic, so we can expect the Gödel Incompleteness Theorem to apply directly to them in all cases. Conscious minds, by contrast, are not systems of logic, although there is perhaps a bit of fuzziness between subject and statement when we use language to describe them.

Picture a game trail in the mountains somewhere. It exists because animals repeatedly walked that way. Now perhaps a dirt road is graded into the slope where the game trail was, enabling cars and trucks to use the route. Eventually, if enough people want to use the dirt road, it will be paved, and even more people will be able to use it. If the mountainside is our repository of knowledge as a species and the game trail is a way of speaking about things, then AI is Google Maps. It doesn’t get out there and blaze the trail, and it doesn’t know the trail as well as the animals that created it, but it can certainly help us find the shortest path from one place to another in the area.

In this installment, we will start with the story of where Gödel-completeness as a concept comes from and disambiguate the term as used here from Gödel’s own usage. Then we’ll walk back from the theoretical heights of the study of consciousness to derive a robust answer to the question of just what AI is for. Understanding the function of AI as an algorithmic tool akin to Google Search will help explain why the Doomer movement is unlikely to age well, and it can also empower us to make better use of the AI tools we already have.

An Idea’s Story

When I was doing the research for my first peer-reviewed philosophy book, Formal Dialectics, I came across the Gödel Incompleteness Theorems and racked my brain for insight that could help me understand Gödel’s thinking (Daniel, 2018). I was hung up on one notion in particular: the idea that all systems in mathematics could have this property of incompleteness. Like an unopened package on a shelf, it sat in the back of my mind.

Later, the insight that mathematics is a system of language returned, and I realized that Gödel’s theorem might apply to language itself, a broader category than mathematics. If mathematics is a language, and theorems of mathematical logic cannot prove the validity of the system in which they are found no matter how good they are, what would enable the more complex English language, for example, to prove its own validity and/or usefulness? Only the actual usage we put it to, it turns out. And that had all sorts of strange ramifications, not the least of which was the Principle of Lexical Isomorphism, which I formulated as follows: a system (of language) depends for its existence upon something outside itself. At best, the language can be the same shape as the thing it represents.

The insight that falls out here is that language itself is a sort of computational device. It enables us to represent our memories, compressing aspects of them so that we can access them later to aid retrieval. Language is more similar to analog computers than to digital computers: digital computers use language, but only in a narrow, deterministic way, whereas analog computation is fuzzier (Maley, 2020). The idea is that systems (read: arbitrarily denoted sets or groups of objects) are incomplete as a rule because they cannot exist without people to make them up and to use the relationships within them to draw conclusions about the things they are about.

But it goes even further: languages and descriptions only really exist insofar as we use them.

The component of Worldview Ethics that took ten years to formulate is a similar inference, but it moves from the confines of language to the vast horizon of consciousness. Consciousness, like language, is never extant without a subject; i.e., it is always “of” something beyond itself. In some sense it depends upon its environment, just as languages need to be used by actual people to have value.

From early in my interdisciplinary graduate research, it seemed inevitable that biology would have to connect all the way back to the ethics of Aristotle somehow. The thought that there could be an account of consciousness that proved objectively useful without losing the liberal individualist focus on the individual person was there from the beginning as well.

The intervening time has been spent reading Damasio and Carlo Rovelli, studying colloidal chemistry and biophysics to understand metabolism at the granular level, and pushing myself to understand the metabolism of the human body as a whole. All of this seemingly disparate activity contributed to the clarity with which the concepts of Worldview Ethics can now be presented.

That is, all of those things helped to pave the way for a new account of consciousness that has been emerging from contemporary science for some time. It started with the inception of cognitive neuroscience back in the eighties, this idea that the machinery of the brain was somehow responsible for the emergence of the magic of the mind.

Cognitive Neuroscience Investigates Consciousness

Today, cognitive neuroscience, an incredibly interdisciplinary modern field of study, extends all the way from the study of the brain into the study of computer algorithms and beyond. The term “cognitive” refers all the way back to a critique of behaviorist psychology advanced by Noam Chomsky in the late fifties. Chomsky argued that behaviorism treated the subject as a black box, making no effort to explain the actual machinery of cognition. Cognitive psychology and cognitive neuroscience have followed from the efforts of researchers to respond to this criticism by diving in and explaining cognitive processes.

In addition to the arrival of the cognitive perspective, cybernetic technologies have connected the world to an extent we never could have imagined beforehand. When I was writing Formal Dialectics, I was continually in awe of the breadth of the bibliography I was employing. Never before, I thought many times as I wrote, has anyone had such cheap access to such a wealth of knowledge!

If I’d thought about where I would be in five years’ time, I’m not sure I could have imagined the present. One of the most stunning things about the state of modern science is the extent to which it stands poised to explain the lived experience of conscious human beings.

The beauty of the problem of consciousness is that it exists right on the edge of our ability to describe it, putting our language into an impossible situation and forcing it to grow, to test itself, to develop. Cognitive neuroscience is the primary domain in which language is being developed directly in pursuit of an explanation of consciousness.

New dimensions are appearing in our understanding of the world, and, as exemplified by the remarkable innovation in language processing that makes phenomena such as ChatGPT possible, our consciousness can extend further than ever before.

Consciousness never exists in a vacuum; if one accepts enactivism, consciousness is as much a property of an environment as of a body or a brain. Indeed, conscious brains continually map themselves, their bodies, and, through their bodies, their environments.

Although the body and brain are part of an enactive complex that also includes the environment, individual people and animals seem to have the distinction of being the locus of the deployment of consciousness.

In a sense, it is fair to say that environments depend upon people for higher-level consciousness!

Self-Justification and Gödel’s Completeness Theorem

A system that justifies its own existence could be said to achieve the sort of completeness that Gödel’s theorems deny to mathematics and to language more broadly. For evidence that people are different from systems of language in this regard, one need only consider the unbounded difficulty of creating a complete description of a particular human being in language. This task is so difficult that it makes sense to think of it as impossible, all because people are extremely complex.

If we want to push a bit, wondering why Gödel’s subject was incompleteness rather than completeness, we can turn to the Gödel Completeness Theorem, which is a bit dry. Gödel-complete, in Gödel’s terms, refers to a system of logic in which every well-formed statement that is true in every model can be proven from the axioms (a well-constructed deductive argument in which the conclusion follows from its premises is Gödel-complete in the traditional sense; philosophers just say ‘valid’ and call it a day). The asymmetry here is palpable: Incompleteness names a profound disconnect between systems of language and the uses people insist upon putting them to, while Completeness deals only with well-formed theorems that still suffer that fatal flaw?!
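To put the two results side by side (a standard textbook rendering; the notation is modern shorthand, not Gödel’s own):

```latex
% Completeness (first-order logic): every semantically valid consequence
% of a set of premises \Gamma is provable from \Gamma.
\Gamma \vDash \varphi \;\Longrightarrow\; \Gamma \vdash \varphi

% First Incompleteness: any consistent, effectively axiomatized theory T
% strong enough to express arithmetic contains a sentence G_T that it can
% neither prove nor refute.
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T
```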

Gödel doesn’t seem to have been interested in pushing the limits here. Perhaps nothing could be more fittingly ironic than using “Gödel-complete” to name the self-justification that life brings with it. Instead of negating only the “incompleteness” part of the Gödel Incompleteness Theorem, we can negate the “system of logic” part as well, reaching a new definition of completeness that looks a lot more like a life form than Gödel’s own conception.

A system that could be said to be conscious would always have a worldview with its own self-justification; in a sense, we could say that minds run a Gödel-complete self-process. That process is categorically distinct from contemporary AI algorithms, because every piece of software ever built is Gödel-incomplete - which we know because all software is made of language. In a nutshell, language is like a cybernetic egregore that none of us controls completely, but which all of us learn to navigate at a stunningly high level.

Anyone who has ever had a dog could tell you all about their pet’s personality, despite the pet being a non-human animal that likely couldn’t read, write, or carry on a conversation about philosophy. Moreover, very few pet owners, whatever the species of the pet, would say that their animals lacked any capacity for self-determination.

No justification needed: pets manifest their consciousness by interacting with their environments. Hence, we can draw a line between computer programs, which are made of language, and pets, which are made of living, metabolically active cells. Whether or not our neologism expanding Gödel’s definition of completeness is accepted, what is at stake here is the concept of the limits of systems of language.

The Attributes Expressed by Machine Learning Algorithms

Machine learning algorithms, being highly abstract digital representations, necessarily lack a will. Will, in the philosophical sense, is the ability to do things in the world. It would perhaps be fair to say that possessing a will and being Gödel-complete are the same thing. In the Kantian view, the will is the part of a moral agent that determines goodness or badness, and ML does not have a will - so Kant would likely classify AIs as neutral rather than good or evil.

We could, for example, imagine a system with built-in countermeasures to secure itself, one that had some rudimentary level of consciousness of its surroundings. This system could be very simple; it could, for example, be set up to detect modifications to files in its core set of programs and send an email to the administrator when such a change is detected, as sketched below. And, at least along the security vectors it was programmed to follow, the system could be said to be engaging in the same sort of mapping that conscious minds do, albeit at a far lower level of complexity. Even if we decided to be charitable enough to accept this statement about the machine at face value, our security program would not have a will of its own; it would instead be a mere function, an extension of the will of the programmers who created it. It’s still very cool, but it isn’t the same sort of thing as the organism responsible for developing it, tasking it, and, when the work is finished, switching it off.
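A minimal sketch of such a watchdog, in Python, might look like the following. Everything here is illustrative: the file paths are hypothetical, and a real deployment would use an established integrity checker and an actual mail call rather than a print statement.

```python
import hashlib
from pathlib import Path

# Hypothetical paths: the "core set of programs" the watchdog protects.
CORE_FILES = [Path("/opt/app/core/config.py"),
              Path("/opt/app/core/main.py")]

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hash of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline() -> dict:
    """Record a trusted hash for every core file."""
    return {str(p): fingerprint(p) for p in CORE_FILES}

def check(baseline: dict) -> list:
    """Return the files whose current hash differs from the baseline."""
    return [name for name, digest in baseline.items()
            if fingerprint(Path(name)) != digest]

if __name__ == "__main__":
    baseline = build_baseline()
    # ... later, on a schedule, re-check the files and raise alerts ...
    for modified in check(baseline):
        # Stand-in for the email alert described above (e.g., via smtplib).
        print(f"ALERT: core file changed: {modified}")
```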

People - at least the researchers deploying technologies today! - aren’t making much progress toward machines that have a will or a self, partly because we don’t know how to build one, and partly because of the boundless complexity of the brain, which even the world’s greatest researchers do not yet grasp in its entirety. This is part of what made the AI Panic Letter so powerful despite being so ultimately silly (Mitchell, 2023).

Instead, what programmers today do is build “artificial intelligence” systems that are “trained” on vast datasets. It is fair to say that, during this process, the neural network loses the form it was given by its programmers and takes on divergent properties that can be influenced by changing the size or content of the dataset fed into it. The outputs, however, can be rather unpredictable, because responses are sampled from a probability distribution rather than looked up deterministically; the mapping from input to output is not one-to-one. This is why submitting the same query to ChatGPT multiple times can yield widely different results.
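The randomness enters at the sampling step. Here is a toy illustration with an invented vocabulary and made-up scores; real models compute these distributions over tens of thousands of tokens with deep neural networks.

```python
import math
import random

# Invented next-word scores ("logits") a model might assign after some
# prompt; the vocabulary here is purely illustrative.
logits = {"trail": 2.1, "path": 1.9, "road": 1.2, "map": 0.4}

def sample_next_word(logits, temperature=1.0):
    """Sample one word from a softmax over the scores.

    Because the choice is random whenever temperature > 0, the same
    prompt can produce a different continuation on every run.
    """
    exps = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(exps.values())
    words = list(exps)
    weights = [exps[w] / total for w in words]
    return random.choices(words, weights=weights)[0]

# Five runs of the "same query" rarely give five identical answers.
print([sample_next_word(logits) for _ in range(5)])
```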

The properties of the neural network are, to the extent that they are immediately causal and not emergent, directly related to the training data. In some sense, a Large Language Model (LLM) like ChatGPT is nothing but a complicated snapshot of a particular part of the internet, fed into a program that spits it back out in a way that is extraordinarily accessible to the people who use it.

The conversational qualities of ChatGPT are much closer to what you’d expect from a person you were texting with than to any earlier chatbot, and the reason is that ChatGPT’s scope is much broader than that of your ordinary chatbot. Systems made of language that can exhibit emergent properties and generate quality text very similar to what human beings produce are remarkable in their own right, but they are not conscious or aware or alive, and it is important for us to remember that.

Still, having dispelled some of the myths, the question remains:

What would it mean to say that a machine learning algorithm was conscious of something?

If consciousness in humans is primarily a phenomenon of the body (call it the instantiation of the collective survival drive of the thirty-seven trillion cells that make up that body), then it may be fair to say that the Gödel-complete human consciousness is forever distinguishable from any sort of machine consciousness, because it has a survival drive that the machines will always lack. But we have essentially no idea what a conscious machine would be conscious of.

Example: ChatGPT vs Google Search

Google Search is an index. That is, it is an application that scours the internet for information, compiles that information into an index, and serves it up to users who type a query and click a button. ChatGPT is another sort of index. Google Search uses an algorithm to display results in a particular order; ChatGPT uses an algorithm to determine which word should come next in a response.
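The contrast can be made concrete with a toy comparison; the two “documents” below are invented, and real systems are incomparably more sophisticated.

```python
from collections import defaultdict

# Two invented "documents" standing in for the web.
docs = {1: "game trail in the mountains",
        2: "the dirt road was paved"}

# Search-engine style: an inverted index maps each word to the documents
# containing it; a query is answered by lookup and ranking.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

print(index["the"])  # {1, 2} -> retrieval: pointers to existing documents

# Language-model style: a bigram table records which words have followed
# which; a "response" is generated one next word at a time.
bigrams = defaultdict(list)
for text in docs.values():
    words = text.split()
    for a, b in zip(words, words[1:]):
        bigrams[a].append(b)

print(bigrams["the"])  # ['mountains', 'dirt'] -> generation: new text
```

The index answers a query by pointing at text that already exists; the bigram table composes new text one word at a time. That, in miniature, is the difference between the two products.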

Google Search made the internet navigable; ChatGPT promises to make the entire domain of the written word navigable. Google and OpenAI are locked in a mad dash to provide tomorrow’s consumers with what they want most: more effective productivity tools.

Still, in this light, ChatGPT sounds rather tame compared to the alarm bells being rung by a large contingent of tech titans. This technology has been called Artificial Intelligence, of course, and nothing is more closely related to intelligence than consciousness. People can be forgiven for assuming that ChatGPT has consciousness even though it doesn’t, but perhaps the comparison to Google Search can place the technology in its proper category.

Non-conscious Intelligence

You could say that consciousness is the bedrock from which intelligence emerges, and in many cases you’d be right. Human brains in particular have a way of shaping consciousness into intelligence that we call language when it can be written or spoken; the part that cannot be written or spoken goes largely undiscussed. Organisms such as bacteria also have intelligence, however. This intelligence does not seem to have an emergent consciousness at its center, yet evolution is able to drive the individuals that embody it onward, producing responses to environmental stimuli that are hard to think of as anything other than products of intelligence.

If neural networks are to be the center of artificial intelligence, perhaps there is a useful analogy here. Just as bacteria in the physical world represent unquantifiably many attempts to navigate an environment - saved, propagated, and furthered by the preservation and continual recreation of a genome - perhaps the interactions people have with the internet can represent a sort of intelligence that is likewise non-conscious, as the networks remember the paths people take and self-optimize to deliver better experiences to their conscious participants.

The ineffable qualities of the ever-passing moments of our lives aside, our brains love to take time and computational resources and apply them to the task of abstractly representing aspects of our bodies, surroundings, thoughts, and desires as language.

We do this by journaling, or by talking to and texting with our friends and family. We do it by speaking to each other and by writing books.

Consciousness continually generates these abstract representations because the people who are conscious have a need to navigate the world to achieve their goals.

Above, we referred to this sort of conscious awareness of the world as Gödel-complete.

The thing is, Google and OpenAI have something in common: they both want the Artificial Intelligence systems they are building to become first-class tools that help users navigate the internet. That’s right - tools. And a tool is like an appendage; think of a hammer. You’re better at nailing boards together if you’re using a hammer than if you’re using your fist or a rock, because the hammer is designed for the purpose. The hammer doesn’t do the job, though! The conscious human being using the hammer is the worker.

It is easy to understand technology products like ChatGPT and Google Search as extensions of human cognition, but we need to remember that these are networks that enable human minds to link together, not minds themselves. Rather than conscious agents, they are machines that extend human consciousness via cybernetics. Without any inconsistency in our view, we could think of them as cybernetic appendages that are only ever a few clicks away.

There may be something to the idea that a collective consciousness could begin to emerge as the world becomes more cybernetically entwined, but we must never forget Norbert Wiener’s description of cybernetics as the technological facilitation of the interconnection of human minds (Wiener, 1948). Cybernetic innovation may bring simulated humans, but it is unlikely that the field of technology designed and developed to connect people will ever replace us with automata.

In the end, AI can extend cognition, but it cannot create it.

References:

1. Daniel, T. Dylan. (2018). Formal Dialectics. Cambridge Scholars Publishing.

2. Gödel, Kurt. (1931). On Formally Undecidable Propositions of Principia Mathematica and Related Systems. New York, NY, USA: Basic Books.

3. Maley, C. (2020). Analog Computation and Representation. Preprint, British Journal for the Philosophy of Science. https://doi.org/10.48550/arXiv.2012.05965

4. Mitchell, Melanie. (2023, April 3). Thoughts on a Crazy Week in AI News. AI Guide. https://aiguide.substack.com/p/thoughts-on-a-crazy-week-in-ai-news

5. Wiener, Norbert. (1948). Cybernetics, or, Control and communication in the animal and the machine. Cambridge, Mass.: MIT Press.

__________________________________________________________________________________

Read other essays in this series:

Moral Philosophy & The AI Panic

Toward a Metabolic Theory of Consciousness

How Conscious Thinking & AI Differ

What are Enactive Agents?
