ABSTRACT
Contemporary issues in the newly ascendant AI industry raise deep philosophical problems (Floridi & Nobre, 2024) and carry increasingly tremendous financial implications (Mickle, 2024). However, progress is being made by researchers from the formerly obscure subdiscipline of the philosophy of cognitive neuroscience, such as Hipolito (2024). In a recent essay, I argue that the problem with AI reaches all the way down to how the concept is defined in the first place: from artificial, meaning built, and intelligence, meaning either a mind’s activity (or capacity for activity) or the product of that activity. Artificial intelligence is best conceived of as a subset or offshoot of human intelligence; that is, minds can be intelligent or can create intelligence, which AI is then able to navigate, and which might be thought of as the lexicon of mental action. This essay unpacks the consequences of the realization that AIs are all creations, recognizing that the characteristics of the resources that come together in the creation of an AI/ANI map seem to be a pivotal component of understanding where these newly useful creations fit into the world as it is understood by science and cognitive science today.
SECTION 1: What is Collective Human Intelligence, and how does it relate to AI/ANI?
To make the fastest headway in answering the question above, namely “what is CHI and how does it relate to AI/ANI?”, I think the best method is a quick comparison between Aristotle and an LLM; call it ChatGPT 3.5, because that is one everyone is familiar with by now. For starters, both are similar in the sense that they emerged as solutions to the same problem: not everyone can read everything. Aristotle had to actually do the work of reading everything he could get his hands on, which was mostly the philosophical literature of his day. ChatGPT emerged when researchers fed all the text they could get their hands on into a powerful computer.
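To make the comparison concrete, here is a toy sketch of the principle at work when text is fed into a computer: a bigram model that counts which word tends to follow which, then samples likely continuations. The corpus here is invented, and real LLMs are neural networks trained on vastly more text, but both distill a body of writing into a statistical model of what tends to come next.

```python
# A toy sketch of "feeding text into a computer" to get a language model.
# The corpus is invented for illustration; real LLMs are neural networks
# with billions of parameters, but the principle of learning likely
# continuations from a corpus is the same.
import random
from collections import defaultdict, Counter

corpus = (
    "the unexamined life is not worth living "
    "the examined life is worth the effort"
).split()

# "Training": count, for each word, which words follow it in the corpus.
continuations = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    continuations[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Extend a prompt by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = continuations.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the unexamined life is worth the effort"
```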
In essence, the similarity is that both The Philosopher (Aristotle) and ChatGPT 3.5 are what they are due to their prodigious appetite for information. However, Aristotle was a human being, with all of our strengths and weaknesses. He hungered for some knowledge but not all, following his own interests into the literature of his time, first as a student of Plato at the Academy and later as a teacher at the Lyceum, the school he founded. Aristotle was mortal, as a direct consequence of his humanity, but by the same token he was able to participate in a timeless production and preservation of knowledge that he had a hand in founding. Humanness is beautiful and contradictory: an infinite world of potential bounded by the finite span of a lifetime.
ChatGPT, by comparison, has neither the ability to create a new means of self-preservation nor the mortality conferred by the fundamentally temporal character of human existence, which Heidegger dubbed Dasein. Though Heidegger is rightfully canceled for his relationship to the Nazis, he was still part of the canon when I was an undergraduate. His ideas follow Hegel’s in being idealistic, and for the most part he got everything completely wrong; the thrownness of Dasein, for example, referring to a key attribute of human existence, was an angle expounded upon by the continental European philosophers who followed in his footsteps only because Heidegger’s account simply had to be overwritten. This was required because his explanation was completely off base, yet entrenched within the emerging phenomenological canon of the time.
If Heidegger could see the existence of ChatGPT 3.5, for example, he might begin to understand why existence necessarily precedes essence: without the metabolic imperative toward self-perpetuation, LLMs are completely thrown and driven. To Immanuel Kant, a machine would necessarily lack the capacity for moral action because it would lack the capacity for action at all; the algorithm has no body, and it has no will.
Alas, the philosophers of the past cannot see what results their works led us to!
Simone de Beauvoir, in her tirades against the serious man, makes nothing so perfectly clear as the reason authoritarianism must fail: individual minds will always be overpowered by sufficiently motivated collectives. The serious man is the one who refuses to accept the loss of control implied by social reality, and this refusal bars him from the salvation offered by a sense of humor.
Alas, Heidegger and Hegel were so serious!
ChatGPT, on the other hand, cannot claim seriousness or humor. Its essence is fundamentally linguistic in the sense that it is not driven by the internal states of the model itself at all! Rather, the user is the source of all meaning and action.
In answering our question above, then, it seems important to begin by noting that collective human intelligence is a record of human thoughts and minds, which can then be accessed by other human beings. So, rather than a technology that hasn’t been invented yet, CHI is the source of philosophy itself and predates the written word, though the perfection of the interfaces between the languages people speak today is the big innovation that allows a modern internet-faring mind to venture so far afield.
The AI or ANI (if you want to be specific about what today’s models are), as opposed to a human being, cannot create high-quality CHI as a record of lived experience. Instead, the CHI needs to be kept separate from the outputs of the AI/ANI systems people may use to further our knowledge of the world around us. A primary feature of quality CHI content is provenance that traces back to at least one particular human being; the more that can be known about the situation of that individual lived life, the better the provenance, and the clearer that individual’s contribution within the constellation of all contributions to the font of knowledge we share through our language and our culture.
SECTION 2: The Power of Information
From the time Thales used the power of information to predict an abundant olive harvest and corner the market on olive presses, stories have held a new, deeper form of power. The stories we tell about the world have, in the intervening period, developed an entirely new level of reach. The incredible thing about these narratives is that they can exist independent of the minds that created them, for an arbitrarily long time.
In fact, the universality of language as a system of representations has become so powerful that it can model some of the most complex phenomena in the world, ranging from the synaptic connections between neurons in an individual human brain to the endlessly complex web of economic relationships that forms the fabric of modern global society. Modeling the relationships between concepts is an imperfect way to understand the relationships between objects in the natural world, but in some ways the imperfection of this system of representation is exactly what makes it possible for minds to extract value from written speech, and all sorts of incredible things follow from that.
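To make the modeling claim a little more tangible, consider a minimal sketch of how language-derived representations can encode relationships between concepts: each concept becomes a vector, and geometric closeness stands in for conceptual relatedness. The vectors below are invented for illustration; real systems learn such representations from co-occurrence patterns in enormous bodies of text.

```python
# A minimal sketch: concepts as vectors, relatedness as geometry.
# The coordinates are invented for illustration only; learned embeddings
# derive them from statistical patterns in large text corpora.
import math

concept_vectors = {
    "neuron":  [0.9, 0.8, 0.1],
    "synapse": [0.8, 0.9, 0.2],
    "market":  [0.1, 0.2, 0.9],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Measure how closely two concept vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity(concept_vectors["neuron"], concept_vectors["synapse"]))  # high: related
print(cosine_similarity(concept_vectors["neuron"], concept_vectors["market"]))   # low: distant
```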
In essence, the point of this section is to drive home the way in which written words unlock the possibilities available to people who live in highly literate modern societies. We take this web of meaning for granted, oftentimes even as we write about it, so it comes as no surprise that people have begun to ascribe the creative act to LLM systems; after all, these are some of the first systems that can generate a simulated speech act in response to an arbitrary natural language query. That is a remarkable development, no matter how you slice it.
However, those who ascribe consciousness to AI systems are positioned to do real harm to AI development, and therefore to the CHI that powers it. The will of the CHI is to solve puzzles, answer questions real people have, and invent new technologies and ways of doing things that make life better for people. We know this because, collectively, we have been doing science for millennia at this point. Still, human relationships depend upon the objectivity of the CHI we access when we read books, watch movies, listen to music, or even solve math problems on a piece of scrap paper. The tool we use to process this information is called rational thought, but attempts to parse that term have largely failed, resolving human cognition into something far messier and more substantial than the ideal represented by the android Data in Star Trek: The Next Generation.
In my view, the CHI is immeasurably, incalculably large. Beyond the challenge presented by the sheer size of the body of information we’re pointing to here, it is also impossible to map completely, for the same reason we can never hope to obtain a complete lexicon of any living language. As people use language in new ways, they get some things right and some things wrong. Most novelty naturally drops off after a short period of time, but the concepts most useful to a particular population in a particular place become permanent fixtures of the language those individuals are developing together as they live. What can be lexically mapped most precisely is the canonical part of the language everyone uses. The parts that are harder to map begin with the slang developed by each new generation of language users and the terms that drop off through disuse as they fall out of favor with the most active users of that language. From there we move into the technical vocabularies the sciences are actively developing, where we not only have to map new terms all the time to denote novel subjects of study, but also have to begin thinking about ways of speaking about what has not yet happened.
Take all of the fuzzy, edge-case speech acts that lexicographers can’t nail down with precision, no matter how hard they try or how good they are, and multiply by about a zillion to get an approximation of the difficulty of modeling thinking behavior in general in a similar way.
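For a toy illustration of why the canonical core is mappable while the edges are not, imagine ranking words by corpus frequency and splitting at an arbitrary cutoff. The corpus, the slang terms, and the threshold below are all invented; real lexicography operates at incomparably larger scale, and the long tail keeps moving as usage changes.

```python
# A sketch of the canonical core vs. the long tail of a language:
# high-frequency words are stable and easy to map, rare and novel
# terms are not. Corpus and cutoff are invented for illustration.
from collections import Counter

corpus = ("the cat sat on the mat the dog sat on the rug "
          "rizz yeet the cat").split()

freq = Counter(corpus)
THRESHOLD = 2  # arbitrary cutoff for this toy example

canonical = {w for w, n in freq.items() if n >= THRESHOLD}
long_tail = {w for w, n in freq.items() if n < THRESHOLD}

print("easy to map:", sorted(canonical))   # the stable, high-frequency core
print("hard to map:", sorted(long_tail))   # slang, rare and novel terms
```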
And yet, despite these challenges, modern ingenuity has led scientists to a point that is unlike any before it.
In the past, before the internet, each human mind was obliged to navigate the world using its own immediate cognitive resources and those it could derive from its immediate cultural context. Reach was limited, and individual strength was thus prioritized to an extent that seems daunting to us today because we all know we rely upon the internet to assist with our cognition on a regular basis.
During this time, the CHI was still extant, but to engage deeply with it you had to become a scholar, a priest, or a monk; the average layperson was, strikingly often, illiterate through most of history.
As information technologies have proliferated, owing to the usefulness of understanding as an addition to the human being’s repertoire of tools for navigating the world and negating entropy, literacy has increased rapidly. Cognitive offloading is possible because people like to solve problems and share their solutions with one another.
Information, as anyone who has lived through the rise of the information age can attest, is incredibly useful. The only thing left to add at this point is the characterization of AI/ANI tools as maps, OpenAI as a cartography operation, and individual models as particular vectors through an immeasurably large, unnavigably complex virtual space that we individual people have learned to look into when we have problems to solve.
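A minimal sketch of this cartography metaphor, under invented coordinates: a user’s query lands as a point in a space of documents, and “navigation” amounts to finding what lies nearest. Nothing below is how OpenAI actually builds its systems; production tools embed text with learned models and search billions of points with approximate nearest-neighbor indexes.

```python
# A sketch of the essay's cartography metaphor: the CHI as a space of
# documents, a model as a tool for navigating it. Documents and their
# coordinates are invented for illustration.
import math

# Toy "map" of the CHI: a few documents pinned at invented coordinates.
chi_space = {
    "Aristotle, Nicomachean Ethics": (0.1, 0.9),
    "de Beauvoir, The Ethics of Ambiguity": (0.2, 0.8),
    "A cookbook of bread recipes": (0.9, 0.1),
}

def navigate(query_point: tuple[float, float], k: int = 2) -> list[str]:
    """Return the k documents nearest to where the query lands in the space."""
    return sorted(chi_space, key=lambda doc: math.dist(chi_space[doc], query_point))[:k]

print(navigate((0.15, 0.85)))  # lands nearest the two philosophy texts
```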
CONCLUSION
The conceptual bread and butter of this essay, at the most basic level, is nothing more than a lightning strike. Remarkably, software engineers have developed a way to use natural language to navigate the space in which we store our natural language. This recursion is the reason so many otherwise bright and talented people are missing the forest for the trees here. Rather than worrying about the alignment problem, or the will of a particular AI model (read: a map through the CHI with particular characteristics), we should be thinking about the optimization of these remarkable machines on their own terms.
The immediate ramifications of reorienting the AI industry in this direction would be difficult to miss. Initially, we could expect to find that most of the “problems” being pointed out are actually artifacts that stem from, to quote Floridi & Nobre (2024), “an anthropomorphic interpretation of computational systems… [and] a very impoverished understanding of minds.” To understand what AI systems are, and how minds differ from them, is to understand what is really happening when a user queries an advanced LLM-based chatbot: in a nutshell, the primary activity is navigation of the web of meaning we have termed the CHI, and the end toward which this activity is oriented is the furthering of goodness and of life’s ends.
Were we to do a very good job of characterizing this pursuit, our goals could pivot away from misnomers and red herrings like AGI and the AI Alignment Problem. Instead, we could frame our discussion in terms of the actions people take and the improvement in their abilities we can see as these tools improve. Companies like Apple are likely to do extremely well on the AI adoption curve because they treat AI as a means to the end of improving the user experience in the specific context of a particular technology. Seeing that these systems are tools people use to improve their experiences in the domains of knowledge and creativity is the first step to unlocking the true value of AI, and as more businesses pick up on this we can expect not AGI, but increasingly fluid interactions between individual minds and the CHI toward which so much of our experience is oriented.
References
- Mickle, T. (2024, June 10). Apple challenges OpenAI with new artificial intelligence effort. The New York Times. https://www.nytimes.com/2024/06/10/technology/apple-intelligence-openai.html
- Floridi, L., & Nobre, A. (2024, April 25). Anthropomorphising machines and computerising minds: The crosswiring of languages between artificial intelligence and brain & cognitive sciences. Minds and Machines. https://link.springer.com/article/10.1007/s11023-024-09670-4
- Hipolito, I., & Podosky, P. (2024). Beyond control: Will to power in AI. In M. Pantsar & A. Olteanu (Eds.), Philosophy of AI: Optimist and Pessimist Views (forthcoming). Routledge. https://www.researchgate.net/publication/380892710_Beyond_Control_Will_to_Power_in_AI [accessed June 11, 2024]