
OpenAI's Show and the Consolidation of the (Problematic) Concept of AGI, or the Thinking Machine

  • Writer: Patrícia
  • Nov 24, 2023
  • 6 min read

Updated: May 12

One of the issues I believe the media hasn't paid enough attention to during the OpenAI show is how this episode has helped further consolidate the concept of AGI (Artificial General Intelligence): the utopia of a machine that thinks like a human being. This is a topic I address in my doctoral thesis.

But there's more: OpenAI has redefined AGI.


They’ve added the following to the original definition of AGI—what I’ve placed in quotation marks: ...systems that surpass human intelligence "at most economically valuable work."

I believe this change may reveal several important things.

Note: Artificial General Intelligence (AGI) is the ultimate goal of AI research, as is well known and explained by researchers such as Ramón López de Mántaras (2023). A quote by one of the fathers of AI, Marvin Minsky, in Life magazine (1970), summarizes the idea: “(...) we will have a machine with the general intelligence of an average human being.”


It is worth remembering that the concept of Artificial Intelligence (AI) was introduced in 1956 by John McCarthy and Minsky himself at the Dartmouth Conference. Underlying this concept is the premise that any aspect of learning or any other feature of intelligence can be described so precisely that a machine could simulate it.



In 1975, Allen Newell and Herbert Simon, two other renowned AI researchers, formulated the Physical Symbol System Hypothesis (PSSH), which, in very simplified terms, posits that any system capable of processing symbols has the necessary and sufficient means to be intelligent, in the sense proposed by Minsky. The term general in AGI (Artificial General Intelligence) serves to differentiate it from the specific or narrow intelligence that machines already demonstrate, for example in playing chess. However, a system that plays chess cannot be used for the broad range of tasks that human or general intelligence entails.


The human aspiration to create thinking machines can be traced back to ancient Greece, through the Middle Ages and into modernity. However, it was Alan Turing, widely regarded as the father of modern computing, who in 1950 raised the possibility of a machine that could think like a human being, in a brilliant essay published in the journal Mind. Turing also devised a method to determine whether a machine could be considered intelligent: the well-known Turing Test. The test involves a human interrogator communicating via keyboard with both a machine and another human. If, after five minutes of questioning, the interrogator fails to identify the machine in more than 30% of cases, the machine is said to have passed the test and is therefore considered intelligent.
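To make the pass criterion concrete, here is a minimal sketch in Python (my own illustration, not something from Turing's essay): it simply reads the 30% figure as a threshold over repeated five-minute sessions, and the trial outcomes are invented.

```python
# Minimal sketch (illustrative, not from Turing's essay): the 30% criterion
# read as a threshold over repeated five-minute interrogation sessions.
# The example outcomes below are invented.

def passes_turing_test(fooled: list[bool], threshold: float = 0.30) -> bool:
    """fooled[i] is True when, after a five-minute exchange, the interrogator
    could not tell which of the two participants was the machine."""
    if not fooled:
        return False
    return sum(fooled) / len(fooled) > threshold

# Example: the machine fools the interrogator in 4 of 10 sessions (40% > 30%).
sessions = [True, False, True, False, True, False, True, False, False, False]
print(passes_turing_test(sessions))  # True
```

Of course, the hard part of the test is the conversation itself; the sketch only captures the scoring rule.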


Whether a machine can think is the question that has dominated the philosophy of artificial intelligence to this day—hence the enduring importance of Turing’s essay. It is a fundamental question, because the more capabilities we attribute to machines, the more we tend to trust them. This point is crucial: if we do not clearly understand the limits of machine competence, we may end up over-delegating to them—and this is often at the root of the misuse of technology, even in the best-case scenarios.



Whether a machine can think is a fundamental question, because the more capabilities we attribute to it, the more we will tend to trust it.

The Turing Test has been challenged on several occasions since its conception. The best-known objections are those of philosophers Hubert Dreyfus (1965) and John Searle (1980).

Dreyfus argued that turning an artifact into a thinking mind is akin to the alchemy of the Middle Ages that sought to turn lead into gold, since a machine operates on finite data and is subject to human instructions.


Searle objected with the thought experiment known as the Chinese Room. In it, he imagines himself locked in a room receiving messages in Chinese through a slot—messages he does not understand. Unbeknownst to him, they are questions, and he must reply to them, also in Chinese, using a rulebook that tells him which symbols to use. This rulebook represents a natural language processing program. On the other side of the wall, a native Chinese speaker reads the responses and believes they are conversing with someone who understands the language. Searle asks whether, in this scenario, it can truly be said that he understands Chinese. His conclusion: formal computation alone (what he is doing in the room) cannot generate intentionality. In other words, the fact that a program manipulates symbols correctly does not mean it genuinely understands them as a human would.
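To see how little machinery Searle's rulebook needs in order to produce convincing answers, here is a minimal sketch in Python (my own illustration, with made-up phrases): a lookup table that returns well-formed replies while understanding nothing.

```python
# Minimal sketch (illustrative, not Searle's own formulation): the rulebook as
# a lookup table. The Chinese phrases and replies are arbitrary examples; the
# point is that the program maps input symbols to output symbols without
# understanding either.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thank you."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(message: str) -> str:
    """Return whatever reply the rulebook prescribes for the incoming symbols."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # a fluent-looking reply, with zero understanding
```

This is exactly Searle's point: correct symbol manipulation, by itself, says nothing about understanding.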

The main objection to the Turing Test, then, is that it is limited because it focuses solely on observable behavior.



Today, we find ourselves at the point where Searle left off. Nonetheless, new refutations continue to emerge, although no one has yet demonstrated the actual possibility of Artificial General Intelligence (AGI). The main objection to the Chinese Room argues that, in the brain, individual neurons do not understand language either, yet the biological system as a whole (body and mind) does.

From biology, there are explanations that link mental activity with being alive—among other reasons, because the mind functions through biochemical processes which, due to their material nature, cannot be computed. Along these lines, phenomenology asserts that thinking requires being an agent: having a body, which implies having goals, perceiving stimuli, and experiencing emotions.

Much more has been said about this, but what is clear, as AI expert Margaret Boden notes, is that if we accept that being alive is a prerequisite for thinking like a human, then a machine cannot possess general intelligence.

That said, I turn now to OpenAI's redefinition of AGI. As I mentioned, it may reveal several important things:

  1. We're still playing word games. A new definition is pulled out of thin air. Language and narratives play a FUNDAMENTAL role in all of this—something we fail to address sufficiently from a communication perspective.

  2. They seem to abandon the original meaning of AGI—perhaps because, in the end, debating whether a thinking machine can be created is not practical.

  3. What is practical—and probably highly profitable—is creating machines that "surpass human performance in most economically valuable tasks."


I say they seem to abandon the original definition because the goal of the AIM (Artificial Intelligence Movement) is still to create AGI, and they truly believe in it: machines that think like humans. The claims made by Google engineer Blake Lemoine about the alleged sentience of the LaMDA chatbot, after engaging in a deep conversation with it, were one sign of this belief.


OpenAI is not a non-profit organization: it is a business with the goal of replacing people in most economically valuable tasks

Geoffrey Hinton, the Turing Award winner who left Google out of fear of the machines being developed, is another sign of this belief (incidentally, Hinton is a mentor to Ilya Sutskever, co-founder of OpenAI and one of the key players in this saga). The interview CNN conducted with him is pure gold. There are constant references to a supposed subjectivity of machines.


This redefinition of AGI may reveal something more, or rather, it confirms the conclusion of the drama unfolding these days: OpenAI is not a non-profit organization; it is a business with the goal of replacing people in most economically valuable tasks.


Machines that complement us are welcome. But why do we want them to replace us? Who benefits from this replacement? Why is the focus on replacing rather than on complementing/augmenting?




This is the politics of technology we need to talk about, as thinkers like Hannah Arendt demanded decades ago.

Let's put technology at the service of humanity, not of a select few. Let's start by using words properly and avoid anthropomorphizing the machine, because doing so grants it an agency it does not have and, as a consequence, feeds the narrative of fear.

This is how these narratives of fear, of human inferiority before the machine, are transmitted:



and the local AI hypers, thoroughly caught up in the contagion:



These days it is worth balancing the hype with voices such as those of Virginia Dignum, Emily Bender, or Timnit Gebru.

I also recommend this report published by the Observatori Crític dels Mitjans and La Fede, prepared by Bru Aguiló and Pau Zalduendo: ¿Cómo se informa sobre IA? (How is AI reported on?)


Here is the video of the presentation, in which I had the honor of taking part.



