  • The rapid rise of Large Language Models (LLMs) has sparked intense debate across multiple academic disciplines. While some argue that LLMs represent a significant step toward artificial general intelligence (AGI) or even machine consciousness (inflationary claims), others dismiss them as mere trickster artifacts lacking genuine cognitive abilities (deflationary claims). We argue that both extremes may be shaped or exacerbated by common cognitive biases, including cognitive dissonance, wishful thinking, and the illusion of depth of understanding, which distort reality to our own advantage. By showing how these distortions can easily emerge in both scientific and public discourse, we advocate for a measured approach (a skeptical open mind) that recognizes the cognitive abilities of LLMs as worthy of scientific investigation while remaining cautious about exaggerated claims regarding their cognitive status.

  • Large Language Models (LLMs) have rapidly become a central topic in AI and cognitive science due to their unprecedented performance on a vast array of tasks. Indeed, some even see 'sparks of artificial general intelligence' in their apparently boundless faculty for conversation and reasoning. Their sophisticated emergent faculties, which were not initially anticipated by their designers, have ignited an urgent debate about whether and under which circumstances we should attribute consciousness to artificial entities in general and to LLMs in particular. The current consensus, rooted in computational functionalism, proposes that consciousness should be ascribed based on a principle of computational equivalence. The objective of this opinion piece is to criticize this approach and argue in favor of an alternative “behavioral inference principle”, whereby consciousness is attributed if it is useful to explain (and predict) a given set of behavioral observations. We believe that a behavioral inference principle will provide an epistemologically unbiased and operationalizable criterion for assessing machine consciousness.
