The rise of generative artificial intelligence (GenAI), especially large language models (LLMs), has fueled debate over whether these technologies genuinely emulate human intelligence. Some view LLM architectures as capturing the mechanisms of human language, while others, writing from posthumanist and transhumanist perspectives, herald the obsolescence of humans as the sole conceptualizers of life and nature. This paper challenges such views from a practical philosophy of science perspective, arguing that the reasoning behind equating GenAI with human intelligence, or proclaiming the “demise of the human,” is flawed: it rests on conceptual conflation and on reductive definitions of humans as performance-driven semiotic systems deprived of agency. Opposing theories that reduce consciousness to computation or information integration, the present proposal posits that consciousness arises from the holistic integration of perception, conceptualization, intentional agency, and self-modeling within biological systems. Grounded in a model of Extended Humanism, I propose that human consciousness and agency are tied to biological embodiment and to agents’ experiential interactions with their environment. This underscores the distinction between pre-trained transformers as “posthuman agents” and humans as purposeful biological agents, and it emphasizes the human capacity for biological adjustment and optimization. Consequently, proclamations of human obsolescence as conceptualizers are unfounded, as they fail to appreciate the intrinsic connections among consciousness, agency, and embodiment. One of the main conclusions is that the capacity to integrate information does not amount to phenomenal consciousness, as argued, for example, by Integrated Information Theory (IIT).