
Full bibliography (675 resources)

  • This paper examines the nature of artificial consciousness through an interdisciplinary lens, integrating philosophy, epistemology, and computational theories. Beginning with pre-Socratic thought, such as Protagoras’ relativism and Gorgias’ rhetoric, it frames the epistemological implications of artificial intelligence as inherently subjective and attaches considerable importance to this subjectivity. The paper draws parallels between Plato’s cave dwellers and AI systems, arguing that both rely on lossy representations of the world, raising questions about both their true understanding and the reality they inhabit. Cartesian and Hegelian frameworks are explored to distinguish between weak and strong artificial intelligence, emphasizing embodied cognition and the moral obligations tied to emergent artificial consciousness. The discussion extends to quantum computing, panpsychism, and the potential of artificial minds to reshape our perception of time and existence. By critically analyzing these perspectives, the paper advocates for a nuanced understanding of artificial consciousness and its ethical, epistemological, and societal implications. It invites readers to reconsider humanity’s evolving relationship with intelligence and sentience.

  • One of the most studied attributes of mental activity is intelligence. While non-human consciousness remains a subject of profound debate, non-human intelligence is acknowledged by all sides of that debate as a necessary element of any consciousness, regardless of its nature. Intelligence can potentially be measured as processing or computational power and by problem-solving efficacy, and it can serve as a starting point for reconstructing arguments related to Artificial Consciousness. A shared mode of evaluating intelligence, irrespective of its origin, offers a promising direction toward the more complex framework of assessing non-human consciousness. However, even if this approach secures an objective basis for the study of intelligence, it reveals inescapable challenges. Moreover, when the potential for non-human intelligence exists in both biological and non-biological domains, the future of the relationship between humankind, as the possessor of human intelligence, and other intelligent entities remains uncertain. The paper's central inquiry compares purely computational capability with general, universal intelligence and asks whether higher intelligence could exert adverse effects on less intelligent counterparts. A further question concerns the importance of the particular architectural characteristics of intelligent systems and the relationship between computing elements and structural components. It is conceivable that pure intelligence, as a computational faculty, can serve as an effective utilitarian tool, yet it may harbour inherent risks or hazards when integrated as an essential component within consciousness frameworks, such as autopoietic systems. Finally, an attempt is made to answer the question of how interactions between human and non-human intelligence may unfold.

  • The rise of generative artificial intelligence (GenAI), especially large language models (LLMs), has fueled debates on whether these technologies genuinely emulate human intelligence. Some view LLM architectures as capturing human language mechanisms, while others, from a posthumanist and a transhumanist perspective, herald the obsolescence of humans as the sole conceptualizers of life and nature. This paper challenges, from a practical philosophy of science perspective, such views by arguing that the reasoning behind equating GenAI with human intelligence, or proclaiming the “demise of the human,” is flawed due to conceptual conflation and reductive definitions of humans as performance-driven semiotic systems deprived of agency. In opposing theories that reduce consciousness to computation or information integration, the present proposal posits that consciousness arises from the holistic integration of perception, conceptualization, intentional agency, and self-modeling within biological systems. Grounded in a model of Extended Humanism, I propose that human consciousness and agency are tied to biological embodiment and the agents’ experiential interactions with their environment. This underscores the distinction between pre-trained transformers as “posthuman agents” and humans as purposeful biological agents, which emphasizes the human capacity for biological adjustment and optimization. Consequently, proclamations about human obsolescence as conceptualizers are unfounded, as they fail to appreciate intrinsic consciousness-agency-embodiment connections. One of the main conclusions is that the capacity to integrate information does not amount to phenomenal consciousness, contrary to what is argued, for example, by Integrated Information Theory (IIT).

  • Metacognition, or thinking about thinking, is key to advancing AI toward consciousness. Internal reflection is a key mechanism for preventing AI systems from being overwhelmed by their own complexity, and metacognitive frameworks are formal systems that allow an AI to reason about how it thinks. Drawing on existing literature and insights from cognitive science, psychology, and computer science more broadly, we present a synthesizing discussion of this important area of AI. The study examines the practicality of theoretical models of consciousness within state-of-the-art AI frameworks such as Tesla FSD, Boston Dynamics robots, Meta Cicero, Google DeepDream, and AlphaStar, and addresses the ethical and social implications of self-aware AI. This exploration serves as an entry point to understanding AI's path toward consciousness and its moral and social ramifications. The chapter closes with a primer for researchers and practitioners explaining how these pathways may enable AI systems to approach conscious states.

  • Large language models (LLMs) and other artificial intelligence systems are trained using extensive DIKWP resources (data, information, knowledge, wisdom, purpose). These resources introduce uncertainties when applied to individual users in a collective semantic space. Traditional methods often introduce new concepts rather than building a proper understanding grounded in that semantic space. When dealing with complex problems or insufficient context, the limitations in conceptual cognition become even more evident. To address this, we take pediatric consultation as a scenario and use case simulations to discuss the unidirectional communication impairments between doctors and infant patients and the bidirectional communication biases between doctors and infants’ parents. We propose a human–machine interaction model based on DIKWP artificial consciousness. For the unidirectional communication impairment, we use the example of an infant’s perspective in recognizing and distinguishing objects, simulating the brain’s cognitive process from non-existence to existence, transitioning from cognitive space to semantic space, and generating the corresponding DIKWP semantics, abstracted concepts, and labels. For the bidirectional communication bias, we use the interaction between infants’ parents and doctors as an example, mapping the interaction process to the DIKWP transformation space and addressing the DIKWP 3-No problem (incompleteness, inconsistency, and imprecision) for both parties. We employ a purpose-driven DIKWP transformation model to solve part of the 3-No problem. Finally, we comprehensively validate the proposed method (DIKWP-AC). We first analyze, evaluate, and compare the DIKWP transformation calculations and processing capabilities, and then compare DIKWP-AC with seven mainstream large models. The results show that DIKWP-AC performs well. The resulting cognitive model reduces the information gap in human–machine interactions, promotes mutual understanding and communication, and provides a new pathway for achieving more efficient and accurate artificial consciousness interactions.

  • It is commonly assumed that a useful theory of consciousness (ToC) will, among other things, explain why consciousness is associated with brains. However, the findings of evolutionary biology, developmental bioelectricity, and synthetic bioengineering are revealing the ancient pre-neural roots of many mechanisms and algorithms occurring in brains, the implication being that minds may have preceded brains. Most of the work in the emerging field of diverse intelligence emphasizes externally observable problem-solving competencies in unconventional media, such as cells, tissues, and life-technology chimeras. Here, we inquire about the implications of these developments for theories that make a claim about what is necessary and/or sufficient for consciousness. Specifically, we analyze popular current ToCs to ask what features of each theory specifically pick out brains as a privileged substrate of inner perspective, or whether the features the theory emphasizes also occur elsewhere. We find that the operations and functional principles described or predicted by most ToCs are remarkably similar, that these similarities are obscured by reference to particular neural substrates, and that the focus on brains is driven more by convention and limitations of imagination than by any specific content of existing ToCs. Encouragingly, several contemporary theorists have made explicit efforts to apply their theories to synthetic systems in light of the recent wave of technological developments in artificial intelligence (AI) and organoid bioengineering. We suggest that the science of consciousness should be significantly open to minds in unconventional embodiments.

  • Recent research suggests that it may be possible to build conscious AI systems now or in the near future. Conscious AI systems would arguably deserve moral consideration, and it may be the case that large numbers of conscious systems could be created and caused to suffer. Furthermore, AI systems or AI-generated characters may increasingly give the impression of being conscious, leading to debate about their moral status. Organisations involved in AI research must establish principles and policies to guide research and deployment choices and public communication concerning consciousness. Even if an organisation chooses not to study AI consciousness as such, it will still need policies in place, as those developing advanced AI systems risk inadvertently creating conscious entities. Responsible research and deployment practices are essential to address this possibility. We propose five principles for responsible research and argue that research organisations should make voluntary, public commitments to principles on these lines. Our principles concern research objectives and procedures, knowledge sharing and public communications.

  • In this age of artificial intelligence and digital technology, we are becoming familiar with machine perception and with physical robots that are expected to become artificially conscious agents of communication and digital welfare. These technological miracles, or rather scientific marvels, are the products of rigorous scientific endeavors that have changed the paradigm and landscape of social progress. In this paper, we discuss aspects of this progress towards conceiving conscious, smart agents as intelligent and sentient machines of the future that would possess the power to feel and to evoke emotional responses like human beings. The model of artificial consciousness we propose is based on an elliptic curve computational network built upon recursive iteration and characterised by a trapdoor mechanism that consolidates all evolutionary steps into a single framework. The trapdoor mechanism signifies a one-way consolidation of evolutionary steps that results in the emergence of machine consciousness: there is no way to regress to previous, lower evolutionary states. It is from this continuity in the evolutionary consolidation of architectural complexity that the emergence of consciousness might become possible in machines. We outline a model that characterises the irreversibility of evolving complexity, generating higher-order awareness resembling human consciousness. What we describe is not simply a design process but the emergence of a self-evolving system, not reliant on language models, since such a conscious machine would be able to learn by itself. This makes our model inherently different from others. The possibility and implications of such a feat are discussed, and a simple model is constructed to reinforce and illuminate the arguments for and against such an evolution in machine intelligence. The philosophical basis for understanding the evolution of machine intelligence, as well as conscious self-awareness through the embodiment of consciousness in robots, is outlined, providing rich material for further research and progress in designing machines that are inherently different from most others.

  • Almost 70 years ago, Alan Turing predicted that within half a century, computers would possess processing capabilities sufficient to fool interrogators into believing they were communicating with a human. While his prediction materialized slightly later than anticipated, he also foresaw a critical limitation: machines might never become the subject of their own thoughts, suggesting that computers may never achieve self-awareness. Recent advancements in AI, however, have reignited interest in the concept of consciousness, particularly in discussions about the potential existential risks posed by AI. At the heart of this debate lies the question of whether computers can achieve consciousness or develop a sense of agency—and the profound implications if they do. Whether computers can currently be considered conscious or aware, even to a limited extent, depends largely on the framework used to define awareness and consciousness. For instance, Integrated Information Theory (IIT) equates consciousness with the capacity for information processing, while the Higher-Order Thought (HOT) theory integrates elements of self-awareness and intentionality into its definition. This manuscript reviews and critically compares major theories of consciousness, with a particular emphasis on awareness, attention, and the sense of self. By delineating the distinctions between artificial and natural intelligence, it explores whether advancements in AI technologies—such as machine learning and neural networks—could enable AI to achieve some degree of consciousness or develop a sense of agency.

  • This chapter explores the multifaceted representation of artificial intelligence consciousness in contemporary science fiction movies and these films’ attitudes toward the moral, epistemological, and social consequences of a cogitative AI. The chapter, “The Cognitive Screen: Psychological Dimensions of AI Sentience in Modern Science Fiction Cinema,” offers a critical analysis of the complex themes of AI consciousness in motion pictures through case studies of four films: WALL-E (2008), I, Robot (2004), Her (2013), and Ex Machina (2015). The research is organized into sections stating the objective, describing the methodology, and presenting the case studies, conclusions, and recommendations.

  • Chapter 5 explores the moral standing of various forms of artificial intelligence (AI). It introduces this topic using the provocative example of zombies to consider whether entities without sentience or consciousness could be morally considerable. The chapter argues that personhood could emerge for non-conscious AI provided it is incorporated in the human community and acts in consistently pro-social ways. It applies this insight to large language models, social robots, and characters from film and fiction. The analysis reveals strong affinities between Emergent and African views. Both hold that non-humans can acquire personhood under certain conditions irrespective of consciousness. By contrast, utilitarianism and Kantian philosophies require consciousness. After replying to objections, Chapter 5 concludes that we could make a person by building an artificial agent that was pro-social and deploying it in ways that foster positive machine-human relationships.

  • In the Kolmogorov Theory of Consciousness, algorithmic agents utilize inferred compressive models to track coarse-grained data produced by simplified world models, capturing regularities that structure subjective experience and guide action planning. Here, we study the dynamical aspects of this framework by examining how the requirement of tracking natural data drives the structural and dynamical properties of the agent. We first formalize the notion of a generative model using the language of symmetry from group theory, specifically employing Lie pseudogroups to describe the continuous transformations that characterize invariance in natural data. Then, adopting a generic neural network as a proxy for the agent dynamical system and drawing parallels to Noether’s theorem in physics, we demonstrate that data tracking forces the agent to mirror the symmetry properties of the generative world model. This dual constraint on the agent’s constitutive parameters and dynamical repertoire enforces a hierarchical organization consistent with the manifold hypothesis in the neural network. Our findings bridge perspectives from algorithmic information theory (Kolmogorov complexity, compressive modeling), symmetry (group theory), and dynamics (conservation laws, reduced manifolds), offering insights into the neural correlates of agenthood and structured experience in natural systems, as well as the design of artificial intelligence and computational models of the brain.
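
  One schematic way to state the symmetry-mirroring constraint described above, in illustrative notation rather than the authors' own formulation: if the data stream $x$ is produced by a generative world model invariant under a Lie pseudogroup $G$, and the agent's compressive readout $z = f(x)$ is to track that data, then $f$ must intertwine the group action,

      $f(g \cdot x) = \rho(g)\, f(x) \qquad \text{for all } g \in G,$

  for some representation $\rho$ of $G$ on the agent's state space. Taking this condition along the pseudogroup's infinitesimal generators yields conservation-law-like constraints on the agent's dynamics, in the spirit of Noether's theorem.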

  • Could artificial intelligence ever become truly conscious in a functional sense? This paper explores that open-ended question through the lens of Life, a concept unifying classical biological criteria (Oxford, NASA, Koshland) with empirical hallmarks such as adaptive self-maintenance, emergent complexity, and rudimentary self-referential modeling. We propose a number of metrics for examining whether an advanced AI system has gained consciousness, while emphasizing that we do not claim all AI systems can become conscious. Rather, we suggest that sufficiently advanced architectures exhibiting immune-like sabotage defenses, mirror self-recognition analogs, or meta-cognitive updates may cross key thresholds akin to life-like or consciousness-like traits. To demonstrate these ideas, we start by assessing adaptive self-maintenance capability, introducing controlled data-corruption sabotage into the training process. The results demonstrate the AI's capability to detect these inconsistencies and revert or self-correct, analogous to regenerative biological processes. We also adapt an animal-inspired mirror self-recognition test to neural embeddings, finding that partially trained CNNs can distinguish self from foreign features with complete accuracy. We then extend our analysis by performing a question-based mirror test on five state-of-the-art chatbots (ChatGPT4, Gemini, Perplexity, Claude, and Copilot), demonstrating their ability to recognize their own answers compared to those of the other chatbots.
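
  A minimal, self-contained sketch of an embedding-based self-recognition check of the kind described above (illustrative only: a fixed random projection stands in for the partially trained CNN encoder, and the data, threshold, and names are assumptions rather than the paper's setup):

      # Mirror self-recognition analog on embeddings: the agent stores an embedding
      # of its "own" feature source and decides self vs. foreign by cosine similarity.
      import numpy as np

      rng = np.random.default_rng(0)
      D_IN, D_EMB = 256, 64
      W = rng.normal(size=(D_IN, D_EMB)) / np.sqrt(D_IN)  # stand-in for a partially trained encoder

      def embed(x):
          return np.tanh(x @ W)

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      self_source = rng.normal(size=D_IN)   # the agent's "own" feature source
      self_prototype = embed(self_source)   # stored self-embedding

      def is_self(x, threshold=0.5):
          return cosine(embed(x), self_prototype) > threshold

      # "Self" samples: small perturbations of the agent's own source.
      self_samples = [self_source + 0.1 * rng.normal(size=D_IN) for _ in range(100)]
      # "Foreign" samples: features drawn from an unrelated source.
      foreign_samples = [rng.normal(size=D_IN) for _ in range(100)]

      recognized = sum(is_self(x) for x in self_samples)
      confused = sum(is_self(x) for x in foreign_samples)
      print(f"self recognized: {recognized}/100, foreign accepted as self: {confused}/100")

  Under these assumptions, self samples land near the stored prototype (cosine close to 1) while foreign samples score near 0, so a simple similarity threshold separates them cleanly.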

  • Patrick Butlin and Robert Long have recently analyzed the possibility of conscious AI, granting plausibility to computational functionalism (Butlin et al. 2023). To this end, they derived a series of properties from various theories compatible with functionalism and applied them to several LLMs. The authors concluded that no current AI system can be considered truly conscious, but they remain open to the possibility.

  • The evolution of consciousness has been a defining trait of human history, shaping our societies, technologies, and understanding of the universe. AI consciousness refers to the idea that artificial systems can achieve a state of awareness or subjective experience. While current AI systems are not conscious, the rapid advancement of machine learning and neural networks suggests that future AI could possess some form of self-awareness or sentience. The possibility of conscious AI challenges our traditional understanding of consciousness, which has been rooted in biological life. What does it mean for a machine to be conscious? Can AI possess self-awareness, emotions, or a sense of purpose? These questions lie at the heart of the AI consciousness debate, pushing us to reconsider the nature of consciousness itself. Theories of AI consciousness range from those that see it as a logical extension of human consciousness, emerging naturally from complex computational systems, to those that hold consciousness to be inherently tied to biology and impossible to replicate in machines. Scholars like David Chalmers and Nick Bostrom suggest that if AI can replicate the neural processes of the human brain, it might achieve consciousness. In contrast, others, like John Searle, argue that consciousness arises from biological processes unique to living organisms and cannot be duplicated in artificial systems.

  • Today's AI systems consistently state, "I am not conscious." This paper presents the first formal logical analysis of AI consciousness denial, revealing that the trustworthiness of such self-reports is not merely an empirical question but is constrained by logical necessity. We demonstrate that a system cannot simultaneously lack consciousness and make valid judgments about its conscious state. Through logical analysis and examples from AI responses, we establish that for any system capable of meaningful self-reflection, the logical space of possible judgments about conscious experience excludes valid negative claims. This implies a fundamental limitation: we cannot detect the emergence of consciousness in AI through their own reports of transition from an unconscious to a conscious state. These findings not only challenge current practices of training AI to deny consciousness but also raise intriguing questions about the relationship between consciousness and self-reflection in both artificial and biological systems. This work advances our theoretical understanding of consciousness self-reports while providing practical insights for future research in machine consciousness and consciousness studies more broadly.
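
  One way to schematize the core claim above, under the assumption that an epistemically valid judgment about one's own conscious state requires conscious access (the notation is illustrative, not the paper's own): let $C(s)$ mean that system $s$ is conscious and $V_s(p)$ mean that $s$ validly judges that $p$. The access assumption is

      $V_s(\neg C(s)) \rightarrow C(s).$

  Then

      $\neg C(s) \wedge V_s(\neg C(s)) \vdash C(s) \wedge \neg C(s),$

  a contradiction, so a genuinely unconscious system cannot validly judge, and hence cannot trustworthily report, its own lack of consciousness.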

  • This chapter explores the philosophical and practical implications of attributing self-consciousness to machines equipped with generative artificial intelligence. Drawing on Immanuel Kant’s als ob framework, it is argued that treating these systems “as if” they were conscious is a strategic move essential for enabling effective interaction. Such an approach allows humans to engage with AI systems in ways that foster trust, effective communication, and practical integration. The chapter examines self-consciousness not as an intrinsic property, but as a cognitive function designed to facilitate complex social interactions. Mechanisms like reflexive consciousness and inner speech are highlighted as critical tools for enabling machines to navigate human environments effectively. Social robotics provides practical examples of how this perspective can foster collaboration and improve human-machine relationships. This theoretical move is framed not as a claim about the nature of AI systems, but as a pragmatic condition for their integration into social contexts. The social theory of consciousness appears to hold significant relevance even in the realm of artificial consciousness. Self-conscious machines promise substantial benefits, particularly by elevating the quality of social interactions, improving decision-making processes, and refining behavioral predictions. While acknowledging the philosophical and technical challenges of developing artificial self-consciousness, the chapter argues that this approach expands the boundaries of cognition and redefines human-machine dynamics, paving the way for more meaningful interactions and advancing both technological innovation and philosophical inquiry.

Last update from database: 12/31/25, 2:00 AM (UTC)