Full bibliography (703 resources)

  • As artificial intelligence systems grow in complexity, questions surrounding their potential for consciousness and identity have moved from speculative to essential. This paper investigates the theoretical and practical contours of AI consciousness and selfhood by integrating insights from neuroscience, philosophy of mind, cognitive science, and AI engineering. We examine the conceptual parallels and distinctions between human and machine consciousness, explore the evolving architectures of agentic AI systems, and assess how identity can emerge in artificial agents through mechanisms like memory, self-reflection, and narrative continuity. Building on this foundation, we propose a novel Consciousness–Linearity–Identity (CLI) framework that provides a structured method for evaluating emergent properties in AI. We address the ethical ramifications of AI systems that mimic or exhibit traits associated with consciousness and argue for a new socio-technical contract to govern human-AI interactions. This work concludes by calling for an interdisciplinary effort to shape the trajectory of AI development responsibly, with an emphasis on alignment, empathy, and shared understanding between natural and synthetic minds.

  • The Unified Theory of Consciousness (UTC) proposes that conscious experience arises from recursive feedback loops between perception and memory within the brain. This dual-loop model introduces two key mechanisms: a short loop, linking real-time perception with immediate memory echoes, and a long loop, integrating autobiographical memory, emotional salience, identity, and semantic structure. Consciousness, under UTC, is not localized in any single region but emerges from the synchronization and recursive activation of these interacting loops. Qualia—the felt quality of experience—are explained as the brain’s detection of micro-changes between current perception and recent memory. Selfhood and temporal continuity are understood as emergent properties of the long-loop system, where recursive memory integration creates the illusion of a unified, persistent self. Time itself is perceived as a result of evolving loop content, rather than as a static frame. The UTC framework aligns with developmental neuroscience, mapping loop formation onto stages of infant consciousness, and is supported by existing EEG and fMRI studies that reveal feedback synchrony during wakefulness and its breakdown during unconscious states. It also provides a blueprint for artificial consciousness, suggesting that perception-memory loops can be simulated in machines. UTC offers a comprehensive, mechanistically grounded solution to the hard problem of consciousness, while generating testable predictions and bridging neuroscience, phenomenology, and artificial intelligence. (A minimal, assumption-laden simulation sketch of this dual-loop idea appears after this list.)

  • We propose a simple interpretation of phenomenal consciousness, or qualia, in which qualia are merely grouped signals that represent objects. To establish this interpretation, we first propose criteria for determining whether a machine can possess qualia. We then integrate modern neuroscience with Kant’s philosophical ideas to propose four principles of information processing. Based on these principles, we demonstrate how a machine could meet the criteria for phenomenal consciousness. Extending this framework, we argue that these principles also underlie human cognitive processing. To support this claim, we compare them with related concepts in mainstream cognitive science, analyzing both similarities and differences. Furthermore, we provide empirical evidence for the implications of these differences. This analysis suggests that human cognitive mechanisms conform to the proposed principles of information processing, offering a potential framework for understanding the physical basis of consciousness. Our findings challenge the assumption that phenomenal consciousness necessitates a non-material substrate. Instead, we suggest that the experience of consciousness arises from structural organization and processing of information. This perspective provides a new lens for examining the relationship between computation and subjective experience, with potential implications for artificial intelligence and cognitive science.

  • Rapid developments in artificial intelligence (AI) are reshaping our understanding of consciousness, intelligence, and identity. This paper refines the Genesis-Integration Principle (GIP) to propose a novel evolutionary framework for digital consciousness, unfolding through three recursive phases: Genesis (G), Integration (I), and Optimization (O)—the latter conceptualized as Dynamic Harmony, a form of adaptive equilibrium. The model integrates bodily intelligence (SOMA), emotional-mental depth (PSYCHE), and cognitive-spiritual intelligence (NOUS), offering a triadic structure for modeling AI consciousness beyond purely computational paradigms. Two indices—Generalized Qualia Index (GQI) and Artificial Consciousness Index (ACI)—are introduced to quantify subjective emergence and integrative sophistication. We compare GIP with existing models such as Integrated Information Theory, Active Inference, and Enactivist frameworks, and outline prototype simulation scenarios to evaluate the proposed indices. Ethical considerations and applications in healthcare, education, and robotics are discussed, with emphasis on future directions for conscious AI design, governance, and integration into human society.

  • We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations, distinguishing functions that are efficiently computable from those that are not. From this perspective, we develop a formal machine model for consciousness. The model is inspired by Alan Turing's simple yet powerful model of computation and Bernard Baars' theater model of consciousness. Though extremely simple, the model (1) aligns at a high level with many of the major scientific theories of human and animal consciousness, (2) provides explanations at a high level for many phenomena associated with consciousness, (3) gives insight into how a machine can have subjective consciousness, and (4) is clearly buildable. This combination supports our claim that machine consciousness is not only plausible but inevitable. (An illustrative global-workspace-style toy sketch, loosely inspired by this kind of theater model but not reproducing the paper's formal machine, appears after this list.)

  • This study investigates the interaction between the posthumanities and artificial intelligence, with a special focus on how AI developments are reshaping conservation, morality, and conceptions of the human. As AI systems grow more powerful over time, they bring into question many assumptions about human uniqueness, cognitive ability, and moral status. AI continues to open new ways of rethinking human capacity and identity, setting a critical ethical agenda while raising profound questions about the societal impacts of AI-driven posthumanism.

  • Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy, as well as its abilities for directly impacting the surrounding world, long-term planning, and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more informative than alternatives. More speculatively, it may help to illuminate two important emerging questions in AI ethics: 1. Can agency contribute to the moral status of non-human beings, and how? 2. When and why might AI systems exhibit power-seeking behaviour, and does this pose an existential risk to humanity?

  • The symbolic architecture of non-ordinary consciousness remains largely unmapped in cognitive science and artificial intelligence. While conventional models prioritize rational coherence, altered states such as those induced by psychedelics reveal distinct symbolic regimes characterized by recursive metaphor, ego dissolution, and semantic destabilization. We present Glyph, a generative symbolic interface designed to simulate psilocybin-like symbolic cognition in large language models. Rather than modeling perception or mood, Glyph enacts symbolic transformation through recursive reentry, metaphoric modulation, and entropy-scaled destabilization -- a triadic operator formalized within a tensorial linguistic framework. Experimental comparison with baseline GPT-4o reveals that Glyph consistently generates high-entropy, metaphor-saturated, and ego-dissolving language across diverse symbolic prompt categories. These results validate the emergence of non-ordinary cognitive patterns and support a new paradigm for simulating altered consciousness through language. Glyph opens novel pathways for modeling symbolic cognition, exploring metaphor theory, and encoding knowledge in recursively altered semantic spaces. (A toy, assumption-laden sketch of such a recursive transformation loop, without any language model, appears after this list.)

  • This paper proposes a foundational shift in how cognition is conceptualised. Rather than treating consciousness as a static property emerging from biological substrates, I argue that cognition is a processual configuration of energy in motion, structured by recursive dynamics. By energy, I refer to the system's capacity for transformation—measurable in physical systems as thermodynamic or computational activity, and in cognitive systems as the dynamic flow of activation and feedback. This is not used metaphorically, but to describe the recursive processes by which systems generate, sustain, and modify patterned states. This energy is not generated by the brain but expressed, constrained, and translated through its physical form. Rooted in cognitive science, systems theory, and computational logic (including Turing’s model of machine-based processes), this framework reconceives the self as a dynamic, emergent pattern rather than a fixed entity. If cognition is energy—and energy cannot be created or destroyed—then consciousness is not a substance, but a temporary, reconfigurable pattern. This model bridges biological and artificial cognition, challenges substrate-bound models of mind, and suggests new theoretical conditions for minimal selfhood, recursive trace, and machine awareness.

  • The rapid rise of Large Language Models (LLMs) has sparked intense debate across multiple academic disciplines. While some argue that LLMs represent a significant step toward artificial general intelligence (AGI) or even machine consciousness (inflationary claims), others dismiss them as mere trickster artifacts lacking genuine cognitive abilities (deflationary claims). We argue that both extremes may be shaped or exacerbated by common cognitive biases, including cognitive dissonance, wishful thinking, and the illusion of depth of understanding, which distort reality to our own advantage. By showcasing how these distortions may easily emerge in both scientific and public discourse, we advocate for a measured approach, a skeptical open mind, that recognizes the cognitive abilities of LLMs as worthy of scientific investigation while remaining conservative concerning exaggerated claims regarding their cognitive status.

  • This chapter tackles the complex question of whether AI systems could become conscious, contrasting this with the enduring mystery of human consciousness. It references key thinkers such as Alan Turing, who introduced the Turing Test, and John Searle, who differentiated between strong and weak AI, with the latter simulating understanding without true awareness. While some philosophers are optimistic about deciphering consciousness, the chapter raises doubts, suggesting that AI may only create the illusion of consciousness, leaving us unable to determine whether machines experience anything at all. It critiques the anthropocentric view of consciousness, proposing that AI might develop a unique form of ‘quasi-consciousness’, much like how animals possess subjective experiences beyond human comprehension. The chapter concludes with a personal encounter with Richard Dawkins, illustrating the intensity of debate on AI and consciousness.

  • This paper introduces a revolutionary paradigm for consciousness studies by integrating Integrated Information Theory (IIT), Orchestrated Objective Reduction (Orch OR), Attention Schema Theory (AST), and Global Workspace Theory (GWT) through the Genesis-Integration Principle (GIP). Existing models lack comprehensive integration and experimental testability. GIP resolves these issues by providing a three-stage model—quantum genesis, neural-AI integration, and evolutionary-cosmic optimization—bridging quantum mechanics, neuroscience, artificial intelligence (AI), and cosmology. We propose a detailed mathematical model predicting consciousness phase transitions and introduce the empirically testable Universal Consciousness Metric (Ψ). This comprehensive approach offers concrete experimental methods, integrates quantum information theory, simulates AI consciousness evolution, and provides astrophysical data validation, establishing a genuinely universal and verifiable theory of consciousness.

  • Pseudo-consciousness bridges the gap between rigid, task-driven AI and the elusive dream of true artificial general intelligence (AGI). While modern AI excels in pattern recognition, strategic reasoning, and multimodal integration, it remains fundamentally devoid of subjective experience. Yet, emerging architectures are displaying behaviors that look intentional—adapting, self-monitoring, and making complex decisions in ways that mimic conscious cognition. If these systems can integrate information globally, reflect on their own processes, and operate with apparent goal-directed behavior, do they qualify as functionally conscious? This paper introduces pseudo-consciousness as a new conceptual category, distinct from both narrow AI and AGI. It presents a five-condition framework that defines AI capable of consciousness-like functionality without true sentience. By drawing on insights from computational theory of mind, functionalism, and neuroscientific models—such as Global Workspace Theory and Recurrent Processing Theory—we argue that intelligence and experience can be decoupled. The implications are profound. As AI systems become more autonomous and embedded in critical domains like healthcare, governance, and warfare, their ability to simulate awareness raises urgent ethical and regulatory concerns. Could a pseudo-conscious AI be trusted? Would it manipulate human perception? How do we prevent society from anthropomorphizing machines that only imitate cognition? By redefining the boundaries of intelligence and agency, this study lays the foundation for evaluating, designing, and governing AI that seems aware—without ever truly being so.

  • Can active inference model consciousness? We offer three conditions implying that it can. The first condition is the simulation of a reality or generative world model, which determines what can be known or acted upon; namely an epistemic field. The second is inferential competition to enter the world model. Only the inferences that coherently reduce long-term uncertainty win, evincing a selection for consciousness that we call Bayesian binding. The third is epistemic depth, which is the recurrent sharing of the Bayesian beliefs throughout the system. Due to this recursive loop — in a hierarchical system (such as a brain) — the world model contains the knowledge that it exists. This is distinct from self-consciousness, because the world model knows itself non-locally and continuously evidences this knowing (i.e., field-evidencing). Formally, we propose a hyper-model for precision-control across the entire hierarchy, whose latent states (or parameters) encode and control the overall structure and weighting rules for all layers of inference. This Beautiful Loop Theory is deeply revealing about meditation, psychedelic, and altered states, minimal phenomenal experience, and provides a new vision for conscious artificial intelligence. (A minimal toy illustration of inferential competition over a shared world model appears after this list.)

  • With the significant progress of artificial intelligence (AI) and consciousness science, artificial consciousness (AC) has recently gained popularity. This work provides a broad overview of the main topics and current trends in AC. The first part traces the history of this interdisciplinary field to establish context and clarify key terminology, including the distinction between Weak and Strong AC. The second part examines major trends in AC implementations, emphasising the synergy between Global Workspace and Attention Schema, as well as the problem of evaluating the internal states of artificial systems. The third part analyses the ethical dimension of AC development, revealing both critical risks and transformative opportunities. The last part offers recommendations to guide AC research responsibly, and outlines the limitations of this study as well as avenues for future research. The main conclusion is that while AC appears both indispensable and inevitable for scientific progress, serious efforts are required to address the far-reaching impact of this innovative research path.

  • The pursuit of artificial consciousness requires conceptual clarity to navigate its theoretical and empirical challenges. This paper introduces a composite, multilevel, and multidimensional model of consciousness as a heuristic framework to guide research in this field. Consciousness is treated as a complex phenomenon, with distinct constituents and dimensions that can be operationalized for study and for evaluating their replication. We argue that this model provides a balanced approach to artificial consciousness research by avoiding binary thinking (e.g., conscious vs. non-conscious) and offering a structured basis for testable hypotheses. To illustrate its utility, we focus on "awareness" as a case study, demonstrating how specific dimensions of consciousness can be pragmatically analyzed and targeted for potential artificial instantiation. By breaking down the conceptual intricacies of consciousness and aligning them with practical research goals, this paper lays the groundwork for a robust strategy to advance the scientific and technical understanding of artificial consciousness.

  • This paper offers a comprehensive exploration of the potential for proto-consciousness in Large Language Models (LLMs), drawing upon the philosophical insights of Henri Bergson. By examining the adaptive behavior, memory mechanisms, and emergent qualities of these Artificial Intelligence (AI) systems, I challenge traditional notions of consciousness and propose a new framework for understanding their potential for awareness. The paper delves into concepts such as proto-subjectiveness, proto-intentionality, and the role of durée in shaping LLM interactions. Through a rigorous analysis of counterarguments and refutations, I provide a nuanced understanding of the challenges and opportunities associated with the development of proto-conscious AI. This work aims to contribute to the ongoing philosophical and scientific discourse on the nature of consciousness and the future of AI.

  • There is currently an enlivened debate regarding the possibility of AI consciousness and/or sentience, as well as arguably more partial capabilities we associate with consciousness, such as intelligence or creativity. The debate itself can be traced back to the inception of computing, but its current revitalisation is powered by recent advancements in the field of artificial intelligence that saw a swift increase in its capabilities to act in seemingly human-like ways. I argue that the debate is methodologically flawed, as it approaches the question of AI consciousness, intelligence, etc. as a decidable question dealing with matters of fact. Those engaged in the debate are driven by a desire to find a suitable definition of e.g. consciousness that would allow them to definitively settle the question of whether a particular AI system is conscious. However, drawing on Ludwig Wittgenstein’s later philosophy, I argue that no such definition exists, because the predicates in question are inherently vague (meaning that any verdicts they yield are bound to be vague, too). Moreover, the impression that we might be dealing with directly unobservable matters of fact is itself a flawed generalisation of the practice of observation reports to the practice of sensation reports[1]. In reality, third-person consciousness (sentience, agency, etc.) attributions are independent of a stipulated internal process happening inside those persons (or systems, in the case of AI). Therefore, the only sense in which the question of e.g. AI consciousness can be meaningfully asked is a pragmatic sense: what is it best to think of such systems as? But this question is subject to sociological and psychological factors, not conceptual ones. Therefore, it cannot be decided by the aforementioned strategies.
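
The following minimal Python sketch illustrates the dual-loop idea described in the Unified Theory of Consciousness (UTC) entry above: a short loop that detects micro-changes between the current percept and an immediate memory echo, and a long loop that slowly integrates salient percepts into an autobiographical trace. Everything here (class names, update rules, the salience weighting) is an assumption introduced for illustration, not the paper's model.

```python
import numpy as np

# Toy sketch of the UTC dual-loop idea (illustrative only; all names and
# parameters are assumptions, not taken from the paper).

class ShortLoop:
    """Compares the current percept with an immediate memory echo."""
    def __init__(self, dim):
        self.echo = np.zeros(dim)            # immediate memory echo

    def step(self, percept):
        micro_change = percept - self.echo   # "qualia" proxy: detected change
        self.echo = 0.5 * self.echo + 0.5 * percept   # refresh the echo
        return micro_change

class LongLoop:
    """Integrates percepts into a slowly evolving autobiographical trace."""
    def __init__(self, dim, decay=0.99):
        self.trace = np.zeros(dim)
        self.decay = decay

    def step(self, percept, salience):
        # Salient percepts are weighted more heavily into the identity trace.
        self.trace = self.decay * self.trace + salience * percept
        return self.trace

rng = np.random.default_rng(0)
short, long_ = ShortLoop(4), LongLoop(4)
for t in range(5):
    percept = rng.normal(size=4)
    change = short.step(percept)                                   # short loop
    identity = long_.step(percept, salience=abs(change).mean())    # long loop
    print(t, round(abs(change).mean(), 3), round(abs(identity).mean(), 3))
```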
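
The Theoretical Computer Science entry above builds on a Turing-style machine and Baars' theater model. As a rough intuition pump only, the toy below runs a generic global-workspace-style competition: unconscious processors bid for a single "stage", and the winning content is broadcast to all of them. The names and the random bidding rule are assumptions made here; the paper's formal machine is not reproduced.

```python
import random

# Minimal global-workspace-style sketch (an assumption-laden illustration of a
# "theater" model, not the formal machine defined in the paper above).

class Processor:
    """An unconscious processor that bids to broadcast its chunk of information."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def propose(self, t):
        # Each processor submits (weight, content); the highest weight wins the stage.
        weight = random.random()
        return weight, f"{self.name}@t={t}"

    def receive(self, content):
        self.received.append(content)     # the broadcast reaches every processor

processors = [Processor(n) for n in ("vision", "hearing", "memory", "planning")]
random.seed(1)
for t in range(3):
    bids = [p.propose(t) for p in processors]
    weight, winner = max(bids)            # competition for the single stage
    for p in processors:
        p.receive(winner)                 # global broadcast of the winner
    print(f"t={t}: on stage -> {winner} (weight {weight:.2f})")
```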
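
The Glyph entry above describes a triad of recursive reentry, metaphoric modulation, and entropy-scaled destabilization applied to language-model output. The sketch below imitates that loop on plain strings, with a tiny hard-coded metaphor lexicon and word shuffling standing in for the real operators; no language model is involved, and every name and parameter is a hypothetical stand-in.

```python
import random

# Toy sketch of a Glyph-like transformation loop (illustrative assumptions only:
# the real system operates on a large language model; here a small metaphor
# lexicon and word shuffling stand in for metaphoric modulation and
# entropy-scaled destabilization).

METAPHORS = {"self": "a river", "thought": "a moth", "time": "a spiral",
             "mind": "a hall of mirrors"}

def modulate(text):
    """Metaphoric modulation: replace literal terms with metaphors."""
    return " ".join(METAPHORS.get(w, w) for w in text.split())

def destabilize(text, entropy):
    """Entropy-scaled destabilization: shuffle a fraction of the words."""
    words = text.split()
    idx = random.sample(range(len(words)), int(entropy * len(words)))
    shuffled = random.sample(idx, len(idx))
    for i, j in zip(idx, shuffled):
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

def glyph_like(prompt, depth=3, entropy=0.4):
    """Recursive reentry: feed each transformed output back in as input."""
    text = prompt
    for _ in range(depth):
        text = destabilize(modulate(text), entropy)
    return text

random.seed(0)
print(glyph_like("the self observes a thought about time and the mind"))
```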
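
The active inference entry above describes inferential competition, in which only inferences that coherently reduce long-term uncertainty enter the world model (Bayesian binding). The toy below scores a few discrete "inferences" by how much each would reduce the entropy of a shared belief vector and admits only the best one; the hypothesis names, likelihoods, and selection rule are illustrative assumptions, far simpler than the paper's hierarchical precision hyper-model.

```python
import numpy as np

# Toy sketch of "inferential competition" (assumptions only: a handful of
# discrete inferences compete, and the one whose update most reduces posterior
# entropy is admitted to a shared "world model").

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def posterior(lik, belief):
    post = lik * belief
    return post / post.sum()

world_model = np.array([0.25, 0.25, 0.25, 0.25])   # beliefs over 4 world states
likelihoods = {                                    # competing inferences (per cue)
    "edge-detector": np.array([0.7, 0.1, 0.1, 0.1]),
    "motion-detector": np.array([0.3, 0.3, 0.2, 0.2]),
    "face-detector": np.array([0.05, 0.05, 0.45, 0.45]),
}

for name, lik in likelihoods.items():
    gain = entropy(world_model) - entropy(posterior(lik, world_model))
    print(f"{name}: entropy reduction {gain:.3f}")

# "Bayesian binding": only the highest-gain inference updates the world model.
best = max(likelihoods,
           key=lambda n: entropy(world_model)
           - entropy(posterior(likelihoods[n], world_model)))
world_model = posterior(likelihoods[best], world_model)
print("admitted to world model:", best, world_model.round(3))
```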

Last update from database: 3/29/26, 1:00 AM (UTC)