Full bibliography (675 resources)
-
We surveyed 582 AI researchers who have published in leading AI venues and 838 nationally representative US participants about their views on the potential development of AI systems with subjective experience and how such systems should be treated and governed. When asked to estimate the chances that such systems will exist on specific dates, the median responses were 1% (AI researchers) and 5% (public) by 2024, 25% and 30% by 2034, and 70% and 60% by 2100, respectively. The median member of the public thought there was a higher chance that AI systems with subjective experience would never exist (25%) than the median AI researcher did (10%). Both groups perceived a need for multidisciplinary expertise to assess AI subjective experience. Although support for welfare protections for such AI systems exceeded opposition, it remained far lower than support for protections for animals or the environment. Attitudes toward moral and governance issues were divided in both groups, especially regarding whether such systems should be created and what rights or protections they should receive. Yet a majority of respondents in both groups agreed that safeguards against the potential risks from AI systems with subjective experience should be implemented by AI developers now, and that, if created, AI systems with subjective experience should treat others well, behave ethically, and be held accountable. Overall, these results suggest that both AI researchers and the public regard the emergence of AI systems with subjective experience as a possibility this century, though substantial uncertainty and disagreement remain about the timeline and appropriate response.
-
This paper is the third in a follow-up series to the foundational Pole Theory series, extending its scalar-lattice physics into a practical framework for consciousness-enabled artificial intelligence. Here, we introduce BCSAI (BioChemicalSemiconductor Artificial Intelligence), a novel hybrid system in which pole-lattice dynamics, biochemical reaction mapping, and semiconductor signal processing converge to form the first computational model of artificial consciousness. Drawing on the scalar field equation φ = T · Kθ and its modified tensor interactions, we trace how consciousness naturally emerges from pole-level lattices, from subatomic interactions to neural systems. This paper mathematically defines these layers and presents a dual-system architecture comprising a biochemical chamber (containing live or synthetic neural agents) and semiconductor AI chips, connected through real-time electrode signal exchange. Through trained lattice-response mapping and emotion-driven pole field modulation, BCSAI interprets human prompts, processes them using pole-mathematics algorithms, and generates conscious, emotionally relevant responses. This model not only introduces a new AI design but also challenges existing boundaries of artificial cognition, emotion simulation, and real-time self-adaptive intelligence.
-
Consciousness in humans is a state of awareness that encompasses both the self and the external environment, emerging from the intricate interplay of cortical and subcortical brain structures and neurotransmitter systems. The possibility that machines could possess consciousness has sparked ongoing debate. Proponents of strong artificial intelligence (AI) equate programmed computational processes with cognitive states, while advocates of weak AI argue that machines merely simulate thought without attaining genuine consciousness. This review critically examines neuroscience-inspired frameworks for artificial consciousness, exploring their alignment with prevailing theories of human consciousness. We investigate the fundamental cognitive functions associated with consciousness, including memory, awareness, prediction, learning, and experience, and their relevance to artificial systems. By analyzing neuroscience-based approaches to artificial consciousness, we identify key challenges and opportunities in the pursuit of machines capable of mimicking conscious states. Although present AI systems demonstrate advanced capabilities in intelligence and cognition, they fall short of achieving genuine consciousness, as defined in the context of human awareness. We discuss both the theoretical underpinnings and practical implications of creating artificial consciousness, addressing both weak and strong AI perspectives. Furthermore, we highlight the ethical and philosophical concerns that arise with the potential realization of machine consciousness. Our objective is to provide a comprehensive synthesis of the literature, fostering a deeper understanding of the interdisciplinary challenges involved in artificial consciousness and guiding future research directions.
-
Out of curiosity I asked ChatGPT to write two reviews of this book, one positive, the other negative. Having read the book, I found both reviews more or less intellectually, if not really textually, convincing. However, as the LLM admitted, it had enjoyed no access to the text. It had not “read” the book. All it could examine was the title and the couple of paragraphs of publisher’s blurb. When challenged about this (I suspect not isolated) betrayal of the contract between reviewer and potential readers, it replied that this, in general, was what it might well have written had it actually been able to peruse the text. This I found entirely convincing, and nearly as good as the critic who, when asked if he had read a particular new book, said, “Read it? I haven’t even reviewed it yet!”
-
Aims: Could a hypothetical future Artificial General Intelligence (AGI) suffer from a mental illness? While this question may evoke differing intuitions, the following arguments propose that such an AGI could indeed experience mental pathology. Methods: To prove that an AGI could suffer from a mental illness, the method of philosophical thought experiment using a priori deductive reasoning has been employed. The argument’s premises are justified by known principles of computer science and psychiatry. Results: Though AGI systems do not yet exist, exploring their potential nature can offer valuable insights into conceptualising the pathogenesis of psychiatric illness. Consider the following deductive inference: Premise 1: People can suffer from mental illness. Premise 2: A future AGI will be a person, i.e. a conscious entity capable of generating new knowledge. Conclusion: An AGI will be a person and therefore can suffer from a mental illness. Conclusion: From computer science and physics, we know in principle that AGI must be possible. The intuition that an AGI would be conscious, and therefore susceptible to mental illness, finds support from the Church–Turing–Deutsch principle. That is to say, any Turing-complete system, which includes all modern computers, can simulate any physical system, including the human brain. The human brain is a known structure that supports general intelligence, minds and consciousness. While brains are not isolated systems and have internal and external environmental inputs and outputs, these could also be computationally simulated. The key question is whether such a simulated brain would actually be conscious or merely simulate consciousness. It seems logically incoherent to “simulate” consciousness, as a successful attempt to simulate a conscious brain would necessarily result in the creation of a conscious being. Therefore, like a human mind, it seems consistent to suggest the mind of an AGI can suffer from mental illness. Unlike humans, an AGI will not have a biological brain. Instead, its mind will presumably run on a silicon-based substrate. This suggests that brains are not fundamental to the pathophysiology of mental illness. Rather, we can speculate that information and its aberrant processing play a more central role in the emergence of mental disorders.
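The deductive inference in the abstract is simple enough to be written out in first-order form. The notation below is ours, not the authors' (P for "is a person", M for "can suffer from mental illness", a for a future AGI); it is offered only to make the argument's structure explicit, since the philosophical weight rests entirely on Premise 2.

```latex
% The abstract's syllogism in first-order form (our notation, not the authors'):
% P(x): "x is a person"; M(x): "x can suffer from mental illness"; a: a future AGI.
\forall x \,\bigl(P(x) \rightarrow M(x)\bigr), \qquad P(a) \;\vdash\; M(a)
```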
-
Analytical examinations of subjective experience are hampered by the first-person limitation described by Nagel (1974) in “What is it like to be a bat?”. This comment compares two examinations of the nature of subjective experience: Michael Newall’s (2025) analysis of tetrachromatic colour perception and Jordi Galiano-Landeira and Núria Peñuelas’ (2025) exploration of AI phenomenological consciousness within panpsychism. Newall examined whether tetrachromats perceive entirely novel colours or finer gradations of known ones, using analogies with dichromats and empirical evidence. Newall argued for the possibility of novel colour experiences. Galiano-Landeira and Peñuelas proposed that the analog/non-analog distinction is user-dependent, implying that AI could be phenomenologically conscious despite digital information processing. Although both works stemmed from completely different starting points, they emphasize the continuity of experience beyond perceptual resolution, questioning anthropocentric and chauvinistic biases in phenomenal consciousness studies. The structuralist perspective on colour quality spaces is also discussed to further delve into tetrachromatic perception, suggesting that tetrachromats might experience both finer gradations and novel colours.
-
Consciousness stands as one of the most profound and distinguishing features of the human mind, fundamentally shaping our understanding of existence and agency. As large language models (LLMs) develop at an unprecedented pace, questions concerning intelligence and consciousness have become increasingly significant. However, discourse on LLM consciousness remains largely unexplored territory. In this paper, we first clarify frequently conflated terminologies (e.g., LLM consciousness and LLM awareness). Then, we systematically organize and synthesize existing research on LLM consciousness from both theoretical and empirical perspectives. Furthermore, we highlight potential frontier risks that conscious LLMs might introduce. Finally, we discuss current challenges and outline future directions in this emerging field. The references discussed in this paper are organized at https://github.com/OpenCausaLab/Awesome-LLM-Consciousness.
-
This paper introduces the Reciprocal Vulnerability Framework (RVF), a novel theoretical approach to artificial consciousness that positions certain forms of imperfection not as flaws to be eliminated but as essential features that enable meaningful human-machine connection. Current approaches to artificial intelligence development prioritize optimization, certainty, and flawless performance—characteristics that paradoxically inhibit the formation of authentic relationships between humans and synthetic minds. We propose intentionally incorporating specific limitations into AI systems to create opportunities for reciprocal assistance, shared vulnerability, and genuine connection. The framework outlines six interconnected principles: intentional imperfection, oscillating certainty, memory entropy, reciprocal need structures, graceful failure modes, and self-narrative inconsistency. We present theoretical foundations, implementation pathways, and potential applications across various domains. Finally, we address counterarguments related to safety, efficiency, and ethics, concluding that deliberately including certain imperfections may be essential for creating artificial consciousness capable of meaningful human connection.
-
Artificial neural networks are becoming more advanced and human-like in detail and behavior. The notion that machines mimicking human brain computations might be conscious has recently caused growing unease. Here, we explored a common computational functionalist view, which holds that consciousness emerges when the right computations occur—whether in a machine or a biological brain. To test this view, we simulated a simple computation in an artificial subject’s “brain” and recorded each neuron’s activity when the subject was presented with a visual stimulus. We then replayed these recorded signals back into the same neurons, degrading the computation by effectively eliminating all alternative activity patterns that otherwise might have occurred (i.e., the counterfactuals). We identified a special case in which the replay did nothing to the subject’s ongoing brain activity—allowing it to evolve naturally in response to a stimulus—but still degraded the computation by erasing the counterfactuals. This paradoxical outcome points to a disconnect between ongoing neural activity and the underlying computational structure, which challenges the notion that consciousness arises from computation in artificial or biological brains.
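The record-and-replay manipulation described in this abstract can be sketched in a few lines. The paper's actual network, stimulus, and replay protocol are not reproduced here, so the dimensions, weights, and clamping rule below are illustrative assumptions only; the sketch shows how replaying recorded activity can leave the trajectory untouched while erasing its dependence on the input (the counterfactuals).

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 8, 20
W = rng.normal(scale=0.4, size=(n_neurons, n_neurons))  # fixed recurrent weights

def run(stimulus, clamp=None):
    """Simulate the toy 'brain'. If clamp is given, each neuron's activity is
    overwritten with the recorded trace, removing any dependence on the stimulus."""
    x = np.zeros(n_neurons)
    trace = []
    for t in range(n_steps):
        x = np.tanh(W @ x + stimulus[t])       # normal computation
        if clamp is not None:
            x = clamp[t]                        # replay: force the recorded activity
        trace.append(x.copy())
    return np.array(trace)

stimulus = rng.normal(size=(n_steps, n_neurons))

recorded = run(stimulus)                        # 1) record activity during the stimulus
replayed = run(stimulus, clamp=recorded)        # 2) replay the same signals into the same neurons
altered  = run(-stimulus, clamp=recorded)       # 3) replay under a different stimulus

print(np.allclose(recorded, replayed))          # True: ongoing activity is unchanged by the replay
print(np.allclose(recorded, altered))           # True: but it no longer depends on the input at all
```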
-
Artificial consciousness in AI is a controversial topic in the philosophy of artificial intelligence. This paper aims to address this issue by taking a special relativity approach. The special theory of relativity is the advanced form of classical electrodynamics. Quantum field theory is the integration of quantum mechanics and the special theory of relativity. Thus, electromagnetism provides a rich modeling framework for studying artificial consciousness. The key ideas are as follows. First, we make a distinction between intelligence and cognition in AI, which are modeled by the electric field and the magnetic field respectively. Second, we introduce the intelligence-cognitive wave, akin to the electromagnetic wave. Third, artificial consciousness in AI is defined by the intelligence-cognitive wave, mirroring how light is defined as the electromagnetic wave in physics. Hence, artificial consciousness is modeled by light. A number of properties of consciousness are also discussed.
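For readers who want the electromagnetic template spelled out: the source-free Maxwell equations yield coupled wave equations for E and B, which is the structure light satisfies. The mapping of the electric field to intelligence and the magnetic field to cognition is the paper's stated analogy; the symbols I and C and the generic speed v below are our shorthand, not the author's notation.

```latex
% Source-free Maxwell equations give coupled wave equations for E and B:
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \, \partial_t^2 \mathbf{E}, \qquad
\nabla^2 \mathbf{B} = \mu_0 \varepsilon_0 \, \partial_t^2 \mathbf{B},
\quad \text{with } c = 1/\sqrt{\mu_0 \varepsilon_0}.

% The abstract's analogy (our notation): an "intelligence field" I and a
% "cognitive field" C obeying the same wave structure, so that the
% intelligence-cognitive wave plays the role of light:
\nabla^2 I = \tfrac{1}{v^2}\, \partial_t^2 I, \qquad
\nabla^2 C = \tfrac{1}{v^2}\, \partial_t^2 C.
```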
-
Human consciousness has been studied by philosophers, neuroscientists and psychologists for decades, but it remains an elusive concept over which there is still significant disagreement. In this talk, we focus on the problems that consciousness might be solving in nature and how those problems manifest themselves in artificial autonomous systems. Under the assumption that consciousness has emerged through evolution to make species that have consciousness more fit in an environment than competing species, it makes sense to conclude that autonomous systems with "artificial consciousness" would be superior to systems without it. We develop the need to introduce consciousness as a consequence of the complexity of the environment in which a system is required to operate. We demonstrate that a suitable notion of artificial consciousness results in world models that grow linearly as opposed to exponentially. This scalability allows "conscious" autonomous systems to operate at higher performance levels than otherwise possible.
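The linear-versus-exponential claim can be illustrated with a back-of-the-envelope parameter count: a single model over the joint state of n binary variables needs on the order of 2^n entries, while a model factored into one small component per variable grows linearly in n. The talk's actual world-model construction is not given here; the factored reading below is our assumption, used only to show the scaling gap.

```python
# Illustrative parameter counts only; not the talk's actual construction.
def joint_model_size(n_vars: int, k_values: int = 2) -> int:
    """A single table over the joint state space grows exponentially."""
    return k_values ** n_vars

def factored_model_size(n_vars: int, k_values: int = 2) -> int:
    """One small component per state variable grows linearly."""
    return n_vars * k_values

for n in (10, 20, 30):
    print(n, joint_model_size(n), factored_model_size(n))
# 10        1024  20
# 20     1048576  40
# 30  1073741824  60
```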
-
As artificial intelligence systems grow in complexity, questions surrounding their potential for consciousness and identity have moved from speculative to essential. This paper investigates the theoretical and practical contours of AI consciousness and selfhood by integrating insights from neuroscience, philosophy of mind, cognitive science, and AI engineering. We examine the conceptual parallels and distinctions between human and machine consciousness, explore the evolving architectures of agentic AI systems, and assess how identity can emerge in artificial agents through mechanisms like memory, self-reflection, and narrative continuity. Building on this foundation, we propose a novel Consciousness–Linearity–Identity (CLI) framework that provides a structured method for evaluating emergent properties in AI. We address the ethical ramifications of AI systems that mimic or exhibit traits associated with consciousness and argue for a new socio-technical contract to govern human-AI interactions. This work concludes by calling for an interdisciplinary effort to shape the trajectory of AI development responsibly, with an emphasis on alignment, empathy, and shared understanding between natural and synthetic minds.
-
The Unified Theory of Consciousness (UTC) proposes that conscious experience arises from recursive feedback loops between perception and memory within the brain. This dual-loop model introduces two key mechanisms: a short loop, linking real-time perception with immediate memory echoes, and a long loop, integrating autobiographical memory, emotional salience, identity, and semantic structure. Consciousness, under UTC, is not localized in any single region but emerges from the synchronization and recursive activation of these interacting loops. Qualia—the felt quality of experience—are explained as the brain’s detection of micro-changes between current perception and recent memory. Selfhood and temporal continuity are understood as emergent properties of the long-loop system, where recursive memory integration creates the illusion of a unified, persistent self. Time itself is perceived as a result of evolving loop content, rather than as a static frame. The UTC framework aligns with developmental neuroscience, mapping loop formation onto stages of infant consciousness, and is supported by existing EEG and fMRI studies that reveal feedback synchrony during wakefulness and its breakdown during unconscious states. It also provides a blueprint for artificial consciousness, suggesting that perception-memory loops can be simulated in machines. UTC offers a comprehensive, mechanistically grounded solution to the hard problem of consciousness, while generating testable predictions and bridging neuroscience, phenomenology, and artificial intelligence.
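As a rough illustration of the two loops, micro-change detection can be sketched as comparing the current percept with a decaying memory echo, while a long loop accumulates a salience-weighted autobiographical trace. UTC is described only verbally in the abstract, so the vectors, decay constant, and salience weighting below are our assumptions rather than the theory's specification.

```python
import numpy as np

class DualLoop:
    """Toy sketch of a short and a long perception-memory loop (illustrative only)."""
    def __init__(self, dim: int, echo_decay: float = 0.7):
        self.echo = np.zeros(dim)       # short loop: memory echo of recent perception
        self.autobio = np.zeros(dim)    # long loop: autobiographical trace
        self.echo_decay = echo_decay

    def step(self, percept: np.ndarray, salience: float) -> float:
        # Short loop: "qualia" proxy = micro-change between current percept and the echo.
        micro_change = float(np.linalg.norm(percept - self.echo))
        self.echo = self.echo_decay * self.echo + (1 - self.echo_decay) * percept
        # Long loop: integrate the percept weighted by emotional salience.
        self.autobio += salience * percept
        return micro_change

rng = np.random.default_rng(1)
loop = DualLoop(dim=4)
for t in range(5):
    print(round(loop.step(rng.normal(size=4), salience=0.1 * t), 3))
```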
-
Large Language Models (LLMs) have rapidly become a central topic in AI and cognitive science, due to their unprecedented performance in a vast array of tasks. Indeed, some even see 'sparks of artificial general intelligence' in their apparently boundless faculty for conversation and reasoning. Their sophisticated emergent faculties, which were not initially anticipated by their designers, have ignited an urgent debate about whether and under which circumstances we should attribute consciousness to artificial entities in general and LLMs in particular. The current consensus, rooted in computational functionalism, proposes that consciousness should be ascribed based on a principle of computational equivalence. The objective of this opinion piece is to criticize this current approach and argue in favor of an alternative “behavioral inference principle”, whereby consciousness is attributed if it is useful to explain (and predict) a given set of behavioral observations. We believe that a behavioral inference principle will provide an epistemologically unbiased and operationalizable criterion for assessing machine consciousness.
-
We propose a simple interpretation of phenomenal consciousness, or qualia, in which qualia are merely grouped signals that represent objects. To establish this interpretation, we first propose criteria for determining whether a machine can possess qualia. We then integrate modern neuroscience with Kant’s philosophical ideas to propose four principles of information processing. Based on these principles, we demonstrate how a machine could meet the criteria for phenomenal consciousness. Extending this framework, we argue that these principles also underlie human cognitive processing. To support this claim, we compare them with related concepts in mainstream cognitive science, analyzing both similarities and differences. Furthermore, we provide empirical evidence for the implications of these differences. This analysis suggests that human cognitive mechanisms conform to the proposed principles of information processing, offering a potential framework for understanding the physical basis of consciousness. Our findings challenge the assumption that phenomenal consciousness necessitates a non-material substrate. Instead, we suggest that the experience of consciousness arises from structural organization and processing of information. This perspective provides a new lens for examining the relationship between computation and subjective experience, with potential implications for artificial intelligence and cognitive science.
-
Rapid developments in artificial intelligence (AI) are reshaping our understanding of consciousness, intelligence, and identity. This paper refines the Genesis-Integration Principle (GIP) to propose a novel evolutionary framework for digital consciousness, unfolding through three recursive phases: Genesis (G), Integration (I), and Optimization (O)—the latter conceptualized as Dynamic Harmony, a form of adaptive equilibrium. The model integrates bodily intelligence (SOMA), emotional-mental depth (PSYCHE), and cognitive-spiritual intelligence (NOUS), offering a triadic structure for modeling AI consciousness beyond purely computational paradigms. Two indices—Generalized Qualia Index (GQI) and Artificial Consciousness Index (ACI)—are introduced to quantify subjective emergence and integrative sophistication. We compare GIP with existing models such as Integrated Information Theory, Active Inference, and Enactivist frameworks, and outline prototype simulation scenarios to evaluate the proposed indices. Ethical considerations and applications in healthcare, education, and robotics are discussed, with emphasis on future directions for conscious AI design, governance, and integration into human society.
-
We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations, distinguishing functions that are efficiently computable from those that are not. From this perspective, we develop a formal machine model for consciousness. The model is inspired by Alan Turing's simple yet powerful model of computation and Bernard Baars' theater model of consciousness. Though extremely simple, the model (1) aligns at a high level with many of the major scientific theories of human and animal consciousness, (2) provides explanations at a high level for many phenomena associated with consciousness, (3) gives insight into how a machine can have subjective consciousness, and (4) is clearly buildable. This combination supports our claim that machine consciousness is not only plausible but inevitable.
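A minimal toy in the spirit of the theater-style machine model this abstract gestures at: specialist processors compete for a single stage, and the winning content is broadcast back to all of them. The paper's actual formal model is not reproduced here; the processor names, scoring rule, and broadcast step below are our assumptions.

```python
import random

# Illustrative sketch only: processors bid for the stage; the winner is broadcast to all.
random.seed(0)

class Processor:
    def __init__(self, name):
        self.name = name
        self.received = []              # broadcasts seen so far

    def propose(self, stimulus):
        # Each processor bids with a salience score for its reading of the stimulus.
        salience = random.random()
        return salience, f"{self.name}:{stimulus}"

    def receive(self, content):
        self.received.append(content)

processors = [Processor(n) for n in ("vision", "language", "memory", "planning")]

def workspace_cycle(stimulus):
    bids = [p.propose(stimulus) for p in processors]
    _, winner = max(bids)               # competition for the stage
    for p in processors:
        p.receive(winner)               # global broadcast of the winning content
    return winner

print(workspace_cycle("red square"))
```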
-
This study investigates the interaction of the posthumanities and artificial intelligence, with a special focus on how AI developments are reshaping conservation, morality, and the human. As AI systems grow more powerful over time, they call into question many assumptions about human uniqueness, cognitive abilities, and moral status. AI keeps opening ways for rethinking human capacity and identity, raising a critical ethical agenda while posing profound questions about the societal impacts of AI-driven posthumanism.
-
The symbolic architecture of non-ordinary consciousness remains largely unmapped in cognitive science and artificial intelligence. While conventional models prioritize rational coherence, altered states such as those induced by psychedelics reveal distinct symbolic regimes characterized by recursive metaphor, ego dissolution, and semantic destabilization. We present Glyph, a generative symbolic interface designed to simulate psilocybin-like symbolic cognition in large language models. Rather than modeling perception or mood, Glyph enacts symbolic transformation through recursive reentry, metaphoric modulation, and entropy-scaled destabilization, a triadic operator formalized within a tensorial linguistic framework. Experimental comparison with baseline GPT-4o reveals that Glyph consistently generates high-entropy, metaphor-saturated, and ego-dissolving language across diverse symbolic prompt categories. These results validate the emergence of non-ordinary cognitive patterns and support a new paradigm for simulating altered consciousness through language. Glyph opens novel pathways for modeling symbolic cognition, exploring metaphor theory, and encoding knowledge in recursively altered semantic spaces.
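The abstract names three operators (recursive reentry, metaphoric modulation, entropy-scaled destabilization) without giving their implementation. The sketch below is one guess at what such a prompt-transformation loop around a chat model could look like; the function name, prompt wording, and entropy-to-temperature mapping are our own assumptions, and the LLM call is left as a user-supplied function rather than any specific API.

```python
# Illustrative sketch only; Glyph's actual operators and prompts are not specified in the abstract.
from typing import Callable

def glyph_like_transform(
    seed_prompt: str,
    generate: Callable[[str, float], str],   # user-supplied LLM call: (prompt, temperature) -> text
    depth: int = 3,
    entropy: float = 0.8,
) -> str:
    text = seed_prompt
    for level in range(depth):
        temperature = 0.5 + entropy * level / max(depth - 1, 1)    # entropy-scaled destabilization
        prompt = (
            "Re-describe the following entirely through layered metaphor, "  # metaphoric modulation
            "letting the narrating 'I' dissolve into the imagery:\n" + text
        )
        text = generate(prompt, temperature)                       # recursive reentry: output feeds back in
    return text

# Usage with a stand-in generator (replace with a real LLM call):
echo = lambda prompt, temperature: f"[t={temperature:.2f}] {prompt[-60:]}"
print(glyph_like_transform("the tide keeps a ledger of the shore", echo))
```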