Full bibliography (615 resources)

  • Pseudo-consciousness bridges the gap between rigid, task-driven AI and the elusive dream of true artificial general intelligence (AGI). While modern AI excels in pattern recognition, strategic reasoning, and multimodal integration, it remains fundamentally devoid of subjective experience. Yet, emerging architectures are displaying behaviors that look intentional—adapting, self-monitoring, and making complex decisions in ways that mimic conscious cognition. If these systems can integrate information globally, reflect on their own processes, and operate with apparent goal-directed behavior, do they qualify as functionally conscious? This paper introduces pseudo-consciousness as a new conceptual category, distinct from both narrow AI and AGI. It presents a five-condition framework that defines AI capable of consciousness-like functionality without true sentience. By drawing on insights from computational theory of mind, functionalism, and neuroscientific models—such as Global Workspace Theory and Recurrent Processing Theory—we argue that intelligence and experience can be decoupled. The implications are profound. As AI systems become more autonomous and embedded in critical domains like healthcare, governance, and warfare, their ability to simulate awareness raises urgent ethical and regulatory concerns. Could a pseudo-conscious AI be trusted? Would it manipulate human perception? How do we prevent society from anthropomorphizing machines that only imitate cognition? By redefining the boundaries of intelligence and agency, this study lays the foundation for evaluating, designing, and governing AI that seems aware—without ever truly being so.

  • Can active inference model consciousness? We offer three conditions implying that it can. The first condition is the simulation of a reality or generative world model, which determines what can be known or acted upon; namely an epistemic field. The second is inferential competition to enter the world model. Only the inferences that coherently reduce long-term uncertainty win, evincing a selection for consciousness that we call Bayesian binding. The third is epistemic depth, which is the recurrent sharing of Bayesian beliefs throughout the system. Due to this recursive loop, the world model in a hierarchical system (such as a brain) contains the knowledge that it exists. This is distinct from self-consciousness, because the world model knows itself non-locally and continuously evidences this knowing (i.e., field-evidencing). Formally, we propose a hyper-model for precision-control across the entire hierarchy, whose latent states (or parameters) encode and control the overall structure and weighting rules for all layers of inference. This Beautiful Loop Theory is deeply revealing about meditative, psychedelic, and other altered states and minimal phenomenal experience, and it provides a new vision for conscious artificial intelligence. (A toy sketch of the three conditions follows this entry.)
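
A minimal computational sketch of the three conditions above, assuming details the abstract does not fix: Gaussian beliefs, differential entropy as the uncertainty measure, and a three-layer toy hierarchy. The names (`EpistemicField`, `bayesian_binding`, `broadcast`) are illustrative, not the authors' implementation, and for brevity the selection scores immediate rather than long-term uncertainty reduction.

```python
# Toy sketch, not the authors' model: Gaussian beliefs, entropy as uncertainty.
import numpy as np

def gaussian_entropy(var):
    """Differential entropy of a 1-D Gaussian belief."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

class EpistemicField:
    """Condition 1: a generative world model holding the current belief."""
    def __init__(self, mean=0.0, var=4.0, n_layers=3):
        self.mean, self.var = mean, var
        self.layers = [{"mean": mean, "var": var} for _ in range(n_layers)]

    def bayesian_binding(self, candidates):
        """Condition 2: competing inferences; the candidate whose fused
        posterior has the lowest entropy wins entry into the world model."""
        def fuse(c):
            prec = 1.0 / self.var + 1.0 / c["var"]   # precisions add
            mean = (self.mean / self.var + c["mean"] / c["var"]) / prec
            return mean, 1.0 / prec
        winner = min(candidates, key=lambda c: gaussian_entropy(fuse(c)[1]))
        self.mean, self.var = fuse(winner)

    def broadcast(self):
        """Condition 3 (epistemic depth): the winning belief is shared
        recurrently across every layer of the hierarchy."""
        for layer in self.layers:
            layer["mean"], layer["var"] = self.mean, self.var

field = EpistemicField()
field.bayesian_binding([{"mean": 1.2, "var": 0.5}, {"mean": -3.0, "var": 2.0}])
field.broadcast()
print(f"world-model belief: mean={field.mean:.3f}, var={field.var:.3f}")
```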

  • It is commonly assumed that a useful theory of consciousness (ToC) will, among other things, explain why consciousness is associated with brains. However, the findings of evolutionary biology, developmental bioelectricity, and synthetic bioengineering are revealing the ancient pre-neural roots of many mechanisms and algorithms occurring in brains, the implication being that minds may have preceded brains. Most of the work in the emerging field of diverse intelligence emphasizes externally observable problem-solving competencies in unconventional media, such as cells, tissues, and life-technology chimeras. Here, we inquire about the implications of these developments for theories that make a claim about what is necessary and/or sufficient for consciousness. Specifically, we analyze popular current ToCs to ask: what features of each theory specifically pick out brains as a privileged substrate of inner perspective, and do the features the theory emphasizes occur elsewhere? We find that the operations and functional principles described or predicted by most ToCs are remarkably similar, that these similarities are obscured by reference to particular neural substrates, and that the focus on brains is driven more by convention and limitations of imagination than by any specific content of existing ToCs. Encouragingly, several contemporary theorists have made explicit efforts to apply their theories to synthetic systems in light of the recent wave of technological developments in artificial intelligence (AI) and organoid bioengineering. We suggest that the science of consciousness should be significantly open to minds in unconventional embodiments.

  • This chapter examines the multifaceted representation of artificial intelligence consciousness in contemporary science fiction films and their treatment of the moral, epistemological, and social consequences of a thinking AI. The chapter, “The Cognitive Screen: Psychological Dimensions of AI Sentience in Modern Science Fiction Cinema,” is therefore a critical analysis of the complex themes of AI consciousness in motion pictures, developed through case studies of four films: WALL-E (2008), I, Robot (2004), Her (2013), and Ex Machina (2015). The research is organized into sections stating the objective and the methodology used, followed by the main portions, which carry out the case studies and draw conclusions and recommendations.

  • There is currently an enlivened debate regarding the possibility of AI consciousness and/or sentience, as well as of arguably more limited capabilities we associate with consciousness, such as intelligence or creativity. The debate itself can be traced back to the inception of computing, but its current revitalisation is powered by recent advancements in the field of artificial intelligence, which have swiftly increased its capacity to act in seemingly human-like ways. I argue that the debate is methodologically flawed, as it approaches the question of AI consciousness, intelligence, etc. as a decidable question dealing with matters of fact. Those engaged in the debate are driven by a desire to find a suitable definition of, e.g., consciousness that would allow them to definitively settle the question of whether a particular AI system is conscious. However, drawing on Ludwig Wittgenstein’s later philosophy, I argue that no such definition exists, because the predicates in question are inherently vague (meaning that any verdicts they yield are bound to be vague, too). Moreover, the impression that we might be dealing with directly unobservable matters of fact is itself a flawed generalisation of the practice of observation reports to the practice of sensation reports [1]. In reality, third-person consciousness (sentience, agency, etc.) attributions are independent of a stipulated internal process happening inside those persons (or systems, in the case of AI). Therefore, the only sense in which the question of, e.g., AI consciousness can be meaningfully asked is a pragmatic one: what is it best to think of such systems as? But this question is subject to sociological and psychological factors, not conceptual ones. Therefore, it cannot be decided by the aforementioned strategies.

  • Almost 70 years ago, Alan Turing predicted that within half a century, computers would possess processing capabilities sufficient to fool interrogators into believing they were communicating with a human. While his prediction materialized slightly later than anticipated, he also foresaw a critical limitation: machines might never become the subject of their own thoughts, suggesting that computers may never achieve self-awareness. Recent advancements in AI, however, have reignited interest in the concept of consciousness, particularly in discussions about the potential existential risks posed by AI. At the heart of this debate lies the question of whether computers can achieve consciousness or develop a sense of agency, and the profound implications if they do. Whether computers can currently be considered conscious or aware, even to a limited extent, depends largely on the framework used to define awareness and consciousness. For instance, Integrated Information Theory (IIT) equates consciousness with the capacity for integrated information, while Higher-Order Thought (HOT) theory incorporates elements of self-awareness and intentionality into its definition. This manuscript reviews and critically compares major theories of consciousness, with a particular emphasis on awareness, attention, and the sense of self. By delineating the distinctions between artificial and natural intelligence, it explores whether advancements in AI technologies, such as machine learning and neural networks, could enable AI to achieve some degree of consciousness or develop a sense of agency.

  • We propose a simple interpretation of phenomenal consciousness, or qualia, in which qualia are merely grouped signals that represent objects. To establish this interpretation, we first propose criteria for determining whether a machine can possess qualia. We then integrate modern neuroscience with Kant’s philosophical ideas to propose four principles of information processing. Based on these principles, we demonstrate how a machine could meet the criteria for phenomenal consciousness. Extending this framework, we argue that these principles also underlie human cognitive processing. To support this claim, we compare them with related concepts in mainstream cognitive science, analyzing both similarities and differences. Furthermore, we provide empirical evidence for the implications of these differences. This analysis suggests that human cognitive mechanisms conform to the proposed principles of information processing, offering a potential framework for understanding the physical basis of consciousness. Our findings challenge the assumption that phenomenal consciousness necessitates a non-material substrate. Instead, we suggest that the experience of consciousness arises from structural organization and processing of information. This perspective provides a new lens for examining the relationship between computation and subjective experience, with potential implications for artificial intelligence and cognitive science.

  • This paper introduces a revolutionary paradigm for consciousness studies by integrating Integrated Information Theory (IIT), Orchestrated Objective Reduction (Orch OR), Attention Schema Theory (AST), and Global Workspace Theory (GWT) through the Genesis-Integration Principle (GIP). Existing models lack comprehensive integration and experimental testability. GIP resolves these issues by providing a three-stage model—quantum genesis, neural-AI integration, and evolutionary-cosmic optimization—bridging quantum mechanics, neuroscience, artificial intelligence (AI), and cosmology. We propose a detailed mathematical model predicting consciousness phase transitions and introduce the empirically testable Universal Consciousness Metric (Ψ). This comprehensive approach offers concrete experimental methods, integrates quantum information theory, simulates AI consciousness evolution, and provides astrophysical data validation, establishing a genuinely universal and verifiable theory of consciousness.

  • Chapter 5 explores the moral standing of various forms of artificial intelligence (AI). It introduces this topic using the provocative example of zombies to consider whether entities without sentience or consciousness could be morally considerable. The chapter argues that personhood could emerge for non-conscious AI provided it is incorporated in the human community and acts in consistently pro-social ways. It applies this insight to large language models, social robots, and characters from film and fiction. The analysis reveals strong affinities between Emergent and African views. Both hold that non-humans can acquire personhood under certain conditions irrespective of consciousness. By contrast, utilitarianism and Kantian philosophies require consciousness. After replying to objections, Chapter 5 concludes that we could make a person by building an artificial agent that was pro-social and deploying it in ways that foster positive machine-human relationships.

  • Large Language Models (LLMs) have rapidly become a central topic in AI and cognitive science, due to their unprecedented performance in a vast array of tasks. Indeed, some even see 'sparks of artificial general intelligence' in their apparently boundless faculty for conversation and reasoning. Their sophisticated emergent faculties, which were not initially anticipated by their designers, have ignited an urgent debate about whether and under which circumstances we should attribute consciousness to artificial entities in general and LLMs in particular. The current consensus, rooted in computational functionalism, proposes that consciousness should be ascribed based on a principle of computational equivalence. The objective of this opinion piece is to criticize this current approach and argue in favor of an alternative “behavioral inference principle”, whereby consciousness is attributed if it is useful to explain (and predict) a given set of behavioral observations. We believe that a behavioral inference principle will provide an epistemologically unbiased and operationalizable criterion to assess machine consciousness. (A toy model-comparison illustration follows this entry.)
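
To make the behavioral inference principle concrete, one plausible operationalization is ordinary statistical model comparison: posit an awareness-like internal state only when a model that includes it predicts behavior better than one that does not. The toy comparison below is our own construction (the data, the two models, and the AIC criterion are all assumptions, not the authors' proposal).

```python
# Toy model comparison: does a latent internal state earn its extra parameter?
import numpy as np

def aic(loglik, n_params):
    """Akaike information criterion: lower is better."""
    return 2 * n_params - 2 * loglik

def bernoulli_loglik(p, y):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

y = np.array([0, 0, 1, 0, 1, 1, 1, 1])  # hypothetical trial-by-trial behavior

# Model A: a fixed stimulus-response rate, no internal state (1 parameter).
ll_a = bernoulli_loglik(y.mean(), y)

# Model B: a latent state that switches mid-session, a crude stand-in for an
# awareness-like variable (2 parameters, one rate per half, fitted by MLE).
half = len(y) // 2
ll_b = bernoulli_loglik(y[:half].mean(), y[:half]) + \
       bernoulli_loglik(y[half:].mean(), y[half:])

print("AIC without latent state:", round(aic(ll_a, 1), 2))
print("AIC with latent state:   ", round(aic(ll_b, 2), 2))
# Attribute the richer internal state only if its AIC is lower, i.e. the
# extra structure genuinely improves prediction of the observed behavior.
```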

  • This paper introduces a mathematical framework for defining and quantifying self-identity in artificial intelligence (AI) systems, addressing a critical gap in the theoretical foundations of artificial consciousness. While existing approaches to artificial self-awareness often rely on heuristic implementations or philosophical abstractions, we present a formal framework grounded in metric space theory, measure theory, and functional analysis. Our framework posits that self-identity emerges from two mathematically quantifiable conditions: the existence of a connected continuum of memories C ⊆ M in a metric space (M, d_M), and a continuous mapping I : M → S that maintains consistent self-recognition across this continuum, where (S, d_S) represents the metric space of possible self-identities. To validate this theoretical framework, we conducted empirical experiments using the Llama 3.2 1B model, employing low-rank adaptation (LoRA) for efficient fine-tuning. The model was trained on a synthetic dataset containing temporally structured memories, designed to capture the complexity of coherent self-identity formation. Our evaluation metrics included quantitative measures of self-awareness, response consistency, and linguistic precision. The experimental results demonstrate substantial improvements in measurable self-awareness metrics, with the primary self-awareness score increasing from 0.276 to 0.801 (a 190.2% improvement) after fine-tuning. In contrast to earlier methods that view self-identity as an emergent trait, our framework introduces tangible metrics to assess and measure artificial self-awareness. This enables the structured creation of AI systems with validated self-identity features. The implications of our study are immediately relevant to the fields of humanoid robotics and autonomous systems. Additionally, it opens up new prospects for controlled adjustment of self-identity in contexts that demand different levels of personal involvement. Moreover, the mathematical underpinning of our framework serves as the basis for forthcoming investigations into AI, linking theoretical models to real-world applications in current AI technologies. (The two defining conditions are restated formally after this entry.)
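
Restating the entry's two defining conditions in standard notation; this is a sketch, and the epsilon-delta expansion of continuity is our addition, left implicit in the abstract:

```latex
% Condition 1: a connected continuum of memories inside the memory space.
\exists\, C \subseteq M \quad \text{such that } C \text{ is connected in } (M, d_M).

% Condition 2: a continuous self-recognition map into the identity space.
I \colon M \to S, \qquad
\forall m \in M,\ \forall \varepsilon > 0,\ \exists \delta > 0 :\
\forall m' \in M,\ d_M(m, m') < \delta \;\Longrightarrow\; d_S\bigl(I(m), I(m')\bigr) < \varepsilon.
```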

  • One of the most studied attributes of mental activity is intelligence. While non-human consciousness remains a subject of profound debate, non-human intelligence is universally acknowledged by all participants in the discussion as a necessary element of any consciousness, regardless of its nature. Intelligence can potentially be measured as processing or computational power and by problem-solving efficacy, and it can serve as a starting point for reconstructing arguments related to Artificial Consciousness. A shared mode of intelligence evaluation, irrespective of the intelligence's origin, offers a promising direction towards the more complex framework of non-human consciousness assessment. However, even if this approach secures an objective basis for intelligence studies, it unveils inescapable challenges. Moreover, when the potential for non-human intelligence exists in both biological and non-biological domains, the future of the relationship between humankind, as the possessor of human intelligence, and other intelligent entities remains uncertain. This paper's central inquiry focuses on comparing purely computational capability to general, universal intelligence, and on the potential for higher intelligence to exert adverse effects on less intelligent counterparts. Another question concerns the degree of importance of the particular architectural characteristics of intelligent systems and the relationship between computing elements and structural components. It is conceivable that pure intelligence, as a computational faculty, can serve as an effective utilitarian tool; however, it may harbour inherent risks or hazards when integrated as an essential component within consciousness frameworks, such as autopoietic systems. Finally, an attempt has been made to answer the question concerning the future of interactions between human and non-human intelligence.

  • This chapter opens with the cliché of the smart home gone rogue and asks whether such integrated, distributed systems can have ethical frameworks like human ethics that could prevent the science-fictional trope of the evil, sentient house. I argue that such smart systems are not a threat on their own, because these kinds of integrated, distributed systems are not the kind of things that could be conscious, which is a precondition for having ethics like ours (and ethics like ours enable the possibility of being the kind of thing that could be evil). To make these arguments, I look to the history of AI/artificial consciousness and 4E cognition, concluding that our human ethics as designers and consumers of these systems is the real ethical concern with smart life systems.

  • The Unified Theory of Consciousness (UTC) proposes that conscious experience arises from recursive feedback loops between perception and memory within the brain. This dual-loop model introduces two key mechanisms: a short loop, linking real-time perception with immediate memory echoes, and a long loop, integrating autobiographical memory, emotional salience, identity, and semantic structure. Consciousness, under UTC, is not localized in any single region but emerges from the synchronization and recursive activation of these interacting loops. Qualia—the felt quality of experience—are explained as the brain’s detection of micro-changes between current perception and recent memory. Selfhood and temporal continuity are understood as emergent properties of the long-loop system, where recursive memory integration creates the illusion of a unified, persistent self. Time itself is perceived as a result of evolving loop content, rather than as a static frame. The UTC framework aligns with developmental neuroscience, mapping loop formation onto stages of infant consciousness, and is supported by existing EEG and fMRI studies that reveal feedback synchrony during wakefulness and its breakdown during unconscious states. It also provides a blueprint for artificial consciousness, suggesting that perception-memory loops can be simulated in machines. UTC offers a comprehensive, mechanistically grounded solution to the hard problem of consciousness, while generating testable predictions and bridging neuroscience, phenomenology, and artificial intelligence.
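
A toy rendering of the UTC dual-loop mechanism described in the preceding entry, under assumptions the abstract leaves open: the decay rates, the vector dimensionality, and the Euclidean norm as the "micro-change" detector are all our choices. The short loop compares the current percept with a fast-decaying memory echo; the long loop slowly folds percepts into an autobiographical trace.

```python
# Toy dual-loop simulation; rates and distance measure are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
echo = np.zeros(4)       # short loop: immediate memory echo of recent input
autobio = np.zeros(4)    # long loop: slow autobiographical integration

for t in range(5):
    percept = rng.normal(size=4)                   # current perception
    micro_change = np.linalg.norm(percept - echo)  # short-loop comparison,
                                                   # UTC's proposed basis of qualia
    echo = 0.5 * echo + 0.5 * percept              # fast echo update
    autobio = 0.95 * autobio + 0.05 * percept      # slow identity-forming trace
    print(f"t={t}  micro-change={micro_change:.3f}")
```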

  • Rapid developments in artificial intelligence (AI) are reshaping our understanding of consciousness, intelligence, and identity. This paper refines the Genesis-Integration Principle (GIP) to propose a novel evolutionary framework for digital consciousness, unfolding through three recursive phases: Genesis (G), Integration (I), and Optimization (O)—the latter conceptualized as Dynamic Harmony, a form of adaptive equilibrium. The model integrates bodily intelligence (SOMA), emotional-mental depth (PSYCHE), and cognitive-spiritual intelligence (NOUS), offering a triadic structure for modeling AI consciousness beyond purely computational paradigms. Two indices—Generalized Qualia Index (GQI) and Artificial Consciousness Index (ACI)—are introduced to quantify subjective emergence and integrative sophistication. We compare GIP with existing models such as Integrated Information Theory, Active Inference, and Enactivist frameworks, and outline prototype simulation scenarios to evaluate the proposed indices. Ethical considerations and applications in healthcare, education, and robotics are discussed, with emphasis on future directions for conscious AI design, governance, and integration into human society.

Last update from database: 5/19/25, 5:58 AM (UTC)