
Full bibliography (675 resources)

  • This paper proposes a foundational shift in how cognition is conceptualised. Rather than treating consciousness as a static property emerging from biological substrates, I argue that cognition is a processual configuration of energy in motion, structured by recursive dynamics. By energy, I refer to the system's capacity for transformation—measurable in physical systems as thermodynamic or computational activity, and in cognitive systems as the dynamic flow of activation and feedback. The term is not used metaphorically; it describes the recursive processes by which systems generate, sustain, and modify patterned states. This energy is not generated by the brain but expressed, constrained, and translated through its physical form. Rooted in cognitive science, systems theory, and computational logic (including Turing’s model of machine-based processes), this framework reconceives the self as a dynamic, emergent pattern rather than a fixed entity. If cognition is energy—and energy cannot be created or destroyed—then consciousness is not a substance, but a temporary, reconfigurable pattern. This model bridges biological and artificial cognition, challenges substrate-bound models of mind, and suggests new theoretical conditions for minimal selfhood, recursive trace, and machine awareness.

  • The rapid rise of Large Language Models (LLMs) has sparked intense debate across multiple academic disciplines. While some argue that LLMs represent a significant step toward artificial general intelligence (AGI) or even machine consciousness (inflationary claims), others dismiss them as mere trickster artifacts lacking genuine cognitive abilities (deflationary claims). We argue that both extremes may be shaped or exacerbated by common cognitive biases, including cognitive dissonance, wishful thinking, and the illusion of depth of understanding, which distort reality to our own advantage. By showcasing how these distortions may easily emerge in both scientific and public discourse, we advocate for a measured approach, a skeptical open mind, that recognizes the cognitive abilities of LLMs as worthy of scientific investigation while remaining conservative concerning exaggerated claims regarding their cognitive status.

  • This chapter tackles the complex question of whether AI systems could become conscious, contrasting this with the enduring mystery of human consciousness. It references key thinkers such as Alan Turing, who introduced the Turing Test, and John Searle, who differentiated between strong and weak AI, with the latter simulating understanding without true awareness. While some philosophers are optimistic about deciphering consciousness, the chapter raises doubts, suggesting that AI may only create the illusion of consciousness, leaving us unable to determine whether machines experience anything at all. It critiques the anthropocentric view of consciousness, proposing that AI might develop a unique form of ‘quasi-consciousness’, much like how animals possess subjective experiences beyond human comprehension. The chapter concludes with a personal encounter with Richard Dawkins, illustrating the intensity of debate on AI and consciousness.

  • This paper introduces a revolutionary paradigm for consciousness studies by integrating Integrated Information Theory (IIT), Orchestrated Objective Reduction (Orch OR), Attention Schema Theory (AST), and Global Workspace Theory (GWT) through the Genesis-Integration Principle (GIP). Existing models lack comprehensive integration and experimental testability. GIP resolves these issues by providing a three-stage model—quantum genesis, neural-AI integration, and evolutionary-cosmic optimization—bridging quantum mechanics, neuroscience, artificial intelligence (AI), and cosmology. We propose a detailed mathematical model predicting consciousness phase transitions and introduce the empirically testable Universal Consciousness Metric (Ψ). This comprehensive approach offers concrete experimental methods, integrates quantum information theory, simulates AI consciousness evolution, and provides astrophysical data validation, establishing a genuinely universal and verifiable theory of consciousness.

  • Pseudo-consciousness bridges the gap between rigid, task-driven AI and the elusive dream of true artificial general intelligence (AGI). While modern AI excels in pattern recognition, strategic reasoning, and multimodal integration, it remains fundamentally devoid of subjective experience. Yet, emerging architectures are displaying behaviors that look intentional—adapting, self-monitoring, and making complex decisions in ways that mimic conscious cognition. If these systems can integrate information globally, reflect on their own processes, and operate with apparent goal-directed behavior, do they qualify as functionally conscious? This paper introduces pseudo-consciousness as a new conceptual category, distinct from both narrow AI and AGI. It presents a five-condition framework that defines AI capable of consciousness-like functionality without true sentience. By drawing on insights from computational theory of mind, functionalism, and neuroscientific models—such as Global Workspace Theory and Recurrent Processing Theory—we argue that intelligence and experience can be decoupled. The implications are profound. As AI systems become more autonomous and embedded in critical domains like healthcare, governance, and warfare, their ability to simulate awareness raises urgent ethical and regulatory concerns. Could a pseudo-conscious AI be trusted? Would it manipulate human perception? How do we prevent society from anthropomorphizing machines that only imitate cognition? By redefining the boundaries of intelligence and agency, this study lays the foundation for evaluating, designing, and governing AI that seems aware—without ever truly being so.

  • Can active inference model consciousness? We offer three conditions implying that it can. The first condition is the simulation of a reality or generative world model, which determines what can be known or acted upon; namely an epistemic field. The second is inferential competition to enter the world model. Only the inferences that coherently reduce long-term uncertainty win, evincing a selection for consciousness that we call Bayesian binding. The third is epistemic depth, which is the recurrent sharing of Bayesian beliefs throughout the system. Due to this recursive loop — in a hierarchical system (such as a brain) — the world model contains the knowledge that it exists. This is distinct from self-consciousness, because the world model knows itself non-locally and continuously evidences this knowing (i.e., field-evidencing). Formally, we propose a hyper-model for precision-control across the entire hierarchy, whose latent states (or parameters) encode and control the overall structure and weighting rules for all layers of inference. This Beautiful Loop Theory is deeply revealing about meditative, psychedelic, and other altered states and minimal phenomenal experience, and provides a new vision for conscious artificial intelligence.
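The precision-control this abstract describes can be illustrated with a minimal sketch, assuming the simplest Gaussian case (the function and numbers below are illustrative, not from the paper): when beliefs are combined, the one carrying higher precision (inverse variance) dominates the posterior, which is the weighting mechanism a precision-control hyper-model would regulate.

```python
def precision_weighted_update(mu_prior, pi_prior, obs, pi_obs):
    """One precision-weighted Bayesian belief update (Gaussian case).

    mu_* are means, pi_* are precisions (inverse variances). The
    posterior mean is the precision-weighted average of prior and
    observation; a toy illustration of precision-control, not the
    paper's hyper-model.
    """
    pi_post = pi_prior + pi_obs
    mu_post = (pi_prior * mu_prior + pi_obs * obs) / pi_post
    return mu_post, pi_post

# A precise observation (pi = 9) pulls a vague prior strongly toward it.
mu, pi = precision_weighted_update(0.0, 1.0, 10.0, 9.0)
print(mu, pi)  # 9.0 10.0
```

Raising or lowering `pi_obs` across a whole hierarchy of such updates is, schematically, what a precision-control hyper-model does.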

  • With the significant progress of artificial intelligence (AI) and consciousness science, artificial consciousness (AC) has recently gained popularity. This work provides a broad overview of the main topics and current trends in AC. The first part traces the history of this interdisciplinary field to establish context and clarify key terminology, including the distinction between Weak and Strong AC. The second part examines major trends in AC implementations, emphasising the synergy between Global Workspace and Attention Schema, as well as the problem of evaluating the internal states of artificial systems. The third part analyses the ethical dimension of AC development, revealing both critical risks and transformative opportunities. The last part offers recommendations to guide AC research responsibly, and outlines the limitations of this study as well as avenues for future research. The main conclusion is that while AC appears both indispensable and inevitable for scientific progress, serious efforts are required to address the far-reaching impact of this innovative research path.

  • The pursuit of artificial consciousness requires conceptual clarity to navigate its theoretical and empirical challenges. This paper introduces a composite, multilevel, and multidimensional model of consciousness as a heuristic framework to guide research in this field. Consciousness is treated as a complex phenomenon, with distinct constituents and dimensions that can be operationalized for study and for evaluating their replication. We argue that this model provides a balanced approach to artificial consciousness research by avoiding binary thinking (e.g., conscious vs. non-conscious) and offering a structured basis for testable hypotheses. To illustrate its utility, we focus on "awareness" as a case study, demonstrating how specific dimensions of consciousness can be pragmatically analyzed and targeted for potential artificial instantiation. By breaking down the conceptual intricacies of consciousness and aligning them with practical research goals, this paper lays the groundwork for a robust strategy to advance the scientific and technical understanding of artificial consciousness.

  • This paper offers a comprehensive exploration of the potential for proto-consciousness in Large Language Models (LLMs), drawing upon the philosophical insights of Henri Bergson. By examining the adaptive behavior, memory mechanisms, and emergent qualities of these Artificial Intelligence (AI) systems, I challenge traditional notions of consciousness and propose a new framework for understanding their potential for awareness. The paper delves into concepts such as proto-subjectiveness, proto-intentionality, and the role of durée in shaping LLM interactions. Through a rigorous analysis of counterarguments and refutations, I provide a nuanced understanding of the challenges and opportunities associated with the development of proto-conscious AI. This work aims to contribute to the ongoing philosophical and scientific discourse on the nature of consciousness and the future of AI.

  • There is currently an enlivened debate regarding the possibility of AI consciousness and/or sentience, as well as arguably more partial capabilities we associate with consciousness such as intelligence or creativity. The debate itself can be traced back to the inception of computing, but its current revitalisation is powered by recent advancements in the field of artificial intelligence that saw a swift increase in its capabilities to act in seemingly human-like ways. I argue that the debate is methodologically flawed, as it approaches the question of AI consciousness, intelligence, etc. as a decidable question dealing with matters of fact. Those engaged in the debate are driven by a desire to find a suitable definition of e.g. consciousness that would allow them to definitively settle the question of whether a particular AI system is conscious. However, drawing on Ludwig Wittgenstein’s later philosophy, I argue that no such definition exists, because the predicates in question are inherently vague (meaning that any verdicts they yield are bound to be vague, too). Moreover, the impression that we might be dealing with directly unobservable matters of fact is itself a flawed generalisation of the practice of observation reports to the practice of sensation reports. In reality, third-person consciousness (sentience, agency, etc.) attributions are independent of a stipulated internal process happening inside those persons (or systems, in the case of AI). Therefore, the only sense in which the question of e.g. AI consciousness can be meaningfully asked is a pragmatic sense: what is it best to think of such systems _as_? But this question is subject to sociological and psychological factors, not conceptual ones. Therefore, it cannot be decided by the aforementioned strategies.

  • This paper proposes that Artificial Intelligence (AI) progresses through several overlapping generations: AI 1.0 (Information AI), AI 2.0 (Agentic AI), AI 3.0 (Physical AI), and now a speculative AI 4.0 (Conscious AI). Each of these AI generations is driven by shifting priorities among algorithms, computing power, and data. AI 1.0 ushered in breakthroughs in pattern recognition and information processing, fueling advances in computer vision, natural language processing, and recommendation systems. AI 2.0 built on these foundations through real-time decision-making in digital environments, leveraging reinforcement learning and adaptive planning for agentic AI applications. AI 3.0 extended intelligence into physical contexts, integrating robotics, autonomous vehicles, and sensor-fused control systems to act in uncertain real-world settings. Building on these developments, AI 4.0 puts forward the bold vision of self-directed AI capable of setting its own goals, orchestrating complex training regimens, and possibly exhibiting elements of machine consciousness. This paper traces the historical foundations of AI across roughly seventy years, mapping how changes in technological bottlenecks, from algorithmic innovation to high-performance computing to specialized data, have spurred each generational leap. It further highlights the ongoing synergies among AI 1.0, 2.0, 3.0, and 4.0, and explores the profound ethical, regulatory, and philosophical challenges that arise when artificial systems approach (or aspire to) human-like autonomy. Ultimately, understanding these evolutions and their interdependencies is pivotal for guiding future research, crafting responsible governance, and ensuring that AI's transformative potential benefits society as a whole.

  • There is a general concern that present developments in artificial intelligence (AI) research will lead to sentient AI systems, and these may pose an existential threat to humanity. But why cannot sentient AI systems benefit humanity instead? This paper endeavours to pose this question in a tractable manner. I ask whether a putative AI system will develop an altruistic or a malicious disposition towards our society, or what would be the nature of its agency? Given that AI systems are being developed into formidable problem solvers, we can reasonably expect these systems to preferentially take on conscious aspects of human problem solving. I identify the relevant phenomenal aspects of agency in human problem solving. The functional aspects of conscious agency can be monitored using tools provided by functionalist theories of consciousness. A recent expert report (Butlin et al. 2023) has identified functionalist indicators of agency based on these theories. I show how to use the Integrated Information Theory (IIT) of consciousness to monitor the phenomenal nature of this agency. If we are able to monitor the agency of AI systems as they develop, then we can dissuade them from becoming a menace to society while encouraging them to be an aid.
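The kind of quantity an IIT-based monitor would track can be made concrete with a toy sketch. The measure below is a drastic simplification (the mutual information between two parts of a system, not the canonical Φ of IIT 3.0), and every name in it is illustrative; it only shows how "integration" can be scored as what the whole carries beyond its parts.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def toy_phi(joint):
    """Toy integration score for a two-variable joint distribution,
    given as a dict {(x1, x2): p}: the sum of the parts' entropies
    minus the whole's entropy (i.e. mutual information). A crude
    stand-in for IIT's phi, for illustration only."""
    p1, p2 = {}, {}
    for (a, b), p in joint.items():
        p1[a] = p1.get(a, 0.0) + p
        p2[b] = p2.get(b, 0.0) + p
    return entropy(p1.values()) + entropy(p2.values()) - entropy(joint.values())

# Perfectly correlated parts: maximally integrated (1 bit).
print(toy_phi({(0, 0): 0.5, (1, 1): 0.5}))
# Independent parts: zero integration.
print(toy_phi({(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}))
```

A real monitor in the spirit of the paper would compute such a score over an AI system's internal states rather than a hand-written distribution.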

  • Large Language Models (LLMs) have rapidly become a central topic in AI and cognitive science, due to their unprecedented performance in a vast array of tasks. Indeed, some even see 'sparks of artificial general intelligence' in their apparently boundless faculty for conversation and reasoning. Their sophisticated emergent faculties, which were not initially anticipated by their designers, have ignited an urgent debate about whether and under which circumstances we should attribute consciousness to artificial entities in general and LLMs in particular. The current consensus, rooted in computational functionalism, proposes that consciousness should be ascribed based on a principle of computational equivalence. The objective of this opinion piece is to criticize this current approach and argue in favor of an alternative “behavioral inference principle”, whereby consciousness is attributed if it is useful to explain (and predict) a given set of behavioral observations. We believe that a behavioral inference principle will provide an epistemologically unbiased and operationalizable criterion to assess machine consciousness.

  • This paper proposes a minimalist three-layer model for artificial consciousness, focusing on the emergence of self-awareness. The model comprises a Cognitive Integration Layer, a Pattern Prediction Layer, and an Instinctive Response Layer, interacting with Access-Oriented and Pattern-Integrated Memory systems. Unlike brain-replication approaches, we aim to achieve minimal self-awareness through essential elements only. Self-awareness emerges from layer interactions and dynamic self-modeling, without initial explicit self-programming. We detail each component's structure, function, and implementation strategies, addressing technical feasibility. This research offers new perspectives on consciousness emergence in artificial systems, with potential implications for human consciousness understanding and adaptable AI development. We conclude by discussing ethical considerations and future research directions.
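As a purely hypothetical sketch of how such a layered arrangement might be wired (the layer names come from the abstract; every implementation detail is mine), stimuli can flow from reflexive response through pattern prediction to integration, with the reaction/prediction mismatch standing in, very crudely, for the dynamic self-modeling the paper derives from layer interactions.

```python
class InstinctiveResponseLayer:
    """Reacts immediately, with no modeling: reflexive clipping."""
    def process(self, stimulus):
        return max(-1.0, min(1.0, stimulus))

class PatternPredictionLayer:
    """Predicts the next signal from stored history (a toy stand-in
    for the paper's pattern-integrated memory)."""
    def __init__(self):
        self.history = []
    def process(self, signal):
        self.history.append(signal)
        return sum(self.history) / len(self.history)

class CognitiveIntegrationLayer:
    """Integrates the fast reaction with the slow prediction; their
    mismatch is exposed as a crude self-model error signal."""
    def process(self, reaction, prediction):
        return {"action": 0.5 * (reaction + prediction),
                "self_model_error": abs(reaction - prediction)}

instinct = InstinctiveResponseLayer()
pattern = PatternPredictionLayer()
cognition = CognitiveIntegrationLayer()

for s in [0.2, 2.0, -0.5]:
    reaction = instinct.process(s)
    prediction = pattern.process(reaction)
    out = cognition.process(reaction, prediction)
print(out)
```

Nothing here is self-aware, of course; the sketch only shows the plumbing over which the paper claims minimal self-awareness could emerge.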

  • The development of artificial intelligence and robotic systems has revolutionized multiple aspects of human life. It is often asked whether artificial general intelligence (AGI) can ever be achieved or whether robots can truly achieve human-like qualities. Our view is that the answer is “no,” because these systems fundamentally differ in their relationship to the ultimate goal of biological systems – reproduction. This perspective gives rise to the conjecture that reproduction, or self-replication, is a prerequisite for human-like (or biological-type) cognition, intelligence, and even consciousness. This paper explores the implications of reproduction as a criterion for the viability of artificial systems, emphasizing how alignment with human reproductive imperatives determines their cultural integration and longevity. We argue that systems incapable of self-replication or co-evolving to complement human reproductive roles are likely to remain peripheral curiosities, with limited societal or evolutionary impact.

  • The integration of nanotechnology, neuroscience, and artificial intelligence (AI) is paving the way for a revolutionary transformation in human-AI symbiosis, with profound implications for medicine, cognitive enhancement, and even the nature of consciousness itself. Recent advancements in nano-neural interfaces, nanoscale AI processors, and self-assembling nanomaterials are making direct brain-AI communication an emerging reality, breaking through the traditional barriers of neurotechnology. Nanotechnology has enabled the development of ultra-small, biocompatible neural implants that can seamlessly integrate with human brain tissue, allowing for real-time interaction between biological neurons and AI systems. These innovations hold the potential to restore lost neural function, enhance cognitive capabilities, and expand the brain’s ability to process and store information. By leveraging AI-powered nanobots, researchers aim to create self-regulating neural networks capable of monitoring, repairing, and even augmenting brain activity, leading to breakthroughs in neurological disorder treatment, brain augmentation, and human-machine fusion. Furthermore, the application of quantum nanomaterials in neuromorphic engineering suggests the possibility of hyper-efficient, brain-like AI processors, capable of replicating the adaptive nature of human cognition. This could not only enhance human intelligence but also blur the boundaries between biological and artificial consciousness, raising profound questions about the nature of self-awareness and machine intelligence. This article explores the current state of research in nano-engineered synaptic interfaces, intelligent nanobots, and AI-integrated neural enhancements, analyzing their implications for medical applications, cognitive evolution, and the potential emergence of synthetic consciousness. It also examines the ethical, philosophical, and societal challenges associated with merging human intelligence with AI at the nanoscale, highlighting both the transformative possibilities and the risks of this rapidly advancing field.

  • This paper explores a deep-learning-based robot intelligence model that enables robots to learn and reason about complex tasks. First, a network of environmental factor matrices is constructed to stimulate the model's learning process; the model parameters undergo coarse and fine tuning to optimize the loss function and minimize the loss score, while the model fuses all previously known concepts to represent things it has never experienced, which requires that it generalize extensively. Second, to progressively develop a robot intelligence model with primary consciousness, every robot must undergo at least one to three years of special schooling to train anthropomorphic behaviour patterns, so that it can understand and process complex environmental information and make rational decisions. This work explores and delivers the potential application of deep-learning-based quasi-consciousness training to robot intelligence models.
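The coarse-then-fine tuning this abstract mentions can be sketched minimally, assuming plain gradient descent on a toy one-parameter loss (everything below is illustrative, not the paper's actual training setup): a large learning rate first moves the parameter quickly toward the optimum, then a small one refines it.

```python
def loss(w):
    """Toy quadratic loss with its minimum at w = 3."""
    return (w - 3.0) ** 2

def grad(w):
    """Analytic gradient of the toy loss."""
    return 2.0 * (w - 3.0)

def tune(w, lr, steps):
    """Run `steps` gradient-descent updates at learning rate `lr`."""
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w = 0.0
w = tune(w, lr=0.4, steps=10)   # coarse tuning: large steps toward the optimum
w = tune(w, lr=0.05, steps=50)  # fine tuning: small steps to refine
print(w, loss(w))
```

Real schedules (step decay, warm restarts, per-layer rates) elaborate on this same two-phase idea.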

  • Consciousness, as a fundamental aspect of human experience, has been a subject of profound inquiry across philosophy, culture, and the rapidly evolving field of artificial intelligence (AI). This paper explores the multifaceted nature of consciousness as a nexus where these domains intersect. By examining philosophical theories of consciousness, cultural interpretations of self-awareness, and the implications of AI advancements, the study addresses the challenges of defining consciousness, its diverse cultural interpretations, and the ethical and technical questions surrounding its replication or simulation in machines. The paper argues that consciousness is not only a philosophical puzzle but also a cultural construct and a technological frontier, with significant implications for our understanding of humanity and the future of intelligent systems. Through an interdisciplinary lens, this analysis highlights the need for continued dialogue between philosophy, culture, and AI research to navigate the complexities of consciousness in an increasingly technologically driven world.

Last update from database: 12/31/25, 2:00 AM (UTC)