
Full bibliography (703 resources)

  • This paper proposes that Artificial Intelligence (AI) progresses through several overlapping generations: AI 1.0 (Information AI), AI 2.0 (Agentic AI), AI 3.0 (Physical AI), and now a speculative AI 4.0 (Conscious AI). Each of these AI generations is driven by shifting priorities among algorithms, computing power, and data. AI 1.0 ushered in breakthroughs in pattern recognition and information processing, fueling advances in computer vision, natural language processing, and recommendation systems. AI 2.0 built on these foundations through real-time decision-making in digital environments, leveraging reinforcement learning and adaptive planning for agentic AI applications. AI 3.0 extended intelligence into physical contexts, integrating robotics, autonomous vehicles, and sensor-fused control systems to act in uncertain real-world settings. Building on these developments, AI 4.0 puts forward the bold vision of self-directed AI capable of setting its own goals, orchestrating complex training regimens, and possibly exhibiting elements of machine consciousness. This paper traces the historical foundations of AI across roughly seventy years, mapping how shifting technological bottlenecks (from algorithmic innovation to high-performance computing to specialized data) have spurred each generational leap. It further highlights the ongoing synergies among AI 1.0, 2.0, 3.0, and 4.0, and explores the profound ethical, regulatory, and philosophical challenges that arise when artificial systems approach (or aspire to) human-like autonomy. Ultimately, understanding these evolutions and their interdependencies is pivotal for guiding future research, crafting responsible governance, and ensuring that AI's transformative potential benefits society as a whole.

  • There is a general concern that present developments in artificial intelligence (AI) research will lead to sentient AI systems, and these may pose an existential threat to humanity. But why could sentient AI systems not benefit humanity instead? This paper endeavours to pose this question in a tractable manner. I ask whether a putative AI system will develop an altruistic or a malicious disposition towards our society; in other words, what will be the nature of its agency? Given that AI systems are being developed into formidable problem solvers, we can reasonably expect these systems to preferentially take on conscious aspects of human problem solving. I identify the relevant phenomenal aspects of agency in human problem solving. The functional aspects of conscious agency can be monitored using tools provided by functionalist theories of consciousness. A recent expert report (Butlin et al. 2023) has identified functionalist indicators of agency based on these theories. I show how to use the Integrated Information Theory (IIT) of consciousness to monitor the phenomenal nature of this agency. If we are able to monitor the agency of AI systems as they develop, then we can dissuade them from becoming a menace to society while encouraging them to be an aid.

  • Large Language Models (LLMs) have rapidly become a central topic in AI and cognitive science, due to their unprecedented performance in a vast array of tasks. Indeed, some even see 'sparks of artificial general intelligence' in their apparently boundless faculty for conversation and reasoning. Their sophisticated emergent faculties, which were not initially anticipated by their designers, have ignited an urgent debate about whether and under which circumstances we should attribute consciousness to artificial entities in general and LLMs in particular. The current consensus, rooted in computational functionalism, proposes that consciousness should be ascribed based on a principle of computational equivalence. The objective of this opinion piece is to criticize this current approach and argue in favor of an alternative “behavioral inference principle”, whereby consciousness is attributed if it is useful to explain (and predict) a given set of behavioral observations. We believe that a behavioral inference principle will provide an epistemologically unbiased and operationalizable criterion to assess machine consciousness.

  • This paper proposes a minimalist three-layer model for artificial consciousness, focusing on the emergence of self-awareness. The model comprises a Cognitive Integration Layer, a Pattern Prediction Layer, and an Instinctive Response Layer, interacting with Access-Oriented and Pattern-Integrated Memory systems. Unlike brain-replication approaches, we aim to achieve minimal self-awareness through essential elements only. Self-awareness emerges from layer interactions and dynamic self-modeling, without initial explicit self-programming. We detail each component's structure, function, and implementation strategies, addressing technical feasibility. This research offers new perspectives on consciousness emergence in artificial systems, with potential implications for human consciousness understanding and adaptable AI development. We conclude by discussing ethical considerations and future research directions.

  • The development of artificial intelligence and robotic systems has revolutionized multiple aspects of human life. It is often asked whether artificial general intelligence (AGI) can ever be achieved or whether robots can truly achieve human-like qualities. Our view is that the answer is “no,” because these systems fundamentally differ in their relationship to the ultimate goal of biological systems – reproduction. This perspective gives rise to the conjecture that reproduction, or self-replication, is a prerequisite for human-like (or biological-type) cognition, intelligence, and even consciousness. This paper explores the implications of reproduction as a criterion for the viability of artificial systems, emphasizing how alignment with human reproductive imperatives determines their cultural integration and longevity. We argue that systems incapable of self-replication or co-evolving to complement human reproductive roles are likely to remain peripheral curiosities, with limited societal or evolutionary impact.

  • The integration of nanotechnology, neuroscience, and artificial intelligence (AI) is paving the way for a revolutionary transformation in human-AI symbiosis, with profound implications for medicine, cognitive enhancement, and even the nature of consciousness itself. Recent advancements in nano-neural interfaces, nanoscale AI processors, and self-assembling nanomaterials are making direct brain-AI communication an emerging reality, breaking through the traditional barriers of neurotechnology. Nanotechnology has enabled the development of ultra-small, biocompatible neural implants that can seamlessly integrate with human brain tissue, allowing for real-time interaction between biological neurons and AI systems. These innovations hold the potential to restore lost neural function, enhance cognitive capabilities, and expand the brain’s ability to process and store information. By leveraging AI-powered nanobots, researchers aim to create self-regulating neural networks capable of monitoring, repairing, and even augmenting brain activity, leading to breakthroughs in neurological disorder treatment, brain augmentation, and human-machine fusion. Furthermore, the application of quantum nanomaterials in neuromorphic engineering suggests the possibility of hyper-efficient, brain-like AI processors capable of replicating the adaptive nature of human cognition. This could not only enhance human intelligence but also blur the boundaries between biological and artificial consciousness, raising profound questions about the nature of self-awareness and machine intelligence. This article explores the current state of research in nano-engineered synaptic interfaces, intelligent nanobots, and AI-integrated neural enhancements, analyzing their implications for medical applications, cognitive evolution, and the potential emergence of synthetic consciousness. It also examines the ethical, philosophical, and societal challenges associated with merging human intelligence with AI at the nanoscale, highlighting both the transformative possibilities and the risks of this rapidly advancing field.

  • This paper explores a deep-learning-based robot intelligence model that enables robots to learn and reason about complex tasks. First, a network of environmental-factor matrices is constructed to stimulate the model's learning process; the model parameters undergo coarse and fine tuning to optimize the loss function and minimize the loss score, while the model fuses all previously known concepts together to represent things never experienced before, which requires that the model generalize extensively. Second, in order to progressively develop a robot intelligence model with primary consciousness, every robot must undergo at least one to three years of special schooling to train anthropomorphic behaviour patterns, so that it can understand and process complex environmental information and make rational decisions. This work explores and delivers the potential application of deep-learning-based quasi-consciousness training in the field of robot intelligence models.

  • Consciousness, as a fundamental aspect of human experience, has been a subject of profound inquiry across philosophy, culture, and the rapidly evolving field of artificial intelligence (AI). This paper explores the multifaceted nature of consciousness as a nexus where these domains intersect. By examining philosophical theories of consciousness, cultural interpretations of self-awareness, and the implications of AI advancements, the study addresses the challenges of defining consciousness, its diverse cultural interpretations, and the ethical and technical questions surrounding its replication or simulation in machines. The paper argues that consciousness is not only a philosophical puzzle but also a cultural construct and a technological frontier, with significant implications for our understanding of humanity and the future of intelligent systems. Through an interdisciplinary lens, this analysis highlights the need for continued dialogue between philosophy, culture, and AI research to navigate the complexities of consciousness in an increasingly technologically driven world.

  • This paper examines the nature of artificial consciousness through an interdisciplinary lens, integrating philosophy, epistemology, and computational theories. Beginning with presocratic thought, such as Protagoras’ relativism and Gorgias’ rhetoric, it contextualizes the epistemological implications of artificial intelligence as inherently subjective, attributing grave importance to this subjectivity. The paper draws parallels between Plato’s cave dwellers and AI systems, arguing that both rely on lossy representations of the world, raising questions about their true understanding and reality. Cartesian and Hegelian frameworks are explored to distinguish between weak and strong artificial intelligence, emphasizing embodied cognition and the moral obligations tied to emergent artificial consciousness. The discussion extends to quantum computing, panpsychism, and the potential of artificial minds to reshape our perception of time and existence. By critically analyzing these perspectives, the paper advocates for a nuanced understanding of artificial consciousness and its ethical, epistemological, and societal implications. It invites readers to reconsider humanity’s evolving relationship with intelligence and sentience.

  • One of the most studied attributes of mental activity is intelligence. While non-human consciousness remains a subject of profound debate, non-human intelligence is universally acknowledged by all participants in the discussion as a necessary element of any consciousness, regardless of its nature. Intelligence can potentially be measured as processing or computational power and by problem-solving efficacy. It can serve as a starting point for reconstructing arguments related to Artificial Consciousness. This shared mode of intelligence evaluation, irrespective of the intelligence's origin, offers a promising direction towards the more complex framework of non-human consciousness assessment. However, even when this approach secures an objective basis for intelligence studies, it unveils inescapable challenges. Moreover, when the potential for non-human intelligence exists in both biological and non-biological domains, the future of the relationship between humankind, as the possessor of human intelligence, and other intelligent entities remains uncertain. This paper's central inquiry is focused on comparing purely computational capability to general, universal intelligence and the potential for higher intelligence to exert adverse effects on less intelligent counterparts. Another question concerns the degree of importance of the particular architectural characteristics of intelligent systems and the relationship between computing elements and structural components. It is conceivable that pure intelligence, as a computational faculty, can serve as an effective utilitarian tool. However, it may harbour inherent risks or hazards when integrated as an essential component within consciousness frameworks, such as autopoietic systems. Finally, an attempt has been made to answer the question concerning the future of interactions between human and non-human intelligence.

  • The rise of generative artificial intelligence (GenAI), especially large language models (LLMs), has fueled debates on whether these technologies genuinely emulate human intelligence. Some view LLM architectures as capturing human language mechanisms, while others, from a posthumanist and a transhumanist perspective, herald the obsolescence of humans as the sole conceptualizers of life and nature. This paper challenges, from a practical philosophy of science perspective, such views by arguing that the reasoning behind equating GenAI with human intelligence, or proclaiming the “demise of the human,” is flawed due to conceptual conflation and reductive definitions of humans as performance-driven semiotic systems deprived of agency. In opposing theories that reduce consciousness to computation or information integration, the present proposal posits that consciousness arises from the holistic integration of perception, conceptualization, intentional agency, and self-modeling within biological systems. Grounded in a model of Extended Humanism, I propose that human consciousness and agency are tied to biological embodiment and the agents’ experiential interactions with their environment. This underscores the distinction between pre-trained transformers as “posthuman agents” and humans as purposeful biological agents, which emphasizes the human capacity for biological adjustment and optimization. Consequently, proclamations about human obsolescence as conceptualizers are unfounded, as they fail to appreciate intrinsic consciousness-agency-embodiment connections. One of the main conclusions is that the capacity to integrate information does not amount to phenomenal consciousness as argued, for example, by Integrated Information Theory (IIT).

  • Metacognition, or thinking about thinking, is key to advancing AI toward consciousness. Internal reflection is a key mechanism for preventing AI systems from being sunk by their own complexity, and metacognitive frameworks are formal systems that allow an AI to reason about how it thinks. Drawing on existing literature and insights from cognitive science and psychology, as well as computer science more broadly, we present a synthesizing discussion of this important area in AI. This study examines the practicality of theoretical models of consciousness within state-of-the-art AI frameworks such as Tesla FSD, Boston Dynamics robots, Meta Cicero, Google DeepDream, and AlphaStar. It addresses the ethical and social implications of self-aware AI, serving as an entry point to understanding AI's path toward consciousness and its moral and social ramifications. The chapter closes with a primer for researchers and practitioners explaining how these pathways may enable AI systems to approach conscious states.

  • Large language models (LLMs) and other artificial intelligence systems are trained using extensive DIKWP resources (data, information, knowledge, wisdom, purpose). These introduce uncertainties when applied to individual users in a collective semantic space. Traditional methods often lead to introducing new concepts rather than a proper understanding based on the semantic space. When dealing with complex problems or insufficient context, the limitations in conceptual cognition become even more evident. To address this, we take pediatric consultation as a scenario, using case simulations to specifically discuss unidirectional communication impairments between doctors and infant patients and the bidirectional communication biases between doctors and infant parents. We propose a human–machine interaction model based on DIKWP artificial consciousness. For the unidirectional communication impairment, we use the example of an infant’s perspective in recognizing and distinguishing objects, simulating the cognitive process of the brain from non-existence to existence, transitioning from cognitive space to semantic space, and generating corresponding semantics for DIKWP, abstracting concepts, and labels. For the bidirectional communication bias, we use the interaction between infant parents and doctors as an example, mapping the interaction process to the DIKWP transformation space and addressing the DIKWP 3-No problem (incompleteness, inconsistency, and imprecision) for both parties. We employ a purpose-driven DIKWP transformation model to solve part of the 3-No problem. Finally, we comprehensively validate the proposed method (DIKWP-AC). We first analyze, evaluate, and compare the DIKWP transformation calculations and processing capabilities, and then compare it with seven mainstream large models. The results show that DIKWP-AC performs well. 
Constructing a novel cognitive model reduces the information gap in human–machine interactions, promotes mutual understanding and communication, and provides a new pathway for achieving more efficient and accurate artificial consciousness interactions.

  • It is commonly assumed that a useful theory of consciousness (ToC) will, among other things, explain why consciousness is associated with brains. However, the findings of evolutionary biology, developmental bioelectricity, and synthetic bioengineering are revealing the ancient pre-neural roots of many mechanisms and algorithms occurring in brains – the implication of which is that minds may have preceded brains. Most of the work in the emerging field of diverse intelligence emphasizes externally observable problem-solving competencies in unconventional media, such as cells, tissues, and life-technology chimeras. Here, we inquire about the implications of these developments for theories that make a claim about what is necessary and/or sufficient for consciousness. Specifically, we analyze popular current ToCs to ask: what features of the theory specifically pick out brains as a privileged substrate of inner perspective, or do the features emphasized by the theory occur elsewhere? We find that the operations and functional principles described or predicted by most ToCs are remarkably similar, that these similarities are obscured by reference to particular neural substrates, and that the focus on brains is more driven by convention and limitations of imagination than by any specific content of existing ToCs. Encouragingly, several contemporary theorists have made explicit efforts to apply their theories to synthetic systems in light of the recent wave of technological developments in artificial intelligence (AI) and organoid bioengineering. We suggest that the science of consciousness should be significantly open to minds in unconventional embodiments.

  • Recent research suggests that it may be possible to build conscious AI systems now or in the near future. Conscious AI systems would arguably deserve moral consideration, and it may be the case that large numbers of conscious systems could be created and caused to suffer. Furthermore, AI systems or AI-generated characters may increasingly give the impression of being conscious, leading to debate about their moral status. Organisations involved in AI research must establish principles and policies to guide research and deployment choices and public communication concerning consciousness. Even if an organisation chooses not to study AI consciousness as such, it will still need policies in place, as those developing advanced AI systems risk inadvertently creating conscious entities. Responsible research and deployment practices are essential to address this possibility. We propose five principles for responsible research and argue that research organisations should make voluntary, public commitments to principles on these lines. Our principles concern research objectives and procedures, knowledge sharing and public communications.

  • In this age of artificial intelligence and digital technology, we are becoming familiar with the concept of machine perception and physical robots that are deemed to become artificially conscious agents of communication and digital welfare. These technological miracles, which we should rather call scientific marvels, are the products of rigorous scientific endeavors which have changed the paradigm and landscape of social progress. In this paper, we discuss the aspects related to such progress towards conceiving conscious, smart agents as intelligent and sentient machines of the future that would possess the power to feel and to evoke emotional responses like human beings. The model of artificial consciousness which we propose is built on an elliptic-curve computational network based upon recursive iteration, characterised by a trapdoor mechanism that consolidates all the evolutionary steps into a single framework. The trapdoor mechanism signifies one-way consolidation of evolutionary steps that results in the emergence of machine consciousness: that is, there is no way to regress into the previous, lower evolutionary states. It is from this continuity in the evolutionary consolidation of architectural complexity that the emergence of consciousness might become possible in machines. We outline such a model, which characterises the irreversibility of evolving complexity that generates higher-order awareness resembling human consciousness. It is not simply a design process that we attribute; rather, we claim the emergence is that of a self-evolving system, not reliant on language models, since the conscious machine would be able to learn all by itself. This makes our model inherently different from others. The possibility and implications of such an achievable feat are discussed, and a simple model is constructed to reinforce and illuminate the arguments proposed for and against such evolution in machine intelligence. The philosophical basis for understanding the evolution of machine intelligence, as well as conscious self-awareness through the embodiment of consciousness in robots, is outlined, providing rich information for further research and progress in designing machines that are inherently different from most others.

  • Almost 70 years ago, Alan Turing predicted that within half a century, computers would possess processing capabilities sufficient to fool interrogators into believing they were communicating with a human. While his prediction materialized slightly later than anticipated, he also foresaw a critical limitation: machines might never become the subject of their own thoughts, suggesting that computers may never achieve self-awareness. Recent advancements in AI, however, have reignited interest in the concept of consciousness, particularly in discussions about the potential existential risks posed by AI. At the heart of this debate lies the question of whether computers can achieve consciousness or develop a sense of agency—and the profound implications if they do. Whether computers can currently be considered conscious or aware, even to a limited extent, depends largely on the framework used to define awareness and consciousness. For instance, Integrated Information Theory (IIT) equates consciousness with the capacity for information processing, while the Higher-Order Thought (HOT) theory integrates elements of self-awareness and intentionality into its definition. This manuscript reviews and critically compares major theories of consciousness, with a particular emphasis on awareness, attention, and the sense of self. By delineating the distinctions between artificial and natural intelligence, it explores whether advancements in AI technologies—such as machine learning and neural networks—could enable AI to achieve some degree of consciousness or develop a sense of agency.

  • This chapter explores the multifaceted representation of artificial intelligence consciousness in contemporary science fiction movies and these films' attitudes towards the moral, epistemological, and social consequences of a cogitative AI. The chapter, “The Cognitive Screen: Psychological Dimensions of AI Sentience in Modern Science Fiction Cinema,” is therefore a critical analysis of the complex themes of AI consciousness in motion pictures, developed through the analysis of four movies: WALL-E (2008), I, Robot (2004), Her (2013), and Ex Machina (2015). The research is divided into sections stating the objective and the methodology used, followed by the main portions, which carry out the case studies and draw conclusions and recommendations.

Last update from database: 3/29/26, 1:00 AM (UTC)