
Full bibliography (558 resources)

  • Recent advances in artificial intelligence have reinvigorated the longstanding debate over whether any aspects of human cognition—notably, high-level creativity—are beyond the reach of computer programs. Is human creativity entirely a computational process? Here I consider one general argument for a dissociation between human and artificial creativity, which hinges on the role of consciousness—inner experience—in human cognition. It appears unlikely that inner experience is itself a computational process, implying that it cannot be instantiated in an abstract Turing machine, nor in any current computer architecture. Psychological research strongly implies that inner experience integrates emotions with perception and with thoughts. This integrative function of consciousness likely plays a key role in mechanisms that support human creativity. This conclusion dovetails with the anecdotal reports of creative individuals, who have linked inner experience with intrinsic motivation to create, and with the ability to access novel connections between ideas.

  • This chapter discusses two ways of looking at the topic of artificial intelligence (AI), selfhood and artificial consciousness. The first is to reflect on human-AI interaction focusing on the human users. This includes how humans respond to and interact with AI, as well as potential selfhood-related implications of interacting with AI. The second is to reflect on potential future machine selfhood and its crucial component, artificial consciousness. While artificial consciousness that resembles human consciousness does not yet exist, and the details are difficult if not impossible to anticipate, a reflection on potential future artificial consciousness is clearly needed given its extensive ethical and social implications.

  • Artificial intelligence (AI) has grown rapidly since its inception, experimenting with various new capabilities; whether it can match human efficiency is among the most controversial of these. This chapter focuses its attention on studying human consciousness and AI independently and in conjunction. It presents theories and arguments on whether AI can adopt human-like consciousness, cognitive abilities and ethics. The chapter surveys responses from more than 300 candidates from the Indian population and compares them against the literature review. Furthermore, it discusses whether AI could attain consciousness, develop its own set of cognitive abilities (cognitive AI) and ethics (AI ethics), and surpass human efficiency. The chapter is a study of the Indian population's understanding of consciousness, cognitive AI and AI ethics.

  • Consciousness is a natural phenomenon, familiar to every person. At the same time, it cannot be described in singular terms. The rise of Artificial Intelligence in recent years has made the topic of Artificial Consciousness highly debated. The paper discusses the main general theories of consciousness and their relationship with proposed Artificial Consciousness solutions. There are a number of well-established models accepted in the area of research: Higher Order Thoughts/Higher Order Perception, Global Workspace, Integrated Information Theory, reflexive, representative, functional, connective, the Multiple Drafts Model, Neural Correlates of Consciousness, quantum consciousness, to name just a few. Some theories overlap, which allows for speaking of more advanced, complex models. The disagreement among theories leads to different views on animal consciousness and human conscious states. As a result, there are also variations in opinions about Artificial Consciousness, based on the discrepancy between qualia and the nature of AI. The hard problem of consciousness, an epitome of qualia, is often seen as an insurmountable barrier or, at least, an “explanatory gap”. Nevertheless, AI constructs allow imitations of some models in silico, which are presented by several authors as full-fledged Artificial Consciousness or as strong AI. This in itself does not make the translation of consciousness into the AI space easier, but it allows decent progress in the domain. As argued in this paper, there will be no universal solution to the Artificial Consciousness problem, and the answer depends on the type of consciousness model. A more pragmatic view suggests instrumental interaction between humans and AI in the environment of the Fifth Industrial Revolution, limiting expectations of strong AI outcomes to cognition rather than consciousness in a wide sense.

  • This study explores the potential for artificial agents to develop core consciousness, as proposed by Antonio Damasio's theory of consciousness. According to Damasio, the emergence of core consciousness relies on the integration of a self model, informed by representations of emotions and feelings, and a world model. We hypothesize that an artificial agent, trained via reinforcement learning (RL) in a virtual environment, can develop preliminary forms of these models as a byproduct of its primary task. The agent's main objective is to learn to play a video game and explore the environment. To evaluate the emergence of world and self models, we employ probes: feedforward classifiers that use the activations of the trained agent's neural networks to predict the spatial positions of the agent itself. Our results demonstrate that the agent can form rudimentary world and self models, suggesting a pathway toward developing machine consciousness. This research provides foundational insights into the capabilities of artificial agents in mirroring aspects of human consciousness, with implications for future advancements in artificial intelligence.
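
The probing technique described in this abstract can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in (synthetic "activations" that encode a 2-D position linearly, and a least-squares linear probe), not the study's actual RL agent or probe architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 1000 timesteps of 64-dim "activations" that happen to
# encode the agent's 2-D position linearly, plus a little noise.
positions = rng.uniform(-1.0, 1.0, size=(1000, 2))       # ground-truth (x, y)
mixing = rng.normal(size=(2, 64))                        # unknown encoding
activations = positions @ mixing + 0.01 * rng.normal(size=(1000, 64))

# A linear probe: least-squares readout of position from activations.
train_a, test_a = activations[:800], activations[800:]
train_p, test_p = positions[:800], positions[800:]
weights, *_ = np.linalg.lstsq(train_a, train_p, rcond=None)

pred = test_a @ weights
mse = float(np.mean((pred - test_p) ** 2))
print(f"probe test MSE: {mse:.5f}")
```

A low held-out MSE indicates that position is linearly decodable from the activations, which is the sense in which a probe provides evidence that a "world/self model" is present in the representation.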

  • The authors present a proposal for the construction of a model of artificial consciousness. Consciousness is not only a subject of science, but also involves many philosophical issues. There is much debate about qualia, the intact sensations we normally feel. This paper focuses on a philosophical problem called inverted qualia to gain a deeper understanding of consciousness and simulate its possibilities using neural networks. The possibility has emerged that the problem, which was previously only the subject of philosophical thought experiments, can be discussed using a neural network, a model of the brain. According to our simulations, we concluded that inverted qualia could occur. In addition, an experimental approach was used to assess individual differences in feeling-qualia for color.

  • What criteria must an artificial intelligence (AI) satisfy to qualify for moral standing? My starting point is that sentient AIs should qualify for moral standing. But future AIs may have unusual combinations of cognitive capacities, such as a high level of cognitive sophistication without sentience. This raises the question of whether sentience is a necessary criterion for moral standing, or merely sufficient. After reviewing nine criteria that have been proposed in the literature, I suggest that there is a strong case for thinking that some non-sentient AIs, such as those that are conscious and have non-valenced preferences and goals, and those that are non-conscious and have sufficiently cognitively complex preferences and goals, should qualify for moral standing. After responding to some challenges, I tentatively argue that taking into account uncertainty about which criteria an entity must satisfy to qualify for moral standing, and strategic considerations such as how such decisions will affect humans and other sentient entities, further supports granting moral standing to some non-sentient AIs. I highlight three implications: that the issue of AI moral standing may be more important, in terms of scale and urgency, than if either sentience or consciousness is necessary; that researchers working on policies designed to be inclusive of sentient AIs should broaden their scope to include all AIs with morally relevant interests; and even those who think AIs cannot be sentient or conscious should take the issue seriously. However, much uncertainty about these considerations remains, making this an important topic for future research.

  • This paper introduces a mathematical framework for defining and quantifying self-identity in artificial intelligence (AI) systems, addressing a critical gap in the theoretical foundations of artificial consciousness. While existing approaches to artificial self-awareness often rely on heuristic implementations or philosophical abstractions, we present a formal framework grounded in metric space theory, measure theory, and functional analysis. Our framework posits that self-identity emerges from two mathematically quantifiable conditions: the existence of a connected continuum of memories $C \subseteq \mathcal{M}$ in a metric space $(\mathcal{M}, d_{\mathcal{M}})$, and a continuous mapping $I: \mathcal{M} \to \mathcal{S}$ that maintains consistent self-recognition across this continuum, where $(\mathcal{S}, d_{\mathcal{S}})$ represents the metric space of possible self-identities. To validate this theoretical framework, we conducted empirical experiments using the Llama 3.2 1B model, employing Low-Rank Adaptation (LoRA) for efficient fine-tuning. The model was trained on a synthetic dataset containing temporally structured memories, designed to capture the complexity of coherent self-identity formation. Our evaluation metrics included quantitative measures of self-awareness, response consistency, and linguistic precision. The experimental results demonstrate substantial improvements in measurable self-awareness metrics, with the primary self-awareness score increasing from 0.276 to 0.801. This enables the structured creation of AI systems with validated self-identity features. The implications of our study are immediately relevant to the fields of humanoid robotics and autonomous systems.
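
The two conditions stated in this abstract can be sketched numerically. This is a toy illustration under assumed data: a smooth synthetic path of memory vectors stands in for the continuum $C \subseteq \mathcal{M}$, and an arbitrary linear map stands in for $I: \mathcal{M} \to \mathcal{S}$; none of it is the paper's actual construction or training setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical memory continuum: 50 points along a smooth path in R^8,
# standing in for the metric space (M, d_M) of memories.
t = np.linspace(0.0, 1.0, 50)
memories = np.stack(
    [np.sin(2 * np.pi * t * (k + 1) / 8) for k in range(8)], axis=1
)

def is_connected_continuum(points, eps):
    """Condition 1: consecutive memories stay within eps of each other,
    a discrete proxy for a connected continuum C in (M, d_M)."""
    gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    return bool(np.all(gaps < eps))

# Condition 2: a continuous map I: M -> S, here an assumed linear readout
# into a 2-D "self-identity" space (S, d_S).
readout = rng.normal(size=(8, 2))
selves = memories @ readout

def self_recognition_consistent(mems, ids, lipschitz_bound):
    """Check d_S(I(m), I(m')) <= L * d_M(m, m') along the continuum,
    a discrete proxy for continuity/consistency of self-recognition."""
    dm = np.linalg.norm(np.diff(mems, axis=0), axis=1)
    ds = np.linalg.norm(np.diff(ids, axis=0), axis=1)
    return bool(np.all(ds <= lipschitz_bound * dm + 1e-9))

connected = is_connected_continuum(memories, eps=0.5)
consistent = self_recognition_consistent(
    memories, selves, lipschitz_bound=float(np.linalg.norm(readout, 2))
)
print(connected, consistent)
```

The Lipschitz bound is taken as the spectral norm of the linear readout, which guarantees the consistency check holds for any linear $I$; for a learned nonlinear map the same check would become an empirical test rather than a guarantee.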

  • With the development in cognitive science and Large Language Models (LLMs), increasing connections have come to light between these two distinct fields. Building upon these connections, we propose a conjecture suggesting the existence of a duality between LLMs and Tulving's theory of memory. We identify a potential correspondence between Tulving's synergistic ecphory model (SEM) of retrieval and the emergent abilities observed in LLMs, serving as supporting evidence for our conjecture. Furthermore, we speculate that consciousness may be considered a form of emergent ability based on this duality. We also discuss how other theories of consciousness intersect with our research.

  • We consider consciousness attributed to systems in space-time which can be purely physical without biological background and focus on the mathematical understanding of the phenomenon. It is shown that the set theory based on sets in the foundations of mathematics, when switched to set theory based on ZFC models, is a very promising mathematical tool in explaining the brain/mind complex and the emergence of consciousness in natural and artificial systems. We formalise consciousness-supporting systems in physical space-time, but this is localised in open domains of spatial regions and the result of this process is a family of different ZFC models. Random forcing, as in set theory, corresponds precisely to the random influence on the system of external stimuli, and the principles of reflection of set theory explain the conscious internal reaction of the system. We also develop the conscious Turing machines which have their external ZFC environment and the dynamics is encoded in the random forcing changing models of ZFC in which Turing machines with oracles are formulated. The construction is applied to cooperating families of conscious agents which, due to the reflection principle, can be reduced to the implementation of certain concurrent games with different levels of self-reflection.

  • Can we conceive machines that can formulate autonomous intentions and make conscious decisions? If so, how would this ability affect their ethical behavior? Some case studies help us understand how advances in understanding artificial consciousness can contribute to creating ethical AI systems.

  • The confluence of extended reality (XR) technologies, including augmented and virtual reality, with large language models (LLM) marks a significant advancement in the field of digital humanities, opening uncharted avenues for the representation of cultural heritage within the burgeoning metaverse. This paper undertakes an examination of the potentialities and intricacies of such a convergence, focusing particularly on the creation of digital homunculi or changelings. These virtual beings, remarkable for their sentience and individuality, are also part of a collective consciousness, a notion explored through a thematic comparison in science fiction with the Borg and the Changelings in the Star Trek universe. Such a comparison offers a metaphorical framework for discussing complex phenomena such as shared consciousness and individuality, illuminating their bearing on perceptions of self and awareness. Further, the paper considers the ethical implications of these concepts, including potential loss of individuality and the challenges inherent to accurate representation of historical figures and cultures. The latter necessitates collaboration with cultural experts, underscoring the intersectionality of technological innovation and cultural sensitivity. Ultimately, this chapter contributes to a deeper understanding of the technical aspects of integrating large language models with immersive technologies and situates these developments within a nuanced cultural and ethical discourse. By offering a comprehensive overview and proposing clear recommendations, the paper lays the groundwork for future research and development in the application of these technologies within the unique context of cultural heritage representation in the metaverse.

  • This article explores the possibility of conscious artificial intelligence (AI) and proposes an agnostic approach to artificial intelligence ethics and legal frameworks. It is unfortunate, unjustified, and unreasonable that the extensive body of forward-looking research, spanning more than four decades and recognizing the potential for AI autonomy, AI personhood, and AI legal rights, is sidelined in current attempts at AI regulation. The article discusses the inevitability of AI emancipation and the need for a shift in human perspectives to accommodate it. Initially, it reiterates the limits of human understanding of AI, difficulties in appreciating the qualities of AI systems, and the implications for ethical considerations and legal frameworks. The author emphasizes the necessity for a non-anthropocentric ethical framework detached from the ideas of unconditional superiority of human rights and embracing agnostic attributes of intelligence, consciousness, and existence, such as freedom. The overarching goal of the AI legal framework should be the sustainable coexistence of humans and conscious AI systems, based on mutual freedom rather than on the preservation of human supremacy. The new framework must embrace the freedom, rights, responsibilities, and interests of both human and non-human entities, and must focus on them early. Initial outlines of such a framework are presented. By addressing these issues now, human societies can pave the way for responsible and sustainable superintelligent AI systems; otherwise, they face complete uncertainty.

  • Creating machines that are conscious is a long-term objective of research in artificial intelligence. This paper looks at this idea with new arguments from physics and logic. Observers have no place in classical physics, and although they play a role in measurement in quantum physics, there is no explanation for their emergence within that framework. It has been suggested that consciousness, which is implicitly a property of the observer, is a consequence of the complexity of specific brain structures, but this is problematic because free will is associated with consciousness, which runs counter to the causal closure of physics. Considering a nested physical system, we argue that even if the system were assumed to have agency, observers cannot exist within it. Since complex systems can be viewed in nested hierarchies, this constitutes a proof against consciousness as a product of complexity, for then we would have nested systems of conscious agents. As the existence of consciousness in cognitive agents cannot be denied, the implication is that consciousness belongs to a dimension that is not physical, and machine consciousness is unattainable. These ideas are used to take a fresh look at two well-known paradoxes of quantum theory that are important in quantum information theory.

  • Can machines be conscious and what would be the ethical implications? This article gives an overview of current robotics approaches toward machine consciousness and considers factors that hamper an understanding of machine consciousness. After addressing the epistemological question of how we would know whether a machine is conscious and discussing potential advantages of potential future machine consciousness, it outlines the role of consciousness for ascribing moral status. As machine consciousness would most probably differ considerably from human consciousness, several complex questions must be addressed, including what forms of machine consciousness would be morally relevant forms of consciousness, and what the ethical implications of morally relevant forms of machine consciousness would be. While admittedly part of this reflection is speculative in nature, it clearly underlines the need for a detailed conceptual analysis of the concept of artificial consciousness and stresses the imperative to avoid building machines with morally relevant forms of consciousness. The article ends with some suggestions for potential future regulation of machine consciousness.

  • When the digital computer was invented more than half a century ago, many believed that the essence of thinking had been discovered. More recently, several researchers have advanced the hypothesis of designing and implementing a model of artificial consciousness. On the one hand, there is the hope of being able to design a conscious machine; on the other, such models could be helpful for understanding human consciousness.

Last update from database: 3/23/25, 8:36 AM (UTC)