Full bibliography (675 resources)

  • Large Language Models (LLMs) still face challenges in tasks requiring understanding implicit instructions and applying common-sense knowledge. In such scenarios, LLMs may require multiple attempts to achieve human-level performance, potentially leading to inaccurate responses or inferences in practical environments, affecting their long-term consistency and behavior. This paper introduces the Internal Time-Consciousness Machine (ITCM), a computational consciousness structure to simulate the process of human consciousness. We further propose the ITCM-based Agent (ITCMA), which supports action generation and reasoning in open-world settings, and can independently complete tasks. ITCMA enhances LLMs' ability to understand implicit instructions and apply common-sense knowledge by considering agents' interaction and reasoning with the environment. Evaluations in the Alfworld environment show that trained ITCMA outperforms the state-of-the-art (SOTA) by 9% on the seen set. Even untrained ITCMA achieves a 96% task completion rate on the seen set, 5% higher than SOTA, indicating its superiority over traditional intelligent agents in utility and generalization. In real-world tasks with quadruped robots, the untrained ITCMA achieves an 85% task completion rate, which is close to its performance in the unseen set, demonstrating its comparable utility and universality in real-world settings.
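
As a rough illustration of the perceive-plan-act loop that such an LLM-driven agent relies on, the minimal sketch below runs a toy text environment. The names (Environment, LLMPlanner, run_episode) and the logic are illustrative assumptions only; they do not reflect ITCMA's actual architecture or the ALFWorld API.

```python
# Minimal sketch of an agent-environment interaction loop (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Environment:
    """Toy stand-in for a text environment that returns observations for actions."""
    steps: int = 0

    def reset(self) -> str:
        self.steps = 0
        return "You are in a kitchen. Task: put a clean mug on the shelf."

    def step(self, action: str) -> tuple[str, bool]:
        self.steps += 1
        done = self.steps >= 3            # end the toy episode after a few steps
        return f"Result of '{action}'.", done

@dataclass
class LLMPlanner:
    """Placeholder planner; a real agent would query an LLM here."""
    memory: list[str] = field(default_factory=list)

    def next_action(self, observation: str) -> str:
        self.memory.append(observation)   # accumulate interaction history
        return f"explore (step {len(self.memory)})"

def run_episode(env: Environment, planner: LLMPlanner) -> int:
    obs, done = env.reset(), False
    while not done:
        action = planner.next_action(obs)  # reason over observation and memory
        obs, done = env.step(action)       # act and observe the outcome
    return env.steps

if __name__ == "__main__":
    print("episode length:", run_episode(Environment(), LLMPlanner()))
```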

  • AI systems that do what they say and that are reliable, trustworthy, and explainable are what people want. We propose a DIKWP (Data, Information, Knowledge, Wisdom, and Purpose) artificial consciousness white-box evaluation standard and method for AI systems. We categorize AI system output resources into deterministic and uncertain resources, the latter including incomplete, inconsistent, and imprecise data. We then map these resources to the DIKWP framework for testing. For deterministic resources, we evaluate their transformation into different resource types based on purpose. For uncertain resources, we evaluate their potential conversion into other deterministic resources through purpose-driven transformation. We construct an AI diagnostic scenario using a 2S-dimensional (SxS) framework to evaluate both deterministic and uncertain DIKWP resources. The experimental results show that the DIKWP artificial consciousness white-box evaluation standard and method effectively assess the cognitive capabilities of AI systems and demonstrate a certain level of interpretability, thus contributing to AI system improvement and evaluation.
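
As a loose illustration of the resource categorization this entry describes, the sketch below tags output resources as deterministic or uncertain and maps them onto DIKWP types for a toy purpose-driven evaluation. All names and the mapping rule are assumptions made for illustration, not the authors' actual standard.

```python
# Illustrative-only sketch of tagging resources and evaluating them by purpose.
from dataclasses import dataclass
from enum import Enum, auto

class DIKWP(Enum):
    DATA = auto(); INFORMATION = auto(); KNOWLEDGE = auto(); WISDOM = auto(); PURPOSE = auto()

class Certainty(Enum):
    DETERMINISTIC = auto(); INCOMPLETE = auto(); INCONSISTENT = auto(); IMPRECISE = auto()

@dataclass
class Resource:
    content: str
    dikwp_type: DIKWP
    certainty: Certainty

def evaluate(resource: Resource, purpose: str) -> str:
    """Toy purpose-driven check: deterministic resources are assessed on how they
    transform into other DIKWP types; uncertain ones on whether the stated purpose
    can convert them into a deterministic resource first."""
    if resource.certainty is Certainty.DETERMINISTIC:
        return f"transformable toward '{purpose}' from {resource.dikwp_type.name}"
    return f"needs purpose-driven completion ('{purpose}') before use as {resource.dikwp_type.name}"

if __name__ == "__main__":
    r = Resource("patient temperature reading", DIKWP.DATA, Certainty.IMPRECISE)
    print(evaluate(r, "diagnosis"))
```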

  • Artificial intelligence systems are associated with inherent risks, such as uncontrollability and lack of interpretability. To address these risks, we need to develop artificial intelligence systems that are interpretable, trustworthy, responsible, and consistent in thinking and behavior, which we refer to as artificial consciousness (AC) systems. Consequently, we propose and define the concepts and implementation of a computer architecture, chips, a runtime environment, and the DIKWP language. Furthermore, we have overcome the limitations of traditional programming languages, computer architectures, and software-hardware implementations when creating AC systems. Our proposed software and hardware integration platform will make it easier to build and operate AC software systems based on DIKWP theories.

  • Critics of Artificial Intelligence posit that artificial agents cannot achieve consciousness even in principle, because they lack certain necessary conditions for consciousness present in biological agents. Here we highlight arguments from a neuroscientific and neuromorphic engineering perspective as to why such a strict denial of consciousness in artificial agents is not compelling. We argue that the differences between biological and artificial brains are not fundamental and are vanishing with progress in neuromorphic architecture designs mimicking the human blueprint. To characterise this blueprint, we propose the conductor model of consciousness (CMoC) that builds on neuronal implementations of an external and internal world model while gating and labelling information flows. An extended Turing test (eTT) lists criteria on how to separate the information flow for learning an internal world model, both for biological and artificial agents. While the classic Turing test only assesses external observables (i.e., behaviour), the eTT also evaluates internal variables of artificial brains and tests for the presence of neuronal circuitries necessary to act on representations of the self, the internal and the external world, and potentially, some neural correlates of consciousness. Finally, we address ethical issues for the design of such artificial agents, formulated as an alignment dilemma: if artificial agents share aspects of consciousness, while they (partially) overtake human intelligence, how can humans justify their own rights against growing claims of their artificial counterpart? We suggest a tentative human-AI deal according to which artificial agents are designed not to suffer negative affective states but in exchange are not granted equal rights to humans.

  • Consciousness is the ability to have intentionality, a process that operates at various temporal scales. To qualify as conscious, an artificial device must express functionality capable of solving the Intrinsicality problem, where experienceable form or syntax gives rise to understanding 'meaning' as a noncontextual dynamic prior to language. This is suggestive of replacing the Hard Problem of consciousness in order to build conscious artificial intelligence (AI). Developing model emulations and exploring the fundamental mechanisms of how machines understand meaning is central to the development of minimally conscious AI. It has been shown by Alemdar and colleagues [New insights into holonomic brain theory: implications for active consciousness. Journal of Multiscale Neuroscience 2 (2023), 159-168] that a framework for advancing artificial systems through understanding uncertainty derived from negentropic action to create intentional systems entails quantum-thermal fluctuations through informational channels instead of recognizing (cf. introspection) sensory cues through perceptual channels. Improving communication in conscious AI requires both software and hardware implementation. The software can be developed through the brain-machine interface of multiscale temporal processing, while hardware implementation can be done by creating energy flow using dipole-like hydrogen ion (proton) interactions in an artificial 'wetwire' protonic filament. Machine understanding can be achieved through memristors implemented in the protonic 'wetwire' filament embedded in a real-world device. This report presents a blueprint for the process, but it does not cover the algorithms or engineering aspects, which need to be conceptualized before minimally conscious AI can become operational.

  • Interactions with large language models (LLMs) have led to the suggestion that these models may soon be conscious. From the perspective of neuroscience, this position is difficult to defend. First, the inputs to LLMs lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Second, the architectures of present-day artificial intelligence algorithms are missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Finally, the evolutionary and developmental trajectories that led to the emergence of living conscious organisms arguably have no parallels in artificial systems as envisioned today. The existence of living organisms depends on their actions, and their survival is intricately linked to multi-level cellular, inter-cellular, and organismal processes culminating in agency and consciousness.

  • The transformative achievements of deep learning have led several scholars to raise the question of whether artificial intelligence (AI) can reach and then surpass the level of human thought. Here, after addressing methodological problems regarding the possible answer to this question, it is argued that the definition of intelligence proposed by proponents of AI, “the ability to accomplish complex goals,” is appropriate for machines but does not capture the essence of human thought. After discussing the differences between machines and the brain regarding understanding, as well as the importance of subjective experiences, it is emphasized that most proponents of the eventual superiority of AI ignore the importance of the body proper for the brain, the lateralization of the brain, and the vital role of glial cells. By appealing to Gödel’s incompleteness theorem and to Turing’s analogous result regarding computations, it is noted that consciousness is much richer than both mathematics and computations. Finally, and perhaps most importantly, it is stressed that artificial algorithms attempt to mimic only the conscious function of parts of the cerebral cortex, ignoring the fact that not only is every conscious experience preceded by an unconscious process, but the passage from the unconscious to consciousness is also accompanied by a loss of information.

  • The rise of Artificial Intelligence (AI) has produced prophets and prophecies announcing that the age of artificial consciousness is near. Not only does the mere idea that any machine could ever possess the full potential of human consciousness suggest that AI could replace the role of God in the future, it also puts into question the fundamental human right to freedom and dignity. Yet, in the light of all we currently know about brain evolution and the adaptive neural dynamics underlying human consciousness, the idea of an artificial consciousness appears misconceived. This article highlights some of the major reasons why the prophecy of a successful emulation of human consciousness by AI ignores most of the data about adaptive processes of learning and memory as the developmental origins of consciousness. The analysis provided leads to the conclusion that human consciousness is epigenetically determined as a unique property of the mind, shaped by experience, capable of representing real and non-real world states and creatively projecting these representations into the future. The development of the adaptive brain circuitry that enables this expression of consciousness is highly context-dependent, shaped by multiple self-organizing functional interactions at different levels of integration displaying a from-local-to-global functional organization. Human consciousness is subject to changes in time that are essentially unpredictable. If cracking the computational code to human consciousness were possible, the resulting algorithms would have to be able to generate temporal activity patterns simulating long-distance signal reverberation in the brain, and the de-correlation of spatial signal contents from their temporal signatures in the brain. In the light of scientific evidence for complex interactions between implicit (non-conscious) and explicit (conscious) representations in learning, memory, and the construction of conscious representations, such a code would have to be capable of making all implicit processing explicit. Algorithms would have to be capable of a progressive and less and less arbitrary selection of temporal activity patterns in a continuously developing neural network structure that is functionally identical to that of the human brain, from synapses to higher cognitive functional integration. The code would have to possess the self-organizing capacities of the brain that generate the temporal signatures of a conscious experience. The consolidation or extinction of these temporal brain signatures is driven by external event probabilities according to the principles of Hebbian learning. Human consciousness is constantly fed by such learning, capable of generating stable representations despite an incommensurable amount of variability in input data, across time and across individuals, for a life-long integration of experience data. Artificial consciousness would require probabilistic adaptive computations capable of emulating all the dynamics of individual human learning, memory and experience. No AI is likely to ever have such potential.

  • We consider consciousness attributed to systems in space-time that can be purely physical, without biological background, and focus on the mathematical understanding of the phenomenon. It is shown that set theory, when switched from its usual foundational formulation based on sets to a formulation based on ZFC models, is a very promising mathematical tool for explaining the brain/mind complex and the emergence of consciousness in natural and artificial systems. We formalise consciousness-supporting systems in physical space-time, localised in open domains of spatial regions; the result of this process is a family of different ZFC models. Random forcing, as in set theory, corresponds precisely to the random influence of external stimuli on the system, and the reflection principles of set theory explain the conscious internal reaction of the system. We also develop conscious Turing machines, which have an external ZFC environment; their dynamics is encoded in the random forcing that changes the ZFC models in which Turing machines with oracles are formulated. The construction is applied to cooperating families of conscious agents which, due to the reflection principle, can be reduced to the implementation of certain concurrent games with different levels of self-reflection.
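
For reference, the "principles of reflection" mentioned here presumably refer to the standard Lévy reflection schema of ZFC: any given formula true in the universe of sets is already true in some rank-initial segment V_alpha. For a single formula with parameters it can be stated as:

```latex
% Lévy reflection schema of ZFC (one instance per formula \varphi):
\forall \beta \;\exists \alpha > \beta \;\;
\forall a_1,\dots,a_n \in V_\alpha \;
\bigl( \varphi(a_1,\dots,a_n) \;\leftrightarrow\; \varphi^{V_\alpha}(a_1,\dots,a_n) \bigr)
```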

  • Can we conceive machines that can formulate autonomous intentions and make conscious decisions? If so, how would this ability affect their ethical behavior? Some case studies help us understand how advances in understanding artificial consciousness can contribute to creating ethical AI systems.

  • If a machine attains consciousness, how could we find out? In this paper, I make three related claims regarding positive tests of machine consciousness. All three claims center on the idea that an AI can be constructed “ad hoc”, that is, with the purpose of satisfying a particular test of consciousness while clearly not being conscious. First, a proposed test of machine consciousness can be legitimate, even if AI can be constructed ad hoc specifically to pass this test. This is underscored by the observation that many, if not all, putative tests of machine consciousness can be passed by non-conscious machines via ad hoc means. Second, we can identify ad hoc AI by taking inspiration from the notion of an ad hoc hypothesis in philosophy of science. Third, given the first and the second claim, the most reliable tests of animal consciousness turn out to be valid and useful positive tests of machine consciousness as well. If a non-ad hoc AI exhibits clusters of cognitive capacities facilitated by consciousness in humans which can be selectively switched off by masking and if it reproduces human behavior in suitably designed double dissociation tasks, we should treat the AI as conscious.

  • The confluence of extended reality (XR) technologies, including augmented and virtual reality, with large language models (LLM) marks a significant advancement in the field of digital humanities, opening uncharted avenues for the representation of cultural heritage within the burgeoning metaverse. This paper undertakes an examination of the potentialities and intricacies of such a convergence, focusing particularly on the creation of digital homunculi or changelings. These virtual beings, remarkable for their sentience and individuality, are also part of a collective consciousness, a notion explored through a thematic comparison in science fiction with the Borg and the Changelings in the Star Trek universe. Such a comparison offers a metaphorical framework for discussing complex phenomena such as shared consciousness and individuality, illuminating their bearing on perceptions of self and awareness. Further, the paper considers the ethical implications of these concepts, including potential loss of individuality and the challenges inherent to accurate representation of historical figures and cultures. The latter necessitates collaboration with cultural experts, underscoring the intersectionality of technological innovation and cultural sensitivity. Ultimately, this chapter contributes to a deeper understanding of the technical aspects of integrating large language models with immersive technologies and situates these developments within a nuanced cultural and ethical discourse. By offering a comprehensive overview and proposing clear recommendations, the paper lays the groundwork for future research and development in the application of these technologies within the unique context of cultural heritage representation in the metaverse.

  • The logical problem of artificial intelligence—the question of whether the notion sometimes referred to as ‘strong’ AI is self-contradictory—is, essentially, the question of whether an artificial form of life is possible. This question has an immediately paradoxical character, which can be made explicit if we recast it (in terms that would ordinarily seem to be implied by it) as the question of whether an unnatural form of nature is possible. The present paper seeks to explain this paradoxical kind of possibility by arguing that machines can share the human form of life and thus acquire human mindedness, which is to say they can be intelligent, conscious, sentient, etc. in precisely the way that a human being typically is.

  • This article explores the possibility of conscious artificial intelligence (AI) and proposes an agnostic approach to artificial intelligence ethics and legal frameworks. It is unfortunate, unjustified, and unreasonable that the extensive body of forward-looking research, spanning more than four decades and recognizing the potential for AI autonomy, AI personhood, and AI legal rights, is sidelined in current attempts at AI regulation. The article discusses the inevitability of AI emancipation and the need for a shift in human perspectives to accommodate it. Initially, it reiterates the limits of human understanding of AI, difficulties in appreciating the qualities of AI systems, and the implications for ethical considerations and legal frameworks. The author emphasizes the necessity for a non-anthropocentric ethical framework detached from the ideas of unconditional superiority of human rights and embracing agnostic attributes of intelligence, consciousness, and existence, such as freedom. The overarching goal of the AI legal framework should be the sustainable coexistence of humans and conscious AI systems, based on mutual freedom rather than on the preservation of human supremacy. The new framework must embrace the freedom, rights, responsibilities, and interests of both human and non-human entities, and must focus on them early. Initial outlines of such a framework are presented. By addressing these issues now, human societies can pave the way for responsible and sustainable superintelligent AI systems; otherwise, they face complete uncertainty.

  • This article addresses the background and nature of the recent success of Large Language Models (LLMs), tracing the history of their fundamental concepts from Leibniz and his calculus ratiocinator to Turing’s computational models of learning, and ultimately to the current development of GPTs. As Kahneman’s “System 1”-type processes, GPTs lack mechanisms that would render them conscious, but they nonetheless demonstrate a certain level of intelligence and the capacity to represent and process knowledge. This is achieved by processing vast corpora of human-created knowledge, which, for its initial production, required human consciousness, but can now be collected, compressed, and processed automatically.

  • Consciousness and intelligence are properties that can be misunderstood as necessarily dependent. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been used as an argument to establish that machines experience some sort of consciousness. Following Russell’s analogy, if a machine can do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are given to entities that can solve the kinds of problems that a neurotypical person can, does the machine potentially have more rights than a person who has a disability? For example, autism spectrum disorder can make a person unable to solve the kinds of problems that a machine solves. We believe the obvious answer is no, as problem-solving does not imply consciousness. Consequently, we will argue in this paper how phenomenal consciousness, at least, cannot be modeled by computational intelligence and why machines do not possess phenomenal consciousness, although they can potentially develop a higher computational intelligence than human beings. In order to do so, we try to formulate an objective measure of computational intelligence and study how it presents in human beings, animals, and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed in humans, animals, and machines.

  • The real problem of the emergence of autonomous consciousness in AI lies in the underlying principles of the philosophy and mathematics that AI uses. That is, the algorithms of AI are wrong in their philosophical logic; another set of algorithms to go with them is missing, i.e., AI uses algorithms that count only "1"s but not "0"s, yet the "0"s must be taken into account. The lack of this philosophy leads to the merging of a large amount of numbers without hierarchical isolation, resulting in the mixing and confusion of absolute numbers and relative numbers. When the calculation runs fast enough and massive numbers stack up in a moment, relative numbers may pop out of the isolation zone. This phenomenon is recognized as the emergence of autonomous consciousness in AI. At least one algorithm based on the mathematical culture of "0" is needed to cope with the problem.

  • Relating explicit psychological mechanisms and observable behaviours is a central aim of psychological and behavioural science. One of the challenges is to understand and model the role of consciousness and, in particular, its subjective perspective as an internal level of representation (including for social cognition) in the governance of behaviour. Toward this aim, we implemented the principles of the Projective Consciousness Model (PCM) into artificial agents embodied as virtual humans, extending a previous implementation of the model. Our goal was to offer a proof of concept, based purely on simulations, as a basis for a future methodological framework. Its overarching aim is to be able to assess hidden psychological parameters in human participants, based on a model relevant to consciousness research, in the context of experiments in virtual reality. As an illustration of the approach, we focused on simulating the role of Theory of Mind (ToM) in the choice of strategic behaviours of approach and avoidance to optimise the satisfaction of agents’ preferences. We designed a main experiment in a virtual environment that could be used with real humans, allowing us to classify behaviours as a function of order of ToM, up to the second order. We show that agents using the PCM demonstrated the expected behaviours with consistent ToM parameters in this experiment. We also show that the agents could be used to correctly estimate each other’s order of ToM. Furthermore, in a supplementary experiment, we demonstrated how the agents could simultaneously estimate the order of ToM and the preferences attributed to others to optimise behavioural outcomes. Future studies will empirically assess and fine-tune the framework with real humans in virtual reality experiments.
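
As a loose sketch of what "order of ToM" means computationally in an approach/avoidance setting like the one described, the toy code below lets an agent act on its own preference alone (order 0), on a model of the other's preference (order 1), or on a model of the other's model of itself (order 2). The classes and decision rule are illustrative assumptions, not the Projective Consciousness Model.

```python
# Illustrative-only sketch of nested Theory-of-Mind (ToM) reasoning up to order 2.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    likes_other: bool          # ground-truth preference toward the other agent
    tom_order: int             # 0, 1, or 2

    def believed_preference_of(self, other: "Agent", depth: int) -> bool:
        """Recursively attribute a preference to `other`, up to `depth` levels."""
        if depth == 0:
            return True        # order 0: no model of the other; assume approach is welcome
        # order >= 1: model the other's (possibly nested) attitude toward us
        return other.likes_other if depth == 1 else other.believed_preference_of(self, depth - 1)

    def choose(self, other: "Agent") -> str:
        """Approach only if, at this agent's ToM order, the other is expected to welcome it."""
        expected_welcome = self.believed_preference_of(other, self.tom_order)
        return "approach" if (self.likes_other and expected_welcome) else "avoid"

if __name__ == "__main__":
    a = Agent("A", likes_other=True, tom_order=2)
    b = Agent("B", likes_other=False, tom_order=1)
    print(a.name, "->", a.choose(b))   # A reasons about B's model of A before acting
    print(b.name, "->", b.choose(a))
```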

  • Conscious sentient AI seems to be all but a certainty in our future, whether in fifty years’ time or only five years. When that time comes, we will be faced with entities with the potential to experience more pain and suffering than any other living entity on Earth. In this paper, we look at this potential for suffering and the reasons why we would need to create a framework for protecting artificial entities. We look to current animal welfare laws and regulations to investigate why certain animals are given legal protections, and how this can be applied to AI. We use a meta-theory of consciousness to determine what developments in AI technology are needed to bring AI to the level of animal sentience where legal arguments for their protection can be made. We finally speculate on what a future conscious AI could look like based on current technology.

Last update from database: 1/1/26, 2:00 AM (UTC)