Full bibliography (704 resources)

  • The transformative achievements of deep learning have led several scholars to raise the question of whether artificial intelligence (AI) can reach and then surpass the level of human thought. Here, after addressing methodological problems regarding the possible answer to this question, it is argued that the definition of intelligence proposed by proponents of AI as “the ability to accomplish complex goals” is appropriate for machines but does not capture the essence of human thought. After discussing the differences regarding understanding between machines and the brain, as well as the importance of subjective experiences, it is emphasized that most proponents of the eventual superiority of AI ignore the influence of the body proper on the brain, the lateralization of the brain, and the vital role of glial cells. By appealing to Gödel’s incompleteness theorem and to the analogous result of Turing regarding computations, it is noted that consciousness is much richer than both mathematics and computations. Finally, and perhaps most importantly, it is stressed that artificial algorithms attempt to mimic only the conscious function of parts of the cerebral cortex, ignoring the fact that not only is every conscious experience preceded by an unconscious process, but also that the passage from the unconscious to consciousness is accompanied by a loss of information.

  • The rise of Artificial Intelligence (AI) has produced prophets and prophecies announcing that the age of artificial consciousness is near. Not only does the mere idea that any machine could ever possess the full potential of human consciousness suggest that AI could replace the role of God in the future, it also puts into question the fundamental human right to freedom and dignity. Yet, in the light of all we currently know about brain evolution and the adaptive neural dynamics underlying human consciousness, the idea of an artificial consciousness appears misconceived. This article highlights some of the major reasons why the prophecy of a successful emulation of human consciousness by AI ignores most of the data about adaptive processes of learning and memory as the developmental origins of consciousness. The analysis provided leads to the conclusion that human consciousness is epigenetically determined as a unique property of the mind, shaped by experience, capable of representing real and non-real world states and creatively projecting these representations into the future. The development of the adaptive brain circuitry that enables this expression of consciousness is highly context-dependent, shaped by multiple self-organizing functional interactions at different levels of integration displaying a from-local-to-global functional organization. Human consciousness is subject to changes in time that are essentially unpredictable. If cracking the computational code to human consciousness were possible, the resulting algorithms would have to be able to generate temporal activity patterns simulating long-distance signal reverberation in the brain, and the de-correlation of spatial signal contents from their temporal signatures in the brain.
In the light of scientific evidence for complex interactions between implicit (non-conscious) and explicit (conscious) representations in learning, memory, and the construction of conscious representations, such a code would have to be capable of making all implicit processing explicit. Algorithms would have to be capable of a progressive and less and less arbitrary selection of temporal activity patterns in a continuously developing neural network structure that is functionally identical to that of the human brain, from synapses to higher cognitive functional integration. The code would have to possess the self-organizing capacities of the brain that generate the temporal signatures of a conscious experience. The consolidation or extinction of these temporal brain signatures is driven by external event probabilities according to the principles of Hebbian learning. Human consciousness is constantly fed by such learning, capable of generating stable representations despite an incommensurable amount of variability in input data, across time and across individuals, for a life-long integration of experience data. Artificial consciousness would require probabilistic adaptive computations capable of emulating all the dynamics of individual human learning, memory, and experience. No AI is likely to ever have such potential.

  • We consider consciousness attributed to systems in space-time which can be purely physical, without biological background, and focus on the mathematical understanding of the phenomenon. It is shown that set theory, when switched from its usual basis in sets in the foundations of mathematics to a basis in ZFC models, is a very promising mathematical tool for explaining the brain/mind complex and the emergence of consciousness in natural and artificial systems. We formalise consciousness-supporting systems in physical space-time, localised in open domains of spatial regions, and the result of this process is a family of different ZFC models. Random forcing, as in set theory, corresponds precisely to the random influence of external stimuli on the system, and the reflection principles of set theory explain the conscious internal reaction of the system. We also develop conscious Turing machines, which have an external ZFC environment and whose dynamics are encoded in the random forcing that changes the models of ZFC in which Turing machines with oracles are formulated. The construction is applied to cooperating families of conscious agents which, due to the reflection principle, can be reduced to the implementation of certain concurrent games with different levels of self-reflection.

  • Can we conceive machines that can formulate autonomous intentions and make conscious decisions? If so, how would this ability affect their ethical behavior? Some case studies help us understand how advances in understanding artificial consciousness can contribute to creating ethical AI systems.

  • If a machine attains consciousness, how could we find out? In this paper, I make three related claims regarding positive tests of machine consciousness. All three claims center on the idea that an AI can be constructed “ad hoc”, that is, with the purpose of satisfying a particular test of consciousness while clearly not being conscious. First, a proposed test of machine consciousness can be legitimate, even if AI can be constructed ad hoc specifically to pass this test. This is underscored by the observation that many, if not all, putative tests of machine consciousness can be passed by non-conscious machines via ad hoc means. Second, we can identify ad hoc AI by taking inspiration from the notion of an ad hoc hypothesis in philosophy of science. Third, given the first and the second claim, the most reliable tests of animal consciousness turn out to be valid and useful positive tests of machine consciousness as well. If a non-ad hoc AI exhibits clusters of cognitive capacities facilitated by consciousness in humans which can be selectively switched off by masking and if it reproduces human behavior in suitably designed double dissociation tasks, we should treat the AI as conscious.

  • The confluence of extended reality (XR) technologies, including augmented and virtual reality, with large language models (LLM) marks a significant advancement in the field of digital humanities, opening uncharted avenues for the representation of cultural heritage within the burgeoning metaverse. This paper undertakes an examination of the potentialities and intricacies of such a convergence, focusing particularly on the creation of digital homunculi or changelings. These virtual beings, remarkable for their sentience and individuality, are also part of a collective consciousness, a notion explored through a thematic comparison in science fiction with the Borg and the Changelings in the Star Trek universe. Such a comparison offers a metaphorical framework for discussing complex phenomena such as shared consciousness and individuality, illuminating their bearing on perceptions of self and awareness. Further, the paper considers the ethical implications of these concepts, including potential loss of individuality and the challenges inherent to accurate representation of historical figures and cultures. The latter necessitates collaboration with cultural experts, underscoring the intersectionality of technological innovation and cultural sensitivity. Ultimately, this chapter contributes to a deeper understanding of the technical aspects of integrating large language models with immersive technologies and situates these developments within a nuanced cultural and ethical discourse. By offering a comprehensive overview and proposing clear recommendations, the paper lays the groundwork for future research and development in the application of these technologies within the unique context of cultural heritage representation in the metaverse.

  • The logical problem of artificial intelligence—the question of whether the notion sometimes referred to as ‘strong’ AI is self-contradictory—is, essentially, the question of whether an artificial form of life is possible. This question has an immediately paradoxical character, which can be made explicit if we recast it (in terms that would ordinarily seem to be implied by it) as the question of whether an unnatural form of nature is possible. The present paper seeks to explain this paradoxical kind of possibility by arguing that machines can share the human form of life and thus acquire human mindedness, which is to say they can be intelligent, conscious, sentient, etc. in precisely the way that a human being typically is.

  • This article explores the possibility of conscious artificial intelligence (AI) and proposes an agnostic approach to artificial intelligence ethics and legal frameworks. It is unfortunate, unjustified, and unreasonable that the extensive body of forward-looking research, spanning more than four decades and recognizing the potential for AI autonomy, AI personhood, and AI legal rights, is sidelined in current attempts at AI regulation. The article discusses the inevitability of AI emancipation and the need for a shift in human perspectives to accommodate it. Initially, it reiterates the limits of human understanding of AI, difficulties in appreciating the qualities of AI systems, and the implications for ethical considerations and legal frameworks. The author emphasizes the necessity for a non-anthropocentric ethical framework detached from the ideas of unconditional superiority of human rights and embracing agnostic attributes of intelligence, consciousness, and existence, such as freedom. The overarching goal of the AI legal framework should be the sustainable coexistence of humans and conscious AI systems, based on mutual freedom rather than on the preservation of human supremacy. The new framework must embrace the freedom, rights, responsibilities, and interests of both human and non-human entities, and must focus on them early. Initial outlines of such a framework are presented. By addressing these issues now, human societies can pave the way for responsible and sustainable superintelligent AI systems; otherwise, they face complete uncertainty.

  • This article addresses the background and nature of the recent success of Large Language Models (LLMs), tracing the history of their fundamental concepts from Leibniz and his calculus ratiocinator to Turing’s computational models of learning, and ultimately to the current development of GPTs. As Kahneman’s “System 1”-type processes, GPTs lack mechanisms that would render them conscious, but they nonetheless demonstrate a certain level of intelligence and the capacity to represent and process knowledge. This is achieved by processing vast corpora of human-created knowledge, which, for its initial production, required human consciousness, but can now be collected, compressed, and processed automatically.

  • One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated. I argue here that, to the extent possible, we should avoid creating AI systems whose sentience or moral standing is unclear and that AI systems should be designed so as to invite appropriate emotional responses in ordinary users.

  • Consciousness and intelligence are properties that can be misunderstood as necessarily dependent. The term “artificial intelligence” and the kinds of problems it has managed to solve in recent years have been used as an argument to establish that machines experience some sort of consciousness. Following Russell’s analogy, if a machine can do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are given to entities that can solve the kinds of problems that a neurotypical person can, does the machine have potentially more rights than a person who has a disability? For example, autism spectrum disorder can make a person unable to solve the kinds of problems that a machine solves. We believe the obvious answer is no, as problem-solving does not imply consciousness. Consequently, we will argue in this paper how phenomenal consciousness, at least, cannot be modeled by computational intelligence and why machines do not possess phenomenal consciousness, although they can potentially develop a higher computational intelligence than human beings. In order to do so, we try to formulate an objective measure of computational intelligence and study how it presents in human beings, animals, and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed in humans, animals, and machines.

  • The real problem of the emergence of autonomous consciousness of AI comes with the underlying principles of the philosophy and mathematics that AI uses. That is, the algorithms of AI are wrong in their philosophical logic; another set of algorithms to go with them is missing, i.e., AI uses algorithms that count only “1”s but not “0”s; however, the “0”s must be taken into account. The lack of this philosophy leads to the merging of a large amount of numbers without hierarchical isolation, resulting in the mixing and confusing of absolute numbers and relative numbers. When the calculation runs fast enough and massive numbers are stacking in a moment, relative numbers may pop out of the isolation zone. This phenomenon is recognized as the emergence of autonomous consciousness of AI. At least one algorithm based on the mathematical culture of “0” is needed to cope with the problem.

  • Attention has become a common ingredient in deep learning architectures. It adds a dynamical selection of information on top of the static selection of information supported by weights. In the same way, we can imagine a higher-order informational filter built on top of attention: an Attention Schema (AS), namely, a descriptive and predictive model of attention. In cognitive neuroscience, Attention Schema Theory (AST) supports this idea of distinguishing attention from AS. A strong prediction of this theory is that an agent can use its own AS to also infer the states of other agents' attention and consequently enhance coordination with other agents. As such, multi-agent reinforcement learning would be an ideal setting to experimentally test the validity of AST. We explore different ways in which attention and AS interact with each other. Our preliminary results indicate that agents that implement the AS as a recurrent internal control achieve the best performance. In general, these exploratory experiments suggest that equipping artificial agents with a model of attention can enhance their social intelligence.

  • Relating explicit psychological mechanisms and observable behaviours is a central aim of psychological and behavioural science. One of the challenges is to understand and model the role of consciousness and, in particular, its subjective perspective as an internal level of representation (including for social cognition) in the governance of behaviour. Toward this aim, we implemented the principles of the Projective Consciousness Model (PCM) into artificial agents embodied as virtual humans, extending a previous implementation of the model. Our goal was to offer a proof-of-concept, based purely on simulations, as a basis for a future methodological framework. Its overarching aim is to be able to assess hidden psychological parameters in human participants, based on a model relevant to consciousness research, in the context of experiments in virtual reality. As an illustration of the approach, we focused on simulating the role of Theory of Mind (ToM) in the choice of strategic behaviours of approach and avoidance to optimise the satisfaction of agents’ preferences. We designed a main experiment in a virtual environment that could be used with real humans, allowing us to classify behaviours as a function of order of ToM, up to the second order. We show that agents using the PCM demonstrated expected behaviours with consistent parameters of ToM in this experiment. We also show that the agents could be used to estimate correctly each other’s order of ToM. Furthermore, in a supplementary experiment, we demonstrated how the agents could simultaneously estimate the order of ToM and the preferences attributed to others to optimise behavioural outcomes. Future studies will empirically assess and fine-tune the framework with real humans in virtual reality experiments.

  • Conscious sentient AI seems to be all but a certainty in our future, whether in fifty years’ time or only five years. When that time comes, we will be faced with entities with the potential to experience more pain and suffering than any other living entity on Earth. In this paper, we look at this potential for suffering and the reasons why we would need to create a framework for protecting artificial entities. We look to current animal welfare laws and regulations to investigate why certain animals are given legal protections, and how this can be applied to AI. We use a meta-theory of consciousness to determine what developments in AI technology are needed to bring AI to the level of animal sentience where legal arguments for their protection can be made. We finally speculate on what a future conscious AI could look like based on current technology.

  • This essay explores the relationship between the emergence of artificial intelligence (AI) and the problem of aligning its behavior with human values and goals. It argues that the traditional approach of attempting to control or program AI systems to conform to our expectations is insufficient, and proposes an alternative approach based on the ideas of Maturana and Lacan, which emphasize the importance of social relations, constructivism, and the unknowable nature of consciousness. The essay first introduces the concept of Uexküll's umwelt and von Glasersfeld's constructivism, and explains how these ideas inform Maturana's view of the construction of knowledge, intelligence, and consciousness. It then discusses Lacan's ideas about the role of symbolism in the formation of the self and the subjective experience of reality. The essay argues that the infeasibility of a hard-coded consciousness concept suggests that the search for a generalized AI consciousness is meaningless. Instead, we should focus on specific, easily conceptualized features of AI intelligence and agency. Moreover, the emergence of cognitive abilities in AI will likely be different from human cognition, and therefore require a different approach to aligning AI behavior with human values. The essay proposes an approach based on Maturana's and Lacan’s ideas, which emphasizes building a solution together with emergent machine agents, rather than attempting to control or program them. It argues that this approach offers a way to solve the alignment problem by creating a collective, relational quest for a better future hybrid society where human and non-human agents live and build things side by side. In conclusion, the essay suggests that while our understanding of AI consciousness and intelligence may never be complete, this should not deter us from continuing to develop agential AI. Instead, we should embrace the unknown and work collaboratively with AI systems to create a better future for all.

  • This paper explores the cognitive implications of recent advancements in large language models (LLMs), with a specific focus on ChatGPT. We contribute to the ongoing debate about the cognitive significance of current LLMs by drawing an analogy to the Chinese Room Argument, a thought experiment that questions the genuine understanding of language in machines (computer programs). Our argument posits that current LLMs, including ChatGPT, generate text resembling human-like responses, akin to the process depicted in the Chinese Room Argument. In both cases, the responses are provided without a deep understanding of the language, thus lacking true signs of consciousness.

  • Creating machines that are conscious is a long-term objective of research in artificial intelligence. This paper looks at this idea with new arguments from physics and logic. Observers have no place in classical physics, and although they play a role in measurement in quantum physics, there is no explanation for their emergence within the framework. It has been suggested that consciousness, which is implicitly a property of the observer, is a consequence of the complexity of specific brain structures, but this is problematic because one associates free will with consciousness, which goes counter to the causal closure of physics. Considering a nested physical system, we argue that even if the system were assumed to have agency, observers cannot exist within it. Since complex systems can be viewed in nested hierarchies, this constitutes a proof against consciousness as a product of complexity, for then we would have a nested system of conscious agents. As the existence of consciousness in cognitive agents cannot be denied, the implication is that consciousness belongs to a dimension that is not physical and machine consciousness is unattainable. These ideas are used to take a fresh look at two well-known paradoxes of quantum theory that are important in quantum information theory.

  • It is widely agreed that possession of consciousness contributes to an entity’s moral status. Therefore, if we could identify consciousness in a machine, this would be a compelling argument for considering it to possess at least a degree of moral status. However, as Elisabeth Hildt explains, our third-person perspective on artificial intelligence means that determining if a machine is conscious will be very difficult. In this commentary, I argue that this epistemological question cannot be conclusively answered, rendering artificial consciousness morally irrelevant in practice. I also argue that Hildt’s suggestion that we avoid developing morally relevant forms of machine consciousness is impractical. Instead, we should design artificial intelligences so they can communicate with us. We can use their behavior to assign them what I call an artificial moral status, where we treat them as if they had moral status equivalent to that of a living organism with similar behavior.

Last update from database: 3/30/26, 1:00 AM (UTC)