
Full bibliography (716 resources)

  • Butlin et al. (2023) propose a rubric for evaluating AI systems based on indicator properties derived from existing theories of consciousness, suggesting that while current AIs do not possess consciousness, these indicators are pivotal for future developments towards artificial consciousness. The current paper critiques the approach by Butlin et al., arguing that the complexity of consciousness, characterized by subjective experience, poses significant challenges for its operationalization and measurement, thus complicating its replication in AI. The commentary further explores the limitations of current methodologies in artificial consciousness research, pointing to the necessity of out-of-the-box thinking and the integration of individual differences research in cognitive psychology, particularly in the areas of attention, cognitive control, autobiographical memory, and Theory of Mind (ToM), to advance the understanding and development of artificial consciousness.

  • This chapter critically questions the claim that Artificial Intelligence could emulate human consciousness and consciousness-dependent activity to create conscious artificial systems. The analysis is based on neurophysiological research and theory. In-depth scrutiny of the field, and of the prospects for converting neuroscience research into the type of algorithmic programs utilized in computer-based AI systems to create artificially conscious machines, leads to the conclusion that such a conversion is unlikely to ever be possible because of the complexity of unconscious and conscious brain processing and their interaction. It is through the latter that the brain opens the doors to consciousness, a property of the human mind that no other living species has developed, for reasons that are made clear in this chapter. As a consequence, many of the projected goals of AI will remain forever unrealizable. Although this work does not directly examine the question within a philosophy of mind framework by, for example, discussing why identifying consciousness with the activity of electronic circuits is first and foremost a category mistake in terms of scientific reasoning, the approach offered in the chapter is complementary to that standpoint and illustrates various aspects of the problem under a monist from-brain-to-mind premise.

  • Consciousness is a natural phenomenon, familiar to every person. At the same time, it cannot be described in singular terms. The rise of Artificial Intelligence in recent years has made the topic of Artificial Consciousness highly debated. The paper discusses the main general theories of consciousness and their relationship with proposed Artificial Consciousness solutions. There are a number of well-established models accepted in the area of research: Higher Order Thoughts/Higher Order Perception, Global Neuronal Workspace, Integrated Information Theory, reflexive, representative, functional, connective, the Multiple Drafts Model, Neural Correlates of Consciousness, quantum consciousness, to name just a few. Some theories overlap, which allows for speaking about more advanced, complex models. The disagreement among theories leads to different views on animal consciousness and human conscious states. As a result, there are also variations in the opinions about Artificial Consciousness based on the discrepancy between qualia and the nature of AI. The hard problem of consciousness, an epitome of qualia, is often seen as an insurmountable barrier or, at least, an “explanatory gap”. Nevertheless, AI constructs allow imitations of some models in silico, which are presented by several authors as full-fledged Artificial Consciousness or as strong AI. This in itself does not make the translation of consciousness into the AI space easier but allows decent progress in the domain. As argued in this paper, there will be no universal solution to the Artificial Consciousness problem, and the answer depends on the type of consciousness model. A more pragmatic view suggests instrumental interaction between humans and AI in the environment of the Fifth Industrial Revolution, limiting expectations of strong AI outcomes to cognition but not consciousness in wide terms.

  • The prospect of consciousness in artificial systems is closely tied to the viability of functionalism about consciousness. Even if consciousness arises from the abstract functional relationships between the parts of a system, it does not follow that any digital system that implements the right functional organization would be conscious. Functionalism requires constraints on what it takes to properly implement an organization. Existing proposals for constraints on implementation relate to the integrity of the parts and states of the realizers of roles in a functional organization. This paper presents and motivates three novel integrity constraints on proper implementation not satisfied by current neural network models. It is proposed that for a system to be conscious, there must be a straightforward relationship between the material entities that compose the system and the realizers of functional roles, that the realizers of the functional roles must play their roles due to internal causal powers, and that they must continue to exist over time.

  • The aim is to develop artificial consciousness. In a previous report, we concluded that it is difficult to mathematically define individual qualia in a univocal way. Therefore, focusing on Human Language as a tool for communication, we attempted to define it using probability space. When the Kullback-Leibler distance, defined by the probability density functions, is zero, the two probability distributions can be considered equivalent, indicating the equivalence required by the language. This has allowed us to define Human Language mathematically in this paper. Regarding the 'philosophical zombie' thought experiment used as a criticism of physicalism, we were also able to show that philosophical zombies cannot serve as a criticism of the proposed model, since within the definition of Human Language the existence of philosophical zombies is encompassed, but the probability of their appearance is zero. In addition, episodic memory was defined in the probability space by connecting individual Human Language words as a direct product. These are the main contributions of this paper, and the findings were also used to interpret 1) brain-induced illusions and 2) blank brain theory and brain channel theory. Building on the first report and the conclusions of this paper, a model of consciousness is presented in the third report.
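The equivalence criterion invoked in this entry (two probability distributions treated as equivalent exactly when their Kullback-Leibler distance is zero) can be sketched numerically. This is a minimal illustration of the general KL property, not the paper's actual formalism; the function name and example distributions are assumptions.

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D(P || Q).

    Equals zero exactly when P and Q coincide on their shared support,
    which is the sense of 'equivalence' the entry relies on.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.5, 0.3, 0.2]   # identical to p
r = [0.4, 0.4, 0.2]   # differs from p

print(kl_divergence(p, q))      # 0.0: equivalent distributions
print(kl_divergence(p, r) > 0)  # True: non-equivalent distributions
```

Note that KL divergence is not symmetric (D(P||Q) generally differs from D(Q||P)), which is why the abstract's "distance" wording is informal; zero divergence nonetheless holds in both directions only for identical distributions.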

  • Artificial intelligence (AI) has shaped human civilization. It is the ability of a digital computer or computer-controlled robot to perform general tasks associated with specific patterns of intelligence. AI is not human, but it possesses intelligence similar to humans', and it can even perform tasks that humans cannot. Artificial intelligence is used in various fields, ranging from education, healthcare, and the economy to agriculture. Artificial intelligence is the product of human creation, sentiment, and consciousness; it is the result of human intelligence itself. AI can answer questions and provide intelligent recommendations for humans. With its algorithmic capabilities, AI can analyze billions of signals and make precise recommendations. At this level, artificial intelligence represents human intelligence. However, the question is whether artificial intelligence has sensitivity, sentiment, empathy, and solidarity toward the humans who created it, or whether artificial intelligence instead becomes a director of human beings in their self-actualization. Using a phenomenological approach, this research aims to explore the phenomenon of the presence of artificial intelligence, which offers convenience for human work but at the same time reduces the value of humans, who possess creative intuition, sentiment, and consciousness. Yet AI is born from the ability of humans to create, feel, and think. The results of this exploration are then given a theological and ethnographic perspective (teo-ethnography). Keywords: artificial intelligence, creation, sentiment, consciousness, teo-ethnography

  • The dream of making a conscious humanoid – whether as servant, guard, entertainer, or simply as testament to human creativity – has long captivated the human imagination. However, while past attempts were performed by magicians and mystics, today scientists and engineers are doing the work to turn myth into reality. This essay introduces the fundamental concepts surrounding human consciousness and machine consciousness and offers a theological contribution. Using the biblical association of the soul to blood, it will be shown that the Bible provides evidence of a scientific claim, while at the same time, science provides evidence of a biblical claim.

  • As artificially intelligent systems become more anthropomorphic and pervasive, and their potential impact on humanity more urgent, discussions about the possibility of machine consciousness have significantly intensified, and it is sometimes seen as 'the holy grail'. Many concerns have been voiced about the ramifications of creating an artificial conscious entity. This is compounded by a marked lack of consensus around what constitutes consciousness and by an absence of a universal set of criteria for determining consciousness. By going into depth on the foundations and characteristics of consciousness, we propose five criteria for determining whether a machine is conscious, which can also be applied more generally to any entity. This paper aims to serve as a primer and stepping stone for researchers of consciousness, be they in philosophy, computer science, medicine, or any other field, to further pursue this holy grail of philosophy, neuroscience and artificial intelligence.

  • Can octopuses feel pain or pleasure? Can we tell if a person unresponsive after severe injury might be suffering? When does a fetus begin having conscious experiences? These questions about the edge of sentience are subject to enormous uncertainty. This book builds a framework to help us reach ethically sound decisions on how to manage the risks

  • This chapter discusses the relationship between compliance with syntactically defined legislation and consciousness: whether, in order to obey laws, a robot would need to be conscious. This leads to consideration of what Emergent Information Theory can tell us about the possibility of artificial consciousness as such. Various arguments based on similarities and differences between biological and technological physical and informational systems are presented, with the conclusion that direct replication of a human type of consciousness is improbable. However, our understandable tendency to consider our own type of consciousness as uniquely special and valuable is challenged and found to be unfounded. Other high-level emergent phenomena in the information dimensions of artificial systems may, while different, be equally deserving of a comparable status.

  • While consciousness has historically been a heavily debated topic, awareness has had less success in raising the interest of scholars. However, more and more researchers are becoming interested in answering questions concerning what awareness is and how it can be artificially generated. The landscape is rapidly evolving, with multiple voices and interpretations of the concept being conceived and techniques being developed. The goal of this paper is to summarize and discuss those voices connected with projects funded by the EIC Pathfinder Challenge “Awareness Inside” call within Horizon Europe, designed specifically for fostering research on natural and synthetic awareness. In this perspective, we dedicate special attention to the challenges and promises of applying synthetic awareness in robotics, as the development of mature techniques in this new field is expected to have a special impact on generating more capable and trustworthy embodied systems.

  • Creating artificial general intelligence is the solution most often in the spotlight. It is also linked with the possibility—or fear—of machines gaining consciousness. Alternatively, developing domain‐specific artificial intelligence is more reliable, energy‐efficient, and ethically tractable, and raises mostly a problem of effective coordination between different systems and humans. Herein, it is argued that such coordination will not require machines to be conscious and that simpler ways of sharing awareness are sufficient.

  • The prospect of artificial consciousness raises theoretical, technical and ethical challenges which converge on the core issue of how to eventually identify and characterize it. In order to provide an answer to this question, I propose to start from a theoretical reflection about the meaning and main characteristics of consciousness. On the basis of this conceptual clarification it is then possible to think about relevant empirical indicators (i.e. features that facilitate the attribution of consciousness to the system considered) and identify key ethical implications that arise. In this chapter, I further elaborate previous work on the topic, presenting a list of candidate indicators of consciousness in artificial systems and introducing an ethical reflection about their potential implications. Specifically, I focus on two main ethical issues: the conditions for considering an artificial system as a moral subject; and the need for a non-anthropocentric approach in reflecting about the science and the ethics of artificial consciousness.

  • We here analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation with consciousness as a reference model or benchmark. This kind of analysis reveals several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of human-like conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (i.e., structural and architectural) and extrinsic (i.e., related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make human-like conscious processing possible and/or modulate it is a potentially promising strategy towards developing conscious AI. Also, it cannot be theoretically excluded that AI research can develop partial or potentially alternative forms of consciousness that are qualitatively different from the human form, and that may be either more or less sophisticated depending on the perspective. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness: since the use of the same word “consciousness” for humans and AI becomes ambiguous and potentially misleading, we propose to clearly specify which level and/or type of consciousness AI research aims to develop, as well as what would be common versus different in AI conscious processing compared to human conscious experience.

  • Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, being able to do all we do, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which -- a basic stored-program computer -- simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent, and (ii) that this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness.

  • Since the term ‘Artificial Intelligence’ was coined, the respective research field has frequently emulated human mental faculties. Despite diverging viewpoints regarding the feasibility of achieving human-like cognition in machines, the very use of the word intelligence for complex computer systems evokes human consciousness. Likewise, there have been attempts to understand the human mind in terms of computers, exemplified by the computational theory of mind. By contrast, my article underscores the categorical difference between the mind and machines. Partly building upon arguments by David Gelernter and Bert Olivier, I focus on literary examples spanning from Shakespeare to T.S. Eliot that accentuate subjective experience, the intricate relationship between body and mind, and the anticipation of death as human characteristics beyond the reach of computational systems.

  • Isaac Asimov, the well-known author of novels that featured positronic robots, also penned tales featuring Multivac, an immense electronic brain that grappled with the weighty and intricate decisions impacting all of humankind. In his 1958 short story All the Troubles of the World, Asimov introduced us to a Multivac tormented by the consciousness of its own thought processes to the extent that it desired its own demise. Through this narrative, Asimov delved into the existential dimensions of artificial consciousness: he depicted the Multivac’s adeptness in planning, reflexivity, and intentionality aimed at self-termination, qualities that, in a tragic parallel, are also observed in certain cases of human suicidal behavior. Starting from these suggestions, we propose an existentialist and self-reflection criterion for consciousness, intertwining phenomenal consciousness with an entity’s capability to conceive thoughts about its own mortality. We argue that, according to certain psychological literature and the existentialist essays of Emil M. Cioran, artificial systems might be deemed conscious if they possess death-thought accessibility: the capacity, akin to that of humans, to conceive thoughts about their own mortality and to intentionally conceive of self-termination. Naturally, this criterion shares the inherent challenges associated with defining intentionality, reflexivity, and, ultimately, the very concept of consciousness as it pertains to humans.

  • The hypothesis of conscious machines has been debated since the invention of the notion of artificial intelligence, powered by the assumption that the computational intelligence achieved by a system causes the emergence of phenomenal consciousness in that system, either as an epiphenomenon or as a consequence of the behavioral or internal complexity of the system surpassing some threshold. As a consequence, a huge amount of literature exploring the possibility of machine consciousness and how to implement it on a computer has been published. Moreover, common folk psychology and transhumanist literature have fed this hypothesis through the popularity of science fiction, where intelligent robots are usually anthropomorphized and hence granted phenomenal consciousness. However, in this work we argue that this literature lacks scientific rigour, since the opposite hypothesis is impossible to falsify, and we present a list of arguments showing how every approach the machine consciousness literature has published depends on philosophical assumptions that cannot be proven by the scientific method. Concretely, we also show that phenomenal consciousness is not computable, independently of the complexity of the algorithm or model; that it cannot be objectively measured or quantitatively defined; and that it is basically a phenomenon that is subjective and internal to the observer. Given all these arguments, we end the work by arguing why the idea of conscious machines is nowadays a myth of transhumanism and science fiction culture.

Last update from database: 5/15/26, 1:00 AM (UTC)