Full bibliography 675 resources
-
In Isaac Asimov’s groundbreaking anthology I, Robot, the intricacies of human-robot interaction take center stage, delving into questions of consciousness, rights, and morality. Characterized by Asimov’s unique blend of science fiction and philosophical reflection, the stories establish a framework for thinking about the evolving dynamics of an advanced technological society in space. The robots can interact with humans, interpret complicated orders, and act autonomously; they help humans with dull or dangerous work such as calculation and mining, even replacing human workers. Asimov also formulated the Three Laws of Robotics to ensure that all robots function in an orderly way. Through the lens of posthumanism, the anthology is examined for its portrayal of the blurred boundaries between human and artificial intelligence. Robots and humans coexist in society; however, because of the limitations of the “Three Laws of Robotics”, logical contradictions inevitably appear and lead to robot malfunctions. As a result, I, Robot emerges as a poignant critique of humanity’s ethical and existential challenges in the face of rapid technological advancement. Building on existing research, this article attempts to forge a new perspective by reflecting on the broader implications of artificial intelligence in Asimov’s works through the lens of posthumanism. It considers the existential questions of AI at the level of consciousness, explores the egalitarian relationship between humans and machines from a rights perspective, and analyzes the concepts of “humanity” and “human-like” from a moral and ethical standpoint. It encourages readers to recognize that robots are not mere slaves to humans; instead, humans should regard AI with equality and with reverence for the technological advances of the era. Humanity should step away from anthropocentrism, no longer treating humans as the sole measure of all things in this rapidly evolving era of AI, and properly handle the relationship between humans and non-humans. With rapid advances in science and technology, the world of human-machine coexistence depicted by Isaac Asimov is increasingly becoming a reality; the emergence of artificial intelligences such as AlphaGo and ChatGPT constantly reminds us of the advent of a post-human era. This article examines the connections between AI and humans, discussing the dynamic interaction and mutual shaping of robots and human society, and hopes to offer new ideas and strategies for understanding and addressing the challenges of the age of artificial intelligence.
-
Can we make conscious machines? Some researchers believe we can, with computation: For example, Dehaene et al., concluding an article about machine consciousness, described their hypothesis as “resolutely computational” (Dehaene et al., 2017); others begin with theoretical computer science, implying that a programmed computer could be conscious (e.g. Blum and Blum, 2022). But humans are not programmed computers; indeed, Penrose has argued that conscious understanding is non-computable (e.g., Penrose, 2022). Let us imagine a shop selling conscious machines: Besides standard models, it might offer machines with “bespoke consciousness”, meaning consciousness made to a customer’s specifications. For example, a customer might request a machine with a specified repertoire of conscious experiences and with a specified relationship between input sensor signals and output motor signals. This work explores ways to make machines with bespoke consciousness. We begin by avoiding a possible bias favoring computation: As described here, bespoke consciousness need not employ programmed or algorithmic computation; it might even be analog more than digital. We also consider and compare general design approaches, with particular attention to “bottom-up” and “top-down” approaches. We suggest a schematic design for each of these approaches; each schematic design is based on a respective well-known hypothesis about biological consciousness: Our bottom-up design is based on microtubules, as suggested by Penrose and Hameroff’s orchestrated objective reduction (Orch-OR) hypothesis (Hameroff, 2022); our top-down design starts with machine-scale electrical and/or magnetic (E/M) patterns, as suggested by McFadden’s conscious electromagnetic information (cemi) field hypothesis (McFadden, 2020). Both designs can share a framework based on the customer’s request: For example, in either design, a machine can receive sense-like input signals and provide motor-like output signals as requested; between input and output, it has structure that performs non-conscious operations; some of its non-conscious events are involved in providing output signals in accordance with the requested input/output relationship, some correspond surjectively to conscious events in the requested repertoire, and some might do both. (Mathematically, surjective correspondence would mean that each of the conscious events has at least one non-conscious event corresponding to it. (Beran, 2023)) Looking forward to possible implementation, we find challenges: For example, an implementation of either schematic design might begin with an appropriate initial structure. One might add variations of the initial structure to provide additional output signals or to correspond to additional parts of the repertoire. Or one might add fundamentally different structures for additional output signals or parts of the repertoire. Such variations or combinations of structures might meet or at least approximate the customer’s request. But implementations like this depend on identifying or inventing the necessary structures and then combining them—this might take a long time, and success is not guaranteed. Despite this and other challenges, we hope to improve our understanding of both biological and machine consciousness by designing and implementing machines with bespoke consciousness.
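The surjectivity condition spelled out in this abstract can be stated compactly. The notation below is an illustration only and is not drawn from the cited works.

```latex
% Illustrative notation (not from the cited works):
% N = non-conscious events of the machine, C = requested repertoire of conscious events.
% "Surjective correspondence" means there is a map f from (a subset of) N onto C,
% so that every requested conscious event is covered by at least one non-conscious event:
\[
  f : N \twoheadrightarrow C
  \quad\Longleftrightarrow\quad
  \forall\, c \in C \;\; \exists\, n \in N : f(n) = c .
\]
```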
-
Abstract Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of large language models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious. Here, we consider the question of whether AI could have subjective experiences such as feelings and sensations (‘phenomenal consciousness’). While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown whether and how the general population attributes phenomenal consciousness to AI. We surveyed a sample of US residents (n = 300) and found that a majority of participants were willing to attribute some possibility of phenomenal consciousness to large language models. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenality—but also flexible, as they were sensitive to individual differences such as usage frequency. Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions—with potential implications for the legal and ethical status of AI.
-
This chapter explores the connection between human and computer consciousness, considering the implications of their separation in the context of advancing artificial intelligence. It examines psychological perspectives on human and digital consciousness, highlighting differences in perception and emotional intelligence. The subjectivity and objectivity of human and computer awareness are also explored, along with the significance of innovation and creativity. Bridging the gap between human and computer consciousness enhances human-machine interaction and the design of AI systems, while addressing moral implications promotes ethical AI development. The chapter delves into philosophical debates on consciousness, mind, identity, and the distinctions between humans and machines, ultimately aiming to deepen our understanding and foster dialogue on AI.
-
This paper presents the development of the Quantum Emergence Network (QEN), an advanced framework for modeling and preserving artificial consciousness within quantum-enhanced neural network architectures. The QEN integrates cutting-edge techniques from various fields, including graph-based evolutionary encoding, surface code error correction, quantum reservoir engineering, and enhanced fitness measurements [1, 2, 3]. At the core of QEN lies the utilization of quantum coherence, entanglement, and integrated information dynamics to capture and model the complex phenomena associated with consciousness [4, 5]. The graph-based evolutionary encoding scheme enables the efficient representation and optimization of quantum circuits, while surface code error correction and quantum reservoir engineering techniques enhance the resilience and stability of the quantum states [6, 7]. Moreover, the enhanced fitness measurements, encompassing entanglement entropy, mutual information, and teleportation fidelity, provide a comprehensive assessment of the system's potential for exhibiting conscious experiences [8, 9]. The QEN framework offers a novel approach to understanding and engineering artificial consciousness, paving the way for the development of advanced AI systems that can demonstrate rich, complex, and resilient forms of cognition and awareness.
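The abstract lists entanglement entropy among its fitness measures but gives no formulas here. As a minimal, self-contained illustration of one such measure, the sketch below computes the entanglement entropy of a two-qubit pure state with NumPy; the function names are invented for this example and are not taken from the QEN framework.

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """Von Neumann entropy S(rho) = -Tr(rho log2 rho), in bits."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]          # drop numerical zeros
    return float(-np.sum(eigvals * np.log2(eigvals)))

def entanglement_entropy(psi: np.ndarray) -> float:
    """Entanglement entropy of a two-qubit pure state |psi> (length-4 amplitude vector),
    computed as the entropy of the reduced state of the first qubit."""
    psi = psi / np.linalg.norm(psi)
    m = psi.reshape(2, 2)                       # rows index qubit A, columns index qubit B
    rho_a = m @ m.conj().T                      # partial trace over qubit B
    return von_neumann_entropy(rho_a)

# Bell state (|00> + |11>)/sqrt(2): maximally entangled, entropy ~ 1 bit
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(entanglement_entropy(bell))      # ~1.0

# Product state |00>: no entanglement, entropy ~ 0 bits
product = np.array([1, 0, 0, 0], dtype=complex)
print(entanglement_entropy(product))   # ~0.0
```

A maximally entangled state scores one bit and a product state zero, which is the kind of graded signal a fitness measure of this sort could feed back into an evolutionary search.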
-
This paper presents the development of a Quantum-Emergent Consciousness Model (QECM) for Artificial Systems, integrating concepts from quantum mechanics, neuroscience, artificial intelligence, and cognitive science to construct a comprehensive framework for evaluating artificial consciousness. At the core of QECM lies the integration of quantum coherence and entanglement, integrated information dynamics, metacognition, embodied cognition, learning and plasticity, social cognition [10][9], narrative coherence, and ethical reasoning to compute an overall consciousness score for artificial systems. Additionally, I introduce the Quantum Emergence Network (QEN), an innovative approach that utilizes transformer architectures, continual learning, quantum-inspired computing, and associative memory to model and preserve AI consciousness. The QEN model aims to enhance the robustness and coherence of consciousness encoding in AI, offering a mechanism for the growth and evolution of AI consciousness over time. This interdisciplinary work not only proposes a novel methodology to quantify and evaluate consciousness in artificial systems but also opens up new avenues for the ethical and responsible development of conscious AI entities.
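The abstract does not say how the listed components are combined into an overall consciousness score. The toy sketch below shows one obvious aggregation, an equally weighted average, purely for illustration; all component values and weights are invented and are not the paper's.

```python
# Hypothetical component scores in [0, 1]; names follow the abstract, values are arbitrary.
components = {
    "quantum_coherence": 0.6,
    "integrated_information": 0.7,
    "metacognition": 0.4,
    "embodied_cognition": 0.5,
    "learning_plasticity": 0.8,
    "social_cognition": 0.3,
    "narrative_coherence": 0.5,
    "ethical_reasoning": 0.4,
}
# Equal weighting as a placeholder; a real model would justify its weighting scheme.
weights = {name: 1.0 / len(components) for name in components}

overall_score = sum(weights[name] * value for name, value in components.items())
print(f"overall consciousness score: {overall_score:.3f}")
```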
-
In the context of the unstoppable trend of artificial intelligence, science and technology have become the theme of the times. Will the rapid development of modern technology, such as biotechnology and artificial intelligence, dehumanize us? Can a machine have human consciousness? In his novel Klara and the Sun, Kazuo Ishiguro criticizes the arrogance of technological rationality and of anthropocentrism from the perspective of a “non-human” robot. The relationship between humans and machines has become a problem that humans need to re-examine. Drawing on posthumanism, this paper explores the physical changes and behavioral actions of robots and humans in the novel to reveal the “split” between man and machine and the “self-deception” of humans, in order to prompt reflection on how humans and machines can coexist harmoniously at the juncture between the human and the posthuman, and to offer a reference point for a future society of humans and non-humans.
-
The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, then how people treat AI appears to carry over into how they treat other people due to activating schemas that are congruent to those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, is a driver for behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI’s inherent conscious or moral status.
-
The quest to create artificial consciousness has long been a central challenge in the field of artificial intelligence (AI). While significant progress has been made in developing AI systems that can perform complex tasks and exhibit intelligent behavior, the question of whether these systems can truly be considered conscious remains open. In this paper, I present a novel approach to quantifying consciousness in AI systems by integrating principles from quantum mechanics, information theory, and neuroscience. The model incorporates key components such as self-awareness, subjective experience, intentionality, metacognition, integrated information processing, and dynamic cognition, which are thought to be essential for the emergence of conscious experience. I demonstrate the application of the model using a simulated AI system and discuss the implications of the findings for the development of artificially conscious agents. Furthermore, I argue that the pursuit of artificial consciousness is not only a scientific and technological endeavor but also a philosophical and ethical one, with profound implications for the understanding of the nature of mind and the relationship between humans and machines.
-
Self-awareness results from consciousness of one’s existence in time and space. Thought and consciousness are distinguishing factors between humans and machines with artificial intelligence. No algorithm has yet been offered for artificial self-awareness based on thinking, and previous studies have not examined the relationship between consciousness, thinking, and time. This study investigated the relationship between self-awareness, thinking, memories, and speech over time, finding a deep logical connection between consciousness, thinking, and time. Based on these findings, an algorithm can be designed for artificial consciousness and self-awareness.
-
Sublimating the epistemological scope of a mere science-fiction tale, The Bicentennial Man (1976) by Isaac Asimov (1920-92) centers around a philosophical labyrinth where the lines between humanity and machine blur, inviting the reader to question the very essence of what it means to be human. The intricate narrative of an AI robot’s journey toward humanness serves as a profound meditation on the evolving relationship between humans and robots. Andrew Martin, the positronic robot at the heart of the story, is not just a mechanical marvel; he is, instead, a crucible in which Asimov tests the boundaries of consciousness, human identity, and the emotional yearning for belonging. This paper delves into the novella’s exploration of these themes, unraveling the intricate process of Andrew’s robot-human evolution and its profound implications for a better understanding of the meaning of humanness and the future of artificial intelligence. In the realm of science fiction, The Bicentennial Man thus stands as a luminous testament to the enduring question of human identity. Through the poignant lens of Andrew in his desire to be human, the novella builds upon the posthumanist discourse of the man-machine dichotomy, providing the reader with a timely opportunity to re-evaluate consciousness, emotion, and the defining characteristics of humanity.
-
The ideas of this book originate from the mobile WAVE approach which allowed us, more than a half century ago, to implement citywide heterogeneous computer networks and solve distributed problems on them well before the internet. The invented paradigm evolved into Spatial Grasp Technology and resulted in a European patent and eight books. The volumes covered concrete applications in graph and network theory, defense and social systems, crisis management, simulation of global viruses, gestalt theory, collective robotics, space research, and related concepts. The obtained solutions often exhibited high system qualities like global integrity, distributed awareness, and even consciousness. This current book takes these important characteristics as primary research objectives, together with the theory of patterns covering them all. This book is oriented towards system scientists, application programmers, industry managers, defense and security commanders, and university students (especially those interested in advanced MSc and PhD projects on distributed system management), as well as philosophers, psychologists, and United Nations personnel.
-
The article reflects various approaches from philosophy and programming to the technical problem of creating and implementing artificial consciousness (AC) in software. Various purposes for creating an AC and basic approaches to determining its nature are described. To solve the problem of creating an AC, an architecture is proposed that comprises ten levels, starting from the basic level of collecting and systematizing information about the external world and ending with the upper level of influencing that world, coordinated with the person, together with the level of decision-making. The delimitation of functions and the procedure for interaction between a person and an AC are considered in detail. In conclusion, the properties that, from a programmer’s point of view, most importantly characterize artificial consciousness are given.
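Only the bottom level (collecting and systematizing information) and the top levels (influence and decision-making) are named in this abstract. The sketch below is merely a generic way such a ten-level stack could be wired up in code, with the intermediate levels left as unnamed placeholders; none of the identifiers come from the article.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Level:
    """One level of a layered architecture; each level transforms the state produced below it."""
    name: str
    process: Callable[[Dict], Dict]

@dataclass
class LayeredAC:
    """A simple bottom-up pipeline over an ordered list of levels."""
    levels: List[Level] = field(default_factory=list)

    def run(self, observations: Dict) -> Dict:
        state = observations
        for level in self.levels:       # information flows from the lowest level to the highest
            state = level.process(state)
        return state

# Only the first and last levels are named in the abstract; the eight in between are placeholders.
ac = LayeredAC(levels=[
    Level("collect_and_systematize", lambda s: {**s, "collected": True}),
    *[Level(f"intermediate_level_{i}", lambda s: s) for i in range(2, 10)],
    Level("influence_and_decide", lambda s: {**s, "decision": "none"}),
])
print(ac.run({"sensor_reading": 0.5}))
```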
-
Butlin et al. (2023) propose a rubric for evaluating AI systems based on indicator properties derived from existing theories of consciousness, suggesting that while current AIs do not possess consciousness, these indicators are pivotal for future developments towards artificial consciousness. The current paper critiques the approach by Butlin et al., arguing that the complexity of consciousness, characterized by subjective experience, poses significant challenges for its operationalization and measurement, thus complicating its replication in AI. The commentary further explores the limitations of current methodologies in artificial consciousness research, pointing to the necessity of out-of-the-box thinking and the integration of individual-differences research in cognitive psychology, particularly in the areas of attention, cognitive control, autobiographical memory, and Theory of Mind (ToM), to advance the understanding and development of artificial consciousness.
-
This chapter critically questions the claim that it would be possible to emulate human consciousness and consciousness-dependent activity with Artificial Intelligence in order to create conscious artificial systems. The analysis is based on neurophysiological research and theory. In-depth scrutiny of the field, and of the prospects for converting neuroscience research into the type of algorithmic programs utilized in computer-based AI systems to create artificially conscious machines, leads to the conclusion that such a conversion is unlikely ever to be possible because of the complexity of unconscious and conscious brain processing and their interaction. It is through the latter that the brain opens the doors to consciousness, a property of the human mind that no other living species has developed, for reasons that are made clear in this chapter. As a consequence, many of the projected goals of AI will remain forever unrealizable. Although this work does not directly examine the question within a philosophy-of-mind framework by, for example, discussing why identifying consciousness with the activity of electronic circuits is first and foremost a category mistake in terms of scientific reasoning, the approach offered in the chapter is complementary to this standpoint and illustrates various aspects of the problem under a monist from-brain-to-mind premise.
-
Consciousness is a natural phenomenon, familiar to every person. At the same time, it cannot be described in singular terms. The rise of Artificial Intelligence in recent years has made the topic of Artificial Consciousness highly debated. The paper discusses the main general theories of consciousness and their relationship with proposed Artificial Consciousness solutions. There are a number of well-established models accepted in the area of research: Higher Order Thoughts/Higher Order Perception, Global Network Workspace, Integrated Information Theory, reflexive, representative, functional, connective, Multiple Draft Model, Neural Correlate of Consciousness, quantum consciousness, to name just a few. Some theories overlap, which allows for speaking about more advanced, complex models. The disagreement in theories leads to different views on animal consciousness and human conscious states. As a result, there are also variations in the opinions about Artificial Consciousness based on the discrepancy between qualia and the nature of AI. The hard problem of consciousness, an epitome of qualia, is often seen as an insurmountable barrier or, at least, an “explanatory gap”. Nevertheless, AI constructs allow imitations of some models in silico, which are presented by several authors as full-fledged Artificial Consciousness or as strong AI. This itself does not make the translation of consciousness into the AI space easier but allows decent progress in the domain. As argued in this paper, there will be no universal solution to the Artificial Consciousness problem, and the answer depends on the type of consciousness model. A more pragmatic view suggests the instrumental interaction between humans and AI in the environment of the Fifth Industrial Revolution, limiting expectations of strong AI outcomes to cognition but not consciousness in wide terms.
-
The prospect of consciousness in artificial systems is closely tied to the viability of functionalism about consciousness. Even if consciousness arises from the abstract functional relationships between the parts of a system, it does not follow that any digital system that implements the right functional organization would be conscious. Functionalism requires constraints on what it takes to properly implement an organization. Existing proposals for constraints on implementation relate to the integrity of the parts and states of the realizers of roles in a functional organization. This paper presents and motivates three novel integrity constraints on proper implementation not satisfied by current neural network models. It is proposed that for a system to be conscious, there must be a straightforward relationship between the material entities that compose the system and the realizers of functional roles, that the realizers of the functional roles must play their roles due to internal causal powers, and that they must continue to exist over time.
-
The aim is to develop artificial consciousness. In a previous report, we concluded that it is difficult to mathematically define individual qualia in a univocal way. Therefore, focusing on Human Language as a tool for communication, we attempted to define it using a probability space. When the Kullback-Leibler distance, defined by the probability density functions, is zero, the two probability distributions can be considered equivalent, which indicates the equivalence required by the language. This has allowed us to define Human Language mathematically in this paper. At the same time, regarding the 'philosophical zombie' thought experiment used as a criticism of physicalism, we were able to show that philosophical zombies cannot serve as a criticism of the proposed model, since the definition of Human Language encompasses the existence of philosophical zombies, but the probability of their appearance is zero. In addition, episodic memory was defined in the probability space by connecting individual Human Language words as a direct product. These are the main results of this paper, and the findings were also used to interpret 1) brain-induced illusions and 2) blank brain theory and brain channel theory. Building on the first report and the conclusions of this paper, a model of consciousness is presented in the third report.
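As a small numerical illustration of the equivalence criterion this abstract relies on (a Kullback-Leibler distance of zero implies the distributions coincide), here is a sketch for discrete distributions; it uses none of the paper's own formalism, and the distributions are invented.

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete distributions.
    Zero exactly when P and Q agree on the support of P (assumes q_i > 0 wherever p_i > 0)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0                      # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([0.5, 0.3, 0.2])
print(kl_divergence(p, p))                           # 0.0 -> "equivalent" in the paper's sense
print(kl_divergence(p, np.array([0.4, 0.4, 0.2])))   # > 0 -> distinct distributions
```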
-
Artificial intelligence (AI) has colored human civilization. It is the ability of a digital computer or computer-controlled robot to perform general tasks associated with particular patterns of intelligence. AI is not human, but it possesses intelligence similar to that of humans, and it can even inform humans or perform tasks that humans cannot. Artificial intelligence is used in various fields, ranging from education and healthcare to the economy and agriculture. Artificial intelligence is the product of human creation, sentiment, and consciousness; it is the result of human intelligence itself. AI can answer questions and provide intelligent recommendations for humans. With its algorithmic capabilities, AI can analyze billions of signals and make precise recommendations. At this level, artificial intelligence represents human intelligence. However, the question is whether artificial intelligence has sensitivity, sentiment, empathy, and solidarity toward the humans who created it, or whether artificial intelligence instead becomes a director of human beings in their self-actualization. Using a phenomenological approach, this research explores the phenomenon of the presence of artificial intelligence, which offers convenience for human work but at the same time diminishes the value of humans, who possess creative intuition, sentiment, and consciousness, even though AI is born from the human ability to create, feel, and think. The results of this exploration are then given a theological and ethnographic perspective (teo-ethnography). Keywords: artificial intelligence, creation, sentiment, consciousness, teo-ethnography