
Full bibliography (704 resources)

  • Amid the seemingly unstoppable advance of artificial intelligence, science and technology have become a defining theme of the times. Will the rapid development of modern technologies such as biotechnology and artificial intelligence dehumanize us? Can a machine possess human consciousness? In his novel Klara and the Sun, Kazuo Ishiguro critiques the arrogance of technological rationality and of anthropocentrism from the perspective of a “non-human” robot, making the relationship between humans and machines a problem that humans must re-examine. Drawing on posthumanism, this paper explores the physical changes and behavior of robots and humans in the novel to reveal the “split” between man and machine and humans’ “self-deception,” prompting reflection on how humans and machines can coexist harmoniously at the juncture between the human and the posthuman, and offering a point of reference for a future society of humans and non-humans.

  • The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, then how people treat AI appears to carry over into how they treat other people due to activating schemas that are congruent to those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, is a driver for behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI’s inherent conscious or moral status.

  • The quest to create artificial consciousness has long been a central challenge in the field of artificial intelligence (AI). While significant progress has been made in developing AI systems that can perform complex tasks and exhibit intelligent behavior, the question of whether these systems can truly be considered conscious remains open. In this paper, I present a novel approach to quantifying consciousness in AI systems by integrating principles from quantum mechanics, information theory, and neuroscience. The model incorporates key components such as self-awareness, subjective experience, intentionality, metacognition, integrated information processing, and dynamic cognition, which are thought to be essential for the emergence of conscious experience. I demonstrate the application of the model using a simulated AI system and discuss the implications of the findings for the development of artificially conscious agents. Furthermore, I argue that the pursuit of artificial consciousness is not only a scientific and technological endeavor but also a philosophical and ethical one, with profound implications for the understanding of the nature of mind and the relationship between humans and machines.

  • Self-awareness results from consciousness of one’s existence in time and space. Thought and consciousness are distinguishing factors between humans and machines endowed with artificial intelligence. No algorithm has yet been offered for artificial self-awareness based on thinking, and previous studies have not examined the relationship between consciousness, thinking, and time. This study examined the relationship between self-awareness, thinking, memories, and speech over time. A deep logical connection exists between consciousness, thinking, and time; based on these findings, an algorithm can be designed for artificial consciousness and self-awareness.

  • Computational functionalism posits that consciousness is a computation. Here we show, perhaps surprisingly, that it cannot be a Turing computation. Rather, computational functionalism implies that consciousness is a novel type of computation that has recently been proposed by Geoffrey Hinton, called mortal computation.

  • Sublimating the epistemological scope of a mere science-fiction tale, The Bicentennial Man (1976) by Isaac Asimov (1920-92) centers around a philosophical labyrinth where the lines between humanity and machine blur, inviting the reader to question the very essence of what it means to be human. The intricate narrative of an AI robot’s journey toward humanness serves as a profound meditation on the evolving relationship between humans and robots. Andrew Martin, the positronic robot at the heart of the story, is not just a mechanical marvel; he is, instead, a crucible in which Asimov tests the boundaries of consciousness, human identity, and the emotional yearning for belonging. This paper delves into the novella’s exploration of these themes, unraveling the intricate process of Andrew’s robot-human evolution and its profound implications for a better understanding of the meaning of humanness and the future of artificial intelligence. In the realm of science fiction, The Bicentennial Man thus stands as a luminous testament to the enduring question of human identity. Through the poignant lens of Andrew in his desire to be human, the novella builds upon the posthumanist discourse of the man-machine dichotomy, providing the reader with a timely opportunity to re-evaluate consciousness, emotion, and the defining characteristics of humanity.

  • The ideas of this book originate from the mobile WAVE approach which allowed us, more than a half century ago, to implement citywide heterogeneous computer networks and solve distributed problems on them well before the internet. The invented paradigm evolved into Spatial Grasp Technology and resulted in a European patent and eight books. The volumes covered concrete applications in graph and network theory, defense and social systems, crisis management, simulation of global viruses, gestalt theory, collective robotics, space research, and related concepts. The obtained solutions often exhibited high system qualities like global integrity, distributed awareness, and even consciousness. This current book takes these important characteristics as primary research objectives, together with the theory of patterns covering them all. This book is oriented towards system scientists, application programmers, industry managers, defense and security commanders, and university students (especially those interested in advanced MSc and PhD projects on distributed system management), as well as philosophers, psychologists, and United Nations personnel.

  • The article surveys approaches from philosophy and programming to the technical problem of creating and implementing artificial consciousness (AC) in software. Various purposes for creating AC and the basic approaches to determining its nature are described. To address the problem, an architecture is proposed comprising ten levels, from the basic level of collecting and systematizing information about the external world up to the top level of acting on that world, coordinated with a person, and the level of decision-making. The division of functions and the procedure for interaction between a person and an AC are considered in detail. In conclusion, the properties that, from a programmer’s point of view, most importantly characterize artificial consciousness are given.

  • Butlin et al. (2023) propose a rubric for evaluating AI systems based on indicator properties derived from existing theories of consciousness, suggesting that while current AIs do not possess consciousness, these indicators are pivotal for future developments towards artificial consciousness. The current paper critiques the approach by Butlin et al., arguing that the complexity of consciousness, characterized by subjective experience, poses significant challenges for its operationalization and measurement, thus complicating its replication in AI. The commentary further explores the limitations of current methodologies in artificial consciousness research, pointing to the necessity of out-of-the-box thinking and the integration of individual differences research in cognitive psychology, particularly in the areas of attention, cognitive control, autobiographical memory, and Theory of Mind (ToM), to advance the understanding and development of artificial consciousness.

  • This chapter critically questions the claim that it is possible to emulate human consciousness and consciousness-dependent activity with Artificial Intelligence to create conscious artificial systems. The analysis is based on neurophysiological research and theory. In-depth scrutiny of the field, and of the prospects for converting neuroscience research into the type of algorithmic programs utilized in computer-based AI systems to create artificially conscious machines, leads to the conclusion that such a conversion is unlikely ever to be possible because of the complexity of unconscious and conscious brain processing and their interaction. It is through the latter that the brain opens the doors to consciousness, a property of the human mind that no other living species has developed, for reasons that are made clear in this chapter. As a consequence, many of the projected goals of AI will remain forever unrealizable. Although this work does not directly examine the question within a philosophy-of-mind framework by, for example, discussing why identifying consciousness with the activity of electronic circuits is first and foremost a category mistake in terms of scientific reasoning, the approach offered in the chapter is complementary to this standpoint, and illustrates various aspects of the problem under a monist from-brain-to-mind premise.

  • Consciousness is a natural phenomenon, familiar to every person. At the same time, it cannot be described in singular terms. The rise of Artificial Intelligence in recent years has made the topic of Artificial Consciousness highly debated. The paper discusses the main general theories of consciousness and their relationship with proposed Artificial Consciousness solutions. There are a number of well-established models accepted in the area of research: Higher Order Thoughts/Higher Order Perception, Global Neuronal Workspace, Integrated Information Theory, reflexive, representative, functional, connective, the Multiple Drafts Model, Neural Correlates of Consciousness, and quantum consciousness, to name just a few. Some theories overlap, which makes it possible to speak of more advanced, complex models. The disagreement among theories leads to different views on animal consciousness and human conscious states. As a result, there are also variations in opinions about Artificial Consciousness, based on the discrepancy between qualia and the nature of AI. The hard problem of consciousness, an epitome of qualia, is often seen as an insurmountable barrier or, at least, an “explanatory gap”. Nevertheless, AI constructs allow imitations of some models in silico, which are presented by several authors as full-fledged Artificial Consciousness or as strong AI. This in itself does not make the translation of consciousness into the AI space easier but allows decent progress in the domain. As argued in this paper, there will be no universal solution to the Artificial Consciousness problem, and the answer depends on the type of consciousness model. A more pragmatic view suggests instrumental interaction between humans and AI in the environment of the Fifth Industrial Revolution, limiting expectations of strong AI outcomes to cognition but not consciousness in wide terms.

  • The prospect of consciousness in artificial systems is closely tied to the viability of functionalism about consciousness. Even if consciousness arises from the abstract functional relationships between the parts of a system, it does not follow that any digital system that implements the right functional organization would be conscious. Functionalism requires constraints on what it takes to properly implement an organization. Existing proposals for constraints on implementation relate to the integrity of the parts and states of the realizers of roles in a functional organization. This paper presents and motivates three novel integrity constraints on proper implementation not satisfied by current neural network models. It is proposed that for a system to be conscious, there must be a straightforward relationship between the material entities that compose the system and the realizers of functional roles, that the realizers of the functional roles must play their roles due to internal causal powers, and that they must continue to exist over time.

  • The aim is to develop artificial consciousness. In a previous report, we concluded that it is difficult to define individual qualia mathematically in a univocal way. Focusing on Human Language as a tool for communication, we therefore attempted to define it using a probability space. When the Kullback-Leibler divergence, defined over the probability density functions, is zero, the two probability distributions can be considered equivalent, indicating the equivalence required by language. This has allowed us to define Human Language mathematically in this paper. At the same time, regarding the 'philosophical zombie' thought experiment used as a criticism of physicalism, we show that philosophical zombies cannot serve as a criticism of the proposed model, since the definition of Human Language encompasses the existence of philosophical zombies, but the probability of their appearance is zero. In addition, episodic memory is defined in the probability space by connecting individual Human Language words as a direct product. These findings are also used to interpret (1) brain-induced illusions and (2) blank-brain theory and brain-channel theory. Building on the first report and the conclusions of this paper, a model of consciousness is presented in the third report.

  • Artificial intelligence (AI) has left its mark on human civilization. It is the ability of a digital computer or computer-controlled robot to perform general tasks associated with specific patterns of intelligence. AI is not human, but it possesses intelligence similar to that of humans, and it can even perform tasks that humans cannot. Artificial intelligence is used in fields ranging from education, healthcare, and the economy to agriculture. It is the product of human creation, sentiment, and consciousness: the result of human intelligence itself. AI can answer questions and provide intelligent recommendations for humans; with its algorithmic capabilities, it can analyze billions of signals and make precise recommendations. At this level, artificial intelligence represents human intelligence. The question, however, is whether artificial intelligence has sensitivity, sentiment, empathy, and solidarity toward the humans who created it, or whether it instead becomes a director of human beings in their self-actualization. Using a phenomenological approach, this research explores the phenomenon of the presence of artificial intelligence, which offers convenience for human work but at the same time diminishes the value of humans, who possess creative intuition, sentiment, and consciousness, even though AI is born from the human ability to create, feel, and think. The results of this exploration are then given a theological and ethnographic perspective (teo-ethnography). Keywords: artificial intelligence, creation, sentiment, consciousness, teo-ethnography

  • The dream of making a conscious humanoid – whether as servant, guard, entertainer, or simply as testament to human creativity – has long captivated the human imagination. However, while past attempts were performed by magicians and mystics, today scientists and engineers are doing the work to turn myth into reality. This essay introduces the fundamental concepts surrounding human consciousness and machine consciousness and offers a theological contribution. Using the biblical association of the soul to blood, it will be shown that the Bible provides evidence of a scientific claim, while at the same time, science provides evidence of a biblical claim.

  • As artificially intelligent systems become more anthropomorphic and pervasive, and their potential impact on humanity more urgent, discussions about the possibility of machine consciousness have significantly intensified, and it is sometimes seen as 'the holy grail'. Many concerns have been voiced about the ramifications of creating an artificial conscious entity. This is compounded by a marked lack of consensus around what constitutes consciousness and by an absence of a universal set of criteria for determining consciousness. By going into depth on the foundations and characteristics of consciousness, we propose five criteria for determining whether a machine is conscious, which can also be applied more generally to any entity. This paper aims to serve as a primer and stepping stone for researchers of consciousness, be they in philosophy, computer science, medicine, or any other field, to further pursue this holy grail of philosophy, neuroscience and artificial intelligence.

  • Can octopuses feel pain or pleasure? Can we tell if a person unresponsive after severe injury might be suffering? When does a fetus begin having conscious experiences? These questions about the edge of sentience are subject to enormous uncertainty. This book builds a framework to help us reach ethically sound decisions on how to manage the risks.

  • This chapter discusses the relationship between compliance to syntactically defined legislation and consciousness: whether in order to obey laws a robot would need to be conscious. This leads to consideration of what Emergent Information Theory can tell us about the possibility of artificial consciousness as such. Various arguments based on similarities and differences between biological and technological physical and informational systems are presented, with the conclusion that direct replication of a human type of consciousness is improbable. However, our understandable tendency to consider our own type of consciousness as uniquely special and valuable is challenged and found to be unfounded. Other high-level emergent phenomena in the information dimensions of artificial systems may, while different, be equally deserving of a comparable status.

Last update from database: 3/30/26, 1:00 AM (UTC)