Full bibliography: 615 resources
-
Is it possible to create genuine consciousness in artificial intelligence? This paper examines the question through the philosophical understanding of imagination. Imagination is the creative faculty that not only separates man from beast but also allows man to recognize himself and to see beyond his immediate being. Without imagination, man is blind to future possibility and unable to conceive of future events. Imagination produces the mental cognition that makes man a rational and creative being. The two modern methods of A.I. programming are incapable of imbuing their algorithms with imagination. Without imagination, A.I. not only lacks genuine creativity but also risks never achieving self-autonomy, and therefore could never be a free and independent entity.
-
Current approaches to machine consciousness (MC) tend to offer a range of characteristic responses to critics of the enterprise. Many of these responses seem to marginalize phenomenal consciousness by presupposing a 'thin' conception of phenomenality. This conception is, we will argue, largely shared by anti-computationalist critics of MC. On the thin conception, physiological or neural or functional or organizational features are secondary accompaniments to consciousness rather than primary components of consciousness itself. We outline an alternative, 'thick' conception of phenomenality. This provides some signposts in the direction of a more adequate approach to MC.
-
Machine consciousness exists already in organic systems, and it is only a matter of time -- and some agreement -- before it will be realised in reverse-engineered organic systems and forward-engineered inorganic systems. The agreement must be over the preconditions that must first be met if the enterprise is to be successful: for instance, being a socially embedded, structurally coupled, dynamic, goal-directed entity that organises its perceptual input and enacts its world through the application of both a cognitive and a kinaesthetic imagination. It is these preconditions that I shall concentrate on presenting in this paper. It will become clear that they present engineers with a tall order, but not, I will argue, an impossible one. After all, we might agree with Freeman and Núñez's claim that the machine metaphor has restricted the expectations of the cognitive sciences (Freeman & Núñez, 1999); but it is a double-edged sword, since our limited expectations about machines also narrow the potential of our cognitive science.
-
This paper proposes a brain-inspired cognitive architecture that incorporates approximations to the concepts of consciousness, imagination, and emotion. To emulate the empirically established cognitive efficacy of conscious as opposed to non-conscious information processing in the mammalian brain, the architecture adopts a model of information flow from global workspace theory. Cognitive functions such as anticipation and planning are realised through internal simulation of interaction with the environment. Action selection, in both actual and internally simulated interaction with the environment, is mediated by affect. An implementation of the architecture is described which is based on weightless neurons and is used to control a simulated robot.
-
The aim of this chapter is to refine some questions regarding AI and to provide partial answers to them. We analyze the state of the art in designing intelligent systems that are able to mimic complex human activities, including acts based on artificial consciousness. The analysis contrasts human cognition and behavior with the corresponding processes in AI systems, and draws on elements of psychology, sociology, and communication science related to humans and lower-level beings. The second part of the chapter is devoted to human-human and man-machine communication as related to intelligence. We emphasize that relational aspects constitute the basis for perception, knowledge, semiotic, and communication processes, and several consequences are derived. Subsequently, we deal with the tools needed to endow machines with intelligence, discussing the roles of knowledge and data structures. The results could help build "sensitive and intelligent" machines.
-
The purpose of this article is to show why consciousness and thought are not manifested in digital computers. Analyzing the rationale for claiming that the formal manipulation of physical symbols in Turing machines would emulate human thought, the article attempts to show why this proved false. This is because the reinterpretation of ‘designation’ and ‘meaning’ to accommodate physical symbol manipulation eliminated their crucial functions in human discourse. Words have denotations and intensional meanings because the brain transforms the physical stimuli received from the microworld into a qualitative, macroscopic representation for consciousness. Lacking this capacity as programmed machines, computers have no representations for their symbols to designate and mean. Unlike human beings in which consciousness and thought, with their inherent content, have emerged because of their organic natures, serial processing computers or parallel distributed processing systems, as programmed electrical machines, lack these causal capacities.
-
“Oscar” is going to be the first artificial person — at any rate, he is going to be the first artificial person to be built in Tucson's Philosophy Department. Oscar's creator, John Pollock, maintains that once Oscar is complete he “will experience qualia, will be self-conscious, will have desires, fears, intentions, and a full range of mental states” (Pollock 1989, pp. ix–x). In this paper I focus on what seems to me to be the most problematical of these claims, viz., that Oscar will experience qualia. I argue that we have not been given sufficient reasons to believe this bold claim. I doubt that Oscar will enjoy qualitative conscious phenomena and I maintain that it will be like nothing to be Oscar.
-
This paper is a response to Henley who criticizes a previous paper of mine arguing against my claim that computers are devoid of consciousness. While the claim regarding computers and consciousness was not the main theme of my original paper, I do, indeed, subscribe to it. Here, I review the main characteristics of human consciousness presented in the earlier paper and argue that computers cannot exhibit them. Any ascription of these characteristics to computers is superficial and misleading in that it fails to capture essential, intrinsic features of human cognition. More generally, psychological theory couched in terms of semantic representations and the computational operations associated with them is bound to be inadequate. The phenomenology of consciousness is a specific case marking this inadequacy.
-
Large language models (LLMs) and other artificial intelligence systems are trained on extensive DIKWP resources (data, information, knowledge, wisdom, purpose), which introduce uncertainties when applied to individual users in a collective semantic space. Traditional methods often introduce new concepts rather than fostering a proper understanding grounded in the semantic space, and the limitations in conceptual cognition become even more evident when dealing with complex problems or insufficient context. To address this, we take pediatric consultation as a scenario, using case simulations to discuss the unidirectional communication impairment between doctors and infant patients and the bidirectional communication bias between doctors and infant parents, and we propose a human–machine interaction model based on DIKWP artificial consciousness. For the unidirectional communication impairment, we use the example of an infant's perspective in recognizing and distinguishing objects, simulating the brain's cognitive process from non-existence to existence, transitioning from cognitive space to semantic space, and generating the corresponding DIKWP semantics, abstract concepts, and labels. For the bidirectional communication bias, we use the interaction between infant parents and doctors as an example, mapping the interaction process to the DIKWP transformation space and addressing the DIKWP 3-No problem (incompleteness, inconsistency, and imprecision) for both parties; we employ a purpose-driven DIKWP transformation model to solve part of the 3-No problem. Finally, we comprehensively validate the proposed method (DIKWP-AC): we first analyze and evaluate its DIKWP transformation calculations and processing capabilities, and then compare it with seven mainstream large models. The results show that DIKWP-AC performs well. Constructing such a cognitive model reduces the information gap in human–machine interactions, promotes mutual understanding and communication, and provides a new pathway toward more efficient and accurate artificial consciousness interactions.
-
Rapid advancements in large language models (LLMs) have renewed interest in the question of whether consciousness can arise in an artificial system, like a digital computer. The general consensus is that LLMs are not conscious. This paper evaluates the main arguments against artificial consciousness in LLMs and argues that none of them show what they intend. However strong our intuitions against artificial consciousness are, they currently lack rational support.
-
The quest to create artificial consciousness stands as a formidable challenge at the intersection of artificial intelligence and cognitive science. This paper delves into the theoretical underpinnings, methodological approaches, and ethical considerations surrounding the concept of machine consciousness. By integrating insights from computational modeling, neuroscience, and philosophy, we propose a roadmap for comprehending and potentially realizing conscious behavior in artificial systems. Furthermore, we address the critical challenges of validating machine consciousness, ensuring its safe development, and navigating its integration into society.
-
This study seeks to bridge the gap between narrative memory in human cognition and artificial agents by proposing a unified model. Narrative memory, fundamental to human consciousness, organizes experiences into coherent stories, influencing memory structuring, retention, and retrieval. By integrating insights from human cognitive frameworks and artificial memory architectures, this work aims to emulate these narrative processes in artificial systems. The proposed model adopts a multi-layered approach, combining elements of episodic and semantic memory with narrative structuring techniques. It explores how artificial agents can construct and recall narratives to enhance their understanding, decision-making, and adaptive capabilities. By simulating narrative-based memory processing, we assess the model’s effectiveness in replicating human-like retention and retrieval patterns. Applications include improved human-AI interaction, where agents understand context and nuance, and advancements in machine learning, where narrative memory enhances data interpretation and predictive analytics. By unifying the cognitive and computational perspectives, this study offers a step toward more sophisticated, human-like artificial systems, paving the way for deeper explorations into the intersection of memory, narrative, and consciousness.
-
The obsession with technology has a long background in the parallel evolution of humans and machines, and it became irrevocable when AI began to be a part of our daily lives. This AI integration became a subject of controversy, however, when the fear of AI acquiring consciousness spread among mankind. Artificial consciousness is a long-debated topic in artificial intelligence and neuroscience, carrying ethical challenges and threats to society that range from daily chores to Mars missions. This paper deals with the impact of AI-based science fiction films on society. The study investigates the concept of artificial consciousness in light of posthuman terminology, technological singularity, and superintelligence by analyzing a set of science fiction films, in order to project the actual difference between science-fictional AI and operational AI. Further, the paper explores the theoretical possibilities of artificial consciousness through a range of neuroscientific theories related to AI development, theories that point toward prospective artificial consciousness in AI. The study discloses the posthuman fallacies built around the fear of AI acquiring artificial consciousness and their consequences.