Full bibliography (558 resources)
-
A discussion is given of the possibility of creating machines which have more powerful consciousnesses than our own. The approach employs, in particular, a brain-based model of human consciousness. From that model a general discussion is developed of the need for a unique central control system in any machine to enable it to be efficient in decision-making. The resulting features of the machine’s consciousness, as the highest-order controller, are seen to need to be similar to our own. We conclude that beyond consciousness very likely lie only similar consciousnesses.
-
The humanoid robot we developed is capable of changing its facial expressions according to a simple artificial consciousness and stream of consciousness. We think that consciousness consists basically of ‘language and its associative stream’ and ‘representation of feelings related to the language.’ The robot, named Kansei, a Japanese term referring to emotion, achieves linguistic association similar to a stream of consciousness and represents its feelings according to those associations through artificial consciousness. The robot is equipped with an artificial cranium made of aluminium which contains 19 servomotors. For the surface of the face, we use polyurethane, a relatively soft material that creates a shape which closely mimics a human face. Facial expressions are formed on the polyurethane by pulling it into shape via metal wires attached to the 19 internal servomotors. Besides the six basic facial expressions, such as happiness and anger, the robot can also make complex expressions that include both happiness and fear.
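As a minimal sketch of the mechanism this abstract describes, the fragment below maps named expressions to per-servo wire-pull targets and blends two of them into a complex expression. Only the 19-servo count comes from the abstract; every name, value, and the linear blending rule are illustrative assumptions, not Kansei's actual control interface.

```python
# Hypothetical sketch: mapping facial expressions to servo targets.
# The 19-servo count is from the abstract; all names, angle values,
# and the blending scheme are illustrative assumptions.

NUM_SERVOS = 19

# Each expression is a vector of normalized wire-pull amounts (0.0-1.0),
# one per servomotor deforming the polyurethane skin.
EXPRESSIONS = {
    "neutral":   [0.0] * NUM_SERVOS,
    "happiness": [0.8, 0.6, 0.0, 0.7] + [0.1] * 15,
    "fear":      [0.2, 0.1, 0.9, 0.3] + [0.4] * 15,
}

def blend(expr_a, expr_b, weight_b):
    """Linearly blend two expression vectors to form a complex
    expression (e.g. happiness mixed with fear)."""
    a, b = EXPRESSIONS[expr_a], EXPRESSIONS[expr_b]
    return [(1.0 - weight_b) * x + weight_b * y for x, y in zip(a, b)]

def set_face(targets):
    """Send each servo its target pull amount (stubbed as a print)."""
    for servo_id, amount in enumerate(targets):
        print(f"servo {servo_id:2d} -> pull {amount:.2f}")

set_face(blend("happiness", "fear", 0.4))
```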
-
In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, are, it is proposed, unlikely to possess sentience and hence will fail to be able to exercise the kind of empathic rationality that is a prerequisite for being a moral agent. The Organic view also argues that sentience and teleology require biologically based forms of self-organization and autonomous self-maintenance. The Organic view may not be correct, but at least it needs to be taken seriously in the future development of the field of Machine Ethics.
-
Is it possible to create genuine consciousness in artificial intelligence? This paper will examine this question through the philosophical understanding of imagination. Imagination is the creative faculty that not only separates man from beast but also allows man to recognize himself, and it helps him to see beyond his immediate being. Without imagination, man is blind to future possibility and unable to conceive of future events. Imagination produces the mental cognition that makes man a rational and creative being. The two modern methods of A.I. programming are incapable of imbuing their algorithms with imagination. Without imagination, A.I. not only lacks genuine creativity, but it also risks never understanding self-autonomy and therefore could never be a free and independent entity.
-
Current approaches to machine consciousness (MC) tend to offer a range of characteristic responses to critics of the enterprise. Many of these responses seem to marginalize phenomenal consciousness by presupposing a 'thin' conception of phenomenality. This conception is, we will argue, largely shared by anti-computationalist critics of MC. On the thin conception, physiological or neural or functional or organizational features are secondary accompaniments to consciousness rather than primary components of consciousness itself. We outline an alternative, 'thick' conception of phenomenality, which offers some signposts in the direction of a more adequate approach to MC.
-
Machine consciousness exists already in organic systems and it is only a matter of time -- and some agreement -- before it will be realised in reverse-engineered organic systems and forward-engineered inorganic systems. The agreement must be over the preconditions that must first be met if the enterprise is to be successful, and it is these preconditions, for instance, being a socially-embedded, structurally-coupled and dynamic, goal-directed entity that organises its perceptual input and enacts its world through the application of both a cognitive and kinaesthetic imagination, that I shall concentrate on presenting in this paper. It will become clear that these preconditions will present engineers with a tall order, but not, I will argue, an impossible one. After all, we might agree with Freeman and Núñez's claim that the machine metaphor has restricted the expectations of the cognitive sciences (Freeman & Núñez, 1999); but it is a double-edged sword, since our limited expectations about machines also narrow the potential of our cognitive science.
-
This paper proposes a brain-inspired cognitive architecture that incorporates approximations to the concepts of consciousness, imagination, and emotion. To emulate the empirically established cognitive efficacy of conscious as opposed to non-conscious information processing in the mammalian brain, the architecture adopts a model of information flow from global workspace theory. Cognitive functions such as anticipation and planning are realised through internal simulation of interaction with the environment. Action selection, in both actual and internally simulated interaction with the environment, is mediated by affect. An implementation of the architecture is described which is based on weightless neurons and is used to control a simulated robot.
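A toy sketch of the information flow this abstract describes, assuming nothing beyond it: specialist percepts compete for a global workspace broadcast, candidate actions are internally simulated rather than executed, and the affectively best rollout is selected. All function names and the valence heuristic are invented for illustration; the paper's weightless-neuron implementation is not reproduced here.

```python
import random

# Illustrative names and values; nothing here is from the paper itself.
AFFECT_PRIORS = {"approach": 0.6, "avoid": 0.2, "wait": 0.4}  # toy valences

def perceive(world):
    """Specialist processes each propose a salience-tagged percept."""
    return [(random.random(), f"percept-{i}") for i in range(4)]

def simulate(action, state):
    """Internal simulation: predict the next state without acting."""
    return state + [action]

def affect(state):
    """Affective appraisal of a (simulated) state: toy valence sum."""
    return sum(AFFECT_PRIORS.get(x, 0.0) for x in state) + 0.1 * random.random()

def step(world_state, actions=("approach", "avoid", "wait")):
    # 1. Competition: the most salient percept wins the global workspace.
    broadcast = max(perceive(world_state))[1]
    # 2. Anticipation/planning: internally simulate each candidate action.
    scored = [(affect(simulate(a, [broadcast])), a) for a in actions]
    # 3. Action selection mediated by affect: best-feeling rollout wins.
    return max(scored)[1]

print(step(world_state=[]))
```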
-
The aim of this chapter is to refine some questions regarding AI and to provide partial answers to them. We analyze the state of the art in designing intelligent systems that are able to mimic complex human activities, including acts based on artificial consciousness. The analysis contrasts human cognition and behavior with the corresponding processes in AI systems, and includes elements of psychology, sociology, and communication science related to humans and lower-level beings. The second part of this chapter is devoted to human-human and man-machine communication as related to intelligence. We emphasize that relational aspects constitute the basis for perception, knowledge, semiotic, and communication processes. Several consequences are derived. Subsequently, we deal with the tools needed to endow machines with intelligence, and discuss the roles of knowledge and data structures. The results could help in building "sensitive and intelligent" machines.
-
The purpose of this article is to show why consciousness and thought are not manifested in digital computers. Analyzing the rationale for claiming that the formal manipulation of physical symbols in Turing machines would emulate human thought, the article attempts to show why this proved false. This is because the reinterpretation of ‘designation’ and ‘meaning’ to accommodate physical symbol manipulation eliminated their crucial functions in human discourse. Words have denotations and intensional meanings because the brain transforms the physical stimuli received from the microworld into a qualitative, macroscopic representation for consciousness. Lacking this capacity as programmed machines, computers have no representations for their symbols to designate and mean. Unlike human beings in which consciousness and thought, with their inherent content, have emerged because of their organic natures, serial processing computers or parallel distributed processing systems, as programmed electrical machines, lack these causal capacities.
-
“Oscar” is going to be the first artificial person — at any rate, he is going to be the first artificial person to be built in Tucson's Philosophy Department. Oscar's creator, John Pollock, maintains that once Oscar is complete he “will experience qualia, will be self-conscious, will have desires, fears, intentions, and a full range of mental states” (Pollock 1989, pp. ix–x). In this paper I focus on what seems to me to be the most problematical of these claims, viz., that Oscar will experience qualia. I argue that we have not been given sufficient reasons to believe this bold claim. I doubt that Oscar will enjoy qualitative conscious phenomena and I maintain that it will be like nothing to be Oscar.
-
This paper is a response to Henley, who criticizes a previous paper of mine by arguing against my claim that computers are devoid of consciousness. While the claim regarding computers and consciousness was not the main theme of my original paper, I do, indeed, subscribe to it. Here, I review the main characteristics of human consciousness presented in the earlier paper and argue that computers cannot exhibit them. Any ascription of these characteristics to computers is superficial and misleading in that it fails to capture essential, intrinsic features of human cognition. More generally, psychological theory couched in terms of semantic representations and the computational operations associated with them is bound to be inadequate. The phenomenology of consciousness is a specific case marking this inadequacy.
-
Large language models (LLMs) and other artificial intelligence systems are trained using extensive DIKWP resources (data, information, knowledge, wisdom, purpose). These resources introduce uncertainties when applied to individual users in a collective semantic space. Traditional methods often lead to introducing new concepts rather than a proper understanding grounded in the semantic space. When dealing with complex problems or insufficient context, the limitations in conceptual cognition become even more evident. To address this, we take pediatric consultation as a scenario, using case simulations to specifically discuss unidirectional communication impairments between doctors and infant patients and the bidirectional communication biases between doctors and infant parents. We propose a human–machine interaction model based on DIKWP artificial consciousness. For the unidirectional communication impairment, we use the example of an infant’s perspective in recognizing and distinguishing objects, simulating the cognitive process of the brain from non-existence to existence, transitioning from cognitive space to semantic space, and generating corresponding semantics for DIKWP, abstracting concepts, and labels. For the bidirectional communication bias, we use the interaction between infant parents and doctors as an example, mapping the interaction process to the DIKWP transformation space and addressing the DIKWP 3-No problem (incompleteness, inconsistency, and imprecision) for both parties. We employ a purpose-driven DIKWP transformation model to solve part of the 3-No problem. Finally, we comprehensively validate the proposed method (DIKWP-AC). We first analyze, evaluate, and compare the DIKWP transformation calculations and processing capabilities, and then compare DIKWP-AC with seven mainstream large models. The results show that DIKWP-AC performs well. Constructing a novel cognitive model reduces the information gap in human–machine interactions, promotes mutual understanding and communication, and provides a new pathway for achieving more efficient and accurate artificial consciousness interactions.
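A heavily hedged sketch of what a purpose-driven DIKWP pass might look like: raw data is lifted to information, and the stated purpose fills gaps left by incomplete or imprecise input (part of the 3-No problem). Only the layer names come from the abstract; the data structure, the completion rule, and the pediatric example fields are illustrative assumptions, not the paper's DIKWP-AC model.

```python
# Hypothetical sketch of a purpose-driven DIKWP transformation pass.
# Layer names (Data, Information, Knowledge, Wisdom, Purpose) are from
# the abstract; everything else is an illustrative assumption.

from dataclasses import dataclass, field

@dataclass
class DIKWP:
    data: list                              # raw observations
    information: dict = field(default_factory=dict)
    knowledge: dict = field(default_factory=dict)
    purpose: str = "diagnose"               # driving intent of the exchange

def transform(state: DIKWP, background: dict) -> DIKWP:
    """Climb the DIKWP layers, letting the stated purpose fill gaps
    left by incomplete or imprecise input (the 3-No problem)."""
    # Data -> Information: tag each observation with a meaning.
    state.information = {obs: background.get(obs, "unknown") for obs in state.data}
    # Information -> Knowledge: resolve unknowns with a purpose-default
    # where the input was incomplete, instead of inventing new concepts.
    state.knowledge = {
        k: (v if v != "unknown" else f"assume-typical-for-{state.purpose}")
        for k, v in state.information.items()
    }
    return state

# Toy pediatric-consultation exchange: a parent reports symptoms vaguely.
background = {"fever": "elevated temperature", "crying": "distress signal"}
result = transform(DIKWP(data=["fever", "crying", "rash"]), background)
print(result.knowledge)
```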
-
Rapid advancements in large language models (LLMs) have renewed interest in the question of whether consciousness can arise in an artificial system, like a digital computer. The general consensus is that LLMs are not conscious. This paper evaluates the main arguments against artificial consciousness in LLMs and argues that none of them show what they intend. However strong our intuitions against artificial consciousness are, they currently lack rational support.