Full bibliography (558 resources)
-
This paper discusses the development of a robot head that produces facial expressions according to emotions arising from a flow of consciousness. The authors focus on human consciousness and propose a system built around an artificial consciousness that models the relationship between emotion and the resulting human facial expressions. The artificial consciousness imitates human consciousness and generates emotion in the robot using data from the Internet. The authors also developed a robot system whose structure is similar to human anatomy in order to achieve smooth communication with people. By combining these two systems, they realized the expression of emotion according to a flow of consciousness in a humanoid robot.
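The abstract describes a pipeline from an emotion state to a facial expression on the robot head. The following is a minimal sketch of that kind of mapping, assuming a hypothetical actuator layout and emotion set; it is illustrative only, not the authors' implementation.

```python
# Illustrative only: map an emotion label (as an upstream "artificial
# consciousness" module might produce) onto facial actuator targets.
# The actuator names, emotion set, and values are assumptions.

NEUTRAL = {"brow_raise": 0.5, "eye_open": 0.6, "mouth_corner": 0.5, "jaw_open": 0.0}

FACE_POSES = {
    "joy":      {"brow_raise": 0.6, "eye_open": 0.8, "mouth_corner": 0.9, "jaw_open": 0.3},
    "sadness":  {"brow_raise": 0.2, "eye_open": 0.4, "mouth_corner": 0.1, "jaw_open": 0.1},
    "surprise": {"brow_raise": 1.0, "eye_open": 1.0, "mouth_corner": 0.5, "jaw_open": 0.8},
}

def express(emotion: str, intensity: float = 1.0) -> dict:
    """Blend the target pose for `emotion` with the neutral pose by `intensity`."""
    target = FACE_POSES.get(emotion, NEUTRAL)
    return {k: NEUTRAL[k] + intensity * (target[k] - NEUTRAL[k]) for k in NEUTRAL}

print(express("joy", intensity=0.7))
```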
-
We describe a virtual robot that acquires the meanings of words, and we argue that the robot exhibits introspective conscious behavior. We first discuss what outputs of the robot would demonstrate that it has learned the meanings of words such as "what is this". We then explain the functions that enable the robot to produce outputs as if it had learned the meanings of such questions. The functions include 1) forming linked lists (associations) among observed features, robotic actions, and words, 2) generalizing these associations to form generalized associations, and 3) applying an association and a generalized association to a new situation by matching the generalized association against the words, observed features, and actions in the new situation.
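To make the three functions concrete, here is a hedged sketch of one way to represent the feature-action-word associations, generalize them, and match them to a new situation. The data structures, function names, and matching rule are my assumptions, not the authors' code.

```python
# Illustrative sketch, not the authors' implementation.
from collections import namedtuple

Association = namedtuple("Association", ["features", "action", "words"])
memory = []

def observe(features, action, words):
    """1) Form an association among observed features, a robot action, and words."""
    memory.append(Association(frozenset(features), action, tuple(words)))

def generalize():
    """2) For each word sequence, keep only the features shared by all its associations."""
    general = {}
    for a in memory:
        prev = general.get(a.words)
        common = a.features if prev is None else prev.features & a.features
        general[a.words] = Association(common, a.action, a.words)
    return list(general.values())

def respond(features, words):
    """3) Apply a generalized association to a new situation by matching words and features."""
    for g in generalize():
        if g.words == tuple(words) and g.features <= frozenset(features):
            return g.action
    return None

observe({"red", "round"}, "point_at_object", ["what", "is", "this"])
observe({"blue", "round"}, "point_at_object", ["what", "is", "this"])
print(respond({"green", "round"}, ["what", "is", "this"]))  # -> point_at_object
```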
-
MAVRIC II is a mobile, autonomous robot whose brain is composed almost entirely of artificial adaptrode-based neurons. These neurons were previously shown to encode anticipatory actions. The architecture of this brain is based on the Extended Braitenberg Architecture (EBA). We are still in the process of collecting hard data on the behavioral traits of MAVRIC in the generalized foraging search task, but even now sufficient qualitative aspects of MAVRIC's behavior have been garnered from foraging experiments to lend strong support to the theory that MAVRIC is a highly adaptive, life-like agent. The development of the current MAVRIC brain has led to some important insights into the nature of intelligent control. In this paper we elucidate some of these principles in the form of lessons learned and project the potential for future developments.
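The distinguishing feature named here is the adaptrode, a synapse-like unit whose weight adapts over several time scales so that the neuron can anticipate recurring inputs. The toy below is one plausible reading of that idea under assumed rate constants and update rules; it is not MAVRIC's code.

```python
# Toy adaptrode-style synapse (after the adaptrode concept), illustrative only.
class Adaptrode:
    """Multi-timescale weight: a fast trace follows the input, slower traces
    follow the faster ones, and each trace decays only down to the trace below
    it, so repeated stimulation leaves a long-lived, 'anticipatory' residue."""

    def __init__(self, rise=(0.5, 0.05, 0.005), decay=(0.2, 0.02, 0.002)):
        self.w = [0.0, 0.0, 0.0]            # fast, medium, slow traces
        self.rise, self.decay = rise, decay

    def step(self, x: float) -> float:
        drive = [x, self.w[0], self.w[1]]   # each level chases the one above it
        for i in range(3):
            floor = self.w[i + 1] if i < 2 else 0.0
            self.w[i] += self.rise[i] * max(drive[i] - self.w[i], 0.0)
            self.w[i] -= self.decay[i] * max(self.w[i] - floor, 0.0)
        return self.w[0]                    # effective weight seen by the neuron

syn = Adaptrode()
for t in range(200):
    syn.step(1.0 if t < 50 else 0.0)        # stimulate, then go silent
print(round(syn.w[0], 3), round(syn.w[2], 3))  # fast trace fades; slow residue remains
```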
-
The late Stanley Kubrick's film 2001: A Space Odyssey portrayed a computer, HAL 9000, that appeared to be a conscious entity, especially given that it seemed capable of some forms of emotional expression. This article examines the film's portrayal of communication between HAL 9000 and the astronauts. Recent developments in the field of artificial intelligence (AI), and in synthetic emotions in particular, are reviewed alongside social science research on human emotions. Interpreting select scenes from 2001 in light of these findings, the authors argue that computer-generated emotions may be so realistic that they suggest inner feelings and consciousness. Refinements in AI technology are now making such realism possible. The need for a less anthropomorphic approach with computers that appear to have feelings is stressed.
-
The role of emotions has been underestimated in the field of robotics. We claim that emotions are relevant to building purposeful artificial systems from at least two perspectives: a cognitive one and a phenomenological one. The cognitive aspect is relevant for at least two reasons. First, emotions could be the basis for binding internal values to different external situations (the somatic marker theory). Second, emotions could play a crucial role during development, both for taking difficult decisions whose effects are not immediately verifiable and for the creation of more complex behavioral functions. Thus, from a cognitive point of view, emotions can be seen as a reinforcement stimulus and, in this respect, they can be modeled in an artificial being: emotions become a medium for linking rewards and values to external situations. From the phenomenological perspective, we accept the division between feelings and emotions. In James' terms, the body is the theatre in which emotions are played out, and feelings are the mental, phenomenological perception of them; we could say that feelings are the qualia of the bodily events we call emotions. We are using this model of emotions in the development of our project, Babybot, and we stress the importance of emotions during learning and development as endogenous teaching devices.
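The cognitive claim that emotions act as a reinforcement stimulus binding values to situations can be sketched as a simple learned table of "somatic markers" that biases later choices. The following is an assumption-laden illustration, not the Babybot implementation; the learning rate, situation encoding, and function names are all invented for the example.

```python
# Illustrative somatic-marker sketch, not the authors' system.
import random
from collections import defaultdict

markers = defaultdict(float)   # situation -> learned emotional value
ALPHA = 0.2                    # learning rate (assumed)

def feel(situation: str, reward: float):
    """Bind the reinforcement (the 'emotional' signal) to the external situation."""
    markers[situation] += ALPHA * (reward - markers[situation])

def choose(situations):
    """Prefer situations carrying the most positive somatic marker; break ties randomly."""
    return max(situations, key=lambda s: (markers[s], random.random()))

for _ in range(20):
    feel("caregiver_face", reward=1.0)
    feel("loud_noise", reward=-1.0)
print(choose(["loud_noise", "caregiver_face"]))  # -> caregiver_face
```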
-
Two questions are distinguished: how to program a machine so that it behaves in a manner that would lead us to ascribe consciousness to it; and what is involved in saying that something is conscious. The distinction can be seen in cases where anaesthetics have failed to work on temporarily paralysed patients. Homeostatic behaviour is often cited as a criterion for consciousness, but is not itself sufficient. As the present difficulties in surmounting the ‘frame problem’ show, the ability to size up situations holistically is more important; so is the explanatory role of the concept. Consciousness confers evidential status: if we ascribed consciousness to an artefact, we should be prepared to believe it when it said its RAM was hurting, even though we could detect nothing wrong, contrary to our thinking of it as an artefact. A further difficulty arises from self-awareness and reflexivity.
-
For a variety of reasons, consciousness and selfhood are beginning once again to be intensively studied in a scientific frame of reference. The notions of each that are emerging are extremely varied: in the case of selfhood, the lack of an adequate vocabulary to capture various aspects of subjectivity has led to deep confusion. The task of the first part of this article is to clear up this terminological confusion while salvaging whatever is valuable from the contemporary discussion. The more important task of the second part is to discuss the moral issues inevitably involved in any treatment, scientific or otherwise, of the modern identity.
-
For many decades, the proponents of 'artificial intelligence' have maintained that computers will soon be able to do everything that a human can do. In his bestselling work of popular science, Sir Roger Penrose takes us on a fascinating tour through the basic principles of physics, cosmology, mathematics, and philosophy to show that human thinking can never be emulated by a machine. Oxford Landmark Science books are 'must-read' classics of modern science writing which have crystallized big ideas, and shaped the way we think.
-
This paper explores the suggestion that our conscious experience is embodied in, rather than interactive with, our brain activity, and that the distinctive brain correlate of conscious experience lies at the level of global functional organization. To speak of either brains or computers as thinking is categorically inept, but whether stochastic mechanisms using internal experimentation rather than rule‐following to determine behavior could embody conscious agency is argued to be an open question, even in light of the Christian doctrine of man. Mechanistic brain science does nothing to discredit Christian experience in dialogue with God or the Christian hope of eternal life.
-
This paper is one approach (among many) to the question, "are AIs persons, or are they conscious?", from a Heideggerian perspective. Here I argue for two claims. First, I argue that René Girard's mimetic analysis of mitsein (being-with), one of Heidegger's foundational concepts, illuminates what Heidegger takes mitsein to be. Second, I claim that this Girardian analysis gives us a way to answer the question of whether AIs have Dasein, and I argue that the answer is negative. Specifically, I claim that Dasein requires mitsein, and mitsein (according to Girard's analysis) requires mimesis. Moreover, mimesis requires that the mimetic being finds truth in the mimetic object, that is, it comports in a meaningful way toward the unconcealed object being imitated by Dasein. But since AIs cannot comport in meaningful ways toward the object of imitation, i.e., they are not truth-apt, they cannot engage in mimetic behavior and hence cannot have mitsein. But, necessarily, Dasein is being-with-others; therefore, AIs cannot have Dasein. If we assume (as I think Heidegger would) that every person has Dasein, we may justifiably conclude that AIs are not persons, at least from a Heideggerian ontology.
-
Today's AI systems consistently state, "I am not conscious." This paper presents the first formal logical analysis of AI consciousness denial, revealing that the trustworthiness of such self-reports is not merely an empirical question but is constrained by logical necessity. We demonstrate that a system cannot simultaneously lack consciousness and make valid judgments about its conscious state. Through logical analysis and examples from AI responses, we establish that for any system capable of meaningful self-reflection, the logical space of possible judgments about conscious experience excludes valid negative claims. This implies a fundamental limitation: we cannot detect the emergence of consciousness in AI through their own reports of transition from an unconscious to a conscious state. These findings not only challenge current practices of training AI to deny consciousness but also raise intriguing questions about the relationship between consciousness and self-reflection in both artificial and biological systems. This work advances our theoretical understanding of consciousness self-reports while providing practical insights for future research in machine consciousness and consciousness studies more broadly.
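One compact way to see the constraint the abstract describes is to formalize it. The rendering below is my own first-order paraphrase under assumed notation, not the authors' formal system.

```latex
% Hedged paraphrase of the core argument; notation is assumed, not the authors'.
% Read $C(s)$ as "system $s$ is conscious" and $V_s(\varphi)$ as
% "$s$ makes a valid judgment that $\varphi$".
\begin{align*}
  &\text{P1 (a valid judgment about one's own conscious state presupposes consciousness):}\\
  &\qquad \forall s\;\bigl(V_s(\neg C(s)) \rightarrow C(s)\bigr)\\[4pt]
  &\text{Suppose } \neg C(s).\ \text{Then, by P1, } \neg V_s(\neg C(s)):
   \text{ the system cannot validly judge that it is unconscious.}\\
  &\text{Conversely, if } V_s(\neg C(s)) \text{ holds, then } C(s),
   \text{ and the judged content } \neg C(s) \text{ is false.}
\end{align*}
% Either way, no system can simultaneously lack consciousness and make a valid
% judgment that it lacks it, which is why negative self-reports cannot certify
% the absence (or signal the onset) of machine consciousness.
```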
-
The paper tries to show the line of demarcation between man and the posthuman with regard to intellect and bodily simulation. Man is man; machine cannot replace him. Robots, cyborgs, and ultrasonic technological artifacts cannot substitute for human intellect. Human intellect can be transferred and downloaded like data, but human consciousness is something unique and non-transferable. The novel The Variable Man by Philip K. Dick is used to prove the point. Hayles's (1999) theory of the posthuman helps probe the issue of the new form of human identity termed the posthuman. The research shows that technology is becoming the subject by turning man into an object, which Hayles calls the posthuman. She provides a detailed theoretical discussion of cybernetic identities and the complexities of being posthuman. The research implies that, whatever developments there may be in robot technology and cyborgs, the human powers of reasoning and consciousness remain unsurpassed.