Full bibliography (615 resources)
-
The late Stanley Kubrick's film 2001: A Space Odyssey portrayed a computer, HAL 9000, that appeared to be a conscious entity, especially given that it seemed capable of some forms of emotional expression. This article examines the film's portrayal of communication between HAL 9000 and the astronauts. Recent developments in the field of artificial intelligence (AI) (and synthetic emotions in particular) as well as social science research on human emotions are reviewed. Interpreting select scenes from 2001 in light of these findings, the authors argue that computer-generated emotions may be so realistic that they suggest inner feelings and consciousness. Refinements in AI technology are now making such realism possible. The need for a less anthropomorphic approach with computers that appear to have feelings is stressed.
-
The role of emotions has been underestimated in the field of robotics. We claim that emotions are relevant to the building of purposeful artificial systems from at least two perspectives: a cognitive one and a phenomenological one. The cognitive aspect is relevant for at least two reasons. First, emotions could be the basis for binding internal values to different external situations (the somatic marker theory). Second, emotions could play a crucial role during development, both in taking difficult decisions whose effects are not immediately verifiable and in the creation of more complex behavioral functions. Thus emotions can be seen, from a cognitive point of view, as a reinforcement stimulus, and in this respect they can be modeled in an artificial being. In this sense, emotions can be seen as a medium for linking rewards and values to external situations. From the phenomenological perspective, we accept the division between feelings and emotions: emotions are, in James's words, the bodily theatre in which they are represented, and feelings are the mental, phenomenological perception of them. We could say that feelings are the qualia of the bodily events we call emotions. We are using this model of emotions in the development of our project, Babybot, and we stress the importance of emotions during learning and development as endogenous teaching devices.
-
Two questions are distinguished: how to program a machine so that it behaves in a manner that would lead us to ascribe consciousness to it; and what is involved in saying that something is conscious. The distinction can be seen in cases where anaesthetics have failed to work on patients temporarily paralysed. Homeostatic behaviour is often cited as a criterion for consciousness, but is not itself sufficient. As the present difficulties in surmounting the ‘frame problem’ show, ability to size up situations holistically is more important; so is the explanatory role of the concept. Consciousness confers evidential status: if we ascribed consciousness to an artefact, we should be prepared to believe it, when it said its RAM was hurting, even though we could detect nothing wrong, contrary to our thinking of it as an artefact. A further difficulty arises from self-awareness and reflexivity.
-
For a variety of reasons, consciousness and selfhood are beginning once again to be intensively studied in a scientific frame of reference. The notions of each which are emerging are extremely varied: in the case of selfhood, the lack of an adequate vocabulary to capture various aspects of subjectivity has led to deep confusion. The task of the first part of this article is to clear up this terminological confusion, while salvaging whatever is valuable from the contemporary discussion. The more important task of the second part is to discuss the moral issues inevitably involved in any treatment, scientific or otherwise, of the modern identity.
-
For many decades, the proponents of ‘artificial intelligence’ have maintained that computers will soon be able to do everything that a human can do. In his bestselling work of popular science, Sir Roger Penrose takes us on a fascinating tour through the basic principles of physics, cosmology, mathematics, and philosophy to show that human thinking can never be emulated by a machine. Oxford Landmark Science books are 'must-read' classics of modern science writing which have crystallized big ideas, and shaped the way we think.
-
This paper explores the suggestion that our conscious experience is embodied in, rather than interactive with, our brain activity, and that the distinctive brain correlate of conscious experience lies at the level of global functional organization. To speak of either brains or computers as thinking is categorically inept, but whether stochastic mechanisms using internal experimentation rather than rule‐following to determine behavior could embody conscious agency is argued to be an open question, even in light of the Christian doctrine of man. Mechanistic brain science does nothing to discredit Christian experience in dialogue with God or the Christian hope of eternal life.
-
This paper is one approach (among many) to the question, “are AIs persons, or are they conscious?”, from a Heideggerian perspective. Here I argue for two claims. First, I argue that René Girard’s mimetic analysis of mitsein (being-with), one of Heidegger’s foundational concepts, illuminates what Heidegger takes mitsein to be. Second, I claim that this Girardian analysis gives us a way to answer the question of whether AIs have Dasein, and I argue that the answer is negative. Specifically, I claim that Dasein requires mitsein, and mitsein (on Girard’s analysis) requires mimesis. Moreover, mimesis requires that the mimetic being find truth in the mimetic object, that is, that it comport in a meaningful way toward the unconcealed object being imitated by Dasein. But since AIs cannot comport in meaningful ways toward the object of imitation, i.e., they are not truth-apt, they cannot engage in mimetic behavior and hence cannot have mitsein. But, necessarily, Dasein is being-with-others. Therefore, AIs cannot have Dasein. If we assume (as I think Heidegger would) that every person has Dasein, we may justifiably conclude that AIs are not persons, at least on a Heideggerian ontology.
-
Today's AI systems consistently state, "I am not conscious." This paper presents the first formal logical analysis of AI consciousness denial, revealing that the trustworthiness of such self-reports is not merely an empirical question but is constrained by logical necessity. We demonstrate that a system cannot simultaneously lack consciousness and make valid judgments about its conscious state. Through logical analysis and examples from AI responses, we establish that for any system capable of meaningful self-reflection, the logical space of possible judgments about conscious experience excludes valid negative claims. This implies a fundamental limitation: we cannot detect the emergence of consciousness in AI through their own reports of transition from an unconscious to a conscious state. These findings not only challenge current practices of training AI to deny consciousness but also raise intriguing questions about the relationship between consciousness and self-reflection in both artificial and biological systems. This work advances our theoretical understanding of consciousness self-reports while providing practical insights for future research in machine consciousness and consciousness studies more broadly.
-
The paper attempts to draw the line of demarcation between man and the posthuman with regard to intellect and bodily simulation. Man is man; the machine cannot replace him. Robots, cyborgs and advanced technological artifacts cannot substitute for human intellect. Human intellect may be transferred and downloaded like data, but human consciousness is unique and non-transferable. The novel The Variable Man by Philip K. Dick is used to prove the point. Hayles's (1999) theory of the posthuman helps to probe the new form of human identity labeled the posthuman. The research shows that technology is becoming the subject by turning man into an object, which Hayles calls the posthuman. She provides a detailed theoretical discussion of cybernetic identities and the complexities of being posthuman. The research implies that whatever developments there may be in the fields of robot technology and cyborgs, the human powers of reasoning and consciousness remain unsurpassable.
-
Recent advances in artificial intelligence have reinvigorated the longstanding debate regarding whether or not any aspects of human cognition—notably, high-level creativity—are beyond the reach of computer programs. Is human creativity entirely a computational process? Here I consider one general argument for a dissociation between human and artificial creativity, which hinges on the role of consciousness—inner experience—in human cognition. It appears unlikely that inner experience is itself a computational process, implying that it cannot be instantiated in an abstract Turing machine, nor in any current computer architecture. Psychological research strongly implies that inner experience integrates emotions with perception and with thoughts. This integrative function of consciousness likely plays a key role in mechanisms that support human creativity. This conclusion dovetails with the anecdotal reports of creative individuals, who have linked inner experience with intrinsic motivation to create, and with the ability to access novel connections between ideas.
-
This chapter discusses two ways of looking at the topic of artificial intelligence (AI), selfhood and artificial consciousness. The first is to reflect on human-AI interaction focusing on the human users. This includes how humans respond to and interact with AI, as well as potential selfhood-related implications of interacting with AI. The second is to reflect on potential future machine selfhood and its crucial component, artificial consciousness. While artificial consciousness that resembles human consciousness does not yet exist, and the details are difficult if not impossible to anticipate, a reflection on potential future artificial consciousness is clearly needed given its extensive ethical and social implications.
-
Artificial intelligence (AI) has grown rapidly since its emergence, experimenting with various new add-on features; human-like efficiency is one of these and the most controversial. This chapter focuses on studying human consciousness and AI independently and in conjunction. It provides theories and arguments on whether AI can adopt human-like consciousness, cognitive abilities and ethics. The chapter surveys responses from more than 300 candidates from the Indian population and compares them against the literature review. Furthermore, it discusses whether AI could attain consciousness, develop its own set of cognitive abilities (cognitive AI) and ethics (AI ethics), and surpass human efficiency. The chapter is thus a study of the Indian population's understanding of consciousness, cognitive AI and AI ethics.