
Full bibliography (675 resources)

  • This paper describes how the science fiction television series Westworld (HBO, 2016-present) questions the very nature of consciousness through a reflexive narrative blending matters of free will and the sense of self with clues on how to write a serial character, thus drawing on the rich heritage of narratively complex science fiction television series of the last decades. After detailing the “hard problem” posed by any definition of consciousness, the paper follows the series’ logic, successively questioning memory, improvisation and self-interest, while focusing on a narratological approach centered on possible worlds theory.

  • From the perspective of virtue ethics, this paper points out that, as its autonomy and sensitivity improve, Artificial Intelligence increasingly resembles an ethical subject capable of taking responsibility. The paper argues that tackling the ethics of Artificial Intelligence by programming codes of abstract moral principles will produce many problems: the question of AI ethics is first of all a question of social integration rather than a technical one. Given the historical and social premises of ethics, the degree to which Artificial Intelligence can share the same ethical system as humans equals the degree of its integration into the narrative of human society, and this is also a process of establishing a common system of social cooperation between humans and Artificial Intelligence. Furthermore, self-consciousness and responsibility are social conceptions established through recognition, and Artificial Intelligence's identification with its individual social role is likewise established in the process of integration.

  • A model of an intentional self-observing system is proposed based on the structure and functions of astrocyte-synapse interactions in tripartite synapses. Astrocyte-synapse interactions are cyclically organized and operate via feedforward and feedback mechanisms, formally described by proemial counting. Synaptic, extrasynaptic and astrocyte receptors are interpreted as places with the same or different quality of information processing, described by the combinatorics of tritograms. It is hypothesized that receptors on the astrocytic membrane may embody intentional programs that select corresponding synaptic and extrasynaptic receptors for the formation of receptor-receptor complexes. Basically, an act of self-observation is generated if the actual environmental information is appropriate to the intended observation processed by receptor-receptor complexes. This mechanism is implemented in a robot brain, enabling the robot to experience environmental information as “its own”. It is suggested that the mechanism's cyclic organization enables the robot to generate matches and mismatches between intended observations and the observations in the environment. In exploring an unknown environment, the robot may stepwise construct an observation space, stored in memory, commanded and controlled by the intentional self-observing system. Finally, the role of self-observation in machine consciousness is briefly discussed.
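The match/mismatch loop described in this abstract lends itself to a tiny illustration. Below is a minimal, runnable sketch, under the assumption that intended and sensed observations can be reduced to scalar signals compared against a tolerance; all names and values are invented for illustration and are not part of the model itself.

```python
# Hedged sketch of a match/mismatch self-observation cycle: the agent holds
# intended observations, compares each with incoming environmental information,
# and stores matched observations in a stepwise-built observation space.
def self_observe(intended, sensed, tolerance=0.1):
    """Match when the actual information is appropriate to the intended one."""
    return abs(intended - sensed) <= tolerance

observation_space = []                   # memory built while exploring
intentions = [0.2, 0.5, 0.9]             # intended observations (illustrative)
environment = [0.25, 0.8, 0.88]          # actual environmental information

for intended, sensed in zip(intentions, environment):
    if self_observe(intended, sensed):
        observation_space.append(sensed) # experienced as "its own"
    # a mismatch would instead drive further exploration

print(observation_space)                 # -> [0.25, 0.88]
```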

  • The major aim of artificial general intelligence (AGI) is to allow a machine to perform general intelligence tasks much as a human counterpart would. Hypothetically, this general intelligence in a machine can be achieved by establishing cross-domain optimization and learning machine approaches. However, contemporary artificial intelligence (AI) capabilities are limited to narrow and specific domains utilizing machine learning. The concept of consciousness is a particularly interesting avenue toward such approaches because it simultaneously encodes and processes all types of information and seamlessly integrates them. Over the last several years, there has been a resurgence of interest in testing theories of consciousness using computer models. Studies of these models fall into four categories: external behavior associated with consciousness, cognitive characteristics associated with consciousness, computational architectures correlated with human consciousness, and phenomenally conscious machines. The critical challenge is to determine whether these artificial systems are capable of conscious states by measuring the extent to which they succeed in realizing consciousness in a machine. Several tests for machine consciousness have been proposed, yet their formulation is based on extrinsic measurement of consciousness, which is not inclusive because many conscious artificial systems behave implicitly. This research proposes a new framework to test machine consciousness based on intrinsic measurement, the so-called Pak Pandir test. The framework leverages three quantum double-slit settings and adopts integrated information theory as its definition of consciousness.
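Since the framework's definition of choice is integrated information theory, a toy quantity in that spirit can be computed directly. The sketch below uses multi-information (total correlation) of a two-unit binary system as a heavily simplified stand-in for integration; the real IIT phi involves partitions and cause-effect structure, and everything here, including the sample data, is an illustrative assumption.

```python
# Toy 'integration' measure: how much the joint state of two units carries
# beyond its parts. It is zero iff the parts are statistically independent.
from collections import Counter
from math import log2

def entropy(samples):
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

joint = [(0, 0), (1, 1), (0, 0), (1, 1)]   # two units that mirror each other
a = [s[0] for s in joint]
b = [s[1] for s in joint]

integration = entropy(a) + entropy(b) - entropy(joint)  # H(A) + H(B) - H(A,B)
print(integration)   # -> 1.0 bit: the whole is not reducible to its parts
```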

  • Mind and intelligence are closely related to consciousness. Indeed, artificial intelligence (AI) is the most promising avenue towards artificial consciousness (AC). In the literature, however, consciousness has been considered the phenomenon least amenable to being understood or replicated by AI. Computational theories of mind (CTMs) render the mind as a computational system and are treated as a substantial hypothesis within the purview of AI; yet consciousness, a phenomenon of mind, is only partially tackled by this theory, and the CTM does not appear to be substantially corroborated in this pursuit. Researchers working strenuously in this domain have made many valuable contributions, but there is still a scarcity of globally accepted computational models of consciousness that can be used to design conscious intelligent machines, and existing contributions treat consciousness as a vague, incomplete and human-centred entity. This paper analyses different theoretical and intricate issues pertaining to mind, intelligence and AC. Moreover, it discusses different computational models of consciousness, critically analyses the possibility of generating machine consciousness, and identifies the characteristics of a conscious machine. Further, it analyses several inquisitive questions, e.g., “Is it possible to devise, project and build a conscious machine?”, “Will artificially conscious machines be able to surpass the functioning of artificially intelligent machines?” and “Does consciousness reflect a peculiar way of information processing?”

  • The Hard Problem of consciousness has been dismissed as an illusion. By showing that computers are capable of experiencing, we show that they are at least rudimentarily conscious, with the potential to eventually reach superconsciousness. The main contribution of the paper is a test for confirming certain subjective experiences in a tested agent. We follow with an analysis of the benefits and problems of conscious machines and the implications of such a capability for the future of computing, machine rights and artificial intelligence safety.

  • Human consciousness is the target of research in multiple fields of knowledge, and is an important capability for handling complex and diverse situations. Artificial consciousness models have arisen, together with theories that try to model what we understand about consciousness, in a way that could allow an artificial conscious being to be implemented. The main motivations for studying artificial consciousness are related to the creation of agents more similar to human beings, in order to build more efficient machines and, at the same time, to make human-computer interaction more affective and non-intrusive. This paper presents an experiment using the Global Workspace Theory and the LIDA Model to build a conscious mobile robot in a virtual environment, using the LIDA framework as an implementation of the LIDA Model. The LIDA Model operates a consciousness system in an iterative cognitive cycle of perceiving the environment, shifting attention and focusing operation. The main objective is to evaluate whether consciousness and emotion, as implemented by the LIDA framework, can simplify the decision-making processes of a mobile robot that interacts with people, as part of the development of a cicerone robot.
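The iterative cognitive cycle named in this abstract (perceive the environment, shift attention, act) can be sketched in a few lines. The names, salience scores and action table below are illustrative assumptions; the actual LIDA framework is a far richer Java library.

```python
# Minimal perceive -> attend -> act cycle in the spirit of Global Workspace
# Theory: the most salient percept wins the competition, is 'broadcast',
# and biases action selection.
from dataclasses import dataclass

@dataclass
class Percept:
    label: str
    salience: float          # how strongly this percept demands attention

def perceive(readings):
    """Turn raw sensor readings into percepts."""
    return [Percept(label, abs(value)) for label, value in readings.items()]

def attend(percepts):
    """Attention shift: the most salient percept reaches the workspace."""
    return max(percepts, key=lambda p: p.salience)

def act(broadcast):
    """Action selection biased by the conscious broadcast."""
    return {"person_ahead": "greet", "obstacle": "avoid"}.get(broadcast.label, "explore")

# One cycle of a hypothetical cicerone robot:
print(act(attend(perceive({"person_ahead": 0.9, "obstacle": 0.4}))))   # -> greet
```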

  • Given that consciousness is an essential ingredient for achieving the Singularity, the point at which an Artificial General Intelligence device exceeds the intelligence of a human, this paper explores whether a computer can achieve consciousness. Given that consciousness is being aware of one’s perceptions and/or of one’s thoughts, it is claimed that computers cannot experience consciousness. Since a computer has no sensorium, it cannot have perceptions. As for being aware of its thoughts, it is argued that being aware of one’s thoughts is basically listening to one’s own internal speech. A computer has no emotions, and hence no desire to communicate; without the ability and/or desire to communicate, it has no internal voice to listen to, and hence cannot be aware of its thoughts. In fact, it has no thoughts, because it has no sense of self, and thinking is about preserving one’s self. Emotions have a positive effect on the reasoning powers of humans; the computer’s lack of emotions is therefore another reason why computers could never achieve the level of intelligence of a human, at least at the current level of development of computer technology.

  • The study analyzes the philosophy of artificial consciousness presented in the first season of the TV series 'Westworld' and, as a result of the analysis, shows the collision of two opposite philosophical views on consciousness and on the possibility of creating artificial consciousness, represented by two of the series' characters, Arnold Weber and Robert Ford. Arnold Weber proceeds from two philosophical assumptions: that consciousness really exists (1) and that human consciousness can be a prototype for modeling consciousness in a bearer of artificial intelligence (2). He must therefore either pick one of the existing conceptions of consciousness to implement the emergence of artificial consciousness within artificial intelligence or invent his own; Arnold Weber chooses Julian Jaynes's conception of consciousness as the basis for artificial consciousness, which means that artificial consciousness must have the following features: 1) it has to result from the breakdown of the bicameral mind (apparently modeled within artificial intelligence), the state of mind in which cognitive functions are divided into two parts, a 'speaking' part and a 'hearing' ('obeying') part, until the breakdown that turns the bicameral mind into the unified mind; 2) it has to be a mind-space based on language and characterized by introspection, concentration, suppression, consilience and an analog 'I' narratizing in the mind-space. Robert Ford believes that consciousness does not exist at all and that there are only stories (narratives) which human beings, and artificial beings modeled in the image and likeness of human beings, tell each other, and that the basis of all those stories (narratives) is always suffering.

  • The purpose of the attention schema theory is to explain how an information-processing device, the brain, arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is offered as a possible starting point for building artificial consciousness. Given current technology, it should be possible to build a machine that contains a rich internal model of what consciousness is, attributes that property of consciousness to itself and to the people it interacts with, and uses that attribution to make predictions about human behavior. Such a machine would “believe” it is conscious and act like it is conscious, in the same sense that the human machine believes and acts.

  • The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
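The two computations distinguished here, global broadcasting (C1) and self-monitoring (C2), map naturally onto a small sketch. The module names, evidence scores and the logistic certainty curve below are assumptions for illustration, not the authors' architecture.

```python
# C1: select one piece of information and make it globally available to many
# consumer modules. C2: self-monitor the computation, attaching a subjective
# certainty estimate to it.
import math

def c1_broadcast(candidates):
    """C1: the strongest signal wins and is broadcast for flexible use."""
    return max(candidates, key=lambda c: c["evidence"])

def c2_confidence(evidence, noise=0.1):
    """C2: squash the evidence into a certainty score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-(evidence - 0.5) / noise))

candidates = [{"content": "face detected", "evidence": 0.8},
              {"content": "word heard", "evidence": 0.3}]
winner = c1_broadcast(candidates)
for module in ("report", "memory", "planning"):     # flexible availability
    print(module, "receives:", winner["content"])
print("certainty:", round(c2_confidence(winner["evidence"]), 3))  # ~0.953
```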

  • Artificial intelligence is a tool designed by people for the gratification of their own creative ego, so we must not confuse conscience with intelligence, nor even intelligence in its human form with conscience. These are all different concepts with different uses. Philosophically, there are differences between autonomous people and automatic artificial intelligence. This is the difference between intelligence and artificial intelligence: autonomous versus automatic. Conscience, however, stands above these differences: it is neither conditioned by the self-preservation of autonomy, because conscience is something one uses to help one’s neighbor, nor automatic, because one’s conscience is tested by situations that are neither alike nor subject to routine. Only in science-fiction literature, then, is artificial intelligence akin to an autonomous, conscience-endowed being. In real life, religion, with its notions of redemption, sin, expiation, confession and communion, will have no meaning for a machine that cannot make a mistake on its own.

  • "The Creation of a Conscious Machine" surveys the millennial quest to create an intelligent artefact, concludes consciousness is the key to achieve this goal and proposes we adopt an understanding of synthetic consciousness that is suitable for machine implementation. The text describes how achieving Artificial Intelligence will yield extraordinary intellectual benefits and deep insights into the human condition. It examines past attempts, from ancient times until today, to define intelligence and implement it, drawing useful lessons from each. In particular, the Turing Test, the most influential measure of artificial intelligence, is the subject of an in depth analysis. Ultimately, the author also rejects the Turing Test, and the concept of a test itself, as an unsuitable measure of machine intelligence. Basing himself on this analysis, the author concludes that humans will only consider a machine to be truly intelligent if they also perceive it to be conscious. To realize the quest of Artificial Intelligence, it is necessary to implement consciousness. In turn, to achieve Synthetic Consciousness, we must discard the view that consciousness is a subjective experience in favour of a different understanding, deeply rooted in the Western tradition, that it is an externally observable capability. Using this "new" understanding, the author then proposes a definition of Synthetic Consciousness, expressed as specification objectives, that is suitable for software implementation. This makes it possible to build the first generation of synthetic conscious beings.

  • Using insights from cybernetics and an information-based understanding of biological systems, a precise, scientifically inspired definition of free-will is offered and the essential requirements for an agent to possess it in principle are set out. These are: (a) there must be a self to self-determine; (b) there must be a non-zero probability of more than one option being enacted; (c) there must be an internal means of choosing among options (which is not merely random, since randomness is not a choice). For (a) to be fulfilled, the agent of self-determination must be organisationally closed (a “Kantian whole”). For (c) to be fulfilled: (d) options must be generated from an internal model of the self which can calculate future states contingent on possible responses; (e) choosing among these options requires their evaluation using an internally generated goal defined on an objective function representing the overall “master function” of the agent; and (f) for “deep free-will”, at least two nested levels of choice and goal (d–e) must be enacted by the agent. The agent must also be able to enact its choice in physical reality. The only systems known to meet all these criteria are living organisms: not just humans, but a wide range of organisms. The main impediment to free-will in present-day artificial robots is their lack of being a Kantian whole. Consciousness does not seem to be a requirement, and the minimum complexity for a free-will system may be quite low and include relatively simple life-forms that are at least able to learn.
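Requirements (b) through (e) of this definition can be rendered as a compact sketch: several enactable options, a non-random internal choice, a self-model that predicts future states, and an internally held objective function. The additive dynamics and the numeric goal are toy assumptions, not part of the authors' account.

```python
# Free-will sketch: choose among options (b) by a non-random rule (c), using
# an internal model to predict outcomes (d) scored by an internal goal (e).
def predict(state, action):
    """Self-model: predicted future state contingent on a possible response."""
    return state + action

def objective(state, goal=10.0):
    """Internally generated goal defined on a 'master function'."""
    return -abs(goal - state)

def choose(state, options):
    """Evaluation-based choice: neither random nor externally dictated."""
    return max(options, key=lambda a: objective(predict(state, a)))

print(choose(7.0, [-1.0, 0.5, 2.0]))   # -> 2.0, the option that best serves the goal
```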

  • This chapter aims to evaluate Integrated Information Theory's claims concerning Artificial Consciousness. Integrated Information Theory (IIT) works from premises claiming that certain properties, such as unity, are essential to consciousness, to conclusions regarding the constraints upon physical systems that could realize consciousness. Among these conclusions is the claim that feed-forward systems, and systems that are not largely reentrant, necessarily fail to generate consciousness (though they may simulate it). This chapter discusses the premises of IIT, which are themselves highly controversial, and also addresses IIT's related rejection of functionalism. The analysis argues that IIT has failed to establish good grounds for these positions, and that convincing alternatives remain available. This, in turn, implies that the constraints upon Artificial Consciousness are more generous than IIT would have them be.

  • Artificial intelligence and research on consciousness have reciprocally influenced each other: theories about consciousness have inspired work on artificial intelligence, and the results from artificial intelligence have changed our interpretation of the mind. Artificial intelligence can be used to test theories of consciousness and research has been carried out on the development of machines with the behavior, cognitive characteristics and architecture associated with consciousness. Some people have argued against the possibility of machine consciousness, but none of these objections is conclusive and many theories openly embrace the possibility that phenomenal consciousness could be realized in artificial systems.

  • This volume examines the work done in many laboratories on the proposition that the mechanisms underlying consciousness in living organisms can be studied using computational theories. It follows an agreement at a 2001 multi-disciplinary meeting of philosophers, neuroscientists and computer scientists that such a research programme was feasible and worthwhile. The effort is reviewed both as a historical statement and for the positions held at the time this volume went to print. The approaches cover diverse techniques, ranging from machine modeling of neural structures in the brain to abstract models based on the type of logic found in computer programming. Purely theoretical approaches based on hypotheses about what kind of information constitutes a mental state are also included.

  • In the past, computational models for machine consciousness have been proposed with varying degrees of implementation challenges. Affective computing focuses on the development of systems that can simulate, recognize, and process human affects, which refer to the experience of feeling or emotion. Affective attributes are important factors for the future of machine consciousness, given the rise of technologies that can assist humans and build trustworthy relationships between humans and artificial systems. In this paper, an affective computational model for machine consciousness is proposed, with a system for managing its major features. Real-world scenarios are presented to further illustrate the functionality of the model and to provide a road-map for computational implementation.

  • The structure of consciousness has long been a cornerstone problem in the cognitive sciences. Recently it took on applied significance in the design of computer agents and mobile robots. This problem can thus be examined from perspectives of philosophy, neuropsychology, and computer modeling.

  • Some humans may seek to purchase pain-feeling robots for the purpose of torturing them – a sad fact about some humans. This chapter explores the Chinese room argument against robot pain. The role of the Chinese room thought experiment is to establish the truth of the claim that it is indeed possible for something or someone to run any arbitrarily selected program without thereby understanding Chinese. One way to construct a robotic copy of a human is by gradually transforming a human into a robot by a sequence of prosthetic replacements of the human’s naturally occurring parts, especially parts of the nervous system, with artificial analogs. Like all physiological systems in the human body, the nervous system is composed of causally interacting cells. The chapter emphasizes the ways in which the thought experiments in the respective arguments attempt to marshal hypothetical first-person accessible evidence concerning how one’s own mental life appears to oneself.

Last update from database: 1/2/26, 2:00 AM (UTC)