Full bibliography (615 resources)
-
The concept of qualia poses a central problem in the framework of consciousness studies. Although qualia remain controversial even in the study of human consciousness, we argue that they can be complementarily studied using artificial cognitive architectures. In this work we address the problem of defining qualia in the domain of artificial systems, providing a model of “artificial qualia”. Furthermore, we partially apply the proposed model to the generation of visual qualia using the cognitive architecture CERA-CRANIUM, which is modeled after the global workspace theory of consciousness. Our aim is to define, characterize, and identify artificial qualia as direct products of a simulated conscious perception process. Simple forms of the apparent motion effect are used as the basis for a preliminary experimental setting focused on the simulation and analysis of synthetic visual experience. In contrast with the study of biological brains, the dynamics and transient inner states of the artificial cognitive architecture can be inspected effectively, enabling a detailed analysis of the covert and overt percepts generated by the system when it is confronted with specific visual stimuli. The states observed in the artificial cognitive architecture during the simulation of apparent motion effects are used to discuss the existence of possible analogous mechanisms in human cognitive processes.
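As a loose illustration of the dynamics this abstract describes (the CERA-CRANIUM internals are not reproduced here, and all names below are hypothetical), the following Python sketch shows how a global-workspace-style competition can promote an apparent-motion percept to overt status while weaker percepts remain covert:

# Loose illustration of a global-workspace-style competition producing an
# "apparent motion" percept from alternating static stimuli.
# All names are hypothetical; this is not the CERA-CRANIUM implementation.

frames = [(0, (10, 20)), (1, (14, 20)), (2, (10, 20)), (3, (14, 20))]  # (t, dot position)

def static_processor(frames):
    # Proposes a "static dot" percept for each frame, with modest activation.
    return [{"percept": f"dot at {pos}", "activation": 0.4} for _, pos in frames]

def motion_processor(frames):
    # Proposes a "moving dot" percept whenever successive frames differ,
    # with higher activation (motion is the stronger hypothesis).
    proposals = []
    for (t0, p0), (t1, p1) in zip(frames, frames[1:]):
        if p0 != p1:
            proposals.append({"percept": f"dot moving {p0}->{p1}", "activation": 0.7})
    return proposals

def global_workspace(proposals):
    # The single most active percept gains workspace access (becomes overt);
    # the rest remain covert.
    winner = max(proposals, key=lambda p: p["activation"])
    covert = [p for p in proposals if p is not winner]
    return winner, covert

overt, covert = global_workspace(static_processor(frames) + motion_processor(frames))
print("overt percept: ", overt["percept"])
for p in covert:
    print("covert percept:", p["percept"])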
-
Progress in the machine consciousness research field has to be assessed in terms of the features demonstrated by the new models and implementations currently being designed. In this paper, we focus on the functional aspects of consciousness and propose the application of a revision of ConsScale — a biologically inspired scale for measuring cognitive development in artificial agents — in order to assess the cognitive capabilities of machine consciousness implementations. We argue that progress in the implementation of consciousness in artificial agents can be assessed by looking at how key cognitive abilities associated with consciousness are integrated within artificial systems. Specifically, we characterize ConsScale as a partially ordered set and propose a particular dependency hierarchy for cognitive skills. A graphical representation of an artificial agent's cognitive profile, associated with that hierarchy, is presented as a helpful analytic tool. The proposed evaluation schema is discussed and applied to a number of significant machine consciousness models and implementations. Finally, the possibility of generating qualia and phenomenological states in machines is discussed in the context of the proposed analysis.
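To make the partially-ordered-set reading concrete, here is a minimal Python sketch of a dependency hierarchy of cognitive skills, in the spirit of (but not reproducing) the ConsScale definitions; the skill names and dependencies are illustrative placeholders:

# Minimal sketch of a dependency hierarchy of cognitive skills as a partial
# order. Skill names and dependencies below are placeholders, not the
# published ConsScale definitions.

deps = {
    "proprioception":     set(),
    "attention":          {"proprioception"},
    "emotional_learning": {"attention"},
    "self_recognition":   {"attention"},
    "theory_of_mind":     {"self_recognition", "emotional_learning"},
}

def closure(skill):
    # All skills transitively required by `skill` (the down-set in the poset).
    required = set()
    stack = [skill]
    while stack:
        for d in deps[stack.pop()]:
            if d not in required:
                required.add(d)
                stack.append(d)
    return required

def profile_is_consistent(agent_skills):
    # A cognitive profile respects the hierarchy iff it is downward closed:
    # every prerequisite of a claimed skill is also present.
    return all(closure(s) <= set(agent_skills) for s in agent_skills)

print(profile_is_consistent({"proprioception", "attention", "self_recognition"}))  # True
print(profile_is_consistent({"theory_of_mind"}))                                   # False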
-
Sloman criticizes all existing attempts to define machine consciousness as overly one-sided. He argues that such a definition is not only unattainable but also unnecessary. The critique is well taken in part; yet, whatever his intended aims, by not acknowledging the non-reductive aspects of consciousness, Sloman in fact sides with the reductivist view.
-
After discussing a possible contradiction in Sloman's very challenging intervention, I stress the need not to identify "consciousness" with phenomenal consciousness and with the "qualia" problem. I claim that it is necessary to distinguish different forms and functions of "consciousness" and to model them explicitly, also by exploiting the specific advantage of AI: making experiments impossible in nature by separating what cannot be separated in human behavior/mind. As for phenomenal consciousness, one should first be able to model what it means to have a "body" and to "feel" it.
-
In the course of seeking an answer to the question “How do you know you are not a zombie?” Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or “wise-man puzzle,” or “muddy-children puzzle”) — one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge — which is to say, I try to show that this challenge can in fact be met by AI in the foreseeable future.
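For readers unfamiliar with the puzzle, the following Python sketch simulates the classical muddy-children form of the knowledge game by iterated elimination of possible worlds; Floridi's self-consciousness variant is considerably stricter, so this is only the baseline version the abstract alludes to:

from itertools import product

def knowers(worlds, actual, n):
    # Child i knows its own state iff its muddiness is the same in every
    # remaining world compatible with what i sees (everyone else's faces).
    result = []
    for i in range(n):
        compatible = {w[i] for w in worlds
                      if all(w[j] == actual[j] for j in range(n) if j != i)}
        if len(compatible) == 1:
            result.append(i)
    return result

def muddy_children(actual):
    n = len(actual)
    assert any(actual), "the father's announcement must be true"
    # Father's public announcement: at least one child is muddy.
    worlds = [w for w in product([False, True], repeat=n) if any(w)]
    for round_no in range(1, n + 1):
        k = knowers(worlds, actual, n)
        if k:
            return round_no, k
        # Public announcement "nobody knows yet" removes every world in
        # which someone *would* have known.
        worlds = [w for w in worlds if not knowers(worlds, w, n)]

print(muddy_children((True, False, True)))  # -> (2, [0, 2])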
-
The accurate measurement of the level of consciousness of a creature remains a major scientific challenge; nevertheless, a number of new accounts that attempt to address this problem have been proposed recently. In this paper we analyze the principles of these new measures of consciousness, along with other classical approaches, focusing on their applicability to Machine Consciousness (MC). Furthermore, we propose a set of requirements that we think a suitable measure for MC should satisfy, discussing the associated theoretical and practical issues. Using the proposed requirements as a framework for the design of an integrative measure of consciousness, we explore the possibility of designing such a measure in the context of the current state of the art in consciousness studies.
-
The academic journey to a widely acknowledged Machine Consciousness is anticipated to be an emotional one, both in terms of the active debate provoked by the subject and a hypothesized need to encapsulate an analogue of emotions in an artificial system in order to progress towards machine consciousness. This paper considers the inspiration that concepts related to emotion may contribute to cognitive systems when approaching conscious-like behavior. Specifically, emotions can set goals, including balancing exploration against exploitation, facilitate action in unknown domains, and modify existing behaviors; these roles are explored in cognitive robotics experiments.
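As a toy illustration of one of these roles (this is not the paper's model; everything below is an assumption for illustration), a scalar emotion-like "frustration" signal can modulate the explore/exploit balance of a simple bandit learner:

import random

# Toy sketch: an emotion-like "frustration" level, driven by recent reward
# shortfall, raises the exploration rate of an epsilon-greedy learner.
# Arm probabilities and update constants are arbitrary illustrative values.

arms = [0.2, 0.5, 0.8]          # hidden success probabilities of three actions
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]        # running reward estimates per action
frustration = 0.5               # in [0, 1]; high frustration -> more exploring

for step in range(2000):
    if random.random() < 0.1 + 0.4 * frustration:   # emotion-modulated epsilon
        action = random.randrange(len(arms))          # explore
    else:
        action = values.index(max(values))            # exploit
    reward = 1.0 if random.random() < arms[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]
    # Success lowers frustration; failure raises it, clamped to [0, 1].
    frustration = min(1.0, max(0.0, frustration + 0.05 * (0.5 - reward)))

print("estimated values:", [round(v, 2) for v in values])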
-
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims by its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s 1988 monograph Representation & Reality (Bradford Books, Cambridge), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite state automaton”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q(x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience, and hence that disembodied minds lurk everywhere.
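The construction behind the triviality claim can be made vivid with a short sketch (my illustration, not the paper's formalism): given any finite-state machine Q and input x, a post hoc labeling maps any sequence of distinct physical states onto Q's run on x:

# Sketch of the triviality construction: label the successive states of any
# physical system with the states of Q's run on x, and the system
# "implements" Q on x under that mapping.

def run(machine, start, x):
    # machine: dict mapping (state, symbol) -> next state
    states = [start]
    for symbol in x:
        states.append(machine[(states[-1], symbol)])
    return states

# An arbitrary machine Q and input x (illustrative values).
Q = {("A", 0): "B", ("A", 1): "A", ("B", 0): "A", ("B", 1): "B"}
x = [0, 1, 1, 0]
trajectory = run(Q, "A", x)      # Q's state sequence on input x

# "Physical" states of an open system: any sequence of distinct snapshots
# (here, just timestamps) is enough.
physical = [f"t{i}" for i in range(len(trajectory))]

# The implementation mapping: the i-th physical state realizes the i-th
# computational state. Under this post hoc labeling, grass or a rock
# "implements" Q on x -- which is exactly why the mapping looks vacuous.
mapping = dict(zip(physical, trajectory))
print(mapping)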
-
The main sources of inspiration for the design of more engaging synthetic characters are existing psychological models of human cognition. Usually, these models, and the associated Artificial Intelligence (AI) techniques, are based on partial aspects of the real complex systems involved in the generation of human-like behavior. Emotions, planning, learning, user modeling, set shifting, and attention mechanisms are some remarkable examples of features typically considered in isolation within classical AI control models. Artificial cognitive architectures aim at integrating many of these aspects into effective control systems. However, the design of this sort of architecture is not straightforward. In this paper, we argue that current research efforts in the young field of Machine Consciousness (MC) could help tackle this complexity and provide a useful framework for the design of more appealing synthetic characters. This hypothesis is illustrated with the application of a novel consciousness-based cognitive architecture to the development of a First Person Shooter video game character.
-
This paper critically tracks the development of the machine consciousness paradigm from the incredulity of the 1990s, through the structuring at the turn of this century, to the consolidation of the present time, which forms the basis for guessing what might happen in the future. The underlying question is how this development may have changed our understanding of consciousness and whether an artificial version of the concept contributes to the improvement of computational machinery and robots. The paper includes some suggestions for research that might be profitable and others that may not be.
-
From the point of view of Cognitive Informatics, consciousness can be considered a grand integration of a number of cognitive processes. Intuitive definitions of consciousness generally involve perception, emotions, attention, self-recognition, theory of mind, volition, etc. Due to this compositional definition of the term, it is usually difficult to define both what exactly a conscious being is and how consciousness could be implemented in artificial machines. When we look at the most evolved biological examples of conscious beings, such as great apes and humans, the vast complexity of observed cognitive interactions, in conjunction with the lack of comprehensive understanding of low-level neural mechanisms, makes the reverse engineering task virtually intractable. With the aim of effectively addressing the problem of modeling consciousness at a cognitive level, in this work we propose a concrete developmental path in which key stages in the progressive process of building conscious machines are identified and characterized. Furthermore, a method for calculating a quantitative measure of artificial consciousness is presented. The application of the proposed framework is illustrated with a comparative study of different software agents designed to compete in a first-person shooter video game.
-
That machines can be conscious is the claim adopted here, upon examination of its different versions. We distinguish three kinds of consciousness: functional, phenomenal, and hard consciousness (f-, p-, and h-consciousness). Robots are functionally conscious already. There is also a clear project in AI on how to make computers phenomenally conscious, though criteria differ. Discussion of p-consciousness is clouded by the lack of clarity about its two versions: (1) first-person functional elements (here, p-consciousness), and (2) non-functional elements (h-consciousness). I argue that: (1) there are sufficient reasons to adopt h-consciousness and move forward with discussion of its practical applications; this does not have anti-naturalistic implications. (2) A naturalistic account of h-consciousness should be expected, in principle; in neuroscience we are some way towards formulating such an account. (3) Detailed analysis of the notion of consciousness is needed to clearly distinguish p- and h-consciousness; this refers to the notion of a subject that is not an object, and the complementarity of the subjective and objective perspectives. (4) If we can understand the exact mechanism that produces h-consciousness, we should be able to engineer it. (5) H-consciousness is probably not a computational process (it is more like a liver function). Machines can, in principle, be functionally, phenomenally, and h-conscious; all those processes are naturalistic. This is the engineering thesis on machine consciousness, formulated within non-reductive naturalism.
-
This paper reviews the field of artificial intelligence, focusing on embodied artificial intelligence. It also considers models of artificial consciousness, agent-based artificial intelligence, and the philosophical commentary on artificial intelligence. It concludes that there is almost no consensus or formalism in the field and that its achievements are meager.
-
This paper briefly describes the most relevant current approaches to the implementation of scientific models of consciousness. The main aspects of scientific theories of consciousness are characterized in view of their possible mapping into artificial implementations. These implementations are analyzed both theoretically and functionally. In addition, a novel pragmatic functional approach to machine consciousness is proposed and discussed, and a set of axioms for the presence of consciousness in agents is applied to evaluate and compare the various models.
-
This work aims to describe the application of a novel machine consciousness model to a particular problem of unknown environment exploration. This relatively simple problem is analyzed from the point of view of the possible benefits that cognitive capabilities like attention, environment awareness and emotional learning can offer. The model we have developed integrates these concepts into a situated agent control framework, whose first version is being tested in an advanced robotics simulator. The implementation of the relationships and synergies between the different cognitive functionalities of consciousness in the domain of autonomous robotics is also discussed.
-
For today's robots, the notion of behavior is reduced to a simple factual concept at the level of movement. Consciousness, on the other hand, is a deeply cultural concept, regarded by human beings as their defining property. We propose to develop a computable transposition of consciousness concepts into artificial brains able to express emotions and facts of consciousness. The production of such artificial brains allows intentional and genuinely adaptive behavior in autonomous robots. The system managing the robot's behavior is made of two parts: the first computes and generates, in a constructivist manner, a representation of the robot moving in its environment, using symbols and concepts; the second realizes the representation of the first using morphologies in a dynamic geometrical way. The robot's body is thus apprehended, by the system itself, as the morphologic expression of its material substrate. The model rests strictly on the notion of massive multi-agent organizations with morphologic control.
-
A classic problem for artificial intelligence is to build a machine that imitates human behavior well enough to convince those interacting with it that it is another human being [1]. One approach to this problem focuses on building machines that imitate internal psychological facets of human interaction, such as artificially intelligent agents that play grandmaster chess [2]. Another approach focuses on imitating external psychological facets by building androids [3]. The disparity between these approaches reflects a problem with both: artificial intelligence abstracts mentality from embodiment, while android science abstracts embodiment from mentality. This problem needs to be solved if a sentient artificial entity indistinguishable from a human being is to be constructed. One solution is to examine a fundamental human ability and context in which both the construction of internal cognitive models and an appropriate external social response are essential. This paper considers how reasoning with intent in the context of human vs. android strategic interaction may offer a psychological benchmark with which to evaluate the human-likeness of android strategic responses. Understanding how people reason with intent may offer a theoretical context in which to bridge the gap between the construction of sentient internal and external artificial agents.
-
This chapter discusses the observation that human emotions sometimes have unfortunate effects, which raises the concern that robot emotions might not always be optimal. It analyzes brain mechanisms for vision and language to ground an evolutionary account relating motivational systems to emotions and the cortical systems which elaborate them. It also attempts to determine how to address the issue of characterizing emotions in such a way that a robot can be considered to have emotions, even though they are not emphatically linked to human emotions.
-
The author found the Journal of Consciousness Studies (JCS) issue on Machine Consciousness (2003) frustrating and alienating. It is argued that there seems to be a consensus building that consciousness is accessible to scientific scrutiny, so much so that it is already understood well enough to be modeled and even synthesized. It could be, instead, that the vocabulary of consciousness is being subtly redefined to be amenable to scientific investigation and explicit modeling. Such semantic revisionism is confusing and often misleading. Whatever else consciousness is, it is at least a certain quality of life apparent from personal reflection. Introspection is, after all, the only way we know that consciousness even exists. Scientific and technical redefinitions that fail to account for its phenomenal quality are at best incomplete. In the author's view, all but one of the ten articles in the JCS volume on Machine Consciousness commit various degrees of Protean distortion. In this collection of articles, common-sense terms describing consciousness were consistently distorted into special uses that strip them of their meaning. In criticizing the JCS articles, the author has tried to point out explicit principles that are missing from the account of machine consciousness, leaving the various inferences about consciousness open to charges of being distorted and illegitimate.