-
In his article on The Liabilities of Mobility, Merker (this issue) asserts that “Consciousness presents us with a stable arena for our actions—the world …” and argues for this property as providing evolutionary pressure for the evolution of consciousness. In this commentary, I will explore the implications of Merker’s ideas for consciousness in artificial agents as well as animals, and also meet some possible objections to his evolutionary pressure claim.
-
After discussing various types of consciousness, several approaches to machine consciousness, software agents, and global workspace theory, we describe a software agent, IDA, that is “conscious” in the sense of implementing that theory of consciousness. IDA perceives, remembers, deliberates, negotiates, and selects actions, sometimes “consciously.” She uses a variety of mechanisms, each of which is briefly described. It’s tempting to think of her as a conscious artifact. Is such a view in any way justified? The remainder of the paper considers this question.
-
Baars (1988, 1997) has proposed a psychological theory of consciousness, called global workspace theory. The present study describes a software agent implementation of that theory, called “Conscious” Mattie (CMattie). CMattie operates in a clerical domain from within a UNIX operating system, sending and interpreting natural-language messages to organize seminars at a university. CMattie fleshes out global workspace theory with a detailed computational model that integrates contemporary architectures in cognitive science and artificial intelligence. Baars (1997) lists the psychological “facts that any complete theory of consciousness must explain” in his appendix to In the Theater of Consciousness; global workspace theory was designed to explain these “facts.” The present article discusses how the design of CMattie accounts for these facts and thereby the extent to which it implements global workspace theory.
-
Cognitive theories of consciousness should provide effective frameworks to implement machine consciousness. The Global Workspace Theory is a leading theory of consciousness which postulates that the primary function of consciousness is a global broadcast that facilitates the recruitment of internal resources to deal with the current situation and modulates several types of learning. In this paper, we look at architectures for machine consciousness that have the Global Workspace Theory as their basis and discuss the requirements in such architectures to bring about both functional and phenomenal aspects of consciousness in machines.
-
The currently leading cognitive theory of consciousness, Global Workspace Theory, postulates that the primary functions of consciousness include a global broadcast serving to recruit internal resources with which to deal with the current situation and to modulate several types of learning. In addition, conscious experiences present current conditions and problems to a "self" system, an executive interpreter that is identifiable with brain structures like the frontal lobes and precuneus. Be it human, animal or artificial, an autonomous agent is said to be functionally conscious if its control structure (mind) implements Global Workspace Theory and the LIDA Cognitive Cycle, which includes unconscious memory and control functions needed to integrate the conscious component of the system. We would therefore consider humans, many animals and even some virtual or robotic agents to be functionally conscious. Such entities may approach phenomenal consciousness, as found in human and other biological brains, as additional brain-like features are added. Here we argue that adding mechanisms to produce a stable, coherent perceptual field in a LIDA controlled mobile robot might provide a small but significant step toward phenomenal consciousness in machines. We also propose that implementing several of the various notions of self in such a LIDA controlled robot may well prove another step toward phenomenal consciousness in machines.
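The broadcast-and-recruitment mechanism described above can be illustrated with a minimal sketch. This is not LIDA's actual implementation or API; every class and name here (`Coalition`, `Processor`, `GlobalWorkspace`) is an illustrative assumption, showing only the core Global Workspace idea: coalitions of content compete by activation, and the winner is broadcast to all unconscious processors, which may then recruit themselves to the current situation.

```python
# Toy sketch of a Global Workspace broadcast cycle (illustrative only;
# these names are assumptions, not LIDA's real interfaces).
from dataclasses import dataclass


@dataclass
class Coalition:
    """A bundle of content competing for conscious access."""
    content: str
    activation: float


class Processor:
    """An unconscious specialist that reacts to global broadcasts."""
    def __init__(self, name: str, trigger: str):
        self.name = name
        self.trigger = trigger      # content this processor responds to
        self.received: list[str] = []

    def receive(self, broadcast: str) -> None:
        # Only processors whose trigger matches are recruited.
        if self.trigger in broadcast:
            self.received.append(broadcast)


class GlobalWorkspace:
    """Selects the most active coalition and broadcasts it globally."""
    def __init__(self, processors: list[Processor]):
        self.processors = processors

    def cycle(self, coalitions: list[Coalition]) -> Coalition:
        # Competition: the highest-activation coalition becomes "conscious".
        winner = max(coalitions, key=lambda c: c.activation)
        # Global broadcast: every processor sees the winning content.
        for p in self.processors:
            p.receive(winner.content)
        return winner


processors = [Processor("planner", "obstacle"), Processor("memory", "face")]
gw = GlobalWorkspace(processors)
winner = gw.cycle([Coalition("obstacle ahead", 0.9),
                   Coalition("familiar face", 0.4)])
print(winner.content)  # prints "obstacle ahead"
```

In a full cognitive cycle of the kind the abstract describes, this competition-and-broadcast step would repeat continuously, interleaved with the unconscious perception, memory, and action-selection functions that the broadcast recruits.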
-
What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry. Yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly special classes of moral decision making will require attributes of consciousness such as being able to empathize with the pain and suffering of others. But in this article we will propose that consciousness also plays a functional role in making most if not all moral decisions. Work by the authors of this article with LIDA, a computational and conceptual model of human cognition, will help illustrate how consciousness can be understood to serve a very broad role in the making of all decisions including moral decisions.