Full bibliography 675 resources
-
We review computational intelligence methods of sensory perception and cognitive functions in animals, humans, and artificial devices. Top-down symbolic methods and bottom-up sub-symbolic approaches are described. In recent years, computational intelligence, cognitive science, and neuroscience have achieved a level of maturity that allows the integration of top-down and bottom-up approaches in modeling the brain. Continuous adaptation and learning are key components of computationally intelligent devices, achieved using dynamic models of cognition and consciousness. Human cognition granulates the seemingly homogeneous temporal sequences of perceptual experiences into meaningful and comprehensible chunks of concepts and complex behavioral schemas. These chunks are accessed during action selection and conscious decision making as part of the intentional cognitive cycle. Implementations in computational and robotic environments are demonstrated.
-
Thinking and being conscious are two fundamental aspects of the subject. Although both are challenging, conscious experience has often been considered the more elusive (Chalmers 1996). However, in recent years several researchers have addressed the hypothesis of designing and implementing models for artificial consciousness: on one hand there is hope of being able to design a model for consciousness; on the other hand, actual implementations of such models could be helpful for understanding consciousness. The traditional field of Artificial Intelligence is now flanked by the seminal field of artificial or machine consciousness. In this chapter I analyse the current state of the art of models of consciousness and then outline an externalist theory of the conscious mind that is compatible with the design and implementation of an artificial conscious being. As I argue in the following, this task can be profitably approached once we abandon the dualist framework of traditional Cartesian substance metaphysics and adopt a process-metaphysical stance. Thus, I sketch an alternative externalist, process-based ontological framework. From within this framework, I venture to suggest a series of constraints for a consciousness-oriented architecture.
-
Machine consciousness exists already in organic systems, and it is only a matter of time -- and some agreement -- before it is realised in reverse-engineered organic systems and forward-engineered inorganic systems. The agreement must be over the preconditions that must first be met if the enterprise is to be successful, and it is these preconditions -- for instance, being a socially embedded, structurally coupled, dynamic, goal-directed entity that organises its perceptual input and enacts its world through the application of both a cognitive and a kinaesthetic imagination -- that I shall concentrate on presenting in this paper. It will become clear that these preconditions present engineers with a tall order, but not, I will argue, an impossible one. After all, we might agree with Freeman and Núñez's claim that the machine metaphor has restricted the expectations of the cognitive sciences (Freeman & Núñez, 1999); but it is a double-edged sword, since our limited expectations about machines also narrow the potential of our cognitive science.
-
For today's robots, the notion of behavior is reduced to a simple factual concept at the level of movements. Consciousness, on the other hand, is a deeply cultural concept, founding the main property human beings attribute to themselves. We propose a computable transposition of consciousness concepts into artificial brains able to express emotions and facts of consciousness. The production of such artificial brains enables intentional and genuinely adaptive behavior in autonomous robots. The system managing the robot's behavior is made of two parts: the first computes and generates, in a constructivist manner, a representation of the robot moving in its environment, using symbols and concepts. The second realises the representation of the first using morphologies in a dynamic geometrical way. The robot's body is itself seen as the morphologic apprehension of its material substrata. The model rests strictly on the notion of massive multi-agent organizations under morphologic control.
-
The central paradigm of artificial intelligence is rapidly shifting toward biological models for both robotic devices and systems performing such critical tasks as network management, vehicle navigation, and process control. Here we use a recent mathematical analysis of the necessary conditions for consciousness in humans to explore likely failure modes inherent to a broad class of biologically inspired computing machines. Analogs to developmental psychopathology, in which regulatory mechanisms for consciousness fail progressively and subtly under stress, and to inattentional blindness, where a narrow 'syntactic band pass' defined by the rate distortion manifold of conscious attention results in pathological fixation, seem inevitable. Similar problems are likely to confront other possible architectures, although their mathematical description may be far less straightforward. Computing devices constructed on biological paradigms will inevitably lack the elaborate, but poorly understood, system of control mechanisms which has evolved over the last few hundred million years to stabilize consciousness in higher animals. This will make such machines prone to insidious degradation, and, ultimately, catastrophic failure.
-
A classic problem for artificial intelligence is to build a machine that imitates human behavior well enough to convince those who are interacting with it that it is another human being [1]. One approach to this problem focuses on building machines that imitate internal psychological facets of human interaction, such as artificially intelligent agents that play grandmaster chess [2]. Another approach focuses on building machines that imitate external psychological facets by building androids [3]. The disparity between these approaches reflects a problem with both: artificial intelligence abstracts mentality from embodiment, while android science abstracts embodiment from mentality. This problem needs to be solved if a sentient artificial entity that is indistinguishable from a human being is to be constructed. One solution is to examine a fundamental human ability and context in which both the construction of internal cognitive models and an appropriate external social response are essential. This paper considers how reasoning with intent in the context of human vs. android strategic interaction may offer a psychological benchmark with which to evaluate the human-likeness of android strategic responses. Understanding how people reason with intent may offer a theoretical context in which to bridge the gap between the construction of sentient internal and external artificial agents.
-
Striving in the real world is more and more what artificial agents are required to do, and it is not a simple task. Interacting with humans in general, and with students in particular, requires a great deal of subtlety if one is to be perceived as a great tutor and a pleasant fellow. Similarly, the more varied the information an artificial agent senses, the more apt it may be. But then comes the need to process all this information, and that can overwhelm even the most powerful computer. «Consciousness» mechanisms can help sustain an apt tutor, allowing it to consider various sources of information in diagnosing and guiding learners. We show in the present paper how they effectively support these processes in the specific context of astronaut training on the manipulation of the Space Station Robotic Manipulation System, Canadarm2.
-
The study of several theories and models of consciousness, among them the functional and cognitive model exhibited in Baars' 'Global Workspace' theory, led us to identify computational correlates of consciousness and discuss their possible representations within a model of an intelligent agent. We first review a particular agent implementation given by an abstract machine, and then identify the extensions required in order to accommodate the main attributes of consciousness. This amounts to forming unconscious processor coalitions that result in the creation of contexts. These extensions can be formulated within a reified virtual machine encompassing a representation of the original machine as well as an additional introspective component.
-
This paper proposes a brain-inspired cognitive architecture that incorporates approximations to the concepts of consciousness, imagination, and emotion. To emulate the empirically established cognitive efficacy of conscious as opposed to non-conscious information processing in the mammalian brain, the architecture adopts a model of information flow from global workspace theory. Cognitive functions such as anticipation and planning are realised through internal simulation of interaction with the environment. Action selection, in both actual and internally simulated interaction with the environment, is mediated by affect. An implementation of the architecture is described which is based on weightless neurons and is used to control a simulated robot.
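The architecture described above combines two mechanisms: a global-workspace-style broadcast, in which competing specialist processes gain serial access to a shared workspace, and action selection weighted by affect. The following minimal sketch illustrates those two ideas only; the class names, the salience-based competition, and the affect dictionary are assumptions for illustration, not the authors' weightless-neuron implementation.

```python
class Process:
    """A specialist process bidding for access to the global workspace.
    (Hypothetical names; salience stands in for whatever competition
    mechanism a real architecture would use.)"""
    def __init__(self, name, salience):
        self.name = name
        self.salience = salience

def broadcast(processes):
    """Admit the most salient process's content to the workspace,
    mimicking the serial 'conscious' bottleneck of global workspace theory."""
    winner = max(processes, key=lambda p: p.salience)
    return winner.name

def select_action(actions, affect):
    """Affect-mediated action selection: each candidate action is scored
    by the agent's current affective appraisal of it."""
    return max(actions, key=lambda a: affect.get(a, 0.0))

processes = [Process("vision", 0.9), Process("audition", 0.4)]
affect = {"approach": 0.7, "avoid": 0.2}
conscious_content = broadcast(processes)               # -> "vision"
action = select_action(["approach", "avoid"], affect)  # -> "approach"
```

In the paper's architecture the same selection loop also runs over internally simulated interactions, so planning reuses the machinery of actual perception and action; the sketch omits that inner-simulation loop for brevity.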
-
This chapter discusses the observation that human emotions sometimes have unfortunate effects, which raises the concern that robot emotions might not always be optimal. It analyzes brain mechanisms for vision and language to ground an evolutionary account relating motivational systems to emotions and the cortical systems which elaborate them. It also considers how to characterize emotions in such a way that a robot can be considered to have emotions, even though they are not emphatically linked to human emotions.
-
“What is mind?” “Can we build synthetic or artificial minds?” Think these questions are reserved for science fiction? Not anymore. This collection presents a diverse overview of where the development of artificial minds stands as the twenty-first century begins. Examined from nearly all viewpoints, Visions of Mind includes perspectives from philosophy, psychology, cognitive science, social studies, and artificial intelligence. The collection grew largely out of conferences and symposiums conducted by many of the leading minds on this topic, at its core Professor Aaron Sloman's symposium at the spring 2000 UK Society for Artificial Intelligence conference. Authors from that symposium, as well as others from around the world, have updated their perspectives and contributed to this powerful book. The result is a multi-disciplinary approach to the long-term problem of designing a human-like mind, whether for scientific, social, or engineering purposes. The topics addressed within this text are valuable both to artificial intelligence and cognitive science and to the academic disciplines that they draw on and feed, among them philosophy, computer science, and psychology.
-
What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such systems, guided and limited by associative memory, is similar to the stream of consciousness. A specific architecture of an artificial system, termed an articon, is introduced that by its very design has to claim being conscious. Non-verbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the flow of inner states of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills – when conscious information processing is replaced by subconscious processing – is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon's claims that it is conscious. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to that of humans.
-
In his article on The Liabilities of Mobility, Merker (this issue) asserts that “Consciousness presents us with a stable arena for our actions—the world …” and argues for this property as providing evolutionary pressure for the evolution of consciousness. In this commentary, I will explore the implications of Merker’s ideas for consciousness in artificial agents as well as animals, and also meet some possible objections to his evolutionary pressure claim.
-
This paper discusses the development of a robot head that creates facial expressions according to emotion resulting from a flow of consciousness. The authors focused on human consciousness and proposed a system using a unique artificial consciousness based on the relationship between emotion and the resulting human facial expressions. The artificial consciousness is used to imitate human consciousness and generate emotion in the robot using data from the Internet. The authors have also developed a robot system that is similar in structure to the human anatomy in order to achieve smooth communication with people. By combining these two systems, the authors were able to realize the expression of emotion according to a flow of consciousness using a humanoid robot.
-
In this article I suggest how we might conceptualize some kind of artificial consciousness as an ultimate development of Artificial Life. This entity will be embodied in some sort of constructed (biological or non-biological) body. The contention is that consciousness within self-organized entities is not only possible but inevitable. The basic sensory and interactive processes by which an organism operates within an environment are the very processes that are necessary for consciousness. I then look at likely criteria for consciousness, and point to an architecture of the cognitive layer which maps onto the physiological layer, the brain. While evolutionary algorithms and neural nets will be at the heart of the production of artificial intelligences, there is a particular architectural organization that may be necessary in the production of conscious artefacts. This involves the operation of multiple layers of feedback loops: in the anatomy of the brain, in the social construction of the contents of consciousness, and in particular in the self-regulation necessary for the continued operation of metabolically organized systems. Finally I make some comments on the ethics of such a procedure.
-
The author found the "Journal of Consciousness Studies" (JCS) issue on Machine Consciousness (2003) frustrating and alienating. It is argued that a consensus seems to be building that consciousness is accessible to scientific scrutiny, so much so that it is already understood well enough to be modeled and even synthesized. It could be instead that the vocabulary of consciousness is being subtly redefined to be amenable to scientific investigation and explicit modeling. Such semantic revisionism is confusing and often misleading. Whatever else consciousness is, it is at least a certain quality of life apparent from personal reflection. Introspection is, after all, the only way we know that consciousness even exists. Scientific and technical redefinitions that fail to account for its phenomenal quality are at best incomplete. In the author's view, all but one of the ten articles in the JCS volume on Machine Consciousness commit various degrees of Protean distortion. In this collection of articles, common-sense terms describing consciousness were consistently distorted into special uses that strip them of their meaning. In criticism of the JCS articles, the author has tried to point out explicit principles that are missing from the account of machine consciousness, leaving the various inferences about consciousness open to charges of being distorted and illegitimate.