Full bibliography 615 resources
-
A novel theory of reflective consciousness, will and self is presented, based on modeling each of these entities using self-referential mathematical structures called hypersets. Pattern theory is used to argue that these exotic mathematical structures may meaningfully be considered as parts of the minds of physical systems, even finite computational systems. The hyperset models presented are hypothesized to occur as patterns within the “moving bubble of attention” of the human brain and any brainlike AI system. They appear to be compatible with both panpsychist and materialist views of consciousness, and probably other views as well.
-
Is it possible to construct an artificial person? Researchers in the field of artificial intelligence have for decades been developing computer programs that emulate human intelligence. This book goes beyond intelligence and describes how close we are to recreating many of the other capacities that make us human. These abilities include learning, creativity, consciousness, and emotion. The attempt to understand and engineer these abilities constitutes the new interdisciplinary field of artificial psychology, which is characterized by contributions from philosophy, cognitive psychology, neuroscience, computer science, and robotics. This work is intended for use as a main or supplementary introductory textbook for a course in cognitive psychology, cognitive science, artificial intelligence, or the philosophy of mind. It examines human abilities as operating requirements that an artificial person must have and analyzes them from a multidisciplinary approach. The book is comprehensive in scope, covering traditional topics like perception, memory, and problem solving. However, it also describes recent advances in the study of free will, ethical behavior, affective architectures, social robots, and hybrid human-machine societies.
-
In their joint paper “The Replication of the Hard Problem of Consciousness in AI and Bio-AI: An Early Conceptual Framework” (Boltuc et al. 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, that is, subjective consciousness that satisfies Chalmers's hard problem (we will abbreviate the hard problem of consciousness as “H-consciousness”). The claim is that if we knew the inner workings of phenomenal consciousness and could understand its precise operation, we could instantiate such consciousness in a machine. This claim, called the extra-strong AI thesis, is important because, if true, it would demystify the privileged-access problem of first-person consciousness and cast it as an empirical problem of science rather than a fundamental question of philosophy. A core assumption of the extra-strong AI thesis is that there is no logical argument precluding the implementation of H-consciousness in an organic or inorganic machine, provided we understand its algorithm. Another way of framing this conclusion is that there is nothing special about H-consciousness as compared to any other process: in the same way that we do not preclude a machine from implementing photosynthesis, we do not preclude a machine from implementing H-consciousness. While one may be more difficult in practice, it is a problem of science and engineering, no longer a philosophical question. I propose that Boltuc's conclusion, while plausible and convincing, comes at a very high price: the argument given for it does not exclude any conceivable process from machine implementation. In short, if we make some assumptions about the equivalence of a rough notion of algorithm and then tie this to human understanding, all logical preconditions vanish and the argument grants that any process can be implemented in a machine.
The purpose of this paper is to comment on the argument for his conclusion and offer additional properties of H-consciousness that can be used to make the conclusion falsifiable through scientific investigation rather than relying on the limits of human understanding.
-
This work describes the application of the Baars-Franklin Architecture (BFA), an artificial consciousness approach, to synthesize a mind (a control system) for an artificial creature. Firstly we introduce the theoretical foundations of this approach for the development of a conscious agent. Then we explain the architecture of our agent and at the end we discuss the results and first impressions of this approach.
-
The classical paradigm of the neural brain as the seat of human natural intelligence is too restrictive. This paper defends the idea that the neural ectoderm is the actual brain, based on the development of the human embryo. Indeed, the neural ectoderm includes the neural crest, which gives rise to pigment cells in the skin and to the ganglia of the autonomic nervous system, and the neural tube, which gives rise to the brain, the spinal cord, and motor neurons. The brain is thus completely integrated in the ectoderm and cannot work alone. The paper presents fundamental properties of the brain as follows. Firstly, Paul D. MacLean proposed the triune human brain, which consists of three brains in one, following the evolution of species: the reptilian complex, the limbic system, and the neo-cortex. Secondly, consciousness and conscious awareness are analysed. Thirdly, anticipatory unconscious free will and conscious free veto are described in agreement with the experiments of Benjamin Libet. Fourthly, the main section explains the development of the human embryo and shows that the neural ectoderm is the whole neural brain. Fifthly, a conjecture is proposed that the neural brain is completely programmed with scripts written in biological low-level and high-level languages, in a manner similar to the way cells are programmed by the genetic code. Finally, it is concluded that the proposition of the neural ectoderm as the whole neural brain is a breakthrough in the understanding of natural intelligence, and also in the future design of robots with artificial intelligence.
-
The question about the potential for consciousness of artificial systems has often been addressed using thought experiments, which are often problematic in the philosophy of mind. A more promising approach is to use real experiments to gather data about the correlates of consciousness in humans, and develop this data into theories that make predictions about human and artificial consciousness. A key issue with an experimental approach is that consciousness can only be measured using behavior, which places fundamental limits on our ability to identify the correlates of consciousness. This paper formalizes these limits as a distinction between type I and type II potential correlates of consciousness (PCCs). Since it is not possible to decide empirically whether type I PCCs are necessary for consciousness, it is indeterminable whether a machine that lacks neurons or hemoglobin, for example, is potentially conscious. A number of responses have been put forward to this problem, including suspension of judgment, liberal and conservative attribution of the potential for consciousness and a psychometric scale that models our judgment about the relationship between type I PCCs and consciousness.
-
New product and system opportunities are expected to arise when the next step in information technology takes place. Existing Artificial Intelligence is based on preprogrammed algorithms that operate in a mechanistic way in the computer. The computer and the program do not understand what is being processed. Without the consideration of meaning, no understanding can take place. This lack of understanding is seen as the major shortcoming of Artificial Intelligence, one that prevents it from achieving its original goal: thinking machines with full human-like cognition and intelligence. The emerging technology of Machine Consciousness is expected to remedy this shortcoming. Machine Consciousness technology is expected to create new opportunities in robotics, information technology gadgets and general information processing, calling for machine understanding of auditory, visual and linguistic information.
-
It is argued that qualia are the primary way in which sensory information manifests itself in mind. Qualia are not seen as properties of the physical world, ready to be observed; instead it is argued that they are the way in which the sensory system's response to the sensed stimuli manifests itself inside the system. Systems that have qualia have direct and transparent access to this response. It is argued that even though qualia are produced inside the head, they appear to be outside because this appearance complies with our motions, small and large, in the world. To be conscious in the way that we experience it is to have qualia. True conscious machines must have qualia, but the qualities of machine qualia need not be similar to the qualities of human qualia.
-
Two related and relatively obscure issues in science have eluded empirical tractability. Both can be directly traced to progress in artificial intelligence. The first is scientific proof of consciousness or otherwise in anything. The second is the role of consciousness in intelligent behaviour. This document approaches both issues by exploring the idea of using scientific behaviour self-referentially as a benchmark in an objective test for P-consciousness, which is the relevant critical aspect of consciousness. Scientific behaviour is unique in being both highly formalised and provably critically dependent on the P-consciousness of the primary senses. In the context of the primary senses P-consciousness is literally a formal identity with scientific observation. As such it is intrinsically afforded a status of critical dependency demonstrably no different to any other critical dependency in science, making scientific behaviour ideally suited to a self-referential scientific circumstance. The ‘provability’ derives from the delivery by science of objectively verifiable ‘laws of nature’. By exploiting the critical dependency, an empirical framework is constructed as a refined and specialised version of existing propositions for a ‘test for consciousness’. The specific role of P-consciousness is clarified: it is a human intracranial central nervous system construct that symbolically grounds the scientist in the distal external world, resulting in our ability to recognise, characterise and adapt to distal natural world novelty. It is hoped that in opening a discussion of a novel approach, the artificial intelligence community may eventually find a viable contender for its long overdue scientific basis.
-
To build a truly conscious robot requires that the robot's “brain” be capable of supporting phenomenal consciousness as the human brain does. The Operational Architectonics framework, through exploration of the temporal structure of information flow and inter-area interactions within the network of functional neuronal populations [by examining topographic sharp transition processes in the scalp electroencephalogram (EEG) on the millisecond scale], reveals and describes an EEG architecture that is analogous to the architecture of the phenomenal world. This suggests that the task of creating “machine” consciousness would require a machine implementation that can support the kind of hierarchical architecture found in EEG.
-
The development of cognitive sciences, whose interdisciplinary connections constitute a necessary condition of their usefulness, depends on the interaction between psychology, physiology, linguistics, as well as logic and artificial intelligence (AI); the latter serves as the proving ground for experimental testing of scientific means for imitating rationality and productive thinking. Thus, it is very interesting to consider the phenomenon of the asymmetry of brain activity from the point of view of AI constructions and its logical means.
-
A method is presented to construct an embodied artificial intelligence. The method grows out of a system for object recognition in artificial vision and relies upon such a capability. Examples from the biosphere are extensively discussed in arriving at the method, whose central tenet is that all intelligence is fundamentally related to objects. All the aspects needed by an embodied intelligence are developed, including consciousness, memory and volition.
-
The currently leading cognitive theory of consciousness, Global Workspace Theory, postulates that the primary functions of consciousness include a global broadcast serving to recruit internal resources with which to deal with the current situation and to modulate several types of learning. In addition, conscious experiences present current conditions and problems to a "self" system, an executive interpreter that is identifiable with brain structures like the frontal lobes and precuneus. Be it human, animal or artificial, an autonomous agent is said to be functionally conscious if its control structure (mind) implements Global Workspace Theory and the LIDA Cognitive Cycle, which includes the unconscious memory and control functions needed to integrate the conscious component of the system. We would therefore consider humans, many animals and even some virtual or robotic agents to be functionally conscious. Such entities may approach phenomenal consciousness, as found in human and other biological brains, as additional brain-like features are added. Here we argue that adding mechanisms to produce a stable, coherent perceptual field in a LIDA-controlled mobile robot might provide a small but significant step toward phenomenal consciousness in machines. We also propose that implementing several of the various notions of self in such a LIDA-controlled robot may well prove another step toward phenomenal consciousness in machines.
-
Haikonen envisions autonomous robots that perceive and understand the world directly, acting in it in a natural human-like way without the need for programs and numerical representation of information. By developing higher-level cognitive functions through the power of artificial associative neuron architectures, the author approaches the issues of machine consciousness. Robot Brains expertly outlines a complete system approach to cognitive machines, offering practical design guidelines for the creation of non-numeric autonomous creative machines. It details topics such as component parts and realization principles, so that different pieces may be implemented in hardware or software. Real-world examples for designers and researchers are provided, including circuit and systems examples that few books on this topic give. In novel technical and practical detail, this book also considers: the limitations and remedies of traditional neural associators in creating true machine cognition; basic circuit assemblies for cognitive neural architectures; how motors can be interfaced with the associative neural system in order for fluent motion to be achieved without numeric computations; memorization, imagination, planning and reasoning in the machine; the concept of machine emotions for motivation and value systems; an approach towards the use and understanding of natural language in robots. The methods presented in this book have important implications for computer vision, signal processing, speech recognition and other information technology fields. Systematic and thoroughly logical, it will appeal to practising engineers involved in the development and design of robots and cognitive machines, as well as to researchers in Artificial Intelligence. Postgraduate students in computational neuroscience and robotics, and neuromorphic engineers, will find it an exciting source of information.
-
The development of conscious machines faces a number of difficult issues such as the apparent immateriality of mind, qualia and self-awareness. Also consciousness-related cognitive processes such as perception, imagination, motivation and inner speech are a technical challenge. It is foreseen that the development of machine consciousness would call for a system approach; the developer of conscious machines should consider complete systems that integrate the cognitive processes seamlessly and process information in a transparent way with representational and non-representational information-processing modes. An overview of the main issues is given and some possible solutions are outlined.
-
Striving in the real world is more and more what artificial agents are required to do, and it is not a simple task. Interacting with humans in general, and with students in particular, requires a great deal of subtlety if one is to be perceived as a great tutor and a pleasant fellow. Similarly, the more varied the types of information an artificial agent senses, the more apt it may be. But then comes the need to process all this information, and that can overwhelm even the most powerful computer. “Consciousness” mechanisms can help sustain an apt tutor, allowing it to consider various sources of information in diagnosing and guiding learners. We show in the present paper how they effectively support these processes in the specific context of astronaut training on the manipulation of the Space Station Remote Manipulator System, Canadarm2.
-
“What is mind?” “Can we build synthetic or artificial minds?” Think these questions are reserved only for science fiction? Well, not anymore. This collection presents a diverse overview of where the development of artificial minds stands as the twenty-first century begins. Examined from nearly all viewpoints, Visions of Mind includes perspectives from philosophy, psychology, cognitive science, social studies and artificial intelligence. This collection comes largely as a result of many conferences and symposiums conducted by many of the leading minds on this topic. At its core is Professor Aaron Sloman's symposium from the spring 2000 UK Society for Artificial Intelligence conference. Authors from that symposium, as well as others from around the world, have updated their perspectives and contributed to this powerful book. The result is a multi-disciplinary approach to the long-term problem of designing a human-like mind, whether for scientific, social, or engineering purposes. The topics addressed within this text are valuable to both artificial intelligence and cognitive science, and also to the academic disciplines that they draw on and feed. Among those disciplines are philosophy, computer science, and psychology.
-
What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such systems, guided and limited by associative memory, is similar to the stream of consciousness. A specific architecture of an artificial system, termed an articon, is introduced that by its very design has to claim being conscious. Non-verbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the flow of inner states of such a system during typical behavioral processes shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills – when conscious information processing is replaced by subconscious processing – is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon's claims that it is conscious. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to human.
-
In his article “The Liabilities of Mobility,” Merker (this issue) asserts that “Consciousness presents us with a stable arena for our actions—the world …” and argues that this property provided evolutionary pressure for the evolution of consciousness. In this commentary, I will explore the implications of Merker's ideas for consciousness in artificial agents as well as animals, and also meet some possible objections to his evolutionary-pressure claim.