Full bibliography: 675 resources
-
It is argued that qualia are the primary way in which sensory information manifests itself in the mind. Qualia are not properties of the physical world, ready to be observed; instead, it is argued that they are the way in which the sensory system's response to the sensed stimuli manifests itself inside the system. Systems that have qualia have direct and transparent access to this response. It is argued that even though qualia are produced inside the head, they appear to be outside because this appearance complies with our motions, small and large, in the world. To be conscious in the way that we experience it is to have qualia. True conscious machines must have qualia, but the qualities of machine qualia need not be similar to the qualities of human qualia.
-
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s 1988 monograph, “Representation & Reality” (Bradford Books, Cambridge), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite state automaton”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q(x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience, and hence that disembodied minds lurk everywhere.
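The Putnam-style construction behind this argument can be made concrete with a small illustrative sketch (this is our own toy, not the paper's formalism): given any sequence of distinct "physical" states, one can always define a mapping under which that sequence "implements" a chosen automaton Q on a fixed run, which is all the unconstrained notion of implementation demands.

```python
# Toy illustration of a Putnam-style implementation mapping: ANY sequence
# of distinct physical states can be paired, step for step, with the state
# trajectory of a chosen automaton Q, so the physical system "implements"
# Q on that run by construction.

def putnam_mapping(physical_states, automaton_trace):
    """Map each successive physical state to the automaton state that Q
    occupies at the same time step."""
    if len(physical_states) < len(automaton_trace):
        raise ValueError("need at least as many physical states as Q steps")
    return {p: q for p, q in zip(physical_states, automaton_trace)}

# Q: a trivial two-state parity automaton run on input 1, 1, 1
automaton_trace = ["even", "odd", "even", "odd"]

# Arbitrary "physical" states, e.g. successive states of a rock warming in the sun
rock_states = ["s0", "s1", "s2", "s3"]

mapping = putnam_mapping(rock_states, automaton_trace)
print(mapping)  # the rock's state sequence now mirrors Q's, step for step
```

The point of the sketch is that nothing about the rock constrained the mapping; the implementation relation was free to construct, which is exactly the loophole the panpsychism argument exploits.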
-
The main sources of inspiration for the design of more engaging synthetic characters are existing psychological models of human cognition. Usually, these models, and the associated Artificial Intelligence (AI) techniques, are based on partial aspects of the real complex systems involved in the generation of human-like behavior. Emotions, planning, learning, user modeling, set shifting, and attention mechanisms are some notable examples of features typically considered in isolation within classical AI control models. Artificial cognitive architectures aim at integrating many of these aspects into effective control systems. However, the design of architectures of this sort is not straightforward. In this paper, we argue that current research efforts in the young field of Machine Consciousness (MC) could help tackle this complexity and provide a useful framework for the design of more appealing synthetic characters. This hypothesis is illustrated with the application of a novel consciousness-based cognitive architecture to the development of a First Person Shooter video game character.
-
This paper critically tracks the development of the machine consciousness paradigm from the incredulity of the 1990s, through the structuring at the turn of this century, to the consolidation of the present time, which forms the basis for guessing what might happen in the future. The underlying question is how this development may have changed our understanding of consciousness and whether an artificial version of the concept contributes to the improvement of computational machinery and robots. The paper includes some suggestions for research that might be profitable and others that may not be.
-
From the point of view of Cognitive Informatics, consciousness can be considered a grand integration of a number of cognitive processes. Intuitive definitions of consciousness generally involve perception, emotions, attention, self-recognition, theory of mind, volition, etc. Due to this compositional definition of the term, it is usually difficult to define both what exactly a conscious being is and how consciousness could be implemented in artificial machines. When we look at the most evolved biological examples of conscious beings, such as great apes or humans, the vast complexity of observed cognitive interactions, in conjunction with the lack of a comprehensive understanding of low-level neural mechanisms, makes the reverse-engineering task virtually intractable. With the aim of effectively addressing the problem of modeling consciousness at a cognitive level, in this work we propose a concrete developmental path in which key stages in the progressive process of building conscious machines are identified and characterized. Furthermore, a method for calculating a quantitative measure of artificial consciousness is presented. The application of the proposed framework is illustrated with a comparative study of different software agents designed to compete in a first-person shooter video game.
-
That machines can be conscious is the claim adopted here, upon examination of its different versions. We distinguish three kinds of consciousness: functional, phenomenal and hard consciousness (f-, p- and h-consciousness). Robots are functionally conscious already. There is also a clear project in AI on how to make computers phenomenally conscious, though criteria differ. Discussion of p-consciousness is clouded by the lack of clarity on its two versions: (1) first-person functional elements (here, p-consciousness), and (2) non-functional elements (h-consciousness). I argue that: (1) There are sufficient reasons to adopt h-consciousness and move forward with discussion of its practical applications; this does not have anti-naturalistic implications. (2) A naturalistic account of h-consciousness should be expected, in principle; in neuroscience we are some way towards formulating such an account. (3) Detailed analysis of the notion of consciousness is needed to clearly distinguish p- and h-consciousness. This refers to the notion of a subject that is not an object, and to the complementarity of the subjective and objective perspectives. (4) If we can understand the exact mechanism that produces h-consciousness, we should be able to engineer it. (5) H-consciousness is probably not a computational process (it is more like a liver function). Machines can, in principle, be functionally, phenomenally and h-conscious; all those processes are naturalistic. This is the engineering thesis on machine consciousness, formulated within non-reductive naturalism.
-
Machine (artificial) consciousness can be interpreted in both strong and weak forms, as an instantiation or as a simulation. Here, I argue in favor of weak artificial consciousness, proposing that synthetic models of neural mechanisms potentially underlying consciousness can shed new light on how these mechanisms give rise to the phenomena they do. The approach I advocate involves using synthetic models to develop "explanatory correlates" that can causally account for deep, structural properties of conscious experience. In contrast, the project of strong artificial consciousness — while not impossible in principle — has yet to be credibly illustrated, and is in any case less likely to deliver advances in our understanding of the biological basis of consciousness. This is because of the inherent circularity involved in using models both as instantiations and as cognitive prostheses for exposing general principles, and because treating models as instantiations can indefinitely postpone comparisons with empirical data.
-
A discussion is given of the possibility of creating machines which have more powerful consciousnesses than our own. The approach employs, in particular, a brain-based model of human consciousness. From that model a general discussion is developed of the need for a unique central control system in any machine to enable it to be efficient in decision-making. The resulting features of the machine’s consciousness, as the highest-order controller, are seen to need to be similar to our own. We conclude that beyond consciousness very likely lie only similar consciousnesses.
-
In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, the Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories with respect to the prerequisites that an ICT must satisfy in order to count as a moral agent accountable for its behavior. I argue that each of the various elements of the necessary conditions for moral agency presupposes consciousness, i.e., the capacity for inner subjective experience like that of pain or, as Nagel puts it, the possession of an internal something-it-is-like-to-be. I ultimately conclude that the issue of whether artificial moral agency is possible depends on the issue of whether it is possible for ICTs to be conscious.
-
Two related and relatively obscure issues in science have eluded empirical tractability. Both can be directly traced to progress in artificial intelligence. The first is scientific proof of consciousness, or otherwise, in anything. The second is the role of consciousness in intelligent behaviour. This document approaches both issues by exploring the idea of using scientific behaviour self-referentially as a benchmark in an objective test for P-consciousness, which is the relevant critical aspect of consciousness. Scientific behaviour is unique in being both highly formalised and provably critically dependent on the P-consciousness of the primary senses. In the context of the primary senses, P-consciousness is literally a formal identity with scientific observation. As such it is intrinsically afforded a status of critical dependency demonstrably no different from any other critical dependency in science, making scientific behaviour ideally suited to a self-referential scientific circumstance. The ‘provability’ derives from the delivery by science of objectively verifiable ‘laws of nature’. By exploiting the critical dependency, an empirical framework is constructed as a refined and specialised version of existing propositions for a ‘test for consciousness’. The specific role of P-consciousness is clarified: it is a human intracranial central nervous system construct that symbolically grounds the scientist in the distal external world, resulting in our ability to recognise, characterise and adapt to distal natural-world novelty. It is hoped that, in opening a discussion of a novel approach, the artificial intelligence community may eventually find a viable contender for its long overdue scientific basis.
-
This paper reviews the field of artificial intelligence, focusing on embodied artificial intelligence. It also considers models of artificial consciousness, agent-based artificial intelligence and the philosophical commentary on artificial intelligence. It concludes that there is little consensus or formalism in the field and that its achievements are meager.
-
Machine consciousness is not only a technological challenge, but a new way to approach scientific and theoretical issues which have not yet received a satisfactory solution from AI and robotics. We outline the foundations and the objectives of machine consciousness from the standpoint of building a conscious robot.
-
The term "synthetic phenomenology" refers to: 1) any attempt to characterize the phenomenal states possessed by, or modeled by, an artefact (such as a robot); or 2) any attempt to use an artefact to help specify phenomenal states (independently of whether such states are possessed by a naturally conscious being or an artefact). The notion of synthetic phenomenology is clarified, and distinguished from some related notions. It is argued that much work in machine consciousness would benefit from being more cognizant of the need for synthetic phenomenology of the first type, and of the possible forms it may take. It is then argued that synthetic phenomenology of the second type looks set to resolve some problems confronted by standard, non-synthetic attempts at characterizing phenomenal states. An example of the second form of synthetic phenomenology is given.
-
To build a truly conscious robot requires that the robot’s “brain” be capable of supporting phenomenal consciousness as the human brain does. The Operational Architectonics framework, through exploration of the temporal structure of information flow and inter-area interactions within the network of functional neuronal populations [by examining topographic sharp transition processes in the scalp electroencephalogram (EEG) on the millisecond scale], reveals and describes an EEG architecture which is analogous to the architecture of the phenomenal world. This suggests that the task of creating “machine” consciousness would require a machine implementation that can support the kind of hierarchical architecture found in EEG.
-
The development of cognitive sciences, whose interdisciplinary connections constitute a necessary condition of their usefulness, depends on the interaction between psychology, physiology and linguistics, as well as logic and artificial intelligence (AI); the latter serves as the proving ground for experimental testing of scientific means for imitating rationality and productive thinking. It is therefore very interesting to consider the phenomenon of the asymmetry of brain activity from the point of view of AI constructions and their logical means.
-
A method is presented to construct an embodied artificial intelligence. The method grows out of a system for object recognition in artificial vision and relies upon such a capability. Examples from the biosphere are extensively discussed in arriving at the method, whose central tenet is that all intelligence is fundamentally related to objects. All the aspects needed by an embodied intelligence are developed, including consciousness, memory and volition.
-
The currently leading cognitive theory of consciousness, Global Workspace Theory, postulates that the primary functions of consciousness include a global broadcast serving to recruit internal resources with which to deal with the current situation, and to modulate several types of learning. In addition, conscious experiences present current conditions and problems to a "self" system, an executive interpreter that is identifiable with brain structures like the frontal lobes and precuneus. Be it human, animal or artificial, an autonomous agent is said to be functionally conscious if its control structure (mind) implements Global Workspace Theory and the LIDA Cognitive Cycle, which includes the unconscious memory and control functions needed to integrate the conscious component of the system. We would therefore consider humans, many animals and even some virtual or robotic agents to be functionally conscious. Such entities may approach phenomenal consciousness, as found in human and other biological brains, as additional brain-like features are added. Here we argue that adding mechanisms to produce a stable, coherent perceptual field in a LIDA-controlled mobile robot might provide a small but significant step toward phenomenal consciousness in machines. We also propose that implementing several of the various notions of self in such a LIDA-controlled robot may well prove another step toward phenomenal consciousness in machines.
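The global broadcast this abstract describes can be sketched in a minimal toy (our own hypothetical illustration, not the LIDA codebase): specialist processes compete for the workspace, and the winning coalition's content is broadcast to every process.

```python
# Toy sketch of the Global Workspace broadcast: specialists compete on
# activation; the winner's content is broadcast globally so all processes
# (conscious and unconscious) can respond to it.

class Specialist:
    def __init__(self, name, activation, content):
        self.name = name
        self.activation = activation
        self.content = content
        self.received = None  # last broadcast this specialist heard

def broadcast_cycle(specialists):
    """One simplified cognitive cycle: select the most active specialist
    and broadcast its content to every process in the system."""
    winner = max(specialists, key=lambda s: s.activation)
    for s in specialists:
        s.received = winner.content
    return winner

agents = [
    Specialist("vision", 0.9, "obstacle ahead"),
    Specialist("audition", 0.4, "fan hum"),
    Specialist("proprioception", 0.2, "arm raised"),
]
winner = broadcast_cycle(agents)
print(winner.name, "->", agents[1].received)  # vision -> obstacle ahead
```

A real LIDA cycle adds much more (perceptual associative memory, coalition formation, action selection); the sketch only isolates the recruit-by-broadcast step the theory postulates.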
-
The functional capabilities that consciousness seems to provide to biological systems can supply valuable principles for the design of more autonomous and robust technical systems. These functional concepts bear a notable similarity to those underlying the notion of an operating system in software engineering, which allows us to specialize the computer metaphor for the mind into the operating system metaphor for consciousness. In this article, departing from these ideas and a model-based theoretical framework for cognition, we present an architectural proposal for machine consciousness, called the Operative Mind. According to it, machine consciousness could be implemented as a set of services, in an operating-system fashion, based on a model of the system's own control architecture, that supervise the adequacy of the system's architectural structure to the current objectives, triggering and managing adaptivity mechanisms.
-
Artificial intelligence (AI) and research on consciousness have influenced each other: theories about consciousness have inspired work on AI, and results from AI have changed our interpretation of the mind. AI can be used to test theories of consciousness, and research has been carried out on the development of machines with the behavior, cognitive characteristics and architecture associated with consciousness. Some people have argued against the possibility of machine consciousness, but none of these objections are conclusive, and many theories openly embrace the possibility that phenomenal consciousness could be realized in artificial systems.
-
Consciousness is a tremendously complex phenomenon. To understand it in the simplest way possible, we examined the configuration and functions of an autonomously adaptive system that can adapt to an environment without a teacher, and proposed a method of modeling consciousness on that system. In modeling consciousness, it is important to note the difference between phenomenal consciousness and functional consciousness. To clarify the difference, a model with two layers, a physical layer and a logical layer, is proposed. The functions of primitive consciousness in the autonomously adaptive system were clarified on this model. The physical layer is composed of artificial neural nodes; all signals are processed in detail by these nodes. By contrast, the logical layer is composed of the minimum information, selected from the physical layer, necessary for the system to adapt itself. The operations in the logical layer are represented by interactions among only the selected information. Our daily conscious phenomena are expressed on the logical layer.
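The two-layer idea can be sketched in a small toy (our own illustration under assumed selection criteria, not the authors' implementation): the physical layer carries every node's signal, while the logical layer keeps only the signals strong enough to matter for adaptation.

```python
# Toy sketch of the two-layer model: a physical layer of neural-node
# signals and a logical layer that selects the minimal information the
# system needs in order to adapt.

import random

def physical_layer(n_nodes, seed=0):
    """All signals, processed in detail: one activation per neural node."""
    rng = random.Random(seed)
    return {f"node{i}": rng.random() for i in range(n_nodes)}

def logical_layer(signals, threshold=0.8):
    """Keep only the signals strong enough to matter for adaptation;
    conscious-level operations act on this reduced set alone."""
    return {name: v for name, v in signals.items() if v >= threshold}

signals = physical_layer(100)
conscious_view = logical_layer(signals)
# The logical layer is a small subset of the physical layer's activity.
print(len(signals), len(conscious_view))
```

The threshold rule is only a stand-in for whatever selection criterion the system actually uses; the point is that logical-layer operations involve interactions among the selected signals alone.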