Full bibliography (716 resources)
-
A discussion is given as to the possibility of creating machines which have more powerful consciousnesses than our own. The approach employs, in particular, a brain-based model of human consciousness. From that model a general discussion is developed of the need for a unique central control system in any machine if it is to be efficient in decision-making. The resulting features of the machine’s consciousness, as the highest-order controller, are seen to need to be similar to our own. We conclude that beyond consciousness very likely lie only similar consciousnesses.
-
In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories with respect to the prerequisites that an ICT must satisfy in order to count as a moral agent accountable for its behavior. I argue that each of the various elements of the necessary conditions for moral agency presupposes consciousness, i.e., the capacity for inner subjective experience like that of pain or, as Nagel puts it, the possession of an internal something-it-is-like-to-be. I ultimately conclude that the issue of whether artificial moral agency is possible depends on the issue of whether it is possible for ICTs to be conscious.
-
Two related and relatively obscure issues in science have eluded empirical tractability. Both can be directly traced to progress in artificial intelligence. The first is scientific proof of consciousness, or otherwise, in anything. The second is the role of consciousness in intelligent behaviour. This document approaches both issues by exploring the idea of using scientific behaviour self-referentially as a benchmark in an objective test for P-consciousness, which is the relevant critical aspect of consciousness. Scientific behaviour is unique in being both highly formalised and provably critically dependent on the P-consciousness of the primary senses. In the context of the primary senses, P-consciousness is literally a formal identity with scientific observation. As such it is intrinsically afforded a status of critical dependency demonstrably no different from any other critical dependency in science, making scientific behaviour ideally suited to a self-referential scientific circumstance. The ‘provability’ derives from the delivery by science of objectively verifiable ‘laws of nature’. By exploiting the critical dependency, an empirical framework is constructed as a refined and specialised version of existing propositions for a ‘test for consciousness’. The specific role of P-consciousness is clarified: it is a human intracranial central nervous system construct that symbolically grounds the scientist in the distal external world, resulting in our ability to recognise, characterise and adapt to distal natural-world novelty. It is hoped that, in opening a discussion of a novel approach, the artificial intelligence community may eventually find a viable contender for its long-overdue scientific basis.
-
This paper reviews the field of artificial intelligence, focusing on embodied artificial intelligence. It also considers models of artificial consciousness, agent-based artificial intelligence and the philosophical commentary on artificial intelligence. It concludes that there is almost no consensus or formalism in the field and that its achievements are meager.
-
Machine consciousness is not only a technological challenge, but a new way to approach scientific and theoretical issues which have not yet received a satisfactory solution from AI and robotics. We outline the foundations and the objectives of machine consciousness from the standpoint of building a conscious robot.
-
The term "synthetic phenomenology" refers to: 1) any attempt to characterize the phenomenal states possessed, or modeled by, an artefact (such as a robot); or 2) any attempt to use an artefact to help specify phenomenal states (independently of whether such states are possessed by a naturally conscious being or an artefact). The notion of synthetic phenomenology is clarified, and distinguished from some related notions. It is argued that much work in machine consciousness would benefit from being more cognizant of the need for synthetic phenomenology of the first type, and of the possible forms it may take. It is then argued that synthetic phenomenology of the second type looks set to resolve some problems confronted by standard, non-synthetic attempts at characterizing phenomenal states. An example of the second form of synthetic phenomenology is given.
-
To build a truly conscious robot requires that the robot’s “brain” be capable of supporting phenomenal consciousness as the human brain does. The Operational Architectonics framework, through exploration of the temporal structure of information flow and inter-area interactions within the network of functional neuronal populations [by examining topographic sharp transition processes in the scalp electroencephalogram (EEG) on the millisecond scale], reveals and describes an EEG architecture analogous to the architecture of the phenomenal world. This suggests that the task of creating “machine” consciousness would require a machine implementation that can support the kind of hierarchical architecture found in EEG.
-
The development of the cognitive sciences, whose interdisciplinary connections constitute a necessary condition of their usefulness, depends on the interaction between psychology, physiology and linguistics, as well as logic and artificial intelligence (AI); the latter serves as the proving ground for experimental testing of scientific means for imitating rationality and productive thinking. It is therefore of great interest to consider the phenomenon of asymmetry of brain activity from the point of view of AI constructions and their logical means.
-
A method is presented to construct an embodied artificial intelligence. The method grows out of a system for object recognition in artificial vision and relies upon such a capability. Examples from the biosphere are extensively discussed in coming to the method which has, as its central tenet, that all intelligence is fundamentally related to objects. All the aspects needed by an embodied intelligence are developed including consciousness, memory and volition.
-
The currently leading cognitive theory of consciousness, Global Workspace Theory postulates that the primary functions of consciousness include a global broadcast serving to recruit internal resources with which to deal with the current situation and to modulate several types of learning. In addition, conscious experiences present current conditions and problems to a "self" system, an executive interpreter that is identifiable with brain structures like the frontal lobes and precuneus. Be it human, animal or artificial, an autonomous agent is said to be functionally conscious if its control structure (mind) implements Global Workspace Theory and the LIDA Cognitive Cycle, which includes unconscious memory and control functions needed to integrate the conscious component of the system. We would therefore consider humans, many animals and even some virtual or robotic agents to be functionally conscious. Such entities may approach phenomenal consciousness, as found in human and other biological brains, as additional brain-like features are added. Here we argue that adding mechanisms to produce a stable, coherent perceptual field in a LIDA controlled mobile robot might provide a small but significant step toward phenomenal consciousness in machines. We also propose that implementing several of the various notions of self in such a LIDA controlled robot may well prove another step toward phenomenal consciousness in machines.
-
The functional capabilities that consciousness seems to provide to biological systems can supply valuable principles for the design of more autonomous and robust technical systems. These functional concepts bear a notable similarity to those underlying the notion of an operating system in software engineering, which allows us to specialize the computer metaphor for the mind into an operating-system metaphor for consciousness. In this article, departing from these ideas and a model-based theoretical framework for cognition, we present an architectural proposal for machine consciousness, called the Operative Mind. According to it, machine consciousness could be implemented as a set of services, in an operating-system fashion, based on modeling of the system’s own control architecture, which supervise the adequacy of the system’s architectural structure to the current objectives, triggering and managing adaptivity mechanisms.
-
Artificial intelligence (AI) and the research on consciousness have reciprocally influenced each other – theories about consciousness have inspired work on AI, and the results from AI have changed our interpretation of the mind. AI can be used to test theories of consciousness and research has been carried out on the development of machines with the behavior, cognitive characteristics and architecture associated with consciousness. Some people have argued against the possibility of machine consciousness, but none of these objections are conclusive and many theories openly embrace the possibility that phenomenal consciousness could be realized in artificial systems.
-
Consciousness is a tremendously complex phenomenon. To understand it in the simplest way possible, we examined the configuration and functions of an autonomously adaptive system that can adapt to an environment without a teacher, and we propose a method for modeling consciousness on such a system. In modeling consciousness, it is important to note the difference between phenomenal consciousness and functional consciousness. To clarify the difference, a model with two layers, a physical layer and a logical layer, is proposed. The functions of primitive consciousness in the autonomously adaptive system were clarified on this model. The physical layer is composed of artificial neural nodes, which process all signals in detail. In contrast, the logical layer is composed of the minimum information, selected from the physical layer, that is necessary for the system to adapt itself. Operations in the logical layer are represented by interactions among only this selected information. Our daily conscious phenomena are expressed on the logical layer.
-
In this stunningly original exploration of human consciousness, philosopher and scientist Thomas Metzinger provides fascinating evidence that the "self" does not really exist. Highlighting a series of groundbreaking experiments in neuroscience, virtual reality, and robotics, and his own pioneering research into the phenomenon of the "out-of-body" experience, Metzinger reveals how our brain constructs our reality—our deepest sense of self is completely dependent on our brain functioning. In The Ego Tunnel, Metzinger examines recent evidence that people born without arms or legs can experience a sensation that they do in fact have limbs—and how we can actually feel a human touch in a rubber hand placed on a desk in front of us. Similarly, he reveals how the state of our experiential self changes when we become lucid while we're dreaming, and how our sense of self can even be transposed into a three-dimensional computer-generated image of our body in cyberspace simply by using virtual reality goggles, creating a conflict between the seeing self and the feeling self. He goes on to discuss the latest research on free will, machine consciousness, and the evolution of empathy. Highlighting these examples and more, Metzinger asserts that if our "self" is created by our brain mechanisms and it's possible to alter our subjective reality, then this creates not only a deeper understanding of consciousness, but a need for a new approach to ethics. Our sense of self, our spatial understanding, and the feeling of embodiment can be manipulated and even controlled. Using new kinds of medication, we can even enhance cognition and fine-tune emotional layers of self consciousness. But what, in an ethical sense, are valuable forms of self-experience in the first place—what is a good state of consciousness? 
Metzinger ultimately argues that we must be willing to engage with serious and pressing ethical questions as well as cultural consequences that will result from a new image of the "self" and the emerging neurotechnology of consciousness. In a time when the science of cognition is becoming as controversial as evolution, The Ego Tunnel provides a highly innovative take on the mystery of the mind.
-
Cognitive theories of consciousness should provide effective frameworks to implement machine consciousness. The Global Workspace Theory is a leading theory of consciousness which postulates that the primary function of consciousness is a global broadcast that facilitates recruitment of internal resources to deal with the current situation as well as modulate several types of learning. In this paper, we look at architectures for machine consciousness that have the Global Workspace Theory as their basis and discuss the requirements in such architectures to bring about both functional and phenomenal aspects of consciousness in machines.
-
If human consciousness is the result of complex neural electro-chemical interactions occurring in the brain, the question of whether a machine can ever become self-aware could be a matter of time: the time necessary to fully understand the functional behavior of the brain structure, develop a mathematical model of it, and implement an artificial system capable of working according to such a model. This paper addresses several issues related to the possibility of developing a conscious artificial brain. A number of hazardous questions are posed to the reader, each addressing a specific technical or philosophical issue, which is discussed and developed in the form of a hazardous answer.
-
Consciousness is often thought to be that aspect of mind that is least amenable to being understood or replicated by artificial intelligence (AI). The first-personal, subjective, what-it-is-like-to-be-something nature of consciousness is thought to be untouchable by the computations, algorithms, processing and functions of AI methods. Since AI is the most promising avenue toward artificial consciousness (AC), the conclusion many draw is that AC is even more doomed than AI supposedly is. The objective of this paper is to evaluate the soundness of this inference. Methods: The results are achieved by means of conceptual analysis and argumentation. Results and conclusions: It is shown that pessimism concerning the theoretical possibility of artificial consciousness is unfounded, based as it is on misunderstandings of AI, and a lack of awareness of the possible roles AI might play in accounting for or reproducing consciousness. This is done by making some foundational distinctions relevant to AC, and using them to show that some common reasons given for AC scepticism do not touch some of the (usually neglected) possibilities for AC, such as prosthetic, discriminative, practically necessary, and lagom (necessary-but-not-sufficient) AC. Along the way three strands of the author's work in AC – interactive empiricism, synthetic phenomenology, and ontologically conservative heterophenomenology – are used to illustrate and motivate the distinctions and the defences of AC they make possible.
-
Artificial consciousness is still far from being an established discipline. We will try to outline some theoretical assumptions that could help in dealing with phenomenal consciousness. What are the technological and theoretical obstacles that face enthusiastic scholars of artificial consciousness? After presenting an outline of the state of artificial consciousness, we will focus on the relevance of phenomenal consciousness. Artificial consciousness needs to tackle the issue of phenomenal consciousness in a physical world. Up to now, the only models that give some hope of succeeding are the various kinds of externalism.