Full bibliography: 675 resources
-
Three fundamental questions concerning minds are presented: those of consciousness, intentionality and intelligence. After presenting the fundamental framework that has shaped both the philosophy of mind and Artificial Intelligence research over the last forty years or so regarding the last two questions, we turn to consciousness, whose study still seems elusive to both communities. After briefly illustrating why and how phenomenal consciousness is puzzling, a theoretical diagnosis of the problem is proposed and a framework is presented within which further research would yield a solution. The diagnosis is that the puzzle stems from a peculiar dual epistemic access to the phenomenal aspects (qualia) of our conscious experiences. An account of concept formation is presented such that both phenomenal concepts (like the concepts RED and SWEET) and introspective concepts (like the concepts EXPERIENCING RED and TASTING SWEET) are acquired from a first-person perspective as opposed to the third-person one (the standard concept-formation strategy for objective features). We explain the first-person perspective in information-theoretic and computational terms.
-
Nature (the Art whereby God hath made and governes the World) is by the Art of man, as in many other things, so in this also imitated, that it can make an Artificial Animal. For seeing life is but a motion of Limbs, the beginning whereof is in some principall part within; why may we not say, that all Automata (Engines that move themselves by springs and wheels as doth a watch) have an artificiall life? For what is the Heart, but a Spring; and the Nerves, but so many Strings; and the Joynts, but so many Wheeles, giving motion to the whole Body, such as was intended by the Artificer? Art goes yet further, imitating that Rationall and most excellent worke of Nature, Man. (Hobbes 1651, p. 81)
So declared Thomas Hobbes in 1651 in the Introduction to his well-known work, Leviathan, published one year after René Descartes' death. Descartes was also interested in mechanical explanations of bodily processes and organic life. In fact, on the basis of his neuroanatomical and physiological studies, as well as philosophical arguments, Descartes had already argued that human and animal bodies could be mechanically understood as complicated and intricately designed machines (Descartes 1664). What differentiated Descartes from Hobbes lay in his belief that human beings, unlike non-human animals, were not merely bodies; they were unions of material bodies and immaterial souls. The immaterial soul was necessary for Descartes to explain the peculiar capacities and activities of the human mind. As such, materialist mechanical explanations could never suffice to account for the whole human being.
-
Perception has both unconscious and conscious aspects. In all cases, however, what we perceive is a model of reality. Through the brain's construction by evolution, we divide the world into two parts, our body and the outside world, but the perceptual process is the same in both cases. We perceive a construct usually driven by sensed data but always involving memory, goals, fears, expectations, and so on. As a first step toward Artificial Perception in man-made systems, we examine perception in general here.
-
Conscious behavior is hypothesized to be governed by the dynamics of the neural architecture of the brain. A general model of an artificial consciousness algorithm is presented, and applied to a one-dimensional feedback control system. A new learning algorithm for learning functional relations is presented and shown to be biologically grounded. The consciousness algorithm uses predictive simulation and evaluation to let the example system relearn new internal and external models after it is damaged.
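The abstract gives no code, but its core loop of "predictive simulation and evaluation" on a one-dimensional feedback control task can be sketched. The following is a minimal illustration under our own assumptions (a linear plant, an online-learned linear forward model, and "damage" modeled as a drop in plant gain), not the paper's actual algorithm:

```python
class ForwardModel:
    """Internal model x' = a*x + b*u, fit online by gradient descent (LMS)."""
    def __init__(self, lr=0.2):
        self.a, self.b, self.lr = 0.0, 0.0, lr

    def predict(self, x, u):
        return self.a * x + self.b * u

    def learn(self, x, u, x_next):
        # Reduce prediction error on the observed transition.
        err = self.predict(x, u) - x_next
        self.a -= self.lr * err * x
        self.b -= self.lr * err * u

def choose_action(model, x, target, candidates):
    # Predictive simulation: imagine each candidate action and keep
    # the one whose predicted outcome lies closest to the target.
    return min(candidates, key=lambda u: abs(model.predict(x, u) - target))

def run(plant_gain, steps=300, target=1.0):
    model, x = ForwardModel(), 0.0
    candidates = [u / 10 for u in range(-10, 11)]
    for _ in range(steps):
        u = choose_action(model, x, target, candidates)
        x_next = 0.9 * x + plant_gain * u   # true (unknown) plant dynamics
        model.learn(x, u, x_next)           # relearn the internal model online
        x = x_next
    return x

x_intact = run(plant_gain=0.5)    # regulation with the intact plant
x_damaged = run(plant_gain=0.2)   # after "damage" the model is relearned
```

Because the model is refit at every step, the controller recovers the target even when the plant's gain changes, which is the relearning-after-damage behavior the abstract describes.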
-
This paper investigates human consciousness in comparison with a robot's internal state. On the basis of Husserlian phenomenology, nine requirements for a model of consciousness are proposed from different technical aspects: self, intentionality, anticipation, feedback process, certainty, embodiment, otherness, emotionality and chaos. Consciousness-based Architecture (CBA) for mobile robots, developed previously, is a software architecture with an evolutionary hierarchy of the relationship between consciousness and behavior; it is analyzed here against the proposed requirements. Experiments with this architecture loaded on two mobile robots demonstrate the emergence of self, along with some degree of intentionality, anticipation, feedback process, embodiment and emotionality. Modification of CBA will be necessary to better explain the emergence of self in terms of the relationship between consciousness and behavior.
-
This paper addresses the relationship of consciousness to artificial life set in the context of art. Artificial life is as much a part of our quest for self-definition as an instrument in the construction of reality. In exploring the technology of life we are exploring the possibilities of what we might become. In our hypermediated, telematic culture, the self acquires an essentially non-linear identity. Telepresence and virtual reality, the avatars of Net life, present us with a distributed, multiple identity which in turn is producing a radically new art. This embodies an ‘interstitial practice’ set within the domain of artificial life, and located at the intersections of cognitive science, bio-engineering, telematics and metaphysics. Can artists find in artificial life, nanotechnology, robotics and molecular engineering the means towards a re-materialization of art, after its postmodern, screen-based dematerialization? Just as ideas of the ‘immaterial’ have dominated art discourse for the last 15 years, so questions of emergent form, intelligent structures and artificial life are shaping a new discourse, from which art is moving off the screen and back into the material world. Will the real significance of art's re-materialization be at the level of mind? Will artificial life only gain cultural significance when it gives rise to artificial mind and the construction of consciousness?
-
Baars (1988, 1997) has proposed a psychological theory of consciousness, called global workspace theory. The present study describes a software agent implementation of that theory, called “Conscious” Mattie (CMattie). CMattie operates in a clerical domain from within a UNIX operating system, sending messages and interpreting messages in natural language that organize seminars at a university. CMattie fleshes out global workspace theory with a detailed computational model that integrates contemporary architectures in cognitive science and artificial intelligence. Baars (1997) lists the psychological “facts that any complete theory of consciousness must explain” in his appendix to In the Theater of Consciousness; global workspace theory was designed to explain these “facts.” The present article discusses how the design of CMattie accounts for these facts and thereby the extent to which it implements global workspace theory.
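The central mechanism of global workspace theory, which CMattie fleshes out, is that specialist processes compete for a spotlight and the winner's content is broadcast to all specialists. A minimal sketch of that broadcast cycle (hypothetical names, not CMattie's actual implementation) might look like:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Coalition:
    content: str
    activation: float

@dataclass
class Specialist:
    name: str
    react: Callable[[str], None]          # called on each broadcast
    inbox: list = field(default_factory=list)

class GlobalWorkspace:
    def __init__(self):
        self.specialists, self.pending = [], []

    def register(self, s): self.specialists.append(s)
    def post(self, c): self.pending.append(c)

    def cycle(self):
        if not self.pending:
            return None
        # The most active coalition wins the spotlight...
        winner = max(self.pending, key=lambda c: c.activation)
        self.pending.clear()
        # ...and its content is broadcast to every specialist.
        for s in self.specialists:
            s.inbox.append(winner.content)
            s.react(winner.content)
        return winner.content

ws = GlobalWorkspace()
log = []
ws.register(Specialist("scheduler", log.append))
ws.register(Specialist("mailer", log.append))
ws.post(Coalition("seminar room changed", activation=0.4))
ws.post(Coalition("speaker cancelled", activation=0.9))
print(ws.cycle())   # → speaker cancelled
```

Only the winning content reaches "consciousness"; the losing coalition is discarded, while every registered specialist receives the broadcast and may react to it.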
-
Biosystems are unitary entities that are alive to some degree as a system. They occur at scales ranging from the molecular to the biospheric, and can be of natural, artificial or combined origin. The engineering of biosystems involves one or more of the activities of design, construction, operation, maintenance, repair, and upgrading. Engineering is usually done in order to achieve certain preconceived objectives by ensuring that the resultant systems possess particular features. This article concerns the engineering of biosystems so that they will be somewhat autonomous, or able to pursue their own goals in a dynamic environment. Central themes include: the computational abilities of a system; the virtual machinery, such as algorithms, that underlies these abilities (mind); and the actual computation that is performed (mentation). A significantly autonomous biosystem must be engineered to possess particular sets of computational abilities (faculties). These must be of sufficient sophistication (intelligence) to support the maintenance and use of a self-referencing internal model (consciousness), thereby increasing the potential for autonomy. Examples refer primarily to engineered ecosystems combined with technological control networks (ecocyborgs). The discussion focuses on clear working definitions of these concepts, their integration into a coherent lexicon (which has been lacking until now), and the exposition of an accompanying philosophy relevant to the engineering of the virtual aspects of biosystems.
-
The purpose of this article is to show why consciousness and thought are not manifested in digital computers. Analyzing the rationale for claiming that the formal manipulation of physical symbols in Turing machines would emulate human thought, the article attempts to show why this proved false. This is because the reinterpretation of ‘designation’ and ‘meaning’ to accommodate physical symbol manipulation eliminated their crucial functions in human discourse. Words have denotations and intensional meanings because the brain transforms the physical stimuli received from the microworld into a qualitative, macroscopic representation for consciousness. Lacking this capacity as programmed machines, computers have no representations for their symbols to designate and mean. Unlike human beings in which consciousness and thought, with their inherent content, have emerged because of their organic natures, serial processing computers or parallel distributed processing systems, as programmed electrical machines, lack these causal capacities.
-
The question “What is consciousness for?” is considered with particular relevance to objects created using interactive technologies. It is argued that an understanding of artificial life, with its attendant notion of robotic consciousness, is not separable from an understanding of human consciousness. The positions of Daniel Dennett and John Searle are compared. Dennett believes that by understanding the process of evolutionary design we can work towards an understanding of consciousness. Searle's view is that in most cases mental attributes such as consciousness are either dispositional or are observer relative. This opposition is taken as the basis for a discussion of the purposes of consciousness in general and how these might be manifest in human and robotic forms of life.
-
A memory-controlled, sensor/actuator machine senses conditions in its environment at given moments, and attempts to produce an action based upon its memory. However, a sensor/actuator machine will stop producing new behavior if its environment is removed. A sensor/sensor unit can be added to the sensor/actuator machine, forming a compound machine. The sensor/sensor unit produces a stream of internally created sensed conditions, which can replace the sensed conditions from the environment. This illusion of an environment is similar to consciousness. In addition, actuator/sensor and actuator/actuator units can be added to this compound machine to further enhance its ability to function without an environment. Predetermined and empirical memory cells can be distributed throughout the control units of this compound machine to provide instinctive and learned behavior. The internal and exterior behavior of this compound machine can be modified greatly by changing the cycle start and ramp signals that activate these different kinds of memory cells. These signals are similar in form to brain waves.
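The compound machine described above can be sketched in a few lines: a memory-controlled sensor/actuator unit maps sensed conditions to actions, and a sensor/sensor unit records transitions between sensed conditions so that, when the environment is removed, it can replay an internally generated stream in their place. All names and the dictionary-based memories here are our illustrative assumptions, not the author's design:

```python
class SensorActuator:
    """Memory-controlled unit mapping a sensed condition to an action."""
    def __init__(self, memory):
        self.memory = memory                  # condition -> action
    def act(self, condition):
        return self.memory.get(condition, "idle")

class SensorSensor:
    """Produces a stream of internally created sensed conditions."""
    def __init__(self, seed):
        self.last = seed
        self.memory = {}                      # condition -> next condition
    def observe(self, condition):
        self.memory[self.last] = condition    # learn observed transitions
        self.last = condition
    def imagine(self):
        # Replay the learned transition chain: the "illusion" of an environment.
        self.last = self.memory.get(self.last, self.last)
        return self.last

class CompoundMachine:
    def __init__(self, actions, seed):
        self.sa = SensorActuator(actions)
        self.ss = SensorSensor(seed)
    def step(self, condition=None):
        if condition is None:                 # environment removed:
            condition = self.ss.imagine()     # substitute an internal condition
        else:
            self.ss.observe(condition)
        return self.sa.act(condition)

m = CompoundMachine({"light": "approach", "dark": "retreat"}, seed="dark")
a1 = m.step("light")   # environment present
a2 = m.step("dark")
a3 = m.step()          # environment removed: behavior continues internally
```

With the environment present the machine acts on what it senses; with the environment removed, the sensor/sensor unit's internally generated conditions keep the sensor/actuator unit producing behavior, as the abstract describes.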
-
The main concern of this chapter is to determine whether consciousness in robots is possible. Several reasons why conscious robots are deemed impossible are examined, namely: robots are purely material things, and consciousness requires immaterial mind-stuff; robots are inorganic (by definition), and consciousness can exist only in an organic brain; robots are artefacts, and consciousness abhors an artefact because only something natural, born and not manufactured, could exhibit genuine consciousness; and robots will always be much too simple to be conscious. The author considers these assumptions unreasonable and inadequate, and gives counter-arguments against each. The author contends that it is more interesting to explore whether a theoretically interesting robot can be built, independently of the philosophical conundrum about whether it is conscious. The Cog project on a humanoid robot is thus comprehensively presented and examined in this chapter.
-
Arguments about whether a robot could ever be conscious have been conducted up to now in the factually impoverished arena of what is ‘possible in principle’. A team at MIT, of which I am a part, is now embarking on a long-term project to design and build a humanoid robot, Cog, whose cognitive talents will include speech, eye-coordinated manipulation of objects, and a host of self-protective, self-regulatory and self-exploring activities. The aim of the project is not to make a conscious robot, but to make a robot that can interact with human beings in a robust and versatile manner in real time, take care of itself, and tell its designers things about itself that would otherwise be extremely difficult if not impossible to determine by examination. Many of the details of Cog’s ‘neural’ organization will parallel what is known (or presumed known) about their counterparts in the human brain, but the intended realism of Cog as a model is relatively coarse-grained, varying opportunistically as a function of what we think we know, what we think we can build, and what we think doesn’t matter. Much of what we think will of course prove to be mistaken; that is one advantage of real experiments over thought experiments.
-
Two questions are distinguished: how to program a machine so that it behaves in a manner that would lead us to ascribe consciousness to it; and what is involved in saying that something is conscious. The distinction can be seen in cases where anaesthetics have failed to work on temporarily paralysed patients. Homeostatic behaviour is often cited as a criterion for consciousness, but is not itself sufficient. As the present difficulties in surmounting the ‘frame problem’ show, the ability to size up situations holistically is more important; so is the explanatory role of the concept. Consciousness confers evidential status: if we ascribed consciousness to an artefact, we should be prepared to believe it when it said its RAM was hurting, even though we could detect nothing wrong, contrary to our thinking of it as a mere artefact. A further difficulty arises from self-awareness and reflexivity.
-
Mind<>Computer: Attempts to mimic human intelligence through methods of classical computing have failed because implementing basic elements of rationality has proven resistant to the design criteria of machine intelligence. A radical definition of consciousness is proposed, describing awareness as the dynamic representation of a noumenon comprised of three base states, and not as itself fundamental, as generally assumed in the current reductionist view of the standard model, which has created the intractable hard problem of consciousness as defined by Chalmers. By clarifying the definition of matter, a broader ontological quantum theory removes immateriality from the Cartesian split, bringing mind into the physical realm for pragmatic investigation. Evidence suggests that the brain is a naturally occurring quantum computer; but since the brain is not paramount to awareness, it does not itself give rise to consciousness without the interaction of a nonlocal conscious process, because mind <> computer and cannot be reduced to brain states alone. The proposed cosmology of consciousness is indicative of a teleological principle as an inherent part of a conscious universe. By applying the parameters of quantum brain dynamics to the stack of a specialized hybrid electronic-optical quantum computer with a heterosoric molecular crystal core, consciousness emerges through entrainment of the nonlocal conscious processes. This ‘extracellular containment of natural intelligence’ probably represents the only viable direction for AI to simulate ‘conscious computing’, because true consciousness = life.
-
We consider only the relationship of consciousness to physical reality, whether physical reality is interpreted as the brain, artificial intelligence, or the universe as a whole. The difficulties with starting the analysis with physical reality on the one hand and with consciousness on the other are delineated. We consider how one may derive from the other. Concepts of universal or pure consciousness versus local or ego consciousness are explored, with the possibility that consciousness may be physically creative. We examine whether artificial intelligence can possess consciousness as an extension of the interrelationship between consciousness and the brain or material reality.