-
We should not be creating conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates.
-
The main concern of this chapter is to determine whether consciousness in robots is possible. Four reasons are commonly given for deeming conscious robots impossible: robots are purely material things, and consciousness requires immaterial mind-stuff; robots are inorganic (by definition), and consciousness can exist only in an organic brain; robots are artefacts, and only something natural, born rather than manufactured, could exhibit genuine consciousness; and robots will always be much too simple to be conscious. The author finds each of these assumptions unreasonable and inadequate, and offers a counter-argument to each. He contends that the more interesting question is whether a theoretically interesting robot can be built, independent of the philosophical conundrum about whether it would be conscious. The Cog humanoid-robot project is then presented and examined in detail in this chapter.
-
Arguments about whether a robot could ever be conscious have been conducted up to now in the factually impoverished arena of what is ‘possible in principle’. A team at MIT, of which I am a part, is now embarking on a long-term project to design and build a humanoid robot, Cog, whose cognitive talents will include speech, eye-coordinated manipulation of objects, and a host of self-protective, self-regulatory and self-exploring activities. The aim of the project is not to make a conscious robot, but to make a robot that can interact with human beings in a robust and versatile manner in real time, take care of itself, and tell its designers things about itself that would otherwise be extremely difficult if not impossible to determine by examination. Many of the details of Cog’s ‘neural’ organization will parallel what is known (or presumed known) about their counterparts in the human brain, but the intended realism of Cog as a model is relatively coarse-grained, varying opportunistically as a function of what we think we know, what we think we can build, and what we think doesn’t matter. Much of what we think will of course prove to be mistaken; that is one advantage of real experiments over thought experiments.