
Full bibliography (615 resources)

  • Asking whether a machine can be conscious is rather like asking whether one has stopped beating one's wife: The question is so heavy with assumptions that either answer would be incriminating! The answer, of course, is: It depends entirely on what you mean by 'machine'! If you mean the current generation of man-made devices (toasters, ovens, cars, computers, today's robots), the answer is: almost certainly not.

  • After discussing various types of consciousness, several approaches to machine consciousness, software agents, and global workspace theory, we describe a software agent, IDA, that is “conscious” in the sense of implementing that theory of consciousness. IDA perceives, remembers, deliberates, negotiates, and selects actions, sometimes “consciously.” She uses a variety of mechanisms, each of which is briefly described. It’s tempting to think of her as a conscious artifact. Is such a view in any way justified? The remainder of the paper considers this question.

  • Could a machine have an immaterial mind? The author argues that true conscious machines can be built, but rejects artificial intelligence and classical neural networks in favour of the emulation of the cognitive processes of the brain: the flow of inner speech, inner imagery and emotions. This results in a non-numeric meaning-processing machine with distributed information representation and system reactions. It is argued that this machine would be conscious; it would be aware of its own existence and its mental content and perceive this as immaterial. Novel views on consciousness and the mind-body problem are presented. This book is a must for anyone interested in consciousness research and the latest ideas in the forthcoming technology of mind.

  • Baars (1988, 1997) has proposed a psychological theory of consciousness, called global workspace theory. The present study describes a software agent implementation of that theory, called “Conscious” Mattie (CMattie). CMattie operates in a clerical domain from within a UNIX operating system, sending and interpreting natural-language messages that organize seminars at a university. CMattie fleshes out global workspace theory with a detailed computational model that integrates contemporary architectures in cognitive science and artificial intelligence. Baars (1997) lists the psychological “facts that any complete theory of consciousness must explain” in his appendix to In the Theater of Consciousness; global workspace theory was designed to explain these “facts.” The present article discusses how the design of CMattie accounts for these facts and thereby the extent to which it implements global workspace theory.

  • The question “What is consciousness for?” is considered with particular relevance to objects created using interactive technologies. It is argued that an understanding of artificial life, with its attendant notion of robotic consciousness, is not separable from an understanding of human consciousness. The positions of Daniel Dennett and John Searle are compared. Dennett believes that by understanding the process of evolutionary design we can work towards an understanding of consciousness. Searle's view is that in most cases mental attributes such as consciousness are either dispositional or observer-relative. This opposition is taken as the basis for a discussion of the purposes of consciousness in general and how these might be manifest in human and robotic forms of life.

  • The main concern of this chapter is to determine whether consciousness in robots is possible. Several reasons why conscious robots are deemed impossible are presented, namely: robots are purely material things, and consciousness requires immaterial mind-stuff; robots are inorganic (by definition), and consciousness can exist only in an organic brain; robots are artefacts, and consciousness abhors an artefact because only something natural, born and not manufactured, could exhibit genuine consciousness; and robots will always be much too simple to be conscious. The author considers these assumptions unreasonable and inadequate, and counter-arguments to each are given. The author contends that it is more interesting to explore whether a theoretically interesting robot can be built, independent of the philosophical conundrum about whether it is conscious. The Cog project on a humanoid robot is thus comprehensively presented and examined in this chapter.

  • Arguments about whether a robot could ever be conscious have been conducted up to now in the factually impoverished arena of what is ‘possible in principle’. A team at MIT, of which I am a part, is now embarking on a long-term project to design and build a humanoid robot, Cog, whose cognitive talents will include speech, eye-coordinated manipulation of objects, and a host of self-protective, self-regulatory and self-exploring activities. The aim of the project is not to make a conscious robot, but to make a robot that can interact with human beings in a robust and versatile manner in real time, take care of itself, and tell its designers things about itself that would otherwise be extremely difficult if not impossible to determine by examination. Many of the details of Cog’s ‘neural’ organization will parallel what is known (or presumed known) about their counterparts in the human brain, but the intended realism of Cog as a model is relatively coarse-grained, varying opportunistically as a function of what we think we know, what we think we can build, and what we think doesn’t matter. Much of what we think will of course prove to be mistaken; that is one advantage of real experiments over thought experiments.

  • We consider only the relationship of consciousness to physical reality, whether physical reality is interpreted as the brain, artificial intelligence, or the universe as a whole. The difficulties with starting the analysis with physical reality on the one hand and with consciousness on the other are delineated. We consider how one may derive from the other. Concepts of universal or pure consciousness versus local or ego consciousness are explored, with the possibility that consciousness may be physically creative. We examine whether artificial intelligence can possess consciousness as an extension of the interrelationship between consciousness and the brain or material reality.

  • We should not be creating conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates.

  • We might create artificial systems which can suffer. Since AI suffering could be astronomical in scale, the moral stakes are huge. Thus, we need an approach which tells us what to do about the risk of AI suffering. I argue that such an approach should ideally satisfy four desiderata: beneficence, action-guidance, feasibility and consistency with our epistemic situation. Scientific approaches to AI suffering risk hold that we can improve our scientific understanding of AI, and of AI suffering in particular, to decrease AI suffering risks. However, such approaches tend to conflict with either the desideratum of consistency with our epistemic situation or that of feasibility. Thus, we also need an explicitly ethical approach to AI suffering risk. Such an approach tells us what to do in the light of profound scientific uncertainty about AI suffering. After discussing multiple views, I express support for a hybrid approach, based partly on the maximization of expected value and partly on a deliberative approach to decision-making.

  • Artificial intelligence (AI) can be characterized as a multidisciplinary approach combining computer science and robust datasets that seeks to make machines capable of performing tasks that ordinarily require human intelligence. Such tasks include the capacity to learn, adapt, reason, comprehend, and understand abstract concepts, as well as responsiveness to complex human attributes such as attention, emotion, and creativity. The promising utility of artificial intelligence in medical services has been illustrated by potential advantages in personalized medicine, drug discovery, and the analysis of huge datasets, in addition to possible applications to improve diagnoses and clinical decisions.1 A recently debated issue in the digitalized world has been artificial intelligence, especially ChatGPT. ChatGPT is an AI model trained on huge text datasets in multiple languages, with the capacity to generate human-like responses to text input, developed by OpenAI (OpenAI, L.L.C., San Francisco, CA, USA). Its name derives from its being a chatbot (a program able to understand and generate responses through a text-based interface) built on the generative pre-trained transformer (GPT) architecture.2 ChatGPT can respond to questions and compose various kinds of written content, including articles, social media posts, essays, code, and emails. Researchers and the scholarly community have had mixed reactions to this tool regarding its benefits versus its risks. On the one hand, ChatGPT, among other large language models (LLMs), can be helpful in conversational and writing tasks, increasing the efficiency and accuracy of the required output. It is not a web search engine, reference librarian, or even Wikipedia; presenting factual information is not its purpose.3 As such, several teachers and content experts have already found flaws in the mathematical and scientific output it produces. Teachers have also found that it will generate citations and reference lists that look genuine but do not exist. Furthermore, the utility of AI chatbots in the medical field is a fascinating area to test, given the enormous amount of information and the diverse concepts that medical students are expected to grasp. Microsoft has likewise announced that ChatGPT will be incorporated into Bing to create a richer search and learning experience.4 With the world's biggest technology companies competing to integrate GPT technology into their tools, new avenues of AI exploration are on the horizon for the field of education. While it can be useful in many ways, there are several risks in using ChatGPT, for example: assuming that it produces trustworthy results, privileging AI-generated text over human-written text, disclosing personal and sensitive data, disregarding its terms of use, and over-relying on automation. Furthermore, security concerns and the potential for cyberattacks spreading misinformation via LLMs should also be considered.
In medical care practice and scholarly writing, factual mistakes, ethical issues, and the fear of misuse, including the spread of misinformation, should be considered.5 ChatGPT and its successors can provide teachers and students with an equal opportunity to improve their learning, a significant level of writing support, and direction for innovative thinking. Nevertheless, as with the introduction of any new technology, its use carries many risks and the potential for abuse. Misinformation and bias found in ChatGPT's responses, combined with instances of cheating and plagiarism, have worried educational professionals. While some districts and organizations have acted quickly to ban ChatGPT, we instead believe, along with Kranzberg (1986), that "technology is neither good nor bad; nor is it neutral" (p. 545).6 As the world debates the educational and societal consequences of ChatGPT and artificial intelligence, what remains clear is that the development and improvement of this kind of technology show no signs of slowing down. Teachers, administrators, and policymakers must proactively seek to educate themselves and their students on how to use these tools both ethically and responsibly. Teachers should also understand the limits of AI tools and that, while every technology presents both affordances and challenges, each also comes with its own embedded risks.

  • The article attempts to complement modern discussions of the criteria for strong AI with debates from the philosophy of consciousness. The well-known positions of D. Dennett (the “multiple drafts” model), J. Searle (causal emergent description), and D. Chalmers (a synthetic approach to understanding consciousness) are compared with the history of the formation of the AI problem. Despite the wide discussion of the problems of consciousness and of artificial forms of intelligence (strong and weak), philosophers' theories and arguments about the psychophysiological problem remain relevant. It is assumed that clarifying the mechanism of the analytical work of consciousness, the creative potential of the individual, the ability to grasp a variety of phenomena in categorical forms, and the construction of axiomatic and synthetic judgments will expand the tools of machine learning. To complement existing ideas about consciousness in the context of the prevalence of information approaches (D.I. Dubrovsky) and the analytical tradition (V.V. Vasiliev), the key positions on the psychophysiological problem identified in the history of German and Russian philosophy are given. Given the complexity and breadth of the problems identified (the definition of consciousness, the psychophysiological problem, the definition of AI, the demarcation of weak and strong forms of AI, the importance of language for building structures of thinking, and analog thinking and its capabilities), the content of the article is limited to analyzing emerging trends in philosophy and identifying prospects for further work on the problem.

  • In this paper, we use the recent appearance of LLMs and GPT-equipped robotics to raise questions about the nature of semantic meaning and how this relates to issues concerning artificially-conscious machines. To do so, we explore how a phenomenology constructed out of the association of qualia (defined as somatically-experienced sense data) and situated within a 4e enactivist program gives rise to intentional behavior. We argue that a robot without such a phenomenology is semantically empty and, thus, cannot be conscious in any way resembling human consciousness. Finally, we use this platform to address and supplement widely-discussed concerns regarding the dangers of attempting to produce artificially-conscious machines.

  • This paper examines Kazuo Ishiguro's Klara and the Sun through the lens of posthumanism. It uses the method of textual analysis to read Ishiguro's text as a posthuman novel depicting a posthuman society in which the boundaries between the human and the nonhuman are blurred. The basic argument is that the aim of Ishiguro's text is two-fold: while it clearly illustrates the inability of the humanoid robot to attain human consciousness, it also attempts to dismantle the anthropocentric view of man. The findings show that Klara, the narrator-protagonist, is used as a tool to raise certain questions, such as: can humanoids act humanly? Can a 'humanoid machine' attain consciousness? And, more importantly, what does it mean to be human in the first place? In doing so, the story attempts to showcase the ruptured boundaries between human and nonhuman and the changing ideas of humankind and its entanglement with the nonhuman world. Further, the interaction between Klara (an Artificial Friend, or AF) and the other characters in the story is developed in such a way as to illustrate not only the shortcomings of humans regarding faith and affection but, more importantly, the limits of the nonhuman machine. It dismisses the current claim among technology experts that artificial intelligence will soon be able to produce a human-like robot that shares human emotional signals and reacts exactly like humans. The story puts it simply: despite the defects of humans, nothing can replace them, as those artificial friends fundamentally lack the kinds of experience that give rise to human-like affect and emotion.

  • With Large Language Models (LLMs) exhibiting astounding abilities in human language processing and generation, a crucial debate has emerged: do they truly understand what they process, and can they be conscious? While the nature of consciousness remains elusive, this synthetic article sheds light on its subjective aspect as well as on some aspects of LLMs' understanding. Indeed, it can be shown, under specific conditions, that a cognitive system does not have any subjective consciousness. To this purpose, the principle of a proof, based on a variation of John Searle's Chinese Room thought experiment, will be developed. The demonstration will be carried out on a transformer-architecture-based language model; however, it could be extended to many kinds of cognitive systems with known architecture and functioning. The main conclusions are that while transformer-architecture-based LLMs lack subjective consciousness, owing, in a nutshell, to the absence of a central subject, they exhibit a form of “asubjective phenomenal understanding” demonstrable through various tasks and tests. This opens a new perspective on the nature of understanding itself, which can be uncoupled from any subjective experience.

Last update from database: 5/19/25, 5:58 AM (UTC)