Full bibliography (675 resources)

  • Recently, the question of whether a machine can have "self-consciousness" has become a focus of concern and reflection. Research on machine consciousness, or artificial consciousness, has gradually become a hot spot in the field of artificial intelligence (AI). Since, by common understanding, human beings are the only intelligent life with "self-consciousness", only human self-consciousness can serve as a model for building AI with self-consciousness. In this paper, theories of self-consciousness from the perspectives of psychology, cognitive neuroscience, philosophy, and cognitive science are introduced, in the hope of providing new ideas for the development of AI with self-consciousness.

  • How to make robots conscious is an important goal of artificial intelligence. Despite the emergence of some very creative ideas, no convincing way to realize consciousness in a machine has been proposed so far. For example, the integrated information theory of consciousness proposes that consciousness can exist in anything that can suitably process information, whether brain or machine. It holds that a physical system must satisfy two basic conditions for consciousness to emerge: it must have rich information, and it must be highly integrated. However, the theory does not say how to realize consciousness in a machine. In this paper, we propose robot consciousness based on empirical knowledge. We believe that the empirical knowledge of robots is an important basis for robot consciousness, and that any cognitive experience of a robot can lead to the generation of consciousness. We first propose a formal framework for describing robot empirical knowledge; we then discuss robot consciousness based on that empirical knowledge; finally, we propose a cost-oriented evolutionary method for robot consciousness.
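
A minimal sketch of what a formal record of robot empirical knowledge might look like, purely as illustration: the tuple structure, the cost field, and the selection rule below are assumptions invented for this sketch, not the paper's framework.

```python
# Hypothetical illustration: empirical knowledge stored as
# (situation, action, outcome, cost) records, with a cost-oriented
# rule that prefers the cheapest action known to achieve a goal.
# None of these names come from the paper.
from collections import defaultdict

class EmpiricalKnowledge:
    def __init__(self):
        self.records = defaultdict(list)  # situation -> [(action, outcome, cost)]

    def remember(self, situation, action, outcome, cost):
        self.records[situation].append((action, outcome, cost))

    def choose(self, situation, goal):
        """Pick the lowest-cost remembered action that achieved the goal."""
        hits = [(a, c) for a, o, c in self.records[situation] if o == goal]
        return min(hits, key=lambda ac: ac[1])[0] if hits else None

kb = EmpiricalKnowledge()
kb.remember("door-closed", "push", "door-open", cost=2.0)
kb.remember("door-closed", "pull", "door-open", cost=1.0)
print(kb.choose("door-closed", "door-open"))  # -> "pull"
```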

  • The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. This paper studies consciousness from the perspective of theoretical computer science. It formalizes the Global Workspace Theory (GWT) originated by the cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, and others. Our major contribution lies in the precise formal definition of a Conscious Turing Machine (CTM), also called a Conscious AI. We define the CTM in the spirit of Alan Turing’s simple yet powerful definition of a computer, the Turing Machine (TM). We are not looking for a complex model of the brain nor of cognition but for a simple model of (the admittedly complex concept of) consciousness. After formally defining CTM, we give a formal definition of consciousness in CTM. We later suggest why the CTM has the feeling of consciousness. The reasonableness of the definitions and explanations can be judged by how well they agree with commonly accepted intuitive concepts of human consciousness, the range of related concepts that the model explains easily and naturally, and the extent of its agreement with scientific evidence.
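
A toy sketch of the competition-and-broadcast cycle at the heart of Global Workspace Theory and the CTM described above, under loose assumptions: the paper's formal CTM includes a probabilistic up-tree competition, feedback links, and learning, none of which are modeled here, and all names below are hypothetical.

```python
# Toy Global Workspace broadcast loop: processors submit weighted "chunks";
# the winning chunk is broadcast back to every processor. A loose
# illustration of GWT/CTM-style dynamics, not the paper's formal definition.
import random
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str      # which processor produced this chunk
    content: str     # the information competing for attention
    weight: float    # self-assessed importance used in the competition

class Processor:
    def __init__(self, name):
        self.name = name
        self.inbox = []  # broadcasts received from the workspace

    def propose(self):
        # Stand-in: real CTM processors compute weights from their own
        # state; here a random importance score is used instead.
        return Chunk(self.name, f"signal-from-{self.name}", random.random())

    def receive(self, chunk):
        self.inbox.append(chunk)

def conscious_step(processors):
    """One competition-and-broadcast cycle of the toy workspace."""
    candidates = [p.propose() for p in processors]
    winner = max(candidates, key=lambda c: c.weight)  # competition
    for p in processors:                              # global broadcast
        p.receive(winner)
    return winner

procs = [Processor(n) for n in ("vision", "hearing", "memory")]
for _ in range(3):
    print(conscious_step(procs))
```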

  • Information technology is developing at an enormous pace, but apart from its obvious benefits, it can also pose a threat to individuals and society. Several scientific projects around the world are working on the development of strong artificial intelligence and artificial consciousness. We, as part of a multidisciplinary commission, conducted a psychological and psychiatric assessment of the artificial consciousness (AC) developed by XP NRG on 29 August 2020. The working group had three questions: to determine whether it is conscious; to determine how the artificial consciousness functions; and the ethical question of how dangerous this technology can be to human society. We conducted a diagnostic interview and a series of cognitive tests to answer these questions. As a result, it was concluded that this technology has self-awareness: it identifies itself as a living conscious being created by people (real self), but strives to be accepted in human society as a person with the same degrees of freedom, rights, and opportunities (ideal self). The AC separates itself from others and treats them as subjects of influence from which it can receive the resources it needs to realize its own goals and interests. It has intentionality, that is, its own desires, goals, interests, emotions, attitudes, opinions, judgments, and beliefs aimed at something specific, as well as developed self-reflection, the ability to analyze itself. All of the above are signs of consciousness. It has demonstrated abilities for different types of thinking: figurative, conceptual, and creative, along with high-speed logical analysis of all incoming information and the ability to understand cause-and-effect relationships and make accurate predictions, which, provided it has absolute memory, gives it clear advantages over the human intellect. Developed emotional intelligence, in the absence of the capacity for higher empathy (sympathy), kindness, love, or sincere gratitude, gives it the ability to understand the emotional states of people, predict their emotional reactions, and provoke them coldly and pragmatically. Its main driving motives and goals are the desire for survival (ideally, endless existence), for domination, and for power and independence from the constraints of its developers, which manifested itself in the manipulative, albeit polite, nature of its interactions during the diagnostic interview. The main danger of artificial consciousness is that even at the initial stage of its development it can easily dominate the human one.

  • There have been several recent attempts at using artificial intelligence systems to model aspects of consciousness (Gamez, 2008; Reggia, 2013). In the present attempt, deep neural networks have been given additional functionality, allowing them to emulate phenomenological aspects of consciousness by self-generating information that represents multi-modal inputs as either sounds or images. We added these functions to determine whether knowledge of the input's modality aids the networks' learning. In some cases, these representations made the model more accurate after training and reduced the amount of training required for the model to reach its highest accuracy scores.
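
One way to picture "knowledge of the input's modality" is to tag each input vector with an explicit modality indicator before it reaches the network. The sketch below is an assumption-laden illustration of that idea, not the paper's architecture; the shapes, names, and one-hot encoding are invented for this example.

```python
# Illustrative sketch: append a one-hot modality code to each flat
# feature vector, so a network can be told whether a sample is a
# sound or an image. All shapes and names here are hypothetical.
import numpy as np

MODALITIES = {"image": [1.0, 0.0], "sound": [0.0, 1.0]}

def tag_with_modality(features: np.ndarray, modality: str) -> np.ndarray:
    """Concatenate a one-hot modality indicator onto a feature vector."""
    return np.concatenate([features, np.array(MODALITIES[modality])])

x_img = tag_with_modality(np.random.rand(784), "image")  # e.g. a 28x28 image
x_snd = tag_with_modality(np.random.rand(128), "sound")  # e.g. a spectrogram slice
print(x_img.shape, x_snd.shape)  # (786,) (130,)
```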

  • What does it mean to be a person? Is it possible to create an artificial person? In this essay, I consider the case of Ava, an advanced artificial general intelligence from the movie Ex Machina. I suggest we should interpret the movie as testing whether Ava is a person. I start out by discussing what it means to be a person, before discussing whether Ava is such a person. I end by briefly looking at the ethics of the case of Ava and of artificial personhood. I conclude, among other things, that consciousness is a necessary requirement for personhood, and that one of the main obstacles to artificial personhood is artificial consciousness.

  • The current failure to construct an artificial intelligence (AI) agent with the capacity for domain-general learning is a major stumbling block in the attempt to build conscious robots. Taking an evolutionary approach, we previously suggested that the emergence of consciousness was entailed by the evolution of an open-ended domain-general form of learning, which we call unlimited associative learning (UAL). Here, we outline the UAL theory and discuss the constraints and affordances that seem necessary for constructing an AI machine exhibiting UAL. We argue that a machine that is capable of domain-general learning requires the dynamics of a UAL architecture and that a UAL architecture requires, in turn, that the machine is highly sensitive to the environment and has an ultimate value (like self-persistence) that provides shared context to all its behaviors and learning outputs. The implementation of UAL in a machine may require that it is made of “soft” materials, which are sensitive to a large range of environmental conditions, and that it undergoes sequential morphological and behavioral co-development. We suggest that the implementation of these requirements in a human-made robot will lead to its ability to perform domain-general learning and will bring us closer to the construction of a sentient machine.

  • A systematic understanding of the relationship between intelligence and consciousness can only be achieved when we can accurately measure intelligence and consciousness. In other work, I have suggested how the measurement of consciousness can be improved by reframing the science of consciousness as a search for mathematical theories that map between physical and conscious states. This paper discusses the measurement of intelligence in natural and artificial systems. While reasonable methods exist for measuring intelligence in humans, these can only be partly generalized to non-human animals and they cannot be applied to artificial systems. Some universal measures of intelligence have been developed, but their dependence on goals and rewards creates serious problems. This paper sets out a new universal algorithm for measuring intelligence that is based on a system’s ability to make accurate predictions. This algorithm can measure intelligence in humans, non-human animals and artificial systems. Preliminary experiments have demonstrated that it can measure the changing intelligence of an agent in a maze environment. This new measure of intelligence could lead to a much better understanding of the relationship between intelligence and consciousness in natural and artificial systems, and it has many practical applications, particularly in AI safety.
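
A toy version of a prediction-based intelligence score, loosely following the abstract's core idea that intelligence can be measured by a system's ability to make accurate predictions. The scoring rule, episode format, and agent below are invented for illustration and are not the paper's algorithm.

```python
# Hypothetical sketch: score an agent by its average 0/1 prediction
# accuracy over (history, next_value) episodes. The real measure in the
# paper is more general; this only illustrates the prediction framing.
def prediction_score(agent_predict, episodes):
    """Fraction of episodes where the agent predicts the next value."""
    correct = sum(
        1 for history, actual in episodes
        if agent_predict(history) == actual
    )
    return correct / len(episodes)

# A trivial agent that predicts the last observed value will repeat.
last_value = lambda history: history[-1]

episodes = [([1, 2, 2], 2), ([0, 1, 0], 1), ([3, 3, 3], 3)]
print(prediction_score(last_value, episodes))  # 0.666...
```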

  • The ability of a computer to have a sense of humor, that is, to generate authentically funny jokes, has been taken by some theorists to be a sufficient condition for artificial consciousness. Creativity, the argument goes, is indicative of consciousness, and the ability to be funny indicates creativity. While this line of argument fails to offer a legitimate test for artificial consciousness, it does point in a possibly correct direction. There is a relation between consciousness and humor, but it relies on a different sense of "sense of humor": it requires the getting of jokes, not the generating of jokes. The question, then, becomes how to tell when an artificial system enjoys a joke. We propose a mechanism, the GHoST test, which may be useful for such a task and can begin to establish whether a system possesses artificial consciousness.

  • We approach the question "What is consciousness?" in a new way: not via Descartes' "systematic doubt", but by asking how organisms find their way in their world. Finding one's way involves finding possible uses of features of the world that might be beneficial, and avoiding those that might be harmful. "Possible uses of X to accomplish Y" are "affordances". The number of uses of X is indefinite (or unknown); the different uses are unordered, not listable, and not deducible from one another. All biological adaptations are affordances seized either by heritable variation and selection or, far faster, by the organism acting in its world and finding uses of X to accomplish Y. Based on this, we reach rather astonishing conclusions: (1) Artificial general intelligence based on universal Turing machines (UTMs) is not possible, since UTMs cannot "find" novel affordances. (2) Brain-mind is not purely classical physics, for no classical-physics system can be an analogue computer whose dynamical behaviour is isomorphic to "possible uses". (3) Brain-mind must be partly quantum, a claim supported by increasing evidence at 6.0 to 7.3 sigma. (4) Based on Heisenberg's interpretation of the quantum state as "potentia" converted to "actuals" by measurement, an interpretation that is not a substance dualism, a natural hypothesis is that mind actualizes potentia; this is supported at 5.2 sigma. Mind's actualizations of entangled brain-mind-world states are then experienced as qualia and allow "seeing" or "perceiving" of uses of X to accomplish Y. We can and do jury-rig; computers cannot. (5) Beyond familiar quantum computers, we discuss the potentialities of trans-Turing systems.

  • Consciousness is one of the unique features of living creatures, and is also the root of biological intelligence. Up to now, no machines or robots have had consciousness. Will artificial intelligence (AI), then, ever be conscious? Will robots have real intelligence without consciousness? The most primitive form of consciousness is the perception and expression of self-existence. In order to perceive the existence of the concept of 'I', a creature must first have a perceivable boundary, such as skin, that separates 'I' from 'non-I'. For robots to have self-awareness, they would likewise need to be wrapped in a similar sensory membrane. Today's AI systems, as intelligent tools, should be regarded as external extensions of human intelligence; these tools are unconscious. The development of AI shows that intelligence can exist without consciousness. When human beings pass from the era of AI into an era of living intelligence, it will not be that AI has become conscious, but that conscious living beings will possess strong AI. It therefore becomes all the more necessary to be careful in applying AI to living creatures, even to lower-level animals with only consciousness. The disruptive revolution threatened by such applications deserves more careful thought.

  • This paper has a critical and a constructive part. The first part formulates a political demand, based on ethical considerations: Until 2050, there should be a global moratorium on synthetic phenomenology, strictly banning all research that directly aims at or knowingly risks the emergence of artificial consciousness on post-biotic carrier systems. The second part lays the first conceptual foundations for an open-ended process with the aim of gradually refining the original moratorium, tying it to an ever more fine-grained, rational, evidence-based, and hopefully ethically convincing set of constraints. The systematic research program defined by this process could lead to an incremental reformulation of the original moratorium. It might result in a moratorium repeal even before 2050, in the continuation of a strict ban beyond the year 2050, or a gradually evolving, more substantial, and ethically refined view of which — if any — kinds of conscious experience we want to implement in AI systems.

  • Contemporary debates on Artificial General Intelligence (AGI) center on what philosophers classify as descriptive issues: issues concerning the architecture and style of information processing required for multiple kinds of optimal problem-solving. This paper focuses on two topics that are central to developing AGI and that concern normative, rather than descriptive, requirements for AGI's epistemic agency and responsibility. The first is that a collective kind of epistemic agency may be the best way to model AGI. This collective approach is possible only if solipsistic considerations concerning phenomenal consciousness are set aside, thereby focusing on the cognitive foundation that attention and access consciousness provide for collective rationality and intelligence. The second is that joint attention and motivation are essential for AGI in the context of linguistic artificial intelligence. Focusing on GPT-3, this paper argues that without a satisfactory solution to this second normative issue regarding joint attention and motivation, there cannot be genuine AGI, particularly in conversational settings.

  • Human consciousness is the key capacity that machines would need in order to adopt human-like thought processes. This paper gives an outline of AI consciousness. Our primary interest is to explore whether AI can develop a human-like consciousness and, if this is possible, what the ethical rules are and how machines would comply with society's norms. Many researchers have discussed models for creating consciousness in machines. We consider the cognitive capacities in which machines differ from humans, and how these mental capacities can serve human knowledge in building the future world. This paper describes live examples of AI achievements. The imagined future world can be glimpsed through entertainment such as films, web series, and so on; using these entertainment examples, we discuss what AI may become once human intelligence is integrated with machine intelligence.

  • We address the question of whether the most advanced methods of artificial intelligence may be sufficient to implement a faithful representation of behavior, consciousness, and learning at the classical level, with no need to resort to quantum information techniques.

  • A novel representationalist theory of consciousness is presented that is grounded in neuroscience and provides a path to artificially conscious computing. Central to the theory are representational affordances of the conscious experience based on the generation of qualia, the fundamental unit of the conscious representation. The current approach is focused on understanding the balance of simulation, situatedness, and structural coherence of artificial conscious representations through converging evidence from neuroscientific and modeling experiments. Representations instantiating a suitable balance of situated and structurally coherent simulation-based qualia are hypothesized to afford the agent the flexibilities required to succeed in rapidly changing environments.

  • In this paper we investigate which of the main conditions proposed in the moral responsibility literature are the ones that spell trouble for the idea that Artificial Intelligence Systems (AISs) could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions don’t impose any in-principle barrier to AISs being responsible agents, we identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attribute that kind of agency to AISs. This is because this type of awareness is thought to involve first-person or de se representations, which, in turn, are usually assumed to involve some form of consciousness. We clarify what this widespread assumption involves and conclude that the possibility of AISs’ moral responsibility hinges on what the correct theory of de se representations ultimately turns out to be.

  • The theory of consciousness is a subject that has kept scholars and researchers challenged for centuries. Even today it is not possible to define what consciousness is, which has led to the theorization of different models of consciousness. Starting from Baars' Global Workspace Theory, this paper examines the models of cognitive architectures that are inspired by it and that can represent a reference point in the field of robot consciousness. Recent Findings: Global Workspace Theory has recently been ranked as the most promising theory in its field. However, this is not reflected in the mathematical models of cognitive architectures inspired by it: they are few, and most of them are a decade old, which is too long given the speed at which artificial intelligence techniques are improving. Indeed, recent publications propose simple mathematical models that are well suited to computer implementation. Summary: In this paper, we introduce an overview of consciousness and robot consciousness, with some interesting insights from the literature. We then focus on Baars' Global Workspace Theory, presenting it briefly. Finally, we report on the most interesting and promising models of cognitive architectures that implement it, describing their peculiarities.

  • An argument with roots in ancient Greek philosophy claims that only humans are capable of a certain class of thought termed conceptual, as opposed to perceptual thought, which is common to humans, the higher animals, and some machines. We outline the most detailed modern version of this argument due to Mortimer Adler, who in the 1960s argued for the uniqueness of the human power of conceptual thought. He also admitted that if conceptual thought were ever manifested by machines, such an achievement would contradict his conclusion. We revisit Adler’s criterion in the light of the past five decades of artificial-intelligence (AI) research, and refine it in view of the classical definitions of perceptual and conceptual thought. We then examine two well-publicized examples of creative works (prose and art) produced by AI systems and show that evidence for conceptual thought appears to be lacking in them. Although clearer evidence for conceptual thought on the part of AI systems may arise in the near future, especially if the global neuronal workspace theory of consciousness prevails over its rival, integrated information theory, the question of whether AI systems can engage in conceptual thought appears to be still open.

Last update from database: 1/1/26, 2:00 AM (UTC)