Full bibliography (716 resources)
-
Consciousness is one of the unique features of living creatures and the root of biological intelligence. To date, no machine or robot has had consciousness. Will artificial intelligence (AI) ever become conscious? Can robots have real intelligence without consciousness? The most primitive form of consciousness is the perception and expression of self-existence. To perceive the concept of 'I', a creature must first have a perceivable boundary, such as skin, that separates 'I' from 'non-I'. For robots to have self-awareness, they would likewise need to be wrapped in a comparable sensory membrane. Today, AI systems are intelligent tools and should be regarded as external extensions of human intelligence; these tools are unconscious. The development of AI shows that intelligence can exist without consciousness. When human beings move from the era of AI into an era of living intelligence, it is not that AI has become conscious, but that conscious living beings will possess strong AI. It therefore becomes all the more necessary to be careful in applying AI to living creatures, even to lower-level animals that possess only consciousness. The disruptive revolution that such applications could bring calls for more careful thought.
-
This paper has a critical and a constructive part. The first part formulates a political demand, based on ethical considerations: until 2050, there should be a global moratorium on synthetic phenomenology, strictly banning all research that directly aims at, or knowingly risks, the emergence of artificial consciousness on post-biotic carrier systems. The second part lays the first conceptual foundations for an open-ended process aimed at gradually refining the original moratorium, tying it to an ever more fine-grained, rational, evidence-based, and hopefully ethically convincing set of constraints. The systematic research program defined by this process could lead to an incremental reformulation of the original moratorium. It might result in a repeal of the moratorium even before 2050, in the continuation of a strict ban beyond the year 2050, or in a gradually evolving, more substantial, and ethically refined view of which kinds of conscious experience, if any, we want to implement in AI systems.
-
Contemporary debates on Artificial General Intelligence (AGI) center on what philosophers classify as descriptive issues. These issues concern the architecture and style of information processing required for multiple kinds of optimal problem-solving. This paper focuses on two topics central to developing AGI that concern normative, rather than descriptive, requirements for an AGI's epistemic agency and responsibility. The first is that a collective kind of epistemic agency may be the best way to model AGI. This collective approach is possible only if solipsistic considerations concerning phenomenal consciousness are ignored, thereby focusing on the cognitive foundation that attention and access consciousness provide for collective rationality and intelligence. The second is that joint attention and motivation are essential for AGI in the context of linguistic artificial intelligence. Focusing on GPT-3, this paper argues that without a satisfactory solution to this second normative issue regarding joint attention and motivation, there cannot be genuine AGI, particularly in conversational settings.
-
Human consciousness is the significant capacity that would allow machines to adopt a human-like thought process. This paper gives an outline of AI consciousness. Our primary interest is to explore whether AI can develop a human-like consciousness and, if it can, what the ethical rules are and how machines would settle on society's norms. Many researchers have discussed models for creating consciousness in machines. We consider the cognitive capacities in which machines differ from humans, and how those mental capacities serve human knowledge in building an advanced future world. This paper describes live instances of AI achievements. A future world of imagination can already be glimpsed in entertainment such as films and web series; using these entertainment examples, we discuss what AI may become once human intelligence is integrated with machine intelligence.
-
This work addresses the question of whether the most advanced methods of Artificial Intelligence may be sufficient to implement a faithful representation of behavior, consciousness, and learning at the classical level, with no need to resort to quantum information techniques.
-
A novel representationalist theory of consciousness is presented that is grounded in neuroscience and provides a path to artificially conscious computing. Central to the theory are representational affordances of conscious experience based on the generation of qualia, the fundamental units of conscious representation. The current approach focuses on understanding the balance of simulation, situatedness, and structural coherence of artificial conscious representations through converging evidence from neuroscientific and modeling experiments. Representations instantiating a suitable balance of situated and structurally coherent simulation-based qualia are hypothesized to afford the agent the flexibility required to succeed in rapidly changing environments.
-
In this paper we investigate which of the main conditions proposed in the moral responsibility literature are the ones that spell trouble for the idea that Artificial Intelligence Systems (AISs) could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions don’t impose any in-principle barrier to AISs being responsible agents, we identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attribute that kind of agency to AISs. This is because this type of awareness is thought to involve first-person or de se representations, which, in turn, are usually assumed to involve some form of consciousness. We clarify what this widespread assumption involves and conclude that the possibility of AISs’ moral responsibility hinges on what the correct theory of de se representations ultimately turns out to be.
-
Most scientific theories of consciousness are challenging to apply outside the human case insofar as non-human systems (both biological and artificial) are unlikely to implement human architecture precisely, an issue I call the specificity problem. After providing some background on the theories of consciousness debate, I survey the prospects of four approaches to this problem. I then consider a fifth solution, namely the theory-light approach proposed by Jonathan Birch. I defend a modified version of this that I term the modest theoretical approach, arguing that it may provide insights into challenging cases that would otherwise be intractable.
-
The theory of consciousness is a subject that has kept scholars and researchers challenged for centuries. Even today it is not possible to define what consciousness is. This has led to the theorization of different models of consciousness. Starting from Baars' Global Workspace Theory, this paper examines the models of cognitive architectures that are inspired by it and that can represent a reference point in the field of robot consciousness.
Recent Findings: Global Workspace Theory has recently been ranked as the most promising theory in its field. However, this is not reflected in the mathematical models of cognitive architectures inspired by it: they are few, and most of them are a decade old, which is too long compared to the speed at which artificial intelligence techniques are improving. Indeed, recent publications propose simple mathematical models that are well designed for computer implementation.
Summary: In this paper, we introduce an overview of consciousness and robot consciousness, with some interesting insights from the literature. Then we focus on Baars' Global Workspace Theory, presenting it briefly. Finally, we report on the most interesting and promising models of cognitive architectures that implement it, describing their peculiarities.
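As a purely illustrative aside (not drawn from any of the architectures surveyed in this paper), the following toy sketch shows the competition-and-broadcast cycle at the core of the Global Workspace idea; all class names, the salience rule, and the example bids are assumptions made for this sketch.

```python
# Toy sketch (not from any published architecture): specialist processes bid
# for the global workspace, the most salient coalition wins, and its content
# is "broadcast", i.e. made available to all specialists.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Coalition:
    source: str       # which specialist process produced the content
    content: str      # the information proposed for broadcast
    salience: float   # strength of the bid for workspace access


@dataclass
class GlobalWorkspace:
    broadcast_log: List[Coalition] = field(default_factory=list)

    def cycle(self, bids: List[Coalition]) -> Optional[Coalition]:
        """One cognitive cycle: select the most salient bid and broadcast it."""
        if not bids:
            return None
        winner = max(bids, key=lambda c: c.salience)
        self.broadcast_log.append(winner)  # broadcast = record as globally available
        return winner


workspace = GlobalWorkspace()
bids = [
    Coalition("vision", "red object approaching", salience=0.9),
    Coalition("audition", "faint background hum", salience=0.2),
]
winner = workspace.cycle(bids)
print(winner.content if winner else "no broadcast")  # -> "red object approaching"
```

Real cognitive architectures in this family add attention, learning, and many interacting specialists; the sketch only captures the winner-take-all broadcast step.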
-
An argument with roots in ancient Greek philosophy claims that only humans are capable of a certain class of thought termed conceptual, as opposed to perceptual thought, which is common to humans, the higher animals, and some machines. We outline the most detailed modern version of this argument due to Mortimer Adler, who in the 1960s argued for the uniqueness of the human power of conceptual thought. He also admitted that if conceptual thought were ever manifested by machines, such an achievement would contradict his conclusion. We revisit Adler’s criterion in the light of the past five decades of artificial-intelligence (AI) research, and refine it in view of the classical definitions of perceptual and conceptual thought. We then examine two well-publicized examples of creative works (prose and art) produced by AI systems and show that evidence for conceptual thought appears to be lacking in them. Although clearer evidence for conceptual thought on the part of AI systems may arise in the near future, especially if the global neuronal workspace theory of consciousness prevails over its rival, integrated information theory, the question of whether AI systems can engage in conceptual thought appears to be still open.
-
Susan Schneider (2019) has proposed two new tests for consciousness in AI (artificial intelligence) systems; the AI Consciousness Test and the Chip Test. On their face, the two tests seem to have the virtue of proving satisfactory to a wide range of consciousness theorists holding divergent theoretical positions, rather than narrowly relying on the truth of any particular theory of consciousness. Unfortunately, both tests are undermined in having an 'audience problem': those theorists with the kind of architectural worries that motivate the need for such tests should, on similar grounds, doubt that the tests establish the existence of genuine consciousness in the AI in question. Nonetheless, the proposed tests constitute progress, as they could find use by some theorists holding fitting views about consciousness and perhaps in conjunction with other tests for AI consciousness.
-
System-informational culture (SIC) is a science-intensive, big-data, anthropogenic environment of artificial intelligence (AI) applications. Mankind has to live in networks of virtual worlds. Cultural evolution extends scientific thinking to everyone within the boundaries of synthetic presentations of systems. Traditional education has become overburdened; because of that, it is necessary to teach a person how to learn by oneself. To reach the level of cognogenesis, the educational process in SIC must be directed at the objectization of consciousness and thinking. Personal self-building leans on the axiomatic method and mathematical universalities. Toward the objective of autopoiesis, a person unfolds as a universal rational being possessing trans-semantic consciousness. Gender phenomenology in SIC presents thinking-knowledge through AI tools, which require a consonant partnership with man. The latter is based on epistemology to extend the hermeneutic circle of SIC. In this way, the present-day noosphere poses the objectization problem of attaining Lamarck's human evolution on the ground of Leibniz's mathesis universalis in the form of a language of categories. It can be solved only by means of an adaptive partnership between deep-learned and natural intelligences.
-
Consider the question, "Can machines be conscious?" The subject of consciousness is vague and challenging. Although there is a rich literature on consciousness, computational modeling of consciousness that is both holistic in scope and detailed in simulatable computation is lacking. Based on recent advances in a new capability, Autonomous Programming For General Purposes (APFGP), this work presents APFGP as a clearer, deeper, and more practical characterization of consciousness for natural (biological) and artificial (machine) systems. All animals have APFGP, but traditional AI systems do not. This work reports a new kind of AI system: conscious machines. Instead of arguing about what static tasks a conscious machine should be able to do, this work suggests that APFGP is a computationally clearer and necessary criterion for dynamically judging whether a system can become maturely conscious through lifelong development, even if it (e.g., a fruit fly) lacks a full array of primate-like capabilities such as vision, audition, and natural language understanding. The results here involve a series of new concepts and experimental studies for vision, audition, and natural language, with new developmental capabilities that are not present in many published systems, e.g., IBM Deep Blue, IBM Watson, AlphaGo, AlphaFold, and other traditional AI systems and intelligent robots.
-
What will be the relationship between human beings and artificial intelligence (AI) in the future? Does an AI have moral status? What is that status? Through the analysis of consciousness, we can explain and answer such questions. The moral status of AIs can depend on the development level of AI consciousness. Drawing on the evolution of consciousness in nature, this paper examines several consciousness abilities of AIs, on the basis of which several relationships between AIs and human beings are proposed. The advantages and disadvantages of those relationships can be analysed by referring to classical ethics theories, such as contract theory, utilitarianism, deontology and virtue ethics. This explanation helps to construct a common hypothesis about the relationship between humans and AIs. Thus, this research has important practical and normative significance for distinguishing the different relationships between humans and AIs.
-
Many scholars make a very clear distinction between intelligence and consciousness. Let's take one of the most famous today, the Israeli professor of history Yuval Noah Harari, author of Sapiens and Homo Deus. In his 2018 book, 21 Lessons for the 21st Century, he writes that "intelligence and consciousness are very different things. Intelligence is the ability to solve problems. Consciousness is the ability to feel things such as pain, joy, love, and anger."
-
Current theories of artificial intelligence (AI) generally exclude human emotions. The idea at the core of such theories could be described as 'cognition is computing'; that is, that human psychological and symbolic representations, and the operations involved in structuring such representations in human thinking and intelligence, can be converted by AI into a series of cognitive symbolic representations and calculations in a manner that simulates human intelligence. However, after decades of development, the cognitive computing doctrine has encountered many difficulties, both in theory and in practice; in particular, it is far from approaching real human intelligence. The emotions run through the whole process of real human intelligence. The core and motivation of rational thinking are derived from the emotions. Intelligence without emotion neither exists nor is meaningful. For example, the idea of 'hot thinking' proposed by Paul Thagard, a philosopher of cognitive science, discusses the mechanism of the emotions in human cognition and the thinking process. Through an analysis from the perspectives of cognitive neurology, cognitive psychology and social anthropology, this article notes that there may be a type of thinking that could be called 'emotional thinking'. This type of thinking includes complex emotional factors in cognitive processes. The term refers to the capacity to process information and use emotions to integrate information in order to arrive at the right decisions and reactions. This type of thinking can be divided into two types according to its role in cognition, positive and negative emotional thinking, a division that reflects opposite forces in the cognitive process. In the future, 'emotional computing' will cause an important acceleration in the development of AI consciousness. The foundation of AI consciousness is emotional computing based on the simulation of emotional thinking.
-
The question of whether artificial beings or machines could become self-aware or conscious has been a philosophical question for centuries. The main problem is that self-awareness cannot be observed from an outside perspective, and the question of whether a system is really self-aware or merely a clever imitation cannot be answered without access to knowledge about the mechanism's inner workings. We investigate common machine learning approaches with respect to their potential ability to become self-aware. We find that many important algorithmic steps toward machines with a core consciousness have already been taken.
-
In this study, we propose a model of consciousness that can be implemented by computers as a decision-making system based on psychology, with the goal of enabling artificial intelligence to understand human values and ethics and to make flexible and more human-friendly choices and suggestions.
-
In this essay, I reflect on Susan Schneider's book on AI consciousness and the recent debate on the issue of lab-grown brain consciousness. Given the advances in computer science and bio-engineering, the twin problem of AI and lab-grown brain consciousness is here to stay.
-
This study explores an info-structural model of cognition for non-interacting agents affected by human sensation, perception, emotion, and affection. We do not analyze the neuroscientific or psychological debate concerning how the human mind works, but we underline the importance of modeling the above cognitive levels when designing artificial intelligence agents. Our aim was to start a reflection on the computational reproduction of intelligence, providing a methodological approach through which the aforementioned human factors are enhanced in autonomous systems. The presented model is intended as part of a larger one, which also includes the concepts of attention, awareness, and consciousness. Experiments were performed by providing visual stimuli to the proposed model, coupling the emotion cognitive level with a supervised learner to produce artificial emotional activity. For this purpose, the performances of Random Forest and XGBoost were compared; with the latter algorithm, 85% accuracy and 92% coherency over predefined emotional episodes were achieved. The model was also tested on emotional episodes different from those used in the training phase, and a decrease in accuracy and coherency was observed. Furthermore, by decreasing the weight assigned to the emotion cognitive instances, the model reaches the same performances recorded during the evaluation phase. Overall, the framework achieves an initial emotional generalization responsiveness of 94% and presents an approximately constant relative frequency of the agent's displayed emotions.
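As an illustration of the kind of classifier comparison this abstract describes, the minimal sketch below trains a Random Forest and an XGBoost model on a synthetic stand-in dataset. The feature extraction from visual stimuli, the emotional episodes, and the coherency metric of the study are not reproduced here; the dataset shape, number of emotion classes, and hyperparameters are placeholder assumptions.

```python
# Minimal sketch (not the authors' code): compare a Random Forest and an
# XGBoost classifier on a stand-in emotion-labelling task. The features and
# labels below are synthetic placeholders for the study's visual stimuli.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for features extracted from visual stimuli,
# labelled with one of four hypothetical emotion classes.
X, y = make_classification(n_samples=2000, n_features=32, n_informative=12,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "xgboost": XGBClassifier(n_estimators=200, max_depth=4,
                             learning_rate=0.1, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)                    # train on labelled episodes
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.2%}")         # compare the two learners
```

Only plain accuracy is compared in this sketch; reproducing the reported coherency measure would require the study's original episode definitions.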