Full bibliography (675 resources)
-
If artificial agents are to be created such that they occupy space in our social and cultural milieu, then we should expect them to be targets of folk psychological explanation. That is to say, their behavior ought to be explicable in terms of beliefs, desires, obligations, and especially intentions. Herein, we focus on the concept of intentional action, and especially its relationship to consciousness. After outlining some lessons learned from philosophy and psychology that give insight into the structure of intentional action, we find that attention plays a critical role in agency, and indeed, in the production of intentional action. We argue that the insights offered by the literature on agency and intentional action motivate a particular kind of computational cognitive architecture, one that has not been well explicated or computationally fleshed out among the community of AI researchers and computational cognitive scientists who work on cognitive systems. To give a sense of what such a system might look like, we present the ARCADIA attention-driven cognitive system as a first step toward an architecture to support the type of agency that rich human–machine interaction will undoubtedly demand.
-
Previous work [Chrisley & Sloman, 2016, 2017] has argued that a capacity for certain kinds of meta-knowledge is central to modeling consciousness, especially the recalcitrant aspects of qualia, in computational architectures. After a quick review of that work, this paper presents a novel objection to Frank Jackson’s Knowledge Argument (KA) against physicalism, an objection in which such meta-knowledge also plays a central role. It is first shown that the KA’s supposition of a person, Mary, who is physically omniscient, and yet who has not experienced seeing red, is logically inconsistent, due to the existence of epistemic blindspots for Mary. It is then shown that even if one makes the KA consistent by supposing a more limited physical omniscience for Mary, this revised argument is invalid. This demonstration is achieved via the construction of a physical fact (a recursive conditional epistemic blindspot) that Mary cannot know before she experiences seeing red for the first time, but which she can know afterward. After considering and refuting some counter-arguments, the paper closes with a discussion of the implications of this argument for machine consciousness, and vice versa.
-
Artificial intelligence and robotics are opening up important opportunities in the field of health diagnosis and treatment support with aims like better patient follow-up. A social and emotional robot is an artificially intelligent machine that owes its existence to computer models designed by humans. If it has been programmed to engage in dialogue, detect and recognize emotional and conversational cues, adapt to humans, or even simulate humor, such a machine may on the surface seem friendly. However, such emotional simulation must not hide the fact that the machine has no consciousness.
-
This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed “the wall” that excludes any possibility of transition from a complex observable phenomenon to an abstract image or concept. It is, therefore, sensible to factor in new requirements for AI (artificial intelligence) maturity assessment when approaching the Turing test. Such AI must support all forms of communication with a human being, and it should be able to comprehend abstract images and specify concepts as well as participate in social practices.
-
This paper explores some of the potential connections between natural and artificial intelligence and natural and artificial consciousness. In humans we use batteries of tests to indirectly measure intelligence. This approach breaks down when we try to apply it to radically different animals and to the many varieties of artificial intelligence. To address this issue people are starting to develop algorithms that can measure intelligence in any type of system. Progress is also being made in the scientific study of consciousness: we can neutralize the philosophical problems, we have data about the neural correlates and we have some idea about how we can develop mathematical theories that can map between physical and conscious states. While intelligence is a purely functional property of a system, there are good reasons for thinking that consciousness is linked to particular spatiotemporal patterns in specific physical materials. This paper outlines some of the weak inferences that can be made about the relationships between intelligence and consciousness in natural and artificial systems. To make real scientific progress we need to develop practical universal measures of intelligence and mathematical theories of consciousness that can reliably map between physical and conscious states.
-
This work studies the beneficial properties that an autonomous agent can obtain by implementing a cognitive architecture similar to that of conscious beings. Throughout this document, a conscious model of an autonomous agent based on a global workspace architecture is presented. We describe how this agent is viewed from different perspectives in the philosophy of mind, drawing inspiration from their ideas. The goal of this model is to create autonomous agents able to navigate an environment composed of multiple independent magnitudes, adapting to their surroundings in order to find the best possible position according to their inner preferences. The purpose of the model is to test the effectiveness of the many cognitive mechanisms it incorporates, such as an attention mechanism for magnitude selection, possession of inner feelings and preferences, use of a memory system to store beliefs and past experiences, and a global workspace that controls and integrates the information processed by all the subsystems of the model. We show in a large set of experiments how an autonomous agent can benefit from having a cognitive architecture such as the one described.
-
The popular expectation is that Artificial Intelligence (AI) will soon surpass the capacities of the human mind and Strong Artificial General Intelligence (AGI) will replace the contemporary Weak AI. However, there are certain fundamental issues that have to be addressed before this can happen. There can be no intelligence without understanding, and there can be no understanding without grasping meanings. Contemporary computers manipulate symbols without meanings, which are not incorporated in the computations. This leads to the Symbol Grounding Problem: how could meanings be incorporated? The use of self-explanatory sensory information has been proposed as a possible solution. However, self-explanatory information can only be used in neural network machines that are different from existing digital computers and traditional multilayer neural networks. In humans, self-explanatory information has the form of qualia. To have reportable qualia is to be phenomenally conscious. This leads to the hypothesis of an unavoidable connection between the solution of the Symbol Grounding Problem and consciousness. If, in general, self-explanatory information equals qualia, then machines that utilize self-explanatory information would be conscious.
-
Taking the stance that artificially conscious agents should be given human-like rights, in this paper we attempt to define consciousness, aggregate existing universal human rights, analyze robotic laws with roots in both reality and science fiction, and synthesize everything to create a new robot-ethical charter. By restricting the problem space of possible levels of conscious beings to the human-like, we succeed in developing a working definition of consciousness for social strong AI which focuses on human-like creativity being exhibited as a third-person observable phenomenon. Creativity is then extrapolated to represent first-person functionality, fulfilling the first/third-person feature of consciousness. Next, several sources of existing rights and rules, both for humans and robots, are analyzed and, along with supplementary informal reports, synthesized to create articles for an additive charter which complements the United Nations' Universal Declaration of Human Rights. Finally, the charter is presented and the paper concludes with the conditions for amending the charter, as well as recommendations for further charters.
-
This paper focuses on some preliminary cognitive and consciousness test results from using an Independent Core Observer Model Cognitive Architecture (ICOM) in a Mediated Artificial Super Intelligence (mASI) System. These preliminary test results, including objective and subjective analyses, are designed to determine if further research is warranted along these lines. The comparative analysis includes comparisons to humans and human groups as measured for direct comparison. The overall study includes a mediation client application optimized to help perform tests, AI context-based input (building context tree or graph data models), comparative intelligence testing (such as an IQ test), and other tests (i.e., Turing, Qualia, and Porter method tests) designed to look for early signs of consciousness, or the lack thereof, in the mASI system. Together, they are designed to determine whether this modified version of ICOM is a) in fact a form of AGI and/or ASI, b) conscious, and c) at least sufficiently interesting that further research is called for. This study is not conclusive but offers evidence to justify further research along these lines.
-
The main challenge of technology is to facilitate tasks and to transfer the functions usually performed by humans to non-humans. However, the pervasion of machines in everyday life requires that non-humans come increasingly closer in their abilities to the ordinary thought, action and behaviour of humans. This view converges with the idea of the Humaniter, a longstanding myth in the history of technology: an artificial creature that thinks, acts and feels like a human to the point that one cannot tell the difference between the two. In the wake of the opposition between Strong AI and Weak AI, this challenge can be expressed in terms of a shift from the performance of intelligence (reason, reasoning, cognition, judgment) to that of sentience (experience, sensation, emotion, consciousness). In other words, if this possible shift is taken seriously, the challenge of technology is to move from the paradigm of Artificial Intelligence (AI) to that of Artificial Sentience (AS). But for the Humaniter not to be regarded as a mere myth, any intelligent or sentient machine must pass through a Test of Humanity that refers to, or differs from, the Turing Test. One can suggest several options for this kind of test and also point out some conditions and limits to the very idea of the Humaniter as an artificial human.
-
Our society is in the middle of the AI revolution. We discuss several applications of AI, in particular medical causality, where deep-learning neural networks screen through large databases, extracting associations between a patient's condition and possible causes. While beneficial in medicine, several questionable AI trading strategies have emerged in finance. Though AI offers advantages in many aspects of our lives, it also poses serious threats. We suggest several regulatory measures to reduce these threats. We further discuss whether 'full AI robots' should be programmed with a virtual consciousness and conscience. While this would reduce AI threats via motivational control, other threats, such as the desire for AI-human socioeconomic equality, could prove detrimental.
-
In the last 10 years, Artificial Intelligence (AI) has seen successes in fields such as natural language processing, computer vision, speech recognition, robotics and autonomous systems. However, these advances are still considered Narrow AI, i.e., AI built for very specific or constrained applications. These applications are useful for improving the quality of human life, but they are not capable of highly general tasks of the kind humans perform. The holy grail of AI research is to develop Strong AI or Artificial General Intelligence (AGI), which produces human-level intelligence, i.e., the ability to sense, understand, reason, learn and act in dynamic environments. Strong AI is more than just a composition of Narrow AI technologies. We propose that it must take a holistic approach to understanding and reacting to the operating environment and the decision-making process. Strong AI must be able to demonstrate sentience, emotional intelligence, imagination, effective command of other machines or robots, and self-referring and self-reflecting qualities. This paper gives an overview of current Narrow AI capabilities, presents the technical gaps, and highlights future research directions for Strong AI. Could Strong AI become conscious? We provide some discussion pointers.
-
The technological singularity is popularly envisioned as a point in time when (a) an explosion of growth in artificial intelligence (AI) leads to machines becoming smarter than humans in every capacity, even gaining consciousness in the process; or (b) humans become so integrated with AI that we could no longer be called human in the traditional sense. This article argues that the technological singularity does not represent a point in time but a process in the ongoing construction of a collective consciousness. Innovations from the earliest graphic representations to the present reduced the time it took to transmit information, reducing the cognitive space between individuals. The steady pace of innovations ultimately led to the communications satellite, fast-tracking this collective consciousness. The development of AI in the late 1960s has been the latest innovation in this process, increasing the speed of information while allowing individuals to shape events as they happen.
-
Cognitive radios (CRs) use artificial intelligence algorithms to obtain an improved quality of service (QoS). CRs also benefit from meta-cognition algorithms that enable them to determine the most suitable intelligent algorithm for achieving their operational goals. Examples of intelligent algorithms used by CRs are support vector machines, artificial neural networks and hidden Markov models. Each of these intelligent algorithms can be realized in a different manner and used for different tasks, such as predicting the idle state and duration of a channel. The CR benefits from jointly using these intelligent algorithms and selecting the most suitable algorithm for prediction at an epoch of interest. The incorporation of meta-cognition also furnishes the CR with consciousness, in the sense that it makes the CR aware of its own learning mechanisms. CR consciousness consumes CR resources, i.e., battery and memory. This resource consumption should be reduced to increase the resources available for data transmission. This paper proposes a meta-cognitive solution that reduces the CR resources associated with maintaining consciousness. The proposed solution incorporates the time domain and uses information on the duration associated with executing learning and data transmission tasks. In addition, the proposed solution is integrated in a multimode CR. Evaluation shows that the average performance improvement for CR transceiver power, computational resources and channel capacity lies in the ranges 18.3%–42.5%, 21.6%–44.8% and 9.5%–56.3%, respectively.
-
The field of artificial consciousness (AC) has largely developed outside of mainstream artificial intelligence (AI), with separate goals and criteria for success and with only a minimal exchange of ideas. This is unfortunate as the two fields appear to be synergistic. For example, here we consider the question of how concepts developed in AC research might contribute to more effective future AI systems. We first briefly discuss several past hypotheses about the function(s) of human consciousness, and present our own hypothesis that short-term working memory and very rapid learning should be a central concern in such matters. In this context, we then present ideas about how integrating concepts from AC into AI systems to develop an artificial conscious intelligence (ACI) could both produce more effective AI technology and contribute to a deeper scientific understanding of the fundamental nature of consciousness and intelligence.
-
The science of consciousness has made great strides in recent decades. However, the proliferation of competing theories makes it difficult to reach consensus about artificial consciousness. While for purely scientific purposes we might wish to adopt a 'wait and see' attitude, we may soon face practical and ethical questions about whether, for example, artificial systems are capable of suffering. Moreover, many of the methods used for assessing consciousness in humans and even non-human animals are not straightforwardly applicable to artificial systems. With these challenges in mind, I propose that we look for ecumenical heuristics for artificial consciousness to enable us to make tentative assessments of the likelihood of consciousness arising in different artificial systems. I argue that such heuristics should have three main features: they should be (i) intuitively plausible, (ii) theoretically neutral, and (iii) scientifically tractable. I claim that the concept of general intelligence, understood as a capacity for robust, flexible, and integrated cognition and behavior, satisfies these criteria and may thus provide the basis for such a heuristic, allowing us to make initial cautious estimations of which artificial systems are most likely to be conscious.
-
The research work presented in this article investigates and explains the conceptual mechanisms of consciousness and common-sense thinking in animates. These mechanisms are computationally simulated on artificial agents as strategic rules, in order to analyze and compare the performance of agents in critical and dynamic environments. Awareness of, and attention to, specific parameters that affect agent performance determine the level of consciousness in agents. Common sense is a set of beliefs accepted as true among a group of agents engaged in a common purpose, with or without self-experience. Common-sense agents are a kind of conscious agent given a few common-sense assumptions. The simulated environment contains attackers that depend on agents in the survival food chain. These attackers create a threat mental state in agents that can affect their conscious and common-sense behaviors. The agents are built with a multi-layer cognitive architecture, COCOCA (Consciousness and Common sense Cognitive Architecture), with five columns and six layers of cognitive processing for each percept of an agent. The conscious agents self-learn strategies for threat management and energy-level maintenance. Experimentation conducted in this research work demonstrates animate-level intelligence in their problem-solving capabilities, decision making and reasoning in critical situations.
-
There is an ongoing debate about whether humanoid robots should exist at all. The arguments tend to focus on the ethical claim that deception is always involved. However, little attention has been paid to the ontological reasons why humanoid robotics is valuable in consciousness research. This paper examines the arguments and controversy around the ethical rejection of humanoid robotics, while also summarizing some of the landscape of 4E cognition, which highlights the ways our specific humanoid bodies, in our specific cultural, social, and physical environments, play an indispensable role in cognition, from conceptualization through communication. Ultimately, we argue that there is a compelling set of reasons to pursue humanoid robotics as a major research agenda in AI if the goal is to create an artificial conscious system that we will be able both to recognize as conscious and to communicate with successfully.
-
The term artificial intelligence can also be called machine intelligence. As everything becomes more systematic and programmed and manpower is reduced, artificial intelligence plays a unique role in all the progress made today. Artificial intelligence systems are used in every walk of our daily life; in short, our lives have become more advanced through the use of AI technology. Its applications operate in various fields, such as manufacturing units, business entities, medical sciences, the field of law, and driverless vehicles, which sense their environment and broadly utilize the ideas of AI. The main aim of the study is to raise general awareness of artificial intelligence among the public. The study surveyed 1850 respondents to understand their perspectives on artificial intelligence systems. Chi-square and ANOVA are the statistical tools used for the study. The analysis found that individuals are highly aware of artificial intelligence technology and acknowledge that, due to its advancement, human jobs are greatly diminished. The 21st century is booming in its use of machine intelligence, and industries are in a situation where they cannot work without AI, because many of the challenges they face are solved by AI and much of the work is completed within a short span of time without risk.
-
In this paper, we propose the hypothesis that consciousness has evolved to serve as a platform for general intelligence. This idea stems from considerations of the potential biological functions of consciousness. Here we define general intelligence as the ability to apply knowledge and models acquired from past experiences to generate solutions to novel problems. Based on this definition, we propose three possible ways to establish general intelligence under existing methodologies for constructing AI systems, namely solution by simulation, solution by combination and solution by generation. Then, we relate those solutions to putative functions of consciousness put forward, respectively, by the information generation theory, the global workspace theory, and a form of higher-order theory in which qualia are regarded as meta-representations. Based on these insights, we propose that consciousness integrates a group of specialized generative/forward models, forming a complex in which combinations of those models can be flexibly assembled, and that qualia are meta-representations of first-order mappings which endow an agent with the ability to choose which maps to use to solve novel problems. These functions could be implemented as an "artificial consciousness". Such systems could generate policies for solving novel problems from only a small number of trials. Finally, we propose possible directions for future research into artificial consciousness and artificial general intelligence.