Full bibliography 716 resources
-
Our society is in the midst of the AI revolution. We discuss several applications of AI, in particular medical causality, where deep-learning neural networks screen large databases, extracting associations between a patient’s condition and possible causes. While AI has proved beneficial in medicine, several questionable AI trading strategies have emerged in finance. Although AI offers advantages in many aspects of our lives, it also poses serious threats. We suggest several regulatory measures to reduce these threats. We further discuss whether ‘full AI robots’ should be programmed with a virtual consciousness and conscience. While this would reduce AI threats via motivational control, other threats, such as the desire for AI-human socioeconomic equality, could prove detrimental.
-
In the last 10 years, Artificial Intelligence (AI) has seen successes in fields such as natural language processing, computer vision, speech recognition, robotics and autonomous systems. However, these advances are still considered Narrow AI, i.e. AI built for very specific or constrained applications. Such applications are useful for improving the quality of human life, but they cannot perform the highly general tasks that humans can. The holy grail of AI research is to develop Strong AI, or Artificial General Intelligence (AGI), which produces human-level intelligence: the ability to sense, understand, reason, learn and act in dynamic environments. Strong AI is more than a composition of Narrow AI technologies. We propose that it requires a holistic approach to understanding and reacting to the operating environment and to the decision-making process. Strong AI must be able to demonstrate sentience, emotional intelligence, imagination, effective command of other machines or robots, and self-referring and self-reflecting qualities. This paper gives an overview of current Narrow AI capabilities, presents the technical gaps, and highlights future research directions for Strong AI. Could Strong AI become conscious? We provide some discussion pointers.
-
The technological singularity is popularly envisioned as a point in time when (a) an explosion of growth in artificial intelligence (AI) leads to machines becoming smarter than humans in every capacity, even gaining consciousness in the process; or (b) humans become so integrated with AI that we could no longer be called human in the traditional sense. This article argues that the technological singularity does not represent a point in time but a process in the ongoing construction of a collective consciousness. Innovations from the earliest graphic representations to the present reduced the time it took to transmit information, reducing the cognitive space between individuals. The steady pace of innovations ultimately led to the communications satellite, fast-tracking this collective consciousness. The development of AI in the late 1960s has been the latest innovation in this process, increasing the speed of information while allowing individuals to shape events as they happen.
-
Cognitive radios (CRs) use artificial intelligence algorithms to obtain an improved quality of service (QoS). CRs also benefit from meta-cognition algorithms that enable them to determine the most suitable intelligent algorithm for achieving their operational goals. Examples of intelligent algorithms used by CRs are support vector machines, artificial neural networks and hidden Markov models. Each of these intelligent algorithms can be realized in a different manner and used for different tasks, such as predicting the idle state and duration of a channel. The CR benefits from jointly using these intelligent algorithms and selecting the most suitable algorithm for prediction at an epoch of interest. The incorporation of meta-cognition also furnishes the CR with consciousness, because it makes the CR aware of its own learning mechanisms. CR consciousness consumes CR resources, i.e. battery and memory. This resource consumption should be reduced to increase the resources available for data transmission. This paper proposes a meta-cognitive solution that reduces the CR resources associated with maintaining consciousness. The proposed solution incorporates the time domain and uses information on the duration associated with executing learning and data-transmission tasks. In addition, the proposed solution is integrated into a multimode CR. Evaluation shows that the average performance improvement for CR transceiver power, computational resources and channel capacity lies in the ranges 18.3%-42.5%, 21.6%-44.8% and 9.5%-56.3%, respectively.
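The meta-cognitive selection described in this abstract can be illustrated with a minimal sketch: several lightweight channel-state predictors (stand-ins for the SVM, neural-network and hidden-Markov-model predictors the paper names) are scored on their running accuracy, and a meta-level selector delegates each prediction to the currently best-performing one. All class and variable names here are illustrative assumptions, not the paper's actual solution.

```python
class Persistence:
    """Predict that the channel's next state equals its current state."""
    def predict(self, history):
        return history[-1]

class MajorityVote:
    """Predict the most frequent state seen so far."""
    def predict(self, history):
        return max(set(history), key=history.count)

class MarkovChain:
    """First-order Markov predictor: most common successor of the current state."""
    def predict(self, history):
        cur = history[-1]
        successors = [history[i + 1] for i in range(len(history) - 1)
                      if history[i] == cur]
        return max(set(successors), key=successors.count) if successors else cur

class MetaCognitiveSelector:
    """Track each predictor's accuracy and delegate to the best one (the
    'awareness of its own learning mechanisms' the abstract describes)."""
    def __init__(self, predictors):
        self.predictors = predictors
        self.scores = {name: [0, 0] for name in predictors}  # [correct, total]

    def predict(self, history):
        # Laplace-smoothed accuracy avoids division by zero before any feedback.
        best = max(self.scores,
                   key=lambda n: (self.scores[n][0] + 1) / (self.scores[n][1] + 2))
        return best, self.predictors[best].predict(history)

    def update(self, history, actual):
        for name, p in self.predictors.items():
            correct, total = self.scores[name]
            self.scores[name] = [correct + (p.predict(history) == actual), total + 1]

# Synthetic strictly alternating channel (0 = busy, 1 = idle): the Markov
# predictor should come to dominate the selector's choices.
channel = [0, 1] * 50
meta = MetaCognitiveSelector({"persistence": Persistence(),
                              "majority": MajorityVote(),
                              "markov": MarkovChain()})
hits = 0
for t in range(2, len(channel)):
    history, actual = channel[:t], channel[t]
    _, guess = meta.predict(history)
    hits += guess == actual
    meta.update(history, actual)
```

In this toy run the selector quickly converges on the Markov predictor, since persistence always fails on an alternating channel; the paper's contribution is reducing the battery and memory cost of maintaining exactly this kind of self-monitoring.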
-
The field of artificial consciousness (AC) has largely developed outside of mainstream artificial intelligence (AI), with separate goals and criteria for success and with only a minimal exchange of ideas. This is unfortunate as the two fields appear to be synergistic. For example, here we consider the question of how concepts developed in AC research might contribute to more effective future AI systems. We first briefly discuss several past hypotheses about the function(s) of human consciousness, and present our own hypothesis that short-term working memory and very rapid learning should be a central concern in such matters. In this context, we then present ideas about how integrating concepts from AC into AI systems to develop an artificial conscious intelligence (ACI) could both produce more effective AI technology and contribute to a deeper scientific understanding of the fundamental nature of consciousness and intelligence.
-
The science of consciousness has made great strides in recent decades. However, the proliferation of competing theories makes it difficult to reach consensus about artificial consciousness. While for purely scientific purposes we might wish to adopt a ‘wait and see’ attitude, we may soon face practical and ethical questions about whether, for example, artificial systems are capable of suffering. Moreover, many of the methods used for assessing consciousness in humans and even non-human animals are not straightforwardly applicable to artificial systems. With these challenges in mind, I propose that we look for ecumenical heuristics for artificial consciousness to enable us to make tentative assessments of the likelihood of consciousness arising in different artificial systems. I argue that such heuristics should have three main features: they should be (i) intuitively plausible, (ii) theoretically neutral, and (iii) scientifically tractable. I claim that the concept of general intelligence — understood as a capacity for robust, flexible, and integrated cognition and behavior — satisfies these criteria and may thus provide the basis for such a heuristic, allowing us to make initial cautious estimations of which artificial systems are most likely to be conscious.
-
The research work presented in this article investigates and explains the conceptual mechanisms of consciousness and common-sense thinking in animates. These mechanisms are computationally simulated on artificial agents as strategic rules, to analyze and compare the performance of agents in critical and dynamic environments. Awareness of, and attention to, specific parameters that affect the agents' performance specify the level of consciousness in the agents. Common sense is a set of beliefs accepted as true among a group of agents engaged in a common purpose, with or without self-experience. Common-sense agents are conscious agents endowed with a few common-sense assumptions. The created environment contains attackers that depend on the agents in the survival food chain. These attackers create a threat mental state in the agents that can affect their conscious and common-sense behaviors. The agents are built with a multi-layer cognitive architecture, COCOCA (Consciousness and Common sense Cognitive Architecture), with five columns and six layers of cognitive processing for each percept of an agent. The conscious agents self-learn strategies for threat management and energy-level maintenance. Experimentation conducted in this research work demonstrates animate-level intelligence in their problem-solving capabilities, decision making and reasoning in critical situations.
-
There is an ongoing debate about whether humanoid robots should exist. The arguments tend to focus on the ethical claim that deception is always involved. However, little attention has been paid to the ontological reasons that humanoid robotics are valuable in consciousness research. This paper examines the arguments and controversy around the ethical rejection of humanoid robotics, while also summarizing some of the landscape of 4e cognition, which highlights the ways our specific humanoid bodies, in our specific cultural, social, and physical environments, play an indispensable role in cognition, from conceptualization through communication. Ultimately, we argue that there is a compelling set of reasons to pursue humanoid robotics as a major research agenda in AI if the goal is to create an artificial conscious system that we will be able to both recognize as conscious and communicate with successfully.
-
Artificial intelligence (AI), also termed machine intelligence, plays a unique role in today's progress as processes become more systematic and programmed and manpower is reduced. AI frameworks are used in every walk of our daily life; in short, our lives have become more advanced through the use of this technology. Its applications operate in various fields such as manufacturing units, business entities, medical science, law, and driverless vehicles that can sense their environment. The main aim of the study is to create general awareness of artificial intelligence among the public. The study surveyed 1850 respondents to understand their perspectives on AI systems, using chi-square tests and ANOVA as statistical tools. The examination found that individuals are highly aware of AI technology and acknowledge that, because of its advancement, human jobs are greatly diminished; the 21st century is in a machine-intelligence boom, and industries have reached a point where they cannot work without AI, because it solves many of the challenges they face and completes much of the work within a short span of time without risk.
-
In this paper, we propose the hypothesis that consciousness evolved to serve as a platform for general intelligence. This idea stems from considerations of the potential biological functions of consciousness. Here we define general intelligence as the ability to apply knowledge and models acquired from past experiences to generate solutions to novel problems. Based on this definition, we propose three possible ways to establish general intelligence under existing methodologies for constructing AI systems, namely solution by simulation, solution by combination and solution by generation. We then relate those solutions to putative functions of consciousness put forward, respectively, by the information generation theory, the global workspace theory, and a form of higher-order theory in which qualia are regarded as meta-representations. Based on these insights, we propose that consciousness integrates a group of specialized generative/forward models into a complex in which combinations of those models are flexibly formed, and that qualia are meta-representations of first-order mappings which endow an agent with the ability to choose which maps to use to solve novel problems. These functions can be implemented as an “artificial consciousness”. Such systems can generate policies from a small number of trials when solving novel problems. Finally, we propose possible directions for future research into artificial consciousness and artificial general intelligence.
-
Human consciousness is our most perplexing quality; still, an adequate description of its workings has not yet appeared. One of the most promising ways to address this issue is to model consciousness with artificial intelligence (AI). This paper attempts to do so on a theoretical level with the methods of philosophy. First I will review the relevant papers concerning human consciousness. Then, considering the state of the art of AI, I will arrive at a model of artificial consciousness.
-
The past century has seen a resurgence of interest in the study of consciousness among scholars of various fields, from philosophy to psychology and neuroscience. Since the birth of Artificial Intelligence in the 1950s, the study of consciousness in machines has received an increasing amount of attention in computer science, giving rise to the new field of machine consciousness (MC). Meanwhile, interdisciplinary research in philosophy, neuroscience, and cognitive science has advanced neurocognitive theories of consciousness. Among the many models proposed for consciousness, the Global Workspace Theory (GWT) is a promising theory that has received a staggering amount of philosophical and empirical support in the past decades. This dissertation discusses the GWT and its potential for MC from a mechanistic point of view. To do so, Chapter 1 gives an overview of the philosophical study of consciousness and the history of MC. Then, in Chapter 2, mechanistic explanations and tri-level models are introduced, which provide a robust framework to construct and assay various theories of consciousness. In Chapter 3, neural correlates (and thereby, neurocognitive theories) of consciousness are introduced. This chapter presents the GWT in detail and, along with its strengths, discusses the philosophical issues it raises. Chapter 4 addresses two computational implementations of the GWT (viz., IDA and LIDA) which satisfy specific goals of MC. Finally, in Chapter 5, one of the philosophical problems of MC, namely the Frame Problem (FP), is introduced. It is argued that architectures based on the GWT are immune to the FP. The chapter concludes that the GWT is capable of "solving" the FP, and discusses its implications for MC and the computational theory of mind. Chapter 6 wraps up the dissertation by reviewing the content.
-
This paper attempts to provide a starting point for future investigations into the study of artificial consciousness by proposing a thought experiment that aims to elucidate and provide a potential ‘test’ for the phenomenon known as consciousness, in an artificial system. It suggests a method by which to determine the presence of a conscious experience within an artificial agent, in a manner that is informed by, and understood as a function of, anthropomorphic conceptions of consciousness. The aim of this paper is to arouse the possibility for potential progress: to propose that we reverse engineer anthropic sentience by using machine sentience as a guide. Similar to the manner in which an equation may be solved through inverse operations, this paper hopes to provoke such discussion and activity. The idea is this: The manifestation of an existential crisis in an artificial agent is the metric by which the presence of sentience can be discerned. It is that which expounds ACI, as distinct from AI, and discrete from AGI.
-
In today’s society, it becomes increasingly important to assess which non-human and non-verbal beings possess consciousness. This review article aims to delineate criteria for consciousness, especially in animals, while also taking into account intelligent artifacts. First, we circumscribe what we mean by “consciousness” and describe key features of subjective experience: qualitative richness, situatedness, intentionality and interpretation, integration, and the combination of dynamic and stabilizing properties. We argue that consciousness has a biological function, which is to present the subject with a multimodal, situational survey of the surrounding world and body, subserving complex decision-making and goal-directed behavior. This survey reflects the brain’s capacity for internal modeling of external events underlying changes in sensory state. Next, we follow an inside-out approach: how can the features of conscious experience, correlating to mechanisms inside the brain, be logically coupled to externally observable (“outside”) properties? Instead of proposing criteria that would each define a “hard” threshold for consciousness, we outline six indicators: (i) goal-directed behavior and model-based learning; (ii) anatomic and physiological substrates for generating integrative multimodal representations; (iii) psychometrics and meta-cognition; (iv) episodic memory; (v) susceptibility to illusions and multistable perception; and (vi) specific visuospatial behaviors. Rather than emphasizing a particular indicator as being decisive, we propose that the consistency amongst these indicators can serve to assess consciousness in particular species. The integration of scores on the various indicators yields an overall, graded criterion for consciousness, somewhat comparable to the Glasgow Coma Scale for unresponsive patients.
When considering theoretically derived measures of consciousness, it is argued that their validity should not be assessed on the basis of a single quantifiable measure, but requires cross-examination across multiple pieces of evidence, including the indicators proposed here. Current intelligent machines, including deep learning neural networks (DLNNs) and agile robots, are not indicated to be conscious yet. Instead of assessing machine consciousness by a brief Turing-type of test, evidence for it may gradually accumulate when we study machines ethologically and across time, considering multiple behaviors that require flexibility, improvisation, spontaneous problem-solving and the situational conspectus typically associated with conscious experience.
-
Why would humanoid caring robots (HCRs) need consciousness? Because HCRs need to be gentle like human beings. In addition, HCRs need to be trusted by their patients, and have a shared understanding of patients' life experiences, their illnesses, and their treatments. HCRs need to express “competency as caring” to naturally convey their nursing as healing to patients and their families. HCRs should also have self-consciousness and express their emotions without needing inducement by persons' behaviors. Artificial “brains” and artificial consciousness are therefore necessary for HCRs. The purpose of this article was to explore humanoid consciousness and the possibilities of a technologically enhanced future with HCRs as participants in the care of human persons.
-
Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in her analysis of the human cognition-consciousness connection is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. This paper also offers a way of understanding the concept of suffering that differs from the conventional sense of attributing it to either a purely physical state or a conscious state, instead grounding at least one type of suffering in this form of cognition.
-
In the future, robots will increasingly resemble human beings, and people will engage in social interaction with them. Accordingly, this paper aims to pave the way for analysing the research problem of the probable future legal status of artificial intelligence in the case of social robots. The article discusses the differences between artificial intelligence and artificial consciousness: because AI poses societal challenges and is currently undergoing a number of important developments, the law must change rapidly, so the difference between artificial intelligence and artificial consciousness is first demystified. Subsequently, the current legal status of artificial intelligence in the EU is analysed, with particular emphasis on case-law in matters of intellectual property. Possible future scenarios are also discussed. The starting point of the research was source queries and literature studies aimed at jointly defining competence profiles of robot-human relations and the key cybersecurity challenges of Industry 4.0. Next, the most important EU legal and political programming documents were analysed and assessed in terms of their vision of Society 4.0. A decision-making method was then used to examine the impact of particular instruments applied by the Union, within the framework of its cyberspace-protection policy, on the phenomenon of robot-human relations. In connection with this, two basic questions arise: first, in which direction should contemporary policy for combating cyber-terrorism be aimed in institutional and legal matters; and second, to what extent can a well-guided cybersecurity policy influence the security of robot-human relations?
-
Conscious processing is a useful aspect of brain function that can serve as a model for designing artificial-intelligence devices. There are still certain computational features that our conscious brains possess and that machines currently fail to perform. This paper discusses the elements needed to make a device conscious and suggests that, if these were implemented, the resulting machine would likely be considered conscious. Consciousness is mainly presented as a computational tool that evolved to connect the modular organization of the brain. Specialized modules of the brain process information unconsciously, and what we subjectively experience as consciousness is the global availability of data, made possible by a non-modular global workspace. During conscious perception, the global neuronal workspace in the parieto-frontal part of the brain selectively amplifies relevant pieces of information. Supported by large neurons with long axons, which make long-distance connectivity possible, the selected portions of information are stabilized and transmitted to all other brain modules. Brain areas with this structuring ability seem to be matched to specific computational problems. The global workspace maintains this information in an active state for as long as it is needed. In this paper, a broad range of theories and specific problems are discussed that need to be solved to make a machine conscious. Particular implications of these hypotheses for research approaches in neuroscience and machine learning are then debated.
-
The relatively new field of artificial intelligence (AI), defined as intelligence performed by machines, is crucial for progress in many disciplines in today's society, including medical diagnostics, electronic trading, robotic process automation in finance, healthcare, education, transportation and many more. However, until now, AIs have only been capable of performing very specific tasks such as low-level visual recognition, speech recognition, coordinated motor control, and pattern detection. What we still need to achieve is a form of everyday human-level performance based on common sense, where AIs are able to carry out adaptable planning and task execution and possess meaning-based natural-language understanding and generation. These are considered “conscious” or “creative” activities that are naturally part of our daily lives and which we execute without great mental effort. Developing conscious AI will allow us to gain knowledge and further our understanding of how consciousness works. In order to develop conscious and creative AI, machines must be self-aware; however, we hypothesize that current AI development is skipping the most important step on the path to AGI: introspection (self-analysis and awareness).
-
Angel, L. (2019). How To Build A Conscious Machine. Routledge. https://doi.org/10.4324/9780429033254
This book attempts to address both the engineering issue and the philosophical issue of building a conscious machine. It demonstrates the viability of the engineering project and presents the philosopher's specifications to the cognitive-scientist-cum-engineer as to what will count as a primitive android.