
Full bibliography (558 resources)

  • Our society is in the middle of the AI revolution. We discuss several applications of AI, in particular medical causality, where deep-learning neural networks screen through large databases, extracting associations between a patient’s condition and possible causes. While beneficial in medicine, several questionable AI trading strategies have emerged in finance. Although AI offers advantages in many aspects of our lives, it also poses serious threats. We suggest several regulatory measures to reduce these threats. We further discuss whether ‘full AI robots’ should be programmed with a virtual consciousness and conscience. While this would reduce AI threats via motivational control, other threats, such as the desire for AI-human socioeconomic equality, could prove detrimental.

  • In the last 10 years, Artificial Intelligence (AI) has seen successes in fields such as natural language processing, computer vision, speech recognition, robotics and autonomous systems. However, these advances are still considered Narrow AI, i.e. AI built for very specific or constrained applications. These applications are useful in improving the quality of human life, but they cannot perform the highly general tasks that humans can. The holy grail of AI research is to develop Strong AI or Artificial General Intelligence (AGI), which produces human-level intelligence, i.e. the ability to sense, understand, reason, learn and act in dynamic environments. Strong AI is more than just a composition of Narrow AI technologies. We propose that it must take a holistic approach towards understanding and reacting to the operating environment and the decision-making process. Strong AI must be able to demonstrate sentience, emotional intelligence, imagination, effective command of other machines or robots, and self-referring and self-reflecting qualities. This paper gives an overview of current Narrow AI capabilities, presents the technical gaps, and highlights future research directions for Strong AI. Could Strong AI become conscious? We provide some discussion pointers.

  • The technological singularity is popularly envisioned as a point in time when (a) an explosion of growth in artificial intelligence (AI) leads to machines becoming smarter than humans in every capacity, even gaining consciousness in the process; or (b) humans become so integrated with AI that we could no longer be called human in the traditional sense. This article argues that the technological singularity does not represent a point in time but a process in the ongoing construction of a collective consciousness. Innovations from the earliest graphic representations to the present reduced the time it took to transmit information, reducing the cognitive space between individuals. The steady pace of innovations ultimately led to the communications satellite, fast-tracking this collective consciousness. The development of AI in the late 1960s has been the latest innovation in this process, increasing the speed of information while allowing individuals to shape events as they happen.

  • Cognitive radios (CRs) use artificial intelligence algorithms to obtain an improved quality of service (QoS). CRs also benefit from meta-cognition algorithms that enable them to determine the most suitable intelligent algorithm for achieving their operational goals. Examples of intelligent algorithms used by CRs are support vector machines, artificial neural networks and hidden Markov models. Each of these intelligent algorithms can be realized in a different manner and used for different tasks, such as predicting the idle state and duration of a channel. The CR benefits from jointly using these intelligent algorithms and selecting the most suitable algorithm for prediction at an epoch of interest. The incorporation of meta-cognition also furnishes the CR with consciousness, because it makes the CR aware of its own learning mechanisms. CR consciousness consumes CR resources, i.e. battery and memory. This resource consumption should be reduced to increase the CR's resources available for data transmission. This paper proposes a meta-cognitive solution that reduces the CR resources associated with maintaining consciousness; it incorporates the time domain and uses information on the duration associated with executing learning and data transmission tasks. In addition, the proposed solution is integrated in a multimode CR. Evaluation shows that the average performance improvement for the CR transceiver power, computational resources and channel capacity lies in the ranges 18.3%-42.5%, 21.6%-44.8% and 9.5%-56.3%, respectively. A toy sketch of the selection step appears below.
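
As a rough illustration of the meta-cognitive selection this abstract describes, the sketch below keeps two toy channel-state predictors, tracks a running accuracy estimate for each, and picks the one with the best accuracy-versus-resource-cost trade-off. The predictor classes, cost figures and scoring rule are illustrative assumptions, not the paper's actual method.

```python
# Toy sketch of meta-cognitive algorithm selection for a cognitive radio.
# The predictor classes, cost figures and scoring rule are illustrative
# assumptions, not the paper's actual method.
import random

class MajorityPredictor:
    """Predicts whichever channel state has occurred most often so far."""
    cost = 1.0                                # assumed relative resource cost
    def __init__(self):
        self.counts = {0: 0, 1: 0}            # 0 = idle, 1 = busy
    def predict(self):
        return 0 if self.counts[0] >= self.counts[1] else 1
    def update(self, state):
        self.counts[state] += 1

class MarkovPredictor:
    """First-order Markov chain over idle/busy channel states."""
    cost = 2.0                                # assumed: costlier than majority
    def __init__(self):
        self.trans = {0: {0: 1, 1: 1}, 1: {0: 1, 1: 1}}  # Laplace-smoothed
        self.prev = 0
    def predict(self):
        row = self.trans[self.prev]
        return 0 if row[0] >= row[1] else 1
    def update(self, state):
        self.trans[self.prev][state] += 1
        self.prev = state

def meta_select(predictors, scores, penalty=0.05):
    """Meta-cognition step: best accuracy-minus-resource-cost trade-off."""
    return max(predictors, key=lambda p: scores[id(p)] - penalty * p.cost)

random.seed(1)
predictors = [MajorityPredictor(), MarkovPredictor()]
scores = {id(p): 0.5 for p in predictors}     # running accuracy estimates
alpha, state, hits = 0.1, 0, 0

for t in range(500):
    # Simulated bursty channel: keeps its current state 90% of the time.
    state = state if random.random() < 0.9 else 1 - state
    hits += meta_select(predictors, scores).predict() == state
    for p in predictors:                      # score and train every model
        scores[id(p)] += alpha * ((p.predict() == state) - scores[id(p)])
        p.update(state)

print(f"meta-selected accuracy: {hits / 500:.2f}")
print("final choice:", type(meta_select(predictors, scores)).__name__)
```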

  • In today’s society, it becomes increasingly important to assess which non-human and non-verbal beings possess consciousness. This review article aims to delineate criteria for consciousness especially in animals, while also taking into account intelligent artifacts. First, we circumscribe what we mean by “consciousness” and describe key features of subjective experience: qualitative richness, situatedness, intentionality and interpretation, integration and the combination of dynamic and stabilizing properties. We argue that consciousness has a biological function, which is to present the subject with a multimodal, situational survey of the surrounding world and body, subserving complex decision-making and goal-directed behavior. This survey reflects the brain’s capacity for internal modeling of external events underlying changes in sensory state. Next, we follow an inside-out approach: how can the features of conscious experience, correlating to mechanisms inside the brain, be logically coupled to externally observable (“outside”) properties? Instead of proposing criteria that would each define a “hard” threshold for consciousness, we outline six indicators: (i) goal-directed behavior and model-based learning; (ii) anatomic and physiological substrates for generating integrative multimodal representations; (iii) psychometrics and meta-cognition; (iv) episodic memory; (v) susceptibility to illusions and multistable perception; and (vi) specific visuospatial behaviors. Rather than emphasizing a particular indicator as being decisive, we propose that the consistency amongst these indicators can serve to assess consciousness in particular species. The integration of scores on the various indicators yields an overall, graded criterion for consciousness, somewhat comparable to the Glasgow Coma Scale for unresponsive patients (see the sketch below). When considering theoretically derived measures of consciousness, it is argued that their validity should not be assessed on the basis of a single quantifiable measure, but requires cross-examination across multiple pieces of evidence, including the indicators proposed here. Current intelligent machines, including deep learning neural networks (DLNNs) and agile robots, are not indicated to be conscious yet. Instead of assessing machine consciousness by a brief Turing-type of test, evidence for it may gradually accumulate when we study machines ethologically and across time, considering multiple behaviors that require flexibility, improvisation, spontaneous problem-solving and the situational conspectus typically associated with conscious experience.
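
The review describes the integration of indicator scores only qualitatively. As a minimal sketch, the snippet below scores the six listed indicators on an assumed 0-1 scale and discounts their average by the spread between them, so that consistency across indicators, not any single one, drives the overall graded criterion. The scale and the spread-based discount are assumptions, not the authors' formula.

```python
# Minimal sketch of the graded, multi-indicator criterion the review
# proposes. The indicator names follow the abstract; the 0-1 scale and
# the spread-based consistency discount are illustrative assumptions.
from statistics import mean, pstdev

INDICATORS = [
    "goal-directed behavior and model-based learning",
    "substrates for integrative multimodal representations",
    "psychometrics and meta-cognition",
    "episodic memory",
    "susceptibility to illusions and multistable perception",
    "specific visuospatial behaviors",
]

def graded_criterion(scores):
    """Combine per-indicator scores (0 = absent, 1 = strong evidence),
    discounting the average when the indicators disagree, so that
    consistency across indicators drives the overall grade."""
    values = [scores[name] for name in INDICATORS]
    return mean(values) * (1.0 - pstdev(values))

example = {
    "goal-directed behavior and model-based learning": 0.8,
    "substrates for integrative multimodal representations": 0.9,
    "psychometrics and meta-cognition": 0.6,
    "episodic memory": 0.7,
    "susceptibility to illusions and multistable perception": 0.7,
    "specific visuospatial behaviors": 0.8,
}
print(f"graded consciousness score: {graded_criterion(example):.2f}")
```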

  • In the future, robots will increasingly resemble human beings and people will engage in social interaction with them. Accordingly, this paper aims to pave the way for analysing a research problem: the probable future legal status of artificial intelligence in the case of social robots. Because AI poses societal challenges, is undergoing a number of important developments and requires the law to change rapidly in response, the article first attempts to demystify the difference between artificial intelligence and artificial consciousness. Subsequently, the current legal status of Artificial Intelligence in the EU is analysed, with particular emphasis on case law in matters of intellectual property. Possible future scenarios are also discussed. The starting point of the research was source queries and literature studies aimed at jointly defining robot-human competence profiles and the key challenges of cybersecurity 4.0. Next, the most important EU legal and political programming documents were analysed and assessed in terms of their vision of society 4.0. A decision-making method was then used to examine the impact of the particular instruments applied by the Union in the framework of its cyberspace protection policy on the phenomenon of robot-human relations. In connection with this, two basic questions arise: firstly, in which direction should contemporary policy on combating cyber-terrorism be aimed in institutional and legal matters; and secondly, to what extent can a well-guided cybersecurity policy influence the security of robot-human relations?

  • Conscious processing is a useful aspect of brain function that can be used as a model to design artificial-intelligence devices. There are still certain computational features that our conscious brains possess and that machines currently fail to perform. This paper discusses the elements needed to make a device conscious and suggests that, if those were implemented, the resulting machine would likely be considered conscious. Consciousness is presented mainly as a computational tool that evolved to connect the modular organization of the brain. Specialized modules of the brain process information unconsciously, and what we subjectively experience as consciousness is the global availability of data, made possible by a non-modular global workspace. During conscious perception, the global neuronal workspace in the parieto-frontal part of the brain selectively amplifies relevant pieces of information. Supported by large neurons with long axons, which make long-distance connectivity possible, the selected pieces of information are stabilized and transmitted to all other brain modules. The structuring abilities of brain areas seem to be matched to specific computational problems. The global workspace maintains this information in an active state for as long as it is needed. This paper discusses a broad range of theories and the specific problems that need to be solved to make a machine conscious. Finally, the particular implications of these hypotheses for research approaches in neuroscience and machine learning are debated.
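
As a pointer to how such a workspace can be rendered computationally, here is a minimal sketch of one global-workspace cycle: each module processes the stimulus locally and "unconsciously", the most salient proposal is selectively amplified, and the winning content is broadcast to every module and held while it stays relevant. The module names, the toy salience score and the decay rule are assumptions for illustration only, not the paper's model.

```python
# Toy sketch of a global-workspace cycle: specialized modules compute
# locally, the most salient result is selectively amplified, broadcast
# to every module, and held in the workspace while it remains relevant.
# Module names, salience scoring and decay are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    inbox: list = field(default_factory=list)
    def propose(self, stimulus):
        # Module-local, "unconscious" processing: returns (salience, content).
        salience = len(set(stimulus) & set(self.name))  # toy relevance score
        return salience, f"{self.name} interpretation of {stimulus!r}"
    def receive(self, content):
        self.inbox.append(content)       # the broadcast reaches every module

modules = [Module("vision"), Module("audition"), Module("motor")]
workspace, activation = None, 0.0

for stimulus in ["visual scene", "loud noise"]:
    proposals = [m.propose(stimulus) for m in modules]
    salience, content = max(proposals)   # selective amplification
    if salience > activation:            # new content ignites the workspace
        workspace, activation = content, float(salience)
        for m in modules:
            m.receive(workspace)         # global availability of the winner
    activation *= 0.5                    # decay: held only while needed
    print(f"workspace: {workspace}")
```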

  • The target subject is a module called ‘AI Technology’, which applied the ideas of blended learning. Firstly, lecture-style teaching was conducted with presentation slides in order to explain the contents of a textbook. Secondly, students were required to do exercises and quizzes. During the last eight weeks, they were asked to create presentation slides outside class introducing up-to-date topics in artificial intelligence. These slides were mutually evaluated among the students, who then revised their own slides based on the feedback before the tenth week of the course for a second round of mutual evaluations. Questionnaires concerning students’ understanding of the field’s technical terms and consciousness-raising towards competence were also conducted before and after the programme. The learning effects of the ‘AI Technology’ module are compared with my previous research outcomes for the module ‘Artificial Intelligence’, and the reasons for the differences between the two modules are discussed. This paper reports these results.

  • This paper presents the author’s attempt to justify the need for understanding the problem of the multilevel mind in artificial intelligence systems. It is assumed that consciousness and the unconscious are not equal in natural mental processes: the human conscious mind is supposedly a “superstructure” above unconscious automatic processes. Nevertheless, it is the unconscious that is the basis for the emotional and volitional manifestations of the human psyche and activity. At the same time, the alleged mental activity of Artificial Intelligence may be devoid of the evolutionary characteristics of the human mind. Several scenarios are proposed for the possible development of a “strong” AI through the prism of the creation (or evolution) of a machine unconscious. In addition, we propose two opposite approaches regarding the relationship between the unconscious and the conscious.

  • This paper describes a possible way to improve computer security by implementing a program with the following three features, related to a weak notion of artificial consciousness: (partial) self-monitoring, the ability to compute the truth of quantifier-free propositions, and the ability to communicate with the user. The integrity of the program could be enhanced by using a trusted computing approach, that is to say, a hardware module at the root of a chain of trust. This paper outlines a possible approach rather than an implementation (which would need further work), but the author believes that an implementation using current processors, a debugger, a monitoring program and a trusted processing module is currently possible.
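
Since the paper deliberately stops short of an implementation, the fragment below only gestures at the three features in miniature, in software rather than at the hardware level: self-monitoring is reduced to hashing the program's own source (a stand-in for the chain of trust a hardware module would anchor), quantifier-free propositions are evaluated over explicit truth assignments, and the verdict is reported to the user. All names and the encoding of propositions are assumptions.

```python
# Miniature sketch of the paper's three features: (partial) self-
# monitoring via a hash of this program's own source, evaluation of
# quantifier-free propositions, and communication with the user. The
# hash is a software stand-in for a hardware root of trust; all names
# and the proposition encoding are illustrative assumptions.
import hashlib
import operator

def self_hash():
    """Partial self-monitoring: digest of this program's own source."""
    with open(__file__, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

OPS = {"and": operator.and_, "or": operator.or_, "not": operator.not_}

def evaluate(prop, env):
    """Evaluate a quantifier-free proposition over truth assignments,
    e.g. ("and", "p", ("not", "q")) with env = {"p": True, "q": False}."""
    if isinstance(prop, str):
        return env[prop]                  # atomic proposition lookup
    op, *args = prop
    return OPS[op](*(evaluate(a, env) for a in args))

if __name__ == "__main__":
    print("self-monitor digest:", self_hash()[:16], "...")
    env = {"p": True, "q": False}
    prop = ("and", "p", ("not", "q"))
    # Communication with the user: report the verdict in plain text.
    print("proposition", prop, "is", evaluate(prop, env))
```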

  • The target subject is a module called ‘AI Technology’, which applied the ideas of blended learning. Firstly, lecture-style teaching was conducted with presentation slides in order to explain the contents of a textbook. Secondly, students were required to do exercises and quizzes. During the last eight weeks, they were asked to create presentation slides outside class introducing up-to-date topics in artificial intelligence. These slides were mutually evaluated among the students, who then revised their own slides based on the feedback before the tenth week of the course for a second round of mutual evaluations. Improving students’ consciousness of a module is meaningful; knowing the reasons why is more significant. To that end, activities useful for improving consciousness of the ‘AI Technology’ module are identified and compared with my previous research outcomes for the module ‘Artificial Intelligence’. Students are categorized into four groups by degree of consciousness using principal component analysis, as sketched below. This paper reports these results.
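
The abstract does not give the questionnaire items or the exact grouping rule, so the sketch below only illustrates the stated pipeline: reduce Likert-style responses with principal component analysis and split students into four groups by quartile of the leading component. The synthetic data, the 5-point scale and the quartile cut are assumptions.

```python
# Rough sketch of the grouping step described in the abstract: PCA on
# questionnaire responses, then four groups on the leading component.
# The synthetic data, 5-point scale and quartile cut are assumptions.
import numpy as np

rng = np.random.default_rng(0)
# 40 students x 10 consciousness-scale items, Likert-style 1..5 responses.
responses = rng.integers(1, 6, size=(40, 10)).astype(float)

# PCA via eigendecomposition of the covariance matrix.
centered = responses - responses.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues ascending
pc1 = centered @ eigvecs[:, -1]                 # scores on top component

# Four groups by quartile of the first principal component score.
quartiles = np.quantile(pc1, [0.25, 0.5, 0.75])
groups = np.digitize(pc1, quartiles)            # labels 0..3
for g in range(4):
    print(f"group {g}: {np.sum(groups == g)} students")
```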

  • Recently, there has been considerable interest in, and effort devoted to, the possibility of designing and implementing conscious robots, i.e., the chance that robots may have subjective experiences. Typical approaches, such as the global workspace, information integration, enaction, cognitive mechanisms and embodiment, i.e., the Good Old-Fashioned Artificial Consciousness (henceforth GOFAC), share the same conceptual framework. In this paper, we discuss GOFAC's basic tenets and their implications for AI and Robotics. In particular, we point out the intermediate-level fallacy as the central issue affecting GOFAC. Finally, we outline a possible alternative conceptual framework toward robot consciousness.

  • A model of an intentional self-observing system is proposed based on the structure and functions of astrocyte-synapse interactions in tripartite synapses. Astrocyte-synapse interactions are cyclically organized and operate via feedforward and feedback mechanisms, formally described by proemial counting. Synaptic, extrasynaptic and astrocyte receptors are interpreted as places with the same or different quality of information processing, described by the combinatorics of tritograms. It is hypothesized that receptors on the astrocytic membrane may embody intentional programs that select corresponding synaptic and extrasynaptic receptors for the formation of receptor-receptor complexes. Basically, the act of self-observation is generated if the actual environmental information is appropriate to the intended observation processed by receptor-receptor complexes. This mechanism is implemented in a robot brain, enabling the robot to experience environmental information as “its own”. It is suggested that this mechanism enables the robot to generate matches and mismatches between intended observations and the observations in the environment, based on the cyclic organization of the mechanism. In exploring an unknown environment, the robot may stepwise construct an observation space, stored in memory, commanded and controlled by the intentional self-observing system. Finally, the role of self-observation in machine consciousness is briefly discussed.

  • The major aim of artificial general intelligence (AGI) is to allow a machine to perform general intelligence tasks in the manner of its human counterparts. Hypothetically, this general intelligence in a machine can be achieved by establishing cross-domain optimization and learning-machine approaches. However, contemporary artificial intelligence (AI) capabilities are limited to narrow and specific domains utilizing machine learning. The concept of consciousness is a particularly interesting route to these approaches because consciousness simultaneously encodes and processes all types of information and seamlessly integrates them. Over the last several years, there has been a resurgence of interest in testing theories of consciousness using computer models. Studies of these models fall into four categories: external behavior associated with consciousness, cognitive characteristics associated with consciousness, computational architectures correlated with human consciousness, and the phenomenality of conscious machines. The critical challenge is to determine whether these artificial systems are capable of conscious states by providing a measurement of the extent to which they succeed in realizing consciousness in a machine. Several tests for machine consciousness have been proposed, yet their formulations are based on extrinsic measurement of consciousness, which is not inclusive, because many conscious artificial systems behave implicitly. This research proposes a new framework to test machine consciousness based on intrinsic measurement, the so-called Pak Pandir test. The framework leverages three quantum double-slit settings and information integration theory as the consciousness definition of choice.
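
The Pak Pandir test itself is not specified in this abstract; as a pointer to the "information integration" ingredient it names, the toy computation below measures how much a joint distribution over two units exceeds the product of its marginals (multi-information). This is a crude stand-in, not IIT's phi and not the proposed framework, and the example distribution is an assumption.

```python
# Toy multi-information computation: how far a joint distribution over
# two binary units departs from the product of its marginals. A crude
# stand-in for "integration", not IIT's phi; the distribution is assumed.
from math import log2

def entropy(dist):
    """Shannon entropy of a {outcome: probability} distribution, in bits."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, index):
    """Marginal distribution of one unit from a joint over tuples."""
    out = {}
    for states, p in joint.items():
        out[states[index]] = out.get(states[index], 0.0) + p
    return out

# Joint distribution over two binary units: perfectly correlated.
joint = {(0, 0): 0.5, (1, 1): 0.5}
integration = (entropy(marginal(joint, 0))
               + entropy(marginal(joint, 1))
               - entropy(joint))
print(f"multi-information: {integration:.2f} bits")  # 1.00 for this case
```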

  • Mind and intelligence are closely related to consciousness. Indeed, artificial intelligence (AI) is the most promising avenue towards artificial consciousness (AC). In the literature, however, consciousness has been considered the phenomenon least amenable to being understood or replicated by AI. Further, computational theories of mind (CTMs) render the mind as a computational system, and this is treated as a substantial hypothesis within the purview of AI. However, consciousness, which is a phenomenon of mind, is only partially tackled by this theory, and the CTM seems not to be considerably corroborated in this pursuit. Many valuable contributions have been made by researchers working strenuously in this domain, yet there is still a scarcity of globally accepted computational models of consciousness that can be used to design conscious intelligent machines, and existing contributions treat consciousness as a vague, incomplete and human-centred entity. In this paper, an attempt is made to analyse different theoretical and intricate issues pertaining to mind, intelligence and AC. Moreover, the paper discusses different computational models of consciousness, critically analyses the possibility of generating machine consciousness, and identifies the characteristics of a conscious machine. Further, several inquisitive questions are analysed, e.g., “Is it possible to devise, project and build a conscious machine?”, “Will artificially conscious machines be able to surpass the functioning of artificially intelligent machines?” and “Does consciousness reflect a peculiar way of information processing?”

  • Given that consciousness is an essential ingredient for achieving the Singularity, i.e., the notion that an Artificial General Intelligence device can exceed the intelligence of a human, the question of whether a computer can achieve consciousness is explored. Given that consciousness is being aware of one’s perceptions and/or of one’s thoughts, it is claimed that computers cannot experience consciousness: since a computer has no sensorium, it cannot have perceptions. In terms of being aware of its thoughts, it is argued that being aware of one’s thoughts is basically listening to one’s own internal speech. A computer has no emotions, and hence no desire to communicate; without the ability and/or desire to communicate, it has no internal voice to listen to and hence cannot be aware of its thoughts. In fact, it has no thoughts, because it has no sense of self, and thinking is about preserving one’s self. Emotions have a positive effect on the reasoning powers of humans, and therefore the computer’s lack of emotions is another reason why computers could never achieve the level of intelligence of a human, at least at the current level of the development of computer technology.

  • Some humans may seek to purchase pain-feeling robots for the purpose of torturing them – a sad fact about some humans. This chapter explores the Chinese room argument against robot pain. The role of the Chinese room thought experiment is to establish the truth of the claim that it is indeed possible for something or someone to run any arbitrarily selected program without thereby understanding Chinese. One way to construct a robotic copy of a human is by gradually transforming a human into a robot by a sequence of prosthetic replacements of the human’s naturally occurring parts, especially parts of the nervous system, with artificial analogs. Like all physiological systems in the human body, the nervous system is composed of causally interacting cells. The chapter emphasizes the ways in which the thought experiments in the respective arguments attempt to marshal hypothetical first-person accessible evidence concerning how one’s own mental life appears to oneself.

  • Humans are active agents in the design of artificial intelligence (AI), and our input into its development is critical. A case is made for recognizing the importance of including non-ordinary functional capacities of human consciousness in the development of synthetic life, in order for the latter to capture a wider range of the spectrum of neurobiological capabilities. These capacities can be revealed by studying self-cultivation practices designed by humans since prehistoric times for developing non-ordinary functionalities of consciousness. A neurophenomenological praxis is proposed as a model for self-cultivation by an agent in an entropic world. It is proposed that this approach will promote a more complete self-understanding in humans and enable a more thoroughly mutually beneficial relationship between life in vivo and life in silico.

  • Artificial intelligence capacities for consciousness are not equivalent to human consciousness: the level of autonomous, independent, volitional behavioral control characterized by independently functioning biologic systems. There is strong evidence, however, that artificially created systems, including Web-based search engines, empirically demonstrate machine-based equivalents of aspects of consciousness. They have attained this capacity based on high-level (in some cases suprabiologic) capabilities in defined aspects of consciousness (intelligence, attention, autonomy, and intention). Such systems also have the capacity to meet multiple definitional criteria for having cognitive processing that is approximately equivalent to dreaming. Computer-human interface systems have expanded both human and machine capacity to address and extend scientific understanding at all epistemological levels of current inquiry. Complexity theories of consciousness can be used to theoretically support such an attainment of conscious function.

Last update from database: 3/23/25, 8:36 AM (UTC)