
Full bibliography (675 resources)

  • Human consciousness is our most perplexing quality, yet an adequate description of its workings has not appeared. One of the most promising ways to address this issue is to model consciousness with artificial intelligence (AI). This paper attempts to do that on a theoretical level with the methods of philosophy. First I review the relevant papers concerning human consciousness. Then, considering the state of the art of AI, I arrive at a model of artificial consciousness.

  • The past century has seen a resurgence of interest in the study of consciousness among scholars of various fields, from philosophy to psychology and neuroscience. Since the birth of Artificial Intelligence in the 1950s, the study of consciousness in machines has received an increasing amount of attention in computer science, giving rise to the new field of machine consciousness (MC). Meanwhile, interdisciplinary research in philosophy, neuroscience, and cognitive science has advanced neurocognitive theories of consciousness. Among the many models proposed for consciousness, the Global Workspace Theory (GWT) is a promising theory that has received a staggering amount of philosophical and empirical support in the past decades. This dissertation discusses the GWT and its potential for MC from a mechanistic point of view. To do so, Chapter 1 gives an overview of the philosophical study of consciousness and the history of MC. Then, in Chapter 2, mechanistic explanations and tri-level models are introduced, which provide a robust framework for constructing and assaying various theories of consciousness. In Chapter 3, neural correlates (and thereby, neurocognitive theories) of consciousness are introduced. This chapter presents the GWT in detail and, along with its strengths, discusses the philosophical issues it raises. Chapter 4 addresses two computational implementations of the GWT (viz., IDA and LIDA) which satisfy specific goals of MC. Finally, in Chapter 5, one of the philosophical problems of MC, namely the Frame Problem (FP), is introduced. It is argued that architectures based on the GWT are immune to the FP. The chapter concludes that the GWT is capable of "solving" the FP, and discusses its implications for MC and the computational theory of mind. Chapter 6 wraps up the dissertation by reviewing its content.

  • This paper attempts to provide a starting point for future investigations into artificial consciousness by proposing a thought experiment that aims to elucidate, and provide a potential ‘test’ for, the phenomenon known as consciousness in an artificial system. It suggests a method by which to determine the presence of conscious experience within an artificial agent, in a manner that is informed by, and understood as a function of, anthropomorphic conceptions of consciousness. The aim of this paper is to raise the possibility of progress: to propose that we reverse-engineer anthropic sentience by using machine sentience as a guide, much as an equation may be solved through inverse operations; this paper hopes to provoke such discussion and activity. The idea is this: the manifestation of an existential crisis in an artificial agent is the metric by which the presence of sentience can be discerned. It is that which expounds ACI, as distinct from AI and from AGI.

  • In today’s society, it becomes increasingly important to assess which non-human and non-verbal beings possess consciousness. This review article aims to delineate criteria for consciousness, especially in animals, while also taking into account intelligent artifacts. First, we circumscribe what we mean by “consciousness” and describe key features of subjective experience: qualitative richness, situatedness, intentionality and interpretation, integration, and the combination of dynamic and stabilizing properties. We argue that consciousness has a biological function, which is to present the subject with a multimodal, situational survey of the surrounding world and body, subserving complex decision-making and goal-directed behavior. This survey reflects the brain’s capacity for internal modeling of external events underlying changes in sensory state. Next, we follow an inside-out approach: how can the features of conscious experience, correlating to mechanisms inside the brain, be logically coupled to externally observable (“outside”) properties? Instead of proposing criteria that would each define a “hard” threshold for consciousness, we outline six indicators: (i) goal-directed behavior and model-based learning; (ii) anatomic and physiological substrates for generating integrative multimodal representations; (iii) psychometrics and meta-cognition; (iv) episodic memory; (v) susceptibility to illusions and multistable perception; and (vi) specific visuospatial behaviors. Rather than emphasizing a particular indicator as decisive, we propose that the consistency amongst these indicators can serve to assess consciousness in particular species. The integration of scores on the various indicators yields an overall, graded criterion for consciousness, somewhat comparable to the Glasgow Coma Scale for unresponsive patients.
When considering theoretically derived measures of consciousness, it is argued that their validity should not be assessed on the basis of a single quantifiable measure, but requires cross-examination across multiple pieces of evidence, including the indicators proposed here. Current intelligent machines, including deep learning neural networks (DLNNs) and agile robots, are not indicated to be conscious yet. Instead of assessing machine consciousness by a brief Turing-type test, evidence for it may gradually accumulate when we study machines ethologically and across time, considering multiple behaviors that require flexibility, improvisation, spontaneous problem-solving, and the situational conspectus typically associated with conscious experience.

  • Why would humanoid caring robots (HCRs) need consciousness? Because HCRs need to be gentle like human beings. In addition, HCRs need to be trusted by their patients, and have a shared understanding of patients' life experiences, their illnesses, and their treatments. HCRs need to express “competency as caring” to naturally convey their nursing as healing to patients and their families. HCRs should also have self-consciousness and express their emotions without needing inducement by persons' behaviors. Artificial “brains” and artificial consciousness are therefore necessary for HCRs. The purpose of this article was to explore humanoid consciousness and the possibilities of a technologically enhanced future with HCRs as participants in the care of human persons.

  • Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in her analysis of the human cognition-consciousness connection is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. The paper also offers a way of understanding the concept of suffering that differs from the conventional sense of attributing it to either a purely physical state or a conscious state, instead grounding at least one type of suffering in this form of cognition.

  • In the future, robots will increasingly resemble human beings, and people will engage in social interaction with them. Accordingly, this paper aims to pave the way for analysing the research problem of the probable future legal status of artificial intelligence in the case of social robots. The article discusses the differences between artificial intelligence and artificial consciousness: because AI poses societal challenges and is currently undergoing a number of important developments, the law must change rapidly, so, firstly, the difference between artificial intelligence and artificial consciousness is demystified. Subsequently, the current legal status of artificial intelligence in the EU is analysed, with particular emphasis on case law in matters of intellectual property. Possible future scenarios are also discussed. The starting point of the research was source queries and literature studies aimed at jointly defining robot-human competence profiles and the key challenges of cybersecurity 4.0. Next, the most important EU legal and political programming documents were analysed and assessed in terms of their vision of society 4.0. A decision-making method was then used to examine the impact of particular instruments applied by the Union in the framework of its cyberspace-protection policy and the phenomenon of robot-human relations. In connection with this, two basic questions arise: firstly, in which direction should contemporary policy against cyber-terrorism be aimed in institutional and legal matters, and secondly, to what extent can a well-guided cyber-security policy influence the security of robot-human relations?

  • Conscious processing is a useful aspect of brain function that can serve as a model for designing artificial-intelligence devices. There are still certain computational features that our conscious brains possess and that machines currently fail to perform. This paper discusses the elements needed to make a device conscious and suggests that, if those were implemented, the resulting machine would likely be considered conscious. Consciousness is presented mainly as a computational tool that evolved to connect the modular organization of the brain. Specialized modules of the brain process information unconsciously, and what we subjectively experience as consciousness is the global availability of data, which is made possible by a non-modular global workspace. During conscious perception, the global neuronal workspace in the parieto-frontal part of the brain selectively amplifies relevant pieces of information. Supported by large neurons with long axons, which make long-distance connectivity possible, the selected portions of information are stabilized and transmitted to all other brain modules. The brain areas with this structuring ability seem matched to a specific computational problem. The global workspace maintains this information in an active state for as long as it is needed. In this paper, a broad range of theories and specific problems that need to be solved to make a machine conscious are discussed. The particular implications of these hypotheses for research approaches in neuroscience and machine learning are then debated.

  • The relatively new field of artificial intelligence (AI), defined as intelligence performed by machines, is crucial for progress in many disciplines in today's society, including medical diagnostics, electronic trading, robotic process automation in finance, healthcare, education, transportation, and many more. However, until now, AIs have only been capable of performing very specific tasks such as low-level visual recognition, speech recognition, coordinated motor control, and pattern detection. What we still need to achieve is a form of everyday human-level performance that is based on common sense, where AIs are able to carry out adaptable planning and task execution and possess meaning-based natural-language capabilities and generation. These are considered to be “conscious” or “creative” activities that are naturally part of our daily lives and which we execute without great mental effort. Developing conscious AI will allow us to gain knowledge and further our understanding of how consciousness works. In order to develop conscious and creative AI, machines must be self-aware; however, we hypothesize that current AI developments are skipping the most important step on the path to AGIs: introspection (self-analysis and awareness).

  • This book attempts to address both the engineering issue and the philosophical issue of a machine. It demonstrates the viability of the engineering project and presents the philosopher's specifications to the cognitive-scientist-cum-engineer as to what will count as a primitive android.

  • Throughout the centuries philosophers have attempted to understand the disparity between conscious experience and the material world – i.e., the problem of consciousness and the apparent mind–body dualism. Achievements in the fields of biology, neurology, and information science in the last century have granted us more insight into the processes that govern our minds. While there are still many mysteries to be solved before we fully understand the inner workings of our brains, new discoveries suggest stepping away from the metaphysical philosophy of mind and closer to the computational viewpoint. In light of the advent of strong artificial intelligence and the development of increasingly complex artificial life models and simulations, we need a well-defined, formal theory of consciousness. To facilitate this, in this work we introduce mappism. Mappism is a framework in which alternative views on consciousness can be formally expressed in a uniform way, thus allowing one to analyze and compare existing theories, and enforcing the use of the language of mathematics, i.e., explicit functions and variables. Using this framework, we describe classical and artificial life approaches to consciousness.

  • The ever-wider use of artificial intelligence (AI) technologies tends toward uncontrolled growth. At the same time, modern scientific thought has no adequate understanding of the consequences of introducing artificial intelligence into a person's daily life as an irremovable element of it. In addition, the very essence of what could be called the "thinking" of artificial intelligence remains philosophical terra incognita. Yet it is precisely the features of intelligent machine processes that, both in their intermediate goals and in their final results, can pose serious threats. Modeling the "phenomenology of AI" leads to the need to reformulate central questions of the philosophy of consciousness, such as the "hard problem of consciousness", and requires a search for ways and means of articulating the "human dimension" of reality for AI. Theoretical basis. The study is based on a phenomenological methodology, which is applied in a model of artificial thinking. The implementation of artificial-intelligence technologies is not accompanied by the development of a philosophy of the coexistence of humans and AI. The algorithms underlying the activities of currently existing intelligent technologies do not guarantee that their intermediate and final results comply with ethical criteria. Today, one should ponder the nature and purpose of the separation of physical reality within the mental stream that is primary for our Self. The originality of the research lies in connecting the resolution of the "hard problem of consciousness" with the interpretation of qualia as representations of the "physical", understood as related to bodily states.
In the "thinking process" of AI it is necessary to apply restrictions that fix the metaphysical meaning of the human body with precisely human parameters. Conclusions. It is necessary to take a different look at the connection between thinking and purposeful action, including due action, which means looking at ethics differently. "The basis of universal law" will then consist (including for AI), on the one hand, in preserving the parameters of the material processes that are necessary for human existence and, on the other, in maintaining the integrity of the semantic universe in relation to which particular senses alone exist.

  • The target subject is a module called ‘AI Technology’, which applied the ideas of blended learning. First, lecture-style teaching was conducted with presentation slides to explain the contents of a textbook. Second, students were required to do exercises and quizzes. In the last eight weeks, they were asked to create presentation slides outside class introducing up-to-date topics in artificial intelligence. These slides were mutually evaluated among the students, who then revised their own slides based on the feedback before the tenth week of the course for a second round of mutual evaluation. Questionnaires concerning students’ understanding of the field’s technical terms and consciousness-raising towards competence were also conducted before and after the program. The learning effects of the ‘AI Technology’ module are compared with my previous research outcomes for the module ‘Artificial Intelligence’, and the reasons for the differences between the two modules are discussed. This paper reports these results.

  • Men admired the way fish swim, but today they sail faster than any fish. They wished to fly like the birds, but have flown far higher. They searched for wisdom, and now they have all the knowledge accumulated in history available in a few clicks. Human evolution is about to meet its peak in the Technological Singularity, which can be understood as the future milestone reached at the moment a computer program can think like a human, yet with instant access to all the information ever recorded by society. It will not be like a man, but more intelligent than all of mankind in history. So we face a big question: will this new entity have consciousness? Through a study of the levels of autonomy of intelligent agents, and in a timeless dialogue with Alan Turing, René Descartes, Ludwig Wittgenstein, John Searle, and Vernor Vinge, we show the possibility of an artificial consciousness and argue that the quest for intentionality, promoted by sophisticated algorithms of learning and machine discovery, is the key to reaching the Technological Singularity.

  • The accelerating advances in the fields of neuroscience, artificial intelligence, and robotics have been garnering interest and raising new philosophical, ethical, and practical questions that depend on whether or not there may exist a scientific method of probing consciousness in machines. This paper provides an analytic review of the existing tests for machine consciousness proposed in the academic literature over the past decade, and an overview of the diverse scientific communities involved in this enterprise. The tests put forward typically fall under one of two grand categories: architecture (the presence of consciousness is inferred from the correct implementation of a relevant architecture) and behaviour (the presence of consciousness is deduced by observing a specific behaviour). Each category has its strengths and weaknesses. The main advantage of architecture tests is that they could apparently test for qualia, a feature that has been receiving increasing attention in recent years. Behaviour tests are more synthetic and more practicable, but give a stronger role to ex post human interpretation of behaviour. We show how some disciplines and places have affinities towards certain types of tests, and which tests are more influential according to scientometric indicators.

  • The subjective experience of consciousness is at once familiar and yet deeply mysterious. Strategies exploring the top-down mechanisms of conscious thought within the human brain have been unable to produce a generalized explanatory theory that scales through evolution and can be applied to artificial systems. Information Flow Theory (IFT) provides a novel framework for understanding both the development and nature of consciousness in any system capable of processing information. In prioritizing the direction of information flow over information computation, IFT produces a range of unexpected predictions. The purpose of this manuscript is to introduce the basic concepts of IFT and explore the manifold implications regarding artificial intelligence, superhuman consciousness, and our basic perception of reality.

  • This half-day Symposium explores themes of digital art, culture, and heritage, bringing together speakers from a range of disciplines to consider technology with respect to artistic and academic practice. As we increasingly see ourselves and life through a digital lens and the world communicated on digital screens, we experience altered states of being and consciousness in ways that blur the lines between digital and physical reality, while our ways of thinking and seeing become a digital stream of consciousness that flows between place and cyberspace. We have entered the postdigital world and are living, working, and thinking with machines as our computational culture driven by artificial intelligence and machine learning embeds itself in everyday life and threads across art, culture, and heritage, juxtaposing them in the digital profusion of human creativity on the Internet.

  • My paper discusses post-human spaces and technological afterness associated with the physiognomy of humans. Mechanical alteration of biological mechanisms is directly experienced in the seizing of organic consciousness. The rupture in consciousness splits it into two distinct parts: one belonging to the disappearing human, the other to the emerging cybernetic. The new being is not another human, but (an)other human, an evolved different sameness. In the film Realive (2016) we encounter an extension of the self beyond death by re-placing it into another body. However, this enhancement diffuses all ‘natural’ responses and meaning-making vehicles, primarily the cognizance of death and mortality. In a classic Frankensteinian restoration, Marc is reanimated in 2084 through extensive methods of cryonization under the banner of the ‘Lazarus project’. The post-human ‘humachines’ dissolve the position of the teleological man and stretch DNA to digitality. Upgrade (2018) shows us the metamorphosis of Grey Trace, a luddite, by an installed biomechanical enhancer chip, Stem. The roach-like implant not only erases Grey’s quadriplegic body but ironically ‘desires’ to possess and manoeuvre the host’s body. Robotic consciousness in these assimilated after-humans is borrowed consciousness, activated by infusing the evanescent biological particle, life. Nanotechnology, molecular machines, nerve manipulators, cameras implanted inside the brain, self-generating nanobots, and artificial mechanical limbs have emerged as elements of posthuman utopia/dystopia. Paradoxically, in both films the protagonists, after their reanimation and upgrading, try to return to their original position of death and disability. In their quest to retrieve the lived body they lose their embodied reciprocations with animals, machines, and other forms of life. The mysterious, irreducible, unknown and unknowable potentiality of life is levelled and dissipated by surplus information.
This paper attempts to discuss the reactions of the embodied body as memory after cryonization, and to understand the limits of psychological disability and the death of consciousness after the technological reconstruction of the disabled body.

  • The IEEE work-group for Symbiotic Autonomous Systems defined a Digital Twin as a digital representation or virtual model of any characteristics of a real entity (system, process, or service), including human beings. The described characteristics are a subset of the overall characteristics of the real entity; the choice of which characteristics are considered depends on the purpose of the digital twin. This paper introduces the concept of the Associative Cognitive Digital Twin as a real-time, goal-oriented, augmented virtual description which explicitly includes the external relationships of the considered entity for the considered purpose. The corresponding graph data model of the involved world supports artificial consciousness and allows an efficient understanding of the ecosystems involved and of related higher-level cognitive activities. The cognitive architecture defined for Symbiotic Autonomous Systems is mainly based on the consciousness framework developed here. As a specific application example, an architecture for critical safety systems is shown.

  • The topic of AI continues in this chapter, this time looking at how we may regard AI as having intelligence, consciousness, and possibly a soul. The notion of an android soul is explored through science fiction series like Caprica and Black Mirror, raising the question of whether one is born with a soul or whether a soul develops over time. To explore this line of inquiry, I refer to Gurdjieff and Ouspensky’s work in the field of philosophy, as well as to how Indic religions, like Buddhism, have begun to think about AI and consciousness.

Last update from database: 12/31/25, 2:00 AM (UTC)