
Full bibliography 716 resources

  • Abstract Throughout the centuries, philosophers have attempted to understand the disparity between conscious experience and the material world – i.e., the problem of consciousness and the apparent mind–body dualism. Achievements in the fields of biology, neurology, and information science in the last century have granted us more insight into the processes that govern our minds. While there are still many mysteries to be solved before we fully understand the inner workings of our brains, new discoveries suggest stepping away from the metaphysical philosophy of mind and closer to the computational viewpoint. In light of the advent of strong artificial intelligence and the development of increasingly complex artificial life models and simulations, we need a well-defined, formal theory of consciousness. To facilitate this, in this work we introduce mappism. Mappism is a framework in which alternative views on consciousness can be formally expressed in a uniform way, thus allowing one to analyze and compare existing theories, and enforcing the use of the language of mathematics, i.e., explicit functions and variables. Using this framework, we describe classical and artificial life approaches to consciousness.

  • The widespread use of Artificial Intelligence (AI) technologies tends toward uncontrolled growth. At the same time, modern scientific thought lacks an adequate understanding of the consequences of introducing artificial intelligence into a person's daily life as an irremovable element of it. In addition, the very essence of what could be called the "thinking" of artificial intelligence remains philosophical terra incognita. However, it is precisely the features of the flow of intelligent machine processes that, both from the point of view of intermediate goals and in the sense of final results, can pose serious threats. Modeling the "phenomenology of AI" leads to the need to reformulate the central questions of the philosophy of consciousness, such as the "hard problem of consciousness", and requires the search for ways and means of articulating the "human dimension" of reality for AI. Theoretical basis. The study is based on a phenomenological methodology, which is used in the model of artificial thinking. The implementation of Artificial Intelligence technologies is not accompanied by the development of a philosophy of the coexistence of humans and AI. The algorithms underlying currently existing intellectual technologies do not guarantee that their intermediate and final results comply with ethical criteria. Today, one should ponder the nature and purpose of separating physical reality within the mental stream that is primary for our Self. The originality of the research lies in connecting the resolution of the "hard problem of consciousness" with the interpretation of qualia as the representation of the "physical" as related to bodily states.
In the "thinking process" of AI, it is necessary to apply restrictions related to fixing the metaphysical meaning of the human body with precisely human parameters. Conclusions. It is necessary to take a different look at the connection between thinking and purposeful action, including due action, which means looking at ethics differently. "The basis of universal law" will then consist (including for AI), on the one hand, of preserving the parameters of material processes that are necessary for human existence, and on the other, of maintaining the integrity of that semantic universe in relation to which certain senses alone exist.

  • The target subject is a module called ‘AI Technology’, which applied the ideas of blended learning. First, lecture-style teaching was conducted with presentation slides to explain the contents of a textbook. Second, students were required to do exercises and quizzes. During the last eight weeks, they were asked to create presentation slides outside class to introduce up-to-date topics in artificial intelligence. These slides were mutually evaluated among the students so that they could develop their own slides, based on the feedback, before the tenth week of the course for the second round of mutual evaluations. Questionnaires concerning students’ understanding of the technical terms of the field and consciousness-raising towards competence were also conducted before and after the programs. The learning effects of the ‘AI Technology’ module are compared with my previous research outcomes for the module ‘Artificial Intelligence’, and the reasons for the differences between the two modules are discussed. This paper reports these results.

  • Men admired the way fish swim, but today they sail faster than any fish. They wished to fly like the birds, but have flown far higher. They searched for wisdom; now they have all the knowledge accumulated throughout history available in a few clicks. Human evolution is about to meet its peak through the Technological Singularity, which can be understood as the future milestone reached at the moment a computer program can think like a human, yet with quick access to all the information already registered by society. It will not be like a man, but more intelligent than all mankind in history. So we face a big question: will this new entity have consciousness? Through a study of the levels of autonomy of intelligent agents, and in a timeless dialogue with Alan Turing, René Descartes, Ludwig Wittgenstein, John Searle, and Vernor Vinge, we show the possibility of an artificial consciousness and that the quest for intentionality, promoted by sophisticated algorithms of learning and machine discovery, is the key to reaching the Technological Singularity.

  • The accelerating advances in the fields of neuroscience, artificial intelligence, and robotics have been garnering interest and raising new philosophical, ethical, and practical questions that depend on whether there may exist a scientific method of probing consciousness in machines. This paper provides an analytic review of the tests for machine consciousness proposed in the academic literature over the past decade, and an overview of the diverse scientific communities involved in this enterprise. The tests put forward in these works typically fall under one of two grand categories: architecture (the presence of consciousness is inferred from the correct implementation of a relevant architecture) and behaviour (the presence of consciousness is deduced by observing a specific behaviour). Each category has its strengths and weaknesses. The main advantage of architecture tests is that they could apparently test for qualia, a feature that has been receiving increasing attention in recent years. Behaviour tests are more synthetic and more practicable, but give a stronger role to ex post human interpretation of behaviour. We show how some disciplines and places have affinities towards certain types of tests, and which tests are more influential according to scientometric indicators.

  • The subjective experience of consciousness is at once familiar and yet deeply mysterious. Strategies exploring the top-down mechanisms of conscious thought within the human brain have been unable to produce a generalized explanatory theory that scales through evolution and can be applied to artificial systems. Information Flow Theory (IFT) provides a novel framework for understanding both the development and nature of consciousness in any system capable of processing information. In prioritizing the direction of information flow over information computation, IFT produces a range of unexpected predictions. The purpose of this manuscript is to introduce the basic concepts of IFT and explore the manifold implications regarding artificial intelligence, superhuman consciousness, and our basic perception of reality.

  • This half-day Symposium explores themes of digital art, culture, and heritage, bringing together speakers from a range of disciplines to consider technology with respect to artistic and academic practice. As we increasingly see ourselves and life through a digital lens and the world communicated on digital screens, we experience altered states of being and consciousness in ways that blur the lines between digital and physical reality, while our ways of thinking and seeing become a digital stream of consciousness that flows between place and cyberspace. We have entered the postdigital world and are living, working, and thinking with machines as our computational culture driven by artificial intelligence and machine learning embeds itself in everyday life and threads across art, culture, and heritage, juxtaposing them in the digital profusion of human creativity on the Internet.

  • ABSTRACT My paper discusses post-human spaces and technological afterness associated with the physiognomy of humans. Mechanical alteration of biological mechanisms is directly experienced in the seizing of organic consciousness. The rupture in consciousness splits it into two distinct parts: one belonging to the disappearing human, the other to the emerging cybernetic. The new being is not another human, but (an)other human, an evolved different sameness. In the film Realive (2016) we encounter an extension of the self beyond death by re-placing it into another body. However, this enhancement diffuses all ‘natural’ responses and meaning-making vehicles, primarily the cognizance of death and mortality. In a classic Frankensteinian restoration, Marc is reanimated in 2084 through extensive methods of cryonization under the banner of the ‘Lazarus project’. The post-human ‘humachines’ dissolve the position of the teleological man and stretch DNA to digitality. Upgrade (2018) shows us the metamorphosis of Grey Trace, a luddite, by an installed biomechanical enhancer chip, Stem. The roach-like implant not only erases Grey’s quadriplegic body, but ironically ‘desires’ to possess and manoeuvre the host’s body. Robotic consciousness in these assimilated after-humans is borrowed consciousness, activated by infusing the evanescent biological particle: life. Nanotechnology, molecular machines, nerve manipulators, cameras implanted inside the brain, self-generating nanobots, and artificial mechanical limbs have emerged as elements of posthuman utopia/dystopia. Paradoxically, in both films the protagonists, after their reanimation and upgrading, try to return to their original position of death and disability. In their quest to retrieve the lived body they lose their embodied reciprocations with animals, machines and other forms of life. The mysterious, irreducible, unknown and unknowable potentiality of life is levelled and dissipated by surplus information.
This paper attempts to discuss the reactions of the embodied body as memory post-cryonization, and to understand the limits of psychological disability and the death of consciousness after the technological reconstruction of the disabled body.

  • The IEEE work-group for Symbiotic Autonomous Systems defined a Digital Twin as a digital representation or virtual model of any characteristics of a real entity (system, process, or service), including human beings. The described characteristics are a subset of the overall characteristics of the real entity; the choice of which characteristics are considered depends on the purpose of the digital twin. This paper introduces the concept of the Associative Cognitive Digital Twin, a real-time, goal-oriented, augmented virtual description which explicitly includes the external relationships of the considered entity for the considered purpose. The corresponding graph data model of the involved world supports artificial consciousness, and allows an efficient understanding of the involved ecosystems and related higher-level cognitive activities. The cognitive architecture defined for Symbiotic Autonomous Systems is mainly based on the consciousness framework developed here. As a specific application example, an architecture for critical safety systems is shown.

  • The topic of AI continues in this chapter, this time looking at how we may regard AI as having intelligence, consciousness, and possibly a soul. The notion of an android soul is explored through science fiction series like Caprica and Black Mirror, raising the question of whether one is born with a soul or whether a soul develops over time. To explore this line of inquiry, I refer to Gurdjieff and Ouspensky’s work in the field of philosophy, as well as to how Indic religions, like Buddhism, have begun to think about AI and consciousness.

  • What is the capacity of an informal network of organizations to produce answers to complex tasks requiring the integration of masses of information, conceived as a high-level cognitive and collective activity? Are some network configurations more favourable than others for accomplishing these tasks? We present a method to make these assessments, inspired by the Information Integration Theory derived from the modelling of consciousness. First, we evaluate the informational network created by the sharing of information between organizations for the realization of a given task. Then we assess the network’s natural ability to integrate information, a capacity determined by the partition of its members whose information links are the least efficient. We illustrate the method with an analysis of various functional integrations of Southeast Asian organizations forming a spontaneous network that participates in the study and management of interactions between health and environment. Several guidelines are then proposed to continue the development of this fruitful analogy between artificial and organizational consciousness (while refraining from assuming that either exists).

  • This paper presents the author’s attempt to justify the need to understand the problem of the multilevel mind in artificial intelligence systems. It is assumed that consciousness and the unconscious are not equal in natural mental processes. The conscious human mind is supposedly a “superstructure” above unconscious automatic processes. Nevertheless, it is the unconscious that is the basis for the emotional and volitional manifestations of the human psyche and activity. At the same time, the alleged mental activity of Artificial Intelligence may be devoid of the evolutionary characteristics of the human mind. Several scenarios are proposed for the possible development of a “strong” AI through the prism of the creation (or evolution) of a machine unconscious. In addition, we propose two opposite approaches regarding the relationship between the unconscious and the conscious.

  • This paper describes a possible way to improve computer security by implementing a program with the following three features, related to a weak notion of artificial consciousness: (partial) self-monitoring, the ability to compute the truth of quantifier-free propositions, and the ability to communicate with the user. The integrity of the program could be enhanced by using a trusted computing approach, that is to say, a hardware module at the root of a chain of trust. This paper outlines a possible approach but does not describe an implementation (which would need further work); the author believes, however, that an implementation using current processors, a debugger, a monitoring program, and a trusted processing module is currently possible.

  • Recent studies have suggested that individuals are not able to develop a sense of joint agency during joint actions with automata. We sought to examine whether this lack of joint agency is linked to individuals’ inability to co-represent automaton-generated actions. Fifteen participants observed or performed a Simon response time task either individually, or jointly with another human or a computer. Participants reported the time interval between their response (or the co-actor’s response) and a subsequent auditory stimulus, which served as an implicit measure of participants’ sense of agency. Participants’ reaction times showed a classical Simon effect when they were partnered with another human, but not when they collaborated with a computer. Furthermore, participants showed a vicarious sense of agency when co-acting with another human agent but not with a computer. This absence of a vicarious sense of agency during human–computer interactions, and its relation to action co-representation, are discussed.

  • This book discusses what artificial intelligence can truly achieve: Can robots really be conscious? Can we merge with AI, as tech leaders like Elon Musk and Ray Kurzweil suggest? Is the mind just a program? Examining these issues, the author proposes ways we can test for machine consciousness, questions whether consciousness is an unavoidable byproduct of sophisticated intelligence, and considers the overall dangers of creating machine minds.

  • A basic structure and behavior for a human-like AI system with consciousness-like functions is proposed. The system is constructed entirely from artificial neural networks (ANNs), and an optimal-design approach is applied. The proposed system, which uses recurrent neural networks (RNNs) that learn under dynamic equilibrium, is a redesign of the ANNs in the previous system. The redesign using RNNs makes the proposed brain-like autonomous adaptive system more plausible as a macroscopic model of the brain. By hypothesizing that the “conscious sensation” that constitutes the basis for phenomenal consciousness is the same as the “state of system-level learning”, we can clearly explain consciousness from an information-system perspective. This hypothesis can also comprehensively explain the recurrent processing theory (RPT) and the global neuronal workspace theory (GNWT) of consciousness. The proposed structure and behavior are simple but scalable by design, and can be expanded to reproduce more complex features of the brain, leading to the realization of an AI system with functions equivalent to human-like consciousness.

  • The target subject is a module called ‘AI Technology’, which applied the ideas of blended learning. First, lecture-style teaching was conducted with presentation slides to explain the contents of a textbook. Second, students were required to do exercises and quizzes. During the last eight weeks, they were asked to create presentation slides outside class to introduce up-to-date topics in artificial intelligence. These slides were mutually evaluated among the students so that they could develop their own slides, based on the feedback, before the tenth week of the course for the second round of mutual evaluations. Improving consciousness of a module is meaningful, and knowing the reasons for the improvement is more significant; to this end, activities useful for improving consciousness of the ‘AI Technology’ module are identified. The module is then compared with my previous research outcome for the module ‘Artificial Intelligence’. Students are categorized into four groups on degree scales of consciousness by principal component analysis. This paper reports these results.

  • There is little information on how to design a social robot that effectively executes consciousness-emotion (C-E) interaction in a socially acceptable manner. In fact, the development of such socially sophisticated interactions depends on the models of human high-level cognition implemented in the robot’s design. Therefore, a fundamental research problem of social robotics in terms of effective C-E interaction processing is to define a computational architecture of the robotic system in which cognitive-emotional integration occurs, and to determine the cognitive mechanisms underlying consciousness, along with its subjective aspect, in detecting emotions. Our conceptual framework rests upon the assumptions of a computational approach to consciousness, which holds that consciousness and its subjective aspect are specific functions of the human brain that can be implemented in an artificial social robot’s construction. Such a research framework for developing C-E interaction addresses the field of machine consciousness, which indicates important computational correlates of consciousness in such an artificial system and the possibility of objectively describing such mechanisms with quantitative parameters based on signal-detection and threshold theories.

  • Recently, there has been considerable interest in, and effort toward, the possibility of designing and implementing conscious robots, i.e., the chance that robots may have subjective experiences. Typical approaches such as the global workspace, information integration, enaction, cognitive mechanisms, and embodiment, i.e., the Good Old-Fashioned Artificial Consciousness (henceforth, GOFAC), share the same conceptual framework. In this paper, we discuss GOFAC's basic tenets and their implications for AI and Robotics. In particular, we point out the intermediate-level fallacy as the central issue affecting GOFAC. Finally, we outline a possible alternative conceptual framework toward robot consciousness.

Last update from database: 5/16/26, 1:00 AM (UTC)