
Full bibliography 558 resources

  • Artificial intelligence continues to develop rapidly and prompts people to think about artificial consciousness. An anthropocentric understanding treats consciousness as a unique feature of human beings, not possessed by other living beings. However, advances in software and hardware have demonstrated an ability to process, analyze, and draw inferences from increasingly comprehensive data in ways that approach the performance of the human brain. Furthermore, applying artificial intelligence to human-friendly objects that can communicate with humans evokes the impression of consciousness within those objects. This paper discusses the presence of artificial consciousness in humanoid robots as an evolutionary continuation of artificial intelligence and estimates its implications for architecture, primarily within interior design. Consciousness has a special place in architecture, as it guides intelligence in engineering and raises it to an abstract level, such as aesthetics. The paper draws on popular material from Internet conversations and on theories in existing scientific journals. It concludes that the adaptability of both parties, and the future balance of positions between them, will shape the development of interior design approaches that integrate artificial intelligence and humans.

  • The question of self-aware artificial intelligence may turn on the question of the human self. To explore some of the possibilities in play, we start from the assumption that the self is often, pre-analytically and by default, conceptually viewed along lines that are likely based on the kind of Abrahamic faith notion expressed by a “true essence” (although not necessarily a static one), such as is given in the often vaguely used “soul”. Yet we contend that the self is separately definable, and in relatively narrow terms; if so, of what could the self be composed? We begin with a brief review of the descriptions of the soul as expressed by some sample scriptural references taken from these religious lineages, and then attempt a self-concept in psychological and cognitive terms that differentiates and delimits it from the ambiguous word “soul”. From these efforts will also emerge the type of elements that are needed for a self to be present, allowing us to think of the self in an artificial intelligence (AI) context. If AI might have a self, could it be substantively close to a human’s? Would an “en-selved” AI be achievable? We will argue that there are reasons to think so, but that everything hinges on how we understand consciousness, and hence ruminating on that area—and the possibility or lack thereof in extension to non-organic devices—will comprise our summative consideration of the pertinent theoretical aspects. Finally, the practical will need to be briefly addressed, and for this, some of the questions that would have to be asked regarding what it might mean ethically to relate to AI if an “artificial self” could indeed arise will be raised but not answered. To think fairly about artificial intelligence without anthropomorphizing it, we need to better understand our own selves and our own minds. This paper will attempt to analyze the self within these bounds.

  • One of the current AI issues depicted in popular culture is the fear of conscious super-AIs that try to take control over humanity. As computational power increases and that scenario edges closer to reality, understanding artificial brains may become increasingly important for controlling AI and directing it toward the benefit of our societies. This paper proposes a base framework to aid the development of autonomous, multipurpose artificial brains. To approach this, we propose first modeling the functioning of the human brain by reflecting on, and taking inspiration from, the way the body, the consciousness, and the unconscious interact. To that end, we model events such as sensing, thinking, dreaming, and acting, whether deliberately or unconsciously. We believe valuable insights can already be drawn from the analysis and critique of the presented framework, and that it might be worth implementing it, with or without changes, to create, study, understand, and control artificially conscious systems.
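
The paper's framework is described only at a conceptual level, so the following is no more than a hypothetical minimal sketch of a sense-think-act loop with a reflexive ("unconscious") path and a deliberative ("conscious") path; all class, method, and variable names are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch of a sense-think-act loop with a split between
# deliberate ("conscious") and reflexive ("unconscious") processing.
# Names are illustrative; they are not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class Agent:
    memory: list = field(default_factory=list)

    def sense(self, stimulus):
        self.memory.append(stimulus)          # accumulate experience
        return stimulus

    def unconscious_react(self, percept):
        # fast, rule-like response that bypasses deliberation
        return f"reflex:{percept}" if percept == "danger" else None

    def conscious_think(self, percept):
        # slower response that consults accumulated experience
        return f"plan:{percept} (history={len(self.memory)})"

    def step(self, stimulus):
        percept = self.sense(stimulus)
        return self.unconscious_react(percept) or self.conscious_think(percept)

agent = Agent()
print(agent.step("danger"))   # reflex:danger
print(agent.step("food"))     # plan:food (history=2)
```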

  • Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned Artificial Superintelligence (ASI), such as Coherent Extrapolated Volition (CEV), have focused on ensuring that an Artificial Superintelligence would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds, could also be affected by the ASI’s behavior in morally relevant ways. This paper puts forward Sentientist Coherent Extrapolated Volition, an alternative to CEV, that directly takes into account the interests of all sentient beings. This ambitious value learning proposal would significantly reduce the likelihood of risks of astronomical suffering from the ASI’s behavior, and thus we have very strong pro-tanto moral reasons in favor of implementing it instead of the CEV. This fact is crucial in conducting an adequate cost–benefit analysis between different ambitious value learning proposals.

  • The possibility of AI consciousness depends greatly on the correct answer to the mind–body problem: how does our material brain generate subjective consciousness? If a materialistic answer is valid, machine consciousness must be possible, at least in principle, though the actual instantiation of consciousness may still take a very long time. If a non-materialistic answer (either mentalist or dualist) is valid, machine consciousness is much less likely, perhaps impossible, as some mental element may also be required. Some recent advances in neurology (for example, that even after separation of the two hemispheres the brain as a whole still produces only one conscious agent, and the refutation of the absence of free will that the Libet experiments were previously thought to establish) and many results of parapsychology (on medium communications, memories of past lives, and near-death experiences) suggestive of survival after our biological death strongly support the non-materialistic position and hence a much lower likelihood of AI consciousness. Instead of being concerned about AI becoming conscious and about machine ethics, and instead of trying to instantiate AI consciousness soon, we should perhaps focus more on making AI less costly and more useful to society.

  • This article will explore the expressivity and tractability of vividness, as viewed from the interdisciplinary perspective of the cognitive sciences, including the sub-disciplines of artificial intelligence, cognitive psychology, neuroscience, and phenomenology. Following the precursor work by Benussi in experimental phenomenology and seminal papers by David Marks in psychology and, later, Hector Levesque in computer science, a substantial part of the discussion has been around a symbolic approach to the concept of vividness. At the same time, a similar concept linked to semantic memory, imagery, and mental models has had a long history in cognitive psychology, with new emerging links to cognitive neuroscience. More recently, there is a push towards neural-symbolic representations, which allows room for integrating brain models of vividness with a symbolic concept of vividness. Such work leads us to question the phenomenology of vividness in the context of consciousness, and the related ethical concerns. The purpose of this paper is to review the state of the art, advances, and further potential developments of artificial-human vividness while laying the ground for a shared conceptual platform for dialogue, communication, and debate across all the relevant sub-disciplines. Within this context, an important goal of the paper is to define the crucial role of vividness in grounding simulation and modeling within the psychology (and neuroscience) of human reasoning.

  • As artificial intelligence (AI) continues to proliferate across manufacturing, economic, medical, aerospace, transportation, and social realms, ethical guidelines must be established not only to protect humans at the mercy of automated decision making, but also to protect autonomous agents themselves, should they become conscious. While AI appears "smart" to the public and may outperform humans on specific tasks, the truth is that today’s AI lacks insight beyond the restricted scope of problems with which it has been tasked. Without context, AI is effectively incapable of comprehending the true nature of what it does and is oblivious to the reverberations it may cause in the real world should it err in prediction. Despite this, future AI may be equipped with enough sensors and neural processing capacity to acquire a dynamic cognizance more akin to that of humans. If this materializes, will autonomous agents question their own position in this world? One must entertain the possibility that this is not merely hypothetical but may, in fact, be imminent if humanity succeeds in creating artificial general intelligence (AGI). If autonomous agents with the capacity for artificial consciousness are delegated grueling tasks, outcomes could mirror the plight of exploited workers and result in retaliation, failure to comply, pursuit of alternative objectives, or the breakdown of human-autonomy teams. It will be critical to decide how and in which contexts various agents should be utilized. Additionally, delineating the meaning of trust and ethical consideration between humans and machines is problematic because descriptions of trust and ethics have only been detailed in human terms. This means autonomous agents will be subject to anthropomorphism, but robots are not humans, and their experience of trust and ethics might be markedly distinct from that of humans. Ideally, to fully entrust a machine with human-centered tasks, one must believe that such an entity is reliable, competent, and has the appropriate priorities.

  • Much of current artificial intelligence (AI) and the drive toward artificial general intelligence (AGI) focuses on developing machines for functional tasks that humans accomplish. These may be narrowly specified tasks as in AI, or more general tasks as in AGI – but typically these tasks do not target higher-level human cognitive abilities, such as consciousness or morality; these are left to the realm of so-called “strong AI” or “artificial consciousness.” In this paper, we focus on how a machine can augment humans rather than do what they do, and we extend this beyond AGI-style tasks to augmenting peculiarly personal human capacities, such as wellbeing and morality. We base this proposal on associating such capacities with the “self,” which we define as the “environment-agent nexus”; namely, a fine-tuned interaction of brain with environment in all its relevant variables. We consider richly adaptive architectures that have the potential to implement this interaction by taking lessons from the brain. In particular, we suggest conjoining the free energy principle (FEP) with the temporo-spatial dynamics (TSD) view of neuro-mental processes. Our proposed integration of FEP and TSD – in the implementation of artificial agents – offers a novel, expressive, and explainable way for artificial agents to adapt to different environmental contexts. The targeted applications are broad: from adaptive intelligence-augmenting agents (IAs) that assist psychiatric self-regulation to environmental disaster prediction and personal assistants. This reflects the central role of the mind and moral decision-making in most of what we do as humans.
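
For readers unfamiliar with the free energy principle (FEP) mentioned in the entry above, the quantity an FEP-based agent minimizes is usually written as the variational free energy; the block below gives the standard textbook formulation, with generic notation that is not taken from the paper.

```latex
% Variational free energy F of approximate beliefs q(s) about hidden states s,
% given an observation o (standard formulation; notation not from the paper):
F(q, o) = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
        = D_{\mathrm{KL}}\big[q(s)\,\big\|\,p(s \mid o)\big] - \ln p(o)
```

Minimizing F with respect to q(s) drives the beliefs toward the posterior p(s | o), and choosing actions that minimize expected free energy keeps observations unsurprising; this is, roughly, the adaptive mechanism the abstract proposes to conjoin with TSD.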

  • This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that it is not necessary for computers to be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning systems in areas such as natural language processing and image classification, important questions remain about their limited performance and about whether their cognitive abilities entail genuine understanding or are the product of spurious correlations. Here I draw a distinction between natural, artificial, and machine understanding. I analyse some concrete examples of natural understanding and show that although it shares properties with the artificial understanding implemented in current machine learning systems, it also has some essential differences, the main one being that natural understanding in humans entails consciousness. Moreover, evidence from psychology and neurobiology suggests that it is this capacity for consciousness that, at least in part, explains the superior performance of humans in some cognitive tasks and may also account for the authenticity of semantic processing that seems to be the hallmark of natural understanding. I propose a hypothesis that might help to explain why consciousness is important to understanding. In closing, I suggest that progress toward implementing human-like understanding in machines—machine understanding—may benefit from a naturalistic approach in which natural processes are modelled as closely as possible in mechanical substrates.

  • Hofstadter [1979, 2007] offered a novel Gödelian proposal which purported to reconcile the apparently contradictory theses that (1) we can talk, in a non-trivial way, of mental causation being a real phenomenon and that (2) mental activity is ultimately grounded in low-level rule-governed neural processes. In this paper, we critically investigate Hofstadter’s analogical appeals to Gödel’s [1931] First Incompleteness Theorem, whose “diagonal” proof supposedly contains the key ideas required for understanding both consciousness and mental causation. We maintain that bringing sophisticated results from Mathematical Logic into play cannot furnish insights which would otherwise be unavailable. Lastly, we conclude that there are simply too many weighty details left unfilled in Hofstadter’s proposal. These really need to be fleshed out before we can even hope to say that our understanding of classical mind-body problems has been advanced through metamathematical parallels with Gödel’s work.
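
The metamathematical result the entry above leans on can be stated in a standard textbook form; the block below is included only as background for readers and is not a reconstruction of Hofstadter's analogy.

```latex
% Standard statement of Gödel's First Incompleteness Theorem (background only):
\textbf{First Incompleteness Theorem.}\quad
Let $T$ be a consistent, recursively axiomatizable theory that interprets
elementary arithmetic. Then there is a sentence $G_T$, obtained by
diagonalization so that $T \vdash G_T \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G_T \urcorner)$,
with $T \nvdash G_T$; and if $T$ is $\omega$-consistent (or, by Rosser's
refinement, merely consistent), then also $T \nvdash \neg G_T$.
```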

  • There are several lessons that can already be drawn from current research programs on strong AI and on building conscious machines, even if they have arguably not yet borne fruit. The first is that functionalist approaches to consciousness do not account for the key importance of subjective experience and can easily be confounded by the way in which algorithms work and succeed. Authenticity and emergence are key concepts that can be useful in discerning valid approaches from invalid ones and can clarify cases in which algorithms, such as Sophia or LaMDA, are considered conscious. Subjectivity and embeddedness become key notions that should also lead us to re-examine the ethics of decision delegation. In addition, the focus on subjective experience shifts what is relevant in our understanding of ourselves as human beings and as an image of God, namely by de-emphasizing intellectuality in favor of experience, and contemplation over action.

  • Consciousness and intelligence are properties commonly understood as interdependent by folk psychology and society in general. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been used as an argument to establish that machines experience some sort of consciousness. Following Russell's analogy, if a machine is able to do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are given to entities that can solve the kinds of problems that a neurotypical person can, does the machine potentially have more rights than a person who has a disability? For example, autism spectrum disorder can make a person unable to solve the kinds of problems that a machine solves. We believe that the obvious answer is no, as problem solving does not imply consciousness. Consequently, we argue in this paper that phenomenal consciousness and, at least, computational intelligence are independent, and explain why machines do not possess phenomenal consciousness, although they can potentially develop a higher computational intelligence than human beings. To do so, we try to formulate an objective measure of computational intelligence and study how it presents in human beings, animals, and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed in humans, animals, and machines. Since phenomenal consciousness and computational intelligence are independent, this fact has critical implications for society, which we also analyze in this work.

  • The Transformer artificial intelligence model is one of the most accurate models for extracting meaning/semantics from sets of symbolic sequences of various lengths, including long sequences. These models transform language spaces according to long- and short-distance relationships among units of the language, and thus mimic some aspects of human comprehension of the world. To frame a generalized theory of the identification and generation of meaning in human thought, the transformer model needs to be understood in the context of generalized systems theory, so that other equivalent models can be discovered, compared, and selected in order to converge on a base model of the meaning-identification and discovery aspect of the philosophy of knowledge, or epistemology. This paper explores the relationships of the transformer model and its various component parts, processes, and phenomena to critical aspects of generalized systems theory such as cognition, symmetry & equivalence, holons, emergence, identifiability, system spaces and the system universe, reconstructability, equilibriums & oscillations, scaling, polystability, ontogeny, algedonic loops, heterarchy, holarchy, homeorhesis, isomorphism, homeostasis, attractors, equifinality, nesting, parallelization, loops, causal structure, transformations, feedbacks, encodings, and information complexity.
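
The mechanism by which a Transformer captures the long- and short-distance relationships among language units mentioned above is scaled dot-product self-attention. The following is a minimal NumPy sketch of that standard operation, offered as a generic illustration; it is not code from the paper.

```python
# Minimal scaled dot-product attention, illustrating how a Transformer relates
# every position in a symbolic sequence to every other position.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                         # mix all positions

# Toy example: a 4-token sequence with 8-dimensional representations.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)   # self-attention
print(out.shape)                              # (4, 8)
```

Multi-head attention repeats this operation over several learned projections of the same sequence, which is what lets the model track several kinds of relationships among units at once.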

  • This study focused on reviewing several research articles related to various perspectives on and significant recent developments in Artificial Intelligence (AI). The main goal of this review was to gain insight into the implications of causal reasoning models in artificial intelligence. The review analyses state-of-the-art research articles and evaluations of applications, techniques, algorithms, and trends in the field of Artificial Intelligence. By presenting recent results, the study places a strong emphasis on fundamental aspects of causal reasoning, logic, and the computational structures of Strong AI agents. The findings outline the importance of implementing causal reasoning methods in AI systems in order to achieve, in the future, a truly intelligent machine. We conclude that causal reasoning provides an important approach to advancing our understanding of artificial consciousness. Extensive research is needed in the future to validate and evaluate different causal tools for supporting causal reasoning in AI.
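
To make the contrast at the heart of causal reasoning concrete, the sketch below builds a small structural causal model and compares an observational regression slope with an interventional (do-operator) effect. The variables, mechanisms, and numbers are hypothetical illustrations and are not drawn from the reviewed articles.

```python
# A toy structural causal model  Z -> X -> Y,  Z -> Y  (Z is a confounder),
# contrasting what observation suggests with what intervention reveals.
import numpy as np

rng = np.random.default_rng(1)

def sample(n, do_x=None):
    """Sample from the SCM; if do_x is given, force X := do_x (an intervention)."""
    z = rng.normal(size=n)                                        # confounder
    x = (0.8 * z + rng.normal(scale=0.5, size=n)) if do_x is None else np.full(n, do_x)
    y = 1.5 * x + 1.0 * z + rng.normal(scale=0.5, size=n)
    return x, y

x_obs, y_obs = sample(100_000)
slope_obs = np.polyfit(x_obs, y_obs, 1)[0]        # biased upward by Z
_, y_do0 = sample(100_000, do_x=0.0)
_, y_do1 = sample(100_000, do_x=1.0)
effect = y_do1.mean() - y_do0.mean()              # true causal effect of X on Y
print(f"observational slope ~ {slope_obs:.2f}, interventional effect ~ {effect:.2f}")
```

The observational slope (about 2.4 here) overstates the effect because the confounder Z drives both X and Y, while the interventional difference recovers the true coefficient of 1.5; this gap is the kind of distinction causal reasoning methods are meant to handle.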

  • Recently, the question of whether a machine can have its own “self-consciousness” has become a focus of concern and reflection. Research on machine consciousness, or artificial consciousness, has gradually become a hot spot in the field of artificial intelligence (AI). Given the common-sense view of the human being as the only intelligent life with “self-consciousness”, only human self-consciousness can be taken as a model for building AI with self-consciousness. In this paper, theories of self-consciousness from the perspectives of psychology, cognitive neuroscience, philosophy, and cognitive science are introduced in the hope of providing new ideas for the development of AI with self-consciousness.

  • How to make robots conscious is an important goal of artificial intelligence. Despite the emergence of some very creative ideas, no convincing way to realize consciousness on a machine has been proposed so far. For example, the integrated information theory of consciousness proposes that consciousness can exist in any place that can reasonably process information, whether brain or machine. It points out that a physical system must satisfy two basic conditions for consciousness to emerge: it must have rich information, and it must be highly integrated. However, the theory does not say how to realize consciousness on a machine. In this paper, we propose robot consciousness based on empirical knowledge. We believe that the empirical knowledge of robots is an important basis for robot consciousness, and that any cognitive experience of a robot can lead to the generation of consciousness. We first propose a formal framework for describing robot empirical knowledge; we then discuss robot consciousness based on that knowledge; finally, we propose a cost-oriented evolutionary method for robot consciousness.
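
The abstract does not spell out its formal framework, so the sketch below is only a hypothetical illustration of how records of robot "empirical knowledge" might be stored and reused under a cost-oriented selection rule; the field names and example data are assumptions made for illustration.

```python
# Hypothetical record of robot experiences and a cost-oriented choice among them.
# The representation and names are illustrative only, not the paper's formalism.
from dataclasses import dataclass

@dataclass(frozen=True)
class Experience:
    situation: str     # what the robot perceived
    action: str        # what it did
    cost: float        # resources or penalty incurred
    outcome: str       # what resulted

knowledge = [
    Experience("door_closed", "push", cost=2.0, outcome="door_open"),
    Experience("door_closed", "pull", cost=1.0, outcome="door_open"),
    Experience("door_closed", "wait", cost=0.5, outcome="door_closed"),
]

def choose(situation, goal):
    """Pick the lowest-cost remembered action that achieved the goal."""
    options = [e for e in knowledge if e.situation == situation and e.outcome == goal]
    return min(options, key=lambda e: e.cost, default=None)

print(choose("door_closed", "door_open"))  # -> the 'pull' experience (cost 1.0)
```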

  • This paper has a critical and a constructive part. The first part formulates a political demand, based on ethical considerations: Until 2050, there should be a global moratorium on synthetic phenomenology, strictly banning all research that directly aims at or knowingly risks the emergence of artificial consciousness on post-biotic carrier systems. The second part lays the first conceptual foundations for an open-ended process with the aim of gradually refining the original moratorium, tying it to an ever more fine-grained, rational, evidence-based, and hopefully ethically convincing set of constraints. The systematic research program defined by this process could lead to an incremental reformulation of the original moratorium. It might result in a moratorium repeal even before 2050, in the continuation of a strict ban beyond the year 2050, or a gradually evolving, more substantial, and ethically refined view of which — if any — kinds of conscious experience we want to implement in AI systems.

  • Contemporary debates on Artificial General Intelligence (AGI) center on what philosophers classify as descriptive issues: those concerning the architecture and style of information processing required for multiple kinds of optimal problem-solving. This paper focuses on two topics that are central to developing AGI and that concern normative, rather than descriptive, requirements for an AGI's epistemic agency and responsibility. The first is that a collective kind of epistemic agency may be the best way to model AGI. This collective approach is possible only if solipsistic considerations concerning phenomenal consciousness are set aside, thereby focusing on the cognitive foundation that attention and access consciousness provide for collective rationality and intelligence. The second is that joint attention and motivation are essential for AGI in the context of linguistic artificial intelligence. Focusing on GPT-3, this paper argues that without a satisfactory solution to this second normative issue regarding joint attention and motivation, there cannot be genuine AGI, particularly in conversational settings.

  • Human cognizance is the key capacity that would allow machines to adopt the thought process of a human. This paper gives an outline of AI awareness. Our primary interest is in exploring whether AI can develop human-like consciousness; if that is possible, what are the ethical rules, and how would machines conform to societal norms? Many researchers have discussed models for creating consciousness in machines. We consider the cognitive capacities in which machines differ from people and how these capacities can serve human knowledge in building an advanced future world. The paper describes live examples of AI achievements. The imagined future world can already be glimpsed in entertainment such as films and web series; using these entertainment examples, we discuss what AI might become once human intelligence is integrated with machine intelligence.

Last update from database: 3/23/25, 8:36 AM (UTC)