Full bibliography: 703 resources
-
The present article compares human and artificial intelligence (AI) intentionality and personhood. It focuses on the difference between “intrinsic” intentionality—the object-directedness that derives from animate existence and its drive for survival, and appears most especially in human conscious activity—and a more functional notion of “intentional relation” that does not require consciousness. It treats intentional relations as objective concepts that can apply equally to animate beings, robots, and AI systems. As such, large language models are best described as disembodied Cartesian egos, while humanoid robots, even with large language model brains, are still far from satisfying benchmarks of embodied personhood. While robots constructed by humans have borrowed intentionality and limited forms of objective intentional relations, in the future, robots may construct themselves. If these self-constructed robots are adaptive and can persist for multiple generations as a new kind of species, then it is reasonable to suppose that they have their own form of intrinsic intentionality, different from that of animate beings currently existing on Earth.
-
We should not be creating conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates.
-
We might create artificial systems which can suffer. Since AI suffering might potentially be astronomical, the moral stakes are huge. Thus, we need an approach which tells us what to do about the risk of AI suffering. I argue that such an approach should ideally satisfy four desiderata: beneficence, action-guidance, feasibility and consistency with our epistemic situation. Scientific approaches to AI suffering risk hold that we can improve our scientific understanding of AI, and AI suffering in particular, to decrease AI suffering risks. However, such approaches tend to conflict with either the desideratum of consistency with our epistemic situation or with feasibility. Thus, we also need an explicitly ethical approach to AI suffering risk. Such an approach tells us what to do in the light of profound scientific uncertainty about AI suffering. After discussing multiple views, I express support for a hybrid approach. This approach is partly based on the maximization of expected value and partly on a deliberative approach to decision-making.
-
Computational functionalism dominates current debates on AI consciousness. This is the hypothesis that subjective experience emerges entirely from abstract causal topology, regardless of the underlying physical substrate. We argue this view ...
-
Understanding the foundations of artificial cognitive consciousness remains a central challenge in artificial intelligence (AI) and cognitive science. Traditional computational models, including deep learning and symbolic AI, have demonstrated remarkable …
-
Could an AI have conscious experiences? Answers to this question should be based not on intuition, dogma or speculation but on solid scientific evidence. However, I argue such evidence is hard to come by and that the only justifiable stance is agnosticism. The main division in the contemporary literature is between biological views that are sceptical of artificial consciousness and functional views that are sympathetic to it. I show that both camps make the same mistake of overstating what the available evidence tells us. I then consider what agnosticism means for the ethical problems surrounding the creation of artificial consciousness.
-
This article provides a thematic overview and synthesis of selected literature on the ethical implications of artificial consciousness for moral personhood. It brings together key philosophical discussions to support the view that phenomenal consciousness is a necessary condition for machine moral status and personhood, despite significant epistemic challenges and ethical objections involved. The article commences by outlining the conceptual groundwork, which introduces and clarifies the concepts of moral personhood, moral rights, moral status, and phenomenal consciousness. It then connects these concepts by establishing two key necessity links: (a) moral status is necessary for moral rights (and, by extension, moral personhood) and (b) consciousness is required for moral status. The discussion proceeds by relating the implications of the necessity links to artificial entities like robots, highlighting the moral significance of artificial consciousness, presenting related policy discussions, and engaging with two significant challenges: the epistemic difficulty of determining consciousness in artificial entities and the alternative view that robots could possess moral status without being conscious. In response, the article maintains that the epistemic challenges do not undermine the consciousness criterion but merely constrain its application, whereas the alternative criteria arguably fail to meet the more demanding threshold for moral status that is required for the possession of moral rights and personhood. By synthesizing insights across multiple debates, this article highlights the enduring significance of consciousness in ethical discourse and its important role in guiding future inquiries.
-
This paper offers one approach (among many) to the question, “are AIs persons or are they conscious?” from a Heideggerian perspective. Here I argue for two claims. First, I argue that René Girard’s mimetic analysis of mitsein (being-with), one of Heidegger’s foundational concepts, illuminates what Heidegger takes mitsein to be. Second, I claim that this Girardian analysis gives us a way to answer the question of whether AIs have Dasein, and I argue that the answer is negative. Specifically, I claim that Dasein requires mitsein, and mitsein (according to Girard’s analysis) requires mimesis. Moreover, mimesis requires that the mimetic being finds truth in the mimetic object, that is, it comports in a meaningful way toward the unconcealed object being imitated by Dasein. But since AIs cannot comport in meaningful ways toward the object of imitation, i.e., they are not truth-apt, they cannot engage in mimetic behavior, hence cannot have mitsein. But, necessarily, Dasein is being-with-others. Therefore, AIs cannot have Dasein. If we assume (as I think Heidegger would) that every person has Dasein, we may justifiably conclude that AIs are not persons, at least from a Heideggerian ontology.