Full bibliography: 675 resources
-
We should not be creating conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates.
-
We might create artificial systems which can suffer. Since AI suffering could be astronomical in scale, the moral stakes are huge. Thus, we need an approach which tells us what to do about the risk of AI suffering. I argue that such an approach should ideally satisfy four desiderata: beneficence, action-guidance, feasibility, and consistency with our epistemic situation. Scientific approaches to AI suffering risk hold that we can improve our scientific understanding of AI, and of AI suffering in particular, to decrease AI suffering risks. However, such approaches tend to conflict either with the desideratum of consistency with our epistemic situation or with that of feasibility. Thus, we also need an explicitly ethical approach to AI suffering risk. Such an approach tells us what to do in light of profound scientific uncertainty about AI suffering. After discussing multiple views, I express support for a hybrid approach, based partly on the maximization of expected value and partly on a deliberative approach to decision-making.
-
Understanding the foundations of artificial cognitive consciousness remains a central challenge in artificial intelligence (AI) and cognitive science. Traditional computational models, including deep learning and symbolic AI, have demonstrated remarkable ...
-
This article provides a thematic overview and synthesis of selected literature on the ethical implications of artificial consciousness for moral personhood. It brings together key philosophical discussions to support the view that phenomenal consciousness is a necessary condition for machine moral status and personhood, despite the significant epistemic challenges and ethical objections involved. The article commences by outlining the conceptual groundwork, introducing and clarifying the concepts of moral personhood, moral rights, moral status, and phenomenal consciousness. It then connects these concepts by establishing two key necessity links: (a) moral status is necessary for moral rights (and, by extension, moral personhood), and (b) consciousness is required for moral status. The discussion proceeds by relating the implications of these necessity links to artificial entities like robots, highlighting the moral significance of artificial consciousness, presenting related policy discussions, and engaging with two significant challenges: the epistemic difficulty of determining consciousness in artificial entities, and the alternative view that robots could possess moral status without being conscious. In response, the article maintains that the epistemic challenges do not undermine the consciousness criterion but merely constrain its application, whereas the alternative criteria arguably fail to meet the more demanding threshold for moral status that is required for the possession of moral rights and personhood. By synthesizing insights across multiple debates, this article highlights the enduring significance of consciousness in ethical discourse and its important role in guiding future inquiries.
-
This paper is one approach (among many) to the question, “are AIs persons, or are they conscious?”, from a Heideggerian perspective. Here I argue for two claims. First, I argue that René Girard’s mimetic analysis of mitsein (being-with), one of Heidegger’s foundational concepts, illuminates what Heidegger takes mitsein to be. Second, I claim that this Girardian analysis gives us a way to answer the question of whether AIs have Dasein, and I argue that the answer is negative. Specifically, I claim that Dasein requires mitsein, and mitsein (according to Girard’s analysis) requires mimesis. Moreover, mimesis requires that the mimetic being finds truth in the mimetic object; that is, it comports in a meaningful way toward the unconcealed object being imitated by Dasein. But since AIs cannot comport in meaningful ways toward the object of imitation, i.e., they are not truth-apt, they cannot engage in mimetic behavior, and hence cannot have mitsein. But, necessarily, Dasein is being-with-others. Therefore, AIs cannot have Dasein. If we assume (as I think Heidegger would) that every person has Dasein, we may justifiably conclude that AIs are not persons, at least within a Heideggerian ontology.
-
Can machines ever become conscious? A central debate in this context concerns the question whether consciousness requires biological states. Within this debate, there exists a fundamental dispute between two widely endorsed ...