  • The present article compares human and artificial intelligence (AI) intentionality and personhood. It focuses on the difference between “intrinsic” intentionality—the object-directedness that derives from animate existence and its drive for survival, and that appears most especially in human conscious activity—and a more functional notion of “intentional relation” that does not require consciousness. The present article treats intentional relations as objective concepts that can apply equally to animate beings, robots, and AI systems. On this view, large language models are best described as disembodied Cartesian egos, while humanoid robots, even with large language model brains, are still far from satisfying benchmarks of embodied personhood. While robots constructed by humans have only borrowed intentionality and limited forms of objective intentional relations, in the future, robots may construct themselves. If these self-constructed robots are adaptive and can persist for multiple generations as a new kind of species, then it is reasonable to suppose that they have their own form of intrinsic intentionality, different from that of the animate beings currently existing on Earth.

  • This article provides a thematic overview and synthesis of selected literature on the ethical implications of artificial consciousness for moral personhood. It brings together key philosophical discussions to support the view that phenomenal consciousness is a necessary condition for machine moral status and personhood, despite the significant epistemic challenges and ethical objections involved. The article commences by laying the conceptual groundwork, introducing and clarifying the concepts of moral personhood, moral rights, moral status, and phenomenal consciousness. It then connects these concepts by establishing two key necessity links: (a) moral status is necessary for moral rights (and, by extension, moral personhood) and (b) consciousness is required for moral status. The discussion proceeds by relating the implications of these necessity links to artificial entities like robots, highlighting the moral significance of artificial consciousness, presenting related policy discussions, and engaging with two significant challenges: the epistemic difficulty of determining consciousness in artificial entities and the alternative view that robots could possess moral status without being conscious. In response, the article maintains that the epistemic challenges do not undermine the consciousness criterion but merely constrain its application, whereas the alternative criteria arguably fail to meet the more demanding threshold for moral status that is required for the possession of moral rights and personhood. By synthesizing insights across multiple debates, this article highlights the enduring significance of consciousness in ethical discourse and its important role in guiding future inquiries.

Last update from database: 2/1/26, 2:00 AM (UTC)