Robot Ethics

Edited by Vincent C. Müller (Universität Erlangen-Nürnberg)
About this topic
Summary Robot ethics concerns the ethical problems raised by the use of robots, as well as the ethical status of the robots themselves and the attempt to make them ethical (the latter is often called "machine ethics"). On PhilPapers, the long-term risk to humanity from AI and robotics is filed under "Ethics of Artificial Intelligence" and "Artificial Intelligence Safety".
Key works A classic discussion is Wallach & Allen 2008, and a recent textbook is Tzafestas 2016. Collected papers can be found in Lin et al 2014 and Veruggio et al 2011 (and earlier in Capurro & Nagenborg 2009). Classic problems are the use of robots in war (see Di Nucci & Santoni de Sio 2016) and in healthcare, the responsibility for their actions, the need to adjust human ethical and legal norms to robotics, and the overall impact on humanity. Some sources on the field are collected at http://www.pt-ai.org/TG-ELS/
Introductions Consult the systematic survey Müller 2020 (for the Stanford Encyclopedia of Philosophy). There is a fine introduction in the short paper Asaro 2006, as well as in the editors' introductions to Lin et al 2014, Veruggio et al 2011 and Capurro & Nagenborg 2009. (See also the collection Capurro manuscript.)

Contents
555+ found
  1. Robot Lives Matter? The Coming Issue that will Tear Liberalism Asunder.Marc Champagne - manuscript
    The issue of whether to grant legal and moral rights to artificial intelligence and robots is poised to become politically significant, as evidenced by recent legislation regarding “electronic persons” and prominent academics who frame the denial of such rights as analogous to past forms of discrimination. Moving away from the usual concern with AI and economic planning, this chapter explores how the blurring line between humans and machines enables new forms of attachment, manipulation, and social influence. These developments challenge classical (...)
  2. Invasive Technologies and Endangered Experiences (book manuscript).Marc Champagne - manuscript
    What would judicious technology use look like? In this book, I use a mix of logical argumentation, phenomenological attention, and myth interpretation to champion caution in the face of uncritical consumption. If forced to choose, I would pick meaningful inefficiency over meaningless efficiency. Unfortunately, questioning technological development often gets dismissed as “Luddism.” As a philosopher trained to examine arguments on all sides of an issue, I do not find this lopsidedness helpful. Things would not turn out well for a driver (...)
  3. Can a robot lie?Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
    14 citations
  4. Virtues, robots, and the enactive self.Anco Peeters - manuscript
    Virtue ethics enjoys new-found attention in philosophy of technology and philosophical psychology. This attention informs the growing realization that virtue has an important role to play in the ethical evaluation of human–technology relations. But it remains unclear which cognitive processes ground such interactions in both their regular and virtuous forms. This paper proposes that an embodied, enactive cognition approach aptly captures the various ways persons and artefacts interact, while at the same time avoiding the explanatory problems its functionalist alternative faces. (...)
  5. Surviving The Robot Apocalypse: The Existential Option.Nicholas Schroeder - manuscript
    AI superintelligence and adroit mobile robots at scale are fast approaching. And the time frame is getting closer and closer. It would not be unreasonable to expect this to occur as early as 20 years from now. The problem is humans have no plan if things go wrong. The best I've seen is talk of value alignment. But this has no teeth and will likely go awry. We can't even get our own value alignment right. And it's doubtful philosophers will (...)
  6. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
    In this article we explore a promising way to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade “paperclip maximizer” that it is in (...)
  7. Varieties of Moral Agency and Risks of Digital Dystopia.Adam Bradley & Bradford Saad - forthcoming - American Philosophical Quarterly.
    We argue that AIs will plausibly soon possess a form of moral agency—interest-conferring agency—that bestows them with distinctive moral interests (rights, welfare). This fact has important ethical consequences because the emergence of agency-conferred interests in AIs will bring with it the potential for dystopian moral catastrophes. We identify and describe three in particular. First, there is a threat of artificial absurdity, a condition in which AIs have self-conceptions that are disconnected from reality in a way that detracts significance from their (...)
    7 citations
  8. If robots are people, can they be made for profit? Commercial implications of robot personhood.Bartek Chomanski - forthcoming - AI and Ethics.
    It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any (...)
    2 citations
  9. Automated Propaganda: Labeling AI‐Generated Political Content Should Not be Required by Law.Bartek Chomanski & Lode Lauwaert - forthcoming - Journal of Applied Philosophy.
    A number of scholars and policy-makers have raised serious concerns about the impact of chatbots and generative artificial intelligence (AI) on the spread of political disinformation. An increasingly popular proposal to address this concern is to pass laws that, by requiring that artificially generated and artificially disseminated content be labeled as such, aim to ensure a degree of transparency in this rapidly transforming environment. This article argues that such laws are misguided, for two reasons. We first aim to show that (...)
  10. Commentary: Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure.Geoff Keeling - forthcoming - Frontiers in Behavioral Neuroscience.
    4 citations
  11. Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues.Ronald Leenes, Erica Palmerini, Bert-Jaap Koops, Andrea Bertolini, Pericle Salvini & Federica Lucivero - forthcoming - Law, Innovation and Technology.
    Robots are slowly, but certainly, entering people's professional and private lives. They require the attention of regulators due to the challenges they present to existing legal frameworks and the new legal and ethical questions they raise. This paper discusses four major regulatory dilemmas in the field of robotics: how to keep up with technological advances; how to strike a balance between stimulating innovation and the protection of fundamental rights and values; whether to affirm prevalent social norms or nudge social norms (...)
    10 citations
  12. The Robo-Barbie Dilemma: How should we treat artificial moral patients?Morgan Luck, Thomas Montefiore & Christopher Bartel - forthcoming - Philosophical Quarterly.
    Artificial moral patients (or AMPs) are those things successfully made to resemble moral patients, but are not. They are artificial both in the sense that they are made by us (artefacts), and that they are not a real instance of what they are made to resemble (artifice). ChatGPT, sex dolls, social robots, and non-player characters are all examples of AMPs. As these technologies start to resemble humans with greater accuracy the question as to how we should treat them becomes increasingly (...)
  13. Discerning genuine and artificial sociality: a technomoral wisdom to live with chatbots.Katsunori Miyahara & Hayate Shimizu - forthcoming - In Vincent C. Müller, Leonard Dung, Guido Löhr & Aliya Rumana, Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    Chatbots powered by large language models (LLMs) are increasingly capable of engaging in what seems like natural conversations with humans. This raises the question of whether we should interact with these chatbots in a morally considerate manner. In this chapter, we examine how to answer this question from within the normative framework of virtue ethics. In the literature, two kinds of virtue ethics arguments, the moral cultivation and the moral character argument, have been advanced to argue that we should afford (...)
  14. AI Welfare Risks.Adrià Moret - forthcoming - Philosophical Studies.
    In the coming years or decades, as frontier AI systems become more capable and agentic, it is increasingly likely that they meet the sufficient conditions to be welfare subjects under the three major theories of well-being. Consequently, we should extend some moral consideration to advanced AI systems. Drawing from leading philosophical theories of desire, affect and autonomy I argue that under the three major theories of well-being, there are two AI welfare risks: restricting the behaviour of advanced AI systems and (...)
    13 citations
  15. Dialogues on Minds, Machines, and AI.Rocco J. Gennaro - 2026 - Routledge Press.
    Dialogues on Minds, Machines, and AI invites readers into a series of thought-provoking debates among three college seniors bound for graduate school: Sue, completing her double major in philosophy and cognitive science; John, a computer engineering specialist; and Amy, a psychology major. Through five engaging lunchtime conversations, these students bring their diverse perspectives to fundamental questions about consciousness, artificial intelligence, and the nature of mind. -/- The dialogues seamlessly blend discussions of popular science fiction films with critical examinations of recent (...)
  16. Dangerous gatekeeping.David Gunkel, Anna Puzio & Joshua Gellers - 2026 - AI and Society 41 (4).
    Philosophers love a hierarchy. Nothing seems to fit the needs and desires of the moral imagination more than the promise of a clean, orderly ladder of moral worth with (not surprisingly) humans at the top, a few “higher” animals somewhere beneath, plants and rocks at the bottom, and now, far outside the frame, artificial agents politely waiting their turn. In addition, it is in the context of AI that this impulse to police the moral boundary has returned with renewed urgency. (...)
  17. An Introduction to Ethics in Robotics and AI (Five Years Later). [REVIEW]Philip Højme - 2026 - Journal of Ethics and Emerging Technologies 36 (1).
    New developments in robotics and AI necessitate revisions to introductions. This review critically assesses An Introduction to Ethics in Robotics and AI by Bartneck et al. (2021). The review both criticises central shortcomings in the book and engages with specific parts that, in light of recent developments, ought to be revised. The review concludes that the book provides an apt introduction for laypeople, students, and scholars seeking a concise overview of the ethics of robotics and AI.
  18. Robots and AI are not one moral category: why the distinction matters for ethical and conscious systems.Ahmet Kucukuncular - 2026 - Frontiers in Robotics and AI 13 (1776097).
    Calls that pair ethical and conscious AI with ethical and conscious robots may feel natural. Many contemporary robots use machine learning, and many AI systems are described in agentive terms. Yet the pairing can hide a conceptual shortcut. It quietly suggests that AI ethics and robot ethics are the same moral question applied to different shells. My claim in this opinion piece is modest but consequential: treating robotics and AI as a single moral category encourages avoidable category mistakes about where (...)
  19. Death bots, grief bots, and posthumous avatars: death technology from a Christian ethics perspective.Anna Puzio - 2026 - AI Ethics 6 (251).
    With death bots, AI-enabled digital representations of deceased individuals can be created. While religions have long played a central role in death and mourning practices, this domain is increasingly being taken over by companies and commercial enterprises. In this article, I aim to reintroduce the perspective of religions, specifically that of Christian ethics, into the emerging discourse on death tech. I ask how a Christian ethical perspective might approach death technology, whether such technology is compatible with Christianity, whether it is (...)
  20. Responsible Assessment of Beliefs Based on Computational Results: Expanding on Computational Reliabilism.Michael W. Schmidt & Heinrich Blatt - 2026 - Minds and Machines 36 (1):9.
    In order for advanced computational systems, such as AI systems, to be successfully integrated in liberal democracies, the people who design, use or are affected by these systems in many cases must be adequately disposed to hold the results of these systems to be true. How is such belief in these results justified, given the opaque nature of advanced computational systems and the possibility of error? The theory of “computational reliabilism” (CR) outlines how such belief can be justified and lead (...)
  21. 評価の動力学:差異帰属から対象・意識・慈悲へ.Hiroki Yamashita - 2026 - Dissertation, Independent Researcher
    Based on the theory of evaluative structure, this paper presents a formal model of evaluation as a process of difference attribution. Traditional philosophy of mind typically explains mind and consciousness by presupposing objects or subjects. This study reverses that order: it derives evaluative structure from operational structure and the concentration of differences, and presents a framework that describes its operating states mathematically. An evaluation entropy is defined over the response distribution induced by the relation between the set of generative paths and the set of outcomes. Evaluation stability and evaluation dynamics are introduced, and evaluation is formalized as a search process over a set of candidate difference attributions. Because this search typically involves a contraction of the candidate set, entropy decays on average; the paper describes this process with a first-order relaxation equation. On this model, the operating states of an evaluative structure are classified by entropy and its rate of change. In the limit where entropy vanishes, evaluation closes and an object is generated. The process of entropy decay appears as sustained evaluation, and its integral is defined as the intensity of consciousness. Further, when the exclusion of difference attributions ceases, a non-exclusive mode of evaluation obtains in which entropy maintains a positive lower bound; the paper calls this mode of operation the structure of compassion. An evaluation speed is then introduced as the parameter governing the decay rate of evaluation entropy, and its origin is formalized as the interaction of computational capacity, learning structure, and informational uncertainty. The paper thus presents a formal model that derives object generation, the persistence of consciousness, and the structure of compassion from a single evaluation dynamics, reconstructing the theory of evaluative structure as a dynamical framework applicable to operational systems in general.
    1 citation
  22. 'Responsibility' Plus 'Gap' Equals 'Problem'.Marc Champagne - 2025 - In Johanna Seibt, Peter Fazekas & Oliver Santiago Quick, Social Robots with AI: Prospects, Risks, and Responsible Methods. Amsterdam: IOS Press. pp. 244–252.
    Peter Königs recently argued that, while autonomous robots generate responsibility gaps, such gaps need not be considered problematic. I argue that Königs’ compromise dissolves under analysis since, on a proper understanding of what “responsibility” is and what “gap” (metaphorically) means, their joint endorsement must repel an attitude of indifference. So, just as “calamities that happen but don’t bother anyone” makes no sense, the idea of “responsibility gaps that exist but leave citizens and ethicists unmoved” makes no sense.
    1 citation
  23. Robots, Wrasse, and the Evolution of Reciprocity.Michael T. Dale - 2025 - In Martin Hähnel & Regina Müller, A Companion to Applied Philosophy of AI. Wiley-Blackwell. pp. 211-223.
    Due to its prominent role in human sociality, robotics researchers have increasingly considered to what extent reciprocity might be important in human-robot interaction, and whether it should be included as a design feature in social robots. However, very little has been said of the original function of reciprocity. Indeed, evolutionary biology has revealed that reciprocity evolved to foster cooperation among human groups, yet this fact has for the most part remained unexplored in the robotics literature. In this chapter, I aim (...)
  24. Deontology and safe artificial intelligence.William D’Alessandro - 2025 - Philosophical Studies 182:1681-1704.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
    6 citations
  25. A way forward for responsibility in the age of AI.Dane Leigh Gogoshin - 2025 - Inquiry: An Interdisciplinary Journal of Philosophy 68 (4):1164-1197.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
    4 citations
  26. The use of military drones poses a new level of global risk. Special interview with Robert Junqueira.Robert Junqueira & Patrícia Fachin - 2025 - Instituto Humanitas Unisinos.
    This interview explores the transformative impact of military drone technology on global security, warfare, and ethical responsibility. Robert Junqueira argues that while drones represent disruptive technological advancement with devastating potential—from precision strikes to autonomous swarms—the core challenge remains fundamentally human. He critiques the notion of "autonomous" drones, insisting that these systems are programmed devices embodying human volition, and that responsibility for their actions must remain exclusively with humans. The discussion addresses how AI-enabled drones are reshaping military strategy and international relations (...)
  27. The Hard Problem of AI Alignment: Value Forks in Moral Judgment.Markus Kneer & Juri Viehoff - 2025 - Proceedings of the 2025 Acm Conference on Fairness, Accountability, and Transparency.
    Complex moral trade-offs are a basic feature of human life: for example, confronted with scarce medical resources, doctors must frequently choose who amongst equally deserving candidates receives medical treatment. But choosing what to do in moral trade-offs is no longer a ‘humans-only’ task, but often falls to AI agents. In this article, we report findings from a series of experiments (N=1029) intended to establish whether agent-type (Human vs. AI) matters for what should be done in moral trade-offs. We find that, (...)
    2 citations
  28. Ethical Safeguards for Sales of Weaponizable Technology: A Case Study.Theodore Lechterman, Bradley Strawser & David Whetham - 2025 - Business and Professional Ethics Journal 44 (1):63-97.
    This article presents a case study in how sellers of weaponizable technology can develop safeguards to mitigate risks of misuse by end users. In 2020, the authors were approached by a defense technology start-up whose core product offering was weaponizable drones. The start-up sought guidance in designing terms of sale and service that would ensure responsible usage of this technology. Combining elements from just war theory, international humanitarian law, and the theory of responsibility, we developed a novel, systematic framework for (...)
  29. Consciousness Civilization Framework (CCF) v1.1 — Constitutional Root Standard & Irreversible Authority Declaration.Jinho Lee - 2025 - Geneva: Zenodo.
    The Consciousness Civilization Framework (CCF) v1.1 establishes the master suite and root authority of the entire Consciousness Civilization Stack. As the Constitutional Root Standard and irreversible authority declaration, it defines the foundational architecture, mandatory compliance standards (CFE⁺ v2.0, CAIS, COS, CAI-OS, CCP), governance principles, and civilizational transition pathways for achieving consciousness civilization (CK5 level) by 2040. This edition incorporates fully revised terminology aligned with CFE⁺ v2.0, including standardized definitions for VCE (Vibrational Consciousness Energy), CRI (Consciousness Resonance Index), and CFI (Conscious (...)
    6 citations
  30. Trusting the (un)trustworthy? A new conceptual approach to the ethics of social care robots.Joan Llorca Albareda, Belén Liedo & María Victoria Martínez-López - 2025 - AI and Society (8):1-16.
    Social care robots (SCR) have come to the forefront of the ethical debate. While the possibility of robots helping us tackle the global care crisis is promising for some, others have raised concerns about the adequacy of AI-driven technologies for the ethically complex world of care. The robots do not seem able to provide the comprehensive care many people demand and deserve, at least they do not seem able to engage in humane, emotion-laden and significant care relationships. In this article, (...)
    2 citations
  31. Możliwość cnotliwych maszyn jako problem etyki techniki.Piotr Machura - 2025 - Studia Philosophica Wratislaviensia 20 (3):29-47.
    This paper aims at two goals. First, I analyse the possibility of virtuous machines. I shall argue that this might be of interest not only for ethics of technology taken as a regulation of designing and market implementation of certain kind of autonomous machines, but also as a method of uncovering some hidden aporias of the language of current debate in ethics of technology, and primarily in the ethics of Artificial Intelligence and robots. Thus, the second goal of the paper (...)
  32. Global Artificial Intelligence (GAI): First Global Model.R. Pedraza - 2025 - Madrid: Ruben Garcia Pedraza.
    First Global Model presents the foundational structure of the Modelling System within the standardized Global Artificial Intelligence. This book explores how rational hypotheses, once validated, are transformed into precise mathematical representations of the world—models that guide decisions across global, specific, and particular levels. At the heart of this system are two pivotal mechanisms: the Impact of the Defect, which identifies and addresses potential risks, and the Effective Distribution, which measures and enhances operational efficiency, efficacy, and productivity. Through these instruments, the (...)
  33. Bots und Roboter.Anna Puzio - 2025 - In Jörg Noller & Karoline Reinhardt, Handbuch Philosophie der Digitalität: Eine systematische und ethische Orientierung. Berlin: Metzler.
    This contribution offers an introduction to bots and robots and presents the central philosophical topics and questions that arise in the context of these technologies. Among the topics discussed are anthropological and ethical issues such as the human–technology distinction, anthropomorphism, moral rights and agency, (epistemic) justice, deception and manipulation, vulnerability and diversity, and relationality.
  34. Of machines and men: Attributions of moral responsibility in AI-assisted warfare.Philip Robbins - 2025 - Ethics and Information Technology 27 (3):1-16.
    The ongoing development of autonomous weapons systems, and the increasing frequency of their deployment on the battlefield, poses a pressing problem for military ethics. Some philosophers have argued that the deployment of fully autonomous weapons would be unethical because it would generate responsibility gaps, that is, situations in which no agent, human or artificial, is morally responsible for wrongful harms resulting from that deployment. But do laypeople find it plausible that the use of fully autonomous weapons gives rise to such gaps? (...)
    3 citations
  35. The Ontological and Moral Status of Whole Brain Emulations in Neo-Aristotelian Naturalism.Richard Friedrich Runge - 2025 - AI and Ethics.
    The prospect of designing whole brain emulations (WBEs) capable of replicating the phenomenological effects of human brains presents a compelling argument for granting robots that implement such technology a human-like moral status. While deontological and utilitarian perspectives struggle to refute this notion—potentially paving the way for recognizing a utility monster—the article proposes that naturalistic virtue ethics offers a more skeptical stance. Drawing on the metaethical and ontological tenets of neo-Aristotelian naturalism, as articulated by Philippa Foot and Michael Thompson, this article (...)
    1 citation
  36. Chatbot Epistemology.Susan Schneider - 2025 - Social Epistemology 39 (5):570-589.
    This piece considers the epistemological challenges that arise with the increasingly widespread use of AI chatbots. I articulate a problem that they present—the ‘boiling frog problem’. According to the metaphor, if you boil a frog by putting it in scalding water, it will try to save itself, but if you put the frog in a pot of tepid water, it will remain unaware of the rising water temperature and therefore, make no attempt to escape to save itself. In both cases, (...)
    13 citations
  37. Social Robots with AI: Prospects, Risks, and Responsible Methods.Johanna Seibt, Peter Fazekas & Oliver Santiago Quick (eds.) - 2025 - Amsterdam: IOS Press.
  38. Why AI May Undermine Phronesis and What to Do about It.Cheng-Hung Tsai & Hsiu-lin Ku - 2025 - AI and Ethics 5 (3):3079–3086.
    Phronesis, or practical wisdom, is a capacity the possession of which enables one to make good practical judgments and thus fulfill the distinctive function of human beings. Nir Eisikovits and Dan Feldman convincingly argue that this capacity may be undermined by statistical machine-learning-based AI. The critic questions: why should we worry that AI undermines phronesis? Why can’t we epistemically defer to AI, especially when it is superintelligent? Eisikovits and Feldman acknowledge such objection but do not consider it seriously. In this (...)
    1 citation
  39. AI Alignment: The Case for Including Animals.Yip Fai Tse, Adrià Moret, Soenke Ziesche & Peter Singer - 2025 - Philosophy and Technology 38 (139):1-24.
    AI alignment efforts and proposals try to make AI systems ethical, safe and beneficial for humans by making them follow human intentions, preferences or values. However, these proposals largely disregard the vast majority of moral patients in existence: non-human animals. AI systems aligned through proposals which largely disregard concern for animal welfare pose significant near-term and long-term animal welfare risks. In this paper, we argue that we should prevent harm to non-human animals, when this does not involve significant costs, and (...)
    6 citations
  40. Autonomous weapon systems impact on incidence of armed conflict: rejecting the ‘lower threshold for war argument’.Maciej Marek Zając - 2025 - Ethics and Information Technology 27 (3):1-11.
    Some proponents of a ban on Autonomous Weapon Systems (AWS) believe adopting these would lower the threshold for war, and is thus morally undesirable. This paper argues against that thesis. First, removing a single constraint on warmaking does not automatically make war more likely. Analysis of the causal input of other more potent restraints shows this holds true for just a fraction of potential conflicts. Secondly, AWS adoption would also impact other restraints on war in ways that are complex and (...)
  41. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense (...)
  42. How AI Systems Can Be Blameworthy.Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia 4:1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term—the (...)
  43. Tailoring responsible research and innovation to the translational context: the case of AI-supported exergaming.Sabrina Blank, Celeste Mason, Frank Steinicke & Christian Herzog - 2024 - Ethics and Information Technology 26 (2):1-16.
    We discuss the implementation of Responsible Research and Innovation (RRI) within a project for the development of an AI-supported exergame for assisted movement training, outline outcomes and reflect on methodological opportunities and limitations. We adopted the responsibility-by-design (RbD) standard (CEN CWA 17796:2021) supplemented by methods for collaborative, ethical reflection to foster and support a shift towards a culture of trustworthiness inherent to the entire development process. An embedded ethicist organised the procedure to instantiate a collaborative learning effort and implement RRI (...)
  44. Impossibility of Artificial Inventors.Matt Blaszczyk - 2024 - Hastings Science and Technology Law Journal 16:73.
    Recently, the United Kingdom Supreme Court decided that only natural persons can be considered inventors. A year before, the United States Court of Appeals for the Federal Circuit issued a similar decision. In fact, so have many courts all over the world. This Article analyses these decisions, argues that the courts got it right, and finds that artificial inventorship is at odds with patent law doctrine, theory, and philosophy. The Article challenges the intellectual property (IP) post-humanists, exposing the analytical (...)
  45. The Ethics of Automating Therapy.Jake Burley, James J. Hughes, Alec Stubbs & Nir Eisikovits - 2024 - Ieet White Papers.
    The mental health crisis and loneliness epidemic have sparked a growing interest in leveraging artificial intelligence (AI) and chatbots as a potential solution. This report examines the benefits and risks of incorporating chatbots in mental health treatment. AI is used for mental health diagnosis and treatment decision-making and to train therapists on virtual patients. Chatbots are employed as always-available intermediaries with therapists, flagging symptoms for human intervention. But chatbots are also sold as stand-alone virtual therapists or as friends and lovers. (...)
  46. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  47. Security through Unity Europe’s Challenges after Ukraine Crisis.Paul Ertl (ed.) - 2024 - Vienna: Ministry of Defence, Republic of Austria.
  48. Artificial Intelligence and Universal Values.Jay Friedenberg - 2024 - UK: Ethics Press.
    The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence developments accelerate. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. There are a large number of techniques that are being experimented with, but this work often fails to specify what values exactly we should be aligning. When making a decision, an agent is supposed to maximize (...)
  49. Understanding Sophia? On human interaction with artificial agents.Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. This raises the questions of whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity, and thus quasi-personal status, to them beyond a certain level of simulation; and what the impact of an increasing dissolution of the distinction between simulated and real encounters will be. (1) To answer these questions, the paper (...)
  50. The Many Meanings of Vulnerability in the AI Act and the One Missing.Federico Galli & Claudio Novelli - 2024 - Biolaw Journal 1.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)