Research Overview


Imagine you've been fired from your job by a reliable and competent decision-maker. This decision-maker reached the conclusion in the right way and can provide reasons and explanations for it. Does it matter who, or what, the decision-maker is? The guiding claim of my research is that it matters whether morally laden decisions are made by moral agents. My work considers what makes something a moral agent, which entities are moral agents, and why we should care about moral agency—all with a focus on AI. These questions form three interrelated research strands.

Non-Prototypical Moral Agency

This theoretical strand of my research concerns the nature of moral agency, with a particular focus on moral agents (and potential moral agents) beyond the prototypical adult human. Broadly, on my account, there are different types of moral agency—and while there are certain necessary conditions for each type, these conditions can be instantiated in a variety of ways.

Applied Ethics of Technology

This applied strand of my research concerns how we ought to integrate AI systems into our moral practices. The papers in this strand focus on how our use of AI in moral decision-making contexts should be limited by AI systems’ lack of moral agency.

Empirically Engaged Philosophy of Moral Agency

This empirical strand of my research concerns both the capabilities of AI systems (empirically informed and technically grounded philosophy) and the ways in which humans think about moral agency (experimental philosophy).

Works in Progress


  • Moral Agents Unlike Us | AIES 2025 draft
    I argue that moral agency isn’t all that matters: even if AI systems were moral agents, they would differ from us in normatively significant ways and would thus play different roles in the moral community than humans do.
  • Artificial Moral Behavior
    I argue that delegating moral decisions to AI systems is wrong because doing so replaces events that should be moral actions with mere behaviors.