Dr Jessica Morley

About me ✦

I'm an Associate Research Scientist at Yale's Digital Ethics Center. I work on the political philosophy of health AI: what it should be optimising for, why it has been assigned the wrong target, and the information infrastructure needed to get to the right one. This means normative philosophy, causal inference, and health policy, usually at the same time. I have, at times, been referred to as a digilante — an accusation I cannot deny.


I am a de Vries-Sherif Associate Research Scientist at the Digital Ethics Center at Yale University, where I lead an independent research programme on digital health, health data, and AI. My research draws on philosophy, public health, health economics, and regulatory science to interrogate dominant narratives about technology in healthcare; and to build the translational frameworks that might actually close the gap between ethical principles and practice.

Before Yale, I was Policy Lead at the Bennett Institute for Applied Data Science at the University of Oxford, where I was lead researcher for Better, Broader, Safer, the UK Government-commissioned Goldacre Review. I was also ethics and public engagement lead for OpenSAFELY, the UK's largest secure health analytics platform, which played a critical role in the COVID-19 response. Before that, I was at NHSX (now NHS England's Transformation Directorate), where I helped lead policy development for the NHS AI Lab business case, the NHS cloud policy, and data offshoring policy, and co-wrote the national strategy AI for Healthcare: How to Get It Right.

I did my PhD and MSc at the Oxford Internet Institute (Exeter College), where my doctoral thesis — on designing an algorithmically enhanced NHS — was accepted without corrections. I also hold a BA in Geography from St. Anne's College, Oxford.

🔍 The Digilante

I have a habit of finding the places where NHS data policy does not do what it claims to do. Sometimes this means showing that re-identification risks in major research datasets are being underestimated. Sometimes it means demonstrating that governance frameworks designed to protect patients are actually protecting institutions.

I do this not because I think health data research should stop — quite the opposite — but because complacency about data governance is the fastest route to the public trust collapse that would stop it.

Three Problems, One Diagnosis ✦

Despite millions being spent on the development, deployment, and use of healthcare AI, it is failing to generate a justifiable return on investment at scale. I think this is a systems-level problem resulting from three interconnected issues:

(a) Target Misspecification

Health AI optimises for the wrong thing. The field has over-focused on optimising the health of individuals — a target that is neither desirable nor achievable — rather than public health: improving the conditions under which populations are healthy. Correcting this requires normative theory; specifically, a capability-based account of health drawn from Venkatapuram, enriched with prioritarian equity constraints from Parfit and Daniels.

(b) A Missing Translational Pipeline

AI is a complex sociotechnical systems-level technology. It rewires everything from what counts as evidence of illness to the steps involved in specific care pathways. Yet too often it is treated as plug-and-play: something that can be bought off the shelf and simply slotted into existing systems, like replacing a blood pressure cuff. This mismatch creates risk. What is needed is a translational pipeline for AI comparable to the one that exists for drugs and devices.

(c) Epistemic Capture

The people who build health AI systems and the people who experience health systems do not, by and large, overlap. The result is that the knowledge encoded in these systems is structurally partial. Addressing this requires methods for epistemic rebalancing, not just "patient engagement" exercises.

Publications

My work has been published in Nature, The Lancet, The BMJ, Social Science & Medicine, Minds and Machines, and many other venues. Over 8,300 citations; h-index 32.

See full publication list →

Professional Service

  • NHS England AI Advisory Board
  • UK National Data Guardian Panel
  • Associate Editor, BMJ Digital Health & AI
  • LSHTM Centre for Data and Statistical Science for Health
  • Wellcome Trust Grant Review Committee

Where I've been ✨

2026–

de Vries-Sherif Associate Research Scientist, Digital Ethics Center, Yale University

Leading an independent research programme on digital health, health data, and AI. Co-organising the Global Health in the Age of AI symposium and the HASTE workshop with MIT.

2024–25

Postdoctoral Research Fellow, Digital Ethics Center, Yale University

Developed new conceptual frameworks including the 'inverse data quality law.' Selected as Yale's institutional candidate for the NIH Director's Early Independence Award.

2019–23

Policy Lead, Bennett Institute for Applied Data Science, University of Oxford

Lead researcher for Better, Broader, Safer (the Goldacre Review). Ethics, PPIE, and governance lead for OpenSAFELY. Senior leadership team member.

2018–23

Research Associate, Digital Ethics Lab, Oxford Internet Institute

PhD and MSc at Exeter College. Doctoral thesis on designing an algorithmically enhanced NHS, accepted without corrections. Best Paper Award at NeurIPS 2019.

2017–20

Technology Advisor / AI Subject Matter Expert, NHSX / DHSC

Co-wrote 'AI for Healthcare: How to Get It Right.' Helped lead policy on the NHS AI Lab, cloud policy, and data offshoring.

2012–17

Early Career: NHS Commissioning, Mapa Research, Mintel

Health research and policy analysis across the NHS and the private sector. BA in Geography from St. Anne's College, Oxford.

When I'm not thinking about the governance of health data, you can probably find me making friendship bracelets, constructing outfits that could be described as "aggressively colourful," or building elaborate Taylor Swift theories. ✨