Connor Dunlop

AI governance, verification infrastructure, geopolitics.

Founding team, Lucid Computing.

About

I work on how governments and institutions can verify trustworthy behaviour in advanced AI systems — through policy, through verification infrastructure, and through the physical hardware itself.

I head up policy and strategy at Lucid Computing. Lucid was founded on the belief that significant chunks of the economy and government will be run by AI in the near future. Just as the human economy runs on trust — think proofs of identity like passports, or proofs of reputation like credit scores — we urgently need to build similar proofs of trust for the AI economy. Lucid is building the hardware-rooted verification infrastructure to make that possible. My role involves ensuring our technology is adapted to the needs of governments seeking to securely deploy advanced AI, and informing what we build through ecosystem engagement and high-level briefings on AI verification technology.

Previously I set up and led the EU and global governance programme at the Ada Lovelace Institute, where my research informed the International AI Safety Report, the EU Code of Practice on General-Purpose AI (GPAI), and the UK government’s white paper on AI regulation, among others. Before that I worked in EU public affairs, at the UN Refugee Agency’s innovation unit, and on emerging technology at The Hague Centre for Strategic Studies.

I grew up in Belfast, spent five years in Brussels, and now live in London.

Work

Lucid Computing · Director of Policy & Strategy
Ada Lovelace Institute · Head of EU & Global Governance
Dentons Global Advisors; Nove Public Affairs · Senior Associate, EU Technology Policy
UN Refugee Agency · Consultant, Innovation Unit
The Hague Centre for Strategic Studies · Assistant Analyst

Other

Fellow — Newspeak House, London present
Fellowship Supervisor — GovAI, University of Oxford 2026–present
Member — OECD AI Expert Network on AI Incidents 2025–present
Adjunct Professor — ESSCA Ecole de Management 2023–2025
Programme Committee — TAIG-ICML 2025

Writing

Research

General Purpose AI Models with Systemic Risks
Chapter 23 in The EU Artificial Intelligence Act · Bloomsbury
Forthcoming.
Hardware-Rooted Trust Anchors for Sovereign AI Processing
Borenovic, Dunlop, Asad, Shah, Leshin, Dunia · ICDS 2025
Cryptographic verification of location, identity, and confidentiality in cloud environments.
An Autonomy-Based Classification: AI Agents, Liability and Learnings from the UK Automated Vehicles Act
Smakman, Soder, Dunlop · Accepted at three workshops, NeurIPS 2024
The Role of Governments in Increasing Interconnected Post-Deployment Monitoring of AI
Bernadi, Stein, Dunlop · SoLaR Workshop, NeurIPS 2024
Safety Frameworks and Standards: A Comparative Analysis to Advance Risk Management of Frontier AI
AIGI, University of Oxford

Policy

Safe Before Sale: Learnings from the FDA’s Model of Life Sciences Oversight for Foundation Models
Ada Lovelace Institute
Referenced in the inaugural International AI Safety Report.
The Value Chain of General-Purpose AI
Ada Lovelace Institute
Referenced by the UK Government White Paper on AI.
An Infrastructure for Safety and Trust in European AI
Ada Lovelace Institute
Safe Beyond Sale: Post-Deployment Monitoring of AI
Ada Lovelace Institute
Explainer: What Is a Foundation Model?
Ada Lovelace Institute
Widely referenced explainer on general-purpose AI.

Commentary

Provable Hardware Trust for AI
Op-ed on hardware-rooted verification for AI sovereignty.
Regulating AI Foundation Models Is Crucial for Innovation
Euractiv · op-ed

Speaking & Media

Al Jazeera — live interview on the implications of test-time compute and DeepSeek for frontier AI development
GenLaw Workshop, ICML 2024 — speaker
CPDP.ai 2024 — ‘FLOPs and Beyond: Decoding the AI Act’s Systemic Risk Criteria’
TechPolicy.Press — ‘An FDA for AI?’ podcast
European AI Fund — interview on AI governance
TechPolicy.Press — ‘The EU AI Act Enters Final Negotiations’ podcast
Euractiv — ‘European Standards and the AI Act’ podcast
Vision Weekend Europe 2024 — ‘An Ecosystem Approach to AI Risk’

Projects

Lucid Computing

A startup building hardware-rooted verification of important properties of AI software, and of the security of the hardware it runs on. I head up policy and strategy, ensuring our technology is adapted to the needs of governments seeking to securely deploy advanced AI.

Verifiable Compute Foundation

Advising and defining the organisational roadmap for a foundation developing reference architectures and red-teaming infrastructure for treaty-verifiable datacentres.

Newspeak House

Building an open-source platform to sustain engagement with human-authored writing in the post-AGI era.

GovAI Fellowship Supervision

Supervising two research fellows at the Centre for the Governance of AI, University of Oxford.

Reading

Currently reading

The Complete Stories — Flannery O’Connor
Democracy in America — Alexis de Tocqueville
Seeing Like a State — James C. Scott
Middlemarch — George Eliot
Inside the Whale and Other Essays — George Orwell
Dubliners — James Joyce

Recently read

The Name of the Wind — Patrick Rothfuss
Pachinko — Min Jin Lee
There Is No Antimemetics Division — qntm
Fathers and Sons — Ivan Turgenev
Notes from Underground — Fyodor Dostoevsky
All About Love — bell hooks
East of Eden — John Steinbeck
Novacene — James Lovelock
My Struggle, Book 1 — Karl Ove Knausgård
Beyond Good and Evil — Friedrich Nietzsche