🚧 Site currently under construction! 🚧
All details and information are subject to change as we finalize our website.

Research

Research Areas

Interpretability

Understanding how AI systems make decisions and what they learn from data.

Governance

Developing frameworks for responsible AI development and deployment.

Evaluations

Developing methods to assess AI capabilities, alignment, and safety properties.

Oversight / Control

Ensuring meaningful human oversight and control over AI systems.

AI Agency

Understanding and managing autonomous AI behavior and decision-making.

Security

Protecting AI systems from adversarial attacks and malicious use.

Recent Research by Durham AISI Members

Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models

Leask, P., & Al Moubayed, N. (2025, July). Inference-Time Decomposition of Activations (ITDA): A Scalable Approach to Interpreting Large Language Models. Presented at the International Conference on Machine Learning (ICML 2025), Vancouver, Canada.

Interpretability · ICML 2025

Sparse Autoencoders Do Not Find Canonical Units of Analysis

Leask, P., Bussmann, B., Pearce, M. T., Bloom, J. I., Tigges, C., Al Moubayed, N., Sharkey, L., & Nanda, N. (2025, April). Sparse Autoencoders Do Not Find Canonical Units of Analysis. Presented at the Thirteenth International Conference on Learning Representations (ICLR 2025), Singapore.

Interpretability · ICLR 2025

Research Opportunities

Undergraduates

We can advise and support you on dissertation and individual study projects.

Faculty

We can signpost promising research directions and funding opportunities, and support you throughout the process.