πŸ“ Berlin, Germany
πŸŽ“ MPP Hertie School Β· MA Distinction KCL
πŸ”¬ AI Governance & Policy Implementation
🌍 Mercatus Center Β· SAIGE Incubator Β· TU Berlin Β· Hertie Γ— IPAI

The Founder

Nafisah Animashaun

Researcher · AI Governance & Policy Implementation

Hertie School · Mercatus Center · SAIGE · Scale AI · Fleetwood Strategy · formerly KCL

DataBias started as a question during my MA at King's College London: why was it so difficult to find Women of Colour and gender-non-conforming researchers working on AI?

The answer wasn't a lack of work. It was a lack of infrastructure. You had to already know the names, already be in the right rooms, already have the network. That didn't sit right with me, so I decided to build something that made that work visible.

I'm a researcher working at the intersection of AI governance, digital culture, and algorithmic accountability. My work focuses on how data systems shape power: who gets seen, who gets left out, and how those dynamics get reproduced at scale in the institutions, platforms, and policies we build around AI.

I didn't start here. I spent years working in marketing and arts and culture in the UK, a world I loved, and one I'm glad I had. But the researchers and scholars whose work lives in DataBias's catalogue changed my trajectory. Reading Algorithms of Oppression. Watching the Gender Shades results land. Following Timnit Gebru's refusal to be silenced by one of the most powerful companies in the world. Realising the questions I cared most about were the ones these women were already asking – rigorously, bravely, and often at personal cost. So I followed them into this field. This database is, in part, my way of saying thank you for that.

What I'm working on now

I'm currently an MPP student at the Hertie School in Berlin, focused on digital regulation, platform governance, and ethical technology policy. Alongside my studies I hold several research and policy roles – all oriented around the same question: how do we design AI governance that actually works in practice, not just in theory?

APR 2026 – JUL 2026 · ONGOING

AI Safety Germany (SAIGE) Incubator – Mentee

AI Governance & Policy Track

AI Safety

Selected for a competitive national incubator on AI safety and risk analysis. Researching AI harm rates in Germany using adapted epidemiological methods – developing policy metrics that translate technical analysis into actionable guidance for policymakers.

FEB 2026 – PRESENT

Research Assistant – Public Sector AI Adoption

Hertie School × Possible × IPAI · Berlin

Governance

Comparative policy research on AI adoption across local governments in Germany and the Global South – examining how governance frameworks adapt across institutional settings and where they break down in practice.

JAN 2026 – PRESENT

Research Assistant – Digital Industry & AI Deployment

TU Berlin · SEISMEC (EU Horizon Europe Project)

Research

Analysis of AI integration in European industry – how governance requirements translate across different organisational and cultural contexts, including stakeholder engagement with a diverse range of practitioners.

JUL 2025 – PRESENT

Ronald Coase Fellow

Mercatus Center at George Mason University

Policy

Policy-oriented analysis of institutional design and how governance structures support effective outcomes, with a focus on the gap between how policy assumes institutions work and how they actually function.

JAN 2025 – PRESENT

Governance & Technology Assessment Consultant

Fleetwood Strategy · London

Policy

Policy analysis on public sector AI integration – including the Palantir–NHS implementation – examining where policy design breaks down in institutional practice and synthesising findings for policymakers.

APR 2024 – AUG 2025

AI Trainer & Research Analyst

Scale AI

AI Safety

Evaluated frontier AI systems, identifying gaps between policy safeguards and implementation reality. Assessed how institutional incentives shape actual outcomes versus stated governance goals.

Education

AUG 2025 – PRESENT

Master of Public Policy

Hertie School, Berlin

Digital Regulation · Platform Governance · Ethical Technology Policy

SEP 2022 – JAN 2024

MA Digital Culture & Society – Distinction

King's College London

Platform governance · Algorithmic accountability · Technology regulation · Where DataBias began.

Who sent me in this direction

These researchers changed the course of my career. I found their work while still in arts and culture, and I couldn't look away. They are among the first in DataBias's catalogue – and writing to them is one of the stranger, more meaningful things I'll do with this project.

Safiya Umoja Noble – Algorithms of Oppression

Professor & Chair, UCLA · Director, Center on Race & Digital Justice

Her book was the first time I saw my experience with biased technology named and analysed with real rigour. She showed me that the things I had noticed weren't individual glitches – they were structural. It reframed everything I thought I understood about how data and power work together.

Timnit Gebru – DAIR Institute

Founder & Executive Director, Distributed AI Research Institute

Watching her refuse to be silenced by one of the most powerful companies in the world – and then build her own institution outside it – showed me what principled, independent research looks like when the stakes are high and the pressure to stay quiet is enormous.

Joy Buolamwini – Algorithmic Justice League

Founder, AJL · Author, Unmasking AI · Poet of Code

The Gender Shades research made visible what many of us had felt in our own encounters with technology. The way she combined art, research, and advocacy – and got companies to actually change – gave me a model for what this kind of work can be when it's done with both rigour and humanity.

Deborah Raji – AI Auditing & Accountability

Mozilla Fellow / UC Berkeley Researcher

She turned critique into methodology. Auditing AI systems, naming what fails and why, building the evidentiary case that made companies pull products – that's the kind of research that changes what institutions actually do. She made me take governance seriously as a craft.

Rediet Abebe – Algorithms & Inequality

Assistant Professor, UC Berkeley · Co-founder, Black in AI

The first Black female professor in the history of UC Berkeley's engineering college. Her presence in those rooms changes what future researchers can imagine for themselves. Her work on algorithms and distributive justice gave me language for questions I'd been circling for years.

Abeba Birhane – Dataset Auditing & ML Values

Mozilla Fellow / Researcher

Her work on the values embedded in ML systems and the harms in large datasets made me think hard about what DataBias's own data practices need to look like. You can't build something that claims to address bias without being rigorous about your own methods.

Get in touch

Whether you're a researcher who wants to be included in the database, a funder, a journalist covering AI and representation, or someone working on something adjacent who wants to talk – I'd genuinely love to hear from you.

Best route: LinkedIn or email. For DataBias enquiries: hello@databias.org.

Connect on LinkedIn → Email Me

Research Philosophy

What I believe about AI governance

Governance must be grounded in implementation

Frameworks fail when they don't account for how institutions actually work – their incentives, their constraints, their cultures. Working at Scale AI, I watched policies get circumvented when they didn't fit implementation reality. That's not a technology problem. It's a governance design problem.

Context is not an excuse for inconsistency

Growing up in interfaith communities, I learned that different cultures and institutions solve problems differently – and that's a feature, not a bug. What works in Berlin may need adaptation in Lagos. But there are underlying principles that can transfer. Finding those principles is the real work of comparative governance.

Visibility is a prerequisite for accountability

You cannot hold systems accountable for harms that aren't named. You cannot fund research that you can't find. DataBias is infrastructure before it is advocacy – but infrastructure that shifts access is political, whether it names itself that way or not.

Work Together

Interested in DataBias or my research?

I'm open to conversations about research collaboration, DataBias partnerships, speaking, and funding. The best introduction is a short message about what you're working on.

Connect on LinkedIn →
"Governance frameworks must be grounded in practice. They need to understand real constraints – institutional, cultural, economic." – Nafisah Animashaun