The Founder
Researcher · AI Governance & Policy Implementation
Hertie School · Mercatus Center · SAIGE · Scale AI · Fleetwood Strategy · formerly KCL
DataBias started as a question during my MA at King's College London: why was it so difficult to find Women of Colour and gender-non-conforming researchers working on AI?
The answer wasn't a lack of work. It was a lack of infrastructure. You had to already know the names, already be in the right rooms, already have the network. That didn't sit right with me, so I decided to build something that made that work visible.
I'm a researcher working at the intersection of AI governance, digital culture, and algorithmic accountability. My work focuses on how data systems shape power: who gets seen, who gets left out, and how those dynamics get reproduced at scale in the institutions, platforms, and policies we build around AI.
I didn't start here. I spent years working in marketing and arts and culture in the UK, a world I loved, and one I'm glad I had. But the researchers and scholars whose work lives in DataBias's catalogue changed my trajectory. Reading Algorithms of Oppression. Watching the Gender Shades results land. Following Timnit Gebru's refusal to be silenced by one of the most powerful companies in the world. Realising the questions I cared most about were the ones these women were already asking – rigorously, bravely, and often at personal cost. So I followed them into this field. This database is, in part, my way of saying thank you for that.
I'm currently an MPP student at the Hertie School in Berlin, focused on digital regulation, platform governance, and ethical technology policy. Alongside my studies I hold several research and policy roles – all oriented around the same question: how do we design AI governance that actually works in practice, not just in theory?
APR 2026 – JUL 2026 · ONGOING
AI Safety Germany (SAIGE) Incubator – Mentee
AI Governance & Policy Track
AI Safety · Selected for a competitive national incubator on AI safety and risk analysis. Researching AI harm rates in Germany using adapted epidemiological methods – developing policy metrics that translate technical analysis into actionable guidance for policymakers.
FEB 2026 – PRESENT
Research Assistant – Public Sector AI Adoption
Hertie School × Possible × IPAI · Berlin
Governance · Comparative policy research on AI adoption across local governments in Germany and the Global South – examining how governance frameworks adapt across institutional settings and where they break down in practice.
JAN 2026 – PRESENT
Research Assistant – Digital Industry & AI Deployment
TU Berlin · SEISMEC (EU Horizon Europe Project)
Research · Analysis of AI integration in European industry – how governance requirements translate across different organisational and cultural contexts, including stakeholder engagement with a diverse range of practitioners.
JUL 2025 – PRESENT
Ronald Coase Fellow
Mercatus Center at George Mason University
Policy · Policy-oriented analysis of institutional design and how governance structures support effective outcomes – with a focus on the gap between how policy assumes institutions work and how they actually function.
JAN 2025 – PRESENT
Governance & Technology Assessment Consultant
Fleetwood Strategy · London
Policy · Policy analysis of public sector AI integration – including the Palantir–NHS implementation – examining where policy design breaks down in institutional practice and synthesising findings for policymakers.
APR 2024 – AUG 2025
AI Trainer & Research Analyst
Scale AI
AI Safety · Evaluated frontier AI systems, identifying gaps between policy safeguards and implementation reality. Assessed how institutional incentives shape actual outcomes versus stated governance goals.
AUG 2025 – PRESENT
Master of Public Policy
Hertie School, Berlin
Digital Regulation · Platform Governance · Ethical Technology Policy
SEP 2022 – JAN 2024
MA Digital Culture & Society – Distinction
King's College London
Platform governance · Algorithmic accountability · Technology regulation · Where DataBias began.
These researchers changed the course of my career. I found their work while still in arts and culture, and I couldn't look away. They are among the first in DataBias's catalogue – and writing to them is one of the stranger, more meaningful things I'll do with this project.
Safiya Umoja Noble · Professor & Chair, UCLA · Director, Center on Race & Digital Justice
Her book was the first time I saw my experience with biased technology named and analysed with real rigour. She showed me that the things I had noticed weren't individual glitches – they were structural. It reframed everything I thought I understood about how data and power work together.
Timnit Gebru · Founder & Executive Director, Distributed AI Research Institute
Watching her refuse to be silenced by one of the most powerful companies in the world – and then build her own institution outside it – showed me what principled, independent research looks like when the stakes are high and the pressure to stay quiet is enormous.
Joy Buolamwini · Founder, AJL · Author, Unmasking AI · Poet of Code
The Gender Shades research made visible what many of us had felt in our own encounters with technology. The way she combined art, research, and advocacy – and got companies to actually change – gave me a model for what this kind of work can be when it's done with both rigour and humanity.
Deborah Raji · Mozilla Fellow / UC Berkeley Researcher
She turned critique into methodology. Auditing AI systems, naming what fails and why, building the evidentiary case that made companies pull products – that's the kind of research that changes what institutions actually do. She made me take governance seriously as a craft.
Rediet Abebe · Assistant Professor, UC Berkeley · Co-founder, Black in AI
The first Black female professor in UC Berkeley's engineering college history. Her presence in those rooms changes what future researchers can imagine for themselves. Her work on algorithms and distributive justice gave me language for questions I'd been circling for years without names for them.
Abeba Birhane · Mozilla Fellow / Researcher
Her work on the values embedded in ML systems and the harms in large datasets made me think hard about what DataBias's own data practices need to look like. You can't build something that claims to address bias without being rigorous about your own methods.
Whether you're a researcher who wants to be included in the database, a funder, a journalist covering AI and representation, or someone working on something adjacent who wants to talk – I'd genuinely love to hear from you.
Best route: LinkedIn or email. For DataBias enquiries: hello@databias.org.
Research Philosophy
Frameworks fail when they don't account for how institutions actually work – their incentives, their constraints, their cultures. Working at Scale AI, I watched policies get circumvented when they didn't fit implementation reality. That's not a technology problem. It's a governance design problem.
Growing up in interfaith communities, I learned that different cultures and institutions solve problems differently – and that's a feature, not a bug. What works in Berlin may need adaptation in Lagos. But there are underlying principles that can transfer. Finding those principles is the real work of comparative governance.
You cannot hold systems accountable for harms that aren't named. You cannot fund research that you can't find. DataBias is infrastructure before it is advocacy – but infrastructure that shifts access is political, whether it names itself that way or not.
Work Together
I'm open to conversations about research collaboration, DataBias partnerships, speaking, and funding. The best introduction is a short message about what you're working on.
Connect on LinkedIn →
"Governance frameworks must be grounded in practice. They need to understand real constraints – institutional, cultural, economic." – Nafisah Animashaun