The Directory
Women of colour working in AI safety, governance, and algorithmic justice — from across the globe.
Safiya Umoja Noble
Professor, UCLA
🌎 USA
Author of Algorithms of Oppression. Exposes how search engines and AI systems encode racial and gender bias. Foundational voice in tech criticism and Black feminist scholarship.
Founder, DAIR Institute
🌍 Ethiopia / USA
Pioneer in AI ethics. Co-authored the foundational Datasheets for Datasets and Stochastic Parrots papers. Founded DAIR to centre community-based AI research after her departure from Google.
Founder, Algorithmic Justice League
🌍 Ghana / USA
Her Gender Shades research exposed racial and gender disparities in commercial facial analysis AI. Author of Unmasking AI. Testified before US Congress on AI accountability.
Mozilla Fellow / UC Berkeley
🌍 Nigeria / USA
Pioneered AI auditing methodology. Exposed failures of commercial AI systems in deployment. Her work directly contributed to Amazon's and IBM's decisions to withdraw their facial recognition products.
Co-Executive Director, AI Now Institute
🌏 India / USA
Works to ensure AI systems are accountable to the public rather than to corporate interests. Focuses on policy reform, regulation, and structural analysis of AI power at the AI Now Institute, NYU.
Assistant Professor, UC Berkeley
🌍 Ethiopia / USA
Works on AI and inequality, mechanism design and poverty. Co-founded Black in AI. First Black female professor in UC Berkeley's engineering college. Research examines how computational tools can improve equity.
Assistant Professor & PI, AI Accountability Lab, Trinity College Dublin
🌍 Ethiopia / Ireland
Founder and Principal Investigator of the AI Accountability Lab at TCD. TIME100 Most Influential People in AI (2023). Served on the UN Secretary-General's AI Advisory Body and Ireland's AI Advisory Council. Known for auditing AI models and training datasets for harmful content and examining values embedded in ML systems.
aial.ie →
PhD Researcher, King's College London
🇬🇧 UK
Researching Black women's digital intimacy and online experiences in the UK context. Examines how platforms shape and constrain Black women's digital lives.
CEO, AI for the People
🌍 Zambia / USA
Advocates for civil rights protections in the age of AI. Works at the intersection of racial justice and technology policy. Founded AI for the People to address algorithmic discrimination.
Author / Political Analyst
🌍 Kenya
Author of Digital Democracy, Analogue Politics. Analyses how technology intersects with democracy and governance in East Africa. Critical voice on digital colonialism and tech power in the Global South.
Researcher, Article 19
🌏 India / UK
Works on AI policy, surveillance, and human rights law. Research examines how AI enables government surveillance and the legal frameworks needed to protect rights.
AI Governance & Ethics Manager, Kainos
🇬🇧 Scotland, UK
Award-winning AI Ethicist and Digital Leaders AI 100 UK honouree. Designs and operationalises AI governance frameworks aligned with the EU AI Act, GDPR, and ISO standards. Women in AI Ethics Fellow, BBC Scotland content creator on AI literacy, and contributor to Diverse Spectrum — an open-source equitable image dataset.
aiethicshub.co.uk →
Founder, Humane Intelligence
🌏 Bangladesh / USA
Expert in responsible AI and bias evaluation. Formerly Global Lead for Responsible AI at Accenture. Founded Humane Intelligence to advance participatory AI auditing and red-teaming practices globally.
Journalist & Author
🌎 USA / International
Author of Empire of AI (2025), a landmark investigation into OpenAI and the global consequences of the AI industry. Former AI reporter at The Atlantic and MIT Technology Review. Her work exposes the labour exploitation and power dynamics behind AI development, particularly in the Global South.
karenhao.co →
Executive Director, Humane Intelligence
🌎 USA
Leads Humane Intelligence, a nonprofit advancing participatory AI evaluation and open-source red-teaming tools. Formerly Director of Tech for Social Good at GitHub, Senior Advisor at WHO, and Director of Programme Management, AI Safety at MLCommons. Author of two novels. 15+ year career in technology for global social good.
humane-intelligence.org →
Policy Manager, Data & Society Research Institute
🌎 USA
AI policy expert leading state-level policy engagement at Data & Society. Founding member of the US AI Safety Institute Consortium, where she advocated for a sociotechnical approach to AI safety. HUMAN Residency Fellow — her poetry collection centres a Black feminist analysis of AI. Previously tech equity fellow at The Greenlining Institute.
Director of Strategy, BASE / AI Governance Consultant, GovAI Coalition
🌎 USA
Public-interest cybersecurity and technology policy researcher specialising in AI safety evaluation, privacy engineering, and AI governance for the public sector. Works with municipalities across the US through the GovAI Coalition on AI procurement, digital security, and risk management. MSc Information & Cybersecurity, UC Berkeley.
Chief Innovation Officer, Journotech / Founder, NewsAssist AI
🇬🇧 UK
AI innovator, journalist, and ethical technology leader based in the UK. Leads responsible AI innovation, policy development, and ethical deployment at Journotech. Has trained 300+ professionals across 21 countries on responsible AI use, governance frameworks, and secure AI deployment.
AI Policy Lead, Commonwealth of Pennsylvania / Non-Resident Fellow, New America
🌎 USA
Develops and writes AI governance solutions for the Commonwealth of Pennsylvania, with expertise in risk, security, and data privacy. Published research on AI nutrition labels for generative AI tools at New America. Previously at Rubrik, Kapor Center, and Bipartisan Policy Center.
Emerging Technology Fellow, US Census Bureau
🌎 USA
AI policy and governance professional at the intersection of AI and public policy. Served as the first AI Officer at the Federal Energy Regulatory Commission. Previously worked on global technology governance at Meta and TikTok. Now develops experimental policy frameworks and advises federal leadership on ethical AI governance at scale.
Global Policy Team Lead, Google
🌍 Nigeria / International
Shapes policies for the responsible use and access to generative AI products and hardware platforms at Google. Focuses on trustworthy technologies that respect individual and societal rights. Extensive expertise in government consultations, digital rights advocacy, and safety considerations in the global digital economy.
Senior Manager, Ethical Use Policy, Salesforce
🌎 USA
Leads operationalisation of Salesforce's AI acceptable use policies, specialising in AI governance, product safety, and cross-functional policy. Previously at Twitter, where she built the platform's first recommendations explainer, advancing algorithmic transparency and user choice. MPP, University of Maryland.
Head of AI Policy, Duco
🌐 International
Leads AI safety and governance for enterprise clients at Duco. Specialises in model fine-tuning, adversarial testing, and regulatory compliance. Previously at Meta, the Federation of American Scientists, and the US Census Bureau. Researches language-specific gaps in AI safety training datasets and publishes a widely read AI policy newsletter.
Non-Resident Fellow, Center for Long-Term Cybersecurity / Research Director, BASE
🌎 USA
Researches global security implications of artificial intelligence. Previously Research Associate at the Frontier Model Forum, AI Capabilities Analyst at CISA, and Junior AI Fellow at CSET. Public Interest Technology Fellow at the US Census Bureau. Research Director at BASE.
Senior Fellow, Portulans Institute
🌐 International
Independent research and policy consultant specialising in AI governance, digital rights, and technology-driven institutional innovation. Has worked with OpenAI, Stanford HAI, CIVICUS, Global Witness, and the International Labour Organization. Work spans AI risk assessment, red-teaming advanced AI systems, and cross-national policy datasets for policymakers.
Join the Directory
Are you a woman of colour working in AI safety, AI governance, or algorithmic justice? We'd love to include you. It takes about 10 minutes and is completely free.
You can review your profile before anything is published, and update or remove it at any time.
Submit Your Profile →
"There is no established website that specialises in this area — creating a need for a platform like this, especially outside the USA." — DataBias original proposal, 2022