Responsible AI is a Global South question now
If the next wave of frontier models is trained, fine-tuned, and red-teamed without us, we will inherit their assumptions.
The frontier-model conversation has, until very recently, been a conversation among maybe five companies, three jurisdictions, and a small number of universities. The Global South has not been absent from it — there are researchers, deployments, even policy contributions — but it has not been present in the way that matters: in the upstream choices about what these systems are optimized for.
That matters because the assumptions go all the way down. A safety classifier trained on one set of cultural intuitions will mislabel the harms of another. A red team that has never reasoned in Pashto will not know what a jailbreak looks like in Pashto.
What participation looks like
It is not enough to ship the same product to more countries. Participation means data, compute, evaluation, governance — and the right to say no to a deployment, on grounds the upstream team is obliged to take seriously. We are early enough that this is still possible. We will not be early for much longer.