Artificial Intelligence and “Justice” are currently brought together under several headings, namely AI for Social Good, Ethical AI, Responsible AI, Fair, Accountable and Transparent AI, Cyberjustice and AI for Democracy. Each of these domains has its own conceptualisation of the potential benefits and harms of AI, and these conceptualisations are not necessarily informed by marginalised voices or by those most likely to experience the harms. In addition, these domains sit within a wider context of AI research and governance marked by pervasive challenges of representation and accessibility. The primary aim of the fellowship, therefore, is to surface other ways of thinking about AI and its impacts that do not centre (primarily) White, Western European notions of innovation, morality and justice. The project draws on theoretical contributions from queer, intersectional feminist scholarship to focus on two key factors in how AI is likely to affect questions of justice: power and marginalisation.
We have named our research group after a remark by Pratyusha Kalluri of the Radical AI Network, who proposed that asking whether AI is good or fair is not the right question; we have to ask how it “shifts power”. Power relationships preserve inequality within our society in real and material terms. How will AI contribute to those inequalities? Is there any chance AI can help to foster new balances of power, and if so, what would this look like in practice? These are the sub-questions with which our research team is concerned, and answering them requires new datasets, new ways of looking at the data we do have, and long-term views of how power shifts over time.