About

Artificial Intelligence and “Justice” are currently brought together under several headings: AI for Social Good; Ethical AI; Responsible AI; Fair, Accountable and Transparent AI; Cyberjustice; and AI for Democracy. Each of these domains has its own conceptualisation of the potential benefits and harms of AI, which is not necessarily informed by marginalised voices or by those who may experience those harms. In addition, these domains are located within a wider context of AI research and governance that has pervasive challenges of representation and accessibility. The primary aim of the fellowship, therefore, is to surface other ways of thinking about AI and its impacts that do not centre (primarily) White, Western European notions of innovation, morality and justice. The project draws on theoretical contributions from queer, intersectional feminist scholarship to focus on two key factors in how AI is likely to impact questions of justice: power and marginalisation.

We have named our research group after a quote from Pratyusha Kalluri of the Radical AI Network, who proposed that asking whether AI is good or fair is not the right question: we have to ask how it “shifts power”. Power relationships preserve inequality within our society in real and material terms. How will AI contribute to those inequalities? Is there any chance AI can help to foster new balances of power, and if so, what will this look like in practice? These are the sub-questions with which our research team is concerned, and they require new datasets, new ways of looking at the data we do have, and long-term views of power over time.

News

Projects


In this project, we are working to develop a framework for thinking about the impacts of AI that is broader, more interconnected in terms of short-, medium- and long-term beneficiaries, and entangled with issues such as power, wealth, and the resulting influence on economic, political and legal factors that make certain applications of AI more likely. One of the challenges of thinking about an "ethical" AI or an "AI for Social Good" is the tendency to move conversations into the cultural realm, where goodness and badness may be viewed as culturally subjective terms. One approach to resolving this problem is to unify different definitions and arrive at a set of "universal" principles. Another is to align work with a set of goals that has already been vetted by an international community of stakeholders. In our view, these approaches do not adequately consider longer-term, indirect impacts that are likely to influence matters of justice, nor do they adopt a wide enough scope of potential beneficiaries and potential harms to anticipate how this technology will impact our world. In this project, we are working with collaborators Soraya Kouadri Mostéfaoui and Syed Mustafa Ali to explore ways of considering the impacts of AI and its subfields that include decolonial perspectives of world-systems thinking.

Complementing a broader, ecological perspective on the current and future impacts of AI within the current paradigm of racialised, industrial capitalism, we also wish to identify what might emerge under different circumstances, or when AI is directed toward goals that are not normatively connected to it. Queerness, as a world-building concept, involves disrupting, dismantling and dissolving normative constructs that exclude and oppress. In what ways are queer people engaging with this technology? What does the project of "queering AI" mean thus far? These are some of the questions we will be exploring with artists, writers, researchers and activists associated with the queer community.


One important strand of our research is concerned with how AI researchers and developers come to hold the viewpoints they have on the impacts of AI and their role in determining those impacts. What levels of impact do AI researchers consider? How do they conceptualise the harms and benefits of AI? What principles do they follow, if any, to mitigate potential harms? Where did they learn this? Through this line of research we hope to understand more about where and how those working on AI begin to formulate their ideas about what AI should become.

Publications

2025

Sides, Teresa; Kbaier, Dhouha; Farrell, Tracie and Third, Aisling (2025). Bridging trust gaps: Stakeholder perspectives on AI adoption in the United Kingdom NHS primary care. Digital Health, 11.

Brown, Venetia; Larasati, Retno; Kwarteng, Joseph and Farrell, Tracie (2025). Understanding AI and Power: Situated Perspectives from Global North and South Practitioners. AI & Society (Early Access).

Pavón Pérez, Ángel; Fernandez, Miriam; Farrell, Tracie; Nozza, Debora and de Kock, Christine (2025). Foreword: Towards a Safer Web for Women - First International Workshop on Protecting Women Online. In: WWW '25: The ACM Web Conference 2025, 28 Apr - 02 May 2025, Sydney, Australia.

2024

Brown, Venetia; Larasati, Retno; Third, Aisling and Farrell, Tracie (2024). A Qualitative Study on Cultural Hegemony and the Impacts of AI. In: AAAI/ACM Conference on AI, Ethics, and Society (AIES-24), 21-23 Oct 2024, San Jose, CA, USA.

Bayer, Vaclav; Mulholland, Paul; Hlosta, Martin; Farrell, Tracie; Herodotou, Christothea and Fernandez, Miriam (2024). Co-creating an equality, diversity and inclusion learning analytics dashboard for addressing awarding gaps in higher education. British Journal of Educational Technology, 55(5), pp. 2058–2074.

Bakir, Mehmet Emin; Farrell, Tracie and Bontcheva, Kalina (2024). Abuse in the time of COVID-19: the effects of Brexit, gender and partisanship. Online Information Review, 48(5), pp. 1045–1062.

2023

Kwarteng, Joseph; Farrell, Tracie; Third, Aisling and Fernandez, Miriam (2023). Annotators’ Perspectives: Exploring the Influence of Identity on Interpreting Misogynoir. In: ASONAM 2023: The 2023 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 6-9 Nov 2023, Kusadasi, Turkey.

Sides, T.; Farrell, T. and Kbaier, D. (2023). Understanding the Acceptance of Artificial Intelligence in Primary Care. In: 25th International Conference on Human-Computer Interaction, 23-28 Jul 2023, AC Bella Sky Hotel and Bella Center, Copenhagen, Denmark.

Farrell, Tracie and Kouadri Mostéfaoui, Soraya (2023). False Hopes in Automated Abuse Detection (Short Paper). In: HHAI-WS 2023: Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI), 26-27 Jun 2023, Munich, Germany.

Mensio, Martino; Burel, Gregoire; Farrell, Tracie and Alani, Harith (2023). MisinfoMe: A Tool for Longitudinal Assessment of Twitter Accounts’ Sharing of Misinformation. In: UMAP '23: 31st ACM Conference on User Modeling, Adaptation and Personalization, 26-29 Jun 2023, Limassol, Cyprus.

Kwarteng, Joseph; Farrell, Tracie; Third, Aisling; Burel, Gregoire and Fernandez, Miriam (2023). Understanding Misogynoir: A Study of Annotators’ Perspectives. In: 15th ACM Web Science Conference 2023 (WebSci ’23), 30 Apr - 01 May 2023, Austin, TX, USA.

Brown, Venetia; Collins, Trevor and Braithwaite, N St.J (2023). Teaching roles and communication strategies in interactive web broadcasts for practical lab and field work at a distance. Frontiers in Education, 8.

Brown, Venetia Amanda (2023). The Role of Interactive Web Broadcasts in Fostering Distance Learning Students’ Engagement with Practical Lab and Fieldwork. [Thesis]

Media

After AI Symposium 2025

In our first symposium (After AI 2024), we welcomed a range of scholarship examining the use value of our current and potential practices with AI. For After AI 2025, we encourage an even more expansive disciplinary range of submissions, enabling us to expand the fields in which those invested in AI might share their predictions, desires, and insights.

ChatGPT event

The Shifting Power team presented "ChatGPT, your new neoliberal friend", a reflection on the Whiteness of the internet and its impact on informational searches using ChatGPT, at the open forum session "ChatGPT and Friends: How Generative AI is Going to Change Everything" on 23rd March. Slides will be made available on the event website.

After AI Symposium 2024

The “After AI” Symposium is an interdisciplinary, holistic discussion of Artificial Intelligence. “Shifting Power” is a UKRI Future Leaders Fellowship project exploring the long-term impacts of AI, particularly those that do not centre (primarily) White, Western European notions of innovation, morality and justice. To help us explore the future of this technology, and the consequences we expect it to have on people, society, the planet and beyond, the Shifting Power team is hosting a symposium on what happens “After AI”. Through our explorations of the topic with researchers and enthusiasts from all over the world, we have learned that our understanding of AI and its impacts is often coupled with a sense of urgency: what is right in front of our faces, the real and present impacts of AI now and in the near future. With this event, we want to look further. We are therefore asking you to use whatever form of prediction you find relevant – science, intuition, divine inspiration, prophecy, astrology, folklore, etc. – and help us explore the question: After AI?