About

Artificial Intelligence and “Justice” are currently brought together under several headings, namely AI for Social Good, Ethical AI, Responsible AI, Fair, Accountable and Transparent AI, Cyberjustice and AI for Democracy. Each of these domains has its own conceptualisation of the potential benefits and harms of AI, which is not necessarily informed by marginalised voices or by those who may experience potential harms. In addition, these domains are located within a wider context of AI research and governance that has pervasive challenges of representation and accessibility. The primary aim of the fellowship, therefore, is to surface other ways of thinking about AI and its impacts that do not centre (primarily) White, Western European notions of innovation, morality and justice. The project utilises theoretical contributions from queer, intersectional feminist scholarship to focus on two key factors in how AI is likely to impact questions of justice: power and marginalisation.

We have named our research group after a quote from Pratyusha Kalluri of the Radical AI Network, who proposed that asking whether AI is good or fair is not the right question. We have to look at how it “shifts power”. Power relationships preserve inequality within our society in real and material terms. How will AI contribute to those inequalities? Is there any chance AI can help to foster new balances of power and if so, what will this look like in practice? Those are sub-questions with which our research team is concerned, and they require new datasets, new ways of looking at the data we do have, and long-term views of power over time.

News

Talk by Dr Syed Mustafa Ali: “The Political {Economy, Ecology, Theology} of AI”

Dr Syed Mustafa Ali gave a talk titled “The Political {Economy, Ecology, Theology} of AI” at the Ecology of AI (EcAI) Workshop 2023, held at the HHAI 2023 Conference in Munich, Germany.


https://youtu.be/MYkIPQnNnlA


KMi Research Fellow Tracie Farrell Discusses Ethical AI with ITV

Research Fellow Dr Tracie Farrell spoke with ITV on Tuesday afternoon, July 13th, about the consequences of Artificial Intelligence for society and her research team’s efforts to reframe


Call for participation: Interview Participant

We cordially invite you to be a valued participant in an exciting research study focusing on how AI researchers understand and shape the potential impact of their work on


Projects


In this project, we are working to develop a framework for thinking about the impacts of AI that is broader, more interconnected in terms of short-, medium- and long-term beneficiaries, and entangled with issues such as power, wealth, and the resulting influence on economic, political and legal factors which make certain applications of AI more likely. One of the challenges of thinking about an "ethical" AI or an "AI for Social Good" is the tendency to move conversations into the cultural realm, where goodness and badness may be viewed as culturally subjective terms. One approach to resolving this problem is to unify different definitions and arrive at a set of "universal" principles. Another is to align work with a set of goals that has already been vetted by an international community of stakeholders. In our view, these approaches do not adequately consider longer-term, indirect impacts that are likely to influence matters of justice, nor do they include a scope of potential beneficiaries and potential harms wide enough to anticipate how this technology will impact our world. In this project, we are working with collaborators Soraya Kouadri Mostéfaoui and Syed Mustafa Ali to explore ways of considering the impacts of AI and its subfields that include decolonial perspectives of world systems thinking.

Complementing a broader, ecological perspective on the current and future impacts of AI within the current paradigm of racialised, industrial capitalism, we also wish to identify what might emerge under different circumstances, or when AI is directed toward goals that are not normatively connected to it. Queerness, as a world-building concept, involves disrupting, dismantling and dissolving normative constructs that exclude and oppress. In what ways are queer people engaging with this technology? What does the project of "queering AI" mean thus far? These are some of the questions we will be exploring with artists, writers, researchers and activists associated with the queer community.


One important strand of our research is concerned with how AI Researchers and Developers come to hold the viewpoints they have on the impacts of AI and their role in determining those impacts. What levels of impact do AI researchers consider? How do they conceptualise the harms and benefits of AI? What principles do they follow, if any, to mitigate potential harms? Where did they learn this? Through this line of research we hope to understand more about where and how those working on AI begin to formulate their ideas about what AI should become.

Discover all our projects

Publications

Kwarteng, J., Farrell, T., Third, A. and Fernandez, M., (2023) Annotators’ Perspectives: Exploring the Influence of Identity on Interpreting Misogynoir, ASONAM 2023: The 2023 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining

Farrell, T. and Kouadri Mostéfaoui, S., (2023) False Hopes in Automated Abuse Detection (Short Paper), CEUR Workshop Proceedings of the Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI 2023)

Sides, T., Farrell, T. and Kbaier, D., (2023) Understanding the Acceptance of Artificial Intelligence in Primary Care, 25th International Conference on Human-Computer Interaction

Mensio, M., Burel, G., Farrell, T. and Alani, H., (2023) MisinfoMe: A Tool for Longitudinal Assessment of Twitter Accounts’ Sharing of Misinformation, Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization

Kwarteng, J., Farrell, T., Third, A., Burel, G. and Fernandez, M., (2023) Understanding Misogynoir: A Study of Annotators’ Perspectives, 15th ACM Web Science Conference 2023 (WebSci ’23)

Media

ChatGPT event

The Shifting Power team presented "ChatGPT, your new neoliberal friend", a reflection on the Whiteness of the internet and its impact on informational searches using ChatGPT, at the open forum session entitled "ChatGPT and Friends: How Generative AI is Going to Change Everything" on March 23rd. Slides will be made available on the event website.