Shifting Power: Artificial Intelligence Researchers’ Perspectives from the Margins
Dr Venetia Brown from the Shifting Power team presented.
Artificial Intelligence and “justice” are currently brought together under several headings: AI for Social Good; Ethical AI; Responsible AI; Fair, Accountable and Transparent AI; Cyberjustice; and AI for Democracy. Each of these domains has its own conceptualisation of the potential benefits and harms of AI, and these conceptualisations are not necessarily informed by marginalised voices or by those who may experience the potential harms. In addition, these domains sit within a wider context of AI research and governance that has pervasive challenges of representation and accessibility. The primary aim of the fellowship, therefore, is to surface other ways of thinking about AI and its impacts that do not centre (primarily) White, Western European notions of innovation, morality and justice. The project draws on theoretical contributions from queer, intersectional feminist scholarship to focus on two key factors in how AI is likely to impact questions of justice: power and marginalisation.
We have named our research group after a quote from Pratyusha Kalluri of the Radical AI Network, who proposed that asking whether AI is good or fair is not the right question: we have to look at how it “shifts power”. Power relationships preserve inequality within our society in real and material terms. How will AI contribute to those inequalities? Is there any chance AI can help to foster new balances of power, and if so, what will this look like in practice? These are the sub-questions with which our research team is concerned, and they require new datasets, new ways of looking at the data we do have, and long-term views of power over time.
Dr Dan McQuillan gave a talk titled “Resisting AI” at the 2nd Ecology of AI Hybrid-Workshop 2024 @ HHAI 2024 Conference, Malmö, Sweden:
https://youtu.be/WCnSCB19l7A
On Friday May 17th, the Shifting Power team in KMi, alongside Dr Justin Hunt from Queen Mary University and Arts Council England, and Dr Ben Sweeting from the Radical Research
In this project, we are working to develop a framework for thinking about the impacts of AI that is broader, more interconnected in terms of short-, medium- and long-term beneficiaries, and entangled with issues such as power, wealth, and the resulting influence on the economic, political and legal factors that make certain applications of AI more likely. One of the challenges of thinking about an "ethical" AI or an "AI for Social Good" is the tendency to move conversations into the cultural realm, where goodness and badness may be viewed as culturally subjective terms. One approach to resolving this problem is to unify different definitions and arrive at a set of "universal" principles. Another is to align work with a set of goals that has already been vetted by an international community of stakeholders. In our view, these approaches do not adequately consider the longer-term, indirect impacts that are likely to influence matters of justice, nor do they take a wide enough view of potential beneficiaries and potential harms to anticipate how this technology will impact our world. In this project, we are working with collaborators Soraya Kouadri Mostéfaoui and Syed Mustafa Ali to explore ways of considering the impacts of AI and its subfields that include decolonial perspectives of world systems thinking.
Complementing a broader, ecological perspective on the current and future impacts of AI, within the current paradigm of racialised, industrial capitalism, we also wish to identify what might emerge under different circumstances or directed toward goals that are not normatively connected to AI. Queerness, as a world-building concept, involves disrupting, dismantling and dissolving normative constructs that exclude and oppress. What are the ways in which queer people are engaging with this technology? What does the project of "queering AI" mean thus far? These are some of the questions we will be exploring with artists, writers, researchers and activists associated with the queer community.
One important strand of our research is concerned with how AI Researchers and Developers come to hold the viewpoints they have on the impacts of AI and their role in determining those impacts. What levels of impact do AI researchers consider? How do they conceptualise the harms and benefits of AI? What principles do they follow, if any, to mitigate potential harms? Where did they learn this? Through this line of research we hope to understand more about where and how those working on AI begin to formulate their ideas about what AI should become.
Bayer, V., Mulholland, P., Hlosta, M., Farrell, T., Herodotou, C. and Fernandez, M., (2024) Co‐creating an equality diversity and inclusion learning analytics dashboard for addressing awarding gaps in higher education, British Journal of Educational Technology
Farrell, T., Alani, H. and Mikroyannidis, A., (2024) Mediating learning with learning analytics technology: guidelines for practice, Teaching in Higher Education
Kwarteng, J., Farrell, T., Third, A. and Fernandez, M., (2023) Annotators’ Perspectives: Exploring the Influence of Identity on Interpreting Misogynoir, ASONAM 2023: The 2023 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining
Farrell, T. and Kouadri Mostéfaoui, S., (2023) False Hopes in Automated Abuse Detection (Short Paper), CEUR Workshop Proceedings of the Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI 2023)
Sides, T., Farrell, T. and Kbaier, D., (2023) Understanding the Acceptance of Artificial Intelligence in Primary Care, 25th International Conference on Human-Computer Interaction
The Shifting Power team presented "ChatGPT, your new neoliberal friend", a reflection on the Whiteness of the internet and its impact on informational searches using ChatGPT, at the open forum session "ChatGPT and Friends: How Generative AI is Going to Change Everything" on March 23rd. Slides will be made available on the event website.