Ensuring Contestability in Artificial Intelligence Systems

Thursday 26 June 2025

As we increasingly rely on artificial intelligence (AI) systems to make important decisions, it’s essential that these systems are designed with transparency and accountability in mind. A new study sheds light on a crucial aspect of AI development: contestability.

Contestability refers to the ability of users to challenge or dispute decisions made by an AI system. It is particularly important in high-stakes domains such as healthcare, finance, and law enforcement, where incorrect or biased decisions can have severe consequences.

Researchers at the Data Science Institute at the University of Technology Sydney (UTS) and IT University Austria set out to develop a framework for evaluating contestability in AI systems. They identified eight key properties that contribute to a system’s contestability, including the provision of explanations, the availability of dispute mechanisms, and the presence of built-in safeguards.
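To make the idea of a property-based evaluation concrete, here is a minimal sketch of how such a checklist might be recorded in code. The property names and class are hypothetical: only explanations, dispute mechanisms, and safeguards are named in this article, so the sketch should not be read as the authors' actual framework.

```python
# Illustrative sketch only: property names beyond those mentioned in the
# article (explanations, dispute mechanisms, safeguards) are assumptions.
from dataclasses import dataclass, field

@dataclass
class ContestabilityReview:
    """Records whether a system satisfies each contestability property."""
    system_name: str
    findings: dict[str, bool] = field(default_factory=dict)

    def record(self, prop: str, satisfied: bool) -> None:
        self.findings[prop] = satisfied

    def gaps(self) -> list[str]:
        """Return the properties the system does not satisfy."""
        return [p for p, ok in self.findings.items() if not ok]

# Example use, loosely mirroring the credit scoring case study discussed below.
review = ContestabilityReview("automated credit scoring")
review.record("provides explanations", True)          # basic explanations offered
review.record("transparent decision process", False)  # opaque scoring logic
review.record("accessible dispute mechanism", False)  # English-only online portal
review.record("built-in safeguards", False)

print(f"{review.system_name}: gaps in {review.gaps()}")
```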

The team applied their framework to three case studies: an automated credit scoring system, a personalized news recommendation algorithm, and a machine learning-based loan application process. The results were striking: each system fell short in various aspects of contestability, highlighting the need for improvements.

For instance, the credit scoring system provided basic explanations but lacked transparency in its decision-making process. Users could only challenge decisions through an online portal, which was available only in English and required digital literacy. In contrast, the news recommendation algorithm offered more detailed explanations but failed to provide users with a clear understanding of how their feedback influenced the recommendations.

The loan application system, while providing some contestability mechanisms, lacked adaptivity and transparency in its decision-making process. Users could not directly challenge decisions or request changes to their applications, and the appeals process was slow and lacked external oversight.

These findings have significant implications for the development of AI systems. By prioritizing contestability, developers can create more transparent, accountable, and trustworthy AI systems that benefit both users and society as a whole.

To improve contestability, researchers recommend increasing transparency in decision-making processes, providing actionable explanations, and making dispute mechanisms accessible to all stakeholders. They also suggest implementing built-in safeguards, such as guarantees against retaliation for users who challenge decisions.

By incorporating these principles into AI development, we can create systems that are not only more effective but also more just and equitable. As our reliance on AI continues to grow, it’s essential that we prioritize contestability to ensure that these systems serve the greater good.

Cite this article: “Ensuring Contestability in Artificial Intelligence Systems”, The Science Archive, 2025.

AI, Transparency, Accountability, Contestability, Decision-Making, Artificial Intelligence, Machine Learning, Fairness, Equity, Bias, Decision Support Systems.

Reference: Catarina Moreira, Anna Palatkina, Dacia Braca, Dylan M. Walsh, Peter J. Leihn, Fang Chen, Nina C. Hubig, “Explainable AI Systems Must Be Contestable: Here’s How to Make It Happen” (2025).
