A Systematic Review of Responsibility and Accountability in Data-driven and AI Systems [Dataset]

- Submitted by: Shi Yun Ng
- Last updated: Tue, 04/08/2025 - 06:44
- DOI: 10.21227/bh4y-v867
Abstract
When artificial intelligence (AI) systems take actions or make decisions, the question of who is accountable for those outcomes arises. The rapid advancement of AI technology compounds this challenge: current legal and ethical frameworks struggle to keep pace with innovation, leaving existing policies without a standardised, comprehensive treatment of accountability in AI. Understanding the decisions made by AI systems is often difficult due to the intricate nature of these systems, which stakeholders and the general public often perceive as a "black box." For organisations with limited resources, identifying experts who can fulfil the specific roles of AI developers, data scientists, and ethics specialists can be daunting, further complicating the assignment of responsibilities to each individual involved in the AI development and deployment process. This paper aims to bridge the gap between businesses and the public in embracing AI accountability and responsibility. A Multivocal Literature Review (MLR) is used to review literature that integrates both academic (formal) and grey (informal) sources, and reports an up-to-date systematic review of current governance frameworks related to AI accountability and responsibility from January 2023 to March 2025. A comparison of AI governance frameworks covering accountability and responsibility, sourced from both academic and grey literature, is included. The study examines the challenges encountered when implementing AI governance frameworks within industries and society. Finally, recommendations for fostering accountability and responsibility in AI systems are provided, offering policymakers and researchers a roadmap for future studies.
This Excel workbook contains two tabs: one named "Database 2023" and the other "Database 24-25". These tabs present the results retrieved from selected academic databases. Conditional formatting is used to highlight relevant articles based on their thematic focus:
- Blue: related to accountability
- Green: related to responsibility
- Orange: related to AI ethics
- Purple: related to AI governance
Note that the initial article searches included a wide range of alternative keywords. The highlighting further refines the selection, narrowing it to articles specifically focused on AI accountability and responsibility within the context of AI ethics and governance.
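The two-tab layout described above can be loaded programmatically. The sketch below is a minimal illustration, assuming pandas with the openpyxl engine; the column names (`Title`, `Theme`) are hypothetical placeholders, not the dataset's actual columns. For self-containedness it first builds a tiny in-memory workbook mirroring the tab names, then reads both tabs back at once.

```python
from io import BytesIO
import pandas as pd

# Build a small in-memory workbook with the dataset's two tab names.
# Column names here are illustrative assumptions only.
buf = BytesIO()
with pd.ExcelWriter(buf, engine="openpyxl") as writer:
    pd.DataFrame({"Title": ["Example article"], "Theme": ["accountability"]}).to_excel(
        writer, sheet_name="Database 2023", index=False)
    pd.DataFrame({"Title": ["Another article"], "Theme": ["responsibility"]}).to_excel(
        writer, sheet_name="Database 24-25", index=False)
buf.seek(0)

# Passing a list of sheet names returns a dict keyed by sheet name;
# with the real file you would pass its path instead of `buf`.
sheets = pd.read_excel(buf, sheet_name=["Database 2023", "Database 24-25"])
print(list(sheets))
```

One caveat: `read_excel` returns cell values only, so the colour highlighting described above lives in the workbook's formatting and is not exposed this way; a library such as openpyxl would be needed to inspect cell fills directly.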