AI Governance and Algorithmic Auditing in Financial Institutions: Lessons From Singapore
20 Pages · Posted: 13 May 2025 · Last revised: 1 Apr 2025
Date Written: March 31, 2025
Abstract
This paper examines the role of algorithmic auditing as a mechanism for responsible AI development and deployment in the financial sector, with a particular focus on Singapore's regulatory and institutional initiatives. Against the backdrop of fragmented global AI governance frameworks, the study analyzes how Singapore has developed operational tools, such as the Veritas Toolkit, AI Verify, Project Moonshot, and Project Mindforge, that go beyond abstract ethical principles to provide measurable, use-case-specific standards for auditing AI systems. These initiatives help standardize audit practices, enhance transparency, and bridge trust gaps between financial institutions, regulators, and stakeholders. The paper finds that Singapore's model is notable for its regulator-led, collaborative approach and its focus on sectoral applicability, particularly in high-risk areas such as credit scoring and fraud detection. However, it also identifies key limitations: the voluntary nature of these frameworks, the difficulty of replicating them in larger or more fragmented jurisdictions, and the absence of universally accepted standards for algorithmic auditing. The study further highlights the need to broaden auditing efforts to encompass organizational and human factors, recognizing that how AI outputs are used and interpreted is equally critical to managing risk. Ultimately, the paper offers insights into how Singapore's experience can inform the development of scalable, enforceable, and effective algorithmic auditing frameworks for global financial services.
Keywords: AI audits, artificial intelligence, fintech, financial regulation, AI governance