Algorithmic Explainability Working Group: Discussion Note 01

Automated Decision Systems (ADS) have proliferated in recent years, in India as elsewhere, enabled by the rise of big data. They are increasingly crucial components of consumer-market and citizen-state interactions. Thinking through effective, holistic guidelines that can inform both self-regulatory and government regulatory frameworks is therefore critical. A rich body of literature and experience shows the potential downsides of poorly implemented ADS in the absence of such frameworks.

The Working Group on FATE (Fair, Accountable, Trustworthy and Explainable) Standards for ADS in India, anchored by IDFC Institute’s Data Governance Network and CPC Analytics, aims to develop such frameworks for specific use cases. This discussion paper summarises the key takeaways from its first session with a diverse group of academic, industry and policy experts. These takeaways will inform the Working Group’s scope of work.


Key Takeaways

Standard-setting for algorithms must be contextual and specific. Fairness, accountability, trustworthiness and explainability mean different things in different use cases and for different stakeholders.

Explainability may be at odds with efficiency and performance. The more robust and sophisticated an algorithm, the less explainable its decision-making tends to be. This raises the question of whether full explainability is necessary for achieving the objectives of FATE design.
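To make this trade-off concrete, the sketch below (illustrative only, not drawn from the Working Group's discussion) contrasts a linear classifier, whose every prediction decomposes into per-feature weights, with a boosted tree ensemble that often scores better but offers no comparable decomposition. The dataset and all parameters are synthetic.

```python
# Illustrative sketch: an interpretable model vs. a more complex one
# on synthetic data. Nothing here comes from the discussion note itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A linear model: every prediction decomposes into per-feature
# contributions (coefficient * feature value), so it is directly explainable.
linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("linear accuracy:", linear.score(X_te, y_te))
print("per-feature weights:", np.round(linear.coef_[0], 2))

# A boosted ensemble of many trees: often more accurate, but no single
# prediction reduces to a short human-readable rule.
ensemble = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
# There is no analogue of `coef_` here; explaining a prediction requires
# post-hoc tools, which approximate rather than reproduce the model's
# internal reasoning.
```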

In certain contexts, outcome mapping could serve as a proxy for explainability. Assessing outcomes can help determine which tool to deploy and whether bias exists in the system.
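As a minimal, hypothetical illustration of outcome mapping, the sketch below audits a system's logged decisions by group rather than inspecting the model's internal logic. The groups, data and the notion of a "favourable" outcome are all assumptions for the example.

```python
# Minimal sketch of outcome mapping: instead of opening the model,
# audit its decisions. Group labels and data below are hypothetical.
from collections import Counter

# (group, decision) pairs as a system might log them; 1 = favourable outcome.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

totals, favourable = Counter(), Counter()
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

rates = {g: favourable[g] / totals[g] for g in totals}
print("favourable-outcome rate by group:", rates)

# A large gap between groups (here 0.75 vs 0.25) flags possible bias
# without requiring the model itself to be explainable.
gap = max(rates.values()) - min(rates.values())
print("outcome-rate gap between groups:", gap)
```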

Given how contextual each element of FATE is, a single broad standard is impossible to operationalise. FATE standards must therefore be drafted narrowly, keeping not just sectoral but use-case variance in mind.