Author: Nayantara Ranganathan
This article first appeared in Asia Times on 9 August 2018. The views are of the individual authors.
In the last week of July, two events happened in rapid succession: the release of India's much-awaited draft data-protection legislation, and a breaking news story that Watson, IBM's computing system that helps physicians recommend individualized cancer treatments, was offering "unsafe and incorrect" options. Interestingly, the report accompanying the draft bill cited the tie-up between Manipal Hospitals, one of India's largest hospital chains, and IBM Watson as an illustration of the benefits that "artificial intelligence" can bring us.
Now, if a cancer patient were wary of having symptoms analyzed by IBM Watson, he or she could seek treatment at a hospital that has not tied up with machine-learning platforms to devise treatment plans, thereby doing something critical in this context: exercising the choice not to be subject to IBM Watson.
Between the hype-fueled adoption of machine learning and artificial intelligence in governance on the one hand, and the unfathomable nature of their harms on the other, lies a crucial issue that has so far been overlooked.
The question to ask, therefore, is whether the draft bill and report, prepared by the Srikrishna Committee, which was set up to study data protection and privacy in India, provide safeguards such as opt-outs or explanations to people. The short answer is no, but the explanation of why this is so is a long one.
Read the full article here.