The discussion about the possible utility of AI in improving efficiency or increasing productivity is reaching new areas of society. Most recently, a senior judge announced that AI tools will be used to improve the efficiency of the national judicial system. The argument is built around reducing the backlog of cases as well as helping with research and improving decision-making. The desire to improve the efficiency of a clogged system must be welcomed.
Similar language to that noted above is used by leaders and decision-makers in the fields of medicine (improved diagnosis, error reduction, etc.) and education, among others. All of these statements tend to have a customary line or two about appropriate ethical guardrails. And that is where the problem lies.
The problem is not with having ethical guardrails or checks, but with the vague and cursory approach taken to them. While the utility is described in great detail with specific goals, the risks are lumped together in vague, non-specific and abstract terms that do not do justice to a deeper understanding of the risks and harms.
There is no denying that AI tools are becoming increasingly common and accessible. Their potential is also well-documented. But so is the potential harm. Weaponisation of technology - any technology - against those who are vulnerable is well-established and well-documented. For example, we know that the digitisation of national ID cards can bring many benefits to those who are privileged (myself included), but the same tools can be used to block many people's access to basic services at the touch of a button. The stories of stateless Bengali members of our community in Machhar Colony and elsewhere - who are denied their due rights - have been shared in these pages and those of other newspapers. Similar stories of the weaponisation of technology against those who are weak, vulnerable or on the wrong side of the powerful have been brought to the fore in newspapers and academic literature. Ethnic minorities, refugees, migrants, the poor and those viewed with suspicion by the powerful pay a heavy price for this weaponisation of technology. Bias in algorithms, serious errors that may be present but hard to detect early, and an over-reliance on technology that is far from perfect always hurt the weak more than they hurt the rich and the powerful.
The ethical structure around technology starts from four basic principles - namely autonomy (i.e. respect for persons, meaning that humans are ends unto themselves and not a means to an end), nonmaleficence (i.e. do no harm), beneficence (i.e. an obligation not just to avoid harm, but to contribute to welfare), and justice (i.e. distribution of benefit in a just and equitable manner). The idea is that these principles are not independent and need to be applied in concurrence. In a practical sense, this means that the use of new technology must not harm, must do actual and measurable good (that cannot be done in the absence of this technology), must allow for autonomy, and must produce benefits that are distributed in a just manner. When we make the argument that using AI does good in a particular context, can we also argue that it does not harm the person in question in any way? Can we argue that the risk to their autonomy or privacy is negligible? And when we talk about justice, is the benefit to the system, to us, or to the person on the other end who we think is benefitting? These questions may seem a distraction in a time of techno-philia or euphoria about the cutting-edge, but they are the foundations of a caring and just society. We need rich, honest and rigorous discussion, with realistic and specific scenarios and our humanity at the centre, to guide our thinking and decision-making.
In the classroom, the clinic or the courtroom, the use of new technologies must be guided by a single principle - one that ensures it helps not the one who is most powerful or privileged, but the weakest, and never hurts them.