Fixation with the fantastic

The issue of bias in algorithms is one that cannot simply be ignored


Muhammad Hamid Zaman | July 29, 2025
The author is a professor and the director of the Center on Forced Displacement at Boston University


As the applications of AI touch ever more spheres of our lives, healthcare is often cited as the one where it may do the greatest good for humanity. In Pakistan, the intersection of health and AI has excited several groups, and among the applications most often discussed is mental health.

There are startups exploring the possibilities of improved diagnosis and more efficient care in an area that has largely been overlooked in the national health landscape. Most of these AI mental-health startups pitch their idea as a game changer for socio-economically disadvantaged and underserved communities.

While there are many important and urgent reasons to focus on mental health, including limited awareness, stigma and a shortage of trained and licensed providers, there are also deeply disturbing trends in the AI-health space that are likely to harm countless vulnerable people: those who have limited resources, are marginalised because of their social status and are unlikely to find justice if something goes wrong.

As I hear about yet another local startup (run by those who know little about health systems) using large language models, or a chatbot, to diagnose anxiety, bipolar disorder, depression or other mental health conditions, I am deeply disturbed by the lack of any serious discussion of ethics, context or the implications for vulnerable communities.

First, we have to recognise that those who struggle with mental health conditions are vulnerable people with complex needs. The care they require rests not just on a careful understanding of their condition, but also on trust in the provider. That empathy and trust are at the core of long-term care, and so is the unique social and cultural context that cannot be coded into a model.

One does not need to be a philosopher or a bioethicist to recognise that chatbots will never provide the deep empathy that a committed healthcare provider would.

Second, medical errors with AI-based diagnosis are not as uncommon as health-tech entrepreneurs would like us to believe. They are common and serious, and all the more concerning when one is dealing with mental health patients. In a context like Pakistan, where there are few safety nets for patients, these errors can have life-changing consequences.

Imagine a poor person who is harmed by a bot or an AI diagnosis platform: where does that person go to seek justice? What guardrails protect him or her? The argument that there are no safety nets or regulation anyway (with or without AI) is not a reason to march ahead with a new technology and increase that vulnerability. Instead, it should be a reason to pause, reflect and ensure that we create more safety nets, not fewer.

Third, the issue of bias in algorithms is one that cannot simply be ignored. The models that underpin many of these tools are developed using data from high-income countries, where the context differs substantially from ours. As a result, their diagnostic ability for the situations our communities face will be significantly limited and may lead to serious harm.

Fourth, issues around data privacy are serious, even more so when it comes to vulnerable groups. Once again, one has to think about guardrails that currently do not exist. For example, could the data of groups that are marginalised (because of their religion, ethnicity, sect or gender) be used against them? How do we ensure that their information is not weaponised against them? What protections do we have to ensure that the data is safe, protected and beyond the reach of powerful individuals and institutions that may not care about their welfare?

As someone who teaches in a biomedical engineering department, I know that technology is exciting and fast-paced, and that it offers extraordinary promise. But without serious ethical consideration, care and deliberate efforts to protect the weakest members of society, that promise materialises only for a select few. Everyone else becomes a statistic of failure.

That path should be unacceptable to us all, not because of some inherent dislike of innovation, but because the idea of harming anyone in the name of progress should always be appalling.
