AI wants access to your medical records — should you say yes?

AI is transforming drug development by scanning data, designing molecules, and repurposing existing meds

AI is rapidly changing diagnostics and treatment of diseases. PHOTO: www.shiftmed.com

Yesterday brought a surprising, yet somehow expected, moment in the tech world, with Anthropic announcing new healthcare features for its AI assistant, Claude. Users can now connect their medical records and health apps directly to the platform. Interestingly, this comes just days after OpenAI launched ChatGPT Health.

So what does this mean in simple terms? You can now upload or connect things like lab results, medical records, and even data from fitness or health apps. The AI can read this information and explain it in easy language, help you notice patterns over time, organise your health details, and even suggest what questions you might want to ask your doctor.

Think of it as a smart assistant helping you make sense of confusing medical paperwork, not a doctor replacing real medical advice. For now, these features are mainly available in the U.S. and are still being tested in beta.

AI’s impact on healthcare and its potential for Pakistan

Health is one of the fastest-growing areas for AI. Around the world, it is already helping doctors detect lung cancer earlier through image scans (with reported accuracy gains of up to 30%), predict disease risks, speed up drug discovery, automate hospital paperwork, and support remote patient monitoring. Chatbots are being used to triage symptoms, schedule appointments, and answer basic health questions.

An estimated 40 million people already ask chatbots questions every day about symptoms, medicines, diet, and wellness. Companies see this as a way to make healthcare information more accessible for people who struggle to understand medical language or face long waits to see a doctor.

AI is also quietly transforming how new medicines are developed. Instead of scientists spending years manually testing thousands of chemical compounds, AI can scan massive datasets to identify disease targets, design new drug molecules, test them virtually, and even suggest how existing medicines might be reused for new illnesses.

In some cases, processes that once took months or years can now happen in days, making drug development faster, cheaper, and more responsive during health emergencies.

In Pakistan, where hospital queues are long, consultations are rushed, and medical records are often scattered across files and WhatsApp images, tools like this could feel genuinely useful.

Many people already rely on platforms like Marham and Sehat Kahani to connect with doctors online. With responsible AI integration, these platforms could become even more powerful.

AI could help patients organise years of lab reports before a virtual consultation, translate complex medical terms into simple Urdu or Roman Urdu, flag missing tests, or remind patients to take medicines on time.

AI could also help doctors quickly review patient history instead of spending half the consultation reading screenshots and handwritten reports. These are not futuristic fantasies. They are small improvements that could reduce confusion, save time, and improve access—if implemented responsibly.

Can we really trust AI with our health data?

The companies behind these tools say safety and privacy are built into the system. Anthropic says health data shared with Claude is not used to train its AI models, and users can control what they share or disconnect access at any time. The company also says its systems meet healthcare privacy standards.

OpenAI, meanwhile, stresses that ChatGPT Health is not meant for diagnosis or treatment, but for understanding patterns and everyday health information. Both companies emphasise that AI should support people and doctors, not replace human judgement.

Despite these reassurances, doctors and digital safety experts remain cautious. AI can misunderstand medical data, miss context, or oversimplify complex health conditions. A wrong explanation could cause panic or delay proper treatment. Data security is another concern, especially when sensitive health information is stored online.

The real question is not only whether AI can handle our medical data, but whether we are ready to trust private tech companies with something so personal—especially in countries where digital rules and health systems are still fragile. The next time you upload a blood report to a chatbot, it might be worth pausing to ask what you are getting, and what you may be giving up.
