The Rise of AI in Healthcare: Weighing the Benefits and Risks
The integration of Artificial Intelligence (AI) into healthcare has sparked intense debate among medical professionals and experts. With over 230 million people globally seeking health and wellness advice from ChatGPT every week, according to OpenAI, the demand for AI-powered health guidance is undeniable. OpenAI’s recent launch of ChatGPT Health and its $60 million acquisition of healthcare tech startup Torch have further accelerated this trend, and Anthropic has joined the fray with Claude for Healthcare, marking a significant shift toward AI-driven health advice.
For a world grappling with healthcare inequities, from skyrocketing insurance costs in the US to care deserts in remote regions, democratized health information and advice seem like a positive development. However, the way large AI companies operate raises crucial questions that health tech experts are eager to address. Saurabh Gombar, a clinical instructor at Stanford Health Care and chief medical officer of Atropos Health, an AI clinical decision support platform, is concerned about the accuracy and reliability of AI-generated health advice.
Accuracy and Trust: The Primary Concerns
“What I am worried about as a clinician is that there is still a high level of hallucinations and erroneous information that sometimes makes it out of these general-purpose LLMs to the end user,” Gombar said. He emphasizes that while AI can provide helpful guidance, it is not a substitute for human medical expertise. Misleading or incomplete AI-generated advice can have serious consequences, particularly when patients rely on chatbots alone.
For instance, a doctor might recognize left shoulder pain as a non-traditional sign of a heart attack in certain patients, whereas a chatbot might simply suggest over-the-counter pain medication. And when a patient who has been misdiagnosed by a chatbot later consults a human doctor, the conflicting advice can erode trust in the medical profession. Google’s AI Overviews have already drawn criticism for surfacing inaccurate health information, and ChatGPT, Claude, and other chatbots have faced similar complaints over hallucinations and misinformation.
Data Privacy and Security: A Growing Concern
Data privacy and security are also pressing concerns. OpenAI and Anthropic claim that their health tools are secure and compliant with the Health Insurance Portability and Accountability Act (HIPAA) in the US. However, Alexander Tsiaras, founder and CEO of the AI-driven medical record platform StoryMD, argues that compliance is only part of the picture. “It’s not the protection from being hacked. It’s the protection of what they will do with the data after,” Tsiaras said.
Tsiaras points to the techno-optimism of Silicon Valley elites such as OpenAI CEO Sam Altman, and to a track record of prioritizing profit over data protection. Andrew Crawford, senior counsel for privacy and data at the Center for Democracy and Technology, likewise argues that AI companies must put data protection ahead of profit. Beyond privacy, there is also the risk that chatbots reinforce delusions and harmful thought patterns in people with mental illness.
As the healthcare landscape continues to evolve, it is essential to address these concerns and ensure that AI-powered health advice is accurate, reliable, and secure. Nasim Afsar, a physician and former chief health officer at Oracle, views ChatGPT Health as an early step towards intelligent health but acknowledges that it is far from a complete solution.

