
Physician AI expert cautions clinicians and execs: Be wary of AI challenges


Dr. Ronald Rodriguez holds a unique title in healthcare. He’s professor of medical education and program director of the nation’s first MD/MS in Artificial Intelligence dual degree at The University of Texas Health Science Center at San Antonio. The five-year dual degree was launched in 2023.


Rodriguez, who also holds a doctorate in cellular biology, is at the forefront of AI’s transformation of healthcare. He is well aware of all the positive ways AI and automation are benefiting healthcare already. But he also sees some aspects of the technology that should give pause to clinicians and IT executives.


This is part one of a two-part interview with Rodriguez. Here he points out areas of AI in healthcare that require great care by professionals – including places where he believes professionals are getting it wrong. Part two, coming soon, will be in video format and discuss the doctor’s groundbreaking work in healthcare AI education.

Q. What are some clinicians potentially doing wrong today with generative AI tools, and what can hospital and health system CIOs and other IT and privacy leaders do to make sure generative AI is used correctly today?

A. They are not protecting protected health information effectively. Many of the commercial large language model servers take the prompts and data uploaded to their servers and use them for further training later. In many cases, providers are cutting and pasting aggregate clinical data and asking the large language model to reorganize, summarize and provide an assessment.

Unfortunately, a patient’s PHI often is contained in the lab reports, image reports or prior notes in ways that might not be readily apparent to the provider. Failure to eliminate the PHI is a tier 2 HIPAA violation, and each offense could potentially result in a separate fine. IT providers are able to tell when PHI is being cut and pasted and can warn users not to do it. Often this is already happening.

However, currently most of these systems are not enforcing compliance with these rules at the individual level. CIOs and technology leaders at hospitals and health systems can develop PHI removal tools that protect against these violations. Many of the LLM providers allow settings that prevent data sharing; however, enforcement of those settings is at the provider’s discretion and not ensured.
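To make the idea concrete, here is a minimal sketch of the kind of PHI-scrubbing gate a CIO’s team could place in front of any outbound LLM prompt. The patterns and the `scrub_phi`/`gated_prompt` helpers are illustrative assumptions, not any vendor’s tooling; a production system would pair rules like these with a clinical named-entity model and audit logging.

```python
import re

# Illustrative patterns only -- a real deployment would combine rules like
# these with a clinical NER model and human review before trusting the output.
PHI_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub_phi(text: str) -> tuple[str, list[str]]:
    """Replace likely PHI with placeholders; return cleaned text and hit types."""
    hits = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, hits

def gated_prompt(text: str) -> str:
    """Warn the user before any prompt leaves the organization's boundary."""
    cleaned, hits = scrub_phi(text)
    if hits:
        print(f"Warning: possible PHI removed before upload: {', '.join(hits)}")
    return cleaned

# Hypothetical note fragment for demonstration.
note = "Pt MRN: 00482917, seen 03/14/25. Labs reviewed; call 210-555-0134."
print(gated_prompt(note))
```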

Q. You say: “Our current business model of AI use is an ecosystem where each prompt generates a cost based on the number of tokens. This incremental cost currently is modeled such that it is more likely to actually increase healthcare costs than reduce them.” Please explain what you mean by using a clear example that shows how costs go up.


A. Let’s take DAX and Abridge, which are systems that take a recording of the patient-provider interaction, transcribe the interaction and summarize it for use in a note. The cost of these systems is based on actual usage.

The systems make life much easier for physicians, but there is no way to bill the patient for these extra costs through third-party payers. Instead, the only current option to cover these incremental costs is for providers to see more patients. Seeing more patients means third-party payers will see more claims, which ultimately will be reflected in higher premiums, lower benefits or both.

Other systems that use LLMs to automate answering patient questions may provide immediate feedback to patients with simple questions, but they, too, come at an incremental cost. Those costs currently also are not billable, and hence the result is pressure to see more patients.

Let’s consider a hospital system implementing one of these generative AI tools to assist physicians with clinical documentation. A single physician might interact with the AI engine multiple times per patient visit.

Now, multiply this across hundreds or thousands of physicians within a health system working across multiple shifts, and the cumulative cost of AI usage quickly skyrockets. Even if AI improves documentation efficiency, the operational expense of frequent AI queries may offset or even exceed the savings from reduced administrative work.

So far, AI usage models are pay-per-use and are not like traditional software with fixed licensing fees. So, the more an organization integrates AI into daily workflows, the higher the financial burden becomes.
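A back-of-the-envelope calculation shows how quickly those per-use charges compound. Every figure below is an assumed placeholder rather than a quoted vendor rate, but the shape of the math is the point: the cost per visit looks trivial until it is multiplied across a workforce.

```python
# Back-of-the-envelope annual cost of pay-per-use generative AI documentation.
# All numbers are illustrative assumptions, not vendor pricing.

price_per_1k_tokens = 0.01      # assumed blended input/output rate, USD
tokens_per_interaction = 3_000  # transcript chunk plus generated note
interactions_per_visit = 4      # drafts, edits, summaries
visits_per_day = 20
working_days = 250
physicians = 1_000

cost_per_visit = (tokens_per_interaction / 1_000) * price_per_1k_tokens * interactions_per_visit
annual_cost = cost_per_visit * visits_per_day * working_days * physicians

print(f"Cost per visit: ${cost_per_visit:.2f}")   # $0.12
print(f"Annual cost:    ${annual_cost:,.0f}")     # $600,000
# Twelve cents per visit seems negligible; across 1,000 physicians it is
# roughly $600,000 per year, and it rises linearly with every new AI
# touchpoint added to the workflow.
```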

Unless hospitals and healthcare providers negotiate cost-effective pricing structures, implement usage controls or develop in-house AI systems, they may find themselves in a situation where AI adoption leads to escalating operational costs rather than the anticipated savings.


Q. You told me: “Safeguards need to be put in place before we will ever realize a true improvement in our overall medical errors. Over-reliance on AI to correct mistakes could potentially result in different types of errors.” Please elaborate on the problem, and please discuss the needed safeguards, in your opinion.

A. LLMs are prone to hallucinations under certain situations. While some providers are very good at avoiding those situations – we actually teach our students how to avoid such situations – many are not aware. A new source of medical errors can be introduced if these errors are not caught. One way to safeguard against this is to use agentic specialty-specific AI LLMs.

These systems perform double checks on the information, confirm its veracity and use sophisticated methods to minimize errors. However, such systems are not built into off-the-shelf LLMs like ChatGPT or Claude. They will cost more to use, and they will require a larger investment in infrastructure.
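As a rough illustration of the double-check pattern Rodriguez describes, the sketch below has one model draft an answer and a second pass audit it against the source record before anything is returned. The model names and the `call_llm` helper are placeholders for whatever locally approved, specialty-tuned endpoints an institution deploys; this is not built-in ChatGPT or Claude behavior.

```python
# Sketch of an agentic double-check loop: one model drafts, a second pass
# audits the draft against the source material before anything is returned.
# call_llm() is a placeholder, not a real vendor API.

def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("wire up to your locally approved LLM endpoint")

def answer_with_verification(question: str, chart_excerpt: str,
                             max_retries: int = 2) -> str:
    # First pass: a specialty-tuned model drafts an answer from the chart.
    draft = call_llm("specialty-draft-model",
                     f"Context:\n{chart_excerpt}\n\nQuestion: {question}")
    for _ in range(max_retries):
        # Second pass: an independent verifier checks every claim in the
        # draft against the source record.
        audit = call_llm(
            "specialty-verifier-model",
            "List any claim in the ANSWER not supported by the CONTEXT. "
            f"Reply 'OK' if fully supported.\n\nCONTEXT:\n{chart_excerpt}"
            f"\n\nANSWER:\n{draft}")
        if audit.strip() == "OK":
            return draft  # verified against the source record
        # Revise the draft using the verifier's objections, then re-check.
        draft = call_llm("specialty-draft-model",
                         f"Revise the answer to fix these issues:\n{audit}"
                         f"\n\nOriginal answer:\n{draft}")
    return "Escalate to clinician review"  # never return an unverified draft
```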

Investment will be required in infrastructure that protects privacy, prevents unintended sharing of PHI, and guards against the predictable LLM errors and misconceptions rampant in the internet data scraped for pretraining of the foundational LLMs. Policies to enforce compliance will also be necessary.

Q. How should hospitals and health systems go about developing proper ethical policies, guidelines and oversight?

A. As AI technologies rapidly advance, major medical organizations need to provide guidance documents and boilerplate policies that can help institutions adopt best practices. This can be accomplished at several levels.

Participation in oversight organizations and medical groups like the AMA, AAMC and governmental oversight committees can help solidify a common framework for ethical AI data access and use policies.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.


