AI-Powered Healthcare? A Reality Check on Health Data

Yes, ChatGPT passed the medical licensing exam. But the recent Health Datapalooza conference was dominated by one theme: America’s health data infrastructure remains much closer to “no intelligence” than “artificial intelligence.”

Whether the speakers came from government, health systems, insurers, or physician associations, the frustration was the same. As Dr. Lisa Simpson, president of conference sponsor AcademyHealth, put it, we need to know which data advances actually lead to better care, but we don’t.

Dr. Robert Califf, commissioner of the Food and Drug Administration (FDA), said that “it almost seems as though we created a system designed not to answer the question.”

Califf should know. As a practicing cardiologist decades ago, he developed computerized predictive analytics to improve patient care. After his first stint leading the FDA, he went to Silicon Valley to oversee strategy and policy at the Alphabet companies Verily Life Sciences and Google Health. He also has a long history of involvement in drug clinical trials and translational research.

The system is disaggregated and fragmented, Califf continued, with no organization around a common source of transparent, high-quality information.

That conclusion was echoed at a panel discussion on generative artificial intelligence (AI) that included physician executives from Google, which has its own Bard system, and from Microsoft, a major investor in the company that created ChatGPT. ChatGPT may have recently earned a passing score on the medical licensing exam, but Dr. Michael Y. Uohara, chief medical officer of Microsoft Federal, cautioned that it would be “many, many years down the line” before such tools could be used properly in a real patient record.

Dr. Jacqueline Shreibati, clinical lead at Google, listed the constraints. Because AI systems are trained on data, you need a framework to assess where the answers come from. Many data sets contain biases and fundamental flaws, she said. Just as important is whether these models say “I don’t know” when they are uncertain.

The issue, of course, is the well-known “garbage in, garbage out” problem. But the fragmentation problem is at least as urgent. Dr. Allison Arwady, commissioner of the Chicago Department of Public Health, for instance, lamented a pervasive lack of data consistency and organization that prevents the Illinois vaccine registry from freely exchanging information with the neighboring Indiana registry.

Meanwhile, Dr. Michelle Schreiber, who directs quality measurement and value-based incentives at the Centers for Medicare & Medicaid Services (CMS), said doctors are struggling with a confusing “cacophony of quality indicators which are comparable, but are actually one-offs.” We cannot even compare metrics between government programs.

Even when data are comparable, workarounds that take a lot of effort and money abound. Different versions of the same electronic health record often cannot communicate with one another. The Fast Healthcare Interoperability Resources (FHIR) interoperability standard has not always been backward compatible between iterations. And different organizations map their data to FHIR in different ways.
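To make the mapping problem concrete, here is a minimal, purely illustrative sketch (the organizations and patient are hypothetical) of how two senders might represent the same patient in a FHIR Patient resource, forcing the receiver to write a normalization shim:

```python
# Two hypothetical organizations encode the same patient differently,
# even though both nominally "use FHIR."
org_a_patient = {
    "resourceType": "Patient",
    "name": [{"family": "Rivera", "given": ["Ana"]}],
    "birthDate": "1980-07-14",  # conforms to FHIR's YYYY-MM-DD date format
}
org_b_patient = {
    "resourceType": "Patient",
    "name": [{"text": "Ana Rivera"}],  # free-text name instead of structured parts
    "birthDate": "07/14/1980",         # nonconforming local date format
}

def normalize_birth_date(patient: dict) -> str:
    """Coerce birthDate to FHIR's YYYY-MM-DD form. A receiving system
    ends up adding a shim like this for every nonconforming sender."""
    raw = patient.get("birthDate", "")
    if "/" in raw:  # assume the sender's local MM/DD/YYYY convention
        month, day, year = raw.split("/")
        return f"{year}-{month}-{day}"
    return raw
```

Multiply this small shim by every field, every sender, and every FHIR version, and the “effort and money” of the workarounds becomes clear.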

Privacy issues are a constant concern. Samir Jain, vice president of policy at the Center for Democracy & Technology, warned that “the entire ecosystem of information can be weaponized.”

Meanwhile, as the tragic Covid pandemic grimly illustrated, dangerously false misinformation and “malinformation” (intentionally misleading information) can spread through the public mind far faster than facts can catch up. The FDA is launching an anti-misinformation campaign, though I wonder what Califf plans to say to the House members who spread false information and whose party now controls his budget.

Notwithstanding all the difficulties, the good news from Datapalooza was that many of the obstacles to data systematization are being methodically addressed. CMS is collaborating with public and private entities to “align” measures, while the Office of the National Coordinator for Health Information Technology (ONC) works on interoperability with an alphabet soup of institutions, including the sizable Veterans Health Administration.

That brings us to TEFCA, which is not a new non-stick coating for frying pans. TEFCA, short for Trusted Exchange Framework and Common Agreement, is mandated by the bipartisan 21st Century Cures Act and is meant to enable secure clinical information sharing, driving the medical industry out of its silos. Despite the lengthy regulatory process ahead, ONC chief Micky Tripathi pledged his agency would succeed by harnessing “the combined energies of the data geeks and the patients.”

Both those groups were out in force at Datapalooza. Aneesh Chopra, a data geek and entrepreneur who was the nation’s first chief technology officer and is now president of CareJourney, advocated a “coalition of the willing” to establish a learning network that would improve FHIR-based information exchange. Dr. Eric Schneider, executive vice president of the National Committee for Quality Assurance (NCQA), discussed the creation of electronic standards to enable “the use of data we envisioned at the founding of NCQA 30 years ago.”

Additionally, Dr. Reed Tuckson, a well-known independent consultant with extensive experience in business, government, and the community, announced the launch of the Coalition for Trust in Health & Science. Made up of more than 50 healthcare organizations, its goal is to combat false and misleading information as soon as it appears.

Disinformation, mistrust, and misinformation kill people, Tuckson said.

Most encouraging were the numerous signs of a more patient-focused future. Chrissa McFarlane, founder of Patientory, which uses blockchain technology to give patients ownership of their own data when they take part in clinical trials, was there. So was Janice Tufte, a prominent patient advocate now serving on the executive committee of the Gravity Project, a group working to standardize social and health data for health equity. Grace Cordovano, a consultant and certified patient advocate, fervently described how technology could help people caring for loved ones avoid burnout. AcademyHealth funded more than 20 patients so they could attend the conference.

Michael Wilkening, a technology expert and former secretary of health and human services for the state of California, expressed optimism. “I see a lot of wonderful individuals out there trying to move forward and taking lessons from the past,” he said.

To be clear, there is remarkable entrepreneurial activity in both the public and private sectors focused on new and better uses of health data, and significant progress has already been made. Maneesh Juneja, a digital health futurist, sketched a scenario that now seems quite plausible: one day, a patient may simply ask an AI agent, “Tell me when my next relapse will be.”

It is right to be excited by transformational progress, but it is also important to be realistic about what lies ahead. Unless the interoperability, reliability, comparability, privacy, and trustworthiness of data are properly addressed, technology will not adequately solve entrenched problems of cost, access, and quality.

Califf, a veteran of both industry and government, summed up the challenge best. Technology is not the constraint, he said. “Our culture and the way we conduct ourselves is the limiting element.”
