The promise of artificial intelligence in healthcare is enormous – with algorithms able to find answers to big questions in big data, and automation helping clinicians in so many other ways.
On the other hand, there are “examples after examples,” according to the HHS Office for Civil Rights, of AI and machine learning models trained on bad or biased data, resulting in discrimination that can make them ineffective or even unsafe for patients.
The federal government and health IT industry are both motivated to solve AI’s bias problem and prove it can be safe to use. But can they “get it right”?
That’s the question moderator Dan Gorenstein, host of the podcast Tradeoffs, asked Friday morning at the Office of the National Coordinator for Health IT’s annual meeting. Answering it, he said, was imperative.
Though rooting out racial bias in algorithms is still uncertain territory, the government is rolling out action after action on AI – from pledges of ethics in healthcare AI orchestrated by the White House to a series of regulatory requirements, such as ONC’s new algorithm transparency rules.
Federal agencies are also actively participating in industry coalitions and forming task forces to study the use of analytics, clinical decision support and machine learning across the healthcare space.
FDA drives the ‘rules of the road’
It takes a lot of time and money to demonstrate performance across multiple subgroups and get an AI product through the Food and Drug Administration, which can frustrate developers.
But just as every financial company must go through highly controlled banking certification processes, the government and the healthcare industry must develop a similar approach to artificial intelligence, said Troy Tazbaz, director of digital health at the FDA.
“The government cannot regulate this alone because it is moving at a pace that requires a very, very clear engagement between the public/private sector,” he said.
Tazbaz said the government and industry are working to agree on a set of objectives, like AI security controls and product life cycle management.
When asked what the FDA could do better in getting products out, Suchi Saria – founder, CEO and chief scientific officer of Bayesian Health, and founding director of research and technical strategy at the Malone Center for Engineering in Healthcare at Johns Hopkins University – said she appreciates rigorous validation processes because they make AI products better.
However, she wants to shrink the FDA approval timeline to two to three months, which she believes can be done without compromising quality.
Tazbaz acknowledged that while there are procedural improvements that could be made – “initial third-party auditors are one possible consideration” – it’s not really possible to define a timeline.
“There is no one size fits all process,” he said.
Tazbaz added that while FDA is optimistic and excited about how AI can solve so many challenges in healthcare, the risks associated with AI product integration into a hospital are far too great not to be as pragmatic as possible.
Algorithms are subject to data drift, so when the production environment is a health system, discipline must be maintained.
“If you are designing something based on the criticality of the industry that you are developing for, your processes, your development discipline, has to match that criticality,” he said.
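To make the monitoring side of that discipline concrete, one common practice is to compare a deployed model’s live input distributions against its training baseline and flag drift before it degrades performance. The sketch below is a minimal illustration in Python using a population stability index; the feature, data and 0.2 threshold are hypothetical examples chosen for illustration, not anything described by the FDA or the panelists.

```python
import numpy as np

def population_stability_index(baseline, production, bins=10):
    """Compare a feature's production distribution to its training baseline.

    Returns a PSI score; values above roughly 0.2 are often treated as
    meaningful drift, though the threshold is a judgment call.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Avoid log(0) and division by zero for empty bins
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Hypothetical example: a lab value as seen at training time vs. in production
baseline_creatinine = np.random.default_rng(0).normal(1.0, 0.2, 5_000)
production_creatinine = np.random.default_rng(1).normal(1.3, 0.3, 5_000)

psi = population_stability_index(baseline_creatinine, production_creatinine)
if psi > 0.2:
    print(f"Possible data drift (PSI={psi:.2f}); review before trusting model output.")
```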
Tazbaz said the government and the industry must align around the biggest needs – where technology can be used to solve problems – and “drive the discipline” from there.
“We have to be open and honest about where we start,” he said.
When the operational discipline is there, “then you are able to prioritize where you want this technology to be integrated and in what order,” he explained.
Saria noted that the AI blueprint created by the Coalition for Health AI has been followed by work to build assurance labs that can accelerate the delivery of more products into the real world.
Knowing ‘the full context’
Ricky Sahu, founder of GenHealth.ai and 1up.health, asked Tazbaz and Saria for their thoughts on how to be prescriptive about when an AI model has bias and when it’s solving a problem specific to a particular ethnicity.
“Teasing apart racial bias from the underlying demographics and predispositions of different races and people is actually very difficult,” he said.
What needs to happen is “integrating a lot of know-how and context that’s well beyond the data” – medical knowledge about a patient population, best practices, the standard of care and so on, Saria responded.
“And this is another reason why when we build solutions it needs to be close to any monitoring, any tuning, any of this reasoning really has to be close to the solution,” she said.
“We have to know the full context to be able to reason about it.”
Statisticians translating for docs
With 31 source attributes, ONC aims to capture categories of AI in a product label’s breakdown – despite the lack of consensus in the industry on the best way of representing those categories.
The functionality of an AI nutrition label “has to be such that the customer, let’s say the provider organization, the customer of Oracle could fill that out,” explained National Coordinator for Health IT Micky Tripathi.
With the labels, ONC is not recommending whether or not an organization should use the AI, he said.
“We’re saying give that information to the provider organization and let them decide,” said Tripathi, noting the information has to be available to the governing board but is not required to be available to the frontline user.
“We start with a functional approach to a certification, and then as the industry starts to wrap their arms around the more standardized way of doing it, then we turn that into a specific technical standard.”
Oracle, for instance, is putting together an AI “nutrition label” and looking at how to display fairness as part of that ONC certification development.
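To illustrate the idea, an AI “nutrition label” can be thought of as a structured record the developer fills out and a provider organization reviews before deployment. The Python sketch below is a hypothetical, simplified stand-in; the field names are invented for illustration and are not ONC’s actual 31 source attributes or Oracle’s design.

```python
from dataclasses import dataclass, field

@dataclass
class ModelNutritionLabel:
    """Hypothetical, simplified stand-in for an AI 'nutrition label'.

    ONC's certification criteria define the real source attributes; these
    fields only illustrate the kind of information a provider organization
    might review before adopting a model.
    """
    name: str
    intended_use: str
    training_data_description: str
    excluded_populations: list[str] = field(default_factory=list)
    validation_sites: list[str] = field(default_factory=list)
    subgroup_performance: dict[str, float] = field(default_factory=dict)  # e.g., AUROC by subgroup
    known_limitations: str = ""

# Example instance with made-up values
label = ModelNutritionLabel(
    name="Sepsis risk model (example)",
    intended_use="Early warning for adult inpatients",
    training_data_description="Retrospective EHR data from three academic medical centers",
    excluded_populations=["pediatric patients"],
    validation_sites=["Site A", "Site B"],
    subgroup_performance={"overall": 0.83, "age_65_plus": 0.79},
    known_limitations="Not validated in rural or critical-access hospitals",
)
print(label.subgroup_performance)
```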
Working in partnership with industry, ONC can come to a consensus that moves the AI industry forward.
“The best standards are ones that come from the bottom-up,” Tripathi said.
Gorenstein asked Dr. James Ellzy, vice president federal, health executive and market lead at Oracle Health, what doctors want from the nutrition label.
“Something I can digest in seconds,” he said.
Ellzy explained that with so little time with patients for discussion and a physical exam, “there may only be five minutes left to figure out what we should do going forward.”
“I don’t have time to figure out and read a long narrative on this population. I need you to truly tell me, based on you seeing what patient I have, and based on that, a probability of 97% this applies to your patient and here’s what you should do,” he said.
A reckoning for healthcare AI?
The COVID-19 pandemic shined a spotlight on a crisis in the standard of care, said Jenny Ma, senior advisor in the HHS Office for Civil Rights.
“We saw, particularly with age discrimination and disability discrimination, an incredible uptick where very scarce resources were being allocated unfairly, in a discriminatory manner,” she said.
“It was a very startling experience to see first-hand how poorly equipped not only Duke was but many health systems in the country to meet low-income marginalized populations,” added Dr. Mark Sendak of the Duke Institute for Health Innovation.
OCR, while a law enforcement agency, did not take punitive actions during the public health emergency, Ma noted.
“We worked with states to figure out how to develop fair policies that would not discriminate and then issued guidance accordingly,” she said.
However, at OCR, “we see all sorts of discrimination that is occurring within the AI space and elsewhere,” she said.
Ma noted that Section 1557, the Affordable Care Act’s non-discrimination statute, is not intended to be set in stone; rather, it is intended to allow additional regulations to be created as needed to address discrimination.
OCR has received 50,000 comments on the proposed Section 1557 revisions, which are still being reviewed, she noted.
Sendak said that enforcement of non-discrimination in AI is reasonable.
“I actually am very pleased that this is happening, and that there is this enforcement,” he said.
As part of Duke’s Health AI Partnership, Sendak said he personally conducted most of the 90 health system interviews.
“I asked people, ‘How do you assess bias or inequity?’ And everyone’s answer was different,” he said.
When bias is uncovered in an algorithm, it “forces a very uncomfortable internal dialogue with health system leaders to recognize what is in the data, and the reason it’s in the data, is because it occurred in practice,” he said.
“In many ways, contending with these questions is forcing a reckoning that I think has implications beyond AI.”
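One common, if partial, starting point for the kind of assessment Sendak describes is to compare a model’s error rates across demographic subgroups on a labeled validation set. The sketch below assumes hypothetical column names and made-up data; as the panelists stress, any gaps it surfaces are prompts for clinically informed investigation, not a verdict on bias.

```python
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame, group_col: str,
                         label_col: str = "outcome",
                         pred_col: str = "prediction") -> pd.DataFrame:
    """Report false-negative and false-positive rates per subgroup.

    Large gaps between subgroups are a signal to investigate further;
    interpreting them requires clinical context that isn't in the data.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        positives = sub[sub[label_col] == 1]
        negatives = sub[sub[label_col] == 0]
        fnr = (positives[pred_col] == 0).mean() if len(positives) else float("nan")
        fpr = (negatives[pred_col] == 1).mean() if len(negatives) else float("nan")
        rows.append({"group": group, "n": len(sub), "fnr": fnr, "fpr": fpr})
    return pd.DataFrame(rows)

# Hypothetical validation data with invented column names
data = pd.DataFrame({
    "race": ["A", "A", "B", "B", "B", "A"],
    "outcome": [1, 0, 1, 1, 0, 0],
    "prediction": [1, 0, 0, 1, 1, 0],
})
print(subgroup_error_rates(data, group_col="race"))
```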
If the FDA looks at the developers’ AI “ingredients” and ONC “makes that ingredient list available to hospital settings and providers, what OCR is trying to do is say, ‘Hey, when you grab that product from the shelf, and you look at that list, you also are an active participant,'” said Ma.
Sendak said one of his biggest concerns is the need for technical assistance, noting that several organizations with fewer resources had to pull out of the Health AI Partnership because they couldn’t make time for interviews or participate in workshops.
“Like it or not, the health systems that are going to have the hardest time evaluating the potential for bias or discrimination have the lowest resources,” he said.
“They’re the most likely to depend on external kinds of procurement for adoption of AI,” he added. “And they’re the most likely to end up on a landmine they’re not aware of.
“These regulations have to come with on-the-ground support for healthcare organizations,” said Sendak, to applause.
“There are single providers who might be using this technology not knowing what’s embedded in it and get caught with a complaint by their patients,” Ma acknowledged.
“We’re absolutely willing to work with those providers,” she said, but OCR will be looking to see whether providers train staff appropriately on bias in AI, take an active role in implementing AI, and establish and maintain audit mechanisms.
The AI partnership may look different in the next year or two, Ma said.
“I think there’s alignment across the ecosystem, as regulators and the regulated continue to define the way we avoid bias and discrimination,” she said.
Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.