
Q&A: Elevance Health on guardrails and scaling AI in healthcare

Ratnakar Lavu, chief digital information officer at Elevance Health, sat down with MobiHealthNews to discuss the insurer’s AI framework, including guardrails, bias testing and model validation from design through deployment.
By Jessica Hagen, Executive Editor
Ratnakar Lavu, chief digital information officer at Elevance Health

Photo courtesy of Elevance Health

LOS ANGELES – Ratnakar Lavu, chief digital information officer at Elevance Health, sat down with MobiHealthNews for an in-person interview to discuss the framework the health insurance company uses to validate and scale AI in healthcare.

MobiHealthNews: How do you validate AI models as safe at Elevance Health, especially agentic AI?

Ratnakar Lavu: First, we have a robust, responsible AI program within the company itself, and let me take you through it. When we start with AI solutions, whether we build them for our members, for our providers, or internally for our associates, we go through a governance process, and it starts from the very beginning – the inception of an idea.

So, when people are thinking about building an AI solution, we focus on transparency, responsible AI, guardrails, bias testing and other things. From a design standpoint, we start the process there. That's how our responsible AI process works.

Then, as we go into execution, we validate those responsible AI parameters, and when we go into production, we have people validate that as well. We have also built a platform with those components already embedded in it. So, as teams deploy a model to service a member or a provider, or within a workflow, the platform itself has the guardrails and bias testing embedded within it, so we have multiple sets of controls. And after a model is deployed, the teams go in and ensure there is no deviation from how we designed it to begin with.

MHN: Does Elevance pilot or implement technology that's not ready to be scaled?

Lavu: Actually, we have scaled quite a few things. We do pilot certain things, but we pilot them with the intent of scaling. I'll give you a couple of examples where we started off testing certain things and then scaled them.

So, within customer service, when a member calls, we aggregate a lot of information, because we didn't want agents looking at multiple systems while the member is waiting on the call. There's benefits information, claims information, prior auth information – a lot of information they have to correlate. We did that through AI and summarized it for the agents themselves so they can focus on servicing the member. Then we added a post-call wrap-up, so we can understand how they serviced the member and whether there are any improvements we need to make.

Now, when we did this, we piloted it with a few agents and a few members, then we understood what we had to fine-tune, and then we fine-tuned it and scaled it. Now we have about a million post-call wrap-ups every single day.

MHN: Is there anything you're nervous about when you think about AI being implemented into healthcare at this point?

Lavu: Well, I think there are a couple of things. We're really focused on three components that we believe will simplify things for our members, providers and our associates.

One is we want to bring personalized care journeys to life to service our members.

The second is, we want to simplify the interaction so that providers don't have to spend a lot of time on administrative work. They can focus on health outcomes for our members.

And then the third is, we want to be able to provide the right tools and capabilities for our associates to service the members and providers.

We also have the framework that I talked about, our guiding principles of responsible AI. As long as vendors and providers meet those guidelines and this framework, and we can test and prove out that they can scale, we continue to implement those solutions – but they have to follow our responsible AI framework.

MHN: Do you think that there is a point where technology can become too powerful, where it actually does more harm than it does good?

Lavu: My thing right now is that healthcare is so complex. We all have our own experiences with it. I see a tremendous amount of potential for technology to simplify this, because there are so many dots to connect. There is so much potential for AI to bring sets of information together and simplify things for our members, providers and associates, and those are the places where we're really focused.