
Keeping Clinical Trials Running Smoothly

July 22, 2022

Clinical trials can get derailed for a variety of reasons that may have nothing to do with whether a drug works. Lokavant has developed an artificial intelligence platform that tracks disparate sources of clinical trial data in real time and, through its predictive abilities, alerts companies to potential problems as they begin to emerge. The company says the system not only saves clinical trial sponsors time and money, but also improves the quality of outcomes. We spoke to Rohit Nambisan, CEO of Lokavant, about the company’s clinical trial data platform, how it works, and the role its system is playing in Ergomed’s Rare Disease Innovation Center.

 

Daniel Levine: Rohit, thanks for joining us.

Rohit Nambisan: Thanks, Danny, for having me on the show.

Daniel Levine: We’re going to talk about clinical trials and rare diseases, preventable issues that can cause trials to fail, and how Lokavant is working to address this with its clinical intelligence platform. I’d like to start with the types of data quality issues encountered in clinical trials. Broadly, walk me through the key issues that lead clinical trials to fail when that failure could otherwise be avoided.

Rohit Nambisan: Sure, happy to. Before I jump directly into answering your question, I should mention that over the last 12 to 15 years we’ve been seeing more trials and more complexity within trials, in both the number of endpoints and the complexity of the eligibility criteria. We see more vendors being leveraged to address these issues, so there are more point solutions and more discrete, outsourced vendors to manage. And frankly, we’re also seeing higher clinical operator burnout right now. So there are more complex studies, more of these studies, and more vendors being managed by fewer operators, fewer staff, and frankly, less experienced staff. Compound that with rare disease, where you also often have slightly less experienced investigators, simply by virtue of the fact that you’re looking for very, very niche patient populations, which may not always be found at well-established academic medical centers. So when you think about the number of different types of data being leveraged by all these more complex trials, you can see there could be a lot of issues in just ingesting and mapping that data in a manner that allows you to assess it reliably in flight, to understand where there could be non-compliance with how the protocol specifies the study should be administered. And frankly, that’s compounded by the fact that you have slightly less experienced investigators or clinical trialists running these studies. So we see a wide variety of data quality issues, from major and minor protocol deviations to monitoring visits out of spec, delays in site startup that can compound the data quality issues, and high rates of screen failure or discontinuation that may be real but may also just be a mapping issue or an actual underlying data recording issue. Frankly, there’s a large spectrum of quality issues we encounter on a day-to-day basis in a trial, and as those compound and aggregate over time, it gets very, very challenging to address them if you’re not seeing them manifest in real time.

Daniel Levine: You’ve alluded to the fact that we’re generating more and more data; we’ve gotten very good at generating it. I’m wondering to what extent things like the growing volume of data affect data quality and data management.

Rohit Nambisan: It’s a good question. It’s going to sound a bit like a broken record to say there’s more data, more rules, more procedures, and less experienced investigators, but that means trials need to be checked more often to make sure they’re being conducted in accordance with the protocol. Monitoring helps, but a lot of articles have come out recently showing that 100 percent SDV, source data verification, doesn’t really address the most critical quality issues, although it may be comforting to those who are used to that process. I think there are a couple of issues here. First, on the data management side: because we’re in a high-risk area of the market where you need to validate every source system and every piece of software you use in a clinical trial, data management is much more difficult simply because there are more vendors. There’s more siloing of data across multiple vendors, which can lead to missing or incorrectly mapped data when it’s aggregated for analysis. On top of that, data quality issues can be real or can actually be created by data management inconsistencies. You could see data come in, flag an issue, and then have to ask: was this actually a non-compliant event at the time the data was captured, or was it mapped to the wrong attribute, so that what I’m seeing is operator error in how the mapping was managed? To compound that, there’s the sheer number of novel sources we’re seeing across studies, rare and otherwise. You stated it, Danny, that there’s a lot more data coming in, but I don’t think that captures it precisely enough. There are also so many new types of data coming in, types that clinical operators may never have anticipated or seen before. So just the level of complexity in asking, is this how this data source should be coming in, and what should I map it to in my overall assessment of quality, makes all of this much more complicated to manage these days.

Daniel Levine: Listeners of this podcast will be familiar with some of the challenges that rare disease clinical trials face, but what makes rare disease trials more challenging? And how does that exacerbate the issues you’ve discussed?

Rohit Nambisan: Rare disease drug development is the fastest growing segment in therapeutics development; I think today it accounts for one third of drugs in development. Some factors particular to rare disease make these studies even more challenging. They’re very hard to plan and execute given the scarcity of relevant data: when you’re planning a study, you’re typically looking at comps, at similar therapeutic areas or similar indications, and by the very definition of rare, there’s just not a lot of that out there to let you plan your study effectively. As I mentioned before, there are also generally more inexperienced investigators, so you need to work with folks who are not as comfortable with, or as used to, administering clinical trials. And finally, there are limited participant populations, so each participant in a trial is that much more significant. You don’t want to see a number of discontinuation events, and you don’t want to see screen failure rates higher than average, because that might be indicative of an issue with training, or of screening challenges at a particular site or set of sites. So if you want to keep those participants in the study, and ensure you’re capturing as much participant volume and engagement as possible, you need to check: you need to address all those factors by looking at the data coming in and understanding what it means for how the conduct of the study is being administered.

Daniel Levine: Lokavant has developed what it calls a clinical intelligence platform. What is your platform doing? How does it work?

Rohit Nambisan: I think the simplest way to state it is that our mission is to make clinical trials smarter. In clinical research, we create intelligent hypotheses about whether a therapeutic is safe and efficacious for a particular patient population, we collect a lot of data, and then, close to the end of the study, we start looking at that data to understand whether it actually proves the treatment is safe and efficacious. That assumes the data is clean, well structured, and collected in a compliant manner, and many of those assumptions may not hold, especially as trial complexity continues to increase. So what we’ve done with Lokavant’s platform is create the capability to connect to, ingest, and map any clinical trial data source system. Those could be traditional source systems like the EDC, CTMS, IxRS, et cetera, but they can also be data lakes, warehouses, CSVs, or smartsheets; there’s a wide range of technological maturity in the clinical research space right now. We connect to that data, ingest it, and map it in a very reliable fashion, and once the mapping is in place, we automate the process thereafter. That allows real-time ingestion of updated data on a daily basis; right now we can actually do it up to six times a day. When we bring that data in, we normalize it using our unified data model, known as Lokavant’s canonical data model. Then we interrogate it to understand what is going on in the study in real time. With that data in hand, we can do descriptive analysis: what has happened in the last few hours or the last day, where are there non-compliant events, where is enrollment slowing, where are there site startup challenges in particular regions? But we can also do things that are more sophisticated. Over the last two and a half years we’ve aggregated about 2,000 studies’ worth of operational data from clinical studies, and we leverage that data for more sophisticated analysis, such as diagnostic analysis, why did an event occur, by comparing it against similar studies in our repository; predictive analysis, which specific issues or events are likely to happen that might cause problems for your study; and prescriptive analysis, what should I do if this incident occurs, based on how similar issues were addressed in similar studies.
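
To make the ingestion-and-mapping step concrete, here is a minimal sketch, in Python, of what normalizing records from heterogeneous source systems into a single unified model can look like. Lokavant’s actual canonical data model is proprietary; every field name and mapping below is a hypothetical stand-in.

```python
from dataclasses import dataclass
from datetime import date
from typing import Any

# Hypothetical unified record; the real canonical data model is
# certainly much richer than this sketch.
@dataclass
class CanonicalVisit:
    study_id: str
    site_id: str
    subject_id: str
    visit_date: date
    source_system: str  # e.g. "EDC", "CTMS", "CSV"

# Per-source field mappings: each source system names the same
# attributes differently, so ingestion maps them once, up front.
FIELD_MAPS: dict[str, dict[str, str]] = {
    "EDC":  {"study_id": "STUDYID", "site_id": "SITEID",
             "subject_id": "USUBJID", "visit_date": "SVSTDTC"},
    "CTMS": {"study_id": "protocol", "site_id": "site_number",
             "subject_id": "subject", "visit_date": "visit_dt"},
}

def to_canonical(raw: dict[str, Any], source: str) -> CanonicalVisit:
    """Map one raw record from a named source into the unified model."""
    m = FIELD_MAPS[source]
    return CanonicalVisit(
        study_id=str(raw[m["study_id"]]),
        site_id=str(raw[m["site_id"]]),
        subject_id=str(raw[m["subject_id"]]),
        visit_date=date.fromisoformat(str(raw[m["visit_date"]])),
        source_system=source,
    )

# Example usage with an invented EDC record:
raw_edc = {"STUDYID": "RD-001", "SITEID": "101",
           "USUBJID": "101-007", "SVSTDTC": "2022-07-01"}
visit = to_canonical(raw_edc, "EDC")
```

Once a mapping like this is registered for a study, refreshing the unified view on each ingestion cycle is a mechanical, automatable step, which is what enables the several-times-a-day updates described above.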

Daniel Levine: I’m wondering if you can expand on one part of that, the predictive analysis. It seems rather compelling that you may be able to alert a clinical trial team to a problem that could derail the study before it happens. Can you explain?

Rohit Nambisan: Yeah, that’s exactly right. In fact, we leverage our historical and concurrent data, our data repository, as well as incoming live study data to create such predictions. I’ll give you one example. We have an enrollment forecast model that predicts the odds of successfully completing enrollment within a given time window, depending on the number of sites selected, the location of those sites, when sites are activated, and a number of other features. How we do this is we take the particular study we’re deployed on and look for similar studies in our repository, across a variety of features such as the participant accrual target, the countries involved, the therapeutic area, the phase, et cetera. We pull those, what we call lookalike studies, and we create a prior distribution: what will an enrollment forecast look like based on the collective intelligence of those lookalike studies? Now, we’re talking about rare disease, and even as the market moves into niche, specialized indications across the board, I’m often faced with study teams that say, okay, it’s great that you created a forecast based on historical studies, but my study is not like any other study. It might be rare; it might be a highly specialized therapeutic. So how can you address my needs based on prior studies that may not be exactly like mine? Because we create that connectivity to the real-time data sources, we can pull the study’s own data in real time as well. At the start of a study, the data from the study we’re deployed on is not very robust; there are very few data points because it’s early in the study, so there’s not much confidence in a forecast generated from that study’s data alone. That’s where we use the historical and concurrent data, because it’s better powered. Each day the study progresses, we get more data from the study itself, and we become more confident in the prediction provided by the study data alone. So we take a weighted average between the historical forecast and the within-study forecast, and each day the study continues, we weight it more and more toward the within-study data. By the end of the study, the forecast is indexed almost completely on the within-study data. This way, we generate a signal that’s as powerful as possible from historical studies when we have very little study data, and as the study’s own data accrues and gains statistical power, we generate the forecast from that data in an automated fashion. Does that answer your question, Danny?
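
The weighting scheme Nambisan describes, a forecast that starts indexed on lookalike studies and shifts toward the live study’s own data as it accrues, can be sketched in a few lines of Python. This is an illustration only; the weight schedule and the n_saturation parameter are invented, and the actual model is certainly more sophisticated.

```python
def blended_enrollment_forecast(
    historical_forecast: float,    # estimate derived from lookalike studies
    within_study_forecast: float,  # estimate derived from this study's own accrual
    n_observed: int,               # data points accrued so far in this study
    n_saturation: int = 200,       # hypothetical: count at which the live
                                   # study's signal is treated as fully powered
) -> float:
    """Weighted average that shifts from the historical prior toward the
    within-study estimate as live data accumulates (weights illustrative)."""
    w = min(n_observed / n_saturation, 1.0)  # ~0 at study start, -> 1 over time
    return (1.0 - w) * historical_forecast + w * within_study_forecast
```

Early in the study, w is near zero and the lookalike prior dominates; by the end, w approaches one and the forecast is indexed almost entirely on the within-study data, matching the behavior described above.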

Daniel Levine: Yeah, absolutely. Who is the customer? Is it the trial sponsor, the clinical research organization, or investigators and their institutions?

Rohit Nambisan: At the outset, Lokavant was actually born within Roivant Sciences, which is effectively a collection of biopharma or biotech companies, and we were born out of need. We initially took a lead-user innovation approach: we were having challenges managing our studies effectively given the complexity of the studies, the complexity of the data sources, and the fact that we weren’t being served well by a number of the outsourced vendors we were working with. So we generated this technology to improve our own trials, to eat our own dog food, so to speak. The initial customer, at the outset, was on the biotech sponsor side. That being said, since we launched and externalized as our own startup company in 2020, which was a heck of a year to launch a startup, we’ve grown to include a variety of sponsors of different sizes, as well as contract research organizations, as customers. What we’ve noted in building this intelligence platform is that there’s a wide variety of applications we can enable for sites and investigators, and eventually participants as well. That stems from the fact that if we’re providing a CRO or a sponsor a view into where sites are non-compliant or deficient, for example, shouldn’t the site have that information? Shouldn’t they be armed with it as well? Because our platform has rules-based permissioning, we can enable certain components, views, and data to be accessed by those sites, so that all the stakeholders who make critical decisions affecting the quality of the study receive up-to-date information that’s specific to them and to what they can do to improve the study’s outcomes.
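
The rules-based permissioning Nambisan mentions can be thought of as a mapping from stakeholder roles to the views they may access. The sketch below is a deliberately simplified illustration; the role names and view names are hypothetical, and the real permissioning is presumably far more granular (per study, per site, per data element).

```python
# Hypothetical role-to-view rules.
VIEW_RULES: dict[str, set[str]] = {
    "sponsor": {"enrollment", "risk_scores", "site_compliance", "milestones"},
    "cro":     {"enrollment", "risk_scores", "site_compliance", "milestones"},
    "site":    {"enrollment", "site_compliance"},  # scoped to their own site
}

def can_view(role: str, view: str) -> bool:
    """Return True if the named role is permitted to access the named view."""
    return view in VIEW_RULES.get(role, set())

assert can_view("site", "site_compliance")
assert not can_view("site", "risk_scores")
```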

Daniel Levine: And what’s the business model? Are you selling a product, a subscription, is it software as a service?

Rohit Nambisan: Sure. It’s a subscription model. We have a platform that we license, and then there are particular applications we offer on top of that platform, licensed separately, such as risk monitoring, milestone tracking, an operational analytics cockpit, and medical monitoring. One thing we’ve noted is that we built the platform to be highly configurable. I talked about the data integration: we can ingest and map any data source in a clinical study. That’s one piece of configurability. We can also spin up any type of metric or analysis, because each study team measures itself against particular metrics to manage its study optimally, and we can visualize those metrics in a configurable manner. Whatever the charts, bar charts and so on, we built the visualization system from the ground up ourselves; we don’t use a Looker or a Tableau. Because of that, we’ve been able to capture multiple new use cases just in the last few months alone. For example, we’re building an eTMF intelligence solution for segments of the market. That came from a need identified after using our platform: this helps me a lot with upstream quality issues, but I also need to understand what’s going on downstream, whether my eTMF is structured, compliant, and complete, and has everything it needs to reduce the risk of notes to file, or the risk of a failed regulatory submission simply because the eTMF wasn’t compliant. So we’ve been able to spin up a number of new application modules that we price individually.
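
A configurable-metrics capability like the one described is often implemented as a registry that study teams can extend without platform code changes. The following sketch shows one common pattern; the metric name, record fields, and registry are all hypothetical, not Lokavant’s actual design.

```python
from typing import Callable

# Registry of study-team-defined metrics, keyed by name.
Metric = Callable[[list[dict]], float]
METRICS: dict[str, Metric] = {}

def register_metric(name: str) -> Callable[[Metric], Metric]:
    """Decorator that registers a metric so dashboards can look it up by name."""
    def wrap(fn: Metric) -> Metric:
        METRICS[name] = fn
        return fn
    return wrap

@register_metric("screen_failure_rate")
def screen_failure_rate(records: list[dict]) -> float:
    """Fraction of screened participants who failed screening."""
    screened = [r for r in records if r.get("screened")]
    failed = [r for r in screened if r.get("screen_failed")]
    return len(failed) / len(screened) if screened else 0.0
```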

Daniel Levine: How does your technology affect the time, cost, or quality of a clinical trial and the data it collects? Have you done any studies to determine that?

Rohit Nambisan: Yes, we have. I’ll give you a quick answer at the outset and then a couple of examples. We have, shall we say, holistic study risk models that look across a variety of factors, monitoring, protocol deviations, startup, data management, et cetera, and each of these individual models maps to a composite score across time, cost, and quality. So at any point in the trial, we can identify how a study is faring with regard to cost, time, and quality by assessing all the incoming data through our algorithms. A couple of examples of the value we’ve been able to show: on one study, we detected systemic protocol deviations, preventing patients from being lost to follow-up and preserving the statistical power of a primary endpoint. The impact was more than 12 prevented losses to follow-up and about three months saved on the enrollment timeline. In another case, we identified a set of site non-compliance issues at high-enrolling sites eight months earlier than traditional methods would have. We avoided the closure of a major site and the loss of all of that patient data, and you can imagine how impactful that might be in a rare disease study. We also eliminated the need to open new sites, which saved the study team a little over six months and about half a million dollars.
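
The roll-up from individual risk models into composite time, cost, and quality scores can be pictured as a weighted aggregation. Every model name and weight below is invented for illustration; only the overall shape, many models feeding three composite axes, comes from the description above.

```python
# Hypothetical contribution weights of each risk model toward the
# (time, cost, quality) axes; the real models and weights are unknown.
MODEL_WEIGHTS: dict[str, tuple[float, float, float]] = {
    "protocol_deviation": (0.2, 0.2, 0.6),
    "site_startup":       (0.6, 0.3, 0.1),
    "data_management":    (0.1, 0.2, 0.7),
}

def composite_scores(model_risks: dict[str, float]) -> dict[str, float]:
    """Roll per-model risk scores (0-1) up into time/cost/quality composites."""
    totals = {"time": 0.0, "cost": 0.0, "quality": 0.0}
    norms = {"time": 0.0, "cost": 0.0, "quality": 0.0}
    for model, risk in model_risks.items():
        for axis, w in zip(("time", "cost", "quality"), MODEL_WEIGHTS[model]):
            totals[axis] += w * risk
            norms[axis] += w
    return {a: totals[a] / norms[a] if norms[a] else 0.0 for a in totals}

# Example: elevated startup risk pushes the time composite up most.
print(composite_scores({"protocol_deviation": 0.2,
                        "site_startup": 0.9,
                        "data_management": 0.3}))
```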

Daniel Levine: Is there a specific case study you can offer with regard to rare disease?

Rohit Nambisan: Sure. In fact, the milestone tracking use case about site non-compliance I just mentioned was from a rare disease study, but I can expand on that with another example. We’re collaborating right now on a rare disease study that’s enrolling about 90 participants across 68 sites in 17 different countries. On this study, we’ve deployed our risk monitoring and milestone tracking applications, and in the next couple of months we’ll also be deploying a medical monitoring application, so study teams can understand any inconsistencies in the clinical data coming in, not just the operational data. These deployments have given the study team the opportunity to audit their CRO while also putting insights at their fingertips into what’s going on at a particular site or in a particular country, so they can better manage their study. In doing so, we’ve identified data entry issues in certain countries that created challenges for enrollment, preventing a full view of enrollment to date. We’ve also identified mapping issues in bringing CTMS data into a consolidated view, the data that would tell you a site was on board, activated, et cetera. The study team wasn’t getting the full picture, so we’ve empowered them with independent oversight, not only of their vendors but of their entire study.

Daniel Levine: We talked about some of the challenges around rare disease clinical trials. The one that stands at the forefront is the fact that you’re dealing with small patient populations, and coupled with that is the fact that they’re usually geographically dispersed. There’s been growing interest in the use of decentralized clinical trials, and COVID-19 helped accelerate this trend, but there’s been particular interest in rare diseases, where populations can be spread throughout the world and have difficulty traveling. How does your technology address some of the challenges of conducting a decentralized trial, and can it help ensure the consistency of data quality, which is a big concern with those trials?

Rohit Nambisan: Yeah, it’s a great question. We’ve seen huge adoption of decentralized and direct data capture methods in the last two and a half years, as you stated. One thing we’ve been able to hook into is the fact that the majority, in fact I’d say more than 95 percent, of decentralized studies are actually hybrid studies: they have site-based data capture components as well as decentralized data capture components. That’s fantastic for reducing participant burden, and to some degree site burden, not only during the pandemic but going forward as well. But most decentralized platforms out there are focused on the collection of decentralized data, which makes total sense. If more than 95 percent of studies are hybrid in some sense, though, you still have siloing between data captured by DCT systems and data captured by site-based systems. That presents a conundrum to clinical operators managing studies who are looking at metrics and trying to understand non-compliance across the DCT and site-based data components. Because Lokavant is agnostic to source, DCT is treated like just another source that gets brought in, just like site-based data capture. So we can connect and, let’s say, unify this somewhat fragmented ecosystem, and no slight intended on DCT; it’s fantastic what it’s done to reduce site and participant burden. We’ve done some studies with collaborators of ours, like Thread, showing that DCT methods actually preserved enrollment and a number of other important metrics during the pandemic, where traditional methods could not, but there’s still the challenge of unifying the data model across the DCT and non-DCT components of a study. We’ve addressed that with our Lokavant canonical data model and analytics that are agnostic to where the data is coming from.
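
Treating DCT capture as just another source means that, once records share the canonical model, comparing quality across capture modes reduces to a simple group-by. The sketch below is a hypothetical illustration continuing the earlier invented field names; it is not Lokavant’s actual analytics code.

```python
from collections import defaultdict

def compliance_by_capture_mode(records: list[dict]) -> dict[str, float]:
    """Fraction of compliant events per capture mode (e.g. "dct" vs "site")."""
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    for r in records:
        mode = r["capture_mode"]  # hypothetical field: "dct" or "site"
        counts[mode][1] += 1      # total events for this mode
        counts[mode][0] += int(bool(r.get("compliant")))
    return {mode: ok / total for mode, (ok, total) in counts.items()}
```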

Daniel Levine: In April, Lokavant joined with the clinical research organization Ergomed to launch a Rare Disease Innovation Center. What is the innovation center seeking to do?

Rohit Nambisan: The innovation center is very focused on moving the needle on how rare disease studies are planned and executed, and we’ve been very fortunate to have a great collaborator in Ergomed. We’re bringing our intelligence platform to bear on both the planning and the execution of studies. Initially, our collaboration focused specifically on planning: given the scarcity of data available to plan rare disease studies, how do we bring relevant data to bear, agnostic to its source, whether third party or within Lokavant’s own data asset; connect it together; and build the best benchmarks and the best information about HCPs, investigators, and sites that have experience in an indication and related indications, to better plan studies prior to execution. Now we’re transitioning to a place where we’re starting to help with some of the site-to-CRO interactions, to better support sites based on the data coming in, because, as we just discussed, every participant counts, and we want to ensure the site is doing everything in its power, and has everything it needs, to maintain compliance, maintain drug supply, and deliver an exemplary patient and participant experience. That’s another area we’re getting into to support the Rare Disease Innovation Center on study execution.

Daniel Levine: And are there plans to share learnings from the work being done at the innovation center with the broader rare disease community?

Rohit Nambisan: Yeah. In fact, we’ve already presented; I believe it was at SCOPE Europe that Zizi from Ergomed and I shared some of the initial learnings from the collaboration and what we’re doing for study planning. We look forward to opportunities to share more as the collaboration matures, especially on the study execution side, and we’re definitely looking to solicit feedback from the rare disease community on ways we can improve our processes, our approach, and our offerings, to improve the participant experience, the investigator experience, and of course the CRO and sponsor experience, and ultimately the outcomes of rare disease studies.

Daniel Levine: Rohit Nambisan, CEO of Lokavant. Rohit, thanks so much for your time today.

Rohit Nambisan: Thanks for having me on the show.

This transcript has been edited for clarity and readability.

 

 
