How a Familiar Face Can Lead to a Rare Disease Diagnosis

September 10, 2021

Many rare diseases cause distinctive changes to facial features that can provide insights for doctors searching for a diagnosis. Researchers at Children’s National Hospital have developed software that uses machine learning and images captured with a cellphone to quickly recognize disease patterns not immediately obvious to the human eye, helping physicians accelerate the diagnosis of genetic syndromes by recommending further investigation or referral to a specialist in seconds. We spoke to Marius George Linguraru, who led the Children’s National team that developed the digital biometric analysis software, about the diagnostic tool, how it works, and a deal with a newly formed company to commercialize the technology.


Daniel Levine: Marius, thanks for joining us. Welcome.

Marius Linguraru: Thank you for the invitation.

Daniel Levine: We’re going to talk about the diagnostic odyssey that rare disease patients face, technology developed at Children’s National Hospital that uses artificial intelligence and biometric analysis to diagnose patients with rare genetic diseases, and a licensing deal for a newly created company to commercialize this technology. Let’s start with the problem though. How much of a challenge is it today for patients with rare diseases to get a fast and accurate diagnosis?

Marius Linguraru: Well, that’s a very good question. And I think it’s also a question that has a lot of answers, because it really varies with where the patient is located. That comes with different opportunities and different healthcare systems. So for patients who have a direct connection to an elite medical system, it may happen faster and more accurately than for patients who are in areas with under-resourced systems; that may be a different type of conundrum. So I think it very much depends on the location of the patient and the resources that are available for a child, for a child’s family, to access specialized healthcare services.

Daniel Levine: You led a team of researchers at Children’s National to develop a means of using biometric data to diagnose patients. How does it work?

Marius Linguraru: We started working with our geneticists and, in a conversation between scientists like myself (I’m a computer scientist) and other colleagues who are bioengineers, we were trying to understand the process of diagnosing and identifying children with genetic conditions. What I like to say is that what we do in our line of work in science is distill the brain of the geneticists or clinicians who work with these children. In other words, we try to understand the logic behind the diagnosis and the process of managing patients with rare diseases, and then design computer code, either ourselves or with the help of the computer as machine learning allows, that reproduces, in a way, the mentality, the intuition, the talent, and the art of the clinician, and then apply it in everyday work.

Daniel Levine: You’re not alone in seeking to harness facial recognition and biometric measures using AI. I’m thinking here of FDNA and Face2Gene. Is there something fundamentally different about your approach that distinguishes it?

Marius Linguraru: Well, I think there probably are a lot of commonalities between their work and what we’re doing. If I may start from the other angle, to my knowledge, and this is based just on general knowledge about what Face2Gene and FDNA are doing, we both use artificial intelligence to look at faces. What I think may be different, and again this is just based on my general knowledge, is that our technology is focused on identifying dysmorphology in kids that we don’t yet know are at risk of a genetic condition. So the technology we developed at Children’s National Hospital, in the Sheikh Zayed Institute for Pediatric Surgical Innovation, is really focused on the first line of care, where a child may be seen by a general physician who is not trained in genetics and who is not a specialist, and identifying at that point whether [the child] needs to be referred for care. Therefore, this is the first step in putting a child on the right path for care and the workup that comes with that.

Daniel Levine: What is the system actually doing once it has an image of a face?

Marius Linguraru: Well, our system, and right now, just for clarity, I’m speaking about the technology we developed at Children’s National, because as you mentioned earlier this has been licensed and that may be a path to a product. Basically, the technology raises a flag if dysmorphology is present on the child’s face. The system is trained with dysmorphology that comes from genetic conditions; therefore, genetically based dysmorphology.

Daniel Levine: Is the system able to learn things and recognize commonalities between patients with the same condition that we might not even be aware of? Is it making that determination itself, or have you told it what to look for?

Marius Linguraru: That is exactly how we started this work, with two options in mind. One of them concerns the fact that when you look at patients with rare diseases, we know that there are very few and access to data is very limited. In some of the work that we did in the past, in a number of studies together with our colleagues from the National Institutes of Health, we looked at commonalities between the faces of children who have a condition and those who don’t, taking into consideration the diversity in the population. And by that, I mean age, sex, race, and ethnicity. At that point, we were doing studies in which we wanted to identify patterns that are quantifiable and precise in determining the presence or absence of a genetic condition. And that is something that I think is extremely useful for clinicians in different areas of the world who do not have access to technologies such as ours, who may find genetic tests to be overwhelmingly expensive, and who see children in different systems of healthcare, as I mentioned earlier, because these metrics provide educational resources that can help them identify conditions. On the other hand, as our work grew and access to data became more widely available, we started to allow the computer to learn more and more about patterns that may not have been obvious to our eye or to our intuition when we were computing these biometrics, therefore allowing the computer more and more independence in determining what constitutes a pattern on the face that may be associated with a genetic condition.

Daniel Levine: Once the system makes a determination what are the next steps? Would you do a confirmatory genetic test?

Marius Linguraru: Again, it’s a very complex question that may have multiple answers, because it depends on where the test would be performed. Often I am confronted with the question of what we can do with a child in Washington, DC, right where our hospital is based. And as you were saying, it may be a very straightforward path. There is a referral to a clinical geneticist. There may be a genetic test performed if the initial consultation determines that to be necessary. And then there is an entire system of care that evolves around the child, because there would be potential tests to look at cardiac function, pulmonary function, endocrine function, depending on what the needs and the path of care for the child’s condition might be. But if a child is surrounded by a healthcare system that does not have access to all these resources, then what do you do there? This is not what our technology does, so I just want to be clear that I am speaking anecdotally and not about what we developed at Children’s National, but about the need to put the child on the most adequate path of care that is available to the child at that moment in time and at that geographical location. That could be telehealth services; that could be cardiac ultrasound, for instance, as an immediate test done at the point of care. Very importantly, that would include education for parents and families who need to understand how to best take care of their child. So all of this has to be done not independently; a technology such as this has to be integrated into the community and health resources that are available to the child.

Daniel Levine: How might this speed up the process of getting a correct diagnosis?

Marius Linguraru: You are correct. A technology such as ours could be used for screening, and potentially newborn screening, because the earlier the diagnosis is made in conditions that have a genetic origin, the more beneficial it is for the child, because preventive care is key, as your audience, Danny, probably knows very well. As to how this works, I’m thinking particularly of a well-known example, that of Down syndrome. In the 1960s, children born with Down syndrome in the United States were expected to live up to 10 years on average. And that was because the path to diagnosis was slower, and therefore the start of preventive care was also delayed, which put a child’s life in danger. These days, children born with Down syndrome are expected to be adults with a fully functional life into their fifties, and even with a longer life expectancy. And this is, again, because preventive care is performed and because the system for identifying children with Down syndrome is much more evolved, from prenatal screening to newborn screening to different tests that are performed clinically.

Daniel Levine: How many diseases can the system recognize today and how large a number of diseases do you think it might ultimately be able to detect as it’s exposed to more images and more conditions?

Marius Linguraru: That’s a great question. And I will start with the latter part of it. That’s something that really excites me about artificial intelligence and methods of machine learning and deep learning, such as those used in our technology: the systems become smarter with the more information they learn. So, at least theoretically, the more it learns, the better it will identify conditions. There should not be a cap there; it can be more and more and more. The expectation of our technology is that there is some facial dysmorphology. So again, we identify dysmorphology; we do not pinpoint which condition it is. And I will give you an example of that. In a very recent publication in The Lancet Digital Health, we had data from children from 28 countries, covering 128 conditions. So this is just showing you, in a way, the tip of the iceberg: we looked at 128 conditions, and we were identifying them with an accuracy far superior to what we know clinicians who are not trained in genetics can achieve by just looking at the pediatric populations they see in their offices.

Daniel Levine: What role does age play in the ability to diagnose a patient? Is this something you can do as a newborn screen, or does the patient need to get to a certain age depending on the condition for the dysmorphia to be recognizable?

Marius Linguraru: Well, the younger the patient is when identified and put on the right path of care, the better for everyone, right? For the child, for the child’s family, for the healthcare system that will have to take care of the patient, and that will incur more and more costs the later the condition is discovered. So the aim, the holy grail of this type of technology, is to do it as early as possible. If I had a choice, I would pick the moment of birth, but the moment of birth may be a little too early, because the faces of babies may be puffy. There may be damage to the face right after birth, depending on the type of birth. A couple of days later, all these effects may be gone and the facial mapping, the biometrics, may become a lot clearer. I think your question is also whether we can identify dysmorphia that early, or do we have to wait until the condition appears on the face. In our studies, we looked at different age groups: infants, basically younger than two years; then two to five years, basically preschoolers; then school age, adolescents, and so on. And we did not find a significant difference in the way our technology performs on these different age groups. That said, we did not perform a study on newborns.

Daniel Levine: Is it known how accurate the system is today? Has anything been done to validate it?

Marius Linguraru: The evaluation of the system in the study I mentioned, which we just published, showed an accuracy of 88 percent on average. And that was across all ages, all ethnicities and races, and both sexes.

Daniel Levine: And has the system shown an ability to improve as it goes?

Marius Linguraru: That is something that is fundamental to artificial intelligence, right? It becomes better with more data, provided, of course, there is something to learn, which I think is absolutely the case with dysmorphology. But that is something we will see in the future when the system has been fed more data.

Daniel Levine: And ultimately, is this something if it’s commercialized, that would be regulated by FDA? And if so, do you know what the regulatory path would be?

Marius Linguraru: I will not speculate on the regulatory path for that, but I think it is fair to expect that the FDA will have to regulate this type of product.

Daniel Levine: It would seem that this is a relatively inexpensive diagnostic tool compared to something like whole genome sequencing. What are the implications of that from a global perspective?

Marius Linguraru: I think it’s accessibility and portability, and maybe accessibility plus portability equals equity. I’m thinking again about where the markets are going in the world. The healthcare market remains expensive and elitist in many parts of the world, but smartphone technology is readily available everywhere, and it’s still one of the fastest growing markets globally. So technology that can be used with smart devices at the point of care would have great potential for providing, basically, a specialist in your pocket to patients who otherwise would not have access to services that can improve their quality of life.

Daniel Levine: In July Children’s National Hospital announced that it had entered into a licensing agreement with MGeneRx. What is MGeneRx?

Marius Linguraru: MGeneRx is a startup company that approached our hospital with an interest to produce technology that has a positive social impact.

Daniel Levine: Well, what will they do now? What work needs to be done to turn this into a commercial product, and any guess as to how soon it might become available?

Marius Linguraru: I do not have that information, but I think for any startup, it is essential to create a product fast because startups depend on the success of their products and the speed to which they bring products to market.

Daniel Levine: Marius George Linguraru led the Children’s National team that developed the digital biometric analysis technology and is a principal investigator in the precision medical imaging laboratory at the Sheikh Zayed Institute for Pediatric Surgical Innovation. Marius, thanks for your time today.

Marius Linguraru: Thank you very much.


This transcript has been edited for clarity and readability.
