Express Healthcare

Qure.ai is focussed on automating reads of X-ray, CT and MRI Scans


Can you give us an overview of Qure.ai?

Prashant Warier

Fractal is the larger entity and Qure.ai is a subsidiary of Fractal. Fractal is now a 17-year-old company with offices in around 13 locations around the world. We have more than 1,100 people and we work with Fortune 500 clients, providing analytics and AI services. Qure.ai, on the other hand, is focussed on automating the reads of X-ray, CT and MRI scans. The use of artificial intelligence is the only part that Fractal and Qure.ai have in common.

Can you throw some light on the evolution of radiology and how Qure.ai is leveraging the new advancements?

Starting around 2012, there was a revolution in how machines understand images, and concepts like deep learning and machine learning became widespread. In 2012, a professor at the University of Toronto built a machine learning model that works like a human brain: layers of neurons that learn to understand images just as we do. Earlier, if I had to train an algorithm to recognise a bottle, I had to give it a lot of details about the bottle, features such as the cap, the typical shape, the colour and so on. It was rule-based; the algorithm identified the bottle from the rules I had coded in. Today that is not the case: given millions of images with different labels, an algorithm will automatically learn to recognise a bottle. It functions in much the same way as the human brain. By 2015, for example, an algorithm could look at an image and automatically caption it, saying that there is a black and white dog jumping over a ball. That is a detailed annotation; it can extract a lot of meaning from the image. So we thought, if an algorithm can do that, why can't we start doing this for X-ray, CT and MRI medical images? Around October 2015, we started working on this particular problem.

The number of images in the world is on a very large scale. Kenya, for example, has around 200 radiologists for a population of about 54 million people. From 1999 to 2010, the number of images that radiologists had to go through increased seven-fold in developing and developed countries, while the number of radiologists grew by just two to three per cent. Secondly, because of the shortage of radiologists, there is a huge number of diagnostic errors.
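The shift described above, from hand-coded rules to features learned from labelled examples, can be illustrated with a toy sketch. The model below is a simple logistic regression trained on synthetic 4x4 "images"; real radiology systems are deep convolutional networks trained on millions of scans, and all data and parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(label):
    """Synthetic example: class 1 images are bright in the top half,
    class 0 in the bottom half. The model is never told this rule."""
    img = rng.normal(0.0, 0.1, (4, 4))
    if label == 1:
        img[:2, :] += 1.0
    else:
        img[2:, :] += 1.0
    return img.ravel()

# Labelled training set: the only supervision the model receives.
X = np.array([make_image(i % 2) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)

# Logistic regression trained by gradient descent on the log-loss.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient w.r.t. weights
    grad_b = np.mean(p - y)                  # gradient w.r.t. bias
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# The learned weights now encode the "top half vs bottom half"
# distinction without anyone having written that rule down.
test_images = np.array([make_image(1), make_image(0)])
preds = (1.0 / (1.0 + np.exp(-(test_images @ w + b))) > 0.5).astype(int)
print(preds)  # should print [1 0]
```

The point of the sketch is the contrast with the rule-based approach: nowhere does the code say "check the top half of the image"; that feature is recovered purely from the labelled examples.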

When you look at chest X-rays, for example, the error rate today is 20 to 23 per cent, i.e. around 20 per cent of diagnoses are missed or wrong. There is a huge opportunity to reduce diagnostic errors in reading X-ray, CT and MRI scans by using automation. Also, quantifying and measuring the volume of a tumour region is extremely time-consuming. We can automate this process to free up more of radiologists' time for diagnosing disease. In effect, we are training an algorithm to understand something that is complex and requires a lot of expertise.

When we speak of all the development that has happened in machine learning and AI over the last few years, organisations like Google, Facebook and Stanford University have created datasets with millions and billions of images to train on, with different categories assigned to different images. Getting access to a similarly sized dataset of radiology images was very hard initially, primarily because of privacy issues, but we cracked it.

Automation can help to generate a report, but how do you think it would be able to replace a doctor or a radiologist?

Radiology is just one part of the chain; the final confirmatory test is the microbiology test. The reading performed by a radiologist could be done by AI and a report generated automatically, but we can only speed up the process. We are not saying that we will replace the radiologist completely; rather, we could help the physician take the final decision. For example, when a trauma patient arrives at emergency care, a radiologist might not be available, but the physician can still take a decision based on the auto-generated report. We are not recommending treatment based on the auto-generated report at this point; that is risky. The report can act as extra evidence to identify the disease, giving the physician more confidence. The idea is to augment the radiologist and the physician instead of replacing them.

We have integrated with multiple devices that radiologists use, fitting into their workflow. Currently we are present in Mumbai, Delhi and Bangalore.

We know that human bias is present in reports, but what would happen if there is an error in a report generated by AI?

We are deploying AI solutions in two different ways. The first is a pre-report, where a report is generated immediately, before the radiologist has read the scan. This report is available to the radiologist as a preliminary report. But there is a risk of radiologists being biased by what they see in the report.

So we are also planning a post-report. To eliminate that bias, we ask the physician to make a report first, which is then checked against the AI-generated report. It is difficult to say which approach is better: the second method eliminates bias, but the first is more productive.

When we look at most practices, 80 per cent of cases are normal. If I as a radiologist have to look at 100 cases in a day, some will be normal and some abnormal. But if the algorithm helps determine that 80 cases are normal and 20 are abnormal, the radiologist can start with the abnormal cases first and then look at the other 80. It helps the radiologist prioritise and increases productivity. We are still at the stage where we are figuring out which method is better. This is completely new; hardly any companies are doing it. So we have an opportunity to build a market and see how people react to it.
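The triage idea above amounts to reordering the reading worklist by a model's abnormality score so likely-abnormal cases are read first. A minimal sketch follows; the study IDs, field names and scores are all invented for illustration, not Qure.ai's actual data model.

```python
# Each study carries a hypothetical model-assigned abnormality score in [0, 1].
studies = [
    {"id": "CXR-001", "abnormality_score": 0.05},
    {"id": "CXR-002", "abnormality_score": 0.91},
    {"id": "CXR-003", "abnormality_score": 0.12},
    {"id": "CXR-004", "abnormality_score": 0.78},
]

# Reorder the worklist: highest-scoring (most likely abnormal) studies first,
# so the radiologist reads the suspected 20 per cent before the normal 80.
worklist = sorted(studies, key=lambda s: s["abnormality_score"], reverse=True)
print([s["id"] for s in worklist])
# prints ['CXR-002', 'CXR-004', 'CXR-003', 'CXR-001']
```

The design choice here is that nothing is discarded: every study is still read, only the order changes, which matches the "augment, not replace" framing above.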

What are your plans on expansion?

We are located in Bangalore, Delhi and Mumbai, and have expanded to Nepal. We are also planning to expand to many other countries, such as Canada, Myanmar and Zimbabwe.

[email protected]
