Thursday, August 6, 2020

Top Ten COVID-19 Impacts on Healthcare Imaging and IT.

The onslaught of the COVID-19 virus has impacted many from an emotional and financial perspective and dramatically changed the way healthcare is being delivered. From a personal perspective, a few of my family members tested positive, some of my friends had their loved ones hospitalized, and I recently lost a good friend and colleague to the virus.
However, out of a “bad thing” “good things” usually happen, as there is a sense of urgency and focus to deliver healthcare faster and better while keeping social distance. We not only found out what worked in this environment, but also what did not work and where the gaps are that need to be filled to be ready for the COVID-19 aftermath and for potential future pandemics.

Here are my observations:

1.       When there is a need, there is a way to change policies – To quote Christopher Roth, Vice-Chair of Radiology at Duke, who said during one of the many excellent SIIM webinars, “this pandemic was as dramatic and life changing as the implementation of a new EMR, but with the difference that instead of taking 2-3 years, it had to be done in less than a month. Therefore there was no time for committee meetings, no time for training and planning, but instead practitioners had to learn and make changes as-you-go.”

New uses for modalities were invented. For example, instead of bringing a COVID patient to a radiology department to perform an exam, with the result that a cleanup crew then has to take half an hour to clean and disinfect the room for the next patient, it might be better to take a chest X-ray with a portable unit at the bedside in the ICU, ER or patient room. Federal guidelines for reimbursement of non-standard procedures, which under normal circumstances would not be reimbursed, were quickly changed and adapted.

2.       POCUS use has sky-rocketed – The emergence of hand-held ultrasound (Point Of Care Ultrasound, or POCUS) over the last 2 years could not have come at a better time. These systems are relatively affordable as the cost ranges between $2k and $6k, and as they connect to either a standard phone or dedicated phone-size screen or tablet, a healthcare practitioner can carry one in his or her pocket and make an assessment on the spot.

Uploading the images that a physician wants to keep as part of the electronic health record has been a challenge, one that has been addressed by the standards community in the form of an “Encounter Based Imaging” IHE profile. As a recent JACR study showed, its usage did not impact downstream ultrasound volumes, which is good news for those who feared that it would cannibalize the “standard” ultrasound procedures.

3.       Telemedicine has shown a massive increase – Telemedicine takes place in three modes: 1) Synchronous, where a patient talks in real time to a healthcare practitioner, 2) Asynchronous, where the communication takes place in the form of texts, emails, uploaded documents, etc., and 3) Telemonitoring, or Virtual Observation.

Telemonitoring includes not only monitoring a patient at home but also monitoring inpatients, such as in the ICU. The less a practitioner has to interact physically with an infected patient, the lower the risk of spreading the infection and the lower the need for PPE usage.
Estimates for the telemedicine business project a 7- to 10-fold increase over the next 5 years. For an individual practitioner the increase could be dramatic, from having virtually no telemedicine consults to converting more than 70% of their practice to remote consults. This increase became the ultimate test of the scalability of the platforms being used. It can only be expected that when the pandemic wanes, a certain percentage of those applications will be kept in place.

A positive effect has also been that tele-visits are now chargeable because of changed regulations. Let’s hope that some of these “emergency rules” by CMS will stay in place, as there is no reason for a patient to show up in a doctor’s office for simple things that can be dealt with remotely.

4.       The cyber security attack surface has been greatly enlarged – Many non-clinical healthcare workers have been working from home, clinical workers might be working from home as well, and last but not least, because of teleconsultations, patients are now also directly connected to providers. This is especially challenging for smaller providers who might not have the IT resources to deal with this.

5.       Patients have become users of an organization’s technical infrastructure – According to a survey, most telehealth consultations used commercial applications such as Zoom (23%), Facetime (17%) and Skype (9%), with dedicated telehealth platforms (34%) in the minority. One cannot assume that every patient is familiar with the functionality of these tools, and some of them are definitely more user-friendly than others. Who is the patient going to call if they cannot get into the teleconsult application? IT support had to ramp up significantly to support patients as well as remote employees.

6.       Telemedicine extended beyond COVID calls – The same survey showed that only 14% of visits were related to COVID symptoms. The other 86% of the calls ranged from urgent care to scheduled visits, behavioral health, chronic illness management (diabetes, cardiac, others…), and surgical follow ups. Again, the social distancing requirement showed that a significant percentage of routine visits can be done equally well remotely.

7.       Artificial Intelligence (AI) has proven not to be a panacea (yet) – As most AI algorithms are based on deep learning, they require a significant amount of training data, which, certainly in the beginning, was not readily available. It is getting better as many institutions make their data available to researchers. Many AI vendors were “reprogramming” their algorithms from existing applications, such as pneumonia detection, for COVID, which has proven not to work as well. In addition, it was and still is not clear which modality is best to diagnose COVID: is it a chest X-ray, a CT, an ultrasound or another modality? The advantage of imaging is that it is almost real time, or at least has a much faster turn-around time than waiting for a lab test result.

8.       Digital pathology is a major laggard – With teleconsults and teleradiology being widely available, it is definitely frustrating to see how challenging, if not impossible, it currently is to exchange a digitized pathology slide, especially in the US, due to a lack of regulatory approvals and interoperability. Some countries, notably the Netherlands, already have a nationwide digital pathology exchange set up for this. There is no reason why this kind of implementation could not be deployed in the US; as a matter of fact, this is the main topic of an upcoming seminar on the subject.

9.       How to get access to all of the records is still very challenging – Just from anecdotal experience, after one of my good friends had arranged for her scheduled in-person visit to be changed to a telehealth visit with a major institution for a second opinion, the physician did not have access to the most recent X-rays. The fact that my friend had the CD did not really help as there was no upload mechanism for them in the platform/portal they were using. Having all the information in a timely and complete manner is even more of a challenge with these telehealth consults.

10.   A major workflow redesign is needed – I was rather impressed with the new workflow when I had an in-person appointment with my specialist. I was instructed to text my arrival to the front-desk, upon which a nurse came to my car with a wireless tablet to confirm my identity, take my temperature, ask basic questions and when I “passed,” escorted me to the clinic straight into an exam room using a path that would limit any close encounters with other patients or practitioners. Similarly, hospitals now have a special dedicated entrance for suspected COVID cases. 


In conclusion, the pandemic has had a major impact on healthcare IT and accelerated some of the “dormant” applications to a degree that will very likely stay, most of it for the better. I recall the last visit of my spouse with the surgeon one week after she was discharged following a minor surgery, at which the surgeon took a quick look at her scar and determined in a matter of seconds that all was OK. There is no reason for that type of visit to be in person, as she could simply take a picture with her phone and email it, or point her phone at the incision during a synchronous telehealth session. Telehealth is in many cases more efficient, creates less of a burden for patients, and has the potential to lower costs as well. Let’s hope that many of these COVID impacts will remain, for the better.


Tuesday, June 23, 2020

How Workflow Bottlenecks are Choking the AI Deployment Tsunami.


The introduction of AI in medical imaging could not have come at a better time, with the COVID-19 pandemic, as AI applications for detection, diagnosis and acquisition support, especially when using telemedicine, have been shown to be invaluable in managing these patients both at healthcare institutions and at home. There are a couple of caveats, however, to using this new technology: first, regulatory constraints limit new AI algorithms because the FDA needs to catch up with approvals; second, as with any deep learning algorithm, AI for healthcare needs lots of data to train on, which is a limiting factor for COVID cases even though several hospitals are making their COVID patient data files publicly available. But despite these limitations, institutions are ready to deploy AI for this particular use case, together with other applications that have been identified and are being addressed by literally hundreds of companies developing these novel applications.

However, early implementations of AI have come across a major obstacle: how to fit it into the workflow. It has caused a true “traffic jam” of data to be routed to several algorithms, and of results from these AI applications, in the form of annotations, reports, markers, screen saves and other indications, to be routed to their destinations such as the EMR, PACS, reporting systems or viewers. This orchestration has to be synchronized with other information flows; for example, an AI result has to be available either before or at the time of the reporting of the imaging studies, and has to be available together with lab or other results, which might require delaying or queuing these other, non-AI information flows to be effective.

What is needed to manage this is an AI “conductor” that orchestrates the flow of images, results and reports between all the different parties such as modalities, reporting systems, the EMR, and obviously the AI applications, the latter of which could be on-premise or in the cloud. Note that the number of AI apps could eventually reach hundreds if you take into account that an algorithm might be modality specific (CT, MR, US, etc.) and specialized for different body parts and/or diseases. Scalability is a key requirement for this critical device, but there are many other required features as well.

A simple “DICOM router” will not be able to orchestrate this rather complex workflow. To assist users with identifying the required features, I created three levels of routers as shown in the figure.

Level 1 can do simple forwarding and multiplexing, provides queue management, and has a simple rules engine to determine what to send where.

The second level has additional features: it can perform “fuzzy routing,” i.e. routing based on fuzzy logic, prefetch information using proxies (i.e. querying multiple sources while giving a single return), do conversions of data and file formats, anonymize the data, and it is scalable.

The third level has all of the level 1 and 2 functionality and extends it to AI-specific routing: it can modify image headers and split studies, perform worklist proxies (i.e. query multiple worklists while appearing as a single thread), and has secure connectivity to meet “zero-trust” requirements. It supports not only “traditional” DICOM and HL7 but also webservices such as WADO and FHIR, and it supports IHE profiles. It can also perform static and dynamic routing, filter and normalize the data, and provide support for several different formats, including Structured Reports and annotations, to name a few. As a matter of fact, a fully featured AI conductor requires at least 25 distinctly different functions as described in detail in this white paper (link).
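To make the distinction concrete, here is a minimal sketch in Python of the kind of rules engine a Level 1 router uses. The DICOM attribute names are standard, but the rules and destination AE titles are hypothetical examples, not taken from any particular product; a Level 2 or 3 conductor would layer queuing, proxying and result synchronization on top of this.

```python
# Minimal sketch of a Level 1 routing rules engine (hypothetical rules).
# Each rule is a set of DICOM attribute conditions that must all match,
# paired with a destination AE title.

RULES = [
    ({"Modality": "CT", "BodyPartExamined": "CHEST"}, "AI_CHEST_CT"),
    ({"Modality": "CR"}, "AI_CHEST_XRAY"),
    ({}, "MAIN_PACS"),  # empty condition set = default route
]

def route(study: dict) -> str:
    """Return the destination AE title for a study's DICOM header fields."""
    for conditions, destination in RULES:
        if all(study.get(tag) == value for tag, value in conditions.items()):
            return destination
    return "MAIN_PACS"

print(route({"Modality": "CT", "BodyPartExamined": "CHEST"}))  # AI_CHEST_CT
print(route({"Modality": "MR"}))                               # MAIN_PACS
```

Note that rules are evaluated in order, so the default catch-all must come last; a real router would also have to handle queue retries and destinations that are temporarily down.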

In conclusion, there is a serious workflow issue deploying AI, but the good news is that there are solutions available, some in the public domain with limited features and some as commercial products. Make sure you know what you need before shopping around; the comprehensive white paper on this subject has a handy checklist you can use when you are shopping at your (virtual) HIMSS, SIIM or RSNA trade shows or when “Zooming” with your favorite vendor. You can download the white paper here.




Friday, April 17, 2020

Open Source PACS Solutions for LMIC Regions.

Students at PACS bootcamp in Tanzania
sponsored by RAD-AID

Using an open source PACS solution instead of a commercial PACS could be attractive in LMICs (Low and Middle Income Countries) as it provides a good start to gain experience with managing digital medical images at a relatively low entry cost. In this paper we’ll discuss the PACS features that can be offered by open source providers, implementation strategies, and lessons learned.

Why would someone want to use an open source PACS?

·         The most important reason is its lower cost, as it is free (kind of), i.e. there are no software and/or licensing fees. The exception is the operating system, which can be open source as well if one uses Linux or a variant, and, if applicable, other utilities such as a commercial database, but again, these can be open source products as well. There is a significant cost involved for the hardware, i.e. servers, PC’s, medical grade monitors for the radiologists, and the network infrastructure, i.e. cabling, routers and switches. The latter assumes that there is no reliable network in place, which is often the case in LMICs; therefore, a dedicated network is often a requirement.

·         Open source PACS allows an organization to find out what they need as they change from using hardcopy films to a digital environment with which they often have no experience and/or exposure. As many open source PACS systems have both a free and a commercial version, it is easy to migrate at a later date to the paid version, which provides upgrades and support, once the organization feels comfortable with the vendor.

·         This is not only applicable to LMIC regions; an open source PACS can also be used to address a missing feature in your current system. For example, it can be used as a DICOM router.
·         The open source PACS can function as a free back-up in case the commercial production PACS goes down as part of an unscheduled or scheduled downtime.

·         It can be used as a “test-PACS” for troubleshooting, diagnostics and training.

But the main reason is still the cost advantage. If an LMIC hospital has to choose between purchasing a used CT or MRI for, let’s say, $350k US, which could have a major impact on patient care as it might be the only one in a large region serving a big population, and investing in a PACS system, the choice is clear: they will first get the modality and then use maybe another $50k or so to buy the hardware (servers, PC’s and monitors), string cable to get a network in place, and install an open source PACS. One should also be aware that the argument of not having any vendor support for an open source PACS is grossly over-rated. I have seen some good dealers and support but also some very poor service engineers, so even if you were to use a commercial PACS, the chance of getting any decent support is often slim in the LMIC region.

Let’s now talk about the PACS architecture, as there is a difference between a “bare-bones” PACS (BB-PACS), a typical PACS (T-PACS) and a fully featured PACS (FF-PACS). This is important as in many cases you might only need a BB-PACS to meet the immediate needs of an LMIC hospital or clinic.

A T-PACS takes in images from different modalities, indexes them in a database, aka Image Manager, archives them in such a way that they can be returned to users, and provides a workflow manager to allow multiple radiology users to simultaneously access the studies using different worklist criteria. For example, the workflow manager would allow the studies to be accessed using different specialties (neuro, pediatrics) and/or body parts (extremities, breast, head) as a filter, while indicating if a study is being read by someone else, its priority, and whether it has been reported. The T-PACS also has a tight integration with its workstations, the PACS archive, and the database through the workflow manager, i.e. these workstations would typically be from the same vendor that provides the PACS archive and database.

The FF-PACS would be a T-PACS that also has reporting capability, preferably using Voice Recognition, and a Modality Worklist Provider that interfaces the digital modalities with an ordering system, allowing the technologist at the modality to pick from a list instead of having to re-enter the patient demographics and select the appropriate study.

A BB-PACS would be merely a PACS database and archive. It would not have a workflow manager, and one could use an open source workstation from another vendor. Almost all open source PACS systems are of the BB-PACS kind, which means that one has to select a preferred open source viewer to go with it as well.

How are these open source PACS systems implemented? In the developed world, it typically happens top-down, i.e. a hospital has a Radiology Information System (RIS) that places the orders, which in most institutions is being replaced by an ordering feature in the EMR. These orders are then converted from an HL7 into a DICOM worklist format by a worklist provider. The images that are acquired are sent to the PACS, and the radiologist uses a Voice Recognition System to create the reports.
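As an illustration of what that worklist provider does, the conversion can be pictured as a simple field translation from parsed HL7 order segments to DICOM worklist attributes. The field choices below are illustrative only; real mappings, defined in the IHE Scheduled Workflow profile, are far more extensive and interface-specific.

```python
# Illustrative sketch of the HL7 order -> DICOM Modality Worklist translation
# a worklist provider performs. Field choices are examples, not a normative
# mapping; see the IHE Scheduled Workflow profile for the real thing.

HL7_TO_DICOM = {
    "PID-3": "PatientID",
    "PID-5": "PatientName",
    "PID-7": "PatientBirthDate",
    "OBR-4": "RequestedProcedureDescription",
    "ORC-3": "AccessionNumber",  # the accession number source varies by interface
}

def order_to_worklist(hl7_fields: dict) -> dict:
    """Translate parsed HL7 ORM fields into DICOM worklist attributes."""
    return {dcm: hl7_fields[hl7]
            for hl7, dcm in HL7_TO_DICOM.items() if hl7 in hl7_fields}

worklist_entry = order_to_worklist(
    {"PID-3": "12345", "PID-5": "DOE^JANE", "ORC-3": "ACC001"})
# -> {'PatientID': '12345', 'PatientName': 'DOE^JANE', 'AccessionNumber': 'ACC001'}
```

The point is that the modality never sees HL7: it queries the worklist provider over DICOM, which is why this piece sits between the RIS/EMR and the modalities.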

In the LMIC regions, it typically starts bottom-up. The first step is converting the modalities from film to digital by replacing their film processors with CR reader technology or upgrading their x-ray systems to include a direct digital detector. They might get a CT and/or MRI that also prints studies on a film printer. They now have digital images that need to be viewed on a viewing station, archived and managed; therefore, a PACS is needed. That is when the vendors start pitching their commercial PACS products, usually an FF-PACS or T-PACS, which are typically unaffordable, hence the choice to implement an open source BB-PACS with a couple of open source viewing stations.

It is critical at this point to use a medical grade monitor for the radiologist to make a diagnosis, as commercial grade monitors are not calibrated to map each image pixel value into a greyscale value that can be distinguished by a user. These monitors do not need to have the high resolution (3MP or 5MP) that is commonly used in developed countries; a 2MP monitor will suffice, knowing that to see the full resolution the user will have to zoom in or pan the image. These 2MP monitors are at least three times less expensive than their high-resolution versions. The only disadvantage is that they require a little more time for the interpretation, as the user has to zoom to see the full spatial resolution.
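A quick back-of-envelope calculation shows why the zoom/pan is needed on a 2MP display. The image dimensions below are typical values for a CR chest exam, not a specification, and the monitor resolutions are common examples.

```python
# Back-of-envelope: how much of a typical CR chest image fits on a monitor
# at 1:1 resolution. The image size is a typical value, not a spec.

image_px = 2048 * 2500       # typical CR chest image, ~5 megapixels
monitor_2mp = 1200 * 1600    # common 2 MP diagnostic display
monitor_5mp = 2048 * 2560    # 5 MP mammography-class display

print(image_px / monitor_2mp)  # ~2.7: must zoom/pan to see full resolution
print(image_px / monitor_5mp)  # just under 1: fits at (near) full resolution
```

In other words, on a 2MP monitor only roughly a third of the native pixels can be shown at once, which is where the extra interpretation time comes from.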

After having installed a BB-PACS and used it for a few years, the institution will have a better idea of what their specific requirements are for the PACS system and they can make a much better decision for what they want to do next. There are three options:
1.       Expand the current open source BB-PACS, e.g. upgrade the storage capacity, replace the server, implement a more robust back-up solution, and add a commercial workstation, a workflow manager, a Modality Worklist Provider and a reporting system. This assumes there is a mechanism to enter orders, i.e. through a RIS or EMR.
2.       Keep the BB-PACS and turn it into a Vendor Neutral Archive (VNA) and purchase a commercial T-PACS which serves as a front end to the radiologist. The new PACS might store images for 3-6 months and the “old” PACS will function as the permanent archive.
3.       Replace the BB-PACS with a commercial T-PACS or even an FF-PACS, assuming the funds are available and you can find a cost-effective solution.

Note that the advantage of options 1 and 2 is that you don’t need to migrate the images from the old to the new PACS, which can be a lengthy and potentially costly endeavor.
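The split in option 2 amounts to a simple age-based rule for which archive serves a study. Here is a sketch; the 6-month cutoff is the upper bound of the "3-6 months" range mentioned above, chosen arbitrarily for illustration.

```python
from datetime import date, timedelta

# Sketch of option 2: route retrieval requests by study age. The new T-PACS
# caches recent studies; the old BB-PACS, acting as a VNA, holds everything.

CACHE_WINDOW = timedelta(days=180)  # "3-6 months"; 6 months chosen here

def archive_for(study_date: date, today: date) -> str:
    """Return which system should serve a study of the given date."""
    return "NEW_TPACS" if today - study_date <= CACHE_WINDOW else "BB_PACS_VNA"

print(archive_for(date(2020, 3, 1), today=date(2020, 4, 17)))   # NEW_TPACS
print(archive_for(date(2018, 1, 15), today=date(2020, 4, 17)))  # BB_PACS_VNA
```

In practice the new PACS would prefetch relevant priors from the VNA ahead of a scheduled exam rather than fetch them on demand, but the age-based split is the core idea.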

What are some of the open source PACS systems? The most common options are Conquest, ClearCanvas server, Orthanc, DCM4CHEE and its variant Dicoogle. Conquest and ClearCanvas are Windows based, Orthanc can run on both Windows and Linux, and DCM4CHEE is Linux based. Conquest is the most popular for use as a router and for research, and the easiest to install (literally a few minutes). ClearCanvas is also relatively easy to install; DCM4CHEE is the most involved, but there is now a docker available that makes the process easier. DCM4CHEE is also the most scalable. For open source viewers, one can use the ClearCanvas viewer, which is the most popular, or a web-based viewer such as Oviyam with DCM4CHEE. RadiAnt is another option, and Osirix is the primary choice for a Mac. There are several other options for viewers; one can do a search and try them out, but be aware that they differ greatly with regard to functionality and robustness. Another consideration is continuing support. As an example, the gold standard for the open source viewer used to be E-film, but that company was acquired by a commercial vendor who stopped supporting the open source version, which is a problem given the frequent OS upgrades, especially when based on Windows.

What are some of the lessons learned with installing the open source PACS:
·         Be prepared to assign an in-house IT and/or clinical person who is computer literate to support the PACS. This person will be responsible for day-to-day support, back-ups, managing scheduled and unscheduled downtimes, and adding additional modalities and interfaces with a RIS, EMR or reporting system as they are introduced. This staff member will also be responsible for troubleshooting any issues that might occur, will be the go-to person for questions about usage, and will train incoming users. These so-called PACS administrators are a well-established profession in the developed world, but in the LMIC region it will initially be a challenge to justify a dedicated position to the department and hospital administration, as it is a new role.
·         How will these PACS administrators get their knowledge? There are fortunately many on-line resources, including on-line training, and organizations such as RAD-AID, which has been conducting PACS bootcamp training sessions in LMIC regions to educate these professionals.
·         PACS is a mission critical resource that has an impact on the infrastructure (power, network, HVAC, etc.). In most cases the existing network is not secure and reliable enough and/or does not have sufficient bandwidth, which requires a dedicated network with its own switches and routers.
·         It is preferred to use locally sourced hardware for the IT components to allow for a service contract and access to parts. The only problem you might have is getting medical grade monitors in some regions, as they are not widely available there yet.
·         Pay attention to the reading environment for diagnostics; I had to instruct people to switch off the lightboxes that were used to look at old films and even paint some outside windows to reduce the ambient light. Use medical grade monitors for diagnostic reading.
·         Use good IT practices that include implementing cyber security measures, reliable back-ups and OS patch management.
·         Create a set of Policies and Procedures for the PACS that include access control, who can import and export data on CD’s and how that is done, unscheduled and scheduled down-time procedures, and everything else needed to manage a relatively complex healthcare imaging and IT system.

In conclusion, open source PACS systems are a very viable, if not the only, option in LMIC regions due to cost constraints, especially for the first phase. One should be aware that these open source PACS systems are very much bare-bones solutions with limited functionality; however, they allow the user to get started and find out their specific requirements. If additional funds become available, one can upgrade later to enhance functionality, or purchase a commercial PACS which becomes either a “front-end” to the existing PACS or a replacement.


Thursday, March 19, 2020

Healthcare AI Regulatory Considerations.

Based on the information provided during the recent FDA-sponsored workshop, “The Evolving Role of Artificial Intelligence in Radiological Imaging,” here are the key US FDA regulatory considerations you should be aware of.

1. AI software applications are fundamentally different in that an AI algorithm is created and improved by feeding it data so it can learn, and eventually, if it implements Deep Learning, it can learn and improve autonomously based on new data. AI is a big business opportunity.

According to an analysis by Accenture, the market for AI applications for preliminary diagnosis and automated diagnosis is $8 billion. The same analysis points out that there will be a 20 percent unmet demand for clinicians in the US by 2026, which can be addressed by AI.


It became clear during the conference that the prediction made in November 2016 by Geoffrey Hinton that deep learning would put radiologists out of a job within 5 years was a gross miscalculation. No jobs have been lost as of today; by contrast, the number of studies to be reviewed is increasing, to almost 100 billion images per year to be read by approximately 34,000 radiologists, requiring more and more images to be read faster and more efficiently. The use of AI to eliminate “normal” cases, especially for screening exams such as breast cancer or TB in chest images, will be a big relief for radiologists.


2.       AI will not make radiologists obsolete but rather will change their focus, as the image by itself might become less important than the overall patient context. We spend a lot of time improving image quality by reducing image artifacts and increasing resolution so a physician can make a better diagnosis. However, as one of the speakers brought up, using autonomous AI could potentially eliminate the need for creating an image altogether, by basing the diagnosis directly on the information in the raw data. Why would we need an image? Remember, the image was created to optimally present information to a human, ideally matching our eye-brain detection and interpretation. If we apply the AI algorithm to the acquired data without worrying about the image, we could use it on CT raw data streaming straight from the detector, the signals directly from the MR high frequency coils, the ultrasound sound waves, the EKG electrical signals, or whatever information comes from any kind of detector. Images have served physicians very well for many years, but in some cases “medical imaging” will be implemented without the need to produce an image, and we might need to rename it “medical diagnosing” instead. I believe that a radiologist is first and foremost an MD, and thinking that they will be out of a job when there is less of an emphasis on the images seems misguided.

3.       AI algorithms are often focused on a single characteristic, which is a problem when using them in an autonomous mode, causing incidental findings to go unnoticed. There were two good examples given during the workshop. The first one was an ultrasound of the heart of a fetus, which looked perfectly normal; if one were to run an AI algorithm to look for defects, it would pass as being OK. However, in this particular case, as shown in the image, the heart was outside the chest, aka Ectopia Cordis, a rare condition, but one that, if present, should be diagnosed early to treat accordingly. The other example was autonomous AI detection of fractures. Fractures are very common in children, as I can attest personally, having many grandkids who are very active. One of the speakers mentioned that in some cases, when looking at the fracture, there are incidental findings of bone cancer, something that a “fracture algorithm” would not detect. So, maybe my previous hypothesis that an image might eventually become obsolete is not quite correct, unless we have an all-encompassing AI detection algorithm that can identify every potential finding.
The problem with creating an all-encompassing AI is that there are some very rare findings and diseases for which there is relatively little data available. It is easy to get access to tens of thousands of chest or breast images with lung or breast cancer from the public domain, for example from NCI; however, for rare cases there might not be enough data available to be statistically significant to train and validate an AI algorithm.

4.       There are still many legal questions and concerns about AI applications. As an analogy, the electric car company Tesla is being sued right now by the surviving family of a person who died after his car crashed into a highway median because the autopilot misread the lane lines. Many people die because they crash into medians due to human error; however, there is much less tolerance for errors made by machines than by humans. The question is who is accountable if an algorithm fails with subsequent patient harm or even death: the hospital, the responsible physician, or the vendor of the AI algorithm?

5.       A discussion about any new technology would not be complete without a discussion about standards. How is an algorithm integrated into an existing PACS viewer or medical device software, and how is the output of the AI encoded? IHE has just released two profiles that address AI results encoding and AI workflow integration, respectively. Implementors are encouraged to support these standards, and potential users are encouraged to request them in their RFP’s.

6.       There are three different US FDA regulatory approval and oversight classifications for medical devices and software:
      1.       Class 1: Low risk, such as an image router. This classification requires General Controls to be applied (Good Manufacturing practices, complaint handling, etc.)
      2.       Class 2: Moderate risk such as a PACS system or medical monitor, as well as Computer Aided Detection software. This classification requires both general as well as special controls to be applied. These devices and software require a 510(k) premarket clearance.
For a moderate risk device that does NOT have a predicate device, a new procedure has been developed aka a “de novo” filing. For example, the first Computer Aided Acquisition device which was approved in January 2020 followed the de novo process.
      3.       Class 3: High risk such as Computer Aided Diagnosis which requires general controls AND Premarket Approval (PMA).

7.       AI can be distinguished into the following categories:
a.       CADe or Computer Aided Detection – These aid in localizing and marking regions that may reveal specific abnormalities. The first application was for breast CAD, initially approved in 1997, followed by several other organ CAD applications. CADe has recently (as of January 2020) been reclassified to NOT need a PMA but rather to be class 2, needing only a 510(k).
b.       CADx or Computer Aided Diagnosis – Aids in characterizing and assessing disease type, severity, stage and progression
c.       CADe/x or Computer Aided Detection and Diagnosis – This is a combination of the first two classifications as it will do both localizing as well as characterizing the condition.
d.       CADt or Computer Aided Triage – This aids in prioritizing/triaging time-sensitive patient detection and diagnosis. Based on a CADe and/or CADx finding, it could immediately alert a physician or move the study to the top of a worklist to be evaluated.
e.       CADa/o or Computer Aided Acquisition/Optimization – Aids in the acquisition/optimization of images and diagnostic signals. The first CADa/o was approved in January 2020 for ultrasound to provide help to non-medical users to acquire images. Being first-in-class, it followed the de novo clearance process.

8.       Other dimensions of differentiation between the various AI algorithms are:
·         Is the algorithm “locked” or continuously adaptive? An example of a locked algorithm is the first CADe application for digital mammography: its algorithm was locked and is still basically the same as when the FDA cleared its initial filing in 1996. An adaptive algorithm will continue to learn and supposedly improve.
·         What is the reader paradigm? AI can serve as the first reader, which then possibly determines the triage; as a concurrent reader, e.g. performing image segmentation or annotation while a physician is looking at an image; as a secondary reader, such as one used to replace a double read for mammography; or it can be fully autonomous, with no human reader at all. The first clearance for a fully autonomous AI application, based on having a better specificity and sensitivity than a human reader, was for diabetic retinopathy, cleared in January of 2019.
·         What is the oversight? Is there no oversight, is it sporadic, or is it continuous? Note that this is different from the reader paradigm; a fully autonomous AI algorithm might still require regular oversight as part of QA checking and post-market surveillance, especially if the algorithm is not locked but adaptive.

9.       The FDA has several product codes for AI applications. The labeling and relationship between these codes, the various CAD(n) definitions, and the corresponding Class 1, 2, 3 and “de novo” classifications is inconsistent and unclear. The majority of the products, i.e. more than 60 percent, are cleared under the PACS product code (LLZ), as that is the most logical place for any image processing and analysis related filing; the remainder are cleared under 6 different CAD categories (QAS, QFM, QDQ, POK, QBS, and the most recent QJU) and a handful of others. If a vendor wants to file a new algorithm, the easiest path is to convince the FDA that it fits under LLZ, as there are many predicates and a lot of examples, assuming that the FDA approves that approach. I would assume that the FDA wants to steer new submissions towards the new classifications; however, as you can see from the chart, there are very few predicates, sometimes only a single one.

10.   Choosing the correct size and type of dataset that is used for the learning is challenging:
·         There are no guidelines on the number of cases to be included in the dataset used for the algorithm to learn and to validate its implementation. The unofficial FDA position is that the data should be “statistically significant,” which means that intensive interaction with the FDA is required to make sure the dataset meets its criteria.
·         Techniques and image quality vary a lot between images, to the extent that certain images might not even be useful as part of the dataset.
·         One needs to make sure that the dataset is representative of the body part, disease, and population characteristics. It has been acknowledged that a dataset from, for example, Chinese citizens might not be applicable to a population in the US, Europe or Africa. In addition, it became clear that an algorithm might need to be retrained based on the type of institution (compare a patient population at a VA medical center with the patients at a clinic in a suburb) and even geographic location (compare Cleveland with Portland, the youth in Cleveland being the most obese in all of the US).
·         There is a big difference between manufacturers in how they represent their data. This requires normalizing and/or preparing the data to make sure the algorithm can work on it. Even for CR/DR there are different detector/plate characteristics, different noise patterns, vendor-applied image processing, different LUTs, etc.
The figure shows the intensity values for different MRIs.
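A common way to handle this vendor-to-vendor intensity variation is a normalization step before training. Here is a minimal sketch using simple z-score normalization (the function name and toy values are illustrative, not from any specific toolkit):

```python
from statistics import mean, pstdev

def zscore_normalize(pixels):
    """Rescale pixel intensities to zero mean and unit variance so that
    images from different scanners/vendors become comparable."""
    mu = mean(pixels)
    sigma = pstdev(pixels)
    if sigma == 0:                      # flat image: nothing to scale
        return [p - mu for p in pixels]
    return [(p - mu) / sigma for p in pixels]

# Two toy "scans" with very different intensity ranges
# (e.g. MRIs from two different vendors)
scan_a = [290, 300, 310, 305, 295]        # vendor A: values around 300
scan_b = [800, 1200, 1600, 1400, 1000]    # vendor B: values around 1200

norm_a = zscore_normalize(scan_a)
norm_b = zscore_normalize(scan_b)
# After normalization both scans share the same intensity statistics
```

In practice more sophisticated techniques (histogram matching, bias field correction) are used, but the goal is the same: make the algorithm see comparable inputs regardless of the acquisition device.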

11.   There should be a clear distinction between the three different datasets that are used for different purposes:
·         The training dataset that is used to train the AI algorithm.
·         After the initial training is done, one would use a tuning dataset to optimize the algorithm.
·         As soon as the algorithm development is complete, it becomes part of the overall architecture and is verified with an integration test, which tests against the detailed design specs. This is followed by a system test that verifies against the system requirements, and lastly by a final Verification and Validation, which tests against the user requirements using a separate test dataset.
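The essential point is that the three datasets stay disjoint. A minimal sketch of such a split (the fractions and case naming are illustrative):

```python
import random

def split_dataset(cases, train_frac=0.7, tune_frac=0.15, seed=42):
    """Split a list of case identifiers into disjoint training,
    tuning, and test sets (the test set is the remainder)."""
    shuffled = cases[:]                     # don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)   # reproducible shuffle
    n_train = int(len(shuffled) * train_frac)
    n_tune = int(len(shuffled) * tune_frac)
    train = shuffled[:n_train]
    tune = shuffled[n_train:n_train + n_tune]
    test = shuffled[n_train + n_tune:]      # held out for final V&V only
    return train, tune, test

cases = [f"case-{i:03d}" for i in range(100)]
train, tune, test = split_dataset(cases)
# 70 training cases, 15 tuning cases, 15 test cases, no overlap
```

Keeping the test set untouched until the final validation is what gives the performance claim in the FDA filing its credibility.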

12.   AI clearance changed the traditional process in that pre-clearance testing and validation as well as post-market surveillance are now required. The pre-clearance is covered by the pre-submission, also known as the Q-Submission program, which has a separate set of guidelines and is used extensively by AI vendors. It is basically a set of meetings with the FDA focused on determining that the clinical testing is statistically significant and that the filing strategy is acceptable. Last year, there were 2,200 pre-submissions out of 4,000 submissions, which shows that it has become common practice. The FDA strongly encourages this approach.

The post-market surveillance is very important for non-locked algorithms, i.e. the ones that are self-learning and supposedly continuously improving; the challenge is to make sure that these algorithms are getting better and not worse. There was a lot of discussion about post-market surveillance and a consensus that it is needed, but there are no guidelines available (yet) on how this would work.

13.   There are a couple of applicable documents that are useful when looking to get FDA clearance for an AI application: the Q-Submission process, the De Novo classification request, and the regulatory framework discussion paper.

The FDA initiative to have an open discussion in the form of a workshop was an excellent idea and brought forth a lot of discussion and valuable information; you can find a link to the many presentations on their website. It was obvious that the regulatory framework for AI applications is still very much under discussion. Key takeaways are the use of pre-submissions to have an early dialogue with the FDA about the acceptable clinical data used for training and validation and about the regulatory product classification and approach, as well as the need for a post-market assessment, which is not defined (yet), especially for adaptive AI algorithms.

The de novo approach will also be very useful for the “to-be-defined” product definitions, and the list of product classifications can be expected to grow as more products are introduced. AI is here to stay, and the sooner the FDA has a well-defined process and approach, the faster these products can make an impact on the healthcare industry and patient care.

Tuesday, December 31, 2019

Top ten healthcare imaging informatics trends for 2020.


Several new trends have emerged over the past five years in the imaging and informatics field. Using the terminology from the Gartner hype cycle[1], some of them have not made it beyond the innovation trigger (yet), some ended up at the peak of inflated expectations, others ended up in the trough of disillusionment, and some have emerged to become somewhat mature technologies. I used the hype cycle categorization to show where the top ten trends are right now and where I believe they might end up a year from now.


1.       Augmented reality (AR) – Augmented reality superimposes a computer-generated picture on a user’s view, typically of a patient. It has great potential: imagine medical students working on a virtual patient, or performing a virtual surgery with a virtual scalpel instead of practicing on a human cadaver. Dissections can be done virtually, and one could even simulate errors, similar to what happens with a flight simulator.

There are several start-ups that are working on this technology, but much improvement is still needed. It might take another iteration of a Google Glass-like set of goggles to replace the big, somewhat cumbersome headsets I have seen so far. It will also be challenging to replace manual controls with 100 percent voice control and/or a combination of voice and other bodily controls. This technology is still in its infancy; there are a couple of trial sites, mostly for surgery, and we don’t yet know all of the pitfalls or where it will be used, so this technology is definitely at the beginning of the hype curve.

2.       Artificial Intelligence (AI) – If you attended the recent RSNA tradeshow or are following the literature, you will have seen that AI is very much in the hype phase right now. I counted more than 100 dedicated AI companies, most of which have not yet received FDA clearance for their new algorithms, and the FDA is struggling with these submissions as it is unclear how to handle an application that is “self-learning” and has potentially unpredictable future behavior.

There are ethical concerns as well, especially as these algorithms need a lot of data that is currently stored in the archives of many hospitals and/or in the cloud of the big three cloud providers, and which is supposedly being mined in a way that protects patient privacy, but in many cases without patient consent. There are also concerns that some of the algorithms were tested on a limited, biased subset of patients that does not include all of the various races and cultures with different gene pools.

Given the momentum, this technology will continue on its hype curve as several new applications are cleared by the FDA. It will take another three years before the first rounds of financing run out and these companies have to show a realistic ROI and potential to their initial investors. I expect that it will take another few years before the technology reaches its peak and users see what it can and can’t do, before it starts to drop down into the trough of disillusionment and eventually matures.

3.       Blockchain (BC) – This technology still gets a lot of attention, but it is closer to its peak as it has become clear what you can and can’t do with it. Blockchain provides a distributed ledger that is public, which makes its application in healthcare limited, as most healthcare applications are looking to preserve privacy. The fact that it is immutable, however, provides opportunities for registries, as you want those to be widely available and accessible. Occasionally you might hear about a physician practicing without proper licensing in a certain state, which would become much less likely if we had a publicly available registry in place.

Another application might be large public datasets with anonymized patient information that can be used to test new AI algorithms or for healthcare practitioner training. People have become aware that blockchain has limited applications in healthcare, and we are waiting for some of those to materialize so we can learn its pros and cons before it matures.
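The immutability that makes blockchain attractive for registries comes from each entry storing a hash of its predecessor, so tampering with any old record invalidates everything after it. A toy sketch of such a hash-chained licensing registry (the record fields and license numbers are made up for illustration; a real blockchain adds distributed consensus on top of this):

```python
import hashlib
import json

def _block_hash(record, prev_hash):
    """Deterministic SHA-256 over the record plus the previous block's hash."""
    payload = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_entry(chain, record):
    """Append a record; each block stores the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash,
                  "hash": _block_hash(record, prev_hash)})

def verify(chain):
    """Re-derive every hash; returns False if any block was altered."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        expected = _block_hash(block["record"], prev_hash)
        if block["hash"] != expected:
            return False
        prev_hash = expected
    return True

ledger = []
add_entry(ledger, {"physician": "Dr. A", "license": "TX-12345", "state": "TX"})
add_entry(ledger, {"physician": "Dr. B", "license": "NY-67890", "state": "NY"})
assert verify(ledger)                  # intact chain verifies
ledger[0]["record"]["state"] = "CA"    # tamper with an old record...
assert not verify(ledger)              # ...and verification fails
```

This is exactly the property you want for a public licensing registry: anyone can check the chain, and nobody can quietly rewrite history.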

4.       3-D printing – 3-D printing is not new; it has been used for more than 25 years to build prototypes and rare parts. Its prerequisite is the presence of a computer-generated model that can be interpreted by the printer and then printed using the so-called additive manufacturing technique instead of conventional machining, casting and forging processes.

What is new is that these printers have become less expensive, 3-D modeling has become more sophisticated, and the recent standardization of a PACS workstation interface has given this application a boost. Its application is somewhat limited as it is currently used mostly for surgery planning and simulation; a 3-D print can provide a visual model, especially for complex procedures. There is still a great opportunity for surgical implants and prosthetics, assuming that one can print using the right materials. What better use of the technology than an artificial limb that exactly matches its counterpart in the case of a paired body part, or an implant that fits exactly. Storing and labeling these 3-D models is still somewhat of a challenge, especially if one creates many of them. This technology still has to go up toward its peak before it will fall back and become a mature technology.

5.       FHIR – This new interface standard has skyrocketed in its hype. It is widely touted as the solution to all of the current interoperability problems and has strong support from the ONC (Office of the National Coordinator) in the US. There are a few, limited applications being introduced on a very big scale; for example, the Apple iPhone has a plug-in allowing you to access your medical record at participating hospitals. I have seen more deployments internationally than domestically, for example, some in western Europe and one in Saudi Arabia where patients can access nationwide scheduling using their phone apps. There are a couple of challenges which will cause it to reach its peak of inflated expectations over the next one or two years.

The biggest issue is the speed of its introduction and the corresponding lack of maturity. The first formal, normative release (R4) was not approved until early 2019. The term “normative” is deceiving, as FHIR is basically a compilation of many individual mini specifications for the various resources, and only 11 of the close to 150 resources in R4 are in a final, normative state. One could therefore argue that less than 10 percent of the spec is stable and ready for implementation, as more than 90 percent can, and most likely will, have substantial changes to its interface.
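To make the “resources” terminology concrete: each FHIR resource is a small, schema-defined JSON (or XML) structure. A minimal sketch of an R4 Patient resource, roughly as a server would return it (the id and name here are made up for illustration):

```python
import json

# A minimal FHIR R4 Patient resource, serialized as a server might return it.
patient_json = json.dumps({
    "resourceType": "Patient",
    "id": "example-123",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-05-17",
})

patient = json.loads(patient_json)
# Every FHIR resource declares its own type, so a client can dispatch on it
assert patient["resourceType"] == "Patient"
display_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
# display_name == "Jane Doe"
```

Patient happens to be one of the handful of resources that reached normative status in R4; most others can still change shape between releases, which is exactly the maturity concern described above.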

Also, I believe that the current lack of interoperability in healthcare is not so much due to the lack of a well-defined interface, despite the issues with HL7 version 2 and the overkill and verbosity of CDA documents, but more due to the siloed architectures of our current EMRs and other healthcare applications, and to resistance by vendors and institutions to share information. It might require the pending “anti-blocking” rule by the ONC to get some real teeth, success stories to become more widely known, and the standard to get more mature before it reaches its peak. I am worried about the speed and momentum, because the faster you go, the more damaging it is if you crash. As of now, FHIR is still going full speed ahead, and it might take another two or three years before we see it go past its peak.

6.       Cloud – The cloud right now is beyond its hype and traveling down its negative slope. Using the cloud as a back-up for data recovery has been mature for many years, but the advantages and disadvantages of outsourcing your IT solutions to a cloud platform have become clearer. From a security perspective, most healthcare institutions spend between three and six percent of their IT budgets on cyber security, and they are becoming aware that it is hard to compete with the thousands of security experts employed by the big cloud providers. It has also become clear that you still need to manage your cloud applications well, especially the configuration settings, as demonstrated by the 2019 Capital One breach, which is touted as one of the largest breaches ever. There is a lack of trust by the general public in how their data in the cloud is being used by the big cloud providers and whether it is sufficiently anonymized.

The cloud is not always a good solution, as there could be a shift from the cloud to edge computing when processing real-time data. A typical response time from the cloud would be about one second, which is fine when accessing and retrieving information for human interpretation, but for split-second decisions such as those needed for remote surgery, the cloud is too slow. The good news is that in healthcare we typically don’t need to make these fast decisions, unlike self-driving cars that need to avoid potential obstructions. So, the cloud is definitely past its initial hype, and next year we’ll discover more of its inflated expectations before we see it mature in a few years.

7.       IOMT – The number of IOMT (Internet Of Medical Things) devices will continue to explode. The problem is that people are becoming highly dependent on these devices, as illustrated by the recent data outage at Dexcom, a company whose devices allow caregivers to monitor the blood-sugar levels of their kids, parents and others. When this communication was suddenly disrupted, there was a semi-panic among caregivers.

This is not the only kind of IOMT device being introduced; there are intelligent extensions to a person’s handheld device to measure vital signs allowing for a telemedicine consult, as well as wearables that can record and communicate with pacemakers, intelligent drug dispensers, scales and other devices. Challenges with these devices are the unrealistic reliance on these technologies and their corresponding immaturity, unreliability and lack of redundancy.

These IOMT devices interface easily with your mobile devices, using Bluetooth for example, but what about the next step, i.e. how does the data get into an EMR? There is a mechanism in any EMR to upload the observations and vitals measured by a nurse, but how about uploading that information from my smart watch when I come into the doctor’s office?
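One plausible path for that next step is to encode the wearable’s measurement as a FHIR Observation, the resource FHIR defines for vital signs, and post it to the EMR’s FHIR endpoint. A minimal sketch (the heart-rate code 8867-4 is the standard LOINC code; the patient id and timestamp are made up for illustration):

```python
import json

def heart_rate_observation(patient_id: str, bpm: int, timestamp: str) -> dict:
    """Build a minimal FHIR R4 Observation for a heart-rate reading,
    e.g. one taken by a smart watch."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        "code": {"coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",           # LOINC: Heart rate
            "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": timestamp,
        "valueQuantity": {"value": bpm, "unit": "beats/minute",
                          "system": "http://unitsofmeasure.org", "code": "/min"},
    }

obs = heart_rate_observation("example-123", 72, "2020-08-06T09:30:00Z")
payload = json.dumps(obs)  # what a device app would POST to the EMR's FHIR endpoint
```

Whether a given EMR actually accepts device-generated Observations, and how it reconciles them with clinician-entered vitals, is exactly the open workflow question raised above.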

Last but not least, there is a concern about cyber security provisions in this technology, as potential weaknesses in pacemakers and IV pumps have been published. All of this makes IOMT still immature, and it will take a few more years before it starts to slope up again and reach the plateau of productivity.

8.       VNA – Vendor Neutral Archives (VNAs) were the biggest hype in medical imaging two or three years ago. Initially the VNA was positioned as the best solution to avoid costly and lengthy data migrations for hospitals that were switching PACS vendors. Every major PACS vendor scrambled to catch up, replacing their PACS archive labels with a VNA label, which created confusion in the marketplace about the functionality of a “true” VNA. Subsequently, the VNA became the source for image display for non-radiology physicians requiring web interfaces and for connections with a Health Information Exchange using XDS protocols.

As of now, the VNA is being repositioned as an enterprise archive, with its own set of problems. There is a lack of synchronization of the data between the various PACS systems and the VNA, for example, when there is a change in the patient demographics, when images have to be deleted, or when other adjustments are made. Standards defined by IHE exist to address this issue, but there is little uptake of those on the PACS side.

The biggest stumbling block is the lack of a uniform workflow for non-radiology or cardiology specialties and inconsistent access to patient orders and/or encounter information. Also, as institutions are starting to rely on a single VNA to manage images from 100+ institutions, there are some serious concerns and issues around redundancy and scalability.

The VNA is definitely not mature yet, but its pitfalls have been identified, and it is slowly going up the slope of enlightenment, which will continue for the next two or three years. Another concern is that because most of the independent VNA vendors have been acquired by PACS vendors, their rate of innovation will slow down because of the transition and lack of focus.

9.       DR – Digital Radiography (DR) has been replacing Computed Radiography (CR) for the past several years by capturing the digital X-ray exposure and converting it directly into an electronic signal, producing a picture within a few seconds instead of requiring a CR plate to be scanned in a reader. However, CR technology is still great for developing countries that are still converting from film to digital. The DR plate technology has greatly improved and has come down in price, but it is still not cheap; plates used to be more than $50k and are now getting closer to $20k. They are now wireless, so you don’t need a cable to connect a plate that is being used for portable X-rays, which can be a safety hazard. As a matter of fact, I heard firsthand from a radiology administrator about a technologist who tripped over the cable, resulting in a worker’s compensation case.

The battery life of the removable plates is getting better, with some lasting up to half a day or more. In addition, the way they are sealed has also improved, providing better protection against leakage of bodily fluids. However, most of them are still based on silicon on glass, so they are heavy and subject to damage if dropped. All of these factors, price, battery life, leakage protection and weight, can still be improved, which is why the technology has not reached its plateau of productivity and is still ascending the slope of enlightenment. This will continue for the next few years.

10.   POC Ultrasound – Point of Care (POC) ultrasound has big potential. It is inexpensive (~$2k to $6k), portable, and adds value when used correctly. It could potentially become the equivalent of the stethoscope for physicians. In addition, it can become a tool for non-physicians such as midwives or Physician Assistants.

Because of its low price point, there is a huge market opportunity in developing countries, where in the majority of cases no diagnostic tools are available at all. Several factors will keep it climbing the hill of inflated expectations for the next one to two years. Hardware and software product features are still immature; for example, some of the probes are known to get really warm, and the software still lacks clinical measurements such as those needed for cardiac echo and OB/GYN. There is also no universal architecture yet: some of these devices can use a standard iPhone or Android phone/tablet, while others require a company-provided proprietary handheld device. Some of the POC devices require a cloud connection, which is a problem when working in an area without connectivity, and the business models vary between monthly fees and one-time purchases.

Last but not least, acquired images need to be archived, and there is an issue with matching the images with the correct metadata containing the patient information and any other important encounter-based information.

In conclusion, the Gartner hype cycle has been criticized for a lack of evidence that every technology actually goes through this cycle; however, in my opinion, it seems to apply to most of the new technologies I have seen developing over the past several decades. Also, note that the ranking of these technologies in this article is my own personal opinion, and I might be wrong; I promise to produce an update a year from now and admit any assumptions that turned out to be incorrect. The main purpose of this ranking is to serve as input when making a decision to implement these technologies. It is fine to take a bet if you are risk-tolerant and like to be on the “bleeding edge,” but if not, you might want to think twice about using a technology that is labeled immature or super-hyped. And of course, you can disagree with my ranking; I always encourage feedback and discussion.