Tuesday, June 23, 2020

How Workflow Bottlenecks Are Choking the AI Deployment Tsunami.


The introduction of AI in medical imaging could not have come at a better time: during the COVID-19 pandemic, AI applications for detection, diagnosis and acquisition support, especially when combined with telemedicine, have proven invaluable in managing these patients, both at healthcare institutions and at home. There are a couple of caveats to using this new technology, however. First, regulatory constraints limit the availability of new AI algorithms, as the FDA needs to catch up with approvals. Second, as with any deep learning application, AI for healthcare needs a lot of data to train the algorithm, which is a limiting factor for COVID cases, even though several hospitals are making their COVID patient data publicly available. Despite these limitations, institutions are ready to deploy AI for this particular use case, together with the many other applications that have been identified and are being addressed by literally hundreds of companies developing these novel applications.

However, early implementations of AI have run into a major obstacle: how to fit it into the workflow. AI has caused a true “traffic jam” of data: imaging studies have to be routed to several algorithms, and the results from these AI applications, in the form of annotations, reports, markers, screen saves and other indications, have to be routed to destinations such as the EMR, PACS, reporting systems or viewers. This orchestration has to be synchronized with other information flows. For example, an AI result has to be available before or at the time an imaging study is reported, and it has to arrive together with lab and other results, which might require delaying or queuing these non-AI information flows to be effective.

What is needed to manage this is an AI “conductor” that orchestrates the flow of images, results and reports between all the different parties, such as modalities, reporting systems, the EMR, and obviously the AI applications themselves, which could be on-premise or in the cloud. Note that the number of AI apps could eventually reach hundreds if you take into account that an algorithm might be modality specific (CT, MR, US, etc.) and specialized for different body parts and/or diseases. Scalability is therefore a key requirement for this critical device, but there are many other required features as well.

A simple “DICOM router” will not be able to orchestrate this rather complex workflow. To assist users with identifying the required features, I created three levels of routers as shown in the figure.

Level 1 can do simple forwarding and multiplexing, provides queue management, and has a simple rules engine to determine what to send where.
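To make the rules engine idea concrete, here is a minimal sketch in Python of level 1 routing logic. The attribute names mirror common DICOM header fields, and the destination names are hypothetical examples, not real products:

```python
# Minimal "level 1" router sketch: a rules engine inspects a few header
# fields and decides which destination(s) to forward a study to.
# Destination names ("lung_nodule_ai", etc.) are hypothetical.

RULES = [
    # (predicate, destinations); a study can match several rules,
    # which gives simple multiplexing.
    (lambda s: s["Modality"] == "CT" and "CHEST" in s["BodyPartExamined"],
     ["lung_nodule_ai", "pacs_archive"]),
    (lambda s: s["Modality"] == "MG",
     ["breast_cad", "pacs_archive"]),
    (lambda s: True,  # default rule: everything goes to the archive
     ["pacs_archive"]),
]

def route(study: dict) -> list:
    """Return the ordered, de-duplicated list of destinations for a study."""
    destinations = []
    for predicate, targets in RULES:
        if predicate(study):
            for t in targets:
                if t not in destinations:
                    destinations.append(t)
    return destinations

print(route({"Modality": "CT", "BodyPartExamined": "CHEST"}))
# -> ['lung_nodule_ai', 'pacs_archive']
```

A real router would evaluate these rules on incoming DICOM associations and maintain a persistent queue per destination, but the decision logic is essentially this table lookup.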

The second level adds features: it can perform “fuzzy routing,” i.e. routing based on fuzzy logic, prefetch information using proxies (i.e. querying multiple sources while returning a single result), convert data and file formats, anonymize the data, and scale as volumes grow.
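The proxy idea can be illustrated with a small sketch: a single query fans out to multiple sources and the responses are merged into one result set, de-duplicated on study UID. The source functions here are hypothetical stand-ins for real DICOM query/retrieve calls:

```python
# "Query proxy" sketch: fan one query out to several sources and merge
# the responses, de-duplicating on StudyInstanceUID (the same study may
# live in two archives).

def proxy_query(sources, patient_id):
    seen, merged = set(), []
    for source in sources:
        for study in source(patient_id):  # each source returns matching studies
            uid = study["StudyInstanceUID"]
            if uid not in seen:
                seen.add(uid)
                merged.append(study)
    return merged

# Two simulated sources standing in for a local PACS and a cloud archive:
def local_pacs(pid):
    return [{"StudyInstanceUID": "1.2.3", "PatientID": pid}]

def cloud_archive(pid):
    return [{"StudyInstanceUID": "1.2.3", "PatientID": pid},
            {"StudyInstanceUID": "1.2.4", "PatientID": pid}]

print(len(proxy_query([local_pacs, cloud_archive], "PAT001")))  # -> 2
```

The caller sees a single return, exactly as described above, even though two archives were queried.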

The third level has all of the level 1 and 2 functionality and extends it with AI-specific routing: it can modify image headers and split studies, act as a worklist proxy (i.e. query multiple worklists while appearing as a single source), and provide secure connectivity to meet “zero-trust” requirements. It supports not only “traditional” DICOM and HL7 but also web services such as WADO and FHIR, and it supports the relevant IHE profiles. It can also perform static and dynamic routing, filter and normalize the data, anonymize it if so desired, and handle several different formats and object types such as Structured Reports and annotations, to name a few. As a matter of fact, a fully featured AI conductor requires at least 25 distinctly different functions, as described in detail in this white paper (link).
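Two of these level 3 functions, header modification and study splitting, can be sketched as follows. The instances are plain dictionaries standing in for parsed DICOM headers, and the tag names are illustrative:

```python
# Sketch of two "level 3" router functions: coercing (modifying) header
# attributes and splitting a study into per-series sub-studies.

def coerce_headers(instances, overrides):
    """Return copies of the instances with selected attributes rewritten,
    e.g. to fix an accession number before routing to an AI algorithm."""
    return [{**inst, **overrides} for inst in instances]

def split_by_series(instances):
    """Group a study's instances by SeriesInstanceUID so each series can
    be routed to a different (e.g. body-part specific) algorithm."""
    series = {}
    for inst in instances:
        series.setdefault(inst["SeriesInstanceUID"], []).append(inst)
    return series

study = [
    {"SeriesInstanceUID": "1.1", "AccessionNumber": "A1"},
    {"SeriesInstanceUID": "1.2", "AccessionNumber": "A1"},
    {"SeriesInstanceUID": "1.1", "AccessionNumber": "A1"},
]
fixed = coerce_headers(study, {"AccessionNumber": "A2"})
print(sorted(split_by_series(fixed)))  # -> ['1.1', '1.2']
```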

In conclusion, there is a serious workflow issue in deploying AI, but the good news is that there are solutions available, some in the public domain with limited features and some as commercial products. Make sure you know what you need before shopping around: the comprehensive white paper on this subject has a handy checklist you can use when you are browsing your (virtual) HIMSS, SIIM or RSNA trade shows or when “Zooming” with your favorite vendor. You can download the white paper here.




Friday, April 17, 2020

Open Source PACS solutions for LMIC regions.

Students at PACS bootcamp in Tanzania
sponsored by RAD-AID

Using an open source PACS solution instead of a commercial PACS could be attractive to LMICs (Low and Middle Income Countries), as it provides a good start to gain experience with managing digital medical images at a relatively low entry cost. In this paper we’ll discuss the PACS features that can be offered by open source providers, implementation strategies, and lessons learned.

Why would someone want to use an open source PACS?

·         The most important reason is its lower cost, as it is free (kind of), i.e. there are no software and/or licensing fees. The exception is the operating system, which can be open source as well if one uses Linux or a variant, and, if applicable, other utilities such as a database, which can likewise be an open source product. There is still a significant cost for the hardware, i.e. servers, PCs, medical grade monitors for the radiologists, and the network infrastructure, i.e. cabling, routers and switches. The latter assumes that there is no reliable network in place, which is often the case in LMICs; therefore, a dedicated network is often a requirement.

·         An open source PACS allows an organization to find out what it needs while changing from hardcopy film to a digital environment, with which it often has no experience or exposure. As many open source PACS systems have both a free and a commercial version, it is easy to migrate at a later date to the paid version, which provides upgrades and support, once the organization feels comfortable with the vendor.

·         This is not only applicable to LMIC regions; an open source PACS can also be used to address a missing feature in your current system, for example as a DICOM router.
·         The open source PACS can function as a free back-up in case the commercial production PACS goes down during unscheduled or scheduled downtime.

·         It can be used as a “test-PACS” for troubleshooting, diagnostics and training.

But the main reason is still the cost advantage. If an LMIC hospital has to choose between purchasing a used CT or MRI for, let’s say, $350k US, which could have a major impact on patient care as it might be the only one in a large region serving a big population, and investing in a PACS system, the choice is clear: they will first get the modality and then use maybe another $50k or so to buy the server hardware, PCs and monitors, string cable to get a network in place, and install an open source PACS. One should also be aware that the argument of not having any vendor support for an open source PACS is grossly overrated. I have seen some good dealers and support, but also some very poor service engineers, so even with a commercial PACS, the chance of getting decent support is often slim in the LMIC region.

Let’s now talk about the PACS architecture, as there is a difference between a “bare-bones” PACS (BB-PACS), a typical PACS (T-PACS), and a fully featured PACS (FF-PACS). This is important as in many cases you might only need a BB-PACS to meet the immediate needs of an LMIC hospital or clinic.

A T-PACS takes in images from different modalities, indexes them in a database, aka Image Manager, archives them in such a way that they can be returned to users, and provides a workflow manager that allows multiple radiology users to simultaneously access the studies using different worklist criteria. For example, the workflow manager would allow studies to be filtered by specialty (neuro, pediatrics) and/or body part (extremities, breast, head) while indicating whether a study is being read by someone else, its priority, and whether it has been reported. The T-PACS also has a tight integration between its workstations, the PACS archive, and the database through the workflow manager, i.e. these workstations would typically be from the same vendor that provides the PACS archive and database.
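The worklist filtering such a workflow manager provides can be sketched as follows; the field names and example studies are hypothetical, not taken from any specific product:

```python
# Sketch of workflow-manager worklist filtering: each radiologist sees
# studies matching their filter, with lock/priority/reported flags.

STUDIES = [
    {"id": 1, "specialty": "neuro", "body_part": "head",
     "priority": "stat", "locked_by": None, "reported": False},
    {"id": 2, "specialty": "pediatrics", "body_part": "extremities",
     "priority": "routine", "locked_by": "dr_jones", "reported": False},
    {"id": 3, "specialty": "neuro", "body_part": "head",
     "priority": "routine", "locked_by": None, "reported": True},
]

def worklist(studies, specialty=None, unreported_only=True):
    """Return the filtered worklist, STAT cases first."""
    selected = [s for s in studies
                if (specialty is None or s["specialty"] == specialty)
                and not (unreported_only and s["reported"])]
    return sorted(selected, key=lambda s: s["priority"] != "stat")

for s in worklist(STUDIES, specialty="neuro"):
    print(s["id"], s["priority"], "locked" if s["locked_by"] else "free")
```

Running this shows only the unreported neuro study, with its priority and lock status, which is essentially what each reader’s worklist view does.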

The FF-PACS is a T-PACS that additionally has reporting capability, preferably using voice recognition, and a Modality Worklist Provider that interfaces the digital modalities with an ordering system, allowing the technologist at the modality to pick from a list instead of having to re-enter the patient demographics and select the appropriate study.

A BB-PACS is merely a PACS database and archive. It does not have a workflow manager, and one could use an open source workstation from another vendor. Almost all open source PACS systems are of the BB-PACS kind, which means that one has to select a suitable open source viewer to go with it as well.

How are these open source PACS systems implemented? In the developed world, it typically happens top-down, i.e. a hospital has a Radiology Information System (RIS) that places the orders, which in most institutions has been replaced by an ordering feature in the EMR. These orders are then converted from an HL7 message into a DICOM worklist format by a worklist provider. The images that are acquired are sent to the PACS, and the radiologist uses a voice recognition system to create the reports.
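The worklist provider’s HL7-to-DICOM conversion can be sketched roughly as follows. This is highly simplified: a real implementation would use a proper HL7 parser and map many more fields, and the field positions shown are illustrative:

```python
# Rough sketch of a worklist provider: map fields from a pipe-delimited
# HL7 order message into DICOM Modality Worklist attributes.
# Simplified for illustration; real HL7 parsing is more involved.

def hl7_to_mwl(hl7_message: str) -> dict:
    segments = {line.split("|")[0]: line.split("|")
                for line in hl7_message.strip().splitlines()}
    pid, obr = segments["PID"], segments["OBR"]
    return {
        "PatientID": pid[3],
        "PatientName": pid[5],
        "AccessionNumber": obr[3],
        "RequestedProcedureDescription": obr[4],
    }

msg = "MSH|^~\\&|RIS|HOSP|||202004170800||ORM^O01|1|P|2.3\n" \
      "PID|1||12345||DOE^JOHN\n" \
      "OBR|1||ACC001|CT CHEST W/O CONTRAST"
print(hl7_to_mwl(msg)["AccessionNumber"])  # -> ACC001
```

The modality then queries these attributes via a DICOM Modality Worklist C-FIND, so the technologist never re-types the demographics.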

In the LMIC regions, it typically starts bottom-up. The first step is converting the modalities from film to digital by replacing their film processors with CR reader technology or upgrading their x-ray systems to include a Direct Digital Detector. They might get a CT and/or MRI that also prints studies on a film printer. They now have digital images that need to be viewed on a viewing station, archived and managed, therefore a PACS is needed. That is when the vendors start pitching their commercial PACS products, usually a FF-PACS or T-PACS, which are typically unaffordable, hence the choice to implement an open source, BB-PACS with a couple of open source view stations.

It is critical at this point to use a medical grade monitor for the radiologist to make a diagnosis, as commercial grade monitors are not calibrated to map each image pixel value onto a greyscale value that can be distinguished by a user. These monitors do not need the high resolution (3MP or 5MP) commonly used in developed countries; a 2MP monitor will suffice, knowing that to see the full resolution the user will have to zoom in or pan the image. These 2MP monitors are at least three times less expensive than their high-resolution versions. The only disadvantage is that they require a little more interpretation time, as the user has to zoom to see the full spatial resolution.

After having installed a BB-PACS and used it for a few years, the institution will have a better idea of what their specific requirements are for the PACS system and they can make a much better decision for what they want to do next. There are three options:
1.       Expand the current open source BB-PACS, e.g. upgrade the storage capacity, replace the server, add a more robust back-up solution, and add a commercial workstation, workflow manager, Modality Worklist Provider and reporting system. This assumes there is a mechanism to enter orders, i.e. through a RIS or EMR.
2.       Keep the BB-PACS and turn it into a Vendor Neutral Archive (VNA) and purchase a commercial T-PACS which serves as a front end to the radiologist. The new PACS might store images for 3-6 months and the “old” PACS will function as the permanent archive.
3.       Replace the BB-PACS with a commercial T-PACS or even an FF-PACS, assuming the funds are available and you find a cost-effective solution.

Note that the advantage of options 1 and 2 is that you don’t need to migrate the images from the old to the new PACS, which can be a lengthy and potentially costly endeavor.

What are some of the open source PACS systems? The most common options are Conquest, ClearCanvas server, Orthanc, DCM4CHEE and its variant Dicoogle. Conquest and ClearCanvas are Windows based, Orthanc runs on both Windows and Linux, and DCM4CHEE is Linux based. Conquest is the most popular for use as a router and for research, and it is the easiest to install (literally a few minutes). ClearCanvas is also relatively easy to install; DCM4CHEE is the most involved, but there is now a Docker image available that makes the process easier. DCM4CHEE is also the most scalable. For open source viewers, one can use the ClearCanvas viewer, which is the most popular, or a web-based viewer such as Oviyam with DCM4CHEE. RadiAnt is another option, and OsiriX is the primary choice for a Mac. There are several other viewer options; one can do a search and try them out, but be aware that they differ greatly with regard to functionality and robustness. Another consideration is continuing support. As an example, the gold standard for open source viewers used to be eFilm, but that company was acquired by a commercial vendor who stopped supporting the open source version, which is a problem given the frequent OS upgrades, especially on Windows.

What are some of the lessons learned with installing the open source PACS:
·         Be prepared to assign an in-house IT and/or clinical person who is computer literate to support the PACS. This person will be responsible for day-to-day support, back-ups, managing scheduled and unscheduled downtimes, adding additional modalities, and adding interfaces with a RIS, EMR or reporting system as they are introduced. This staff member will also be responsible for troubleshooting any issues that might occur, will be the go-to person for questions about usage, and will train incoming users. These so-called PACS administrators are a well-established profession in the developed world, but it will initially be a challenge to justify a designated position to the department and hospital administration in the LMIC region, as it is a new role.
·         How will these PACS administrators get their knowledge? There are fortunately many on-line resources, including on-line training, and organizations such as RAD-AID, which has been conducting PACS bootcamp training sessions in LMIC regions to educate these professionals.
·         PACS is a mission critical resource that has impact on the infrastructure (power, network, HVAC, etc.). In most cases the existing network is not secure and reliable enough and/or does not have sufficient bandwidth, which requires a dedicated network with its own switches and routers.
·         It is preferred to use locally sourced hardware for the IT components to allow for a service contract and access to parts. The only problem you might have is to get medical grade monitors in some regions as they are not as popular yet.
·         Pay attention to the reading environment for diagnostics. I had to instruct people to switch off the lightboxes that were used to look at old films, and even to paint over some outside windows to reduce the ambient light. Use medical grade monitors for diagnostic reading.
·         Use good IT practices, which include implementing cyber security measures, reliable back-ups and OS patch management.
·         Create a set of Policies and Procedures for the PACS that cover access control, who can import and export data on CDs and how that is done, unscheduled and scheduled down-time procedures, and everything else needed to manage a relatively complex healthcare imaging and IT system.

In conclusion, open source PACS systems are a very viable, if not the only, option in LMIC regions due to cost constraints, especially for the first phase. One should be aware that these open source PACS systems are very much a bare-bones solution with limited functionality; however, they allow the user to get started and find out their specific requirements. If additional funds become available, one can later enhance the functionality or replace the system with a commercial PACS, which can serve either as a “front-end” to the existing PACS or as a full replacement.


Thursday, March 19, 2020

Healthcare AI Regulatory Considerations.

Based on the information provided during the recent FDA-sponsored workshop, “The Evolving Role of Artificial Intelligence in Radiological Imaging,” here are the key US FDA regulatory considerations you should be aware of.

1. AI software applications are fundamentally different in that an AI algorithm is created and improved by feeding it data so it can learn; eventually, if it implements deep learning, it can learn and improve autonomously based on new data. AI is also a big business opportunity.

According to an analysis by Accenture, the market for AI applications for preliminary diagnosis and automated diagnosis is $8 billion. The same analysis points out that there will be a 20 percent unmet demand for clinicians in the US by 2026, which can be addressed by AI.


It became clear during the conference that the prediction made in November of 2016 by Geoffrey Hinton, that deep learning would put radiologists out of a job within 5 years, was a gross miscalculation. No jobs have been lost as of today; on the contrary, the number of studies to be reviewed is increasing to almost 100 billion images per year, to be read by approximately 34,000 radiologists, requiring more and more images to be read faster and more efficiently. The use of AI to eliminate “normal” cases, especially for screening exams such as breast cancer or TB in chest images, will be a big relief for radiologists.


2.       AI will not make radiologists obsolete but rather will change their focus, as the image by itself might become less important than the overall patient context. We spend a lot of time improving image quality by reducing image artifacts and increasing resolution so a physician can make a better diagnosis. However, as one of the speakers brought up, using autonomous AI could potentially eliminate the need for creating an image at all, by basing the diagnosis directly on the information in the raw data. Why would we need an image? Remember, the image was created to optimally present information to a human, ideally matching our eye-brain detection and interpretation. If we apply the AI algorithm to the acquired data without worrying about the image, we could use it on CT raw data streaming straight from the detector, the signals directly from the MR high-frequency coils, the ultrasound sound waves, the EKG electrical signals, or whatever information comes from any kind of detector. Images have served physicians very well for many years, but in some cases “medical imaging” will be implemented without the need to produce an image, and we might need to rename it “medical diagnosing” instead. I believe that a radiologist is first and foremost an MD, and thinking that they will be out of a job when there is less emphasis on the images seems misguided.

3.       AI algorithms are often focused on a single characteristic, which is a problem when using them in an autonomous mode, as incidental findings can go unnoticed. Two good examples were given during the workshop. The first was an ultrasound of the heart of a fetus, which looked perfectly normal; if one were to run an AI algorithm to look for defects, it would pass as OK. However, in this particular case, as shown in the image, the heart was outside the chest, aka ectopia cordis, a rare condition that, if present, should be diagnosed early so it can be treated accordingly. The other example was autonomous AI detection of fractures. Fractures are very common in children, as I can attest personally, having many grandkids who are very active. One of the speakers mentioned that in some cases, when looking at a fracture, there are incidental findings of bone cancer, something that a “fracture algorithm” would not detect. So maybe my previous hypothesis that the image might eventually become obsolete is not quite correct, unless we have an all-encompassing AI detection algorithm that can identify every potential finding.
The problem with creating an all-encompassing AI is that there are some very rare findings and diseases for which relatively little data is available. It is easy to get access to tens of thousands of chest or breast images with lung or breast cancer from the public domain, for example from the NCI, but for rare cases there might not be enough data available to be statistically significant for training and validating an AI algorithm.

4.       There are still many legal questions and concerns about AI applications. As an analogy, the electric car company Tesla is being sued right now by the surviving family of a person who died after his car crashed into a highway median because the autopilot misread the lane lines. Many people crash into medians because of human error, but there is much less tolerance for errors made by machines than by humans. The question is who is accountable if an algorithm fails with subsequent patient harm or even death: the hospital, the responsible physician, or the vendor of the AI algorithm?

5.       A discussion about any new technology would not be complete without a discussion about standards. How is an algorithm integrated into an existing PACS viewer or medical device software, and how is the output of the AI encoded? IHE has just released two profiles that address AI results encoding and AI workflow integration. Implementers are encouraged to support these standards, and potential users are encouraged to request them in their RFPs.

6.       There are three different US FDA regulatory approval and oversight classifications for medical devices and software:
      1.       Class 1: Low risk, such as an image router. This classification requires General Controls to be applied (Good Manufacturing practices, complaint handling, etc.)
      2.       Class 2: Moderate risk such as a PACS system or medical monitor, as well as Computer Aided Detection software. This classification requires both general as well as special controls to be applied. These devices and software require a 510(k) premarket clearance.
For a moderate risk device that does NOT have a predicate device, a new procedure has been developed aka a “de novo” filing. For example, the first Computer Aided Acquisition device which was approved in January 2020 followed the de novo process.
      3.       Class 3: High risk such as Computer Aided Diagnosis which requires general controls AND Premarket Approval (PMA).

7.       AI can be distinguished into the following categories:
a.       CADe or Computer Aided Detection – These aid in localizing and marking regions that may reveal specific abnormalities. The first application was breast CAD, initially approved in 1997, followed by several other organ CAD applications. CADe has recently (as of January 2020) been reclassified: it no longer needs a PMA but is instead class 2, needing only a 510(k).
b.       CADx or Computer Aided Diagnosis – Aids in characterizing and assessing disease type, severity, stage and progression
c.       CADe/x or Computer Aided Detection and Diagnosis – This is a combination of the first two classifications as it will do both localizing as well as characterizing the condition.
d.       CADt or Computer Aided Triage – This aids in prioritizing/triaging time sensitive patient detection and diagnosis. Based on a CADe and/or CADx finding, it could immediately alert a physician or put it on the top of a worklist to be evaluated.
e.       CADa/o or Computer Aided Acquisition/Optimization – Aids in the acquisition/optimization of images and diagnostic signals. The first CADa/o was approved in January 2020 for ultrasound to provide help to non-medical users to acquire images. Being first-in-class, it followed the de novo clearance process.

8.       Other dimensions or differentiation between the different AI algorithms are:
·         Is the algorithm “locked” or is it continuously adaptive? An example of a locked algorithm is the first CADe application for digital mammography: its algorithm was locked and is still basically the same as when the FDA cleared its initial filing. An adaptive algorithm will continue to learn and supposedly improve.
·         What is the reader paradigm? AI can serve as the first reader, which then possibly determines triage; as a concurrent reader, e.g. doing image segmentation or annotation while a physician is looking at the image; as a secondary reader, such as when used to replace a double read for mammography; or it can be fully autonomous, with no human reader. The first clearance of a fully autonomous AI application, based on it having better specificity and sensitivity than a human reader, was for diabetic retinopathy, cleared in January of 2019.
·         What is the oversight? Is there no oversight, is it sporadic, or is it continuous? Note that this is different from the reader paradigm: a fully autonomous AI application might still require regular oversight as part of QA checking and post-market surveillance, especially if the algorithm is not locked but adaptive.

9.       The FDA has several product codes for AI applications. The labeling and relationship between these codes, the various CAD(n) definitions, and the corresponding class 1, 2, 3 and “de novo” classifications is inconsistent and unclear. The majority of the products, i.e. more than 60 percent, are cleared under the PACS product code (LLZ), as that is the most logical place for any image processing and analysis related filings; the remainder are cleared under 6 different CAD categories (QAS, QFM, QDQ, POK, QBS, and the most recent QJU) and a handful of others. If a vendor wants to file a new algorithm, the easiest path is to convince the FDA that it fits under LLZ, as there are many predicates and a lot of examples, assuming that the FDA approves that approach. I would assume that the FDA wants to steer new submissions towards the new classifications; however, as you can see from the chart, there are very few predicates, sometimes only a single one.

10.   Choosing the correct size and type of dataset that is used for the learning is challenging:
·         There are no guidelines on the number of cases to be included in the dataset that is used to train the algorithm and validate its implementation. The unofficial FDA position is that the data should be “statistically significant,” which means that intensive interaction with the FDA is required to make sure the dataset meets its criteria.
·         Techniques and image quality vary a lot between images, to the extent that certain images might not even be useful as part of the dataset.
·         One needs to make sure that the dataset is representative of the body part, disease, and population characteristics. It has been acknowledged that a dataset from, e.g., Chinese citizens might not be applicable to a population in the US, Europe or Africa. In addition, it became clear that an algorithm might need to be retrained based on the type of institution (compare the patient population at a VA medical center with the patients at a suburban clinic) and even geographic location (compare Cleveland with Portland, the youth in Cleveland being the most obese in all of the US).
·         There is a big difference between manufacturers in how they represent their data. This requires normalizing and/or preparing the data to make sure the algorithm can work on it. Even for CR/DR there are different detector/plate characteristics, different noise patterns, vendor-applied image processing, different LUTs, etc.
The figure shows the intensity value distributions for different MRI scanners.
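As a simple illustration of such normalization, here is a per-volume z-score sketch, one common way to make intensities from different scanners comparable before training. It operates on a flat list of pixel values rather than a real image, purely for illustration:

```python
# Per-volume z-score normalization sketch: shift each dataset to zero
# mean and unit standard deviation so scanner-specific intensity scales
# become comparable. Pure Python on a flat list of pixel values.

def z_normalize(pixels):
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5 or 1.0  # guard against a constant image
    return [(p - mean) / std for p in pixels]

scanner_a = [100, 200, 300]  # arbitrary example intensities
normalized = z_normalize(scanner_a)
print(round(sum(normalized), 6))  # normalized values now center on zero
```

Real pipelines use more sophisticated techniques (histogram matching, bias-field correction), but the principle is the same: remove scanner-specific scale before the algorithm sees the data.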

11.   There should be a clear distinction between the three different datasets that are used for different purposes:
·         The training dataset that is used to train the AI algorithm.
·         After the initial training is done, one would use a tuning dataset to optimize the algorithm.
·         As soon as the algorithm development is complete, it becomes part of the overall architecture and is verified with an integration test, which tests against the detailed design specs. This is followed by a system test that verifies against the system requirements, and lastly by a final verification and validation, which tests against the user requirements using a separate test dataset.
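The three-way split above can be sketched as follows; the proportions and seed are arbitrary examples:

```python
# Sketch of the three-way split: disjoint training, tuning and test
# datasets, shuffled with a fixed seed for reproducibility.
import random

def split_dataset(cases, train=0.7, tune=0.15, seed=42):
    cases = list(cases)
    random.Random(seed).shuffle(cases)
    n = len(cases)
    n_train, n_tune = int(n * train), int(n * tune)
    return (cases[:n_train],                  # training set
            cases[n_train:n_train + n_tune],  # tuning set
            cases[n_train + n_tune:])         # held-out test set

train_set, tune_set, test_set = split_dataset(range(100))
print(len(train_set), len(tune_set), len(test_set))  # -> 70 15 15
```

The essential point, as the section notes, is that the test dataset is held out and never touched during training or tuning.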

12.   AI clearance changed the traditional process in that pre-clearance testing and validation and post-market surveillance are now required. The pre-clearance is covered by the pre-submission, aka the Q-Submission program, which has a separate set of guidelines and is used extensively by AI vendors. It is basically a set of meetings with the FDA focused on determining that the clinical testing is statistically significant and that the filing strategy is acceptable. Last year, there were 2,200 pre-submissions out of 4,000 submissions, which shows that it has become common practice. The FDA strongly encourages this approach.

Post-market surveillance is very important for non-locked algorithms, i.e. the ones that are self-learning and supposedly continuously improving. The challenge is to make sure that these algorithms are getting better and not worse. There was a lot of discussion about post-market surveillance and a consensus that it is needed, but there are no guidelines available (yet) on how it would work.

13.   There are a couple of applicable documents that are useful when looking to get FDA clearance for an AI application: the Q-Submission process, the De Novo classification request, and the regulatory framework discussion paper.

The FDA initiative to have an open discussion in the form of a workshop was an excellent idea and brought forth a lot of discussion and valuable information. You can find a link to the many presentations on the FDA website. It was obvious that the regulatory framework for AI applications is still very much under discussion. Key take-aways are the use of pre-submissions to have an early dialogue with the FDA about acceptable clinical data for training and validation and about the regulatory product classification and approach, as well as the need for a post-market assessment, which is not defined (yet), especially for adaptive AI algorithms.

The de novo approach will also be very useful for “to-be-defined” product definitions, and it can be expected that the list of product classifications will grow as more products are introduced. AI is here to stay, and the sooner the FDA has a well-defined process and approach, the faster these products can make an impact on the healthcare industry and patient care.

Tuesday, December 31, 2019

Top ten healthcare imaging informatics trends for 2020.


Several new trends have emerged over the past five years in the imaging and informatics field. Using the terminology of the Gartner hype cycle[1], some of them have not made it beyond the innovation trigger (yet), some are at the peak of inflated expectations, others have ended up in the trough of disillusionment, and some have emerged to become somewhat mature technologies. I used the hype cycle categorization to show where the top ten trends are right now and where I believe they might end up a year from now.


1.       Augmented reality (AR) – Augmented reality superimposes a computer-generated picture on a user’s view, typically of a patient. It has great potential: imagine medical students working on a virtual patient, or performing a virtual surgery with a virtual scalpel instead of practicing on a human cadaver. Dissections can be done virtually, and one could even simulate errors, similar to what happens with a flight simulator.

There are several start-ups working on this technology, but much improvement is still needed. It might take another iteration of a Google Glass-like set of goggles to replace the big, somewhat cumbersome headsets I have seen so far. It will also be challenging to replace manual controls with 100 percent voice control and/or a combination of voice and other body gestures. This technology is still in its infancy: there are only a couple of trial sites, mostly for surgery, and we don’t yet know all of the pitfalls or where it will be used, so it is definitely at the beginning of the hype curve.

2.       Artificial Intelligence (AI) – If you attended the recent RSNA tradeshow or are following the literature, you will have seen that AI is very much in the hype phase right now. I counted more than 100 dedicated AI companies, most of which have not yet received FDA clearance for their new algorithms, and the FDA is struggling to deal with these submissions as it does not know how to handle an application that is “self-learning” and has potentially unpredictable future behavior.

There are ethical concerns as well, especially as these algorithms need a lot of data, which is currently stored in the archives of many hospitals and/or in the clouds of the big three cloud providers, and which is supposedly being mined in a way that protects patient privacy, but in many cases without patient consent. There are also concerns that some of the algorithms were tested on a limited, biased subset of patients that did not include the full range of races and cultures with different gene pools.

Given the momentum, this technology will continue along its hype curve as several new applications are cleared by the FDA. It will take another three years before the first rounds of financing run out and these companies have to show a realistic ROI and potential to their initial investors. I expect it will take another few years before AI reaches its peak and users see what it can and can’t do, before it starts to drop into the valley of disillusionment and eventually matures.

3.       Blockchain (BC) – This technology still gets a lot of attention, but it is closer to its peak as it has become clear what you can and can’t do with it. Blockchain provides a distributed ledger that is public, which makes its application in healthcare limited, as most healthcare applications are looking to preserve privacy. The fact that it is immutable, however, provides opportunities for registries, as you want those to be widely available and accessible. Occasionally you might hear about a physician practicing without proper licensing in a certain state, which would become a much less likely event if we had a publicly available registry in place.

Another application might be large public data sets with anonymized patient information that can be used to test new AI algorithms or for healthcare practitioner training. People have become aware that blockchain has limited applications in healthcare, and we are waiting for some of those to materialize so we can learn its pros and cons before it matures.

4.       3-D printing – 3D printing is not new; it has been used for more than 25 years to build prototypes and rare parts. Its prerequisite is the presence of a computer-generated model that can be interpreted by the printer and then printed using the so-called additive manufacturing technique instead of the conventional machining, casting and forging process.

What is new is that these printers have become less expensive, 3-D modeling has become more sophisticated, and the recent standardization of a PACS workstation interface has given this application a boost. Its application is somewhat limited as it is currently used mostly for surgery planning and simulation; a 3-D print can provide a visual model, especially for complex procedures. There is still a great opportunity for surgical implants and prosthetics, assuming one can print using the right materials. What better use of the technology than, for example, an artificial limb that exactly matches its paired body part, or an implant that fits exactly. Storing and labeling these 3-D models is still somewhat of a challenge, especially if one creates many of them. This technology still has to climb toward its peak before it falls back and becomes a mature technology.

5.       FHIR – This new interface standard has skyrocketed in its hype. It is widely touted as the solution to all of the current interoperability problems and has strong support from the ONC (Office of the National Coordinator) in the US. There are a few limited applications being introduced on a very big scale; for example, the Apple iPhone has a plug-in allowing you to access your medical record at participating hospitals. I have seen more deployments internationally than domestically, for example, some in western Europe and one in Saudi Arabia where patients can access nationwide scheduling using their phone apps. There are a couple of challenges that will cause it to reach its peak of inflated expectations over the next one or two years.

The biggest issue is the speed of its introduction and the corresponding lack of maturity. The first formal, normative release (R4) was not approved until early 2019. The term “normative” is deceiving, as FHIR is basically a compilation of many individual mini-specifications for the various resources, and in R4 only 11 of the close to 150 resources are in a final, normative state. One could therefore argue that less than 10 percent of the spec is stable and ready for implementation, while the remaining 90+ percent can, and most likely will, see substantial interface changes.

Also, I believe that the current lack of interoperability in healthcare is not so much due to the lack of a well-defined interface, despite the issues with HL7 version 2 and the overkill and verbosity of CDA documents, but more due to the siloed architectures of our current EMRs and other healthcare applications, and resistance by vendors and institutions to sharing information. It might require the pending “information-blocking” rule from the ONC to get some real teeth, success stories to become more widely known, and the standard to mature before it reaches its peak. I am worried about the speed and momentum, because the faster you go, the more damaging it is when you crash. As of now, FHIR is still going full speed ahead, and it might take another two or three years before we see it go past its peak.
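To make the notion of a FHIR “resource” concrete, here is a minimal sketch of what a FHIR R4 Patient resource looks like as it travels over a REST interface as JSON. The field names follow the published Patient resource; the id and demographic values are made up for illustration.

```python
import json

# A minimal FHIR R4 Patient resource as it would appear on the wire.
# Field names follow the FHIR Patient specification; values are made up.
patient = {
    "resourceType": "Patient",
    "id": "example-123",  # hypothetical logical id
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-04-02",
}

# A server would return this JSON from GET [base]/Patient/example-123
body = json.dumps(patient)
parsed = json.loads(body)
print(parsed["resourceType"], parsed["name"][0]["family"])  # Patient Doe
```

Each of the close to 150 resources (Observation, Encounter, Medication, and so on) has its own such definition, which is why the maturity of the spec has to be judged resource by resource rather than as a whole.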

6.       Cloud – The cloud right now is beyond its hype and traveling down the negative slope. Using the cloud as a back-up for data recovery has been mature for many years, but the advantages and disadvantages of outsourcing your IT solutions to a cloud platform have become clearer. From a security perspective, most healthcare institutions spend between three and six percent of their IT budgets on cyber security, and they are becoming aware that it is hard to compete with the thousands of security experts employed by the big cloud providers. It has also become clear that you still need to manage your cloud applications well, especially the configuration settings, as the 2019 Capital One breach, touted as one of the largest breaches ever, demonstrated. There is also a lack of trust among the general public about how their data in the cloud is being used by the big cloud providers and whether it is sufficiently anonymized.

The cloud is not always a good solution, as there could be a shift from the cloud to edge computing when processing real-time data. A typical response time from the cloud is about one second, which is fine when accessing and retrieving information for human interpretation, but when making a split-second decision, such as in remote surgery, the cloud is too slow. The good news is that we typically don’t need to make these fast decisions, unlike self-driving cars that need to avoid potential obstructions. So, the cloud is definitely past its initial hype; next year we’ll discover more of its inflated expectations before we see it mature in a few years.

7.       IOMT – The number of IOMT (Internet of Medical Things) devices will continue to explode. The problem is that people are becoming highly dependent on these devices, as illustrated by the recent data outage at Dexcom, whose service allows caregivers to monitor the blood-sugar levels of their kids, parents and others. When this communication suddenly became disrupted, there was a semi-panic among caregivers.

This is not the only type of IOMT device being introduced: there are intelligent extensions to a person’s handheld device that measure vital signs to support a telemedicine consult, wearables that can record and communicate with pacemakers, intelligent drug dispensers, scales and other devices. The challenges with these devices are the unrealistic reliance placed on them given their relative immaturity, unreliability and lack of redundancy.

These IOMT devices interface easily with your mobile devices, using Bluetooth for example, but what about the next step, i.e. how does the data get into an EMR? There is a mechanism in any EMR to upload the observations and vitals measured by a nurse, but how about uploading that information from my smart watch when I come into the doctor’s office?

Last but not least, there is a concern about cyber security provisions in this technology, as potential weaknesses in pacemakers and IV pumps have been published. All of that makes IOMT still immature, and it will take a few more years before it starts to slope up again and reaches a plateau of productivity.

8.       VNA – Vendor Neutral Archives (VNAs) used to be the biggest hype in medical imaging two or three years back. Initially the VNA was positioned as the best solution to avoid costly and lengthy data migrations for hospitals that were switching PACS vendors. Each major PACS vendor scrambled to catch up and replaced their PACS archive labels with a VNA label, which created confusion in the marketplace about the functionality of a “true” VNA. Subsequently, the VNA became the source for image display for non-radiology physicians requiring web interfaces and for connections with a Health Information Exchange using XDS protocols.

As of now, the VNA is being repositioned as an enterprise archive with its own set of problems. There is a lack of synchronization of data between the various PACS systems and the VNA; for example, if there is a change in the patient demographics, images may have to be deleted or other adjustments made. Standards defined by IHE exist to address this issue, but there is little uptake of those on the PACS side.

The biggest stumbling block is the lack of a uniform workflow among non-radiology and non-cardiology specialties and inconsistent access to patient orders and/or encounter information. Also, as institutions start to rely on a single VNA to manage images from 100+ institutions, there are serious concerns and issues around redundancy and scalability.

The VNA is definitely not mature yet, but its pitfalls have been identified and it is slowly going up the slope of enlightenment, which will continue for the next two or three years. A concern is that, because most of the independent VNA vendors have been acquired by PACS vendors, their rate of innovation will slow down due to the transition and lack of focus.

9.       DR – Digital Radiography (DR) has been replacing Computed Radiography (CR) over the past several years by capturing the X-ray exposure and converting it directly into an electronic signal, producing a picture within a few seconds instead of requiring a CR plate to be scanned in a reader. However, CR technology is still great for developing countries that are still converting from film to digital. DR plate technology has greatly improved and has come down in price, but is still not cheap: plates used to be more than $50k and are now getting closer to $20k. They are now wireless, so you don’t need a cable to connect a plate being used for portable X-rays, which can be a safety hazard. As a matter of fact, I heard firsthand from a radiology administrator about a technologist who tripped over the cable, resulting in a worker’s compensation case.

The battery life of the removable plates is getting better, with some lasting up to half a day or more. In addition, the way they are sealed has also improved, providing better protection against leakage of bodily fluids. However, most of them are still based on silicon on glass, so they are heavy and subject to damage if dropped. All of these factors (price, battery life, leakage protection and weight) can still be improved, which is why the technology has not reached its plateau of productivity and is still ascending the slope of enlightenment. This will continue for the next few years.

10.   POC Ultrasound – Point of Care (POC) ultrasound has big potential. It is inexpensive (~$2k to $6k), portable, and adds value when used correctly. It could potentially become the equivalent of the stethoscope for physicians. In addition, it can become a tool for non-physicians such as midwives or physician assistants.

Because of its low price point, there is a huge market opportunity in developing countries where, in the majority of cases, no diagnostic tools are available at all. Several factors will eventually push it over the hill of inflated expectations during the next one to two years. Hardware and software product features are immature: some of the probes are known to get really warm, and the software still lacks clinical measurements such as those needed for cardiac echo and OB/GYN. There is no universal architecture yet, as some of these devices can use a standard iPhone or Android phone/tablet while others require a proprietary handheld device provided by the company. Some of the POC devices require a cloud connection, which is a problem when working in an area without connectivity, and the business models vary between monthly fees and one-time purchases.

Last but not least, acquired images need to be archived, and there is an issue with matching the images with the correct metadata containing the patient information and any other important encounter-based information.

In conclusion, the Gartner hype cycle has been criticized for a lack of evidence that every technology actually goes through this cycle; however, in my opinion, it seems to apply to most of the new technologies I have seen develop over the past several decades. Also, note that the ranking of these technologies in this article is my own personal opinion, and I might be wrong; I promise to produce an update a year from now and admit any assumptions that turned out to be incorrect. The main purpose of this ranking is to serve as input when making a decision to implement these technologies. It is fine to take a bet if you are risk-tolerant and like to be on the “bleeding edge,” but if not, you might want to think twice about using a technology that is labeled immature or super-hyped. And of course, you can disagree with my ranking; I always encourage feedback and discussion.


Thursday, December 19, 2019

Inside perspective on the Fujifilm-Hitachi acquisition.


It was about 30 years ago that I first visited Hitachi in Kashiwa, which is about one hour from Tokyo, as a young CT product manager to discuss the implementation of a DICOM interface in their CT scanners.

Philips had just closed down its CT manufacturing in the Netherlands and was relying on Hitachi to provide it with a low-cost, reliable and robust CT scanner, initially for the US market and then for worldwide distribution. This turned out to be a costly mistake for Philips, as its management at that time underestimated the Japanese mentality of “business is war.” It killed the CT market for Philips, as Hitachi was slow to implement innovations and used the Philips channel to learn about the US market, which it promptly entered under its own name, selling not only CT but also MRI.

I was amazed at that time by the Hitachi modular approach. Philips modalities, such as its CT and MRI, had completely different architectures, including the backend and even the OS (the MRI used a DEC VAX, the CT a Philips minicomputer). Hitachi cloned its backend and connected whatever frontend it needed, CT, MR or otherwise, thus achieving a great economy of scale. Philips eventually had to buy CT technology back by purchasing first Picker and then Elscint, but never totally recovered in the CT marketplace, where GE and Siemens (as well as Toshiba/Canon) have been dominant.

Hitachi has always made pretty good ultrasounds; that is what Japanese companies do well. I visited the ultrasound manufacturing facility at that time and was amazed by the cleanliness of their manufacturing. As visitors, we had to wear yellow caps to distinguish ourselves, and before we entered the manufacturing floor, we left our shoes outside and put on slippers (way too small for my large Western feet, of course). I saw the most spotless manufacturing I had ever seen, despite the fact that we at Philips had a pretty clean shop as well.

Fast forward 10 years: I had moved from Philips to Kodak and was project manager for computed radiography (CR). Kodak had some of the smartest scientists in Rochester, New York, and had patented the CR technology several years prior. We referred to this as the “Lucky” patent, after the person who filed it. However, Kodak was so fixated on analogue film, which eventually led to its demise, that it had sold the CR patent to Fuji, who promptly commercialized it and, in addition, applied for many patents around it to lock down the technology. When Kodak woke up and saw its potential ten years later, it had to get those patents back; I am sure it paid more for them than it sold them for. This was so embarrassing that Kodak management told me to strip out all references to the original patent in my presentations, as I was telling the world that “Kodak invented CR.” Kodak had some serious catching up to do, and to speed up its CR commercialization, Kodak found a manufacturer in California, Lumisys, that made a small tabletop CR, which it bought the rights to.

Fast forward another decade, and Kodak sold off everything in its portfolio to avoid bankruptcy, including its imaging business with CR, which was bought by a private investment group that still owns it and rebranded it as Carestream. Fuji has maintained its position as one of the premier CR providers and also became a pioneer in the PACS business as one of the first companies offering a software-only PACS and viewer. Interestingly enough, the software was mainly developed in the US: Japan is a very good place to make hardware, but software development is not its strength, while the opposite is the case in the US. Just think about motorcycles and compare the sophistication and refinement of a Honda Goldwing with a Harley. Hitachi still makes pretty good ultrasounds but never quite made a big dent in the CT/MR market.

As of today, the Fuji PACS business has matured; Fuji has a good market share in some regions, e.g. my own area, the Dallas metroplex, where it is number one. It also has some large contracts, especially with the US government, although it seems to lag in innovation in this market. However, Fuji is without question number one in digital detector technology: CRs are being replaced with DR plates, and at RSNA 2019 Fuji introduced the first super-lightweight DR detector without glass (silicon), using a thin-layered flexible semiconductor carrier.

Hitachi never quite made a big dent in the US market with its “big iron” devices, i.e. CT and MR, compared with the big four (GE, Philips, Siemens, Canon/Toshiba), except for selling to outpatient imaging centers. In my opinion, it managed its business too much from Japan with a Japanese mindset. Had it taken its car manufacturing counterparts as an example, which design and manufacture their cars in the US to meet local market preferences, it could have been a different picture. Its ultrasounds are still pretty good; however, it will be hard to compete with a $2,000 Butterfly or $6,000 Philips Lumify. In contrast, Fuji has a nice complement with its Sonosite product line of low-cost portable ultrasounds and just announced a handheld called the iViz air. But the main issue for Hitachi is its bottom line: it is planning to lift operating margins to 10 percent or above by 2021, and its medical business does not meet that objective.

Fuji needs economy of scale. It missed out on the Toshiba deal, which went to Canon, and it apparently missed out on the AGFA deal, which was just bought by a European holding company called Dedalus, and I bet the Carestream private investors wanted too much money. So Fuji was looking for new opportunities, which Hitachi provides. Samsung is also on the prowl, as it needs to diversify beyond its electronics and mobile business and healthcare is one of its growth initiatives; however, it is funny how culture and politics sometimes dominate business, and South Koreans and Japanese just don’t play well together.

The Hitachi acquisition will get Fuji to a market share of close to 10 percent in the medical device and IT market, still about half that of the big four, so it might be looking for other potential targets. GE seems to have changed its mind about selling off its healthcare division, but who knows, maybe IBM’s Watson is next? We’ll see what happens.

In the meantime, maybe it is time for Fuji to change its name from FUJIFILM to FUJI-DIGITAL, or to take Kodak as an example and call itself FUJI-STREAM. Regardless, there will be hundreds if not thousands of employees changing their emails and business cards, while others, including myself, update their address books, but we are used to that in this fast-changing and interesting business of healthcare.

Friday, December 13, 2019

Top 10 healthcare IT cybersecurity recommendations from HIMSS forum.


The HIMSS Healthcare Security Forum in Boston is where CISOs (Chief Information Security Officers) come together to listen to their peers, government representatives and vendors on what keeps them up at night. And yes, stories about security and privacy breaches are kind of scary, as they often create significant damage to the reputations of their institutions and cause financial loss, often in the form of penalties, but even more in recovery costs. For example, as one of the speakers told us, a stolen laptop containing more than 10,000 unencrypted emails from one of their physicians resulted in a $300,000 fine, but also required hiring 30 temps to go through each individual email to find the 4,000 that contained significant PHI and required notification. This incident amounted to a direct cost of more than $1 million.

The best part of this conference, however, was not the swapping of anecdotal stories about breaches but learning what a hospital should be worried about the most, and what should be low on the priority list because one might feel overwhelmed with the many potential threats and breaches.

Here are my top 10 takeaways:

1.       Zero-day events are over-rated. A zero-day event is the first time a vulnerability is made known, before a security patch can be installed, during which time the weakness can be exploited by a hacker or malware. It is rare for exploits to take advantage of these zero days on short notice, although there was one that was identified and exploited within one hour. By far the majority of breaches are due to weaknesses that were known for a long time and that people had not gotten around to fixing for months or longer. Case in point: the WannaCry ransomware attack that infected 70,000 devices at NHS hospitals in the UK for a few days in May 2017 was the result of a Microsoft security flaw for which a patch had been available for several months.

2.       Put pressure on medical device vendors to allow for end-point security. Most large medical device vendors refuse to allow a hospital IT department to put any software or agent on their devices, let alone security-related software. However, if you negotiate it upfront as part of the purchasing process and/or are a big enough player in the provider field, they can be swayed to do this. The argument that it invalidates their FDA approval is a myth that vendors use inappropriately. Many medical devices use an embedded OS, often VxWorks, the most common real-time operating system in use. Eleven vulnerabilities have actually been discovered in the underlying network software used by VxWorks, aka the “Urgent/11,” which resulted in an FDA safety communication bulletin.

3.       Network segmentation and monitoring are essential. If you are a small hospital and have no leverage with your vendors to negotiate end-point security and/or have old legacy devices, the next best step is to monitor these devices externally. The reason for monitoring is that many of them run obsolete operating systems (XP, Windows 7 or old embedded OSs) and are vulnerable to exploitation, and, by the way, telling your hospital or radiology administrator to replace a CT or MR that cost $1 million+ because it has a security vulnerability is almost certainly not going to fly. This not only affects medical devices; it can also be a lab system running an obsolete database or webserver (notably Apache).
In these cases, there are two bywords to live by: micro-segmentation and zero trust. Micro-segmentation allows networks to be configured in software such that certain devices only talk with each other. If a device or application moves, the security policies and attributes move with it. Zero trust means that it is not sufficient to only protect the perimeter; nothing can be trusted anymore, as internal devices might become infected as well, so the focus shifts to internal protection.
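To illustrate the micro-segmentation idea, here is a toy sketch (the device names and policy are hypothetical) in which each device may only talk to an explicit allow-list of peers and everything else is denied, which is the zero-trust default:

```python
# Hypothetical micro-segmentation policy: each device may only talk to an
# explicit allow-list of peers; any other flow is denied (zero-trust default).
POLICY = {
    "ct-scanner-1": {"emr-worklist", "pacs-archive"},
    "mr-scanner-2": {"emr-worklist", "pacs-archive"},
    "pacs-archive": {"vna", "reporting-system"},
}

def is_allowed(src: str, dst: str) -> bool:
    """Deny by default; permit only explicitly listed flows."""
    return dst in POLICY.get(src, set())

print(is_allowed("ct-scanner-1", "pacs-archive"))   # True
print(is_allowed("ct-scanner-1", "billing-system")) # False
```

In a real deployment this policy lives in the network fabric (software-defined networking, VLANs, host firewalls) rather than in application code, but the deny-by-default logic is the same.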

4.       Most password schemes are pretty much useless. A vendor demonstrated that a hashed 6-character password using SHA-256, with the required upper and lower case and special character, can be cracked in less than one minute using open-source tools on a relatively fast server (not even a supercomputer). In addition, 52 percent of people use their birthdays or the names of their kids, spouses or pets as passwords, with the first character being the uppercase one, followed by a “1” and the special character “!”, which are easy to guess by anyone browsing their Facebook profile in the case of a targeted attack, and many people re-use their passwords.

Almost everyone’s account has been hacked at some point in time, whether through your Target account, Equifax account, Bank One account, or any other major breach in the past, so if you use the same password, someone will be able to access your current bank, Facebook, retirement or other accounts. One should make sure to use more advanced password hashing and, even better, two-factor authentication or, best, biometric identifiers. In addition, passwords should be changed at a minimum every 90 days (the generally recommended 30 days was suggested as being overkill).
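The arithmetic behind that cracking claim is easy to verify. The sketch below assumes a roughly 70-symbol alphabet and a rate of 10^10 raw SHA-256 hashes per second, an assumed but plausible figure for a modern GPU rig; even the worst case falls well under a minute:

```python
# Brute-force feasibility of a 6-character password: keyspace divided by
# an assumed hash rate gives the worst-case cracking time.
alphabet_size = 26 + 26 + 10 + 8   # upper + lower + digits + a few specials
length = 6
keyspace = alphabet_size ** length  # 70^6, about 1.2e11 candidates

hashes_per_second = 10**10          # assumed rate for raw SHA-256 on a GPU rig
worst_case = keyspace / hashes_per_second
print(f"{keyspace:,} candidates, ~{worst_case:.0f} s worst case")
```

This is also why a deliberately slow, salted password-hashing function (bcrypt, scrypt, Argon2) matters: cutting the attacker’s rate by five or six orders of magnitude turns seconds into years.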

5.       Inventory and purchasing management is critical. One needs to know what devices are purchased, make sure they meet basic security requirements, and know what is connected to the network at what location. Not only do you need to know what devices are connected, you also have to know their “typical” behavior. For example, a CT scanner might access an EMR for a worklist and send images to the PACS. If it suddenly starts to query the hospital billing system or tries to send images to an IP address in Russia, there is an obvious issue.

Characterizing behavior is often done using a network sniffer such as Wireshark. Network security tools can monitor this behavior, and there is a good opportunity to use AI to “learn” the typical behavior so that any deviation can be flagged. This goes back to the “zero-trust” principle mentioned earlier.
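A toy version of that “learn the typical behavior” idea, with hypothetical device and destination names: record which destinations each device talks to during a baseline window, then flag any traffic to a destination never seen before.

```python
from collections import defaultdict

# Hypothetical baseline traffic log: (source device, destination) pairs.
baseline_traffic = [
    ("ct-scanner-1", "emr-worklist"),
    ("ct-scanner-1", "pacs-archive"),
    ("ct-scanner-1", "emr-worklist"),
]

# Learn the baseline: the set of destinations each device normally contacts.
baseline = defaultdict(set)
for src, dst in baseline_traffic:
    baseline[src].add(dst)

def flag_anomalies(traffic):
    """Return the (src, dst) pairs never seen during the baseline window."""
    return [(s, d) for s, d in traffic if d not in baseline.get(s, set())]

new_traffic = [
    ("ct-scanner-1", "pacs-archive"),   # normal
    ("ct-scanner-1", "203.0.113.50"),   # unexpected external destination
]
print(flag_anomalies(new_traffic))  # flags only the unexpected destination
```

Production tools learn far richer features (ports, protocols, timing, payload sizes), but the principle is the same: model the normal, alert on the deviation.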

6.       Manage and monitor your service providers. In one case, a service engineer connected a legacy CT scanner running XP to an external, unprotected connection to download upgrades, and it was promptly infected with malware. This was fortunately detected, as the institution had proper network monitoring in place. In another case, an x-ray unit’s hard disk crashed and, as the service engineer did not want to rebuild it from scratch, he used a cloned version from another nearby hospital, which was infected with malware. And of course, any USB flash drive with upgrades must be scanned for viruses. It sounds almost too hard to believe, but one of the speakers had a major security incident because a physician who received a “free” USB stick at the airport in Moscow put it in his hospital PC, which caused great havoc.

7.       BYOD (Bring Your Own Device) is a major challenge. Of all hospitals in the US, 71 percent allow some form of BYOD. Physicians like to use their own devices, whether for texting a colleague for advice or taking a picture of a patient in the ER as evidence. In addition, it will not be unusual for a physician to connect an ultrasound probe to a smart phone, which may soon become a replacement for the stethoscope. I personally think that the remaining 29 percent of hospitals not allowing BYOD will not be able to hold out long. Allowing BYOD has major implications. First of all, the attack surface increases exponentially; second, there is big resistance against IT “taking over” personal devices. Early attempts at IT protection actually caused interference with other usage: it wouldn’t be the first time that a VPN that encrypts the clinical messaging impacts the physician’s access to, let’s say, email or, even worse, Amazon or their brokerage account.

8.       Double your security budget. Compared with other industries, the amount of money spent on healthcare cyber security is many times less, while the potential gain for hackers is many times more: a medical record fetches 10 times the price of a credit card on the dark web. Security budgets have been decreasing to about 3 percent of the overall IT budgets in healthcare. Knowing that it would be impossible, given the limited resources of hospital IT departments, to boost spending to the level of other industries, it was concluded that you should spend at least twice as much as you do today. An external security consultant should be able to benchmark your current spending against your peers and other industries if you need to convince your management.

That security can be a life-or-death factor was illustrated by the UK ransomware incident, where ERs shut down, which means that a stroke victim who has a 30-minute window to be treated could be left out. Imagine if that had happened in the US: it would be a perfect class-action suit for negligence, because IT not keeping up with patches caused serious patient harm.

9.       Limit your attack surface. There are several ways to do this. First of all, reduce the on-device footprint: use Zero-Footprint (ZFP) viewers, preferably in standard browsers, so that as soon as you log off, all information is erased. If there is any confidential information on an electronic device that can easily be carried around, stolen or accessed, make sure all of it is encrypted.

Running applications in the cloud is feasible, with some caveats: make sure that cloud access is secure and that there is redundancy and back-up. The overriding argument for the cloud is that the cloud providers employ literally thousands of security professionals to manage security, which your own resources are no match for. However, moving to the cloud means that 80 percent of what your cyber security staff knows today becomes irrelevant, as managing an application in the cloud is basically an entirely new job. Therefore, be prepared to retrain your security staff.

Consumerization of healthcare is another major factor expanding the attack surface. Consumerization has many aspects. First, it requires a different mindset from providers. Intermountain Healthcare out of Salt Lake City has been a pioneer here: it started to call patients "consumers" instead of patients, and it hired an ex-Disney executive as Chief Consumer Advocate. Regardless of whether the institution is ready, patients/consumers will come to the hospital with wearables that record an EKG, heart rate, and vitals, and with information from apps that record their glucose level or connect with their pacemaker to log cardiac events. Note that seven out of ten Americans track health data on their mobile phones. If we want consumers to take responsibility for their health, they should also be able to contribute their own health data to the EMRs and patient records used by healthcare providers. Imagine how much that again expands the attack surface.

10.   Concentrate on high-risk areas. A recent publication reported that CDs with DICOM images could be exploited by embedding an executable in the so-called preamble of these files. This caused quite a stir; however, when a new threat is discovered, one should analyze the actual risk, i.e., how likely is it that someone would "execute" a DICOM image file?
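The exploit works because the DICOM standard reserves a 128-byte preamble before the "DICM" magic bytes, and that preamble can legally contain arbitrary data, including a Windows executable header. A minimal sketch of how a file-screening script might flag such polyglot files is shown below; the function name and verdict strings are illustrative assumptions, not part of any existing tool.

```python
def check_dicom_preamble(path):
    """Return a rough verdict on a file's 128-byte DICOM preamble.

    Per the DICOM standard (PS3.10), bytes 0-127 are a free-form preamble
    and bytes 128-131 must contain the magic marker b"DICM".
    """
    with open(path, "rb") as f:
        header = f.read(132)
    if len(header) < 132 or header[128:132] != b"DICM":
        return "not-dicom"        # missing the DICM marker at offset 128
    preamble = header[:128]
    if preamble.startswith(b"MZ"):
        return "suspicious"       # "MZ" = Windows PE header hidden in preamble
    if preamble == b"\x00" * 128:
        return "clean"            # all-zero preamble, the common benign case
    return "nonzero-preamble"     # unusual, worth a closer look
```

A check like this costs almost nothing to run on an import gateway, which is exactly the point of the risk-based argument: a cheap mitigation for a low-likelihood threat, rather than a major project.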

The same applies to the potential hacking of pacemakers, infusion pumps, anesthesiology equipment, and other devices that appear in YouTube videos or in the news as hacker targets. Instead of worrying about these high-visibility, low-likelihood threats, concentrate on your legacy equipment: worry about patch management, inventory your systems, segment your network, and use security dashboards to manage your cyber security. Simply implementing a dashboard can cause the number of incidents to decrease by as much as 30 percent within six months.

Even if you are not directly involved with cyber security, a conference such as the HIMSS security forum is very useful, as it gives an insider's perspective on the challenges we are facing in healthcare.

Here are some excellent resources if you would like to learn more: