Monday, August 5, 2019

DICOM Cybersecurity Threats: Myths and Truths.

A report by Cylera Labs identified a potential cybersecurity threat in DICOM files that are exchanged
on media such as CD, DVD or flash drive, through email, as well as through DICOM web service communications (DICOMWeb).

The threat was taken seriously enough by the DICOM committee that it issued a FAQ document to address this potential issue. The threat exploits the additional header that is created for media, email and web exchange. Before discussing the potential threat and what to do about it, let’s first discuss what this header looks like and how it is used.

Media exchange files have an additional header, aka the File Meta header. A part-10 file consists of:
1.  A 128 byte preamble
2.  The characters DICM, identifying that what follows is encoded in DICOM format
3.  Additional information that is needed to process the file, such as the file type, encoding (transfer syntax), who created the file, etc.
4.  The regular DICOM file.

This additional information (3) is encoded as standard DICOM tags, i.e. Group 0002 encoding. After the Group 0002 encoding, the actual DICOM file which normally would be exchanged using the DICOM communication protocol will start. This encapsulation is commonly referred to as “part10” encoding because it is defined in part 10 of the DICOM standard. 
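For readers who want to recognize this encoding programmatically, the sketch below (plain Python, no DICOM toolkit assumed) checks for the part-10 signature defined in part 10 of the standard: a 128-byte preamble followed by the four characters “DICM”:

```python
def is_part10(data: bytes) -> bool:
    """Return True if the byte string starts with the DICOM part-10
    signature: a 128-byte preamble followed by the 'DICM' marker."""
    return len(data) >= 132 and data[128:132] == b"DICM"
```

A data set transferred over the DICOM network protocol lacks this header entirely, so this check distinguishes media/part-10 files from raw network-style encodings.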

The potential cybersecurity threat mentioned in the article involves the 128 byte preamble, as there are no real rules about what it may contain or how it is formatted. This area is defined as being for Application Profile or implementation specified use. The initial use was for early ultrasound readers, but more recently it is generally used for TIFF encoding so that a file can have a “dual personality,” i.e. it can be decoded by a TIFF reader as well as a DICOM reader. The DICOM reader simply skips the preamble and processes the rest accordingly. In the case of a TIFF encoding, the preamble contains the TIFF identifiers, i.e. 4 bytes containing “MM\x00\x2a” or “II\x2a\x00”, plus additional instructions to decode the file structure. This approach seems to have some traction with pathology vendors, who have been very slow to implement the DICOM whole slide image file set as described by David Clunie in a recent article, and could potentially be used by researchers. If not used by a specific implementation, all bytes in this preamble shall be set to 00H, as can be seen in the figure.

The definition of this preamble was identified as a “fundamental flaw in the DICOM design” in the Cylera article mentioned earlier. This assertion was made due to the fact that attackers could embed executable code within this area. This would allow attackers to distribute malware and even execute multi-stage attacks.

In my opinion, this “flaw” is overrated. First of all, the preamble was designed with a specific purpose in mind, allowing multiple applications to access and process the same file, and, if not used accordingly, it is required to be set to zeros. Furthermore, a typical DICOM CD/DVD reader imports the DICOM file by stripping off the complete meta-header (preamble, DICM identifier and Group 0002), potentially coercing patient demographics and study information such as the accession number, before importing it into the PACS.

If, for whatever reason, the import software wants to copy the DICOM file as-is, i.e. including the meta-header, it can check for non-zero bytes in the preamble and, if found, either reject or quarantine the file, or overwrite the preamble with zeros. The latter would impact potential “dual-personality” files, but the software can check for the presence of the TIFF header and make an exception for those very limited use cases (how many people are using pathology and/or research applications today?). Last but not least, don’t forget that we are only discussing a potential flaw with DICOM part-10 files, which are limited to exchange media; there is nothing to fear for the regular DICOM exchange between your modalities, PACS and view stations, as those files don’t have the File Meta header.
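As an illustration of such a check, here is a minimal sketch (Python, assuming the whole file fits in memory) that leaves an all-zero or TIFF “dual personality” preamble alone and overwrites anything else with zeros; a real importer might quarantine or reject instead:

```python
TIFF_MAGIC = (b"MM\x00\x2a", b"II\x2a\x00")  # big- and little-endian TIFF

def sanitize_preamble(data: bytes) -> bytes:
    """Zero out a suspicious preamble, keeping legitimate uses intact."""
    preamble = data[:128]
    if preamble == b"\x00" * 128:      # unused, as the standard requires
        return data
    if preamble[:4] in TIFF_MAGIC:     # dual-personality TIFF/DICOM file
        return data
    # Anything else is unexpected: overwrite with zeros
    return b"\x00" * 128 + data[128:]
```

The rest of the file, starting with the DICM marker, is left untouched, so the sanitized file remains a valid part-10 file.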

But, to be honest, anything in a file that is for implementation-specific use, or is proprietary, is potentially subject to misuse. There are Z-segments defined in HL7, private tags in DICOM, and even a “raw data” storage object in DICOM that can contain anything imaginable. These additional structures were not design flaws but were defined for very specific business reasons. The good news is that HL7 FHIR does away with Z-segments, replacing them with strictly defined extensions governed by conformance rules, but in the meantime we will be dealing with proprietary extensions for many years. Consequently, you had better know where your messages originate and whether the originator has its cybersecurity measures in place.

In conclusion, the possibility of embedding malware in the DICOM preamble is limited to media exchange files only; if present, it is easily detectable, and in almost every case the preamble is stripped off anyway prior to import. There are definite vulnerabilities with any “implementation specific” or proprietary additions to standard file formats. Knowing the originator of your files and messages is important; if there is any suspicion, run a virus scanner, have the application strip off and/or replace any proprietary information, and never ever run an executable that could be embedded within these files.

Is it an Image or a Document? Discussing the “grey area” of overlap between images and documents.

There is a major increase in the number of images to be managed by enterprise imaging systems. It is critical to
decide how to format the images and documents (DICOM or native?) and where to manage them (EMR, PACS/VNA, document management system, other?). Below are some thoughts and recommendations you might consider.

Digital medical imaging used to be confined to radiology and cardiology, and on a smaller scale to oncology. Images were created, managed and archived within these departments. If you wanted to see them you would need to access the image management system (PACS) for that department.
Over the past decade, new image sources started to appear: images taken during surgery through a scope, videos of endoscopic procedures recorded by gastroenterologists, retinal images recorded by ophthalmologists, and digital pathology imaging used by pathologists. Point of care (POC) ultrasound also began to be used increasingly, and now there are intelligent scanning probes available that can connect to a smart phone or tablet.

As the number of image sources grows, the volume of imaging is growing exponentially. Talking with informaticists at major hospitals, it seems there are new image sources every week, whether in the ER, where people are taking pictures for wound care, or during surgery, to assist anesthesiologists.
Good examples of the type of imaging that typically takes place outside the traditional radiology and cardiology domains could be seen in a recent webcast on encounter-based imaging workflow. In his presentation, Ken Persons from the Mayo Clinic talks about the fact that they have literally hundreds of alternate imaging devices that create tens of thousands of images per month that need to be archived and managed.

Departments that never recorded images before are now doing so, such as physical therapy recording videos of changes in gait after back surgery. In addition to this avalanche of images generated by healthcare practitioners, soon there will be images taken by patients themselves that need to be kept, e.g. of a surgical scar after they have been sent home. These will replace in-person follow-up exams, which will save time and effort. Managing these images has become a major challenge and has shifted from departmental systems to enterprise image management systems, i.e. from PACS to VNA’s.

How is non-image data managed? Textual data such as patient demographics, orders, results and billing information is exchanged through interface engines, which connect the 100+ computer systems in a typical mid-size hospital. Over the past 5-10 years, Hospital Information Systems (HIS) and departmental systems dedicated to radiology (RIS), cardiology (CIS) and other departments have been replaced by Electronic Medical Record systems (EMRs), and information is accessed in a patient-centric manner.

A physician now has a single log-on to the EMR portal and can access all the clinical text-based information as well as images. Textual information can be stored and managed by an EMR, e.g. for a lab result as discrete information in its database, or linked to as a document, e.g. a scanned lab report or a PDF document. In addition to these documents being managed in the EMR, they can also be managed and stored in a separate document management system with an API to the EMR for retrieval.

There is no single solution to the problem of where to manage (i.e. index and archive) diagnostic radiology reports. Their formats vary widely, as discussed in a related post on report exchange on CD’s. In addition to standardized formats such as DICOM SR’s and Secondary Capture, other formats have appeared, including XML, RTF, TXT and native PDF’s. Not only do the diagnostic report formats differ, but also where they are managed. Reports may be stored in departmental systems (RIS) or, in some cases, by a broker. A case in point is the AGFA (initially MITRA) broker (now called Connectivity Manager), which functions as a Modality Worklist provider and in many institutions is also used to store reports. In addition, reports can reside temporarily in the voice recognition system, with other copies in the RIS, EMR and PACS. This causes issues with keeping amendments and changes to these documents in sync across the various locations.

Before universal EMR access, many radiology departments would scan in old reports so they could be seen on the radiology workstation, in addition to scanning patient waivers and other related information into their PACS. This is still widely practiced, as witnessed by the proliferation of paper scanners in those departments. These documents are converted to DICOM screen-saves (Secondary Capture) or, if you are lucky, to DICOM encapsulated PDF’s, which are much smaller in file size than Secondary Captures. With regard to MPEG’s, for example swallow studies, a common practice is to create so-called Multiframe Secondary Capture DICOM images. All of this DICOM “encapsulation” is done to manage these objects easily within the PACS, which provides convenient access for a radiologist.

The discussion about images and documents raises the question of what the difference is between an image and a document. The answer determines whether the “object” is accessed from an image management system (PACS/VNA), which implies that it is in a DICOM format, or from a document management system (a true document management system, or a RIS or EMR), which assumes either an XDS document format (using the defined XDS metadata) or some other semi-proprietary indexing and retrieval system. Note that there are several VNA’s that manage non-DICOM objects, but for the purpose of this discussion it is assumed that a PACS/VNA manages “DICOM-only” objects.
In most cases, the difference between images and documents is obvious. For example, most people agree that a chest X-ray is a typical example of an image and a PDF file is a clear example of a document, but what about a JPEG picture taken by a phone in the ER, or an MPEG video clip of a swallow study? A document management system can manage these, or, alternatively, we can “encapsulate” them in a DICOM wrapper and make them images similar to an X-ray, with the same metadata, managed by a PACS.

What about an EKG? One could export the data as a PDF file, making it a document, or alternatively maintain the original source data for each channel and store it in a DICOM wrapper so it can be played back in a DICOM EKG viewer. By the way, one can also encapsulate a PDF in a DICOM wrapper, which is called an “encapsulated PDF,” and manage it in a PACS. Lastly, one could take diagnostic radiology reports and encapsulate them as DICOM Structured Reports, and do the same for an HL7 version 3 CDA document, e.g. a discharge report, encapsulating it in a DICOM wrapper and storing it in the PACS.

All of which shows that there is a grey area of overlap between images and documents, whereby many documents and other objects could either be considered images (or, better, DICOM objects) and managed by the PACS, or considered documents and managed by a document management system. Imagine you were implementing an enterprise image management and document management system; what would your choices be with regard to these overlapping objects?
Here are my recommendations:
1. Keep PDF’s as native PDF documents, UNLESS they are part of an imaging study. For example, if you have an ophthalmology study that includes several retinal images and the same study also creates PDF’s, it is easier to keep them together, which means encapsulating the PDF as a DICOM object. But if you have a PDF from, for example, a bone densitometry device, without any corresponding images, I suggest storing it as a PDF.
2.  Use the native format as much as possible:
a. There is no reason to encapsulate a CDA in a DICOM or even a FHIR document object; conversions often cause loss of information and are often not reversible. Keep them as CDA’s.
b. Manage JPEG’s and MPEG’s (and others, e.g. TIFF etc.) as “documents.” As a matter of fact, by using the XDS metadata set to manage these you are better off, because you are also able to manage information that is critical in an enterprise environment, such as “specialty” and “department,” which would not be available in the DICOM metadata.
c. Use DICOM encoded EKG’s instead of the PDF screenshots.
d. Stay away from DICOM Secondary Capture if original data is available; remember that those are “screenshots” with limited information. Specifically, don’t use the screen-captured dose information from CT’s, but rather the full-fidelity DICOM Structured Reports, which have many more details.
3. Stop scanning documents into the PACS/VNA as DICOM Secondary Capture and/or PDF’s; they don’t belong there, they should be in the EMR and/or document management system.

An EMR is very well suited to provide a longitudinal record of a patient; however, none of the EMR’s I know of will store images. Images are typically accessed via a link from the EMR to a PACS/VNA so that they can be viewed in the same window as the patient record on a computer or mobile device. In contrast, documents are often stored in the EMR, but these are typically indexed in a rudimentary manner, and most users hate wading through the many documents attached to a patient record to find the one with the information they need. A better solution for document access is a separate enterprise document management system, which should be able to do a better job of managing these.

Some VNA’s are also capable of managing documents in addition to images, preferably using the XDS infrastructure. As a matter of fact, if you are NOT using the XDS standard but a semi-proprietary interface instead to store JPEG’s, MPEG’s and all types of other documents, you might have a major issue, as you will be locked into a particular vendor, with potential future data migration issues.

Also, be aware of the differences between XDS implementations. The initial XDS profile definitions were based on SOAP messaging and document encapsulation; the latest versions include web services, i.e. DICOMWeb for images and FHIR for documents. Web services allow images or documents to be accessed through a URL. Accessing information through web services is how pretty much all popular web-based information delivery happens today, e.g. by Facebook, Amazon, and many others. It is very efficient and relatively easy to implement.

Modern healthcare architecture is moving towards deconstructing the traditional EMR/PACS/RIS silo’s to allow for distributed or cloud-based image and information management systems. From the user perspective, who accesses the information through some kind of a computer based portal or mobile device, it does not really matter where the information is stored, as long as there is a standard “connection” or interface that allows access to either an image or document using web services.

Right now is the perfect time to revisit your current architecture and reconsider how and where you manage and archive images and documents. Many hospitals have multiple copies of these objects, stored in formats that do not make sense, at locations that were dictated by easy access to the data, without considering whether they really belonged there. Instead of further cluttering the current systems, especially when planning for the next generation of systems that will be FHIR and DICOMWeb enabled, it is important to index and manage your images and documents at the location where they belong, in a format that makes sense.

Thursday, August 1, 2019

SIIM19 part 2: Standards update.

As the representatives of the various standards committees (DICOM, FHIR, IHE) reiterated during the recent 2019 SIIM conference in Denver, there are several new interoperability standards available that could make your life easier, but if the user community does not ask for them in its RFP’s and during regular vendor discussions, there is no incentive for them to be implemented.
Obviously, if you don’t know what to ask for, it gets difficult, so here is a synopsis of the new DICOM standards developments covered during the SIIM19 conference:

1.       Multi-energy CT imaging – CT scanners are being equipped to acquire images using different X-ray energy spectra, which are then processed, subtracted, etc. to provide a different clinical perspective. When the initial CT DICOM metadata was defined in the early 1990’s, there were no multi-spectral CT scanners available, or even thought of; therefore, encoding this with the “old” CT data requires a lot of customization and proprietary encoding, hence the need for a new series of objects.

Remember that it is not only the acquisition devices that need to support this new standard, which seems to be the least of the worries given the experience with adopting recent new DICOM objects; more importantly, the PACS/VNA back-ends, and especially the PACS and enterprise viewers, will need to support it as well. There are 4 additional “families” of CT objects defined, i.e. for image encoding, material quantification, labeling and visualization.

2.       Contrast administration – Most US institutions have implemented an X-ray radiation dose recording and management system, motivated by the US federal requirement to put the dose information in each CT radiology report. The next area for potential legislative requirements and implementation is contrast administration and the corresponding management, as contrast agents can also be detrimental to a patient’s body.

The DICOM contrast agent administration reporting capability will facilitate this. The implementation is very similar to dose reporting, i.e. it is recorded in a dedicated Structured Report, which provides details about the contrast protocol programmed at the injector device and what was actually delivered.

3.       3-D printing – The RSNA hosted a big pavilion showing 3-D models and applications, initially for surgery planning but eventually for implants. This is a new, upcoming area whose management is currently shared between surgery and radiology. There is a need to retain and archive these 3-D “print files,” and also for standard interfaces to the various 3-D printers. The DICOM standard added an encapsulation of these print files, called STL (an abbreviation of "stereolithography"). STL is a file format native to the stereolithography CAD software created by 3D Systems; it is also supported by many other software packages and is widely used for rapid prototyping, 3-D printing and computer-aided manufacturing. The 3-D model usage codes defined by DICOM include those used for:
a.       Educational purposes, such as training, patient education, etc.
b.       Tool fabrication for medical procedures such as radiation shields, drilling guides, etc.
c.       External prosthetics
d.       Whole or partial implants
e.       Surgery simulation
f.        Procedure planning
g.       Diagnostics
h.       Quality Control

4.       DICOMWeb – DICOMWeb provides an alternative to the traditional DICOM protocol that is very effective for exchanging information using web services, and is therefore more suitable for mobile applications. STOW, WADO and QIDO are the web service equivalents of the traditional DICOM Store, Move and Find, and there is also the capability to transfer the bulk (pixel) data only or the metadata (header data) only. The web services have been re-documented by cleaning up the existing documentation. In addition, a new enhancement has been defined for exchanging thumbnails: instead of selecting the first image of a series as the source for the thumbnail, one can now select an image that is representative of the series.
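To give a flavor of how lightweight these web services are, the sketch below (Python; the server base URL is made up for illustration) builds a QIDO-RS study-level search URL, where the filter keys are standard DICOM attribute keywords:

```python
from urllib.parse import urlencode

def qido_studies_url(base_url: str, **filters: str) -> str:
    """Build a QIDO-RS study search URL, e.g. /studies?PatientID=..."""
    query = urlencode(filters)
    return base_url.rstrip("/") + "/studies" + ("?" + query if query else "")

# Hypothetical DICOMWeb endpoint; PatientID and StudyDate are
# standard query keys for a study-level search.
url = qido_studies_url("https://pacs.example.com/dicomweb",
                       PatientID="12345", StudyDate="20190801")
```

An HTTP GET on such a URL returns the matching study metadata; STOW and WADO follow the same RESTful pattern for storing and retrieving objects.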

5.       Security – Cybersecurity is a big issue because of a recent publication about the possibility of storing images on a CD with a preamble that contains malicious code. The Security Working Group, together with the MITA cybersecurity people, has issued a publication about this issue with precautions (see press release). The preamble could contain an executable; therefore, one is encouraged to run a virus scanner and to disable running any executables from the media.

6.       Consistent protocols – Consistent acquisition protocols for XA and MR are important when a radiologist wants to compare a study with previous ones, and also to compare studies created in different organizations. A DICOM extension allows these protocols to be stored so they can be reused.

7.       Artificial Intelligence (AI) – AI is getting a lot of attention. Guidelines have been defined on how to encode AI annotations and how to incorporate them into the workflow. Assuming that the annotations are encoded in a DICOM Structured Report, a JSON representation of the DICOM SR has been defined.
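To give an idea of what that looks like: in the DICOM JSON model, each attribute is keyed by its tag, with an explicit VR and a Value array. The fragment below is a hand-made illustration (the "Nodule" code meaning is invented), not actual AI output; the SOP Class UID shown is the one for Comprehensive SR storage:

```python
import json

# Two attributes in DICOM JSON form: the SOP Class UID (0008,0016) of a
# Comprehensive SR, and a Code Meaning (0008,0104) from a content item.
sr_fragment = {
    "00080016": {"vr": "UI", "Value": ["1.2.840.10008.5.1.4.1.1.88.33"]},
    "00080104": {"vr": "LO", "Value": ["Nodule"]},
}

# The fragment round-trips through standard JSON serialization.
text = json.dumps(sr_fragment, indent=2)
roundtrip = json.loads(text)
```

Because it is plain JSON, such a representation is much easier for web and AI applications to consume than the binary DICOM encoding.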

8.       Dermatology – This area has been revitalized to address dermoscopy, which uses surface microscopy to evaluate skin lesions and can be used for early detection of skin cancer. It is an extension of the regular photography file definitions, with new codes added.

9.       Ultrasound – This group has been revitalized to come up with a proposal for tracking transducers, which is important for infection control. It is somewhat of a challenge, as not all of these probes are “intelligent” and able to exchange a unique identifier.

As mentioned earlier, if the user community does not request these new features, there is little chance that they will be implemented in a timely fashion by the manufacturers. A rule of thumb I recommend is to include in the RFP an automatic upgrade for all new DICOM features within a reasonable time (e.g. 3 years), unless federal and/or state requirements demand it sooner, as is the case for dose reporting (and might become the case for contrast administration).

Wednesday, July 3, 2019

SIIM19: Back to the Patient Perspective.

The annual gathering of healthcare imaging and IT professionals, SIIM 2019 in Denver, kicked off with a moving keynote by a patient, Allison Massari, who survived a life-threatening accident that burned over 50 percent of her body. Her story of the impact healthcare providers had on her recovery set the stage for hundreds of healthcare imaging practitioners, consultants and vendors, and gave added meaning to their professions, before talk turned to their products and services and the education of their peers on what is new and what is coming. The meeting had good “vibes,” as people were eager to learn and there was excitement about new developments.

Here are my impressions:
1.       AI is over its initial hype: The fear factor that came with the hype of the first AI applications, which made radiologists anxious about the potential impact on their jobs, has faded, and it is becoming obvious that there is still a lot of work to do and a long way to go.
Most AI companies don’t even have FDA approval for their products yet, even though the FDA is stepping up to the plate and giving special consideration to the fact that many of these products are based on deep learning, whereby the behavior of the software might change over time.
This infographic provides a nice breakdown of FDA approvals over the past several years, showing the percentage of radiology algorithms that were approved. AI is finding its way into some of the PACS applications, starting with workflow enhancements; there are dose reduction applications for CT screening and some “low hanging fruit” surrounding detection of common diseases.

2.       Enterprise imaging is still very challenging: As Jim Whitfill, the current SIIM chair, mentioned during his update, enterprise imaging is what most likely saved SIIM from its demise after the 2008 downturn in membership and conference attendance, as IIP professionals were starting to think about how to do enterprise imaging and subsequently publishing about it in the Journal of Digital Imaging.
The VNA, or Vendor Neutral Archive, became the vehicle for implementing enterprise imaging solutions; however, the non-order (aka encounter-based) workflow for the non-radiology and non-cardiology departments is poorly defined and there are many different options. See my related post, in which I identified more than 100 possible implementations. Talking at SIIM with several implementers, I identified three different strategies:
·         The "top-down" approach – This model implements a vendor-neutral archive (VNA) for radiology and/or cardiology first, and then starts to expand it to other departments; however, there is no single, uniform workflow for those departments, resulting in many different options.
·         The "bottom-up" approach – This model, which was used at Stanford University, implements a VNA beginning with one department and then adds other departments using the same (DICOM worklist based) workflow. After adding many other specialties, they are only now starting to add radiology, and eventually cardiology.
·         The “hybrid” approach – This method, which was adopted at the Mayo Clinic, is a combination of both approaches; instead of having many different workflows or only one single workflow, they settled for a handful, in this case five major workflows for the different departments. You can see details of this discussion in this short video clip.
3.       Teleradiology workflow is very challenging: There are only a few PACS vendors that do teleradiology well; as a matter of fact, many teleradiology providers build their own systems, as the requirements are so different:
a.       The turn-around time requirement is very challenging – A typical turn-around time has to be 5-10 minutes for trauma cases, which means that the workflow must be super-optimized.
b.       AI can make a major impact – Hanging protocols are very hard to define, as the sources of these studies vary widely: some studies group all images in a single series, some use multiple series, and the series descriptions are not uniform. Therefore, a simple algorithm that determines which is the PA and which is the lateral chest view and orders them consistently saves a few mouse clicks, which is time. Prioritizing studies based on certain critical findings is important as well. AI definitely assists in efficiency and in automating repeated tasks.
c.       There is a lack of patient contextual data – There are many challenges in getting the prior images for a particular study (see a renewed activity described below), as the use of CD’s for image exchange does not seem to be going away soon. This workflow is well defined by IHE XDS-I and other profiles, and in many countries other than the US there are successful standards-based image exchange implementations. However, instead of a radiologist logging into an EMR and looking at the images while having the other patient context at their fingertips, a teleradiologist logs into a PACS, sees the images, and wants the patient context from potentially many different EMR’s. It is a “reverse” workflow: instead of being EMR-driven, pulling multiple imaging studies, it is PACS-driven, wanting to pull multiple EMR documents. This is a new challenge which is not quite addressed yet; ideally one could pull CDA’s from these EMR’s, but those were really defined for a different purpose.
d.       The workflow is reversed – The traditional Order-Study-Report workflow looks different for a teleradiology application, as in many cases the order comes after the fact, making it Study-Report-Order (including “reason for study”)-Report update. Interestingly enough, when talking with teleradiologists, they only have to adjust their report based on the “reason for study” in a few cases. Regardless, this workflow needs to be addressed by their PACS.
e.       Many studies, if not all, are “unverified” – This is particularly true for battlefield and disaster applications. There is often no patient name (“civilian 1”), definitely no patient ID, and it is not uncommon to have partial studies. A PACS that depends on the traditional order-based workflow will perform very poorly.

4.       CD’s are here to stay (for a while): I have personal experience (as many do) with image exchange for me and my family, as witnessed by the stack of CD’s I carry to doctors and specialists. Actually, as some of them lack CD readers on their laptops, or have their computers locked down by their security departments, I carry a laptop with the images preloaded and ready to be viewed. My experience with my veterinarian is completely different. When I asked our neuro-veterinarian for a copy of the MRI of our dog on a CD, I was told that that is “old-fashioned,” but that they would be more than willing to send me a link to view the images in a viewer or, alternatively, allow the images to be downloaded as a zip file for me and my regular veterinarian to review, which I did. How is it that our veterinarians have this all figured out and our physicians don’t? I can come up with many reasons, but one of them was identified by a special ACR/RSNA committee which met during SIIM, and that is the lack of a standard governance agreement. Instead of having to get BAA’s from all your partners covering the HIPAA requirements, they recommend a standard document as part of the Carequality consortium, in the form of an implementation guide, which is available as a draft for public comment. In the Carequality framework, 36 million documents are exchanged each month over 16 networks based on IHE XCA standards. If we can exchange documents, there is no reason not to exchange images.

5.       Cybersecurity is a hot topic: Not a day or week goes by without a report of yet another ransomware attack or security breach exposing literally millions of patient records. There have been reports of CT scans modified to create significant findings, and of the DICOM preamble of files on CD’s being used to embed viruses on old devices that still run old, no-longer-patched operating systems (note that Windows 7 support stops in January 2020).
Key safeguards include upgrading old OS’s or, if that is not possible, isolating them from your network and disabling their USB ports (which is a problem by itself, as several modalities depend on USB to connect ultrasound, dental, or other wands and detectors), securing networks, and educating your employees on the danger of social engineering. At one facility, the open rate of spam emails dropped from 80% to less than 20% after the IT department started to send out “bogus” spam emails to alert their employees to the danger of social engineering. Another great example of this phenomenon is that of an (infected) USB drive dropped in the employee parking lot of a hospital, with the hospital’s logo on it, so that an unsuspecting employee with good intentions will insert it into a hospital network computer, resulting in great harm.

6.       New standards are available to provide greater interoperability: DICOM, FHIR and IHE have made several new additions, which are covered in my SIIM report part 2.

Overall, yet another good year for SIIM and its members. The major difference between SIIM and mega-meetings such as RSNA is that you can cover the exhibition without having to walk (and often run) many miles between booths, you have much better access to the faculty and your peers, and, last but not least, there is an abundance of hands-on workshops to experiment with new tools and standards.

For example, at the XPert IIP workshop, attendees could learn to troubleshoot DICOM headers with DVTK and the DICOM protocol with the Wireshark sniffer, on pre-loaded laptops provided as part of the training. Sessions offering DICOMWeb and FHIR hands-on experience, as well as the IIP sandbox covering Mirth interface engine programming, were also very popular. One of the themes this year was empowerment, and what better way to empower users than by providing them with the skills and tools to do their job better and more effectively.

Next year’s meeting will be in Austin, which is closer to the OTech home base (Dallas, TX); I am looking forward to another great meeting!

Monday, June 3, 2019

Enterprise or encounter-based imaging workflow options.

As institutions start to incorporate their multiple imaging sources into an enterprise solution such as a Vendor Neutral Archive (VNA), they find that the biggest challenge is dealing with the different workflows used by non-radiology departments, which in many cases must be re-invented. There are many different workflow and integration options; as a matter of fact, I have identified more than one hundred different combinations, as listed below. Hopefully these will converge to a few popular ones, driven by standardization and vendor support.

The traditional radiology and cardiology workflow has matured and is defined in detail by the IHE SWF (Scheduled Workflow) profile, which recently has been updated to SWF.b to incorporate PIR (Patient Information Reconciliation) and requires the support of a more recent version of HL7, i.e. 2.5.1 (this was optional in the first version).  PIR specifies the use of updates and merges for reconciliation such as when using a temporary ID and for “John Doe” cases.
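To make the PIR step concrete, below is a sketch of the kind of HL7 v2.5.1 transaction involved: an ADT^A40 patient merge that folds a temporary “John Doe” ID into the real patient ID. All identifiers, facility names, and field values are hypothetical:

```python
# Illustrative only: a minimal HL7 v2.5.1 ADT^A40 (patient merge) message,
# the kind of transaction PIR relies on to reconcile "John Doe" cases.
def build_merge(surviving_id, prior_id, name):
    """Merge prior_id into surviving_id; returns \r-separated segments."""
    segments = [
        "MSH|^~\\&|ADT|HOSP|PACS|RAD|20190603120000||ADT^A40^ADT_A39|MSG0001|P|2.5.1",
        "EVN|A40|20190603120000",
        f"PID|||{surviving_id}^^^HOSP^MR||{name}",   # the surviving patient
        f"MRG|{prior_id}^^^HOSP^MR",                 # the ID being merged away
    ]
    return "\r".join(segments)

msg = build_merge("123456", "TEMP-999", "DOE^JOHN")
# A receiving PACS/VNA would pull the prior ID out of the MRG segment:
mrg = [s for s in msg.split("\r") if s.startswith("MRG")][0]
prior = mrg.split("|")[1].split("^")[0]
```

A production system would of course validate the message and handle acknowledgments; this only shows the shape of the merge transaction.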

Enterprise imaging workflows outside radiology and cardiology are also known as “Encounter-Based Imaging Workflows,” in contrast to the traditional “Procedure-Based Imaging Workflow” defined by the SWF/PIR IHE profiles mentioned above. The difference is that no order is placed prior to the imaging. Despite the lack of an order, we still need the critical metadata for the images, which consists of:

1.       Imaging context attributes (body part-acquisition info-patient and/or image orientation)
2.       Indexing fields (for retrieval such as patient demographics, study, series and image identifiers)
3.       Link(s) to related data (reports, measurements)
4.       Department/location/specialty information. This is an issue because some acquisition devices (e.g. ultrasound) can be used by different departments. It is not as easy as a fixed MRI in radiology; now we have devices that can belong to different departments and be used in various locations (OR, ER, patient rooms, etc.)
5.       References to connect to patient folder especially for the EMR (patient centric access)
This assumes that the practitioner decides to keep the images, which is not always the case; a user might choose to discard some or all of the images depending on whether they need to be part of the permanent electronic patient record and/or be shared with other practitioners.
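As a rough illustration, the five metadata categories above could be captured in a simple record like the following; the field names are my own and not taken from any standard:

```python
# Sketch of the minimal encounter-imaging metadata set listed above.
# Numbers in comments refer to the five categories in the list.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EncounterImageMetadata:
    patient_id: str                 # indexing (2)
    study_uid: str                  # indexing (2)
    body_part: str                  # imaging context (1)
    department: str                 # department/specialty (4)
    location: str                   # OR, ER, patient room... (4)
    related_reports: List[str] = field(default_factory=list)  # links (3)
    emr_encounter_ref: str = ""     # reference to patient folder/EMR (5)

meta = EncounterImageMetadata("PID123", "1.2.3.4", "ABDOMEN", "OB", "ER")
```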

Assuming we want to archive the images, the first step is to figure out how to get access to the metadata. There are two different workflows:
1.       The user retrieves the meta-data first and then acquires the images
2.       The user first acquires the images and then matches them up with the metadata (typically at the same device).

The end result is the same; the workflow is slightly different because the practitioner needs to perform a query to get the data, which could be as simple as scanning a patient barcode or RFID tag, or doing a search based on the patient’s demographic data.
How is this information retrieval being implemented? There are several options:
1.       Use the DICOM Modality Worklist (DMWL) as in the SWF profile. The DMWL in the case of the traditional SWF includes the “What, Where, When, for Whom and How to Identify,” for example, performing a Chest PA X-ray (what), using the portable unit in the ER (where), at 7 am (when), for Mr. Smith (for whom), with a link to the order using the Accession Number and identifying it with Study UID 1.x.y.z (how to identify). In the case of the encounter-based imaging workflow, we only use the “for whom” and “where,” as the other information is not known.
Using only the patient ID and department, this DMWL variant is covered by the IHE Encounter Based Imaging Workflow (EBIW) profile, which is geared towards Point of Care (POC) ultrasound. The problem is that DMWL providers are not typically available outside radiology/cardiology, and that acquisition devices (think an Android-based tablet capturing images or a POC US probe connecting to a smartphone) don’t typically support the DMWL client either.
2.       Use the Unified Procedure Step (UPS) worklist as defined in the IHE EBIW, which is basically a DICOMWeb implementation of the traditional worklist, making it easier to implement, especially on mobile devices. The same issue applies as with option (1): who supports it? Not only is client support an issue, but so is the availability of the server, i.e. the worklist provider, which is somewhat of an unknown outside radiology/cardiology.
3.       Use HL7 Query as defined by the PDQ profile, either version 2 or 3.
4.       Use FHIR as defined by the PDQ-M profile. Note that one difference between V2 and FHIR is that the visit information carried in the traditional PV1 segment now lives in the FHIR Encounter resource. So, when you think about encounters, think about visits in Version 2.
5.       Listen to any V2 ADT’s, i.e. patient registration messages.
6.       Use an API, preferably web-based if you use a mobile device, direct into an EMR, HIS or ADT system.
7.       Do a DICOM Patient information Query (C-Find) to a PACS database assuming that the patient has prior images.
8.       Any other proprietary option.
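To illustrate the point made in option 4, that the V2 PV1 segment corresponds to the FHIR Encounter resource, here is a hedged sketch that maps two PV1 visit fields onto a minimal Encounter. The field choices and mapping are illustrative only, not a conformant implementation:

```python
# Sketch: mapping HL7 v2 PV1 visit fields to a minimal FHIR Encounter
# resource (option 4 above). Mapping choices are illustrative.
def pv1_to_encounter(pv1: str) -> dict:
    f = pv1.split("|")
    patient_class = f[2]             # PV1-2: E (emergency), I, O, ...
    location = f[3].split("^")[0]    # PV1-3: assigned patient location
    return {
        "resourceType": "Encounter",
        "status": "in-progress",
        "class": {"code": patient_class},
        "location": [{"location": {"display": location}}],
    }

pv1 = "PV1|1|E|ER^101^1"             # hypothetical emergency visit in the ER
encounter = pv1_to_encounter(pv1)
```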

The second step is to add information that was not provided by the initial metadata query, i.e. the Accession Number. The Accession Number was originally intended to link an image or set of images with an order and the result (diagnostic report and subsequent billing). Even though there is no order, you’ll find that the Accession Number is critical, as it is used by the API from an EMR to a PACS and/or VNA to access the images, to link to the results and notes, to make the connection to billing, and to associate with study information (Study Instance UIDs).

A so-called “Encounter Manager,” an actor defined by IHE, could issue a unique Accession Number. This encounter manager could reside in a PACS, VNA, or broker. To make sure that its Accession Numbers are unique and distinct from those issued by the RIS or EMR, most institutions use a prefix or suffix scheme. Note that the acquisition device does not have to deal with this Accession Number issue; a DICOM router could query for the Accession Number and automatically update the image headers before forwarding them to the PACS/VNA.
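The prefix scheme could be sketched as follows; the “ENC” prefix and the counter format are hypothetical choices, not part of any IHE definition:

```python
# Sketch of the prefix scheme described above: an "Encounter Manager"
# hands out Accession Numbers with a department prefix so they cannot
# collide with RIS- or EMR-issued numbers.
import itertools

class EncounterManager:
    def __init__(self, prefix: str):
        self.prefix = prefix
        self._counter = itertools.count(1)

    def next_accession(self) -> str:
        # zero-padded counter keeps numbers sortable and fixed-width
        return f"{self.prefix}{next(self._counter):08d}"

em = EncounterManager("ENC")   # vs. e.g. "RIS" for radiology orders
acc1 = em.next_accession()
acc2 = em.next_accession()
```

A production manager would persist the counter and likely fold in a site identifier, but the collision-avoidance idea is the same.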
The next (optional) step is that an encounter might need to create a “dummy” order, because many EMR’s or HIS systems cannot do any billing or recognize the images that are created without an order, so in many cases, an order is created “after the fact.”

The last step is to notify the EMR that images are available. There are several options for that as well:
1.       Create a HL7 V2 ORU (observation result) transaction as defined by the IHE EBIW. This is probably the most common option as EMR’s typically support the ORU.
2.       Create a HL7 V2 ORM with order status being updated.
3.       Create a DICOM Instance Availability (IA) notification. This is actually used quite a bit (I have seen Epic EMR implementations that use it). IA carries more detailed information than the HL7 v2 options.
4.       Send a Version 2 MDM message which has the advantage that you can use it to provide a link to the images.
5.       Use the DICOM MPPS transaction.
6.       Use the (retired) DICOM Study Content Notification (still in use by some legacy implementations)
7.       Manually “complete” entry in the EMR
8.       Use proprietary API implementations.
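As a sketch of option 1, the most common one, here is a minimal ORU^R01 informing the EMR that a study is available. The segment contents are illustrative and far from a complete production message:

```python
# Sketch: a minimal HL7 v2 ORU^R01 telling the EMR that images are
# available (option 1 above). Field values are hypothetical.
def build_oru(patient_id: str, accession: str, study_uid: str) -> str:
    segments = [
        "MSH|^~\\&|VNA|HOSP|EMR|HOSP|20190603130000||ORU^R01|MSG0002|P|2.5.1",
        f"PID|||{patient_id}^^^HOSP^MR",
        f"OBR|1|{accession}||US^Point-of-care ultrasound",
        f"OBX|1|RP|StudyUID||{study_uid}||||||F",  # RP = reference pointer
    ]
    return "\r".join(segments)

oru = build_oru("PID123", "ENC00000001", "1.2.840.99.1")
```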

The scenarios described above assume that patients are always registered and encounters scheduled. It becomes more complex if there is no patient registration, such as a POC US being used by a midwife at a patient’s home. The same applies to emergency cases, e.g. in an ambulance, where the only information might be that the patient is a female in her 30’s, or at a disaster area or battlefield (“citizen-1”). In this case, we need a solution similar to PIR to reconcile the images with information entered after the fact, resulting in updates and merges, typically done using HL7 transactions.
Another future complication will be patient-initiated imaging, for example a patient who has a rash and wants to send an image taken with his or her phone to a practitioner, or who sends images of a scar after surgery to make sure it is healing properly.

As you can see from the above, the challenge with Encounter-Based Imaging is that there are many implementation options; in theory one could multiply the number of options for each step and come up with many different combinations (2 times 8 times 8 yields, theoretically, 128 different options!).
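The arithmetic is easy to verify: 2 metadata workflows, times the 8 retrieval options, times the 8 notification options. The short labels below are my own shorthand for the options listed earlier:

```python
# Enumerate every combination of the steps described above.
from itertools import product

workflows = ["metadata-first", "images-first"]
retrieval = ["DMWL", "UPS", "PDQ", "PDQ-M", "ADT-listen",
             "API", "C-FIND", "proprietary"]
notify = ["ORU", "ORM-status", "IA", "MDM", "MPPS",
          "SCN", "manual", "proprietary"]

combinations = list(product(workflows, retrieval, notify))
n = len(combinations)   # 2 * 8 * 8 = 128
```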

IHE so far has addressed only two use cases, POC ultrasound and photos, in the EBIW profile, which specifies the DMWL for getting the demographics and an ORU for the results. In practice, a typical hospital might use 5-10 different options across its departments. Hopefully a couple of “popular” options will emerge, driven preferably by new IHE profile definitions and supported by the major vendors. In the meantime, if you are involved with enterprise imaging, be prepared to spend quite a bit of time determining which option(s) best fit your workflow and are supported by your EMR/PACS/VNA/acquisition modality vendors. You might also need to spend a significant amount of time training your users on any additional steps your solution requires.

Thursday, April 25, 2019

Why you should attend SIIM 2019, June 26-28

One of the major issues facing healthcare imaging and informatics professionals is the lack of transparency in communication between modalities such as CT, MRI, ultrasound and CR, and the PACS/RIS and VNA. When images and related information such as dose information and measurements do not get across, connections are rejected, or changes and updates are not propagated in a timely manner or at all, most PACS administrators are stuck between vendors finger-pointing at each other about the root cause.

Many vendors lock up the access to their log files requiring a (costly) service call to get someone to look at it, which takes time, assuming they have the skills to do that. This has become the main reason why people are attending advanced training classes and why you should consider attending the annual SIIM conference this year.

Imagine that an image is rejected by the PACS, or you can’t read images from the CD of a patient who is scheduled for surgery and the surgeon really needs access to the patient’s CT study. The DICOM Validator toolkit (DVTK) will validate the DICOM header and tell you what is incorrect so you can fix it with an editing tool. Imagine that your system randomly loses some of the images in a study: the Wireshark DICOM sniffer will let you see exactly what is happening and why images are being rejected. Imagine you have performance issues: the same Wireshark will show you the exact timestamps of the DICOM communication protocol and application-level responses. Imagine there is missing information in a modality worklist: the Mirth HL7 interface engine allows you to map information from a field the worklist provider does not recognize into one that it does, and, as an extra bonus, will store the HL7 orders in a temporary queue that can be replayed in case there are any hiccups. Imagine the radiology report has some formatting issues; again, Mirth will be able to solve them. By the way, all of these tools are free for you to use.
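To give a feel for the worklist fix described above, here is a sketch of the kind of field remapping an interface engine performs: copying a value from a field the downstream system ignores into one it reads. It is written in Python for brevity (Mirth transformers are actually written in JavaScript), and the segment/field positions and message content are purely illustrative:

```python
# Sketch: copy an HL7 field from one segment/position to another,
# the kind of transform an interface engine applies to each message.
def remap_field(msg: str, src: tuple, dst: tuple) -> str:
    """Copy field src=(segment, index) into dst=(segment, index)."""
    segs = [s.split("|") for s in msg.split("\r")]
    value = next(s[src[1]] for s in segs if s[0] == src[0])
    for s in segs:
        if s[0] == dst[0]:
            while len(s) <= dst[1]:   # pad short segments with empty fields
                s.append("")
            s[dst[1]] = value
    return "\r".join("|".join(s) for s in segs)

order = ("MSH|^~\\&|RIS|HOSP|MOD|RAD|20190425||ORM^O01|1|P|2.3\r"
         "ORC|NW|A100\r"
         "OBR|1|A100||CHEST^Chest PA")
# e.g. copy the placer order number (ORC-2) into OBR-18 so the
# worklist provider can pick it up from a field it actually reads
fixed = remap_field(order, ("ORC", 2), ("OBR", 18))
```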

Given the increase in de-constructed PACS, VNA’s that are connected to multiple PACS systems, which require constant synchronization, and the proliferation of zero footprint viewers that can be launched from an EMR, integration is getting more and more challenging requiring complex skills. SIIM leadership has recognized that teaching the advanced skills on how to use these tools will fill a major need and has expanded the program for this year. There will be sessions on using them in a very hands-on manner to provide you with this advanced knowledge.

There are other reasons to attend SIIM as well, e.g. spending time with vendors to kick the tires, learning about the latest in AI, networking with your peers to share experiences, and, last but not least, enjoying the great Rocky Mountains. However, all of this is, in my opinion, secondary to acquiring the skills needed to support your PACS in a professional manner.

So, this is a very good reason to attend this year; I am looking forward to seeing you in Denver at these advanced sessions!

Monday, March 18, 2019

Impact of the Philips-Carestream acquisition on the end users: Good, Bad?

Since the recent Philips acquisition of the Carestream IT business, I have received several phone calls and had discussions with both elated and very concerned end-users. Interestingly enough, the positive feedback was mostly from Philips PACS users and most concerns were expressed by Carestream clients.

The Philips users were mostly excited about the Carestream enterprise archiving and storage component, which hopefully will replace the proprietary Philips back-end and integrate better with enterprise archiving systems such as VNA’s. It is no secret that the proprietary Philips image format works very well for the Philips workstation display, as it provides great (perceived) performance, but getting the data out of the system in a standard DICOM manner is not as easy. Synchronizing changes in the Philips archive with the VNA cannot be automated due to the lack of IOCM IHE profile support. It is very challenging, to say the least, judging from the spike in attendance in our DICOM classes from Philips users who want to learn how to use DICOM network sniffers to find out when, where, and why certain studies are not exchanged between the Philips system and their VNA.

The strength of the Philips PACS is definitely its radiology worklist: radiologists really like its user-friendliness, and PACS administrators like that they can train a new user in 15 minutes, unlike with some other PACS user interfaces. This is important if, for example, you get a new batch of 15 residents to train every couple of months. So, the ideal match would be the Philips front-end with the Carestream back-end; however, that goes against the current trend of EMR-driven worklists for PACS.

From Carestream customers, I have heard mainly concerns that Philips might “contaminate” their current relationships and/or upset their support and service structure. It is rather common in this industry, when one company takes over a smaller one, to see people leave, service centers consolidated (not always for the better), and support take a major dip. In addition, the product they are currently using or planning to purchase might become obsolete due to product consolidation, especially if the main objective of the acquisition was not the technology but buying the channel and the existing customer base.

So, what can we expect? Time will tell, but the good news is that both companies have a culture that is different from many other players in this field, which I know first-hand having worked for both Philips and Carestream’s predecessor (Kodak). Consequently, I have some level of confidence that this is going to be a good thing. But again, only time will tell, and in the meantime as a Philips or Carestream customer you might want to ask for solid guarantees from your suppliers and keep all options open.

Thursday, February 21, 2019

HIMSS19: Are we finally unblocking patient information?

Busier than ever
More than 45,000 visitors to the world’s largest healthcare IT conference, held in Orlando, Fla., browsed through 1,200 booths looking for IT solutions for their facilities and listened to the many educational sessions. There is still a dichotomy between what was shown and the real world: the IHE showcase demonstrated 12 use cases where information flowed seamlessly between different vendors, while in practice it is not always so smooth, based on stories from the trenches.
Here were my observations from this conference:

Distinguished panel at Keynote
1.       Interoperability, are we there yet? The meeting was dominated by the recent information-blocking rule, which was unveiled by the US Department of Health and Human Services (HHS) literally the day before the convention started. As Seema Verma, the US CMS administrator, pointed out in her keynote presentation, the government has given out US $36 billion in incentives to implement electronic health records with not much interoperability to show for it, so now it is time for the industry to step up.
Former US CTO Aneesh Chopra added to this saying that the CCD’s (Continuity of Care Documents) that are exchanged right now might not be the best solution to exchange patient information, but we need to look for other means such as open APIs, which can be used to tap into any EMR for information. These open APIs will become a requirement by 2020 according to the HHS. Penalties to health information exchanges and health information networks could be up to $1 million for lacking interoperability. Maybe this will help, however the rule is expected to get pushback from some of the stakeholders. For example, the AHA was quick to point out that it disagreed with certain parts of the requirements: “We cannot support including electronic event notification as a condition of participation for Medicare and Medicaid,” stated AHA Senior Vice President for Public Policy Analysis and Development Ashley Thompson.

2.       Open API, is that sufficient? An open API is merely a “connector” that allows information to be exchanged; however, as was noted in the same keynote speech, if the only thing that can be exchanged is the patient name, sex and race, or if the clinical information is not well encoded and/or not standardized, the API is not of much use.

That is why implementation guides, based on use cases and specifying the many details of the information to be exchanged, are critically important. The good news is that these implementation guides are a key component of the new FHIR standard: they can be interpreted electronically and are defined according to a well-defined template. The Da Vinci project, which has already defined 12 of these guides as part of the balloted FHIR standard, will facilitate the exchange. The focus of these guides is on provider/payer interactions and includes, for example, medication reconciliation for post-discharge, coverage requirements information, and document templates and rules. The booth demonstrating these use cases was one of the busiest in the IHE showcase area.

3.       What about social determinants? Health care determinants follow the 20/20/60 rule: 20 percent of ailments are determined genetically, which can increasingly be predicted by looking at your DNA sequence; 20 percent are influenced by a healthcare practitioner such as your doctor; but 60 percent, i.e. the majority, is determined by the patient through his or her own actions and social determinants. For example, if you are genetically at risk for a heart condition, and your doctor has already placed a stent in one or more of your coronary arteries to help blood flow to your heart muscle, but you don’t change your lifestyle, you won’t get any better. Now, let’s say you are homeless and depend on food that is not good for your condition; you could be in trouble. It would be good if your physician knew those social factors, which could also include where you have traveled recently. However, there are no “codes” available to report this in a standard manner; the majority (60 percent) of health care determinants are not encoded, so there is much work to be done in this area.

Impressive number of providers
participating in Commonwell
4.       How is information exchanged between providers? The ARRA (American Recovery and Reinvestment Act) from the previous administration put money aside to establish public Health Information Exchanges (HIE’s). Unfortunately, many of these HIE’s folded after the grant money ran out; notably, the HIE’s in North Texas and Tennessee, among many others, shut down after failing to find a sustainable business model.
Several vendors took the initiative to establish a platform for information exchange, having figured out that stringing connections one-by-one between healthcare providers would be much more expensive than creating their own exchange, which is how the Commonwell non-profit started. As of the conference, it had connections to 12,000 providers, probably 10 to 15 percent of all providers, which is a good start towards gaining critical mass. Cerner seems to be the largest EMR vendor in this alliance. Epic was notably absent; it has been the main driver of a somewhat competing alliance called Carequality, with different functionality but similar objectives, i.e. exchanging information between EMR’s from different vendors.
The good news is that there is now a bridge established between these two platforms, which again makes the critical mass even larger. This situation is somewhat unique for the US as other countries have government initiatives for information exchange but for those countries without an initiative, the same model might work. This will hopefully solve the problem that was mentioned by one of the providers who said that it has been relatively easy to exchange information between his EMR (which happened to be Epic) and others as long as it was an EMR from the same vendor, but very hard if not impossible to get anything out of an EMR from another vendor into his EMR. This is a great effort, which together with the anti-blocking rules from CMS, might finally allow healthcare information to be exchanged.

One of the many portals
5.       Are patient portals finally taking off? It is still a challenge to access health care information, as there is no universal portal that collects all of the information from different providers. You might need to maintain access to the information held by your primary physician, your specialist(s), your hospital, and even your lab work provider. One way to consolidate this information is to have a single provider, such as the VA for veterans, whose portal, My HealtheVet, has been relatively successful. How this will work as the VA increases its outsourcing to private commercial providers remains to be seen. If you are on Medicare or Medicaid, CMS provides a standard interface, which is used by several (free) patient portal providers where you can log in to see all of your claims, prescriptions and other relevant information. If you are neither a veteran nor covered by CMS, there is not much interoperability; however, those two groups cover enough patients to start building critical mass as well.

Standing room only at cloud
6.       Are cloud providers making any progress? One of the speakers, who happened to work for New York Presbyterian, claimed that they plan to have 80 percent of their applications and infrastructure in the cloud. There are more advantages to the cloud than potentially reducing cost, such as easier access by patients and the potential to run AI algorithms for clinical support, analytics and decision support. Machine learning is more effective when there is a lot of data to learn from, hence the advance of cloud archiving.

The big three cloud providers (Amazon, Google and Microsoft) are more than happy to take your business; the Amazon booth was packed every time I passed it. However, they still have a steep learning curve: even though they claim to have open API’s and a healthcare platform with a lot of features, they have to learn in a few years the expertise that healthcare vendors have accumulated over many decades. The good news is that they have very deep pockets, so if they get serious about this business, they could become major players.

Get the Uber or Lyft app in your EMR!
7.       Is Uberization of healthcare happening? When people talk about Uberization, they typically refer to the business model that provides easy access by consumers through mobile media, accountability of the providers, and tapping into a completely new source of providers, such as private drivers who suddenly become transportation providers or, in the case of Airbnb, private home owners who become innkeepers.
This is happening in healthcare as well, as there is a big increase in tele-health providers who offer phone access to patients who want advice anytime, anywhere. One physician I spoke with, who does this from his home, described telehealth as being as easy as Uber: a physician can sign up anytime and just log on to take calls from patients for as long as he or she decides. He described providing life-changing advice, especially to patients who live in remote areas and otherwise would not be able to seek medical advice because of their remoteness.
In addition to telehealth services, Uber and Lyft were also promoting their transportation business to providers, to reduce potential no-shows by patients who have trouble with transportation. A provider can contract with either one of these to serve their transportation needs.
Note the wearable EKG sensor as well as monitor
8.       What about the wearables? At the 2018 conference, Apple introduced access to medical records through its health app via a FHIR interface. Their provider list has been growing steadily and is now close to 200, including the VA, which by itself accounts for 170 medical centers and more than 1,000 clinics. This is a significant number, but not even close to the number of providers on the Commonwell platform, for example, whose members exceed 10,000. Therefore, there is still a lot of progress to be made.
There was an increase in intelligent detectors that can communicate vital signs and other clinical information via Bluetooth to a mobile device, allowing patients, for example, to be released from the hospital earlier to go back home, which is safer and more cost-effective.

9.       Are we safe from hackers yet? There has been a major increase in cybersecurity investment, which has become a necessity as healthcare has become a major target for hackers and ransomware opportunists.
Huge Security pavilion
Healthcare providers are making big investments in personnel and tools to protect themselves. The cybersecurity area of the conference was indeed huge, as many new companies are offering their services. Security typically accounts for about 6 to 8 percent of an IT budget, but some organizations spend as much as 12 percent. Imagine the IT staff of a large organization: if you have 500 IT employees, there could be as many as 50-60 staff dedicated to cybersecurity.

Based on recent incidents, we still have a lot of work to do in healthcare to protect patient information, especially at the periphery, as medical devices in many cases appear to provide easy access points to a hospital’s back-end. This risk is one of the reasons that the FDA has been requiring a cyber security plan to be filed with every new medical device clearance.

plenty of FAX apps
10.   When are we going to get rid of the fax machines? About six months ago, CMS publicly announced that it wants to get rid of all fax machines in healthcare by 2020. This is only one year away; however, in practice the majority of healthcare communication is still done through the ubiquitous fax machine. A good way to transition might be to exchange documents using the open API and then use Natural Language Processing (NLP) to search for medications, allergies, and other important information that can be processed and potentially imported into the receiving EMR. In the meantime, there were still many small companies advertising smart ways to distribute faxes, and I predict this will continue for several years to come.

In conclusion, this was yet another great conference. The emphasis was on unblocking the information that is locked up in many healthcare information systems and that, until now, could only be exchanged if you happened to have an EMR from the same vendor as its source, were lucky enough to have a provider with access to Commonwell or Carequality, or had a provider using one of the relatively few public Health Information Exchanges (HIE’s). Hopefully the industry and providers will start to cooperate on making this happen; we’ll see next year, when we will be back in Orlando, FL.

In the meantime, if you are baffled by some of the terminology, you might consider our FHIR, IHE or HL7 V2 training publications and classes.