Monday, October 14, 2019

Volunteering in Africa

Black Lion hospital CT/MR pavilion

I stepped out of the hotel lobby in Addis Ababa, Ethiopia, to a tropical downpour. No way would I have been able to walk to the hospital without being totally soaked, including my backpack with my laptop. The doorman saw my desperate look and told me to wait, as he was talking with a gentleman in a nice car waiting in front of the hotel. He then told me to step in and that he would take care of it. I told the driver that I was on my way to teach in the local hospital and we had a nice conversation while he made sure I arrived dry and safely. When I wanted to pay him, he refused, saying, “Thank you for what you do for my country.”

This is the kind of experience you can expect when working in a developing country as a volunteer. Not only do you make a big difference by spending your time and sharing expertise, but it is also very rewarding, and excellent “feel-good” therapy. The people you interact with greatly appreciate your contribution; not only the professionals that directly benefit from the shared knowledge, but many others that you encounter on the street or at your hotel.

On this particular trip, I was doing a RAD-AID-sponsored IT assessment of the PACS system at the Black Lion Hospital in Addis Ababa. We were trying to solve a number of issues, including MRI images coming up unreadable at the PACS, figuring out how to connect their home-grown EMR to get a worklist going at the modalities, installing a teaching file solution, and several other small issues they were encountering. In the week prior to that, I taught a PACS bootcamp to 13 PACS administrators in Dar es Salaam, Tanzania, which was very well received. I like nothing better than the "Aha, is that how it works?" glint in the eyes of these professionals.

Teaching PACS bootcamp in Dar es Salaam

People sometimes ask me what it is like to teach or work with healthcare professionals in developing countries, and I tell them that it is no different from teaching in the US or any other country. There are smart and eager-to-learn people everywhere. The problem in developing countries is that there is very poor or no support from the vendors that provide the equipment, as they don't spend the time and effort to create a support structure with well-trained engineers. Therefore, the hospital staff often has to figure out issues by themselves, which is why training by organizations such as RAD-AID and the SIIM Global Ambassador program is so important and makes such a big difference.

I would encourage each and every SIIM member to consider volunteering. I know it might be somewhat out of your comfort zone, but I can guarantee you that not only will it make a major difference on the receiving side, it will be equally rewarding for you as a person, as you will grow and gain new experiences. I myself am definitely hooked and can't wait for my next assignment. I'll do this as long as I am able, and I'm thankful to SIIM for supporting such a great cause.

Wednesday, September 18, 2019

Different levels of AI applications in diagnostic imaging


There are different levels at which AI can be applied in radiology as well as other diagnostic imaging applications, depending on the step in the workflow, from acquisition to interpretation, post-processing and analysis.

The first level is at the image acquisition stage. For example, one of the challenges with doing a CT scan is to have the patient center coincide with the center of the radiation beam, which results in optimal dose distribution and corresponding image quality. In addition to the patient being centered, the distribution of the radiation dose depending on the body part is also important, e.g. a lower dose for the head than for the pelvis. Instead of having a technologist make an educated guess, the machine can assist with this and automate the positioning process, again to optimize dose, which means not using more than necessary.

De-noising of images is also an important feature. Typically, lower radiation techniques create more noise, which can compromise a diagnosis and ultimately patient care. This is especially true for screening, where there is no direct indication to perform a CT study and limiting dose is important. An algorithm can be taught what noise looks like in a typical low-dose image and use that knowledge to apply image processing that removes the noise, allowing a lower dose technique to be used. The same principle is used to remove common artifacts such as those created by metal parts in an X-ray. If the algorithm is taught how a typical artifact shows up in an image, it could remove it or, at a minimum, reduce it, thus improving image quality and contributing to a better diagnosis.

An important application for AI would be regulating the workflow, i.e. determining which cases should be considered "urgent" aka STAT based on automatic abnormality detection. These cases would be bumped to the top of the worklist to be seen by the radiologist.
The opposite is true as well: some of the images could be considered totally "clear," i.e. having no indication and therefore not needing to be seen by a radiologist. This is useful in mass screenings, e.g. for TB among immigrants, or black lung disease for people working in coal mines. These "normal" cases could be eliminated from a worklist.
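The triage logic described here can be sketched in a few lines. This is a minimal, hypothetical illustration: the `ai_flag` field stands in for whatever abnormality classification a real AI service would return, and all names (`Study`, `triage`) are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    ai_flag: str  # hypothetical AI output: "critical", "routine", or "normal"

def triage(worklist):
    # Drop AI-confirmed normals and bump critical cases to the top;
    # sorted() is stable, so arrival order is preserved within each tier.
    priority = {"critical": 0, "routine": 1}
    remaining = [s for s in worklist if s.ai_flag != "normal"]
    return sorted(remaining, key=lambda s: priority[s.ai_flag])

worklist = [Study("A1", "routine"), Study("A2", "critical"), Study("A3", "normal")]
print([s.accession for s in triage(worklist)])  # ['A2', 'A1']
```

In a real deployment the flag would of course be a probability with a tuned threshold, and a flagged "normal" would still be reviewed under whatever quality assurance policy the site requires.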

The next level of AI is at the post-processing and reading level. CAD (computer-aided detection) is probably the most common form of AI, where an image is marked with an annotation indicating a certain finding, which serves as a "second opinion."

AI can also increase productivity dramatically by assisting in creating a report. Macros can be used to automatically create sentences for common findings, again based on learning what phrases a user would typically use for a certain indication.
Standard measurements such as those used in obstetrics can be automated. The algorithm can detect the head and automatically indicate its circumference and diameter, which are standard measurements of growth.

One of the labor-intensive activities is the manual contouring of certain anatomical parts, such as the optic nerve in skull images. This contouring is used by radiation therapy software to determine where to minimize radiation to prevent potential damage. Automating the contouring process could potentially save a lot of time.
Automatic labeling of the spine vertebrae for the radiologist also saves time, which could also improve accuracy. This time savings might only be seconds, but it would add up when a radiologist is reviewing a large number of such cases.
Determining the age of a patient based on an X-ray, such as of a hand, is a good example of quantification; another example is measuring the amount of calcium in a bone, indicating potential osteoporosis.

Some indications are characterized by a certain number of occurrences within a particular region, for example the number of "bad cells" indicating cancer in a certain area when looking at a tissue specimen through a microscope or, in the case of digital pathology, displayed on a monitor. Labeling particular cells and automatically counting them offers big time savings for a pathologist.

One of the frequent complaints heard about workstation functionality is that the hanging protocols, i.e. how the images are organized for a radiologist, are often cumbersome to configure and do not always work. AI can assist by providing "self-learning" hanging protocols based on radiologist preferences, and by being more intelligent in determining the body part so that the applicable hanging protocol can be selected.

As AI becomes integrated in the workflow, the expectation is that it is "always on," meaning that it seamlessly operates in the background, without a user having to push any buttons or launch a separate application to get an AI "opinion."

One of the challenges is also to make sure that relevant prior studies are available, which might need to be retrieved from local and/or remote image sources, for example from a VNA or cloud. AI can assist by learning what prior studies are typically used as a comparison and do an intelligent discovery of where they might be archived.

Not only do radiologists want to see prior imaging studies, but also additional medical information that might be stored in an Electronic Health Record or EMR, such as lab results, patient history, medications, etc. Typically, a radiologist would have access to that information, especially as most PACS systems are migrating to become EMR-driven. For teleradiology companies, however, the lack of access to EMR data is a major issue, where AI might be able to assist.

AI is just starting to make an impact; we have only seen the tip of the iceberg, but it is clear that major improvements can be made using this exciting technology.



Wednesday, September 11, 2019

The evolution of PACS through the years.

PACS systems have evolved quite a bit over the past 25 years. This essay provides the background of where PACS started, where we are now, and where we are headed, covering the four essential PACS components: the P for Picture (viewing), the A for Archiving (image and information management), the C for Communication, and the S for System.

Regarding the P for Picture: in the first generation of view stations, the software was not very sophisticated and had only basic functionality, and the viewers were thick clients, meaning that the images had to be downloaded to the local workstation and all of the processing was done locally. These view stations mimicked a film alternator, both in size and functionality, mostly displaying images in a landscape format.

By the second generation, radiologists discovered that they did not really need 8 monitors but could view cross-sectional studies using "stacking," virtually integrating the 3-D volume in their minds. The viewers added more sophisticated hanging protocols, aka DDP's or Default Display Protocols, a term that refers back to how films were "hung" on a light box. How the images are sorted can depend on the modality (e.g. mammography), body part (e.g. chest or extremity), specialty (e.g. neuro) and individual preferences. Re-arranging images and sorting through literally hundreds of them in the case of a cross-sectional study such as a CT or MRI is a burden for the radiologist and takes time. Inconsistent display can also be a cause of medical errors: imagine that the new study is always displayed on the top of a monitor and the prior one on the bottom and that, for some reason, this is reversed; this could cause the radiologist to report the wrong study. Voice aka speech recognition has become routine. Some studies, initially mammography, are subjected to Computer Aided Diagnosis, which creates a "second opinion" for the radiologist by marking the images with CAD marks for clinical findings.

The 3rd generation workstations accommodate different specialties in addition to radiology, such as cardiology, ophthalmology, dermatology, and others, commonly referred to as "ologies." The viewer becomes a universal viewer, which instead of a thick client is now a thin client that does not leave any trace of patient information after the user has logged out, aka a "zero-footprint" viewer. Some modalities create images and/or studies with huge file sizes in excess of 1 GigaByte, which makes it more efficient to do what is called "server-side" rendering, whereby the viewer functions as a remote window to a server which performs the processing.

The fourth generation of viewers implement web services that also allow for mobile access, i.e. look at the images from a mobile device whether it is a tablet or smart phone using the DICOMWeb protocol. What used to be called CAD is now replaced with Artificial Intelligence or AI which spans many more detections of various diseases in addition to automating the workflow for the radiologist. As an example, AI can detect a critical finding and automatically bump the study to the top of the worklist. It can also remember and learn physician preferences and support his or her workflow.

The next component of the PACS is the A for Archiving and image and information management. The early generations of PACS systems were limited by the cost of archive media. Most systems would archive studies beyond a certain age on a second or third tier of slower and less expensive media, such as magneto-optical disks or tape, or even store them off-line.

In the second generation, the big Storage Area Networks and Network Attached Storage devices were introduced, having redundant arrays of inexpensive disks (RAID), which is still the most common configuration. Because of some natural disasters and hardware failures, most hospitals learned the hard way that redundancy and backup are critical, so most of these archive systems by now have at least one mirrored copy and a sound backup. CDs became the standard for image exchange between physicians.

In the 3rd generation, data migration as well as life cycle management became a major issue. Many hospitals are replacing their PACS vendor and finding out that it is really hard, costly and lengthy to migrate their images to another archive from a different vendor. They were looking for remote storage solutions, i.e. SSP's, or buying a Vendor Neutral Archive (VNA) to take control of their image archive and not be dependent on, and locked in by, a single PACS vendor. Some hospitals went all the way and deconstructed their PACS by buying workstations, workflow managers and routers in addition to their VNA, and built their own PACS more or less from scratch. Cloud providers are making inroads, and life cycle management becomes important, as not every hospital wants to store all studies forever but wants to implement retention rules.

The fourth generation will see a shift to virtual storage, i.e. you won't know or need to know where the images are archived, whether in the cloud or locally, in which case it is most likely on solid state memory, providing very fast and reliable access. Images are now archived from anywhere in the enterprise, whether from a camera in the ER, a Point of Care (POC) ultrasound at the bedside, or a video camera in physical therapy. The boundaries between documents and images are getting blurred; some store everything on one server, some use two distinct information management systems. Cyber security is a major concern, as malware is becoming a real threat and ransomware has already caused major downtimes, requiring strict security policies and mechanisms to protect the data.

The C for Communication part of PACS has undergone some major changes as well. Initially, each PACS had its own dedicated network, because sending images over the existing infrastructure would bring down the complete network. Speeds were up to about 100 Megabit/second, which was OK for the relatively small image and study sizes. The second generation networks were upgraded to fiber instead of copper wire, allowing speeds in excess of 1 Gigabit/second. Network technology advanced, allowing the PACS networks to be part of the overall hospital infrastructure by reconfiguring the routers and creating Virtual Local Area Networks aka VLAN's. The third generation of network technology starts to replace CD exchange with cloud-based image exchange using brokers, i.e. having a 3rd party take care of your information delivery to patients as well as physicians. In the fourth generation, we see the introduction of web services in the form of FHIR and DICOMWeb, allowing for distribution on mobile devices; we need to create new profiles to deal with encounter-based imaging instead of order-based imaging using universal worklists; and of course, security is becoming a major concern, requiring firewalls, the use of DMZ's to screen your outside connections, and cyber security monitoring tools.

The fourth component of the PACS is the "System" component, which mostly includes workflow support, of which there was initially very little. In the second generation, there was a shift from PACS-driven to RIS-driven worklists, and IHE started to make an impact by defining multiple use cases with their corresponding HL7, DICOM and other standards. In the 3rd generation, the annual IHE Connectathons have made a major impact, as they provide a neutral testing ground for proving that these IHE profiles really work. The worklists at the radiologist are becoming EMR-driven, and orders are placed using a Computerized Physician Order Entry (CPOE) system, often at the EMR. In the last generation, we see cross-enterprise information exchange starting to take place using IHE standards such as XDS, in a secure manner, making sure that consents are in place and that authentication and audit trails are utilized in the form of the ATNA profile. Patients are also able to upload their information from Personal Health Records (PHR's) and wearables.

As you can see, we have come a long way since the early PACS days and we still have a bright future ahead of us. I am sure in another 5 years there will be some more changes to come.


Monday, August 5, 2019

DICOM Cyber security threats: Myths and Truths.

A report by Cylera Labs identified a potential cyber security threat in DICOM files that are exchanged on media such as CD, DVD or flash drive, or through email, as well as through DICOM web service communications (DICOMWeb).

The threat was taken seriously enough by the DICOM committee that it issued an FAQ document to address this potential issue. This threat exploits the additional header that is created for media, email and web exchange. Before discussing the potential threat and what to do about it, let's first discuss what this header looks like and how it is used.

Media exchange files have an additional header, aka the File Meta header, which consists of:
1. A 128-byte preamble
2. The characters DICM to identify that the following is encoded in DICOM format
3. Additional information that is needed to process the file, such as the file type, encoding (transfer syntax), who created this file, etc.
4. The regular DICOM file.



This additional information (3) is encoded as standard DICOM tags, i.e. Group 0002 elements. After the Group 0002 elements, the actual DICOM data set, which normally would be exchanged using the DICOM communication protocol, starts. This encapsulation is commonly referred to as "part-10" encoding because it is defined in Part 10 of the DICOM standard.
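The part-10 layout (preamble, DICM marker, then the Group 0002 tags) is simple enough to inspect by hand. Below is a minimal Python sketch using an in-memory stand-in file rather than a real study; `read_part10_header` is an invented helper name for this illustration.

```python
import io

def read_part10_header(stream):
    """Read the 128-byte preamble and the 'DICM' marker that start a
    DICOM part-10 file; everything after them is the Group 0002 File
    Meta information, followed by the regular DICOM data set."""
    preamble = stream.read(128)
    magic = stream.read(4)
    if magic != b"DICM":
        raise ValueError("not a DICOM part-10 file")
    return preamble

# A tiny stand-in file: zeroed (unused) preamble, DICM marker, then the
# first bytes of the File Meta group (group number 0x0002, little endian).
fake = io.BytesIO(bytes(128) + b"DICM" + b"\x02\x00\x00\x00")
preamble = read_part10_header(fake)
print(all(b == 0 for b in preamble))  # True: an unused preamble is all zeros
```

A network DICOM transfer, by contrast, starts directly with the data set, which is why none of the preamble discussion below applies to it.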

The potential cyber security threat as mentioned in the article involves the 128-byte preamble, as there are no real rules about what it might contain or how it is formatted. The definition of this area is that it is for Application Profile or implementation-specified use. The initial use was for early ultrasound readers, but more recently it is generally used for TIFF file encoding so that a file can have a "dual personality," i.e. it can be decoded by a TIFF reader as well as a DICOM reader. The DICOM reader will simply skip the preamble and process the file accordingly. In case of a TIFF encoding, the preamble will have the TIFF identifiers, i.e. 4 bytes that contain "MM\x00\x2a" or "II\x2a\x00", and additional instructions to decode the file structure. This application seems to have some traction with pathology vendors, who have been very slow in implementing the DICOM whole slide image file set as described by David Clunie in a recent article, and it could potentially be used by researchers. If not used by a specific implementation, all bytes in this preamble shall be set to 00H, as can be seen in the figure.

The definition of this preamble was identified as a “fundamental flaw in the DICOM design” in the Cylera article mentioned earlier. This assertion was made due to the fact that attackers could embed executable code within this area. This would allow attackers to distribute malware and even execute multi-stage attacks.

In my opinion, this "flaw" is overrated. First of all, the preamble was designed with a specific purpose in mind, allowing multiple applications to access and process the files, and, if not used accordingly, it is required to be set to zeros. Furthermore, a typical DICOM CD/DVD reader will import the DICOM file, stripping off the complete meta-header (preamble, DICM identifier and Group 0002), potentially coerce patient demographics and study information such as the accession number, and import it into the PACS.

If, for whatever reason, the import software wants to copy the DICOM file as-is, i.e. including the meta-header, it could check for the presence of non-zero bytes in the preamble and, if found, either reject or quarantine the file or overwrite the preamble with zeros. The latter would impact potential "dual-personality" files, but the software could check for the presence of the TIFF header and act accordingly, making an exception for those very limited use cases (how many people are using pathology and/or research applications today?). Last but not least, don't forget that we are only discussing a potential flaw in DICOM part-10 files, which are limited to exchange media; there is nothing to fear for the regular DICOM exchange between your modalities, PACS and view stations, as those files don't have the meta-header.

But, to be honest, anything in a file which is for implementation-specific use, or is proprietary, is potentially subject to misuse. There are Z-segments defined in HL7, private tags in DICOM, and even a "raw data" file storage in DICOM that can contain anything imaginable. These additional structures were not design flaws but rather were defined for very specific business reasons. The good news is that HL7 FHIR will do away with Z-segments, as they are replaced with strictly defined extensions governed by conformance rules, but in the meantime we will be dealing with proprietary extensions for many years. Consequently, you had better know where your messages originate and whether the originator has its cyber security measures in place.

In conclusion, the possibility of embedding malware in the DICOM preamble is limited to media exchange files only, which, if present, is easily detectable and is in almost every case stripped off anyway prior to importing these. There are definitely vulnerabilities with any “implementation specific” or proprietary additions to standard file formats. Knowing the originator of your files and messages is important, if there is any suspicion, run a virus scanner, have the application strip off and/or replace any proprietary information, and never ever run an executable that could be embedded within these files.


Is it an Image or a Document? Discussing the “grey area” of overlap between images and documents.

There is a major increase in images to be managed by enterprise imaging systems. It is critical to decide how to format the images and documents (DICOM or native?) and where to manage them (EMR, PACS/VNA, document management system, other?). Below are some thoughts and recommendations you might consider.

Digital medical imaging used to be confined to radiology and cardiology, and on a smaller scale to oncology. Images were created, managed and archived within these departments. If you wanted to see them you would need to access the image management system (PACS) for that department.
Over the past decade, new image sources started to appear, for example, images taken during surgery through a scope, videos of endoscopic procedures recorded by gastroenterologists, retinal images recorded by ophthalmologists, and digital pathology imaging. Point of care (POC) ultrasound also began to be used increasingly, and now there are intelligent scanning probes available that can connect to a smart phone or tablet.

As the sources of imaging grow, the volume of imaging is growing exponentially. Talking with informaticists at major hospitals, it seems there are new image sources every week, whether it is in the ER where people are taking pictures for wound care or during surgery to assist anesthesiologists.
Good examples of the type of imaging that typically takes place outside the traditional radiology and cardiology domain can be seen in a recent webcast on encounter-based imaging workflow. In his presentation, Ken Persons from the Mayo Clinic talks about the fact that they have literally hundreds of alternate imaging devices that create tens of thousands of images per month that need to be archived and managed.

Departments that never recorded images before are now doing so, such as physical therapy recording videos of changes in gait after back surgery. In addition to this avalanche of images generated by healthcare practitioners, soon there will be images taken by patients themselves that need to be kept, e.g. of a scar after surgery, once they have been sent home. This will replace in-person follow-up exams, saving time and effort. Managing these images has become a major challenge and has shifted from departmental systems to enterprise image management systems, i.e. from PACS to VNA's.

How is non-image data managed? Textual data such as patient demographics, orders, results and billing information is exchanged through interface engines, which connect the 100+ computer systems in a typical mid-size hospital. Over the past 5-10 years, Hospital Information Systems (HIS) and departmental systems dedicated to radiology (RIS), cardiology (CIS) and other departments are being replaced by Electronic Medical Record systems (EMRs), and information is accessed in a patient-centric manner.

A physician now has a single log-on to the EMR portal and can access all the clinical text-based information as well as images. Textual information can be stored and managed by an EMR, e.g. for a lab result as discrete information in its database, or linked to as a document, e.g. a scanned lab report or a PDF document. In addition to these documents being managed in the EMR, they can also be managed and stored in a separate document management system with an API to the EMR for retrieval.

There is no single solution for the problem of where to manage (i.e. index and archive) diagnostic radiology reports. Their formats vary widely, as discussed in a related post on report exchange on CD's. In addition to standardized formats such as DICOM SR's and Secondary Capture, additional formats appeared, including XML, RTF, TXT and native PDF's. Not only do the diagnostic report formats differ, but also where they are managed. The reports could have been stored in departmental systems (RIS) or, in some cases, by a broker. A case in point is the AGFA (initially MITRA) broker (now called Connectivity Manager) that functions as a Modality Worklist provider and, in many institutions, is also used to store reports. In addition, reports could reside temporarily in the voice recognition system, with other copies in the RIS, EMR and PACS. This causes issues with ensuring that amendments and changes to these documents stay in sync across the various locations.

Before universal EMR access, many radiology departments would scan in old reports so they could be seen on the radiology workstation, in addition to scanning patient waivers and other related information into their PACS. This is still widely practiced, as witnessed by the proliferation of paper scanners in those departments. These documents are converted to DICOM screen-saves (Secondary Capture) or, if you are lucky, to DICOM encapsulated PDF's, which are much smaller in file size than the Secondary Captures. With regard to MPEG's, for example swallow studies, a common practice is to create so-called Multiframe Secondary Capture DICOM images. All of this DICOM "encapsulation" is done to manage these objects easily within the PACS, which provides convenient access for a radiologist.

The discussion about images and documents poses the question of what the difference is between an image and a document, which would also determine whether the "object" is accessed from an image management system (PACS/VNA), which infers that it is in a DICOM format, or from a document management system (a true document management system, or RIS, EMR), which assumes either an XDS document format (using the defined XDS metadata) or some other semi-proprietary indexing and retrieval system. Note that there are several VNA's that manage non-DICOM objects, but for the purpose of this discussion, it is assumed that a PACS/VNA manages "DICOM-only" objects.
In most cases, the difference between images and documents is obvious; for example, most people agree that a chest X-ray is a typical example of an image, and a PDF file is a clear example of a document. But what about a JPEG picture taken by a phone in the ER, or an MPEG video clip of a swallow study? A document management system can manage these, or, alternatively, we can "encapsulate" them in a DICOM wrapper and make them images similar to an X-ray, with the same metadata, managed by a PACS system.

What about an EKG? One could export the data as a PDF file, making it a document, or alternatively maintain the original source data for each channel and store it in a DICOM wrapper so it can be played back in a DICOM EKG viewer. By the way, one can also encapsulate a PDF in a DICOM wrapper, which is called an "encapsulated PDF," and manage it in a PACS. Lastly, one could take diagnostic radiology reports and encapsulate them as a DICOM Structured Report, and do the same for an HL7 version 3 CDA document, e.g. a discharge report, encapsulating it in a DICOM wrapper and storing it in the PACS.

All of which shows that there is a grey area of overlap between images and documents, whereby many documents and other objects could either be considered images (or, better, DICOM objects) and managed by the PACS, or alternatively be considered documents and managed by a document management system. Imagine you were implementing an enterprise image management and document management system: what would your choices be with regard to these overlapping objects?
Here are my recommendations:
1. Keep PDF's as native PDF documents, UNLESS they are part of the same imaging study. For example, if you have an ophthalmology study that includes several retinal images and the same study also creates PDF's, it would be easier to keep them together, which means encapsulating the PDF as a DICOM object. But if you have a PDF, for example from a bone densitometry device, without any corresponding images, I suggest storing it as a PDF.
2.  Use the native format as much as possible:
a. There is no reason to encapsulate a CDA in a DICOM or even a FHIR document object; conversions often lose information and are often not reversible. Keep them as CDA's.
b. Manage JPEG's and MPEG's (and others, e.g. TIFF etc.) as "documents." As a matter of fact, by using the XDS metadata set to manage these, you are better off because you are also able to manage information that is critical in an enterprise environment, such as "specialty" and "department," which would not be available in the DICOM metadata.
c. Use DICOM-encoded EKG's instead of PDF screenshots.
d. Stay away from DICOM Secondary Capture if original data is available; remember that those are "screenshots" with limited information. Specifically, don't use the screen-captured dose information from CT's, but rather the full-fidelity DICOM Structured Reports, which have many more details.
3. Stop scanning documents into the PACS/VNA as DICOM Secondary Capture and/or PDF's; they don't belong there. They should be in the EMR and/or document management system.

An EMR is very well suited to provide a longitudinal record of a patient; however, none of the EMR's I know of will store images. Images are typically accessed by a link from the EMR to a PACS/VNA so that they can be viewed in the same window as the patient record on a computer or mobile device. In contrast, documents are often stored in the EMR, but these are typically indexed in a rudimentary manner, and most users hate going through the many documents that might be attached to a patient record to look for the one with the information they need. A better solution for document access is to have a separate enterprise document management system, which should be able to do a better job of managing these.

Some VNAs are also capable of managing documents in addition to images, preferably using the XDS infrastructure. In fact, if you are NOT using the XDS standard but a semi-proprietary interface to store JPEGs, MPEGs and other document types, you could have a major problem: you will be locked into a particular vendor, with potential data migration issues down the road.

Also, be aware of the differences between XDS implementations. The initial XDS profile definitions were based on SOAP messaging and document encapsulation; the latest versions add RESTful web services, i.e. DICOMWeb (the RESTful DICOM services) for images and FHIR for documents. Web services allow images or documents to be accessed through a URL, which is how pretty much all popular web-based information delivery happens today (Facebook, Amazon, and many others). It is very efficient and relatively easy to implement.

Modern healthcare architecture is moving towards deconstructing the traditional EMR/PACS/RIS silos to allow for distributed or cloud-based image and information management systems. For the user, who accesses the information through some kind of computer-based portal or mobile device, it does not really matter where the information is stored, as long as there is a standard “connection” or interface that allows access to an image or document using web services.

Right now is the perfect time to revisit your current architecture and reconsider how and where you manage and archive images and documents. Many hospitals have multiple copies of these objects, stored in formats that do not make sense, at locations that were chosen for easy access without considering whether the data really belonged there. Instead of further cluttering the current systems, especially when planning for the next generation of FHIR- and DICOMWeb-enabled systems, it is important to index and manage your images and documents at the location where they belong, in a format that makes sense.


Thursday, August 1, 2019

SIIM19 part 2: Standards update.


As the representatives of the various standards committees (DICOM, FHIR, IHE) reiterated during the recent 2019 SIIM conference in Denver, several new interoperability standards are available that could make your life easier, but if the user community does not ask for them in its RFPs and during regular vendor discussions, there is no incentive for vendors to implement them.
Obviously, if you don’t know what to ask for, that gets difficult; therefore, here is a synopsis of the new DICOM standards developments covered during the SIIM19 conference:


1.       Multi-energy CT imaging – CT scanners are being equipped to acquire images using different X-ray energy spectra, which are then processed, subtracted, etc. to provide a different clinical perspective. When the initial CT DICOM metadata was defined in the early 1990s, there were no multi-spectral CT scanners available, or even thought of; therefore, encoding this in the “old” CT objects requires a lot of customization and proprietary encoding, hence the need for a new series of objects.

Remember that it is not only the acquisition devices that must support this new standard, which seems to be the least of the worries given the experience with adoption of recent new DICOM objects; more importantly, the PACS/VNA back-ends, and especially the PACS and enterprise viewers, will need to support it as well. Four additional “families” of CT objects are defined, i.e. for image encoding, material quantification, labeling and visualization.

2.       Contrast administration – Most US institutions have implemented an X-ray radiation dose recording and management system, motivated by the US federal requirement to include dose information in each CT radiology report. The next area for potential legislative requirements and implementation is contrast administration and the corresponding management, as contrast can also be detrimental to the patient.

The DICOM contrast agent administration reporting capability will facilitate this. The implementation is very similar to dose reporting, i.e. the information is recorded in a dedicated Structured Report, which details both what was programmed at the injector device and what was actually delivered.

3.       3-D printing – The RSNA hosted a big pavilion showing 3-D models and applications, initially for surgery planning, but eventually for implants. This is an emerging area whose management is currently shared between surgery and radiology. There is a need to retain and archive these 3-D “print files,” and also for standard interfaces to the various 3-D printers. The DICOM standard added an encapsulation of these print files in the STL format (an abbreviation of “stereolithography”). STL is a file format native to the stereolithography CAD software created by 3D Systems and is also supported by many other software packages; it is widely used for rapid prototyping, 3-D printing and computer-aided manufacturing. The 3-D model usage codes defined by DICOM include those used for:
a.       Educational purposes, such as training, patient education, etc.
b.       Tool fabrication for medical procedures such as radiation shields, drilling guides, etc.
c.       External prosthetics
d.       Whole or partial implants
e.       Surgery simulation
f.        Procedure planning
g.       Diagnostics
h.       Quality Control

4.       DICOMWeb – DICOMWeb provides an alternative to the traditional DICOM protocol that is very effective at exchanging information using web services and is therefore more suitable for mobile applications. The traditional DICOM Store, Move, and Find services have their equivalents in STOW, WADO and QIDO, and there is also the capability to transfer only the bulk (pixel) data or only the metadata (header data). The web services have been re-documented, cleaning up the existing documentation. In addition, a new enhancement has been defined for exchanging thumbnails: instead of defaulting to the first image of a series as the source of the thumbnail, one can now select an image that is representative of the series.
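To give an idea of what these web services look like on the wire, here is a sketch of the resource paths behind QIDO (find), WADO (retrieve) and STOW (store); the server base URL is a made-up example:

```python
from urllib.parse import urlencode

BASE = "https://pacs.example.org/dicomweb"  # hypothetical DICOMWeb server

def qido_studies_url(**filters):
    """QIDO-RS study search, e.g. by PatientID (the 'Find' equivalent)."""
    return BASE + "/studies?" + urlencode(filters)

def wado_instance_url(study_uid, series_uid, instance_uid):
    """WADO-RS retrieval of one instance (the retrieve/'Move' equivalent)."""
    return "{}/studies/{}/series/{}/instances/{}".format(
        BASE, study_uid, series_uid, instance_uid)

def stow_url(study_uid=""):
    """STOW-RS store target, optionally scoped to one study ('Store')."""
    return BASE + ("/studies/" + study_uid if study_uid else "/studies")
```

An HTTP GET on the QIDO-RS URL would return matching studies as JSON, and a GET on the WADO-RS URL returns the instance itself; this URL-per-resource pattern is what makes the protocol so easy to use from mobile and web clients.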

5.       Security – Cybersecurity is a big issue, in part because of a recent publication about the possibility of hiding malicious code in the metadata of DICOM images stored on a CD. The Security Working Group, together with the MITA cybersecurity people, has issued a publication about this issue with precautions (see press release). The metadata, or more precisely the preamble, could contain an executable; therefore, one is encouraged to run a virus scan and also to disable running any executables from the media.
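A DICOM Part 10 file starts with a 128-byte preamble that the standard leaves unconstrained, which is what makes this exploit possible: a Windows executable (recognizable by its “MZ” magic bytes) can hide in those bytes. A minimal, illustrative sanity check, not a substitute for the virus scan the publication recommends:

```python
# Illustrative preamble check for a candidate DICOM Part 10 file.
def inspect_preamble(blob):
    """Classify the first 132 bytes: 128-byte preamble + 'DICM' magic."""
    if len(blob) < 132 or blob[128:132] != b"DICM":
        return "not a DICOM Part 10 file"
    preamble = blob[:128]
    if preamble.startswith(b"MZ"):
        return "suspicious: preamble starts with an executable (MZ) header"
    if any(preamble):  # non-zero preambles are legal but worth a look
        return "non-zero preamble: legal, but worth reviewing"
    return "all-zero preamble"
```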

6.       Consistent protocols – Consistent acquisition protocols for XA and MR are important when a radiologist wants to compare a study with previous ones, including studies that were created in different organizations. A DICOM extension allows these protocols to be stored so they can be reused.

7.       Artificial Intelligence (AI) – AI is getting a lot of attention. Guidelines have been defined on how to encode AI annotations and how to incorporate them into the workflow. For annotations encoded in a DICOM Structured Report, a JSON representation of the DICOM SR has also been defined.

8.       Dermatology – This activity has been revitalized to address dermoscopy, which uses surface microscopy to evaluate skin lesions and can be used for early detection of skin cancer. It is an extension of the regular photography object definitions with newly added codes.

9.       Ultrasound – This activity has been revitalized to come up with a proposal for tracking transducers, which is important for infection control. This is somewhat of a challenge, as not all probes are “intelligent” enough to exchange a unique identifier.

As mentioned earlier, if the user community does not request these new features, there is little chance that manufacturers will implement them in a timely fashion. A rule of thumb I recommend: include in the RFP an automatic upgrade to all new DICOM features within a reasonable time (e.g. 3 years), unless federal and/or state regulations require it sooner, as is the case for dose reporting (and might become the case for contrast administration).

Wednesday, July 3, 2019

SIIM19: Back to the Patient Perspective.


The annual gathering of healthcare imaging and IT professionals, SIIM 2019 in Denver, kicked off with a moving keynote by a patient, Allison Massari, who survived a life-threatening accident that burned over 50 percent of her body. Her story of the impact healthcare providers had on her recovery set the stage for hundreds of healthcare imaging practitioners, consultants and vendors to exchange their experiences, and gave added meaning to their professions, before talk turned to products and services and to educating their peers on what is new and what is coming. The meeting had good “vibes,” as people were eager to learn and there was excitement about new developments.

Here are my impressions:
1.       AI is past its initial hype: the fear factor that came with the first AI applications, which made radiologists anxious about the potential impact on their jobs, has faded, and it is becoming obvious that there is still a lot of work to do and a long way to go.
Most AI companies don’t even have FDA approval for their products yet, even though the FDA is stepping up to the plate and giving special consideration to the fact that many of these products are based on deep learning, whereby the behavior of the software might change over time.
This infographic provides a nice breakdown of FDA approvals over the past several years, showing the percentage of radiology algorithms that were approved. AI is finding its way into some PACS applications, starting with workflow enhancements; there are dose-reduction applications for CT screening and some “low-hanging fruit” around detection of common diseases.

2.       Enterprise imaging is still very challenging: As Jim Whitfill, the current SIIM chair, mentioned during his update, enterprise imaging is most likely what saved SIIM from its demise after the 2008 downturn in membership and conference attendance, as IIP professionals were starting to think about how to do enterprise imaging and subsequently to publish about it in the Journal of Digital Imaging.
The VNA, or Vendor Neutral Archive, became the vehicle for implementing enterprise imaging solutions; however, the non-ordered (aka encounter-based) workflow for departments outside radiology and cardiology is poorly defined, and there are many different options. See my related post, in which I identified more than 100 possible implementations. Talking with several implementers at SIIM, I identified three different strategies:
·         The “top-down” approach – This model implements a vendor-neutral archive (VNA) for radiology and/or cardiology first, and then starts to expand it to other departments; however, there is no single, uniform workflow for those departments, resulting in many different options.
·         The “bottom-up” approach – This model, used at Stanford University, implements a VNA beginning with one department and then adds other departments using the same (DICOM worklist-based) workflow. After adding many other specialties, they are only now starting to add radiology, and eventually cardiology.
·         The “hybrid” approach – This method, adopted at the Mayo Clinic, is a combination of both: instead of having many different workflows or only a single one, they settled on a handful, in this case five major workflows for the different departments. You can see details of this discussion in this short video clip.
3.       Teleradiology workflow is very challenging: only a few PACS vendors do teleradiology well; in fact, many teleradiology providers build their own systems, as the requirements are so different:
a.       The turn-around time requirement is very challenging – a typical turn-around time for trauma cases is 5-10 minutes, which means the workflow has to be super-optimized.
b.       AI can make a major impact – Hanging protocols are very hard to define, as the sources of these studies vary widely: some studies group all images in a single series, some use multiple series, and the series descriptions are not uniform. Therefore, a simple algorithm that determines which image is the PA and which is the lateral chest view, and orders them consistently, saves a few mouse clicks, which is time. Prioritizing studies based on certain critical findings is important as well. AI definitely assists in making repeated tasks efficient and automated.
c.       There is a lack of patient contextual data – There are many challenges in getting the prior images for a particular study (see a renewed activity described below), as the use of CDs for image exchange does not seem to be going away soon. This workflow is well defined by IHE XDS-I and other profiles, and many countries other than the US have successful standards-based image exchange implementations. However, instead of a radiologist logging into an EMR and looking at the images with the rest of the patient context at their fingertips, a teleradiologist logs into a PACS, sees the images, and wants the patient context from potentially many different EMRs. It is a “reverse” workflow: instead of being EMR-driven, pulling multiple imaging studies, it is PACS-driven, wanting to pull multiple EMR documents. This challenge is not quite addressed yet; ideally one could pull CDAs from these EMRs, but those were really defined for a different purpose.
d.       The workflow is reversed – the traditional Order-Study-Report workflow looks different in a teleradiology application, as in many cases the order comes after the fact, so it becomes Study-Report-Order (including “reason for study”)-Report update. Interestingly enough, teleradiologists tell me they only have to adjust their report based on the “reason for study” in a few cases. Regardless, this workflow needs to be supported by their PACS.
e.       Many studies, if not all, are “unverified” – This is particularly true for battlefield and disaster applications. There is often no patient name (“civilian 1”), definitely no patient ID, and it is not uncommon to have partial studies. A PACS that depends on the traditional order-based workflow will perform very poorly.

4.       CDs are here to stay (for a while): I have personal experience (as many do) with image exchange for me and my family, as witnessed by the stack of CDs I carry to doctors and specialists. Actually, since some of them lack CD readers on their laptops, or have their computers locked down by their security departments, I carry a laptop with the images preloaded and ready to be viewed. My experience with my veterinarian is completely different. When I asked our neuro-veterinarian for a copy of the MRI of our dog on a CD, I was told that that is “old-fashioned,” but that they would be more than willing to send me a link to view the images in a viewer or, alternatively, to download them as a zip file for me and my regular veterinarian to review, which I did. How is it that our veterinarians have this all figured out and our physicians don’t? I can come up with many reasons, but one of them was identified by a special ACR/RSNA committee which met during SIIM: the lack of a standard governance agreement. Instead of having to get business associate agreements from all your partners covering the HIPAA requirements, they recommend a standard document as part of the Carequality consortium, in the form of an implementation guide, which is available as a draft for public comment. In the Carequality framework, 36 million documents are exchanged each month over 16 networks based on the IHE XCA standards. If we can exchange documents, there is no reason not to exchange images.

5.       Cybersecurity is a hot topic: not a day or week goes by without a report of yet another ransomware attack or security breach exposing literally millions of patient records. There have been reports of CT scans modified to create significant findings, of the DICOM header preamble on CDs being used to embed viruses, and of old devices still running operating systems that are no longer patched (note that Windows 7 support stops in January 2020).
Key safeguards include upgrading old OSs (or, if that is not possible, isolating those devices from your network and disabling their USB ports, which is a problem in itself, as several modalities depend on USB to connect ultrasound, dental, or other wands and detectors), securing networks, and educating your employees about the danger of social engineering. At one facility, the open rate of spam emails dropped from 80% to less than 20% after the IT department started to send out “bogus” spam emails to alert employees to this danger. Another great example of this phenomenon is the (infected) USB drive dropped in the employee parking lot of a hospital, with the hospital’s logo on it, so that an unsuspecting employee with good intentions will insert it in a computer on the hospital network, resulting in great harm.

6.       New standards are available to provide greater interoperability: DICOM, FHIR and IHE have made several new additions, which are covered in part 2 of my SIIM report.

Overall, yet another good year for SIIM and its members. The major differences between SIIM and mega-meetings such as RSNA are that you can cover the exhibition without having to walk (and often run) many miles between booths, you have much better access to faculty and peers, and, last but not least, there is an abundance of hands-on workshops for experimenting with new tools and standards.

For example, at the XPert IIP workshop, attendees could learn to troubleshoot DICOM headers using DVTk and the DICOM protocol using the Wireshark sniffer, on pre-loaded laptops provided as part of the training. Sessions offering hands-on DICOMWeb and FHIR experience, as well as the IIP sandbox covering Mirth interface engine programming, were also very popular. One of the themes this year was empowerment, and what better way to empower users than by providing them with the skills and tools to do their jobs better and more effectively.

Next year’s meeting will be in Austin, which is closer to the OTech home base (Dallas, TX); I am looking forward to another great meeting!

Monday, June 3, 2019

Enterprise or encounter-based imaging workflow options.


As institutions start to incorporate their multiple imaging sources into an enterprise solution, such as that provided by a Vendor Neutral Archive (VNA), they find that the biggest challenge is dealing with the different workflows used by non-radiology departments, which in many cases must be re-invented. There are many different workflow and integration options; in fact, I have identified more than one hundred different combinations, as listed below. Hopefully these will converge to a few popular ones, driven by standardization and vendor support.

The traditional radiology and cardiology workflow has matured and is defined in detail by the IHE SWF (Scheduled Workflow) profile, which has recently been updated to SWF.b to incorporate PIR (Patient Information Reconciliation) and to require support of a more recent version of HL7, i.e. 2.5.1 (this was optional in the first version). PIR specifies the use of updates and merges for reconciliation, such as when using a temporary ID and for “John Doe” cases.

The enterprise imaging workflows outside radiology and cardiology are also known as “Encounter-Based Imaging Workflows,” in contrast to the traditional “Procedure-Based Imaging Workflow” defined by the SWF/PIR IHE profiles mentioned above. The difference is that no order is placed prior to the imaging. Despite the lack of an order, we still need the critical metadata for the images, which consists of:

1.       Imaging context attributes (body part-acquisition info-patient and/or image orientation)
2.       Indexing fields (for retrieval such as patient demographics, study, series and image identifiers)
3.       Link(s) to related data (reports, measurements)
4.       Department/location/specialty information. This is an issue, as some acquisition devices (e.g. ultrasound) can be used by different departments. It is not as easy as a fixed MRI in radiology; now we have devices that can belong to different departments and be used in various locations (OR, ER, patient rooms, etc.)
5.       References to connect to the patient folder, especially for the EMR (patient-centric access)
This assumes that the practitioner decides to keep the images, which is not always the case; a user might choose to discard some or all of the images, depending on whether they need to be part of the permanent electronic patient record and/or need to be shared with other practitioners.

Assuming we want to archive the images, the first step is to figure out how we get access to the metadata. There are two different workflows:
1.       The user retrieves the meta-data first and then acquires the images
2.       The user first acquires the images and then matches them up with the metadata (typically at the same device).

The end result is the same; the workflow is a little different, as the practitioner needs to issue a query to get the data, which could be as simple as scanning a patient barcode or RFID tag, or doing a search based on the patient’s demographic data.
How is this information retrieval being implemented? There are several options:
1.       Use the DICOM Modality Worklist (DMWL), similar to the SWF profile. The DMWL in the traditional SWF includes the “What, Where, When, for Whom and How to Identify”; for example, performing a chest PA X-ray (what), using the portable unit in the ER (where), at 7 am (when), for Mr. Smith (for whom), with a link to the order using the Accession Number and identifying it with Study UID 1.x.y.z (how to identify). In the encounter-based imaging workflow, we only use the “for whom” and “where,” as the other information is not known.
Using only patient ID and department, this DMWL variant is covered by the IHE Encounter Based Imaging Workflow (EBIW) profile, which is geared towards Point of Care (POC) ultrasound. The problem is that DMWL providers are typically not available outside radiology/cardiology, and that acquisition devices (think of an Android-based tablet capturing images, or a POC US probe connecting to a smartphone) typically don’t support the DMWL client either.
2.       Use the Unified Procedure Step (UPS) worklist as defined in the IHE EBIW, which is basically a DICOMWeb implementation of the traditional worklist and therefore easier to implement, especially on mobile devices. The same issue applies as with option (1): who supports it? Note that it is not only a matter of client software, but also of the availability of the server, i.e. the worklist provider, which is somewhat of an unknown outside radiology/cardiology.
3.       Use HL7 Query as defined by the PDQ profile, either version 2 or 3.
4.       Use FHIR as defined by the PDQ-M profile. Note that one difference between V2 and FHIR is that the visit information in the traditional PV1 segment maps to the FHIR Encounter resource. So, when you think about encounters, think about visits in Version 2.
5.       Listen for HL7 V2 ADT messages, i.e. patient registration messages.
6.       Use an API, preferably web-based if you use a mobile device, direct into an EMR, HIS or ADT system.
7.       Do a DICOM patient information query (C-FIND) against a PACS database, assuming the patient has prior images.
8.       Any other proprietary option.
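As an illustration of how lightweight option 4 is: PDQ-M is essentially a RESTful FHIR search on the Patient resource. A sketch, where the server URL is hypothetical:

```python
from urllib.parse import urlencode

FHIR_BASE = "https://emr.example.org/fhir"  # hypothetical EMR FHIR endpoint

def pdqm_patient_query(**search_params):
    """Build a FHIR Patient search URL, e.g. by name or identifier."""
    return FHIR_BASE + "/Patient?" + urlencode(search_params)
```

An HTTP GET on this URL returns a FHIR Bundle of matching Patient resources, from which the acquisition device can pick the demographics needed for the image metadata.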

The second step is to add information to the metadata that was not provided by the initial query, i.e. the Accession Number. The Accession Number was originally intended to link an image or set of images with an order and the result (diagnostic report and subsequent billing). Even though there is no order here, you’ll find that the Accession Number is still critical: it is used by the API from an EMR to a PACS and/or VNA to access the images, to link to the results and notes, to make the connection to billing, and to associate with study information (Study Instance UIDs).

A so-called “Encounter Manager,” defined as an actor by IHE, could issue a unique Accession Number. This encounter manager could reside in a PACS, VNA, or broker. To make sure these Accession Numbers are unique and different from those issued by, for example, a RIS or EMR, most institutions use a prefix or suffix scheme. Note that the acquisition device does not have to deal with this Accession Number; a DICOM router could query for it and automatically update the image headers before forwarding them to the PACS/VNA.
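A prefix scheme is simple to sketch; the prefixes and number format below are made up, since each institution defines its own:

```python
import itertools

_counter = itertools.count(1)  # in practice a persistent, shared sequence

def issue_accession(department_prefix):
    """Issue an accession number that cannot collide with RIS/EMR-issued
    numbers, assuming the prefix is reserved for the encounter manager."""
    return "{}{:08d}".format(department_prefix, next(_counter))
```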
The next (optional) step is that an encounter might need to create a “dummy” order, because many EMR or HIS systems cannot do any billing for, or even recognize, images that are created without an order; so in many cases an order is created “after the fact.”

The last step is to notify the EMR that images are available. There are several options for that as well:
1.       Create an HL7 V2 ORU (observation result) transaction as defined by the IHE EBIW. This is probably the most common option, as EMRs typically support the ORU.
2.       Create a HL7 V2 ORM with order status being updated.
3.       Create a DICOM Instance Availability notification. This is actually used quite a bit (I have seen Epic EMR implementations that use it). IA carries more detailed information than the HL7 V2 options.
4.       Send a Version 2 MDM message, which has the advantage that it can carry a link to the images.
5.       Use the DICOM MPPS transaction.
6.       Use the (retired) DICOM Study Content Notification (still in use by some legacy implementations)
7.       Manually “complete” the entry in the EMR
8.       Use proprietary API implementations.
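For a sense of what option 1 looks like, here is a hand-rolled sketch of a minimal ORU^R01 message notifying the EMR that encounter-based images are available; all field values (the sending/receiving applications, the OBX image reference) are illustrative only:

```python
def build_image_available_oru(patient_id, patient_name, accession, study_uid):
    """Assemble a minimal HL7 V2.5.1 ORU^R01 as a pipe-delimited string."""
    segments = [
        "MSH|^~\\&|VNA|HOSP|EMR|HOSP|20190601120000||ORU^R01|MSG0001|P|2.5.1",
        "PID|1||{}||{}".format(patient_id, patient_name),
        "OBR|1|{}||IMG^Encounter images".format(accession),
        "OBX|1|RP|IMG^Image reference||{}||||||F".format(study_uid),
    ]
    return "\r".join(segments)  # HL7 separates segments with carriage returns
```

In a real integration, an interface engine such as Mirth would construct and route this message; the sketch only shows the segment-level shape of the notification.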

The scenarios described above assume that patients are always registered and encounters scheduled. It becomes more complex if there is no patient registration, such as a POC US used by a midwife at a patient’s home. The same applies to emergency cases, e.g. in an ambulance, where the only information might be that the patient is a female in her 30s, or at a disaster area or battlefield (“civilian 1”). In these cases, we need a solution similar to PIR to reconcile the images with information entered after the fact, resulting in updates and merges, typically done using HL7 transactions.
Another future complication will be the implementation of patient-initiated imaging, for example a patient with a rash who wants to send an image taken with his or her phone to a practitioner, or who sends images of a scar after surgery to make sure it is healing properly.

As you can see from the above, the challenge with Encounter-Based Imaging is that there are many implementation options; multiplying the number of options for each step yields many different combinations (2 × 8 × 8 = 128 theoretically possible options).

IHE has so far addressed only the POC ultrasound and photo options in the EBIW profile, which specifies DMWL for getting the demographics and an ORU for the results. In practice, a typical hospital might use 5-10 different options across its departments. Hopefully a couple of “popular” options will emerge, driven preferably by new IHE profile definitions and supported by the major vendors. In the meantime, if you are involved with enterprise imaging, be prepared to spend quite a bit of time determining which option(s) best fit your workflow and are supported by your EMR/PACS/VNA/acquisition modality vendors. You might also need to spend a significant amount of time training your users on any additional steps necessary to fit your solution.