Friday, December 6, 2019

My RSNA2019 top ten.

Welcome to my 36th (!) RSNA

I always enjoy RSNA: it is good to catch up with old and new friends, see what is new in our world of radiology, and, last but not least, enjoy a piece of deep-dish pizza or a Wiener Schnitzel and Apfelstrudel at the Christmas market.

Here are my observations:

1. RSNA this year was all about AI. Several major vendors were exhibiting AI-driven workflows and new clinical applications for this new phenomenon. In addition, if you made it to the basement of McCormick Place, you would find a dedicated hall just for the AI vendors. However, the size of this so-called AI Showcase was in inverse proportion to the amount of traffic, the maturity of the products, and the number of real-world implementations.
The AI "basement"
There is no question in my mind that AI is still very new and, except for some niche clinical applications, still has a long way to go before large-scale deployment is going to happen. I asked several vendors how many installs they had, and the answers ranged from a couple to maybe a few hundred, which, compared with the number of hospitals worldwide, is a drop in the bucket. In addition, there was relatively little traffic in the dedicated AI hall, much less than at the other two main exhibit floors, so AI did not appear to be top of mind for most attendees.

AI at its best:
integrated with a PACS viewer
There is no question that AI in the long term will become ingrained in the daily workflow and add significant value, improving the specificity and sensitivity of diagnoses by supporting the diagnostic process. However, it might be a couple of years before we’ll see an impact, especially in the day-to-day work of radiologists who work outside the major academic centers, where most of the initial implementations are being tested and deployed.

This is how it should look:
Path on left and Xray on right
2. Digital pathology is taking off in the US. Several western and northern European countries are at least 5 years ahead of the US, as they started implementing digital pathology 5+ years ago. FDA approvals held up deployment in the US, but recent clearances are allowing its implementation. There is also an issue with return on investment, which is negative, as you cannot get rid of the slides containing the specimens. There are actually extra costs, as you’ll now need slide scanners, view stations, and an image display and management infrastructure. The good news is that the lag in implementation allows the US to learn from early experiences and become leading edge instead of bleeding edge.

Why is pathology important for radiology? The reason is that pathology images and reports provide a valuable additional datapoint for the radiologist. Initially, physicians would only look at shared pathology images during tumor board discussions, but there are other applications such as for screening immigrants who typically get an x-ray and possibly lab test to look for infectious diseases.

Another major impact of the implementation of digital pathology will be on image and archive management. It is very likely that these images will be stored on the radiology PACS archive and almost certainly on the enterprise archive or VNA, assuming that the facility has one. Most departments are still trying to manage the onslaught of additional data from 3-D breast images (DBT), which are filling up the available data storage at least twice as fast, if not more. Wait until you get whole-slide scanned images from pathology, which are multiple gigabytes in size.
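To get a feel for the magnitude, here is a rough back-of-envelope comparison; all study sizes below are my own illustrative assumptions, not measured or vendor figures:

```python
# Back-of-envelope storage growth comparison. All study sizes are rough
# assumptions for illustration only, not vendor specifications.
CR_STUDY_MB = 30        # assumed: a 2-3 view CR/DR study
DBT_STUDY_MB = 1500     # assumed: a 3-D breast tomosynthesis study
WSI_SLIDE_MB = 2500     # assumed: one whole-slide pathology image

def yearly_terabytes(studies_per_day: int, study_mb: float, days: int = 250) -> float:
    """Storage consumed per year in TB for a given study volume."""
    return studies_per_day * study_mb * days / 1_000_000

cr = yearly_terabytes(100, CR_STUDY_MB)
dbt = yearly_terabytes(100, DBT_STUDY_MB)
wsi = yearly_terabytes(100, WSI_SLIDE_MB)
print(f"CR: {cr:.2f} TB/yr, DBT: {dbt:.1f} TB/yr, WSI: {wsi:.1f} TB/yr")
```

Even with these crude numbers, one slide scanner or tomosynthesis unit can out-consume an entire CR fleet.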

POCUS from GE,
innovative 2-sided probe
3. POC (Point of Care) ultrasound is continuing to make inroads. Stanford recently put a POC-US in the hands of every resident and faculty physician, see link. The top three players in this market are Philips with the Lumify, which seems to have the most comprehensive set of features, especially OB/GYN measurements and templates, the GE unit, and the Butterfly. Butterfly is somewhat of an outlier, as it has a subscription model for its usage and uploads images to its cloud. Pricing is between $2k and $6k for these units. A major challenge with these devices is how to archive any of the images that the physician wants to keep, as they have to be properly identified with metadata to make sure they end up in the correct patient folder.
EMR vendors are pushing solutions to upload these directly into their systems, which is a mistake: images belong in an enterprise image management system together with all other images. However, these archives, often a VNA, have been slow to adapt to the specific workflow requirements of these devices, even though IHE has already put out a specification defining how to do this.

4. In addition to POC-US, there is POC-DX, POC-CT and POC-MR. The POC-DX units, also known as x-ray portables, have been around for a long time; they are mainly used in the OR, ER and ICUs to provide bedside diagnostic x-ray.

Cute portable for kids
These portables use digital x-ray plates, which are wirelessly connected so that the images can be transferred automatically from the plate to the portable console for processing and QA, and then wirelessly sent to a PACS for physician and radiologist viewing. The DR plates are getting less expensive and battery life is getting better, but they are still rather heavy, and one has to be careful to protect them from body fluids, as many are not 100% sealed.

Most innovative product IMHO:
flex detector
Fuji showed a flexible sensor detector which brought the weight of the plate back to a mere 4 lbs. Except for developing countries, where price is still a big determinant, DR is now replacing CR at a rapid pace. Sedecal showed a “ruggedized” version of its portable unit which can be transported in a “box,” has big wheels and is mainly used in the field by specialized users such as the Red Cross or the military in areas of conflict and natural disasters.

Looks like a CT,
moves like a portable
POC-CT has grown up as well. These CT scanners have evolved from a “CT on wheels” to truly portable units that can be moved around as easily as portable x-ray units. They have built-in radiation shielding as part of the gantry and a lead flap in the front and back to block any scattered radiation.

my second most
 innovative product choice
The POC-MR was a newbie at the show. It is still subject to regulatory approval, which can be expected later this year. Its application is somewhat limited due to its low field strength (0.064T), but the advantage of the low magnetic field is that there are no issues with shielding; as a matter of fact, they were scanning in real-time in the booth. The images are very noisy, but new advanced image processing and AI can improve the image quality to a point where they are usable for the application needed.

5. Photographs can assist in diagnosis. Photographs can provide important contextual information and can be taken by providers as well as patients using a camera or smartphone. There are clinical and technical challenges to recording and managing these pictures. The clinical challenges include privacy and how to deal with sensitive photos, including the definition of what constitutes a sensitive photo. Technical challenges include security as well as how to capture the appropriate metadata such as patient information and body part. 
Good example showing photo and image
Two working groups have been established, supported jointly by HIMSS and SIIM, to address these issues: the Photo Documentation Workgroup, dealing with the clinical and technical issues, and the Data Standards Evaluation Workgroup, analyzing the existing standards for nomenclature related to body part and anatomic region. White papers can be expected from these workgroups in the near future.

Still need huge glasses but effect is amazing
6. Virtual Reality (VR) is moving to Augmented Reality (AR). VR has been somewhat of a niche application, mostly used by surgeons to prepare for surgery, as it can show true 3-D models of the organs using CT or MR source data. VR has always been a little bit disconnected from the real patient, as there has been no direct link between the actual subject and the images that are shown in a 3-D space. AR is changing that: there is a direct connection between the patient and the 3-D image. For example, a surgeon can look at the patient through special AR glasses and see the synthetic image superimposed on the body part of interest. Again, VR and AR are somewhat of a niche application, but it is quite fascinating and really cool to be able to have “x-ray vision” and look inside a body and see its organs from different angles and perspectives, which should be of great help to surgeons. A great example of how radiology supports other specialties.

7. Monitor management for home reading is a challenge. Imagine that you want to read from home, using a laptop computer for your worklist and reporting. One would typically have two medical-grade monitors, but that could be three or four as well. The good news is that most radiologists are starting to learn that using a medical-grade monitor is a requirement for reading anything CR/DR, and certainly mammography.
Monitor management black box
This means that the monitors are calibrated to map each individual pixel value to a greyscale value that an observer can distinguish, so as not to miss any subtle changes in pathology. They are typically managed remotely, including the possibility of keeping the calibration curves in case the quality of the monitor display is ever challenged in a malpractice lawsuit (which is not uncommon).

However, when trying to connect those multiple monitors to a standard Windows PC, the hanging protocol, i.e. where the images are displayed, is challenging to maintain and might vary upon rebooting the PC. Therefore, one might use one of those small “black boxes,” which contains a video board and a controller that can remotely connect to the calibration management software. It manages the display order so that it is consistent any time a radiologist connects his or her laptop again.

MRI with built-in recliner
8. New open MRIs are being introduced. There have been open MRIs for a long time; the advantage is accessibility to the patient, which is especially important when doing surgery. Another reason for doing an exam in an open MRI might be a patient who is claustrophobic. Lastly, if a patient has a condition that only shows up when he or she is standing or sitting, i.e. if there is a need to image the load-bearing anatomy, there is now a unit that allows the patient to remain seated. Another example of how some of the common devices are being created for niche applications.

9. 3-D printing is maturing. The novelty of 3-D printing is somewhat over compared with last year’s RSNA, but there was still quite a bit of interest, and several vendors displayed some amazing examples.
Amazing detail
Also, since 2018, the DICOM standard includes the so-called STL (stereolithography) file format, which is commonly used by CAD software. This format can be sent to 3-D printers, but it can also be encapsulated in a DICOM file, i.e. with the typical DICOM header, the modality being “M3D,” similar to the encapsulated PDF files. It can then be managed on a PACS archive such as a VNA, added to the study, e.g. the CT, and used to reprint the model if so desired. There is no question that for surgical planning of difficult and rare cases, this is a great tool that is becoming available.

Looking for volunteers!
10. In case you missed the friendly ladies at the RAD-AID booth, you can visit their website and sign up as a volunteer. I have been very fortunate to have first-hand experience with the impact that you can make by teaching in developing countries and supporting your peers in your area of expertise. Remember, you don’t have to be a radiologist teaching interpretation or IR, but there is also a major need for people teaching basic x-ray as well as CT, MR, US, and even how to procure and maintain systems, how to manage a department, and how to troubleshoot image quality and technical problems.

Excellent Tech support built in
The good news is that some of the vendors are incorporating features in their products that kind of “guide” a technologist through a procedure. A good example is the Carestream CR console that shows how to expose an extremity and make sure to use collimation, something that is obvious to anyone taking an x-ray in the developed world, but is often overlooked in these emerging markets. I can promise you that volunteering will not only make a major difference in the lives of the ones you touch and interact with, but will also change you as a person.

In conclusion, this was another great year, and there were some great talks. My favorite was “AI in developing countries,” where I think it can make a major impact due to the limited resources and lack of training. Some African countries have fewer radiologists than there are in my hometown, and therefore AI can be a major help. Remember, in those cases we are not concerned whether there are a few percentage points gained in specificity or sensitivity; if you start with “0,” pretty much anything is a gain.
Cabs and Ubers lining up
for drop-off

However, regarding the state of AI, I have never seen so many vendors without FDA clearance promoting solutions based on limited datasets from only a subset of the population. For example, how valid is an AI algorithm trained on a clinical study in China for a population in a downtown US city where the majority is African-American?

I am curious to see the progress made by the same time next year, if I missed you this time, I hope to see you next year!

Monday, November 25, 2019

DICOM Modality Installation Checklist part 2

So you did all your homework prior to the new modality being installed, as described in part 1 of this post, i.e. you checked the conformance statements, used a simulator to query a worklist and send test images, and checked that they display correctly at the PACS workstation. However, when you connect the new modality to the PACS, it does not work. What do you do?

1.       Check connectivity: Ping the IP of the worklist provider and your destination(s), and then do a DICOM ping (aka Echo or Verification). The DICOM Verification feature might sometimes be hidden or only available under a service menu, but in many cases it is right there on the desktop or as a menu item. In rare cases there is no DICOM Verification implemented; shame on those vendors, because it robs the service and support engineers of a very valuable tool. Failure of the network or DICOM ping indicates network issues, addressing (port, IP, AE-Title) misconfiguration, or failure to add the device to the ACL list at the PACS.
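Both checks can be sketched in a few lines of Python; the addresses and AE-Titles are placeholders, and the DICOM echo assumes the pynetdicom library is available:

```python
import socket

def tcp_ping(host: str, port: int, timeout: float = 2.0) -> bool:
    """Step 1: verify basic network connectivity to the DICOM port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def dicom_echo(host: str, port: int, called_ae: str = "PACS") -> bool:
    """Step 2: DICOM Verification (C-ECHO); assumes the pynetdicom library."""
    from pynetdicom import AE  # third-party; imported lazily on purpose
    ae = AE(ae_title="TEST_SCU")
    ae.add_requested_context("1.2.840.10008.1.1")  # Verification SOP Class
    assoc = ae.associate(host, port, ae_title=called_ae)
    if not assoc.is_established:
        # Association rejected: check IP, port, AE-Titles and the ACL list
        return False
    status = assoc.send_c_echo()
    assoc.release()
    return bool(status) and status.Status == 0x0000

# Usage (hypothetical address): tcp_ping("10.0.0.5", 11112), then
# dicom_echo("10.0.0.5", 11112, called_ae="MAIN_PACS")
```

If the TCP connection succeeds but the echo fails, you are past the network layer and into DICOM addressing or access-control territory.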

2.       Assuming you have connectivity but your images don’t show up on your PACS, the first line of defense is to check the logs on either side, i.e. client and server, or in DICOM lingo, SCU and SCP. The images at the PACS might have ended up “unverified” or “broken,” which means that there is something wrong with the metadata or header. It is most likely an Accession Number or ID integrity issue. Usually, these issues can be fixed with the standard tools available to the PACS administrator; however, in rare cases you might need access to the PACS database to find out what happened, and in some very rare cases you might need to do an off-line validation of the metadata to see what causes the issue. The off-line validation takes ages and runs a check against the DICOM data dictionary. There are several DICOM validators that do this; both David Clunie and DVTK provide one. In case the modality worklist does not show up, you again look at the logs and, as a last resort, use a DICOM sniffer to see where the communication has broken down. A good illustration of such a problem was an ultrasound unit from a major manufacturer which did not display the worklist; only after using the sniffer could we prove that the information was actually received by the modality, and therefore the fact that it was not displayed was a problem at that modality. I actually found out after running the validator that one of the worklist attributes had an illegal value and therefore the modality did not display it.
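The kind of integrity check a PACS runs before marking a study “verified” can be illustrated with a toy example; the field names follow DICOM keywords, and the order dict is a made-up stand-in for the HL7 order data:

```python
def integrity_issues(image_header: dict, order: dict) -> list[str]:
    """Compare key DICOM header fields against the order (e.g. from the RIS/EMR).

    Toy sketch of the checks a PACS performs before 'verifying' a study;
    real systems check many more attributes and apply matching rules.
    """
    issues = []
    # Type-1 style check: these must be present and non-empty
    for field in ("PatientID", "AccessionNumber", "StudyInstanceUID"):
        if not image_header.get(field):
            issues.append(f"missing {field}")
    # Integrity check: header must agree with the order
    for field in ("PatientID", "AccessionNumber"):
        if field in order and image_header.get(field) and image_header[field] != order[field]:
            issues.append(f"{field} mismatch: {image_header[field]!r} != {order[field]!r}")
    return issues

hdr = {"PatientID": "123", "AccessionNumber": "A999", "StudyInstanceUID": "1.2.3"}
print(integrity_issues(hdr, {"PatientID": "123", "AccessionNumber": "A100"}))
```

A study failing either check typically lands in the PACS “broken” queue for the administrator to reconcile.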

3.       Assuming you have a worklist at the modality, there might be information missing in the list, or there may be too many or too few entries, meaning that the attributes used to filter the list were not applied correctly. In that case you will have to work with the interface specialist to map the HL7 orders to the worklist. Filters that determine which worklist items are displayed typically include the Modality, Scheduled AE-Title and/or Station Name. These have to be mapped from procedure codes, patient location and other elements in the HL7 order message.
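The filtering logic can be sketched as follows; the entries and AE-Titles are made-up examples, and each entry is a dict keyed by DICOM keyword:

```python
def filter_worklist(entries, modality=None, scheduled_ae=None, station_name=None):
    """Apply the same filters a modality sends in its worklist query.

    None means 'do not filter on this attribute', mirroring an empty
    (universal) matching key in a C-FIND request.
    """
    def matches(e):
        return ((modality is None or e.get("Modality") == modality) and
                (scheduled_ae is None or e.get("ScheduledStationAETitle") == scheduled_ae) and
                (station_name is None or e.get("StationName") == station_name))
    return [e for e in entries if matches(e)]

worklist = [
    {"PatientID": "1", "Modality": "CT", "ScheduledStationAETitle": "MAIN_CT1"},
    {"PatientID": "2", "Modality": "MR", "ScheduledStationAETitle": "MAIN_MR1"},
    {"PatientID": "3", "Modality": "CT", "ScheduledStationAETitle": "ER_CT2"},
]
print(filter_worklist(worklist, modality="CT", scheduled_ae="MAIN_CT1"))
```

If the HL7 interface maps the wrong value into, say, ScheduledStationAETitle, the filter silently drops those entries, which is exactly the "too few entries" symptom described above.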

4.       Assuming you are able to look at an image on a workstation, there could still be a display issue with the image ordering and viewport positioning, which is typically determined by series and study descriptions as well as orientation information. If there is an image quality issue, there could be a problem with the pixel interpretation pipeline. The latter can be tested by using the test set developed for the IHE display protocol, which has every possible permutation and combination of image types, photometric interpretations, presentation states, look-up tables and other parameters impacting the display.

After troubleshooting these issues it should work! Congratulations on a job well done. Remember, with the proper training and tools you are empowered to solve these kinds of tricky issues by yourself, instead of having to rely on your vendors, who in many cases resort to finger-pointing at each other. That is one of the most frequent reasons that IIP professionals show up for our training classes, in addition to getting additional career opportunities. Hope to see you at one of our training classes, see our schedule here.

Monday, November 18, 2019

DICOM Modality Installation Checklist.

One of the typical responsibilities of a PACS administrator is adding a new image acquisition
modality to the PACS system. It is also one of the more challenging tasks as it is often hard to predict how the device will interact as these are still not quite “plug-and-play.” To make it worse, this is often a visible and highly anticipated task, as in many cases the new modality has been expected for a long time. So, when it finally arrives at the loading dock, users want to see it up and working as soon as possible.
With proper preparation prior to and during the actual installation, the success rate of the install can be increased and the time to get it up and running can be greatly reduced and frustration kept to a minimum.

This is the check list I recommend prior to the install:
1.       Do a “paper validation” between the modality and its connections, i.e. DICOM worklist provider, DICOM destination(s) for image Store, Storage Commitment, Modality Performed Procedure Step, and Structured Reports. Get the DICOM conformance statements for these devices and compare them against each other. Make sure you get the right version of these conformance statements as functionality can differ substantially between different releases. Specifically look for the following in these documents:
a.       Make sure that there is support for the type of DICOM files (SOP Classes) you will be exchanging. Be aware of and look for support of the new “enhanced” SOP Classes such as for CT, MR, Angio, RF, breast tomosynthesis, IV-OCT and others.
b.       If you want to compress the images at the modality, make sure there is support of the type of compression at the source and destination(s) (JPEG lossless, lossy, Wavelet, MPEG for video, etc.)
c.       If you want to use Storage Commitment, make sure its behavior between the SCU and SCP matches with regard to the handling of the associations for the reply.
d.       If you want to use Modality Performed Procedure Step (MPPS), make sure that the implementation matches your workflow, for example, you don’t want to have MPPS report the study being complete if there are still images to be sent, processed, or imported.
e.       Match the worklist attributes between the modality and worklist provider and look for alternate mapping in case attributes might be missing on the modality side. An example would be to map missing patient weight or allergies in a Patient Comment field if that is required at the modality but not displayed.
2.       Do a “file validation” by asking the vendor to send you a CD with images, making sure that each type of image is on the CD. In addition, get sample Structured Reports, such as dose reports for CT or measurements for ultrasound and echo. Import these files on a test PACS, Voice Recognition and Dose management system and verify proper display of the images and measurements.  Make sure that the hanging protocols work at the workstations and if not, troubleshoot it to find what the cause is (study descriptions, body part, etc.)
3.       Do an “install validation” by using a modality simulator that is able to query a worklist using the same attributes as used by the new modality and simulate Store for the various file types to the test PACS. Simulate the Storage Commitment and MPPS as well. There are commercial modality simulators available (e.g. OT-DICE) as well as open source ones (DVTK). When doing the simulation, use the same IP address, port and AE-Title that the new modality will be using. It is strongly recommended to use best practices for the AE-Titles and port numbers, i.e. use an all-caps AE-Title that indicates the institution, location and modality, and use the standard port number (11112) as assigned by IANA to DICOM devices. Work with IT so that you get a new, fixed IP address assigned for the new modality and make sure they configure the VLAN and routers to allow access.
If you have taken all these precautions, you should be able to swap out the simulator for the actual device, and the chances are that it might be “plug-and-play” assuming you addressed all the issues during the pre-install phase.
However, if it still does not work, you might want to do some troubleshooting using the tools as described in part 2 of this post.

Tuesday, October 22, 2019

What is your Enterprise Imaging Strategy: Bottom-up or Top-down?

Enterprise imaging is the latest trend; however, a well-working solution can be challenging to implement. The most common challenge is the difference in the various workflows. To implement an enterprise solution, three methods can be used:
  1. A "top-down" approach – This model implements a vendor-neutral archive (VNA) for radiology, cardiology, and several other departments, all pretty much at the same time. The problem is that every department has a different workflow. Some use the DICOM Modality Worklist, some use an HL7 feed, and some don’t use any prior order or worklist mechanism, causing the images to be reconciled with the patient information after the fact. As a matter of fact, if you look at all of the different possible combinations, there are more than 100 different scenarios, as described here. Using the top-down approach will set you up for chaos if there are no well-defined workflow options and you let every department decide on their “favorite” workflow.
  2. A "bottom-up" approach – This model, which was used for example at Stanford University Healthcare, implements a VNA beginning with one department, and then adds other departments using the same workflow. This solution results in an initial struggle to adapt everyone to the same workflow but as soon as people start to see the results, everyone will get excited and be ready to tackle the next one. As of today, Stanford has several departments on-line, but everyone uses the same workflow. Interestingly enough, radiology was not the first department they started with. Also note that this is a multi-year process, which will take longer than the top-down approach.
  3. A hybrid approach – This method, which was adopted for example at the Mayo Clinic, is a combination of both approaches, as it might not be feasible to have everyone using exactly the same workflow. In this particular institution they have identified five distinct workflows, and all of the new departments pick from these five options. These options include the traditional order-based workflow, in their case driven by their EMR; the non-order-based workflow using Patient ID look-up for demographics and creating an EMR order after the fact; DICOM wrapping of JPEGs; and a couple of variants.
Therefore, to implement enterprise imaging, don’t go for the top-down approach, as it can be chaotic when each department uses a different workflow; rather, develop a handful of workflows (preferably 3 but no more than 5) and steer the departments to these options. The options will be different for most institutions, as each one has a different IT infrastructure and different access to patient order and encounter information. Before starting your implementation, develop these workflow options and spend time testing them to see how they work; if they do, stick with them, even though it will require users to change their behavior. As soon as they see the benefits, it will be successful.
For a video version of this presentation, including some thoughts on recent developments in PACS technology, see the live interview here.

Monday, October 14, 2019

Volunteering in Africa

Black Lion hospital CT/MR pavilion

I stepped out of the hotel lobby in Addis Ababa, Ethiopia, to a tropical downpour. No way would I have been able to walk to the hospital without being totally soaked, including my backpack with my laptop. The doorman saw my desperate look and told me to wait, as he was talking with a gentleman in a nice car waiting in front of the hotel. He then told me to step in and that he would take care of it. I told the driver that I was on my way to teach in the local hospital and we had a nice conversation while he made sure I arrived dry and safely. When I wanted to pay him, he refused, saying, “Thank you for what you do for my country.”

This is the kind of experience you can expect when working in a developing country as a volunteer. Not only do you make a big difference by spending your time and sharing expertise, but it is also very rewarding, and excellent “feel-good” therapy. The people you interact with greatly appreciate your contribution; not only the professionals that directly benefit from the shared knowledge, but many others that you encounter on the street or at your hotel.

In this particular trip, I was doing a RAD-AID sponsored IT assessment of the PACS system at the Black Lion Hospital in Addis Ababa. We were trying to solve a number of issues, including: image quality issues with MRI images coming up unreadable at the PACS, figuring out how to connect their home-grown EMR to get a worklist going at the modalities, installing a teaching file solution, and trying to address several other small issues that they were encountering. In the week prior to that I taught a PACS bootcamp to 13 PACS administrators in Dar es Salaam, Tanzania, which was very well received. I like nothing better than the “Aha, is that how it works?” glint in the eyes of these professionals.

Teaching PACS bootcamp in Dar es Salaam
People sometimes ask me how it is to teach or work with healthcare professionals in developing countries, and I tell them that it is not any different than teaching in the US or any other country. There are smart and eager-to-learn people everywhere. The problem in developing countries is that there is very poor or no support from the vendors that provide the equipment as they don’t spend time and effort to create a support structure with well trained engineers. Therefore, the hospital staff often has to figure out the issues by themselves, which is why training by organizations such as RAD-AID and the SIIM Global ambassador program is so important and makes such a big difference.

I would encourage each and every SIIM member to consider volunteering. I know it might be somewhat out of your comfort zone, but I can guarantee you that not only will it make a major difference on the receiving side, it will be equally rewarding for you as a person, as you will grow and gain new experiences. I myself am definitely hooked and can’t wait for my next assignment. I’ll do this as long as I am able, and I’m thankful to SIIM for supporting such a great cause.

Wednesday, September 18, 2019

Different levels of AI applications in diagnostic imaging

There are different levels where AI can be applied in radiology as well as other diagnostic imaging applications, depending on the step in the workflow from acquisition to interpretation, post processing and analysis.

The first level is at the image acquisition level. For example, one of the challenges when doing a CT scan is to have the patient center coincide with the center of the radiation beam, which results in optimal dose distribution and corresponding image quality. In addition to the patient being centered, the distribution of the radiation dose depending on the body part is also important, e.g. a lower dose for the head than for the pelvis. Instead of having a technologist make an educated guess, the machine can assist with this and automate the positioning process, again to optimize dose, which means not using more than necessary.
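The auto-centering idea can be illustrated with a toy calculation: find the centroid of the patient silhouette in a scout image and compute how far it sits from the isocenter (all values below are made up for illustration):

```python
def mask_centroid(mask):
    """Centroid (row, col) of a binary patient mask from a scout image."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def table_offset(mask, iso_row, iso_col):
    """How far (in pixels) the table must move to put the patient at isocenter."""
    r, c = mask_centroid(mask)
    return (iso_row - r, iso_col - c)

# Patient silhouette sitting low-right in a 5x5 scout image:
scout = [[0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 1, 1],
         [0, 0, 0, 1, 1],
         [0, 0, 0, 0, 0]]
print(table_offset(scout, 2.0, 2.0))
```

A real system would derive the mask from a camera or scout scan and convert pixels to millimeters of table travel, but the geometry is the same.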

De-noising of images is also an important feature. Typically, with lower-radiation techniques, more noise is created, which can compromise a diagnosis and ultimately patient care. This is especially true for screening, where there is no direct indication to perform a CT study and limiting dose is important. An algorithm can be taught what noise looks like in a typical low-dose image and use that knowledge to apply image processing that removes the noise, allowing a lower-dose technique to be used. The same principle is used to remove common artifacts such as those created by metal parts in an X-ray. If the algorithm is taught how a typical artifact shows up in an image, it can remove it or, at a minimum, reduce it, thus improving image quality and contributing to a better diagnosis.
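As a classical stand-in for what a learned de-noiser does, a simple median filter already shows the principle of removing isolated noise while preserving the underlying signal (a toy example, not an AI algorithm):

```python
from statistics import median

def median_filter3x3(img):
    """Remove salt-and-pepper noise: replace each interior pixel with the
    median of its 3x3 neighborhood (edge pixels left untouched for brevity)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            block = [img[rr][cc] for rr in (r - 1, r, r + 1)
                                 for cc in (c - 1, c, c + 1)]
            out[r][c] = median(block)
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],   # a single 'salt' noise pixel
         [10, 10, 10]]
print(median_filter3x3(noisy)[1][1])
```

A trained network goes further: it learns what the noise of a specific low-dose technique looks like and removes it without blurring fine anatomical detail the way a fixed filter can.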

An important application for AI is regulating the workflow, i.e. determining which cases should be considered “urgent,” aka STAT, based on automatic abnormality detection. These cases would be bumped to the top of the worklist to be seen by the radiologist.
The opposite is true as well: some of the images could be considered totally “clear,” i.e. having no indication and therefore not needing to be seen by a radiologist. This is useful in mass screenings, e.g. for TB among immigrants, or black lung disease for people working in coal mines. These “normal” cases could be eliminated from a worklist.
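A minimal sketch of such score-based worklist triage; the thresholds and scores are arbitrary illustrations:

```python
def triage(worklist, stat_threshold=0.8, normal_threshold=0.05, screening=False):
    """Reorder a reading worklist by an AI abnormality score.

    Cases at or above stat_threshold float to the top as STAT; in a
    screening setting, cases below normal_threshold are removed entirely.
    """
    if screening:
        worklist = [c for c in worklist if c["score"] >= normal_threshold]
    # Sort: STAT cases first, then by descending abnormality score
    return sorted(worklist, key=lambda c: (c["score"] < stat_threshold, -c["score"]))

cases = [{"id": "A", "score": 0.02},   # likely normal
         {"id": "B", "score": 0.95},   # likely STAT
         {"id": "C", "score": 0.40}]   # routine
print([c["id"] for c in triage(cases, screening=True)])
```

In a diagnostic setting nothing is dropped, only reordered; in a screening setting the near-certain normals never reach the worklist at all.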

The next level of AI is at the post-processing and reading level. CAD is probably the most common form of AI, where an image is marked using an annotation indicating a certain finding, which serves as a “second opinion.”

AI can also increase productivity dramatically by assisting in creating a report. Macros can be used to automatically create sentences for common findings, again based on learning what phrases a user would typically use for a certain indication.
Standard measurements such as those used for obstetrics can be automated. The algorithm can detect the head and automatically indicate its circumference and diameter, which are standard measurements to indicate growth.

One of the labor-intensive activities is the annual contouring of certain anatomical parts such as the optical nerve in skull images. This contouring is used by radiation therapy software to determine where to minimize radiation to prevent potential damage. Automating the contouring process could potentially save a lot of time.
Automatic labeling of the spine vertebrae for the radiologist saves time as well and could also improve accuracy. The time savings might only be seconds, but it adds up when a radiologist reviews a large number of such cases.
Determining the age of a patient from an X-ray, such as one of the hand, is a good example of quantification; another example is measuring the amount of calcium in a bone, which can indicate potential osteoporosis.

Some indications are characterized by a certain number of occurrences within a particular region, for example the number of "bad cells" indicating cancer in a certain area when looking at a tissue specimen through a microscope or, in the case of digital pathology, displayed on a monitor. Labeling particular cells and automatically counting them offers big time savings for a pathologist.

One of the frequent complaints heard about workstation functionality is that the hanging protocols, i.e. how the images are organized for a radiologist, are often cumbersome to configure and do not always work. AI can assist by providing "self-learning" hanging protocols based on radiologist preferences, and by being more intelligent in determining the body part, which in turn determines what hanging protocol is applicable.
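A "self-learning" hanging protocol can be as simple as remembering which layout a radiologist picks per modality and body part, and proposing the most frequent one next time. This is a minimal sketch; the class, layout names and keys are all illustrative assumptions, not any vendor's API.

```python
from collections import Counter, defaultdict

# Minimal sketch of a preference-learning hanging protocol selector.
# All names (layouts, modalities) are made up for illustration.

class HangingProtocolLearner:
    def __init__(self):
        self._choices = defaultdict(Counter)

    def record(self, modality, body_part, layout):
        """Remember the layout the radiologist actually used."""
        self._choices[(modality, body_part)][layout] += 1

    def suggest(self, modality, body_part, default="1x1"):
        """Propose the most frequently chosen layout, else a default."""
        counts = self._choices[(modality, body_part)]
        return counts.most_common(1)[0][0] if counts else default

learner = HangingProtocolLearner()
learner.record("CT", "CHEST", "2x2_prior_left")
learner.record("CT", "CHEST", "2x2_prior_left")
learner.record("CT", "CHEST", "1x2")
```

A real implementation would also key on study description and prior availability, but the core is the same: observe, count, and predict.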

As AI becomes integrated in the workflow, the expectation is that it is "always on," meaning that it operates seamlessly in the background, without a user having to push any buttons or launch a separate application to get an AI "opinion."

Another challenge is making sure that relevant prior studies are available, which might need to be retrieved from local and/or remote image sources, for example from a VNA or the cloud. AI can assist by learning which prior studies are typically used as a comparison and performing an intelligent discovery of where they might be archived.

Not only do radiologists want to see prior imaging studies, but also additional medical information that might be stored in an Electronic Health Record or EMR, such as lab results, patient history, medications, etc. Typically, a radiologist would have access to that information, especially as most PACS systems are migrating to become EMR-driven. For teleradiology companies, however, the lack of access to EMR data is a major issue, and here too AI might be able to assist.

AI is just starting to make an impact; we have only seen the tip of the iceberg, but it is clear that major improvements can be made using this exciting technology.

Wednesday, September 11, 2019

The evolution of PACS through the years.

PACS systems have evolved quite a bit over the past 25 years. This essay provides the background of where PACS started, where we are now and where we are headed. I am covering the four essential PACS components: the P for Picture (viewing), A for Archiving and image and information management, C for Communication and S for System.

Regarding the P for pictures: in the first generation of view stations, the software was not very sophisticated and had only basic functionality, and the viewers were thick clients, meaning that the images had to be downloaded to the local workstation and all of the processing was done locally. These view stations mimicked a film alternator, both in size and functionality, mostly displaying images in a landscape format.

By the second generation, radiologists discovered that they did not really need 8 monitors but could view cross-sectional studies using "stacking," virtually integrating the 3-D in their minds. The viewers added more sophisticated hanging protocols, aka DDP's or Default Display Protocols, a term that refers back to how films were "hung" on a light box. How the images are sorted can depend on the modality (e.g. mammography), body part (e.g. chest or extremity), specialty (e.g. neuro) and individual preferences. Re-arranging and sorting through literally hundreds of images in the case of a cross-sectional study such as a CT or MRI is a burden for the radiologist and takes time. Inconsistent display can also be a cause of medical errors: imagine that the new study is always displayed on the top of a monitor and the prior one on the bottom, and that for some reason this is reversed; this could cause the radiologist to report the wrong study. Voice aka speech recognition has become routine. Some studies, initially mammography, are subjected to Computer Aided Diagnosis (CAD), which creates a "second opinion" for the radiologist by marking the images with CAD marks for clinical findings.

The 3rd generation workstations accommodate different specialties in addition to radiology, such as cardiology, ophthalmology, dermatology, and others, commonly referred to as "ologies." The viewer becomes a universal viewer, which instead of a thick client is now a thin client that does not leave any trace of patient information after the user has logged out, aka a "zero footprint" viewer. Some modalities create images and/or studies with huge file sizes in excess of 1 gigabyte, which makes it more efficient to do what is called "server-side" rendering, whereby the viewer functions as a remote window to a server that performs the processing.

The fourth generation of viewers implements web services that also allow for mobile access, i.e. viewing the images from a mobile device, whether a tablet or smart phone, using the DICOMWeb protocol. What used to be called CAD is now replaced with Artificial Intelligence or AI, which spans the detection of many more diseases in addition to automating the workflow for the radiologist. As an example, AI can detect a critical finding and automatically bump the study to the top of the worklist. It can also remember and learn physician preferences and support his or her workflow.

The next component of the PACS is Archiving and image and information management. The early generations of PACS were limited by the cost of archive media. Most systems would archive studies past a certain age on a second or third tier of slower and less expensive media, such as magneto-optical disks or tape, or even store them off-line.

In the second generation, big Storage Area Networks (SAN) and Network Attached Storage (NAS) devices were introduced, built on multiple Redundant Arrays of Inexpensive Disks (RAID), which is still the most common configuration. Because of natural disasters and hardware failures, most hospitals learned the hard way that redundancy and backup are critical, so most of these archive systems by now have at least one mirrored copy and a sound backup. CDs became the standard for image exchange between physicians.

In the 3rd generation, data migration as well as life cycle management became a major issue. Many hospitals replacing their PACS vendor found out that it is really hard, costly and lengthy to migrate their images to another vendor's archive. They started looking for remote storage solutions, i.e. Storage Service Providers (SSP's), or buying a Vendor Neutral Archive (VNA) to take control over their image archive and avoid being dependent on and locked in by a single PACS vendor. Some hospitals went all the way and deconstructed their PACS by buying workstations, workflow managers and routers in addition to their VNA, building their own PACS more or less from scratch. Cloud providers are making inroads, and life cycle management is becoming important, as not every hospital wants to store all studies forever; many want to implement retention rules.

The fourth generation will see a shift to virtual storage, i.e. you won't know or need to know where the images are archived, whether in the cloud or locally, in which case it is most likely on solid-state memory, providing very fast and reliable access. Images are now archived from anywhere in the enterprise, whether from a camera in the ER, a Point of Care (POC) ultrasound at the bedside, or a video camera in physical therapy. The boundaries between documents and images are getting blurred: some institutions store everything on one server, others use two distinct information management systems. Cyber security is a major concern, as malware is becoming a real threat and ransomware has already caused major downtimes, requiring strict security policies and mechanisms to protect the data.

The communication part of PACS has undergone some major changes as well. Initially, each PACS had its own dedicated network, because sending images over the existing infrastructure would bring down the complete network. Speeds were up to about 100 megabit/second, which was OK for the relatively small image and study sizes. The second generation of networks was upgraded to fiber instead of copper wire, allowing speeds in excess of 1 gigabit/second. Network technology advanced, allowing the PACS networks to become part of the overall hospital infrastructure by reconfiguring the routers and creating Virtual Local Area Networks (VLAN's). The third generation of network technology started to replace CD exchange with cloud-based image exchange using brokers, i.e. having a third party take care of your information delivery to patients as well as physicians. In the fourth generation, we see the introduction of web services in the form of FHIR and DICOMWeb, allowing for distribution on mobile devices; new profiles are needed to deal with encounter-based imaging instead of order-based imaging using universal worklists; and security has become a major concern, requiring firewalls, DMZ's to screen outside connections, and cyber security monitoring tools.

The fourth component of the PACS is the "System" component, which mostly includes workflow support, of which there was initially very little. In the second generation, there was a shift from PACS-driven to RIS-driven worklists, and IHE started to make an impact by defining multiple use cases with their corresponding HL7, DICOM and other standards. In the 3rd generation, the annual IHE Connectathons have made a major impact, as they provide a neutral testing ground for proving that these IHE profiles really work. The worklists at the radiologist are becoming EMR-driven, and orders are placed using a Computerized Physician Order Entry (CPOE) system, often at the EMR. In the latest generation, we see cross-enterprise information exchange starting to take place using IHE profiles such as XDS, in a secure manner, making sure that consents are in place and that authentication and audit trails are utilized per the ATNA profile. Patients are also able to upload information from their Personal Health Records (PHR) and wearables.

As you can see, we have come a long way since the early PACS days and we still have a bright future ahead of us. I am sure in another 5 years there will be some more changes to come.

Monday, August 5, 2019

DICOM Cyber security threats: Myths and Truths.

A report by Cylera labs identified a potential cyber security threat in DICOM files that are exchanged on media such as CD, DVD, flash or through email, as well as through DICOM web service communications (DICOMWeb).

The threat was taken seriously enough by the DICOM committee that it issued a FAQs document to address this potential issue. This threat exploits the additional header that is created for media, email and web exchange. Before discussing the potential threat and what to do about it, let’s first discuss what this header looks like and how it is used.

Media exchange files have an additional header, aka the File Meta header. A part-10 file consists of:
1.  A 128-byte preamble
2.  The characters DICM to identify that what follows is encoded in DICOM format
3.  Additional information that is needed to process the file, such as the file type, encoding (transfer syntax), who created this file, etc.
4.  The regular DICOM data set.

This additional information (3) is encoded as standard DICOM tags, i.e. Group 0002 elements. After the Group 0002 elements, the actual DICOM data set, which normally would be exchanged using the DICOM communication protocol, begins. This encapsulation is commonly referred to as "part-10" encoding because it is defined in part 10 of the DICOM standard.
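The layout above is easy to check programmatically. This is a minimal sketch using only the standard library; it builds a synthetic byte stream rather than reading a real file, and it stops at the "DICM" marker without parsing the Group 0002 elements.

```python
# Minimal part-10 structure check: a 128-byte preamble followed by the
# four characters "DICM", then the Group 0002 File Meta elements.

def is_part10(data: bytes) -> bool:
    """True if the byte stream carries the part-10 'DICM' marker."""
    return len(data) >= 132 and data[128:132] == b"DICM"

def preamble(data: bytes) -> bytes:
    """Return the 128-byte preamble of a part-10 stream."""
    return data[:128]

# Synthetic example: zeroed preamble + DICM marker + start of group 0002
blob = b"\x00" * 128 + b"DICM" + b"\x02\x00\x00\x00UL\x04\x00"
```

Note that a plain network-transfer DICOM data set would fail this check, which is exactly the point: only media/exchange files carry the preamble discussed below.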

The potential cyber security threat mentioned in the article involves the 128-byte preamble, as there are no real rules about what it might contain or how it is formatted. The definition of this area is that it is for Application Profile or implementation-specific use. The initial use was for early ultrasound readers, but more recently it is generally used for TIFF encoding so that a file can have a "dual personality," i.e. it can be decoded by a TIFF reader as well as a DICOM reader. The DICOM reader will simply skip the preamble and process the file accordingly. In the case of TIFF encoding, the preamble will have the TIFF identifiers, i.e. 4 bytes that contain "MM\x00\x2a" or "II\x2a\x00", and additional instructions to decode the file structure. This application seems to have some traction with pathology vendors, who have been very slow in implementing the DICOM whole slide image file set as described by David Clunie in a recent article, and it could potentially be used by researchers. If not used by a specific implementation, all bytes in this preamble shall be set to 00H, as can be seen in the figure.

The definition of this preamble was identified as a “fundamental flaw in the DICOM design” in the Cylera article mentioned earlier. This assertion was made due to the fact that attackers could embed executable code within this area. This would allow attackers to distribute malware and even execute multi-stage attacks.

In my opinion, this "flaw" is overrated. First of all, the preamble was designed with a specific purpose in mind, allowing multiple applications to access and process the files, and, if not used accordingly, it is required to be set to zeros. Furthermore, a typical DICOM CD/DVD reader will import the DICOM file, stripping off the complete meta header (preamble, DICM identifier and Group 0002), potentially coercing patient demographics and study information such as the accession number, before importing it into the PACS.

If for whatever reason the import software wants to copy the DICOM file as-is, i.e. including the meta header, it could check for the presence of non-zero bytes in the preamble and, if found, either reject or quarantine the file or overwrite the preamble with zeros. The latter would impact potential "dual-personality" files, but the software could check for the presence of the TIFF header and act accordingly, making an exception for those very limited use cases (how many people are using pathology and/or research applications today?). Last but not least, don't forget that we are only discussing a potential flaw with DICOM part-10 files, which are limited to exchange media; there is nothing to fear for the regular DICOM exchange between your modalities, PACS and view stations, as those transfers don't include the File Meta header.
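The import-time check suggested here fits in a few lines. This is a hedged sketch, not a security product: it accepts an all-zero preamble, recognizes the legitimate TIFF "dual personality" magic bytes, and flags anything else for quarantine.

```python
# Sketch of a preamble check at import time: clean, TIFF, or quarantine.

TIFF_MAGICS = (b"MM\x00\x2a", b"II\x2a\x00")  # big/little-endian TIFF

def classify_preamble(preamble: bytes) -> str:
    """Classify a 128-byte part-10 preamble."""
    if preamble == b"\x00" * 128:
        return "clean"
    if preamble[:4] in TIFF_MAGICS:
        return "tiff"       # dual-personality file; allow only if expected
    return "quarantine"     # unexpected content: reject or zero it out

result = classify_preamble(b"\x00" * 128)
```

An importer that strips the meta header anyway can skip this entirely; the check only matters when files are passed along verbatim.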

But, to be honest, anything in a file that is for implementation-specific use or is proprietary is potentially subject to misuse. There are Z-segments defined in HL7, private tags in DICOM, and even a "raw data" object in DICOM that can contain anything imaginable. These additional structures were not design flaws but rather were defined for very specific business reasons. The good news is that HL7 FHIR will do away with Z-segments, as they are replaced with strictly defined extensions governed by conformance rules, but in the meantime we will be dealing with proprietary extensions for many years. Consequently, you had better know where your messages originate and whether the originator has its cyber security measures in place.

In conclusion, the possibility of embedding malware in the DICOM preamble is limited to media exchange files, where it is easily detectable and in almost every case stripped off anyway prior to import. There are definitely vulnerabilities with any implementation-specific or proprietary additions to standard file formats. Knowing the originator of your files and messages is important; if there is any suspicion, run a virus scanner, have the application strip off and/or replace any proprietary information, and never ever run an executable that could be embedded within these files.

Is it an Image or a Document? Discussing the “grey area” of overlap between images and documents.

There is a major increase in images to be managed by enterprise imaging systems. It is critical to decide how to format the images and documents (DICOM or native?) and where to manage them (EMR, PACS/VNA, document management system, other?). Below are some thoughts and recommendations you might consider.

Digital medical imaging used to be confined to radiology and cardiology, and on a smaller scale to oncology. Images were created, managed and archived within these departments. If you wanted to see them you would need to access the image management system (PACS) for that department.
Over the past decade, new image sources started to appear: for example, images taken during surgery through a scope, videos of endoscopic procedures recorded by gastroenterologists, retinal images recorded by ophthalmologists, and digital pathology imaging used by pathologists. Point of care (POC) ultrasound also began to be used increasingly, and now there are intelligent scanning probes available that can connect to a smart phone or tablet.

As the sources of imaging grow, the volume of imaging is growing exponentially. Talking with informaticists at major hospitals, it seems there are new image sources every week, whether it is in the ER where people are taking pictures for wound care or during surgery to assist anesthesiologists.
Good examples of the type of imaging that typically takes place outside the traditional radiology and cardiology domain were shown in a recent webcast on encounter-based imaging workflow. In his presentation, Ken Persons from the Mayo Clinic notes that they have literally hundreds of alternate imaging devices that create tens of thousands of images per month that need to be archived and managed.

Departments that never recorded images before are now doing so, such as physical therapy recording videos of changes in gait after back surgery. In addition to this avalanche of images generated by healthcare practitioners, soon there will be images taken by patients themselves that need to be kept, e.g. of a scar after surgery once they have been sent home. This will replace in-person follow-up exams, which will save time and effort and be more efficient. Managing these images has become a major challenge and has shifted from departmental systems to enterprise image management systems, i.e. from PACS to VNA's.

How is non-image data managed? Textual data such as patient demographics, orders, results and billing information is exchanged through interface engines, which connect the 100+ computer systems in a typical mid-size hospital. Over the past 5-10 years, Hospital Information Systems (HIS) and departmental systems dedicated to radiology (RIS), cardiology (CIS) and other departments are being replaced by Electronic Medical Record systems (EMRs), and information is accessed in a patient-centric manner.

A physician now has a single log-on to the EMR portal and can access all the clinical text-based information as well as images. Textual information can be stored and managed by an EMR, e.g. for a lab result as discrete information in its database, or linked to as a document, e.g. a scanned lab report or a PDF document. In addition to these documents being managed in the EMR, they can also be managed and stored in a separate document management system with an API to the EMR for retrieval.

There is no single solution to the problem of where to manage (i.e. index and archive) diagnostic radiology reports. Their formats vary widely, as discussed in a related post on report exchange on CD's. In addition to standardized formats such as DICOM SR's and Secondary Capture, additional formats appeared, including XML, RTF, TXT and native PDF's. Not only do the diagnostic report formats differ, but also where they are managed. The reports could have been stored in departmental systems (RIS) or in some cases by a broker. A case in point is the AGFA (initially MITRA) broker (now called Connectivity Manager), which functions as a Modality Worklist provider and in many institutions is also used to store reports. In addition, reports could reside temporarily in the voice recognition system, with other copies in the RIS, EMR and PACS. This causes issues with keeping amendments and changes to these documents in sync across the various locations.

Before the universal EMR access, many radiology departments would scan in old reports so they could be seen on the radiology workstation, in addition to scanning patient waivers and other related information into their PACS. This is still widely practiced, as witnessed by the proliferation of paper scanners in those departments. These documents are converted to DICOM screen-saves (Secondary Capture) or, if you are lucky, to DICOM encapsulated PDF's, which are much smaller in file size than the Secondary Captures. With regard to MPEG's, for example of swallow studies, a common practice is to create so-called multiframe Secondary Capture DICOM images. All of this DICOM "encapsulation" is done to manage these objects easily within the PACS, which provides convenient access for a radiologist.

The discussion about images and documents poses the question of what the difference is between an image and a document, which also determines whether the "object" is accessed from an image management system (PACS/VNA), implying that it is in a DICOM format, or from a document management system (a true document management system, or RIS, EMR), which assumes either an XDS document format (using the defined XDS metadata) or some other semi-proprietary indexing and retrieval system. Note that there are several VNA's that manage non-DICOM objects, but for the purpose of this discussion it is assumed that a PACS/VNA manages DICOM-only objects.
In most cases, the difference between images and documents is obvious, for example, most people agree that a chest X-ray is a typical example of an image, and a PDF file is a clear example of a document, but what about a JPEG picture taken by a phone in the ER, or an MPEG video clip of a swallow study? A document management system can manage this, or, alternatively, we can “encapsulate” it in a DICOM wrapper and make it an image similar to an X-ray, with the same metadata, being managed by a PACS system.

What about an EKG? One could export the data as a PDF file, making it a document, or alternatively maintain the original source data for each channel and store it in a DICOM wrapper so it can be played back in a DICOM EKG viewer. By the way, one can also encapsulate a PDF in a DICOM wrapper, which is called an "encapsulated PDF," and manage it in a PACS. Lastly, one could take a diagnostic radiology report and encapsulate it as a DICOM Structured Report, or take an HL7 version 3 CDA document, e.g. a discharge report, encapsulate it in a DICOM wrapper, and store it in the PACS.

All of which shows that there is a grey area with overlap between images and documents, whereby many documents and other objects could be considered either images, or a better word is DICOM objects and managed by the PACS, or alternatively considered documents and managed by a document management system. Imagine you would implement an enterprise image management and document management system, what would your choices be with regard to these overlapping objects?
 Here are my recommendations:
1. Keep PDF’s as native PDF documents, UNLESS they are part of the same imaging study. For example, if you have an ophthalmology study that includes several retinal images and the same study also creates pdf’s, it would be easier to keep them together which means encapsulating the PDF as a DICOM object. But if you have a PDF for example, from a bone densitometry device, without any corresponding images, I suggest storing it as a PDF.
2.  Use the native format as much as possible:
a. There is no reason to encapsulate a CDA in a DICOM or even a FHIR document object, conversions often create loss of information and are often not reversible. Keep them as CDA’s.
b. Manage JPEG’s and MPEG’s (and others, e.g. TIFF etc.)  as “documents.” As a matter of fact, by using the XDS meta-data set to manage these you are better off because you also are able to manage information that is critical in an enterprise environment such as “specialty” and “department,” which would not be available in the DICOM metadata.
c. Use DICOM encoded EKG’s instead of the PDF screenshots.
d. Stay away from DICOM Secondary Capture if there is original data available, remember that those are “screenshots” with limited information, specifically, don’t use the Screen-Captured dose information from CT’s but rather the full fidelity DICOM Structured Reports which have many more details.
3. Stop scanning documents into the PACS/VNA as DICOM secondary capture and/or PDF’s, they don’t belong there, they should be in the EMR and/or document system.
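To make recommendation 2b above concrete, here is a hedged sketch of what an XDS DocumentEntry metadata set for an MPEG clip might look like. The attribute names (mimeType, classCode, practiceSettingCode, etc.) come from the XDS DocumentEntry definition, but the coded values, patient ID and layout as a plain dictionary are illustrative placeholders, not official codes or a real registry interface.

```python
# Illustrative XDS DocumentEntry metadata for a swallow-study video
# managed as a "document." Values are placeholders, not official codes.

mpeg_entry = {
    "mimeType": "video/mpeg",
    "classCode": "Imaging",                     # placeholder display name
    "typeCode": "Swallow study video",          # placeholder
    "practiceSettingCode": "Gastroenterology",  # the "specialty"
    "healthcareFacilityTypeCode": "Hospital",
    "formatCode": "mpeg",                       # placeholder
    "patientId": "PID-0001",                    # hypothetical identifier
    "creationTime": "20191206",
}
```

Note how practiceSettingCode and healthcareFacilityTypeCode carry exactly the enterprise-level context ("specialty," "department") that a bare DICOM header would not.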

An EMR is very well suited to provide a longitudinal record of a patient; however, none of the EMR's I know of will store images. Images are typically accessed via a link from the EMR to a PACS/VNA so that they can be viewed in the same window as the patient record on a computer or mobile device. In contrast, documents are often stored in the EMR, but these are typically indexed in a rudimentary manner, and most users hate going through the many documents that might be attached to a patient record to look for the one with the information they need. A better solution for document access is a separate enterprise document management system, which should be able to do a better job managing these.

Some VNA’s are also capable of managing documents in addition to images, preferably using the XDS infra-structure. As a matter of fact, if you are NOT using the XDS standard, but a semi-proprietary interface instead to store JPEG’s, MPEG’s and all types of other documents, you might have a major issue as you will be locked into a particular vendor with potential future data migration issues.

Also, be aware of the differences between XDS implementations. The initial XDS profile definitions were based on SOAP messaging and document encapsulation; the latest versions include web services, i.e. DICOMWeb for images and FHIR for documents. Web services allow images or documents to be accessed through a URL. Accessing information through web services is how pretty much all popular web-based information delivery happens today, e.g. at Facebook, Amazon, and many others. It is very efficient and relatively easy to implement.

Modern healthcare architecture is moving towards deconstructing the traditional EMR/PACS/RIS silos to allow for distributed or cloud-based image and information management systems. From the perspective of the user, who accesses the information through some kind of computer-based portal or mobile device, it does not really matter where the information is stored, as long as there is a standard "connection" or interface that allows access to either an image or a document using web services.

Right now is the perfect time to revisit your current architecture and reconsider how and where you manage and archive images and documents. Many hospitals have multiple copies of these objects stored in a format that does not make sense at locations that were dictated by having easy access to the data without considering whether they really belonged there. Instead of cluttering the current systems, especially when planning for the next generation of systems that are going to be FHIR and DICOMWeb enabled, it is important to index and manage your images and documents at the location where they belong in a format that makes sense.