Monday, December 10, 2018

RSNA2018: What’s in and what’s out.

Let it snow...
The annual radiology tradeshow at McCormick Place in Chicago started with a little hiccup, as the Chicago airports closed down on Sunday due to a snowstorm, which slowed the flow of attendees flying in on the second day to only a trickle. Note that the Sunday after Thanksgiving is the busiest travel day of the year, so it could not have come at a more inconvenient time. I myself was caught in this travel chaos, as I spent all of Monday in the Dallas airport while my plane was trying to get into the arrival queue for O’Hare.

The overall atmosphere at the show was positive, attendance seemed to be similar to last year and most vendors I talked with were optimistic. About a third of the attendees come to the meeting just for the continuing education offerings, but another third come to visit vendors and “kick the tires” and see what’s new. My objective is also to see what the new developments are and to do some networking to get an idea of what is going on in the industry. 

Here are my observations:

1.       Artificial intelligence dominated the floor – Over the past few years, AI has created some anxiety, with predictions that AI would replace radiologists in the near future. (A dedicated area just for AI showcased some 80 companies.) It seems that the anxiety has been relieved to a certain degree, but it has been replaced with a great deal of confusion about what AI really is, and with uncertainty about what the day-to-day impact could be.
A detailed description of the different levels of AI and the main application areas is the topic of an upcoming blog post, but it was clear that the technology is still immature. Despite the fact that there were 100+ dedicated AI software providers, in addition to many companies promoting some kind of AI in their devices or PACS, only a handful of them had FDA clearance. I also believe that the true impact of AI could be in developing countries that have a scarcity or even total lack of trained physicians. It is one thing to improve a physician’s detection of, let’s say, cancer by a few percent, but if AI could be used in a region that has no radiologists, then an AI application that can detect certain abnormalities would be a 100% improvement.
There could be some workflow improvements possible using AI in the short term, however, one should also realize that the window between conception and actual implementation could be 3-5 years. Users are not too anxious to upgrade their software unless there is a very good reason. So, in short, the AI hype is definitely overrated and I believe that we’ll almost certainly have autonomous self-driving cars before we have self-diagnosing AI software.

2.       Low dose CT scanning is becoming a reality – One of the near-term applications of AI allows the use of a fraction of the dose of a “normal” CT scan: instead of a typical 40 mAs technique, acceptable images are created using only 5 mAs. This could have a major impact on lung cancer screening. The product shown did not have FDA clearance (yet), but there is every reason to expect that it could be available one year from now. The algorithm was created using machine learning on a dataset of a million images to identify body parts in lung CTs and subsequently reduce the noise in those images, which allows for a significant dose reduction, claimed to be 1/20th of the typical dose.

3.       Cone beam CT scanners are becoming mainstream – Cone beam CT scanners were initially used primarily for dental applications, where the resulting precision and high-resolution images, especially in 3-D, are ideal for creating implants. However, their high resolution and relatively low cost also make them ideal for ENT applications, such as visualizing cochlear implants and inner ear imaging. They are also very useful for imaging extremities; again, their high resolution can show hairline fractures well and is superior to standard x-ray. I counted at least 5 vendors offering these types of products; they are being placed in specialty clinics (e.g. ENT) as well as large hospitals.

4.       Point-Of-Care (POC) ultrasound is booming – POC ultrasound is getting inexpensive (between US $2k-15k), which is affordable enough to put one in every ambulance, in the hands of every emergency room physician, and even with physicians doing “rounds” and visiting bedsides. There are different approaches for the hardware, each with its own advantages and disadvantages:
a.       Using a standard tablet or phone: an “app” is needed for the user interface, image display, and upload to the cloud and/or PACS, and all of the intelligence is inside the probe. However, one of the complaints I heard is that the probe tends to be somewhat heavy and can get very warm.
b.       Using a dedicated tablet modified for this use: the tablet can take some of the processing load off the probe. If the probe is powered through the tablet, it saves on weight as well.
[Pictured: Butterfly POC ultrasound, US $2k]
Other things to look for are whether a monthly fee is involved, as several vendors use a subscription model; whether it has a cloud-based architecture (i.e. no stand-alone operation); and what applications it can be used for. Most of the low-end devices are intended for general use and have only one or two probes. If you need OB/GYN measurements, you might need to look at the high end (close to the US $10k-15k price range).
Also, uploading images into a PACS is nontrivial, as one needs to make sure they end up in the correct patient record of the PACS, VNA, EMR, etc. This is actually the number one problem, as each facility seems to deal with these so-called “encounter-based” procedures in a different manner. There are guidelines defined by IHE, but in my opinion they have a very narrow scope.

5.       3-D Printing is becoming mainstream – A complete section at the show was dedicated to 3-D printing, with many companies showing off printed body parts. Several vendors showed printers and amazing models based on CT images. The application is not only for surgery planning (nothing better than having a real-size model in your hands prior to surgery) but also for patient education, to share a treatment plan. I would caution however that the DICOM standard (as of 2018) includes a definition on how to exchange so-called “STL” models, but the work with regard to X3D/VRML models is ongoing. So, before you make major investments, I would make sure you are not locked into a proprietary format and interface.
There is not (yet) a large volume of these printed models. I talked with a representative of a major medical center, who said they do about 5-10 a day, and another institution, a children’s hospital, does about 3 per week. It seems to me that creating orthopedic replacements might become a major application, but then we are not talking about models you can make with a simple printer that creates objects from nice colorful plastic, but rather one that can compete with current prosthetics based on titanium and other materials.

6.       Introduction of new modalities – Every year there are several new modalities introduced, which are very promising and could have a major impact on how diagnosis is done in a few years for particular body parts and/or diseases. An example is a new way to detect stroke by using electromagnetic imaging of the brain. The images look very different from a CT scan, for example, but they give a healthcare worker the information they need to make treatment decisions. Another new device is a dedicated breast CT providing very high resolution and 3-D display, and it is more comfortable for a woman than a regular mammogram. Note that these devices don’t have FDA clearance (yet), but as is common for these new technologies, they are deployed in Europe, and as soon as the FDA feels comfortable, they will be ready for sale in the US as well. One issue with these devices is that there is no real “predicate” device, so they need clinical trials to show their benefits.

Equally important to what’s new is also observing what’s “old,” because the technology has become mature, or it has made it beyond the “early-adopter” stage. This is what I found:

1.       PACS/VNA/Enterprise imaging – Over the past few years, PACS systems have become mature and are not much talked about. Most investments by institutions have gone to new EMRs, so there has not been much left over to upgrade the PACS system. The result is that many hospitals run several years behind in upgrading and/or replacing their PACS, which hurts the most when they need to facilitate new modalities such as the breast tomo (3-D) systems. One is forced to stick with proprietary solutions to make these work and/or use the modality vendors’ workstations to view them.

VNA implementations have also been spotty. Some work rather well, but some have major scaling and synchronization issues between the PACS and VNA. Enterprise imaging was touted the past 2 years as well, but it has not really taken off as expected, partly because the lack of orders for encounter-based procedures (see the discussion above about POC ultrasound) forces work-arounds. New features are needed, such as radiation dose management, peer reviews, critical results reporting, and sophisticated routing and prefetching; today these are typically solved by using third-party “middleware.”

2.       Blockchain – Using blockchain technology in healthcare has limited application. The reason is that the bulk of healthcare information does not lend itself to being stored in a public “ledger.” It is nice that the information cannot be altered, but unless it is completely anonymized (which is still an issue, as there can be “hidden” information in private data elements, embedded in the pixels, etc.) and made available, for example, for research purposes, there are not that many uses for this technology. As of now, some limited applications such as physician registries seem to be the only ones that are feasible in the short term.

3.       Cloud solutions – Google, Amazon and Microsoft are the big players in this market, but there are still very few “takers” for this technology. One of the reasons is the continuing press coverage of major hacking events at corporations (the 500 million records from Marriott hotels being the most recent as of this writing) and reports of ransomware attacks on hospitals. Even though one could argue that the data is probably safer in the hands of one of the top cloud players than on some server in a local hospital, there is definitely a fear factor.

As an illustration, one of the participants told me that their hospital cut off all external communications, so there is no Internet at all on any hospital PC. Instead, I have seen many physicians Googling on their personal devices, such as a tablet or phone, to search for information about certain diseases or cases. Despite the push from Google et al., we probably need some real success stories before this becomes mainstream. Note that what I call “private cloud” solutions, which are provided by dedicated medical software vendors, are doing better, especially as a replacement for CD image distribution and for allowing patients to access their images.

Overall, there was quite a bit to see and listen to at this year’s RSNA. Because of the weather cutting into my visit, I was barely able to cover everything I wanted to during the week. It was interesting to see how mature image processing techniques suddenly appeared as “major new AI” solutions, and how many of them are still in their infancy, which makes me believe that the immediate impact will be relatively small. I was more excited by the new modalities and inexpensive ultrasounds, which will have a major impact.

I am hoping that next year some vendors will spend more effort going back to some of the basics, providing robust integration and workflow support for the day-to-day operations. We’ll see what will be new next year!

Tuesday, October 23, 2018

Should I jump into the FHIR right now?

I get this question a lot, especially when I teach FHIR, which is a new HL7 standard for electronic
exchange of healthcare information, as there seems to be a lot of excitement if not “hype” about this topic.

My answer is usually “it depends” as there is a lot of potential, but there are also signs on the wall to maybe wait a little bit, until someone else figures out the bugs and issues. Here are some of the considerations that could assist your decision to either implement FHIR right now, require it for new healthcare imaging and IT purchases, or start using it as it becomes available in new products.

1.      The latest FHIR standard is still in draft stage for 90% of it – That means that new releases will be defined that are not backwards compatible, so upgrades are inevitable, which may cause interoperability issues as not all new products use the same release. As a matter of fact, I experienced this first hand during some hackathons, as one device was on version 3 and the other on version 2, which caused incompatibilities. The good news is that some of the so-called “resources,” such as those used for patient demographics, are now normative in the latest release, so we are getting there slowly.

2.      FHIR needs momentum – Implementing a simple FHIR application, such as one used for appointments, requires several resources, for example patient demographics, provider information, encounter data, and organization information. If you implement only the patient resource and use “static data” for the remainder, that remainder is still subject to updates, changes, and modifications; in other words, if you slice out only a small part of the FHIR standard, you don’t gain anything. Unless you have a plan to eventually move the majority of those resources to FHIR, and upgrade as they become available, don’t do it. The US Veterans Administration showed at the latest HIMSS meeting how they exchange the most critical information between the VA and DOD using 11 FHIR resources. Implementing more than 10 FHIR resources, as in that example, is roughly where you achieve critical mass.

3.      Focus on mobile applications – FHIR uses RESTful web services, which is how the internet works, i.e. how Amazon, Facebook and others exchange information. You get all of the internet security and authorization for free, for example, accessing your lab results from an EMR could be simple by using your Facebook login. The information is exchanged using standard encryption similar to what is used to exchange your credit card information when you purchase something at Amazon. Creating a crude mobile app can be done in a matter of days if not hours as is shown at the various hackathons. Therefore, use FHIR where it is the most powerful.
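As an illustration of how lightweight this RESTful approach is, here is a minimal Python sketch that builds a standard FHIR Patient search URL and pulls a display name out of a sample Patient resource. The server base URL and the response content are made up for the example; a real app would also send an OAuth2 bearer token.

```python
import json
from urllib.parse import urlencode

# Hypothetical FHIR server base URL.
BASE_URL = "https://fhir.example.org/r4"

def patient_search_url(family, given):
    """Build a standard FHIR RESTful search URL for the Patient resource."""
    return BASE_URL + "/Patient?" + urlencode({"family": family, "given": given})

# A minimal, abbreviated example of what a FHIR Patient resource looks like on the wire.
sample_response = json.loads("""
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1970-01-01"
}
""")

def display_name(patient):
    """Extract a display name from the first HumanName entry."""
    name = patient["name"][0]
    return " ".join(name.get("given", [])) + " " + name.get("family", "")

url = patient_search_url("Doe", "Jane")
```

In a mobile app, that URL would be fetched over HTTPS exactly like any other web request, which is why crude apps can be built in hours at hackathons.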

4.      Do NOT use it to replace HL7 v2 messaging – FHIR is like a multipurpose tool: it can be used for messaging, services, and documents, in addition to having a RESTful API, but that does not mean it is a better “tool.” One of the traps that several people fell into when HL7 version 3 (which is XML based) was released is that they started to implement new systems based on this verbose new standard because it “is the latest,” without understanding how it would effectively choke the existing infrastructure in the hospitals. Version 2 is how the healthcare IT world runs; it is how we get “there” today and how it will be run for many more years to come. Transitioning away from V2 will be a very slow and gradual process, picking the lowest-hanging fruit first.

5.      Do NOT use FHIR to replace documents (yet) – EMR to EMR information exchange uses the clinical document standard CDA; there are 20+ document templates defined, such as for an ER discharge, which are critical to meet the US requirements for information exchange, and they are more or less ingrained. However, there are some applications inside the hospital where a FHIR document exchange can be beneficial. For example, consider radiology reports, which need to be accessed by an EMR, a PACS viewing station, possibly a physician portal, and maybe some other applications. Instead of having copies stored in your voice recognition system, PACS, EMR, or even a router/broker or RIS, and having to deal with approvals, preliminary reports, and addendums at several locations, it is more effective to have a single accessible FHIR resource for those. One more comment about CDA: there is a mechanism to encapsulate a CDA inside a FHIR message; however, for that application you might be better off using true FHIR document encoding.

6.      Profiling is essential – Remember that FHIR is designed (on purpose) to address 80% of all use cases. As an example, consider the patient name definition, which has only the last and first (given) name. Just to put this in perspective, the version 2 name has many more components (last, first, middle, prefix, suffix, degree, etc.). What if you need to add an alias, a middle name, or whatever makes sense in your application? You use a well-defined extension mechanism, but what if everyone uses a different extension? There need to be some common parameters that can be applied in a certain hospital, enterprise, state or country. Profiles define what is required, what is optional, and any extensions necessary to interoperate. I see several FHIR implementations in countries that did not make the effort to do this; for example, how to deal with Arabic names in addition to English names is a common issue in the Middle East, which could be defined in a profile.
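To make the profiling idea concrete, here is a small Python sketch of checking a Patient resource against a minimal local “profile”: required elements plus a recognized extension URL. The profile structure and the alias-name extension URL are invented for illustration; real profiles are authored as FHIR StructureDefinition resources and validated with proper tooling.

```python
# Hypothetical local profile: which elements are required and which
# extension URLs are recognized.
PROFILE = {
    "required": ["name", "birthDate"],
    "allowed_extensions": ["http://example.org/fhir/StructureDefinition/alias-name"],
}

def conforms(patient, profile):
    """Return a list of profile violations (empty list means conformant)."""
    problems = []
    for element in profile["required"]:
        if element not in patient:
            problems.append("missing required element: " + element)
    for ext in patient.get("extension", []):
        if ext.get("url") not in profile["allowed_extensions"]:
            problems.append("unknown extension: " + str(ext.get("url")))
    return problems

patient = {
    "resourceType": "Patient",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1970-01-01",
    "extension": [{"url": "http://example.org/fhir/StructureDefinition/alias-name",
                   "valueString": "Janie"}],
}
```

The point of the sketch: two sites that both “use FHIR” but recognize different extension URLs will silently drop each other’s data unless they agree on a shared profile.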

7.      Develop a FHIR architecture/blueprint – Start with mapping out the transactions as they are passing through the various applications. For example, a typical MPI system today might exchange 20-30 ADT’s, meaning that it communicates patient demographics, updates, merges, and changes to that many applications. Imagine a single patient resource that makes all of those transactions obsolete as the patient info can be invoked by a simple http call whenever it is needed. Note that some of the resources don’t have to be created locally, a good example is the south Texas HIE, which provides a FHIR provider resource so you never have to worry about finding the right provider, location, name, and whether he or she is licensed.

8.      Monitor federal requirements (ONC in the US) – Whether you like it or not, vendors may be required to implement FHIR to comply with new regulations and/or incentives, including certification. In order to promote interoperability, which is still challenging (an understatement), especially in the US, where we still have difficulty exchanging information even after billions of dollars spent on incentives, ONC is anxious to require FHIR-based connectivity. This is actually a little bit scary given the current state of the standard, but sometimes federal pressure can be helpful.

To repeat my early statement about FHIR implementation, yes “it depends.” Proceed with caution, implement it first where the benefits are the biggest (mobile), don’t go overboard and be aware that this is still bleeding edge and will take a few years to stabilize. If you would like to become more familiar with FHIR, there are several training classes and materials available, OTech is one of the training providers, and there is even a professional FHIR certification.

Saturday, October 6, 2018

PACS troubleshooting tips and tricks series (part 10): HL7 Orders and Results (report) issues.

In the last set of blog posts in this series I talked about how to deal with communication errors, causes for an image to be Unverified, errors in the image header or display, and worklist issues. In this blog I’ll describe some of the most common issues with orders and results impacting the PACS system.

Orders and results are created in an HL7 format, almost always in a version 2 encoding, with the most popular version being 2.3.1. A generic issue with HL7, which is not restricted to just orders and results but applies to pretty much all HL7 messaging, is the fact that HL7 version 2 is loosely standardized, meaning that there are many different variations depending on the device manufacturer and on the institution, which also makes modifications and changes to meet local workflow and other requirements.
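To make the variation problem concrete: an HL7 v2 message is just delimited text, so different senders can, and do, put the same information in different places. Below is a minimal Python parse of a made-up ORM order; it ignores escape sequences, segment repeats, and other real-world details, and the message content is invented for the example.

```python
# A made-up, abbreviated HL7 v2.3.1 order message (segments separated by \r).
msg = "\r".join([
    "MSH|^~\\&|RIS|HOSP|PACS|HOSP|20181006||ORM^O01|123|P|2.3.1",
    "PID|1||MRN12345^^^HOSP||Doe^Jane^M||19700101|F",
    "ORC|NW|ACC789",
    "OBR|1|ACC789||CTHEAD^CT Head",
])

def segments(message):
    """Split a v2 message into {segment name: field list} (first occurrence only).
    Note: for MSH the field separator itself counts as MSH-1, so this naive
    split is off by one for that segment."""
    result = {}
    for seg in message.split("\r"):
        fields = seg.split("|")
        result.setdefault(fields[0], fields)
    return result

segs = segments(msg)
patient_id = segs["PID"][3].split("^")[0]   # PID-3, first component
patient_name = segs["PID"][5]               # PID-5, family^given^middle
```

Whether the “real” patient ID lives in PID-2, PID-3, or elsewhere is exactly the kind of site-specific choice an interface engine has to normalize.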

The IHE Scheduled Workflow Profile provides guidelines on what messages to support and what their contents should be, but support for those profiles has been somewhat underwhelming. Therefore, having an HL7 interface engine, such as Mirth or one of the commercial offerings, has become a de-facto necessity to map the differences between different versions and implementations, and also to provide queuing capability in case an interface is down for a short period of time, so it can be restarted. Here are the most common issues I have encountered specifically related to orders and results as well as updates:

·        Patient ID mix-ups – There are several places in the HL7 order where the patient ID can reside, i.e. in the internal, external, MRN, SSN, or yet another field. As of version 2.3.1, HL7 extended the external Patient ID field to become a list including the issuing agency and other details. DICOM supports a “primary” Patient ID field and expects all of the others to be aggregated in the “other ID” field. Finding where the Patient ID resides, in which field, or in the list, can be a challenge.
·        Physician name – The most important physician from a radiology perspective is the referring physician, which is carried over from the order in the DICOM MWL and image header. For some modalities, however, such as special procedures or cardiology, there can be other physicians such as performing physicians, attending, ordering, and others as well as multiple listings for each category. Even so, despite the fact that the referring physician has a fixed location in the HL7 order, it sometimes might be found in another field and require mapping.
·        HL7 and DICOM format mismatch – Ninety-five percent of DICOM data elements have the same formats (aka Value Representations) as the HL7 data types; the 5% differences can create issues when not properly mapped and/or transformed. For example, the Person Name has a different position for the name prefix and suffix, and many more components, in HL7. There can be different maximum length restrictions, possibly causing truncations, and the lists of enumerated values can be different, causing a worklist entry or resulting DICOM header to be rejected. An example is the enumerated values for patient sex, which in DICOM are M, F, O; the list for HL7 version 2.3.1 is M, F, O, U, and for version 2.5 it is even longer, i.e. M, F, O, U, A, N. This requires mapping and transformation at the interface engine or MWL provider.
·        Report output issues – A report line is included in a so-called observation aka OBX segment as part of a report message (ORU). There is no standard on how to divide the report: some put, for example, the impression, conclusion, etc. in separate OBX segments, while others group them together. In one case, an EMR receiving the report in HL7 encoding (ORU) only displayed the first line, obviously only reading the first OBX. Another potential issue is that a voice recognition system might use either unformatted (TX) or formatted (FT) text, and the receiver might not be able to understand the formatting commands.
·        Support for DICOM Structured Reports – Measurements from ultrasound units and cardiology are encoded as a DICOM Structured Report (SR). Being able to import those measurements and automatically fill them into a report is a huge time saver (several minutes for each report) and reduces copy/paste errors. However, not all voice recognition systems support SR import, and those that do might have trouble with some of the SR templates and miss a measurement here and there. Interoperability with SR is generally somewhat troublesome, and implementation requires intensive testing and verification, as I have seen some of the measurements being missed or misinterpreted. Some vendors also use their own codes for measurements, which requires custom configuration.
·        Document management – For long reports, it might be more effective to store them on a document management server and send a link to the EMR, or, if you want more control over the format, encode the report as a PDF and attach it to the HL7 message. In this case, you will need to support the HL7 document management transactions (MDM) instead of the simple observations (ORU).
·        Updates/merges, moves – Any change in patient demographics is problematic, as there are many different transactions defined in HL7, depending on the level of change (in the person, patient, visit, etc.) and the type of change, i.e. move a patient, merge two records, or simply update a name or other information in a patient record. Different systems support different transactions for these.
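Much of the mapping described above boils down to small lookup tables and component reshuffling in the interface engine or MWL provider. Here is a hedged Python sketch of two such steps; note that folding the extra HL7 sex codes into “O” is a site-level policy decision, not a rule from either standard.

```python
# HL7 v2.5 administrative sex codes vs. the DICOM Patient's Sex (0010,0040)
# enumerated values M, F, O. Codes with no DICOM equivalent map to "O" here.
HL7_TO_DICOM_SEX = {"M": "M", "F": "F", "O": "O", "U": "O", "A": "O", "N": "O"}

def map_sex(hl7_sex):
    return HL7_TO_DICOM_SEX.get(hl7_sex, "O")

def xpn_to_dicom_pn(xpn):
    """Reorder an HL7 XPN (family^given^middle^suffix^prefix...) into a DICOM
    Person Name (family^given^middle^prefix^suffix): suffix and prefix swap
    position, and extra HL7 components are dropped."""
    parts = (xpn.split("^") + [""] * 5)[:5]
    family, given, middle, suffix, prefix = parts
    return "^".join([family, given, middle, prefix, suffix]).rstrip("^")
```

Every site ends up with tables like these, which is why the interface engine configuration needs to be versioned and maintained just like any other software.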

In conclusion, HL7 messages vary widely, and interface engines and mapping are necessary evils.
If you would like to create sample HL7 orders or results, you can use a HL7 simulator (parser/sender). The HL7 textbook is a good resource and there are also training options available.

PACS troubleshooting tips and tricks series (part 9): Modality Worklist issues

In the previous set of blog posts in this series I talked about how to deal with communication errors,
causes for an image to be Unverified and errors in the image header as well as display. This post will discuss the errors that might occur with the DICOM modality worklist.

A modality worklist (MWL) is created by querying a Modality Worklist provider using the DICOM protocol for studies to be performed at an acquisition modality. The information that is retrieved includes patient demographic details (name, ID, birthday, sex, etc.), order details (procedure code, Accession number identifying the order, etc.) and scheduling details (referring physician, scheduled date/time etc.). This information is contained in a scheduling database, which is created by receiving orders for the department in an HL7 format (ORM messages).

The worklist provider was typically hosted on a separate server, aka a broker or connectivity manager. But increasingly, this function is embedded in a PACS, a RIS, or even an EMR that has a radiology package. Moving this function from the broker to these other systems is the source of several issues, as the original broker was likely rather mature, with a lot of configurability to make sure it matches the department workflow, while some of these newer implementations are still rather immature with regard to configurability.

The challenge is to provide a worklist with only those examinations that are scheduled for a particular modality, no more and no less, which is achieved by mapping information from the HL7 order to a particular modality. Issues include:

·        The worklist is unable to differentiate between the same modality at different locations – An order has a procedure code and description, e.g. CT head. As the order in HL7 does not have a separate field for modality, the MWL provider will map the procedure codes to a modality, in this case “CT” so a scanner can do a query for all procedures to be performed for modality “CT.” The problem occurs if there is a CT in the outpatient ER, one in cardiology for cardiac exams, one in main radiology, and one in the therapy department (RT). Obviously, we don’t want all procedures showing up on all these devices. It might get even more complicated if a CT in radiology is allocated, let’s say on Fridays to do scans for RT. We need to distinguish between these orders, e.g. look at the “patient class” being in-or outpatient, or department, or another field in the order and map these procedures to a particular station. The modalities will have to support the “Station Name” or “Scheduled AE-Title” as query keys.
·        The worklist can only query on a limited set of modality types – Some devices are not properly configured. For example, a panoramic x-ray unit used for dentistry should use the modality PX instead of CR, the latter of which might group it together with all of the other CR units. The same applies for a Bone Mineral Densitometry (“DEXA”) device; it should be identified as modality BMD instead of CR or OT (“Other”). Document scanners should be configured to use “DOC” instead of OT or SC (“Secondary Capture”), endoscopy exams need to be designated ES, and so on. The challenge is to configure the MWL provider as well as the modality itself to match these modality codes.
·        The worklist has missing information – A worklist query might not have enough fields to include all the information needed at the modality. In one particular instance I encountered, the hospital wanted to see the Last Menstrual Date (LMD) as it was always on the paper order. Other examples are contrast allergy information, patient weight for some modalities, pregnancy status, or other information. If the worklist query does not have a field allocated for these, one could map this at the MWL provider in another field, preferably a “comment field” instead of misusing another field that was intended and named for a different purpose.
·        The worklist is not being displayed – There could be several reasons, assuming that you tested the connectivity as described in earlier blogs: there could be no match for the matching key specified in the query request, or the query response that comes back is not interpreted correctly. In one case a query response was not displayed on an ultrasound unit from a major manufacturer because one of the returned parameters had a value that was illegal, i.e. not part of the enumerated values defined by the DICOM standard for that field. In this case, I could only resolve the issue by capturing the responses with a sniffer and running them against a validator such as DVTK.
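The station-level routing described in the first bullet can be sketched as a small mapping step inside the MWL provider. The field names and routing rules below are invented for illustration; a real MWL provider makes all of this configurable per site.

```python
# Illustrative mapping from (modality, patient class) to the Station Name
# that should see the order.
ROUTING = {
    ("CT", "E"): "CT_ER",        # emergency patients go to the ER scanner
    ("CT", "I"): "CT_MAIN",      # inpatients go to main radiology
}

def scheduled_station(procedure_code, patient_class):
    """Derive the Scheduled Station Name for an order; fall back to the
    main department scanner for that modality."""
    modality = procedure_code[:2]  # e.g. "CT" from "CTHEAD" (site convention)
    return ROUTING.get((modality, patient_class), modality + "_MAIN")

def worklist_for_station(orders, station):
    """Return only the worklist entries scheduled for the querying station."""
    return [o for o in orders if scheduled_station(o["code"], o["class"]) == station]
```

A scanner that then queries with its Station Name (or Scheduled AE Title) as a matching key only sees its own entries, which is exactly the behavior the bullet above describes.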

MWL issues are tricky to resolve. It is highly recommended that one have access to the MWL provider configuration software. Most vendors will have a separate training class on this device. Be aware that the mapping tables need to be updated every time a new set of procedure codes is introduced; therefore, it is an ongoing support effort. Configuring requires detailed knowledge of HL7 so you can do the mapping into DICOM.

To troubleshoot these issues, a modality worklist simulator can be very useful. There is a DVTK modality worklist simulator available for free and a licensed modality simulator from OTech.

In case you need to brush up on your HL7 knowledge, there is a HL7 textbook available and there are on-line as well as face-to-face training classes, which include a lot of hands-on exercises.

In the next blog post we’ll spend some time describing the most common HL7 issues impacting the PACS.

PACS troubleshooting tips and tricks series (part 8): DICOM display errors.

In the last set of blog posts in this series I talked about how to deal with communication errors, causes for an image to be Unverified and header errors. This post will discuss the errors that might occur when trying to display the images caused by incorrect DICOM header encoding.

When an image is processed for display, it goes through a series of steps, aka the pixel pipeline. Think about this pipeline as a conveyor belt with several stations, each station having a specific task, such as applying a mask to the image, applying a window width/level, a look-up table, annotations, or rotating or zooming the image. These “stations” are instructed by the information in the DICOM header, or by a separate DICOM file called a Presentation State.
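As an illustration of one “station” on that conveyor belt, here is a simplified linear window width/level stage in Python; it ignores the half-pixel offsets and LUT subtleties of the full DICOM grayscale pipeline (PS3.3 C.11.2), so treat it as a sketch rather than a conformant implementation.

```python
def window_level(pixel, center, width, out_max=255):
    """Map a stored pixel value to a display value using a linear
    window center/width transform (simplified)."""
    low = center - width / 2.0
    high = center + width / 2.0
    if pixel <= low:
        return 0              # everything below the window is black
    if pixel >= high:
        return out_max        # everything above the window is white
    return round((pixel - low) / (high - low) * out_max)

# A soft-tissue CT window: center 40 HU, width 400 HU.
displayed = [window_level(hu, 40, 400) for hu in (-1000, 40, 3000)]
```

With this window, air (-1000 HU) is clipped to black and dense bone (3000 HU) to white, while soft tissue around 40 HU lands mid-gray; a wrong center/width in the header visibly washes out or blackens the image.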
There are two categories of problems: the first might be due to incorrectly encoded header instructions, and the second is incorrect interpretation and processing caused by a faulty software implementation. Here are the most common issues:

·        Incorrect grayscale interpretation and display – Images can be encoded as grayscale or color. Grayscale images are identified either as MONOCHROME2 in the header, which means that the lowest pixel value (“0”) is interpreted and displayed as black, or as MONOCHROME1, in which case the maximum pixel value (255 for 8-bit images) is interpreted as black. Typically, MR and CT are encoded as MONOCHROME2 and digital radiography as MONOCHROME1. However, there is nothing that prevents a vendor from inverting its data and using a different photometric interpretation. Anytime an image is displayed inverted instead of in its normal view, the MONOCHROME1/2 identification is the first place to look. I have seen problems where the software after an upgrade ignored the photometric interpretation, causing all of the CR/DR to be displayed correctly but inverting the CT/MR, or displaying the image correctly but inverting the mask or background.
·        Incorrect color interpretation and display – Color images can be encoded in several different manners; the most common one uses a triplet of Red, Green and Blue (RGB). However, DICOM allows several others (CMYK, etc.) and also allows for sending a color palette in the header that the receiving workstation has to use to map the color scale. Palette color is used if the sender is very particular about the color, such as in nuclear medicine, unlike color for ultrasound, where it is used to indicate the direction of the blood flow (red/blue). Having many different color encodings increases the chance that a receiver cannot display one of them correctly. I have seen this after a data migration where some of the ultrasound images from a particular manufacturer did not display the color correctly on the new PACS viewer.
·        Failing to display a Presentation State – The steps in the pipeline dealing with image presentation (mask, shutters, display and image annotation, and image transformations such as zoom and pan) can be encoded and kept as a separate DICOM file together with the study containing the images. Not every vendor implements all the steps correctly, and I have also seen implementations that will only interpret the first Presentation State and ignore any additional ones.
·        Incorrect interpretation of the Pixel Representation – Some modalities, notably CT, can have negative pixel values (Hounsfield units or HU), indicating that a visualized tissue has an X-ray attenuation less than water, which is calibrated to be exactly 0 HU. Some modalities will also scale all the pixel values, especially CT and PET. If the software does not interpret this correctly, the image display will be corrupted.
·        Incorrect interpretation of non-square pixels – Some modalities, notably US and C-arms, have “non-square” pixels, meaning that the x and y directions have a different resolution. The pixels need to be “stretched” through interpolation based on the aspect ratio; for example, if the ratio is 5/6, they need to be stretched in the y direction by a factor of 6/5, i.e. 20%. If your images look compressed, which you’ll notice by the squeezed text or, in the case of a C-arm, by circles becoming egg-shaped, the software does not support non-square pixels. Apart from looking strange, it might not impact image interpretation.
·        Shutters incorrectly displayed – A shutter can be circular with a defined radius and center point, or rectangular with defined x,y coordinates, and is intended to black out collimated areas, which would otherwise display as very bright white to the radiologist. I have seen implementations ignoring the circular shutter, which makes the radiologist who has to look at all that white space very unhappy.
·        Overlay display issues – Overlays used to be encoded in the PACS database in proprietary formats, which is a big issue when migrating the data to another PACS system. Even when encoded in a DICOM-defined manner, there are several options, ranging from stand-alone objects, to bitmaps in the DICOM header, to overlays embedded in the pixel data field, or, worst case, burned in, i.e. replacing the actual pixels with the overlay. If the overlays contain clinical information, e.g. a Left/Right indicator on the image, it is important to check how they are encoded to make sure that when the data is migrated or read from a CD on another system, the user will still be able to see them. The same applies to “fixing” burned-in annotations: don’t overlay a series of “XXX-es” in case the name was incorrect, as they might not be displayed in the future. The best way to get rid of incorrect burned-in annotations is to use an off-line image editing routine, which functions as a “paintbrush” and overwrites the pixel data.
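The aspect-ratio correction mentioned above can be sketched in a few lines. This is an illustrative calculation only; DICOM carries the ratio in the Pixel Aspect Ratio attribute, but the function name and argument order here are hypothetical:

```python
# Sketch: correcting non-square pixels by stretching one direction.
# If a pixel is 5 units tall for every 6 units wide (ratio 5/6), each
# column must be stretched by a factor of 6/5, i.e. 20%, so that the
# image displays with square pixels.

from fractions import Fraction

def corrected_height(rows, aspect_vertical, aspect_horizontal):
    # Exact arithmetic with Fraction avoids floating-point surprises
    return round(rows * Fraction(aspect_horizontal, aspect_vertical))

print(corrected_height(480, 5, 6))  # 576
```

In practice the stretching is done by interpolating new rows of pixel data, not just resizing the frame, but the target dimensions follow this arithmetic.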

The image pixel pipeline facilitates all the different combinations and permutations of pixel encodings, which in practice might not always be completely or correctly implemented. There is an IHE profile defined for this, called “Consistent Presentation of Images”; check the IHE integration statement of your PACS to determine whether it is supported, meaning that the software implements a complete pipeline.

In addition, this profile has a detailed test plan and a set of more than 200 images and corresponding Presentation State files that are available in the public domain and can be accessed from the IHE website under “testtools.” I strongly recommend that after the initial installation, and with each subsequent software upgrade, you load these images and check whether the pipeline works. These test images have different pixel encodings with instructions in the header counteracting the pixel display; for example, an image might be MONOCHROME1 with an inverted LUT to be applied, displaying the same as if it were MONOCHROME2 with a regular, linear LUT.

Another good resource is the PACS fundamentals textbook that explains the pipeline in great detail. The next blog post will be on Modality Worklist issues.

PACS troubleshooting tips and tricks series (part 7): DICOM header errors.

In the last set of blog posts in this series I talked about how to deal with communication errors and the causes for an image to be Unverified. This post discusses the errors caused by incorrect DICOM header encoding.

The PACS will typically only check the header information for correctness for those data elements that directly impact data integrity, i.e. the correct indexing and subsequent retrieval of the DICOM files. There are about 10-15 such data elements, e.g. Name, ID, sex, etc. There could be other errors in the data that impact future retrieval and processing but would not necessarily cause an image to be Unverified.
The most common DICOM header issues I have experienced are as follows:

·        Old date and time format – The first version of the DICOM standard had a different encoding for the date and time: it separated the components with a period (“.”). For example, instead of the encoding YYYYMMDD (20180821), it would encode the date as YYYY.MM.DD (2018.08.21). A DICOM editor would need to be used to change this.
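Normalizing these old-style values is a trivial string operation once they have been identified; the sketch below shows the idea (in a real migration you would do this with a DICOM editor or toolkit, and the function names here are hypothetical):

```python
# Sketch: normalizing old ACR-NEMA style date/time values to DICOM DA/TM format.

def fix_old_date(value):
    # "2018.08.21" -> "20180821"
    return value.replace(".", "")

def fix_old_time(value):
    # "14:30:59" -> "143059"
    return value.replace(":", "")

print(fix_old_date("2018.08.21"), fix_old_time("14:30:59"))  # 20180821 143059
```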

·        Padding using spaces and nulls – There could be problems with “padding,” i.e. adding spaces and/or a null (the ASCII control character with value 0) before or after a data element. Some of the data elements allow for padding before and after (e.g. the Accession Number), some only after (e.g. the Person Name). That means that a space before a Name (.spSmith) is significant, and a search on Smith (without the space) will not match and will not provide any results. A space before the Accession Number is not significant, and therefore a search with or without the space should result in a match.

Part of this problem is self-inflicted, as the DICOM standard requires each data element to have an even number of characters/bytes; therefore, if a data element has an odd length (e.g. Smith, having 5 characters), it is padded (Smith.sp) to change its length from 5 to 6 bytes. To make it even more complex, the Unique Identifier (UID) has to be padded with a null instead of a space in case it has an odd number of characters. Most DICOM toolkits are aware of this and will strip the padding off and/or add it when providing these data elements to the application, but there could be “rogue” implementations that do this incorrectly. In my own experience, I have seen a DICOM router not sending images to a particular physician because the physician’s name (Smith.sp) in the header did not match the routing table (Smith).
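The even-length padding rule and the stripping that a well-behaved toolkit performs can be sketched as follows (illustrative helper names; real toolkits handle this per VR):

```python
# Sketch of DICOM even-length padding: text values are padded with a trailing
# space, UIDs with a trailing NUL (0x00); comparisons should strip padding first.

def pad_even(value, is_uid=False):
    pad = "\x00" if is_uid else " "
    return value if len(value) % 2 == 0 else value + pad

def strip_padding(value):
    return value.rstrip(" \x00")

print(repr(pad_even("Smith")))                      # 'Smith ' (5 -> 6 bytes)
print(repr(pad_even("1.2.840.10008.1.2.1", True)))  # trailing NUL? no: 19 -> 20 bytes with NUL
print(strip_padding("Smith ") == "Smith")           # True
```

The router problem described above is exactly a failure to apply `strip_padding` before comparing the header value against the routing table.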

·        Person Names – Patient and Physician names are encoded somewhat differently in the HL7 orders. For example, the physician name in HL7 is typically preceded with an alphanumeric code that refers to the physician registry to properly identify Dr. Smith. The sub-components are in a different order, i.e. the name suffix and prefixes are reversed in the DICOM data format, in addition to the fact that the name in HL7 can have up to 14 (!) components in the latest version, while DICOM only allows for 5 (Last, First, Middle, Prefix, Suffix).

Patient names can also be incorrect due to incorrect user input. Imagine an input clerk entering “John Smith” in the Last name field instead of splitting it over the last/first name fields; this record will not match the name “Smith” in searches. If detected, a user can update the patient name, which will cause an HL7 Update message to be sent to all interested parties. If not detected, it can create issues later on.

·        Escape and control characters – One of the most important control characters in DICOM is the “\” which is used to separate the different values in a multi-valued data element. For example, if the patient was identified in the past with her maiden name, or the name of her first husband, one could encode this as Smith\Jones. The “\” is NOT a control character in HL7, which carries the order and patient demographic information; therefore, any HL7-to-DICOM mapping is supposed to filter out these characters and, if essential, replace them with a non-DICOM control character, e.g. a “/”, to prevent the software from becoming confused.
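The name-mapping and escaping issues above can be combined into one small sketch. In HL7 v2 the person name components are ordered family^given^middle^suffix^prefix, while a DICOM PN uses family^given^middle^prefix^suffix, so the last two components swap; backslashes are filtered out because “\” is the DICOM multi-value delimiter. This simplified function ignores HL7's additional name components and is illustrative only:

```python
# Sketch: mapping an HL7 person name to a DICOM PN value.

def hl7_name_to_dicom(xpn):
    # Filter the DICOM multi-value delimiter, split on the component separator,
    # and pad/truncate to the 5 components DICOM supports.
    parts = (xpn.replace("\\", "").split("^") + [""] * 5)[:5]
    family, given, middle, suffix, prefix = parts
    # DICOM order: last^first^middle^prefix^suffix; drop empty trailing components
    return "^".join([family, given, middle, prefix, suffix]).rstrip("^")

print(hl7_name_to_dicom("Smith^John^J^Jr^Dr"))  # Smith^John^J^Dr^Jr
print(hl7_name_to_dicom("Smith"))               # Smith
```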

These are some of the most common errors; there are several more. The problem is that they might go undetected and create issues later, for example when trying to migrate the data to another PACS or when creating a CD that is read by another PACS. As a typical header can have a hundred or more data elements, it is hard to detect these issues visually, and the use of a header validator is the only practical way to find them.

There are a couple of free tools available that I use, notably DVTK, for which a video demo on how to use it is available. I recommend using it when there is an issue, e.g. when an image from a CD is rejected or produces problems with the display. It is also a good idea to run this validator against any new modality; you’ll be surprised how many problems you’ll find. Some of them might be insignificant, but some can be important.

Another resource to use is the OTech reference guide, which lists all of the data types (VR’s) for both DICOM and HL7 in case you need to check for the validity of data elements. We also spend quite a bit of time in our DICOM training sessions going over the testing and validation process.

In the next post we’ll talk about the most common image display issues and validating the image display pipeline.

PACS troubleshooting tips and tricks series (part 6): Unverified PACS cases.

In the last set of blog posts in this series I talked about network and addressing issues, incompatible file types and transfer syntaxes, and DICOM communication errors. This post deals with errors that might occur when there is a successful information transfer, but the PACS determines that there are issues with the data that cause the file to be flagged as “Unverified” or “Broken.” This means that these images are NOT added to the queue to be interpreted by a radiologist. The following issues can occur:

·        Missing “exam complete” status – Some PACS systems will automatically add an incoming study to a radiologist’s worklist, some can be configured to do so, and some require an “exam complete” status to be initiated. This exam complete can be entered at a radiology information system or an EMR by the technologist, which will typically cause an HL7 transaction to be sent to the PACS to close out the order. These updates could also be entered at the PACS, again depending on the architecture.

It is possible to have this event automatically triggered at a modality by using MPPS (Modality Performed Procedure Step), however, there are relatively few institutions that make use of this feature, even though it is universally available at almost all digital modalities. Sometimes it is required to close out the study manually, for example, if the study requires loading additional information from a CD that is brought in by the patient, or requires additional processing and creation of derived images such as for using CAD or 3-D reconstructions.

Assuming that the PACS is configured to listen to the “exam complete” status, if this event is not issued because the technologist forgets to do so, or there is a communication error between the trigger initiator and PACS, it will cause the study to NOT appear on a worklist for interpretation.

·        Duplicate identifier(s) – There are several important patient and study identifiers, which can be duplicated or repeated for another study in error. The reasons for the duplication are incorrect manual entry, non-uniqueness, such as for the accession number and internal patient IDs, especially when importing “foreign” studies, or software errors. The PACS core software will check for these situations because duplication prevents it from uniquely identifying or indexing the studies for archiving and retrieval operations.

A special case occurs when the object number, aka SOP Instance UID, which is a Unique Identifier for that specific DICOM object, gets duplicated. The message “duplicate SUID” will typically occur. The SUID functions as a unique number, similar to the VIN used for a car. Duplication can occur because of software errors, resending the same file twice to a destination, or after a change is made in the file header. Different PACS systems might behave differently when receiving a duplicate SUID: some of them overwrite the original file, some will ignore them, and some might report the error, causing an Unverified study. One should never manually fix these SUIDs, as uniqueness cannot be guaranteed; always use an off-line SUID generator in case they need to be fixed. If a “rogue” software implementation generates duplicate UIDs on a regular basis for different objects, one could consider using a programmable router, which can be configured to create a new, unique number.

·        Missing identifier(s) – A missing identifier such as a Patient ID, Name, Accession Number, patient sex, birthdate, and several others will also cause an image to be Unverified. A user might actually use this behavior intentionally, for example by leaving an Accession Number blank when importing an external study, to allow it to be correctly and manually verified by a PACS system. The DICOM standard defines what information in the DICOM header has to be present, and a software application will typically flag when any of these are missing.

·        Exceeding length – Each data element in the header has, as part of its data type or VR (Value Representation) specification, an associated maximum length. Most of these limits are generous; for example, the maximum length of a Patient Name is 64 characters, which means that it will very rarely, if ever, be exceeded. However, some of the attributes might be improperly used, such as putting a long description in a field that is defined to hold only 64 characters. Exceeding the maximum length could cause the object to be Unverified. If this is a common issue, one could use a DICOM router that can be configured to “fix” the header. This might also occur when migrating images to a new PACS, where the old PACS did not care to identify this issue and the new PACS rejects these cases because it interprets the DICOM standard more strictly than the initial source PACS.
·        Incorrect codes – Some of the DICOM data elements have defined terms, identified by the DICOM standard as “enumerated values,” such as the values M, F and O (“Other”) for patient sex. If the initiating system, which can be an EMR or data entry system, uses a different set of values for such a data element, the object can be rejected.
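The checks described in the bullets above amount to a "verify" step that a PACS runs before queuing a study. The sketch below illustrates that kind of rule set; the attribute names follow DICOM, but the specific rules, limits and function are simplified and hypothetical, not any PACS vendor's actual logic:

```python
# Sketch of a PACS-style verification step: required identifiers present,
# enumerated values valid, VR maximum lengths respected, no duplicate SUID.

REQUIRED = ["PatientID", "PatientName", "AccessionNumber", "SOPInstanceUID"]
MAX_LEN = {"PatientName": 64, "AccessionNumber": 16, "SOPInstanceUID": 64}
SEX_VALUES = {"M", "F", "O"}

def verify(header, known_sop_uids):
    problems = []
    for tag in REQUIRED:
        if not header.get(tag):
            problems.append(f"missing {tag}")
    for tag, limit in MAX_LEN.items():
        if len(header.get(tag, "")) > limit:
            problems.append(f"{tag} exceeds {limit} chars")
    if header.get("PatientSex") and header["PatientSex"] not in SEX_VALUES:
        problems.append("invalid PatientSex")
    if header.get("SOPInstanceUID") in known_sop_uids:
        problems.append("duplicate SUID")
    return problems  # an empty list means the study is verified

header = {"PatientID": "123", "PatientName": "Smith^John",
          "AccessionNumber": "A1", "SOPInstanceUID": "1.2.3", "PatientSex": "M"}
print(verify(header, known_sop_uids={"1.2.3"}))  # ['duplicate SUID']
```

A study failing any of these checks would land in the Unverified queue for the PACS administrator to fix.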

Fixing Unverified studies forms the core of the work for many PACS administrators. Identifying the root causes and trying to prevent them will increase the data integrity of the system and make their jobs easier. There are routers that can fix chronic inconsistencies. Some PACS systems, especially those intended to be enterprise systems such as a VNA (Vendor Neutral Archive), can be configured to use “tag morphing” to fix the information. Tag morphing can also be used to clean up inconsistent study and series descriptions and/or body part identifiers. Note that some header issues could go unnoticed and are not flagged, which means that they only surface when someone tries to display the image or interpret the DICOM file. These errors, i.e. DICOM header issues, will be covered in the next post.

Additional resources can be found in the DICOM textbook (ebook is available as well) and I also created a small DICOM/HL7 reference guide, which lists the DICOM data dictionary, UID’s and VR specifications.

Wednesday, August 29, 2018

PACS troubleshooting tips and tricks series (part 5): DICOM communication errors.

In the last blog posts in this series I talked about network and addressing issues and incompatible file types as well as transfer syntaxes. This post deals with errors that might occur during the actual DICOM information transfer, i.e. after the DICOM connection (Association) has been successfully established.

When an Association is accepted by the server (SCP), which is indicated by the “Associate_Accept” transaction, the DICOM client (SCU) will issue the DICOM command, which is determined by the negotiated SOP Class. For example, if a device proposes to exchange CT images and this is “OK-ed” by the SCP, the SCU will issue the C-Store command with the CT image file. The receiver will interpret the command, take the file and, in the case of a C-Store, likely archive it. It could also update a database so it can find the file and/or reply to a DICOM query about its location.

If the SCP is a transient device such as a DICOM router, it will pass the file on to its destination. Each DICOM command is answered with a corresponding Response; for example, a C-Store Request will result in a C-Store Response, and the same applies for queries, moves, etc. The response has a status code associated with the transaction; hopefully it will be “success,” which is identified with the code “0000.” In case there is an error, the status code will contain the appropriate error code other than “0000.” These codes are standardized by the DICOM standard, i.e. there are codes defined for the most frequent errors and warnings.

Here are some of the common errors that you might see:
·        Resource issues – Imagine that you are sending a set of images to a destination with limited resources, such as a workstation with limited disk space. In case the destination cannot receive any more data files, it will indicate that with status code A700, meaning “out of resources.” To resolve this issue, one would either go to the destination to free up resources or send the information to another destination. The reason this error occurs is that one does not know in advance how many files are going to be sent, as there is no indication in the Association negotiation of how many images are to be transferred. The resource issue does not have to be related to the required archive space; it could also be space for additional tables in the database or other resource restrictions.
·        Processing errors – The receiver reports errors when processing the information; for example, it might need to update several database tables upon receipt and archiving, and might have a problem because the images cannot be uniquely identified due to duplicate or missing identifiers. Not every receiving system implements the same set of criteria for this error condition; some of them will actually accept the information and report “success” while quarantining the data file and flagging it as “Unverified” or “Broken.” This is done so that a physician might still be able to view and report on the images while awaiting resolution of, let’s say, the complete patient name and/or other demographic information. We’ll spend another post on the most common issues causing the Unverified status.
·        Warnings – A server might also give back a “warning,” its status code typically will start with a hex “B,” an example would be a print server sending back a warning that the number of sheets is getting low in a supply magazine. Another example could be a server telling a SCU that it modified or “coerced” one or more data elements in the header, as needed to make it unique or based on a patient update or merge. Most applications ignore these messages.
·        Pending – A “pending” status return message is not an error condition, but an indication to the client that the server is processing the request and will send more replies. This is common for a query response that has multiple matches; it is part of normal behavior.
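The status categories above can be summarized in a small classification sketch. The codes shown (0000 success, A700 out of resources, Bxxx warnings, FF00/FF01 pending) are defined by the DICOM standard; the grouping function itself is a simplification, since the full standard defines many more specific failure ranges:

```python
# Sketch: classifying a DICOM response status code into broad categories.

def classify_status(code):
    if code == 0x0000:
        return "success"
    if code in (0xFF00, 0xFF01):
        return "pending"           # more replies will follow (e.g. query matches)
    if 0xB000 <= code <= 0xBFFF:
        return "warning"           # e.g. attribute coerced, supply low
    return "failure"               # e.g. 0xA700 out of resources

print(classify_status(0xA700))  # failure
print(classify_status(0xFF00))  # pending
```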

How would a client behave when it receives an error? Each device could have a different reaction: some might just continue with what they are doing and log the error, some might stop and notify the user, and some might retry a configurable number of times with configurable intervals. If a device follows the DICOM standard guidelines, it should specify its behavior in its conformance statement under the section “SOP Specific Conformance” for that particular SOP Class. Therefore, check that resource; almost all DICOM conformance statements can be found on-line. Make sure, though, when you look at these conformance statements, that you have the right software version matching your device.

These types of errors are somewhat unpredictable and typically are caused by data errors or inconsistencies, unlike the errors that are caused by file type or transfer syntax mismatches. As mentioned earlier, it is possible that a server still acknowledges the information transfer as successful even if there is an issue, to allow a physician to access the incomplete data, as is common in an emergency where the patient cannot be identified at the time of registration. The most common causes for these so-called “Unverified” studies are discussed in the next post.

For additional resources on this topic, you can consult a glossary of the most common DICOM terms, or the DICOM textbook, which is available either as a printed text or e-book, or attend our on-line or face-to-face training seminars on PACS/DICOM.

PACS troubleshooting tips and tricks series (part 4): Transfer Syntax support errors.

In the last three blog posts in this series I talked about network and addressing issues and incompatible file types. This post deals with incompatible Transfer Syntaxes (compression etc.) that could be proposed by a DICOM device initiating a DICOM connection.

In the last post I explained that a device proposes a list of items called a Presentation Context, which includes the file type (Abstract Syntax, or SOP Class in DICOM terms) to be exchanged and the proposed encoding, or Transfer Syntax, of these files. These transfer syntaxes are different representations of the data; the information content is still the same. Think of it as sending a file either in its original size or as a “zipped” file; in either case, the content is identical.

There are three parameters that can change in the Transfer Syntax:
1)     The byte order,
2)     Whether or not the data exchange includes the data type (or Value Representation) for each data element, and
3)     Whether the data is compressed.

When a particular transfer syntax is not supported by the device, it will typically give back an error with the text “Transfer Syntax not supported” or the more generic message “Presentation Context not supported.”
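The three parameters can be read straight off a Transfer Syntax UID. The sketch below uses a handful of well-known UIDs from the standard; the lookup table is deliberately incomplete (the standard defines dozens more) and the helper function is illustrative:

```python
# Sketch: deriving byte order, VR encoding and compression from a few
# well-known DICOM Transfer Syntax UIDs.

TRANSFER_SYNTAXES = {
    "1.2.840.10008.1.2":      ("little", "implicit", None),  # the default
    "1.2.840.10008.1.2.1":    ("little", "explicit", None),
    "1.2.840.10008.1.2.2":    ("big",    "explicit", None),
    "1.2.840.10008.1.2.4.50": ("little", "explicit", "JPEG baseline"),
    "1.2.840.10008.1.2.4.90": ("little", "explicit", "JPEG 2000 lossless"),
}

def describe(uid):
    byte_order, vr, compression = TRANSFER_SYNTAXES[uid]
    return f"{byte_order} endian, {vr} VR, " + (compression or "uncompressed")

print(describe("1.2.840.10008.1.2"))  # little endian, implicit VR, uncompressed
```

Note that all compressed transfer syntaxes use explicit VR and little endian; only the uncompressed ones vary in the first two parameters.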

·        Byte order issues – The byte order can be either Little Endian (LE) or Big Endian (BE), which defines whether the data is encoded for each word with its least significant byte first (LE) or most significant byte first (BE). As an analogy, some languages are written and read from right to left (Arabic, Hebrew) instead of left to right; it is a matter of knowing how to read the data (otherwise the data would be reversed). BE is usually associated with a UNIX-based operating system on a Motorola CPU architecture, which means that you’ll see it only in relatively old images, as most devices are now based on an Intel/Windows architecture.
Almost all PACS systems will support both LE and BE because they need to support these old formats, but you’ll rarely see a BE modality. Due to its limited support, I strongly suggest that you configure a system to ONLY support LE to prevent compatibility issues. I have seen systems that claim to support BE but do not display BE images correctly. Note that LE is the default transfer syntax, meaning that every system is supposed to support LE.
·        Value Representation (VR) support issues – Each data element in the DICOM header can either specify its data type explicitly (Explicit VR) or leave it out (Implicit VR), which leaves it up to the software to use a data dictionary to determine the data type. Despite the fact that Implicit VR is the default Transfer Syntax, I strongly suggest setting all of your devices to only send and/or accept Explicit VR. The reason is that when archiving DICOM files on a CD, Explicit VR is the required encoding; you’ll therefore reduce the chance that someone might just copy a file to a CD without doing the Implicit-to-Explicit VR conversion. In addition, many vendors include private data elements in the DICOM header, and by requiring Explicit VR you will at least know the data type of those elements, so you could potentially manage and interpret them.
·        Compression issues – Compression support has become important as new image files and studies are getting very large (notably breast tomosynthesis in mammography), taxing the communication and storage infrastructure. There are many compression versions defined in the DICOM standard; the last time I counted there were 35! In practice, most PACS systems support only a handful, e.g. about 5-10. JPEG for still images and MPEG for video are the most popular, and lately JPEG2000 (wavelet) compression is supported as well. If your PACS has not been upgraded lately, it might not support wavelet compression, which will cause a rejection when a device wants to use the wavelet encoding for information exchange. This could be a problem, as some of the senders might not be able to decompress the data upon request. Some devices support a proprietary compression, which obviously will only be supported by devices from the same vendor. Note that compressed images are not allowed on the most popular CD format, which sometimes creates issues if this rule is not followed. When a lossy compression syntax is used (lossy means non-reversible, as opposed to lossless compression, which is reversible), the creator of this file is required to change its Unique Identifier (UID), creating another copy of the image; this sometimes causes issues, as some systems are confused by two versions of the same object, i.e. the original and the lossy compressed one. The creator of the lossy compressed image is also required to update the header with a “compression flag,” preventing the image from being lossy compressed again in the future, as this would create major image artifacts.
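The byte order parameter is easy to demonstrate with Python's standard struct module: the same two bytes on the wire decode to different 16-bit pixel values depending on the endianness assumed by the reader, which is exactly why the transfer syntax must be agreed upon up front:

```python
# Sketch: one 16-bit word read as little endian vs big endian.

import struct

raw = bytes([0x34, 0x12])             # two bytes as received on the wire
little = struct.unpack("<H", raw)[0]  # least significant byte first
big = struct.unpack(">H", raw)[0]     # most significant byte first
print(hex(little), hex(big))          # 0x1234 0x3412
```

A viewer that reads BE data with LE assumptions effectively swaps every pixel's bytes, producing the garbled displays mentioned above.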

Transfer syntax issues, together with file type issues, are the major causes of a connection not being established. The good news is that this should be a consistent error, i.e. unless there is a software upgrade or the user selects another object and/or transfer syntax, it should keep on working once the initial installation is successful. Detection of these issues can be done by looking at the log files at the sender and/or destination, or, if there is limited access to those files, by using a DICOM sniffer such as Wireshark. It is important to recognize whether a connection fails due to the initial negotiation of the file type and transfer syntax, or during the actual data exchange, which we are going to discuss in the next post.

Additional resources regarding this topic can be found in the DICOM textbook, and additional skills to troubleshoot these issues can be learned in our DICOM/PACS training classes, either on-line or face-to-face.