Tuesday, October 23, 2018

Should I jump into the FHIR right now?


I get this question a lot, especially when I teach FHIR, the new HL7 standard for the electronic exchange of healthcare information, as there seems to be a lot of excitement, if not outright "hype," about this topic.

My answer is usually "it depends," as there is a lot of potential, but there are also signs that it may be wise to wait a little bit until someone else has shaken out the bugs and issues. Here are some of the considerations that could assist your decision to implement FHIR right now, require it for new healthcare imaging and IT purchases, or start using it as it becomes available in new products.

1.      About 90 percent of the latest FHIR standard is still in draft stage – That means that new releases will be defined that are not backwards compatible, so upgrades are inevitable, which may cause interoperability issues as not all new products will be on the same release. As a matter of fact, I experienced this first hand at several hackathons when one device was on version 3 and another on version 2, which caused incompatibilities. The good news is that some of the so-called "resources," such as those used for patient demographics, are normative in the latest release, so we are getting there slowly.

2.      FHIR needs momentum – Implementing even a simple FHIR application, such as one used for appointments, requires several resources, for example patient demographics, provider information, encounter data, and organization information. If you implement only the patient resource and use "static data" for the rest, the remainder is still subject to updates, changes, and modifications; in other words, if you slice out only a small part of the FHIR standard, you don't gain anything. Unless you have a plan to eventually move the majority of those resources to FHIR, and to upgrade as they become available, don't do it. The US Veterans Administration showed at the latest HIMSS meeting how they exchange information between the VA and DOD using 11 FHIR resources, which allowed them to exchange the most critical information. It is when you implement on the order of ten or more FHIR resources that you achieve critical mass.

3.      Focus on mobile applications – FHIR uses RESTful web services, which is how the internet works, i.e. how Amazon, Facebook and others exchange information. You get much of the internet's security and authorization infrastructure for free; for example, accessing your lab results from an EMR could be as simple as using your Facebook login. The information is exchanged using standard encryption, similar to what protects your credit card information when you purchase something at Amazon. Creating a crude mobile app can be done in a matter of days, if not hours, as is shown at the various hackathons. Therefore, use FHIR where it is the most powerful.
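To make the RESTful idea concrete, here is a minimal sketch of what a mobile app's interaction with a FHIR server looks like: a search is just an HTTP GET against a URL, and the response body is a plain JSON resource. The base URL is hypothetical; the Patient resource shown is a hand-built example of what a server might return.

```python
import json
from urllib.parse import urlencode

# Hypothetical FHIR server endpoint -- substitute your own.
BASE = "https://fhir.example.org/r4"

def patient_search_url(family, given):
    """Build a RESTful FHIR search URL for the Patient resource."""
    return f"{BASE}/Patient?{urlencode({'family': family, 'given': given})}"

# A minimal Patient resource as it might appear in the JSON response body.
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Smith", "given": ["John"]}],
    "gender": "male",
}

url = patient_search_url("Smith", "John")
print(url)
print(json.dumps(patient["name"][0]))
```

That the whole exchange is an ordinary HTTPS request is exactly why a crude app can be stood up in hours: every web framework and mobile SDK already speaks this protocol.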

4.      Do NOT use it to replace HL7 v2 messaging – FHIR is like a multipurpose tool: it can be used for messaging, services, and documents, in addition to having a RESTful API, but that does not mean it is always the better tool. One of the traps that several people fell into when the XML-based HL7 version 3 was released is that they started to implement new systems based on this verbose new standard, because it was "the latest," without understanding how it would effectively choke the existing infrastructure in the hospitals. Version 2 is how the healthcare IT world runs today, and how it will run for many more years to come. Transitioning away from V2 will be a very slow and gradual process, picking the lowest-hanging fruit first.

5.      Do NOT use FHIR to replace documents (yet) – EMR-to-EMR information exchange uses the clinical document standard CDA; there are 20+ document templates defined, such as one for an ER discharge, which are critical to meeting the US requirements for information exchange and are more or less ingrained. However, there are some applications inside the hospital where a FHIR document exchange can be beneficial. For example, consider radiology reports, which need to be accessed by an EMR, a PACS viewing station, possibly a physician portal, and maybe some other applications. Instead of having copies stored in your voice recognition system, PACS, EMR, and even a router/broker or RIS, and having to deal with approvals, preliminary reports, and addendums at several locations, it is more effective to have a single accessible FHIR resource for those. One more comment about CDA: there is a mechanism to encapsulate a CDA inside a FHIR message, however, for that application you might be better off using true FHIR document encoding.
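The radiology report scenario can be sketched with FHIR's DiagnosticReport resource. This is an illustrative fragment, not a complete or validated resource: the identifiers and references are made up, and the point is simply that an addendum or sign-off updates one resource in one place rather than copies in five systems.

```python
# Minimal sketch of a radiology report held as a single FHIR
# DiagnosticReport resource; ids and references are hypothetical.
report = {
    "resourceType": "DiagnosticReport",
    "status": "preliminary",
    "code": {"text": "CT head without contrast"},
    "subject": {"reference": "Patient/example"},
    "conclusion": "No acute intracranial abnormality.",
}

# When the radiologist signs off, the one shared resource is updated in
# place; the EMR, PACS viewer, and physician portal all fetch the same
# current version instead of holding their own copies.
report["status"] = "final"
print(report["status"])
```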

6.      Profiling is essential – Remember that FHIR is designed (on purpose) to address 80% of all use cases. As an example, consider the patient name definition, which has only a few components, such as the family and given name. Just to put this in perspective, the version 2 name has many more components (last, first, middle name, prefix, suffix, and more). What if you need to add an alias, a middle name, or whatever makes sense in your application? You use a well-defined extension mechanism, but what if everyone uses a different extension? There need to be some common parameters that can be applied in a certain hospital, enterprise, state or country. Profiles define what is required, what is optional, and any extensions necessary to interoperate. I see several FHIR implementations in countries that did not make the effort to do this; for example, how to deal with Arabic names in addition to English names is a common issue in the Middle East, which could be defined in a profile.
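As a sketch of what such a profiled extension might look like, here is a name carrying an Arabic rendering alongside the English components. The extension URL is hypothetical; in practice it would be defined by a national or regional profile so that every implementation in that region uses the same one.

```python
# A FHIR HumanName with a hypothetical profile-defined extension that
# carries the Arabic rendering of the name.
name = {
    "family": "Haddad",
    "given": ["Samir"],
    "extension": [{
        "url": "http://example.org/fhir/StructureDefinition/name-arabic",
        "valueString": "سمير حداد",
    }],
}

def arabic_name(human_name):
    """Return the Arabic rendering if the profile's extension is present."""
    for ext in human_name.get("extension", []):
        if ext["url"].endswith("name-arabic"):
            return ext["valueString"]
    return None

print(arabic_name(name))
```

Without an agreed profile, a second vendor could invent a different URL or structure for the same concept, and the two systems would silently fail to exchange the Arabic name, which is exactly the interoperability gap profiling closes.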

7.      Develop a FHIR architecture/blueprint – Start by mapping out the transactions as they pass through the various applications. For example, a typical MPI system today might exchange ADT's with 20-30 applications, meaning that it communicates patient demographics, updates, merges, and changes to that many systems. Imagine a single patient resource that makes all of those transactions obsolete because the patient info can be invoked by a simple http call whenever it is needed. Note that some of the resources don't have to be created locally; a good example is the South Texas HIE, which provides a FHIR provider resource, so you never have to worry about finding the right provider, location, or name, or whether he or she is licensed.

8.      Monitor federal requirements (ONC in the US) – Whether you like it or not, vendors may be required to implement FHIR to comply with new regulations and/or incentives, including certification. To promote interoperability, which is still challenging (an understatement), especially in the US where we still have difficulty exchanging information even after billions of dollars spent on incentives, ONC is anxious to require FHIR-based connectivity. This is actually a little bit scary given the current state of the standard, but sometimes federal pressure can be helpful.

To repeat my earlier statement about FHIR implementation: yes, "it depends." Proceed with caution, implement it first where the benefits are the biggest (mobile), don't go overboard, and be aware that this is still bleeding edge and will take a few years to stabilize. If you would like to become more familiar with FHIR, there are several training classes and materials available; OTech is one of the training providers, and there is even a professional FHIR certification.

Saturday, October 6, 2018

PACS troubleshooting tips and tricks series (part 10): HL7 Orders and Results (report) issues.


In the previous posts in this series I talked about how to deal with communication errors, causes for an image to be Unverified, errors in the image header or display, and worklist issues. In this post I'll describe some of the most common issues with orders and results impacting the PACS.

Orders and results are created in an HL7 format, almost always in a version 2 encoding, with the most popular version being 2.3.1. A generic issue with HL7, not restricted to orders and results but applying to pretty much all HL7 messaging, is that HL7 version 2 is loosely standardized: there are many variations depending on the device manufacturer, and institutions also make modifications and changes to meet local workflow and other requirements.

The IHE Scheduled Workflow Profile provides guidelines on which messages to support and what their contents should be, but support for those profiles has been somewhat underwhelming. Therefore, an HL7 interface engine such as Mirth or one of the commercial alternatives has become a de-facto necessity, both to map the differences between versions and implementations and to provide queuing capability in case an interface goes down for a short period of time, so it can be restarted. Here are the most common issues I have encountered specifically related to orders and results as well as updates:
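For readers less familiar with the wire format, an HL7 v2 message is just delimited text: segments separated by carriage returns, fields by "|", and components by "^". The following sketch parses a made-up ORM-style fragment with nothing but string splitting, which is essentially what an interface engine does before mapping:

```python
# A made-up, abbreviated HL7 v2 order message: segments separated by
# carriage returns, fields by '|', components by '^'.
msg = "\r".join([
    "MSH|^~\\&|RIS|HOSP|PACS|HOSP|20181006||ORM^O01|123|P|2.3.1",
    "PID|||MRN12345^^^HOSP||Smith^John",
    "OBR|1|ACC9876||CTHEAD^CT head",
])

# Index segments by their three-letter ID, then pull individual fields.
segments = {line.split("|", 1)[0]: line.split("|") for line in msg.split("\r")}
patient_id = segments["PID"][3].split("^")[0]   # first component of PID-3
accession = segments["OBR"][2]                  # OBR-2 in this example
print(patient_id, accession)
```

A real interface engine does the same parsing but adds the per-vendor mapping tables and queuing discussed above; the field positions here match this toy message, not any particular vendor's layout.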

·        Patient ID mix-ups – There are several places in the HL7 order where the patient ID can reside, i.e. in the internal, external, MRN, SSN, or yet another field. As of version 2.3.1, HL7 extended the Patient ID field to a repeating list that includes the issuing agency and other details. DICOM supports a "primary" Patient ID field and expects all of the others to be aggregated in the "other ID" field. Finding where the Patient ID resides, in which field or position in the list, can be a challenge.
·        Physician name – The most important physician from a radiology perspective is the referring physician, which is carried over from the order into the DICOM MWL and image header. For some modalities, however, such as special procedures or cardiology, there can be other physicians, such as performing, attending, and ordering physicians, as well as multiple listings for each category. And despite the fact that the referring physician has a fixed location in the HL7 order, it sometimes ends up in another field and requires mapping.
·        HL7 and DICOM format mismatch – Ninety-five percent of the DICOM data elements have the same formats (aka Value Representations) as the HL7 data types; the 5% that differ can create issues when not properly mapped and/or transformed. For example, the Person Name has a different position for the name prefix and suffix, and many more components in HL7. There can also be different maximum length restrictions, possibly causing truncation, and the lists of enumerated values can differ, causing a worklist entry or the resulting DICOM header to be rejected. An example is the enumerated values for patient sex, which in DICOM are M, F, O; the list for HL7 version 2.3.1 is M, F, O, U, and for version 2.5 it is even longer, i.e. M, F, O, U, A, N. This requires mapping and transformation at the interface engine or MWL provider.
·        Report output issues – A report line is included in a so-called observation, aka OBX segment, as part of a report message (ORU). There is no standard for how to divide the report: some put, for example, the impression and conclusion in separate OBX segments, some group them together. In one case, an EMR receiving the report in HL7 encoding (ORU) displayed only the first line, obviously reading only the first OBX. Another potential issue is that a voice recognition system might use either unformatted (TX) or formatted (FT) text, and the receiver might not be able to understand the formatting commands.
·        Support for DICOM Structured Reports – Measurements from ultrasound and cardiology units are encoded as DICOM Structured Reports. Being able to import those measurements and automatically fill them into a report is a huge time saver (several minutes for each report) and reduces copy/paste errors. However, not all voice recognition systems support SR import, and those that do might have trouble with some of the SR templates and miss a measurement here and there. Interoperability with SR is generally somewhat troublesome, and implementation requires intensive testing and verification, as I have seen some measurements being missed or misinterpreted. Some vendors also use their own codes for measurements, which requires custom configuration.
·        Document management – For long reports, it might be more effective to store them on a document management server and send a link to the EMR, or, if you want more control over the format, encode the report as a PDF and attach it to the HL7 message. In this case, you will need to support the HL7 document management transactions (MDM) instead of the simple observations (ORU).
·        Updates, merges, and moves – Any change in patient demographics is problematic, as there are many different transactions defined in HL7 depending on the level of change (in the person, patient, visit, etc.) and the type of change, i.e. move a patient, merge two records, or simply update a name or other information in a patient record. Different systems support different transactions for these.
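The patient sex mismatch described above is a good example of the kind of translation table an interface engine or MWL provider applies. This is one possible mapping, not the only defensible one; in particular, folding HL7's U (unknown) into an empty DICOM value rather than "O" is a policy choice.

```python
# One possible mapping of HL7 v2.5 administrative sex codes to the
# values DICOM allows for Patient's Sex (0010,0040): M, F, O, or empty.
HL7_TO_DICOM_SEX = {
    "M": "M",
    "F": "F",
    "O": "O",
    "U": "",   # Unknown: DICOM has no U; leave the value empty
    "A": "O",  # Ambiguous: folded into Other (a policy choice)
    "N": "O",  # Not applicable: folded into Other (a policy choice)
}

def map_sex(hl7_value):
    # Anything unexpected is also left empty, so the resulting worklist
    # entry is not rejected for carrying an illegal enumerated value.
    return HL7_TO_DICOM_SEX.get(hl7_value, "")

print(map_sex("A"))
```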

In conclusion, HL7 messages vary widely, and interface engines and mapping are necessary evils.
If you would like to create sample HL7 orders or results, you can use an HL7 simulator (parser/sender). The HL7 textbook is a good resource, and there are also training options available.



PACS troubleshooting tips and tricks series (part 9): Modality Worklist issues


In the previous set of blog posts in this series I talked about how to deal with communication errors,
causes for an image to be Unverified and errors in the image header as well as display. This post will discuss the errors that might occur with the DICOM modality worklist.

A modality worklist (MWL) is created by querying a Modality Worklist provider using the DICOM protocol for studies to be performed at an acquisition modality. The information that is retrieved includes patient demographic details (name, ID, birthday, sex, etc.), order details (procedure code, Accession number identifying the order, etc.) and scheduling details (referring physician, scheduled date/time etc.). This information is contained in a scheduling database, which is created by receiving orders for the department in an HL7 format (ORM messages).

The worklist provider used to be hosted on a separate server, aka a broker or connectivity manager, but increasingly this function is embedded in the PACS, the RIS, or even an EMR with a radiology package. Moving this function from the broker to these other systems is the source of several issues, as the original broker was likely quite mature, with a lot of configurability to match the department workflow, while some of the newer implementations are still rather immature in that regard.

The challenge is to provide a worklist with only those examinations that are scheduled for a particular modality, no more and no less, which is achieved by mapping information from the HL7 order to a particular modality. Issues include:

·        The worklist is unable to differentiate between the same modality at different locations – An order has a procedure code and description, e.g. CT head. As the HL7 order does not have a separate field for modality, the MWL provider maps the procedure codes to a modality, in this case "CT," so a scanner can query for all procedures to be performed for modality "CT." The problem occurs if there is a CT in the ER, one in cardiology for cardiac exams, one in main radiology, and one in the therapy department (RT). Obviously, we don't want all procedures showing up on all these devices. It might get even more complicated if a CT in radiology is allocated, let's say on Fridays, to do scans for RT. We need to distinguish between these orders, e.g. by looking at the "patient class" (in- or outpatient), the department, or another field in the order, and map these procedures to a particular station. The modalities will have to support the "Station Name" or "Scheduled AE Title" as query keys.
·        The worklist can only query on a limited set of modality types – Some devices are not properly configured. For example, a panoramic x-ray unit used for dentistry should use the modality PX instead of CR, as the latter might group it together with all of the other CR units. The same applies to a Bone Mineral Densitometry ("DEXA") device, which should be identified as modality BMD instead of CR or OT ("Other"). Document scanners should be configured to pull for "DOC" instead of OT or SC ("Secondary Capture"), endoscopy exams need to be designated ES, and so on. The challenge is to configure the MWL provider as well as the modality itself to use matching modality codes.
·        The worklist has missing information – A worklist query might not have enough fields to include all the information needed at the modality. In one particular instance I encountered, the hospital wanted to see the Last Menstrual Date (LMD) as it was always on the paper order. Other examples are contrast allergy information, patient weight for some modalities, pregnancy status, or other information. If the worklist query does not have a field allocated for these, one could map this at the MWL provider in another field, preferably a “comment field” instead of misusing another field that was intended and named for a different purpose.
·        The worklist is not being displayed – There can be several reasons, assuming that you tested the connectivity as described in earlier posts: there could be no match for the matching key specified in the query request, or the query response that comes back is not interpreted correctly. In one case, a query response was not displayed at an ultrasound unit from a major manufacturer because one of the returned parameters had a value that was illegal, i.e. not part of the enumerated values defined by the DICOM standard for that field. In this case, I could only resolve the issue by capturing the responses with a sniffer and running them through a validator such as DVTK.
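The station-level filtering described in the first bullet can be illustrated with a toy worklist. In a real deployment the scanner issues a DICOM C-FIND with these attributes as matching keys; here the worklist entries are hypothetical dicts and the query is a plain filter, which is enough to show why "Scheduled Station AE Title" keeps the ER scanner from seeing radiology's cases:

```python
# Toy worklist "database": each entry is a Scheduled Procedure Step with
# the attributes a modality can use as matching keys (values are made up).
WORKLIST = [
    {"PatientName": "Smith^John", "Modality": "CT",
     "ScheduledStationAETitle": "CT_ER"},
    {"PatientName": "Jones^Mary", "Modality": "CT",
     "ScheduledStationAETitle": "CT_RAD"},
    {"PatientName": "Brown^Ann", "Modality": "MR",
     "ScheduledStationAETitle": "MR_1"},
]

def query(modality, aet):
    """Emulate MWL matching on Modality and Scheduled Station AE Title."""
    return [e for e in WORKLIST
            if e["Modality"] == modality
            and e["ScheduledStationAETitle"] == aet]

# The ER scanner only sees its own scheduled CT exams:
for entry in query("CT", "CT_ER"):
    print(entry["PatientName"])
```

The hard part in practice is not this filter but the upstream mapping that decides which AE Title each HL7 order gets, based on procedure code, patient class, or department.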

MWL issues are tricky to resolve. It is highly recommended to have access to the MWL provider's configuration software; most vendors offer a separate training class on this device. Be aware that the mapping tables need to be updated every time a new set of procedure codes is introduced, so this is an ongoing support effort. Configuration requires detailed knowledge of HL7 so you can do the mapping into DICOM.

To troubleshoot these issues, a modality worklist simulator can be very useful. There is a DVTK modality worklist simulator available for free and a licensed modality simulator from OTech.

In case you need to brush up on your HL7 knowledge, there is a HL7 textbook available and there are on-line as well as face-to-face training classes, which include a lot of hands-on exercises.

In the next blog post we’ll spend some time describing the most common HL7 issues impacting the PACS.


PACS troubleshooting tips and tricks series (part 8): DICOM display errors.


In the last set of blog posts in this series I talked about how to deal with communication errors, causes for an image to be Unverified and header errors. This post will discuss the errors that might occur when trying to display the images caused by incorrect DICOM header encoding.

When an image is processed for display, it goes through a series of steps, aka the Pixel pipeline. Think about this pipeline as a conveyor belt with several stations, each station having a specific task, such as applying a mask to the image, applying a window/width level, a look up table, annotations, rotating or zooming the image, etc. These “stations” are instructed by the information in the DICOM header, or taken from a separate DICOM file called Presentation State for processing.
There are two categories of problems: the first is incorrectly encoded header instructions, and the second is incorrect interpretation and processing due to a faulty software implementation. Here are the most common issues:

·        Incorrect grayscale interpretation and display – Images can be encoded as grayscale or color. Grayscale images are identified either as MONOCHROME2 in the header, which means that the lowest pixel value ("0") is interpreted and displayed as black, or MONOCHROME1, in which case the maximum pixel value (255 for 8-bit images) is interpreted as black. Typically, MR and CT are encoded as MONOCHROME2 and digital radiography as MONOCHROME1. However, there is nothing that prevents a vendor from inverting its data and using a different photometric interpretation. Anytime an image is displayed inverted instead of in its normal view, the MONOCHROME1/2 identification is the first place to look. I have seen problems where the software after an upgrade ignored the photometric interpretation, causing all of the CR/DR to be displayed correctly but inverting the CT/MR, or displaying the image correctly but with the mask or background inverted.
·        Incorrect color interpretation and display – Color images can be encoded in several different manners, the most common one being a triplet of Red, Green and Blue (RGB). However, DICOM allows several others (CMYK, etc.) and also allows sending a color palette in the header that the receiving workstation has to use to map the color scale. Palette color is used when the sender is very particular about the color, such as in nuclear medicine, unlike ultrasound, where color is merely used to indicate the direction of the blood flow (red/blue). Having many different color encodings increases the chance that a receiver cannot display one of them. I have seen this after a data migration, where some of the ultrasound images from a particular manufacturer did not display their color correctly on the new PACS viewer.
·        Failing to display a Presentation State – The steps in the pipeline dealing with image presentation (mask, shutters, display and image annotation, and image transformations such as zoom and pan) can be encoded and kept as a separate DICOM file together with the study containing the images. Not every vendor implements all the steps correctly, and I have also seen implementations that interpret only the first Presentation State and ignore any additional ones.
·        Incorrect interpretation of the pixel representation – Some modalities, notably CT, can have negative pixel values (Hounsfield units, or HU) indicating that a visualized tissue has an X-ray attenuation less than water, which is calibrated to be exactly 0 HU. Some modalities, especially CT and PET, also scale the pixel values using the Rescale Slope and Rescale Intercept attributes. If the software does not interpret these correctly, the image display will be corrupted.
·        Incorrect interpretation of non-square pixels – Some modalities, notably US and C-arms, have "non-square" pixels, meaning that the x and y directions have a different resolution. The pixels need to be "stretched" through interpolation based on the aspect ratio; for example, if the ratio is 5/6, the image needs to be extended in the y direction by a factor of 6/5, i.e. a 20% stretch. If your images look compressed, which you'll notice by the compressed text or, in the case of a C-arm, by circles becoming egg-shaped, the software does not support non-square pixels. Apart from looking kind of strange, it might not impact image interpretation.
·        Shutters incorrectly displayed – A shutter can be circular with a defined radius and center point, or rectangular with defined x,y coordinates, and is intended to cover the collimated areas, which would otherwise display as very bright to the radiologist. I have seen implementations ignoring the circular shutter, which makes the radiologist, who then has to look at the white space, very unhappy.
·        Overlay display issues – Overlays used to be encoded in the PACS database in proprietary formats, which is a big issue when migrating the data to another PACS system. And even if encoded in a DICOM-defined manner, there are several options, ranging from stand-alone objects, to bitmaps in the DICOM header, to overlays embedded in the pixel data field, or, worst case, burned in, i.e. replacing the actual pixels with the overlay. If the overlays contain clinical information, e.g. a Left/Right indicator on the image, it is important to check how they are encoded to make sure that when the data is migrated or read from a CD on another system, the user will be able to see them. The same applies to "fixing" burned-in annotations: don't overlay a series of "XXX-es" in case the name was incorrect, as they might not be displayed in the future. The best way to get rid of incorrect burned-in annotations is an off-line image editing routine that functions as a "paintbrush" and replaces the pixel data.
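The non-square pixel arithmetic above is worth making explicit, since it is a frequent source of confusion. Assuming the aspect ratio is given as the two spacing components (as in DICOM's Pixel Aspect Ratio attribute), the correction factor between the two directions is simply the larger component divided by the smaller one:

```python
def stretch_factor(vertical, horizontal):
    """Resampling factor needed to display non-square pixels as square.

    For a 5/6 ratio the factor is 6/5 = 1.2, i.e. a 20% stretch in the
    shorter direction (the y direction in the example in the text).
    """
    return max(vertical, horizontal) / min(vertical, horizontal)

f = stretch_factor(5, 6)
print(f)                # 1.2
print(round(400 * f))   # 400 stored rows become 480 display rows
```

Which direction gets stretched depends on the convention the modality used when filling in the ratio, so a viewer has to read the attribute rather than assume; the factor itself, however, is fixed by the two components.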

The image pixel pipeline facilitates all the different combinations and permutations of the various pixel encodings, which in practice might not always be completely or correctly implemented. There is an IHE profile defined, called "Consistent Presentation of Images"; check the IHE integration statement of your PACS to determine whether it is supported, meaning that the software implements the complete pipeline.

In addition, this profile has a detailed test plan and a set of more than 200 images and corresponding presentation state files that are available in the public domain and can be accessed from the IHE website under "testtools." I strongly recommend that after the initial installation, and with each subsequent software upgrade, you load these images and check whether the pipeline works. These test images have different pixel encodings with instructions in the header negating the pixel display; for example, an image might be MONOCHROME1 with an inverted LUT to be applied, displaying the same as if it were MONOCHROME2 with a regular, linear LUT.

Another good resource is the PACS fundamentals textbook that explains the pipeline in great detail. The next blog post will be on Modality Worklist issues.

PACS troubleshooting tips and tricks series (part7): DICOM header errors.


The last set of blog posts in this series discussed dealing with communication errors and causes for an
image to be Unverified. This post will discuss the errors that might occur caused by incorrect DICOM header encoding.

The PACS will typically only check that the header information is correct for those data elements that directly impact data integrity, i.e. the correct indexing and subsequent retrieval of the DICOM files. There are about 10-15 such data elements, including Name, ID, and sex. There can be other errors in the data that impact future retrieval and processing but would not necessarily cause an image to be Unverified.
The most common DICOM header issues I have experienced are as follows:

·        Old date and time format – The first version of the DICOM standard had a different encoding for dates and times, separating the components with a period ("."); for example, instead of the encoding YYYYMMDD (20180821) it would encode the date as YYYY.MM.DD (2018.08.21). A DICOM editor would need to be used to change this.
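Converting the retired date encoding is mechanical enough that a migration script can do it; a sketch of the date case (the old time format, with its own separators, would need a similar pattern):

```python
import re

def fix_old_dicom_date(value):
    """Convert the retired YYYY.MM.DD date encoding to YYYYMMDD.

    Values already in the current format are returned unchanged.
    """
    m = re.fullmatch(r"(\d{4})\.(\d{2})\.(\d{2})", value)
    return "".join(m.groups()) if m else value

print(fix_old_dicom_date("2018.08.21"))  # -> 20180821
print(fix_old_dicom_date("20180821"))    # already current, unchanged
```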

·        Padding using spaces and nulls – There can be problems with "padding," i.e. adding a space and/or a NULL (the ASCII control character with value 0) before or after a data element value. Some of the data elements allow for padding before and after (e.g. the Accession Number), some only after (e.g. the Person Name). That means that a space before a name ("<sp>Smith") is significant, and a search on "Smith" (without the space) will not match and will not return any results. A space before the Accession Number is not significant, and therefore a search with or without the space should result in a match.

Part of this problem is self-inflicted, as the DICOM standard requires each data element to have an even number of characters/bytes; therefore, if a data element has an odd length (e.g. "Smith" with 5 characters), it is padded ("Smith<sp>") to change its length from 5 to 6 bytes. To make it even more complex, a Unique Identifier (UID) has to be padded with a NULL instead of a space if it has an odd number of characters. Most DICOM toolkits are aware of this and will strip the padding off and/or add it when providing these data elements to the application, but there can be "rogue" implementations that do this incorrectly. In my own experience, I have seen a DICOM router not sending images to a particular physician because the physician's name ("Smith<sp>") in the header did not match the routing table entry ("Smith").
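The padding rule itself is small enough to show in a few lines. This sketch pads a value to an even byte length the way a toolkit would when writing a data element: a trailing space for text values, a NULL byte for UIDs (the UID string shown is just an illustrative fragment, not a registered UID):

```python
def pad_dicom_value(value, is_uid=False):
    """Pad a DICOM element value to an even byte length.

    Text values get a trailing space; UIDs must be padded with a
    NULL byte (0x00) instead.
    """
    data = value.encode("ascii")
    if len(data) % 2:                       # odd length -> pad one byte
        data += b"\x00" if is_uid else b" "
    return data

print(pad_dicom_value("Smith"))              # b'Smith ' (6 bytes)
print(pad_dicom_value("1.2.840.113", True))  # padded with a NULL byte
```

The matching problem in the routing-table anecdote comes from the receiving side forgetting the inverse step: stripping this padding before comparing values.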

·        Person names – Patient and physician names are encoded somewhat differently in the HL7 orders. For example, the physician name in HL7 is typically preceded by an alphanumeric code that refers to the physician registry in order to properly identify Dr. Smith. The sub-components are also in a different order, i.e. the name suffix and prefix are reversed in the DICOM data format, in addition to the fact that the name in HL7 can have up to 14 (!) components in the latest version, while DICOM only allows for 5 (Last, First, Middle, Prefix, Suffix).

Patient names can also be incorrect due to faulty user input. Imagine an input clerk entering "John Smith" in the last name field instead of splitting it over the last/first name fields; this record will not match the name "Smith" in searches. If detected, a user can update the patient name, which will cause an HL7 update message to be sent to all interested parties. If not detected, it can create issues later on.
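The component reordering described above is a classic interface-engine mapping. Assuming the common five-component layouts (HL7 orders them family^given^middle^suffix^prefix, DICOM family^given^middle^prefix^suffix), a minimal sketch looks like:

```python
def hl7_name_to_dicom(xpn):
    """Map an HL7 XPN person name to a DICOM PN value.

    HL7 orders the first five components family^given^middle^suffix^prefix,
    while DICOM PN uses family^given^middle^prefix^suffix: the prefix and
    suffix swap places, and any components beyond five are dropped.
    """
    parts = (xpn.split("^") + [""] * 5)[:5]    # pad short names
    family, given, middle, suffix, prefix = parts
    return "^".join([family, given, middle, prefix, suffix]).rstrip("^")

print(hl7_name_to_dicom("Smith^John^J^Jr^Dr"))  # -> Smith^John^J^Dr^Jr
```

Note that simply dropping the extra HL7 components, as done here, is a lossy choice; a production mapping would decide per field what to preserve.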

·        Escape and control characters – One of the most important control characters in DICOM is the "\", which is used to separate the individual values of a multi-valued data element. For example, if the patient was identified in the past with her maiden name, or the name of her first husband, one could encode this as Smith\Jones. The "\" has a different role in HL7, which carries the order and patient demographic information; therefore, any HL7-to-DICOM mapping is supposed to filter out these characters and, if essential, possibly replace them with a character that is not a DICOM delimiter, e.g. a "/", to prevent the software from becoming confused.
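The filtering step is trivial but easy to forget; a sketch of one possible policy (replacing rather than dropping the character):

```python
def sanitize_for_dicom(hl7_value):
    """Neutralize the DICOM value delimiter in text coming from HL7.

    In DICOM, '\\' separates multiple values, so a literal backslash in
    free text would be misread as a value separator. Swapping it for '/'
    is one possible policy; dropping it entirely is another.
    """
    return hl7_value.replace("\\", "/")

print(sanitize_for_dicom(r"Smith\Jones"))  # -> Smith/Jones
```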

These are some of the most common errors; there are several more. The problem is that they might go undetected and create issues later, for example when migrating the data to another PACS or creating a CD that is read by another PACS. As a typical header often has a hundred or more data elements, it is hard to detect these issues visually, and using a header validator is the only practical way to find them.

There are a couple of tools available for free that I use, such as DVTK, for which there is a video demo on how to use it. I recommend using it whenever there is an issue, e.g. when an image from a CD is rejected or produces display problems. It is also a good idea to run this validator against any new modality; you'll be surprised how many problems you'll find. Some of them might be insignificant, but some can be important.

Another resource to use is the OTech reference guide, which lists all of the data types (VR’s) for both DICOM and HL7 in case you need to check for the validity of data elements. We also spend quite a bit of time in our DICOM training sessions going over the testing and validation process.

The next post we’ll talk about the most common image display issues and validating the image display pipeline.


PACS troubleshooting tips and tricks series (part6): Unverified PACS cases.


In the previous posts in this series I talked about network and addressing issues, incompatible file types and transfer syntaxes, and DICOM communication errors. This post deals with errors that might occur when the information transfer itself succeeds but the PACS determines that there are issues with the data, causing the file to be flagged as "Unverified" or "Broken." This means that these images are NOT added to the queue to be interpreted by a radiologist. The following issues can occur:

·        Missing “exam complete” status – Some PACS systems will automatically add an incoming study to a radiologist’s worklist, some can be configured to do so, and some require an “exam complete” status to be initiated. This exam complete can be entered by the technologist at a radiology information system or an EMR, which will typically cause an HL7 transaction to be sent to the PACS to close out the order. These updates could also be entered at the PACS, again depending on the architecture.

It is possible to have this event automatically triggered at a modality by using MPPS (Modality Performed Procedure Step), however, there are relatively few institutions that make use of this feature, even though it is universally available at almost all digital modalities. Sometimes it is required to close out the study manually, for example, if the study requires loading additional information from a CD that is brought in by the patient, or requires additional processing and creation of derived images such as for using CAD or 3-D reconstructions.

Assuming that the PACS is configured to listen to the “exam complete” status, if this event is not issued because the technologist forgets to do so, or there is a communication error between the trigger initiator and PACS, it will cause the study to NOT appear on a worklist for interpretation.

·        Duplicate identifier(s) – There are several important patient and study identifiers, which can be duplicated or repeated for another study in error. The reasons for duplication are incorrect manual entry, non-uniqueness (such as for the accession number and internal patient IDs, especially when importing “foreign” studies), or software errors. The PACS core software will check for these situations because duplication prevents it from uniquely identifying or indexing studies for archiving and retrieval operations.

A special case occurs when the object number, aka SOP Instance UID, which is a unique identifier for that specific DICOM object, gets duplicated. The message “duplicate SUID” will typically appear. The SUID functions as a unique number, similar to the VIN of a car. Duplication can occur because of software errors, resending the same file twice to a destination, or a change made in the file header. Different PACS systems behave differently when receiving a duplicate SUID: some overwrite the original file, some ignore the new one, and some report the error, causing an Unverified study. One should never fix these SUIDs manually, as uniqueness cannot be guaranteed; one should always use an off-line SUID generator in case these need to be fixed. If a “rogue” software implementation generates duplicate UIDs on a regular basis for different objects, one could consider using a programmable router, which can be configured to create a new, unique number.
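A sketch of such an off-line UID generator in Python. Note that the root below is a placeholder; in practice you would use a UID root registered to your organization:

```python
import itertools
import time

# Placeholder root - substitute the UID root registered to your organization.
ORG_ROOT = "1.3.6.1.4.1.99999"
_counter = itertools.count(1)

def new_sop_instance_uid() -> str:
    """Build a fresh SOP Instance UID from the root, a timestamp,
    and a process-local counter, staying within the 64-character
    limit DICOM imposes on UIDs."""
    uid = f"{ORG_ROOT}.{int(time.time())}.{next(_counter)}"
    assert len(uid) <= 64  # DICOM UIDs may not exceed 64 characters
    return uid

print(new_sop_instance_uid())
```

The timestamp-plus-counter scheme only guarantees uniqueness within one process; a production generator would also fold in something machine- and process-specific.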

·        Missing identifier(s) – A missing identifier such as a patient ID, name, accession number, patient sex, birthdate, and several others will also cause an image to be Unverified. A user might actually use this behavior intentionally, for example by leaving an accession number blank when importing an external study, to allow it to be manually verified and corrected at the PACS. The DICOM standard defines what information in the DICOM header has to be present, and a software application will typically flag it when any of these are missing.

·        Exceeding length – Each data element in the header has, as part of its data type or VR (Value Representation) specification, an associated maximum length. Most of these are ample; for example, the maximum length of a patient name is 64 characters, which means it will very rarely, if ever, be exceeded. However, some attributes might be improperly used, such as including a long description in a field that is defined to hold only 64 characters. Exceeding this maximum length could cause the object to become Unverified. If this is a common issue, one could use a DICOM router that can be configured to “fix” the header. It can also occur when migrating images to a new PACS: the old PACS did not care to identify the issue, and the new PACS rejects these cases because it interprets the DICOM standard more strictly than the original source PACS.

·        Incorrect codes – Some DICOM data elements are restricted to a fixed set of values, identified by the DICOM standard as “enumerated values,” such as M, F and O (“Other”) for patient sex. If the initiating system, which can be an EMR or data entry system, uses a different set of values for such a data element, the object can be rejected.

Fixing unverified studies forms the core of the work for many PACS administrators. Identifying the root causes and trying to prevent them will increase the data integrity of the system and lighten their workload. There are routers that can fix chronic inconsistencies. Some PACS systems, especially those intended to be enterprise systems such as a VNA (Vendor Neutral Archive), can be configured to use “tag morphing” to fix the information. Tag morphing can also be used to clean up inconsistent study and series descriptions and/or body part identifiers. Note that some header issues go unnoticed and are not flagged, which means they only surface when someone tries to display the image or interpret the DICOM file. These DICOM header issues will be covered in the next post.

Additional resources can be found in the DICOM textbook (ebook is available as well) and I also created a small DICOM/HL7 reference guide, which lists the DICOM data dictionary, UID’s and VR specifications.



Wednesday, August 29, 2018

PACS troubleshooting tips and tricks series (part 5): DICOM communication errors.


In the previous blog posts in this series I talked about network and addressing issues, and incompatible file types as well as transfer syntaxes. This post deals with errors that might occur during the actual DICOM information transfer, i.e. after the DICOM connection (Association) has been successfully established.

When an Association is accepted by the server (SCP), which is indicated by the “Associate_Accept” transaction, the DICOM client (SCU) will issue the DICOM command determined by the negotiated SOP Class. For example, if a device proposes to exchange CT images and this is “OK-ed” by the SCP, the SCU will issue the C-Store command with the CT image file. The receiver will interpret the command, then discard it, and take the file and, in the case of a C-Store, likely archive it. It could also update a database so it can find the file and/or reply to a DICOM query about its location.

If the SCP is a transient device such as a DICOM router, it will pass the file on to its destination. Each DICOM command is answered with a corresponding response; for example, a C-Store Request will result in a C-Store Response, and the same applies for queries, moves, etc. The response has a status code associated with the transaction; hopefully it will be “success,” which is identified by the code “0000.” In case there is an error, the status code will contain the appropriate error code other than “0000.” These codes are standardized by the DICOM standard, i.e. there are codes defined for the most frequent errors and warnings.

Here are some of the common errors that you might see:
·        Resource issues – Imagine that you are sending a set of images to a destination with limited resources, such as a workstation with limited disk space. If the destination cannot receive any more data files, it will indicate that with status code A700, meaning “out of resources.” To resolve this issue, one would either free up resources at the destination or send the information to another destination. The reason this error occurs is that one does not know in advance how many files are going to be sent; there is no indication in the Association negotiation of how many images are to be transferred. The resource issue does not have to be related to archive space; it could also be space for additional tables in the database or another resource restriction.
·        Processing errors – The receiver reports errors when processing the information; for example, it might need to update several database tables upon receipt and archiving, and might have a problem because the images cannot be uniquely identified due to duplicate or missing identifiers. Not every receiving system implements the same set of criteria for this error condition; some will actually accept the information and report “success” while quarantining the data file and flagging it as “Unverified” or “Broken.” This is done so that a physician might still be able to view and report on the images while awaiting resolution of, say, an incomplete patient name and/or other demographic information. We’ll spend another post on the most common issues causing the Unverified status.
·        Warnings – A server might also give back a “warning,” its status code typically will start with a hex “B,” an example would be a print server sending back a warning that the number of sheets is getting low in a supply magazine. Another example could be a server telling a SCU that it modified or “coerced” one or more data elements in the header, as needed to make it unique or based on a patient update or merge. Most applications ignore these messages.
·        Pending – A “pending” status return message is not an error condition, but an indication to the client that the server is processing the request and will send more replies. This is common for a query response that has multiple matches. This non-successful completion is part of normal behavior.
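A small sketch of how a client might sort incoming status codes into these categories. This is a simplified subset for illustration; consult the DICOM standard for the authoritative code ranges:

```python
def classify_status(status: int) -> str:
    """Classify a DICOM response status code into the categories
    discussed above. Only a few illustrative codes are handled;
    everything else is treated as a generic failure."""
    if status == 0x0000:
        return "success"
    if status in (0xFF00, 0xFF01):    # more query matches to follow
        return "pending"
    if 0xB000 <= status <= 0xBFFF:    # warnings start with hex "B"
        return "warning"
    if status == 0xA700:
        return "failure: out of resources"
    return "failure"

print(classify_status(0x0000))   # -> success
print(classify_status(0xA700))   # -> failure: out of resources
print(classify_status(0xFF00))   # -> pending
```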

How does a client behave when it receives an error? Each device could have a different reaction: some might just continue with what they are doing and log the error, some might stop and notify the user, and some might retry a configurable number of times at configurable intervals. If a device follows the DICOM standard guidelines, it should specify its behavior in its conformance statement under the section “SOP Specific Conformance” for that particular SOP Class. Therefore, check that resource; almost all DICOM conformance statements can be found on-line. Make sure, though, when you look at these conformance statements, that you have the right software version matching your device.

These types of errors are somewhat unpredictable and are typically caused by data errors or inconsistencies, unlike the errors caused by file type or transfer syntax mismatches. As mentioned earlier, it is possible that a server still acknowledges the information transfer as successful even if there is an issue, in order to allow a physician to access the incomplete data, as is common in emergency cases where the patient cannot be identified at the time of registration. The most common causes for these so-called “Unverified” studies are going to be discussed in our next post.

For additional resources on this topic, you can use “www.otpedia.com,” which has a definition of the most common DICOM terms, or the DICOM textbook which is available either as a printed text or e-book, or, attend our on-line or face-to-face training seminars on PACS/DICOM.


PACS troubleshooting tips and tricks series (part 4): Transfer Syntax support errors.


In the last three blog posts in this series I talked about network and addressing issues and incompatible file types. This post deals with incompatible Transfer Syntaxes (compression etc.) that could be proposed by a DICOM device initiating a DICOM connection.

In the last post I explained that a device proposes a list of items called a Presentation Context, which includes the file type (Abstract Syntax, or SOP Class in DICOM terms) to be exchanged and the proposed encoding, or Transfer Syntax, of these files. A transfer syntax is a different representation of the data; the information content stays the same. Think of it as sending a file either in its original size or as a “zipped” file: in either case, the content is identical.

There are three parameters that can change in the Transfer Syntax:
1)     The byte order,
2)     Whether or not the data exchange includes the data type (or Value Representation) for each data element, and
3)     Whether the data is compressed.

When a particular transfer syntax is not supported by the device, it will typically return an error with the text “Transfer Syntax not supported” or the more generic message “Presentation Context not supported.”
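The first parameter, byte order, is easy to demonstrate with Python’s struct module: the same 16-bit value is laid out with its bytes in opposite order depending on the endianness chosen:

```python
import struct

value = 0x1234  # a 16-bit value, e.g. one pixel sample

little = struct.pack("<H", value)  # Little Endian: least significant byte first
big = struct.pack(">H", value)     # Big Endian: most significant byte first

print(little.hex())  # -> 3412
print(big.hex())     # -> 1234
```

A receiver that assumes the wrong byte order would read the pixel value 0x1234 as 0x3412, which is why the transfer syntax has to be agreed upon up front.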

·        Byte order issues – The byte order can be either Little Endian (LE) or Big Endian (BE), which defines whether the data is encoded for each word with its least significant byte first (LE) or most significant byte first (BE). As an analogy, some languages are written and read from right to left (Arabic, Hebrew) instead of left to right; it is a matter of knowing how to read the data (otherwise the data would be reversed). BE is usually associated with a UNIX-based operating system using a Motorola CPU architecture, which means that you’ll see it only in relatively old images, as most devices are now based on an Intel/Windows architecture.
Almost all PACS systems will support both LE and BE because they need to support these old formats; you’ll rarely see a BE modality. However, due to its limited support, I strongly suggest configuring a system to ONLY support LE to prevent compatibility issues. I have seen systems that claim to support BE but do not display BE images correctly. Note that LE is the default transfer syntax, meaning that every system is supposed to support LE.
·        Value Representation (VR) support issues – The DICOM header can either specify the data type for each individual data element (Explicit VR) or leave it out (Implicit VR), which leaves it up to the software to use a data dictionary to determine the data type. Despite the fact that Implicit VR is the default Transfer Syntax, I strongly suggest setting all of your devices to only send and/or accept Explicit VR. The reason is that when archiving DICOM files on a CD, Explicit VR is the required encoding; you’ll therefore reduce the chance that someone might just copy a file to a CD without doing the Implicit to Explicit VR conversion. In addition, many vendors include private data elements in the DICOM header, and by requiring Explicit VR you will at least know the data type of those elements, so you can potentially manage and interpret them.
·        Compression issues – Compression support has become important as new image files and studies are getting very large (notably breast tomosynthesis), thus taxing the communication and storage infrastructure. There are many compression schemes defined in the DICOM standard; last time I counted there were 35! In practice, most PACS systems support only a handful, e.g. about 5-10. JPEG for still images and MPEG for video are the most popular, and lately JPEG2000 (wavelet) compression is supported as well. If your PACS has not been upgraded lately, it might not support wavelet compression, which will cause a rejection when a device wants to use the wavelet encoding for information exchange. This could be a problem, as some senders might not be able to decompress the data upon request. Some devices support a proprietary compression, which obviously will only be supported by devices from the same vendor. Note that compressed images are not allowed on the most popular CD format, which sometimes creates issues if this rule is not followed. When a lossy compression syntax is used (lossy means non-reversible, as opposed to lossless compression, which is reversible), the creator of this file is required to change its Unique Identifier (UID), i.e. it is another copy of the image, which sometimes causes issues as some systems are confused by two versions of the same object, i.e. the original and the lossy compressed one. The creator of the lossy compressed image is also required to update the header with a “compression flag,” preventing the image from being lossy compressed again in the future, as this would create major image artifacts.
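The Explicit vs Implicit VR difference can also be sketched with the struct module. This is a simplified sketch that encodes a single short-form data element (Patient Name, tag (0010,0010), VR PN) in Little Endian; the long-form VRs such as OB use a different length layout, which is ignored here:

```python
import struct

def encode_implicit_le(group: int, elem: int, value: bytes) -> bytes:
    # Implicit VR Little Endian: tag + 4-byte length + value (no VR field);
    # the receiver needs a data dictionary to know the data type.
    return struct.pack("<HHI", group, elem, len(value)) + value

def encode_explicit_le(group: int, elem: int, vr: bytes, value: bytes) -> bytes:
    # Explicit VR Little Endian (short-form VRs such as PN):
    # tag + 2-character VR + 2-byte length + value.
    return struct.pack("<HH", group, elem) + vr + struct.pack("<H", len(value)) + value

name = b"SMITH^JANE"  # even length, as DICOM requires
print(encode_implicit_le(0x0010, 0x0010, name).hex())
print(encode_explicit_le(0x0010, 0x0010, b"PN", name).hex())
```

The explicit encoding carries the two-character "PN" right in the byte stream, which is exactly what makes private data elements interpretable without the vendor's data dictionary.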

Transfer syntax issues, together with file type issues, are the major causes of a connection not being established. The good news is that this should be a consistent error: unless there is a software upgrade, or the user selects another object type and/or transfer syntax, it should keep working once the initial installation is successful. These issues can be detected by looking at the log files at the sender and/or destination, or, if there is limited access to those files, by using a DICOM sniffer such as Wireshark. It is important to recognize whether a connection fails during the initial negotiation of the file type and transfer syntax, or during the actual data exchange, which we are going to discuss in the next post.

Additional resources regarding this topic can be found in the DICOM textbook, and additional skills to troubleshoot these issues can be learned in our DICOM/PACS training classes, either on-line or face-to-face.


Monday, August 27, 2018

PACS troubleshooting tips and tricks series (part 2): Addressing.


In my last blog, I discussed network and transmission issues, in this write-up I’ll discuss “addressing” issues. A DICOM device needs to be configured with three addresses:
  1.  The network address (IP address)
  2.  The port number
  3.  The application-level address, aka AE-Title

As an analogy, think about the IP address as a “street address,” the port number as the apartment number (let’s say 38A), and the AE-Title as the tenant, of which there can be multiple if the apartment is shared by several occupants.

Imagine that a system cannot communicate; the causes could be due to:

[Illustration 1: Use “ping” to check network availability]
  • IP issues – these can be caused by incorrect VLAN configuration, router DHCP settings, wireless issues, VPN problems, firewalls, or something as simple as having the incorrect IP address of your destination. Remember that DICOM was defined in the early 1990’s when no one thought we would run out of IP addresses, so it relies on fixed IP addresses and cannot handle the dynamic addresses typically issued by DHCP. (Actually there is an option in DICOM that allows this, but it is rarely implemented.) Testing whether you can reach your destination is simple: open a command prompt and use the ping command with the appropriate destination IP (see illustration 1). To find the IP address of your destination you can use the “ipconfig” command for Windows or “ifconfig” for Unix at the destination computer. Wireless connections are challenging, as many devices are now becoming wireless, such as portable ultrasounds, portable x-ray units, and the digital acquisition plates for digital radiography.
  • The challenges with wireless are:
  1. The communication has to be encrypted (remember the metadata in the DICOM files that are transferred contain patient information)
  2.  The IP address has to be fixed, which could be a challenge when moving between different floors and connecting to a different wireless router
  3. There are still immature wireless applications out there, i.e. unstable connections at the devices themselves. Note that the fact that a device communicates today does not mean that it works tomorrow, as there could be a new router installed with incorrect IP settings by your IT department.
·        Port issues – the “well-known” DICOM port is 104; however, some systems don’t allow applications to allocate a port in this low, privileged number range, and therefore a relatively newly “approved” DICOM port is 11112. It is strongly encouraged to always use this officially assigned port (as registered with IANA) when you configure your system. There could be multiple DICOM applications listening to the same port. I had this happen to me after I installed a DICOM viewer on my laptop, which was listening on 104 in the background and grabbing my images instead of the archive I wanted to send them to; it took me an hour or so to figure out. The “netstat” command will show you information about your network, with some of the options including the port numbers and the processes attached to them.

[Illustration: Use Verification to check DICOM AE availability]
·        AE-Title (AET) issues – Some systems use the computer host name as an AE-Title, which is poor practice as a single host could have multiple AE-Titles. Some systems have a fixed AE-Title, which is even worse, as it should be configurable. It is recommended to have a local “standard” for AE-Title assignments, for example, include the hospital abbreviation, location and modality, such as BAYL_DT_OUTP_CT1 being CT number 1 in the outpatient department at Baylor Medical Center in downtown Dallas. A good practice is to use capital letters only for the AET to eliminate any case sensitivity issues. Remember, there can be multiple AE-Titles listening to the same port at the same IP; for example, a PACS system might have different AE-Titles for the archive, database, and workflow provider. Duplicate AE-Titles on the same network will create problems, especially when retrieving information, as the AE-Title is part of the retrieval command.
Most PACS systems need the initiating device to be added to their configuration for image retrievals, which explains why you can sometimes list the studies on a workstation but the subsequent retrieval fails. Some PACS systems allow you to configure a “promiscuous” mode, meaning that they will listen to any device accessing them at the correct IP and port, which is also bad practice and should be discouraged. Testing whether the DICOM application is running and able to reply to an initiating device is done by executing a DICOM Verification, aka “DICOM ping” or “echo” command. Some applications allow you to execute this from the application itself; sometimes you have to find it in a service or other menu, see illustration for a sample.
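A quick sketch of an AE-Title sanity check along the lines of the conventions above. The 16-character limit and the ban on leading/trailing spaces come from the DICOM AE value representation; the upper-case rule is the local convention suggested here, not a DICOM requirement:

```python
def check_ae_title(aet: str) -> list:
    """Return a list of problems with a proposed AE-Title;
    an empty list means it passes all checks."""
    problems = []
    if not aet or len(aet) > 16:
        problems.append("must be 1 to 16 characters")
    if aet != aet.strip():
        problems.append("no leading or trailing spaces")
    if "\\" in aet:
        problems.append("backslash is not allowed")
    if aet != aet.upper():
        problems.append("local convention: capital letters only")
    return problems

print(check_ae_title("BAYL_DT_OUTP_CT1"))  # -> []
```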

PACS addressing issues are relatively easy to troubleshoot, if you use ping and DICOM Verification, you should be able to solve 95% of the issues. Sometimes you might find that ping does not make it through because of very strict IT policies and subsequent router settings, and some rogue DICOM implementations might have trouble supporting Verification (which is a requirement for any device that is a DICOM listener). 

If you are new to DICOM (or would like to learn more about its fundamentals) you might want to check out the DICOM textbook, or sign up for our on-line or face-to-face DICOM training classes, which have quite a bit of hands-on exercises.

Last but not least, look for more upcoming “troubleshooting tips” which will be on DICOM file support issues.




PACS troubleshooting tips and tricks series (part 3): DICOM file type support errors.


In the last two blog posts in this series I talked about network and addressing issues. Assuming we are able to connect to another device, the next step is trying to exchange information between the two devices using the DICOM protocol. Here is where things can go wrong as well.

Before we discuss details, let’s go over the DICOM protocol steps that include the handshake. There are two separate levels of communication, the first one being the connection management, which can be considered the “session” layer of the protocol, whereby a connection is properly established and released. Establishing the connection includes an agreement on what information will be exchanged and how it is encoded. The negotiation of the information exchange is a simple “proposed” and either “accepted” or “rejected” step. We’ll discuss the handshake about the file type to be exchanged first.

There are three terms that are important in this context. The first is the “Presentation Context,” which is the set of parameters proposed by the client device and includes the type of information to be exchanged, such as “CT Images,” a “Dose Report,” an image “Presentation State,” and many others. The second term to know is the “Abstract Syntax,” which identifies the information to be exchanged, and the third is the “SOP Class,” identifying the requested service, such as “Store these CT images.” Incompatibility issues are typically reported by the device as “Presentation Context,” “Abstract Syntax,” or “SOP Class” not supported.
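The accept/reject step can be modeled in a few lines of Python. This is a toy model of the logic only, not a working DICOM stack; the two UIDs in the supported set are the real SOP Class UIDs for CT and MR Image Storage:

```python
# The SCP compares each proposed abstract syntax (SOP Class UID)
# against the set it supports and accepts or rejects it.
SUPPORTED = {
    "1.2.840.10008.5.1.4.1.1.2",  # CT Image Storage
    "1.2.840.10008.5.1.4.1.1.4",  # MR Image Storage
}

def negotiate(proposed: list) -> dict:
    return {
        uid: "accepted" if uid in SUPPORTED else "rejected: abstract syntax not supported"
        for uid in proposed
    }

result = negotiate([
    "1.2.840.10008.5.1.4.1.1.2",       # CT Image Storage
    "1.2.840.10008.5.1.4.1.1.13.1.3",  # Breast Tomosynthesis Image Storage
])
print(result)
```

This mirrors the situation described below: an older archive that never had Breast Tomosynthesis added to its supported set will reject that presentation context while still accepting plain CT.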

These are some commonly encountered issues:
·        The image type that is proposed by the client is not supported. The error that comes back, and which you’ll see in the log file, can be either “Presentation Context,” “Abstract Syntax,” or “SOP Class” not supported. This happens most frequently if a device supports a relatively new image type, such as the Breast Tomosynthesis Mammography image, an Enhanced (multiframe) CT or MR, or one of the recently added specialties such as ophthalmology, digital pathology and others. To diagnose this, you can look at the log files at the sender. If you don’t have access to these log files, you can use a sender simulator that will allow you to see exactly what is happening. For the simulator, you can use various tools; I mostly use the DVTK toolset, which includes a simulator, or OT-DICE (see the illustration of “Abstract Syntax not supported”). The Wireshark DICOM sniffer is also a great diagnostic tool to identify such problems. Solutions to this issue are an upgrade of the software of your device, which might be a major and expensive undertaking, or using a “fallback” image type, configurable at the modality, such as the “traditional CT” SOP Class instead of the “enhanced CT.” Some archives allow a configuration change to support a new SOP Class, with the understanding that the objects can be archived but may not display properly at the PACS workstation.
[Illustration: Example of “Abstract Syntax not supported” log using the Modality Simulator (OT-DICE)]
·        A non-image file type is not supported by the server (PACS, workstation). You’ll see the same errors as listed for the image types, but they refer to a non-image file such as a Dose Report, a CAD Structured Report for mammography or chest, a Presentation State containing image manipulation and display information, ultrasound measurements, a DICOM encapsulated PDF, and many others. These objects are typically meant to be sent with the images in the same study, and it is likely that the images will make it to the destination but the corresponding non-image files will not, so you will probably get a “DICOM error.” You’ll use the same tools to diagnose and/or simulate this as for the image type issue.
·        A private image or non-image file is being proposed. Vendors frequently create private or proprietary file types. This is common for some CR vendors, ultrasound, and others. If the images are sent to a PACS from the same vendor, the PACS will most likely support and display them, but subsequent viewers very likely will not. Imagine that a private file is sent successfully from a vendor A modality to a vendor A PACS; if it is forwarded to an enterprise archive from vendor B, it might be rejected. The same tools apply to diagnose and/or simulate this issue.
·        An inappropriate SOP Class is used as an interim solution. Some vendors create a different modality and corresponding image format, e.g. CT instead of mammography, a screen save (Secondary Capture) instead of an ultrasound, or another SOP Class that was not intended to cover the specific modality. There is not much you can do about this except upgrade your PACS as soon as you can, and at that time try to convert these anomalies back to the original, intended file type.
·        CD’s are inoperable. Instead of trying to exchange these different file types over a network and being rejected by the intended receiver, you also may experience exactly the same problem when trying to read images from a DICOM CD. After diagnosing the issue, the problem might be harder to solve because you might need to contact the creator of the CD, which is likely not local, to resolve the issue.

File type issues are sometimes a little hard to diagnose, as you might not always be able or allowed to use the tools mentioned. You could change that, of course; I know of one vendor who installs a modality simulator (OT-DICE) and Wireshark on each of its modalities, just in case you might need them.

The next in this series will be on Transfer Syntax (such as compression) incompatibility issues. 

Additional information can be found in the DICOM textbook, or you can sign up for our on-line or face-to-face PACS and/or DICOM training classes.