Wednesday, June 19, 2013

What do I need to know about HL7?

During the recent SIIM conference in Dallas, the overriding message to healthcare imaging and IT professionals was that they should learn more about HL7, especially as image-enabling the EMR is becoming a very hot item. HL7 is definitely not rocket science; like any other standard, being able to “talk” HL7 is just a matter of knowing the most common terms and knowing where to look for what information.

Most healthcare imaging and IT professionals don’t really have to be experts, as there are enough of those within an institution. The challenge is typically communicating with vendors and the HL7 experts, and knowing enough to recognize issues and bring them to the surface. By the way, we are concentrating on version 2.x; version 3 is a completely different story and will be covered elsewhere.

First of all, even though the HL7 standard is very extensive and covers many domains, ranging from billing to housekeeping and from dietary to genomics, in imaging we are only concerned with a small subset. Therefore, instead of the more than 70 different messages, we use only three specific to our domain: patient admission, order, and result management.

In a typical scenario, an order for a CT might be placed in a CPOE (computerized physician order entry) system, which in many cases is part of the EMR. This triggers an order message, called an ORM. Each HL7 message has an embedded trigger event code; no fewer than 130 are defined, of which again we use only a couple. In this case the event is O01 (order). If this concerns an outpatient, the actual order will not trickle down to a modality until the patient arrives. A clerk or receptionist registers the arrival, which triggers an arrival message encoded as an ADT (Admission, Discharge, Transfer) message with trigger event code A01. This causes the requisition to become an active work item that appears on the worklist for the CT modality upon request by the CT technologist.

The worklist is created by mapping the HL7 order information into a DICOM worklist item, which is exchanged using a modality worklist query. After the selection of the patient from a list by the CT technologist, the images are created during image acquisition and the DICOM header contains the information copied from the worklist. Images are sent to a PACS system and, upon completion of the exam, they appear on a workstation for a radiologist to read. 

The order was also sent to a reporting system, which is typically a voice recognition system, so that when the radiologist selects a new patient from the list, the information needed to identify the report is available and automatically displayed on the report screen. A report is created, signed off, and exchanged by the reporting system with the EMR or radiology information system using an ORU (observation result) transaction.
Let’s look at a typical HL7 message; it is really not that hard to interpret these messages as long as you know how to read them. The segment below is the OBR from a sample order message.

OBR|||GG3234093|71099.99^RIGHT SHOULDER X-RAY^CPT4||||||||||||67890^GRAINGER^LUCY^X^^^MD^^^UAMC^L|||||||||||

An HL7 message contains so-called segments, which have a three-character segment ID and are delimited by a carriage return. An order message starts with the MSH segment, which contains general information about the message itself, such as the initiator, receiver, date/time of message, and version number (for example, 2.3.1). The next segment is the PID segment containing the patient identification; the PV1 segment holds the patient visit information; ORC carries common order information; and OBR contains the details of an exam, diagnostic study/observation, or assessment that is specific to an order or result.

Each segment has many different fields, all of them having a fixed position and separated by the control character “|”. The message content uses ASCII text encoding, so it is not hard to interpret what the OBR says. In this case, a right shoulder X-ray is scheduled for the patient identified in the PID segment. However, if we need to figure out who the referring physician is, we need to go back to the interface specification to find out which of the MDs identified in the order message is the referring, which the performing, and which the attending physician.
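To see how little machinery is needed, here is a minimal sketch of pulling fields out of the OBR segment shown above with plain Python. Field positions follow the HL7 v2.3.1 OBR definition; fields are “|”-delimited and components within a field are “^”-delimited.

```python
# Rebuild the sample OBR segment; OBR-5 through OBR-15 are empty,
# so the ordering provider lands in OBR-16.
obr = ("OBR|||GG3234093|71099.99^RIGHT SHOULDER X-RAY^CPT4"
       + "|" * 12
       + "67890^GRAINGER^LUCY^X^^^MD^^^UAMC^L")

fields = obr.split("|")          # fields[0] is the segment ID
filler_order = fields[3]         # OBR-3: filler order number
service_id = fields[4]           # OBR-4: universal service ID
ordering_md = fields[16]         # OBR-16: ordering provider

# Components within a field are separated by "^"
code, description, scheme = service_id.split("^")
print(code, description, scheme)   # 71099.99 RIGHT SHOULDER X-RAY CPT4
print(ordering_md.split("^")[1])   # GRAINGER
```

This is all an interface engine really does under the hood, albeit with configurable tables instead of hard-coded indices.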

What are the most common issues that we encounter in the imaging area related to HL7? First of all, the saying goes that “if you have seen one HL7 interface, you have seen one HL7 interface,” meaning that no two are alike. Each institution has its own customizations and many vendors make changes to their interfaces. Because of this variability, we use interface engines to map messages, and we make our HL7-to-DICOM converters (brokers) very flexible and configurable. In addition, there are inconsistencies and differences between the HL7 and DICOM encoding rules, which can cause issues. The HL7-related problems that I have experienced, and expect are common, are as follows:

  • Mapping errors from the appropriate HL7 element into the right DICOM field:
DICOM has a single primary patient identifier, although it is possible to dump all the other IDs you would like to carry along in the image header into the so-called “Other Patient IDs” attribute. HL7 has quite a few patient ID fields, i.e. a patient internal ID, external ID, Medical Record Number, and Social Security Number, and it can indicate the patient ID issuer as well. Therefore, finding the patient identifier you want to use in the imaging domain requires selecting the appropriate ID and mapping it accordingly. The same applies to the many physician names in the order: in the DICOM header we typically carry only the referring physician forward, so we need to make sure we map the correct field. Mapping is typically done at the broker, but could be done at the interface engine level as well. Mapping errors often occur as a result of software changes, or when upgrades impact these mappings.
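A hypothetical broker-style sketch of this selection step: pick the Medical Record Number out of a repeating patient ID field and map it to the DICOM Patient ID (0010,0020), carrying the rest along in Other Patient IDs (0010,1000). The sample values and issuer names are invented for illustration.

```python
# PID-3 style repeating field: repetitions separated by "~",
# components by "^". CX components (simplified):
#   ID ^ check digit ^ check scheme ^ assigning authority ^ ID type
pid3 = "123456^^^UAMC^MR~999-55-1234^^^SSA^SS"

by_type = {}
for repetition in pid3.split("~"):
    comps = repetition.split("^")
    by_type[comps[4]] = (comps[0], comps[3])   # type -> (ID, issuer)

dicom_patient_id = by_type["MR"][0]            # Medical Record Number
other_patient_ids = [v[0] for t, v in by_type.items() if t != "MR"]
print(dicom_patient_id, other_patient_ids)
```

The point is that the choice of which repetition becomes the primary DICOM ID is a configuration decision, not something the standard makes for you.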

  • Mapping errors from the appropriate HL7 sub-component into the right DICOM field:

The procedure description that is contained in the worklist might have two components: the full description and a mnemonic. Having the correct information in the worklist, and eventually in the DICOM header, is critical as it often determines the appropriate image hanging protocol, i.e. the way the images are displayed on the workstation. In addition, it could impact the retrieval of the appropriate prior studies to be pre-fetched for comparison. I have seen mapping errors where the DICOM procedure code contained the full description instead of the abbreviated one, which is a simple error to fix in the broker.
  • Missing granularity:

One of the identifiers that is used in a worklist query at a modality is the so-called “modality field.” A typical value could be “CT,” “MR,” or “CR.” The broker determines, based on the procedure code, what modality to use in the worklist; for example, it knows that a “CT head with and without contrast” is performed at a CT, and a “right shoulder X-ray” is performed at a CR unit. However, to make sure that the worklist in the ICU only shows the orders for the portable CR unit on the ICU floors, or to determine that certain patients are to be imaged at the outpatient CT instead of the one in the ER, more granularity is needed in the HL7 messages, such as department, in-/outpatient status, etc., to determine the station name where the exam is to be performed. Filtering the worklist takes a little research to determine this mapping by looking at what information is available in the HL7 message. Lacking this information could result in far too many orders on the worklist, which is prone to operator error, as a technologist could potentially select the wrong patient.
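A hypothetical routing table illustrates the extra granularity the broker needs: modality type alone is not enough, so department (or patient class) from the HL7 message is used to pick the scheduled station. All station and department names below are invented for illustration.

```python
# (modality, department) -> DICOM scheduled station name
ROUTING = {
    ("CR", "ICU"): "CR_PORTABLE_ICU",   # portable unit on the ICU floors
    ("CT", "OUT"): "CT_OUTPATIENT",     # outpatient CT, not the ER scanner
    ("CT", "ER"):  "CT_ER",
}

def scheduled_station(modality: str, department: str) -> str:
    # Fall back to a default station when no specific rule applies
    return ROUTING.get((modality, department), modality + "_MAIN")

print(scheduled_station("CR", "ICU"))   # CR_PORTABLE_ICU
print(scheduled_station("MR", "ICU"))   # MR_MAIN (no rule defined)
```

Without the department field in the HL7 feed, every CR order would end up on every CR worklist, which is exactly the operator-error scenario described above.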
  • Invalid control characters:

HL7 and DICOM use different control characters. For example, a patient ID in HL7 could contain several components separated by the character “\”, e.g. it could be 345\56\6874. In DICOM, the “\” character is used to separate the different values of a multi-valued attribute, and therefore cannot be used as part of an identifier or name attribute. If the DICOM worklist provider maps these fields with their contents “as-is,” without filtering those out, a problem occurs when they are parsed on the PACS side. One solution I have heard of is to replace all “\” characters with “/”, which is legal.
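The workaround mentioned above is a one-line filter at the broker, sketched here:

```python
def sanitize_for_dicom(value: str) -> str:
    """Replace the DICOM value delimiter "\\" with "/" before mapping
    an HL7 field into a DICOM attribute."""
    return value.replace("\\", "/")

print(sanitize_for_dicom("345\\56\\6874"))   # -> 345/56/6874
```

A production broker would apply such a filter to every mapped text field, not just the patient ID.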
  • Invalid lengths:

DICOM has strict definitions of the maximum length of its attributes. In HL7, the length limitation is not enforced at the same attribute level; therefore an HL7 field can be mapped into a DICOM attribute while exceeding the length of that attribute. A PACS might or might not complain about a field exceeding the maximum length, and this inconsistent behavior is a problem by itself. I have seen problems appear years after the fact, when images are migrated to another PACS system and the second PACS complains about invalid lengths that never caused any problems with the first PACS.
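A defensive sketch of handling this at mapping time: truncate each value to the DICOM maximum for its value representation (VR) before building the worklist item. The limits below are the standard ones from DICOM PS3.5 (LO = 64, SH = 16 characters); a real broker would cover every VR it maps, and would ideally log each truncation.

```python
MAX_LEN = {"LO": 64, "SH": 16}   # Long String, Short String

def fit(value: str, vr: str) -> str:
    """Truncate a mapped HL7 value to the DICOM maximum for its VR."""
    limit = MAX_LEN[vr]
    return value[:limit] if len(value) > limit else value

long_description = ("CT ABDOMEN AND PELVIS WITH AND WITHOUT IV CONTRAST, "
                    "THIN SLICE RECONSTRUCTION")
print(len(fit(long_description, "LO")))   # 64
```

Truncating at the source is what prevents the migration surprises described above, since every stored header is then guaranteed to be valid for any standards-conformant PACS.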

In conclusion, as an imaging professional you will have to know about HL7, because this information will eventually end up in image headers and impact the workflow, efficiency, and data integrity of the PACS system. You should be able to sit down with an HL7 specialist, who can be internal or from the vendor side, and troubleshoot issues due to incorrect mapping, length, and encoding problems. HL7 is not rocket science; there are good references (see the HL7 textbook), e-learning classes, and hands-on training classes available (see schedule). And if you really want to be an expert, there is even an opportunity to get certified as an HL7 interface specialist (see study guide).

Tuesday, June 11, 2013

Top ten what’s in and what’s out at SIIM 2013

(Photo: EchoPixel showing its revolutionary 5-D concept; note the glasses and pen, which can be used to "pick up" any anatomic structure.)
The Society for Imaging Informatics in Medicine (SIIM) held its annual meeting June 6-9 in Grapevine, TX, which is located in the Dallas metroplex. Here are my findings of the latest and greatest, based on what I saw and heard; some of it I literally heard through the grapevine.

1. 5-D is in, 3-D is out:
Unlike radiologists, who are trained to think in multiple dimensions by merely looking at 2-D images, many physicians, especially surgeons, sometimes have a problem following certain anatomic structures in 2-D, such as the many twists and turns of the small intestines when looking for a blockage. Radiologists in general are able to visualize the information pretty well by mentally integrating the information at hand, if nothing else because they have been trained for many years to do so. Most other physicians are more 3-D oriented, however, and find an extra dimension very helpful.

A startup company at SIIM created yet another dimension; they call it “true 3-D” and dub the traditional 3-D application “2.5-D.” However, I believe it is truly a new dimension, i.e. a 5th dimension. It works as follows: using a custom-built monitor, images are stacked and 3-D presentations are viewed and manipulated using a pen held in front of the screen. In addition, two sensors track the position of the 3-D glasses, so the user can move his or her head to get different views. Lastly, the 5th dimension comes into play because one can literally pick up a specific organ and lift it out of its anatomic surroundings. It can be used, for example, to see exactly where certain arteries are attached. Early studies have shown a significant efficiency improvement in reading certain studies. This is not going to be your day-to-day reading station, but for niche applications it could become a major asset.

2. Personalized medicine is in, evidence based is out:
As of today, when you have a cold you might see a physician and be prescribed an antibiotic that seems to work in most cases; but if you are one of those exceptions who is allergic to that specific medication, you have to go back and forth a few times until you find what works best for you. The course of action for a particular diagnosis is determined by evidence of what works and what does not work for a large group of similar patients. Currently, Electronic Health Records, supported by sophisticated decision support tools, rely on this evidence-based research to suggest the best course of action to the physician. Still, in many cases the prescribed treatment can be hit and miss, which might be OK when treating a cold, but can be critical when treating a patient with a potentially life-threatening disease such as cancer.

Here is where personalized medicine comes in: your DNA can be matched with the DNA from similar cases, and it has been shown that based on that information a much more effective course of action can be taken. As Dr. Siegel from the VA in Baltimore pointed out in his presentation, in the near future imaging professionals will have to be prepared to review and consult genomic data in conjunction with the imaging that was performed, and use that to determine a diagnosis and suggested course of action. The VA has information on about 22 million patients in its database as of today, one million of whom have genomic data recorded, which can be applied to these types of situations. This is still very much in the beginning phase, as it takes supercomputer power to crunch all of that data; it will also have a big impact on storage requirements, as this information is quite large (it could be 50 GB, which is about 100 times the size of a typical CT study). However, in the near future we will be able to consult and use information from this analysis to make a better diagnosis.

3. The UNIviewer is in, zero footprint is out:
The clinician workstation has undergone several transformations. Initially, it was a thick client running mostly on a Windows® platform. That was replaced by browser-based technology and thin clients, and as of last year the buzzword has been zero-footprint. The latter has the advantage that, if implemented well, there is no trace left behind after a user logs off or closes the window, which is preferable from a patient security perspective. In addition to changing the underlying software display technology, the source of the images has shifted from the main PACS server, to a dedicated web server, and now the Vendor Neutral Archive or VNA. As a “true” VNA (at least in my definition) is also able to manage non-radiology and non-DICOM images, such as MPEGs, JPEGs, documents, and other image types, the viewer should be able to accommodate these types of objects as well, hence the term “UNI-viewer.” Another requirement is that this viewer can either be launched from or embedded in an EMR application, to allow a physician to have a patient-centric rather than a departmental view of the patient information. I am sure there will be another round of confusion about what a true UNI-viewer can or will do. For example, will it be able to render all types of DICOM image objects as well, such as the enhanced MRI and CT and the new digital breast tomosynthesis images, and show presentation states (I sure hope so) as well as measurements from a structured report? The problem is where to draw the boundary, i.e. will it also display CAD marks, do 3-D MIP, etc.? The bottom line is that UNI-viewer is just another term, and to find out what a device really does, one needs to pay close attention to the technical specifications, especially the DICOM conformance statement.

4. RESTful is in and MINT is out:
There is no disagreement that the DICOM protocol is complex and not very suitable for exchanging information over the internet. The first attempt to fix this was WADO (Web Access to DICOM Objects), a protocol for retrieving DICOM objects over the web dating back several years. This did not quite meet the requirements, and as a result MINT (Medical Imaging Network Transport) was defined and actually implemented by a few vendors, but it never became very popular. The reaction from the standards community is a set of so-called RESTful (Representational State Transfer) services, which maintain the DICOM encoding, in which billions of images exist today, while using the HTTP protocol for exchange. These services include WADO-RS, the RESTful version of WADO, to retrieve images; STOW-RS (STore Over the Web) to store objects; and QIDO-RS (Query based on ID for DICOM Objects) to query for them. Some of these standards are still being worked on and are not yet part of IHE, but there is no question that these are promising services that will potentially allow image-enabling the EMR and provide a means for external access to images.
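To give a feel for how simple these services are compared to classic DICOM, here is a hedged sketch of what a QIDO-RS study search looks like: an HTTP GET against the /studies resource, with DICOM attributes as ordinary query parameters. The base URL is hypothetical; building (rather than sending) the request keeps the example self-contained.

```python
from urllib.parse import urlencode

def qido_study_search(base_url: str, patient_id: str, modality: str) -> str:
    """Build a QIDO-RS URL that searches for studies by patient and modality."""
    params = {"PatientID": patient_id, "ModalitiesInStudy": modality}
    return f"{base_url}/studies?{urlencode(params)}"

url = qido_study_search("https://pacs.example.org/dicomweb", "123456", "CT")
print(url)
```

Any generic HTTP client can issue such a request, which is exactly why these services lower the barrier for image-enabling the EMR.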

5. EMR is in, PACS is out:
The patient-centric approach is clearly driving the need for multiple image sources in addition to other clinical data. The departmental radiology or even cardiology PACS architecture as we have known it for the past 15-20 years is disappearing. Images are being archived at an enterprise or regional level; access is through UNI-viewers and image-enabled EMRs, sourced by an enterprise-level Vendor Neutral Archive. The support role is also shifting to enterprise-level IT rather than the department (i.e. cardiology or radiology). A poll during one of the SIIM sessions showed that, as of today, for about 50 percent of institutions the support role sits with the enterprise. To support this architecture, healthcare imaging and IT professionals need to learn about web services, HL7, and the characteristics and workflows of other modalities and departments, and have a thorough understanding of the overall architecture and workflow.

6. CPOE is in, RIS is out:
The traditional Radiology Information System (RIS) companies are threatened by the EMR’s capability to provide centralized order entry (CPOE). The intelligence and decision support rules that determine whether performing yet another CT is justified reside at the EMR level anyway, so it makes sense to have this functionality there. As Dr. Siegel pointed out, a radiologist cannot know whether the 21st CT scan for one person is justified while the 9th for another might not be, without knowing the patient’s background, history, and diseases. There is also no question that a RIS module that is part of an EMR has a tighter integration with the EMR than a separate RIS, which is typically integrated using HL7 transactions. In addition, EMR companies often throw in their radiology package for free, which is somewhat deceiving as the actual implementation cost can be quite high. The result is that stand-alone RIS systems are disappearing, which is a mixed blessing. One issue is that the capability to provide standard protocols and procedure descriptions is not always as sophisticated in the EMR as it has been in RIS systems, given their years of experience. As Dr. David Avrin from UCSF commented, it really does not make sense to build all these tables for each EMR installation; rather, we should try to find some common ground based on, for example, the RadLex terminology.

7. Dose reporting is in, multiple CT exams are out:
California has required dose tracking since 2012, and the state of Texas just followed suit, meaning that two of the most populous states already require dose tracking, and several other states will most likely follow. The how, what, and where of dose recording is still not quite obvious; standards are defined in DICOM and in corresponding IHE profiles, and there are several vendors on the market that can connect to the modalities and/or PACS and collect this information, either directly or by “screen-scraping” the CT dose overview screens. However, it is not just about dose reporting; the intention is to use this information to characterize usage, benchmark it, and provide decision support rules about the appropriateness of X-ray exams. As one of the speakers mentioned: the technology is easy, but dose management is about establishing a cross-functional committee, determining how and where to collect the information, and deciding how to act on it. Only then can a doctor decide that another CT scan should be performed with a more dose-efficient protocol, or be replaced by another imaging alternative.

8. SWIM is in, TRIP is out:
A major SIIM activity has been the discussion of “Transforming the Radiological Interpretation Process” (TRIP). However, in my opinion, these discussions did not seem to go anywhere except for some crystal ball reading and discussions on where we might be going in the future. The good news is that there now is a real-life problem being addressed as part of this initiative, called the SIIM Workflow in Medicine (SWIM) project. This addresses the problem of using different terminologies for workflow states with different semantic meanings. Some of these workflow states are easy to interpret, for example the state of a report being “signed,” but what does it mean that a study has been “verified”? And the term “verified,” or its opposite “unverified,” is not even used by every vendor, and might have a different semantic meaning where it is. This standardization effort would also allow information systems such as a PACS, RIS, CPOE, and others to act upon these statuses. I have witnessed a situation where the appropriate status from a pharmacy system allowed a patient to check in at a kiosk automatically to see if a prescription was ready, but the RIS, which did not support the same status, would not communicate it in the corresponding HL7 message, and therefore the information could not be used to check in at the department automatically. There are about 140 events defined, as well as 10 KPIs (Key Performance Indicators, e.g. report turn-around time). This information will most likely end up in RadLex and assist the implementation of workflow management software in the not too distant future. We can then truly implement what Brad Erickson of the Mayo Clinic calls a “HEWEY,” or HL7 Enabled Workflow Engine.

9. Pictures are in, DICOM is out:
As one of the speakers mentioned, “everyone has a digital camera,” which means that the proliferation of non-DICOM images in hospitals is getting out of hand. Dealing with all of these JPEGs reminds me of the early years of DICOM standardization: there are issues with color consistency, pixel sizes, etc. The workflow is quite different for these applications; for example, if a dermatologist wants to take a picture, or an ER physician photographs a wound, they just take a camera and shoot, without “traditional” ordering or scheduling. There are not only static pictures, but also video clips acquired for a variety of reasons, for example a video clip of a person’s gait before and after surgery. The discovery process of where images and videos are created is just beginning, and making them part of the patient record will take some thinking with regard to proper identification. Standardization of the acquisition protocol is important: if a picture of a skin lesion is made at the first visit with an iPhone, at the second visit with a high-resolution camera, and the third time with a tablet computer using a flash, one can imagine that the color consistency, size, and presentation, in addition to the encoding (JPEG, compressed JPEG, TIFF), could be totally different. Standard settings might work in an office but maybe not in an ER. The other concern with these pictures is privacy: it is one thing to include a chest X-ray showing a fractured rib in an EMR, but it is more sensitive to show the result of a physical abuse case with pictures of the victim/patient. Policies and procedures for the acquisition, identification, and archiving of these pictures have to be defined and implemented.

10. HL7 geeks are in, DICOM gurus are out:
The PACS webserver clients and clinical workstations are going to be replaced by image-enabled EMRs. In the case of a PACS system, one needs to worry about only a few interfaces (apart from the modalities and workstations, of course): typically a RIS, maybe a worklist provider/broker, and a voice recognition system. An EMR, however, typically has dozens of interfaces; as an example, the one at Johns Hopkins in Baltimore has 86! DICOM Structured Report information, such as ultrasound measurements, might end up in a document such as a CDA, defined by HL7 version 3. The overriding message at the EMR integration sessions was that the integration effort was grossly underestimated, and that support personnel had better become aware of the intricacies of HL7 really fast. As was stated: if you have seen one HL7 interface, you have seen one HL7 interface, meaning that a fair amount of customization and mapping is done in interface engines and gateways. Imaging and information professionals will increasingly have to deal with the HL7 protocol, as orders will now be placed in a computerized physician order entry (CPOE) system, typically embedded in the EMR, while the reports are going to end up in the EMR as well.

In addition to the top ten mentioned above, I was not sure about the following, whether it is “in” or “out”:

·         SOA is in, maybe?
Service Oriented Architecture has been touted as the best thing since sliced bread for the past several years, in particular by Paul Chang of the University of Chicago. And yes, it is a great platform for integrating multiple applications, and if you are in a position to have a large IT development team, you can do the customization and integration yourself using the respective APIs. However, if you are a small or mid-size institution, you are more interested in plug-and-play capability based on standards such as HL7 and DICOM than in having to support a tightly coupled, machine-level interface. So yes, it is great, but considering the number of institutions able to make use of this capability, it is really more of a toy for those with significant IT resources than a mainstream required feature.

·         XDS-I is in, maybe?
Almost every imaging vendor claims to support XDS-I (Cross-Enterprise Document and Image Sharing) in the PACS and/or archive. For most buyers this has become a standard “check-mark,” similar to being “DICOM compliant.” The actual number of implementations, however, has been very limited, at least in the US. Apparently, of the estimated 300 Health Information Exchanges (HIEs) in the US, fewer than 2 percent support the exchange of images. How images are going to be exchanged is still a big question. Private companies, as well as most PACS vendors, are providing cloud services for image exchange. Some of these exchanges are merely an intermediary; some also provide data storage. The “Image Share” initiative by RSNA, which was implemented by five academic institutions based on a grant of several million dollars, enrolled a meager 3,000 patient participants. Except for proprietary solutions, there does not seem to be an alternative to XDS-I, but unless an alternative comes up, this might or might not become the way we are going to exchange images. To cloud or not to cloud, that is the question; with or without XDS-I, we will have to see.

And last but not least, is SIIM itself out or in?
I have attended most SIIM conferences, ever since it started in the 1990’s. The objective of SIIM is to bring academics, vendors, radiologists, technologists, and PACS administrators together to discuss innovations and issues, and to provide a platform for continuing education and the ability to “kick the tires” of the imaging products available. These conferences are important for learning about the issues being faced in the medical imaging community.
However, in contrast with other professional organizations such as HIMSS and RSNA, it is interesting to see that attendance at the meeting is declining. I have heard from several attendees that there was not much “news” to hear or see, and that vendors are not offering much innovation anymore; they spend more effort on the EMR rather than on imaging, especially the PACS area. However, I would throw this argument back to the user community, and especially SIIM: think outside the box, take off the radiology cap, think about enterprise workflow, step out of the department to see what is done at the institution level, and challenge the vendor community to come up with innovative solutions by presenting problems to be solved.

With regard to the SIIM program, instead of dedicating program tracks to entrepreneurship, career development, and reinventing the radiologist’s role, it might be more beneficial to provide the training, education, and tools that allow these role changes to take place. This would require bringing in people from different specialties, such as dermatologists, cardiologists, and pathologists, as well as from different domains such as IT, to replace or at a minimum augment the current faculty. To be honest, by now I know more about the University of Chicago, the Cleveland Clinic, UCSF, and other hospitals than I probably care to, and would rather hear how Texas Health Resources, HCA, my home-town Denton Regional Medical Center, or other non-academic institutions take care of their problems, as these are the places where most of the audience works.

A good example of a session where real-life problems were covered was the satellite meeting about Digital Breast Tomosynthesis (DBT). In this in-depth discussion of the issues around this new technology (and there are quite a few major ones), a panel of users, consultants, vendors, and the FDA presented major problems openly and frankly and, in the spirit of SIIM, presented solutions and roadmaps to resolution. The DBT issues range from proprietary encoding of the DBT image to choking infrastructure, due to the fact that the studies are a factor of 20 larger than conventional 4-view digital mammography images. In addition, many of the PACS vendors cannot accommodate these new objects, or, in the best case, only archive them and do not display them. Let’s hope that the SIIM leadership will make this type of interactive session part of future programs, so that we can continue to have a thriving community where imaging and informatics professionals can gather.

So, in conclusion, is SIIM out or in? In my opinion it is halfway in the door, and it will totally depend on how its leadership and program committee take up the challenge to provide true learning opportunities about real-life issues, not only in academic institutions but also in our day-to-day regional and smaller healthcare institutions. In my opinion it definitely requires a renewed faculty, with people who not only think outside the box but also come from outside the “radiology box.” Only then will the SIIM conference become not just a place to get continuing education credits, but foremost a place to exchange ideas, brainstorm solutions, and go back energized to implement what we learned.

Monday, June 3, 2013

Technical issues regarding radiation dose management and recording.

California began requiring the recording of X-ray radiation dose for patients in June 2012, and other states and countries (notably Texas) are expected to follow. This poses several challenges with regard to the management and recording of this information. In this write-up I will concentrate only on the technical aspects, as others have expressed concerns about the impact on radiologists and patients (see the article by Siegel et al. in the May 2013 issue of ACR).

Dose reporting became a requirement after several incidents were reported a few years back of severe over-dosing of hundreds of patients, mainly because of incorrect technique selections by technologists performing CT scans. As with many incidents, it seems there always has to be a major issue or near-catastrophe and corresponding publicity before the industry and federal agencies take action, and that is also what happened in this case. In parallel, there was also an initiative for pediatric imaging, called “Image Gently,” aimed at raising awareness of imaging techniques that balance image quality with reduced exposure to potentially harmful X-ray radiation.

The industry has reacted by redesigning its devices such that, in the case of a CT for example, the dose is no longer a single, big blast of radiation independent of the body part and patient characteristics; instead, the dose is modulated based on a scout view, which allows a minimal dose for, say, a head, while increasing the dose where needed, for example in the pelvis. The federal government has not taken any immediate action, but states have stepped up to make dose registration mandatory.

The problem with making dose registration mandatory is that devices were not equipped to provide dose information in a computer-readable format, and even though CT systems are being shipped today with that functionality, there is still a huge installed base that requires upgrading. One could argue that the regulation was a little premature in not taking the technical capabilities into account; however, there are “poor-man’s” solutions, described below, to make this work.

The most pragmatic but labor-intensive and inefficient way is to use a spreadsheet and manually record this information. A better way, of course, is to exchange this information electronically. There are four different solutions for recording radiation dose, depending on the capabilities of the device:

1.      Get the dose information from the image header. This only works for a limited number of modalities, in particular digital mammography, where it has been common to record this information as part of the header. It requires an application that “mines” these headers and records the data accordingly.

2.       Use the Modality Performed Procedure Step (MPPS). The MPPS has additional parameters to record the dose of a performed procedure, and this has been used in the past mainly for cardiology exams, as some cardiology information systems (CIS) have the capability to record this information. The issue with this solution is that there is no one-to-one relationship between the information recorded in an MPPS and the radiation events: the MPPS records only the images that were saved, so it does not account for the dose from images that might have been deleted at the modality, or from exams such as fluoroscopy where no images are created at all.

3.       Use the dose recording overview screens. A CT scanner typically records the dose on a separate screen, which is saved as part of the study and exchanged as a separate, so-called secondary capture image, often as an additional series identified with the modality “OT,” which stands for “Other.” The problem is that this method requires either a human to interpret the text embedded in this screen-saved image, or an application that uses optical character recognition (OCR), sometimes referred to as a “screen scraper,” to get the data into an electronic format.

4.       Use the newly defined DICOM structured report containing the dose information. This is the preferred way to record the information, as it carries all of the information about the technique used and provides a more accurate recording, which can include multiple radiation events as well as the accumulated dose. There is also a corresponding IHE profile, Radiation Exposure Monitoring (REM), which was demonstrated by several vendors at the past two RSNA meetings.
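To give a feel for option 4: a DICOM dose structured report is a tree of content items, each with a coded concept name and, for measurements, a numeric value and units. The sketch below, in Python, walks such a tree to pull out the dose values of interest. In practice the file would be parsed with a DICOM toolkit such as pydicom; here plain dicts stand in for the parsed content items so the traversal logic is clear, and the tree and its values are illustrative only.

```python
# Minimal sketch of extracting dose measurements from a Radiation Dose
# Structured Report (RDSR) content tree. Plain dicts stand in for parsed
# DICOM content items; concept names follow the CT dose templates used
# by the IHE REM profile.

def extract_dose_values(item, wanted, found=None):
    """Recursively collect (name, value, units) for concepts in `wanted`."""
    if found is None:
        found = []
    name = item.get("concept_name")
    if name in wanted and "value" in item:
        found.append((name, item["value"], item.get("units")))
    for child in item.get("children", []):
        extract_dose_values(child, wanted, found)
    return found

# A heavily simplified RDSR content tree (values are made up).
rdsr = {
    "concept_name": "X-Ray Radiation Dose Report",
    "children": [
        {"concept_name": "CT Accumulated Dose Data",
         "children": [
             {"concept_name": "CT Dose Length Product Total",
              "value": 612.5, "units": "mGy.cm"},
         ]},
        {"concept_name": "CT Acquisition",
         "children": [
             {"concept_name": "Mean CTDIvol",
              "value": 12.3, "units": "mGy"},
         ]},
    ],
}

doses = extract_dose_values(
    rdsr, {"CT Dose Length Product Total", "Mean CTDIvol"})
for name, value, units in doses:
    print(f"{name}: {value} {units}")
```

The point is that, unlike a screen-scraped overview image, the structured report is unambiguous and machine-readable: each radiation event and accumulated total is a discrete, coded item that software can harvest directly into a dose registry.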

Assuming you have implemented a way to record the information, whether manually, via the image header, the MPPS, screen scraping or, best case, DICOM dose structured reports, the next question is how to manage this information. There does not seem to be a consensus (yet) as to where to keep it. There are several options, ranging from the PACS, the RIS, or a separate dose recording and management system at a department or hospital level, to the Electronic Medical Record (EMR) or the Personal Health Record (PHR). It seems to make sense to keep the radiation dose information in a PHR, as there can be many irradiation events at different institutions, especially when a person has a serious condition that involves treatment by many physicians and specialists in different locations. At a minimum, I would expect this information to be available in an EMR, as EMRs in the near future are going to be exchanging clinical information anyway. However, both PHR and EMR dose recording solutions might be years away, so I think that a temporary, stand-alone solution might be the best compromise. That is not to say that a PACS-based solution might not work, assuming you have an enterprise PACS that covers all departments creating X-ray images (radiology, cardiology, dentistry, etc.). I see those as temporary, however, as we transition to a world where everyone has an EMR/PHR. Until then, a stand-alone device also might be more suitable for establishing a connection with the outside world for reporting on dose usage and protocols used.

In conclusion, despite the initial backlash from the clinical community, dose reporting is here to stay. Several states are taking initiatives, and globally there is a lot of attention being paid to this as well, witnessed by the fact that Kuwait just issued an RFP for a national dose reporting management system.  If you have not already been forced to implement a solution (i.e. you are not based in CA or TX), you should be thinking about it. Make sure you upgrade your X-ray devices, starting with the CTs, with dose structured report capabilities. Also, plan for managing this information: where it will be kept, how it will be reported, who will review it against benchmarked data, etc. Proper planning will prevent inefficient, temporary solutions that require extra work. And remember to keep the ultimate picture in mind, i.e. where the information ultimately belongs and should be managed, which is most definitely not in the PACS or RIS but in the EMR/PHR.