Monday, January 30, 2017

IHE 2017: It’s all about device connectivity.

400 engineers from all over the world testing their healthcare applications
This is the 19th year for the IHE North American Connectathon, which has been held in Cleveland, Ohio for the past several years. This event brings together 60-plus healthcare imaging and IT vendors for a week to work collaboratively on connecting their systems. There were 115 systems this year, each one prepared to test interconnectivity. The goal is to reduce on-site testing and integration and ultimately advance health IT and patient care.

There were 400 engineers in attendance, and in addition there were 65 so-called monitors who check the connectivity results. The general connectathon rule is that a vendor has to show that it can communicate with at least 3 other systems from different vendors to claim that it passed a specific test. The systems under test on the IHE NA Connectathon floor represent most of the top health IT vendors, who provide systems such as EMRs, patient care devices, imaging modalities, image archiving systems (PACS), and review stations. Except for small devices such as monitoring systems and infusion pumps, which are easy to carry, the engineers bring simulators running the same software that creates the MR, CT, nuclear medicine or mammography images, and that simulate the big archiving and communication systems hosting the images and results.

This big event poses the question: how relevant is the support of these profiles, which are based on standards such as DICOM, HL7 and others, and what does it mean to support them? As an example, last year the emphasis at the connectathon was on testing the XDS (Cross-Enterprise Document and Imaging Sharing) profiles, which have been very slow to be deployed this past year. Yes, most systems now support these profiles, allowing information to be shared, identified, and managed between different enterprises; however, with the exception of implementations in the UK, there have been relatively few implementations in other countries, including the USA.

In the US this might be caused by the demise of several public Health Information Exchanges (HIEs), due to a lack of funding and failing business models, which basically took away the infrastructure for exchanging information using these profiles. Another barrier is the widespread implementation of proprietary interfaces for exchanging images with the many cloud providers.

A major trend at the 2017 connectathon has been the emergence of patient care devices and web services, i.e. FHIR and DICOMweb. The adoption of these standards might be much faster than of the ones dealing with image and information sharing, as there is a major benefit to be achieved in patient safety and efficiency using intelligent devices. As an example, anyone who has recently been hospitalized and watched what a nurse does every time he or she changes the supply for an infusion pump can attest: all those changes are entered into the patient record manually. An intelligent infusion pump using IHE will be able to update the patient record automatically, producing major savings in efficiency and a potential reduction of errors, resulting in better patient safety.

60 or so monitors evaluate the test results for pass/fail
Another promising area is the upload of images from mobile devices into a PACS or electronic record. DICOM has added web services capability to its standard, allowing a phone app to take a picture, for example for wound care or dermatology, and upload it securely using the widely available web services interfaces. This could become a “killer app” that drives widespread implementation.
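To make the web services idea concrete, here is a minimal sketch of what the body of a DICOMweb STOW-RS upload looks like: each DICOM instance is a part of a multipart/related payload with Content-Type application/dicom. This is a hand-rolled illustration of the wire format only; the endpoint URL is hypothetical, the DICOM bytes are a placeholder, and a real app would use a DICOM toolkit and an HTTP client.

```python
import uuid

def build_stow_body(dicom_blob: bytes, boundary: str):
    """Assemble the multipart/related body for a DICOMweb STOW-RS POST;
    each DICOM instance goes in its own part with
    Content-Type: application/dicom (per DICOM PS3.18)."""
    part = (f"--{boundary}\r\n"
            "Content-Type: application/dicom\r\n\r\n").encode("ascii")
    part += dicom_blob + b"\r\n"
    body = part + f"--{boundary}--\r\n".encode("ascii")
    content_type = f'multipart/related; type="application/dicom"; boundary={boundary}'
    return body, content_type

# The body would then be POSTed to the server's /studies endpoint
# (the URL below is made up for illustration):
#   POST https://pacs.example.com/dicomweb/studies
boundary = uuid.uuid4().hex
body, ctype = build_stow_body(b"DICM...", boundary)  # placeholder bytes
```

The appeal for a phone app is exactly this: the whole exchange is plain HTTP with standard headers, so any platform with an HTTP stack can talk to a PACS.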

Overall, the connectathon was well attended, albeit with lower attendance than last year (I estimate 10 to 20 percent lower). It will be interesting to see next year whether this is going to be a trend. This event is a major investment in time and effort: preparation for it, including the creation of tools for simulators and test sets and performing pre-tests, is an order of magnitude bigger than the actual testing week. It appears that several companies are skipping a year and attending every other year instead.

For the smaller vendors especially, it is still a major opportunity to test their applications against many of the big players. It also provides good insight into how robust some of the implementations are, as several of them had major issues when trying to connect. So, overall a successful event; from my perspective it is worth attending again next year.

Thursday, January 12, 2017

Why would I use a medical grade monitor instead of a commercial one to view X-rays?

I get this question all the time: why should I pay thousands of dollars for a medical grade monitor to diagnose digital X-rays (CR/DR) if I can buy a very nice looking commercial off-the-shelf (COTS) monitor at the local computer store? I have boiled this argument down to six important reasons, which are (hopefully) simple to understand and allow you to convey this to your radiologists or administrators who have little technical or physics background.

1. A commercial grade monitor does not show all of the critical anatomical information. As the name implies, COTS monitors are intended for office automation, to display documents so that they appear like a printed page. Performance attributes are therefore weighted heavily toward being as bright as possible so that text is easily resolved with minimal eyestrain. Commercial displays consequently reach maximum luminance well before the graphics card input reaches its maximum value. Remember that a typical graphics card can deliver 256 different input values, each representing a distinct piece of valuable diagnostic information. These monitors have been observed to max out at an input value as low as 200, which means values 201 to 255 are all mapped to the same luminance value: maximum. In other words, about 20 percent of the data is clipped or simply eliminated.

By contrast, medical grade monitors are calibrated to map each individual pixel value into a luminance difference you can detect, rather than following the natural response of the graphics card output. Unfortunately, it is normal for the natural COTS monitor response (uncorrected to DICOM) to yield the same measured luminance for multiple sequential input values, i.e., a flat spot in the response curve. These flat spots are especially obvious in the low range, i.e. the first 160 of the 256 values.

What is the impact of a flat response? Take, for example, a commercial grade monitor on which the pixel values 101, 102, 103, 104, and 105 are all mapped into a single luminance value on the screen. That means that a subtle nodule that is identified by a difference in value between 102 and 105 will disappear, as there is no distinction between these values on the monitor. And since the better part of the clinical information from the imaging modalities lies in the lower 50 percent of the luminance range, the flat spots compromise the ability to resolve pixels exactly where it matters most.
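The combined effect of clipping and flat spots can be shown with a toy simulation. The response curve below is invented for illustration (saturation at input 200 as mentioned above, and a uniform 10-value flat spot); real uncalibrated displays have irregular flat spots, but the mechanism is the same.

```python
def cots_response(value: int) -> float:
    """Toy model of an uncalibrated COTS monitor response; the numbers
    are invented for illustration. Luminance saturates at input 200, and
    the range below that is quantized so coarsely that runs of adjacent
    input values share one luminance level ("flat spots")."""
    clipped = min(value, 200)          # inputs 201-255 all map to maximum
    return (clipped // 10) * 10 / 200  # flat spots: 10 inputs per level

# The nodule example from the text: values 102 and 105 land on the same
# luminance level, so the contrast between them disappears.
print(cots_response(102) == cots_response(105))  # → True

# Count the one-step transitions that produce no luminance change at all.
lost = sum(1 for v in range(1, 256)
           if cots_response(v) == cots_response(v - 1))
print(f"{lost} of 255 transitions are invisible")
```

Any two pixel values that land on the same step of such a staircase are clinically indistinguishable, no matter how good the rest of the viewing chain is.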

In conclusion, the potential to miss critical diagnostic information, both at high luminance and due to flat spots in the response, should be the number one reason not to even consider a commercial grade monitor. Therefore, the first requirement for medical monitors is to insist on a monitor that is calibrated according to the DICOM standard, which truly maps each of the different pixel values into a luminance value on the screen that the human visual system can detect as noticeably different. It is best to have this calibration done at manufacturing to obtain an optimal mapping of the three RGB channels onto the DICOM-compliant curve.
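The DICOM calibration referred to here is the Grayscale Standard Display Function (GSDF) of DICOM PS3.14, which maps an index of “just noticeable differences” (JNDs) to luminance. A sketch of the published formula:

```python
import math

def gsdf_luminance(j: float) -> float:
    """Luminance in cd/m2 for JND ("just noticeable difference") index j
    (valid range 1..1023), per the Grayscale Standard Display Function
    of DICOM PS3.14: log10(L) is a rational polynomial in ln(j)."""
    x = math.log(j)
    num = (-1.3011877 + 8.0242636e-2 * x + 1.3646699e-1 * x**2
           - 2.5468404e-2 * x**3 + 1.3635334e-3 * x**4)
    den = (1.0 - 2.5840191e-2 * x - 1.0320229e-1 * x**2
           + 2.8745620e-2 * x**3 - 3.1978977e-3 * x**4
           + 1.2992634e-4 * x**5)
    return 10 ** (num / den)

# The curve spans roughly 0.05 to 4000 cd/m2.
print(round(gsdf_luminance(1), 3))  # → 0.05
```

Calibration software inverts this curve: it finds the JND indices corresponding to the display's measured black and white levels, then spaces the 256 driving levels evenly between them in JND index, so every gray step is equally perceptible.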

2. Many commercial grade monitors don’t have the required dynamic range. The maximum light output of a monitor is specified in cd/m2 (candela per square meter). A good quality commercial display can achieve 300 cd/m2, sometimes more if you are lucky. That 300 cd/m2 would be at the low end for any medical grade monitor, which might go up to 500 cd/m2 or more. Why do we need this much? The reason is that when a display is calibrated to DICOM, a percentage of the response is lost in the mapping process. At 300 cd/m2, applying the DICOM corrections can be expected to decrease the maximum value by about 10 percent.

The human eye has a 250:1 contrast ratio at the ambient conditions of the viewing environment.  Assuming the commercial display was DICOM compliant with aftermarket software, the luminance ratio of the display and the eye would be very close.  However, ambient light detracts from the ability to see low contrast information.  This particular example would need to be in a low light room to achieve a 250:1 luminance ratio inclusive of ambient light. 

Medical displays are designed to operate between 400 and 600 cd/m2 as corrected to DICOM, with reserve luminance potential for extended life at those levels. Even if a monitor is calibrated, if there are not enough points to map the pixel data into, you clip off part of the information. For example, if you want to map 256 grayscale pixel values but have only 200 points available, you lose the difference between some of them. The required dynamic range depends on where you are going to use the monitor: the brighter the room light, the more information you are going to lose in the dark areas of the image, as you simply won’t be able to distinguish details there.

There is a simple fix for that: the calibration takes the room light into account and makes sure the lowest pixel value is mapped to something you can detect; the whole range is shifted, which is important when using the monitor in a bright area such as the ER or ICU. Also, it is good to have some “slack” in the dynamic range, as the light source of the monitor will decrease in output over time (compare the output of an old light bulb). Therefore, the maximum brightness to facilitate mapping the whole data range should be about 350 cd/m2[1], assuming you use the monitor in a dark environment. If you are using it in a bright area, or if you want some slack to accommodate the decrease of monitor output over a period of, let’s say, 5 years, you might want to go with 450-500 cd/m2.

3. A medical grade monitor typically adjusts its output to compensate for start-up variations. The light output of a monitor varies until its temperature stabilizes, which takes about 30-60 minutes. You can leave monitors on day and night, or switch them on automatically one hour before they are going to be used; however, either method will drastically reduce the lifetime. Better medical grade monitors typically have a feedback mechanism built in that measures the light output and adjusts the current to the light source to maintain a constant output. The third requirement therefore is to have a medical grade monitor with a light output stabilizer.

4. A medical grade monitor can usually keep a record of its calibration. One of the students in our PACS training told me that he had to produce the calibration record of a specific monitor dated two years back for legal reasons, to prove that when an interpretation was made on a workstation, there was no technical reason that a specific finding was missed. In addition, you need access to these records on a regular basis regardless, to make sure that the monitor is still operating within the acceptable range. This brings me to another point: many users seem to replace their monitors after a period of five years. If they are still within calibration, there is no reason to do that. Therefore, the fourth requirement for a medical grade monitor is to make sure that you can retrieve and store the calibration records.

5. A medical grade monitor is typically certified. There are recommendations defined by the ACR for monitors; they are somewhat technical and, in my opinion, not worded strongly enough. Also, most medical grade monitors have FDA approval, which is actually only a requirement in case you are reading digital mammography. If you meet the requirements stated above you should be OK, but FDA approval does not hurt. You can check the FDA website and look up the manufacturer to see if they have been approved. The fifth (optional) requirement is therefore FDA approval.

6. In addition to being able to see all of the grayscale, which is characterized by the contrast resolution, you also need to be able to distinguish between the different pixels, i.e. your monitor needs to have the right spatial resolution to see the individual details. Take a typical CR chest image, which might have a matrix size of 2000 by 2500 pixels; that results in 5 million pixels, or 5MP. The standard configuration for a diagnostic monitor for X-rays is 3MP, because a physician has the capability to zoom or use an electronic loupe to see a one-to-one mapping of each image pixel on the screen. One could argue that you can use a 2MP monitor as well, and that is correct as long as you realize that it will take more time to make a diagnosis, as you need to zoom more frequently. But if you are very cost sensitive, for example for a system placed in a developing country where money is a major issue, a 2MP configuration would do. So, the sixth and final requirement is a 3MP monitor configuration (assuming time is more important than cost).
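The arithmetic behind the 2MP/3MP trade-off can be sketched as follows; the monitor pixel dimensions are typical values for these classes, not taken from any specific product.

```python
def megapixels(width: int, height: int) -> float:
    """Pixel count of a matrix, expressed in megapixels (MP)."""
    return width * height / 1e6

# A typical CR chest image versus common diagnostic monitor formats;
# the monitor dimensions below are typical values, not product specs.
image_w, image_h = 2000, 2500          # the 5 MP chest image from the text
monitors = {"2MP": (1200, 1600), "3MP": (1536, 2048), "5MP": (2048, 2560)}

for name, (w, h) in monitors.items():
    # Fraction of the image visible at one-to-one (electronic loupe) zoom
    fraction = (w * h) / (image_w * image_h)
    print(f"{name}: {megapixels(w, h):.1f} MP, shows {fraction:.0%} at 1:1")
```

The smaller the fraction visible at 1:1, the more panning and zooming is needed to review the full image at native resolution, which is exactly the time penalty of a 2MP configuration.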

Does this mean that a commercial grade monitor cannot be used? It depends. If you are willing to calibrate the monitor manually and check the calibration on a regular basis, making sure the correction can actually be applied by the monitor; if you take care of the warm-up time; if you have a monitor that meets the maximum brightness requirement; if you keep your calibration records; and if you are not worried that, in case of a legal dispute, a plaintiff will challenge you with the fact that you used sub-standard components that could impact patient care, well… it is up to you. But I would think twice about it, especially as the price difference between a good quality medical grade monitor and a commercial grade monitor is not that great compared with the overall cost of a PACS system.

If you are interested in more details, there is a video on how to use a test pattern to check whether a monitor is calibrated. And as always, in our PACS training classes we spend quite a bit of time going over monitor characteristics and calibration, including hands-on experience, so if you want to learn more, check out our training schedule.

[1] The unit of measurement for luminance is cd/m2, which stands for candela per square meter.

Wednesday, December 7, 2016

RSNA 2016, What’s New?

This year’s annual Radiological Society of North America (RSNA 2016) tradeshow in Chicago was pretty good: the weather was cooperating, the labor unrest and strikes by Lufthansa and workers at O’Hare ended just in time for us to get to the city, and the overall mood and atmosphere was positive. Vendors were happy about increased traffic from serious buyers and contenders, and the PACS replacement market seems to be growing. Here are my (subjective) observations:

1.       Deconstructed PACS is here to stay. I have to say that I was surprised by this, as I thought it was going to be a fad reserved for a few selected large organizations, but it seems to have a lot of momentum. I wondered: why would anyone start building a PACS from scratch using components from best-of-breed vendors, knowing that it takes a substantial investment of time and expertise to do this? Well, the reasons are:
a.       There are significant workflow improvements that can be achieved that are not possible, or very hard to do, with “standard” PACS systems. Examples are: efficient image exchange; a universal worklist for reading from multiple, disparate PACS systems; an elaborate pre-fetching algorithm that makes sure you pull all prior studies; optimal mapping of your HL7-based orders into a DICOM Modality Worklist that supports an effective technologist workflow; and several others.
b.      There is an emergence of middleware vendors that provide routers, DICOM tag morphing gateways, worklists, decision support software and workflow support.
c.       Finally, we have more availability of zero-footprint viewers that are fully featured, i.e. are not just lightweight clinical viewers, but can do the heavy duty radiology work, and can operate either driven by the EMR, RIS and/or a general purpose worklist
d.      VNA installations are maturing, and now provide full control over the image archive to the end-user instead of locking the data up in semi-proprietary vaults, which required the user to keep on purchasing additional licenses for blocks of studies to be archived and/or accessed.
There are still challenges, as we find out how much proprietary information typically flows in between the various PACS components, but there is no question that the deconstructed PACS is here to stay and a good solution for larger organizations with good in-house IT and clinical support.
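The workflow improvements above mention mapping HL7-based orders into a DICOM Modality Worklist; a toy sketch of the idea follows. The segment layout, field positions, and attribute selection are deliberately simplified for illustration — a real interface engine handles repetitions, escapes, and many more attributes.

```python
def orm_to_mwl(hl7_message: str) -> dict:
    """Map a (simplified) HL7 ORM order message to DICOM Modality
    Worklist attributes, keyed by DICOM keyword. Field positions are
    illustrative; a real interface engine handles many more cases."""
    segments = {}
    for line in hl7_message.strip().splitlines():
        fields = line.split("|")
        segments[fields[0]] = fields
    pid, obr = segments["PID"], segments["OBR"]
    return {
        "PatientName": pid[5].replace("^", " ").strip(),
        "PatientID": pid[3],
        "AccessionNumber": obr[3],
        "ScheduledProcedureStepDescription": obr[4],
    }

msg = ("MSH|^~\\&|RIS|HOSP|PACS|HOSP|20170130||ORM^O01|123|P|2.3\n"
       "PID|1||4711||Doe^John\n"
       "OBR|1||ACC001|CHEST 2 VIEWS")
print(orm_to_mwl(msg))
```

The hard part in practice is not the parsing but the mapping decisions: which HL7 field feeds which DICOM attribute, and how to normalize study descriptions so that pre-fetching and hanging protocols work consistently.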

2.       The deep learning and machine learning hype, both forms of artificial intelligence, has spread to radiology, causing more harm than good at its onset. A story in the September 2016 Journal of the ACR stated that it could end radiology as a thriving specialty. As Dr. Eliot Siegel from the Baltimore VA stated in the controversial session “Will Machines Replace Radiologists?,” he is already getting emails from residents asking him if they should quit the practice. A Wall Street Journal article published on Dec. 5 discussed the AI threat and dismissed its short-term impact based on three barriers: the lack of the huge sources of data that are needed to “teach” these supercomputers the rules; the small incremental improvements (one or two percent) that are achievable, which are not necessarily significant enough to justify the initial investment; and the lack of personnel to implement all of this. There is no question that improvements in technologies such as CAD will spread their use beyond the common application of detecting lesions in breast imaging, and that radiologists could use computers much more effectively, for example for automating reports using structured measurements from ultrasound. But replacing radiologists with computers will take a while, if not a couple of decades.

3.       3-D printing is gaining a lot of traction. This technology is not new; for the past 30 years the automotive industry and other industrial applications have been printing models for rapid prototyping and hard-to-find replacement parts. But it has now become mainstream technology, as you can go on-line and buy a 3-D printer for a few thousand US dollars. Larger institutions are starting to set up 3-D printing labs, which is somewhat of a challenge from an organizational perspective as it does not quite fit one specialty; the application intersects radiology, surgery, orthopedics, dentistry and others. And the number of models to be printed has grown exponentially; the Mayo Clinic, for example, reported that they now make several hundred models a year. The DICOM standards committee, which met during RSNA, decided to re-activate a 3-D working group to address the interoperability issues. There are several standard 3-D formats that are similar to, for example, a standard JPEG or TIFF 2-D image, but there is no place to put acquisition context, patient information, and other clinically important information in the meta-data. In other words, we need a DICOM “wrapper” that allows this and that can encapsulate the most common standard formats, similar to what is done by “wrapping” a PDF file into a DICOM file format. This activity is expected to give this application a major boost so that these objects can be properly managed.

4.       Digital Radiography (DR) plate technology shows signs of maturing. There are three major technical challenges with DR technology that are showing signs of being solved in the next two to three years; these have to do with the top reasons that these plates currently fail. The first one is weight and lack of robustness, which has to do with the fact that the detector uses glass as its main component. The same technology that is currently used to create thin, bendable TV screens should resolve this issue and result in super light-weight detectors. The second challenge is to make these detectors completely waterproof so they can withstand body fluids. The third challenge has to do with the way the detector array is soldered, which should be done in a more robust manner. Given the recent US government regulations encouraging DR replacement of CR, volumes should go up, which should bring the price of a plate down from around US $50,000 to an average of $25,000, and even lower. We are not quite there yet, but the next few years should bring some major improvements and cost reductions.

5.       Multi-modality integration is becoming more popular, including dual imaging devices such as PET-CT, PET-MRI and even SPECT combinations. In addition, it is also possible to integrate ultrasound with a CT scan by connecting a dual camera to the probe and registering it with a prior CT scan. This could potentially even replace the often-used practice of using X-ray fluoroscopy (a C-arm) for visualizing needles inserted into a patient.

Usually I have a “top-ten” list after RSNA, but this year I did not find that many innovations, not taking the new “60-second eye-lift” or massage chairs into account. However, as mentioned, the “vibe” was very positive and there was a lot of emphasis on how to become more efficient and how to improve radiology services, which, combined with concern about how the president-elect is going to impact the way medicine is practiced, resulted in a cautious optimism. As always, I personally enjoy these tradeshows, as there is no better way to get updates, talk to many peers, learn a lot and know what is going on in the industry than walking the aisles!

Monday, October 24, 2016

Deconstructed PACS part 4 of 4: Q and A with industry expert/consultants

This is part 4 of 4 of the Deconstructed PACS series; the recorded video and corresponding slides of the webcast can be viewed here. In this final part we have the opportunity to interact with the VNA experts, Mike Cannavo (MC) and Michael Ryan (MR), and ask any deconstructed PACS related questions.

Q1 Would you implement the VNA before or after the PACS implementation?
(MC) I prefer the VNA to be on the front-end, because then the migration has already been done and they have experience with it. The VNA implementation can easily take a year, with the data migration taking a great part of that. Michael Ryan did it kind of on the back-end; it is probably a personal preference. My preference is to do it on the front-end.
(MR) If you have a choice, I would say implementing the VNA first makes the most sense.

Q2 What about the dataflow, do the images go to the VNA first and then to the PACS or the other way around?
(MR) That would really depend on the facility requirement, it can be done either way.
(MC) I disagree, the data should always go to the PACS first. By the way, I typically configure the local PACS to have 2 years of on-line storage.

Q3 Is the enterprise viewer in the VA environment used for primary radiology viewing?
(MR) There is one application for the primary radiology and physician viewer; it is not a zero footprint, but a very small footprint. That is how we can provide the clinical users with advanced viewing capabilities, especially the neuro-surgeons and orthopedic specialists. They are on separate networks though.

Q4 Is a deconstructed PACS less expensive than a “conventional PACS?”
(MC) Most PACS vendors are selling bundled solutions at 60% to 75% off list price, which means that a turnkey PACS solution will be significantly less expensive. There are also integration, testing and internal support costs to deal with in the case of a deconstructed PACS.

Q5 An issue is having a consistent body-part, study and series descriptions etc. to allow prefetching the relevant prior studies for comparison, what is your solution for that?
(MR) We standardized body-part and study descriptions. For the VA, the study description starts in our VISTA EMR. We had some success standardizing it across the facilities, but it did not really meet the end-user requirements. Also, there is a trade-off between making it too generic or too specific. In our experience, “body part examined” contained a lot of “Other” values; this is especially an issue with older studies.

Q6 Why do you need a PACS archive and VNA archive if you have a viewer that has full radiology functionality and you have the other key pieces of the deconstructed PACS. Having a PACS seems redundant. 
(MC) I don’t see the vendor portion of the archive lasting, as there is too much that is proprietary and vendor-specific, which a VNA will eliminate. It will neutralize this, which is the “N” in Vendor Neutral Archive. A VNA allows you to connect and reconnect any other PACS system without having to go to the added expense and time of a data migration.
However, we still need a PACS, as 80 percent of US hospitals are under 200 beds in size and the majority won’t have the resources to do a fully deconstructed PACS. However, if an institution is part of a large enterprise and there are corporate resources available, then by all means it should consider it. For right now, and for the next 2-3 years, it will be mostly for the larger institutions that can afford to support it.
(MR) There seem to be a lot of mergers and acquisitions outside the government world, so I can imagine that it starts to make sense, allowing them to tie it all together and use viewers that can access multiple VNAs. Indeed, for smaller institutions it would be a challenge, both from a budgetary and a personnel perspective.

Q7 How have you implemented integration of specialized applications, such as orthopedic templating?
(MR) We used to have a third-party plug-in for our legacy PACS and we can do the same, i.e. launch it from our new enterprise viewer. We do have a dedicated 3-D solution from a third party. For image fusion, some of our facilities use a workstation plug-in as well, and some do the fusion at their modality workstations.

Q8 What are your closing comments on this topic?
(MR) For us, the deconstructed PACS was a good solution as we were able to find vendors that can meet our requirements. In addition, in the VA there are time and budget constraints, which make a piecemeal purchasing and implementation a better solution. I believe that five years from now, when my colleagues are looking back, they will find that it was a good decision to go this route. 
(MC) The most important part is having an action plan that considers what you have in place, what you can use and what you want to replace, and sitting down with someone who understands where you are and where you want to go. Look at the requirements and the financial and personnel resources you have. Make sure you document this and realize that it could take 2-3 years to get to what you need. As Michael Ryan demonstrated with the VISN 23 VA implementation, you can be successful if you do your due diligence and plan carefully.

Friday, September 30, 2016

Deconstructed PACS part 3 of 4: Do’s and Don’ts from the PACSMan

This is part 3 of 4 of the Deconstructed PACS series; the full video and corresponding slides of the webcast can be viewed here.

The first thing you need to do when considering a Deconstructed PACS (DP) is to know your requirements. Make sure you differentiate between what you want and what is really needed. Specifically: do you already have a solution in place that works, and, especially, how does a DP solution fit your long-term strategy? Consider the integration with other clinical systems, and know that there are many components needed to make a DP solution work, particularly the PACS and RIS.

You have to decide where the data is stored: on-site, in the cloud, or in a hybrid storage solution. Then: who owns the database, how is access provided, and who is managing it? Note that this is one of the major differentiators between a traditional PACS, which in many cases has locked-down access to the database, and a deconstructed PACS, which transfers ownership of that information to the client, including opening up the database schema.

It is important to know the trade-offs, in particular the gains and losses. Knowing the true cost of ownership is critical, including the support cost. This takes into account the cost and solution for redundancy, which can be solved in a central versus distributed manner. Service and support can be provided either internally or externally. Internal is typically less expensive initially but requires an investment in hiring the right people and providing them with training so in the end the costs may be the same or even slightly higher.
A word of caution: never buy on price alone. No vendor will ever lose a deal on price. You need to make sure that everything is included in the purchase price, and don’t buy something you don’t really need. I have seen too many computers sitting unused in a corner of a department; they were part of a deal that looked good on paper but turned out to be useless.

It is important to decide upfront who will perform the connectivity to all the systems and how it will be done. You need to determine whether it will be a web-based interface, an HL7 or DICOM interface, or even a custom-designed API (application program interface) such as is commonly used to connect to your speech recognition system, or any other interface.
It is critical to get as much in writing as you can. Know that not everything can be negotiated and there are often certain policies and contract items that are standard.

Make sure you get feedback from all stakeholders before making the purchase decision, especially from the CIO and CTO; those two are key. Do your due diligence to be able to justify your purchase. Look at all financing options, in particular whether you want to finance it from the operating budget, the capital budget, or even price-per-click. Regardless, the term should not exceed 5 years, as this technology is changing too fast to know what will be preferable at that time.

In conclusion, a deconstructed PACS is a good solution for certain applications, but not necessarily a one-size-fits-all solution for everyone. Do your due diligence to find out whether this might be a good solution for you. It might, but then again, it might not.

Mike Cannavo, aka “the PACSman”.

Monday, August 29, 2016

Deconstructed PACS part 2 of 4: Implementation within the VA

This is part 2 of 4 of the Deconstructed PACS series, the full video and corresponding slides of the webcast can be viewed here

The VA Midwest Health Care Network, otherwise known as VISN 23 (Veterans Integrated Service Network) implemented a deconstructed PACS between September, 2014 and August, 2015.  Our legacy PACS hardware, originally purchased as part of a Brit Systems installation was at end of life.  The leadership team in VISN 23 made the decision to proceed with a deconstructed PACS solution to replace our traditional PACS.

VISN 23 encompasses all of Minnesota, Iowa, North Dakota, South Dakota and Nebraska and small parts of additional surrounding states.  The largest facility is Minneapolis with 130,000 studies per year.  All told, the 11 facilities register 460,000 studies per year.

Prior to making the decision to proceed with a deconstructed PACS, the VISN 23 PACS and Imaging Service lines achieved successful implementations of various PACS “sub-components”.  These consisted of Corepoint Healthcare’s HL7 integration engine, PowerScribe 360®, Laurel Bridge Compass® DICOM Router, Pacsgear PACS Connect®, TeraRecon Intuition® Advanced Visualization as well as various CD burning and importing solutions across the enterprise.  Having the experience of researching, evaluating and procuring these various components encouraged the teams to move forward with a fully deconstructed PACS.

The primary three components of the deconstructed PACS in VISN 23 are Visage Imaging as the viewing solution, Lexmark Acuo as the VNA and Medicalis as the radiologists’ worklist. Due to concerns about bandwidth across the enterprise, VISN 23 chose to install a Visage server at 8 of the 11 campuses along with an Acuo local server which Acuo labels a “temporal”.  Acuo data centers are installed at both Minneapolis and Omaha with DICOM replication between these two.  The Medicalis servers are installed in Omaha.  Also installed in Omaha are the HL7, modality worklist and PowerScribe servers for the enterprise.

Prior to deconstructing the PACS, VISN 23 made extensive use of DICOM routers to ingest studies from the modality layer.  Also, a third party modality worklist solution purchased from Pacsgear was implemented well ahead of the deconstructed PACS.  This allowed the biomed teams to fully configure the modalities to interact with these two systems before, during, and after implementing a deconstructed PACS.  This freed up time and resources during the actual implementation of the primary three components of the deconstructed PACS.

The VISN 23 team faced several challenges during implementation. First, we discovered that new internal policies prevented installation of the Visage viewer on enterprise desktops for clinical use.  Second, since the legacy PACS hardware was at end of life, implementation was begun before the legacy studies had been migrated.  Therefore, at the first site, we initiated a “just in time” or “Ad Hoc” migration meaning priors were retrieved from legacy systems for current studies as they were performed.  However, since we had to maintain the legacy PACS for the enterprise desktop viewer, we had to be cautious to avoid overburdening the legacy PACS with prior retrievals.  We managed this, but it was a delicate balancing act that went on for nearly six months.

Another challenge VISN 23 faced (and will continue to face regardless of PACS type) is that the VA’s HIS/RIS, known as VistA, will only generate an HL7 message at the time of patient registration.  This means that, essentially, there is no pre-fetch but rather a “post-fetch” or “just-in-time fetch”.  As we worked through the issues, there were times when priors were not fully available for the radiologists.  In response to this, we had a few users who innocently fetched entire jackets on multiple patients to get priors.  This caused serious system performance issues. This was easily remedied with education.
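
The “post-fetch” flow above can be sketched as follows. This is a minimal sketch assuming a simplified pipe-delimited HL7 layout and a stubbed legacy-archive retrieval; neither reflects the actual VistA interface, and the cap on retrieved priors illustrates the throttling needed to avoid overburdening the legacy PACS:

```python
# Minimal sketch of a "just-in-time" (post-fetch) prior retrieval triggered by
# the HL7 message sent at patient registration. The message layout is
# simplified, and fetch_priors is a stub for what would be a DICOM
# C-FIND/C-MOVE against the legacy archive.

def parse_hl7_order(message: str) -> dict:
    """Extract patient ID and accession number from a pipe-delimited HL7 v2 message."""
    fields = {}
    for segment in message.strip().split("\r"):
        parts = segment.split("|")
        if parts[0] == "PID":
            fields["patient_id"] = parts[3].split("^")[0]  # first ID component
        elif parts[0] == "OBR":
            fields["accession"] = parts[2]  # placer order number (simplified)
    return fields

def fetch_priors(patient_id: str, max_studies: int = 3) -> list:
    """Stub: a real implementation would retrieve priors from the legacy PACS,
    capped so the retrievals do not overburden it."""
    return [f"prior-{i}-for-{patient_id}" for i in range(max_studies)]

msg = "MSH|^~\\&|VISTA|FACILITY\rPID|1||123456^^^VA||DOE^JOHN\rOBR|1|ACC-789|FILLER"
order = parse_hl7_order(msg)
priors = fetch_priors(order["patient_id"])
```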

VISN 23 teams also discovered during the process of migration and priors retrieval that there were inconsistencies in some DICOM tags on these legacy studies.  We addressed this by using the evaluative tag morphing and writing capabilities of the DICOM routers mentioned earlier.
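
The kind of rule-based tag clean-up described above can be sketched as follows; the header is modeled as a plain dictionary, and the rules and values are illustrative (a real router operates on actual DICOM datasets):

```python
# A minimal sketch of rule-based DICOM tag morphing, the kind of clean-up the
# routers performed on legacy studies. Each rule pairs a condition with a tag
# to overwrite; the facility names and rules below are made up for
# illustration.

def morph_tags(header: dict, rules: list) -> dict:
    """Apply each (condition, tag, new_value) rule whose condition matches."""
    fixed = dict(header)
    for condition, tag, new_value in rules:
        if condition(fixed):
            fixed[tag] = new_value
    return fixed

rules = [
    # Normalize a renamed facility on legacy studies.
    (lambda h: h.get("InstitutionName") == "OLD FACILITY",
     "InstitutionName", "VA MINNEAPOLIS"),
    # Fill a missing body part on CR studies so hanging protocols still work.
    (lambda h: h.get("Modality") == "CR" and not h.get("BodyPartExamined"),
     "BodyPartExamined", "UNKNOWN"),
]

legacy = {"InstitutionName": "OLD FACILITY", "Modality": "CR"}
clean = morph_tags(legacy, rules)
```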

Lastly, at the request of the radiologists, support teams went back to the study description source in VistA’s RIS and improved the efficiency of the descriptions.  For example, if a CT Chest and a CT Abdomen/Pelvis were acquired together, all of the images were usually stored under the CT Chest description.  We modified the description for these studies to read “CT (CAP) Chest”.

We achieved several successes during our implementation. The viewer and VNA were able to achieve a very tight integration for study and patient splits, edits, merges, and so on.  We found it much easier to view images from other facilities. Clinical staff easily adapted to the Visage viewer on the enterprise desktop.  The tag morphing and writing will lead to a much cleaner database.  The server side rendering of the Visage viewer allowed near instant viewing of even volumetric CT studies using minimal bandwidth.

In summary, our vendors worked remarkably well together. VISN 23’s experience proved that a deconstructed PACS is a feasible alternative even in a challenging security environment such as the VA.

The author, Michael Ryan played a leading role in the implementation of the deconstructed PACS in the VA Midwest Health Care Network (VISN 23).  Michael has since retired from the VA and is now providing consulting services as MCR Consulting, LLC.  You can reach Mike at MCR Consulting, LLC,

Sunday, August 21, 2016

Deconstructed PACS: What is it, Implementation, Do’s and Don’ts (Part 1 of 4)

This is the first part of the deconstructed PACS (Picture Archiving and Communication System) series; for a full featured presentation you can look at the video. This part covers the “what” of this phenomenon.

The term “deconstructed PACS” has only recently been coined and basically refers to providing a best-of-breed solution for the different PACS components as opposed to a single-vendor PACS. As will become clear in the discussion below, there have always been variations of deconstructed PACS systems implemented, but over the past few years, the degree to which PACS systems have been split up and the range of components being sourced from different vendors have been increasing.

As of today, it is estimated that less than 5 percent of installations are truly deconstructed, and there is still a lot to be learned about the support, return on investment, and what scenarios result in a good solution. The deconstructed PACS is therefore still relatively early in the evolution of PACS technology, although, based on the amount of discussion and interest it gets at tradeshows and in user group discussions, it is no longer considered “hype.”

To explain what a deconstructed PACS system is, let’s start with a typical PACS system core architecture. The core has an application that manages the incoming images and related information, checks them for integrity, and sometimes, depending on the vendor, requires a technologist to “verify” them before they are added to the PACS database and archive and become available for physicians to interpret. The core also has a database/archive to allow for querying and retrieving the stored information and typically some type of Information Lifecycle Management (ILM), which implements retention rules. A radiologist can access the images to perform a diagnosis through a workflow manager, which provides a customized worklist to the radiologist, depending on the specialty, and synchronizes this list with other users who access the images. Referring physicians and specialists typically access the images for review using a web-based or some type of thin client or zero-footprint viewing interface.
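
The ingest path described above can be reduced to a toy state machine; the state names and the integrity check are illustrative, not any particular vendor’s workflow:

```python
# Toy state machine for the PACS core ingest flow: a study arrives, is checked
# for integrity, may await technologist verification (vendor-dependent), and
# only then becomes available for interpretation.

class StudyManager:
    def __init__(self, require_verification: bool = True):
        self.require_verification = require_verification
        self.states = {}  # study UID -> state

    def receive(self, study_uid: str, image_count: int, expected_count: int):
        if image_count != expected_count:        # simple integrity check
            self.states[study_uid] = "broken"    # flag for QC / system admin
        elif self.require_verification:
            self.states[study_uid] = "pending_verification"
        else:
            self.states[study_uid] = "available"

    def verify(self, study_uid: str):            # technologist sign-off
        if self.states.get(study_uid) == "pending_verification":
            self.states[study_uid] = "available"

mgr = StudyManager()
mgr.receive("1.2.3", image_count=120, expected_count=120)
mgr.verify("1.2.3")   # study now appears on the radiologists' worklist
```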

One of the relatively common applications of a deconstructed PACS covers the use case whereby a radiologist needs to read for multiple institutions that have PACS systems from different vendors. Instead of having to log into different PACS systems using their proprietary workflow manager interface and dealing with multiple worklists, one can use a third party universal worklist provider that will provide a unified worklist.

A second example of a deconstructed PACS is where the database/archive is provided by a different vendor than the PACS system, through a Vendor Neutral Archive (VNA). In its pure role, a VNA is a replacement for the PACS image manager/archive, which allows a new PACS system to be deployed relatively easily as it reduces data migration issues. Note, however, that it shifts the migration issue to when the VNA itself needs to be migrated, so this advantage could be overrated. The main benefit of a VNA is that it shifts data ownership to the end user, as several PACS vendors would store the data in a proprietary format and not even provide the database schema to the end users.

Typically, one keeps the PACS database/image manager in place, which stores the data for 6-18 months for ready access by the radiologists. Synchronizing changes, deletions and updates of the images between the PACS and VNA is still an issue; it is addressed by implementing one of the standard IHE profiles, called IOCM, but support is still lacking. Another issue with VNA implementation is deciding the dataflow, i.e. are images sent first to the VNA and then to the PACS, or vice versa? The same applies to physician access: should it be from the VNA or the PACS? Also note that the VNA has become much more than just a simple PACS archive/image manager extension, as it is also often used as an enterprise archiving solution, which can potentially manage non-DICOM objects, deal with multiple Accession Numbers and patient ID’s, and perform sophisticated data clean-up using tag morphing.

A fully deconstructed PACS therefore typically has a workflow manager and image manager/archive from different vendors in addition to the diagnostic workstation and even the physician viewers. One could even argue that there is no need for a traditional PACS vendor anymore, assuming that the VNA is taking the role of the PACS image manager/archive.

Now let’s back up and look at the different PACS core and ancillary components, as listed below, and see how they can be deconstructed:

PACS core features to be considered for deconstructing:
·         Prefetching and routing: most PACS systems provide routing capability; however, not always to the level of sophistication needed.
·         Image QC: one could purchase QC workstations that provide auto merge/splitting of studies, allow for reprocessing CR/DR images, and fix information in the header using a modality worklist feed.
·         Reading worklist: this is the universal worklist for the radiologists that can talk to multiple PACS systems.
·         Diagnostic display: this is the main reader for generic radiology reading (CT, MR, CR/DR, US), but for some specialties, such as digital mammography supporting breast tomosynthesis, nuclear medicine including image fusion, and cardiology showing non-image data, there might be a need for a workstation from different vendors.
·         Physician display: web-server solutions from third party vendors have been available for some time; especially when looking for tablet access and zero-footprint solutions there are quite a few other options.
·         3-D and other plug-ins: there has been a steady market for sophisticated 3-D applications and orthopedic templating from other vendors.
·         Archiving and image manager: here is where the VNA plays a role.
·         Audit trails and security/privacy: this is typically done by each vendor in a proprietary manner, i.e. they provide authorization and access controls and log all accesses. Some vendors have a totally promiscuous interface, i.e. they allow anyone to access the PACS core, something that can be prevented by programming the network routers. Audit trail recording can be done using an external ATNA repository, which can be shared by multiple ATNA sources.
·         System administration: this addresses fixing broken or unverified studies, merging or splitting them to match orders, body parts or specialties, and running database reports about storage usage, performance and availability. There are a few third party vendors that can provide some of these features; most of them are tied to the specific vendor.
·         Load balancing: this is typically done by adding multiple servers that can split the load for the incoming and outgoing data.
·         Disaster recovery, high availability and business continuity: many institutions have solved this by using a cloud storage solution.

PACS ancillary features that can be considered for deconstructing:
·         Order processing: many PACS vendors offer an integrated RIS/PACS or have the PACS worklist created by their RIS. In the majority of cases, the RIS and PACS have been reconstructed. As a matter of fact, most of the order entry and processing is shifting to the EMR.
·         Modality worklist provider: most initial PACS systems were using an external modality worklist provider, aka connectivity manager or broker. This takes in the HL7 orders, which are kept in a small appointment database, and provides a reply to the modality worklist queries. There seems to be a trend to take this out of the PACS system, as there are often limitations in the sophistication and amount of worklist filtering that can be provided.
·         Dictation: even though many PACS systems are tightly connected with only one or at most two voice recognition systems, some radiologists prefer to work with a different manufacturer.
·         Report storage and distribution: some vendors store reports in the PACS, some in the RIS, and some in a broker, but it appears that the trend is to store them in the EMR.
·         Inter- and intra-enterprise image sharing: there are many solutions for this, including portals, but many institutions opt to outsource the inter-facility image sharing to external companies to provide physician access.
·         Critical results and discrepancy reporting: this is often built in but can also be added on.
·         Clinical trials management: this can be added on by having a gateway that takes care of anonymization and adding the clinical trial identifiers.
·         Teaching files: many users opt for an external solution, especially as it can be kept even after a PACS vendor has been replaced.
·         Peer review: there are third party solutions for selecting random studies on a predetermined date and assigning them to a list of qualified radiologists.
·         Dose and contrast management: radiation dose registration systems and, in the future, contrast management systems can be added on.
·         Decision support: this is a new area; vendors are starting to offer data mining to provide decision support for physicians.
·         Interface engine: some vendors integrate an HL7 interface engine with their product, but most of them provide an external vendor for this.
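
The modality worklist broker pattern described in the list above can be sketched as follows; the appointment table and field names are illustrative, and a real provider would answer DICOM C-FIND worklist queries rather than Python calls:

```python
# Sketch of a modality worklist broker: HL7 orders land in a small appointment
# table, and the provider answers (simplified) worklist queries with filtering
# on attributes such as modality and scheduled station AE title.

appointments = []

def ingest_order(patient_id, accession, modality, station):
    """Store the scheduled procedure extracted from an incoming HL7 order."""
    appointments.append({"PatientID": patient_id,
                         "AccessionNumber": accession,
                         "Modality": modality,
                         "ScheduledStationAETitle": station})

def worklist_query(**filters):
    """Return scheduled procedures matching every supplied filter key."""
    return [a for a in appointments
            if all(a.get(k) == v for k, v in filters.items())]

ingest_order("123", "ACC1", "CT", "CT_SCANNER_1")
ingest_order("456", "ACC2", "MR", "MR_SCANNER_1")
ct_list = worklist_query(Modality="CT", ScheduledStationAETitle="CT_SCANNER_1")
```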

In conclusion, deconstructed PACS systems, aka “best-of-breed,” have been around since PACS started, although maybe not to the degree currently being implemented. The definition of a deconstructed PACS is vague and subject to interpretation by the vendor defining it; it can include one or many core or ancillary components. Remember that it is still a PACS; the basic functionality does not really change, and a deconstructed and a reconstructed PACS do the same job. It is recommended that you look at your PACS system in case you need additional functionality, such as peer reviews, decision support or dose registration, and consider third party vendors. If you are looking for a PACS replacement, you might even consider purchasing one of the core components from a different vendor. However, be careful, and look at the do’s and don’ts section of this series.

The author, Herman Oosterwijk is a long-time PACS trainer and consultant, published several textbooks and study guides on this subject and provides both face-to-face and on-line training on PACS, DICOM, HL7 and IHE. He can be reached at

Sunday, July 10, 2016

SIIM 2016: My top ten take-aways.

The ultimate radiology workstation
The annual PACS meeting organized by SIIM, held in Portland, Oregon from June 28-July 1, was a major turn-around from past years: attendance was up, the quality of the presentations significantly improved with regard to being current and interesting, and vendors got quite a bit of traffic in their booths.

The messages were predominantly positive, unlike the doom and gloom from last year’s “PACS is dead” messages, and many of the sessions were standing room only, especially those dealing with enterprise imaging and anything to do with deconstruction of the PACS. SIIM seems to have started to reinvent itself, albeit slowly. The collaboration with HIMSS, a great initiative, resulted in publication of several high-quality white papers about enterprise imaging, which are freely available.  
Below are my top ten observations based on attending the presentations and talking with peers in the industry during the meeting:

1.       Archive-less PACS – Every year there is yet another acronym introduced that does not really improve functionality but seems to confuse rather than add value. Case in point is the archive-less PACS, which was introduced as a variant between a traditional Vendor Neutral Archive (VNA) solution and a deconstructed PACS; however, under close inspection, it is nothing more than building a traditional PACS system using best-of-breed components. It is identical to reconstructing a PACS from the deconstructed components. So, nothing new, but what I liked about the presentation was that it gave a good breakdown of the several PACS components, being:
·         Order Processing
·         Prior identification/prefetching
·         Modality worklist
·         Image QC
·         Routing
·         Reading worklist
·         Diagnostic display
·         Dictation/VR integration
·         3-D integration
·         Annotations and Key images
·         Inter- and intra-enterprise communications
·         Non-diagnostic display
·         Archiving
I would have added lifecycle and/or content management, image management, other communication such as critical result reporting and discrepancy reporting, and peer reviews but this list was useful to show that a PACS system is not just about image communication and archiving.

2.       VNA – Vendor Neutral Archives (VNA’s) are increasingly being installed and are getting more mature. Most of them now implement IOCM (Imaging Object Change Management), the IHE profile that synchronizes a PACS with the VNA with regard to changes, updates, and deletions of the images and related information. Challenges with deploying a VNA are mainly with the integration of specialties, especially those that create non-DICOM objects such as PDF’s, JPEG’s, MPEG’s or other file formats. The traditional radiology workflow, and to a lesser degree the cardiology workflow, where everything is scheduled, ordered and managed on a procedure level, does not quite fit the other specialties, hence these issues arise. In those specialties, images are typically managed on an encounter or visit level, identified with a visit number instead of an Accession Number. Prior to image acquisition, a modality might need to query an EMR to obtain patient demographics using HL7 messaging instead of having a DICOM modality worklist available. There is definitely a learning curve involved with implementing this in these other departments, especially with the modified workflow. In addition, there are issues with regard to the integration of the results, especially for the EMR. For example, ultrasound images created by an anesthesiologist might not belong in the same place where the diagnostic ultrasounds are managed and displayed, but rather as part of the surgery notes. When deploying these devices for enterprise imaging to include many different specialties, these kinds of issues will surface, which is to be expected and part of the learning process.

3.       DIY migration – Image migration is a fact of life, as many institutions are at their second or third generation PACS, which in many cases means changing vendors. The reasons for changing vendors typically include the need for increased or different functionality, reliability, service and support issues, financial considerations, or in many cases the perception that another PACS vendor will magically solve existing issues that often have nothing to do with the vendor but everything to do with how the system is used and managed.
Regardless, migration is unavoidable. When migrating a PACS archive, which can take months and in many cases more than a year, it becomes obvious how well your previous PACS was managed. Orphan images, unidentified studies, and other issues with the DICOM objects will surface when trying to add them to a new archive. Support for non-image objects such as key images, annotations and presentation states might be lacking or have limitations. Migrations used to be performed by the new PACS vendor or specialty companies; however, to save cost, more users are doing it themselves. One can purchase a software migration controller that queries the old archive and manages the image transfer, potentially fixing DICOM tags, or purchase a VNA that has this capability built in. DIY, or Do It Yourself, migration is definitely an option, instead of paying a lot of money to your PACS vendor or a migration vendor.
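
A migration controller of the kind described above boils down to a query-move-verify loop. In this sketch, `query_archive`, `move_study` and `verify_study` are hypothetical stand-ins for the DICOM C-FIND/C-MOVE operations and tag checks a real controller would perform:

```python
# DIY migration loop: list studies on the old archive, move each one, verify
# it on the new archive, and quarantine anything that fails (orphans, broken
# tags) for manual QC. The callables are stand-ins, not a real DICOM client.

def migrate(query_archive, move_study, verify_study):
    migrated, quarantined = [], []
    for study_uid in query_archive():       # e.g. C-FIND on the old archive
        try:
            move_study(study_uid)           # e.g. C-MOVE to the new archive
            if verify_study(study_uid):     # compare image counts, check tags
                migrated.append(study_uid)
            else:
                quarantined.append(study_uid)   # flag for manual clean-up
        except Exception:
            quarantined.append(study_uid)
    return migrated, quarantined

# Toy stand-ins to exercise the loop: one study verifies, one does not.
ok, bad = migrate(lambda: ["1.2.3", "1.2.4"],
                  lambda uid: None,
                  lambda uid: uid != "1.2.4")
```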

4.       Evolution of middleware – Many PACS systems, including VNA’s, have limited routing capabilities and lack the ability to change tags to identify the origin (i.e. institution and/or modality), manage duplicate Accession Numbers, or coerce parameters in the header that impact the workstation hanging protocols, such as the study or series descriptions. Hence the advent of middleware vendors who can provide these capabilities. Note that most VNA vendors do provide quite a bit of this functionality, as they are used to performing tag morphing to preserve image integrity, which can be jeopardized by having multiple PACS as a source, but most PACS systems do have limited functionality with regard to changing the image data and/or doing sophisticated routing. The good news is that there are several vendors that fill this void and provide the middleware to integrate a PACS with other PACS systems or VNA’s, and provide intelligent routing and tag morphing.

5.       The ultimate radiology workstation – The first PACS workstations were designed to mimic a film alternator, resulting in a row of four to six monitors or even two rows of four on top of each other. In the early years, these were CRT monsters, very heavy, and from a PACS administrator’s perspective, a “pain in the back.” Eventually, these configurations dwindled down to a two-monitor medical display configuration combined with a color text monitor for worklist display, looking at ultrasounds, and report creation. The reason for the second monitor is that radiologists were starting to look at CT and MR in a stack mode, virtually integrating the 3-D space in their minds by replaying the 2-D axial slices in a CINE mode. However, there is a need to also look at prior studies and, in the case of CT and MR, different reconstructed views (MPR, 3-D etc.), which again asks for additional real estate. The circle has come around again by adding more and more monitors. Combined with an adjustable table that allows for a sitting/standing workplace and an acoustic environment that provides a noise-free background, the work environment of a radiologist has become much more ergonomically sound. Also note in the illustration the use of a chair that can rotate and always provide a perpendicular view to the monitor. More research is needed in this area, but given the increase in occupational injuries caused by fixed, non-ergonomically designed work environments with multiple monitors, ergonomic design might become a necessity.

6.       FHIR update – The FHIR standard, which is the protocol that provides a web-services based interface to healthcare systems such as an EMR, PACS archive, and other information resources, is coming along well. The problem with this standard is that it is still very much in a developmental phase; as a matter of fact, the official term for the latest version is DSTU 3.0, which means Draft Standard for Trial Use version 3, meaning that it is not finalized yet. The standard also relies on a set of so-called resources that are accessible in a standard format, which are also not quite finished yet. Lastly, there are many options in the standard, which makes conformance a challenge, in addition to the version issue, i.e. is the interface based on version 3.0 or a predecessor? In the meantime, recommendations and/or standards including federal guidelines are being defined based on FHIR requirements. It seems as if FHIR is definitely moving up on the hype curve, but there is serious concern about the potential “bleeding edge” effect when there are going to be real-life implementations. So far, there is good feedback from hackathons and trials, but real-life implementations are still scarce and support from major vendors is still in the works or in beta. It might make sense as a user to have a wait-and-see attitude about early implementations.
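
At its core, the FHIR web-services interface is plain HTTP against addressable resources. Here is a minimal sketch of composing (not sending) a search request; the server base URL is a placeholder:

```python
# FHIR exposes each resource type at a plain URL; a search is just an HTTP GET
# with query parameters. The base endpoint below is a made-up placeholder.
from urllib.parse import urlencode

def fhir_search_url(base: str, resource: str, **params) -> str:
    """Compose a FHIR search URL, e.g. [base]/Patient?family=...&birthdate=..."""
    return f"{base}/{resource}?{urlencode(params)}"

url = fhir_search_url("https://fhir.example.org/baseDstu3", "Patient",
                      family="Doe", birthdate="1970-01-01")
# A client would then issue: GET <url> with header Accept: application/fhir+json
```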
7.       DICOMWeb – The DICOM protocol has not changed since its initial definition in the early 1990’s. It does not have to, as it is robust and has a large installed base; pretty much every medical imaging device supports it. There are many toolkits that allow for an effective and easy implementation, supported by various operating systems and computers. However, the protocol is not that efficient for exchanging information over the web, i.e. to be displayed in browsers and on mobile devices, which is getting increasingly important with the requirement to display images in an EMR. Therefore, similar to the HL7 standard, which added a web-based version called FHIR, the DICOM standard has added several options to allow use of web services, generally called DICOMWeb. The first generation, called WADO (Web Access to DICOM Objects), has been around for quite some time and basically is an HTTP call for a DICOM image; the second generation added queries (QIDO – Query based on ID for DICOM Objects) and store (STOW – Store Over the Web). One should realize that these additions only impact the protocol, i.e. how the data is exchanged, and not the data formats, including the header definition. Also, one should realize that these additions are for an initial, relatively small set of applications related to mobile access and most likely EMR functionality. It is generally understood that the vast majority of DICOM applications, especially the connection of digital modalities such as CT, MR, etc., will still be using the conventional, robust and proven protocol. DICOMWeb implementations are also in the prototype phase.
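
DICOMWeb changes only the transport, not the data model: a QIDO query is an HTTP GET, and the response uses the DICOM JSON encoding, keyed by tag number. A minimal sketch with a hand-written sample response (the server URL is a placeholder):

```python
# QIDO returns matching studies as a JSON array of datasets in the DICOM JSON
# model: each tag is keyed by its hex number with a VR and a Value list. The
# URL and response body below are illustrative.
import json

QIDO_URL = "https://pacs.example.org/dicomweb/studies?PatientID=123456"  # placeholder

# Sample response body: one matching study.
body = ('[{"00100020": {"vr": "LO", "Value": ["123456"]},'
        '  "0020000D": {"vr": "UI", "Value": ["1.2.840.1"]}}]')

def tag_value(dataset: dict, tag: str):
    """Pull the first value of a tag from a DICOM JSON dataset."""
    return dataset.get(tag, {}).get("Value", [None])[0]

studies = json.loads(body)
study_uid = tag_value(studies[0], "0020000D")  # Study Instance UID
patient_id = tag_value(studies[0], "00100020")
```

A WADO retrieve would then be a GET on the study path, and a STOW store a POST of a multipart DICOM payload to the same base.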

8.       XDS – Cross-Enterprise Document (and image) Sharing is a set of profiles defined by IHE to exchange documents and images. There has been widespread implementation in the UK; however, in the US it has been piecemeal, even though most PACS and VNA vendors support it. The standard is relatively mature. There were some initial issues around the definition of the metadata that has to be submitted with the information to be exchanged, as the identifiers used to manage information within a department (e.g. Accession Numbers) and enterprise (Patient ID’s) are not sufficient to identify an image or document uniquely among different domains. Information such as specialty and patient cross-referencing is necessary. The good news is that in the US there are modalities starting to do XDS document submissions, mainly of non-DICOM objects such as PDF’s and JPEG’s, to, for example, a VNA. Interestingly enough, the integration of multiple non-radiology specialties creating these non-DICOM objects might drive a more widespread adoption of XDS.

9.       Patient ID reconciliation – As images are being exchanged between multiple enterprises, there 
is a need to reconcile the multiple ID’s that are issued by the various institutions. The US did not implement a universal patient/person identifier, which was actually part of the initial HIPAA regulations in the late 90’s, due to privacy concerns. But even in countries that have a universal patient identifier, such as Canada and many European and Asian countries, there are always cases where reconciliation is necessary, because a person might not have an ID (think of an illegal alien) or does not have his or her ID card when admitted to a healthcare institution. For example, in Canada, which has a so-called health card with a unique ID, 2 percent of the admitted population still has incorrect or missing information. To reconcile a patient who has different local patient ID’s, there are two methods: deterministic and probabilistic. The first method assumes a match based on a known relationship such as a universal patient ID; the second assigns a weight factor to each of the items in a list of demographic characteristics such as the name, birthdate, ID, gender, postal code and phone number and, if the result is higher than a certain probability threshold, declares a match or mismatch. The province of Ontario in Canada currently has four regions where information is shared and has experience with both methodologies. After matching almost 4 million patient records, it determined that 0.9 percent of the cases using a deterministic method resulted in an “uncertain match” and 0.39 percent in a mismatch. These mismatches were reported back to individual sites to allow them to improve their match rate. The bottom line is that with both the deterministic and probabilistic methods, mismatches can still occur, requiring QA procedures to minimize these occurrences.
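
The probabilistic method described above can be sketched as a weighted score with two thresholds; the weights and thresholds here are illustrative, not Ontario’s actual parameters:

```python
# Probabilistic patient matching sketch: each demographic field carries a
# weight, the summed score is compared against two thresholds, and anything in
# between is an "uncertain match" queued for manual QA review.

WEIGHTS = {"name": 0.30, "birthdate": 0.25, "gender": 0.05,
           "postal_code": 0.15, "phone": 0.25}
MATCH, NONMATCH = 0.80, 0.40   # illustrative thresholds

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Sum the weights of fields that are present and agree."""
    return sum(w for field, w in WEIGHTS.items()
               if rec_a.get(field) and rec_a.get(field) == rec_b.get(field))

def classify(rec_a: dict, rec_b: dict) -> str:
    score = match_score(rec_a, rec_b)
    if score >= MATCH:
        return "match"
    if score <= NONMATCH:
        return "mismatch"
    return "uncertain"   # route to manual review

a = {"name": "DOE^JOHN", "birthdate": "1970-01-01", "gender": "M",
     "postal_code": "55417", "phone": "555-1234"}
b = dict(a, phone="555-9999")   # same person, phone number changed
result = classify(a, b)
```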

10.   Pathology – Digital pathology is very challenging, and its implementation in the US is trailing behind successful implementations in Europe. The main reason is that there is no FDA approval (yet) for this application, requiring institutions to do dual interpretation, i.e. to perform a diagnosis based on the physical slide as well. There is a DICOM standard defined to encode these types of images, which are created by a scanner; however, support is rare if not non-existent. Even if approved, there are major implementation challenges based on the huge size of these exams and the subsequent demand on infrastructure, archiving, and image display and manipulation. FDA approval is an initial and necessary step, but there are many other issues, including workflow challenges and initial resistance from the pathologists, who have to get used to looking at these images on a monitor instead of through a microscope. Even though a few institutions are starting to implement digital pathology, widespread adoption in the US is still several years down the road.

In conclusion, SIIM2016 was a good meeting. There were good discussions, and it provided a good opportunity to get up-to-date on new developments and talk with many users and vendors in a more relaxed and less crazy environment than the major big trade shows such as RSNA and HIMSS. Hopefully SIIM will proceed on its current path, and next year in Pittsburgh will be as good as or better than this year in Portland.