Tuesday, April 11, 2017

Top Ten VNA Requirements

The term VNA (Vendor Neutral Archive) has been loosely defined by different vendors, and its functionality varies widely among providers. Early implementations have produced some good success stories, but in several cases also confusion, initial frustration and unmet expectations. The list below concentrates on the key features that are necessary for a successful implementation. So, the VNA should:

1.    Facilitate enterprise archiving: Enterprise archiving requires many different components; the joint SIIM/HIMSS working group has done a great job listing the key ones, including governance, a strategy definition, image and multimedia support, EHR integration and a viewer, but most importantly a platform definition, which can be provided by a VNA. The VNA needs to be the main enterprise image repository, serving as the gateway to viewers and the EMR, taking in information encoded as DICOM as well as other formats, following the XDS (Cross-Enterprise Document Sharing) repository requirements. A true VNA needs to be able to provide that functionality.

2.    Facilitate cross-enterprise archiving: The VNA should be the gateway to the outside world for any imaging and image-related documents. Examples of image-related documents are obviously the imaging reports, but also measurements (Structured Reports) and other supporting documentation, which can be scanned documents or native digital formats. It also needs to be the gateway for external CD import and export, for portals, and for cloud sharing and archiving solutions.

3.    Support non-DICOM objects (JPEG, MPEG, waveforms): Even though DICOM has proven to be an excellent encapsulation for medical images and other objects, such as waveforms, PDFs and documents, there are cases where encapsulation is not easy or even possible. A typical use case is the need to archive a native MPEG video file from surgery or another specialty. As long as there is sufficient metadata to manage the object, this should be possible, and the VNA should provide it.
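To make the metadata point concrete, here is a minimal sketch (Python; the storage paths, field names and values are all made up for illustration) of managing a native video in a VNA index: the object stays in its native format, and a small index record makes it findable by patient and accession number.

    import hashlib
    import json
    import shutil
    import uuid
    from datetime import date

    def ingest_native(path, patient_id, accession, description, mime):
        """Store a native (non-DICOM) object plus the metadata needed to manage it."""
        object_id = str(uuid.uuid4())
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        record = {
            "objectId": object_id,
            "patientId": patient_id,        # local identifier, see requirement 8
            "accessionNumber": accession,
            "description": description,
            "mimeType": mime,               # e.g. "video/mpeg"
            "studyDate": date.today().isoformat(),
            "sha256": digest,               # integrity check on later retrieval
        }
        shutil.copy(path, f"/vna/objects/{object_id}")        # hypothetical object store
        with open(f"/vna/index/{object_id}.json", "w") as f:  # hypothetical index
            json.dump(record, f, indent=2)
        return object_id

    ingest_native("laparoscopy.mpg", "123456", "ACC0042",
                  "Laparoscopic cholecystectomy video", "video/mpeg")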

4.    Be truly vendor neutral: Even if the VNA is from the same vendor as one or more of your PACS systems, its interface with any PACS should be open and non-proprietary. This is one of the most important requirements: plugging a PACS from another vendor into your VNA should be very close to “plug-and-play.”

5.    Synchronize data with multiple archives: Lack of synchronization is probably the number one complaint I hear from early implementers. To be fair to the VNA vendors, in many cases synchronization is lacking on the PACS side. The VNA may be able to process IOCM (Imaging Object Change Management) messages, which are basically Key Image Notes carrying the reason for a change (rejects, corrections for safety or quality reasons, or worklist selection errors), but if the PACS has no IOCM support, you are left with manual corrections at multiple locations. At a minimum, there should be some kind of web-based interface that allows a PACS administrator to make the changes. It might be possible to adjust the workflow to minimize corrections: one institution, for example, does not send the copy to the VNA until one day after the images are acquired, by which time the majority of the changes have already been applied at the PACS. However, if the VNA is the main gateway for physician access, this is not feasible. Without synchronization, a PACS administrator has to repeat the changes at multiple locations.
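For the technically inclined, the sketch below (Python with the pydicom library) shows roughly what such a change message looks like on the wire: a Key Object Selection object whose document title carries the coded reason for the change. It is deliberately incomplete; the IOCM profile mandates quite a few more attributes, so treat this as an outline, not a conformant implementation.

    from pydicom.dataset import Dataset
    from pydicom.uid import generate_uid

    def rejection_note(study_uid, series_uid, ref_sop_class, ref_sop_instance):
        """Skeleton of an IOCM rejection note (Key Object Selection)."""
        kos = Dataset()
        kos.SOPClassUID = "1.2.840.10008.5.1.4.1.1.88.59"  # Key Object Selection Document
        kos.SOPInstanceUID = generate_uid()
        kos.Modality = "KO"
        # The document title is the coded reason for the change (DCM 113001)
        title = Dataset()
        title.CodeValue = "113001"
        title.CodingSchemeDesignator = "DCM"
        title.CodeMeaning = "Rejected for Quality Reasons"
        kos.ConceptNameCodeSequence = [title]
        # Reference to the instance being rejected
        ref = Dataset()
        ref.ReferencedSOPClassUID = ref_sop_class
        ref.ReferencedSOPInstanceUID = ref_sop_instance
        series = Dataset()
        series.SeriesInstanceUID = series_uid
        series.ReferencedSOPSequence = [ref]
        study = Dataset()
        study.StudyInstanceUID = study_uid
        study.ReferencedSeriesSequence = [series]
        kos.CurrentRequestedProcedureEvidenceSequence = [study]
        return kos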

6.    Provide physician access: A key feature of the VNA is that it provides “patient-centered” image access: instead of a physician having to log into a radiology, cardiology, surgery or oncology PACS with different viewers, different log-ins and disparate interfaces, there is now a single point of access. This access point is also used for the EMR plug-in, i.e. the VNA should provide an API that allows a physician to open the images referenced in the EMR with a single click. Note that accessing the data with a different viewer could create some training and support issues, as the features and functions most likely differ from those of the PACS viewer.

7.    Take care of normalizing/specializing: As soon as images are shared between multiple departments, and even enterprises, the lack of standardization of Series and Study Descriptions, procedure codes/descriptions and body parts becomes obvious. The differences can be obvious, such as using “skull” or “brain” for the same body part, or subtle, such as “CT Head w/o contrast” versus “CT HD without contrast.” Any difference, even a minor one, can cause prior images not to be fetched for comparison. That is where what is sometimes referred to as “tag morphing” comes in: the data is “normalized” to a common set of descriptions and/or codes before it is archived in the VNA. When a specific PACS expects certain information to be encoded in a specific manner, the data has to be modified again on the way out to accommodate its local quirks, which I would call “specialization.”
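A minimal sketch of what tag morphing boils down to (Python with pydicom; the lookup tables are obviously fictitious and would be far larger in practice, and “incoming.dcm” is a placeholder file name):

    from pydicom import dcmread

    # Enterprise lexicon: map local descriptions onto the normalized form
    NORMALIZE = {
        "CT HD without contrast": "CT Head w/o contrast",
        "SKULL": "BRAIN",
    }
    # Reverse table used to "specialize" data on its way back to a quirky PACS
    SPECIALIZE = {v: k for k, v in NORMALIZE.items()}

    def morph(ds, table):
        """Rewrite description tags according to the given lookup table."""
        for keyword in ("StudyDescription", "SeriesDescription", "BodyPartExamined"):
            value = getattr(ds, keyword, None)
            if value in table:
                setattr(ds, keyword, table[value])
        return ds

    ds = morph(dcmread("incoming.dcm"), NORMALIZE)  # normalize on the way in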

8.    Handle multiple identities: Images will be presented to the VNA with local patient identifiers that need to be indexed and cross-referenced; the same applies to studies and orders. Most VNAs can prefix an Accession Number to make it unique in the VNA domain and remove that prefix when sending the information back. This assumes the Accession Numbers do not already use the maximum allowed 16-character length; otherwise it has to be dealt with in the database.
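In code the prefixing trick is trivial, as the sketch below shows (Python; the site prefix is a made-up example); the catch is entirely in the 16-character length check:

    SH_MAX = 16  # DICOM Short String (SH) VR allows at most 16 characters

    def add_prefix(accession: str, site: str = "SITEA") -> str:
        prefixed = f"{site}_{accession}"
        if len(prefixed) > SH_MAX:
            # Cannot prefix in place: keep the original value and
            # cross-reference it in the VNA database instead
            raise ValueError("prefixed accession number exceeds the SH limit")
        return prefixed

    def strip_prefix(accession: str, site: str = "SITEA") -> str:
        return accession.removeprefix(site + "_")  # Python 3.9+

    print(add_prefix("A1234567"))           # SITEA_A1234567
    print(strip_prefix("SITEA_A1234567"))   # A1234567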

9.    Be the gateway to the outside world using open standards: Many states, regions and (if small enough) countries are rolling out a central registry (HIE, or Health Information Exchange) so that an institution can register the presence of images and related information for anyone outside the enterprise who is authorized to access it. Registration and discovery use the IHE-defined XDS profiles, while the PIX/PDQ profiles take care of patient cross-referencing and query.

10. Meet your specific needs: According to a recent survey, more than 50 percent of US-based institutions are installing or planning to install a VNA. I suspect the main reason is that many are getting tired of yet another data migration, which is lengthy (months to years) and potentially costly in terms of both money and lost studies. The elimination of future migrations is somewhat of a moot point, as the PACS migration will likely be replaced by a VNA migration; the issue is shifted rather than eliminated. The real reason for getting a VNA has to be some of the key features listed above. If, on the other hand, you are a relatively small institution, with images created only in radiology and possibly cardiology but not in any other specialty, and there is no immediate need for image exchange, then I would argue you might be better off staying with your current PACS architecture, as the business case for a VNA is not quite clear yet.

In conclusion, VNAs are here to stay, assuming they have most, if not all, of the features listed above. However, a VNA might not be for you, so you need to make a business case and look at the potential pros and cons. When you are thinking about getting a VNA, talk with your VNA and PACS vendors about the features listed above to make sure you understand the clinical, technical and personnel impact if a vendor does not support one or more of them. By the way, we'll have a VNA seminar coming up, see details here.



Monday, February 27, 2017

HIMSS17: Where the Brightest Minds Inspire Action.

Here are my top ten hot topics from this year’s HIMSS meeting, held Feb. 19-23 in Orlando. Before I comment, note that I look at this meeting with an imaging background and an interoperability interest, so I probably missed many things outside my scope. Second, I was a little turned off by this year’s slogan, “Where the Brightest Minds Inspire Action.” I have to admit that it did its job: as soon as I arrived at the airport I took a picture of myself with the poster to post on social media. But after the fact I thought, what about those even smarter physicians, or, even better, shouldn’t it be all about the patient? In any case, it was a nice marketing scheme.
So, what about my top 10 hot topics of this year?
  1. Cognitive computing is the solution to all problems in healthcare (or maybe not?) – Virginia “Ginni” Rometty (IBM CEO) said in her keynote speech, “We’re at a moment where we can change large pieces of healthcare and we are at a point where cognitive computing could usher in a new golden age in healthcare.” Note that cognitive computing is defined as “the simulation of human thought processes in a computer using techniques such as AI, machine learning, and neural networks.” So, is the solution to our healthcare issues to replace physicians with computers? Maybe not quite yet. Cognitive computing can be used to guide precision medicine, such as tailored drug and therapy treatment for cancer, and yes, it can also be used to create better outcomes by using data mining to indicate more effective treatments. But I believe we have some major issues to deal with first, such as information about patients sitting in different data silos that is still incompatible, hard to extract and exchange, and, lastly, semantically very differently encoded. And even assuming we do eventually get all that data at our fingertips, replacing the thought process of an experienced physician is somewhat more complex than winning a chess or Jeopardy game (which is what Watson is known for). So, in my opinion, Watson may not be a solution for a while.
  2. Clinical Document Architecture (CDA) is not (quite) dead but “in hospice.” – This is a quote from Keith Boone, an interoperability expert and guru who wrote a blog about this particular subject two years ago. CDA is the document standard that was defined as part of HL7 version 3 and was supposed to become the norm for exchanging documents out of, and between, EMRs. For example, a physician can access a CDA-encoded discharge record for a patient and import it into his or her own EHR by requesting the CDA from the hospital. There are several templates defined as part of the CDA definition, such as care record summaries, clinical notes, and several others. The expectation is that CDA is going to be replaced by Fast Healthcare Interoperability Resources (FHIR, see below) in the next two or three years. However, I saw quite a few working demonstrations of CDA exchanges, and, talking with HIE executives, it appears that for several applications this is the most common interface. And as we know in healthcare, there is big resistance to changing something that works (witness the death of HL7 version 3). So, even though FHIR will start to take off in a few years, there remains plenty of opportunity to fix the issues with CDA, which mostly concern its semantic interoperability, and to keep it for certain use cases. So, in my opinion, CDA is not going to die soon and will, for certain use cases, continue to be a good way to exchange information between EMRs.
  3. FHIR year-three is still in its teens. – If you have teenagers you know what this means: it is unpredictable, and you are often holding your breath, hoping that common sense will (eventually) prevail. Despite (or maybe thanks to) the strong support from the ONC, many people don’t realize that FHIR is still a draft standard: many resources are still to be defined, its loose definition allows many options, and there is also the issue of having different releases out there. So, it will take a few years to get there, and at that time it will be used next to conventional HL7 version 2 messages, CDA and the occasional version 3 message to exchange information.
  4. Imaging is still an IT stepchild. – As with any stepchild, imaging does not get the attention it deserves and is underappreciated and underserved at this meeting. The HIMSS program committee does not seem to realize that the CIOs who visit HIMSS will never set foot in any of the imaging trade shows, so in order to bridge the gap between IT and imaging, it is essential that there be education about, and exposure to, the complexities of storing, archiving, managing and exchanging patient images. It is not the vendors’ fault: if you wanted to learn about the new enterprise image solutions using VNA technology, all of the major (and minor) players were present. But the lack of educational sessions on this topic was discouraging.
  5. The HIMSS Interoperability Showcase is growing up. – As of the second day, there were more visitors at the HIMSS Interoperability Showcase booth than last year (more than 7,400). The showcase demonstrated true interoperability by having multiple vendors in the same booth working through real-life scenarios. One might wonder why there is so much more interest. Here are some of the questions I got while working at the IHE information booth this year:
    From a large insurance company – Why do I have to develop custom interfaces for each provider’s EMR to get data we can use to optimize our billing, reimbursement, actuarial predictions, etc.?
    From a device vendor – How can I upload my data into the EMR using standards?
    From an HIE provider – Why are there so few people supporting XDS for cross-enterprise data exchange?
    From a PACS vendor – We provide XDS capability, but why don’t we get any “takers” in the US?
    From hospital IT people – When can I get what I see here in my hospital?
    So, in my opinion, the increased interest in these exhibits is due to the fact that people are starting to realize that there is indeed another way of integrating systems, i.e. by using industry standards such as the IHE profiles, but that there is still a big difference between what was shown (the “show and tell,” or what we in our own jargon call the “dog and pony show”) and what is available in the real world.
  6. IHE Connectathon is maturing. – The IHE connectathon took place in Cleveland just a few weeks prior to the HIMSS17 meeting. It is somewhat of a “testing ground” to prepare for the IHE showcase event, but it also stands on its own merits, as it is a great opportunity for the vendor community to test their applications among themselves. As the connectathon attendees can attest, and in contrast to the IHE showcase attendance, connectathon attendance has been declining over the past few years: the number of health IT systems tested dropped between 2015 and 2017, as did the number of participating organizations. IHE USA is still investigating the trends and the reasons for the drop, but in my opinion it might just be time to pause and make sure there is time to implement all of these profiles from the different domains. It is good for standards to set the path ahead, but the industry needs time to implement them, and users need to demand, test and deploy them and make the appropriate workflow and other changes that come with implementing new technologies.
  7. Enterprise-wide image exchange is still a challenge. – The joint SIIM/HIMSS workgroup on enterprise imaging reported its results at the meeting; it has produced several white papers that are available for free on their website and cover all aspects, including governance, image ownership, encounter-based imaging issues and viewing. The problem is not only how to manage and exchange these images, but also how to acquire them in a consistent manner, especially from the non-DICOM “ologies.” Taking a picture on a smartphone to be uploaded into the patient EMR is not trivial, as it requires consistent and unique metadata to be generated, which can sometimes be derived from order information, but often there is no order available. A couple of follow-up working groups have been established that basically take these white papers a step further and educate users about the issues and resources. In addition, there is a working group evaluating the Digital Imaging Adoption Model (DIAM), as defined and used by the European HIMSS division, to see how it can be made applicable to the US.
  8. It is very hard (if not impossible) to get physicians to give up their pagers. – There are still about 1.5 million pagers in use in the US, almost exclusively by healthcare practitioners. There are several reasons for this, most of them related to habit, as with the appropriate secure messaging software there is no reason one can’t use a smartphone instead. A physician using a smartphone can link directly to the EMR, look up on-line resources as needed and even pull up an image, none of which is possible with a pager. Dr. Sean Spina from Island Health in Canada did an experiment with his pharmacy staff and found that when using smartphones, the average response time for messages was reduced from five and a half to three minutes, and the time for high-priority calls was reduced from 19 to 5 minutes. However, it really requires top-down enforcement: one vendor commented that if they sold, say, 1,000 licenses for their messaging, after one year at most 200 would be in use; the remainder would still be hooked on their pagers.
  9. Patient engagement through messaging is critical. – Another important type of secure messaging that is evolving, and is also critical for outcomes, is messaging to patients. A high percentage of patients do not take critical medications, and simple follow-up texts have been shown to make a major difference, even potentially impacting hospital readmission rates.
  10. The last observation is that gadgets and happy hour rule at these types of events. – If, walking down the aisles, you wondered why there was more traffic and people seemed more vocal late in the afternoon, it was because the “bar is open” after 4 pm. Not only could you purchase beer, but you could also get free Belgian beer from certain European vendors and, of course, Bud Light from the US vendors, and here and there a good glass of wine. And gadgets still rule (my favorite give-away: Vespa scooters): get your stuffed animals, pens, USB sticks, shopping bags and much more in return for vendors scanning your badge so they can follow up with junk mail.
This was another good trade show; it had record attendance because the who’s who in healthcare IT was there, and there were some pretty good talks, even though it was often hard to spot them in the myriad of presentations. As one of my vendor colleagues commented, “your customers expect you to be at HIMSS, whether you like it or not.” One common complaint was that the trade show floor was very elongated: getting from one side to the other was more than half a mile and took me at least 10 minutes at a brisk pace. But I heard few complaints about the content. HIMSS18 will be in Vegas again, a favorite location for many, including myself. Looking forward to it!


Monday, January 30, 2017

IHE 2017: It’s all about device connectivity.

This is the 19th year of the IHE North American Connectathon, which has been held in Cleveland, Ohio for the past several years. The event brings together 60-plus healthcare imaging and IT vendors for a week of collaboratively connecting their systems. There were 115 systems this year, each prepared to test interconnectivity. The goal is to reduce on-site testing and integration, and ultimately to advance health IT and patient care.

There were 400 engineers in attendance, and in addition there were 65 so-called monitors who check the connectivity results. The general connectathon rule is that a vendor has to show it can communicate with at least three other systems from different vendors to claim it passed a specific test. The systems under test on the IHE NA Connectathon floor represent most of the top health IT vendors, providing systems such as EMRs, patient care devices, imaging modalities, image archiving systems (PACS) and review stations. Except for small devices such as monitoring systems and infusion pumps, which are easy to carry, the engineers bring simulators that run the same software that creates MR, CT, nuclear medicine or mammography images, and that stand in for the big archiving and communication systems hosting the images and results.

This big event poses the question: how relevant is the support of these profiles, which are based on standards such as DICOM and HL7, and what does it mean to support them? As an example, last year the emphasis at the connectathon was on testing the XDS (Cross-Enterprise Document Sharing) and related imaging-sharing profiles, which have been very slow to be deployed this past year. Yes, most systems now support these profiles, allowing information to be shared, identified and managed between different enterprises; however, with the exception of implementations in the UK, there have been relatively few deployments in other countries, including the USA.

In the US this might be caused by the demise of several public Health Information Exchanges (HIEs) due to a lack of funding and failing business models, which basically took away the infrastructure for exchanging information using these profiles. Another barrier is the widespread use of proprietary interfaces by many cloud providers to exchange these images.

A major trend at the 2017 connectathon was the emergence of patient care devices and web services, i.e. FHIR and DICOMweb. The adoption of these standards might be much faster than that of the ones dealing with image and information sharing, as there is a major benefit to be achieved in patient safety and efficiency by using intelligent devices. As an example, anyone who has recently been hospitalized and watched what a nurse does every time he or she changes the supply for an infusion pump can attest: all those changes are entered into the patient record manually. An intelligent infusion pump using the IHE profiles will be able to update the patient record automatically, producing major savings in efficiency and a potential reduction of errors, resulting in better patient safety.
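As a rough illustration of what such an automatic update could look like, here is a sketch of a device posting a FHIR Observation over plain web services (Python with the requests library; the server URL, patient reference and values are fictitious, and a real implementation would use proper coded terminology and authentication):

    import requests

    observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {"text": "Infusion rate"},          # a real device would send a coded value
        "subject": {"reference": "Patient/12345"},  # hypothetical patient resource
        "valueQuantity": {"value": 125, "unit": "mL/h"},
    }
    # POST to a (fictitious) FHIR server endpoint
    response = requests.post(
        "https://ehr.example.org/fhir/Observation",
        json=observation,
        headers={"Content-Type": "application/fhir+json"},
    )
    response.raise_for_status()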

Another promising area is the upload of images from mobile devices into a PACS or electronic record. DICOM has added web services capability to its standard, allowing a phone app that takes a picture, for example for wound care or dermatology, to upload the images securely using widely available web services interfaces. This could become a “killer app” that drives widespread implementation.
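For illustration, a DICOMweb STOW-RS upload is little more than an HTTP POST with a multipart body, as in this sketch (Python with the requests library; the endpoint URL and file name are fictitious, and authentication is omitted):

    import requests

    with open("wound_photo.dcm", "rb") as f:  # DICOM object created by the app
        part = f.read()

    boundary = "DICOMBOUNDARY"
    body = (
        (f"--{boundary}\r\nContent-Type: application/dicom\r\n\r\n").encode()
        + part
        + f"\r\n--{boundary}--\r\n".encode()
    )
    response = requests.post(
        "https://pacs.example.org/dicomweb/studies",  # fictitious STOW-RS endpoint
        data=body,
        headers={"Content-Type":
                 f'multipart/related; type="application/dicom"; boundary={boundary}'},
    )
    response.raise_for_status()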

Overall, the connectathon was well attended, albeit with lower attendance than last year (I estimate 10 to 20 percent lower). It will be interesting to see next year whether this is a trend. This event is a major investment in time and effort: preparation, including the creation of tools, simulators and test sets and performing pre-tests, is an order of magnitude bigger than the actual testing week. It appears that several companies are skipping a year and attending every other year instead of every year.


For smaller vendors especially, it is still a major opportunity to test their applications against many of the big players. It also provides good insight into how robust some of the implementations are, as several of them had major issues when trying to connect. So, overall a successful event; from my perspective it is worth attending again next year.

Thursday, January 12, 2017

Why would I use a medical grade monitor instead of a commercial one to view X-rays?

I get this question all the time: why should I pay thousands of dollars for a medical grade monitor to diagnose digital X-rays (CR/DR) if I can buy a very nice-looking commercial off-the-shelf (COTS) monitor at the local computer store? I have boiled the argument down to six important reasons, which are (hopefully) simple to understand and will allow you to convey the case to radiologists or administrators who have little technical or physics background.

1. A commercial grade monitor does not show all of the critical anatomical information. As the name implies, COTS monitors are intended for office automation, to display documents so they appear like a printed page. Performance attributes are therefore weighted heavily toward being as bright as possible, so that text is easily resolved with minimal eyestrain. Commercial displays consequently attain maximum luminance well before the graphics card input reaches its maximum value. Remember that a typical graphics card can output 256 different values, each representing a distinct piece of valuable diagnostic information. These monitors have been observed to max out at an input value as low as 200, which means values 201 to 255 are all mapped to the same (maximum) luminance. In other words, roughly 20 percent of the data is clipped, or simply eliminated.

By contrast, medical grade monitors are calibrated to map each distinct pixel value into something you can detect, rather than following the natural response of the graphics card output. Unfortunately, it is normal for the natural COTS monitor response (uncorrected to DICOM) to yield the same measured luminance for multiple sequential input values, i.e. a flat spot in the response curve. These flat spots are especially common in the low range, i.e. the first 160 of the 256 values.

What is the impact of a flat response? As an example, on a commercial grade monitor the pixel values 101, 102, 103, 104 and 105 could all be mapped into a single luminance value on the screen. That means that a subtle nodule that is distinguished by the difference between values 102 and 105 will disappear, as there is no distinction between those values on the monitor. And since the better part of the clinical information from the imaging modalities is in the lower 50 percent of the luminance range, the ability to resolve pixels at different luminance values is compromised exactly where it matters most.
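If you have access to a photometer sweep of your monitor, checking for flat spots is straightforward, as in this sketch (Python with numpy; the response curve is simulated for illustration, and a rigorous check would compare each step against the just-noticeable difference of the DICOM grayscale function rather than against zero):

    import numpy as np

    # Simulated photometer sweep: measured luminance (cd/m2) for each of the
    # 256 input values of an uncalibrated display (gamma 2.2, rounded to the
    # photometer's 0.1 cd/m2 resolution)
    inputs = np.arange(256)
    measured = np.round(0.05 + 300 * (inputs / 255.0) ** 2.2, 1)

    # A "flat spot" is a step between consecutive inputs with no measurable change
    flat_steps = np.flatnonzero(np.diff(measured) == 0)
    print(f"{flat_steps.size} of 255 steps produce no measurable change")
    # Most of the flat spots sit in the dark end, exactly where the clinical
    # information lives: pixel values that differ there look identical.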

In conclusion, the potential to miss critical diagnostic information, both at high luminance and due to flat spots in the response, should be the number one reason not to even consider a commercial grade monitor. The first requirement for medical monitors is therefore to insist on a monitor that is calibrated according to the DICOM standard (the Grayscale Standard Display Function), which truly maps each different pixel value into a luminance value that the human visual system can detect as noticeably different. It is best to have this calibration done at manufacturing, to get an optimal mapping of the three RGB channels onto the DICOM-compliant curve.

2. Many commercial grade monitors don’t have the required dynamic range. The maximum light output of a monitor is specified in cd/m2 (candela per square meter). A good quality commercial display can achieve 300 cd/m2, sometimes more if you are lucky. That maximum of 300 cd/m2 would be at the low end for any medical grade monitor, which might go up to 500 cd/m2 or more. Why do we need this much? The reason is that when a display is calibrated to DICOM, a percentage of the response is lost in the mapping process: at 300 cd/m2, applying the DICOM corrections, the maximum value can be expected to decrease by about 10 percent.

The human eye has about a 250:1 contrast ratio at the ambient conditions of the viewing environment. Assuming the commercial display were made DICOM compliant with aftermarket software, the luminance ratio of the display and that of the eye would be very close. However, ambient light detracts from the ability to see low contrast information; this particular example would need a low-light room to achieve a 250:1 luminance ratio inclusive of ambient light.

Medical displays are designed to operate between 400 and 600 cd/m2 as corrected to DICOM, with reserve luminance for extended life at those levels. Even if a monitor is calibrated, if there are not enough points to map the pixel data into, you clip off part of the information: if you want to map 256 grayscale pixel values but have only 200 points available, the difference is lost. The required dynamic range also depends on where you are going to use the monitor. As you are probably aware, the brighter the room, the more information you lose in the dark end, as you simply won’t be able to distinguish details in the shadows.

There is a simple fix for that: the calibration takes the room light into account and makes sure the lowest pixel value is mapped to something you can detect. The whole range is shifted, which is important when using the monitor in a bright area such as the ER or ICU. It is also good to have some “slack” in the dynamic range, as the light source of the monitor decreases over time (compare the output of an old light bulb, which gets dimmer and dimmer). Therefore, the maximum brightness needed to map the whole data range should be about 350 cd/m2[1], assuming you use the monitor in a dark environment. If you are using it in a bright area, or if you want slack to accommodate the decrease in monitor output over a period of, say, five years, you might want to go with 450-500 cd/m2.
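A back-of-the-envelope calculation shows why this headroom matters (Python; the numbers are illustrative and use the roughly 10 percent calibration loss mentioned earlier):

    # Usable luminance ratio of a 300 cd/m2 commercial display in a reading room
    L_max = 300.0   # maximum luminance, cd/m2
    L_min = 1.2     # black level, cd/m2 (illustrative)
    L_amb = 1.5     # reflected ambient light, cd/m2 (illustrative)

    L_max_cal = 0.9 * L_max  # ~10% lost to the DICOM calibration mapping

    # Ambient light adds to both ends and compresses the effective ratio
    ratio = (L_max_cal + L_amb) / (L_min + L_amb)
    print(f"effective luminance ratio {ratio:.0f}:1")  # ~101:1, well below 250:1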

3. A medical grade monitor typically adjusts its output to compensate for start-up variations. The light output of a monitor varies while its temperature stabilizes, which takes about 30-60 minutes. You can leave monitors on day and night, or switch them on automatically one hour before they are going to be used, but note that running them around the clock will drastically reduce their lifetime. Better medical grade monitors have a built-in feedback mechanism that measures the light output and adjusts the current to the light source to keep the output constant. The third requirement, therefore, is a medical grade monitor with a light output stabilizer.

4. A medical grade monitor can usually keep a record of, and track, its calibration. One of the students in our PACS training told me that he had to produce the calibration record of a specific monitor dated two years back for legal reasons, to prove that when an interpretation was made on a workstation there was no technical reason a specific finding was missed. You also need access to these records on a regular basis to make sure the monitor is still operating within the acceptable range. This brings me to another point: many users seem to replace their monitors after a period of five years; if the monitors are still within calibration, there is no reason to do so. The fourth requirement for a medical grade monitor, therefore, is to make sure that you can retrieve and store the calibration records.

5. A medical grade monitor is typically certified. There are recommendations for monitors defined by the ACR; they are somewhat technical and, in my opinion, not worded strongly enough. Also, most medical grade monitors are FDA cleared, which is actually only a requirement if you are reading digital mammography. If you meet the requirements stated above you should be OK, but FDA clearance does not hurt; you can look up the manufacturer on the FDA website to see whether its monitors have been cleared. The fifth (optional) requirement, therefore, is FDA clearance.

6. In addition to being able to see all of the grayscale, which is characterized by the contrast resolution, you also need to be able to distinguish between the different pixels, i.e. your monitor needs the right spatial resolution to show the individual details. Take a typical CR chest, which might have an image matrix of 2000 by 2500 pixels: that is 5 million pixels, or 5 MP. The standard configuration for a diagnostic monitor for reading X-rays is 3 MP, because a physician can zoom or use an electronic loupe to get a one-to-one mapping of each image pixel on the screen. One could argue that a 2 MP monitor can be used as well, and that is correct, as long as you realize that it will take more time to make a diagnosis because you need to zoom more frequently. If you are very cost sensitive, for example for a system placed in a developing country where money is a major issue, a 2 MP configuration would do. So, the sixth and final requirement is a 3 MP monitor configuration (assuming time is more important than cost).
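A quick calculation shows what this means in practice (Python; typical portrait display matrices are assumed: roughly 1536 x 2048 for 3 MP and 1200 x 1600 for 2 MP):

    # How much of a 5 MP CR chest image fits on a display at one-to-one zoom?
    image_pixels = 2000 * 2500  # ~5 MP CR chest

    for name, cols, rows in [("3 MP", 1536, 2048), ("2 MP", 1200, 1600)]:
        fraction = (cols * rows) / image_pixels
        print(f"{name}: {fraction:.0%} of the image visible at 1:1")
    # 3 MP shows ~63% at a time, 2 MP only ~38%: hence the extra zooming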

Does this mean that a commercial grade monitor cannot be used? It depends. If you are willing to calibrate the monitor manually and do so on a regular basis, by running a calibration check and making sure the correction can be applied by the monitor; if you take care of the warm-up time; if you have a monitor that meets the maximum brightness requirement; if you keep your calibration records; and if you are not worried that, in a legal dispute, the plaintiff might challenge you over the fact that you used sub-standard components that could impact patient care, well… it is up to you. But I would think twice about it, especially as the price difference between a good quality medical grade monitor and a commercial grade monitor is not that great compared with the overall cost of a PACS system.

If you are interested in more details, there is a video on how to use a test pattern to check whether a monitor is calibrated. And as always, in our PACS training classes we spend quite a bit of time going over monitor characteristics and calibration, including hands-on experience, so if you want to learn more, check out our training schedule.




[1] cd/m2 stands for candela per square meter, the unit of measurement for luminance (brightness).

Wednesday, December 7, 2016

RSNA 2016, What’s New?

This year’s annual Radiological Society of North America (RSNA 2016) tradeshow in Chicago was pretty good: the weather cooperated, the labor unrest and strikes at Lufthansa and among the workers at O’Hare ended just in time for us to get to the city, and the overall mood and atmosphere were positive. Vendors were happy about increased traffic from serious buyers and contenders, and the PACS replacement market seems to be growing. Here are my (subjective) observations:

1.       Deconstructed PACS is here to stay. I have to say that I was surprised by this, as I thought it was going to be a fad and would only be for a few selected large organizations, but it seems to have a lot of momentum. I wondered why anyone would start building a PACS from scratch using components from best-of-breed vendors, knowing that it takes a substantial investment of time and expertise. Well, the reasons are:
a.       There are significant workflow improvements that can be achieved that are not possible, or very hard to do, with “standard” PACS systems. Examples are doing efficient image exchange; having universal worklist reading from multiple, disparate PACS systems; having an elaborate pre-fetching algorithm that makes sure you pull all prior studies; and having an optimal mapping of your HL7-based orders into a DICOM Modality Worklist that supports an effective technologist workflow (see the sketch after this list), among others.
b.      There is an emergence of middleware vendors that provide routers, DICOM tag morphing gateways, worklists, decision support software and workflow support.
c.       Finally, there is a growing availability of zero-footprint viewers that are fully featured, i.e. not just lightweight clinical viewers, but viewers that can do the heavy-duty radiology work and can operate driven by the EMR, RIS and/or a general purpose worklist.
d.      VNA installations are maturing, and now give the end-user full control over the image archive instead of locking the data up in semi-proprietary vaults that required the user to keep purchasing additional licenses for blocks of studies to be archived and/or accessed.
There are still challenges, as we are finding out how much proprietary information typically flows between the various PACS components, but there is no question that the deconstructed PACS is here to stay and is a good solution for larger organizations with good in-house IT and clinical support.
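As an aside, the order-to-worklist mapping mentioned under (a) is conceptually simple, as the toy sketch below shows (Python; the message, field positions and attribute selection are heavily simplified, and a real interface engine also handles escaping, repeats and site-specific Z-segments):

    # Toy mapping of an HL7 v2 order to DICOM Modality Worklist attributes
    hl7_order = (
        "PID|1||123456||Doe^Jane\r"
        "OBR|1|ORD0001||CTHEADWO^CT Head w/o contrast\r"
    )
    segments = {line.split("|")[0]: line.split("|")
                for line in hl7_order.split("\r") if line}

    mwl_item = {
        "PatientID": segments["PID"][3],                 # PID-3
        "PatientName": segments["PID"][5],               # PID-5
        "AccessionNumber": segments["OBR"][2],           # simplified; often ORC/OBR-3
        "RequestedProcedureDescription":
            segments["OBR"][4].split("^")[1],            # OBR-4, text component
    }
    print(mwl_item)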

2.       The deep learning or machine learning hype (both are forms of artificial intelligence) has spread to radiology, causing more harm than good at its onset. A story in the September 2016 Journal of the ACR stated that it could end radiology as a thriving specialty. As Dr. Eliot Siegel from the Baltimore VA stated in the controversial session “Will Machines Replace Radiologists?,” he is already getting emails from residents asking him whether they should quit the practice. A Wall Street Journal article published on Dec. 5 discussed the AI threat and dismissed its short-term impact based on three barriers: the lack of the huge sources of data needed to “teach” these supercomputers the rules; the small achievable incremental improvements (one or two percent), which are not necessarily significant enough to justify the initial investment; and the lack of personnel to implement all of this. There is no question that improvements in technologies such as CAD will spread their use beyond the common application of detecting lesions in breast imaging, and that radiologists could use computers much more effectively, for example to automate reports using structured measurements from ultrasound. But replacing radiologists with computers will take a while, if not a couple of decades.

3.       3-D printing is gaining a lot of traction. This technology is not new; for the past 30 years the automotive industry and other industrial users have been printing models for rapid prototyping and hard-to-find replacement parts. But it has now become mainstream technology, as you can go on-line and buy a 3-D printer for a few thousand US dollars. Larger institutions are starting to set up 3-D printing labs, which is somewhat of a challenge from an organizational perspective, as the application does not quite fit one specialty: it intersects radiology, surgery, orthopedics, dentistry and others. The number of models being printed has grown exponentially; the Mayo Clinic, for example, reported that it now does several hundred models a year. The DICOM standards committee, which met during RSNA, decided to re-activate a 3-D working group to address the interoperability issues. There are several standard 3-D formats that are similar to, for example, a standard JPEG or TIFF 2-D image, but they have no place in their metadata for acquisition context, patient information and other clinically important data. In other words, we need a DICOM “wrapper” that provides this and can encapsulate the most common standard formats, similar to what is done by wrapping a PDF file into the DICOM file format. This activity is expected to give the application a major boost, so that these objects can be properly managed.
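As an illustration of what such a wrapper already looks like for PDFs, here is a sketch that wraps a PDF into a DICOM Encapsulated PDF object (Python with pydicom; the demographics and file names are made up, and several mandatory attributes are omitted for brevity). A 3-D model wrapper would follow the same pattern with a different SOP class:

    import datetime
    from pydicom.dataset import Dataset, FileMetaDataset
    from pydicom.uid import ExplicitVRLittleEndian, generate_uid

    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.104.1"  # Encapsulated PDF Storage
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = Dataset()
    ds.file_meta = meta
    ds.SOPClassUID = meta.MediaStorageSOPClassUID
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.Modality = "DOC"
    ds.PatientName = "Doe^Jane"   # fictitious demographics
    ds.PatientID = "123456"
    ds.StudyInstanceUID = generate_uid()
    ds.SeriesInstanceUID = generate_uid()
    ds.ContentDate = datetime.date.today().strftime("%Y%m%d")
    ds.MIMETypeOfEncapsulatedDocument = "application/pdf"
    ds.is_little_endian = True    # encoding hints needed by older pydicom versions
    ds.is_implicit_VR = False

    with open("model_report.pdf", "rb") as f:  # placeholder file name
        payload = f.read()
    # DICOM values must have even length, so pad if necessary
    ds.EncapsulatedDocument = payload + (b"\x00" if len(payload) % 2 else b"")
    ds.save_as("model_report.dcm", write_like_original=False)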

4.       Digital Radiography (DR) plate technology shows signs of maturing. There are three major technical challenges with DR technology that show signs of being solved in the next two to three years; they correspond to the top reasons these plates currently fail. The first is weight and lack of robustness, which stems from the fact that the detector uses glass as its main component. The same technology currently used to create thin, bendable TV screens should resolve this issue and result in super-lightweight detectors. The second challenge is to make the detectors completely waterproof, so they can withstand body fluids. The third has to do with the way the detector array is soldered, which should be done in a more robust manner. Given the recent US government regulations encouraging DR replacement of CR, volume should go up, which should bring the price of a plate down from around US $50,000 to an average of $25,000, and even lower. We are not quite there yet, but the next few years should bring some major improvements and cost reductions.




5.       Multi-modality integration is becoming more popular, including dual imaging devices such as PET-CT, PET-MRI and even SPECT combinations. In addition, it is now possible to integrate an ultrasound with a CT scan by connecting a dual camera to the probe and registering it with a prior CT scan. This could potentially even replace the often-used practice of X-ray fluoroscopy (C-arm) for visualizing needles inserted into a patient.


Usually I have a “top ten” list after RSNA, but this year I did not find that many innovations, not counting the new “60-second eye-lift” or the massage chairs. However, as mentioned, the vibe was very positive, and there was a lot of emphasis on how to become more efficient and how to improve radiology services, which, combined with concern about how the president-elect is going to impact the way medicine is practiced, resulted in a careful optimism. As always, I personally enjoy these tradeshows, as there is no better way to get updates, talk to many peers, learn a lot and know what is going on in the industry than walking the aisles!

Monday, October 24, 2016

Deconstructed PACS part 4 of 4: Q and A with industry expert/consultants

This is part 4 of 4 of the Deconstructed PACS series; the recorded video and corresponding slides of the webcast can be viewed here. In this final part we have the opportunity to interact with the VNA experts, Mike Cannavo (MC) and Michael Ryan (MR), and ask any deconstructed PACS related questions.

Q1 Would you implement the VNA before or after the PACS implementation?
(MC) I prefer the VNA to be on the front-end, because then the migration has already been done and they have experience with it. The VNA implementation can easily take a year, with the data migration taking a great part of that. Michael Ryan did it on the back-end; it is probably a personal preference. My preference is to do it on the front-end.
(MR) If you have a choice, I would say implementing the VNA first makes the most sense.

Q2 What about the dataflow, do the images go to the VNA first and then to the PACS or the other way around?
(MR) That would really depend on the facility requirement, it can be done either way.
(MC) I disagree; the data should always go to the PACS first. By the way, I typically configure the local PACS with two years of on-line storage.

Q3 Is the enterprise viewer in the VA environment used for primary radiology viewing?
(MR) There is one application for the primary radiology and physician viewer; it is not a zero-footprint, but a very small-footprint, application. That is how we can provide the clinical users with advanced viewing capabilities, especially the neurosurgeons and orthopedic specialists. They are on separate networks, though.

Q4 Is a deconstructed PACS less expensive than a “conventional PACS?”
(MC) Most PACS vendors are selling bundled solutions at 60% to 75% off list price, which means that a turnkey PACS solution will be significantly less expensive. There are also integration, testing and internal support costs to deal with in the case of a deconstructed PACS.

Q5 An issue is having consistent body-part, study and series descriptions, etc., to allow prefetching the relevant prior studies for comparison. What is your solution for that?
(MR) We standardized body-part and study descriptions. For the VA, the study description starts in our VistA EMR. We had some success standardizing it across the facilities, but it did not really meet the end-user requirements. Also, there is a trade-off between making it too generic and too specific. In our experience, “body part examined” contained a lot of “Other” values; this is especially an issue with older studies.

Q6 Why do you need a PACS archive and a VNA archive if you have a viewer with full radiology functionality and the other key pieces of the deconstructed PACS? Having a PACS seems redundant.
(MC) I don’t see the vendor portion of the archive lasting, as there is too much that is proprietary and vendor-specific, which a VNA will eliminate. It will neutralize this, which is the “N” in Vendor Neutral Archive. A VNA allows you to connect and reconnect any other PACS system without the added expense and time of a data migration.
However, we still need a PACS, as 80 percent of US hospitals are under 200 beds and the majority won’t have the resources for a fully deconstructed PACS. If an institution is part of a large enterprise and corporate resources are available, then by all means it should consider one. For right now, and for the next 2-3 years, it will be mostly for the larger institutions that can afford to support it.
(MR) There seem to be a lot of mergers and acquisitions outside the government world, so I can imagine it starting to make sense for those organizations to tie everything together and use viewers that can access multiple VNAs. Indeed, for smaller institutions it would be a challenge, both from a budgetary and a personnel perspective.

Q7 How have you implemented integration of specialized applications, such as orthopedic templating?
(MR) We used to have a third-party plug-in for our legacy PACS, and we can do the same, i.e. launch it from our new enterprise viewer. We do have a dedicated 3-D solution from a third party. For image fusion, some of our facilities use a workstation plug-in as well, and some do the fusion at their modality workstations.

Q8 What are your closing comments on this topic?
(MR) For us, the deconstructed PACS was a good solution, as we were able to find vendors that could meet our requirements. In addition, in the VA there are time and budget constraints that make piecemeal purchasing and implementation a better solution. I believe that five years from now, when my colleagues look back, they will find that it was a good decision to go this route.
(MC) The most important part is to have an action plan in place that considers what you have, what you can use and what you want to replace, and to sit down with someone who understands where you are and where you want to go. Look at the requirements and the financial and personnel resources you have. Make sure you document this, and realize that it could take 2-3 years to get to what you need. As Mike Ryan demonstrated with the VISN 23 VA implementation, you can be successful if you do your due diligence and plan carefully.



Friday, September 30, 2016

Deconstructed PACS part 3 of 4: Do’s and Don’ts from the PACSMan

This is part 3 of 4 of the Deconstructed PACS series; the full video and corresponding slides of the webcast can be viewed here.

The first thing you need to do when considering a deconstructed PACS (DP) is to know your requirements. Make sure you differentiate between what you want and what is really needed. Specifically: do you already have a solution in place that works, and, especially, how does a DP solution fit into your long-term strategy? Consider the integration with other clinical systems, and know that it takes many components to make a DP solution work, particularly the PACS and RIS.

You have to decide where the data is stored: on-site, in the cloud, or in a hybrid storage solution. Then: who owns the database, how is access provided and who manages it? Note that this is one of the major differentiators between a traditional PACS, which in many cases has locked-down access to the database, and the deconstructed PACS, which transfers ownership of that information to the client, including opening up the database schema.

It is important to know the trade-offs, in particular the gains and losses. Knowing the true cost of ownership is critical, including the support cost. This takes into account the cost of, and solution for, redundancy, which can be addressed in a central or a distributed manner. Service and support can be provided either internally or externally. Internal is typically less expensive initially, but it requires an investment in hiring the right people and providing them with training, so in the end the costs may be the same or even slightly higher.
A word of caution: never buy on price alone. No vendor will ever lose a deal on price. You need to make sure that everything is included in the purchase price, and don’t buy something you don’t really need. I have seen too many computers sitting unused in a corner of a department that were part of a deal that looked good on paper but turned out to be useless.

It is important to decide upfront who will perform the connectivity to all systems and how it will be done. You need to determine whether it will be a web-based interface, an HL7 or DICOM interface, or even a custom-designed API (application program interface) such as is commonly used to connect to your speech recognition system, or any other interface.
It is critical to get as much in writing as you can. Know that not everything can be negotiated; there are often certain policies and contract items that are standard.

Make sure you get feedback from all stakeholders before making the purchase decision, especially from the CIO and CTO; those two are key. Do your due diligence to be able to justify your purchase. Look at all financing options, in particular whether you want to finance it from the operating budget, from the capital budget, or even per click. Regardless, the term should not exceed five years, as this technology is changing too fast to know what will be preferable at that time.

In conclusion, a deconstructed PACS is a good solution for certain applications, but it is not necessarily a one-size-fits-all solution for everyone. Do your due diligence to find out whether it might be a good solution for you. It might, but then again, it might not.

Mike Cannavo, aka “the PACSman”.


Monday, August 29, 2016

Deconstructed PACS part 2 of 4: Implementation within the VA

This is part 2 of 4 of the Deconstructed PACS series, the full video and corresponding slides of the webcast can be viewed here

The VA Midwest Health Care Network, otherwise known as VISN 23 (Veterans Integrated Service Network), implemented a deconstructed PACS between September 2014 and August 2015. Our legacy PACS hardware, originally purchased as part of a Brit Systems installation, was at end of life. The leadership team in VISN 23 made the decision to proceed with a deconstructed PACS solution to replace our traditional PACS.

VISN 23 encompasses all of Minnesota, Iowa, North Dakota, South Dakota and Nebraska, plus small parts of surrounding states. The largest facility is Minneapolis, with 130,000 studies per year. All told, the 11 facilities register 460,000 studies per year.

Prior to making the decision to proceed with a deconstructed PACS, the VISN 23 PACS and Imaging Service lines achieved successful implementations of various PACS “sub-components”.  These consisted of Corepoint Healthcare’s HL7 integration engine, PowerScribe 360®, Laurel Bridge Compass® DICOM Router, Pacsgear PACS Connect®, TeraRecon Intuition® Advanced Visualization as well as various CD burning and importing solutions across the enterprise.  Having the experience of researching, evaluating and procuring these various components encouraged the teams to move forward with a fully deconstructed PACS.

The primary three components of the deconstructed PACS in VISN 23 are Visage Imaging as the viewing solution, Lexmark Acuo as the VNA and Medicalis as the radiologists’ worklist. Due to concerns about bandwidth across the enterprise, VISN 23 chose to install a Visage server at 8 of the 11 campuses, along with a local Acuo server, which Acuo labels a “temporal.” Acuo data centers are installed at both Minneapolis and Omaha, with DICOM replication between the two. The Medicalis servers are installed in Omaha, as are the enterprise HL7, modality worklist and PowerScribe servers.

Prior to deconstructing the PACS, VISN 23 made extensive use of DICOM routers to ingest studies from the modality layer. A third-party modality worklist solution purchased from Pacsgear was also implemented well ahead of the deconstructed PACS. This allowed the biomed teams to fully configure the modalities to interact with these two systems before, during and after implementing the deconstructed PACS, which freed up time and resources during the actual implementation of the primary three components.

The VISN 23 team faced several challenges during implementation. First, we discovered that new internal policies prevented installation of the Visage viewer on enterprise desktops for clinical use. Second, since the legacy PACS hardware was at end of life, implementation began before the legacy studies had been migrated. Therefore, at the first site, we initiated a “just in time” or “ad hoc” migration, meaning priors were retrieved from the legacy systems for current studies as they were performed. However, since we had to maintain the legacy PACS for the enterprise desktop viewer, we had to be careful not to overburden the legacy PACS with prior retrievals. We managed this, but it was a delicate balancing act that went on for nearly six months.

Another challenge VISN 23 faced (and will continue to face, regardless of PACS type) is that the VA’s HIS/RIS, known as VistA, only generates an HL7 message at the time of patient registration. This means there is essentially no pre-fetch, but rather a “post-fetch” or “just-in-time fetch.” As we worked through the issues, there were times when priors were not fully available to the radiologists. In response, a few users innocently fetched entire jackets for multiple patients to get priors, which caused serious system performance issues. This was easily remedied with education.

VISN 23 teams also discovered during the migration and priors retrieval that there were inconsistencies in some DICOM tags on the legacy studies. We addressed this by using the evaluative tag morphing and writing capabilities of the DICOM routers mentioned earlier.

Lastly, at the request of the radiologists, support teams went back to the study description source in VistA’s RIS and improved the descriptions. For example, if a CT Chest and a CT Abdomen/Pelvis were acquired together, all of the images were usually stored under the CT Chest description. We modified the description for these studies to read “CT (CAP) Chest.”

We achieved several successes during our implementation. The viewer and VNA achieved a very tight integration for study and patient splits, edits, merges and so on. We found it much easier to view images from other facilities. Clinical staff easily adapted to the Visage viewer on the enterprise desktop. The tag morphing and writing will lead to a much cleaner database. And the server-side rendering of the Visage viewer allowed near-instant viewing of even volumetric CT studies using minimal bandwidth.

In summary, our vendors worked remarkably well together. VISN 23’s experience proved that a deconstructed PACS is a feasible alternative even in a challenging security environment such as the VA.

The author, Michael Ryan, played a leading role in the implementation of the deconstructed PACS in the VA Midwest Health Care Network (VISN 23). Michael has since retired from the VA and now provides consulting services as MCR Consulting, LLC. You can reach Mike at mcryan@me.com.