Friday, September 29, 2017

Vendor Neutral PACS Administrator training

A red light on my dashboard suddenly came on saying “no charging.” The battery indicator still showed at least 12 volts, so I chose to continue my errand and take care of it when I got back home. That was a mistake, as I found out when my car stalled at a red light in a busy intersection. I should have turned around right away and gone to a garage to have my broken alternator repaired. The event got me thinking: all of us are taught to drive a car before getting a license, but we aren’t taught basic troubleshooting of issues that might occur, so these kinds of events can happen to anyone.

The same can be said of training as a PACS administrator. Similar to when a car salesman explains where to find the blinker and light switch, and possibly even how to set the clock on your car, there is little vendor training about how a PACS functions, what can go wrong, and how to interpret the “error messages.”

The good news is that cars have gotten pretty reliable; you don’t need to be a part-time mechanic anymore to operate one. The bad news is that the same is not true of supporting a PACS. These are complex software applications, which definitely can have bugs, and they are subject to many user errors and/or integration issues that can cause images and related information to be unavailable or incorrectly presented to a physician.

Even if you are trained on a PACS from a specific vendor at a particular release, it does not mean you have been taught the fundamentals. For example, what happens if the PACS rejects an image because it has a duplicate Accession Number, Study Instance UID, Series Instance UID, or SOP Instance UID?
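If the cause turns out to be a duplicate SOP Instance UID (for example, an object that was modified and resent), one common remedy is to assign a fresh, globally unique UID before resending. DICOM PS3.5 permits UIDs derived from a UUID under the “2.25” root; here is a minimal sketch using only the Python standard library (the helper name is my own):

```python
import uuid

def new_dicom_uid() -> str:
    """Generate a globally unique DICOM UID under the UUID-derived
    '2.25' root (DICOM PS3.5 Annex B.2)."""
    return "2.25." + str(uuid.uuid4().int)

uid = new_dicom_uid()
# DICOM UIDs may contain only digits and dots, max 64 characters
assert len(uid) <= 64 and all(c in "0123456789." for c in uid)
```

Whether re-UIDing is appropriate depends on the situation: blindly regenerating UIDs can break references from presentation states or key object notes, which is exactly the kind of judgment the fundamentals training prepares you for.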

Vendor-specific training does not cover what could have caused it or how to fix it. Nor does it cover what a “DICOM error” means, how to interpret the log files, or what to do if a modality does not display a worklist. What if images are randomly “dropped” when sending from a modality to the PACS? The easy answer is: call the vendor. But what if there is finger-pointing going on between the modality, RIS and PACS vendors, or what if the vendor is not going to be on-site for another four hours and your PACS is refusing to display any images?
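While waiting for the vendor, some triage is possible with very basic tools. For instance, before suspecting DICOM itself, verify plain TCP reachability of the PACS listener from the modality’s network segment. This sketch uses only the Python standard library; the hostname and port are illustrative placeholders, and a real C-ECHO from a DICOM toolkit would be the proper next step up:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.
    This only proves network reachability, not a working DICOM
    association (use a C-ECHO for that)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hostname and port):
# reachable = can_connect("pacs.hospital.local", 11112)
```

If this fails, the problem is cabling, routing or a firewall, not the DICOM configuration, and you have already narrowed the finger-pointing considerably.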

I could go on and on listing situations that are not covered by a vendor-specific PACS training program; these are exactly what Vendor Neutral PACS Administrator (VNPA) training teaches. That is why many PACS administrators search for “neutral” training providers that do teach the fundamentals.

The generic or neutral training is also a great track for professionals who would like to get into this field, whether crossing over from a related career such as healthcare IT or from a clinical specialty such as radiologic technology.

The PACS fundamentals training covers subjects such as DICOM and HL7 basics and troubleshooting. It also covers new developments: Vendor Neutral Archives (VNA), how to implement enterprise image archiving, what to look for when you get a new breast tomosynthesis modality or IV-OCT in cardiology, and the characteristics of the new encounter-based specialties such as surgery, endoscopy and, in the future, digital pathology.

As an additional bonus, you can even consider getting certified as a PACS administrator, at the basic, advanced or DICOM certification level.
So, even if you have had vendor-specific PACS administrator training, consider Vendor Neutral PACS administrator training as well. The fundamentals will empower you to be a mediator between vendors who are pointing fingers at each other and blaming “the other” as the culprit, to perform basic troubleshooting yourself without having to wait for your vendor to show up, and to be prepared for new developments in PACS and modality technology.

Thursday, September 21, 2017

PACS and Cyber Security.

There is a lot of anxiety around cybersecurity, especially after the recent ransomware incidents which basically shut down several hospitals in the UK and affected several institutions in the US. The question is whether we should be concerned about potential cybersecurity breaches in our PACS systems, and how to prevent, diagnose and react to them.

At the recent HIMSS security forum in Boston, a distinguished panel rated the security performance and readiness of healthcare IT systems at around 4 on a scale of 1 to 10. That is certainly troublesome. Combined with the fact that breaches in healthcare systems are by far the most frequent, as they are potentially more rewarding for hackers than, for example, stealing credit card information, it means this industry still has a lot of catching up to do.

The problem is also that the vulnerabilities are increasing as the Internet of Things (IoT) expands exponentially, with as many as 10 million devices being added every day and an estimated 20 billion devices by 2020. The IoT includes medical imaging devices, which may put PACS in the high-risk category, as downtime could mean no access to images, directly impacting patient care. However, there are even higher-risk devices that have proven to be potential targets for intrusions, such as IV pumps that administer drugs, implantable pacemakers and personal insulin pumps, where tampering can be immediately fatal to a patient. One can compare this threat to that posed to the controls of a self-driving car: a hacker turning the steering wheel into oncoming traffic is as dangerous as one increasing the drip rate of a morphine infusion pump.

Now getting back to PACS: if a hacker gains access to a patient imaging database, there are typically no Social Security numbers, addresses, credit cards or other potentially lucrative personal information stored in the PACS. A more likely scenario is that the PACS is used as a “backdoor” into the EMR or hospital information system, either to shut it down as a ransomware threat or to get at the more extensive patient records in other systems. The prevailing opinion is that ransomware is the most likely scenario, as it gives immediate rewards (pay $xxx or else…) instead of having to sell the patient records on the black market.

So, how can vendors and institutions prepare? First of all, no system can be made totally foolproof, just as no lock can be strong enough to protect against every type of attack. If someone is really motivated and willing to spend the time, there is always going to be a way to break in. The good news is that a typical hacker is apparently willing to spend, on average, a mere 150 hours on one attempt; after that, they will move on to find another target that may be easier to break into.

This could be different if the attacker represents a nation-state that wants to access the records of military personnel served by a DOD hospital; such attackers have all the time in the world, which is why the VA, DOD and other military healthcare institutions have a much stricter set of cybersecurity rules. And the threat is real: according to the recent HIMSS security survey, more than 50 percent of the respondents reported that they had been the subject of a known cyber-attack over the past 12 months. The emphasis is on “known,” as it typically takes more than 200 days to detect an intrusion.

The key preparation for every healthcare IT system is “basic hygiene,” analogous to hand-washing to prevent infections. Cybersecurity “hygiene” starts with updating your operating systems and implementing patches as they come out. As an illustration, the WannaCry ransomware attack, which affected about a quarter of a million computers in some 150 countries, exploited a flaw in the Microsoft OS for which a fix had been distributed two months prior to the attack. Basic “cyber hygiene” also includes password updates, multi-factor authentication, closing down unused ports, segmenting your network, disabling flash drives, using virus scanners and firewalls, etc. Also, make sure you have a backup and/or duplicated system so that as soon as your system goes down you can still operate.
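One concrete piece of hygiene is keeping an integrity baseline of critical files (configurations, executables) and checking it regularly, so tampering is noticed in days rather than the 200-plus days cited above. A minimal sketch, where the function names and the {filename: digest} baseline format are my own illustration, not a product feature:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(baseline: dict, root: Path) -> list:
    """Compare files under `root` against a saved {name: digest}
    baseline; report anything new or modified since the baseline."""
    return [p.name for p in sorted(root.iterdir())
            if p.is_file() and baseline.get(p.name) != sha256_of(p)]
```

Production-grade tools (tripwire-style integrity monitors) do the same thing with signed baselines and alerting; the point is that the underlying idea is simple enough that there is no excuse not to have it.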

A comprehensive cybersecurity program has to be in place, and that includes allocating resources. As an example, Intermountain Healthcare has an IT staff of 600 people to support its 22 hospitals and 180 clinics, with 70 of those people (12%) dedicated to cybersecurity. That is an exception; the average share of the IT budget allocated to cybersecurity is only about 6-8%.

There are lots of resources to get started. The best known and most used is the NIST security framework; there is also a very extensive certification, called HITRUST, that is becoming more popular. At a minimum, one can start by looking at the MDS2 (Manufacturer Disclosure Statement for Medical Device Security) form developed by NEMA and HIMSS. As a vendor, one should look at these resources; as an end user, you might want to request the MDS2 and ask about HITRUST certification. Several vendors already support these.

In conclusion, a PACS is probably not the number one target for cyber-attacks, but it could be an easy backdoor into other systems holding patient and personal information that is valuable to hackers, or, even worse, it could be held for ransom. Basic cybersecurity hygiene is critical, and using the NIST and/or HITRUST frameworks can be very beneficial.

Saturday, June 17, 2017

SIIM 2017 Top Ten Observations.

The 2017 SIIM (Society for Imaging Informatics in Medicine) meeting was held in Pittsburgh, PA on June 1-3.
View back to the city from Allegheny
The meeting was well attended, both by users and by an increasing number of exhibitors. It mostly draws PACS professionals, typically PACS administrators, in addition to several “geeky” radiologists who have a special interest in medical informatics. Pittsburgh, in addition to being somewhat “out of the way,” was not a bad choice for the conference; downtown was quite nice and readily accessible, actually better than I expected. Here are my top ten takeaways from the meeting:

1.     AI (Artificial Intelligence) is still a very popular topic. The title of the keynote speech by Dr. Dreyer from Mass General says it all: “Harnessing Artificial Intelligence: Medical Imaging’s Next Frontier.” AI also goes by the name of “deep learning,” reflecting the fact that it uses large databases of medical information to determine trends, make predictions, enable precision medicine approaches, and provide decision support for physicians. Another term people use is “machine learning,” and I would argue that CAD (Computer Aided Diagnosis) is a form of AI as well. One of the major draws of this topic is that some professionals argue we won’t need radiologists anymore in the next 5-10 years, as they are going to be replaced with machines. In my opinion, much of this is hype, but I believe there are two areas where AI could have a significant impact on the future of radiology. First, for radiography screening, AI could help rule out “normal.” Imagine breast screening or TB screening of chest images: one could potentially eliminate the reading of many of them, as they would appear normal to a computer, freeing the physician to concentrate on the “possible positives” instead. Second, several new startup companies showed some kind of sophisticated processing that can assist a radiologist with diagnosis for very specific niche applications. There are a couple of issues with the latter. A radiologist might have to perform extra steps and/or analyses, which could impact performance and throughput, so the application will have to provide a significant clinical advantage. Also, licensing additional software is a cost that might or might not be reimbursed. In conclusion, AI’s initial impact will be small, and despite the major investments (GE investing $100M in analytics), I don’t think it will mean the end of the radiology profession in the near future. A quote from Dr. Dreyer sums it up: “it will not be about man vs. AI but rather man with AI vs. man without AI.”

2.     Cyber warfare is getting real. The recent WannaCry incident shut down 16 hospitals in the UK, which created chaos as practitioners had to go back to paper. As we now live in the IoT (Internet of Things) era, we should be worried about ransomware and hacking. Infusion pumps, pacemakers and other devices can be accessed, and their characteristics and operating parameters can be modified. It is interesting that HIPAA regulations already cover many of the security measures that could prevent and/or manage these incidents, but in the past most institutions focused mostly on patient privacy. Of course, patient privacy is a major issue, but it might be prudent for institutions to shift some of the emphasis to network security, as a breach there could be potentially more damaging. Imagine the impact of one patient’s privacy being compromised vs the impact of infusion pumps going berserk, or a complete hospital shutdown.

3.     Facilitating the management of images created by the “ologies” is still very challenging. Enterprise imaging, typically done using an enterprise archive such as a VNA as the imaging repository, is still in its infancy. The joint HIMSS/SIIM working group has done a great job outlining all of the needed components and defining somewhat of an architecture, but there are still several issues to be resolved. When talking with the VNA vendors, the top issue that seems to come up universally is that the workflow of non-traditional imaging is poorly defined and does not lend itself very well to being managed electronically. For example, imagine a practitioner performing an ultrasound during anesthesia, or an ER physician taking a picture of an injury with his or her smartphone. How do we match up these images with the patient record in such a way that they can be managed? Most radiology-based imaging is order driven, which means that a worklist entry is available from a DICOM Modality Worklist provider; most of the “ologies,” however, are encounter driven. There is typically no order, so hunting for the patient demographics from a source of truth can be challenging. There are several options: one could query a patient registration system using HL7, using a patient RFID or wristband as a key; if FHIR takes off, one could use a FHIR resource as the source; one could use admission (ADT) transactions instead; or one could build a direct interface to a proprietary database. There are probably another handful of options, which is exactly the problem: there is no single standard that people are following. The good news is that IHE is working on the encounter-based workflow, so we eagerly await their results.
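To make the HL7 option concrete: patient demographics in an ADT feed live in the PID segment of a pipe-delimited v2 message. A minimal parsing sketch follows; the sample message is fabricated, and a real interface would use a proper HL7 library and handle escaping, repetitions and varying versions:

```python
def parse_pid(hl7_message: str) -> dict:
    """Extract basic demographics from the PID segment of an HL7 v2
    message. Segments are separated by carriage returns, fields by
    '|', components by '^' (per HL7 v2 encoding rules)."""
    for segment in hl7_message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "PID":
            name = fields[5].split("^")          # PID-5: Last^First
            return {
                "mrn": fields[3].split("^")[0],  # PID-3: patient ID
                "last": name[0],
                "first": name[1] if len(name) > 1 else "",
                "dob": fields[7],                # PID-7: YYYYMMDD
            }
    raise ValueError("no PID segment found")

# Fabricated ADT^A04 (patient registration) example
msg = ("MSH|^~\\&|REG|HOSP|PACS|RAD|202301010830||ADT^A04|123|P|2.3\r"
       "PID|1||MRN12345^^^HOSP||Doe^Jane||19800101|F")
demo = parse_pid(msg)
```

The hard part, as noted above, is not the parsing but deciding which of the many possible sources of truth everyone will agree to use.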

4.     Patient engagement is still a challenge. There is no good definition of patient engagement in my opinion, and different vendors are implementing only piecemeal solutions. Here is what HIMSS has to say about this topic:
Patient engagement is the activity of providers and patients working together to improve health. A patient’s greater engagement in healthcare contributes to improved health outcomes, and information technologies can support engagement. Patients want to be engaged in their healthcare decision-making process, and those who are engaged as decision-makers in their care tend to be healthier and have better outcomes.
Many think of patient engagement as being equivalent to having a patient portal. The top reasons patients want to use a portal are making appointments, renewing prescriptions and paying their bills. However, none of these is a true clinical interaction. Face-to-face communication using, for example, Skype or another video service, or simply an email exchange dealing with clinical questions, is very important. One of the issues is that the population group that is first to use these portals is also the group that already takes responsibility for its own health.
The challenge is to reach the non-communicative, passive group of patients and keep a check on their blood pressures, glucose levels, pacemaker records, etc. Also, portals are not always effective unless they can be accessed using a smartphone. This assumes, of course, that people have a phone; one of the participants in the discussion solved this by providing free phones to the homeless, so that texts can be sent for medication reminders and check-ups. Different approaches are also needed: as a case in point, Australia made massive investments in patient portals, but because patients had to actively opt in, only 5 percent of them were using the portals.
One of the vendors showed a slick implementation whereby the images of a radiology procedure were sent to the personal health record in the cloud and from there could easily be forwarded to any physician authorized by the patient. This is a major improvement and could impact the CD exchange nightmare we are currently experiencing. I personally take my laptop with my images loaded on it to my specialists as I have had several issues in the past with the specialists having no CD reader on their computers or lacking a decent DICOM viewer. There are still major opportunities for vendors to make a difference here.

5.     FHIR (Fast Healthcare Interoperability Resources) is getting traction, albeit limited.
Packed rooms for educational sessions
If you want one good example of hype, it would be the new FHIR standard. It has been touted as the one and only solution for every piece of clinical information and has even made it into several of the federal ONC standard guidelines. Now back to reality. The standard is on its third release of the Draft Standard for Trial Use (DSTU3); typically there is only one draft before a standard, and it is still not completely done. The number of options is concerning as well. And then, assuming you have an EMR that has just introduced a FHIR interface (maybe DSTU version 2 or 3) for one or more resources, are you going to upgrade it right away to make use of it? To be honest, yes, it will very likely be used for some relatively limited applications; examples are the Practitioner resource used by the HIE here in Texas to find information about referrals, or, as one of the SIIM presenters showed, a FHIR interface to get reports from an EMR to a PACS viewing station. But there are still many questions to be addressed before we get what David Clunie calls “universal access to mythical distributed FHIR resources.”
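As a taste of why FHIR nevertheless appeals to developers: resources are plain JSON over REST, so extracting data takes only a few lines. A sketch of pulling a display name out of a Patient resource; the field shapes follow DSTU3 and the sample resource is fabricated, not from any real server:

```python
import json

def patient_display_name(resource_json: str) -> str:
    """Pull a human-readable name out of a FHIR Patient resource.
    FHIR encodes names as a list of HumanName objects with
    'family' and 'given' parts."""
    patient = json.loads(resource_json)
    if patient.get("resourceType") != "Patient":
        raise ValueError("not a Patient resource")
    name = patient["name"][0]
    return " ".join(name.get("given", []) + [name.get("family", "")])

# Fabricated example of what GET [base]/Patient/{id} might return
sample = json.dumps({
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Doe", "given": ["Jane"]}],
})
print(patient_display_name(sample))  # Jane Doe
```

Compare that with parsing an HL7 v2 PID segment and it is easy to see where the enthusiasm comes from, even if the standard itself is still settling.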

6.     The boundary between documents and images remains blurry. When PACS were limited to radiology images, and document management systems were limited to scanned documents, life was easy and there was a relatively clear division between images and documents. However, this boundary has become increasingly blurry. Users of PACS started to scan documents such as orders and patient release forms into the PACS, archiving them as encapsulated DICOM objects, either as bitmaps (aka “Secondary Captures”) or as encapsulated PDFs. Some modalities, for instance in ophthalmology, started creating native PDFs, and bone densitometry (“DEXA”) scanners were showing thumbnail pictures of the radiographs with a graph of the measurements in PDF format. Then we got the requirement to store native PNG, TIFF and JPEG files, and even MPEG videos, in the PACS as well. At the same time, some of the document management systems started to store JPEGs as well as scanned ECG waveforms. By the way, there has been a major push for waveform vendors to create DICOM output for their ECGs, which means they would now be managed by a cardiology PACS. And managing diagnostic reports is an issue by itself: some store them in the EMR, some in the RIS, some in the PACS and some in the document management system. That the boundary is not well defined is not so much the issue; what matters is that each institution decides where the information resides and creates a universal document and image index and/or resource, so that viewers can access the information in a seamless manner.
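Because a single archive may now receive DICOM files, PDFs and consumer image formats, it helps to recognize payloads by their file signatures rather than trusting extensions. A minimal sniffing sketch, with signatures taken from the respective format specifications (the function name is my own):

```python
def sniff_format(data: bytes) -> str:
    """Identify common document/image payloads by their signatures.
    DICOM Part 10 files have 'DICM' after a 128-byte preamble; PDFs
    start with '%PDF'; JPEG starts FF D8; PNG has an 8-byte magic."""
    if len(data) > 132 and data[128:132] == b"DICM":
        return "dicom"
    if data.startswith(b"%PDF"):
        return "pdf"
    if data.startswith(b"\xff\xd8"):
        return "jpeg"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    return "unknown"
```

An enterprise index can record the sniffed type alongside the storage location, so a viewer knows whether to launch a DICOM renderer or a PDF reader.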

7.     The DICOMWeb momentum is growing. DICOMWeb is the DICOM equivalent of FHIR and includes what most people know as WADO, i.e. Web Access to DICOM Objects, but there is more to it: it also allows images to be uploaded (STOW) or queried (QIDO), and it even provides a worklist service allowing images to be labeled with the correct patient demographics before sending them off to their destination. There are three generations of web access in DICOM, each building on the previous one with more functionality and more advanced technology, keeping them current with state-of-the-art web services. One should realize that the core of DICOM, i.e. its pixel encoding and data formats, is not changed; we still deal with “DICOM headers.” Rather, the protocol, i.e. the mechanism to address a source and destination as well as the commands to exchange information, has become much simpler. As a matter of fact, as the SIIM hackathon showed, it is relatively easy to write a simple application using the DICOMWeb services. As with FHIR, DICOMWeb is still somewhat immature, and IHE is still trying to catch up. Note that the XDS-I profile is based on the second iteration, which uses SOAP (XML-encapsulated) messaging that has recently been retired by the DICOM standards committee. The profile dealing with the final, RESTful version of WADO, called MHD-I, is still very new. There is a pretty good adoption rate, though, and many PACS systems are implementing WADO, which, unlike FHIR, can be done with a simple proxy on top of an existing traditional DICOM interface.
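As an illustration of how much simpler the RESTful protocol is: a QIDO-RS study search is just an HTTP GET with DICOM keywords as query parameters (DICOM PS3.18). A sketch using only the standard library, where the base URL is a placeholder and the helper name is my own:

```python
from urllib.parse import urlencode

def qido_study_query(base_url: str, **matches: str) -> str:
    """Build a QIDO-RS study-level search URL (DICOM PS3.18).
    Matching parameters are DICOM attribute keywords, e.g.
    PatientID or StudyDate."""
    return f"{base_url}/studies?{urlencode(matches)}"

url = qido_study_query("https://pacs.example.org/dicomweb",
                       PatientID="12345", StudyDate="20170601")
# → https://pacs.example.org/dicomweb/studies?PatientID=12345&StudyDate=20170601
```

Issuing that GET with an `Accept: application/dicom+json` header returns matching studies as JSON; this is roughly what the hackathon participants were building against.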

The radworkflow space
8.     Ergonomics is critical for radiology. I can feel it in my arm when I am typing or using a mouse for an extended time. Imagine doing that day in and day out while staring at a screen in the half-dark; no wonder radiology practitioners have issues with their arms, necks, and eyes. Dr. Mukai, a practicing radiologist who started to rethink his workspace after having back surgery, is challenging the status quo with what he calls the radworkflow space, i.e. don’t think about a workspace but rather a flow space (see the link to his video). He built his own space addressing the following requirements:
a.     You need a curved desk area when looking at multiple monitors, with a table and chair that can rotate, making sure you always have a perpendicular view. Not only does this reduce the viewing-angle distortion of the monitors, it is also easy on your neck muscles.
b.    Everything should be voice activated, and all audio in and out should be integrated: voice control, dictation software and phone.
c.     Two steps are too many and two seconds for a retrieval is too much. It is amazing to think that image retrievals in the 1990s, using a dedicated fiber to the big monitors of the first PACS systems used by the Army, were as fast as or possibly faster than what is state-of-the-art today. Moore’s law of ever more computing power apparently does not apply to PACS.
d.    Multiple keyboards are a no-no, even when controlling three different applications on six imaging monitors (one set for the PACS, one set for the 3-D software, and one set for outside studies).
Hopefully, vendors are taking notes and will start implementing some of these recommendations; it is long overdue.

Camera mounted at Xray
9.     Adding a picture to the exam to assist in patient identification. As we know, there are still way too many errors made in healthcare delivery that could potentially be prevented. Any tool that allows a practitioner to double-check patient identity in an easy manner is recommended. A company exhibiting at SIIM had a simple solution: it takes a picture of the patient and makes it part of the study by creating a DICOM Secondary Capture image. It consists of a small camera that can be mounted at the X-ray source. I noticed two potential issues that need to be addressed. First, does it work with an MRI, i.e. what is the impact of a strong magnetic field on its operation? Second, now that we can identify the patient better, how do we de-identify the study when needed? We would need to delete that photo from the study prior to sharing it for clinical trials or teaching files, or through any public communication channel.
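The de-identification step could be as simple as filtering out the photo’s objects before export, since such photos are stored as Secondary Capture instances with a well-known SOP Class UID (DICOM PS3.6). A sketch, with instances modeled as plain dicts for illustration rather than any vendor’s API (note that a study may also contain legitimate Secondary Captures, so a real filter would need more context):

```python
# SOP Class UID for Secondary Capture Image Storage (DICOM PS3.6)
SECONDARY_CAPTURE = "1.2.840.10008.5.1.4.1.1.7"

def drop_secondary_captures(instances: list) -> list:
    """Filter a study's instance list, removing Secondary Capture
    objects (such as an identifying patient photo) before sharing
    for teaching or clinical trials. Instances are modeled here as
    dicts with a 'sop_class_uid' key -- an illustrative structure."""
    return [i for i in instances
            if i.get("sop_class_uid") != SECONDARY_CAPTURE]
```

Full de-identification would of course also scrub the demographic attributes in the remaining objects, per the DICOM de-identification profiles.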

Nice dashboard from Cincinnati Children's
10.  Dashboards assist in department awareness. I am all in favor of dashboards, both clinical and operational, as they typically allow one to see graphically what is going on. I liked the poster shown by Cincinnati Children’s: a display placed in a prominent space in the department shows its operational performance, such as the number of unread procedures, turnaround time, a list of doctors who are on call, and also a news and weather link. They pulled this data from their PACS/RIS system with some simple database queries. This is a good example of how to provide feedback to the staff.
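Those dashboard tiles really are just simple aggregate queries. A sketch against a toy SQLite stand-in for the RIS/PACS database; the schema is invented for illustration, not any actual product’s:

```python
import sqlite3

# A toy schema standing in for a RIS/PACS database (illustrative only)
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE studies (
    accession TEXT, status TEXT, minutes_to_read REAL)""")
con.executemany("INSERT INTO studies VALUES (?, ?, ?)", [
    ("A1", "unread", None), ("A2", "read", 42.0), ("A3", "read", 18.0),
])

# Two typical dashboard tiles: unread backlog and mean turnaround time
unread = con.execute(
    "SELECT COUNT(*) FROM studies WHERE status = 'unread'").fetchone()[0]
avg_tat = con.execute(
    "SELECT AVG(minutes_to_read) FROM studies WHERE status = 'read'"
).fetchone()[0]
print(unread, avg_tat)  # 1 30.0
```

Refresh the queries every few minutes and push the numbers to a wall display, and you have the essence of what Cincinnati Children’s showed.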

As mentioned earlier, I thought SIIM2017 was a pretty good meeting, not only for networking with fellow professionals, but also for learning what’s new, seeing a couple of innovative small start-up companies, especially in the AI domain, and, last but not least, enjoying a bit of Pittsburgh, which pleasantly surprised me. Next year will be in the DC area again, actually National Harbor, MD, which despite its proximity to Washington will not be a match for this year’s venue, but regardless, I’ll be looking forward to it.

Wednesday, June 14, 2017

Top 10 lessons learned when installing digital imaging in developing countries.

Patient at Zinga Children's hospital, close to Dar-es-Salaam, recipient of a Rotary International grant for imaging equipment
Installing a digital medical imaging department in a developing country is challenging, which is probably an understatement. The unique environment and the lack of resources, money and training pose barriers to creating a sustainable system.

As anyone who has worked in these countries will attest, sustainability is key, as witnessed by the numerous empty, sometimes half-finished buildings and the non-working equipment, idled for lack of consumables or spare parts, or simply for want of the correct power, A/C or infrastructure.

I learned quite a bit when deploying these systems as a volunteer, especially through gracious grants from Rotary International and other non-profits, which allowed me to travel and support these systems in the field. Some of these lessons seem obvious, but I had to re-learn that what is obvious in the developed world is not necessarily so in the emerging and developing countries of the world.

So, here are my top 10 lessons learned in the process:

1.       You need a “super user” at the deployment site with a minimum set of technical skills. Let’s take, as an example, a typical digital system for a small hospital or large clinic, which has one or two ultrasounds, a digital dental system and a digital X-ray, using either Direct or Computerized Radiography (DR or CR). These modalities require a network to connect them to a server, a diagnostic monitor and a physician viewer. Imagine that the images don’t show up at the viewing station: someone needs to be able to check the network connection and run some simple diagnostics to make sure that the application software is running. In addition to doing basic troubleshooting on-site, that person needs to function as the single point of contact for a vendor trying to support the system, and be the eyes and ears for remote support.

2.       Talking about a “single point of contact,” I learned that it is essential to have a project manager on-site: one person who arranges for equipment to be there, knows what the configuration looks like, checks that the infrastructure is ready, does the follow-up, etc. It is unusual for the local dealer to do all of this. There also might be construction needed to make a room suitable for taking X-rays (shielding etc.), A/C to be installed to prevent the computers from overheating, network cables to be pulled, and so on; there has to be a main coordinator for all of it.

3.       You also need a clinical coordinator on-site. This person takes responsibility for X-ray radiation safety (which is a big concern) and also does the QA checks, looking for dose creep (over-exposing patients) and doing reject analysis (what is the repeat rate for exams and why are they repeated). With regard to radiation safety, I have yet to see a radiation badge in a developing country, even though wearing one is common practice in the developed world for any healthcare practitioner who could be exposed to X-ray radiation. As a matter of fact, I used to carry one with me all the time when I was on the vendor side and in radiology departments on a regular basis; I would get calls from the radiation safety officer at my company when I forgot that I had left the badge in my luggage going through the airport security X-ray scanners. There is little radiation safety infrastructure available in developing countries, and the use of protective gloves, lead aprons and other protective devices is not always strictly enforced; this is definitely an area where improvements can be made.

4.       Reporting back to the donors is critical. There are basically three kinds of reports, preferably shared on a monthly basis; in fact, this is a requirement for most projects funded by Rotary International grants: 1) operational reports, which include information such as the number of exams performed per modality (X-ray, dental, ultrasound), age, gender, presenting diagnosis, exam type, etc.; 2) clinical reports, which include quality measures such as exposure index, kV, mAs, etc.; and 3) outcomes reports, which include demographics, trends, diagnoses, etc.
The operational reporting will flag potential operational issues; for example, if the number of exams shows a sudden drop, there could be an equipment reliability issue. The clinical reporting will show whether the clinic follows good practices. The outcomes reporting is not only the hardest to quantify but also the most important, as it proves to potential donors, investors and the local government the societal and population-health impact of the technology. This information is critical to justify future grant awards.
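The operational tally, for example, can come straight from the exam log. A minimal sketch, where the record structure is my own illustration of such a log:

```python
from collections import Counter

def exams_by_modality(records):
    """Tally exam counts per modality for the monthly operational
    report; `records` is an illustrative list of exam-log entries."""
    return Counter(r["modality"] for r in records)

log = [{"modality": "XR"}, {"modality": "US"},
       {"modality": "XR"}, {"modality": "DENTAL"}]
counts = exams_by_modality(log)   # counts["XR"] == 2
```

Comparing each month’s counts against the previous months is exactly the sudden-drop check described above, and it takes a spreadsheet or a few lines of code, not a big analytics platform.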

5.       Power backup and stabilizers are essential. Power outages are a way of life; every day there can be an outage of four hours or more. Therefore, backup batteries and/or generators, in addition to a local UPS for each computer for short-term outages, are a requirement. One thing we overlooked is that even when there is power from the grid, the variation can be quite large; for example, a nominal 220V supply can fluctuate between 100 and 500 volts. Needless to say, most electronic equipment will not withstand such spikes, so we had to go back in and install a stabilizer at one site after we had a burnout; it is now part of the standard package for new installs.

6.       Staging and standardization are a must. When I tried to install dental software on a PC on-site in Tanzania, it required me to enter a password. After getting back to a spot where I could email the supplier, I found that the magic word “Administrator” allowed me to start up the software, but not until a day’s work was lost, as the time difference between the US and East Africa is 9 hours. After that, it took me only 5 minutes to discover the next obstacle, “device not recognized,” which prevented the dental bite-wing sensor from being used for capturing the X-rays. This caused another day’s delay, as it took another night to get an answer to that question. This shows that installing software on-site in the middle of nowhere is not very efficient unless you have at least two weeks’ time, which is often a luxury. And this was just a simple application; imagine a more complex medical imaging (PACS) system requiring quite a bit of configuration and setup, which would take weeks.

There are a few requirements to prevent these issues:

1) Virtualize as much as you can, i.e. use a pre-built software VM (virtual machine) that can be “dropped in” on-site. The other advantage of a virtual machine is that it is easy to restore to its original condition, or to any intermediate state that has been saved. It is interesting that the “virtualization trend,” common in the western IT world as a way to save on computers, servers, and most importantly power and cooling capacity, is advantageous in these countries as well, but more for ease of installation and maintenance.

2) Stage as much as you can, but do it locally. If you preload the software on a computer in the US and ship it to, let’s say, Kenya, you will first be charged an import duty that can easily be 40%, and you also might be sending the latest and greatest server hardware that nobody knows how to support locally. Therefore, the solution is to source your hardware locally, so that there is local support and spare parts, stage it at a central local location with internet access to monitor the software installation, and then ship it to the remote site.

3) Use standard “images,” which goes back to the “cookie-cutter” approach, i.e. have a single standardized software solution for, say, three different sizes of facility (small, mid-size and large), so that variation is minimal.

7.       Use a dedicated network. This goes back to the early days of medical imaging in the western world. I remember that when we would connect a CT to the hospital network to send images to the PACS archive, it would kill the network because of its high bandwidth demands. It is quite a different story now: hospital IT departments have caught up and configure routers into VLANs with fiber and/or gigabit connections to accommodate the imaging modalities. But we are back to square one in the developing world. Networks, if available, are unreliable and may be open to the internet and/or to computers that are allowed to use flash drives (the number one virus source), so connecting new devices to them is asking for trouble. Therefore, when planning a medical imaging system, plan to pull your own cables and use dedicated routers and switches. If you use high-quality programmable, managed devices, they could become the core of a future hospital network expanding beyond the imaging department.

8.       Have an Internet connection. The bad news is that there is typically no reliable or affordable internet connection; the good news is that the phone system has leapfrogged the cable infrastructure, so you should plan for a 3G-compatible hotspot that allows a support expert to connect and take a look at the system in case there are any issues.

9.       Training is critical. Imagine buying a car for your 16-year-old daughter, just handing her the keys and telling her she is on her own. No one would do that, yet we deploy relatively complicated systems in the middle of nowhere, systems that people use to make life-and-death decisions, without any proper training. I am not talking about clinical training on how to take an X-ray or do an ultrasound, but training on how to support the systems that take the images, communicate, archive and display them. You need a person who takes the weekly backups so that information can be recovered after a disk crash, who runs the database queries for report statistics, who troubleshoots when an image has been lost or misidentified, who is the main contact for the vendor’s support people, and so on. On-the-job training will not be sufficient. The good news is that it is relatively easy to create training videos and upload them to YouTube (or better, send them on a CD, as internet access might not always be available).
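As a sketch of one task that support person should master, the snippet below verifies a backup file against a stored SHA-256 checksum before trusting a restore; the file-name and manifest conventions are assumptions for illustration.

```python
# Minimal sketch of a backup integrity check: record a SHA-256 checksum
# when the backup is made, and verify it before trusting a restore.
# The manifest format (a text file holding one hex digest) is assumed.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(backup_path, manifest_path):
    """Compare a backup file against the checksum stored in its manifest."""
    with open(manifest_path) as f:
        expected = f.read().strip()
    return sha256_of(backup_path) == expected
```

A weekly routine would write the digest right after the backup completes and run `verify_backup` before any restore attempt.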

10.   Do not compromise on clinical requirements. I have seen darkroom processors replaced with a CR and a commercial (i.e. non-medical) grade monitor used to look at the images in a bright environment. This is very poor medical practice. No, you don’t need two medical-grade 3-megapixel monitors at a cost of several thousands of dollars; clinical trials have shown that a 2 MP monitor has the same clinical efficacy as a 3 MP one, it merely requires the user to use the zoom and pan tools a little more, which is acceptable in these countries. The key is to use a medical-grade monitor that is calibrated so that each individual grayscale value is displayed at a luminance level that can be distinguished from its neighbors. If this is not the case, there is no question that valuable clinical information will be lost. Also, the so-called luminance ratio (the ratio between the brightest white and the darkest black) does not have to be as high as long as the viewing environment is dark enough. So, as a rule of thumb: use an affordable medical-grade monitor, put it in a dark room (paint windows and walls, hang curtains), and don’t skimp on these monitors.

In conclusion, none of these lessons learned is new; we learned most of them 20 years ago. The problem is that most of them get forgotten or are assumed, which is what happened to me when venturing out to these developing countries. The good news is that we can apply most of what we have learned, be successful in providing imaging to the two-thirds of the world that does not yet have access to basic imaging capabilities, and thereby still make a major difference.

Monday, May 1, 2017

Digital Pathology: the next frontier for digital imaging; top ten things you should know about.

Typical pathology workstation (see note)
As the first digital pathology system has finally passed FDA muster and is ready to be sold and used in the USA, it is time for healthcare institutions to prepare for this new application. Before jumping in headfirst, it is prudent to familiarize yourself with the challenges of this application and learn from others, notably in Europe, who have been doing this for 5+ years. Here is a list of the top ten things you should be aware of.

1.       The business case for digital pathology is not obvious. In radiology, film was replaced by digital detectors, and one could argue that eliminating film, processors, file rooms and personnel would at least pay for some of the investment; digital pathology does not hold the promise of the same savings. Lab technicians will still need to prepare the slides, and, as a matter of fact, additional equipment is needed to digitize the slides so they can be viewed electronically.
The good news is that pathology contributes very little to the overall cost of healthcare (0.2%), and therefore, even though the investment in scanners, viewers and archive storage is significant, the impact on the bottom line is small. Of course, there are lots of “soft” savings: never losing slides, being able to conference and get second opinions without having to send slides around, much less preparation time for tumor boards, much faster turnaround through telepathology, and the potential for Computer Aided Diagnosis. So going digital makes perfect sense, but it might just be a little hard to convince your CFO.

2.       Most institutions are “kind of” ready to take the jump from an architecture perspective. Many hospitals are strategizing about how to capture all of their pathology imaging, in addition to radiology and cardiology, in a central enterprise archiving system (aka Vendor Neutral Archive), and they might already have made small steps in that direction by incorporating some of the other “ologies.” However, pathology is definitely going to be challenging, as the file sizes for images are huge; a sub-sampled, compressed digital slide can easily top 1.5 GB, so you should be ready to multiply your digital storage requirements by a factor of 10. As a case in point, the University of Utrecht, which has been doing this for 7 years, is approaching 1 petabyte of storage. So even if you have an enterprise image management, archiving and exchange platform in place, it will definitely need an adjustment.
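A back-of-the-envelope calculation, using the ~1.5 GB-per-slide figure above and an assumed slide volume, shows how quickly the storage adds up:

```python
# Rough yearly storage estimate for digital pathology, based on the
# ~1.5 GB-per-slide figure in the text. The slide volume and number of
# working days are illustrative assumptions; adjust to your own case load.
GB_PER_SLIDE = 1.5

def yearly_storage_tb(slides_per_day, working_days=250):
    """Estimate yearly archive growth in terabytes (1 TB = 1000 GB)."""
    return slides_per_day * working_days * GB_PER_SLIDE / 1000.0

# A lab scanning 100 slides per working day:
print(yearly_storage_tb(100))  # -> 37.5 (TB per year)
```

At that rate a single busy lab accumulates hundreds of terabytes within a decade, which is consistent with the petabyte-scale archive mentioned above.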

3.       Pathology viewers are different from those of other “ologies.” Pathologists look at specimens in three dimensions, unlike radiologists who, in many cases, look at a 2-D plane (e.g. a chest radiograph). One could argue that looking at a set of CT or MRI slices is “kind of 3-D,” but it is still different from simulating looking at a slide under a microscope. The pathologist requires a 3-D mouse to navigate the images; these are readily available. The monitor requirements differ from other imaging specialties as well: a large, good-quality color monitor, which is actually much less expensive (by a factor of 10) than the medical-grade monitors needed for radiology, will suffice for displaying the images.

4.       Standard image formats are still in their infancy. This is something to be very aware of: most pathology management systems are closed systems, with the archive, viewer and workflow manager from the same vendor and little incentive to use the existing DICOM pathology standard for encoding the images. Dealing with proprietary formats not only locks you in to the same vendor, potentially making migration of the data to another vendor costly and lengthy, but also jeopardizes the whole idea of a single enterprise imaging archiving, management and exchange platform. Hopefully, user pressure will change this so that vendors begin to embrace the well-defined standards that the DICOM and IHE communities have been working on for several years.

5.       Digital pathology will accelerate access to specialists. I remember, from several years back, visiting a remote area in Alaska when it switched to digital radiology and all the images were sent to Anchorage to be diagnosed. Prior to that, a radiologist would fly in two days a week, weather permitting, to read the images; if you needed a diagnosis over the weekend, you were out of luck. The same scenario applies to having a pathologist at those locations: as of now, the samples are sent, weather permitting, to a central location to be read. Some locations have a surplus of pathologists; others have a shortage or even a complete lack of these medical professionals. Digital pathology will level the playing field from a patient-access perspective. Without having to physically ship slides and/or specimens, it will significantly decrease report turnaround time and impact patient care positively.

Typical Slide scanner (see note)
6.       Digital pathology is the next frontier. Here is some more good news: vendors are spending hundreds of millions of dollars developing this new technology. Digital scanners that can load stacks of slides and scan them while matching them with the correct patient using barcodes are available, and workflow management software has improved. Last but not least, automatic detection and counting of certain structures in the images, instead of doing this manually, is a big step towards characterizing patterns, so that diagnoses can be made more accurately.
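To illustrate the principle behind automated counting, and nothing more (real systems analyze gigapixel color slides with far more sophisticated algorithms), here is a toy flood fill that counts connected regions in a small binary grid:

```python
# Toy illustration of automated counting: label 4-connected regions of 1s
# in a binary grid with a flood fill. Real digital-pathology detection
# works on gigapixel color slides with far more sophisticated methods.
def count_regions(grid):
    """Count 4-connected regions of 1s in a list-of-lists binary grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                regions += 1
                stack = [(r, c)]  # flood-fill everything connected to (r, c)
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and grid[y][x] == 1 and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return regions

cells = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
]
print(count_regions(cells))  # -> 3
```

The same idea, scaled up and combined with color and shape analysis, is what lets software count cells or mitotic figures instead of a pathologist doing it by eye.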

7.       Don’t expect to become 100% digital. Some applications still require a microscope. The experience at the Utrecht Medical Center in the Netherlands is that you may achieve a 95% conversion to digital, but there are still some outliers that require a microscope because of the nature of the specimen. However, this is a relatively small subset and very manageable.

8.       Digital pathology has ergonomic advantages. Having to bend over a microscope most of the day, day in, day out, for many years causes strain on the neck and back. Sitting in a comfortable chair, or working at a stand-up desk, is definitely better, although one still needs to be careful to pick the right mouse to avoid carpal tunnel syndrome.

There is a lot of opportunity for automated counting and detection (see note)
9.       Viewing capabilities are an order of magnitude better. This is obvious to professionals who read medical images as radiologists or cardiologists, but for pathologists, who were bound to a single view through a microscope and who can now place multiple images next to each other and annotate them electronically, it is a completely new world.

10.   Research and education get a major boost. Imagine the difference when teaching a group of pathology students: instead of each supposedly looking at a similar tissue through their own microscope, they can all access the same image on their computer monitors. One can build a database of teaching files and easily share them electronically. All of this seems obvious to anyone involved with medical imaging in other specialties, but for pathology it is a major step.

In conclusion, digital pathology is finally here in the USA. There are some hurdles, starting with convincing the people who hold the purse strings that it is a good investment, then adjusting the architecture and workflows to accommodate the huge image sizes, and making sure that these systems support open standards so you are not locked into a specific vendor. But the advantages are major, and the benefits may soon become so evident that it is only a matter of time before everyone jumps on the digital pathology bandwagon. It is strongly recommended that you learn from others, notably in Europe, who have already been implementing this technology for several years.

Note: Illustrations courtesy of Prof. Paul van Diest, UMC Utrecht.

Tuesday, April 11, 2017

Top Ten VNA Requirements

The term VNA (Vendor Neutral Archive) has been loosely defined by different vendors, and its functionality varies widely among providers. Early implementations have seen some good success stories, but in several cases they have also caused confusion, initial frustration and unmet expectations. The list below concentrates on the key features necessary for a successful implementation. The VNA should:

1.    Facilitate enterprise archiving: Enterprise archiving requires many different components; the joint SIIM/HIMSS working group has done a great job listing the key ones, including governance, a strategy definition, image and multimedia support, EHR integration and a viewer, but most importantly a platform definition, which can be provided by a VNA. The VNA needs to be the main enterprise image repository and the gateway to viewers and the EMR, taking in information encoded as DICOM as well as other formats, following the XDS (cross-enterprise document sharing) repository requirements. A true VNA needs to be able to provide that functionality.

2.    Facilitate cross-enterprise archiving: The VNA should be the gateway to the outside world for any imaging and image-related documents. Examples of image-related documents are, obviously, the imaging reports, but also measurements (Structured Reports) and other supporting documentation, which can be scanned documents or native digital formats. It also needs to be the gateway for external CD import and export, for portals, and for cloud sharing and archiving solutions.

3.    Support non-DICOM objects (JPEG, MPEG, waveforms). Even though DICOM has proven to be an excellent encapsulation for medical images and other objects, such as waveforms, PDFs, documents, etc., there are cases where this is not easy or even possible. One use case is archiving a native MPEG video file from surgery or another specialty. As long as there is sufficient metadata to manage the object, this should be possible, and the VNA should provide it.

4.    Be truly vendor neutral: Even if the VNA is from the same vendor as one or more of your PACS systems, its interface with any PACS system should be open and non-proprietary. This is one of the most important requirements: plugging a PACS from another vendor into your VNA should be very close to “plug-and-play.”

5.    Synchronize data with multiple archives: Lack of synchronization is probably the number one complaint I hear from early implementers. To be fair to the VNA vendors, in many cases synchronization is lacking on the PACS side. Even if the VNA is able to facilitate IOCM (Imaging Object Change Management) messages, which are basically Key Image Notes with the reason for the change (rejects, corrections for safety or quality reasons, or worklist selection errors), if the PACS has no IOCM support you are left with manual corrections at multiple locations. At the very least there should be some kind of web-based interface that allows a PACS administrator to make the changes. It might be possible to adjust the workflow to minimize corrections; for example, one institution does not send the copy to the VNA until one day after the images are acquired, by which time the majority of the changes have been applied at the PACS. However, if the VNA is the main gateway for physician access, this is not feasible. Without synchronization, a PACS administrator has to repeat the changes at different locations.

6.    Provide physician access: A key feature of the VNA is that it provides “patient-centered” image access; instead of a physician having to log into a radiology, cardiology, surgery or oncology PACS with different viewers, different log-ins and disparate interfaces, there is now a single point of access. This access point is also used for the EMR plug-in, i.e. the VNA should provide an API that allows a physician to open the images referred to in the EMR with a single click. Note that accessing the data with a different viewer could create some training and support issues, as the features and functions most likely differ from those of the PACS viewer.

7.    Take care of normalizing/specializing: As soon as images are shared between multiple departments, and even enterprises, the lack of standardization of Series and Study Descriptions, procedure codes/descriptions and body parts becomes obvious. The differences can be obvious, such as using “skull” or “brain” for the same body part, or subtle, such as between “CT Head w/o contrast” and “CT HD without contrast.” Any difference, even a minor one, could cause prior images not to be fetched for comparison. That is where what is sometimes referred to as “tag morphing” comes in: the data is “normalized” to a common set of descriptions and/or codes before it is archived in the VNA. When a specific PACS expects certain information to be encoded in a specific manner, the data has to be modified again to accommodate its local quirks, which I would call “specialization.”
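A minimal sketch of what such “tag morphing” can look like, using an illustrative (and deliberately tiny) mapping table for study descriptions; real sites maintain far larger tables, often per sending system:

```python
# Sketch of normalization ("tag morphing"): map the many local spellings
# of a study description onto one canonical form before archiving in the
# VNA. The mapping table below is an illustrative assumption.
NORMALIZATION_MAP = {
    "ct head w/o contrast": "CT Head without contrast",
    "ct hd without contrast": "CT Head without contrast",
}

def normalize_description(description):
    """Return the canonical study description, or the input unchanged
    (to be flagged for manual review) when no mapping exists."""
    return NORMALIZATION_MAP.get(description.strip().lower(), description)

print(normalize_description("CT HD without contrast"))  # -> CT Head without contrast
```

The reverse step, “specialization,” would apply a similar table per destination PACS before sending data back out.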

8.    Handle multiple identities: Images will be presented to the VNA with local patient identifiers that need to be indexed and cross-referenced; the same applies to studies and orders. Most VNAs can prefix an Accession Number to make it unique in the VNA domain and remove that prefix when sending the information back. This assumes that the Accession Numbers do not already use the maximum allowed 16-byte length; otherwise it has to be dealt with in the database.
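A sketch of the prefixing logic described above; the 16-character limit comes from the DICOM SH value representation used for Accession Number, while the prefix scheme itself is an illustrative assumption:

```python
# Sketch of accession-number prefixing for a VNA: add a site prefix so the
# number is unique across the enterprise, and strip it when sending data
# back. DICOM limits Accession Number (SH VR) to 16 characters, so values
# that would overflow must be cross-referenced in the database instead.
MAX_ACCESSION_LEN = 16

def add_site_prefix(accession, site_prefix):
    """Prefix an accession number, enforcing the 16-character limit."""
    prefixed = site_prefix + accession
    if len(prefixed) > MAX_ACCESSION_LEN:
        # Too long to prefix in place; a real VNA would fall back to a
        # database cross-reference for the original value here.
        raise ValueError("prefixed accession exceeds 16 characters")
    return prefixed

def strip_site_prefix(accession, site_prefix):
    """Remove the site prefix when sending information back, if present."""
    if accession.startswith(site_prefix):
        return accession[len(site_prefix):]
    return accession

print(add_site_prefix("A123456", "S01-"))  # -> S01-A123456
```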

9.    Be the gateway to the outside world using open standards. Many states, regions or, if small enough, countries are rolling out a central registry (HIE, or Health Information Exchange) so that an institution can register the presence of images and related information for anyone outside the enterprise who is authorized to access it. Registration and discovery use the IHE-defined XDS profiles, while the PIX/PDQ profiles take care of patient cross-referencing and query.

10. Meet your specific needs: According to a recent survey, more than 50 percent of US-based institutions are installing or planning to install a VNA. I suspect that the main reason is that many are getting tired of yet another data migration, which is lengthy (months to years) and potentially costly in terms of both money and lost studies. The elimination of future migrations is somewhat of a moot point, as the PACS migration will likely be replaced by a VNA migration at some point, shifting the issue rather than eliminating it. The real reason for getting a VNA has to be some of the key features listed above. If, on the other hand, you have a relatively small institution with images created only in radiology and possibly cardiology, and there is no immediate need for image exchange, then I would argue you might be better off staying with the current PACS architecture, as the business case for a VNA is not quite clear yet.

In conclusion, VNAs are here to stay, assuming they have most, if not all, of the features listed above. However, a VNA might not be for you, so you need to make a business case and look at the potential pros and cons. When you are considering a VNA, talk with your VNA and PACS vendors about the features listed above to make sure you understand the clinical, technical and personnel impact if a vendor does not support one or more of them. By the way, we'll have a VNA seminar coming up, see details here.