Monday, March 18, 2019

Impact of the Philips-Carestream acquisition on the end users: Good, Bad?


Since the recent Philips acquisition of the Carestream IT business, I have received several phone calls and had discussions with both elated and very concerned end-users. Interestingly enough, the positive feedback was mostly from Philips PACS users and most concerns were expressed by Carestream clients.

The Philips users were mostly excited about the Carestream enterprise archiving and storage component, which hopefully will replace the proprietary Philips back-end and integrate better with enterprise archiving systems such as VNAs. It is no secret that the Philips proprietary image format works very well for the Philips workstation display, as it provides great (perceived) performance, but getting the data out of their system in a standard DICOM manner is not as easy. Synchronizing changes in the Philips archive with the VNA cannot be automated due to the lack of IOCM IHE profile support. It is very challenging, to say the least, judging from the spike in attendance in our DICOM classes from Philips users who want to learn how to use DICOM network sniffers to find out when, where, and why certain studies are not exchanged between the Philips system and their VNA.

The strength of the Philips PACS is definitely its radiology worklist: radiologists really like its user friendliness, and PACS administrators like that they can train a new user in 15 minutes, unlike with some of the other PACS user interfaces. This is important if, for example, you get a new batch of 15 residents to train every couple of months. So, the ideal match would be the Philips front-end with the Carestream back-end; however, that goes against the current trend of EMR-driven worklists for PACS.

From Carestream customers, I have heard mainly concerns that Philips might “contaminate” their current relationships and/or upset their support and service structure. It is rather common in this industry, when a larger company takes over a smaller one, to see people leave, service centers consolidated (not always for the better), and support take a major dip. In addition, the product they are currently using or planning to purchase might become obsolete due to product consolidation, especially if the main objective of the acquisition was not the technology but buying the channel and existing customer base.

So, what can we expect? Time will tell, but the good news is that both companies have a culture that is different from many other players in this field, which I know first-hand having worked for both Philips and Carestream’s predecessor (Kodak). Consequently, I have some level of confidence that this is going to be a good thing. But again, only time will tell, and in the meantime as a Philips or Carestream customer you might want to ask for solid guarantees from your suppliers and keep all options open.

Thursday, February 21, 2019

HIMSS19: Are we finally unblocking patient information?


Busier than ever
More than 45,000 visitors to the world's largest healthcare IT conference, held in Orlando, FL, browsed through 1,200 booths looking for IT solutions for their facilities and listened to the many educational sessions. There is still a dichotomy between what was shown and the real world: the IHE showcase demonstrated 12 use cases where information flowed seamlessly between different vendors, while in practice it is not always so smooth, based on stories from the trenches.
Here were my observations from this conference:

Distinguished panel at Keynote
1.       Interoperability, are we there yet? The meeting was dominated by the recent information-blocking rule, which was unveiled by the U.S. Department of Health and Human Services (HHS) literally the day before the convention started. As Seema Verma, the U.S. CMS administrator, pointed out in her keynote presentation, the government has given out US $36 billion in incentives to implement electronic health records with not much interoperability to show for it, so now it is time for the industry to step up.
Former US CTO Aneesh Chopra added that the CCDs (Continuity of Care Documents) that are exchanged right now might not be the best solution for exchanging patient information, and that we need to look at other means such as open APIs, which can be used to tap into any EMR for information. These open APIs will become a requirement by 2020 according to the HHS. Penalties for health information exchanges and health information networks that lack interoperability could be up to $1 million. Maybe this will help; however, the rule is expected to get pushback from some of the stakeholders. For example, the AHA was quick to point out that it disagreed with certain parts of the requirements: “We cannot support including electronic event notification as a condition of participation for Medicare and Medicaid,” stated AHA Senior Vice President for Public Policy Analysis and Development Ashley Thompson.

2.       Open API, is that sufficient? An open API is merely a “connector” that allows information to be exchanged; however, as was noted in the same keynote speech, if the only thing that can be exchanged is the patient name, sex and race, or if the clinical information is not well encoded and/or not standardized, the API is not of much use.

That is why implementation guides, based on use cases specifying the many details of the information to be exchanged, are critically important. The good news is that these implementation guides are a key component of the new FHIR standard: they can be electronically interpreted and are defined according to a well-defined template. The Da Vinci project, which has already defined 12 of these guides as part of the balloted FHIR standard, will facilitate the exchange. The focus of these guides is on provider/payer interactions and includes, for example, medication reconciliation post-discharge, coverage requirements information, and document templates and rules. The booth demonstrating these use cases was one of the busiest in the IHE showcase area.
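To make the open-API idea concrete, here is a minimal sketch of what a client does with the response to a FHIR search: parse the returned Bundle and pull out the clinically useful fields. The Bundle below is hand-made for illustration (no real server involved), but its structure follows the FHIR searchset shape that any conformant endpoint returns.

```python
import json

# A hand-crafted FHIR searchset Bundle, as a server would return it for
# a Patient search. Names and ids are invented for this example.
sample_bundle = json.loads("""
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Patient", "id": "p1",
                  "name": [{"family": "Smith", "given": ["Anna"]}]}},
    {"resource": {"resourceType": "Patient", "id": "p2",
                  "name": [{"family": "Jones", "given": ["Ben"]}]}}
  ]
}
""")

def patient_names(bundle):
    """Return 'Given Family' for every Patient resource in a search Bundle."""
    names = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") != "Patient":
            continue
        for name in res.get("name", []):
            given = " ".join(name.get("given", []))
            names.append(f"{given} {name.get('family', '')}".strip())
    return names

print(patient_names(sample_bundle))  # ['Anna Smith', 'Ben Jones']
```

The point of the implementation guides is exactly that both sides agree on this structure and on which fields are populated, so a parser like this works against any compliant server.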

3.       What about social determinants? Health care determinants follow the 20/20/60 rule: 20 percent of ailments are determined genetically, which can increasingly be predicted by looking at your DNA sequence; 20 percent are influenced by a healthcare practitioner such as your doctor; but 60 percent, i.e. the majority, are determined by the patient through his or her own actions and social determinants. For example, if you are genetically at risk for a heart condition, and your doctor has already placed a stent in one or more of your coronary arteries to help blood flow to your heart muscle, but you don’t change your lifestyle, you won’t get any better. Now, let’s say you are homeless and depend on food that is not good for your condition; you could be in trouble. It would be good if your physician knew those social factors, which could also include where you have traveled recently. However, there are no “codes” available to report most of this in a standard manner: the majority (60 percent) of health care determinants are not encoded, so there is much work to be done in this area.

Impressive number of providers
participating in Commonwell
4.       How is information exchanged between providers? The ARRA (American Recovery and Reinvestment Act) from the previous administration had put money aside to establish public Health Information Exchanges (HIEs). Unfortunately, many of these HIEs folded after the grant money ran out; the HIEs in North Texas and Tennessee, among many others, shut down after failing to find a sustainable business model.
Several vendors took the initiative to establish a platform for information exchange, having figured out that stringing connections one-by-one between healthcare providers would be much more expensive than creating their own exchange; this is how the Commonwell non-profit started. As of the conference, it had connections to 12,000 providers, which is probably 10 to 15 percent of all providers, a good start towards gaining critical mass. Cerner seems to be the largest EMR vendor in this alliance. Epic was notably absent; it has been the main driver of a somewhat competing alliance, called Carequality, with different functionality but similar objectives, i.e. exchanging information between EMRs from different vendors.
The good news is that there is now a bridge established between these two platforms, which makes the critical mass even larger. This situation is somewhat unique to the US, as other countries have government initiatives for information exchange, but for countries without such an initiative, the same model might work. This will hopefully solve the problem mentioned by one of the providers, who said that it has been relatively easy to exchange information between his EMR (which happened to be Epic) and others as long as it was an EMR from the same vendor, but very hard if not impossible to get anything out of another vendor's EMR into his. This is a great effort which, together with the anti-blocking rules from CMS, might finally allow healthcare information to be exchanged.


One of the many portals
demonstrated
5.       Are patient portals finally taking off? It is still a challenge to access health care information, as there is not really a universal portal that collects all of the information from different providers. You might need to maintain access to the information held by your primary physician, your specialist(s), your hospital and even your lab work provider. One way to consolidate this information is to have a single provider, such as the VA for veterans, whose portal “My HealtheVet” has been relatively successful. How this is going to work as the VA increases its outsourcing to private commercial providers remains to be seen. If you are on Medicare or Medicaid, CMS provides a standard interface, which is used by several (free) patient portal providers where you can log in to see all of your claims, prescriptions and other relevant information. If you do not happen to be a veteran and are not covered by CMS, there is not much interoperability for you yet, but those two groups cover enough patients to start building critical mass as well.

Standing room only at cloud
providers
6.       Are cloud providers making any progress? One of the speakers, who happened to be working for New York Presbyterian, said that they are planning to have 80 percent of their applications and infrastructure in the cloud. There are more advantages to the cloud than potentially reducing cost, such as easier access by patients, and the potential to run AI algorithms for clinical support, analytics and decision support. Machine learning is more effective when there is a lot of data to learn from, hence the rise of cloud archiving.

The big three cloud providers (Amazon, Google and Microsoft) are more than happy to take your business; the Amazon booth was packed every time I passed it. However, they still have a steep learning curve: even though they claim to have open APIs and a healthcare platform with a lot of features, they have to learn in a few years what healthcare vendors have accumulated in expertise over many decades. The good news is that they have very deep pockets, so if they get serious about this business, they could become major players.

Get the Uber or Lyft app in your EMR!
7.       Is Uberization of healthcare happening? When people talk about Uberization they typically refer to the business model that provides easy access by consumers through mobile media, accountability of the providers, and tapping into a completely new source of providers, such as private drivers who suddenly become transportation providers or, in the case of Airbnb, private home owners who become innkeepers.
This is happening in healthcare as well, as there is a big increase in tele-health providers who offer phone access to patients who want advice anytime, anywhere. According to a physician who does this from his home, telehealth is as easy as using Uber: a physician can sign up anytime and just log on to take calls from patients for as long as he or she decides. As I listened to one such physician's experiences, he provided life-changing advice, especially to patients who live in remote areas and otherwise would not be able to seek medical advice.
In addition to telehealth services, Uber and Lyft were also promoting their transportation business to providers, to reduce potential no-shows by patients who have trouble with transportation. A provider can contract with either one of these to serve their transportation needs.
Note the wearable EKG sensor
as well as monitor
8.       What about the wearables? At the 2018 conference, Apple introduced access to medical records through its Health app, using a FHIR interface. Their provider list has been growing steadily and is now close to 200, including the VA, which by itself accounts for 170 medical centers and more than 1,000 clinics. This is a significant number, but not even close to the number of providers on the Commonwell platform, for example, whose members exceed 10,000. Therefore, there is still a lot of progress to be made.
There was an increase in intelligent detectors that can communicate vital signs and other clinical information via Bluetooth to a mobile device, for example to allow patients to be released earlier from the hospital back home, which is safer and more cost effective.

9.       Are we safe from hackers yet? There has been a major increase in cyber security investment, which has become a necessity as healthcare has become a major target for hackers and ransomware opportunists.
Huge Security pavilion
Healthcare providers are making big investments in personnel and tools to try to protect themselves. The cybersecurity area of the conference was indeed huge, as many new companies are offering their services. The security portion of an IT budget averages about 6 to 8 percent, but some spend as much as 12 percent. Imagine the IT staff of a large organization: if you have 500 IT employees, there could be as many as 50-60 staff dedicated to cyber security.

Based on recent incidents, we still have a lot of work to do in healthcare to protect patient information, especially at the periphery, as medical devices in many cases appear to provide easy access points to a hospital’s back-end. This risk is one of the reasons that the FDA has been requiring a cyber security plan to be filed with every new medical device clearance.

plenty of FAX apps
10.   When are we going to get rid of the fax machines? About six months ago, CMS publicly announced that it wants to get rid of all fax machines in healthcare by 2020. That is only one year away; in practice, however, it appears that by far the majority of all healthcare communication is still done through the ubiquitous fax machine. A good way to transition might be to exchange documents using the open API, and then use Natural Language Processing (NLP) to search for medications, allergies, and other important information that can be processed and potentially imported into the receiving EMR. In the meantime, there were still many small companies advertising smart ways to distribute faxes, and I predict that this will continue for several years to come.
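As a toy illustration of that NLP step, the sketch below spots medications and allergies in free text, as you might get from a faxed and OCR'd document. Real pipelines match against clinical vocabularies such as RxNorm; the tiny word list and the regular expression here are invented for this example.

```python
import re

# Illustrative word list only; a real system would use a drug vocabulary.
MED_TERMS = {"metformin", "lisinopril", "atorvastatin", "warfarin"}

# Crude cue-based pattern: "allergic to X", "allergy: X", etc.
ALLERGY_CUE = re.compile(r"allerg\w*\s*(?:to|:)?\s*([A-Za-z]+)", re.IGNORECASE)

def extract_findings(text):
    """Return medications and allergies spotted in free text."""
    words = {w.lower() for w in re.findall(r"[A-Za-z]+", text)}
    meds = sorted(words & MED_TERMS)
    allergies = sorted({m.group(1).lower() for m in ALLERGY_CUE.finditer(text)})
    return {"medications": meds, "allergies": allergies}

fax_text = "Patient on Metformin 500 mg daily. Allergic to penicillin."
print(extract_findings(fax_text))
# {'medications': ['metformin'], 'allergies': ['penicillin']}
```

Structured results like these could then be mapped to coded entries before import into the receiving EMR.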

In conclusion, this was yet another great conference. The emphasis was on unblocking the information that is locked up in many healthcare information systems and that, until now, could only be exchanged if you happened to have an EMR from the same vendor as its source, or were lucky enough to have a provider with access to Commonwell or Carequality, or a provider using one of the relatively few public Health Information Exchanges (HIEs). Hopefully the industry and providers will start to cooperate on making this happen; we'll see next year, when we are back in Orlando, FL.

In the meantime, if you are baffled by some of the terminology, you might consider our FHIR, IHE or HL7 V2 training publications and classes.

Sunday, February 3, 2019

Cleveland 2019 IHE connectathon: it’s all about FHIR.


The 2019 IHE connectathon drew more than 300 healthcare IT professionals to a snowy and cold Cleveland to test interoperability between their devices using IHE-defined profiles. New this year were the recently published profiles facilitating exchange of information using mobile communications based on the new FHIR standard, which is built on standard web protocols.
The FHIR testing emphasized querying for patient information, corresponding documents, and uploading and retrieving them. In addition, we tested the audit trails using FHIR, which is important from a security and privacy perspective.
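Those FHIR audit trails are themselves FHIR resources. Below is a hedged sketch of building a minimal R4-style AuditEvent recording that a user retrieved a document; the ids are invented, and a production system would follow the IHE audit profiles in full rather than this stripped-down shape.

```python
from datetime import datetime, timezone

def make_audit_event(user_id, document_id):
    """Build a minimal FHIR R4 AuditEvent for a document retrieval.
    Uses the DICOM audit vocabulary (DCM 110106 = Export) for the type."""
    return {
        "resourceType": "AuditEvent",
        "type": {"system": "http://dicom.nema.org/resources/ontology/DCM",
                 "code": "110106", "display": "Export"},
        "action": "R",  # R = read/retrieve
        "recorded": datetime.now(timezone.utc).isoformat(),
        "agent": [{"who": {"reference": f"Practitioner/{user_id}"},
                   "requestor": True}],
        "entity": [{"what": {"reference": f"DocumentReference/{document_id}"}}],
    }

# Hypothetical ids, for illustration only.
event = make_audit_event("dr-jones", "doc-42")
print(event["agent"][0]["who"]["reference"])  # Practitioner/dr-jones
```

The value from a privacy perspective is that every access leaves a queryable record like this one.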

There are still relatively few FHIR-based implementations in the field due to the immaturity of the standard (most of it is still a draft), the lack of critical mass (implementing FHIR for just one application such as scheduling does not make sense), and its steep learning curve, as it is sufficiently different from what healthcare IT is accustomed to. Therefore, the more that can be verified and tested in a neutral environment such as the connectathon, the better. The participants I spoke with had uncovered quite a few issues in their early FHIR implementations, which again is good, as it is better to solve these issues beforehand than during an actual deployment.

The attendance at this year's event seemed to be lower than in previous years, which could be because several local connectathons are held during the year on other continents, drawing from the same crowd, and because implementations are starting to mature (except for FHIR, of course), so there is less need for debugging.

There could also be somewhat of a “standards overkill,” considering that for radiology alone, which is one of the 11 domains, there are 22 defined profiles and another 27 published as draft. It is hard for vendors to keep up with and deploy all of these new requirements.

As FHIR matures, which will take several years as its typical release cycle is 12-18 months, there will be a need to test these new releases between the vendors providing the medical devices and software products. There were a few new vendors present but most of them consist of the same “crowd,” which is somewhat disappointing because I believe that the real FHIR implementation breakthrough will come from outsiders such as Apple, Amazon, Microsoft and/or Google, all of which were notably absent.

In conclusion, another very successful event, giving a boost to interoperability, something we very badly need as patients, especially in the US where we are still struggling to get access to our medical records, including images, and where healthcare institutions continue to have difficulty exchanging information among themselves and other providers. In many departments and medical offices, the fax machine still is an important tool, hopefully not for long.

Monday, December 10, 2018

RSNA2018: What’s in and what’s out.

Let it snow...
The annual radiology tradeshow at McCormick Place in Chicago started with a hiccup, as the Chicago airports closed down on Sunday due to a snowstorm, slowing the flow of attendees flying in to only a trickle on the second day. Note that the Sunday after Thanksgiving is the busiest travel day of the year, so the storm could not have come at a more inconvenient time. I myself was caught in this travel chaos, spending all of Monday in the Dallas airport while my plane was trying to get into the arrival queue for O'Hare.

The overall atmosphere at the show was positive, attendance seemed to be similar to last year and most vendors I talked with were optimistic. About a third of the attendees come to the meeting just for the continuing education offerings, but another third come to visit vendors and “kick the tires” and see what’s new. My objective is also to see what the new developments are and to do some networking to get an idea of what is going on in the industry. 

Here are my observations:

Dedicated area just
for AI showed
80 companies
1.       Artificial intelligence dominated the floor – Over the past few years, AI has created some anxiety, as predictions circulated that AI would replace radiologists in the near future. It seems that the anxiety has been relieved to a certain degree, but it has been replaced with a great deal of confusion about what AI really is, and with uncertainty about what the day-to-day impact could be.
A detailed description of the different levels of AI and the main application areas is the topic of an upcoming blog post, but it was clear that the technology is still immature. Despite the fact that there were 100+ dedicated AI software providers, in addition to many companies promoting some kind of AI in their devices or PACS, only a handful of them had FDA clearance. I also believe that the true impact of AI could be in developing countries that have a scarcity or even total lack of trained physicians. It is one thing to improve a physician's detection of, let's say, cancer by a few percent, but in a region that has no radiologists, an AI application that can detect certain abnormalities would be a 100% improvement.
There could be some workflow improvements possible using AI in the short term; however, one should also realize that the window between conception and actual implementation could be 3-5 years. Users are not eager to upgrade their software unless there is a very good reason. So, in short, the AI hype is definitely overrated, and I believe that we'll almost certainly have autonomous self-driving cars before we have self-diagnosing AI software.

a significant dose reduction
 for lung cancer screening
2.       Low dose CT scanning is becoming a reality – One of the near-term applications of AI allows the use of a fraction of the dose of a “normal” CT scan. Instead of a typical 40 mAs technique, acceptable images are created using only 5 mAs. This could have a major impact on cancer screening. The product shown did not have FDA clearance (yet), but there is every reason to expect that it could be available one year from now. The algorithm was created using machine learning on a dataset of a million images to identify body parts in lung CTs and subsequently reduce the noise in those images, which allows for a significant dose reduction, claimed to be 1/20.
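To illustrate the principle behind this kind of denoising: averaging over a neighborhood suppresses random noise at the cost of some detail, which is why a noisy low-dose acquisition can be "cleaned up" after the fact. The products shown use trained deep networks, not the toy 3x3 mean filter below; this sketch only demonstrates the underlying idea on a made-up pixel grid.

```python
def mean_filter(img):
    """3x3 mean filter over a 2D list of pixel values (edges clamped)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Collect the pixel and its in-bounds neighbors, then average.
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A uniform region with one noise spike at (0, 1).
noisy = [[10, 50, 10],
         [10, 10, 10],
         [10, 10, 10]]
print(mean_filter(noisy)[0][1])  # the spike is pulled back toward 10
```

A learned denoiser does the same job adaptively, preserving anatomy while removing noise, which is what makes the dose reduction clinically acceptable.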

Extremity
Cone Beam CT
3.       Cone beam CT scanners are becoming mainstream – Cone beam CT scanners were initially used primarily for dental applications, where the resulting precision and high-resolution images, especially in 3-D, are ideal for creating implants. However, for ENT applications, such as visualizing cochlear implants and inner ear imaging, their high resolution and relatively low cost make them ideal as well. They are also very useful for imaging extremities; again, the high resolution can show hairline fractures well and is superior to standard x-ray. I counted at least 5 vendors offering these types of products; they are being placed in specialty clinics (e.g. ENT) as well as large hospitals.

4.       Point-Of-Care (POC) ultrasound is booming – POC ultrasound is getting inexpensive (between US $2k-15k), which is affordable enough to put one in every ambulance, and in the hands of every emergency room physician, and even for physicians doing “rounds” and visiting bedsides. There are different approaches for the hardware, each with its own advantages and disadvantages:
a.       Using a standard tablet or phone: an “app” provides the user interface, image display, and upload to the cloud and/or PACS, and all of the intelligence is inside the probe. However, one complaint I heard is that the probe tends to be somewhat heavy and can get very warm.
b.       Using a dedicated tablet modified for this use: the tablet can take some of the processing load off the probe. If the probe is powered through the tablet, it saves on weight as well.
Butterfly POC US, US$2k
Other things to look for are whether a monthly fee is included (several vendors use a subscription model), whether it has a cloud-based architecture (i.e. no stand-alone operation), and what applications it can be used for. Most of the low-end devices are intended for general use and have only one or two probes. If you need OB/GYN measurements, you might need to look at the high end (close to the US $10k-15k price range).
Also, uploading images into a PACS is nontrivial, as one needs to make sure they end up in the correct patient record of the PACS, VNA, EMR, etc. This is actually the number one problem, as each facility seems to deal with these so-called “encounter-based” procedures in a different manner. There are guidelines defined by IHE, but in my opinion they have a very narrow scope.
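The core of that problem is a matching step: reconciling a study captured at the point of care with the right patient record. The sketch below is a deliberately simplified stand-in for that step; real systems match against a master patient index with fuzzier rules (phonetic names, partial identifiers), and every name and field here is invented.

```python
def match_patient(study_meta, registry):
    """Return registry entries whose name and birth date both match exactly.
    A real MPI lookup would use probabilistic or phonetic matching."""
    return [p for p in registry
            if p["name"].lower() == study_meta["name"].lower()
            and p["dob"] == study_meta["dob"]]

# Hypothetical patient registry with a near-miss entry.
registry = [
    {"mrn": "100234", "name": "Anna Smith", "dob": "1960-04-02"},
    {"mrn": "100987", "name": "Anna Smyth", "dob": "1960-04-02"},
]

# Metadata entered at the bedside, with inconsistent capitalization.
study = {"name": "anna smith", "dob": "1960-04-02"}
hits = match_patient(study, registry)
print(hits[0]["mrn"])  # 100234
```

Anything less than exactly one match should route the study to a reconciliation worklist rather than be filed automatically, which is where most facilities' workflows diverge.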

Many companies showing off
printed body parts
5.       3-D Printing is becoming mainstream – A complete section at the show was dedicated to 3-D printing. Several vendors showed printers and amazing models based on CT images. The application is not only for surgery planning (nothing better than having a real-size model in your hands prior to surgery) but also for patient education, to share a treatment plan. I would caution, however, that while the DICOM standard (as of 2018) includes a definition of how to exchange so-called “STL” models, the work with regard to X3D/VRML models is ongoing. So, before you make major investments, I would make sure you are not locked into a proprietary format and interface.
There is not (yet) a large volume of these printed models. I talked with a representative of a major medical center, who said they do about 5-10 a day, and another institution, a children’s hospital, does about 3 per week. It seems to me that creating orthopedic replacements might become a major application, but then we are not talking about models you can make with a simple printer that creates objects from nice colorful plastic, but rather one that can compete with current prosthetics based on titanium and other materials.

Dedicated Breast CT
6.       Introduction of new modalities – Every year several new modalities are introduced that are very promising and could have a major impact on how diagnosis is done in a few years for particular body parts and/or diseases. One example is a new way to detect stroke using electromagnetic imaging of the brain. The images look very different from a CT scan, for example, but they give a healthcare worker the information needed to make treatment decisions. Another new device is a dedicated breast CT providing very high resolution and 3-D display, which is more comfortable for a woman than a regular mammogram. Note that these devices don’t have FDA clearance (yet), but as is common for these new technologies, they are deployed in Europe, and as soon as the FDA feels comfortable, they will be ready for sale in the US as well. One issue with these devices is that there is no real “predicate” device, so they need clinical trials to show their benefits.

Equally important to what’s new is also observing what’s “old,” because the technology has become mature, or it has made it beyond the “early-adopter” stage. This is what I found:

1.       PACS/VNA/Enterprise imaging – Over the past few years, PACS systems have become mature and are not much talked about. Most investments by institutions have gone to new EMRs, so there has not been much left over to upgrade the PACS. The result is that many hospitals run several years behind in upgrading and/or replacing their PACS, which hurts the most when they need to support new modalities such as breast tomosynthesis (3-D) systems. One is forced to stick with proprietary solutions to make these work and/or use the modality vendor's workstations to view them.

VNA implementations have also been spotty. Some work rather well, but some have major scaling and synchronization issues between the PACS and VNA. Enterprise imaging was touted the past 2 years as well, but because of issues such as the lack of orders (see the discussion above about POC ultrasound), which forces work-arounds, it has not really taken off as expected. New features are needed, such as radiation dose management, peer reviews, critical results reporting, and sophisticated routing and prefetching; today these gaps are mostly filled by third party “middleware.”

2.       Blockchain – Blockchain technology has limited application in healthcare. The reason is that the bulk of healthcare information does not lend itself to being stored in a public “ledger.” It is nice that the information cannot be altered, but unless it is completely anonymized (which is still an issue, as there can be “hidden” information in private data elements, embedded in the pixels, etc.) and made available for research purposes, for example, there are not that many uses for this technology. As of now, limited applications such as physician registries seem to be the only ones feasible in the short term.
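The "cannot be altered" property comes from hash chaining, which is easy to show in a few lines. This is a minimal sketch of the tamper-evidence idea only, using a physician-registry entry as the payload; real blockchain networks add distributed consensus on top, and none of this reflects any specific product shown at RSNA.

```python
import hashlib
import json

def add_block(chain, payload):
    """Append a block whose hash covers its payload and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"payload": payload, "prev": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every hash; any edit to an earlier block breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        expected = hashlib.sha256(
            json.dumps({"payload": block["payload"], "prev": prev},
                       sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
    return True

chain = []
add_block(chain, "physician NPI 1234567890 registered")  # hypothetical entry
add_block(chain, "physician NPI 9876543210 registered")
print(verify(chain))                  # True
chain[0]["payload"] = "tampered"
print(verify(chain))                  # False: the edit is detectable
```

This also shows why bulk clinical data is a poor fit: the ledger makes tampering detectable, but anything written into it is permanent and visible, which is exactly what protected health information cannot be.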

3.       Cloud solutions – Google, Amazon and Microsoft are the big players in this market, but there are still very few “takers” for this technology. One of the reasons is the continuing press coverage of major hacks into corporations (500 million records from Marriott hotels is the most recent as of this writing) and reports of ransomware attacks on hospitals. Even though one could argue that the data is probably safer in the hands of one of the top cloud players than on some server in a local hospital, there is definitely a fear factor.

As an illustration, one of the participants told me that his hospital cut off all external communications, so there is no Internet at all on any hospital PC. Instead, I have seen many physicians Googling on their personal devices, such as a tablet or phone, to search for information about certain diseases or cases. Despite the push from Google et al., we probably need some real success stories before this becomes mainstream. Note that what I call “private cloud” solutions, which are provided by dedicated medical software vendors, are doing better, especially as replacements for CD image distribution and for allowing patients to access their images.

Overall, there was quite a bit to see and listen to at this year's RSNA. Because the weather cut into my visit, I was barely able to cover everything I wanted to during the week. It was interesting to see how mature image processing techniques suddenly appeared as “major new AI” solutions, and how many are still in their infancy, which makes me believe that the immediate impact will be relatively small. I was more excited by the new modalities and inexpensive ultrasounds, which will have a major impact.

I am hoping that next year some vendors will spend more effort going back to some of the basics, providing robust integration and workflow support for the day-to-day operations. We’ll see what will be new next year!


Tuesday, October 23, 2018

Should I jump into the FHIR right now?


I get this question a lot, especially when I teach FHIR, which is a new HL7 standard for electronic exchange of healthcare information, as there seems to be a lot of excitement if not “hype” about this topic.

My answer is usually “it depends,” as there is a lot of potential, but there are also signs that it may be wise to wait a little until someone else works out the bugs and issues. Here are some considerations that could inform your decision to implement FHIR right now, require it for new healthcare imaging and IT purchases, or start using it as it becomes available in new products.

1.      The latest FHIR standard is still in draft stage for about 90 percent of its content – This means that new releases will be defined that are not backwards compatible, so upgrades are inevitable, which may cause interoperability issues as not all new products use the same release. As a matter of fact, I experienced this first hand during some hackathons when one device was on version 3 and the other on version 2, which caused incompatibilities. The good news is that some of the so-called “resources,” such as those used for patient demographics, are now normative in the latest release, so we are getting there slowly.

2.      FHIR needs momentum – Implementing even a simple FHIR application, such as one for appointments, requires several resources, for example patient demographics, provider information, encounter data, and organization information. If you implement only the patient resource and use “static data” for the remainder, that remainder is still subject to updates, changes, and modifications; in other words, if you slice out only a small part of the FHIR standard, you don’t gain anything. Unless you have a plan to eventually move the majority of those resources to FHIR, and to upgrade as they become available, don’t do it. The US Veterans Administration showed at the latest HIMSS meeting how they exchange information between the VA and DOD using 11 FHIR resources, which allowed them to exchange the most critical information. That suggests you achieve critical mass when implementing roughly ten or more FHIR resources.
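To illustrate why a single resource rarely stands alone, here is a minimal sketch of a FHIR Appointment resource built as a Python dict. The resource IDs (12345, 67890) are made up for illustration; the point is that the appointment only works if the referenced Patient and Practitioner resources are also resolvable on the same server.

```python
import json

# A minimal FHIR Appointment resource, expressed as a Python dict.
# Note how it cannot stand alone: it references Patient and Practitioner
# resources, which must also be available on the FHIR server.
appointment = {
    "resourceType": "Appointment",
    "status": "booked",
    "start": "2019-04-01T09:00:00Z",
    "end": "2019-04-01T09:30:00Z",
    "participant": [
        {"actor": {"reference": "Patient/12345"}, "status": "accepted"},
        {"actor": {"reference": "Practitioner/67890"}, "status": "accepted"},
    ],
}

# Collect every resource type this single appointment depends on.
dependencies = {p["actor"]["reference"].split("/")[0]
                for p in appointment["participant"]}
print(json.dumps(sorted(dependencies)))  # ["Patient", "Practitioner"]
```

Even this toy example pulls in two other resources; a real scheduling application would also need Encounter, Organization, and Location.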

3.      Focus on mobile applications – FHIR uses RESTful web services, which is how the internet works, i.e. how Amazon, Facebook and others exchange information. You get much of the internet’s security and authorization infrastructure for free; for example, accessing your lab results from an EMR could be as simple as using your Facebook login. The information is exchanged using standard encryption, similar to what protects your credit card information when you purchase something at Amazon. Creating a crude mobile app can be done in a matter of days if not hours, as is shown at the various hackathons. Therefore, use FHIR where it is the most powerful.
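The RESTful pattern is simple enough to sketch in a few lines. The base URL below is a made-up example; any FHIR server follows the same `[base]/Patient?name=...` search convention and answers with a Bundle resource, shown here as a trimmed-down sample response.

```python
import json
from urllib.parse import urlencode

# Hypothetical FHIR endpoint; real servers use the same URL pattern.
BASE = "https://fhir.example.org/r4"

def patient_search_url(family, given):
    """Build a RESTful FHIR search: GET [base]/Patient?family=...&given=..."""
    return BASE + "/Patient?" + urlencode({"family": family, "given": given})

# A server answers a search with a Bundle; this is a trimmed-down sample.
sample_bundle = json.loads("""
{"resourceType": "Bundle", "type": "searchset", "total": 1,
 "entry": [{"resource": {"resourceType": "Patient", "id": "12345",
            "name": [{"family": "Doe", "given": ["John"]}]}}]}
""")

ids = [e["resource"]["id"] for e in sample_bundle.get("entry", [])]
print(patient_search_url("Doe", "John"))
print(ids)  # ['12345']
```

This is exactly the kind of call a mobile app makes over HTTPS, which is why the hackathon apps come together so quickly.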

4.      Do NOT use it to replace HL7 v2 messaging – FHIR is like a multipurpose tool: it can be used for messaging, services, and documents, in addition to having a RESTful API, but that does not mean it is always the better “tool.” One of the traps several people fell into when the XML-based HL7 version 3 was released is that they started to implement new systems based on this verbose new standard because it “is the latest,” without understanding how it would effectively choke the existing infrastructure in the hospitals. Version 2 is how the healthcare IT world runs; it is how we get “there” today and how it will run for many more years to come. Transitioning away from v2 will be a very slow and gradual process, picking the lowest-hanging fruit first.

5.      Do NOT use FHIR to replace documents (yet) – EMR-to-EMR information exchange uses the clinical document standard CDA. There are 20+ document templates defined, such as one for an ER discharge, which are critical to meet the US requirements for information exchange; they are more or less ingrained. However, there are some applications inside the hospital where a FHIR document exchange can be beneficial. For example, consider radiology reports, which need to be accessed by an EMR, a PACS viewing station, possibly a physician portal, and maybe some other applications. Instead of having copies stored in your voice recognition system, PACS, EMR, or even a router/broker or RIS, and having to deal with approvals, preliminary reports, and addendums at several locations, it is more effective to have a single accessible FHIR resource for those. One more comment about CDA: there is a mechanism to encapsulate a CDA inside a FHIR message; however, for that application you might be better off using true FHIR document encoding.

6.      Profiling is essential – Remember that FHIR is designed (on purpose) to address 80% of all use cases. As an example, consider the patient name definition, which has only a few core components such as the family and given name. Just to put this in perspective, the version 2 person name has many more components (last, first, middle, prefix, suffix, degree, etc.). What if you need to add an alias, a middle name, or whatever makes sense in your application? You use a well-defined extension mechanism, but what if everyone uses a different extension? There need to be some common parameters that can be applied in a certain hospital, enterprise, state or country. Profiles define what is required, what is optional, and any extensions necessary to interoperate. I see several FHIR implementations in countries that did not make the effort to do this; for example, how to deal with Arabic names in addition to English names is a common issue in the Middle East, which could be defined in a profile.
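The Arabic-name example can be sketched with FHIR’s standard extension mechanism. The extension URL below is invented for illustration; the whole point of a profile is that a region would publish and fix one agreed URL so every system looks for the same extension.

```python
# Sketch of a FHIR HumanName carrying an Arabic representation via the
# standard extension mechanism. The extension URL is made up; a real
# profile would publish a fixed URL that all systems in the region use.
name = {
    "use": "official",
    "family": "Haddad",
    "given": ["Samir"],
    "extension": [{
        "url": "http://example.org/fhir/StructureDefinition/arabic-name",
        "valueString": "سمير حداد",
    }],
}

def arabic_name(human_name):
    """Return the Arabic form if present. Systems that don't know the
    profile simply ignore the extension and fall back to family/given."""
    for ext in human_name.get("extension", []):
        if ext["url"].endswith("arabic-name"):
            return ext["valueString"]
    return None

print(arabic_name(name))            # سمير حداد
print(arabic_name({"family": "Doe"}))  # None
```

Without an agreed profile, each vendor would pick its own extension URL and the names would not interoperate, which is exactly the problem described above.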

7.      Develop a FHIR architecture/blueprint – Start by mapping out the transactions as they pass through the various applications. For example, a typical MPI system today might exchange ADTs with 20-30 applications, meaning that it communicates patient demographics, updates, merges, and changes to that many systems. Imagine a single patient resource that makes all of those transactions obsolete, as the patient info can be invoked by a simple HTTP call whenever it is needed. Note that some of the resources don’t have to be created locally; a good example is the South Texas HIE, which provides a FHIR provider resource so you never have to worry about finding the right provider, location, and name, and whether he or she is licensed.

8.      Monitor federal requirements (ONC in the US) – Whether you like it or not, vendors may be required to implement FHIR to comply with new regulations and/or incentives, including certification. In order to promote interoperability, which is still challenging (an understatement), especially in the US where we still have difficulty exchanging information even after billions of dollars spent on incentives, ONC is eager to require FHIR-based connectivity. This is actually a little bit scary given the current state of the standard, but sometimes federal pressure can be helpful.

To repeat my earlier statement about FHIR implementation: yes, “it depends.” Proceed with caution, implement it first where the benefits are the biggest (mobile), don’t go overboard, and be aware that this is still bleeding edge and will take a few years to stabilize. If you would like to become more familiar with FHIR, there are several training classes and materials available, OTech is one of the training providers, and there is even a professional FHIR certification.

Saturday, October 6, 2018

PACS troubleshooting tips and tricks series (part 10): HL7 Orders and Results (report) issues.


In the last set of blog posts in this series I talked about how to deal with communication errors, causes for an image to be Unverified, errors in the image header or display, and worklist issues. In this blog I’ll describe some of the most common issues with orders and results impacting the PACS system.

Orders and results are created in an HL7 format, almost always in a version 2 encoding, with the most popular version being 2.3.1. A generic issue with HL7, not restricted to orders and results but applying to pretty much all HL7 messaging, is the fact that HL7 version 2 is only loosely standardized, meaning that there are many different variations depending on the device manufacturer and on the institution, which also makes modifications and changes to meet local workflow and other requirements.

The IHE Scheduled Workflow Profile provides guidelines on which messages to support and what their contents should be, but support for those profiles has been somewhat underwhelming. Therefore, having an HL7 interface engine such as Mirth or one of the commercial alternatives has become a de facto necessity to map the differences between different versions and implementations, and also to provide queuing capability in case an interface is down for a short period of time, so it can be restarted. Here are the most common issues I have encountered specifically related to orders and results as well as updates:

·        Patient ID mix-ups – There are several places in the HL7 order where the patient ID can reside, i.e. in the internal, external, MRN, SSN, or yet another field. As of version 2.3.1, HL7 extended the external Patient ID field to become a list including the issuing agency and other details. DICOM supports a “primary” Patient ID field and expects all of the others to be aggregated in the “other ID” field. Finding where the Patient ID resides, in which field, or in the list, can be a challenge.
·        Physician name – The most important physician from a radiology perspective is the referring physician, which is carried over from the order in the DICOM MWL and image header. For some modalities, however, such as special procedures or cardiology, there can be other physicians such as performing physicians, attending, ordering, and others as well as multiple listings for each category. Even so, despite the fact that the referring physician has a fixed location in the HL7 order, it sometimes might be found in another field and require mapping.
·        HL7 and DICOM format mismatch – Ninety-five percent of DICOM data elements have the same formats (aka Value Representations) as the corresponding HL7 data types; the 5% differences can create issues when not properly mapped and/or transformed. For example, the Person Name has a different position for the name prefix and suffix, and many more components in HL7. There can be different maximum length restrictions, possibly causing truncations, and the lists of enumerated values can differ, causing a worklist entry or resulting DICOM header to be rejected. An example is the enumerated values for patient gender, which in DICOM are M, F, O; the list for HL7 version 2.3.1 is M, F, O, U, and for version 2.5 it is even longer, i.e. M, F, O, U, A, N (see the HL7 definitions of these values). This requires mapping and transformation at the interface engine or MWL provider.
·        Report output issues – A report line is included in a so-called observation, aka OBX segment, as part of a report message (ORU). There is no standard on how to divide the report: some put, for example, the impression, conclusion, etc. in separate OBX segments, some group them together. In one case, an EMR receiving the report in HL7 encoding (ORU) only displayed the first line, obviously only reading the first OBX. Another potential issue is that a voice recognition system might use either unformatted (TX) or formatted (FT) text, and the receiver might not be able to understand the formatting commands.
·        Support for DICOM Structured Reports – Measurements from ultrasound units and cardiology are encoded as a DICOM Structured Report (SR). Being able to import those measurements and automatically fill them into a report is a huge time savings (several minutes for each report) and reduces copy/paste errors. However, not all voice recognition systems support SR import, and if they do, they might have trouble with some of the SR templates and miss a measurement here and there. Interoperability with SR is generally somewhat troublesome, and implementation requires intensive testing and verification, as I have seen some measurements being missed or misinterpreted. Some vendors also use their own codes for measurements, which requires custom configuration.
·        Document management – For long reports, it might be more effective to store them on a document management server and send the link to an EMR, or, if you want more control over the format, encode the report as a PDF and attach it to the HL7 message. In this case, you will need to support the HL7 document management transactions (MDM) instead of the simple observations (ORU).
·        Updates/merges, moves – Any change in patient demographics is problematic, as there are many different transactions defined in HL7, depending on the level of the change (person, patient, visit, etc.) and the type of change, i.e. move a patient, merge two records, or simply update a name or other information in a patient record. Different systems support different transactions for these.
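The name and gender mismatches described above can be sketched as a small mapping routine, the kind of transformation an interface engine or MWL provider performs. This is a simplified sketch; real interface engines also handle repetitions, escape sequences, and the many additional name components.

```python
def hl7_name_to_dicom(xpn):
    """Map an HL7 v2 person name (family^given^middle^suffix^prefix)
    to a DICOM Person Name (family^given^middle^prefix^suffix).
    Note the swapped positions of prefix and suffix."""
    parts = (xpn.split("^") + [""] * 5)[:5]
    family, given, middle, suffix, prefix = parts
    return "^".join([family, given, middle, prefix, suffix]).rstrip("^")

def hl7_sex_to_dicom(sex):
    """DICOM only defines M, F and O; fold the extra HL7 v2.5 codes
    (U = unknown, A = ambiguous, N = not applicable) into O.
    The fallback to O is a site-policy assumption for this sketch."""
    return sex if sex in ("M", "F", "O") else "O"

print(hl7_name_to_dicom("Smith^John^J^Jr^Dr"))  # Smith^John^J^Dr^Jr
print(hl7_sex_to_dicom("U"))                    # O
```

Sending the HL7 component order straight into DICOM without this swap is exactly how a “Dr” ends up displayed as a suffix on the viewer.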

In conclusion, HL7 messages vary widely, and interface engines and mapping are necessary evils. If you would like to create sample HL7 orders or results, you can use an HL7 simulator (parser/sender). The HL7 textbook is a good resource and there are also training options available.



PACS troubleshooting tips and tricks series (part 9): Modality Worklist issues


In the previous set of blog posts in this series I talked about how to deal with communication errors, causes for an image to be Unverified, and errors in the image header as well as display. This post will discuss the errors that might occur with the DICOM modality worklist.

A modality worklist (MWL) is created by querying a Modality Worklist provider, using the DICOM protocol, for studies to be performed at an acquisition modality. The information that is retrieved includes patient demographics (name, ID, date of birth, sex, etc.), order details (procedure code, the Accession Number identifying the order, etc.) and scheduling details (referring physician, scheduled date/time, etc.). This information is contained in a scheduling database, which is populated by receiving orders for the department in an HL7 format (ORM messages).
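To make the query side concrete, here is a sketch of the keys a modality might send in an MWL C-FIND request, shown as a plain dict of DICOM attribute names. This is flattened for readability: in real DICOM, attributes such as Modality and the station AE Title live inside the Scheduled Procedure Step Sequence, and a toolkit such as pydicom/pynetdicom would carry them in a Dataset over the network. Filled-in values are matching keys; empty values are return keys the provider should fill in.

```python
# Simplified MWL C-FIND identifier: which studies are scheduled for
# this CT today? (Attribute nesting omitted; station name is made up.)
mwl_query = {
    "ScheduledStationAETitle": "CT_MAIN",           # matching key: this scanner
    "Modality": "CT",                                # matching key
    "ScheduledProcedureStepStartDate": "20190318",   # matching key: today
    "PatientName": "",                               # return keys below
    "PatientID": "",
    "AccessionNumber": "",
    "RequestedProcedureDescription": "",
}

# Separate the matching keys (non-empty) from the return keys (empty).
matching = sorted(k for k, v in mwl_query.items() if v)
print(matching)
```

The narrower the matching keys, the shorter (and more relevant) the worklist that comes back, which is the root of several of the issues below.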

The worklist provider typically used to be hosted on a separate server, aka a broker or connectivity manager, but increasingly this function is embedded in a PACS, a RIS, or even an EMR that has a radiology package. Moving this function from the broker to these other systems is the source of several issues, as the original broker was likely rather mature, with a lot of configurability to make sure it matches the department workflow, while some of these newer implementations are still rather immature with regard to configurability.

The challenge is to provide a worklist with only those examinations that are scheduled for a particular modality, no more and no less, which is achieved by mapping information from the HL7 order to a particular modality. Issues include:

·        The worklist is unable to differentiate between the same modality at different locations – An order has a procedure code and description, e.g. CT head. As the HL7 order does not have a separate field for modality, the MWL provider will map the procedure codes to a modality, in this case “CT,” so a scanner can query for all procedures to be performed for modality “CT.” The problem occurs if there is a CT in the ER, one in cardiology for cardiac exams, one in main radiology, and one in the therapy department (RT). Obviously, we don’t want all procedures showing up on all of these devices. It might get even more complicated if a CT in radiology is allocated, let’s say on Fridays, to do scans for RT. We need to distinguish between these orders, e.g. by looking at the “patient class” being in- or outpatient, or the department, or another field in the order, and map these procedures to a particular station. The modalities will then have to support the “Station Name” or “Scheduled AE Title” as query keys.
·        The worklist can only query on a limited set of modality types – Some devices are not properly configured; for example, a panoramic x-ray unit used for dentistry should use the modality PX instead of CR, as the latter might group it together with all of the other CR units. The same applies to a Bone Mineral Densitometry (“DEXA”) device: it should be identified as modality BMD instead of CR or OT (“Other”). Document scanners should be configured to query for “DOC” instead of OT or SC (“Secondary Capture”), endoscopy exams need to be designated ES, and so on. The challenge is to configure the MWL provider as well as the modality itself to match these modality codes.
·        The worklist has missing information – A worklist query might not have enough fields to include all the information needed at the modality. In one particular instance I encountered, the hospital wanted to see the Last Menstrual Date (LMD), as it was always on the paper order. Other examples are contrast allergy information, patient weight for some modalities, pregnancy status, or other information. If the worklist query does not have a field allocated for these, one could map them at the MWL provider into another field, preferably a “comment” field, instead of misusing a field that was intended and named for a different purpose.
·        The worklist is not being displayed – There could be several reasons, assuming that you tested the connectivity as described in earlier blogs: there could be no match for the matching key specified in the query request, or the query response that comes back is not interpreted correctly. In one case a query response was not displayed on an ultrasound unit from a major manufacturer because one of the returned parameters had a value that was illegal, i.e. not part of the enumerated values defined by the DICOM standard for that field. In that case, I could only resolve the issue by capturing the responses with a sniffer and running them against a validator such as DVTK.
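The routing problem in the first bullet comes down to a mapping table. Here is a simplified sketch of how an MWL provider might route orders to stations; the station names and rules are invented, and real systems use vendor-specific, configurable tables, but the first-match logic is representative.

```python
# Hypothetical routing table: the first rule whose constraints all
# match wins. "E" is used here as the emergency patient class.
ROUTES = [
    {"modality": "CT", "patient_class": "E", "station": "CT_ER"},
    {"modality": "CT", "department": "RT",   "station": "CT_THERAPY"},
    {"modality": "CT",                        "station": "CT_MAIN"},  # default
]

def route(order):
    """Return the scheduled station name for an order, or None."""
    for rule in ROUTES:
        constraints = {k: v for k, v in rule.items() if k != "station"}
        if all(order.get(k) == v for k, v in constraints.items()):
            return rule["station"]
    return None

print(route({"modality": "CT", "patient_class": "E"}))  # CT_ER
print(route({"modality": "CT", "department": "RT"}))    # CT_THERAPY
print(route({"modality": "CT", "patient_class": "O"}))  # CT_MAIN
```

Each scanner then queries with its own Station Name or Scheduled AE Title and sees only the orders routed to it. The Friday-RT scenario would be handled by adding a date-dependent rule, which is why these tables need ongoing maintenance.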

MWL issues are tricky to resolve. It is highly recommended that one have access to the MWL provider configuration software; most vendors offer a separate training class on it. Be aware that the mapping tables need to be updated every time a new set of procedure codes is introduced, so this is an ongoing support effort. Configuration requires detailed knowledge of HL7 so you can do the mapping into DICOM.

To troubleshoot these issues, a modality worklist simulator can be very useful. There is a DVTK modality worklist simulator available for free and a licensed modality simulator from OTech.

In case you need to brush up on your HL7 knowledge, there is an HL7 textbook available and there are on-line as well as face-to-face training classes, which include a lot of hands-on exercises.

In the next blog post we’ll spend some time describing the most common HL7 issues impacting the PACS.


PACS troubleshooting tips and tricks series (part 8): DICOM display errors.


In the last set of blog posts in this series I talked about how to deal with communication errors, causes for an image to be Unverified, and header errors. This post will discuss the display errors that can be caused by incorrect DICOM header encoding.

When an image is processed for display, it goes through a series of steps, aka the pixel pipeline. Think of this pipeline as a conveyor belt with several stations, each station having a specific task, such as applying a mask to the image, applying a window width/level, a look-up table, annotations, or rotating or zooming the image. These “stations” are instructed by the information in the DICOM header, or by a separate DICOM file called a Presentation State.
There are two categories of problems: the first might be due to incorrectly encoded header instructions, and the second is incorrect interpretation and processing caused by a faulty software implementation. Here are the most common issues:

·        Incorrect grayscale interpretation and display – Images can be encoded as grayscale or color. Grayscale images are identified either as MONOCHROME2 in the header, which means that the lowest pixel value (“0”) is interpreted and displayed as black, or as MONOCHROME1, in which case the maximum pixel value (255 for 8-bit images) is interpreted as black. Typically, MR and CT are encoded as MONOCHROME2 and digital radiography as MONOCHROME1. However, there is nothing that prevents a vendor from inverting its data and using the other photometric interpretation. Anytime an image is displayed inverted instead of in its normal view, the MONOCHROME1/2 identification is the first place to look. I have seen problems where, after an upgrade, the software ignored the photometric interpretation, causing all of the CR/DR to be displayed correctly but inverting the CT/MR, or displaying the image correctly but inverting the mask or background.
·        Incorrect color interpretation and display – Color images can be encoded in several different manners; the most common one uses a triplet of Red, Green and Blue (RGB). However, DICOM allows one to use several others (CMYK, etc.) and also allows sending a color palette in the header, which the receiving workstation has to use to map the color scale. Palette color is used if the sender is very particular about the colors, such as in nuclear medicine, unlike color in ultrasound, where it is used to indicate the direction of blood flow (red/blue). Having many different color encodings increases the chance that a receiver cannot display one of them. I have seen this after a data migration, where some of the ultrasound images from a particular manufacturer did not display their color correctly on the new PACS viewer.
·        Failing to display a Presentation State – The steps in the pipeline dealing with image presentation (mask, shutters, display and image annotation, and image transformations such as zoom and pan) can be encoded and kept as a separate DICOM file stored with the study containing the images. Not every vendor implements all the steps correctly, and I have also seen implementations that only interpret the first Presentation State and ignore any additional ones.
·        Incorrect interpretation of the pixel representation – Some modalities, notably CT, can have negative pixel values (Hounsfield Units, or HU), indicating that the visualized tissue has an X-ray attenuation less than that of water, which is calibrated to be exactly 0 HU. Some modalities, especially CT and PET, also rescale all the pixel values. If the software does not interpret this correctly, the image display will be corrupted.
·        Incorrect interpretation of non-square pixels – Some modalities, notably US and C-arms, have “non-square” pixels, meaning that the x and y directions have a different resolution. The pixels need to be “stretched” through interpolation based on the aspect ratio; for example, if the ratio is 5/6, they need to be stretched in the y direction by a factor of 6/5, i.e. by 20%. If your images look compressed, which you’ll notice by the squeezed text or, in the case of a C-arm, because circles become egg-shaped, the software does not support non-square pixels. Except for looking kind of strange, it might not impact image interpretation.
·        Shutters incorrectly displayed – A shutter can be circular, with a defined radius and center point, or rectangular, with defined x,y coordinates, intended to cover collimated areas, which otherwise display as very white regions to the radiologist. I have seen implementations ignore the circular shutter, which makes the radiologist, who then has to look at the white space, very unhappy.
·        Overlay display issues – Overlays used to be encoded in the PACS database in proprietary formats, which is a big issue when migrating the data to another PACS system. And even if encoded in a DICOM-defined manner, there are several options, ranging from stand-alone objects, to bitmaps in the DICOM header, to overlays embedded in the pixel data field, to the worst case of being burned in, i.e. replacing the actual pixels with the overlay. If the overlays contain clinical information, e.g. a Left/Right indicator on the image, it is important to check how they are encoded, to make sure that when the data is migrated or read from a CD on another system, the user will be able to see them. The same applies to “fixing” burned-in annotations; don’t overlay a series of “XXX-es” in case the name was incorrect, as they might not be displayed in the future. The best way to get rid of incorrect burned-in annotations is to use an off-line image editing routine that functions as a “paintbrush” and replaces the pixel data.
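The photometric interpretation step from the first bullet is easy to illustrate. This is a pure-Python sketch for 8-bit pixels; a real viewer works on the full stored bit depth and applies the rest of the pipeline (rescale, LUTs, shutters, presentation states) around it.

```python
# One step of the pixel pipeline: honoring the Photometric Interpretation.
def apply_photometric(pixels, interpretation, bits_stored=8):
    """MONOCHROME2: 0 is black, so no change is needed for a 0=black
    display. MONOCHROME1: 0 is white, so invert before display."""
    if interpretation == "MONOCHROME1":
        max_val = (1 << bits_stored) - 1
        return [max_val - p for p in pixels]
    return list(pixels)

print(apply_photometric([0, 128, 255], "MONOCHROME1"))  # [255, 127, 0]
print(apply_photometric([0, 128, 255], "MONOCHROME2"))  # [0, 128, 255]
```

A viewer that skips this step, or applies it to the wrong modalities, produces exactly the inverted CR/DR or CT/MR images described above.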

The image pixel pipeline facilitates all the different combinations and permutations of the different pixel encodings, which in practice might not always be completely or correctly implemented. There is an IHE profile defined for this, called “Consistent Presentation of Images.” Check the IHE integration statement of your PACS to determine whether it is supported, meaning that the software implements a complete pipeline.

In addition, this profile has a detailed test plan and a set of more than 200 images with corresponding Presentation State files, which are available in the public domain and can be accessed from the IHE website under “testtools.” I strongly recommend that after the initial installation, and with each subsequent software upgrade, you load these images and check whether the pipeline works. These test images have different pixel encodings, with instructions in the header negating the pixel display; for example, an image might be MONOCHROME1 with an inverted LUT to be applied, displaying the same as if it were MONOCHROME2 with a regular, linear LUT.

Another good resource is the PACS fundamentals textbook that explains the pipeline in great detail. The next blog post will be on Modality Worklist issues.