Monday, June 3, 2019

Enterprise or encounter-based imaging workflow options.


As institutions start to incorporate their multiple imaging sources into an enterprise solution such as a Vendor Neutral Archive (VNA), they find that the biggest challenge is dealing with the different workflows used by non-radiology departments, which in many cases must be re-invented. There are many different workflow and integration options; in fact, I have identified more than one hundred different combinations as listed below. Hopefully these will converge to a few popular ones, driven by standardization and vendor support.

The traditional radiology and cardiology workflow has matured and is defined in detail by the IHE SWF (Scheduled Workflow) profile, which has recently been updated to SWF.b to incorporate PIR (Patient Information Reconciliation) and requires support of a more recent version of HL7, i.e. 2.5.1 (this was optional in the first version). PIR specifies the use of updates and merges for reconciliation, such as when a temporary ID was used and for “John Doe” cases.
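As a concrete illustration of the reconciliation PIR relies on, below is a hand-crafted HL7 v2.5.1 ADT^A40 (merge patient) skeleton that tells downstream systems such as a PACS or VNA to fold a temporary “John Doe” record into the real one; all identifiers and application names are made up for illustration:

```
MSH|^~\&|ADT|HOSP|PACS|HOSP|20190603083000||ADT^A40|MSG00012|P|2.5.1
EVN|A40|20190603083000
PID|1||123456^^^HOSP^MR||DOE^JANE||19800101|F
MRG|TEMP0099^^^HOSP^MR
```

The PID segment carries the surviving (correct) patient record and the MRG segment carries the temporary identifier that is merged away.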

The non-radiology and non-cardiology enterprise imaging workflows are also known as “Encounter-Based Imaging Workflow,” in contrast to the traditional “Procedure-Based Imaging Workflow” defined by the SWF/PIR IHE profiles mentioned above. The difference is that no order is placed prior to the imaging. Despite the lack of an order, we still need the critical metadata for the images, which consists of:

1.       Imaging context attributes (body part, acquisition info, patient and/or image orientation)
2.       Indexing fields (for retrieval such as patient demographics, study, series and image identifiers)
3.       Link(s) to related data (reports, measurements)
4.       Department/location/specialty information. This is an issue as some of these acquisition devices (e.g. ultrasound) can be used by different departments. It is not as easy as having a fixed MRI in radiology; now we have devices that can belong to different departments and be used in various locations (OR, ER, patient rooms, etc.)
5.       References to connect to patient folder especially for the EMR (patient centric access)
This assumes that the practitioner decides to keep the images, which is not necessarily always the case; a user might choose to discard some or all of the images depending on whether they need to be part of the permanent electronic patient record and/or need to be shared with other practitioners.

Assuming we want to archive the images, the first step is to figure out how we get access to the metadata. There are two different workflows:
1.       The user retrieves the meta-data first and then acquires the images
2.       The user first acquires the images and then matches them up with the metadata (typically at the same device).

The end result is the same, but the workflow is a little different, as the practitioner needs to make a query to get the data, which could be as simple as scanning a patient barcode or RFID tag, or doing a search based on the patient’s demographic data.
How is this information retrieval being implemented? There are several options:
1.       Use the DICOM Modality Worklist (DMWL) similar to the SWF profile. The DMWL in the traditional SWF includes the “What, Where, When, for Whom and How to Identify,” for example, performing a Chest PA X-ray (what), using the portable unit in the ER (where), at 7 am (when), for Mr. Smith (for whom), with a link to the order using the Accession Number and identifying it with Study UID 1.x.y.z (how to identify). In the case of the encounter-based imaging workflow, we only use the “for whom” and “where,” as the other information is not known.
Using only the patient ID and department, this DMWL variant is covered by the IHE Encounter-Based Imaging Workflow (EBIW) profile, which is geared towards Point of Care (POC) ultrasound (a minimal query sketch follows this list). The problem is that DMWL providers are not typically available outside radiology/cardiology, and that acquisition devices (think of an Android-based tablet capturing images or a POC US probe connecting to a smartphone) don’t typically support the DMWL client either.
2.       Use the Unified Procedure Step (UPS) worklist as defined in the IHE EBIW, which is basically a DICOMweb implementation of the traditional worklist, making it easier to implement, especially on mobile devices. The same issue applies here as to solution (1): who supports it? Note that this is not only an issue with the client software but also with the availability of the server, i.e. the worklist provider, which is somewhat of an unknown outside radiology/cardiology.
3.       Use HL7 Query as defined by the PDQ profile, either version 2 or 3.
4.       Use FHIR as defined by the PDQm profile. Note that one difference between V2 and FHIR is that the visit information carried in the traditional PV1 segment is now represented by the FHIR Encounter resource. So, when you think about encounters, think about visits in Version 2.
5.       Listen to HL7 V2 ADT messages, i.e. patient registration messages.
6.       Use an API, preferably web-based if you use a mobile device, direct into an EMR, HIS or ADT system.
7.       Do a DICOM patient information query (C-FIND) to a PACS database, assuming that the patient has prior images.
8.       Any other proprietary option.
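To make option 1 above concrete, here is a minimal sketch of an encounter-based DMWL C-FIND query keyed only on the Patient ID, using the open-source pydicom/pynetdicom libraries; the AE title, host, port and Patient ID are hypothetical and error handling is omitted:

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import ModalityWorklistInformationFind

ae = AE(ae_title="POC_US")
ae.add_requested_context(ModalityWorklistInformationFind)

# Only "for whom" (Patient ID) and "where" (modality) are known up front
query = Dataset()
query.PatientID = "123456"            # e.g. scanned from a wristband barcode
query.PatientName = ""                # return key: demographics come back
sps = Dataset()
sps.Modality = "US"
query.ScheduledProcedureStepSequence = [sps]

assoc = ae.associate("worklist.hospital.example", 104)   # hypothetical DMWL provider
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, ModalityWorklistInformationFind):
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.PatientName, identifier.PatientID)
    assoc.release()
```

A real device would then copy the returned demographics into the image headers before storing the study.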

The second step is that we need to add information to the metadata that was not provided by the initial metadata query, most notably the Accession Number. The Accession Number was originally intended to link an image or set of images with an order and the result (diagnostic report and subsequent billing). Even though there is no order, you’ll find that the Accession Number is critical, as it is used by the API from an EMR to a PACS and/or VNA to access the images, to link to the results and notes, to make the connection to billing, and to associate with study information (Study Instance UIDs).

A so-called “Encounter Manager,” an actor defined by IHE, could issue a unique Accession Number. This encounter manager could reside in a PACS, VNA, or broker. To make sure that these Accession Numbers are unique and different from other Accession Numbers, such as those issued by a RIS or EMR, most institutions use a prefix or suffix scheme. Note that the acquisition device does not have to deal with this Accession Number issue; a DICOM router could query for the Accession Number and automatically update the image headers before forwarding them to the PACS/VNA.
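A minimal sketch of that router step, assuming a hypothetical encounter-manager lookup and an “EI” prefix scheme (both invented for illustration), using the pydicom library:

```python
import pydicom

def lookup_accession(patient_id: str) -> str:
    # Placeholder for a call to the encounter manager (PACS/VNA/broker API);
    # the "EI" prefix keeps these numbers distinct from RIS-issued ones.
    return "EI" + patient_id[-6:].zfill(6)

ds = pydicom.dcmread("poc_us_image.dcm")          # hypothetical captured image
if not getattr(ds, "AccessionNumber", ""):        # only fill in when missing
    ds.AccessionNumber = lookup_accession(ds.PatientID)
    ds.save_as("poc_us_image_fixed.dcm")
# ...the router would then C-STORE the updated object to the PACS/VNA
```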
The next (optional) step is that an encounter might need a “dummy” order to be created, because many EMR or HIS systems cannot do any billing or recognize images that are created without an order, so in many cases an order is created “after the fact.”

The last step is to notify the EMR that images are available. There are several options for that as well:
1.       Create an HL7 V2 ORU (observation result) transaction as defined by the IHE EBIW (an illustrative message follows this list). This is probably the most common option as EMRs typically support the ORU.
2.       Create a HL7 V2 ORM with order status being updated.
3.       Create a DICOM Instance Availability Notification. This is actually used quite a bit (I have seen Epic EMR implementations that use it). The IAN carries more detailed information than the HL7 V2 options.
4.       Send a Version 2 MDM message which has the advantage that you can use it to provide a link to the images.
5.       Use the DICOM MPPS transaction.
6.       Use the (retired) DICOM Study Content Notification (still in use by some legacy implementations)
7.       Manually “complete” entry in the EMR
8.       Use proprietary API implementations.
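For illustration, here is a hand-crafted ORU^R01 skeleton along the lines of option 1, notifying an EMR that a point-of-care study is available; which OBR/OBX fields carry the accession number and Study Instance UID varies per site and EMR, so treat the field usage below as an assumption rather than a prescription:

```
MSH|^~\&|VNA|HOSP|EMR|HOSP|20190603120000||ORU^R01|MSG00034|P|2.5.1
PID|1||123456^^^HOSP^MR||DOE^JANE||19800101|F
OBR|1||EI000456|US1^Point-of-care ultrasound^L|||20190603115500
OBX|1|ST|STUDYUID^Study Instance UID^L||1.2.840.99999.1.2.3.4||||||F
```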

The scenarios described above assume that patients are always registered and encounters scheduled. It becomes more complex if there is no patient registration, such as a POC US being used by a midwife at a patient’s home. The same applies to emergency cases, e.g. when the device is used in an ambulance where the only information might be that the patient is a female in her 30’s, or at a disaster area or battlefield (“citizen-1”). In these cases, we need a solution similar to PIR to reconcile the images with information entered after the fact, resulting in updates and merges, typically done using HL7 transactions.
Another future complication will be the implementation of patient-initiated imaging, for example, if a patient has a rash and wants to send an image taken with his or her phone to a practitioner or sends images of a scar after surgery to make sure it is healing properly.

As you can see from the above, the challenge with Encounter-Based Imaging is that there are many implementation options; in theory one could multiply the number of options for each step and come up with many different combinations (2 times 8 times 8 results in 128 theoretically possible options).

IHE so far has addressed only two options, the ones for POC ultrasound and photos in the EBIW profile, which specifies DMWL for getting the demographics and an ORU for the results. In practice, a typical hospital might use 5-10 or so different options for the different departments. Hopefully a couple of “popular” options will emerge, driven preferably by new IHE profile definitions and supported by the major vendors. In the meantime, if you are involved with enterprise imaging, be prepared to spend quite a bit of time determining which option(s) best fit your workflow and are supported by your EMR/PACS/VNA/acquisition modality vendors. You might also need to spend a significant amount of time training your users on any additional steps necessary to fit your solution.












Thursday, April 25, 2019

Why you should attend SIIM 2019, June 26-28


One of the major issues facing healthcare imaging and informatics professionals is the lack of transparency in communication between modalities such as CT, MRI, ultrasound, CR and others, and the PACS/RIS and VNA. When images and related information such as dose information and measurements do not get across, when connections are rejected, or when changes and updates to the information are not propagated in a timely manner or at all, most PACS administrators are stuck between vendors who are finger-pointing at each other about the root cause.

Many vendors lock up access to their log files, requiring a (costly) service call to get someone to look at them, which takes time, assuming they even have the skills to do so. This has become the main reason why people are attending advanced training classes and why you should consider attending the annual SIIM conference this year.

Imagine that an image is rejected by the PACS, or you can’t read images from a CD for a patient who is scheduled for surgery and the surgeon really needs access to the patient’s CT study. The DICOM Validation Toolkit (DVTK) will validate the DICOM header and tell you what is incorrect so you can fix it with an editing tool. Imagine that your system randomly loses some of the images in a study. The Wireshark DICOM sniffer will allow you to see exactly what is happening and why images are being rejected. Imagine you have performance issues; the same Wireshark will show you the exact timestamps of the DICOM communication protocol and application-level responses. Imagine there is missing information in a modality worklist; the Mirth HL7 interface engine allows you to map it to a field that could not be recognized by the worklist provider, and, as an extra bonus, will store the HL7 orders in a temporary queue that can be restarted in case there are any hiccups. Imagine the radiology report has some formatting issues; again, Mirth will be able to solve these. By the way, all of these tools are free for you to use.

Given the increase in deconstructed PACS, VNAs that are connected to multiple PACS systems requiring constant synchronization, and the proliferation of zero-footprint viewers that can be launched from an EMR, integration is getting more and more challenging and requires complex skills. SIIM leadership has recognized that teaching advanced skills in how to use these tools fills a major need and has expanded the program for this year. There will be very hands-on sessions on using them to provide you with this advanced knowledge.

There are other reasons to attend SIIM as well, i.e. spending time with vendors to kick the tires, learning about the latest in AI, networking with your peers to share experiences, and last but not least enjoying the great Rocky Mountains. However, all of this is, in my opinion, minor compared to acquiring the necessary skills to make sure you can support your PACS in a professional manner.

So, this is a very good reason to attend this year; I am looking forward to seeing you in Denver at these advanced sessions!


Monday, March 18, 2019

Impact of the Philips-Carestream acquisition on the end users: Good, Bad?


Since the recent Philips acquisition of the Carestream IT business, I have received several phone calls and had discussions with both elated and very concerned end-users. Interestingly enough, the positive feedback was mostly from Philips PACS users and most concerns were expressed by Carestream clients.

The Philips users were mostly excited about the Carestream enterprise archiving and storage component, which hopefully will replace the proprietary Philips back end and be able to integrate better with enterprise archiving systems such as VNAs. It is no secret that the Philips proprietary image storage format works very well for the Philips workstation display, as it provides great (perceived) performance, but getting the data out of their system in a standard DICOM manner is not as easy. Synchronizing changes in the Philips archive with the VNA cannot be automated due to the lack of support for the IHE IOCM profile. It is very challenging to say the least, judging from the spike in attendance in our DICOM classes from Philips users who want to learn how to use DICOM network sniffers to find out when, where, and why certain studies are not exchanged between the Philips system and their VNA.

The strength of the Philips PACS is definitely its radiology worklist: radiologists really like its user friendliness, and PACS administrators like it because they can train a new user in 15 minutes, unlike with some of the other PACS user interfaces. This is important if, for example, you get a new batch of 15 residents to train every couple of months. So, the ideal match would be the Philips front end with the Carestream back end; however, that goes against the current trend of EMR-driven worklists for PACS.

From Carestream customers, I have heard mainly concerns that Philips might “contaminate” their current relationships and/or upset their support and service structure. When a larger company takes over a smaller one in this industry, it is rather common to see people leave, service centers consolidated (not always for the better), and support take a major dip. In addition, the product they are currently using or planning to purchase might become obsolete due to product consolidation, especially if the main objective of the acquisition was not the technology but buying the channel and the existing customer base.

So, what can we expect? Time will tell, but the good news is that both companies have a culture that is different from many other players in this field, which I know first-hand having worked for both Philips and Carestream’s predecessor (Kodak). Consequently, I have some level of confidence that this is going to be a good thing. But again, only time will tell, and in the meantime as a Philips or Carestream customer you might want to ask for solid guarantees from your suppliers and keep all options open.

Thursday, February 21, 2019

HIMSS19: Are we finally unblocking patient information?


Busier than ever: more than 45,000 visitors to the world’s largest healthcare IT conference, held in Orlando, Fla., browsed through 1,200 booths looking for IT solutions for their facilities and listened to the many educational sessions. There is still a dichotomy between what was shown and the real world: the IHE showcase demonstrated 12 use cases where information seamlessly flowed between different vendors, while it is not always so smooth in practice, based on stories from the trenches.
Here were my observations from this conference:

1.       Interoperability, are we there yet? The meeting was dominated by the recent information-blocking rule, which was released by the US Department of Health and Human Services (HHS) literally the day before the convention started. As Seema Verma, the US CMS administrator, pointed out in her keynote presentation, the government has given out US $36 billion in incentives to implement electronic health records with not much interoperability to show for it, so now it is time for the industry to step up.
Former US CTO Aneesh Chopra added that the CCDs (Continuity of Care Documents) that are exchanged right now might not be the best way to exchange patient information, and that we need to look at other means such as open APIs, which can be used to tap into any EMR for information. These open APIs will become a requirement by 2020 according to HHS. Penalties for health information exchanges and health information networks that lack interoperability could be up to $1 million. Maybe this will help; however, the rule is expected to get pushback from some of the stakeholders. For example, the AHA was quick to point out that it disagreed with certain parts of the requirements: “We cannot support including electronic event notification as a condition of participation for Medicare and Medicaid,” stated AHA Senior Vice President for Public Policy Analysis and Development Ashley Thompson.

2.       Open API, is that sufficient? An open API is merely a “connector” that allows information to be exchanged; however, as was noted in the same keynote speech, if the only thing that can be exchanged is the patient name, sex and race, or if the clinical information is not well encoded and/or not standardized, the API is not of much use.

That is why implementation guides based on use cases, specifying the many details of the information to be exchanged, are critically important. The good news is that these implementation guides are a key component of the new FHIR standard: they can be electronically interpreted and are defined according to a well-defined template. The Da Vinci project, which has already defined 12 of these guides as part of the balloted FHIR standard, will facilitate the exchange. The focus of these guides is on provider/payer interactions and includes, for example, medication reconciliation after discharge, coverage requirements information, and document templates and rules. The booth demonstrating these use cases was one of the busiest in the IHE showcase area.

3.       What about social determinants? Health care determinants follow the 20/20/60 rule: 20 percent of ailments are determined genetically, which can increasingly be predicted by looking at your DNA sequence; 20 percent are influenced by a healthcare practitioner such as your doctor; but 60 percent, i.e. the majority, is determined by the patient through his or her own actions and social determinants. For example, if you are genetically at risk for a heart condition, and your doctor has already placed a stent in one or more of your coronary arteries to help blood flow to your heart muscle, but you don’t change your lifestyle, you won’t get any better. Now, let’s say you are homeless and depend on food that is not good for your condition; you could be in trouble. It would be good if your physician knew those social factors, which could also include where you have traveled recently. However, there are no “codes” available to report this in a standard manner. The majority of health care determinants (60 percent) are not encoded; therefore, there is much work to be done in this area.

4.       How is information exchanged between providers? The ARRA (American Recovery and Reinvestment Act) from the previous administration had put money aside to establish public Health Information Exchanges (HIEs). Unfortunately, many of these HIEs folded after the grant money ran out; the HIEs in North Texas and Tennessee, among many others, shut down after failing to find a sustainable business model.
Several vendors took the initiative to establish a platform for information exchange, having figured out that stringing connections one-by-one between healthcare providers would be much more expensive than creating their own exchange; this is how the Commonwell non-profit started. As of the conference, it had 12,000 provider connections, which is probably 10 percent to 15 percent of all providers and a good start towards gaining critical mass. Cerner seems to be the largest EMR vendor in this alliance. Epic was notably absent; it has been the main driver of a somewhat competing alliance called Carequality, with different functionality but similar objectives, i.e. exchanging information between EMRs from different vendors at the providers.
The good news is that there is now a bridge established between these two platforms, which makes the critical mass even larger. This situation is somewhat unique to the US, as other countries have government initiatives for information exchange, but for countries without such an initiative the same model might work. This will hopefully solve the problem mentioned by one of the providers, who said that it has been relatively easy to exchange information between his EMR (which happened to be Epic) and others as long as it was an EMR from the same vendor, but very hard if not impossible to get anything from another vendor’s EMR into his. This is a great effort, which together with the anti-blocking rules from CMS might finally allow healthcare information to be exchanged.


5.       Are patient portals finally taking off? It is still a challenge to access health care information as there is no universal portal that collects all of the information from different providers. You might need to maintain access to the information held by your primary physician, your specialist(s), your hospital and even your lab work provider. One way to consolidate this information is to have a single provider, such as the VA for veterans, whose portal “My HealtheVet” has been relatively successful. How this is going to work as the VA increases its outsourcing to private commercial providers remains to be seen. If you are on Medicare or Medicaid, CMS provides a standard interface, which is used by several (free) patient portal providers where you can log in to see all of your claims, prescriptions and other relevant information. Again, if you do not happen to be a veteran or are not covered by CMS, but fall between these two organizations, there is not much interoperability; however, those two groups cover enough patients to start having a critical mass as well.

6.       Are cloud providers making any progress? One of the speakers, who happened to work for New York Presbyterian, claimed that they are planning to have 80 percent of their applications and infrastructure in the cloud. There are more advantages to the cloud than potentially reducing cost, such as easier access by patients and the potential to run AI algorithms for clinical support, analytics and decision support. Machine learning is more effective when there is a lot of data to learn from, hence the advance of cloud archiving.

The big three cloud providers (Amazon, Google and Microsoft) are more than happy to take your business; the Amazon booth was packed every time I passed it. However, they still have a steep learning curve: even though they claim to have open APIs and a healthcare platform with a lot of features, they have to learn in a few years the expertise that healthcare vendors have accumulated over many decades. The good news is that they have very deep pockets, so if they get serious about this business, they could become major players.

7.       Is Uberization of healthcare happening? When people talk about Uberization they typically refer to the business model that provides easy access by consumers through mobile media, accountability of the providers, and tapping into a completely new source of providers, such as private drivers who suddenly become transportation providers or, in the case of Airbnb, private home owners who become innkeepers.
This is happening in healthcare as well, as there is a big increase in telehealth providers who offer phone access to patients who want advice anytime, anywhere. For a physician, telehealth is as easy as using Uber: a physician can sign up anytime and just log on to take calls from patients for as long as he or she decides. As I listened to the experiences of one such physician, who works from his home, he described providing life-changing advice, especially to patients who live in remote areas and otherwise would not be able to seek medical advice.
In addition to telehealth services, Uber and Lyft were also promoting their transportation business to providers, to reduce potential no-shows by patients who have trouble with transportation. A provider can contract with either one of these to serve their transportation needs.
8.       What about the wearables? Apple introduced access to medical records through its health app at the 2018 conference through a FHIR interface. Their provider list has been growing steadily and is now close to 200, including the VA, which by itself accounts for 170 medical centers and more than 1,000 clinics. This is a significant number, but not even close to the number of providers on the Commonwell platform, for example, whose members exceed 10,000. Therefore, there is still a lot of progress to be made.
There was also an increase in intelligent sensors that can communicate vital signs and other clinical information via Bluetooth to a mobile device, for example to allow patients to be released from the hospital earlier and sent back home, which is safer and more cost effective.

9.       Are we safe from hackers yet? There has been a major increase in cyber security investment, which has become a necessity as healthcare has become a major target for hackers and ransomware opportunists.
Healthcare providers are making big investments in personnel and tools to try to protect themselves. The cybersecurity area of the conference was indeed huge, as many new companies are offering their services. Security typically accounts for about 6 percent to 8 percent of an IT budget, but some spend as much as 12 percent. Imagine the IT staff of a large organization: if you have 500 IT employees, there could be as many as 50-60 staff dedicated to cyber security.

Based on recent incidents, we still have a lot of work to do in healthcare to protect patient information, especially at the periphery, as medical devices in many cases appear to provide easy access points to a hospital’s back-end. This risk is one of the reasons that the FDA has been requiring a cyber security plan to be filed with every new medical device clearance.

10.   When are we going to get rid of the fax machines? About six months ago, CMS publicly announced that it wants to get rid of all fax machines in healthcare by 2020. This is only one year away; however, in practice it appears that by far the majority of all healthcare communication is still done through the ubiquitous fax machine. A good way to transition might be to exchange documents using the open API, and then use Natural Language Processing (NLP) to search for medications, allergies, and other important information that can be processed and potentially imported into the receiving EMR. In the meantime, there were still many small companies advertising smart ways to distribute faxes, and I predict that this will continue for several years to come.

In conclusion, this was yet another great conference. The emphasis was on unblocking the information that is locked up in many healthcare information systems and that, until now, can only be exchanged if you happen to have an EMR from the same vendor as its source, if you are lucky enough to have a provider with access to Commonwell or Carequality, or if your provider uses one of the relatively few public Health Information Exchanges (HIEs). Hopefully the industry and providers will start to cooperate on making this happen; we’ll see next year when we are back in Orlando, FL.

In the meantime, if you are baffled by some of the terminology, you might consider our FHIR, IHE or HL7 V2 training publications and classes.

Sunday, February 3, 2019

Cleveland 2019 IHE connectathon: it’s all about FHIR.


The 2019 IHE connectathon drew more than 300 healthcare IT professionals to a snowy and cold Cleveland to test interoperability between their devices using IHE-defined profiles. New this year were the recently published profiles facilitating the exchange of information using mobile communications based on the new FHIR standard, which builds on standard web protocols.
The FHIR testing emphasized querying for patient information, corresponding documents, and uploading and retrieving them. In addition, we tested the audit trails using FHIR, which is important from a security and privacy perspective.

There are still relatively few FHIR based implementations in the field due to the immaturity of the standard (most of it is still a draft), the lack of critical mass (implementing FHIR for just one application such as scheduling does not make sense), and its steep learning curve as it is sufficiently different from what healthcare IT is accustomed to. Therefore, the more that can be verified and tested in a neutral environment such as the connectathon the better it is. Speaking with the participants, they uncovered quite a few issues in their early FHIR implementations, which again is good as it is better to solve these issues beforehand than during an actual deployment.

The attendance at this year’s event seemed lower than in previous years, which could be due to the fact that there are several local connectathons during the year on other continents that draw from the same crowd, and that implementations are starting to mature (except for FHIR of course), so there is less need for debugging.

There could also be somewhat of a “standards overkill”: consider that for radiology alone, which is one of the 11 domains, there are 22 defined profiles and another 27 published as drafts. It is hard to keep up with all of these, and hard for vendors to deploy all of these new requirements.

As FHIR matures, which will take several years as its typical release cycle is 12-18 months, there will be a need to test these new releases between the vendors providing the medical devices and software products. There were a few new vendors present, but most were the same “crowd,” which is somewhat disappointing because I believe that the real FHIR implementation breakthrough will come from outsiders such as Apple, Amazon, Microsoft and/or Google, all of which were notably absent.

In conclusion, another very successful event, giving a boost to interoperability, something we very badly need as patients, especially in the US where we are still struggling to get access to our medical records, including images, and where healthcare institutions continue to have difficulty exchanging information among themselves and other providers. In many departments and medical offices, the fax machine still is an important tool, hopefully not for long.

Monday, December 10, 2018

RSNA2018: What’s in and what’s out.

The annual radiology tradeshow at McCormick Place in Chicago started with a little hiccup, as the Chicago airports closed down on Sunday due to a snowstorm, slowing the flow of attendees flying in on the second day to only a trickle. Note that the Sunday after Thanksgiving is the busiest travel day of the year, so it could not have come at a more inconvenient time. I myself was caught in this travel chaos, spending all of Monday in the Dallas airport while my plane was trying to get into the arrival queue for O’Hare.

The overall atmosphere at the show was positive, attendance seemed to be similar to last year and most vendors I talked with were optimistic. About a third of the attendees come to the meeting just for the continuing education offerings, but another third come to visit vendors and “kick the tires” and see what’s new. My objective is also to see what the new developments are and to do some networking to get an idea of what is going on in the industry. 

Here are my observations:

1.       Artificial intelligence dominated the floor – Over the past few years, AI has created some anxiety amid predictions that it would replace radiologists in the near future. That anxiety seems to have been relieved to a certain degree, but it has been replaced with a great deal of confusion about what AI really is, and with uncertainty about what the day-to-day impact could be.
A detailed description of the different levels of AI and the main application areas is the topic of an upcoming blog post, but it was clear that the technology is still immature. Despite the fact that there were 100+ dedicated AI software providers, in addition to many companies promoting some kind of AI in their devices or PACS, only a handful of them had FDA clearance. I also believe that the true impact of AI could be in developing countries that have a scarcity or even total lack of trained physicians. It is one thing to improve a physician’s detection of, let’s say, cancer by a few percent, but if AI could be used in a region that has no radiologists, then an AI application that can detect certain abnormalities would be a 100% improvement.
There could be some workflow improvements possible using AI in the short term, however, one should also realize that the window between conception and actual implementation could be 3-5 years. Users are not too anxious to upgrade their software unless there is a very good reason. So, in short, the AI hype is definitely overrated and I believe that we’ll almost certainly have autonomous self-driving cars before we have self-diagnosing AI software.

2.       Low dose CT scanning is becoming a reality – One of the near-term applications of AI allows the use of a fraction of the dose of a “normal” CT scan. Instead of a typical 40 mAs technique, acceptable images are created using only 5 mAs. This could have a major impact on cancer screening. The product shown did not have FDA clearance (yet), but there is every reason to expect that it can be available one year from now. The algorithm was created using machine learning on a dataset of a million images to identify body parts in lung CTs and subsequently reduce the noise in those images, which allows for a significant dose reduction, claimed to be 1/20 of the normal dose.

3.       Cone beam CT scanners are becoming mainstream – Cone beam CT scanners were initially used primarily for dental applications, where the resulting precision and high-resolution images, especially in 3-D, are ideal for creating implants. For ENT applications, however, such as visualizing cochlear implants and inner ear imaging, their high resolution and relatively low cost make them ideal. They are also very useful for imaging extremities; again, their high resolution can show hairline fractures well and is superior to standard x-ray. I counted at least 5 vendors offering these types of products; they are being placed in specialty clinics (e.g. ENT) as well as large hospitals.

4.       Point-Of-Care (POC) ultrasound is booming – POC ultrasound is getting inexpensive (between US $2k and $15k), which is affordable enough to put one in every ambulance, in the hands of every emergency room physician, and even with physicians doing “rounds” and visiting bedsides. There are different approaches for the hardware, each with its own advantages and disadvantages:
a.       Using a standard tablet or phone, an “app” is needed for the user interface, image display, and upload to the cloud and/or PACS. All of the intelligence is inside the probe. However, one of the complaints I heard is that the probe tends to be somewhat heavy and can get very warm.
b.       Using a dedicated tablet modified for this use, some of the processing load can be taken off the probe. If the probe is powered through the tablet, it saves on weight as well.
[Photo: Butterfly POC US, US$2k]
Other things to look for are whether a monthly fee is included, as several vendors use a subscription model; whether it has a cloud-based architecture (i.e. no stand-alone operation); and what applications it can be used for. Most of the low-end devices are intended for general use and have only one or two probes. If you need OB/GYN measurements, you might need to look at the high end (close to the US $10k-15k price range).
Also, uploading images into a PACS is nontrivial, as one needs to make sure they end up in the correct patient record of the PACS, VNA, EMR, etc. This is actually the number one problem, as each facility seems to deal with these so-called “encounter-based” procedures in a different manner. There are guidelines defined by IHE, but in my opinion with a very narrow scope.

5.       3-D Printing is becoming mainstream – A complete section at the show was dedicated to 3-D printing. Several vendors showed printers and amazing models based on CT images. The application is not only for surgery planning (nothing better than having a real-size model in your hands prior to surgery) but also for patient education, to share a treatment plan. I would caution, however, that the DICOM standard (as of 2018) includes a definition of how to exchange so-called “STL” models, but the work with regard to X3D/VRML models is ongoing. So, before you make major investments, I would make sure you are not locked into a proprietary format and interface.
There is not (yet) a large volume of these printed models. I talked with a representative of a major medical center who said they do about 5-10 a day, and another institution, a children’s hospital, does about 3 per week. It seems to me that creating orthopedic replacements might become a major application, but then we are not talking about models you can make with a simple printer that creates objects from nice colorful plastic, but rather one that can compete with current prosthetics based on titanium and other materials.

6.       Introduction of new modalities – Every year several new modalities are introduced that are very promising and could have a major impact on how diagnosis is done in a few years for particular body parts and/or diseases. One example is a new way to detect stroke using electromagnetic imaging of the brain. The images look very different from a CT scan, for example, but they give a healthcare worker the information needed to make treatment decisions. Another new device is a dedicated breast CT scanner providing very high resolution and 3-D display, and it is more comfortable for a woman than a regular mammogram. Note that these devices don’t have FDA clearance (yet), but as is common for these new technologies, they are deployed in Europe, and as soon as the FDA feels comfortable, they will be ready for sale in the US as well. One issue with these devices is that there is no real “predicate” device, so they need clinical trials to show their benefits.

Equally important to what’s new is also observing what’s “old,” because the technology has become mature, or it has made it beyond the “early-adopter” stage. This is what I found:

1.       PACS/VNA/Enterprise imaging – Over the past few years, PACS systems have become mature and are not much talked about. Most investments by institutions have gone to new EMRs, so there has not been much left over to upgrade the PACS. The result is that many hospitals run several years behind in upgrading and/or replacing their PACS, which hurts the most when they need to accommodate new modalities such as breast tomosynthesis (3-D) systems. One is forced to stick with proprietary solutions to make these work and/or use the modality vendor’s workstations to view them.

VNA implementations have also been spotty. Some work rather well, but some have major scaling and synchronization issues between the PACS and VNA. Enterprise imaging was touted the past two years as well, but because the lack of orders (see the discussion above about POC ultrasound) requires work-arounds, it has not really taken off as expected. New features are needed, such as radiation dose management, peer reviews, critical results reporting, and sophisticated routing and prefetching, which today are addressed by using third-party “middleware.”

2.       Blockchain – Blockchain technology has limited application in healthcare. The reason is that the bulk of healthcare information does not lend itself to being stored in a public “ledger.” It is nice that the information cannot be altered, but unless it is completely anonymized (which is still an issue, as there can be “hidden” information in private data elements, embedded in the pixels, etc.) and made available for research purposes, for example, there are not that many uses for this technology. As of now, some limited applications such as physician registries seem to be the only ones that are feasible in the short term.

3.       Cloud solutions – Google, Amazon and Microsoft are the big players in this market, but there are still very few “takers” for this technology. One of the reasons is the continuing press coverage of major hacks of corporations (500 million records from Marriott hotels being the most recent as of this writing) and reports of ransomware attacks on hospitals. Even though one could argue that the data is probably safer in the hands of one of the top cloud players than on some server in a local hospital, there is definitely a fear factor.

As an illustration, one of the participants told me that their hospital cut off all external communications, so there is no Internet at all on any hospital PC. I have seen many physicians Googling on their personal devices, such as a tablet or phone, instead, to search for information about certain diseases or cases. Despite the push from Google et al., we probably need some real success stories before this becomes mainstream. Note that what I call “private cloud” solutions, which are provided by dedicated medical software vendors, are doing better, especially for replacing CD image distribution and for allowing patients to access their images.

Overall, there was quite a bit to see and listen to at this year’s RSNA. Because the weather cut into my visit, I was barely able to cover everything I wanted to during the week. It was interesting to see how mature image processing techniques suddenly appeared as “major new AI” solutions, and how many of these are still in their infancy, which makes me believe that the immediate impact will be relatively small. I was more excited by the new modalities and inexpensive ultrasound devices, which will have a major impact.

I am hoping that next year some vendors will spend more effort going back to some of the basics, providing robust integration and workflow support for the day-to-day operations. We’ll see what will be new next year!


Tuesday, October 23, 2018

Should I jump into the FHIR right now?


I get this question a lot, especially when I teach FHIR, which is a new HL7 standard for electronic
exchange of healthcare information, as there seems to be a lot of excitement if not “hype” about this topic.

My answer is usually “it depends,” as there is a lot of potential, but there are also signs that it may be wise to wait a little bit, until someone else figures out the bugs and issues. Here are some considerations that could assist your decision to implement FHIR right now, require it for new healthcare imaging and IT purchases, or start using it as it becomes available in new products.

1.      Ninety percent of the latest FHIR standard is still in draft stage – That means that new releases will be defined that are not backwards compatible, so upgrades are inevitable, which may cause interoperability issues as not all new products use the same release. As a matter of fact, I experienced this first-hand during some hackathons when one device was on version 3 and the other on version 2, which caused incompatibilities. The good news is that some of the so-called “resources,” such as those used for patient demographics, are now normative in the latest release, so we are getting there slowly.

2.      FHIR needs momentum – Implementing a simple FHIR application, such as one used for appointments, requires several resources, for example patient demographics, provider information, encounter data, and organization information. If you implement only the patient resource and use “static data” for the remainder, which is subject to updates, changes, and modifications, in other words if you slice out only a small part of the FHIR standard, you don’t gain anything. Unless you have a plan to eventually move the majority of those resources to FHIR, and to upgrade as they become available, don’t do it. The US Veterans Administration showed at the latest HIMSS meeting how they exchange information between the VA and DOD using 11 FHIR resources, which allowed them to exchange the most critical information; implementing more than 10 FHIR resources is how you achieve critical mass.

3.      Focus on mobile applications – FHIR uses RESTful web services, which is how the internet works, i.e. how Amazon, Facebook and others exchange information. You get all of the internet security and authorization mechanisms for free; for example, accessing your lab results from an EMR could be as simple as using your Facebook login. The information is exchanged using standard encryption, similar to what is used to exchange your credit card information when you purchase something at Amazon. Creating a crude mobile app can be done in a matter of days if not hours, as is shown at the various hackathons. Therefore, use FHIR where it is the most powerful (see the sketch after this list).

4.      Do NOT use it to replace HL7 v2 messaging – FHIR is like a multipurpose tool: it can be used for messaging, services, and documents, in addition to having a RESTful API, but that does not mean it is a better “tool.” One of the traps that several people fell into when HL7 version 3 (which is XML based) was released is that they started to implement new systems based on this verbose new standard, because it “is the latest,” without understanding how it would effectively choke the existing infrastructure in the hospitals. Version 2 is how the healthcare IT world runs today and how it will run for many more years to come. Transitioning away from V2 will be a very slow and gradual process, picking the lowest hanging fruit first.

5.      Do NOT use FHIR to replace documents (yet) – EMR-to-EMR information exchange uses the clinical document standard CDA; there are 20+ document templates defined, such as for an ER discharge, which are critical to meet the US requirements for information exchange and are more or less ingrained. However, there are some applications inside the hospital where a FHIR document exchange can be beneficial. For example, consider radiology reports, which need to be accessed by an EMR, a PACS viewing station, possibly a physician portal, and maybe some other applications. Instead of having copies stored in your voice recognition system, PACS, EMR, or even a router/broker or RIS, and having to deal with approvals, preliminary reports, and addendums in several locations, it is more effective to have a single accessible FHIR resource for those (see the sketch after this list). One more comment about CDA: there is a mechanism to encapsulate a CDA inside a FHIR message; however, for that application you might be better off using true FHIR document encoding.

6.      Profiling is essential – Remember that FHIR is designed (on purpose) to address 80% of all use cases. As an example, consider the patient name definition, which covers only the common name components. Just to put this in perspective, the Version 2 name has many more components (last, first, middle, prefix, suffix, and so on). What if you need to add an alias, a middle name, or whatever makes sense in your application? You use a well-defined extension mechanism, but what if everyone uses a different extension? There need to be some common parameters that can be applied in a certain hospital, enterprise, state or country. Profiles define what is required, what is optional, and any extensions necessary to interoperate. I see several FHIR implementations in countries that did not make the effort to do this; for example, how to deal with Arabic names in addition to English names is a common issue in the Middle East, which could be defined in a profile.

7.      Develop a FHIR architecture/blueprint – Start with mapping out the transactions as they pass through the various applications. For example, a typical MPI system today might exchange 20-30 ADT feeds, meaning that it communicates patient demographics, updates, merges, and changes to that many applications. Imagine a single patient resource that makes all of those transactions obsolete because the patient info can be retrieved with a simple HTTP call whenever it is needed. Note that some of the resources don’t have to be created locally; a good example is the South Texas HIE, which provides a FHIR provider resource so you never have to worry about finding the right provider, location, or name, and whether he or she is licensed.

8.      Monitor federal requirements (ONC in the US) – Whether you like it or not, vendors may be required to implement FHIR to comply with new regulations and/or incentives, including certification. In order to promote interoperability, which is still challenging (an understatement), especially in the US where we still have difficulty exchanging information even after billions of dollars spent on incentives, ONC is anxious to require FHIR-based connectivity. This is actually a little bit scary given the current state of the standard, but sometimes federal pressure can be helpful.
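To make points 3 and 5 above concrete, here is a minimal Python sketch of the RESTful style FHIR encourages: fetch a Patient, list recent lab Observations, and store a radiology report once as a single shared DiagnosticReport. The endpoint, resource IDs, and the absence of OAuth2/SMART-on-FHIR authorization are simplifications for illustration, not a reference implementation:

```python
import requests

BASE = "https://fhir.hospital.example/R4"        # hypothetical FHIR server
HDRS = {"Accept": "application/fhir+json"}

# Point 3: a mobile app fetching demographics and recent lab results
patient = requests.get(f"{BASE}/Patient/123", headers=HDRS).json()
print(patient["name"][0]["family"])

labs = requests.get(f"{BASE}/Observation",
                    params={"patient": "123", "category": "laboratory",
                            "_sort": "-date", "_count": "5"},
                    headers=HDRS).json()
for entry in labs.get("entry", []):
    obs = entry["resource"]
    print(obs["code"]["text"], obs.get("valueQuantity", {}).get("value"))

# Point 5: store the report once, as a single shared resource, instead of
# copying it into the PACS, EMR, RIS, voice recognition system, etc.
report = {
    "resourceType": "DiagnosticReport",
    "status": "final",                           # or "preliminary", "amended"
    "code": {"text": "CT chest with contrast"},
    "subject": {"reference": "Patient/123"},
    "conclusion": "No evidence of pulmonary embolism.",
}
resp = requests.post(f"{BASE}/DiagnosticReport", json=report,
                     headers={"Content-Type": "application/fhir+json"})
print(resp.status_code, resp.headers.get("Location"))
```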

To repeat my earlier statement about FHIR implementation: yes, “it depends.” Proceed with caution, implement it first where the benefits are the biggest (mobile), don’t go overboard, and be aware that this is still bleeding edge and will take a few years to stabilize. If you would like to become more familiar with FHIR, there are several training classes and materials available (OTech is one of the training providers), and there is even a professional FHIR certification.

Saturday, October 6, 2018

PACS troubleshooting tips and tricks series (part 10): HL7 Orders and Results (report) issues.


In the previous posts in this series I talked about how to deal with communication errors, causes for an image to be unverified, errors in the image header or display, and worklist issues. In this post I’ll describe some of the most common issues with orders and results impacting the PACS.

Orders and results are created in an HL7 format, almost always in a Version 2 encoding, with the most popular version being 2.3.1. A generic issue with HL7, which is not restricted to just orders and results but applies to pretty much all HL7 messaging, is the fact that HL7 Version 2 implementations are not uniform, meaning that there are many different variations depending on the device manufacturer and on the institution, which also makes modifications and changes to meet local workflow and other requirements.

The IHE Scheduled Workflow profile provides guidelines on which messages to support and what their contents should be, but support for those profiles has been somewhat underwhelming. Therefore, having an HL7 interface engine such as Mirth or one of the commercial alternatives has become a de-facto necessity to map the differences between versions and implementations, and also to provide queuing capability in case an interface is down for a short period of time, so it can be restarted. Here are the most common issues I have encountered specifically related to orders and results as well as updates:

·        Patient ID mix-ups – There are several places in the HL7 order where the patient ID can reside, i.e. in the internal, external, MRN, SSN, or yet another field. As of version 2.3.1, HL7 extended the patient identifier field into a list that includes the issuing authority and other details. DICOM supports a “primary” Patient ID field and expects all of the others to be aggregated in the “Other Patient IDs” field. Finding where the Patient ID resides, in which field or where in the list, can be a challenge.
·        Physician name – The most important physician from a radiology perspective is the referring physician, which is carried over from the order into the DICOM MWL and image header. For some modalities, however, such as special procedures or cardiology, there can be other physicians, such as performing, attending, and ordering physicians, as well as multiple listings for each category. Even though the referring physician has a fixed location in the HL7 order, it sometimes might be found in another field and require mapping.
·        HL7 and DICOM format mismatch – Ninety-five percent of DICOM data elements have the same formats (aka Value Representations) as the corresponding HL7 data types; the 5% that differ can create issues when not properly mapped and/or transformed (a small mapping sketch follows this list). For example, the person name has a different position for the name prefix and suffix, and many more components in HL7. There can be different maximum length restrictions, possibly causing truncations, and the lists of enumerated values can differ, causing a worklist entry or the resulting DICOM header to be rejected. An example is the enumerated values for patient sex, which in DICOM are M, F, O; the list for HL7 version 2.3.1 is M, F, O, U, and for version 2.5 it is even longer, i.e. M, F, O, U, A, N (see the explanation of these values). This requires mapping and transformation at the interface engine or MWL provider.
·        Report output issues – A report line is included in a so-called observation, aka OBX segment, as part of a report message (ORU). There is no standard for how to divide the report; some put, for example, the impression, conclusion, etc. in separate OBX segments, while others group them together. In one case, an EMR receiving the report in HL7 encoding (ORU) only displayed the first line, obviously only reading the first OBX. Another potential issue is that a voice recognition system might use either unformatted (TX) or formatted (FT) text, and the receiver might not be able to interpret the formatting commands.
·        Support for DICOM Structured Reports – Measurements from ultrasound units and cardiology are encoded as DICOM Structured Reports (SR). Being able to import those measurements and automatically fill them into a report is a huge time saving (several minutes for each report) and reduces copy/paste errors. However, not all voice recognition systems support SR import, and if they do, they might have trouble with some of the SR templates and miss a measurement here and there. Interoperability with SR is generally somewhat troublesome, and implementation requires intensive testing and verification, as I have seen some measurements being missed or misinterpreted. Some vendors also use their own codes for measurements, which requires custom configuration.
·        Document management – For long reports, it might be more effective to store them on a document management server and send a link to the EMR, or encode the report as a PDF if you want more control over the format and attach it to the HL7 message. In this case, you will need to support the HL7 document management transactions (MDM) instead of the simple observations (ORU).
·        Updates/merges, moves – Any change in patient demographics is problematic, as there are many different transactions defined in HL7 depending on the level of change (person, patient, visit, etc.) and the type of change, i.e. move a patient, merge two records, or simply update a name or other information in a patient record. Different systems support different transactions for these.
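As a small illustration of the mapping/transformation mentioned in the “format mismatch” bullet above, here is a sketch of turning a PID segment into DICOM-friendly values; the field positions and the sex mapping choices are common conventions and assumptions for illustration, not a universal rule:

```python
# Map HL7 patient sex values outside the DICOM enumeration (M, F, O) and
# re-order the HL7 XPN name components into a DICOM PN string.
HL7_TO_DICOM_SEX = {"M": "M", "F": "F", "O": "O", "U": "", "A": "O", "N": "O"}

def pid_to_dicom(pid_segment: str) -> dict:
    f = pid_segment.split("|")                 # f[3]=PID-3, f[5]=PID-5, f[8]=PID-8
    patient_id = f[3].split("^")[0]            # first ID in the PID-3 list
    name = f[5].split("^")                     # HL7 XPN: family^given^middle^suffix^prefix
    name += [""] * (5 - len(name))
    family, given, middle, suffix, prefix = name[:5]
    return {
        "PatientID": patient_id,
        # DICOM PN order: family^given^middle^prefix^suffix
        "PatientName": f"{family}^{given}^{middle}^{prefix}^{suffix}",
        "PatientSex": HL7_TO_DICOM_SEX.get(f[8], ""),
    }

print(pid_to_dicom("PID|1||123456^^^HOSP^MR||DOE^JANE^Q^^DR||19800101|U"))
```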

In conclusion, HL7 messages vary widely, and interface engines and mapping are necessary evils.
If you would like to create sample HL7 orders or results, you can use an HL7 simulator (parser/sender). The HL7 textbook is a good resource, and there are also training options available.