Thursday, March 29, 2018

Critical measures to keep your PACS secure.

A recent article by a McAfee employee stated that more than 1,100 PACS systems are currently exposed to the public internet and are so easily accessible that intruders can retrieve and upload images with few obstacles. In doing research for his article, he did just that. The fact that certain image files were accessed is troublesome by itself. Even more troubling, medical data was changed. We wonder if anyone is paying attention at those sites, or if McAfee notified them.

According to HIPAA, if patient data is seen by someone who should not see it, federal law requires doctors, hospitals, and other health care providers to notify those patients of a “breach” of their health information. Patients in turn can file a complaint with the HHS Office for Civil Rights (OCR). But, independent of the privacy and possibly even ethical questions about this particular experiment, the fact remains that 7% of all PACS systems are completely unprotected.

What does it mean that a system is secure? According to a recent MITA white paper called “Cyber Security for Medical Imaging”: A device can be considered secure if it defends against unintended or unauthorized operation with respect to its intended environment and its intended use, as specified by its manufacturer. Therefore, providing security measures in both the device and the infrastructure is a key requirement.

Security for medical devices has also gotten the attention of the FDA. A premarket notification or 510(k) submission for a PACS system has to include a security assessment; otherwise, the device won’t be cleared for use by the FDA. The premarket submission requires several additional documents addressing the security aspects.

Security is not only a PACS issue. At the HIMSS 2018 conference that just wrapped up in Las Vegas, the organization released its annual cybersecurity survey of its 70,000 health IT professional members, showing that 75% of the respondents had experienced a recent significant security incident.

Now, back to what you can do to protect your PACS system. Below are the most important measures you can take to guard against internal and external security threats. The process consists of three major phases: Identify, Mitigate, and Monitor and Review.
1.   Identify the Threats

·        Inventory your systems – Know your “surface area” for attack. Many times the weak link is an open workstation at a specialist’s office or a modality hidden in an OR that nobody checked. Remember to include your wireless systems as well. Increased connectivity within the enterprise, with new specialties, point-of-care (POC) ultrasound units, and other imaging input and output devices being used by physicians, creates additional security challenges.

·        Conduct a security review – Engage an outside auditor to review your systems and conduct penetration testing. Audit your systems’ compliance with the HIPAA Security Rule and PCI standards. PCI applies to payment systems, not healthcare, but it offers many relevant and helpful guidelines. Even if you have a strong security team in-house, it helps to have an unbiased “outsider” review your security plan.

·        Manage your vendor access – Vendors typically access their products as part of their service contracts, which means you could have multiple access points into your PACS: for example, one for the PACS server vendor, several for your modalities, which can come from a couple of different vendors, and you might have third-party software for 3-D or other applications, as well as a speech recognition vendor, etc. Make sure that vendor access is limited to each vendor’s own device only and, most importantly, monitor it, i.e. check the audit trails and access logs. You want to know who accessed your devices and when, in case there are issues. It would not be the first time that a vendor corrupted a database while making changes, upgrading a software release, or installing patches without notifying the customer.
2. Mitigate the Threats

·        Switch off the promiscuous mode on your devices – Almost every PACS server has a “poor man’s” security mechanism, by which an unknown DICOM application, identified by a DICOM AE-Title that is not in its configuration file, is refused a connection at the application level, i.e. the Association. However, if you have configured your PACS to be “promiscuous,” meaning that it will talk to any AE-Title, it will connect and potentially allow the upload or retrieval of data. The advantage of operating in promiscuous mode is that you don’t have to change the configuration every time a new device needs to be connected; however, this is very poor practice.
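As a minimal sketch (plain Python, not a real PACS API; the AE-Titles are invented for illustration), the non-promiscuous check boils down to an allowlist lookup performed during association negotiation:

```python
# Sketch of a non-promiscuous association policy. ALLOWED_AE_TITLES stands
# in for the PACS configuration file; accept_association() mimics the check
# a server performs when a DICOM Association is requested.
ALLOWED_AE_TITLES = {"CT_SCANNER1", "WS_READING1", "ARCHIVE"}

def accept_association(calling_ae_title: str, promiscuous: bool = False) -> bool:
    """Return True if the incoming association should be accepted."""
    if promiscuous:  # talks to any AE-Title: convenient, but very poor practice
        return True
    return calling_ae_title.strip() in ALLOWED_AE_TITLES
```

With `promiscuous=False` (the recommended setting), an unknown AE-Title is refused before any images can be uploaded or retrieved.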

·        Manage access configuration – PACS systems often have some granularity with regard to these privileges; for example, a PACS might allow any device to query its database, but not allow images to be retrieved. If you look at some of the PACS user groups, this is probably one of the top-ten most asked questions: “Why can I not retrieve images from the PACS despite the fact that I can communicate with it?” The answer is that you need to add the device to the access configuration file. Manage this access and configuration yourself as a PACS administrator; don’t rely on the vendor to do this for you.

·        Map any connected device to its port and IP address – As part of the access configuration file, most PACS systems will also keep track of which device connects from what IP address and port number (hence the requirement to have fixed IP addresses when using the DICOM protocol). If a device with a known AE-Title suddenly tries to access the server from another IP address or a different port, the server should refuse the connection. Again, manage this configuration.
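The AE-Title-to-address mapping described above can be sketched as follows (plain Python; node names and addresses are invented for illustration):

```python
# Sketch: a known AE-Title must also call from its registered IP address
# and port, mirroring what a PACS access configuration file records.
KNOWN_NODES = {
    "CT_SCANNER1": ("10.0.1.20", 11112),
    "WS_READING1": ("10.0.2.15", 11112),
}

def accept_node(ae_title: str, ip: str, port: int) -> bool:
    """Accept only if AE-Title, IP address, and port all match the config."""
    return KNOWN_NODES.get(ae_title) == (ip, port)
```

A spoofed AE-Title calling from the wrong address fails this check even though the title itself is known.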

·        Use registered port numbers, not port 104! – For security, DICOM applications should run in user mode without root access. Port numbers below 1024 are privileged ports that require root access by the application. The Internet Assigned Numbers Authority (IANA) has assigned a standard “registered” port number of 11112 that should be used rather than the well-known port 104.

·        Secure your perimeter – Use standard IT security best practices to harden your exposure to the outside, such as:

a.     Use a professional firewall – A Stateful Inspection Firewall can analyze packets down to the application layer. The simple packet filtering firewall you find in most routers is not effective against a determined hacker.

b.     Deny-first – Restrict access with a deny-first firewall policy, then whitelist systems and IP addresses that need access.

c.     Use intrusion detection systems (IDS) – An IDS can spot hacking attempts in real time. It can log and alert you when suspicious actions occur, such as administrative credentials being used to log into the EHR system at 3:00 a.m.
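A toy version of such a rule, assuming an invented comma-separated log format (real IDS products consume structured event feeds), could look like this:

```python
from datetime import datetime

# Toy IDS-style rule: flag administrative logins outside business hours.
# The log format ("ISO-timestamp,user,action") is invented for illustration.
def flag_off_hours_admin_logins(log_lines, start_hour=7, end_hour=19):
    suspicious = []
    for line in log_lines:
        timestamp, user, action = line.split(",")
        hour = datetime.fromisoformat(timestamp).hour
        if user == "admin" and action == "login" and not (start_hour <= hour < end_hour):
            suspicious.append(line)
    return suspicious
```

Run against a day of logs, the 3:00 a.m. admin login is flagged while the 10:15 a.m. one passes.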

d.    Use Proxies and Routers – Proxy and router systems from vendors such as DICOM Systems, MedWeb, RamSoft, Osimis, Laurel Bridge and others sit between your PACS and outside systems. These proxies can automate encryption, authentication, anonymization, and more. They provide a security wrapper for DICOM devices to manage the limitations of the DICOM standard. When selecting a proxy system, it’s wise to use a professional vendor; don’t try to roll your own security.

·        Use VPNs for remote access – If you must allow direct access to PACS, use a Virtual Private Network (VPN). These can be complicated to implement and require a knowledgeable IT staff to manage. Don’t be tempted to skip this and trust a password to protect your open ports. Your password will be broken or leaked.

·        Segment your network – Segment your network using Virtual LANs (VLAN) and demilitarized zones (DMZ) where appropriate. Put your public-facing servers in a DMZ. An attacker gaining control of these servers will have limited surface area to attack your internal systems. Use VLANs to segment departments where possible. There is no reason for the shipping computers on the loading dock to have direct IP access to the DICOM network systems.

·        Control access by removable devices – It has been reported that 25 percent of malware is transferred through USB; therefore, you can largely eliminate this risk by simply not allowing unauthorized USB drives in secure networks. This protects you by restricting unauthorized data exfiltration and by cutting off a vector for malware intrusion. Disabling the USB ports is easy to do in Windows as an administrator, and you can disable USB removable media across the entire domain via Active Directory Group Policy. Remember that this applies to your service providers as well; the last thing you want is a service engineer who just picked up a virus at a “dirty” site using the same flash drive to infect your network (this happens!).
3. Monitor and Review
·        Check your audit trails – Audit trails for application-level access are a HIPAA requirement. If you never look at them, they might as well not be implemented. Regular audits help detect both internal and external access violations of patient data. There are no “standard” guidelines on how often to check the audit trails, but most people we talk with seem to do it once a week, checking random accesses from random people. Support for standard audit trails is important; IHE defines the ATNA (Audit Trail and Node Authentication) profile, which allows this information to be recorded in a standard format. Check the IHE integration statement of your PACS system. Further transformation of your audit trails into XES event logs can facilitate process mining.
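Beyond random spot checks, one simple programmatic screen is to flag users who touched an unusually large number of distinct patient records. This is a sketch: the event format is invented for illustration, whereas real ATNA audit events are structured syslog/XML messages that would first need to be parsed.

```python
from collections import defaultdict

# Sketch of an audit-trail screen: flag users who accessed more distinct
# patient records than a threshold. Events are (user, patient_id) pairs.
def flag_heavy_accessors(events, threshold=3):
    patients_by_user = defaultdict(set)
    for user, patient_id in events:
        patients_by_user[user].add(patient_id)
    return sorted(user for user, pats in patients_by_user.items()
                  if len(pats) > threshold)
```

The threshold would be tuned per role; a radiologist legitimately opens far more studies per day than a billing clerk.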

·        Rinse and Repeat – Go back to step 1. Inventory, audit and conduct reviews. This is a continuous process. You’ll never be finished, but you can be assured that you put the right practices in place.

Will these measures protect your PACS? As many security professionals will tell you, nothing is 100% secure. If someone really wants to get access to your data and/or modify it, there is likely a way. But instead of leaving the front door wide open and putting your family jewels on the kitchen table, you can at least lock the door, install a security system, and add another perimeter, so that potential hackers or intruders will go looking for an easier target.

Herman Oosterwijk is president of OTech and a PACS, DICOM, and HL7 trainer/consultant; David Finster consults on security and data protection best practices. We encourage comments.

Tuesday, March 20, 2018

DICOM Experts, Where Art Thou?

This past month alone, I got three inquiries from high-tech imaging companies looking for seasoned DICOM professionals: two positions are on the east coast (Boston), two in rural Arkansas, and, if you like skiing and hiking, there is a vacancy in Boulder, Colorado.

One of these positions does not even require US residency, as the company is willing to sponsor a work visa for qualified applicants. The reason these inquiries came to me is that literally thousands of students have gone through the OTech DICOM training over the past 25 years, and therefore I have a large base of “alumni” among my Facebook and LinkedIn friends.

This poses the question, what is an expert anyway? My first source is always the (un)-official source of truth, i.e. Wikipedia:
Historically, an expert was referred to as a sage (Sophos): a profound thinker distinguished for wisdom and sound judgment. Informally, an expert is someone widely recognized as a reliable source of technique or skill whose faculty for judging or deciding rightly, justly, or wisely is accorded authority and status by peers or the public in a specific well-distinguished domain.
The next question would then be, how to define a DICOM “expert?” To define his or her skills, I like to refer to the official DICOM certification for professionals, which is managed and administered by PARCA. The requirements for this certification include knowing:
1.     Negotiation – How DICOM connections (Associations) are being negotiated and established, i.e. the handshake and agreement on the type of images to be exchanged and encoded such as compression. Note that “images” mean any DICOM file, including dose reports, measurements, presentation states containing overlays etc.
2.     Messages and data elements – How DICOM metadata (literally “data about data”), aka the DICOM header that is part of each DICOM file, is encoded and can be interpreted.
3.     Storage and Image management – That DICOM protocol services include the capability to query a worklist at a modality, allow for images to be exchanged, get a commitment from an archive about its permanent storage and can communicate study status and changes to the procedure using “Modality Performed Procedure Step.”
4.     Print, Query/Retrieve and compression – There are still a lot of DICOM printers, especially in emerging and developing countries, communicating with the DICOM print protocol, while Query/Retrieve is the interface to a PACS database/archive. Compression negotiation specifies which compression schemes are supported, such as JPEG, JPEG 2000, MPEG, and others.
5.     DICOM Media – Reliable CD interchange is still a major headache and pain point for many institutions; if only everyone would follow the DICOM standard closely, it would be much easier. One should be familiar with how images are stored on a CD, i.e. as so-called “part-10” files, and how the DICOMDIR, or directory, is structured.
6.     Image quality and Structured Reports – DICOM defines a so-called pixel pipeline which specifies all the steps that the pixel data goes through prior to being displayed, such as different greyscale/color schemes, annotations, Look-Up Tables, etc. Displaying the images on a monitor that is calibrated using the DICOM-defined standard greyscale and color mapping is critical to ensuring that every discrete pixel value is mapped to a distinguishable greyscale or color value. Structured Reports are used for measurements, CAD marks, dose information, key images and other information related to image metrics.
7.     VR’s and conformance – A VR or Value Representation defines the data types, i.e. maximum length and encoding of the DICOM data elements. Knowing where and how to evaluate these allows for spotting errors, the most frequent being exceeding maximum length, invalid codes in the fields, invalid characters, etc. Conformance is critical as it allows checking whether two DICOM devices can communicate using the conformance statements.
8.     Networking – This includes addressing, i.e. use of IP address, port number, and AE-Title, using tools such as DICOM network sniffers as well as interpreting the communication logs and dumps.
9.     Troubleshooting – To troubleshoot DICOM connections, one would use simulators and test tools. The most basic tool is the DICOM Verification service; one should also be able to use multiple test images, such as those for testing the imaging pipeline, and to change negotiation parameters.
10.  New DICOM extensions – There are several DICOM extensions, such as the specifications “for processing,” aka raw data, which is typically used to perform CAD; the definition of the new multi-frame enhanced CT, MR and other image types; the Unified Worklist; and the new pathology image definition. Last but not least is DICOMweb, which uses RESTful services, is mostly used for mobile access and through web browsers, and is the counterpart of the HL7 FHIR services.
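To give a feel for items 2 and 7, here is a minimal sketch of decoding a single data element by hand, assuming the Explicit VR Little Endian transfer syntax and a short-form VR; long-form VRs such as OB, OW, and SQ use a different header layout and are not handled here, and a real application would use a DICOM toolkit instead.

```python
import struct

# Decode one DICOM data element: tag (group, element: 2+2 bytes, little
# endian), 2-byte VR code, 2-byte length, then the value itself.
def read_element(buf):
    group, elem = struct.unpack("<HH", buf[:4])
    vr = buf[4:6].decode("ascii")
    (length,) = struct.unpack("<H", buf[6:8])
    value = buf[8:8 + length]
    return (group, elem), vr, value

# Patient's Name (0010,0010), VR "PN", length 8, value "DOE^JOHN"
raw = bytes([0x10, 0x00, 0x10, 0x00]) + b"PN" + struct.pack("<H", 8) + b"DOE^JOHN"
```

Feeding `raw` to `read_element` recovers the tag (0010,0010), the VR "PN", and the value bytes; spotting a length field that exceeds the VR's maximum is exactly the kind of conformance error item 7 refers to.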
As you can see, there is quite a bit involved with being a “DICOM expert.” If you feel like honing your skills, you might want to check out available textbooks, training or pursue certification. If you feel you would qualify for one of the “expert positions,” feel free to forward your resume and I’ll be happy to share it with those inquiring about hiring.

Monday, March 12, 2018

HIMSS 2018: Wake-up call for the sleeping giants.

As I browsed through the vendor exhibits among the more than 45,000 healthcare IT professionals gathered in Las Vegas last week for HIMSS 2018, I noticed that the big IT giants Amazon, Google, and Microsoft (Apple was noticeably absent from the exhibit floor), as well as other businesses in the CRM space (Salesforce), are finally taking notice of the opportunities in healthcare. It was also no coincidence that Eric Schmidt, former chairman of Alphabet, Google’s parent company, was the keynote speaker for the conference. I believe this is very promising, as healthcare in many ways is far behind other industries and can learn from their experiences.

As anecdotal evidence of the need for better technology in healthcare, I listened to a presentation from a vascular surgeon who explained how he annotates relevant images on a PACS viewing station, then takes a picture of the screen with his iPhone and shares it via chat with his residents and surgical team to prep for surgery. The reason he has to use his phone is that we don’t yet have the “connectors” that tie these phones, tablets, and other smart devices to our big, semi-closed healthcare imaging and IT systems. The good news is that Apple just announced an interface allowing information exchange, which can be used, among other things, for patients to access their medical information from a hospital EMR. Google Cloud also announced an open API.
Here are my top observations from HIMSS2018:

Demonstration of new Apple app accessing health records
·       Patients are taking control of their medical information: Apple announced a FHIR based interface on the iPhone that provides access to personal health records. The interface is built into the recent Apple phone as part of its health app. Information such as recorded allergies, medications, lab results, etc. is copied to the person’s phone. Note that this is different from solutions where this information is stored in the cloud (e.g. Google, etc.).
Regardless, it allows patients to access and keep their own information. It provides a mechanism for patients to share the information, as the hospitals are struggling to meet that demand (only one out of three hospitals can share information according to a recent AHA study, despite the fact that more than 90% of them use electronic health records).
In reverse, it is not that hard to upload this information back into an EMR of a physician or a specialist, together with information collected from blue-tooth enabled blood pressure, pacemaker, insulin pump, and other intelligent healthcare devices as well as wearables. At the IHE interoperability showcase demonstration areas, there were several demonstrations of how this upload can be achieved using standard interface protocols, often using FHIR.

·       FHIR is gaining more traction: The new HL7 protocol allowing easy access, especially by mobile devices, to so-called resources such as lab results, reports, and also patient information is getting more traction. However, there is still a big disparity between what is shown as “works in progress,” such as the demonstrations at the IHE interoperability showcase, and what is actually deployed. Almost every use-case demonstrated at the showcase had one or more FHIR elements, such as patient information access, uploading images or labs, accessing registries, etc. However, when I asked vendors on the exhibit floor where they had deployed the FHIR interface, many of them told me that yes, they have their FHIR interface available, but they are still waiting for the first customer to actually use it.
There are a couple of exceptions, for example, the Mayo Clinic is using FHIR to access diagnostic reports through the EPIC FHIR interface, but such deployments are still very few. One of the major obstacles with FHIR implementations is that it has taken HL7 a long time (five years to date) to get to a standard with at least some normative parts, which will be Release 4, to be balloted soon. This means that any implementation you do right now is subject to change, as upgrades are not backwards compatible; the Apple FHIR interface, for example, is based on Release 2. So, I am officially upgrading my FHIR implementation status from “very limited” to “spotty,” but I believe that there is definitely a lot of potential.
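To give a feel for what a FHIR resource looks like on the wire, here is a Patient resource trimmed to a few fields; the structure follows the published FHIR Patient resource, but this particular record and its values are invented for illustration.

```python
import json

# A FHIR Patient resource trimmed to a few fields. In a real deployment
# this JSON would be retrieved with an HTTP GET on .../Patient/<id>
# (plus OAuth2 authorization), not hard-coded.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Doe", "given": ["John"]}],
  "birthDate": "1970-01-01"
}
"""

patient = json.loads(patient_json)
display_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
```

The appeal for mobile apps is exactly this: plain JSON over HTTP that any platform can parse, instead of the heavier V2/V3 messaging interfaces.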

Demonstration of VA-to-DOD gateway based on FHIR technology
·       The VA is making major strides in healthcare interoperability: I feel compelled to call out the US Department of Veterans Affairs because there is a push to shift some of its care to the private sector, while research shows that the VA scores higher than the industry on many quality measures, even though, yes, there is still a lot of disparity between the different VA facilities. The high quality of care is in no small part due to the early implementation of electronic medical records and the ability to be paperless. But their current medical record system is becoming out of date, hence the intention to replace it with a new EMR at a cost of about $10 billion over the course of the next 10 years. Nevertheless, in many ways their current system still outshines what can be achieved today by commercial vendors.
As a case in point, there is a FHIR-based connection between the VA EMR and the one from the DOD that allows for a smooth transition of veteran data between these two entities. What is significant is that of the many resources that FHIR has defined (more than 100 up to now, with plans to reach about 150), the VA is able to exchange all of the information needed with only a very few, notably Patient, Imaging Study, Questionnaire, Observation, Clinical Impression, Diagnostic Report, Encounter, Condition, Composition, Allergy and Medications. This means that implementing a relatively limited subset can still be very effective. Hopefully their replacement EMR (Cerner?) will offer the same kind of interoperability, which seems to be a point of contention right now in the contract negotiations.

·       The big EMR companies are doomed (or are they?): This millennium has seen a major shift in healthcare IT; over the past ten years the share of US hospitals with an electronic record has gone from 10% to more than 90%.
However, these monolithic, semi-closed systems, which accumulate all the patient information in big databases that are hard to access, offer limited tools for dashboarding and quality metrics, and often charge a hefty fee for yet another interface to get information in or out, might be on their way out unless their vendors change their architecture and focus. For what it’s worth, even the White House is taking notice, as Jared Kushner mentioned during the conference that “Trump has a new plan for interoperability.”
Let’s look at an analogy on how other industries solve the information access problem, for example, a website for a hotel. If you would like to find directions to the hotel, you click on a link to Google Maps, if you want to know what the local sightseeing tours are, you click on “tripit”, for reviews you click on “Tripadvisor”, and so on.
Now let’s go back to our ideal EMR user screen. Wouldn’t it be nice if you could get the patient information from a “source of truth,” i.e. a web-accessible source for patient information; the latest lab results from the lab, either internal and/or external; the past six months’ progress on a weight loss program from the patient’s Fitbit in the cloud; diagnostic reports from the radiology reporting system; and so on? And by the way, arranging transportation for the patient is just another click on the Uber or Lyft app (note the announcement from Allscripts that it will embed a Lyft interface in its EMR).
The EMR would be a mash-up of multiple resources accessible through standard protocols (FHIR), in some cases guaranteed immutable, using blockchain technology, and the only functionality left would be a temporary cache and workflow engine that guides health care practitioners through their job in a very easy to use manner.
User friendliness, especially, still leaves a lot to be desired: a recent study showed that during an average patient visit, providers spent 18.6 minutes entering or reviewing EHR data on digital devices, and only 16.5 minutes in face-to-face time with patients. We’ll see what happens over the next five years and who will win and who will lose, but it appears that FHIR might facilitate a disruptive development.

Standing room seats only for blockchain
·       Blockchain has some (limited) applications in healthcare. I purposely did not mention blockchain in the title of this write up so as not to overload my ISP as I found it to be the most hyped (according to the dictionary: “extravagant or intensive publicity or promotion”) subject of the conference. Presentations on this subject went beyond standing room only.
What is blockchain? It is an immutable, decentralized public ledger that can be used to securely share transactions without a central authority. Knowing that most of a patient’s health information is not intended to be public, and that some of the files (think a 1.5GB digital pathology slide) are just too big to simply move around and copy multiple times, the applications for blockchain in healthcare are very limited in scope. The immutable aspect is also hard to accomplish, even for objects or entities that you might think are immutable, such as a patient/person.
Imagine that you store patient information in a blockchain (e.g. a URL and a “fingerprint” or “signature” of the data); can you really guarantee that there would be no changes? Some of the content might need to be updated, such as a “disease status” in case someone dies or a different name in case a woman marries, and it is not uncommon anymore for a patient to change sex.
Apart from the “content,” the structure might change as well, due to database changes such as allowing storage of multiple middle names, aliases, etc. Some of these solutions such as providing a unique, immutable person identification, will be resolved by other industries anyway as financial institutions have a lot of interest in making sure that they provide credit to “real persons” and identify if a financial transaction is requested by the actual person instead of a hacker or intruder.
There are, however, a few blockchain candidates for healthcare. One example was shown at the recent RSNA show, dealing with certification and accreditation of physicians, which should be public and from a reliable source. Another example deals with consents, so that a healthcare provider can trust that patient information may be shared with, for example, a parent or caretaker, and knows which parts of the record can be shared and which not (e.g. limiting access to mental illness records or to the fact that a 16-year-old daughter uses contraceptives). So, in conclusion, yes, there are some limited applications for blockchain technology; many of them we can “borrow” from other industries, and some of them we can implement for medical purposes, but in practice they will be few.
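The “URL and fingerprint” idea mentioned above is easy to sketch: the ledger would store only a hash of the (off-chain) record, so any later edit to the record is detectable. A minimal illustration with Python’s standard library, using an invented consent record:

```python
import hashlib
import json

# The ledger would store fingerprint(record), not the record itself, so
# the bulky and private data stays off-chain while edits remain detectable.
def fingerprint(record: dict) -> str:
    canonical = json.dumps(record, sort_keys=True)  # stable serialization
    return hashlib.sha256(canonical.encode()).hexdigest()

record = {"patient": "example", "consent": "share-with-caretaker"}
stored = fingerprint(record)               # what would go on the chain

record["consent"] = "share-with-nobody"    # tampering off-chain...
tampered = fingerprint(record) != stored   # ...is detected
```

Note this only proves the record changed; it says nothing about which version is correct, which is exactly why mutable data such as patient demographics fits the model so poorly.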

·       Healthcare is learning from CRM companies: According to one of the major CRM companies, Salesforce, Customer Relationship Management (CRM) is a technology for managing all your company’s relationships and interactions with customers and potential customers. Replace the word “customers” with “patients” and you have a perfect system that allows a healthcare institution to manage their patients in a better manner. That is why not only Salesforce but other companies (I saw a demo at Microsoft) are using the CRM core to provide patient management solutions.

·       Artificial Intelligence is making modest progress: It would not be right not to mention AI in this report, as it is in the top ten tweets about the conference. However, machine learning and Artificial Intelligence are still not as easy as one might think. Some researchers estimate the IQ of intelligent machines right now to be equivalent to that of a four-year-old. But, as of now, machines are unbeatable at chess and Jeopardy, so there are definitely some applications that can benefit from AI. Examples are predicting ER re-admission rates of certain patients and taking action accordingly, assisting a physician to make a better diagnosis, or, even better, ruling out any findings with almost 100% accuracy, which would assist in routine screenings. In addition to the technology having to become more mature, there is also an issue with data access: I talked with one user who is in charge of manually entering textual data from old records in a structured format, and much of the accessible data is not very structured. There is so much emphasis on AI that some companies are re-branding their whole healthcare business around it (think IBM: Watson Health), which seems like overkill to me. But AI will silently enter many applications where it can impact workflow and enhance diagnosis and clinical outcomes.

Yes, I want the Vespa
·       HIMSS is still an IT tradeshow: Imagine walking around the RSNA (radiology conference) and being asked if you want to enter a $200 drawing, watch a magician perform, or enter a drawing for a motorcycle. It would be unthinkable, but it is still common at HIMSS, which indicates that the show gears towards a different audience than clinicians. In contrast with last time, however, I did not see any showgirls on the floor this year for photo-ops, so the only decision I had to make was whether to enter the motorcycle or the scooter drawing. Having driven a Vespa myself when I was young, it was not a hard choice for me.

In conclusion, this was another great event, with some hype as usual, but I found the promise of “outsiders” getting involved in the business of healthcare especially encouraging. A “fresh look” from these companies, applying some of the practices that make our lives easier when we are not sick, could definitely improve patient care when we are. There is no reason that financial transactions can move freely between banks, so that I can go to an ATM any place in the world and access my account, while my physician has trouble getting timely lab results, medications, allergies and other pertinent information. I can’t wait for the sleeping giants to not only wake up but get actively involved and make an impact.

Herman Oosterwijk is a healthcare imaging and IT trainer/consultant. If you would like to learn more about new standards, in particular FHIR, check out the upcoming web training and in-depth face-to-face training.