Tuesday, December 31, 2013

RSNA 2013: My top 10 on what’s new and what’s old, part 2.

It is the second day of the RSNA radiology tradeshow in Chicago. I am flagging down a cab to get to my 7 am breakfast meeting in the Hilton Towers, which is on my way to the conference at McCormick Place. After this meeting, we hurry into another cab to get to our courses, lectures, and appointments at Starbucks. I always keep my phone close by to look for texts from those I am going to meet, who might be stuck waiting for the bus, held up by another appointment, or called into a last-minute customer meeting that has a higher priority than meeting with yet another consultant like myself.

Walking the floors...
Meeting with many professionals and walking the floors is how I find out what’s new. Having commented on what was truly new in gadgets, products, and services in part 1 of this series, here are my observations on what is not really new, meaning that I have seen it in previous years, but that it has either reached a new level of maturity or added significant improvements or new features this time around. I therefore label these as “new and old.” I’ll address any noteworthy “truly old” developments in part 3. So, here is my next top ten (or maybe 20):

Hopefully, your workflow looks less complex
      1.       Workflow: Workflow is an “old” subject. Vendors have been talking about it for many years; however, it still seems to be a major struggle. When I do my informal survey of the top three issues facing customers and vendors, workflow is almost always the number one issue. One would think that after 20-plus years of PACS implementations it would have been addressed and solved; however, nothing is further from the truth. Why are we still struggling with this subject, and why is it that, according to many of my fellow consultants, 60 percent of hospitals run at a sub-optimal level and could achieve major improvements in efficiency if they would take the time to look at what they have and what they could have? In my opinion, there are four reasons they don’t do this:

·         Institutions and staff don’t take the time to do long term planning. Every PACS administrator I’ve talked with is in the middle of a PACS upgrade, or having trouble keeping up with all the changes required for Meaningful Use, or is just busy fixing studies and addressing other burning issues. Note that this is only from the ones I talked with who were sent to the RSNA, not those who did not have time or could not get the funding to travel to this event. No one seems to take time out to sit down with all parties involved and see how the workflow can be improved.
·         There is a big lack of knowledge of the clinical workflow among the people managing these systems. PACS systems were initially managed by radiology staff, with a PACS administrator reporting to the radiology administrator, who can properly set priorities and understands the workflow implications of changes, upgrades, and new interfaces. The PACS is no longer just a radiology project but has become an enterprise activity, managing images from multiple departments and specialties, and support has migrated to IT for both the hardware (which has been relocated to the central computing center) and the software (helpdesk support, etc.). The most common complaint I hear is that access to the servers is now limited: a simple reboot, or just checking status and/or files on the servers, is close to impossible because PACS administrators are locked out of the main computer center, and any support call now has to go through another two layers before it reaches the person involved. Needless to say, knowledge of clinical workflows is greatly watered down when support is provided by people with a purely IT background.
·         IHE recommendations and profile definitions are still being ignored. One of the first profiles defined by Integrating the Healthcare Enterprise (IHE) is called “scheduled workflow.” It was subsequently followed by an “unscheduled workflow” profile definition addressing the case when incomplete patient data is available, such as for trauma cases. Standard DICOM and HL7 transactions were refined, and options eliminated as much as possible, to arrive at a rigid and well-defined sequence of transactions. This allows changes in procedures and patient updates to be automated, and studies to be automatically verified by the modality, keeping modality and workstation worklists synchronized. However, when I ask my students in our PACS training classes how many have implemented Modality Performed Procedure Step and Storage Commitment, which are essential components of the scheduled workflow profile and are used to streamline the department workflow, I get either blank stares or, at best, confirmation from maybe 10 percent of the audience that they have implemented them (for those who are curious what these transactions look like in practice, see the sketch right after this list). One of the reasons goes back to my first comment, i.e. people being too busy with day-to-day activities to take the time to sit back and look at how to fundamentally change their workflow to really make use of the technology they have available.
·         It is not just about the PACS anymore. Assuming that an institution has a well-functioning PACS, there are several other subsystems that have to work correctly, such as critical results reporting and ER discrepancy reporting. In addition, to ensure proper quality of care, one should also address the peer review process. With regard to critical results reporting, there is nothing more frustrating to a radiologist than having to track down a physician to follow up on a critical finding marked code red when there has just been a shift change, when the finding occurs at the end of the day, or when the patient has already left the ER, sent home by the physician who missed the critical issue. Integration with this growing number of systems is still lacking (see the integration comments below as well).
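For those who are curious what the scheduled workflow transactions mentioned above actually look like on the wire, here is a minimal sketch of a modality worklist query using the open-source pynetdicom and pydicom libraries. The host name, port, and AE titles are made-up placeholders, and the query keys are just an illustration, not a complete IHE-conformant attribute set.

```python
# Minimal sketch of one scheduled-workflow building block: a DICOM Modality
# Worklist C-FIND query using pynetdicom/pydicom.
# The host name, port and AE titles are placeholders, not real systems.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import ModalityWorklistInformationFind

ae = AE(ae_title='TEST_MODALITY')
ae.add_requested_context(ModalityWorklistInformationFind)

# Query keys: today's scheduled CT procedures
query = Dataset()
query.PatientName = ''
query.PatientID = ''
item = Dataset()
item.Modality = 'CT'
item.ScheduledProcedureStepStartDate = '20131203'
query.ScheduledProcedureStepSequence = [item]

assoc = ae.associate('worklist.hospital.example', 104, ae_title='RIS_SCP')
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, ModalityWorklistInformationFind):
        if status and status.Status in (0xFF00, 0xFF01):   # "pending" = a match was returned
            print(identifier.PatientID, identifier.PatientName)
    assoc.release()
else:
    print('Association rejected or aborted - check AE titles, port and firewall')
```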

There are still vendors who claim that their archive is a VNA, despite the fact that they merely have a simple DICOM Store (level 1)
2.       VNA: The implementation of VNAs has moved from what the Gartner consulting group calls the Technology Trigger, past the Peak of Inflated Expectations, into the Trough of Disillusionment. In layman’s terms, it did not meet the expectations of the initial hype, as people found that a VNA implementation has some major challenges.

I found three types of VNA vendors. The first group are those who embrace it with both arms, as they see that a full level 5 implementation (see the related white paper on the different levels) gives them a strategic advantage, and they seem to be very well positioned to address customer needs.

The second group are the laggards who are trying to catch up by adding the functions needed for a full VNA, such as synchronization between multiple PACS systems and the VNA, full-featured routing and pre-fetching, tag morphing, information life-cycle management, and support for a uni-viewer and HIE connectivity. These vendors have obviously underestimated the demand and are working hard to catch up. Many of them are learning the hard way by deploying VNAs prematurely, resulting in all types of workflow and other issues.

The third group are those who, interestingly enough, are still ignoring the need to offer a full-fledged VNA and think they can just put another label on their existing PACS archive, hoping that clients will not look through the marketing smoke and mirrors and recognize that this is just the same old thing. One vendor, when I asked what I could do for him, told me jokingly that he wished I could take the VNA away. Unfortunately I don’t have a magic wand, and even if I had, I would not want to use it to turn the clock back 10 years, as VNAs are here to stay, and addressing the issue of providing a true enterprise image information and management solution using open standards is essential.

HIE's look great on paper but are still far from 100% operational
3.       HIE and XDS: The establishment of Health Information Exchanges (HIEs), either private or public, has been a major part of the US government’s initiatives to facilitate information exchange, reduce unnecessary duplication of tests, and make observations and results widely available to healthcare practitioners. A key requirement for an HIE is support for the Cross-Enterprise Document Sharing standards, aka XDS (with XDS-I for imaging). XDS is not rocket science, as it is based on existing standards; however, there are very few implementations of it. The reason appears to be (according to the people I spoke with during this event) a gross misunderstanding of how it works, what it does, and what is needed in the existing infrastructure to support it. Hopefully, a better understanding through training and education by the IHE committee and other third parties (here is a shameless plug for the training provided by OTech) will change this.

"big-foot" vs "zero-footprint"
       4.       Uni-viewer: Last year’s zero-footprint viewer has been re-labeled as a uni-viewer. However, I would expect that a uni-viewer not only has the typical zero-footprint characteristics, such as leaving “no trace behind” after a physician logs off, but can also display other specialties such as dentistry, ophthalmology, obviously cardiology, and in the future pathology. I have seen one uni-viewer that also supports the new “enhanced” CT, MR, cardiology, and angiography image specifications (aka DICOM SOP Class support) and even digital mammography tomosynthesis objects. For these large objects, server-side rendering seems to be the best solution, as transferring one of those studies to a local cache would take too much time. Many of these viewers are positioned to connect to a VNA, using, for example, the new DICOM web-based protocols, and provide a patient-centric rather than a department-centric view to physicians. Interestingly enough, I talked with one institution where the radiologists have taken notice of the fact that physicians have ready access to patient studies from multiple locations through this uni-viewer on top of the VNA, and they want the same functionality. They don’t quite realize that the PACS has rather sophisticated workflow manager functionality that interfaces with their radiology viewers and provides a worklist that is synchronized between multiple readers and maintains the status of when studies are read and reported. However, as one person commented, as soon as the VNA is able to provide this functionality, the “PACS might be dead.”
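To make the “new DICOM web-based protocols” a little more concrete, here is a small sketch of how a viewer back end could query and retrieve studies from a VNA over DICOMweb (QIDO-RS and WADO-RS). The base URL and patient ID are made-up placeholders, and a real deployment would of course also require authentication.

```python
# Sketch: patient-centric query/retrieve over DICOMweb (QIDO-RS / WADO-RS).
# The base URL and patient ID are placeholders for a VNA exposing these services.
import requests

BASE = 'https://vna.hospital.example/dicomweb'

# QIDO-RS: search for all studies of one patient, returned as DICOM JSON
studies = requests.get(
    f'{BASE}/studies',
    params={'PatientID': '123456'},
    headers={'Accept': 'application/dicom+json'},
).json()

for study in studies:
    study_uid = study['0020000D']['Value'][0]          # Study Instance UID
    # WADO-RS: retrieve the native DICOM objects for that study
    resp = requests.get(
        f'{BASE}/studies/{study_uid}',
        headers={'Accept': 'multipart/related; type="application/dicom"'},
    )
    print(study_uid, resp.status_code, len(resp.content), 'bytes')
```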

The more "bubbles" the more complex
5.       System integration: The first radiologist I ran into at the RSNA meeting mentioned that poor integration is his main issue. He said he is reading images from multiple facilities using a so-called worklist aggregator, which communicates with the several PACS vendors and provides a composite worklist to his workstation. This software needs to integrate with his PACS systems, his voice recognition system, and the RIS. In addition, the system interfaces with the EMR to export the reports as well as the images. In total, therefore, he has four systems that need to be integrated. It used to be two (PACS and RIS), then voice recognition was added, and now the EMR as well. It appears that the level of integration and the associated complexity have reached their upper limit, as he is struggling with several interface and integration issues. Just coordinating the synchronization of these systems with regard to upgrades is likely a major issue; for example, if the PACS is upgraded, it has to be tested and verified against four other systems. Imagine if he had an additional peer review system, a critical results reporting system, and a dose reporting system as well. I don’t know if that is going to be manageable; it seems that the limit of systems integration is about to be reached.

Note integrated phone and capability to access images from the bedside monitor
6.       Bedside integration: Anyone who has visited a hospital room lately has seen the COWs (Computers On Wheels) that are used by the nursing staff to record vitals and other information about a patient. Increasingly, these clinical features are integrated with patient resources and entertainment at the bedside through a stationary monitor. One vendor demonstrated a terminal that can show not only your movie on demand, but also pull up a patient record from an EMR, including any corresponding images, record vitals, and even allow for a teleconference with a physician or nurse using a small camera (which can be physically covered when privacy is needed). A wireless keyboard is provided as well.
Gesture controls for view stations (note game controller on top of monitor)

       7.      The use of gaming controls to view images: Last year I saw an adaptation of one of the gaming consoles to use gestures to control a monitor. This year there were several demonstrations, both as part of the scientific exhibits and built into commercial products. This application is primarily for use in conference rooms and for teaching whereby a physician can remotely manipulate images.


Note small rectangular autoscroll hardware device in between mouse and computer
       8.       Auto-scrolling through image stacks: Gesture controls will not help much for day-to-day use by radiologists in their regular reading. However, a new auto-scroll device might help them, as many are suffering from wrist issues from using the mouse and/or trackball day in and day out. This device, which simply connects between the mouse or trackball and the computer, will automatically scroll through the images, which is especially useful for axial image sets such as those from CT and MRI. Hopefully, this will prevent some cases of carpal tunnel syndrome in the future.

Demonstration of a surgery procedure
        9.       Virtual anatomy tables: Also this year, multiple vendors were showing what is known as the “virtual pathology table,” which is basically a large touch-screen display lying flat, built into a demonstration table, that allows a physician to manipulate data typically based on a CT or MRI 3D data set, thereby performing a “virtual autopsy.” This is a great teaching tool, and it can also be used for forensic applications: even if a person has already been buried, a previously acquired CT scan can still be examined in case there are follow-up investigations.

        10.   Dose: The registration of radiation dose has become even more important, as several states have enacted legislation, or have bills pending approval, that requires dose registration for CT scans as a minimum, and potentially for other X-ray exams in the future. Every institution in the affected states is scrambling to establish a set of policies and procedures matching the technical capabilities that are available with additional software solutions. Unfortunately, existing systems have to be upgraded to export the well-defined DICOM structured reports that contain the dose information, which means that in the meantime several vendors have implemented band-aid solutions that rely on screen-captured dose overview information, which has to be interpreted using OCR (Optical Character Recognition) software. There seems to be a consensus that this information should be stored at the patient level, for example with his or her personal health record; however, that infrastructure is not (yet) in place, and therefore many store the information in an EMR, or even a PACS or RIS, where it obviously does not belong. It will take a few more years for dose registration to become ubiquitous and seamlessly integrated into the regular workflow.
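For sites that do receive the well-defined DICOM structured reports mentioned above, extracting the dose values is straightforward; below is a minimal sketch using the pydicom library. The file name is a placeholder, and a production dose-registration tool would of course map the coded concepts much more carefully.

```python
# Sketch: walk a DICOM Radiation Dose Structured Report (RDSR) with pydicom and
# print the numeric content items (CTDIvol, DLP, etc.). 'dose_sr.dcm' is a placeholder.
import pydicom

ds = pydicom.dcmread('dose_sr.dcm')

def walk(items, depth=0):
    """Recursively print coded concept names and their measured values."""
    for item in items:
        name = ''
        if 'ConceptNameCodeSequence' in item:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
        if item.get('ValueType') == 'NUM' and 'MeasuredValueSequence' in item:
            mv = item.MeasuredValueSequence[0]
            units = mv.MeasurementUnitsCodeSequence[0].CodeValue
            print('  ' * depth + f'{name}: {mv.NumericValue} {units}')
        if 'ContentSequence' in item:
            walk(item.ContentSequence, depth + 1)

walk(ds.ContentSequence)
```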

In conclusion, the most heard “old topics” were VNA and uni-viewers, workflow, and HIE and XDS. These are still immature and will need several years to come to fruition; I am sure we will see them again at next year’s tradeshow. In the meantime, look for part 3 of this series coming up soon.

Top 10 practical IT skills every PACS administrator (IIP) should have.

Most imaging and information professionals (IIPs) who are taking care of PACS and EMR systems have some type of IT background or, if they started their careers in the clinical field, have taken courses in this area or learned on the job. The IT knowledge required in a particular job often depends on whether there is a strong IT department that supports the healthcare imaging and information systems to supplement the IIP’s skills. Regardless of the scope and strength of the internal IT support, however, it is always good not to have to rely on an external person who is often in another department, possibly outsourced or centralized, and only available through yet another help-desk person. Being able to troubleshoot and diagnose fundamental problems is therefore often invaluable, especially if there is “fingerpointing” going on, such as “it is not the network,” or “my system works fine.”

Here is my list of basic skills that every IIP should have based on discussions with the many PACS administrators attending our PACS training and derived from questions I have seen in the many PACS user forums:

Overview of the basic skills required:

1.       Network diagnostics: IIPs should be able to isolate whether a problem is related to the network and its infrastructure (switches, routers, etc.), including bandwidth problems, or whether it is caused by an application or device. The most common networking problem is caused by cable cuts, so being able to find out if a connection is still live is invaluable (ping). The second most common problem is that IP addresses might be re-assigned or expired, so being able to check a device’s IP address is important (ipconfig). Performance and routing issues can be diagnosed with tracert, netstat checks whether a port is still open or might have been closed by a well-intentioned IT person, and nslookup allows you to troubleshoot any potential DNS issues. Knowing the command to renew an IP address is also important.
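As an illustration, the sketch below simply wraps the same command-line tools in a small Python script so that a first-line check can be run consistently; the host name is a placeholder, and the exact flags differ between Windows and Linux.

```python
# Quick first-line network check wrapping the command-line tools mentioned above.
# The host name is a placeholder; flags differ per OS (e.g. 'ping -n' vs 'ping -c').
import platform
import subprocess

def run(cmd):
    print('>', ' '.join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

host = 'pacs-archive.hospital.example'     # device under suspicion (placeholder)
windows = platform.system() == 'Windows'

run(['ping', '-n' if windows else '-c', '3', host])          # is it reachable at all?
run(['nslookup', host])                                      # does DNS resolve correctly?
run(['tracert' if windows else 'traceroute', host])          # where does the route break?
run(['netstat', '-an'])                                      # which ports are open/listening?
run(['ipconfig', '/all'] if windows else ['ip', 'addr'])     # local address configuration
```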

2.       DICOM connectivity diagnostics: Assuming that the network is OK, the next step is to look for a connectivity issue with imaging devices using the DICOM protocol, by analyzing any accessible log files. A typical DICOM connection always starts with connection (aka Association) negotiation, which is executed by an Association Request and a corresponding Accept or Reject. A reject would definitely raise a red flag. The second part of the connection is the actual DICOM command Request (Store, Find, etc.) with its Response and, most important, its return status code (success or failure for whatever reason). You would want to look for any non-success status being returned. Lastly, there should be a Release Request and Accept finishing the protocol. Being able to test negotiation of the connection, send out a test command (“Echo”), and successfully release the connection using DICOM Verification is invaluable. Most devices have this feature available from a service or tools menu, and some even have it accessible from the GUI; look for “DICOM Echo,” “Verification,” “DICOM ping,” or simply “Test” to perform this function.
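If the device itself does not expose a “DICOM Echo” button, a stand-alone verification can also be scripted; below is a minimal sketch using the pynetdicom library, with a placeholder address, port, and AE titles.

```python
# Sketch: stand-alone DICOM Verification (C-ECHO) using pynetdicom.
# Address, port and AE titles are placeholders for the device under test.
from pynetdicom import AE

ae = AE(ae_title='TEST_SCU')
ae.add_requested_context('1.2.840.10008.1.1')      # Verification SOP Class UID

assoc = ae.associate('pacs.hospital.example', 104, ae_title='PACS_SCP')
if assoc.is_established:
    status = assoc.send_c_echo()
    print('C-ECHO status: 0x{:04X}'.format(status.Status))   # 0x0000 means success
    assoc.release()
else:
    print('Association rejected, aborted or never connected')
```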

3.       DICOM header knowledge and fixing capability: Assuming that the DICOM connection works, i.e. it passes the test under (2), there could be a problem with an image that is misidentified, i.e. information is missing from its header or meta-data, or certain information in the header potentially jeopardizes the data integrity of the receiving system because of duplicates or contradictions in the identifiers. This can cause images to be flatly rejected, or to be accepted by a receiver but end up “unverified” or “broken.” Most of these cases are caused by incorrect data entry, for example by using an already existing Accession Number, misspelling a patient ID, incorrect patient selection by a technologist, etc. Most imaging and information systems have an elaborate set of tools to fix, merge, and split studies to solve these issues. However, there are some error cases that cannot be fixed with the commonly available tools. These can occur when trying to import an external study from a CD or, which is increasingly common, when migrating up to millions of records from one PACS system to another. An IIP should be able to recognize the problem (e.g. a duplicate SOP Instance UID, or an Institution Name that exceeds the maximum number of characters) and be able to use tools such as OT-DICE or DVTK-edit, which are typically stand-alone and allow for fixing these headers. Note: it is recommended that you use a tool that is reliable and known to NOT create corrupt DICOM headers, as some of them do, and to make only changes that you feel comfortable with, as some of those changes, notably to the UIDs, could potentially impact the data integrity of the system.
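As a simple illustration of what such a stand-alone fix looks like under the hood, here is a sketch using the pydicom library for the two example problems mentioned above. The file names are placeholders, and, as stressed in the text, UID changes should only be made when you are certain they will not break data integrity.

```python
# Sketch: inspect and repair two example header problems with pydicom.
# File names are placeholders; handle UID changes with extreme care.
import pydicom
from pydicom.uid import generate_uid

ds = pydicom.dcmread('problem_image.dcm')
print(ds.PatientID, ds.AccessionNumber, ds.SOPInstanceUID)

# Fix 1: Institution Name exceeds the 64-character limit of the LO value representation
if len(str(ds.get('InstitutionName', ''))) > 64:
    ds.InstitutionName = str(ds.InstitutionName)[:64]

# Fix 2: duplicate SOP Instance UID - only regenerate if this really is a distinct
# object and not a true duplicate of something already in the archive
ds.SOPInstanceUID = generate_uid()
ds.file_meta.MediaStorageSOPInstanceUID = ds.SOPInstanceUID

ds.save_as('fixed_image.dcm')
```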

4.       HL7 messaging protocol: A PACS is fed by orders and sends back results, often using patient arrival information as a trigger to add an entry to a modality worklist. These transactions are encoded using the HL7 protocol. In its most basic form, IIPs should be able to determine whether the HL7 feeds are alive and working, for example by monitoring the incoming orders and outgoing reports. This is critical, as a one-hour downtime of HL7 feeds in a busy department typically causes 5 to 10 hours of fixing by an IIP on the PACS side because of all the unverified studies caused by manual patient and order input at the modalities, with the corresponding misspellings, missing accession numbers, etc. Equally important is to determine whether the reports are going out, especially for STAT or emergency cases, as physicians are waiting for those results to be available within 15-30 minutes. In many cases, the HL7 orders are received by a worklist provider, aka broker or connectivity manager, and the reports are sent from a voice recognition server. Knowing how to monitor these devices, and being able to restart them, is essential. Sometimes a queue might still be available at an interface engine and it simply requires restarting that queue for the orders or results to be resent.
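A very simple way to check that an HL7 feed is alive is to send a test message over the MLLP transport and look at the acknowledgement; the sketch below does just that, with a made-up host, port, and message.

```python
# Sketch: send a test HL7 message over MLLP and print the ACK.
# Host, port and message content are made-up placeholders.
import socket

SB, EB, CR = b'\x0b', b'\x1c', b'\x0d'     # MLLP start/end framing bytes

msg = (
    "MSH|^~\\&|TEST|OTECH|RIS|HOSP|20131203120000||ORM^O01|MSG0001|P|2.3\r"
    "PID|||123456||DOE^JOHN||19600101|M\r"
    "ORC|NW|ORD0001\r"
    "OBR|1|ORD0001|ACC0001|71020^CHEST 2 VIEWS\r"
).encode()

with socket.create_connection(('interface-engine.hospital.example', 2575), timeout=10) as s:
    s.sendall(SB + msg + EB + CR)
    ack = s.recv(4096).replace(SB, b'').replace(EB + CR, b'')
    # Look for MSA-1: 'AA' = accepted, 'AE'/'AR' = application error or rejection
    print(ack.decode(errors='replace'))
```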

5.       HL7 transaction knowledge: In addition to knowledge of the HL7 transactions themselves, an IIP should also know the details of the actual encoding of these messages. For example, most of the modality worklist information is generated from an HL7 order, and it is not uncommon that after an update of the information system or interface engine, certain information is suddenly misplaced, absent, or incorrectly encoded. The good news is that HL7 transactions are all encoded as readable ASCII text, so it is relatively easy to find out from an HL7 message what could be missing or incorrect just by looking at the message in Notepad. If there are questions about the encoding, one can use a reference book or a tool such as OT-SEND to parse the message and identify the problem, to be resolved by the IT department or vendor.
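Because HL7 v2 is plain text, a few lines of Python (or even Notepad, as mentioned) are enough to pull out the fields that feed a worklist. The sample ORM message below is entirely made up, and where a site places the accession number can vary.

```python
# Sketch: parse a (made-up) HL7 ORM message with plain string operations.
raw = (
    "MSH|^~\\&|RIS|HOSP|PACS|HOSP|20131203120000||ORM^O01|MSG0002|P|2.3\r"
    "PID|||123456||DOE^JANE||19650415|F\r"
    "OBR|1|ORD9876|ACC0002|71260^CT CHEST W CONTRAST^CPT4|||20131203113000\r"
)

segments = {line.split('|')[0]: line.split('|') for line in raw.strip().split('\r')}
pid, obr = segments['PID'], segments['OBR']

print('Patient ID   :', pid[3])                        # PID-3
print('Patient name :', pid[5].replace('^', ' '))      # PID-5 (family^given)
print('Filler order :', obr[3])                        # OBR-3, often used as accession number
print('Procedure    :', obr[4].split('^')[1])          # OBR-4 universal service identifier
```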

6.       UNIX commands: There are only a handful of PACS systems that use UNIX or its derivatives (Linux, AIX, etc.) for their main viewing applications or other PACS workflow and archive components. However, there are several that base their core (database servers, archive servers) on UNIX. The good news is that these systems are so reliable that one needs to access them very infrequently; as a result, however, the knowledge of how to maneuver within this operating system might become stale. In any case, an IIP should know the most basic UNIX commands, such as how to start and stop processes, show the process status, and perform basic file and directory management functions, as well as network administration (check and set an IP address, etc.).

7.       Windows knowledge: Most workstations use Microsoft Windows® operating systems and/or applications such as Explorer® and others as their core. An IIP should be able to check process status, kill and restart processes, check and set firewalls, configure IP addresses, net masks, and proxy servers, and check system hardware and software configurations. Knowing how to restore a standard back-up “image” of the OS and relevant applications is critical as well.

8.       SQL queries: Almost all commercial imaging and information management systems use a relational database at their core, and if not, for example when they use the upcoming NoSQL databases, they still have a standard query interface based on SQL. Having access to the main database to perform custom queries is important both for troubleshooting and for providing essential statistical information. For troubleshooting, imagine that images were sent to the PACS but they “disappeared,” meaning that they did not appear on a worklist for a radiologist and also did not appear in the “to be verified” or “broken” study queue. This is an increasing problem as departments go paperless. In the past, there would be a piece of paper, i.e. a requisition, given to the radiologist that he or she could use to find the study that was performed and has to be interpreted. If a department is paperless, there need to be more checks and balances to make sure that nothing falls through the cracks, as there is no visible evidence of it. Being able to access the database and find out, using smart queries, what was added to it is often invaluable. After diagnosing the issue, fixing it can be done either by the IIP, the vendor, or the IT department, depending on whether the IIP has read/write access. In addition to troubleshooting, it is very useful to be able to mine the database for analytics, such as finding out the turn-around time of certain exams, how many images were created by certain modalities or certain technologists, how many studies were performed at certain modalities, etc.
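As an illustration of the analytics side, here is a sketch of such a query. The table and column names are purely hypothetical, since every vendor's schema is different, and sqlite3 merely stands in for the site's real database driver; in practice you may only get read-only access to a reporting replica.

```python
# Sketch: mine a reporting copy of the database for turn-around times per modality.
# Table and column names are hypothetical; sqlite3 stands in for the real driver.
import sqlite3

conn = sqlite3.connect('reporting_copy.db')
query = """
    SELECT modality,
           COUNT(*) AS studies,
           AVG((julianday(report_signed) - julianday(study_completed)) * 24.0) AS avg_tat_hours
    FROM   study
    WHERE  study_completed >= '2013-11-01'
    GROUP  BY modality
    ORDER  BY avg_tat_hours DESC
"""
for modality, studies, tat_hours in conn.execute(query):
    print(f'{modality}: {studies} studies, average turn-around {tat_hours:.1f} hours')
```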

9.       Monitor quality assessment: Monitors degrade by their very nature: their light source dims over time, which means that compliance with the DICOM calibration curve, as well as with the common guidelines for maximum luminance, has to be verified on a regular basis. Verification is often done automatically, because the vendor has built-in sensors and calibration software that checks the performance regularly. Calibration still has to be checked manually at an interval that depends on the application; for example, monitors used for digital mammography might require weekly checking and those used for other specialties monthly or even annually. A quick visual test can be done using an appropriate test pattern, which will show any obvious issues with the mapping of digital values onto the proper grayscale values on the screen. Being able to bring up this test pattern and interpret it is critically important, especially if there is a question about image quality.

10.   Last but not least, there is one generic skill that is important for troubleshooting any problem: being able to locate issues using simple logic and exclusion. One starts by identifying the area of concern and systematically excludes everything that seems to work in order to find the culprit. This skill is hard to teach and grows with experience, but it is essential to be able to diagnose any IT-related issues.


As a final note, I see many postings and questions in user groups about issues that could be easily diagnosed by using the right tools. Remember, it is all about visibility: for example, if a worklist does not work, use the tools mentioned above to test the network and the application, and look in the log files. I am convinced that if every IIP knew how to use these tools and were allowed access to the systems by the vendors to use them, there would be less fingerpointing, less downtime, and better system support. Also, remember that by mastering these tools you empower both yourself and the organization you work for, by making issues visible instead of having to wait for and rely on external experts, either from the organization or your vendor. Lastly, you might want to consider getting certified to show your employer and the outside world that you have learned these skills. Most of these skills are included as requirements in the IT portion of the PARCA CPAS PACS administrator certification (see the OTech training schedule for more details about CPAS certification).

Thursday, December 5, 2013

RSNA 2013: My top 10 on what’s new and what’s old, part 1.

It is a brisk morning, typical for November, and I am waiting for the bus to get me to the annual radiology circus: RSNA 2013 at the McCormick Place conference center in Chicago. And a hustle and bustle it is indeed, especially around 10 am when the exhibition starts, at 5 pm when it ends, and between scientific sessions when everyone is running to the next classroom to attend a presentation on the latest and greatest technological innovation, or a lecture in order to get their CME credits.

Need my caffeine fix, even if it takes 20 minutes
Having a technical background, I can only comment on the technological innovations and curiosities at the meeting. These observations are obviously totally subjective and are offered as only a partial impression of what was shown. My findings are based on the OBWA method (Observations By Walking Around), i.e. seeing if something interesting catches my eye and talking with the many friends, colleagues, and ex-colleagues in the hallways, corridors, and at the bar in the evening. This list is unlike the other RSNA news reports, which are biased by published press releases or interviews with the industry “experts,” with their inherent hype and vaporware.

I have split my observations into three parts, 1) what’s new, 2) what’s not so new (“new and old”) and 3) what’s old. So here are my first top 10 new ones:

A typical dual energy CT image, showing the overlay with the 2nd image

  1. Multi-energy CTs: Some of you might remember the very first generation of rotational CTs, where a gantry containing the X-ray tube and detector was located in a rotating ring and images were taken slice by slice by rotating the gantry, alternating clockwise and counterclockwise around the patient. Scan times could be 15-30 minutes, with an additional 15 minutes or so of number crunching to create a set of maybe 50 images. Since then, vendors have introduced slip-ring technology for continuous rotation, multi-slice detectors that provide orthogonal voxels (the source for excellent 3D imaging), and high-speed rotation of fractions of a second to allow for dynamic studies such as cardiac applications. Every time one might think the technology has matured, yet another innovation comes along to provide a completely new dimension and/or paradigm; such is the case with the multi-energy CT imaging capabilities. By processing two images that are acquired using different energy spectra, which can be done by using multiple X-ray tubes, switching the voltage or, in the case of Philips, using multiple detectors, each with a different energy absorption characteristic, one can create images that provide more information than just the traditional attenuation information expressed in Hounsfield units. The images looked to me like fused PET/CT images; however, the color images contain quite different information based on atomic numbers, which we will probably have to learn to interpret, similar to when we saw the very first MR images. The CT images were created on a CT that is pending FDA approval, so there is not a lot of experience available yet, but who knows, this technology might become standard over the next few years, similar to the multi-slice capability of recent CTs.
  2. Micro-dose Mammo: There is no disagreement that too much X-ray radiation can increase the risk of cancer, just as other factors such as nutrition, lifestyle, and genetic disposition can, and therefore any imaging solution that reduces the dose for digital mammography can only be applauded. Annual screening for women from, let’s say, age 50 to the average US life expectancy of 81 years results in a total of 31 exams, typically with two views or images taken of each breast. The good news is that the new Philips microdose digital mammo system uses slit-scanner technology, which means that the tube rotates at an angle to scan across the detector plate, thereby reducing the dose by 40% according to the vendor. There are a few hundred of these systems installed in Europe, especially in Scandinavia and France, and installations in the USA are just starting. Hopefully, this will challenge other vendors to either adopt this technology or rethink their implementations to achieve a similar reduction in dose.
    New mammo slit scanner technology
  3. Social media for radiology: The use of facebook® for posting images is regarded by many as a novelty, something many, including myself, would never have considered until a good friend of mine shared the fracture of his wife’s leg, which he had posted on his facebook page after she had a traffic accident. For those who have been ignoring facebook, I suggest you look at the “Radiology Signs” facebook page, which allows people to post interesting cases and has more than 400,000 “likes” as of today (are there even that many radiologists connected to facebook?). That is why at least one vendor is implementing a “facebook sharing” option in their PACS viewing software, obviously after making sure that the image is totally de-identified and stripped of any personal information. I believe that the power of social media cannot be underestimated, and it might become not only a great education tool but also a forum for interaction and communication.
  4. Radiology-patient partnerships: The theme of RSNA President Dr. Sarah Donaldson’s address was partnership. I have seen partnerships between radiologists and other physicians in several institutions, notably those where physicians are on staff and paid by the hospital, which seems to avoid a lot of turf wars and breaks down silos between the different specialties. However, partnerships between radiologists and patients are a new concept, as it is mostly the primary physician who gets a copy of the report and reviews it with the patient. As a matter of fact, unless a patient takes the effort to look at who signed the radiology report, he or she is almost never aware of who did the interpretation anyway. Therefore, even though it might be a good idea, I doubt that a radiologist would even return my phone call if I tried to call him or her about a diagnosis. I am fortunate in that I know several radiologists personally, and if I want to ask a question about my own or one of my friends’ or family members’ radiology exams, I send them a CD and ask for their (second) opinion. However, that is not an option for most people. Therefore, in my opinion, these are nice catchy phrases and make good headlines, but there has to be a major culture shift to make patient partnerships in radiology happen, if that ever will.
    View from the top on Sunday, which was actually busier than usual
     
  5. NoSQL: Most people might not know what technology is behind managing those millions of images in a PACS or enterprise storage solution, but it is typically a relational database such as Oracle, Sybase, MySQL, or another commercial or open source product. These databases were not designed with managing patient information in mind; the exception is the somewhat ancient MUMPS, which is both a language and a database and is the core of many of the popular EMRs, but it is not typically used in PACS databases. In addition, commercial database licenses are not cheap and therefore impact the system cost significantly. It is no wonder that vendors are looking for alternatives.
    Voila: NoSQL, which stands for “Not Only SQL,” indicating that these databases can still be accessed using SQL queries but also allow for other access methods. NoSQL databases were invented in the late 90’s and are very scalable and highly optimized for the simple retrieval and update operations typical of medical applications. The nice thing about designing a product from scratch, which, as an example, Karos did with their new VNA implementation, is that it allows you to use the latest technologies instead of porting or converting from older technologies. NoSQL might become a good alternative to the commercial databases, which are not as suitable and are overkill for what a PACS or enterprise archive solution is trying to accomplish.
  6. Analytics: Medicine is probably one of the disciplines that is the least measured and analyzed with regard to efficiency and cost. Most institutions don’t have a good handle on how much it costs to perform a diagnostic procedure other than the amount of the reimbursement from an insurance company or medicare/medicaid. In order to analyze information we need to measure its input and make sure it is correct and accurate.
    One of the many analytics companies
    There are several companies offering analytics, most of which I had never heard of before, showing information displayed on very nice dashboards; however, when talking with them I found that there is still a lot of missing and “dirty” information out there. There is also a lack of standardization of workflow and terminology, which SIIM is working on as part of their SWIM (SIIM Workflow Initiative in Medicine) project. Together with consistent terminology, there needs to be consistent implementation as well. For example, DICOM header information might contain one or all of the Attributes Study Time, Series Time, Acquisition Time, and Content Time (see the sketch right after this list), and some PACS systems even add a timestamp when an image is received by the PACS archive. Consequently, which time to use to define the procedure length is at best a guess, especially if you want to compare modalities from different vendors that use different Attributes in their image headers. Similar problems occur when you want to record the report turn-around time: is the end time defined by the time the radiologist signs the report electronically (assuming that is available), the time it is sent to a report repository such as the RIS, the time it appears in an EMR, the time it is faxed to a physician, or the time it appears in his or her (secure) email inbox? In conclusion, there is going to be a major increase in analytics, but we will have to do a lot of standardization of terminology and measurement, as well as data clean-up, before we can trust the results of these tools.
  7. Compact CR: CR systems are getting more and more compact. The first generation CR I ever encountered was a FUJI CR, and it occupied a small room. That system included a printer as well as the CR technology and was ahead of the PACS infrastructure, which followed within the next 10 years. The first challenge was to make a CR small enough that it would fit on a table as a “tabletop” system. Having achieved that over the past 10 years, the latest technology allows it to be so small that it barely sticks out from the wall. The price has also come down so much that small practices, and even dentists, veterinarians, and chiropractors, are now considering digital technologies instead of film. These are also great solutions for emerging and developing countries, where the installed base of film X-ray equipment is itself a barrier to providing healthcare, as many can’t afford the film and associated developing costs. There will still be a need for larger high-volume CR systems, but I would think that these small CR systems will become as ubiquitous as the many small copiers you find in offices in addition to the high-speed copier in a central mailing room.
    Couldn't be more compact
  8. Wireless badges: When I used to work regularly with X-ray systems and/or visit X-ray departments, I would always carry my X-ray radiation badge with me. I remember that I occasionally forgot to take it out of my carry-on luggage at the airport, which caused a call at the end of the month from the radiation safety officer at my company questioning why the readout of the badge was much higher than normal. New badges are getting more sophisticated: they can now measure the dose and send it wirelessly to a repository. This replaces the old collection and distribution system, making it much easier and more convenient, and also allows semi real-time monitoring.
    Wireless X-ray badges
  9. Less floor space: What was different this year was the reduction in exhibition floor space, which was somewhat of a mixed blessing. The smaller exhibit space was very welcome to many of those who in the past had to cross from the North-South location to the lakeside exhibition halls. There were only two exhibit halls this year, as vendors brought in significantly less “iron;” for example, instead of a complete CT or MR gantry they would bring a scale model and/or a monitor showing its images. Obviously the RSNA organization itself would have liked to rent out more space, but also, if I were to buy a million-dollar or more piece of diagnostic equipment, I might want to touch and feel it, similar to wanting to kick the tires when buying a car. Consequently, this year, if I had wanted to see the equipment prior to signing on the bottom line, I would have had to travel to another facility to see it installed, or visit the manufacturer’s facility. I understand the cost savings to the vendors of not having to haul these systems around, but I would feel somewhat cheated as a potential customer, especially after paying hundreds of dollars to enter a tradeshow like RSNA.
    Scale models instead of the real thing
  10. More lines: It seemed to me as if sequestration or budget cuts had an impact on the RSNA this year, as the waiting lines were much longer than in the past. Waiting for half an hour at the TSA check-in at the airport was to be expected for a holiday weekend, as was the 20-minute wait to get a cup of coffee at the exhibition hall Starbucks, but having to stand in line for more than 30 minutes at 10 am just to pick up a badge, and waiting even longer for the bus back to the hotel at 5 pm while being exposed to exhaust fumes in the catacombs of the conference center, was unexpected and, in my opinion, due to poor organization and customer service. I’ll definitely arrive earlier and depart later next year. Hopefully there will be more registration contractors and buses next year as well.
    This is a typical 35-minute wait
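As promised under the analytics item above, here is a minimal sketch (using the pydicom library and a placeholder file name) showing the candidate DICOM time attributes a procedure-length calculation could be based on; which ones are actually filled in, and how, differs per vendor.

```python
# Sketch: read the candidate DICOM time attributes from an image header with pydicom.
# 'ct_image.dcm' is a placeholder file name.
import pydicom

ds = pydicom.dcmread('ct_image.dcm')

for keyword in ('StudyTime', 'SeriesTime', 'AcquisitionTime', 'ContentTime'):
    print(f'{keyword:16s}: {ds.get(keyword, "<absent>")}')

# A procedure length computed from, say, the first AcquisitionTime to the last
# ContentTime in a study is only comparable between scanners that populate these
# attributes in the same way.
```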
These were my observations about what’s new, and to be honest, there was not much that was earth-shaking or very innovative this year, as much of the technology has matured, which is why I will follow up with “what’s new and old” as well as “what’s old news” in parts 2 and 3 of this report on RSNA 2013 (stay posted).

Herman O.

Monday, November 4, 2013

The quest to train professionals with in-depth hands-on DICOM and HL7 experience.

Students from our June PACS class
The current penetration of EMRs already exceeds 50% among physicians and is more than 70% among hospitals. One of the most common issues brought up by CIOs about their implementation challenges is the lack of available expertise to integrate these systems with their order entry systems, lab, pharmacy, and the numerous other connections. A typical institution has about 150 computer systems, ranging from surgery to blood bank, all of them typically exchanging information using the HL7 standard, and many of them needing to be connected to the new EMRs. In addition to the much sought-after HL7 expertise, and as these systems become increasingly “image enabled,” the need for professionals with detailed DICOM knowledge, especially for integration and troubleshooting, will also undoubtedly increase significantly.

If you are looking to strengthen your skills, here is your opportunity: OTech offers its next DICOM/HL7 seminar (see schedule) in the second week of December, following our popular PACS administration certification class in the Dallas Metroplex. The HL7 training covers a comprehensive discussion of the Version 2 standard, including a hands-on section that allows students to create and test HL7 messaging from a variety of applications. The DICOM training covers not only the DICOM protocol and data format specifications in great detail, but also provides students with numerous tools and test images to be able to implement and troubleshoot integration projects.
Especially for those living in the Northeast and Midwest, this seminar in the Dallas area will be a nice break from the cold, with the opportunity to eat some local BBQ and steaks (and yes, if you are a vegetarian, we have lots of choices as well). Looking forward to seeing you in Texas!

Herman Oosterwijk

President OTech Inc.

Wednesday, September 25, 2013

Critical Results reporting, the missing link from a PACS.

The more mature PACS systems have become, the more obvious it is that certain key functionalities are still missing and have to be purchased as an “add-on” for the system to meet the clinical requirement of reliable and timely physician communication.

These add-on critical test results messaging (CTRM) systems, as they are called, are especially important when critical test results need to be acted on within a very short, defined time period. This is important for radiology results and for other imaging specialties, especially cardiology, but even more so for lab results.

The classic example of such a result is when a radiologist observes a critical finding at 5 pm on Friday afternoon, which needs to be taken care of within the next four hours, and the patient has already been discharged, and the referring physician is unavailable after hours. In these cases, there has to be a well-defined chain of communication of that test result to the appropriate physician, with confirmation of the interpreted result and the action taken.

A special case of a critical result report is the ER discrepancy report, whereby an ER physician might have discharged the patient having missed a fracture or other important finding. I have first-hand experience with such a situation when my daughter, a few years back, went on a school trip to Colorado in the winter. The car with six students ended up in a ditch because of slippery weather. The students were transported to the ER, and subsequent X-rays did not show anything at first reading. The students continued on their school trip, and a radiologist came in the next morning and noticed a neck fracture in my daughter’s friend’s X-ray, and it took them another day or so to locate the girls in the lodge where they were staying. She was instructed to go back to the hospital immediately to be fitted with a neck brace. If a reliable means of communication between the ER physician, radiologist and patient had been in place, the potential risk would have been avoided.

CTRM systems are provided by several vendors, each with varying functionality. They can be integrated with the PACS, RIS, reporting system, or EMR; most are tightly connected through a proprietary API. The workflow supported by these systems depends on their configuration capabilities, so one should make sure a system supports the workflow that is used within your institution. It is possible to use one system for multiple specialties, or to have a separate system for each department, each with its own interfacing challenges.


When planning to purchase a CTRM system, one should perform a due-diligence investigation that includes defined specifications for the requirements, test and acceptance criteria, and site visits to installed systems in a similar environment; otherwise it might not meet the requirements. More details can be found in this video showing a discussion about the implementation steps.

Monday, September 9, 2013

CD’s vs the Cloud.

View of clouds in the Rockies, Co
Our recent on-line discussion regarding image sharing, which focused on the advantages of using a cloud solution for sharing information vs image exchange using CDs, has gotten quite a few comments and created an interesting discussion in the various user groups. First of all, if you have not watched the video of this discussion, I encourage you to do so, at this link.

There were questions raised about security and encryption, which seem to be a major concern. One should realize that cloud solutions always provide either a secure dedicated link or, if they use a public communication infrastructure, a VPN that provides encryption at the lower communication level, or encryption at the transport level using https or similar. The information exchange is as secure as your credit card number when purchasing an item from an on-line store such as Amazon or eBay.

There are some distinct advantages of CD exchange over cloud solutions: CD exchange is typically less expensive than a cloud solution, the integration of a report might be challenging using the cloud (reports are typically added to the CD), and CD exchange is very simple: one can just pop in the CD and look at the images. Importing and/or viewing the images on a CD does not require an administrator, in contrast to importing images from an external source such as the cloud.

Even though video exchange could still be considered a niche application, there are also challenges with exchanging video streams over the cloud because of bandwidth limitations and the receiving PACS’s support for the MPEG encoding.


In my opinion, cloud transfer will prevail over the exchange of images using CD’s, but it will take some time. Cost reduction, and simplicity will certainly accelerate the acceptance of the cloud.

Tuesday, August 6, 2013

Image sharing using Facebook: fact or fiction?

There are several mechanisms and methods for sharing medical images between healthcare practitioners, depending on the workflow scenario and the architecture used. Some of these applications have been in use for at least 20 years, some are still being developed, and some might not make sense today but could very well change how we share images in the near future. Some of these applications might seem far-fetched, in particular with regard to exchanging images using social media. However, one should remember that the most common critique of Twitter when it was still in its infancy was that “it did not have a purpose,” until the Arab Spring occurred, in which social media played a major role. Therefore, I would not reject image sharing via social media as far-fetched, but rather take it as a valid option. Before we look at the Facebook image-sharing scenario, I want to describe the use-case scenarios and then look at how we can accomplish this using different architectures, list the communication options, and discuss the maturity of these solutions.

Use cases: There are many clinical disciplines that need to exchange images; the most common case is the exchange of radiology images for review and evaluation, but practitioners in pathology, ophthalmology, dermatology, and many other specialties also require image sharing. The most popular use cases are as follows:

Emergency medicine scenario
Often during off-hours, a study has to be reviewed and reported causing a preliminary report to be generated and sent back to the requester within a very limited timeframe, e.g. 15-30 minutes. A more detailed report is often created when a radiologist or other specialty practitioner is available, e.g. during regular working hours.

Primary radiology coverage
This is when a radiologist is not present onsite, such as is often the case in rural areas, or when a radiologist covers clinics in the suburbs, or provides coverage for disaster or war zones. In this case exchanging images with the onsite clinicians is essential. Instead of the “preliminary” read as used in the emergency scenario, the practitioner creates a final report.

Second opinion
When a specialist is looking for an opinion from a peer who might have more experience with a certain imaging modality or particular disease pattern, an image exchange is needed. This is common when new modalities or acquisition techniques are implemented, such as CT/PET or MR/PET, or if a certain disease is uncommon in a particular region, for example when a patient returns to the US sick from travel to a tropical country and physicians might not be familiar with, let’s say, malaria. This scenario is where social media also might play a role.

Comparison or referral
This occurs when the primary reason for the image exchange is not to make a diagnosis from the original study, as that has already taken place, but to have the previous studies available. For example, a patient is treated in another location and previous exams have to be viewed, either for comparison with a new study or, as is more often the case, to assess the patient’s condition without having to repeat the procedure. This scenario “re-uses” the studies and reports as input to diagnosis and further treatment.

Implementation: A given use case does not necessarily have a single implementation; it can be implemented using different methods, although some architectures are more suitable for specific use cases than others. Let’s look at the mechanisms to exchange the studies.

Point-to-point modality to viewer: A technologist can push certain studies directly from a modality, such as a CT in an ER, to a doctor’s home for review at his or her DICOM viewer. There is a direct connection from the CT to the physician.

PACS to viewer: A PACS system could be set up to route all STAT studies arriving from a modality directly to a physician’s workstation. This is similar to the point-to-point modality-to-viewer push, but the advantage is that a copy remains available on the PACS, which acts as an intermediary. If there are multiple modalities that have to share images, the sending can be centralized from a single source, i.e. the PACS router. If a PACS does not support sophisticated routing, using rules determined by information in the image header to decide what goes where, one can use an add-on image router, which is available from several manufacturers.
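For illustration, here is a minimal sketch of such a rules-based router built with the pynetdicom library: it accepts C-STORE requests and forwards objects that match a routing rule to a second destination. The addresses, AE titles, and the rule itself are placeholders, and a production router would add queuing, retries, and error handling.

```python
# Sketch: a tiny rules-based DICOM router using pynetdicom. It accepts C-STORE
# requests and forwards objects that match a routing rule to a second destination.
# Addresses, ports, AE titles and the rule itself are placeholders.
from pynetdicom import AE, evt, AllStoragePresentationContexts

DESTINATION = ('er-physician-home.example', 11112, 'HOME_VIEWER')

def forward(ds):
    """Open a short-lived association and push one object onward."""
    scu = AE(ae_title='ROUTER')
    # Request only the context we need, in the object's original transfer syntax
    scu.add_requested_context(ds.SOPClassUID, ds.file_meta.TransferSyntaxUID)
    assoc = scu.associate(DESTINATION[0], DESTINATION[1], ae_title=DESTINATION[2])
    if assoc.is_established:
        assoc.send_c_store(ds)
        assoc.release()

def handle_store(event):
    ds = event.dataset
    ds.file_meta = event.file_meta
    # Routing rule driven by header content, e.g. everything from the ER CT scanner
    if ds.get('StationName', '') == 'ER_CT1':
        forward(ds)
    return 0x0000            # success status returned to the sending modality/PACS

ae = AE(ae_title='ROUTER')
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(('', 11113), evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```

A commercial router obviously does much more, but the principle, matching header attributes against routing rules before forwarding, is the same.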

PACS worklist: Images are sent to the PACS, and the radiologist has access to the PACS worklist using the PACS workstation. The workflow management features of the PACS can be used to indicate which studies are STAT, which ones are being read, etc. This works well if a radiologist only reads from one hospital, or multiple institutions that all have the same PACS system. The same workflow is used whether the radiologist reads the images locally, or he or she accesses the PACS from a remote location.

Aggregate PACS: If the radiologists have to read from multiple different PACS systems, it makes sense for them to use their own mini-PACS servers and worklist management. This is typically how nighthawk or emergency medicine works, as these radiologists support many different hospitals, each with their own PACS systems from different vendors. The images are therefore routed from either the modality or the PACS to a Teleradiology PACS server, which aggregates the multiple work lists. The radiologist retrieves them from the Teleradiology server and does the reporting. The workstation uses a new “combined” worklist.

PACS web server: Several PACS systems provide a web server, or one can purchase a web server from a different vendor. The web server can be embedded in the PACS core software or implemented as a separate hardware box, which holds a copy of the images from the PACS. Images are typically retrieved over the web, and if one uses a true zero-footprint viewer, there is no trace left on the viewing device after the user logs off, which satisfies privacy and security regulations. The worklist capabilities are often not present or are less sophisticated than those of the aggregate PACS worklist. The advantage of a separate vs. an integrated web server is that images are available even if the PACS is down, and therefore this access type can also serve as a backup. One could also use a mini web server that gets the information directly from the acquisition modality, but that only makes sense for a small clinic with only one or two modalities and no PACS to speak of.

EMR: Instead of using a PACS, one can also use an electronic medical record to view the images. The advantage is that much more contextual information is available, including lab results, previous reports, patient history, etc. Image enabling of an EMR differs from vendor to vendor. One can use a PACS plug-in, which basically launches a viewer inside the EMR window after exchanging the appropriate context information such as an Accession Number, or the EMR viewer can do its own query and retrieval against the PACS database or against an enterprise image manager and archive such as a Vendor Neutral Archive (VNA).
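
A common integration pattern (not any particular vendor’s API) is to launch the viewer with the patient context encoded in a URL; the endpoint and parameter names below are made up purely for illustration.

    from urllib.parse import urlencode

    # Hypothetical viewer endpoint and parameter names; real integrations are vendor specific
    VIEWER_URL = "https://viewer.example-hospital.org/launch"

    def viewer_launch_url(patient_id, accession_number, user_token):
        params = {
            "patientID": patient_id,
            "accessionNumber": accession_number,   # ties the EMR order to the PACS study
            "token": user_token,                   # single sign-on / session context
        }
        return VIEWER_URL + "?" + urlencode(params)

    print(viewer_launch_url("MRN-12345", "ACC-2013-0042", "abc123"))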

Image sharing using the cloud: Images can be exchanged using an external image sharing service, which functions as a broker and forwards the images to the recipient. There are two versions: either the cloud service provider uses only a store-and-forward mechanism, or the cloud functions as a repository and keeps the images for a certain period of time. Institutions subscribe to the cloud service provider for a fee. This solution makes sense for institutions that exchange information regularly, but not often enough to warrant a dedicated link to each other. A good example would be an academic or specialty hospital that has relationships with several other institutions in a geographic area that refer patients on a regular basis and want to exchange images. Note that the institution is tied to one particular cloud provider, which typically exchanges the information using a proprietary method.

Image sharing using a Health Information Exchange (HIE): This uses the same architecture as a commercial cloud provider; however, the implementation follows open standards. The HIE can be private, for example one established within a provider network with several hospitals and/or clinics, or public, for example those being established as part of US federal government incentives to improve healthcare.

Image sharing using a Personal Health Record (PHR): The main applications of the many PHRs being rolled out are scheduling appointments, re-ordering prescriptions, providing access to physician notes, and maintaining a communication channel between patient and provider. The ultimate PHR would also maintain certain healthcare information, and a patient could use it to upload their images so they are available whenever needed. The patient would simply authorize the provider to access the information, which can then be exchanged in a standard manner.

CD exchange: For comparison or referral purposes, images are often hand-carried by the patient, which has its own logistics and interoperability challenges. A chronically ill patient might literally have dozens of CDs that need to be exchanged at each appointment with a different specialist. There are still institutions that do not create standards-compliant CDs, making them impossible to read and/or import into a workstation for comparison. The AMA has complained about the wide variety of embedded image viewers; however, the resulting IHE profile definition, which attempts to standardize features and icons, does not seem to have gotten much traction. This is still the most common method of exchanging images for referral, and hopefully it will be replaced in the not too distant future by the other image sharing options described here.

Image sharing using social media: It is not uncommon for patients to post their images publicly on the Internet, sometimes just to share them, but also to ask for advice, in particular when it concerns a rare disease or something that is hard to diagnose. It is similar to radiology portals posting their “case of the day” or of the week, with the difference that the diagnosis is not (yet) known. There are also physicians who use their own Facebook or other social network to ask for advice. This is still an exception, and it seems to contradict the increasing emphasis on patient privacy; however, I would argue that if a patient has no interest in keeping his or her information private, but rather wants as much exposure as possible in order to get as many opinions as possible, this might be a valid option.

Connectivity: The network connectivity between the sending and receiving sides can be implemented in different ways, some of which are more common for certain applications than others. The most common implementations are:

SNKR – Sneakernet: In the CD exchange scenario, the information is exchanged in person or by mail, commonly referred to as the “sneakernet.”

PPDCM – Point-to-point DICOM: Images are typically exchanged between modalities or a PACS and pushed to a remote viewing station or to a teleradiology PACS server using the DICOM format and protocol. If the public Internet is used, a VPN is set up to guarantee the confidentiality of the information being exchanged. The DICOM protocol relies on reliable delivery by the underlying TCP/IP communication layers. If the bandwidth of the connection is limited and/or the study sizes are large, standard DICOM compression such as JPEG or wavelet (aka JPEG 2000) is used. A sketch of such a push is shown below.
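
A minimal sketch, assuming the open-source pydicom and pynetdicom libraries; the address, port and AE titles are placeholders, and in practice the traffic would run over a VPN as noted above.

    from pydicom import dcmread
    from pynetdicom import AE
    from pynetdicom.sop_class import CTImageStorage

    ae = AE(ae_title="CT_ER")                      # the sending application entity
    ae.add_requested_context(CTImageStorage)       # negotiate CT image storage

    # Placeholder address, port and AE title of the remote viewer or teleradiology server
    assoc = ae.associate("192.0.2.10", 11112, ae_title="HOME_VIEWER")
    if assoc.is_established:
        ds = dcmread("ct_slice_001.dcm")           # one image of the STAT study
        status = assoc.send_c_store(ds)            # the actual DICOM C-STORE push
        print("C-STORE status:", status.Status if status else "no response")
        assoc.release()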

GTWAY – DICOM to edge server/gateway: If the connection to the Internet is unreliable or not available, one might need to use alternative communication channels such as the phone network or dedicated satellite links. In that case, an edge server or gateway is used that converts the DICOM protocol into a proprietary protocol, which in most cases uses high compression ratios and a very robust communication protocol. The gateway functions as a store-and-forward box, making sure that delivery is taken care of. This edge server talks to a destination that has the reverse gateway, i.e. one that makes sure the images are received without any corruption and preferably then uses DICOM to pass them on. This solution is common for teleradiology applications in rural areas, or in disaster and military zones.
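
The spool-directory loop below is just one way to picture the store-and-forward behavior; real gateways use proprietary compression and transport and far more robust error handling.

    import shutil
    import time
    from pathlib import Path

    SPOOL = Path("spool")    # incoming studies wait here until the link is available
    SENT = Path("sent")
    SENT.mkdir(exist_ok=True)

    def link_is_up():
        return True          # placeholder for a real connectivity check (satellite, phone line, etc.)

    def forward(path):
        shutil.move(str(path), SENT / path.name)   # placeholder for the actual compressed transfer

    # Store-and-forward loop: nothing is dropped; undelivered studies are retried later
    while True:
        for study_file in sorted(SPOOL.glob("*.dcm")):
            if link_is_up():
                forward(study_file)
        time.sleep(30)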

PPP – Point-to-point proprietary: This is commonly used by workstations that access the PACS server of the same vendor. They use the radiology worklist provided by the PACS, and if they run over a public network, a VPN is needed to encrypt the information being exchanged.

WEB – Web-based protocol: Web server clients typically use the secure HTTPS protocol to access the images. Some PACS vendors also use HTTPS for regular in-house image access, but this is not common.

EML – Email: Emailing an image poses quite a few issues, as the images are large even when compressed and there is no context information. It also assumes that one uses secure email to start with, and that the receiver can recognize the .dcm file extension, which was created for that purpose. DICOM has addressed this, but DICOM email has never taken off in the US, although it has been implemented in Germany and is somewhat common there.
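
To illustrate the mechanics (leaving aside the encryption a proper secure-email setup requires), the snippet below attaches a .dcm file with the registered application/dicom MIME type using Python’s standard library; the addresses and SMTP host are placeholders.

    import smtplib
    from email.message import EmailMessage
    from pathlib import Path

    msg = EmailMessage()
    msg["From"] = "radiology@hospital-a.example"
    msg["To"] = "specialist@hospital-b.example"
    msg["Subject"] = "Outside study for second opinion"
    msg.set_content("DICOM study attached; please import it into your viewer.")

    data = Path("study_0001.dcm").read_bytes()
    # application/dicom is the registered MIME type; the .dcm extension helps the receiver
    msg.add_attachment(data, maintype="application", subtype="dicom", filename="study_0001.dcm")

    with smtplib.SMTP_SSL("smtp.hospital-a.example") as server:   # placeholder mail server
        server.send_message(msg)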

XPHR – Personal Health Record Exchange: This is an HL7 version 3 document exchange definition using the CDA, or Clinical Document Architecture, which can exchange all relevant information between a Personal Health Record and an Electronic Medical Record.

XDS-I – Cross-enterprise Document Sharing for Imaging: The IHE organization has defined a series of profiles covering, among other things, how to exchange documents and images. The XDS-I profile uses a series of transactions that allow an image producer and consumer to exchange both registry and repository information with an HIE. The image exchange uses the web version of the DICOM protocol, aka WADO, or Web Access to DICOM Objects. The XDS-I profile is widely implemented by PACS vendors, especially those who claim to offer a Vendor Neutral Archive (VNA); however, the number of institutions that actually use it, especially in the US, is still relatively small. Note that IHE also defines variants of this mechanism, i.e. Cross-Community Access for Imaging (XCA-I) and Cross-enterprise Document Reliable Interchange of Images (XDR-I). These do not use a registry but provide a direct query/retrieve and push mechanism for image exchange. These implementations are also still in their infancy.
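
The WADO piece of this is essentially an HTTP GET with the study, series and object UIDs as parameters; a minimal sketch, with a placeholder server address and made-up UIDs:

    import requests

    params = {
        "requestType": "WADO",
        "studyUID": "1.2.840.999.1",              # placeholder UIDs for illustration only
        "seriesUID": "1.2.840.999.1.1",
        "objectUID": "1.2.840.999.1.1.1",
        "contentType": "application/dicom",       # ask for the full DICOM object, not a JPEG rendering
    }
    resp = requests.get("https://imaging.example-hie.org/wado", params=params)
    with open("retrieved.dcm", "wb") as f:
        f.write(resp.content)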

RSTF – RESTful services: A new version of the DICOM protocol is being defined that expands beyond the WADO protocol and offers greater functionality. The “traditional” DICOM protocol, which includes a negotiation step to set up the association between two devices and uses a DICOM-specific set of commands, is not well suited to accessing information over the web. This new DICOM extension is still very much in its early phases; however, it might become popular as the need for web-based access, especially from viewers embedded in an EMR, becomes common.
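
To give a feel for why a RESTful flavor is attractive for web and EMR viewers, the sketch below queries a server for a patient’s studies with a plain HTTP GET returning JSON, in the style of these emerging services; the endpoint is a placeholder and details of the final standard may differ.

    import requests

    BASE = "https://pacs.example.org/dicomweb"     # placeholder web service root

    # A study-level query: no association negotiation, just HTTP and JSON
    resp = requests.get(
        BASE + "/studies",
        params={"PatientID": "MRN-12345"},
        headers={"Accept": "application/dicom+json"},
    )
    for study in resp.json():
        # Study Instance UID is returned under its hexadecimal tag (0020,000D)
        print(study["0020000D"]["Value"][0])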

INT – Internet: Uploading images to a server using a proprietary protocol is typically how social media, such as Facebook, or other image-sharing services are used. The image has to be converted to a web-friendly image type such as JPEG or TIFF, which almost certainly impacts the image quality. Therefore, one can typically only see gross anatomy, and small findings are likely not visible.
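
The quality loss is easy to see in code: going from a 12- or 16-bit DICOM image to an 8-bit JPEG throws away most of the dynamic range. A minimal sketch, assuming the pydicom, NumPy and Pillow libraries:

    import numpy as np
    from PIL import Image
    from pydicom import dcmread

    ds = dcmread("ct_slice_001.dcm")
    arr = ds.pixel_array.astype(float)             # typically 12-16 bits of useful range

    # Crude rescale of the full dynamic range into 8 bits; subtle findings are lost here
    lo, hi = arr.min(), arr.max()
    arr8 = ((arr - lo) / max(hi - lo, 1) * 255.0).astype(np.uint8)

    Image.fromarray(arr8).save("for_the_web.jpg", quality=85)   # lossy JPEG on top of that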

The table below shows the use cases with their typical architectures and the communication options that would be commonly used.



Implementation      | Em. Medicine  | Prim. Reading  | Second Opinion    | Referral
--------------------+---------------+----------------+-------------------+------------------
Modality to viewer  | PPDCM, GTWAY  | PPDCM, GTWAY   | EML               | EML
PACS to viewer      | PPDCM, GTWAY  | PPDCM, GTWAY   | EML               | EML
PACS worklist       | PPP           | PPP            |                   |
Multiple PACS       | PPP           | PPP            |                   |
PACS web server     | WEB, GTWAY    | WEB, GTWAY     |                   |
EMR access          | WEB, GTWAY    |                | WEB, GTWAY, RSTF  | WEB, GTWAY, RSTF
Cloud sharing       |               |                | GTWAY, EML, RSTF  | GTWAY, EML, RSTF
HIE sharing         |               |                | XDS-I, RSTF       | XDS-I, RSTF
PHR sharing         |               |                | XPHR              | XPHR
CD exchange         |               | SNKR           | SNKR              | SNKR
Social media        |               |                | INT               |


Maturity: Some of the architectures and connections described above are very mature; as a matter of fact, teleradiology was implemented widely during the 1990s. But some of these methods, such as cloud services, the use of the XDS profiles, and RESTful services, are still very much in their infancy. One way to express the maturity is the “hype cycle” from the IT consulting firm Gartner, which represents the maturity, adoption and social application of emerging technologies. It maps maturity against visibility in a curve and identifies five phases: 1) the technology trigger, where a potential new technology kicks off; 2) the peak of inflated expectations, when a number of success stories as well as failures are produced; 3) the trough of disillusionment, when interest wanes as implementations fail to deliver; 4) the slope of enlightenment, when the technology starts to be understood; and 5) the plateau of productivity, when mainstream adoption takes off. I have listed my assessment of these technologies on this curve in the figure below.


In conclusion, there are many reasons for image exchange, and several different architectures and implementations with different communication mechanisms. Each has its advantages; some are very mature and some still very immature. Both the industry and the provider community are trying to figure out how and what to do, knowing that many of the solutions are still in the early phases of the hype cycle. Time will tell which methods and protocols will prevail, but, as with any technology, by then there will be other technologies pushing the curve, which makes this field so interesting and never boring.