Monday, July 29, 2013

Do You Want a Side With That Procedure Code?

Summary: Communicating laterality via procedure codes is challenging, and varies between coding systems and across interfaces.

Long Version.

There are basically two ways to communicate in a structured manner the laterality of a procedure (i.e., left or right knee, versus both knees, versus an unpaired body part like the pelvis).

One can either send one code that is defined to mean both the procedure and the laterality (so-called pre-coordination), or send multiple codes (or elements or attributes) with the meanings kept separate (post-coordination).

SNOMED, for example, does not pre-coordinate laterality with the procedure: it specifies only P5-09024 (241641004) as the generic code for MR of the knee, together with a generic qualifier for right, G-A100 (24028007). This is a somewhat arbitrary limitation, however, since SNOMED does pre-coordinate the concepts of "MR" and "Knee".

LOINC, on the other hand, has pre-coordinated codes for left, right and bilateral procedures. For the MR knee example, these are 26257-6, 26258-4 and 26256-8 respectively.

The UK has a National Interim Clinical Imaging Procedure (NICIP) code set, and it also uses pre-coordinated codes, in this case MKNEL, MKNER and MKNEB, respectively. The NICIP code set has the interesting feature of being mapped to SNOMED, which we will return to later.

So if one has a procedure code with laterality pre-coordinated, one is good to go. These codes can be used in the HL7 V2 ordering messages (Universal Service ID), and passed through the DICOM Modality Worklist and into the images and the MPPS unhindered.

Better still would be to have the modality extract the laterality from the supplied procedure code and populate the DICOM Laterality attribute (or Image Laterality, depending on the IOD), so as to facilitate downstream use (hanging protocols, etc.) and reduce the need for operator entry or selection. This would be easier, of course, if the laterality concept were sent separately, but it isn't. The extraction is not often (ever?) done, and Laterality, if populated at all, remains operator-dependent. Nothing prevents a clever downstream PACS or VNA from extracting this information on ingestion, though, creating a "better" Laterality value and coercing it into the stored object if none is already present in the images, or raising an alert if there is a conflict with what is present in the images.
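Such coercion logic is simple to sketch. The following minimal Python is hypothetical, using the pre-coordinated LOINC knee MR codes mentioned above as the lookup table; it keeps any Laterality value already present in the images and flags a conflict with the code-derived value:

```python
# Laterality implied by pre-coordinated procedure codes; a hand-built
# mapping using the LOINC MR knee codes cited in the text.
CODE_TO_LATERALITY = {
    "26257-6": "L",   # MR left knee
    "26258-4": "R",   # MR right knee
    "26256-8": "B",   # MR both knees
}

def coerce_laterality(procedure_code, existing_laterality=None):
    """Return (laterality, conflict) for a stored object on ingestion.

    If the images already carry a Laterality value, keep it, but flag a
    conflict when it disagrees with the value implied by the code.
    """
    derived = CODE_TO_LATERALITY.get(procedure_code)
    if existing_laterality:
        conflict = derived is not None and derived != existing_laterality
        return existing_laterality, conflict
    return derived, False

print(coerce_laterality("26258-4"))        # derive when absent
print(coerce_laterality("26258-4", "L"))   # conflict with the images
```

In practice the lookup table would have to be built per-site from whatever local or standard procedure codes are actually in use.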

Nor is laterality required, or even mentioned, in the Assisted Acquisition Protocol Setting option of IHE Scheduled Workflow (SWF). There is, however, the possibility of sending laterality information in the protocol codes, as opposed to the procedure codes, but this is not usually done either.

On the other hand, if one is using SNOMED for one's procedure codes, there are several practical problems. SNOMED's contemporary solution would be to create values that could be sent in a single attribute by using post-coordinated expressions using their "compositional syntax". For the MR of the right knee example, that might be "241641004 | Magnetic resonance imaging of knee | : 272741003 | laterality | = 24028007 | right".
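For illustration, here is a crude sketch of building and picking apart such an expression. The real compositional grammar permits nesting, multiple refinements and more, so this is not a general parser, just the simple single-refinement form shown above:

```python
# Build and crudely parse a SNOMED CT post-coordinated expression of the
# simple form used in the text:
#   focus | term | : attribute | term | = value | term
# (The real compositional grammar allows nesting, refinement groups,
# etc.; this handles only this one shape.)

def build_expression(focus, focus_term, attribute, attr_term, value, value_term):
    return ("%s | %s | : %s | %s | = %s | %s" %
            (focus, focus_term, attribute, attr_term, value, value_term))

def parse_laterality(expression):
    """Return the value concept ID of a 'laterality' refinement, if any."""
    head, sep, refinement = expression.partition(":")
    if not sep or "laterality" not in refinement:
        return None
    value_part = refinement.split("=", 1)[1]
    return value_part.split("|", 1)[0].strip()

expr = build_expression("241641004", "Magnetic resonance imaging of knee",
                        "272741003", "laterality",
                        "24028007", "right")
print(parse_laterality(expr))
```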

This is all very well in theory, and the British and the Canadians (well, the Ontarians anyway) are very excited about using SNOMED for procedure codes, but there is the small practical matter of implementing this in HL7 V2 and DICOM. Code length limits are (probably) not an HL7 V2 issue, but they certainly are in DICOM.

Since both modalities and worklist providers can only encode 16 characters in the Code Value (which has an SH Value Representation), we are out of luck trying to encode arbitrary length compositional expressions. Indeed even switching to the SNOMED-CT style numeric Concept IDs (24028007), rather than using the SNOMED-RT style Snomed ID strings (G-A100) that DICOM has traditionally used for Code Value, is a problem. The Concept ID namespace mechanism allows for up to 18 characters, which is too long for DICOM unless there happen to be enough elided leading zeroes, and this is a special problem for national extensions. Bummer.
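The constraint itself is trivial to check; a sketch:

```python
# DICOM Code Value (0008,0100) has a Short String (SH) VR: at most
# 16 characters. SNOMED CT SCTIDs can run to 18 digits, so not every
# concept ID is encodable as a Code Value.
SH_MAX_LENGTH = 16

def fits_code_value(code):
    return len(code) <= SH_MAX_LENGTH

print(fits_code_value("G-A100"))              # SNOMED-RT style ID: fits
print(fits_code_value("24028007"))            # short SCTID: fits
print(fits_code_value("123456789012345678"))  # 18-digit SCTID: too long
```

An arbitrary-length compositional expression obviously fails this check immediately, which is the crux of the problem described above.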

Unfortunately, the Code Value length limit cannot be changed since it would invalidate the installed base. There have been various discussions about adding alternative attributes or Base64 encoding to stuff in the longer numeric value, but there is no consensus yet.

For the time being, for practical use, either laterality has to be pre-coordinated in the single procedure code, or it has to be conveyed as a separate attribute in the DICOM Modality Worklist.

With respect to the possibility of a separate attribute, a forthcoming white paper from the IHE Radiology Technical Committee, Code Mapping in IHE Radiology Profiles, discusses the flow of codes throughout the system. It mentions the matter of laterality, and what features of IHE Scheduled Workflow can be used if laterality is conveyed separately from the procedure code. In short, there are specific HL7 V2 attributes (OBR-15 in v2.3.1 and OBR-46 in v2.5.1) whose modified use is defined by IHE to convey laterality. There is an accompanying requirement to append the value to Requested Procedure Description (0032,1060) for humans to read, which is better than nothing (or depending on a piece of paper or the RIS terminal). But there is no standard way to convey laterality separately and in a structured manner in the DICOM Modality Worklist, which means there is no (automated) way to get it into the images.
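A sketch of what a receiver might do with such a message follows. The OBR content here is invented for illustration, and field numbering follows the usual HL7 V2 convention that the segment name counts as field zero when splitting on the pipe delimiter:

```python
# Pull a coded laterality out of OBR-46 (HL7 v2.5.1, per the IHE-defined
# usage described above) and append its meaning to the human-readable
# procedure description. Segment content is invented for illustration.

def obr_field(segment, n):
    fields = segment.split("|")          # fields[0] is the segment name
    return fields[n] if n < len(fields) else ""

def laterality_from_obr46(segment):
    raw = obr_field(segment, 46)         # e.g. "24028007^Right^SCT"
    if not raw:
        return None, None
    code, _, rest = raw.partition("^")
    meaning = rest.partition("^")[0]
    return code, meaning

# Build a minimal OBR with only the fields we care about populated.
fields = ["OBR"] + [""] * 46
fields[4] = "26258-4^MR Knee^LN"         # Universal Service ID
fields[46] = "24028007^Right^SCT"        # laterality, per IHE usage
obr = "|".join(fields)

code, meaning = laterality_from_obr46(obr)
description = "MR Knee"
if meaning:
    description = "%s %s" % (description, meaning)
print(code, description)
```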

Another effort to standardize procedure codes, the RadLex Playbook, also currently defines pre-coordinated codes for left (RPID708) and right (RPID709) MR of the knee. A minor and remediable issue is that it does not currently have a concept for a bilateral procedure, unless one gets more specific and additionally pre-coordinates the use of intravenous contrast. This does highlight that the RadLex Playbook is a bit patchy at the moment, since it grows over time as new concepts are required when encountered during mapping of local coding schemes. Earlier attempts to include every permutation of the attributes of a procedure resulted in an explosion of largely meaningless concepts and were abandoned, so the current approach is a good one, but these are early days yet.

On the subject of contrast media, one does not usually use intravenous contrast for MR of joints, unless there is a specific reason (infection, tumor, rheumatoid arthritis). On those occasions when it is required, it is desirable to be able to specify it during ordering or protocolling, and it certainly affects mapping to billing codes. There is also the possibility of intra-articular contrast (MR arthrography) to consider.

Each of these concepts needs to be pre-coordinated with the side to come up with one code. It can be difficult to determine, unless separate concepts are defined, whether the more general code (contrast not mentioned) is intended to mean the absence of contrast, or whether it is just not specified and is a "parent" concept for more specific child concepts that make it explicit. SNOMED, for example, does indeed have concepts for knee MR with IV contrast, P5-09078 (432719005), and knee MR arthrography, P5-09031 (241654006). These are both children of 241641004, implying that the latter is agnostic about contrast. There are no contrast-specific SNOMED concepts that have laterality pre-coordinated, though, as expected.

So, for codes specific to the MR of the right knee, in LOINC, NICIP and RadLex one finds:

LOINC     NICIP    RadLex     Contrast variant
26258-4   MKNER    RPID709    contrast unspecified
36510-6   none     RPID1610   without IV contrast
36228-5   MKNERC   RPID1611   with IV contrast
26201-4   none     RPID1606   with and without IV contrast
43453-0   none     none       dynamic IV contrast
36127-9   MJKNR    none       with intra-articular contrast
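Expressing the table as data makes the gaps explicit; a small sketch, in which None marks a concept that a scheme lacks:

```python
# Right-knee MR codes from the table above, keyed by contrast variant.
# None marks a gap in that scheme (the point made in the text).
RIGHT_KNEE_MR = {
    # variant:                        (LOINC,     NICIP,    RadLex)
    "contrast unspecified":           ("26258-4", "MKNER",  "RPID709"),
    "without IV contrast":            ("36510-6", None,     "RPID1610"),
    "with IV contrast":               ("36228-5", "MKNERC", "RPID1611"),
    "with and without IV contrast":   ("26201-4", None,     "RPID1606"),
    "dynamic IV contrast":            ("43453-0", None,     None),
    "with intra-articular contrast":  ("36127-9", "MJKNR",  None),
}

def gaps(scheme_index):
    """Variants a scheme cannot express; 0=LOINC, 1=NICIP, 2=RadLex."""
    return [v for v, codes in RIGHT_KNEE_MR.items()
            if codes[scheme_index] is None]

print(gaps(0))   # LOINC: no gaps for this procedure
print(gaps(1))   # NICIP gaps
print(gaps(2))   # RadLex gaps
```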

So LOINC is currently the only scheme that is sufficiently comprehensive in this respect. There is talk of a RadLex-LOINC harmonization effort which, when underway, should address that gap in RadLex. There is also a new LOINC-SNOMED agreement that has recently been announced, which will hopefully result in pre-coordinated LOINC codes being those used "on the wire" (encoded in messages and objects), but with the advantage of the availability of a mapping to their equivalent SNOMED concepts. It will be interesting to see how those who have hitched their wagon to encoding SNOMED on the wire are affected by this new agreement, or whether they switch to using LOINC codes.

By the way, NICIP has an MRI Knee dynamic code too, MKDYS, but it is not side-specific, so there is some patchiness therein as well.

NICIP is also interesting because mapping issues related to laterality are explicitly described in Guidance for the National Interim Clinical Imaging Procedure (NICIP) Mapping Table to OPCS-4. I gather that OPCS-4 is the UK equivalent of a billing code set, but used for operational and resource management purposes. Specifically, the issue of mapping side-specific NICIP codes to SNOMED's non-specific code is addressed, in the context of the body-region multiplier that is needed. To use their example, an MR of the left knee would map from MKNEL to SNOMED 241641004, and thence to U13.3 (MR of bone) and Z84.6 (knee joint), but would need to have laterality post-coordinated with the SNOMED code to translate to Y98.1 (radiology of one body area). Whereas MKNEB would translate to Y98.2 (radiology of two body areas), and, interestingly, a different primary code (U21.1, MR), though that may just be an error in their mapping table.
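The translation chain, for just this knee MR example and ignoring the U21.1 discrepancy, might be sketched as:

```python
# Sketch of the NICIP -> SNOMED -> OPCS-4 translation described above,
# for the knee MR example only. The body-region multiplier (Y98.x)
# depends on laterality, which the laterality-agnostic SNOMED code by
# itself cannot supply; here it is derived from the NICIP code instead.
NICIP_TO_SNOMED = {"MKNEL": "241641004", "MKNEB": "241641004"}

def translate(nicip):
    snomed = NICIP_TO_SNOMED[nicip]
    # U13.3 = MR of bone, Z84.6 = knee joint (per the guidance example);
    # bilateral codes get the two-body-area multiplier.
    multiplier = "Y98.2" if nicip.endswith("B") else "Y98.1"
    return snomed, ["U13.3", "Z84.6", multiplier]

print(translate("MKNEL"))
print(translate("MKNEB"))
```

This illustrates the guidance's point: once the side-specific NICIP code has been collapsed to the single SNOMED concept, the laterality has to be carried some other way or the multiplier cannot be chosen.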

Most folks, in the US at least, don't use standard procedure codes for ordering and instead rely on those codes internally developed for use in their "charge master", which may or may not bear some resemblance to billing codes or something that a vendor has supplied. This may change as more robust and well accepted standard schemes are developed and harmonized, or integration is required with other systems for handling appropriateness of ordering and utilization, and reporting of quality measures.

Regardless, whether one uses standard or local codes, the question of communicating laterality in a structured electronic manner remains a challenging one. It is best addressed by looking at all the systems as an integrated whole, to take advantage of as much automation as possible, without manual re-entry, to improve quality and operational efficiency. Hopefully as many standard attributes and mappings can be leveraged as possible, without local customization.


Sunday, July 28, 2013

Cloudy, With a Chance of Collapse

Summary: The Software as a Service (SaaS) business model has long-term viability challenges. Cloud/SaaS enthusiasts beware.

Long Version.

I came across a piece of hype about "cloud-based makeovers" for imaging and lab results in VentureBeat, referenced from a LinkedIn Clinical Trial Imaging group posting.

It is nice perhaps that an EHR vendor executive apparently "gets it". What interested me, though, was not the fact that folks were repeating the obvious, that CDs suck, and that it is worth exploring a "cloud" or Software as a Service (SaaS) medical image delivery method, whether for clinical care or clinical trials.

Rather, at the top of the page, was a link to another article entitled "the unprofitable SaaS business model trap" by Jason Cohen. Now that was interesting, not because I indulge fantasies of starting a SaaS business (at least not on a very regular basis or very seriously), but because it caused me to start to wonder about how potential customers of SaaS services for EHR and medical image sharing, PACS and VNA assess the potential longevity of any service provider they get into bed with.

Not that "cloud" and "SaaS" are necessarily synonymous (e.g., see Wikipedia's description of Cloud Computing, "SaaS" and even Storage as a Service (STaaS), "Cloud Computing vs SaaS", "Demystifying SaaS vs. Cloud", "Cloud Computing Versus Software as a Service", "Cloud vs SaaS", "Understanding the Cloud Computing Stack: SaaS, PaaS, IaaS"). For the sake of argument, in the context of EHR and image transfer or sharing or distribution or viewing, let us assume that the customer is using a pay-as-you-go (PAYG?) service, which is the issue discussed in the article.

Healthcare use cases have an additional quality and regulatory burden that is inflicted, for better or for worse. This creates the need for even more spending by the provider, beyond the R&D and Admin cited by Cohen. So the long-term viability question should perhaps be even more at the forefront of healthcare customers' minds. Not to mention wasteful certification (aargh!) spending, as well as the costs of integration and the cost/risk of migration at the end of a failed service-provider relationship. Cohen describes 75% annual retention, with the potential for complete customer turnover after 4 years; it would be interesting to see healthcare-specific numbers.

Some vendors have been successfully offering SaaS in the PACS world for a while. This 2011 Aunt Minnie article summarizes an InMedica report. It would be interesting to see what the relative proportions are now, and whether the 1% share in 2010 that was both storage and software hosted by a third party has grown since, and by how much.

One question I might have for a potential service provider would be how diversified they are, and whether the SaaS offering is their only source of revenue. Diversity though, is no guarantee the provider would not kill off an unprofitable business line, of course. How many large PACS vendors regularly completely change their architecture or even their entire product line and end the lives of their customers' installations, whether a capital acquisition or service was involved?

Sophisticated customers and vendors probably have stock questions and responses in this respect. I do wonder how often the small, inexperienced customer gets sucked in by a "0% API for an introductory period" pitch, and potentially puts their data at risk in the face of impending penalties or loss of incentives if they don't "go electronic". Or the small and enthusiastic provider makes promises with the best of intentions but without the ability to follow through. Or perhaps, without the best of intentions, has a "someone big will buy us for our customer base before our burn rate catches up with us" exit strategy.

On the other hand, worst case, if your SaaS vendor goes under and takes all your data down with them, how bad could it actually be? No worse than an imaging center or clinic or hospital closing, and no longer being a reliable source of priors or historical records, maybe. Here is an interesting article about "Protecting Patient Information after a Facility Closure" that is worth a read, perhaps with respect to what you might want in a SaaS contract. At least you won't have to worry about migration though, if the data is completely lost.

I guess in the long term, as health care systems globally collapse under the weight of aging and sicker populations, it will merely be a matter of which races to the bottom faster, the non-viable SaaS providers or the non-viable health care providers. Their lost or inaccessible electronic records will probably be the least of our worries. It's a cloudy, gray and rainy day in the North-East today!


Thursday, July 25, 2013

MU Stage 3 Imaging Comments

Summary: Early this year comments were submitted for MU Stage 3, addressing viewing, downloading and transmitting images and radiation dose information.

Long Version.

I should have posted this back in January when I submitted my own comments, but better late than never.

The HIT Policy Committee put out a Request for Comment Regarding the Stage 3 Definition of Meaningful Use of Electronic Health Records (EHRs), Docket ID: HHS-OS-2012-0007, which was just that, an RFC, and not a proposed rule making. Within it, several issues were raised of relevance to the image sharing community, including the following, which I considered important to comment on:
  • Moving Stage 2 Menu Item to Core, regarding "imaging results consisting of the image itself and any explanation or other accompanying information are accessible through Certified EHR Technology"
  • With respect to View, Download and Transmit (VDT), a question was asked about exploring the readiness of vendors and the pros and cons of including certification for actual images, not just reports
  • With respect to View, Download and Transmit (VDT), a question was asked about exploring the readiness of vendors and the pros and cons of including certification for radiation dosing information from tests involving radiation exposure in a structured field so that patients can view the amount of radiation they have been exposed to
If you are interested in reading my comments, you can find them in the docket as HHS-OS-2012-0007-0082. I won't repeat them here, though I did just notice a typo (IID will be tested at the 2014 connectathon, not the 2015 connectathon).

Other folks also made relevant comments, including MITA (HHS-OS-2012-0007-0559), DICOM (HHS-OS-2012-0007-0575), and the ACR ITIC (HHS-OS-2012-0007-0571).

The government's site allows you to search the contents of the docket to find relevant comments.

For example, if you search on the word "DICOM", you will find in addition to the aforementioned, a bunch more from various vendors and facilities, some of which are generally supportive (e.g., Aware, Green Leaves, ACR, Siemens, lifeIMAGE, AAO, ACC), some less so (e.g., Philips, Heart Rhythm Society, Boston Medical Center, AAFP) and even some still completely opposed, for example, to providing images to patients (Intuit Health).

Even the EHRA comments this time, though still expressing concern, were not relentlessly negative; confirming, perhaps, that the strategy of using a link and having the images supplied by a different type of system does indeed assuage the EHR vendors' concerns expressed last time around.

One can dig deeper, e.g., by looking for all comments related to "image", though one gets a lot of spurious hits. One also finds individual facilities expressing concern; Montefiore, for example, are concerned about the need for integration with radiology practices, and that "interpretation of the image is not within the expertise of the orderer".

There does seem, though, to be a positive trend in the direction of including imaging more comprehensively and in a standard manner in Stage 3, though there is certainly a long way to go yet. Who knows who is listening, whether they have an open mind, and whether any proposed rule making will go as far as imaging-centric folks like me might hope (not to mention what standards, if any, might be required).


Wednesday, July 24, 2013

Display It! Now! I Command You!

Summary: The new IHE Invoke Image Display (IID) Profile enables an EHR/EMR/PHR/RIS to command a PACS/VNA/Viewer to display one or more imaging studies, without being concerned about where those images live or what form the viewer takes.

Long Version.

One of the good things about Meaningful Use is that it has drawn attention to the View use case for images, all limitations with respect to Download and Transmit that I have bemoaned before aside. A similar use case is important to the UK Imaging Informatics community, and no doubt everywhere else too.

The ink is drying on the new Invoke Image Display (IID) Profile from the IHE Radiology Technical Committee, which is intended to help with this use case.

Since I have to give a Webinar on the subject next week, I thought I would discuss the general principles (you can find the slides here).

IID works with a simple HTTP GET request and some parameters encoded in the URL. One system, like an EHR or EMR or PHR (or RIS or HIS or whatever the "non-image-aware" system is), can request that one or more studies, identified generically by id and date range or recency, or specifically by UID or accession number, etc., be displayed by another system (like a PACS, VNA, Workstation, Viewer, Image Portal (Staff or Patient), Proxy or Gateway or whatever).

No questions asked. Just display it. No concerns about format. No SOAP. No XML. No REST. No arguments about capabilities. Just do what you are told. This approach appeals to the closet (?) autocrat in me.

Examples of different requests:

  ?requestType=PATIENT &patientID=99998410^^^AcmeHospital &mostRecentResults=1
  ?requestType=STUDY &accessionNumber=93649236
  ?requestType=STUDY &studyUID=1.2.840.113883.19.110.4,1.2.840.113883.19.110.5 &viewerType=IHE_BIR &diagnosticQuality=true
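Such requests are simple enough to assemble with any HTTP library; a sketch in Python (the base URL is invented for illustration, and note that urlencode percent-encodes reserved characters such as "^"):

```python
from urllib.parse import urlencode

# Assemble IID-style request URLs (base URL invented for illustration).
BASE = "https://pacs.example.org/IHEInvokeImageDisplay"

def iid_url(**params):
    return "%s?%s" % (BASE, urlencode(params))

print(iid_url(requestType="STUDY", accessionNumber="93649236"))
print(iid_url(requestType="PATIENT",
              patientID="99998410^^^AcmeHospital",
              mostRecentResults="1"))
print(iid_url(requestType="STUDY",
              studyUID="1.2.840.113883.19.110.4,1.2.840.113883.19.110.5",
              viewerType="IHE_BIR",
              diagnosticQuality="true"))
```

From the requester's point of view, that single GET is the entire integration; everything after that is the Image Display's problem.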
The "viewer" (Image Display), however it is invoked, whether it be on/from a phone, tablet or desktop, within the user's web browser, zero footprint or not, thin or thick client, or even a separate workstation sitting beside the browser computer (e.g., a mammography workstation), has certain minimum responsibilities. They are summarized as interactive viewing. They include navigating within the requested studies (changing studies and series, and scrolling between images and frames), manipulating the appearance of the displayed image (window, zoom and pan), control over diagnostic quality or not, and key images only or not. The full Basic Image Review Profile is not required, but is a named type of viewer that may be requested and optionally supported.

This approach raises the question of how the requester knows which server to call. The answer, in brief, is by configuration (and perhaps matching of report locations to pre-configured lists of servers, etc.). But this is an alternative to having n:m proprietary customizations and configurations of EHR to PACS, and it is an alternative to hardwired URLs (e.g., to proprietary or WADO references to images) that may go stale, and require a separate viewer. And if the approach is adopted, then an additional standard endpoint discovery mechanism could be figured out.

It also avoids questions of security (authentication, authorization and access control) by deferring these to whatever standard mechanisms can be deployed at a lower level that are appropriate for HTTP requests. So whether SAML, OAuth or something else prevails, or if in the worst case the invoked display requires one to log in yet again (ugh), or is just pre-configured to trust the requester, this again is a matter for site configuration.

There are other deployment questions that are important to consider, not the least of which is browser capability and permissions to install/execute JavaScript, Java, ActiveX, plug-ins, or whatever, assuming that the requester is even browser-based, and not a thick client or native app performing HTTP requests.

Regardless, it is expected that the deployment burden is lower with this approach than with proprietary customizations of a combinatorial explosion of pairs of EHR and PACS.

Thus, IID is one more standard "component" to use as a tool to bring to bear on the non-trivial problem of image distribution and sharing, particularly with loosely coupled, non-integrated systems.

Note also that IID is not confined to staff viewing use cases; there is no reason why the same mechanism can't be used for a patient portal that is not image enabled to request an imaging system to display images for a patient (non-trivial authentication, access control and provisioning issues having been addressed).

It is also potentially useful for commanding behavior in a workflow managed environment, i.e., to use a workflow application to command a workstation to display something (that it has or knows where to get), rather than having a workstation pull a work list and have a user select from it.

Historically, to give credit where it is due, the idea came from the IHE Cardiology group. They introduced it as a transaction in their Image Enabled Office Profile, and we have extended it and brought it out as a separate profile so that it may be more generally applicable (and Harry tells me he will update IEO retrospectively to account for our tweaks).

So, get coding ... it would be great to have a few IID implementations register for the IHE NA Connectathon in snowy Chicago in January 2014 to work out the kinks. Maybe I will see you there, if I haven't quit IHE by then because of the Certification nonsense, which continues to spread like a cancer throughout the IHE organization.


Tuesday, July 9, 2013

Out of Body Experience - Anatomical Information in Images

Summary: Anatomical information is sometimes hard to come by in images, but it's not as bad as you might expect.

Long Version.

Information about the anatomic region included in a set of images is useful for a number of obvious reasons.

First and foremost, whether the user be an imaging specialist, a clinician who performs their own imaging, a referring practitioner who has requested imaging or is interested in procedures already performed, or a radiographer/technologist about to begin a new procedure, if one is browsing through a patient's record trying to find the "right" image(s) to answer some clinical question, anatomy, together with modality and approximate date, are useful.

A related use case, and one which is largely behind the scenes but impacts the quality of the user experience, is to pre-fetch images for any of the first set of use cases, and as we discussed last time, pre-fetching is back in vogue for one reason or another.

Hanging protocols are another application, particularly for longitudinal comparison of complex procedures that involve multiple parts, e.g. skeletal surveys.

So where does the anatomical information come from, in terms of who populates it and in which data elements?

In an ideal world, the anatomy would be implicit in a standard procedure code that was supplied in the request from the order entry system, which might be refined somewhat during the "protocolling" step in the RIS, then fed to the modality via the modality worklist, amended by the operator if they need to perform something other than what was requested, and then recorded in the images and the performed procedure step, and included in the reports. This procedure code, being standard, would have a standard mapping to its related concepts, i.e., what the general anatomic region was, and what the anatomic focus was.

Though such standard procedure codes do exist, in SNOMED, LOINC and more recently in RadLex (which has recently been extended to include CR/DX and NM), they aren't widely used. Indeed, as far as I can tell, they aren't used at all yet. In over a decade of performing international multi-center cancer clinical trials in my last job at RadPharm/CoreLab Partners/BioClinica, I never saw a standard value in the Procedure Code Sequence data element of any image (with the occasional exception of US CPT-4 codes, which though arguably "standard" are billing not ordering codes). Most often there was nothing there, or sometimes illegal empty values or garbage dummy values. If anything was present, it was a private or local code.

That said, there does seem to be reliable standard anatomic information in the image headers a large proportion of the time.

The history of this begins with the original DICOM standard released in 1993. Prior to that time, there were no data elements defined for describing the anatomy in the ACR-NEMA standards (of 1985 and 1988). DICOM introduced the Body Part Examined data element at the Series level, primarily for use with projection radiography (CR at the time). The original list was relatively short, 19 defined terms: ABDOMEN, ANKLE, BREAST, CHEST, CLAVICLE, COCCYX, CSPINE, ELBOW, EXTREMITY, FOOT, HAND, HIP, KNEE, LSPINE, PELVIS, SHOULDER, SKULL, SSPINE and TSPINE. Being defined terms, vendors (and users) are permitted to extend this list, as long as they don't duplicate the meaning of an existing term; Fuji CR, for example, describes in its conformance statements also sending HEAD, NECK, LOW_EXM, UP_EXM and TEST.

How did CR modalities obtain a value to populate this data element? Simple, they asked the operator. In the case of Fuji CR, the image processing and parameters applied to make an interpretable image are body part specific, and so the operator selection serves multiple purposes, applying the right processing and populating the DICOM data element. Over time, more general image processing algorithms have evolved that may not require anatomical information, but as X-Ray generators and tubes have become integrated, the body part specific selection of X-Ray technique factors provides another source of this information.

The Digital X-Ray object, introduced in 1998, both to support digital detectors and to improve upon the CR object in DICOM, went one step further and "coded" the anatomy more formally. That is, rather than using a single string value, a triplet of coding scheme (e.g., SRT for SNOMED), code value (e.g., T-04000) and code meaning (e.g., "Breast") was used, in a data element called Anatomic Region Sequence. A list of SNOMED codes for useful anatomic regions was provided, longer this time, 73 if I have counted those listed in Supplement 32 correctly. Included was a mapping from the "older" Body Part Examined string values to the new SNOMED codes, the list of standard values having grown slightly in the interim.
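Applying such a mapping is mechanical; a sketch with just a couple of entries drawn from the examples in the text (not the full table from the standard):

```python
# Mapping from legacy Body Part Examined strings to Anatomic Region
# Sequence coded triplets (scheme, value, meaning). Only two entries
# from the examples in the text, not the full standard mapping.
BPE_TO_CODE = {
    "CHEST":  ("SRT", "T-D3000", "Chest"),
    "BREAST": ("SRT", "T-04000", "Breast"),
}

def anatomic_region(body_part_examined):
    """Return a coded triplet for a legacy string value, if known."""
    return BPE_TO_CODE.get(body_part_examined.strip().upper())

print(anatomic_region("chest"))
print(anatomic_region("LOW_EXM"))   # vendor extension: no standard code
```

The second case illustrates the catch with defined terms: vendor extensions like LOW_EXM fall outside any standard mapping, so a converter has to decide whether to pass them through uncoded or maintain a local supplement to the table.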

Some of these new codes remained at the same general level of specificity as the historical Body Part Examined values, e.g., (T-D3000, SRT, "Chest") and CHEST. Others were very specific and for particular uses of radiography, such as to support particular views (e.g., (T-61300, SRT, "Submandibular Gland") to describe submandibular sialograms); others were specialty-specific (i.e., support was added for not only general radiography, but also mammography and dentistry). As an aside, a much richer description of the projection or view was also added, including codes for eponymous views (such as (R-102AE, SRT, "Waters"), etc.). The approach used at the time was to go through the classic projection radiography textbooks, enumerate all documented techniques, describe their anatomy and other dimensions, and add data elements and coded values for each, and then iterate with radiologists and applications specialists to assure comprehensive coverage. Some implementers expressed skepticism about burdening the console/QC station/plate reader operator, but with education about the possibility of using integrated generator/gantry information to capture the data, and the need to orient the image correctly and document its orientation, progress was made. I used to preach about this in my RSNA Refresher Course on Digital Radiography.

Over the years, all subsequent new DICOM image objects have been defined to use Anatomic Region Sequence, but Body Part Examined remains popular, and has been retrofitted with standard string values for a broad range of purposes; the list now contains 112 standard values (including, for example, GALLBLADDER and SUBMANDIBULAR). This has been done largely in recognition of the fact that the CR object has not gone away (despite the DX object being superior in every way, though I am not biased at all). Sadly, many PACS and viewers are still too dumb to handle coded triplets for display or switching. To be fair, if a PACS or viewer is going to allow the user or site to customize behavior based on some of these values, it is easier to develop a configuration user interface that allows them to enter plain text strings to match, rather than force them to think about codes or choose from a pre-populated drop-down list of SNOMED codes (that may not be up to date).

The list of body parts and anatomic region codes has been extended to cover the cross-sectional modalities too. In the early days, there was absolutely no indication of body part in CT and MR images. The standard described the use of Body Part Examined in the General Series module, so it was available, but you may recall that there was nowhere in the user interface on the console to enter it. There was no cutesy little homunculus to point and click to select the protocol, in which the anatomy was implicit. Before the days of modality worklist, there was no place to copy it from (not to say that anatomy is explicit in MWL either, but it can be derived from the Requested Procedure Code, or Scheduled Protocol Code Sequence, or nowadays the Protocol Context Sequence). Indeed, there were no standard protocols and one had to select (or type in) all the technique parameters individually every time. The best one could hope for was something meaningful in Study Description (more on that later).

CT and MR operators nowadays have it pretty easy by comparison, and as vendors have made the user interface more automated and graphical and intelligent, more information has become available for re-use. Many contemporary CT and MR modalities are indeed populating Body Part Examined and/or Anatomic Region Sequence, using values derived from operator protocol selection (and in some cases IHE Assisted Acquisition Protocol Setting).

Ultrasound is a tricky modality, being so operator-dependent in terms of positioning, as well as requiring discipline in terms of selecting from the user interface a description of each captured image. After an abortive attempt in the original DICOM standard to define encoding of ultrasound images, which included stuffing body part information into a value of the Image Type data element, a much cleaner Ultrasound IOD was quickly released, in Supplement 5. It was one of the first to use the Anatomic Region Sequence with codes, as described earlier, thanks to the influence of Dean Bidgood. Unfortunately, it seems that very few, if any, ultrasound devices actually provide a means for the user to populate this attribute. Nor is Body Part Examined populated in ultrasound, as far as I can tell.

Which brings us back to the question of reality. What does one actually see in real-world image objects received from various sites? Are Body Part Examined and/or Anatomic Region Sequence actually being populated? Do they contain standard values or non-standard strings or codes? Even if they are populated, are they correct and reliable?

The bottom line seems to be that in this day and age, for many modalities, they are often being populated, and if populated they are much more often using standard rather than non-standard values, and appear to be reliable when populated. This may be contrary to some people's beliefs or observations, but I can only report my own experience in this respect. As I mentioned before, in my former cancer clinical trials life, I had the opportunity to monitor images from literally thousands of sites around the globe, for most modalities (ultrasound being a major exception), from all vendors and vintages of machine. I can't report exact figures, but on several occasions in the past I examined what we were receiving to ascertain the feasibility of various efforts to improve the workflow of comparing successive time points, for both projection radiography and nuclear medicine bone scans as well as cross-sectional modalities.

In general, for projection radiography with CR, Body Part Examined is populated with a standard value about 75% of the time, is empty or absent about 10%, and contains a non-standard value about 15% of the time. Spot checks on individual images showed that the value sent is rarely incorrect.

This is surprisingly good for CR perhaps, which one might expect to be the least reliable, given the ease with which some vendors allow their sites to customize what can be put in there. If one inspects the non-standard customized values being sent, they fall into a handful of categories:
  • local language equivalents, e.g., BASSIN rather than PELVIS, BRUSTKORB rather than CHEST
  • extensions that include the view too, e.g. CHEST_PA
  • reasonable values that we should probably add to the standard list, e.g., FOREARM
  • incorrectly spelled equivalents, e.g. "L SPINE" with a space or "L_SPINE" with an underscore, instead of the standard "LSPINE"
  • incorrectly capitalized equivalents, e.g., "Chest" instead of "CHEST"
  • literal copies (sometimes capitalized) of some procedure or billing code, e.g., "CHEST 1 VIEW" or "XR ACUTE ABDOMEN W/PA CXR"
Not infrequently, non-standard values are not only non-standard, they are illegal. The CS (Code String) value representation does not permit lowercase or most special characters or accents, for example, and is limited in length to 16 bytes.

I can see why non-English-speaking sites are tempted to replace all the codes with local language equivalents, since the literally encoded value may be displayed in some modality and PACS user interfaces, or at least in some configuration screens, such as for hanging protocols. But they really shouldn't, since the standard values are supposed to be used regardless of the locale, and the user interface should perform the translation. This is just a bad, though understandable, practice.

One of the strengths of using Anatomic Region Sequence instead of Body Part Examined is that it is local language independent and one can send, and recognize, the same code value, regardless of the code meaning. I.e., one can send (T-D3000, SRT, "Chest") or (T-D3000, SRT, "Thorax") or (T-D3000, SRT, "Tórax") or (T-D3000, SRT, "Brystet") or (T-D3000, SRT, "胸郭") and they all mean the same thing. The idea is that hanging protocols, routers, pre-fetchers or just ordinary human readable browsers should recognize the code (T-D3000, SRT) and render to the user whatever is the locale-appropriate string. The code meaning encoded in the message is only there as a fall back in case the code is unrecognized (and indeed it used to be optional in DICOM when coded tuples were first introduced). That is the theory, anyway; unfortunately, the lowest common denominator in localization of PACS and viewing applications is probably not up to substituting code meanings yet, probably because users have higher priorities than localization (or their requirements are not being taken seriously by the vendors).
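The recognize-the-code-not-the-meaning behavior can be sketched in a few lines. The (T-D3000, SRT) code and the meaning strings come from the examples in the text; the assignment of each string to a particular locale is my own illustrative assumption:

```python
# Match on (code value, coding scheme designator), ignore the
# transmitted code meaning, and render a locale-appropriate string.
LOCALIZED_MEANINGS = {
    ("T-D3000", "SRT"): {"en": "Chest", "de": "Thorax", "es": "Tórax"},
}

def display_string(code_value, scheme, transmitted_meaning, locale="en"):
    strings = LOCALIZED_MEANINGS.get((code_value, scheme))
    if strings and locale in strings:
        return strings[locale]
    # fall back on the encoded code meaning when the code is unrecognized
    return transmitted_meaning
```

The point of the design is that the sender's choice of code meaning string becomes irrelevant as long as the receiver recognizes the code; the transmitted meaning is only ever used as a last resort.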

For cross-sectional modalities, given their history, I was expecting a lot worse than I actually observed. For CT, for example, about 60% of the time there is no value sent. No surprise there, but it could be much worse, and this is a sign of improvement. About 35% of the time there is a standard value, and about 5% of the time there is a non-standard value. For MR one sees values much less frequently; roughly 85% of the time there is no value, 10% a standard value, and 5% a non-standard value. For PET though, neither Body Part Examined nor Anatomic Region Sequence is ever sent, which is pretty lame (how hard is it to send the code for "whole body" anyway?).

Nuclear medicine is a mess. Like the ultrasound objects, the NM objects were revised early and redefined to include Anatomic Region Sequence. One standard value one sees fairly often is ("T-11000", "SRT", "Skeletal") for whole body bone scans, not surprising in an oncology practice. For historical reasons, the coding scheme may be "99SDM" or "SNM3" rather than "SRT", the price NM pays for being an early adopter of coded tuples. That said, one also sees a lot of private codes from one particular vendor, who sends "99NMG" for the coding scheme, and then sends codes that include not only the anatomy but also the view, which is the wrong thing to do since there is a separate coded data element for that.

Interestingly, I do not see very many combined body parts showing up, apart from TLSPINE. This is probably a consequence of the fact that Body Part Examined is a Series level attribute (and Anatomic Region Sequence is image level). In other words, two different Series in a single Study may have different values for these attributes. This is important to account for if one wants to come up with a single anatomic descriptor of the entire procedure, so a system may need to have the ability to detect and combine these. DICOM defines a bunch of these combined parts, and adds more as they are conceived of (for example, I recently realized we don't have a good value for aortic arch plus carotids plus circle of Willis for MRAs). There is a trivial example of how to do this using the available combinations defined in DICOM in com.pixelmed.anatproc.CombinedAnatomicConcepts in my PixelMed toolkit if you are interested; i.e., one doesn't need the complete SNOMED ontology to recognize the relationships, only a tiny subset of it (more on that in a later blog post perhaps).
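A toy version of that detect-and-combine idea, far simpler than the PixelMed class (and not its actual algorithm): keep a small table of pairwise combinations and apply it repeatedly until nothing more combines. The TSPINE + LSPINE → TLSPINE entry reflects standard values mentioned in the text; any real table would be drawn from the full set of combined values DICOM defines:

```python
# Pairwise combinations of per-Series body part values into a single
# combined study-level value; illustrative subset only.
COMBINATIONS = {
    frozenset({"TSPINE", "LSPINE"}): "TLSPINE",
}

def combine_body_parts(parts):
    """Reduce a collection of per-Series body parts by repeatedly
    replacing any combinable pair with its combined value."""
    parts = set(parts)
    changed = True
    while changed:
        changed = False
        for pair, combined in COMBINATIONS.items():
            if pair <= parts:
                parts = (parts - pair) | {combined}
                changed = True
    return parts
```

This is exactly the "tiny subset of SNOMED" point: a lookup table of known combinations suffices, with no ontology reasoning required.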

On the subject of tools as well as limited structured anatomical information, I cannot finish without mentioning Study Description and its ilk, Series Description and maybe even Protocol Name. Worse even than non-standard string values in Body Part Examined, these descriptive data elements can contain anything at all. Indeed that was their intent, to be a human readable description, and not something that was machine recognized. Originally, the modality operator typed in free text values, and often they still have that flexibility, or at least the ability to edit what is pre-populated by protocol selection. Sadly, since Study Description and Series Description are the most frequently populated data elements in practice, and are incredibly useful for human browsing, it has become commonplace to try to match or parse their content to dictate downstream behavior, such as hanging protocol selection or matching.

Anyhow, given a site-specific set of such description data element values, one can either parse them and try to find anatomic words or phrases, in order to be adaptable to local variations, or one can just do a straight match on the entire string. In order to better support some of my use cases, particularly extracting anatomy for radiation dose extraction projects, I spent a while working on the description-parsing problem, with some success. You can find in the com.pixelmed.anatproc package a bunch of attempts to do this, both for cross-sectional and projection radiography, as well as for multiple languages. By comparison, you might want to look at the RadMapps approach, which just does a straight full-string mapping; that requires one to build a mapping once for any site's list of descriptions, and then maintain it as the list evolves. This is the approach being used for the ACR's Dose Index Registry, for example, where they only have to cover a small subset of all possible procedures. In these approaches, there is some blurring between purely anatomical information and other interesting things one might want to also extract, like why the procedure is being performed or the particular manner in which it is being performed (such as being a CT angiogram or being thin slice, etc.), but the anatomy is a key part of the process. For some use cases it may not even be necessary to extract the anatomy separately, since the goal may be to map to a particular standard procedure code.
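The parsing approach can be sketched as a phrase dictionary scanned longest-match-first, which also handles the local language equivalents seen in the CR data. The vocabulary below is a tiny illustrative subset (BASSIN and BRUSTKORB come from the examples earlier); a serious attempt looks like the com.pixelmed.anatproc package, not this:

```python
# Map anatomic words and phrases (including some local language
# equivalents) found in description strings to standard body part values.
VOCABULARY = {
    "chest": "CHEST", "thorax": "CHEST", "brustkorb": "CHEST",
    "abdomen": "ABDOMEN",
    "pelvis": "PELVIS", "bassin": "PELVIS",
    "lumbar spine": "LSPINE", "l spine": "LSPINE",
}

def extract_anatomy(description):
    """Return the set of standard body part values whose vocabulary
    phrases occur in the description, longest phrases first."""
    found = set()
    text = description.lower()
    for phrase in sorted(VOCABULARY, key=len, reverse=True):
        if phrase in text:
            found.add(VOCABULARY[phrase])
    return found
```

Naive substring matching like this will have false positives in the wild (which is part of why the straight full-string mapping approach is attractive when the site's list of descriptions is small and enumerable).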

Indeed, one might suspect that the primary reason for the popularity of VNAs and the dreaded "dynamic tag morphing" is to deal with the impedance mismatch between the way different vendors and sites have their modalities populating Study and Series Description and the limited configurability of some PACS hanging protocols that depend on these. Of course, I hate to say it, but the "dynamic tag morpher" is probably a good tool to do the extraction or matching of descriptive attributes to populate structured attributes with standard codes for procedures and anatomy, if it has the sophistication required; i.e., use it not just to "clean up" descriptive attributes, but to augment the header with codes extracted from them. Better of course would be to get it right "first", i.e., off the modality or fixed during ingestion, and for everyone to use the same standard codes as the interoperable set, rather than have to "dynamically" coerce the values to match varied expectations of the recipients.

The bottom line is that reliable anatomical information is almost certainly available somewhere, if you want to go to the trouble to extract it, in decreasing order of desirability, increasing order of difficulty, and increasing order of likelihood of availability, from:
  • implicit in a standard Procedure Code Sequence value, supplied by the worklist and encoded in the header
  • in a standard Body Part Examined value or Anatomic Region Sequence code, extracted from the worklist procedure code, automatically or operator selected protocol, or operator selected dropdown
  • extracted by matching or parsing the Study and/or Series Description or Protocol Name data element string values
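The cascade above can be sketched directly, trying each source in decreasing order of desirability. Everything here is illustrative: the lookup tables are stand-ins for real procedure-code and description mappings (the MKNEB NICIP code for a bilateral MR knee is discussed elsewhere on this blog, but its mapping and the "NICIP" designator string as used here are my own shorthand):

```python
# Illustrative stand-ins for real mapping tables.
PROCEDURE_CODE_ANATOMY = {("MKNEB", "NICIP"): "KNEE"}
DESCRIPTION_ANATOMY = {"mr knee": "KNEE"}

def anatomy_for_study(procedure_code, body_part, descriptions):
    # 1. implicit in a standard procedure code supplied by the worklist
    if procedure_code in PROCEDURE_CODE_ANATOMY:
        return PROCEDURE_CODE_ANATOMY[procedure_code]
    # 2. a standard Body Part Examined (or Anatomic Region Sequence) value
    if body_part:
        return body_part
    # 3. last resort: match or parse the free-text descriptions
    for d in descriptions:
        for phrase, anatomy in DESCRIPTION_ANATOMY.items():
            if phrase in d.lower():
                return anatomy
    return None
```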

PS. Before someone asks, in DICOM, laterality is conveyed separately, encoded in either Laterality or Image Laterality (or in some cases Frame Laterality), and not pre-coordinated with (built into) the Anatomic Region Sequence or Body Part Examined. The opposite is true for Procedure Code Sequence, which has no separate laterality modifier, and for which laterality needs to be pre-coordinated.
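Since laterality is pre-coordinated in the procedure code but separate in the image attributes, a clever downstream system could derive the latter from the former on ingestion. A minimal sketch using the NICIP MR knee codes I have discussed elsewhere (MKNEL/MKNER/MKNEB for left, right and bilateral); note that the "NICIP" coding scheme designator string here is my own shorthand, not necessarily what a real message would carry:

```python
# Laterality implied by pre-coordinated NICIP MR knee procedure codes.
NICIP_LATERALITY = {"MKNEL": "L", "MKNER": "R", "MKNEB": "B"}

def laterality_from_procedure_code(code_value, coding_scheme):
    """Return a DICOM Laterality value (L, R or B) implied by a
    pre-coordinated procedure code, or None if none can be derived."""
    if coding_scheme == "NICIP":
        return NICIP_LATERALITY.get(code_value)
    return None
```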

Saturday, July 6, 2013

Pre-Fetching: Zombie Apocalypse or Nirvana?

Summary: Pre-fetching is back, driven by sluggish access to cloud-based archives and the need for a "local cache".

Long Version.

Like characters in a bad horror movie, or an eighties band, pre-fetching is back, resurrected from the dead (if it ever was truly dead).

For a while, with the concept of "all images spinning all the time for all users" we thought we were on a roll in terms of on-demand access. Assuming all those images were spinning "locally" that is. Tape and optical disk were going the way of the dodo and we didn't have to listen to StorageTek marketing presentations about hierarchical storage masquerading as scientific abstracts at SPIE and SCAR (SIIM) any more. Worst case, one could approach image egalitarianism, i.e., all image access equally fast or slow for everyone, if one also made available equal bandwidth.

Not so, it would seem.

When the HIPAA Security rule required everyone in the US to have a means of disaster recovery, and reliable off-site archives came into vogue, it was not expected that these archives would necessarily have on-demand access performance, though it created an obvious opportunity for off-site access. Likewise with the DI-r's in Canada. But nowadays the distinction between the off-site archive and the only archive you have is becoming blurred, as everyone jumps on the "cloud" (aka. Software as a Service (SaaS), or Storage as a Service (STaaS), formerly Application Service Provider (ASP)) bandwagon, based on the naive assumption that if it is good for streaming movies on your smart phone or tablet, the "cloud" must be good for everything else too.

The aggressive marketing of the Vendor Neutral Archive (VNA) concept, often implemented as, or confounded with, cloud storage, has resulted in the introduction of another "layer" between the PACS user and where the images are, in some cases.

Some disk (arrays) and their interfaces are also cheaper, and potentially slower, than others, so even in the absence of awful media like tape and optical disk, the concept of different "tiers" of storage performance (in terms of either access or in some cases reliability), has not gone away either. Obsession with regulatory and legal issues has led many people to initially purchase far more expensive storage than is perhaps the minimum necessary to do the "caring for the patient" part of the job, and left a nasty (expensive) taste in some customers' mouths. Regardless, it is hard to argue with the economies of scale a provider like Amazon might be able to obtain (as long as it wasn't branded "medical" aka. unnecessarily regulated and excessively expensive and ripe for profit taking).

Anyhow, the buzzword du jour, much bandied about at the last SIIM, was "local cache". I.e., the images that you can access in reasonable time because they live on site and are optimized for performance, and perhaps are already "inside" your PACS and don't need to be retrieved from some other person's product (like a VNA). As opposed to those that are not, for which access performance may suck. Even if you don't have a PACS per se, or access images through it, but perhaps use a (buzzword alert) "universal viewer", the performance difference between images cached in a local server rather than pulled from off-site on demand may be "noticeable", to put it mildly.

I was interested in a comment from someone (can't remember who it was, or what system or architecture they were using), who reported that a colleague genuinely thought that the "A" flag in their study browser stood for "Absent". Apparently it really stands for "Archived", but they drew their own conclusion based on their experience. [Update: Skip Kennedy claims responsibility for telling me this :)]

So, whether you want one or not, it sounds like a "local cache" is in your future, if you don't already have one, whether it be for radiologists' priors or for other users' access to contemporary or older procedures.

How do images get into such a cache in the first place? If the cache is the PACS, the obvious way is to keep the recent stuff, i.e., stuff that was recently acquired, or imported from CD or received from outside for contemporary patient care events (even if they are in the ED or the clinic and have nothing to do with radiology, i.e., are not read again). If the cache is not the PACS, but some pseudo-pod of the off-site archiving allowed to extrude into your local area network (i.e., the on-site box bit of the off-site archiving solution), then likewise, anything recent can be routed to it. But the PACS or local box may fill up, and hence a purging strategy is required (assuming failure and buying more disks are not options, which this discussion presupposes). Not every PACS can do this but let's assume it can. It might even do so intelligently (e.g., purge dead people (assuming Haley Joel Osment doesn't take up radiology), adults, acute not chronic conditions, etc.), but that is a digression.
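The simplest purging strategy, ignoring the more intelligent criteria just mentioned, is to delete the oldest studies until the cache is back under a high-water mark. A minimal sketch (all names illustrative):

```python
# Age-based purge: evict oldest studies until under the capacity limit.
def purge(studies, capacity_bytes):
    """studies: list of (study_uid, size_bytes, study_date) tuples,
    study_date as a sortable YYYYMMDD string.
    Returns (purged_uids, kept_studies)."""
    used = sum(size for _, size, _ in studies)
    kept = sorted(studies, key=lambda s: s[2], reverse=True)  # newest first
    purged = []
    while used > capacity_bytes and kept:
        uid, size, _ = kept.pop()  # evict the oldest remaining study
        purged.append(uid)
        used -= size
    return purged, kept
```

A real implementation would weight by things like patient status and likelihood of recall rather than age alone, which is exactly where purging starts to overlap with the pre-fetching intelligence discussed next.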

Sooner or later the priors that are potentially useful for new procedures or for clinical care will be purged and access will be slow or non-existent. Enter the pre-fetcher, which tries to bring some intelligence to bear (?bare) on the problem of what to fetch back and when, and hopefully do it in time. The literature from the 1990's and early 2000's is replete with articles about this (just search the SCAR/SIIM, CARS, SPIE Medical Imaging conference proceedings, journals like JDI and even RadioGraphics, as well as text books like Bernie Huang's). If you are interested, a couple of classics are Levin and Fielding from SPIE MI 1990, Siegel and Reiner JDI 1998, Andriole et al JDI 2000, Bui et al in JAMIA 2001, and the work of Olivia Sheng's group and Okura et al JDI 2002 on artificial intelligence methods. Approaches range from the simple expedient of the age of the study, through using the modality, the body part or the clinical question. The relevance of the body part in particular will be discussed in a follow up post here, and was my motivation for addressing this topic in the first place.
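The range of approaches just described, from simple age-based rules through modality and body part matching, amounts to scoring candidate priors for relevance and fetching the best ones first. A toy sketch, with entirely illustrative weights (the cited literature describes far more sophisticated, including rule-based and AI, methods):

```python
# Toy relevance score for candidate prior studies, combining body part,
# modality and recency; weights are illustrative only.
def prior_score(prior, current, max_age_days=5 * 365):
    score = 0.0
    if prior["body_part"] == current["body_part"]:
        score += 0.5
    if prior["modality"] == current["modality"]:
        score += 0.3
    # recency: linear decay to zero at max_age_days
    score += 0.2 * max(0.0, 1.0 - prior["age_days"] / max_age_days)
    return score

def select_priors(priors, current, limit=3):
    """Return the top few candidate priors to pre-fetch."""
    ranked = sorted(priors, key=lambda p: prior_score(p, current), reverse=True)
    return ranked[:limit]
```

The `limit` matters as much as the score: as noted below for clinicians' offices, pre-fetching everything for every scheduled encounter can overwhelm a small local cache, so selection has to be both relevant and frugal.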

One of the important things to bear in mind is that pre-fetching is relevant not just for radiologists' priors before reporting the current procedure. It is also important for the clinicians, who may well be interested to know, even outside the context of a current radiological procedure, that other procedures have been performed in the past, whether locally or at other facilities, and want to access them without delay at the time of patient consultation, or surgery or some other intervention. Figuring out what is relevant for a clinician may be considerably more complicated (to optimize) in some of these scenarios than finding priors for radiology reporting, and some of the systems in these users' offices may be much less robust. In particular, local cache sizes and bandwidth may be relatively low, and so not only is fast on-demand access for large studies like whole body CTs, PETs and breast tomosynthesis challenging, but excessive pre-fetching of all images for every scheduled patient encounter may overwhelm resources, and hence need to be selective and optimized. An interesting twist to this pre-fetching scenario is that there may be no RIS involved and hence no access to certain events and information; on the other hand the report will likely have been completed and more information may be available from the EMR/EHR/PHR.

Another SIIM theme this year, the decomposition of traditional PACS into its various component parts, archive, display and workflow, for example, seems to be well under way, with new hardware and software technology being brought to bear on classical problems, or having to leverage classical solutions. Hopefully lessons learned in the 1990's will be effectively reapplied, rather than needing to be reinvented. New factors, such as the ability to pre-fetch from central repositories and other facilities, will add interesting challenges, or opportunities if you choose to look at them that way. Likewise, the PACS migration problem potentially overlaps with pre-fetching when the decision is made to migrate patients or studies only on anticipated need, rather than all in advance.

Don't forget though, that "A" should be for "Accessible" not "Absent", and whether it is "Archived" or not should be irrelevant to the users' experience.

It is good to know that accessibility Nirvana (the goal, not the band) is just around the corner, once again.


PS. And yes, before you comment about it, I know about "server side rendering", and about Citrix, and why sometimes the images don't have to live locally, if these mechanisms float your boat.

PPS. Just for clarity, I am obviously not talking about the use of the term "cache" in the HTTP protocol sense, by which means, as Jim Philbin regularly reminds us, non-specific "stuff" that has not changed in its content can be served up closer to where it is needed by various caching proxies using technology that has nothing to do with medical imaging applications. This is one of the major justifications for the WADO-RS DICOM stuff that grew out of the MINT project. Though, of course, if it hasn't been pre-fetched, it won't have been seen by the HTTP caches recently either, and even if it has been pre-fetched, it still might not be cached in the intervening proxies on the way to the user.

Wednesday, July 3, 2013

My PACS has fallen down, and I can't get it upgraded

Summary: Many people have PACS that are not the latest version, and hence cannot use new features; new features are not added to old PACS versions.

Long Version.

In my travels preparing for the Breast Tomo forum that Rita Zuley and I hosted at SIIM (Digital Breast Tomosynthesis & the Informatics Infra-Structure: How DBT Will Kill Your PACS/VNA), I was surprised to discover that the key question was not just "Does your PACS vendor support the DICOM Breast Tomosynthesis SOP Class?", as one might have expected, or even "Do you have the bandwidth/storage/memory/display hardware to handle the large data volume?".

Rather, it was "Do you even have the current version of your PACS?"

This rather surprised me initially, but made sense when I thought of some of the barriers to upgrading, like the need for a fork-lift in some cases (or more seriously, the cost of the necessary server-side hardware). The site that initially exposed me to this dilemma has a problem that may be slightly unusual, extensive customization of additional services added on to a much older version of the PACS, which they cannot do without.

To try to get a better handle on how widespread this problem was, I did a little survey on a couple of forums, like pacsadmin and comp.protocols.dicom. The response wasn't great, and in retrospect I should probably not have chosen returning an Acrobat form by email as the survey mechanism, but the online survey tools I checked out first had some limitations too.

Anyhow, since I promised to share the survey results, and did at SIIM, here goes. I ultimately got 23 responses.

Systems were from
  • different countries (18 US, 2 Canada, 2 Europe, 1 Asia),
  • various settings (13 metropolitan, 2 rural, 8 mixed),
  • various scales (5 multi-enterprise, 10 enterprise, 4 multi-departmental, 3 departmental and 1 sub-departmental) and
  • multiple vendors (2 Agfa, 2 DR, 3 Fuji, 6 GE, 2 InteleRad, 2 McKesson, 2 Merge, 1 Philips, 2 Sectra, 1 Siemens).
Only 5 (22%) reported that they had the current (i.e., latest) version of their PACS in use, but 14 (61%) did say that they planned to deploy the current version within 3 months to 1 year (2 in 3 months, 4 more in 6 months, 8 more within 1 year).

The structured capture of reasons for not having the latest included:
  • cost (5)
  • resources for deployment (1)
  • resources for validation (4)
  • Meaningful Use distraction (3)
  • custom RIS interface (1)
  • custom reporting/speech interface (0) 
  • custom data mining interface (0)
  • custom other interface (0)
  • awaiting vendor change (2)
  • awaiting VNA (0)
  • other reasons (13)
Some of the other reasons for delaying that were described in text comments (and which overlapped with some of the structured questions) included the need for validation and user feedback, new features not being "significant enough" (so waiting for next version), server hardware replacement being needed, completing an interim version that needs to be installed first, awaiting a possible vendor change, or the practice of waiting for a while until a release has been generally available (presumably to see what problems it has).

The remainder said they were not going to deploy the current version either for more than 2 years (2 sites) or ever (2 sites). Reasons cited were that the PACS was externally managed & the supplier refuses, or it already "works" so no need for it.

In terms of what they were missing out on by not upgrading:
  • media export (2), import (2)
  • key images (1)
  • annotations (3)
  • 3D (4), fusion (4)
  • DCE (4), breast DCE (3)
  • IHE Mammo Profile (3)
  • Breast tomo (3)
  • JPEG 2000 (4)
  • WADO (2), XDS-I.b (2)
Other stuff mentioned as missing was remote caching, life cycle management and auto-deletion, increased exam capacity, reasonable performance (!), and some new SOP Classes (unspecified).

Note that the survey did not include the initial site that prompted my interest, which has too much customized stuff that depends on an obsolete version, and was certainly missing out on Mammo tomo.

This was not a very scientific survey, and the respondents may well have been biased by the context in which the questions were asked, and selectively been more likely to respond if they had an older PACS version perhaps.

The information that Julian Marshall from Hologic presented at the same forum also suggested that there was poor uptake of the new SOP Classes (and of sufficiently capable hardware) needed to cope with breast tomo.

Hopefully SIIM will post the slides and transcript on their web site soon, but in the interim, here are my slides from the forum, and if you need any images to kill your lame old PACS with, try these tomo ones. If you have any of your own to contribute, let me know and I will provide a place to share them.


PS. Interestingly nobody mentioned that a reason was that their PACS vendor had failed and gone out of business, which I guess is a good thing :) Or even mentioned that they had been acquired by another vendor, which is interesting too. Too small a sample, methinks.

PPS. Here is a link to the survey form used, in case you are interested, or want to complete it yourself; I will continue collating results.
