Medical Connectivity

September-9-2014

18:06

This summer, FDA proposed lifting regulations from certain currently regulated medical devices. This unprecedented policy shift targets devices known as Medical Device Data Systems (MDDS) and is intended to benefit the mobile app industry and companies like Google, Apple and others. The current regulatory burden for MDDS devices is Class I, 510(k) exempt. This means manufacturers have to follow a basic quality system (i.e., design controls) on par with ISO 9001, and report to FDA instances of patient injury or death, as well as any product recalls.

The following is a guest blog post embodied in an abridged version of a comment submitted to FDA in response to their draft guidance.

Introduction

Many EHRs and a significant number of health IT (HIT) and clinical decision support (CDS) systems are, by current law, de facto Class III medical devices because they have not heretofore been regulated and classified.  Class III devices are high-risk devices subject to the highest level of regulatory control.  Because new products have never been regulated (and thus embody an unknown level of patient safety risk), they are by default classified as Class III devices. Most products that come to market are “updates” of existing solutions based on older technology. These products can claim equivalence to previously regulated devices, known as predicate devices, and are typically classified as Class II devices. After their initial regulation as Class III devices, new products for which there is no predicate device are often down-classified to Class II.

The FDA currently practices regulatory enforcement discretion over many HIT and CDS systems (but not quite all – e.g. Blood Bank software), leaving them in a sort of classification limbo.  If properly classified, some might end up Class II, Class I, or even unregulated.  (https://en.wikipedia.org/wiki/Enforcement_discretion)

In 2011 the FDA took a concrete step to rationalize regulation in the HIT space when it finalized the Medical Device Data System (MDDS) rule, essentially a new Class I, 510(k) exempt device regulation for the simplest type of HIT medical device interface: one that transmits, stores, and displays medical device data without significantly altering it.  Since then at least 316 MDDS devices have been listed with the FDA. (To view current FDA registered MDDS devices, go here and enter “oug” into the Product Code field in the query form.)

In a surprising move, in June 2014 the FDA issued the Medical Device Data Systems, Medical Image Storage Devices, and Medical Image Communications Devices Draft Guidance for Industry and Food and Drug Administration Staff (MDDS Draft Guidance) that proposes to eliminate FDA regulatory oversight of MDDS through enforcement discretion.  The agency’s justification was:

“Since down-classifying MDDS, the FDA has gained additional experience with these types of technologies, and has determined that these devices pose a low risk to the public.”

The MDDS Draft Guidance did not describe what “additional experience” merited the determination “that these devices pose a low risk to the public.”  Separately, FDA officials have stated on public blogs that they expect HIT products to be regulated by a different agency within HHS – even though no law or authority declassifying those products (such as EHRs) as medical devices has been passed.

We don’t know anyone who thinks the FDA and its processes couldn’t be improved, but in our opinion it’s the best system we have in place now.  Like other HIT, MDDS’s can create significant patient safety and cybersecurity risks even if their intended functionality is a simple data pipe. Dropping regulatory oversight of MDDS devices is throwing the baby out with the bathwater.  The quality and value of HIT technologies like MDDS and their connected EHR and CDS systems depend on interfaces that provide trustworthy data. Otherwise, garbage inputs will produce garbage outcomes.

We submitted a comment through regulation.gov opposing the proposal to eliminate enforcement of the MDDS rule.  The major points are summarized below.  The entire MDDS rule can be found here, and the MDDS Draft Guidance Document can be found here.  Our full comments are available here, as published on www.regulation.gov.

Overview

Our Comment on the Draft MDDS Guidance Document covers three topics:

1)     Cybersecurity Risks

2)     Software Defects from Complex Connected Systems

3)     Known MDDS Device Defects

Cybersecurity Risks

In any component, system, or system of systems, the security of the whole is only as good as its weakest link.  FDA’s “accessory rule” concept applies well in the domain of security and privacy.  Any interface, as a component of a larger system, poses a potential security or privacy vulnerability. Additionally, and importantly, the functionality, classification, interface standard conformance, or intended use of an interface does not correlate with the potential for that interface to contain a security or privacy vulnerability. For example, a wireless connection is no more or less vulnerable to cyber attack just because it is intended only to receive occasional physiological data. Every MDDS, especially a wireless one, creates potential cybersecurity vulnerabilities.  Design controls are a necessary, but not a sufficient, means to reduce those vulnerabilities.  Without them, MDDS’s and everything they touch become more vulnerable.

We believe providers and clinicians under-report security and privacy violations and will continue to do so until they have additional liability protection. Thus the FDA’s collection of cybersecurity vulnerabilities is incomplete.  The only argument can be over how much they miss.  Post-market surveillance of cyber security should probably be beefed up for all medical devices, particularly for HIT such as EHRs.

There may well be systemic, real, or imagined reasons why providers, hospitals and HIT manufacturers are reluctant to report cybersecurity vulnerabilities in their medical devices and HIT systems.  That is why we further suggest that, in addition to the FDA continuing regulatory oversight of MDDS’s (including post-market surveillance), the FDA and other agencies such as NIST, FCC, and ONC should expand their cooperative efforts to improve collection and analysis of cybersecurity vulnerabilities across the nation’s entire HIT infrastructure. But that is a large topic best left for another article.

Software Defects from Complex Connected Systems

In 2012, the Food and Drug Administration’s Center for Devices and Radiological Health (Office of Compliance, Division of Analysis and Program Operations) published the Medical Device Recall Report FY2003 to FY2012. The entire report can be found here (pdf).

The report concluded (see page 18) that software design failures were the most common cause of medical device recalls and recommended expanding regulatory oversight of software medical devices.  We agreed with that finding in 2012 and still agree with it now. Increasingly connected, integrated, interfaced, or interoperable systems are more complex and have more complex interactions.  Therefore they are more likely to contain defects in their individual components or the systems as a whole.

Basic Systems Engineering tells us that the FDA’s proposal to drop design controls over the MDDS “connection” part of such systems is exactly wrong.  It creates a potential weak link and makes detecting and fixing other defects within these systems more difficult.  The MDDS Draft Guidance is a step backwards for cybersecurity, software quality, and patient safety.

Published MDDS Defects

A search of the FDA’s MAUDE database for the keyword “MDDS” returned 66 hits – reports on MDDS’s, or on devices or EHRs connected to MDDS’s.  Yet only 316 or so MDDS’s have been listed with FDA for commercial marketing since the rule took effect on April 18, 2011.  On the surface, that seems a high rate of reports.  We examined a few MDDS-related MAUDE reports, MEDSUN entries, and Recall Letters.  None of the defect descriptions contained anything surprising to someone with even a modicum of hands-on IT experience.  Four selected MDDS defect reports are described below.  We quote directly from the FDA databases (typos may be from the original documents) and provide a link to the original complete documents.
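For readers who would rather reproduce this kind of keyword search programmatically than through the MAUDE web query form, the short sketch below queries FDA’s openFDA device adverse event API, which mirrors MAUDE. The field name and search syntax are our assumptions and should be verified against the openFDA documentation; the hit count will also drift from the 66 reports cited above as new reports are filed.

```python
# Hedged sketch: reproducing a MAUDE keyword search via FDA's openFDA
# device adverse event API. The endpoint is public, but the field name and
# search syntax used here are assumptions -- check https://open.fda.gov
# before relying on the results.
import json
import urllib.request

url = ("https://api.fda.gov/device/event.json"
       "?search=mdr_text.text:MDDS&limit=5")  # assumed field and syntax

with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

print("total matching reports:", payload["meta"]["results"]["total"])
for report in payload["results"]:
    print(report.get("report_number"), "-", report.get("event_type"))
```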

Abbott Initiates Voluntary Recall of FreeStyle InsuLinx Blood Glucose Meters

The company has determined that at extremely high blood glucose levels of 1024 mg/dL and above, the FreeStyle InsuLinx Meter will display and store in memory an incorrect test result that is 1024 mg/dL below the measured result. For example, at a blood glucose value of 1066 mg/dL, the meter will display and store a value of 42 mg/dL (1066 mg/dL – 1024 mg/dL = 42 mg/dL). No other Abbott blood glucose meters are impacted by this issue.

The functionality described in the recall included only the communication, storage, and display of a physiological value (blood glucose levels) from a medical device.  If those functions were compartmentalized they would be an MDDS.  In other words, Abbott found a defect in an MDDS serious enough that they issued a voluntary recall of that device.  Abbott is a large, highly respected medical device manufacturer with vast experience in design controls and post-market surveillance.  We are concerned that, had a similar MDDS been developed by a different company without design controls and no experience with medical devices, this defect would likely have gone undetected and the product would not have been voluntarily recalled.
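Abbott’s notice does not state a root cause, but the arithmetic in their example (1066 – 1024 = 42) is the classic signature of a value silently wrapping at a fixed-width boundary. The sketch below is purely a hypothetical illustration of how a reading stored in a 10-bit field reproduces exactly the behavior described in the recall; it is not Abbott’s code or design.

```python
# Hypothetical illustration only -- not Abbott's implementation.
# A glucose reading stored in a 10-bit field (0-1023) silently wraps at
# 1024, reproducing the arithmetic described in the recall notice.

FIELD_BITS = 10  # assumed storage width; 2**10 = 1024

def store_reading(glucose_mg_dl: int) -> int:
    """Simulate storing a reading in a fixed-width, wrapping field."""
    return glucose_mg_dl % (1 << FIELD_BITS)

for measured in (95, 350, 1023, 1066, 1200):
    shown = store_reading(measured)
    note = "  <-- wrapped" if shown != measured else ""
    print(f"measured {measured:4d} mg/dL -> displayed {shown:4d} mg/dL{note}")

# measured 1066 mg/dL -> displayed 42 mg/dL, matching the recall example.
```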

The following three reports describe defects in systems incorporating MDDS’s, and speak for themselves.

General Electric Centricity MDDS PACS

A critically ill pt under went radiographic eval of chest and abdomen. The last name of the pt contained one apostrophe. The radiograph images could not be accessed on the ehr results mdds. It was determined that the one entering the pt’s name at the imaging vendor entered a double apostrophe, rather than one. It could not be corrected for days, once the images were found, 5 days after they were done. It took another 3 days for pacs vendor to correct this misidentification issue. The vendor’s device is defective because it allowed absurdity (there is never a name with consecutive apostrophes) and it failed to warn of the error. These mdds devices need tighter regulation, surveillance, and safety.

ProTouch MDDS/EHR Device

Complex case with multi organ failure was on high doses of potassium supplements and potassium sparing medications. The potassium level obtained in the lab and electronically sent to the dhr mdds had increased from 4. 0 to 4. 9 mg% over a 24 hour period of tome. The nurses were not alerted by the mdds of new results, nor did they open the mdds to check the interval change of the potassium level prior to administering 40 meq potassium chloride twice. This points out the defect in the mdds, which is its failure to notify of new results and provide meaningfully useful decision support. These devices are not safe and require oversight.

Siemens Soarian

Cultures were obtained from a deep skin infection involving an implanted medical device. Multiple cultures grew serratia marcescens. The antibiotic sensitivities were lost from the mdds section of the ehr, or they were never posted due to an interface failure. A work around was required to find the results, but they remain absent from the silo of the ehr where they should appear. This defect causes delays in care and adversity due to delays in pinpointing the correct antibiotic to use in this critical situation. This genre of flaw raises doubt in the health care professionals as to whether the presentation of results on any pt are accurate.

The last MAUDE report nicely summarizes the systemic risks of MDDS devices. A flaw in any MDDS whose purpose is to populate a patient record with physiological data raises doubts about the accuracy of ALL patient data in ALL EHRs.  Modernizing our country’s healthcare delivery system requires EHRs and associated HIT systems that are well designed, correctly implemented, diligently operated, and trusted by payers, providers and patients.  If only a few MDDS’s are found to be significantly defective in functionality, reliability, operation, security, or privacy, then the trust placed in all EHR data (including financial and demographic data) by clinicians and patients will be broken – regardless of the quality of their own particular systems.

There is no doubt that implementing design controls is a non-trivial effort, particularly when compared to the pure development cost of small mobile software applications that may perform, on the surface, similar functionality.  But the country is not in need of cheap vulnerable apps and untrustworthy interfaces. Improving healthcare requires high quality, reliable, and effective software that safely and correctly interacts with other regulated and unregulated HIT systems.

The FDA has been improving their regulatory processes and supporting innovation with the Mobile Medical Apps guidance document, recognizing more standards, and other published and in-process guidances and rules.  We applaud the FDA for their diligent work protecting patient safety and improving their regulatory processes.  However, we disagree with the MDDS Draft Guidance.  We recommend that, for Systems Engineering, Cybersecurity, and Patient Safety reasons, the FDA continue regulatory oversight over MDDS class devices.

The authors would like to thank the numerous subject matter experts who have contributed suggestions, critiques, and edits to our comments and to this post, but for professional reasons choose to remain anonymous.  You know who you are.

Respectfully,

John Denning, MHA
Lynn Haven, FL
 
Robert J. Morris, MD (UK)
Pasadena, CA
 
Mikey Hagerty, Ed.D., CISSP/ISSAP, CIPP/IT
Carmel, CA
 
Michael Robkin, MBA
Los Angeles, CA
 
George Konstantinow, PhD
Santa Barbara, CA
 

Pictured above is the Capsule Neuron, a major component of Capsule’s MDDS.


August-25-2014

14:35

Clinical alarm safety can be hard to achieve, and once attained, a struggle to maintain. There are so many challenges:

  • False-positive and non-actionable alarms;
  • Optimizing default alarm limits across patient populations and for individual patients;
  • Spread-out nursing units with high patient-to-nurse ratios;
  • Numerous alarm notification methods – audible signal amplification, monitor techs and alarm notification systems;
  • And the constant threat of complacency and alarm fatigue.

The inaugural Clinical Alarm Safety Symposium, November 20-21, 2014, will delve into these issues and more to provide attendees with actionable information that they can later apply in their institutions to ensure continued clinical alarm safety.

Topics Include

  • Methods for researching and analyzing alarm sources and settings in your institution
  • Reducing nuisance and non-actionable alarms
  • The role of monitoring techs in addressing alarm fatigue
  • The role of alarm notification systems in improving the reliability and timeliness of alarm notification
  • The impact of house-wide patient monitoring on alarm management and response
  • Noise associated with medical device alarms and the resulting impact on patients and staff
  • The impact of decentralized nursing stations and private rooms on alarm notification
  • A sample project plan for assessing and optimizing the clinical practice of alarms: data collection, analysis, optimization and ongoing management
  • A sample project plan to conform to the Joint Commission’s National Patient Safety Goal on Alarms
  • Optimizing policy on alarm limit defaults and the process defined to adjust defaults per patient
  • Using alarm data analytics as a management tool for patient safety, workload balancing and staffing
  • Key requirements for alarm notification systems: capabilities, performance and usability
  • The role and key requirements for mobile devices for alarm notification applications
  • The technology management life cycle for alarm notification systems
  • IT and Biomedical/Clinical Engineering governance best practices for alarm notification systems
  • The role of rapid response teams in alarm notification

The symposium also includes exhibitions from sponsoring and supporting organizations.

Call for Speakers

Speakers are actively being sought for this symposium. Please note that due to limited speaking slots, preference is given to hospitals and research centers, regulators, and those from academia. Additionally, vendors/consultants who provide products and services to these companies and institutions are offered opportunities for podium presentation slots based on a variety of Corporate Sponsorships.

Those interested in nominating speakers or submitting a presentation proposal themselves may contact the program chairperson (email) or TCBI.

Meeting Overview

The symposium will be held at the Hyatt hotel at Dulles International Airport in Herndon, Virginia. More details on the event can be found here.

The symposium is produced by The Center for Business Innovation (TCBI) and is scheduled for a full day on November 20th, with a morning session until noon on the 21st. The afternoon of the 21st will include one or more optional half-day workshops (available at an additional cost, separate from the symposium).

To my knowledge, this is the first event dedicated to alarm safety since the Medical Device Alarms Summit in 2011. With the first milestone for compliance with the Joint Commission’s NPSG on Alarms recently past, the time is now for health care providers to gather together to share best practices and lessons learned.


August-4-2014

11:18

This interview is with a long-established thought leader in patient monitoring and alarm notification, Jim Welch. Jim has demonstrated a knack for bringing a fresh approach to long-standing problems in monitoring, nursing vigilance and patient care. At Sotera Wireless, Jim’s had a chance to re-imagine patient monitoring in low acuity settings with predictably innovative results.

At the AAMI 2014 conference, I had the opportunity to attend the breakfast symposium where Jim presented, Transforming Care in Non-ICU Settings through Disruptive Continuous Monitoring Technology. The following discussion centers on patient monitoring data analytics, pioneered by Sotera Wireless.

What is the value of data analytics applied to medical device alarms?

For many years caregivers have had to struggle under the weight of a large number of false and non-actionable alarms. The resulting cognitive overload often results in alarm fatigue. Sotera has determined that a very effective way to reduce non-actionable alarms is to optimize alarm default settings.

Before you can improve something, you must be able to measure it. Medical device manufacturers have always generated log files of patient data, alarms and other system data processed or generated by their patient monitoring systems. But this data was only used in product development, troubleshooting, and incident investigations. What is needed is to give clinicians access to this data – and tools to analyze the data – to reduce non-actionable alarms.

Presently, hospitals are forced to use a trial and error approach to alarm management, which means multiple iterations – and that is not ideal. First, it takes a fair amount of labor. Second, it takes a long time because it’s an iterative approach. In the absence of high fidelity analytics, customers experiment without knowing the consequences of their experiments, which means settings can potentially move too far in one direction or the other.

For example, if they make their alarms too sensitive, they open themselves up to more nuisance alarms. If they make their alarms less sensitive, then the specificity of detecting a patient who is truly deteriorating can be a patient safety concern. High fidelity alarm analytics fills that gap.

What do you mean by the term “high fidelity medical device data?”

High fidelity device data really means capturing all of the digitized information that the device is collecting at the origin. In the case of physiologic monitoring, that means the raw waveform. It also means all of the reduced data derived from the raw waveforms, such as individual vital signs – heart rate, respiration rate, pulse rate, SpO2, blood pressure, temperature – and any alarms that occurred. The reason high fidelity data is so important is that it allows retrospective simulations on that data and therefore avoids the iterative trial and error approach to alarm management.

You mentioned simulating the results of alarm adjustments, how does that contrast with a conventional trial and error approach, and what’s the impact of your approach in clinical practice?

Well, there’s a significant difference between using high fidelity data analytics that do simulation and taking an iterative approach. Not that the iterative approach is entirely bad; it just takes a long time and requires a significant labor and time investment by the hospital.

If you have high fidelity data captured, this data represents the environment of use, and it represents the patient population of interest. Then you can take that high fidelity data and run “what-if scenarios” at different alarm configurations and see the difference in the number and types of alarms that are generated based on different alarm limit settings. This method avoids the iterative approach and the enormous time that it takes to do it.
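The interview doesn’t describe Sotera’s implementation, and the sketch below is not it; it is just a minimal illustration of what a “what-if scenario” means in practice. A recorded series of vital sign samples (hypothetical data here) is replayed against several candidate upper alarm limits, and the alarm episodes each limit would have produced are counted and compared.

```python
# Minimal, hypothetical illustration of an alarm "what-if" replay -- not
# Sotera's algorithm. Recorded heart-rate samples are replayed against
# candidate upper alarm limits to compare how many alarm episodes each
# limit would have produced.

heart_rate = [72, 75, 118, 122, 119, 88, 84, 131, 134, 129, 90, 86, 80]

def count_alarm_episodes(samples, upper_limit):
    """Count contiguous runs of samples above the candidate limit."""
    episodes, in_alarm = 0, False
    for hr in samples:
        if hr > upper_limit and not in_alarm:
            episodes += 1
            in_alarm = True
        elif hr <= upper_limit:
            in_alarm = False
    return episodes

for limit in (110, 120, 130, 140):
    print(f"upper limit {limit} bpm -> "
          f"{count_alarm_episodes(heart_rate, limit)} alarm episode(s)")
```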

How is the simulation actually done?

Sotera’s high fidelity analytics is an evidence-based approach to optimizing alarm settings. We upload de-identified high fidelity patient data into a secure private cloud. As of this date (late July, 2014) we have about 25,000 hours of data from the general care area across multiple care units, across multiple hospitals. By the end of 2015 we expect to exceed 100,000 hours of multi-parameter vital signs data.

We take the aggregate data and run large simulation scenarios in order to optimize the what-ifs, for the purpose of reducing false and nuisance alarms. Each new customer’s data is individually analyzed and compared to the ever-growing aggregate data, allowing the customer to benchmark their results against the aggregate. We have found this tool to be very effective in helping our customers rationalize their settings and set expectations of the alarm experience with ViSi Mobile before broader adoption.

Considering less than 5% of alarms are clinically actionable, this tool allows the hospital to significantly reduce non-actionable events. We know within the aggregate data set there are no reported adverse events. But, we are not stopping here. Sotera is engaged in an IRB approved study to report on the types of actionable events that are identified by alarm signals. We hope to publish our findings next year.

Time stand-offs, where notification of a transient alarm is withheld for a predetermined period of time, have recently emerged as a key tool in reducing non-actionable alarms. How do time stand-offs work and what role do they play in reducing non-actionable alarms?

A time hold-off, or time delay, requires that the violated physiological parameter stay in the alarm state for a predetermined amount of time before an alarm is activated.

Human physiology is a wonderful system that often produces temporary swings to compensate for a short-term condition. For example, the first time a patient ambulates after surgery places stress on their cardiovascular system. In response, we may see a transient change in heart rate and blood pressure. These changes may cause a true but non-actionable alarm. Likewise, patients recovering from anesthesia may experience short episodes of oxygen desaturation. These events are important to capture and display, but should not necessarily cause an alarm condition, because they do not require an immediate intervention to avoid a harmful event. Time hold-offs provide a filter – the time delay – to help differentiate between those very short episodic changes and truly harmful physiologic changes. Non-clinically actionable changes are filtered out of the alarm equation.
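To make the hold-off idea concrete, here is a minimal sketch with hypothetical SpO2 samples, limit, and delay; a real monitor implements this in firmware with many more safeguards, and the numbers below are illustrative only.

```python
# Minimal sketch of a time hold-off (delay) filter for a low-SpO2 alarm.
# An alarm is announced only if the value stays below the limit for the
# full hold-off period; brief transient dips are filtered out.
# All sample data and parameters are hypothetical.

samples = [  # (seconds elapsed, SpO2 %)
    (0, 96), (5, 95), (10, 88), (15, 89), (20, 94),    # brief transient dip
    (25, 87), (30, 86), (35, 85), (40, 86), (45, 87),  # sustained desaturation
]

SPO2_LOW_LIMIT = 90   # alarm limit (illustrative)
HOLD_OFF_SEC = 15     # violation must persist this long (illustrative)

violation_start = None
alarm_active = False
for t, spo2 in samples:
    if spo2 < SPO2_LOW_LIMIT:
        if violation_start is None:
            violation_start = t                      # violation begins
        if not alarm_active and t - violation_start >= HOLD_OFF_SEC:
            alarm_active = True
            print(f"t={t}s: ALARM - SpO2 {spo2}% below {SPO2_LOW_LIMIT}% "
                  f"for at least {HOLD_OFF_SEC}s")
    else:
        violation_start, alarm_active = None, False  # transient dip filtered
```

Running this announces a single alarm during the sustained desaturation (at the 40-second mark), while the earlier brief dip never alarms.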

After a hospital completes an analysis of their high fidelity medical device data, what kinds of issues have emerged that have challenged these hospitals?

Before answering your question it is important to contrast ICU patients with the non-ICU patients where ViSi Mobile is applied. In the ICU, the patient’s physiology is often being manipulated by drugs or external devices such as ventilators. In this environment of care clinicians are very concerned about very small deviations in physiology, and therefore alarms are set to very sensitive levels. In the general care area we have a very different alarm management challenge. The non-ICU patient is in the recovery period of their hospital stay. They are receiving medications that help them recover. They ambulate as part of the recovery process. We see from our high fidelity data that they occasionally have transient episodes of physiologic stress. What we are finding is that we can address the non-actionable alarm and alarm fatigue issue through high fidelity data analytics.

What has surfaced in our early deployments of ViSi Mobile are people and process issues within the general care area. Our biggest challenge is partnering with our clinical customers in improving their clinical thinking skills in interpreting data – data that has a different context from higher acuity monitoring environments, and data that is new to the lower acuity general care areas.

For example, if a patient’s heart rate climbs above 160 beats per minute and we get an alarm, what does the nurse do at the bedside to correct that? It could be the patient is experiencing anxiety, or they forgot to disclose a medication they were taking at home prior to admission. Or, is this change an indication of the beginning of deterioration? So our focus really is in the area that we have termed transformation of care at the bedside where we are investing in the training of nurses to respond to alarms in a meaningful way, especially the actionable events.

So along with new data about their patients comes an increased need to be able to respond appropriately to that data?

Yes. So let me give you a couple of examples. What we’re finding is that ViSi Mobile is a disruptive technology to the non-ICU patient care area. The general care nurse is not accustomed to receiving real time physiologic information. So they’re discovering for the first time that their patients are experiencing the early stages of a harmful condition more often than they realize.

Sotera has discovered that we must first overcome the natural human element of denial. How could our patients have this many alarms or this many physiologic conditions that require our nurses’ response at the bedside? We have to overcome that barrier through training and investment in their day-to-day operation. And that often comes down to working directly at the policy level within the nursing community. Let me give you an example of that.

It is very typical for nurses on the general care floor not to have within their scope of practice the ability to change alarm limits on a patient without a physician order. If you’ve ever worked on the general care floor, you’ll know that nurses are very reluctant to call physicians for these kinds of permissions. So what typically happens is you get a few patients that are alarming all the time, and the nurses are reluctant to get a physician to write an order to change alarm limits.

As a result, we frequently engage our clinical customers in discussing policy changes that extend the current scope of practice to allow clinical interventions (including changing alarm limits) within limits defined by senior clinical leadership. In essence, we are empowering each nurse to intervene sooner in a deteriorating patient condition.

What’s the relative value of a device manufacturer’s own alarm analytics solution, like Sotera’s, and a patient-centric alarm analytic solution that accounts for all the devices attached to patients from a third party like a messaging middleware vendor?

Well, clearly from a workflow standpoint, the environment of care is more than just physiologic alarms. There are out of bed alarms, nurse call alarms, stat results from laboratories, and so forth. The true solution to the overall nuisance alarm problem really involves a new technology ecosystem that includes not only the individual devices and their alarm management at the source of the alarm, but also the integration of that information with other contextual information about the patient.

So, does a hospital need both kinds of analytics tools? Or is one better than the other?

In my opinion, it’s not an either/or proposition; both complement one another. Solving alarm fatigue requires strengthening each link in the system chain, starting from the choice of sensors and continuing all the way to how the nurse receives alarm information.

I think the device manufacturers are obligated to do whatever they can to strengthen their algorithms, to help customers analyze their device data to identify truly actionable events. Then the messaging middleware system has to take that data, combine it with other contextual data like demographics, admitting diagnosis, drug medications, comorbidities and consolidate all this information to create a higher level of decision support, such that nurses are only getting information that they have to act on, in a timely way to avoid harm.

Alarm settings are a key part of the clinical practice of alarms and a major contributor to minimizing non-actionable alarms. Once the hospital has gotten a handle on that, what other factors in effective alarm management must be addressed?

The answer to that question comes down to people, process and technology. So, through our alarm analytics and simulations we’re solving the technology component where only actionable information moves into the messaging or notification system. The next challenge is how do we use that information to cause sustainable behavioral and/or process changes within the institution. Our experience has shown us that the bigger elephant in the room is the investment in the critical thinking skills of the nurses at bedside.

All too often in my career as a Clinical Engineer, I have seen hospitals purchase a system with the expectation that the technology itself is the solution to nuisance alarms. That’s not entirely true. Technology plays a very important role in solving alarm fatigue, but if the hospital doesn’t invest in the training programs, the policy changes, the cultural changes, and the process changes at the bedside, then any new technology, in my opinion, will be very short-lived.

More often than not, hospitals buy these types of solutions only to abandon them later because they’re not getting the improvement in patient outcomes by whatever metric they chose. That is because they haven’t adequately invested in the process and policy changes required to realize the full potential of the technology.  In 2010 Dartmouth-Hitchcock published a remarkable reduction in ICU transfers due to a multidisciplinary approach to early detection and intervention. They invested in people, process, and technology. Since implementation they have reported no unanticipated cardiopulmonary arrests, clearly an improvement in outcomes. Yet no other institution has achieved similar results. Why? I can only conclude it was due to a deliberate improvement in the culture of care at Dartmouth that was enabled by a new technology.

We recently submitted an article that talks about a capability maturity model for organizations to address alarm management from a foundation to a sustainable level. It has been my experience that if you don’t go through those process changes and make those investments, then the hospital will struggle with realizing a sustainable solution.

Pictured above is the ViSi Mobile monitor.


July-28-2014

9:00

Developing and launching a competitive product, and getting initial traction in the market are not inconsiderable milestones. And yet for the entrepreneur and their investors, this is just the beginning. What was record setting last quarter is barely acceptable this quarter, and next quarter had better be back on track.

Developing a solid plan for growth depends on two things: a good understanding of the basic means to drive growth, and a deep understanding of the market. This post seeks to combine both of these in a brief survey of the key factors to drive messaging middleware revenue growth in health care. We’re going to consider three basic growth strategies: organic growth, product line extension, and the roll-up strategy.

Organic Growth

For start ups, organic growth can be realized first by targeting a market segment that has broad appeal and large numbers of early and late adopters. Going back to Moore’s market adoption model, it’s relatively easy to identify a market need and generate initial sales to innovators and early adopters. These early buyers want technology and performance, something new the buyer can leverage to gain a competitive advantage of their own.

These early buyers tend to be large institutions with a corporate culture of innovation and the internal resources to support such endeavors. Accounts like the Cleveland Clinic, Mayo Clinic and Partners Healthcare come immediately to mind. Kaiser Permanente would also fall into this group, except they are held back by their need to have solutions that can scale to considerable extremes, a requirement that does not apply to these other health care provider titans. There is even a cadre of smaller, nimble early buyers: Overlake Hospital in Bellevue, Washington, and El Camino in Mountain View spring to mind. Spend enough time in this industry and the early buyers tend to make themselves known. The problem is that this population of early buyers is quite limited; early buyers will only take a company so far.

Once most of these early buyers in a market segment have bought, the market adoption chasm arises because the next group of buyers to adopt – the much larger early majority – don’t want technology and performance, they want complete, proven, easy to adopt solutions. This shift gives rise to the conventional wisdom that “hospitals want solutions to problems, not tools they can use to solve their own problems.” For vendors, the importance of this is self evident when considering how to maintain or even increase their growth rate over time. For providers, it’s important to recognize from which side of the chasm your organization is operating and to proceed accordingly.

To cross the chasm, vendors must add to the original innovative technology the required features and services to create a whole product solution that is laser focused on a recognizable problem. Figuring out exactly what it is that’s required to transform an innovation worthy of inspiring early buyers into the safe and reliable solution required by the early majority is a challenge. Recognizing the gaps and knowing how best to fill them is not easy, although there are processes that can be used to identify those requirements and confirm that they’re met.

Moore calls the process of creating and going to market with the whole product solution “being in the bowling alley.” The bowling alley lets you shift your growth from the early market, which may be nearing penetration, to the much larger early majority portion of the market. Crossing the chasm is an essential objective for new companies. In a crowded market like messaging middleware, numerous companies will be struggling with crossing the chasm.

Achieving strong organic growth is an excellent indicator that, beyond a solid whole product solution, sales and marketing are also top notch. Sales and marketing are especially important because health care is not a field of dreams market, where “if you build it, they will come.” Brand awareness, demand generation and market education are key marketing tasks. Sales requires effective sales tools and proofs in support of a sales strategy or process that leads first time buyers to the right decision in an efficient and reliable manner.

Product Line Extension

A main characteristic of the messaging middleware market is the variety of different problems that can be solved by the same basic technology. These different problems are reflected as market segments. Each of the different market segments listed in the previous blog post can potentially support a start up, or represents a potential product line extension. Moore frames these other market segments as additional bowling alleys that leverage the same foundation of product and services that make up the original whole product solution. Some product line extensions may require changes to the whole product solution to gain early majority market adoption.

Much like selecting the initial target market for a start up, the key is to identify new bowling alleys with sufficient market demand (of course, competition is also a factor). Synergy with preexisting whole product solutions is also desired. It’s also helpful if the new bowling alleys under consideration target the same markets (e.g., physician practices or hospitals) so that existing sales and marketing resources can be easily leveraged to take advantage of cross-sell and up-sell opportunities that emerge. If different bowling alleys target different markets – say, physician practices for one and hospitals for another – each target market will require major investments in marketing and sales; the potential synergy from targeting a common market is lost.

Sometimes a product line extension includes product changes that add substantive new features to the platform. For example, a secure messaging solution that is designed to support a single enterprise might add the capability to support users across multiple enterprises, or the addition of a scheduling module to support a more complete secure messaging solution for on-call physicians.

Roll-up Strategy

A roll-up strategy entails a series of acquisitions used to construct a bigger company made up of complementary products or solutions. A relevant example of this strategy can be found in Amcom Software. After their merger with Xtend Communications, Amcom came to dominate the hospital operator console market (due to their HL7 integration capability) and related telephony applications. Subsequent acquisitions extended Amcom’s reach with various communications solutions for health care, government and other vertical markets.

  • 2007, Xtend Communications operator console, PBX, paging gateway and related telephony solutions
  • 2008, Telident 911 Solutions mass notification and emergency management solution
  • 2008, Comtech Wireless hospital messaging middleware solution
  • 2009, SDC Solutions operator console and call center technologies
  • 2012, IMCO Technologies critical test results management

Amcom Software was acquired by USA Mobility in 2011 for $176,800,000. The combined company is now called Spok (pronounced spoke with a long “o”). Starting with the merger with Xtend, the Amcom Software strategy was to build a company through acquisitions and then sell the company. With a 2010 revenue of $60 million, things appear to have worked out well for Amcom’s investors.

Because of the nature of this market, a roll-up strategy can be challenging. Unlike the product line extension strategy, where a company’s existing technology is reconfigured or enhanced to target new market segments, the roll-up strategy entails the acquisition of other companies. How those acquired products, employees and customers are optimized is the challenge.

Mergers and acquisitions occur frequently in the health care industry. The goals of these transactions include:

  • Capturing the target company’s revenue or cash flow
  • Capturing the target company’s installed base or market share
  • Gaining access to the company’s technology
  • Acquiring the company’s patents
  • Capturing the human resources employed by the target company
  • Eliminating a competitor

The first two bullets are obviously related, however the degree and ways they’re related depends on the specific companies and their business models. A company that goes to market selling mostly capital goods (hardware and licensed software) is quite different from a company selling their solution as a cloud based service.

As discussed in a previous post, most messaging middleware solutions are built using a similar architecture that is often made up of software engines. These engines can be licensed from commercial vendors or from open source projects. The resulting solutions can be built relatively quickly and for modest sums. Consequently, the value in purchasing a messaging middleware vendor for their technology may be limited.

Creating interfaces between multiple messaging middleware acquisitions can be problematic. To date, messaging middleware systems have been designed to operate alone; manufacturers do not intend for their messaging middleware system to be one of a constellation of messaging solutions serving the same user base. Some manufacturers have added to existing designs by implementing APIs and other integration points to facilitate the incorporation of other messaging middleware apps – often to fill feature gaps demanded by prospective buyers. Implementing multiple messaging middleware solutions via acquisitions raises questions about message routing, escalation and the existence of more than one rules engine impacting message flow. A system of systems made up of messaging middleware solutions gets very complicated very quickly, increasing configuration and verification and validation test complexity.

An acquiring company with older software technology may see value in the acquired software platform, or in the intellectual property and expertise behind the development of that software. Further, the acquired company may have software capabilities that are extensions to messaging middleware solutions – such as the staff scheduling for on-call physician messaging example used earlier.

The acquisition of mVisum by Vocera is worth a closer look. It should be noted that Vocera does not appear to be executing a classic roll-up strategy, but the rationale that may have driven this acquisition is of interest. mVisum was a start up with an attractive messaging middleware product. Unlike many other messaging middleware solutions, mVisum was FDA cleared for alarm notification, conveyed snippets of medical device waveforms with medical device alarms (important for screening non-actionable false-positive alarms), and also included remote medical device surveillance features. The company subsequently ran into some patent infringement issues with AirStrip Technologies. mVisum was acquired by Vocera for $3.5 million less than a year later.

There is considerable overlap between Vocera and mVisum solutions. Potential areas of value for Vocera include mVisum’s FDA clearance for alarm notification, one of the strongest messaging middleware market segments. mVisum also filed a number of patent applications that may be of value to Vocera. Vocera was founded in 2000, so there may be some value in mVisum’s software architecture – if not the actual software, then the requirements and design may be leveraged in future versions of Vocera’s software.

To summarize the roll-up strategy applied to messaging middleware, there is likely not a lot of value in acquiring other messaging middleware companies when compared to the product line extension strategy. The main reason is because most software architectures will be similar. There are exceptions to this, some of which are alluded to in the Vocera/mVisum discussion above. Because the messaging middleware market is relatively undeveloped – we’re far short of a penetrated market – there’s little opportunity to buy cash flow or market share through acquisitions. Nor is the market so developed that human resources are a likely justification for acquisition.

The roll-up strategy does make more sense when one looks beyond messaging middleware. Just as Amcom Software took a broader view of vertical market messaging and communications solutions that included messaging middleware as a portion of the whole, one could frame a roll-up strategy from a similar, higher level. For example, a roll-up targeting health care could encompass point of care solutions, rolling-up messaging middleware with nurse call, medical device data systems (MDDS), data aggregation and patient flow with enabling technologies like real time location systems (RTLS) and unified communications (enterprise phone systems). The resulting entity could define a new enterprise software category: point of care workflow automation.

Another practical application of the roll-up strategy is the secure messaging market targeting physicians. There is little apparent differentiation between solutions, and vendors with good adoption in a particular geographic market will be difficult to dislodge. Here a classic roll-up, where the acquiring company offers economies of scale superior to those of regional players, has a lot of potential. Such a strategy would be complex to implement, due to the technical product integration issues noted above. Provided they could dedicate sufficient cash flow, this could be an attractive strategy for Spok, although any company with access to several tens of millions could pull this off.

With 100+ competitors, the messaging middleware market is remarkably crowded. Over time, many of these firms will fade away as they fail to gain initial market traction, cross the chasm or get acquired. There will certainly be mergers and acquisitions. There will be some who plan and execute well, and grow their companies to tens and hundreds of millions in annual revenue. Some degree of luck will be a factor. But regardless of the strategy or outcome, the imperative shared by them all will be the drive for growth.

Other Posts on Messaging Middleware

You can find a post Messaging Middleware Defined here and the post on Messaging Middleware Market Segmentation & Adoption here. In the coming week a post on HIPAA will be published. Be sure to check back!

Pictured is the new Motorola MC40-HC Android smartphone running the Extension Healthcare Engage software client.

Tim Gee is Principal of Medical Connectivity Consulting. He is a master connectologist, technologist and strategist working for medical device and IT companies and various provider organizations. You can learn more about Tim here.


July-16-2014

9:00

I was listening today to the CE-IT Webinar on CE and HIT from the 2014 AAMI conference in Philadelphia. Much of the session reviewed what has happened over the last five years and it got me thinking about my experiences and what I’ve seen over the last ten years in medical device connectivity and remote monitoring. It’s been an interesting ride and yet I realize there are a few basic ideas that have resonated over the years. These basic ideas are:

  1. Specifying those requirements that are unique to my situation is where I have the most control in acquisition;
  2. There are other players in the market who may change the landscape of what is available to me; and,
  3. The government may require something which can constrain my options.

Ten years ago, I was working for a very large integrated healthcare system as a clinical engineer. One of my projects was to choose and implement the medical device integration system for integrating patient monitoring and ventilator data into the ICU charting portion of our EMR system. There were three main vendors at the time that weren’t part of the large medical device companies, and we eventually chose one of the major ones for the system. My responsibility was to ensure the device data went from the device at the bedside to the device integration server and out through the interface broker to the EMR application.

While choosing the device integration product, I had to keep in mind my healthcare enterprise infrastructure. I had thirteen hospitals that needed to connect to the two separate instances of the EMR application. Being able to standardize on the device integration system implementation design and management became one of my paramount concerns as I needed to be able to scale the solution over the infrastructure. Additionally, I knew that if I was successful in that particular region, the solution would need to scale over to other regions and nationally.

During that time, I also was involved in some of the organizations promulgating the use of standards at the medical device integration system/interface broker interface. The standards organizations wanted me to include the standards as requirements in my procurement documents. And yet I resisted, because I saw the standards as either not mature enough or overly burdensome, requiring adherence through all layers of the OSI 7-layer model.

In retrospect, I believe I should have insisted on the use of at least the device data standards embedded in the messaging standards (HL7). We were using HL7 at the output of the device integration server, but the EMR application separately mapped each data item to a database element and had to use the device vendors’ HL7 implementation guides to figure out what the data items meant. If we had specified IEEE 11073 device data standards (perhaps even later on as we evolved), we would have been able to more easily change medical device vendors in the future, if desired, and not have to worry about ‘breaking’ the interface to the EMR interface broker.
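To make that mapping burden concrete, here is a hypothetical sketch. Each vendor’s HL7 feed identifies observations with its own codes, so the EMR side needs a lookup table per vendor; with a shared IEEE 11073 (MDC) nomenclature carried in the message, one table would serve every vendor. The MDC reference identifiers below are representative of the 11073-10101 nomenclature, while the vendor names and observation codes are invented for illustration.

```python
# Hypothetical illustration of per-vendor mapping in a device integration
# interface. Vendor observation codes (OBX-3 identifiers) are made up;
# the MDC reference IDs are representative of IEEE 11073-10101 terms.

VENDOR_A_MAP = {"HR1": "MDC_ECG_HEART_RATE",
                "SPO2": "MDC_PULS_OXIM_SAT_O2"}
VENDOR_B_MAP = {"0002-4182": "MDC_ECG_HEART_RATE",
                "0002-4BB8": "MDC_PULS_OXIM_SAT_O2"}

def normalize_observation(vendor_map, obx_identifier, value, units):
    """Translate a vendor-specific identifier into a common MDC term."""
    term = vendor_map.get(obx_identifier)
    if term is None:
        raise ValueError(f"unmapped vendor code: {obx_identifier}")
    return {"term": term, "value": value, "units": units}

# Two vendors reporting the same physiologic concept with different codes:
print(normalize_observation(VENDOR_A_MAP, "HR1", 78, "bpm"))
print(normalize_observation(VENDOR_B_MAP, "0002-4182", 78, "bpm"))

# Had both vendors sent MDC terms natively, neither table would be needed
# and swapping vendors would not 'break' the interface to the EMR broker.
```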

With regard to the other standards, physical, networking, etc., I’ve found that the IT industry does a good job of defining standards and then converging to an interoperable solution. Those standards are required across various vertical markets, so there is more demand for the convergence of the standards and products with those standards. What is unique to healthcare is the data and messaging information. And, that is what is most important to the clinicians and patients – consumers of the data. All of the other standards are mostly mechanisms for transmitting the data from where it is generated to where it is acted upon in some fashion.

I see the same thing happening in remote monitoring and mHealth. Buyers are too focused on short-term and immediate issues, not realizing that specifying data standards can help them be interoperable in the long run. Again, not having to worry about the data format of another vendor’s sensor data being integrated into the EMR can save time and money as well as allow quicker scaling across your organization.

However, there are other players on the scene now, which may make the buyer’s job a bit easier. In the USA, meaningful use (MU) has led to the establishment of standards which EHR applications must use in order to be certified, allowing the US government to reimburse some of the costs of EHR implementation. In fact, the first MU stage was going to include remote monitoring standards covering certain medical device data (HITSP IS77); however, they were eliminated from that stage. It is anticipated that the last MU stage will require medical device interoperability. The original date for that was projected as 2015; however, MU stage 3 has been postponed, which will most probably postpone the identification of medical device standards for MU. Nevertheless, medical device interoperability requirements for MU, which will specify medical device data standards, will be coming to the USA in the near future.

Other countries are also using government to select and mandate data and messaging standards for remote monitoring. The Danes have issued a reference architecture which specifies Continua guidelines for remote monitoring solutions to interact with their national health network and EHR. Norway, Sweden and Finland may follow Denmark. The EU has funded many projects that have recommended the use of interoperability standards for remote monitoring. These recommendations usually call for products that adhere to the Continua Guidelines and/or conform to specific IHE profiles. It is no secret that the underlying standards in those guidelines and profiles are very similar. For medical device data it is IEEE 11073 and for messaging it is HL7.

Industries outside the normal healthcare market are responding as well. The mobile operators are very keen to be involved in the healthcare market and ‘disrupt’ it. They have a unique proposition in that they have a one-to-one relationship with their customers and have developed back-end business infrastructures and processes which facilitate that relationship. Moreover, they have managed to build their customer base from ~1 million to ~7 billion worldwide by identifying and enforcing standards adherence, beginning with the 3GPP initiative started in 1998.

Examples of required standards for 3GPP include transmission protocols, security requirements (encryption), and user identification (uniqueness). Basically, the mobile operators do not allow a handset that does not adhere to the standards to connect to their transmission network. In another example, mobile operators wanted to be able to sell services for images and required that all handsets have cameras and adhere to specific image data standards. It is nigh on impossible now to purchase a handset without a camera, and the proliferation of products and services that have sprung up for the management and sharing of these photos is phenomenal. This is due to the mobile operators insisting on data standards for a specific use. Because the standards were specified and enforced, interoperability soared, and market penetration and size soared as well.

With that in mind, it is also interesting to note that the most recent handsets have integrated sensors that lend them to being used in mHealth applications. The Samsung Galaxy has 10 sensors built in: a gyro, barometer, fingerprint, Hall, accelerometer, heart rate, proximity, RGB ambient light, gesture and compass sensor. Each of these can be used individually or in combination to measure or provide remote monitoring in a healthcare sense.

In addition, with the use of short range networking (BTLE, ANT, NFC, etc), other sensors can use the mobile handset as a ‘ramp’ to the network. The ‘wearable sensor’ market depends heavily on mobile handsets for data display, computation and network transmission. As before, the mobile operators could require that medical device sensors adhere to certain standards or they will not allow the handset to use the transmission infrastructure.

Other developments have occurred with the handset manufacturers and other technology companies. All of them have announced some type of health data aggregation product with development kits for entrepreneurs (Apple HealthKit, Samsung Digital Health Initiative, Google Fit and the ongoing Microsoft HealthVault). While several initiatives by some of the same companies have failed in the past, many believe now is the tipping point for involvement in mHealth. There is recognition that leveraging the now ubiquitous mobile telecommunications infrastructure to solve some of the more pressing healthcare issues is a ‘no-brainer.’

Therefore, medical device connectivity (or medical sensor connectivity) is becoming more prevalent and will most likely end up being more extensive outside the currently controlled healthcare enterprise infrastructure. It is imperative that at least data standards be specified and enforced at the different interfaces to ensure true healthcare data interoperability across all of the disparate infrastructures. Healthcare providers currently have a lot of control over this market; however, there are outside forces that will in the future define large parts of the market and may make it easier for standards to be identified and enforced.

Pictured is the Vital Connect HealthPatch patient-worn sensor. The Vital Connect business model is based on the assumption that their product will be interoperable with a variety of gateway devices such as smartphones.

Bridget A. Moorman, CCE, is president of BMoorman Consulting, LLC, providing consulting to healthcare providers, standards promulgation organizations and medical device and information technology companies regarding their medical device integration strategies.  She can be reached via email  or at her website.

