Clinical alarm safety can be hard to achieve and, once attained, a struggle to maintain. The challenges are many:
The inaugural Clinical Alarm Safety Symposium, November 20-21, 2014, will delve into these issues and more, providing attendees with actionable information they can apply in their own institutions to ensure continued clinical alarm safety.
The symposium also includes exhibitions from sponsoring and supporting organizations.
Speakers are actively being sought for this symposium. Please note that due to limited speaking slots, preference is given to hospitals and research centers, regulators, and those from academia. Additionally, vendors/consultants who provide products and services to these companies and institutions are offered opportunities for podium presentation slots based on a variety of Corporate Sponsorships.
The symposium is produced by The Center for Business Innovation (TCBI) and is scheduled for a full day on November 20th and a morning session until noon on the 21st. The afternoon of the 21st will include one or more optional half-day workshops (available at an additional cost, separate from the symposium).
To my knowledge, this is the first event dedicated to alarm safety since the Medical Device Alarms Summit in 2011. With the first compliance milestone for the Joint Commission’s NPSG on Alarms recently passed, the time is now for health care providers to gather and share best practices and lessons learned.
This interview is with a long-established thought leader in patient monitoring and alarm notification, Jim Welch. Jim has demonstrated a knack for bringing a fresh approach to long-standing problems in monitoring, nursing vigilance and patient care. At Sotera Wireless, Jim has had a chance to re-imagine patient monitoring in low-acuity settings, with predictably innovative results.
At the AAMI 2014 conference, I had the opportunity to attend the breakfast symposium where Jim presented, Transforming Care in Non-ICU Settings through Disruptive Continuous Monitoring Technology. The following discussion centers on patient monitoring data analytics, pioneered by Sotera Wireless.
What is the value of data analytics applied to medical device alarms?
For many years caregivers have had to struggle under the weight of a large number of false and non-actionable alarms. The resulting cognitive overload often leads to alarm fatigue. Sotera has determined that a very effective way to reduce non-actionable alarms is to optimize alarm default settings.
Before you can improve something, you must be able to measure it. Medical device manufacturers have always generated log files of patient data, alarms and other system data processed or generated by their patient monitoring systems. But this data was only used in product development, troubleshooting, and incident investigations. What is needed is to give clinicians access to this data – and tools to analyze the data – to reduce non-actionable alarms.
Presently, hospitals are forced to use a trial-and-error approach to alarm management, which requires multiple iterations and is far from ideal. First, it takes a fair amount of labor. Second, because the approach is iterative, it takes a long time. In the absence of high fidelity analytics, customers experiment without knowing the consequences of their experiments, which means settings can drift too far in one direction or the other.
For example, if they make their alarms too sensitive, they open themselves up to more nuisance alarms. If they make their alarms less sensitive, they risk failing to detect a patient who is truly deteriorating, which is a patient safety concern. High fidelity alarm analytics fills that gap.
What do you mean by the term “high fidelity medical device data?”
High fidelity device data really means capturing all of the digitized information that the device is collecting at the origin. In the case of physiologic monitoring, that means the raw waveform. It also means all of the reduced data derived from the raw waveforms, such as individual vital signs – heart rate, respiration rate, pulse rate, SpO2, blood pressure, temperature – and any alarms that occurred. The reason high fidelity data is so important is that it allows retrospective simulations on that data, and therefore avoids the iterative trial-and-error approach to alarm management.
You mentioned simulating the results of alarm adjustments, how does that contrast with a conventional trial and error approach, and what’s the impact of your approach in clinical practice?
Well, there’s a significant difference between using high fidelity data analytics to run simulations and taking an iterative approach. Not that the iterative approach is entirely bad, but it takes a long time and requires a significant labor and time investment by the hospital.
If you have high fidelity data captured, this data represents the environment of use, and it represents the patient population of interest. Then you can take that high fidelity data and run “what-if scenarios” at different alarm configurations and see the difference in the number and types of alarms that are generated based on different alarm limit settings. This method avoids the iterative approach and the enormous time that it takes to do it.
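The “what-if scenario” idea can be made concrete with a small sketch. This is an illustrative toy, not Sotera’s actual tooling; the heart-rate samples and candidate limits below are invented:

```python
# Replay the same recorded vital-sign stream against several candidate
# alarm limits and count the alarm episodes each configuration would
# have produced. Data and limits are illustrative only.

def count_alarms(samples, low, high):
    """Count alarm episodes: a consecutive out-of-range run counts once."""
    alarms, in_alarm = 0, False
    for value in samples:
        violating = value < low or value > high
        if violating and not in_alarm:
            alarms += 1          # new episode begins
        in_alarm = violating
    return alarms

# Recorded heart rates (beats per minute), one sample per time step.
heart_rates = [72, 75, 118, 122, 119, 80, 78, 135, 76, 74]

# Candidate alarm configurations to simulate against the same data.
for low, high in [(50, 115), (50, 120), (50, 130)]:
    n = count_alarms(heart_rates, low, high)
    print(f"limits {low}-{high}: {n} alarm episode(s)")
```

Run over weeks of recorded data instead of ten samples, a loop like this shows the alarm burden of each candidate configuration before any setting is changed on a live unit.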
How is the simulation actually done?
Sotera’s high fidelity analytics is an evidence-based approach to optimizing alarm settings. We upload de-identified high fidelity patient data into a secure private cloud. As of this date (late July, 2014) we have about 25,000 hours of data from the general care area across multiple care units, across multiple hospitals. By the end of 2015 we expect to exceed 100,000 hours of multi-parameter vital signs data.
We take the aggregate data and run large simulation scenarios in order to optimize the what-ifs, for the purpose of reducing false and nuisance alarms. Each new customer’s data is individually analyzed and benchmarked against the ever-growing aggregate data. We have found this tool to be very effective in helping our customers rationalize their settings and set expectations of the alarm experience with ViSi Mobile before broader adoption.
Considering that less than 5% of alarms are clinically actionable, this tool allows the hospital to significantly reduce non-actionable events. Within the aggregate data set there are no reported adverse events. But we are not stopping here. Sotera is engaged in an IRB-approved study to report on the types of actionable events that are identified by alarm signals. We hope to publish our findings next year.
Time stand-offs, where notification of a transient alarm is withheld for a predetermined period of time, have recently emerged as a key tool in reducing non-actionable alarms. How do time stand-offs work, and what role do they play in reducing non-actionable alarms?
A time hold-off, or time delay, requires that the violated physiological parameter remain in the alarm state for a predetermined amount of time before an alarm is activated.
Human physiology is a wonderful system that often swings temporarily to compensate for a short-term condition. For example, the first time a patient ambulates after surgery places stress on their cardiovascular system. In response, we may see a transient change in heart rate and blood pressure. These changes may cause a true but non-actionable alarm. Likewise, patients recovering from anesthesia may experience short episodes of oxygen desaturation. These events are important to capture and display, but they should not necessarily cause an alarm condition, because they do not require an immediate intervention to avoid a harmful event. Time hold-offs provide a filter – the time delay – to help differentiate between these very short episodic changes and truly harmful physiologic changes. Non-clinically-actionable changes are filtered out of the alarm equation.
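A time hold-off can be sketched in a few lines. This is an illustrative toy, not any vendor’s implementation; the SpO2 values, limits, and sampling assumptions are invented:

```python
# A time hold-off filter: an alarm is announced only if the parameter
# stays in the violating state for a predetermined number of
# consecutive samples. All values below are illustrative.

def holdoff_alarms(samples, low, high, hold_samples):
    """Return the sample indices at which an alarm would be announced."""
    announced, run = [], 0
    for i, value in enumerate(samples):
        if value < low or value > high:
            run += 1
            if run == hold_samples:   # violation persisted long enough
                announced.append(i)
        else:
            run = 0                   # transient resolved; reset the timer
    return announced

# SpO2 (%) sampled once per second: a 3-second transient dip (e.g.,
# during ambulation), then a sustained desaturation.
spo2 = [97, 96, 88, 87, 89, 96, 97, 86, 85, 84, 83, 82]

print("no hold-off: ", holdoff_alarms(spo2, 90, 100, 1))
print("5 s hold-off:", holdoff_alarms(spo2, 90, 100, 5))
```

With no hold-off, both the transient dip and the sustained desaturation alarm; with a five-second hold-off, only the sustained event does – the transient is captured in the record but never reaches the nurse.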
After a hospital completes an analysis of their high fidelity medical device data, what kinds of issues have emerged that have challenged these hospitals?
Before answering your question it is important to contrast ICU patients with the non-ICU patients to whom ViSi Mobile is applied. In the ICU, the patient’s physiology is often being manipulated by drugs or external devices such as ventilators. In this environment of care, clinicians are very concerned about very small deviations in physiology, and therefore alarms are set to very sensitive levels. In the general care area we have a very different alarm management challenge. The non-ICU patient is in the recovery period of their hospital stay. They are receiving medications that help them recover. They ambulate as part of the recovery process. We see from our high fidelity data that they occasionally have transient episodes of physiologic stress. What we are finding is that we can address the non-actionable alarm and alarm fatigue issues through this high fidelity data analytics.
What has surfaced in our early deployments of ViSi Mobile are people and process issues within the general care area. Our biggest challenge is partnering with our clinical customers in improving their critical thinking skills in interpreting data – data that has a different context from higher-acuity monitoring environments, and data that is new to the lower-acuity general care areas.
For example, if a patient’s heart rate climbs above 160 beats per minute and we get an alarm, what does the nurse do at the bedside to correct that? It could be the patient is experiencing anxiety, or they forgot to disclose a medication they were taking at home prior to admission. Or, is this change an indication of the beginning of deterioration? So our focus really is in the area that we have termed transformation of care at the bedside where we are investing in the training of nurses to respond to alarms in a meaningful way, especially the actionable events.
So along with new data about their patients comes an increased need to be able to respond appropriately to that data?
Yes. So let me give you a couple of examples. What we’re finding is that ViSi Mobile is a disruptive technology in the non-ICU patient care area. The general care nurse is not accustomed to receiving real time physiologic information. So they’re discovering for the first time that their patients are experiencing the early stages of a harmful condition more often than they had realized.
Sotera has discovered that we must first overcome the natural human element of denial. How could our patients have this many alarms or this many physiologic conditions that require our nurses’ response at the bedside? We have to overcome that barrier through training and investment in their day-to-day operation. And that often comes down to working directly at the policy level within the nursing community. Let me give you an example of that.
It is very typical for nurses on the general care floor not to have within their scope of practice the ability to change alarm limits on a patient without a physician order. If you’ve ever worked on the general care floor, you’ll know that nurses are very reluctant to call physicians for these kinds of permissions. So what typically happens is that a few patients alarm all the time, and the nurses are reluctant to ask a physician to write an order to change the alarm limits.
As a result, we frequently engage our clinical customers in discussing policy issues that allow an extension of the current scope of practice to permit clinical interventions (including changing alarm limits) within limits defined by senior clinical leadership. In essence, we are empowering each nurse to intervene sooner in a deteriorating patient condition.
What’s the relative value of a device manufacturer’s own alarm analytics solution, like Sotera’s, and a patient-centric alarm analytic solution that accounts for all the devices attached to patients from a third party like a messaging middleware vendor?
Well, clearly from a workflow standpoint, the environment of care is more than just physiologic alarms. There are out of bed alarms, nurse call alarms, stat results from laboratories, and so forth. The true solution to the overall nuisance alarm problem really involves a new technology ecosystem that includes not only the individual devices and their alarm management at the source of the alarm, but also the integration of that information with other contextual information about the patient.
So, does a hospital need both kinds of analytics tools? Or is one better than the other?
In my opinion, it’s not an either/or proposition; the two complement one another. Solving alarm fatigue requires strengthening each link in the system chain, starting from the choice of sensors and continuing all the way to how the nurse receives alarm information.
I think the device manufacturers are obligated to do whatever they can to strengthen their algorithms, to help customers analyze their device data to identify truly actionable events. Then the messaging middleware system has to take that data, combine it with other contextual data like demographics, admitting diagnosis, drug medications, comorbidities and consolidate all this information to create a higher level of decision support, such that nurses are only getting information that they have to act on, in a timely way to avoid harm.
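The kind of contextual decision support described here can be sketched as a simple rule layer. This is a hypothetical illustration; the rules, thresholds, and field names are invented for the example, not drawn from any actual middleware product:

```python
# Hypothetical sketch of the middleware role described above: combine a
# device alarm with contextual patient data before deciding whether to
# notify a nurse. All rules, thresholds and field names are invented.

def should_notify(alarm, patient):
    """Suppress alarms that patient context marks as non-actionable."""
    if alarm["parameter"] == "heart_rate" and alarm["value"] > 150:
        return True  # extreme violations always pass through
    if alarm["parameter"] == "spo2" and "COPD" in patient["comorbidities"]:
        # A COPD patient may have a chronically lower oxygen saturation
        # baseline, so apply a wider limit before notifying.
        return alarm["value"] < 85
    return True

alarm = {"parameter": "spo2", "value": 88}
copd_patient = {"comorbidities": ["COPD"]}
other_patient = {"comorbidities": []}

print(should_notify(alarm, copd_patient))   # False: filtered by context
print(should_notify(alarm, other_patient))  # True: passes through
```

The same device alarm is notified or suppressed depending on the patient’s comorbidities – the contextual layer the device alone cannot provide.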
Alarm settings are a key part of the clinical practice of alarms and a major contributor to minimizing non-actionable alarms. Once the hospital has gotten a handle on that, what other factors in effective alarm management must be addressed?
The answer to that question comes down to people, process and technology. So, through our alarm analytics and simulations we’re solving the technology component where only actionable information moves into the messaging or notification system. The next challenge is how do we use that information to cause sustainable behavioral and/or process changes within the institution. Our experience has shown us that the bigger elephant in the room is the investment in the critical thinking skills of the nurses at bedside.
All too often in my career as a Clinical Engineer, I have seen hospitals purchase a system with the expectation that the technology itself is the solution to nuisance alarms. That’s not entirely true. Technology plays a very important role in solving alarm fatigue, but if the hospital doesn’t invest in the training programs, the policy changes, the cultural changes, and the process changes at the bedside, then any new technology will, in my opinion, be very short lived.
More often than not, hospitals buy these types of solutions only to abandon them later because they’re not getting the improvement in patient outcomes, by whatever metric they chose. That is because they haven’t adequately invested in the process and policy changes required to realize the full potential of the technology. In 2010, Dartmouth Hitchcock Hospital published a remarkable reduction in ICU transfers due to a multidisciplinary approach to early detection and intervention. They invested in people, process, and technology. Since implementation they have reported no unanticipated cardiopulmonary arrests, clearly an improvement in outcomes. Yet no other institution has achieved similar results. Why? I can only conclude it was due to a deliberate improvement in the culture of care at Dartmouth that was enabled by a new technology.
We recently submitted an article describing a capability maturity model for organizations to address alarm management, from a foundational level through to a sustainable one. It has been my experience that if you don’t go through those process changes and make those investments, the hospital will struggle to realize a sustainable solution.
Pictured above is the ViSi Mobile monitor.
Developing and launching a competitive product, and getting initial traction in the market are not inconsiderable milestones. And yet for the entrepreneur and their investors, this is just the beginning. What was record setting last quarter is barely acceptable this quarter, and next quarter had better be back on track.
Developing a solid plan for growth depends on two things: a good understanding of the basic means to drive growth, and a deep understanding of the market. This post seeks to combine both of these in a brief survey of the key factors to drive messaging middleware revenue growth in health care. We’re going to consider three basic growth strategies: organic growth, product line extension, and the roll-up strategy.
For start ups, organic growth can be realized first by targeting a market segment that has broad appeal and large numbers of early and late adopters. Going back to Moore’s market adoption model, it’s relatively easy to identify a market need and generate initial sales to innovators and early adopters. These early buyers want technology and performance, something new the buyer can leverage to gain a competitive advantage of their own.
These early buyers tend to be large institutions with a corporate culture of innovation and the internal resources to support such endeavors. Accounts like the Cleveland Clinic, Mayo Clinic and Partners Healthcare come immediately to mind. Kaiser Permanente would also fall into this group, except that they are held back by their need for solutions that can scale to considerable extremes, a requirement that does not apply to these other health care provider titans. There is even a cadre of smaller, nimbler early buyers: Overlake Hospital in Bellevue, Washington, and El Camino in Mountain View spring to mind. Spend enough time in this industry and the early buyers tend to make themselves known. The problem is that this population of early buyers is quite limited; early buyers will only take a company so far.
Once most of these early buyers in a market segment have bought, the market adoption chasm arises, because the next group of buyers to adopt – the much larger early majority – don’t want technology and performance; they want complete, proven, easy-to-adopt solutions. This shift gives rise to the conventional wisdom that, “hospitals want solutions to problems, not tools they can use to solve their own problems.” For vendors, the importance of this is self evident when considering how to maintain or even increase their growth rate over time. For providers, it’s important to recognize from which side of the chasm your organization is operating and proceed accordingly.
To cross the chasm, vendors must add to the original innovative technology the required features and services to create a whole product solution that is laser focused on a recognizable problem. Figuring out exactly what it is that’s required to transform an innovation worthy of inspiring early buyers into the safe and reliable solution required by the early majority is a challenge. Recognizing the gaps and knowing how best to fill them is not easy, although there are processes that can be used to identify those requirements and confirm that they’re met.
Moore calls the process of creating and going to market with the whole product solution being in the bowling alley. The bowling alley lets you shift your growth from the early market, which may be nearing penetration, to the much larger early majority portion of the market. Crossing the chasm is an essential objective for new companies. In a crowded market like messaging middleware, numerous companies will be struggling to cross the chasm.
Achieving strong organic growth is an excellent indicator that, beyond a solid whole product solution, sales and marketing are also top notch. Sales and marketing are especially important because health care is not a field-of-dreams market, where “if you build it, they will come.” Brand awareness, demand generation and market education are key marketing tasks. Sales requires effective sales tools and proofs, in support of a sales strategy or process that leads first-time buyers to the right decision in an efficient and reliable manner.
A main characteristic of the messaging middleware market is the variety of different problems that can be solved by the same basic technology. These different problems are reflected as market segments. Each of the different market segments listed in the previous blog post can potentially support a start up, or represents a potential product line extension. Moore frames these other market segments as additional bowling alleys that leverage the same foundation of product and services that make up the original whole product solution. Some product line extensions may require changes to the whole product solution to gain early majority market adoption.
Much like selecting the initial target market for a start up, the key is to identify new bowling alleys with sufficient market demand (of course, competition is also a factor). Synergy with preexisting whole product solutions is also desired. It’s also helpful if the new bowling alleys under consideration target the same markets (e.g., physician practices or hospitals) so that existing sales and marketing resources can be easily leveraged to take advantage of cross-sell and up-sell opportunities that emerge. If different bowling alleys target different markets – say, physician practices for one and hospitals for another – each target market will require major investments in marketing and sales; the potential synergy from targeting a common market is lost.
Sometimes a product line extension includes product changes that add substantive new features to the platform. For example, a secure messaging solution that is designed to support a single enterprise might add the capability to support users across multiple enterprises, or the addition of a scheduling module to support a more complete secure messaging solution for on-call physicians.
A roll-up strategy entails a series of acquisitions used to construct a bigger company made up of complementary products or solutions. A relevant example of this strategy can be found in Amcom Software. After their merger with Xtend Communications, Amcom came to dominate the hospital operator console market (due to their HL7 integration capability) and related telephony applications. Subsequent acquisitions extended Amcom’s reach with various communications solutions for health care, government and other vertical markets.
Amcom Software was acquired by USA Mobility in 2011 for $176,800,000. The combined company is now called Spok (pronounced spoke with a long “o”). Starting with the merger with Xtend, the Amcom Software strategy was to build a company through acquisitions and then sell the company. With a 2010 revenue of $60 million, things appear to have worked out well for Amcom’s investors.
Because of the nature of this market, a roll-up strategy can be challenging. Unlike the product line extension strategy, where a company’s existing technology is reconfigured or enhanced to target new market segments, the roll-up strategy entails the acquisition of other companies. How those acquired products, employees and customers are optimized is the challenge.
Mergers and acquisitions occur frequently in the health care industry. The goals of these transactions include:
The first two bullets are obviously related; however, the degree to which and the ways in which they’re related depend on the specific companies and their business models. A company that goes to market selling mostly capital goods (hardware and licensed software) is quite different from a company selling its solution as a cloud-based service.
As discussed in a previous post, most messaging middleware solutions are built using a similar architecture that is often made up of software engines. These engines can be licensed from commercial vendors or from open source projects. The resulting solutions can be built relatively quickly and for modest sums. Consequently, the value in purchasing a messaging middleware vendor for their technology may be limited.
Creating interfaces between multiple messaging middleware acquisitions can be problematic. To date, messaging middleware systems have been designed to operate alone; manufacturers do not intend for their messaging middleware system to be one of a constellation of messaging solutions serving the same user base. Some manufacturers have added to existing designs by implementing APIs and other integration points to facilitate the incorporation of other messaging middleware apps – often to fill feature gaps demanded by prospective buyers. Implementing multiple messaging middleware solutions via acquisitions raises questions about message routing, escalation and the existence of more than one rules engine impacting message flow. A system of systems made up of messaging middleware solutions gets very complicated very quickly, increasing configuration and verification and validation test complexity.
An acquiring company with older software technology may see value in the acquired software platform, or in the intellectual property and expertise behind the development of that software. Further, the acquired company may have software capabilities that are extensions to messaging middleware solutions – such as the staff scheduling for on-call physician messaging example used earlier.
The acquisition of mVisum by Vocera is worth a closer look. It should be noted that Vocera does not appear to be executing a classic roll-up strategy, but the rationale that may have driven this acquisition is of interest. mVisum was a start up with an attractive messaging middleware product. Unlike many other messaging middleware solutions, mVisum was FDA cleared for alarm notification, conveyed snippets of medical device waveforms with medical device alarms (important for screening non-actionable false-positive alarms), and also included remote medical device surveillance features. The company subsequently ran into some patent infringement issues with AirStrip Technologies. mVisum was acquired by Vocera for $3.5 million less than a year later.
There is considerable overlap between Vocera and mVisum solutions. Potential areas of value for Vocera include mVisum’s FDA clearance for alarm notification, one of the strongest messaging middleware market segments. mVisum also filed a number of patent applications that may be of value to Vocera. Vocera was founded in 2000, so there may be some value in mVisum’s software architecture – if not the actual software, then the requirements and design may be leveraged in future versions of Vocera’s software.
To summarize the roll-up strategy applied to messaging middleware, there is likely not a lot of value in acquiring other messaging middleware companies when compared to the product line extension strategy. The main reason is that most software architectures will be similar. There are exceptions to this, some of which are alluded to in the Vocera/mVisum discussion above. Because the messaging middleware market is relatively undeveloped – we’re far short of a penetrated market – there’s little opportunity to buy cash flow or market share through acquisitions. Nor is the market so developed that human resources are a likely justification for acquisition.
The roll-up strategy does make more sense when one looks beyond messaging middleware. Just as Amcom Software took a broader view of vertical market messaging and communications solutions that included messaging middleware as a portion of the whole, one could frame a roll-up strategy from a similar, higher level. For example, a roll-up targeting health care could encompass point of care solutions, rolling-up messaging middleware with nurse call, medical device data systems (MDDS), data aggregation and patient flow with enabling technologies like real time location systems (RTLS) and unified communications (enterprise phone systems). The resulting entity could define a new enterprise software category: point of care workflow automation.
Another practical application of the roll-up strategy is the secure messaging market targeting physicians. There is little apparent differentiation between solutions, and vendors with good adoption in a particular geographic market will be difficult to dislodge. Here a classic roll-up, where the acquiring company offers economies of scale superior to those of regional players, has a lot of potential. Such a strategy would be complex to implement, due to the technical product integration issues noted above. Provided they could dedicate sufficient cash flow, this could be an attractive strategy for Spok, although any company with access to several tens of millions could pull this off.
With 100+ competitors, the messaging middleware market is remarkably crowded. Over time, many of these firms will fade away as they fail to gain initial market traction or cross the chasm, or as they are acquired. There will certainly be mergers and acquisitions. There will be some who plan and execute well and grow their companies to tens and hundreds of millions in annual revenue. Some degree of luck will be a factor. But regardless of the strategy or outcome, the imperative shared by them all will be the drive for growth.
You can find a post Messaging Middleware Defined here and the post on Messaging Middleware Market Segmentation & Adoption here. In the coming week a post on HIPAA will be published. Be sure to check back!
I was listening today to the CE-IT Webinar on CE and HIT from the 2014 AAMI conference in Philadelphia. Much of the session reviewed what has happened over the last five years and it got me thinking about my experiences and what I’ve seen over the last ten years in medical device connectivity and remote monitoring. It’s been an interesting ride and yet I realize there are a few basic ideas that have resonated over the years. These basic ideas are:
Ten years ago, I was working for a very large integrated healthcare system as a clinical engineer. One of my projects was to choose and implement the medical device integration system for integrating patient monitoring and ventilator data into the ICU charting portion of our EMR system. At the time there were three main vendors that weren’t part of the large medical device companies, and eventually we chose one of them for the system. My responsibility was to ensure the device data went from the device at the bedside to the device integration server and out through the interface broker to the EMR application.
While choosing the device integration product, I had to keep in mind my healthcare enterprise infrastructure. I had thirteen hospitals that needed to connect to the two separate instances of the EMR application. Being able to standardize on the device integration system implementation design and management became one of my paramount concerns as I needed to be able to scale the solution over the infrastructure. Additionally, I knew that if I was successful in that particular region, the solution would need to scale over to other regions and nationally.
During that time, I was also involved in some of the organizations promulgating the use of standards at the medical device integration system/interface broker interface. The standards organizations wanted me to include the standards as requirements in my procurement documents. And yet I resisted, because I saw the standards as not yet mature enough, and as overly burdensome, requiring adherence through all layers of the OSI 7-layer model.
In retrospect, I believe I should have insisted on the use of at least the data standards from the devices embedded in the messaging standards (HL7). We were using HL7 at the output of the device integration server, but the EMR application separately mapped each data item to a database element and had to use the device vendors’ HL7 implementation guides to figure out what the data items meant. If we had specified IEEE 11073 device data standards (perhaps even later on as we evolved), we would have been able to more easily change medical device vendors in the future, if desired, and not have to worry about ‘breaking’ the interface to the EMR interface broker.
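The mapping burden described here can be sketched in a few lines. This is a hypothetical illustration, not actual interface code: with vendor-proprietary observation codes, the EMR needs a mapping table per vendor, while a shared nomenclature like IEEE 11073 MDC reference identifiers needs only one. The vendor codes below are invented:

```python
# With proprietary codes, the EMR maintains one mapping table per device
# vendor, built from each vendor's HL7 implementation guide. A single
# table keyed by standard identifiers is vendor-neutral. All code
# values here are illustrative, not authoritative.

VENDOR_A_MAP = {"HR1": "heart_rate", "SPO2X": "spo2"}
VENDOR_B_MAP = {"CardRate": "heart_rate", "OxSat": "spo2"}

# One table keyed by IEEE 11073 MDC-style reference identifiers.
MDC_MAP = {"MDC_ECG_HEART_RATE": "heart_rate",
           "MDC_PULS_OXIM_SAT_O2": "spo2"}

def map_observation(code, vendor=None):
    """Resolve a device observation code to an EMR database element."""
    if vendor == "A":
        return VENDOR_A_MAP.get(code)
    if vendor == "B":
        return VENDOR_B_MAP.get(code)
    return MDC_MAP.get(code)  # standards-based path: vendor-neutral

# Swapping device vendors breaks the proprietary path...
print(map_observation("HR1", vendor="B"))     # None: remapping needed
# ...but not the standards-based path.
print(map_observation("MDC_ECG_HEART_RATE"))
```

Changing monitoring vendors under the proprietary scheme means rebuilding the mapping table and re-verifying the EMR interface; under a shared nomenclature the interface is untouched.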
With regard to the other standards, physical, networking, etc., I’ve found that the IT industry does a good job of defining standards and then converging to an interoperable solution. Those standards are required across various vertical markets, so there is more demand for the convergence of the standards and products with those standards. What is unique to healthcare is the data and messaging information. And, that is what is most important to the clinicians and patients – consumers of the data. All of the other standards are mostly mechanisms for transmitting the data from where it is generated to where it is acted upon in some fashion.
I see the same thing happening in remote monitoring and mHealth. Buyers are too focused on short-term and immediate issues, not realizing that specifying data standards can help them be interoperable in the long run. Again, not having to worry about the data format of another vendor’s sensor data being integrated into the EMR can save time and money, as well as allow quicker scaling across your organization.
However, there are other players on the scene now which may make the buyer’s job a bit easier. In the USA, the HITECH Act’s meaningful use (MU) program has led to the establishment of standards which EHR applications must use in order to be certified, allowing the US government to reimburse some of the costs of EHR implementation. In fact, the first MU stage was going to include remote monitoring standards covering certain medical device data (HITSP IS77); however, it was eliminated from that stage. It is anticipated that the last MU stage will require medical device interoperability. The original date for that was projected as 2015; however, MU stage 3 has been postponed, which will most probably postpone the identification of medical device standards for MU as well. Nevertheless, medical device interoperability requirements for MU, specifying medical device data standards, are coming to the USA in the near future.
Other countries are also using government to select and mandate data and messaging standards for remote monitoring. The Danes have issued a reference architecture which specifies the Continua guidelines for remote monitoring solutions that interact with their national health network and EHR. Norway, Sweden and Finland may follow Denmark. The EU has funded many projects which have recommended the use of interoperability standards for remote monitoring. These recommendations usually call for products that adhere to the Continua guidelines and/or conform to specific IHE profiles. It is no secret that the underlying standards in those guidelines and profiles are very similar: for medical device data it is IEEE 11073, and for messaging it is HL7.
Industries outside the normal healthcare market are responding as well. The mobile operators are very keen to be involved in the healthcare market and ‘disrupt’ it. They have a unique proposition in that they have a one-to-one relationship with their customers and have developed back-end business infrastructures and processes which facilitate that relationship. Moreover, they have managed to grow their customer base from roughly 1 million to roughly 7 billion worldwide by identifying and enforcing standards adherence, beginning with the 3GPP initiative started in 1998.
Examples of required 3GPP standards include transmission protocols, security requirements (encryption), and user identification (uniqueness). Basically, the mobile operators do not allow a handset that does not adhere to the standards to connect to their transmission network. In another example, mobile operators wanted to be able to sell services for images, and required that all handsets have cameras and adhere to specific image data standards. It is nigh on impossible now to purchase a handset without a camera, and the proliferation of products and services that have sprung up for the management and sharing of these photos is phenomenal. This is due to the mobile operators insisting on data standards for a specific use. Because the standards were specified and enforced, interoperability soared, and market penetration and size soared as well.
With that in mind, it is also interesting to note that the most recent handsets have integrated sensors which lend themselves to use in mHealth applications. The Samsung Galaxy has 10 sensors built in: a gyroscope, barometer, fingerprint reader, Hall sensor, accelerometer, heart rate sensor, proximity sensor, RGB ambient light sensor, gesture sensor and compass. Each of these can be used individually or in combination to measure or provide remote monitoring in a healthcare sense.
In addition, with the use of short-range networking (BTLE, ANT, NFC, etc.), other sensors can use the mobile handset as a ‘ramp’ to the network. The ‘wearable sensor’ market depends heavily on mobile handsets for data display, computation and network transmission. As before, the mobile operators could require that medical device sensors adhere to certain standards, or they will not allow the handset to use the transmission infrastructure.
Other developments have occurred with the handset manufacturers and other technology companies. All of them have announced some type of health data aggregation product with development kits for entrepreneurs (Apple HealthKit, the Samsung Digital Health Initiative, Google Fit and the ongoing Microsoft HealthVault). While several initiatives by some of the same companies have failed in the past, many believe now is the tipping point for involvement in mHealth. There is recognition that leveraging the now ubiquitous mobile telecommunications infrastructure to solve some of the more pressing healthcare issues is a ‘no-brainer.’
Therefore, medical device connectivity (or medical sensor connectivity) is becoming more prevalent and will most likely end up extending well beyond the currently controlled healthcare enterprise infrastructure. It is imperative that at least data standards be specified and enforced at the different interfaces to ensure true healthcare data interoperability across all of the disparate infrastructures. Healthcare providers currently have a lot of control over this market; however, there are outside forces that will define large parts of the market in the future, and they may make it easier for the standards to be identified and enforced.
Pictured is the Vital Connect HealthPatch patient-worn sensor. The Vital Connect business model is based on the assumption that their product will be interoperable with a variety of gateway devices such as smartphones.
A while back I had the opportunity to chat with Todd Dunsirn, the CEO of True Process. True Process provides products and services to both hospitals and various manufacturers. The company is focused on the point of care market offering a medication administration solution and a medical device data system.
What was the genesis for starting True Process?
I started the company in 2004. I have an engineering background, and had several other companies doing IT consulting and then web development, and application development. Then I had a friend contact me to develop a bar-code point-of-care simulation so that sales reps that were selling infusion pumps could demonstrate the five rights process with the pump. So, of course he said, “Hey can you do this? It’s gotta be done in three months.” And keep in mind, I had never heard of bar-code point-of-care [chuckle] prior to this, so I’d really never thought about infusion pumps.
So I was on the road for weeks learning the technology, spent time in hospitals seeing how it’s used and developed this application. When it was complete, the client was just about to release their wireless suite of products and they said, “Hey, you’ve done all this research, why don’t you do the first installation because over the last three months you became the expert at doing this?” So I became the installation department and there was one site, then another site and it quickly grew from there.
It was apparent to me that, okay, there’s a need here. These medical device companies didn’t have the resources or core competencies on the IT side of things in their companies right now, and that’s how it started. So we scaled up, and we had created a business out of this single opportunity. We made a conscious decision to just stay focused on healthcare, because it’s a unique industry and a unique space with regards to IT, and I didn’t want to be bouncing around across vertical markets.
We’ve branched out in recent years with other companies and other technologies that are in healthcare. Right now we’re doing a lot of work in the RTLS (real time location system) space, with asset management and workflow studies based on the actual location of people and things. We’ve seen a lot of growth in that area, a lot of interest for what we do. So, it’s starting to be an interesting blend of technology now with medical devices and other things like RTLS.
Are these RTLS projects for RTLS manufacturers or are they for medical device companies that want their products tracked? Or are they for hospitals?
Right now we are working primarily with the RTLS hardware manufacturers and system integrators of these types of platforms. The systems integrators have a vendor neutral platform that can use RTLS hardware and provide a variety of tracking and workflow automation applications. Then they have a rules engine so they can trigger things based on location or how long a device or a patient has been in a certain location; they also provide analytics. These systems can do some really interesting workflow optimization and awareness things. So we’re working with a lot of companies like that.
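The rules-engine idea described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual API: a dwell rule flags assets whose location events show them sitting in one zone past a time limit. The names `LocationEvent` and `DwellRule` and the sample zone/asset IDs are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class LocationEvent:
    asset_id: str
    zone: str
    timestamp: float  # seconds since epoch

@dataclass
class DwellRule:
    zone: str
    max_dwell_seconds: float

    def check(self, events):
        """Return asset_ids that have stayed in self.zone past the limit."""
        first_seen, last_seen = {}, {}
        for e in sorted(events, key=lambda e: e.timestamp):
            if e.zone == self.zone:
                first_seen.setdefault(e.asset_id, e.timestamp)
                last_seen[e.asset_id] = e.timestamp
            else:
                # The asset was seen elsewhere; reset its dwell clock.
                first_seen.pop(e.asset_id, None)
                last_seen.pop(e.asset_id, None)
        return [a for a in first_seen
                if last_seen[a] - first_seen[a] > self.max_dwell_seconds]

events = [
    LocationEvent("pump-17", "soiled-utility", 0.0),
    LocationEvent("pump-17", "soiled-utility", 7200.0),  # still there 2 h later
    LocationEvent("bed-3", "soiled-utility", 0.0),
    LocationEvent("bed-3", "hallway-2", 600.0),          # left after 10 min
]
rule = DwellRule(zone="soiled-utility", max_dwell_seconds=3600)
print(rule.check(events))  # → ['pump-17']
```

Real platforms layer analytics and notifications on top, but the core trigger logic is this kind of comparison between observed location history and a configured threshold.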
True Process is pretty unique in that you work for both manufacturers and hospitals. Based on that perspective, what industry trends jump out at you?
The biggest thing that sticks out in my mind is that it always feels like health care is behind the times. It feels like it’s the last industry to try and catch up. Sometimes it feels like things are made more complicated than they need to be. A lot of medical device manufacturers look at wireless as this high-end premium feature, when really in every other industry it’s a given that you have that. It’s not something that really should set you apart anymore. It should be assumed that every device is wirelessly connected and can communicate. Another thing that jumps out is that there hasn’t been a lot of movement from the device manufacturers’ side, and there hasn’t been a lot of pressure from the buyers’ side, to work towards some level of standardization with respect to how devices communicate.
We have all these standards going on with the EHR and health exchanges and all that, and that’s great. But when you look at the medical device side, everybody’s doing it differently. The messages are different. How they connect is different. The security is different. It’s baffling that in an industry where you can have devices that are recalled and unavailable, the vast majority of products are tied to proprietary messaging and wireless infrastructure.
Take the infusion pump manufacturers for example, and the recent product recalls. If you’re a hospital and you have this gateway and these devices and there’s a recall, you’re done. You’re not communicating or you can’t just get another manufacturer’s pump and put it in there because the messaging is all different, the platform’s all different. It’s like if you had to replace your cable modem or router in your house and you had to replace all your devices accessing the Internet. Nobody would ever do that. But hospitals and healthcare, they currently do that.
I think device manufacturers in general would benefit from some sort of standardization, because they’re all really trying to do the same thing. And I think what it would allow them to do is really just follow some best practices for connectivity and really put their focus on the functionality of the product itself and what it’s supposed to do, and how it’s supposed to deliver a medication or monitor a vital or whatever it is.
I don’t know how that’s going to change. I would think some of the large purchasing organizations would be able to really drive that. I’ve seen different initiatives to try to standardize the buying process of connectivity and say, “Hey, this is how we want to do things and it’s got to meet these requirements, and if it doesn’t, we’re not buying it.” They just haven’t taken that to the next step, and I don’t know where it falls in the priority list of things that hospitals are doing with the recent last few years of EMR integrations and changes, but I think as connectivity moves forward we’ll see that happen.
Tell us about your MDDS platform, ViNES. What is it and what are its capabilities? And do you sell it to medical device manufacturers, or do you sell it to hospitals?
Today hospitals have these 10 to 15 different device gateways that they have to manage for all these different medical devices, and the data’s all scattered into these separate gateways – or just streamed into the bit bucket. We saw this and thought about it for years. And we saw the need to develop a platform that is vendor neutral, where anything can connect to it, and it stores the data in a common format on a common platform. We use the standards that are out there, whether they’re from IHE or the Continua Alliance or IEEE. We try to use those standards when we develop our database so that all data is stored the same way, so it can be accessed the same way.
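The “common format” idea can be illustrated with a small sketch: two vendors report the same SpO2 measurement in different payload shapes, and a thin adapter layer normalizes both into one record keyed by an IEEE 11073-10101 (MDC) nomenclature code (150456 is MDC_PULS_OXIM_SAT_O2). The adapter functions and vendor payload shapes here are hypothetical, not ViNES internals.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    patient_id: str
    mdc_code: int   # IEEE 11073-10101 nomenclature code
    value: float
    units: str

def from_vendor_a(payload):
    # Hypothetical vendor A reports {"pt": ..., "spo2": ...} in percent.
    return Observation(payload["pt"], 150456, float(payload["spo2"]), "%")

def from_vendor_b(payload):
    # Hypothetical vendor B reports {"patient": ..., "sat": ...} as a 0-1 fraction.
    return Observation(payload["patient"], 150456, payload["sat"] * 100, "%")

obs = [
    from_vendor_a({"pt": "MRN1001", "spo2": 97}),
    from_vendor_b({"patient": "MRN1002", "sat": 0.94}),
]
# Both land in the same queryable shape, regardless of source device.
for o in obs:
    print(o.patient_id, o.mdc_code, o.value, o.units)
```

Once every source is mapped at ingest, downstream consumers can query by patient, unit, or parameter without knowing which vendor’s device produced the data.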
As we got started, it became apparent that what we were developing was more than just a device gateway. It really developed into an integration platform that included medical devices and other HIT systems. Take for instance, getting an ADT feed, getting vitals data. It doesn’t matter what the data source is, we can accept that data and organize it based on the structure that we created. And then we wanted to make that data accessible through an interface that really allows hospitals or even a device manufacturer to access that data however they want, whenever they want, in a known format that they can control. So that was really how we got started with ViNES, we looked at this vendor neutral device gateway platform.
What about ViNES sets it apart from other connectivity solutions, and what do you see as important differences between connectivity solutions?
First, ViNES actually stores and organizes the data. So it’s not just routing data, it’s not just a serial-to-serial connection that’s getting data from the device and sending it to the EMR. We’re actually storing the data, so we have this whole data warehouse structure. The goal was to collect everything from every device, store it in a patient-centric way, and then be able to organize it by unit, or hospital or however you want. So that’s one of the unique things, it’s not just a connectivity engine, it’s a connectivity and data warehousing engine.
Everybody talks about big data and analytics, but this is mostly applied to data in the EMR and claims data – all abstracted and summarized data. If you look at how much data is generated at the point of care in a hospital just from devices, it’s staggering. It’s this full-resolution physiological and therapeutic data that, when combined with the HIT data we have now, will really transform big data into a tool with a transformational impact on health care delivery. And that’s why we created ViNES, to collect all that data and organize it intelligently. Even if that data isn’t being used today, in another year or two – just by having access to it – you’re going to be able to do some very interesting things with it.
Also, we’ve really tried to create something that is developed around current technology. We use a lot of open source technologies to keep current and to keep the cost down. Consequently, we’re able to approach the market with a very scalable and flexible licensing agreement. Take for instance low-census devices like intra-aortic balloon pumps or dialysis machines, or somebody that has five scales that they want to integrate. Our approach to licensing can scale down to these small deployments, pricing ViNES so that it can be affordable in most situations.
In the future, ViNES is designed to be cloud based, so we can host customer data on an Amazon cloud and provide connectivity for somebody that doesn’t want to manage the infrastructure. The goal is to enable a hospital that has ViNES to go onto a screen, select whatever device they want to connect, configure it, and be up and running without really any intervention from us. We’re also moving to modularize software as plug-ins and provide the ability to send data to a data warehouse where we really don’t have to be involved. And that goes back to the standards.
We’re not looking to create some sort of system where they have to pay a lot of money to get the data out of their platform. It’s their data, they should have access to it, and we think there are opportunities and growth in other aspects of developing the product other than charging them data access usage fees and things like that.
Unlike some medical device manufacturers I can think of.
That’s how the industry is, and we’re just trying to be different. We’re trying to shake things up, and the more we talk to hospitals, CIOs and people that are in charge of these efforts, you see the movement and the frustration. It’s like, when are we going to wake up and do things the right way, and take control of this and demand solutions that meet our business needs, meet our cost structure, and are technologically done the right way?
That’s a great segue to my last question. What are the trends that you’re seeing in the connectivity market? Is growth increasing? Are we starting to peak because of the meaningful use roadmap? Are buyers getting better informed? Are they asking harder questions, or are they still just buying whatever their favorite vendor wants to sell?
I think medical device manufacturers are starting to really look at the value of a prospective connectivity solution. We’re working on several opportunities right now – two opportunities in the last week – where companies that are with a current provider have said the cost is just unsustainable. What I’m hearing is like, “This is great, but I’m not going to pay all this money to do it just because I can. It’s got to mean something. And it’s got to fit into our strategy.” So that’s a trend that we’re seeing. You look at how much some of these connectivity systems cost, and that’s being driven by what they’re doing and also how the work is done. It seems like nothing’s really turnkey, and there’s a lot of time and a lot of effort to get things done. And I don’t know if that’s because of the technology itself or their design approach.
Take the core cost of a system – the software licenses and hardware costs – and now add on all this consulting and development time to get things working, and it can easily become a black hole of cost. With ViNES, we’re really trying to change that paradigm and to make it more efficient.
The final trend we’re seeing is a glimmer of recognition regarding the value of medical device data beyond clinical documentation in the EMR. Not everybody understands right now the value of this data. Everybody talks about big data, but few people know what they’re really talking about or what it really means or how it applies to them. This is true for both medical device manufacturers and the data their devices produce, and hospitals and the data generated from their patients. A lot of data now is used really in the short term. It’s used right away, but it’s not mined or analyzed to learn what really happened here, and what were the events that happened, and how this lines up to what was actually done, and the ultimate patient outcome. That’s just not happening now.
The photo above is from the True Process booth at HIMSS14.