Question from Mandi Bishop, health IT consultant:
What could health IT innovators do to markedly improve healthcare and achieve Triple Aim goals, in lieu of forced compliance with CMS/ONC mandates, that would offset Meaningful Use incentive dollars and the associated reimbursement penalties?
Don’t laugh. I’m serious!
Instituting governance is the single most innovative thing health IT can do in lieu of Meaningful Use that would offset the loss of incentive dollars and reimbursement penalties, improve population health, and improve patient clinical outcomes.
Sound like Triple Aim goals? It should. You don’t have to implement new technology to make the most of what you have. It’s all about the process, baby.
How much faster would your revenue cycle be if 30% more patient data was valid?
What if identifying a vocabulary owner of LOINC freed your lab from its local compendium, and you were able to reconcile and aggregate your own labs with third-party labs? How many fewer unnecessary tests might you order?
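To make the lab-vocabulary point concrete, here is a minimal sketch of what reconciling a local compendium against LOINC might look like. The local and third-party codes below are made up for illustration; the two LOINC codes are real, but any production mapping would come from your own compendium work, not this toy table.

```python
# Minimal sketch: map local lab compendium codes to LOINC so results
# from in-house and third-party labs can be aggregated together.
# The local and third-party codes here are illustrative, not a real compendium.

LOCAL_TO_LOINC = {
    "LAB-GLU-01": "2345-7",   # Glucose [Mass/volume] in Serum or Plasma
    "LAB-HBA1C": "4548-4",    # Hemoglobin A1c/Hemoglobin.total in Blood
}

THIRD_PARTY_TO_LOINC = {
    "GLUC": "2345-7",
    "A1C": "4548-4",
}

def aggregate_results(local_results, third_party_results):
    """Group results from both sources under a shared LOINC key."""
    merged = {}
    for code, value in local_results:
        loinc = LOCAL_TO_LOINC.get(code)
        if loinc:
            merged.setdefault(loinc, []).append(value)
    for code, value in third_party_results:
        loinc = THIRD_PARTY_TO_LOINC.get(code)
        if loinc:
            merged.setdefault(loinc, []).append(value)
    return merged

results = aggregate_results(
    [("LAB-GLU-01", 98), ("LAB-HBA1C", 5.6)],
    [("GLUC", 102)],
)
print(results)  # {'2345-7': [98, 102], '4548-4': [5.6]}
```

Once both feeds resolve to the same LOINC key, duplicate-test detection becomes a simple lookup instead of a manual chart review.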
How is the patient experience improved when the patient and all caregivers have access to accurate, timely, relevant health information from all their care sources?
Innovating is thinking outside the box. Right now, that box is Meaningful Use – the program where deadlines don’t just discourage, but blatantly ignore, fundamental IT governance principles, and our nation’s patients have become unwitting lab rats in the grandest series of human trials the FDA never realized it should have reviewed.
So be a rebel. Get back to basics. Block, tackle, and institute governance.
The following is a guest blog post by Dawn Crump, VP of Audit Management Solutions at HealthPort.
The RACs are back and they’re offering acute care and critical access hospitals a sweet deal—at least for now.
The Recovery Audit Contractor (RAC) program had been on hold due to the reassignment and re-contracting of regions. In addition, a lawsuit was pending between the Centers for Medicare and Medicaid Services (CMS) and CGI over RAC reimbursement rates, models and approaches. The lawsuit was resolved in August, but CGI quickly appealed, causing further delay in the full resumption of the RAC program.
So while everyone awaits another court decision and green light from CMS, two important RAC announcements were made by CMS.
Limited Restart Underway
Until the RAC program is 100 percent back in session, some reviews will be conducted. These will be mostly automated reviews, but there will be some records requests and a limited number of complex reviews in certain select areas. During the restart, RACs will not review claims to determine whether the care was delivered in the appropriate setting. CMS said it hopes that the new RAC contracts will be awarded later this year.
From the Aug. 5 edition of the American Hospital Association’s News Now: “CMS will allow current RACs to restart a limited number of claim reviews beginning this month. The agency said most reviews will be done on an automated basis. However, a limited number will be complex reviews on certain claims, including spinal fusions, outpatient therapy services, durable medical equipment, prosthetics, orthotics and supplies, and Medicare-approved cosmetic procedures.”
One example of the latter is blepharoplasty, also known as an eyelid lift. The number of claims for this procedure has tripled in recent years, so I expect the RACs will make this procedure a hot target. To be covered under Medicare, vision must be impaired. What’s needed? Physician documentation of the reasons for surgery (e.g., eyelid droop interfering with vision).
Here are three specific steps to take with regard to the limited RAC restart:
But the limited restart wasn’t the only important news.
Partial Repayment Deal Announced
In its September 9, 2014, inpatient hospital reviews announcement, CMS announced an administrative agreement for acute care and critical access hospitals. To reduce the backlog of cases in appeal status and overall administrative costs, these hospitals now have the option to withdraw their pending appeals in “exchange for timely partial payment (68% of the allowable amount),” according to the CMS administrative agreement.
Of course there are parameters to understand and details to sort out regarding the settlement opportunity. Here is what we know so far:
Many more details are available on the CMS.gov website.
Eligible hospitals must determine if requesting a settlement offer makes sense for cases in appeal that meet the specified parameters. For some cases, it will make sense to take the 68 percent settlement and cut your losses. For other denials, waiting out the appeal process may be a better choice.
Each denial will be different and each case unique. Time, money and resources must be balanced against the potential revenue retained or returned. Audit management directors, in conjunction with their revenue cycle and finance teams, must analyze RAC data for each eligible case. It’s a complicated equation. And with a deadline of October 31, 2014, there is no time to lose.
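To illustrate the kind of settle-versus-appeal analysis involved, here is a back-of-the-envelope sketch. The 68 percent rate comes from the CMS agreement described above; the estimated win probability and appeal cost are hypothetical inputs a revenue cycle or finance team would have to supply for each case.

```python
# Back-of-the-envelope comparison: take CMS's 68% partial payment now,
# or continue the appeal? The 0.68 rate is from the CMS agreement; the
# win probability and appeal cost below are made-up placeholder inputs.

SETTLEMENT_RATE = 0.68

def expected_appeal_value(allowable, p_win, appeal_cost):
    """Expected recovery from continuing the appeal (a win pays the full allowable)."""
    return allowable * p_win - appeal_cost

def recommend(allowable, p_win, appeal_cost):
    """Pick the option with the higher expected recovery."""
    settle = allowable * SETTLEMENT_RATE
    appeal = expected_appeal_value(allowable, p_win, appeal_cost)
    return "settle" if settle >= appeal else "appeal"

# A $40,000 claim with a 60% estimated win rate and $3,000 in appeal costs:
print(recommend(40_000, 0.60, 3_000))  # settle: $27,200 now vs $21,000 expected
```

Real analyses would also weigh the time value of money and the multi-year appeal backlog, which both push further toward settling.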
About Dawn Crump
Dawn Crump, MA, SSBB, CHC, has been in the healthcare compliance industry for more than 18 years and joined HealthPort in 2013 as Vice President of Audit Management Solutions. Prior to joining HealthPort, Ms. Crump was the Network Director of Compliance for SSM. She is a former board director of the Greater St. Louis Healthcare Finance Management Association chapter and currently serves as the networking chair.
Question from Rob Brull, product manager at Corepoint Health:
In what ways does it make sense to extend Direct Project beyond those already defined by Meaningful Use?
Most of the implementations I am aware of for Direct protocol involve the sending and receiving of Summary of Care documents to satisfy the Transfer of Care (ToC) requirement for Meaningful Use. There is one radiology site my company is working with to send a Diagnostic Imaging Report via Direct. My thought is that lab orders, lab results, and radiology reports are obvious workflows for implementing the Direct protocol. No VPN or agent is required and hopefully all provider facilities will already have a Direct address based on ToC requirements. The real hurdle will be for vendors to extend their capabilities to send and receive payloads other than Summary of Care.
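At the message level, a Direct payload is essentially a MIME email with a clinical document attached. Here is a minimal sketch of packaging a lab result that way; all addresses and the payload are hypothetical, and real Direct exchange adds S/MIME signing and encryption with certificates discovered via DNS or LDAP, typically handled by a HISP rather than by this code.

```python
# Sketch: packaging a lab result as an attachment on a Direct-style
# message. Real Direct exchange wraps this in S/MIME signing and
# encryption (usually via a HISP); addresses and payload are hypothetical.
from email.message import EmailMessage

def build_direct_message(sender, recipient, subject, payload, filename):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content("Lab result attached.")  # human-readable body part
    msg.add_attachment(
        payload.encode("utf-8"),
        maintype="text",
        subtype="xml",
        filename=filename,
    )
    return msg

msg = build_direct_message(
    "lab@direct.examplelab.org",        # hypothetical Direct addresses
    "drsmith@direct.exampleclinic.org",
    "Lab Result: CBC",
    "<ClinicalDocument>...</ClinicalDocument>",
    "cbc-result.xml",
)
print(msg["To"])  # drsmith@direct.exampleclinic.org
```

The point of the sketch is that nothing about the transport limits the attachment to a Summary of Care document; a lab order, lab result, or radiology report rides along the same way.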
This is a continuing story of the California Medical Association asking UnitedHealthcare for a delay in starting the United Premium Designation Program. You can read more at the link below, where United said no; the algorithms have to run no matter what. You may not even know what the Premium Designation is, but it’s more analytics and scoring that doctors have to go through to stay in network with United.
The company says this program started in 2005, and that’s about right, as I remember reading a brochure and information about it around 10 years back, which fits my memory well. At that time I was spending a lot of time around a primary care practice office writing and programming an EMR, which I sold for a very short time. You see a lot of stuff when you spend a lot of time at a doctor’s office. You can also visit the company’s site for MDs and see what all they have in store for them. Doctors can log in and see their Premium Assessment Results from there. There’s a PDF that will make your head swim as a patient, outlining all the grading formats. Here’s all the Optum/United software that is used for this complex evaluation of claims data.
“The Premium program uses different software programs to collect or "group" claims data into quality measures and episodes of care, including Symmetry Episode Treatment Groups® (ETG®), Symmetry Procedure Episode Groups® (PEG®), Symmetry Episode Risk Groups® (ERGs®), Symmetry EBM Connect® and 3M™ All Patient Refined Diagnosis Related Groups (APR DRG).”
The Practice Rewards page is left blank. So they have tools for the doctors to use with the Symmetry software. The first is what they call EBM Connect, which includes all the data from claims and is supposed to help doctors assess you, the patient, to make you more compliant. Then it goes on to Episode Risk Groups, teaching doctors more about how to do a risk assessment on you, with tools to predict your current and future healthcare usage.
There are a couple of others as well, called Symmetry Episode Treatment Groups and Pharmacy Risk, and those dive into population health risks and assessments. So did you think population health was a CMS invention? Nope, it’s been here for around 10 years, as I remember seeing the beginning stages, and all one has to do is read the UnitedHealthcare annual report to see where CMS gets its models. It’s been that way, with a flurry of United people in government, ever since Hillary Clinton rescued Lois Quam, a big executive at United who was one of many named in the DOJ criminal case the SEC brought forward in 2008-9 related to backdating stock to increase its value. I think it’s still the largest derivatives fine on record. The former CEO took the hit, and the current CEO rushed to give his money back and moved into the CEO job from being the COO or CIO.
Anyway, as you can see, there’s a lot going on here with evaluating all the data on the doctors. When you add in that many of them in Southern California, via complex contracts, are already getting paid at rates less than Medicare, this does not make them real happy to have a bunch of algorithmic figures and more analytics to jump through. This process has become more complex as all health insurers are hiring more quants to create models, although United, I would guess, has had their share for years, since about 1/3 of their revenue these days comes from analytics, software and licensing, and 2/3 from selling policies. You do have to wonder about their algorithms and models when you see the company just kick out a check for a $175,000 hammer toe procedure and then turn around and hammer down many doctors to getting paid less than Medicare.
There’s also this example here too where the company goes out and bids and wins a Medicare Advantage contract and then has to give it back as they had no doctors in network to see the patients.
Here are a couple more quotes on the system; see how your doctor has to be quantified by UnitedHealthcare with stats that may or may not lead to better care. This is the deal to be a Premium doctor, so remember that when you see the rankings. You’ll know that United has quantified him or her with numbers and is working to put you, the patient, into population health management and keep your doctor in the evidence-based medicine area. So if he or she varies outside of what the evidence shows, with something new where there’s no evidence yet, they slip down a rank or so.
“EBM Connect helps you assess patient compliance with proven evidence-based treatment standards. EBM Connect compares the medical claim, pharmacy claim, lab result and enrollment data from your plan with evidence-based best practices for over 90 clinical conditions and almost 600 measures of care. These measures provide a quantifiable basis for actionable interventions by health plans, employers, disease managers and others.”
So as you can see, there’s a lot here, and that’s why the CMA was asking for some changes and delays, with their concern being that you, the patient, will be confused when you see the “Premium” rankings. It means your doctor has been quantified, and that’s a model, and all models have flaws as well. I think where the CMA is concerned, as this article says, is that relying heavily on such quantification will hurt doctors and patients, and we all know how flawed data is all over the place today. You will be steered through a layer of tiers to select a doctor. Actually, if you wait long enough, the internet with all the flawed data out there today will end up in time giving you some kind of diagnosis:)
So if it were me, I’d still ask others about doctors, as that’s still the best approach; otherwise you’re going to get a “data presentation” of your intended doctor and not the whole picture. There’s a video at the footer of my blog, #1, called “Context is Everything,” and I advise everyone to watch it and see how you get duped on data and numbers and make choices with too much virtual world value and not enough real world value. When you ask another person about a doctor, you’re getting “real” world value; when you look at a doctor’s “data,” this is all virtual stuff, quantified mass data using criteria that an insurance company decides is what counts, not you. This is a real problem today, with way too many confusing virtual and real world values, and insurance companies are not immune to that either.
I’m sure part of what the CMA is trying to address here is that stats and numbers don’t tell the whole story, and boy do we remember that with the VA in Phoenix, as they could not get into the real world when interviewed by Anderson Cooper, and he too walked away scratching his head saying the same thing: “all they could talk about were their numbers.” Some of United’s models are starting to fail in other areas anyway; just like hedge fund models, they fail after an amount of time and don’t make money anymore, and the same goes for models like this. They get dated and either need an update or need to be dumped, as they may no longer work in current economic times. BD
"In its current form, the program will not only confuse patients but will also fail to provide them with meaningful information that could actually assist them in making important health care decisions,” wrote CMA President Richard Thorp, M.D., in an August 13, 2014, letter to the insurer.
The program uses clinical information from health care claims to evaluate physicians against various quality and cost-efficiency benchmarks. CMA believes that the program as currently planned will only lead to confusion among patients and physicians and will fail to achieve a central stated goal of UHC – to modify physician practice patterns to improve both quality and cost-effectiveness.
UHC has regrettably chosen to avoid making any substantive changes to the Premium Designation program. CMA continues to believe that the serious flaws UHC ignored, which remain in the program, can cause real damage to physicians and patients, especially as UHC begins to use the inaccurate designations as a basis for steering patients into various tiers.
This article calls it “data repackaging,” and I have had another name for it, “data flipping,” but it’s all the same thing. Referenced here is Senator Rockefeller, and he’s one I have hounded for a couple of years to get on this and pass a law requiring an index of all data sellers, which would require licensing and disclosure of what kind of data they sell and to whom. You need an index first, before anything else can happen, and I’ve been writing about that fact for 3 years, but non data mechanics logic folks can’t get that “fact” through their heads, and thus it goes on and on and on. What if we had stock brokers running around with no licenses? Heck, I even have to get a license to catch a little fish, which involves virtually no risk for me.
Fact of the matter here is they are getting sloppy with their mailing lists and data, and let me tell you, flawed data gets the same price as accurate data, so there’s nothing to create any reason for them to be responsible at all, as that’s just overhead. It’s truly a sad state of affairs that these bottom feeders can’t even bring a bit of integrity to the data they sell, as again they don’t care. Want to get into wearables? Well, think again until this is resolved, if you like privacy, as the FTC found 7 of them selling your data and has not named who they are.
I have had my issues with flawed data, and if you read the link above, Acxiom one day tried to tweet with me and I tweeted back politely with facts. I have been working on this on my own time for 3 years, and my readers here know it and get beat over the head about it. Three years ago, when I proclaimed this an epidemic, folks thought I was crazy, but I use data mechanics logic and I’m ahead of everyone. It’s not a talent per se; I just visualize using my former development, coding, writing and sales abilities combined, and I figure it out almost every time.
So here’s the campaign for privacy and transparency. If you can kick in a few bucks, that would be great, as I’m a couple years into this already, and I do update what I’m doing; I did one update today. Nothing will work until we do step one, and that is to identify who we want to regulate. At the campaign link there are links that tell you both about insurers buying your credit card data and some real details about how they record and analyze your voice at their call centers, which becomes yet one more item about you that they could potentially sell.
Guess what, insurers love this data too, and I have heard from friends who do not have diabetes that they get letters beginning with “now that you have been diagnosed with diabetes”…and that’s not right. Granted, we need the awareness and it’s out there, but we don’t need flawed data doing a diagnosis. You’ll get scored and denied something based on this flawed data; it happens all the time, and it’s on your dime to fix the data that the sellers made millions distributing.
This is the biggest racket around, as they all want to make their millions and billions, and then it comes back to us, the consumers, to prove our innocence of whatever nonsense their data-for-profit lists. Again, all these non data mechanic logic folks have been just swimming in this with goofy perceptions on how to take action, and I’m tired of it, as they do nothing except wish for some Algo Fairies to come along and save the day..duh? Here’s a little example of one of my posts being read by Senator Warren’s office.
Not only that, but the data brokers get off easy if they can’t pay their fine, like this one who was fined $1 million but had it cut back to $60,000 because the full fine would put him out of business and he couldn’t afford it. Well, I say go with the second option by all means; one less broker out there selling flawed data is great for consumers.
The 60 Minutes videos will also fill you in on this nasty business that hurts consumers.
So what, forget going to the doctor and let the internet diagnose you now with all these so-called predictive analytics? No thanks! It’s time to license and index all data sellers so we know who they are! Once you’re done here, take a visit over to Healthgrades and Vitals and see if you can find a “dead doctor” or one with flawed data over there to make an appointment with. Three links below will give you some good insight into how flawed their data is. BD
The 42-year-old information technology worker's name recently showed up in a database of millions of people with "diabetes interest" sold by Acxiom Corp., one of the world's biggest data brokers. One buyer, data reseller Exact Data, posted Abate's name and address online, along with 100 others, under the header Sample Diabetes Mailing List. It's just one of hundreds of medical databases up for sale to marketers.
In a year when former National Security Agency contractor Edward Snowden's revelations about the collection of United States phone data have sparked privacy fears, data miners have been quietly using their tools to peek into America's medicine cabinets. Tapping social media, health-related phone apps and medical websites, data aggregators are scooping up bits and pieces of tens of millions of Americans' medical histories. Even a purchase at the pharmacy can land a shopper on a health list.
"People would be shocked if they knew they were on some of these lists," said Pam Dixon, president of the non-profit advocacy group World Privacy Forum, who has testified before Congress on the data broker industry. "Yet millions are."
They're showing up in directories with names like "Suffering Seniors" or "Aching and Ailing," according to a Bloomberg review of this little known corner of the data mining industry. Other lists are categorized by diagnosis, including groupings of 2.3 million cancer patients, 14 million depression sufferers and 600,000 homes where a child or other member of the household has autism or attention deficit disorder.
"It is outrageous and unfair to consumers that companies profiting off the collection and sale of individuals' health information operate behind a veil of secrecy," said Sen. Jay Rockefeller, D-W.Va. "Consumers deserve to know who is profiting."
Exact Data's Chief Executive Officer Larry Organ said the list posted on its website shouldn't have included last names and street addresses, and the company has since deleted any identifiable information. He said the data came from Acxiom and Exact Data was reselling it.
Here’s that topic again that nobody likes to talk about, “hospital inequality,” or in this case “no hospital” in a poor county in Virginia. Remote Area Medical is there this weekend to provide free healthcare in a community that has only 3 dentists and where the doctors are struggling. In Texas last week, people showed up at the ER of a hospital to find a “sorry we’re closed” sign on the door; we’ll tell you more later, but the long and short of it is they ran out of money.
You can go here to watch the video, as I can’t embed it and it links to Facebook, one of those media stations still running news through Facebook, though I see more are changing to fix that, thank goodness. I wish the TV station would change the parameters on their Kaltura player to allow embedding:) It can be done, as I’ve worked with their platform.
Some people have been waiting a year for care, and even the football team is out unloading the trucks and planes. East Tennessee and Southwest Virginia are two places where the need is great and where RAM has held many free clinics. As a matter of fact, there’s a big free clinic going on in Los Angeles this weekend too, so it’s not just rural areas where healthcare is needed. BD
Lee County's airport is being converted into a temporary medical facility by Remote Area Medical and their team of volunteers to provide free services that are much needed. "It's a poor county. We need to be there, Remote Area Medical needs to be there. Their hospital closed, their doctors are struggling. In the entire county they have three working dentists. This is an area of great need," says Health Wagon medical director Dr. Joe Smiddy.
Neuroscientists who initially supported the HBP feel that they have been taken advantage of by this project and can’t determine whether any value will be found here. You can read a portion of the report in Nature below, or use the link and read the entire article. One option to save the project would be to eliminate neuroscience from the Brain project and just focus on technological objectives.
Secondly, they could split the focuses of neuroscience and technology into separate sections. The last option mentioned here would be to attempt to put the entire project back on track, which means funding it again, trying to revive trust in the project, and involving additional neuroscience entities.
This project gave birth to the US Brain Project, which focuses on technique development, only one element. So if the European project is having this kind of difficulty, what does that say for what the US is doing here? The neuroscientists feel this took advantage of them, and again it became a bad model with a bunch of data on servers to maintain.
Are these folks not well enough attached to reality here? The European board announced plans to dissolve the cognitive-neuroscience program; 18 principal investigators left, as did the director. All they are left with is, again, a massive database limited to numerical simulations, which by themselves yield no understanding. So I think we need to keep a close eye on the US Brain Project at this point. I’m not saying there won’t be any value at all, but rather that we should watch and see if “value” can be created and if they can get the models right. If the European project is not restructured, it looks like they have a waste of money on their hands. BD
Contrary to public assumptions that the HBP would generate knowledge about how the brain works, the project is turning into an expensive database-management project with a hunt for new computing architectures. In recent months, the HBP executive board revealed plans to drastically reduce its experimental and cognitive neuroscience arm, provoking wrath in the European neuroscience community.
The crisis culminated with an open letter from neuroscientists (including one of us, G.L.) to the European Commission on 7 July 2014 (see www.neurofuture.eu), which has now gathered more than 750 signatures. Many signatories are scientists in experimental and theoretical fields, and the list includes former HBP participants. The letter incorporates a pledge of non-participation in a planned call for 'partnering projects' that must raise about half of the HBP's total funding. This pledge could seriously lower the quality of the project's final output and leave the planned databases empty.
The HBP blends two styles. One comes from a history of successful interdisciplinary collaborations in the European Union in brain- and neuron-inspired computation1. The second originates from a computational research programme, the Blue Brain Project2, initiated by Markram in 2005 (see 'Brain activity'). This collaboration between the Swiss Federal Institute of Technology in Lausanne (EPFL) and the IBM computing corporation aimed to build large-scale 'bottom up' numerical simulations of a rat's neocortical column, a set of about 100,000 neurons considered to be a functional unit within the brain.
The crisis results mainly from ambiguities concerning the place of neuroscience in the HBP. From the beginning, neuroscientists pointed out that large-scale simulations make little sense unless constrained by data, and used to test precise hypotheses. In fact, we lack, among other resources, a detailed 'connectome', a map of connections between neurons within and across brain areas3 that could guide simulations. There is no unified format for building functional databases or for annotating data sets that encompass data collected under varying conditions. Most importantly, there are no formulated biological hypotheses for these simulations to test4.
Many scientists also feared that the HBP would siphon funds from fundamental research. The European Commission's investment in a large 'brain project' would influence what other research areas it chooses to fund. Nonetheless, such an opportunity seemed unlikely to arise again, and neuroscientists (ourselves included) joined up, even if they did not agree with all aspects of the HBP proposal or with certain promises used to sell it. We put our faith in open and interdisciplinary collaboration, trusting that intellectual and operational details would take shape gradually and collectively.
Neuroscience in the HBP is now limited mainly to simulations and to building a massive infrastructure to process mostly existing data. The revised plan advances a concept in which in silico experimentation becomes a “foundational methodology for understanding the brain”5. Numerical simulations and 'big data'6 are essential in modern science, but they do not alone yield understanding. Building a massive database to feed simulations without corrective loops between hypotheses and experimental tests seems, at best, a waste of time and money. The HBP's goals now look like a costly expansion of the Blue Brain Project, without any further evidence that it can produce fundamental insights.
Health IT vendor Greenway Health recently finished its rollout of a cloud-based EHR to all 8,200 Walgreens stores in the U.S. When I was offered the chance to interview CEO Wyche T. “Tee” Green III about this, I decided to take it a step further.
In all my years of covering health IT, I’ve never met nor even spoken to Green, so I figured a podcast was in order. After all, I had written a piece for Health Data Management earlier this year about how pharmacies are reshaping themselves as true healthcare companies. (This interview also comes in the wake of CVS Caremark ending its sale of tobacco products and changing its name to CVS Health.)
I also had a lot of questions about interoperability issues in health IT and the many criticisms that lately have been heaped on both EHR vendors for perceived usability problems and the federal Meaningful Use EHR incentive program. The timing couldn’t have been better.
Podcast details: Interview with Greenway Health CEO Tee Green, recorded Sept. 8, 2014. MP3, mono, 128 kbps, 25.5 MB. Running time 27:51
1:00 Walgreens rollout and EHRs for “retail health”
3:20 Future expansion to Walgreens Healthcare Clinic locations
4:15 My own experience with lack of interoperability at a CVS MinuteClinic
5:30 Achieving EHR interoperability
7:30 Frustration with slow progress on Meaningful Use
10:30 Data liquidity
12:30 Update on CommonWell Health Alliance
14:25 Addressing criticisms that vendors are hindering interoperability
16:30 EHR usability
18:10 Greenway Marketplace app store
22:15 Patient engagement and slow start to Stage 2 Meaningful Use
24:10 Dealing with the rise of consumerism in healthcare
I’ve been kicking around in my mind the idea of hosting a regular podcast, perhaps as frequently as weekly. If so, what day of the week would you prefer to hear a new episode?
#HITsm T2: Geography should follow architecture behind DNS (how we organize the Internet). We pick a "home base" & the world finds us.
— Jared Alfson (@jalfson) September 5, 2014
I was absolutely intrigued by the idea of structuring the healthcare data architecture after DNS. As a tech guy, I’m quite familiar with the structure of DNS, and it has a lot of advantages (check out the Wikipedia article on DNS if you’re not familiar with it).
There are a lot of really great advantages to a system like DNS. How beautiful would it be for your data to be sent to your home base, versus our current system, which requires the patient to go out and try to collect the data from all of their healthcare providers? Plus, the data they get from each provider is never in the same format (unless you consider paper a format).
One challenge with the idea of structuring the healthcare data architecture like DNS is getting everyone a DNS entry. How do you handle the use case where a patient doesn’t have a “home” on the internet for their healthcare data? Will the first provider you see sign you up for a home on the internet? What if you forget your previous healthcare data home and the next provider gives you a new one? I guess the solution is to have really amazing merging and transfer tools between the various healthcare data homes.
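To make the DNS analogy concrete, here is a toy sketch of how a resolver might walk a hierarchical registry to find a patient's health-data "home base," the way DNS walks from the root down through delegations. Every name, zone, and endpoint here is hypothetical; no such registry exists today.

```python
# Toy sketch of DNS-style resolution for a patient's health-data "home".
# Like DNS, the resolver tries the most specific zone first and falls
# back to broader delegations. All names and endpoints are hypothetical.

REGISTRY = {
    "us.": "registry.us.example",                            # national root delegation
    "ca.us.": "registry.ca.us.example",                      # state-level delegation
    "jane.doe.ca.us.": "https://homebase.example/jane-doe",  # authoritative home base
}

def resolve(patient_name):
    """Try the most specific zone first, then progressively broader ones."""
    labels = patient_name.rstrip(".").split(".")
    for i in range(len(labels)):
        zone = ".".join(labels[i:]) + "."
        if zone in REGISTRY:
            return REGISTRY[zone]
    return None

print(resolve("jane.doe.ca.us"))  # https://homebase.example/jane-doe
# An unregistered patient falls back to the state delegation, which could
# answer the "no home yet" case by pointing at a signup service:
print(resolve("john.roe.ca.us"))  # registry.ca.us.example
```

The fallback behavior is one possible answer to the "no home yet" question above: an intermediate registry could respond with an enrollment endpoint instead of a final home, much like a DNS referral.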
I imagine that some people involved in Direct Project might suggest that a Direct address could serve as the “home” for a patient’s health data. While Direct has mostly been focused on doctors sharing patient data with other doctors and healthcare providers, patients can have a Direct address as well. Could that Direct address be your home on the internet?
This will certainly take some more thought and consideration, but I’m fascinated by the distributed DNS system. I think healthcare data interoperability can learn something from how DNS works.
KevinMD, a prominent physician blogger for many years, posted a recent note in which he discusses the disappointing results of the broad hospital EHR deployment enabled by the HITECH act of 2009. This legislation committed us to the expanded adoption of health information technology, expecting electronic health records (EHRs) to transform medical care (see: Disappointing outcomes despite a massive investment in EHRs). Here is what he describes instead as the consequences of this act: “Five years and $25 billion later, the results have fallen short of expectations,” and there are multiple reasons for our disappointment. Here are the five reasons he says that EHRs have been disappointing:
...EHRs were designed to document the provision of health care as it was just delivered. Most EHRs arose from a programming background emphasizing billing and claims processing;....As a consequence, systems were not designed to provide sophisticated guidance to health care practitioners for “what comes next” in the care of a patient. With some important exceptions, the most important part of a patient’s medical care is the ongoing plan, and – unfortunately — EHRs still don’t effectively facilitate planning the future of a patient’s care.
...EHRs have been woefully inadequate when used for population health care management. Software companies are only belatedly realizing their obligation to enable analysis of health care needs and disparities across entire populations of patients (see: Population Health Management; Software Designed to Support ACOs). Without a well-designed and implemented patient registry, an EHR cannot identify groups of patients with similar needs, thus impeding a practitioner’s ability to direct limited care resources to patients who would benefit from intensive management.
...[E]ngaging the patient — presumably an important party in improving health care through IT — has been an afterthought. Adoption of electronic patient portals has been slow, in part because the design and user interface of portals lack polish, and in part because portals fail to provide patients with actionable information to guide personal health care management.
...One of EHRs largest failures is their inability to communicate with one another, a prerequisite to attaining the promised goals. The health care IT industry has been derelict in its responsibility to comply with standards of interoperability, and no funding mechanism has been established to develop the requisite interfaces among software systems in current use.....Diagnostic results and hospital records in one system are frequently unavailable within another electronic platform, requiring physicians to access multiple systems for these records, each system requiring a different username and password.....
...[W]e have succumbed to an all-consuming demand for privacy of health care information without considering the implications. Personal health care information provided to one’s practitioner during a visit is available only to that practitioner, ignoring the potential that the patient may present to another practitioner with a related problem....We should reasonably expect that all relevant information is immediately available to any healthcare practitioner who needs it to provide safe and effective care, regardless of facility or location, and yet we have tolerated the development of laws and IT systems that make it impossible.
Here's a link to another recent article about why physicians hate EHRs, with some overlap with KevinMD's list of problems above (see: Why doctors hate electronic health records).
Although I think that some progress is being made regarding EHRs, I doubt whether they will change in any major way over the next five or more years except perhaps for the improvement of patient portals. Hospitals have invested too much money in their current EHRs, inadequate as they are, for them to now make radical shifts to new systems. Also, the EHR vendors are making too much money for them to have any incentives to change their current systems. EHRs will remain largely as they exist today in the short-term and all of us will learn to live with this. The most interesting health IT changes will be seen with the diagnostic cloud systems that process cancer genomic information (see: New IT Model for Cancer Genomics; Diagnostic Cloud Nodes). The esoteric diagnostic labs that deploy these specialized systems in the cloud will keep a safe distance from EHRs in order to avoid their gravitational pull and inevitable dumbing-down.
How can healthcare technology better serve the aging population? One in 5 Americans will be senior citizens by 2030, and we are all living longer.
Healthcare technology can better serve the aging population by helping people live independently longer and manage chronic disease more effectively. Current trends include baby boomer caregivers who will soon move into the senior population themselves. We are only at the beginning of understanding how wearable technologies and sensors can improve health, including managing chronic disease for the elderly.
Today, caregivers and family members are looking for healthcare technology that keeps tabs on parents and grandparents. Popular solutions include medication management apps and activity sensors. Developers must keep in mind that the older population wants their opinion to be heard and considered, including privacy concerns. Read more on senior trends and technologies in my next blog post, “Healthcare Technology and Our Aging Population” on September 18th.
The following is a guest post by Vishal Gandhi, CEO of ClinicSpectrum as part of the Cost Effective Healthcare Workflow Series of blog posts. Follow and engage with them on Twitter @ClinicSpectrum and @csvishal2222.
One of the biggest trends we’re seeing in healthcare today is a shift towards high deductible plans. This shift first started as more and more employers stopped offering insurance or cut the type of health insurance they offered. This started the trend towards individuals purchasing high deductible insurance plans.
While the shift to high deductible insurance plans started well before the Affordable Care Act (ACA), the government mandated health insurance and associated health insurance exchanges (HIX) have thrown gas on the already flaming fire. What most patients didn’t realize when they signed up for insurance on the government’s HIX is that a large majority of the plans were high deductible insurance plans. This has led to a huge influx of high deductible plans entering medical offices.
What does this increase in high deductible plans mean?
This change is one of the most significant changes in healthcare reimbursement we’ve seen. High deductible plans mean a major shift in who will be paying the bill. Instead of collecting most of your money from insurance companies, your clinic will need to become expert at collecting money from patients as well. Yes, that’s right. You’re still going to have to collect from the insurance companies like before, but you’re going to have to build additional expertise around collecting payments from patients too.
While it’s true that clinics have been collecting payments from patients forever, that doesn’t mean that clinics have been doing a good job of actually collecting the money. In fact, I find practice after practice who hasn’t stayed on top of their patient collections. In the end, they often send their patient collections to a collections agency which frustrates the patients and tarnishes their name or they just write off the patient pay portion completely.
Suggestions to Improve Patient Collections
The first step to improving patient collections is to really understand the details of your patient’s insurance plan. This starts with doing an insurance eligibility check and verifying your patient’s plan details. We wrote about ways to streamline your insurance eligibility checks previously. Doing it right takes time, but with the right workflow automation solutions you can make sure that those working in your practice have the right insurance information. Once they have the right payment information, you’re much more likely to collect the payment from the patient while they’re standing in front of you at the office.
While collecting the payment from the patient while they're in your office is ideal, there are dozens of reasons why this won't happen. Some don't have the money on them. Some walk out before you can collect. Etc. How then do you engage the patient in the payment process once they've left your office? In the past, the best solution was to send out bill after bill through the US Postal Service or possibly call the patient directly. This is an extremely time-consuming and costly process that can take 60 to 90 days to obtain results. Plus, it costs hours of manpower and postage.
In the electronic world we live in, the first thing you can do to improve your patient collection process is to implement an online patient payment portal. This online payment process increases patient collections dramatically. The next-generation patient is so unfamiliar with writing checks and sending snail mail that those payments often get delayed. However, by offering the online patient payment option, you remove this barrier to payment.
The other way to improve patient collections is to use an automated messaging and collection process. This approach uses a combination of text, secure text, email, secure email, and even smartphone notifications and automated calls to ensure the patient knows about their bill and has the opportunity to pay it. Plus, customized decision rules provide a much more seamless and consistent approach to patient collections.
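As a rough illustration of what such decision rules might look like, here is a hypothetical sketch in Python. The channels, day offsets, and messages are invented for the example and are not drawn from any vendor's product.

```python
# Hypothetical sketch of a rules-driven patient billing reminder
# sequence: each rule says when to fire and over which channel.
from dataclasses import dataclass
from typing import List


@dataclass
class ReminderRule:
    days_after_bill: int
    channel: str  # e.g. "email", "text", "phone"
    message: str


# Illustrative escalation schedule, not a real product configuration.
RULES = [
    ReminderRule(0,  "email", "Your statement is ready to view online."),
    ReminderRule(7,  "text",  "Friendly reminder: your balance is due."),
    ReminderRule(21, "email", "Second notice: you can pay online anytime."),
    ReminderRule(45, "phone", "Automated call: please contact billing."),
]


def due_reminders(days_since_bill: int, paid: bool) -> List[ReminderRule]:
    """Return every reminder that should have fired by now,
    stopping the sequence as soon as the bill is paid."""
    if paid:
        return []
    return [r for r in RULES if r.days_after_bill <= days_since_bill]
```

The consistency the post describes comes from exactly this property: every unpaid account moves through the same schedule automatically, instead of depending on staff remembering to follow up.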
This movement to the empowered patient with a high deductible insurance plan is not likely to go away. Employers are happily getting out of the health insurance business and many want patients to have more responsibility over the healthcare they receive. Being sure that you have a well thought out patient collection workflow is going to be critical to the ongoing success of any medical practice.
The Cost Effective Healthcare Workflow Series of blog posts is sponsored by ClinicSpectrum, a leading provider of workflow automation solutions for healthcare. Their Invoice Spectrum and Auto Collect Spectrum products are a great way to handle the increase in high deductible plans that are entering medical offices.
What trends do you see among health IT startups?
I see hundreds of healthcare startups every year, many in the health IT space. There remains a lot of excitement in wearables and on the data collection side, but it all ties back to the ownership and control of our personal health data. Startups have taken note, and I see many interesting business models aimed at giving consumers access to their data. With all the data collection, there is a big push to utilize these data assets. Integrating genomics and incorporating predictive modeling into the handling of patient care are hot topics. The marriage of these data sets to EHR systems is inevitable as we hope to extract ever more information and gain predictive and actionable insights. Protecting healthcare data remains a strong concern, and businesses are developing more solutions around authentication, data security, and safe, dedicated cloud storage. Telemedicine in all its variants continues its growth, especially in the self-insured/high deductible market.
Exciting times to be in health IT!
Today I was lucky to finally have a long lunch with Mike Semel from Semel Consulting. Ironically, Mike has a home in Las Vegas, but with all of his travel, we’d never had a chance to meet until today. However, we’ve exchanged a lot of emails over the years as he regularly responds to my blog posts. As Mike told me, “It feels like I’ve known you for a long time.” That’s the power of social media in action.
At lunch we covered a lot of ground, mostly related to HIPAA security and compliance. As I try to process everything we discussed, the thing that stands out most to me is the “just enough” culture of HIPAA compliance that exists in healthcare. I’ve seen this over and over again, and many of the stories Mike shared with me confirm it as well. Many healthcare organizations are doing just enough to get by when it comes to HIPAA compliance.
You might frame this as the “ignorance is bliss” mentality. In fact, I’m not sure it’s even fair to say that healthcare organizations are doing just enough to comply with HIPAA. Most healthcare organizations are doing just enough to make their consciences feel good about their HIPAA compliance. People like to talk about Steve Jobs’ “reality distortion field,” where he would distort reality in order to accomplish something. I think many in healthcare try to distort the realities of HIPAA compliance so they can sleep well at night and not worry about the consequences that could come upon them.
Ever since the HIPAA Omnibus Rule, business associates have had to be HIPAA compliant as well. Unfortunately, many of these business associates have their own “reality distortion field” where they tell themselves that their organization doesn’t have to be HIPAA compliant. I don’t see this ending well for any business associate who has a breach.
The solution is not that difficult, but it does take some effort and commitment on the part of the organization. The key question shouldn’t be whether you’re HIPAA compliant or not. Instead, you should focus on creating a culture of security and privacy. Once you do that, the compliance part is so much easier. Those organizations that continue this “just enough” culture of HIPAA compliance are walking a tightrope. Don’t be surprised when it snaps.
Aetna’s health platform CarePass, announced about three years ago, met its demise at the end of August, earlier than many would have expected, sending some important signals to the “health platform” marketplace. Amid all the HealthKit and Apple Watch attention, it’s important to recall what really matters with health and health data: giving users value in a way they can trust.
According to HealthData Management:
“’The closure of Aetna’s CarePass illustrates the struggles companies in the digital health space are experiencing and facing in developing and sustaining users, and business models to scale,’ said Frost & Sullivan’s mHealth/telehealth expert Daniel Ruppar.”
The closure highlights the difficulty that health insurers have in providing tools and solutions for users, and open platforms for developers. It shows there needs to be a bigger combined value proposition and strategy than providing sources of data for insurance companies. The user’s benefit must stay central and trust must be core to the products.
At the Healthcare Unbound conference in the summer of 2013, I asked a CarePass executive what they were going to do about trust because healthcare insurers are consistently ranked near the bottom for consumers. The executive’s answer made it clear that the whole issue was a bit of an afterthought, which was predictive. After hearing about all of the great things the platform was going to do, I was left with, ‘Well, that sounds great for Aetna, but what does it do for patients beyond what other apps can already do?’
I could clearly see what CarePass as a platform did in combining all this data, but I couldn’t see an overwhelming value proposition that would convince me to share my information with an insurance company. The only way to truly get there is to pay for outcomes and for insurers to dig deep into how people want to be healthy—on the users’ terms.
The trust, while improving, is still low. These reactions may change, albeit slowly, if consumers become convinced that insurers are on their side with more accountable care models. It will take time and words backed up by actions and outcomes.
Along Comes HealthKit
The news on CarePass led VentureBeat to ask, “Is this a bad sign for Google (Google Fit) and Apple (HealthKit and Apple Watch)?”
My answer is maybe. As usual with data, particularly health data, and building a health data economy (yes, we may have arrived), it comes down to trust of who’s holding the data, and what they can or can’t be trusted to do with it.
Interestingly, whether people trust a company to keep their data safe and whether they trust the company itself turn out to be flipped upside down.
On the bright side for insurers, people appear to trust health insurers to keep their personal data secure, second-most by industry, but still low overall at 26% behind financial services firms, according to Gallup. Social networks and applications came in dead last at 2%.
Overall, there are very few industries or organizations we trust with our data, but perhaps surprisingly, these numbers get flipped on their head when we talk about which industries we trust. Trust in data security and trust in what companies will do with that data are very different. You may trust a bank to keep your money safe, but maybe that’s because you see it as their job.
On the opposite side of the spectrum, people trust technology companies as entities at 79%, but they don’t trust many of the companies to protect their data—financial services companies come in dead last.
So these are very different studies, and I don’t want to draw too many conclusions, but what it means to me is that there is a real opportunity for a technology company to manage health data in a way people trust. That’s the opportunity for Apple. It’s going to be difficult for Facebook and Google, whose business models depend on selling data.
While the timing could not have been worse given the widespread publicity of hacked celebrity iPhones and iCloud accounts, there is an opportunity here for Apple to win or lose trust. Apple hasn’t been known for selling personal data to its own ends, even if it makes money off of apps that do. In one convincing bit of information in relation to Apple Watch and Apple Pay, via mobilehealthnews:
“With Apple Pay, users can make electronic payments without Apple seeing what they bought, who they bought it from, or how much they paid. And the cashier doesn’t see the user’s name, credit card number or security number. The promise had shades of Apple’s recent ban on future HealthKit developers from selling data collected via the platform to third party data aggregators or targeted advertising systems.”
Apple is not reliant on traffic flows or user data, only indirectly through app developer success and the sale of hardware that supports those developers. Selling data has never been part of Apple’s DNA. As mentioned above, Apple smartly announced plans to prohibit app developers for HealthKit from selling or distributing data to advertisers or even storing health data in iCloud.
While that may not fill users with trust in the platform, it’s a smart move. If any company, historically, has been committed to (and has reaped the rewards from) giving consumers control over their environment and their data, it’s Apple. We’ll have to see if that’s enough trust and enough of a value proposition to get consumers to use HealthKit, but it’s a step in the right direction. Questions on whether they can control how the data are used remain open both in reality and in public perception.
Trust has three core elements: 1) consistency 2) competency and 3) alignment (do you have my back?). The problem for insurers has always been with alignment. It’s never been clear that health insurers have been on the side of the members they serve, not through any moral decay, just misaligned incentives. When consumers need the company to pay for services, and the company would often prefer not to pay (or, historically, delay payment), conflict and distrust develop.
It’s amazing how many companies forget trust when it comes to platforms, networks and user data, but it is fundamental. Facebook and Google may use my data, but the value is so high we tend to forgive and try to forget what’s happening, or tend not to dwell on it at least. I’m not so cavalier with my health data, which is 1000-fold more valuable to a hacker than credit card data (it’s used for identity theft). We all weigh perceived risks and consequences, often attempting to keep the perceived risk low in our mind’s eye, but that’s more difficult with health data, and requires a deeper commitment to trust.
I’ve found it helpful to think about three or four fundamental trust requirements for platforms to connect with people or entities:
Most organizations forget the importance of these fundamental pieces, thinking first about what users will provide to them. The parties have to know that the platform exists to be on ‘their side’ and can act as their agent. Google has elements of this (Google Maps), Apple even more so.
So ask yourself, “Will people trust me to be their agent? What do I need to do to encourage that trust?”or, alternatively, “What can I give away that will be so compelling users will forgive and try to forget?”
Many of us probably spend far too much time fooling around with our cell phones, but I previously had the sense that teenagers and college students set the curve on this. Most of the appeal of cell phones for younger people seems to be instant messaging and social media. However, I had no inkling of how much time college students actually spend on their cell phones until I came across a recent article on the topic (see: College Students In Study Spend 8 to 10 Hours Daily on Cell Phone). Below is an excerpt from it:
A new study from researchers at Baylor University has found that women college students spend an average of 10 hours a day on their cell phones, while men students spend nearly eight hours....The study found that approximately 60 percent of college students admit they may be addicted to their cell phone, and some indicated they get agitated when it is not in sight....The study, based on an online survey of 164 college students, examined 24 cell phone activities and found that time spent on 11 of those activities differed significantly across the sexes. Some functions, such as Pinterest and Instagram, are associated significantly with cell phone addiction....But others that might seem to be addictive, such as Internet use and gaming, were not, according to the researchers. The students reported spending the most time texting, with an average of 94.6 minutes a day. That was followed by sending emails..., checking Facebook ..., surfing the Internet ..., and listening to music (26.9 minutes). The study also found that women spend more time on their cell phones. While that finding seems contrary to the traditional view that men are more invested in technology, “women may be more inclined to use cell phones for social reasons, such as texting or emails to build relationships and have deeper conversations,” [according to an author of the study]. Another finding is that men send about the same number of emails, but spend less time on each....Excessive use of cell phones poses a number of possible risks for students.... “Cell phones may wind up being an escape mechanism from their classrooms,” [said one of the authors].
I guess that a key question for all of us, stimulated by this article, is whether spending X hours per day on a cell phone is useful, necessary, or even harmful. If not useful, what are the alternatives? Probably some large percentage of this cell phone time, as noted in the excerpt, is driven by boredom. Time on cell phones may well have displaced the face time with friends that preceded the development of cell phones.
The risky part of prolonged cell phone usage is when people use phones in job settings in which total attention is required. This is analogous to distracted driving, which is illegal in many states. I previously posted a note about distracted diagnostics (see: Distracted Diagnostics: Is This Really a Problem?). The latter relates to telepathology or teleradiology, when a pathologist or radiologist renders a diagnostic opinion via a handheld device, perhaps when called away from a dinner engagement or a movie. Cell phone distraction in hospitals is also probably more common than we would like to think (see: Surgeons send 'tweets' from operating room).
This summer, FDA proposed lifting regulations from certain currently regulated medical devices. This unprecedented policy shift targets devices known as Medical Device Data Systems (MDDS) and is intended to benefit the mobile app industry and companies like Google, Apple and others. The current regulatory burden for MDDS devices is Class I, 510(k) exempt. This means manufacturers have to follow a basic quality system (i.e., design controls) on par with ISO9001, and report instances of patient injury or death in addition to any product recalls to FDA.
The following is a guest blog post embodied in an abridged version of a comment submitted to FDA in response to their draft guidance.
Many EHRs and a significant number of health IT (HIT) and clinical decision support (CDS) systems are, by current law, de facto Class III medical devices because they have not heretofore been regulated and classified. Class III devices are the high-risk devices, subject to the highest level of regulatory control. Because they are new and have never been regulated (and thus embody an unknown level of patient safety risk), new products are by default classified as Class III devices. Most products that come to market are “updates” of existing solutions based on older technology. These products can claim equivalence to previously regulated devices, known as predicate devices, and are typically classified as Class II devices. After their initial regulation as Class III devices, new products for which there is no predicate device are often reclassified as Class II devices.
The FDA currently practices regulatory enforcement discretion over many HIT and CDS systems (but not quite all – e.g. Blood Bank software), leaving them in a sort of classification limbo. If properly classified, some might end up Class II, Class I, or even unregulated. (https://en.wikipedia.org/wiki/Enforcement_discretion)
In 2011 the FDA took a concrete step to rationalize regulation in the HIT space when it finalized the Medical Device Data System (MDDS) rule, basically a new Class I, 510(k)-exempt device regulation for the simplest type of HIT medical device interface: those that transmit, store, and display medical device data without significantly altering it. Since then, at least 316 MDDS devices have been listed with the FDA. (To view currently registered MDDS devices, go here and enter “oug” into the Product Code field in the query form.)
In a surprising move, in June 2014 the FDA issued the Medical Device Data Systems, Medical Image Storage Devices, and Medical Image Communications Devices Draft Guidance for Industry and Food and Drug Administration Staff (MDDS Draft Guidance) that proposes to eliminate FDA regulatory oversight of MDDS through enforcement discretion. The agency’s justification was:
“Since down-classifying MDDS, the FDA has gained additional experience with these types of technologies, and has determined that these devices pose a low risk to the public.”
The MDDS Draft Guidance did not describe what the “additional experience” was that merited the determination “that these devices pose a low risk to the public.” Separately, on public blogs, FDA officials have stated they expect HIT products to be regulated by a different agency within HHS – even though no law or authority de-classifying those products (such as EHRs) as medical devices has been passed.
We personally don't know anyone who feels the FDA and its processes couldn't be improved, but in our opinion it's the best system we have in place now. Like other HIT, MDDSs can create significant patient safety and cybersecurity risks even if their intended functionality is as a simple data pipe. Dropping regulatory oversight of MDDS devices is throwing the baby out with the bathwater. The quality and value of HIT technologies like MDDS, and of their connected EHR and CDS systems, depend on interfaces that provide trustworthy data. Otherwise, garbage inputs will produce garbage outcomes.
We submitted a comment through regulation.gov opposing the proposal to eliminate enforcement of the MDDS rule. The major points are summarized below. The entire MDDS rule can be found here, and the MDDS Draft Guidance Document can be found here. Our full comments are available here, as published on www.regulation.gov.
Our Comment on the Draft MDDS Guidance Document covers three topics:
1) Cybersecurity Risks
2) Software Defects from Complex Connected Systems
3) Known MDDS Device Defects
In any component, system, or system of systems, the security of the whole is only as good as its weakest link. FDA's “accessory rule” concept applies well in the domain of security and privacy. Any interface, as a component of a larger system, poses a potential security or privacy vulnerability. Additionally, and importantly, the functionality, classification, interface standard conformance, or intended use of an interface does not correlate with its potential to contain a security or privacy vulnerability. For example, a wireless connection is no more or less vulnerable to cyber attack just because it is intended only to receive occasional physiological data. Every MDDS, especially a wireless one, creates potential cybersecurity vulnerabilities. Design controls are a necessary, but not sufficient, means to reduce those vulnerabilities. Without them, MDDSs and everything they touch become more vulnerable.
We believe providers and clinicians under-report security and privacy violations and will continue to do so until they have additional liability protection. Thus the FDA’s collection of cybersecurity vulnerabilities is incomplete. The only argument can be over how much they miss. Post-market surveillance of cyber security should probably be beefed up for all medical devices, particularly for HIT such as EHRs.
There may well be systemic, real, or imagined reasons why providers, hospitals, and HIT manufacturers are reluctant to report cybersecurity vulnerabilities in their medical devices and HIT systems. That is why we further suggest that, in addition to the FDA continuing regulatory oversight of MDDSs, including post-market surveillance, the FDA and other agencies such as NIST, the FCC, and ONC should expand their cooperative efforts to improve the collection and analysis of cybersecurity vulnerabilities across the nation's entire HIT infrastructure. But that is a large topic best left for another article.
In 2012, the Food and Drug Administration's Center for Devices and Radiological Health, Office of Compliance, Division of Analysis and Program Operations published the Medical Device Recall Report FY2003 to FY2012. The entire report can be found here (pdf).
The report concluded (see page 18) that software design failures were the most common cause of medical device recalls and recommended expanding regulatory oversight of software medical devices. We agreed with that finding in 2012 and still agree with it now. Increasingly connected, integrated, interfaced, or interoperable systems are more complex and have more complex interactions. Therefore they are more likely to contain defects in their individual components or the systems as a whole.
Basic systems engineering tells us that the FDA's proposal to drop design controls over the MDDS “connection” part of such systems is exactly wrong. It creates a potential weak link, and it makes detecting and fixing other defects within these systems more difficult. The MDDS Draft Guidance is a step backwards for cybersecurity, software quality, and patient safety.
A search of the FDA's MAUDE database for the keyword “MDDS” returned 66 hits: reports on MDDSs, or on devices or EHRs connected to MDDSs. Yet only around 316 MDDSs have ever been listed with the FDA for commercial marketing since April 18, 2011. On the surface, this seems to be a high rate of reports. We examined a few MDDS-related MAUDE reports, MEDSUN entries, and recall letters. None of the defect descriptions contained anything surprising to someone with even a modicum of hands-on IT experience. Four selected MDDS defect reports are described below. We quote directly from the FDA databases (typos may be from the original documents) and provide a link to the original complete documents.
The company has determined that at extremely high blood glucose levels of 1024 mg/dL and above, the FreeStyle lnsulinx Meter will display and store in memory an incorrect test result that is 1024 mg/dL below the measured result. For example, at a blood glucose value of 1066 mg/dL, the meter will display and store a value of 42 mg/dL (1066 mg/dL – 1024 mg/dL = 42 mg/dL). No other Abbott blood glucose meters are impacted by this issue.
The functionality described in the recall included only the communication, storage, and display of a physiological value (blood glucose levels) from a medical device. If those functions were compartmentalized, they would constitute an MDDS. In other words, Abbott found a defect in an MDDS serious enough that it issued a voluntary recall of the device. Abbott is a large, highly respected medical device manufacturer with vast experience in design controls and post-market surveillance. We are concerned that had a similar MDDS been developed by a different company with no design controls and no experience with medical devices, this defect would likely never have been detected, and the product would not have been voluntarily recalled.
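It is worth noting that the 1024 mg/dL offset in the recall is exactly 2^10, which is consistent with the measured value being silently truncated to a 10-bit storage field. The actual firmware details are not public, so the following Python sketch is purely a hypothetical illustration of that failure mode:

```python
def store_glucose_10bit(measured_mg_dl):
    """Hypothetical illustration of the recalled behavior: a reading
    stored in a 10-bit unsigned field silently loses everything above
    1023, so 1066 is stored and displayed as 1066 - 1024 = 42."""
    return measured_mg_dl & 0x3FF  # keep only the low 10 bits

print(store_glucose_10bit(1066))  # prints 42, matching the recall example
print(store_glucose_10bit(500))   # prints 500; values below 1024 pass unchanged
```

Design controls exist precisely to catch this class of defect: a boundary-value test at 1023/1024 mg/dL would expose it immediately.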
The following three reports describe defects in systems incorporating MDDS’s, and speak for themselves.
A critically ill pt under went radiographic eval of chest and abdomen. The last name of the pt contained one apostrophe. The radiograph images could not be accessed on the ehr results mdds. It was determined that the one entering the pt’s name at the imaging vendor entered a double apostrophe, rather than one. It could not be corrected for days, once the images were found, 5 days after they were done. It took another 3 days for pacs vendor to correct this misidentification issue. The vendor’s device is defective because it allowed absurdity (there is never a name with consecutive apostrophes) and it failed to warn of the error. These mdds devices need tighter regulation, surveillance, and safety.
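The report above describes a plain input-validation failure: the system accepted a name it should have rejected and never warned anyone. As a hypothetical sketch of the kind of check the reporter says was missing (the rule and names here are our own illustration, not the vendor's code):

```python
import re

def validate_patient_name(name):
    """Reject obviously malformed name entries, such as consecutive
    apostrophes, which no real surname contains. Hypothetical rule
    illustrating the validation the MAUDE report says was absent."""
    if re.search(r"''", name):
        raise ValueError("consecutive apostrophes are never valid in a name")
    return name

validate_patient_name("O'Brien")     # accepted: a single apostrophe is legitimate
# validate_patient_name("O''Brien")  # would raise ValueError instead of
#                                    # silently creating an unmatchable record
</n>```

A few lines of validation at data entry would have prevented a days-long delay in accessing a critically ill patient's images.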
Complex case with multi organ failure was on high doses of potassium supplements and potassium sparing medications. The potassium level obtained in the lab and electronically sent to the dhr mdds had increased from 4. 0 to 4. 9 mg% over a 24 hour period of tome. The nurses were not alerted by the mdds of new results, nor did they open the mdds to check the interval change of the potassium level prior to administering 40 meq potassium chloride twice. This points out the defect in the mdds, which is its failure to notify of new results and provide meaningfully useful decision support. These devices are not safe and require oversight.
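The potassium case describes a missing new-result notification. A minimal sketch of the interval-change alert the report says the MDDS lacked (the threshold here is illustrative only, not clinical guidance):

```python
def flag_new_result(prev_value, new_value, delta_threshold=0.5):
    """Flag a new lab result for clinician review when it has changed
    by more than delta_threshold since the previous value.
    Illustrative threshold, not clinical guidance."""
    return abs(new_value - prev_value) > delta_threshold

print(flag_new_result(4.0, 4.9))  # True: the 0.9 rise from the report would be flagged
```

Even this trivial rule would have surfaced the rising potassium before two further 40 mEq doses were administered.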
Cultures were obtained from a deep skin infection involving an implanted medical device. Multiple cultures grew serratia marcescens. The antibiotic sensitivities were lost from the mdds section of the ehr, or they were never posted due to an interface failure. A work around was required to find the results, but they remain absent from the silo of the ehr where they should appear. This defect causes delays in care and adversity due to delays in pinpointing the correct antibiotic to use in this critical situation. This genre of flaw raises doubt in the health care professionals as to whether the presentation of results on any pt are accurate.
The last MAUDE report nicely summarizes the systemic risks of MDDS devices. A flaw in any MDDS whose purpose is to populate a patient record with physiological data raises doubts about the accuracy of ALL patient data in ALL EHRs. Modernizing our country’s healthcare delivery system requires EHRs and associated HIT systems that are well designed, correctly implemented, diligently operated, and trusted by payers, providers, and patients. If only a few MDDS’s are found to be significantly defective in functionality, reliability, operation, security, or privacy, then the trust that clinicians and patients place in all EHR data (including financial and demographic data) will be broken – regardless of the quality of their own particular systems.
There is no doubt that implementing design controls is a non-trivial effort, particularly when compared to the pure development cost of small mobile software applications that may perform, on the surface, similar functionality. But the country is not in need of cheap vulnerable apps and untrustworthy interfaces. Improving healthcare requires high quality, reliable, and effective software that safely and correctly interacts with other regulated and unregulated HIT systems.
The FDA has been improving its regulatory processes and supporting innovation with the Mobile Medical Apps guidance document, recognizing more standards, and issuing other published and in-process guidances and rules. We applaud the FDA for its diligent work protecting patient safety and improving its regulatory processes. However, we disagree with the MDDS Draft Guidance. We recommend that, for Systems Engineering, Cybersecurity, and Patient Safety reasons, the FDA continue regulatory oversight over MDDS-class devices.
The authors would like to thank the numerous subject matter experts who contributed suggestions, critiques, and edits to our comments and to this post but who, for professional reasons, choose to remain anonymous. You know who you are.
Respectfully,
John Denning, MHA, Lynn Haven, FL
Robert J. Morris, MD (UK), Pasadena, CA
Mikey Hagerty, Ed.D., CISSP/ISSAP, CIPP/IT, Carmel, CA
Michael Robkin, MBA, Los Angeles, CA
George Konstantinow, PhD, Santa Barbara, CA
Pictured above is the Capsule Neuron, a major component of Capsule’s MDDS.