April 17, 2014

13:04

Dear Accelarad customer,

You should have received an email from me on Monday of this week, when I provided our customers an early insight into the announcement that Accelarad is now a part of Nuance Communications. At this time, I want to provide you with some additional information and invite you to learn more.

You can read the full press release here: (Nuance Unveils PowerShare – April 17, 2014). As discussed, this new union brings together our cloud-based medical image sharing technology and Nuance’s PowerScribe radiology reporting and communication platform. The partnership will give you, our valued customer, access to Nuance’s expansive healthcare technology and professional services, while continuing to provide you with the proven software and solid relationships you have come to expect from Accelarad. With this partnership, Accelarad’s SeeMyRadiology solution has been rebranded to align with the Nuance diagnostic brand, and will be part of the Nuance PowerShare Network. To learn more about PowerShare | Image Sharing, sign up to join one of our webinars.

Most importantly, know that the products and people you have come to rely on will not change. Accelarad's leadership team and valued employees will be deeply involved in creating a smooth transition for our customers, and our focus remains on making sure you continue to receive the excellent service you deserve.

Thank you again for your support and confidence in us. We will keep you informed about any incremental changes along the way and are open to your feedback.

Sincerely,                

Willie Tillery, CEO, Accelarad 

Rodney Hawkins, General Manager, Diagnostic Solutions, Nuance

For your viewing pleasure, here is the press release:

Nuance PowerShare Network Unveiled for Cloud-Based Medical Imaging and Report Exchange
Industry’s Largest Medical Imaging Network Helps Providers and Patients Coordinate Care and Share Information Across Distances and Disparate Healthcare Systems

BURLINGTON, Mass., – April 17, 2014 – Nuance Communications, Inc. (NASDAQ: NUAN) announced today the immediate availability of Nuance PowerShare™ Network, the industry’s largest cloud-based network for securely connecting physicians, patients, government agencies, specialty medical societies and others to share essential medical images and reports as simply as people exchange information using social networks. Nuance PowerShare Network promotes informed and connected physicians and patients who can instantly view, share and collaborate while addressing patients’ healthcare needs.

“Organizations are being tasked to communicate efficiently both in and out of their networks to provide clinical insight to physicians beyond one person or office to a much broader team involved in the continuum of care,” said Keith Dreyer, DO, PhD, FACR, vice chairman of radiology at Massachusetts General Hospital and Chair of the American College of Radiology (ACR) IT and Informatics Committee. “Nuance PowerShare Network addresses the information sharing challenge physicians face today with a network that supports things we’ve dreamed of doing for years,” he adds.

Fully Connected Patients & Providers
Nuance PowerShare Network is already used by more than 1,900 provider organizations for sharing images via the cloud using open standards. Made possible through the acquisition of Accelarad, this medical imaging exchange eliminates the costly and insecure process of managing images on CDs and removes silos of information in healthcare that inhibit providers from optimizing the efficiency and quality of care they provide. Anyone can join the network regardless of IT systems in place to instantly view and manage images needed to consult, diagnose or treat patients, enabling clinicians to more seamlessly evaluate and deliver care for patients who transition between facilities or care settings.

Nuance is already used by more than 500,000 clinicians and is a critical component within the radiology workflow and a trusted partner for 1,600+ provider organizations that rely on Nuance PowerScribe for radiology reporting and communications. Healthcare organizations that use Nuance PowerScribe, a group that produces more than 50 percent of all radiology reports in the U.S., can immediately leverage their existing investment and begin sharing radiology reports along with images, such as X-rays, MRIs, CT scans, EKGs, wound care images, dermatology images or any other type of image. This simplifies secure health information exchange between multiple providers, patients and disparate systems without costly and time-consuming interfaces, CD production or the need to install additional third-party systems.

“The challenge of sharing images with interpretive reports is something we’ve heard about consistently from our customers and EHR partners, and we know Nuance PowerShare Network will overcome this major obstacle, helping physicians treat patients more efficiently and effectively,” said Peter Durlach, senior vice president of marketing and strategy, Nuance Communications. “This nationwide network, one that is fully integrated into the EHR workflow and already connected to approximately half of all clinicians producing diagnostic imaging information, is a ground-breaking solution that delivers immediate benefits at an unprecedented scale to our healthcare system.”

“Integrated image and report sharing helps us deliver quality care and drive down costs, especially when patients transfer from one facility to another. Whether at their desktop or on their mobile device, our physicians can see the study that was done along with the interpretive report, which provides the information they need to treat the patient and avoid duplicate testing,” says Deborah Gash, vice president and CIO, Saint Luke’s Health System in Kansas City. “By integrating this with our EHR, PowerShare will enable physicians to manage inbound imaging through one point of access and login. Physicians in our 11 hospitals and 100-mile radius referral network see this cutting-edge technology as a way to deliver the highest level of patient care,” she adds.

To learn more about the PowerShare Network and the new image sharing solution, visit http://www.nuance.com/products/PowerShareNetwork to join one of our webinars. Connect with Nuance on social media through the healthcare blog, What’s next, as well as Twitter and Facebook.
Definitely an interesting constellation of services! I wonder where this might lead. Ironically, Rodney is also an old friend from the AMICAS days...

April 12, 2014

15:02
A long time ago (November 2005, to be exact), sitting in a radiology department far, far away from most of you, I bemoaned the problem of the "Portable Patient" in one of my early AuntMinnie.com articles:
Of the thousand daily frustrations I experience as a radiologist, perhaps the most painful is that of the "portable patient." You see, patients migrate from hospital to hospital, from clinic to clinic, and from office to office. They may be searching for a second opinion, a superspecialist, someone who will give them the particular answer they seek (some want to hear good news, some prefer bad news), convenience, drugs, or some combination of the above.

As often as not, they acquire a mountain of imaging studies along the way. When asked why they had a particular study at a particular site, the answer is invariably, "My doctor told me to have it there."

Add to that the dependence on our ERs for emergent (or maybe just impatient care, as I like to call it), and the ER's love of imaging studies. Put them together and you've got a collection of the patient's imaging studies spread across a city or even a state.
I was pretty smart back then, identifying a problem that many folks far wiser than I have been trying to solve since. And last year, I authored a follow-up article:
I've introduced you to a portable patient, and you can see what happened to her because no one knew about the examinations she had already undergone. She was irradiated, magnetized (probably less of a problem), and scared to death (arguably more damaging than radiation) because we have no way to connect the dots of her various studies.

Well, that isn't quite true. We do have ways -- we just aren't using them... Many years ago, when our old PACS needed replacing, I suggested to the IT types that the three hospital systems in our average town in the South combine efforts to create a single citywide PACS to serve all three hospitals and, particularly, all of their patients. I was told by the illustrious chief information officer that we couldn't even think of working with one of the other hospitals because it was "suing us" (which wasn't quite a lie ... they were challenging a certificate of need application). Millions of dollars and patient welfare down the toilet over C-suite egos.

There were and are other approaches. As an alternative to a central repository, connecting one PACS to another isn't that hard. The best way to do this -- and fulfill all HIPAA requirements in the process -- is to use an image-sharing system such as lifeImage (my personal favorite by a mile).

Don't even bother to suggest that CD-ROMs solve anything. They don't. They get lost, they get broken, they don't always load, the patient forgets to bring the disk, or the original imaging site forgets to send it, and darn, they're closed today...

At one of the clinics we staff, the clinicians come at me at least twice a day, every day, with an outside CD. After three years, I finally was able to convince the powers that be to load the damn things into PACS and merge the data with local exams. But the clinicians don't want to bother with waiting for the disks to load -- they want results now. In my opinion, CDs aren't even worthy of being drink coasters, given that huge hole in the middle. (And their older PACS rejects a significant percentage of the disks anyway.)

{snip}

Here's where I'm going to anger a lot of people, and this is of course why you like to read my rantings. The following is something that needs to be said, however, and I'm going to say it.

Given that ...
  • Not knowing that the patient has had prior studies leads to unnecessary imaging
  • Unnecessary imaging may expose the patient to unnecessary radiation, costs, and anxiety
  • Unnecessary radiation is bad for you, as is anxiety
  • We have ways to share prior studies
... then it stands to reason that today, in the 21st century, shirking our responsibilities to the patient in this aspect of medical imaging is malpractice. Yes, I used the "M" word. But that's exactly what it is. We are not doing what we should -- and what we must -- for patient care. It is high time to apply technology that has been around for a long time to unify patients' records, imaging and otherwise.

We are harming our patients out of ignorance, out of hubris (why would they go to any doctor/hospital/clinic other than me/mine?), and out of greed (I get the revenue if I repeat the study!). This is completely unacceptable...
Forgive the massive regurgitation of the last post, but you must acquire (or reacquire) the mindset of the necessity of image-sharing.

If you wondered if exams were really repeated under the "portable patient" scenario, let me assure you that they are.

A study from western New York showed:
(A)pproximately 90% of duplicate and potentially unnecessary CT scans were ordered by physicians who have little to no usage of the HIE when combining slices of users with less than 500 queries in 18 months. An opportunity therefore exists to reduce the number of duplicate CT scans if the physician is utilizing HEALTHeLINK to look up information and recent test results on their patients prior to ordering more tests. In addition, this also highlights a need to get more physicians participating and using the HIE in a meaningful way as more than 70% of duplicate CT scans were ordered by physicians who did not query HEALTHeLINK.
Another study from the University of Michigan found:

RESULTS:
In our sample there were 20,139 repeat CTs (representing 14.7% of those cases with CT in the index visit), 13,060 repeat ultrasounds (20.7% of ultrasound cases), and 29,703 repeat chest x-rays (19.5% of x-ray cases). HIE was associated with reduced probability of repeat ED imaging in all 3 modalities: -8.7 percentage points for CT [95% confidence interval (CI): -14.7, -2.7], -9.1 percentage points for ultrasound (95% CI: -17.2, -1.1), and -13.0 percentage points for chest x-ray (95% CI: -18.3, -7.7), reflecting reductions of 44%-67% relative to sample means.

CONCLUSIONS:
HIE was associated with reduced repeat imaging in EDs. This study is among the first to find empirical support for this anticipated benefit of HIE.
That's a lot of repeat studies. And a lot of excess radiation. We can wait for the study to be delivered from the outside place, or the outside CD to be loaded ("Film at Eleven") or we can redo the study. None of these choices are optimal. We can all see that.

So... now that you've gone through the indoctrination, we can proceed.

I've known Hamid Tabatabaie for many years, starting back when he was CEO of AMICAS. (I guess that dates me. Like Mrs. Dalai's grandfather, who died at 93 after having outlived 5 of his internists, I've gone through two subsequent AMICAS CEOs and I'm on my second or third Merge CEO. Justin, you'd better hope I get out of this business soon!) Hamid is one of the visionaries behind web-based PACS, of which AMICAS Merge PACS is still one of the best examples. Today, he heads lifeIMAGE, my favorite among the image sharing companies out there.

The story is making the rounds that Nuance, one of my least favorite companies, is diving into this arena, with the purchase of Accelarad. From Hamid's blog (I guess everyone has one now):
I spoke with a friend today who is now the sixth person to have heard rumors about Nuance entering the image sharing market. He thinks it will announce the acquisition of a small Atlanta-based company imminently. I know the target company rather well, think highly of the founders, and I’m happy to see them finally reap some benefit from their 15-year-old startup odyssey. They started out as a small PACS company and then carved out a niche by selling data center based teleradiology PACS, which I think delivers the great majority of its $6M or so annual sales.
This little company is apparently Accelarad. More on them in a moment. Back to Hamid:
We (lifeIMAGE) started out working with innovators and early adopters who believed in our cause. We believe in eliminating duplication of imaging, avoiding delays in care and excessive radiation, and improving quality of care for patients. To realize our goal, we build software that helps make medical images part of a patient’s record and helps physicians access imaging histories conveniently, from any setting. We’ll soon announce our fifth anniversary as a well funded, privately held company, with many remarkable results that make our team very proud...

..(I)mage sharing for serving radiology, with 25,000 or so US radiologists, where Nuance has its major presence, has been around for a long time. Innovations in teleradiology are well past their prime, so, we at lifeIMAGE do not see a disruptive opportunity to innovate in that area. We are focused on the far broader need, which exists among large health systems that need to avoid the cost and problems associated with repeat imaging orders. Their ordering physicians, our end-users, are non-radiology image intensive specialists who need access to patients’ imaging histories in order to reduce the rate of repeat exams. 
The cure for the portable patient indeed.
Recently, I’ve been fascinated with what professor Everett Rogers called “the law of diffusion of innovation.” It basically spells out that there is a point at which an innovation reaches critical mass. “The categories of adopters are: innovators, early adopters, early majority, late majority, and laggards.[1]” The early majority buy into a technology when it’s been well vetted by innovators and early adopters first. Every innovative and disruptive company looks for the sign that its technology has started to be adopted by the “early majority.” Nuance’s entrance into the image sharing market is an indication for me that the market is getting ready for broad adoption, validating what we already see in the lifeIMAGE customer statistics. Professor Rogers suggests that once 16% of the market has signed up for a technology, that’s when the early majority starts to adopt. Current lifeIMAGE customers represent nearly 16% of all US physicians...

lifeIMAGE is the most utilized image sharing network, designed for use by physicians across a wide range of clinical disciplines—neurology, orthopedics, cardiology, oncology, surgery, etc. Our position is unique in that our engine of innovation is fueled by this population of doctors, who encounter patients with outside imaging histories on daily basis. We also help providers with patient engagement strategies and lead the way in providing access to patients who can in turn share their imaging records with providers of their choice. So, indeed new market forces may very well validate the market and expedite adoption of our disruptive and expansive technology, innovation for which is guided by multi-disciplinary specialists, including radiologists....

When I was CEO of AMICAS, our team spent some time studying the concepts around disruptive technology. Its definition in Wikipedia is, “A disruptive innovation is an innovation that helps create a new market and value network, and eventually disrupts an existing market and value network (over a few years or decades), displacing an earlier technology.” That is what our web-based PACS was back in 1999.
To me, being rather more concrete than some, a "disruptive" technology is one that interrupts my workflow, and nothing could fit that definition better than what Nuance is really known for: Speech Recognition, also incorrectly known as Voice Recognition. Here we have a technology that displaces the human transcriptionist, freeing the hospital from the tyranny of employing said human and paying their salary and benefits. It dumps the work of transcribing and editing onto the radiologist with no increase in pay for the effort. And it barely works. A friend who is totally enamored with SR tried to show me how wonderfully it functions in his enterprise. I watched him focus his entire attention onto the report screen, which was three monitors away from the radiographic image he was supposed to be interpreting. Yah, this is great and wonderful stuff. Now, it does speed things along. My friend claims to be able to read 300 exams in 8 hours with a <1% error rate because of his beloved SR. I'll simply say that it wouldn't work that well in my hands.

I'm digressing, but for a reason. Nuance and the other SR vendors have made inroads into hospitals and other imaging emporiums with their disruptive technology. They ride in on the white horse of decreased turn-around time (TAT) which warms the cockles of the administrative types who live and die by picayune metrics like that. In addition, they convince these folks that it's CHEAPER to have the computer do the job than a cadre of benefit-sucking humans, and that's all they need to say.

I'm sure Nuance wouldn't enter the image-sharing market if they didn't think it would be lucrative. Few in this business (including me) do things for free out of the goodness of their hearts. As Hamid implies, Nuance's entrance to this space validates the concept, and I think validates lifeIMAGE as well, which I maintain does it better than anyone.

Accelarad seems to have the basics down, and Nuance has apparently made the GE-like choice of buying the technology en bloc rather than developing its own. Fine with me. Here's their description:

Our medical imaging solution combines the ease of social networking with the clinical precision and security that medicine demands, making medical image sharing with patients, colleagues and other organizations easier than ever. Accelarad allows you to quickly and securely upload, access, manage and share medical images from any Internet-connected computer, mobile device or via our app. So you have images and reports from any originating institution, physician or system at your fingertips from a single portal, allowing you to focus on what you do best–delivering patient care.



They say all the right things, and I'm sure the product does what it says it does. However, I'm equally sure that lifeIMAGE does it better:



Don't just take my word for it. Look at their website and arrange a demo.

In many ways, Nuance's entry presents an opportunity for lifeIMAGE to get its foot into (or back into) doors that might otherwise be closed. I've tried to become a lifeIMAGE customer. I believe in their system, and I know most of their people, many of whom brought me AMICAS years ago. But I cannot convince those who control the purse strings that image sharing is a critical necessity. They see that lifeIMAGE has a cost associated with it, nominal per patient though it is, which can be eliminated by someone sticking the CD-ROM that came taped to the trauma patient into a workstation. IF it works. IF it came at all. But happily, if there wasn't a CD-ROM to be found, well, gee, we'll just have to rescan the patient and CHARGE for the privilege. In other words, image sharing LOSES them money on both ends. But it is still best for the patient, and I'll stick to my inflammatory statement above: it is malpractice NOT to utilize it.

It may be that with Nuance pushing the concept using the sales force that sold the bean-counters on SR, proper consideration will finally be given to image sharing at places that shunned it before. Then, we can have the real discussion as to which company does it best. I've had many an argument with those who say only the large PACS companies will survive. In the image sharing space, there are no large companies as yet, although Merge's iConnect and Honeycomb are good starts. The entry of Nuance into the field could be a game changer...for the company that does it right. We'll see. Film at Eleven.

ADDENDUM

I am without a doubt getting old and I'm not completely on my game, the game of paranoia, that is. Normally, I would have seen this possibility, but it took a friend to analyze the data and inform me of the consequences. Here is what he said (he wishes to remain anonymous for obvious reasons...):

I pushed hard for an "outside study" solution. We were regular victims of Philips PACS non-DICOM CDs every night from a particular hospital. We looked at both lifeIMAGE and Accelarad, went with the latter, and it works well for us. However, the Nuance purchase suggests to me that they want to be a complete 3rd party reading group, and replace groups like Optimal. Once they can share images well, dictate reports and disseminate results, they become a radiology department for anyone. I'll bet they start advertising over-reads/consults by big institution names before it's all over.

It just looks to me like they are assembling the pieces of the puzzle to become "Uber Radiology". The video mentions/shows a graphic for telemedicine; that screams 3rd party. Any site can be set up to just put their system as a destination on each modality. Boom, you send them your images, they can be read. It's not even a "PACS to PACS transfer" but a replacement PACS. No onsite storage is needed, just the Nuance cloud... oops, until the internet is down and you don't have your images anywhere...
Hey, just because you're paranoid doesn't mean they aren't out to get you...

And Yet Another ADDENDUM

Interesting coincidence... Nuance just hired someone to "document, share and use" clinical information, per their recent press release:
BURLINGTON, Mass., – April 7, 2014 – Nuance Communications, Inc., (NASDAQ: NUAN) today announced that it has named Trace Devanny as president of Nuance’s Healthcare business. Mr. Devanny will oversee Nuance’s largest division and lead its efforts to deliver a more seamless approach for healthcare professionals to document, share and use clinical information. He will report to Paul Ricci, Nuance chairman and CEO.

“Our healthcare business presents a significant opportunity for innovation, leadership and growth in today’s dynamic healthcare environment,” said Paul Ricci, chairman and CEO of Nuance. “As a healthcare technology industry veteran, Trace brings a powerful skillset that combines operational excellence, team development, customer engagement and a strategic vision. I look forward to working with him to lead Nuance and our healthcare business through its next phase of growth.”

Mr. Devanny has more than 30 years of executive leadership experience in the healthcare IT industry, having held executive leadership roles in multi-billion dollar, international healthcare organizations. He joins Nuance from TriZetto Corporation, where he served as chairman and CEO. At TriZetto, he drove revenue and bookings growth in excess of 20 percent and led the organization through a business and sales model transition. Previously, he held several executive roles at Cerner Corporation, most recently as president, over an eleven year period where he was instrumental in growing the company and revenues from a $340 million business in 1999 to a $1.8 billion healthcare IT leader. Earlier in his career, Devanny was president and COO of ADAC Healthcare Information Systems and held a series of executive positions with IBM and its healthcare business. He holds a BA degree from the University of the South.

“Improving quality of care while driving down healthcare costs is one of the most significant challenges that providers face today. Nuance is advancing these initiatives through innovative solutions that make it easier for providers to deliver patient care,” said Trace Devanny. “I look forward to working with this talented and ambitious organization to build on our momentum and make an even greater impact on the healthcare system at this important point in its history.”
Only the paranoid would put this together with my friend's speculation and see anything interesting...  What? Me? Paranoid? NEVER!

April 1, 2014

13:23

You may recall my earlier post declaring my retirement within two years.

Fuggedaboutit.

I had attempted to start my retirement clock, and we had some long discussions on the topic. In the course of the discourse, various factors were mentioned, introduced, revealed, discovered, or otherwise made to appear which had not been present before. The cost for officially entering the short-term glide-path became more onerous than I thought it should.

Therefore, my request is now withdrawn. When my numbers and the stars align properly, I'll be giving my 90 day notice. That might be tomorrow, or it might be 10 years from now. So much for a heads-up to allow for planning, hiring, etc.

And this time, you may disregard the date of the post.

April 17, 2014

10:00

Brain Buzz 2014

I admire those who can explain the complex simply. In researching the latest developments in neuroscience and technology, I discovered the brilliant Dr. Story Landis, a neurobiologist and the Director of the National Institute of Neurological Disorders and Stroke.

Dr. Landis is part of the leadership for the President’s new “BRAIN Initiative,” a Grand Challenge of the 21st Century, and provides an easy overview of the latest advances in neurotechnology in this video (starting at 5:05).

She presented at the Society for Neuroscience’s annual meeting as part of a distinguished panel to discuss the new brain initiatives in the United States and in Europe for 2014.

What is the U.S. BRAIN Initiative?

The acronym, BRAIN, stands for Brain Research through Advancing Innovative Neurotechnologies.

According to the National Institutes of Health, “By accelerating the development and application of innovative technologies, researchers will be able to produce a revolutionary new dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space.”

The goal of the initiative is to develop tools for researchers to discover new ways to treat, cure, and even prevent brain disorders. Through these technologies, researchers will explore “how the brain enables the human body to record, process, utilize, store, and retrieve vast quantities of information, all at the speed of thought.”

Why Don’t We Have a Consistent Map of the Brain?

Neuroscientists need a consistent map of brain anatomy, but there isn’t one yet. Why? According to the Kavli Foundation, one of the partners of the initiative, “In the fast-moving field of neuroscience, researchers constantly reorganize brain maps to reflect new knowledge. They also face a vocabulary problem. Sometimes, different research groups will use several words to describe a single location; other times, a single word may mean different things to different researchers. Nor do maps remain consistent when moving across species.”

Advances in Neurotechnology to Visualize the Brain

The Connectome

A Connectome is a structural description of the brain first proposed by Olaf Sporns. The Human Connectome Project (HCP) is a consortium comprehensively mapping brain circuitry in 1,200 healthy adults using noninvasive neuroimaging, and making their datasets freely available to the scientific community. Get the HCP data here.

Four imaging modalities are used to acquire data with unprecedented resolution in space and time. Resting-state functional MRI (rfMRI) and diffusion imaging (dMRI) provide information about brain connectivity. Task-evoked fMRI reveals much about brain function. Structural MRI captures the shape of the highly convoluted cerebral cortex. Behavioral data provides the basis for relating brain circuits to individual differences in cognition, perception, and personality. In addition, 100 participants will be studied using magnetoencephalography and electroencephalography (MEG/EEG). – HumanConnectome.org


Brainbow

Brainbow is the process by which individual neurons in the brain can be distinguished from neighboring neurons using fluorescent proteins. The idea is to color-code the individual wires and nodes, and was developed at the Center for Brain Science at Harvard.


CLARITY

CLARITY (Clear, Lipid-exchanged, Anatomically Rigid, Imaging/immunostaining compatible, Tissue hYdrogel) is a method of making brain tissue transparent, and offers a three-dimensional view of neural networks. It was developed by Karl Deisseroth and colleagues at the Stanford University School of Medicine.

The ability for CLARITY imaging to reveal specific structures in such unobstructed detail has led to promising avenues of future applications including local circuit wiring (especially as it relates to the Connectome Project). Pictured is a mouse brain with CLARITY.


Optogenetics

Optogenetics uses light to control neurons that have been genetically sensitized to light. Optogenetics is credited with providing new insights into Parkinson’s disease, autism, schizophrenia, drug abuse, anxiety and depression.

A Revolution is Taking Place in Brain Science

Also part of the leadership for the BRAIN initiative is neuroscientist William Newsome of Stanford University:

Most of us who have been in this field in the last few decades understand that there is a revolution going on right now, so these tools we’ve mentioned already did not exist 8 years ago, and some did not exist 6 months ago. The pace of technological change is so rapid right now that those of us who were traditional experimental scientists say, “Whoa, what does it even mean to be an experimental scientist in this day and age?” We have to totally rethink what experiments are even possible, and it opens up vistas that were unimaginable 10 years ago.

Dr. Newsome recently wrote about the Initiative in JAMA Neurology:

“Missing, however, has been an understanding of how the many millions of neurons associated with a perception, thought, decision, or movement are dynamically linked within circuits and networks. Even the simplest perceptual task involves the activity of millions of neurons distributed across many brain regions. How simple percepts arise from patterned neural activity and how the resulting percepts are linked to emotion, motivation, and action are deeply mysterious. In the past, answers to these questions seemed out of reach.”

New Brain Health Registry

To get a deeper understanding of the brain before and after disorders, neuroscientists from the University of California San Francisco have established a new “Brain Health Registry.” Their goal is to address one of the biggest obstacles to cures for brain disorders – the costs and time involved in clinical trials. To register your brain, participate in games, and help scientists, read more in the FAQs.

The Brain and Disorders by the Numbers

The average adult brain is about 1,300 to 1,400 grams, or 3 pounds, and is about 5.9 inches or 15 centimeters long. It is often quoted that there are 100 billion neurons in the human brain, but Dr. Suzana Herculano-Houzel of Brazil recently discovered there are 14 billion fewer. According to her research, the human brain has 86 billion neurons, or nerve cells.

What is the impact of brain disorders in the U.S.?

According to the World Health Organization, brain disorders are a leading contributor to the global disease burden, and the fourth highest for Western developed countries. About 50 million people in the U.S. suffer from damage to the nervous system, and there are more than 600 neurological diseases.

Psychiatric Illness – About 1 in 4 American adults suffer from a diagnosable mental disorder in any given year, according to the NIMH.

Alzheimer’s – In 2014, there are 5.2 million people in the U.S. with Alzheimer’s Disease, according to the Alzheimer’s Association. As the Baby Boomer generation ages, between 11 million and 16 million people are expected to be affected by 2050.

Parkinson’s – The Parkinson’s Foundation estimates 1 million Americans live with Parkinson’s Disease.

Autism – One in 68 children in the U.S. is affected by Autism Spectrum Disorder, a 30% increase from two years ago.

Innovation Requires a Multi-Disciplinary Approach to Research and Technology

The BRAIN Initiative involves a number of government agencies and private partners fostering a multi-disciplinary approach to research and technology. Specifically, it is a unique collaboration across disciplines involving the National Institutes of Health and the National Science Foundation. Learn more in this video with Dr. Tom Insel, Director of the NIMH, and Dr. Fleming Crim of the NSF, as they discuss exploring the connections between the life sciences and physical sciences in understanding the brain.

Call to Action from the White House

Through a Call to Action, the White House has asked to hear from companies, health systems, patient advocacy organizations, philanthropists, and developers about the unique activities and capabilities underway that could be leveraged to catalyze new breakthroughs in our understanding of the brain.

Do you have an idea? You have until May 1st to send your ideas to: brain@ostp.gov. 


April 8, 2014

10:41

The month of April has brought with it another mass shooting at Fort Hood in Texas. This is the second mass shooting at the Army base since 2009 and according to Mother Jones, the 67th mass shooting in the U.S. since 1982.

Following mass shootings, two common public discussions seem to arise—gun control and mental health. Usually the two get blended together and it becomes about those with mental illness accessing guns. We’re not going to talk about that. In fact, we’re going to take guns out of the equation completely and strictly talk about mental health.

While mental illness is often brought up in the context of these shootings – it's suspected shooters Jared Loughner, James Holmes and Adam Lanza may all have had some form of mental illness – mental illness does not necessarily drive those who deal with it to commit headline-grabbing crimes. Mental illness can be much more subtle and much more common than these incidents lead us to believe. In fact, the National Institute of Mental Health estimates about one in four adults ages 18 and over suffer from a diagnosable mental disorder in a given year.

Mental health disorders include everything from depression, bipolar disorder, suicide risk, schizophrenia, anxiety, obsessive-compulsive disorder, PTSD, and eating disorders to personality disorders. Yet, despite the common nature of mental illness, those experiencing mental health issues may have difficulty getting treatment.

Mental illness still carries a huge stigma, causing embarrassment for those with mental illness that often prevents them from reaching out for help when they are struggling. Fear of being judged as unstable, potentially violent or “crazy” can prevent those with mental illness from getting the help they need. Access to mental health care also has historically been a challenge.

Because coverage for mental health issues was not on equal standing with coverage for other medical issues, Congress passed the Mental Health Parity Act in 1996. This law prohibited large employer-sponsored group health plans from imposing higher annual or lifetime dollar limits on mental health benefits than those applicable to medical or surgical benefits. In 2008 Congress passed the Mental Health Parity and Addiction Equity Act to close some of the holes that were in the original parity act.

Still, just as with other medical issues, factors like living in a rural area, income, and insurance can hinder access to care.

Technology is one possible way to help break down stigma and barriers to care, and can provide a tool to help raise awareness and build support. Here are some ways the tech community can help those with mental illness.

Therappy is an online community of discussion forums. Therappy’s goal is to become the leading community and source of information for everyone involved in mental health technology. Therappy is a new community (it just launched in March), and it is looking for those with an interest in how technology can affect mental health to participate in the discussions.

Target Zero to Thrive is the Depression and Bipolar Support Alliance’s new social media campaign that runs during April. The campaign “challenges mental health care professionals, researchers, and individuals living with or affected by mood disorders to raise treatment goals to complete remission—to zero symptoms.” As the organization points out, cancer treatment sets a goal for a patient to become cancer free, and the same standard should apply to treatment for those with mood disorders. Reducing symptoms is not enough; a target of zero symptoms is what’s needed for patients to thrive. By visiting the Target Zero campaign site, you can find ways to support the campaign this month.

WeCounsel is an online counseling website for mental health care providers, patients and healthcare organizations. It provides users with a HIPAA-compliant telehealth platform that connects mental healthcare providers to their clients online. The website allows secure and confidential videoconferencing for online counseling sessions.

Getting the word out about mental health resources was the focus of a recent challenge sponsored by Johnson & Johnson. Participants were tasked with coming up with ideas to increase awareness and use of mental health services for depression and anxiety disorders. The winners of the challenge were Tulane University graduate student Alejandra Leyton and University of Maryland medical student Veena Katikineni. The pair proposed a solution called MHealth for Mental Health, which is a free SMS text service that sends “relevant information to people between the ages of 15-49 years old who present with symptoms of depression or anxiety, as well as the community at large with the hope that members will refer one another to the service.”

Smartphone apps are another possible tool for improving mental healthcare. It should be noted that while many exist, few have hard scientific evidence to back up their claims of effectiveness, say researchers at the Black Dog Institute at the University of New South Wales in Sydney, Australia.

But some users already swear by them. A recent New York Times article delved into the use of mindfulness smartphone apps to combat anxiety. The ease and convenience of use is a main part of the appeal of these apps.

If you want to join in the mental health discussion on Twitter you can follow the hashtags #mentalhealth #mhtech #mhchat or #mhsm.


April 3, 2014

9:15

2013 significantly changed the context of the healthcare security and privacy conversation. From the Snowden NSA revelations to the HIPAA Omnibus Rule, and from changing breach characteristics to connected devices, mHealth, the Internet of Things, and the increasing use of cloud and corporate BYOD policies, one thing is clear: security by obscurity equals no security at all. The burden of protecting PHI is now spread across all data holders, patients, providers and payers alike. Outlined below are some of the unique security issues that will need addressing as healthcare technology moves into a data analytics mindset.

Breach Characteristics

More than 7 million patient records were exposed in 2013 alone, marking a perceived 138% increase over reported 2012 healthcare data breaches. Expect to see a change in how breaches occur, and keep in mind that an uptick in breach notifications doesn’t necessarily imply an uptick in actual data breaches. Everyday PHI breaches of years past went largely unnoticed, whereas now technology helps track and log access. 2014 will see a new focus on targeted identity theft and less focus on lost laptops and stolen hard drives. Human error still accounts for 75 percent of all healthcare data breaches, but medical-related identity theft accounted for 43 percent of all identity thefts reported in the United States in 2013.

Federal regulators are planning a more permanent HIPAA audit program to support the 2013 HIPAA Omnibus Rule, and the industry can expect increased scrutiny for violations pertaining to inappropriate disclosure of data and denial of patient access. What has not yet been directly addressed is whether the NSA has accessed, reconstructed or inferred any personally identifiable information covered by HIPAA, such as data held by Google, Microsoft and Apple or collected through mobile games, and how a BAA would hold up in such a data collection scenario. Currently, cases are being heard regarding the warrantless access of state-controlled health databases by other federal agencies, and the verdict has been in favor of patient privacy.

Patient Best Practices Awareness

In other sectors, user data purging and security tools are entering the mainstream. From apps that help consumers navigate terms of service and platform data-deletion shortcuts, to password managers and tools to avoid search and web tracking, users are gaining control of their personal information. But when it comes to healthcare, how common is it to leave a credit card on file, and how often do patients really check their charts for errors?

The Internet of Things and connected reality, as they play into mobile and personal health apps, add another layer to patient security awareness. Malware attacks through network-connected appliances such as refrigerators, HVAC systems and media centers have been of concern recently, and they present an unsuspected entry point into a home network. What used to be as simple as using a WPA key on a home router and not handing out an SSN is now a different conversation. Enterprise security has long favored an onion-type approach, or defense-in-depth, but that’s far from the case with personal information security. And the question remains: is defense-in-depth even effective in the personal security space, given its shortcomings in enterprise IT?

PHI in the Cloud

Healthcare IT is finally trusting cloud storage and computing. As of 2013, 30% of healthcare organizations are leveraging cloud technology, and nearly twice that many are confident in the future of cloud security. Other industries have proven that cloud computing can be a safe, economical, collaborative and scalable approach to enterprise data management problems. While cloud security will garner much of the spotlight for the next several years, the privacy aspect of distributed data liquidity must be addressed.

Currently, there are no HIPAA restrictions on the use or disclosure of de-identified health data, even though 87% of all Americans can be uniquely identified using only zip code, birthdate, and sex. PHI is currently, and will increasingly be, sold to third-party data warehousers, insurers, pharma, marketers, researchers, and more. Current standards for anonymized data do not prevent positive re-identification. This is the conversation the healthcare industry, and patients, should be having in 2014 regarding cloud computing.

Corporate BYOD

Sorry, but that cat left the bag 5 years ago. Employees are using their personal devices at work, regardless of policy. The best bet to mitigate BYOD security risks is to address them head on and support secure solutions that enable users’ workflows. Secure SMS and texting have been solved. HIPAA-compliant platform-as-a-service is a thing. There are mobile apps addressing medical imaging, rounding, clinical diagnosis and EHR integration, and countless vendors are developing platform-down solutions for providers.

Beyond mobile security and BYOD policy, the issue will be how breaches on these devices are reported and analyzed. Currently, the HIPAA Wall of Shame classifies all mobile device breaches under the catch-all “Other Portable Electronic Device,” which, as mHealth really enters the mainstream, will be a nearly useless designation.

Mobile Health Security

In this context, mHealth refers to medical apps used by patients, not wellness/fitness apps or clinical practice and reference apps. Current efforts in the private sector to certify mobile health applications have failed, largely due to a lack of understanding around mobile health security. Mobile apps and devices come with complex challenges not seen elsewhere in healthcare, particularly around workflow data integration, security and user experience. Two camps have emerged: platform-down apps such as those from athenahealth and Greenway, and independent shops like AliveCor and Glooko that have yet to meaningfully integrate with the major vendors. The third obvious play would come from the valley tech giants, but despite rumors, nothing of substance has been shipped.

While certain security best practices should never be skipped (encryption, SSL, passkeys, etc.), user experience should come first and foremost. Security is nearly insignificant if no one uses an app, and patients will not tolerate poor design. Many questions remain regarding shortcomings of FDA mHealth software regulation. Are medical providers the best individuals to evaluate an mHealth app for security and patient usability, and how might the design, developer and infosec communities better help educate the medical community? It will be important to address provider shortcomings in prescribing and recommending patient-facing mHealth tools, especially around efficacy, privacy and security.

Here are the chat topics I proposed for the #HITsm chat on April 4. I would love to hear your feedback on the topics, or any other related issues, below in the comments.

  1. Does theft of your electronic health record cause more concern than theft of other private info?
  2. Should there be different security requirements for govt access to PHI data vs others?
  3. How can the health IT infosec community help journalists/consumers/patients evaluate mobile apps and enterprise health IT solutions?
  4. Are docs qualified to RX and recommend health apps? How can mHealth be transparent regarding PHI risks?
  5. Should patients be allowed to opt out of the sharing of their anonymized PHI data if used for profit? If so, how?

January 6, 2014

16:11
GNUmed now supports the following workflow:

- patient calls in asking for documentation on his back pain

- staff activates patient

- staff adds a few documents clearly related to episodes of
  back pain from the document archive to the patient's export area

- staff writes inbox message to provider assigned to patient

- provider logs in, activates patient from inbox message

- provider adds a few more documents into the export area

- provider screenshots part of the EMR into the export area

- provider includes a few files from disk into export area

- provider creates a letter from a template and
  stores the PDF in the export area

- provider notifies staff via inbox that documents
  are ready for mailing to patient

- staff activates patient from inbox message

- staff burns export area onto CD or DVD and
  mails to patient

- staff clears export area

Burning media requires both a mastering application
(like k3b) and an appropriate script, gm-burn_doc
(like the attached), to be installed. Burning the
directory passed to the burn script onto some medium
produces an ISO image like the attached.
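
For illustration only, here is a minimal sketch of what such a burn helper might look like (purely hypothetical, not the attached script), wrapping genisoimage and wodim:

#!/bin/sh
# hypothetical gm-burn_doc-style helper: GNUmed passes the export-area directory as $1
EXPORT_DIR="$1"
ISO_FILE="${TMPDIR:-/tmp}/gnumed-export.iso"
# build an ISO9660 image with Rock Ridge and Joliet extensions so filenames survive
genisoimage -R -J -o "$ISO_FILE" "$EXPORT_DIR"
# write the image to the first optical writer; k3b could be used interactively instead
wodim -v dev=/dev/sr0 "$ISO_FILE"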

Karsten
--
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

November 26, 2013

5:10
Here it is:

0.) Do a full backup. Save it on some media other than your hard disk! Do it now.

1.) Install PG 9.3 (I tried with 32-bit, but it should not matter).
- http://get.enterprisedb.com/postgresql/postgresql-9.3.1-1-windows.exe

2.) Run the installer and select (English_UnitedStates) for the locale (others
might work as well). Make sure it installs itself on port 5433 (or another
port, but never 5432!).

3.) Make sure both PG 8.4 and PG 9.3 are running (e.g. via pgadmin3 from PG
9.3)

4.) open a command shell (dos box) - "run as" administrator (!) in Win7

5.) type: RUNAS /USER:postgres "CMD.EXE"
- this will open another black box (command shell) for user postgres
- for the password use 'postgrespassword' (default)

6.) type: SET PATH=%PATH%;C:\Programme\PostgreSQL\9.3\bin;
- instead of Programme it might be Program Files on your computer

7.) type: cd c:\windows\temp
- changes directory to a writable temporary directory

8.) type: pg_dump -p 5432 -Fc -f gnumedv18.backup gnumed_v18
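- dumps the v18 database from PG 8.4 (port 5432) into a custom-format backup file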

9.) type: pg_dumpall -p 5432 --globals-only > globals.sql
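- dumps roles and other cluster-wide settings (needed before restoring the database)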

Important: Protect your PG 8.4 by shutting it down temporarily

10.) type in the first command shell: net stop postgresql-8.4
- check that it says: successfully stopped

11.) psql -p 5433 -f globals.sql
- this will restore roles in the new database (PG 9.3 on port 5433)

12.) pg_restore -p 5433 --dbname postgres --create gnumedv18.backup
- this will restore the database v18 into the PG 9.3 on port 5433

Congratulations. You are done. Now to check some things.

########################################
Here you could run the fingerprint script on both databases to check for an
identical hash

https://gitorious.org/gnumed/gnumed/source/f4c52e7b2b874a65def2ee1b37d8ee3fb3566ceb:gnumed/gnumed/server/gm-fingerprint_db.py

########################################
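
If running the fingerprint script is not convenient, a rough sanity check (my suggestion, not part of the original recipe) is to compare table counts in both clusters; temporarily restart PG 8.4 first (net start postgresql-8.4):

psql -p 5432 -d gnumed_v18 -c "SELECT count(*) FROM information_schema.tables WHERE table_schema NOT IN ('pg_catalog','information_schema');"
psql -p 5433 -d gnumed_v18 -c "SELECT count(*) FROM information_schema.tables WHERE table_schema NOT IN ('pg_catalog','information_schema');"

The two counts should match.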

13.) Open gnumed.conf in c:\programme\gnumed-client\
For the profile [GNUmed database on this machine ("TCP/IP": Windows/Linux/Mac)]
change port=5432 to 5433.

14.) Run the GNUmed client and check that it is working. If it works (no wrong
schema hash detected) you should see all your patients and data.

15.) If you have managed to see your patients and everything is there, close
GNUmed client 1.3.x.

16.) in the first command shell type: net stop postgresql-9.3

17.) Go to c:\Programme\PostgresPlus\8.4SS\data and open postgresql.conf. Find
port = 5432 and change it to port = 5433.

18.) Go to c:\Programme\Postgresql\9.3\data and open postgresql.conf. Find
port = 5433 and change it to 5432. This effectively switches ports for PG 8.4
and 9.3 so PG 9.3 runs on the default port 5432.

19.) Open gnumed.conf in c:\programme\gnumed-client\
For the profile [GNUmed database on this machine ("TCP/IP": Windows/Linux/Mac)]
change port=5433 to 5432.

20.) Restart PG 9.3 with: net start postgresql-9.3.

21.) Open the GNUmed client and connect (to PG 9.3 on port 5432).

22.) Leave PG 8.4 in a shutdown state.

So far we have transferred database v18 from PG 8.4 to 9.3. No data from PG
8.4 is touched/lost.

23.) Now you are free to install gnumed-server v19 and gnumed-client 1.4.
Having installed gnumed-server v19, select 'database upgrade' (not 'bootstrap
database') and it will upgrade your v18 database to a v19 database.

In case you experience problems you can always shut down PG 9.3, switch ports again, install client 1.3.x, start PG 8.4 (net start postgresql-8.4) and work with your old setup.

November 13, 2013

7:26
The release notes prominently tell us that GNUmed 1.4.x requires at least PostgreSQL 9.1.

If you are running the Windows packages and have let GNUmed install PostgreSQL for you, you are good to go, since it comes with PostgreSQL 9.2 already.

If you are on Ubuntu or Debian, chances are your system still has PostgreSQL 8.x installed.

First check whether you run any software that requires you to continue using PostgreSQL 8.x. If so, you can install PG 9.1 side by side with it. If not, let PG 9.1 replace PG 8.x.

It usually works like this.

sudo apt-get install postgresql-9.1
sudo pg_upgradecluster 8.4 main
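
Before dropping the old cluster, it can be reassuring to confirm that the new 9.1 cluster is up and listening on the expected port (a suggested check, not part of the original note):

pg_lsclusters                        # lists each cluster's version, port and status
sudo pg_ctlcluster 9.1 main status   # confirms the upgraded cluster is running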

Then if you don't need PG 8.4 anymore you could

sudo pg_dropcluster --stop 8.4 main
sudo apt-get purge postgresql-8.4

Have fun.

March 6, 2013

11:53

Healthcare executives are continuously evaluating RFID and RTLS in general, whether to maintain the hospital's competitive advantage, differentiate in the market, improve compliance with requirements from bodies such as AORN, JCAHO and the CDC, or improve asset utilization and operating efficiency. As part of these evaluations there is a constant concern about demonstrating a tangible and measurable ROI for solutions that can come at a significant price.

When considering the areas that RTLS can affect within the hospital facilities as well as other patient care units, there are at least four significant points to highlight:

Disease surveillance: Hospitals face a range of challenges around disease management and how to handle it. RTLS technology can determine each and every staff member who could potentially have been in contact with a patient classified as highly contagious or with a specific condition.

Hand hygiene compliance: Many health systems are reporting hand hygiene compliance as part of safety and quality initiatives. Some use “look-out” staff to walk the halls and record all hand hygiene activities. However, with the introduction of RTLS, hand hygiene protocol compliance can now be dynamically tracked and reported whenever clinical staff enter a room or use the dispensers. Several of the systems available today also provide active alerts to clinicians whenever they enter a patient’s room and haven’t complied with the hand hygiene guidelines.

Locating equipment for maintenance and cleaning:

Having the ability to identify the location of equipment that is due for routine maintenance or cleaning is critical to ensuring the safety of patients. RTLS is capable of providing alerts on equipment to staff.

In one recent case, a hospital spent two months on a benchmarking analysis and found that it took on average 22 minutes to find an infusion pump. After the implementation of RTLS, it took an average of two minutes to find a pump. This cuts down on lag time in care and helps ensure that clinicians have the tools and equipment they need when the patient needs them.

There are also other technologies and products which have been introduced and integrated into some of the current RTLS systems available.

EHR integration:

Several RTLS systems are integrated with bed management systems as well as EHR products and are able to deliver patient order status; alerts within the application can also be given. This has enabled nurses to take advantage of being in one screen and seeing a summary of updated patient-related information.

Unified Communication systems:

Nurse call systems have enabled nurses to communicate efficiently anywhere the devices are deployed within the hospital facility. These capabilities are starting to make their way into the RTLS market, and for some of the unified communication firms it means that their infrastructure can now provide a backbone into which system integrators can simply fold RTLS functionality within their products.

In many recent RTLS implementations, hospital executives have opted to pilot the solution in one specific area. Many of these smaller implementations succeed and allow decision makers to evaluate and measure the impact these solutions can have on their environment. Several steps need to be taken into consideration when implementing asset tracking systems:

  • Define the overall goals and driving forces behind the initiative

  • Identify the challenges the RTLS solution will address and the opportunities it can provide

  • Identify the operational area that would yield the highest impact with RTLS

  • Identify infrastructure requirements and the technology of choice (Wi-Fi based, RFID based, UC integration, interface capability requirements)

  • Define the overall organizational risks associated with these solutions

  • Identify compliance requirements around standards of use

Conclusion

RFID is one facet of sensory data being considered by many health executives. It is providing strong ROI for many adopters applying it to improve care, increase the efficiency of equipment usage, and improve equipment maintenance and workflow. While there are several different hardware options to choose from, and technologies ranging from Wi-Fi to IR/RF, this technology has been showing real value and savings that health care IT and supply chain executives alike can’t ignore.

February 21,2013

14:41

It was not long after mankind invented the wheel that carts came around. Throughout history people have been mounting wheels on boxes; now we have everything from golf carts, shopping carts, and hand carts to my personal favorite, the hotdog cart. So you might ask yourself, “What is so smart about a medical cart?”

Today’s medical carts have evolved to be more than just a storage box with wheels. Rubbermaid Medical Solutions, one of the largest manufacturers of medical carts, has created a cart that is specially designed to house computers, telemedicine equipment, and medical supplies, and to offer medication dispensing. Currently the computers on medical carts are used to provide access to CPOE, eMAR, and EHR applications.

With the technology trend of mobility quickly on the rise in healthcare, organizations might question the future viability of medical carts. However, a recent HIMSS study showed that cart use at the point of care rose from 26 percent in 2008 to 45 percent in 2011. The need for medical carts will continue to grow; as a result, cart manufacturers are looking for innovative ways to separate themselves from their competition. Medical carts are evolving from healthcare products into healthcare solutions. Instead of selling medical carts with web cameras, cart manufacturers are developing complete telemedicine solutions that offer remote appointments throughout the country, allowing specialists to broaden their availability to patients in need. Carts are even interfaced with eMAR systems to increase patient safety; the evolution of the cart is rapidly changing the daily functions of the medical field.

One capability of future medical carts will be to automatically detect their location within a healthcare facility. For example, if a cart is improperly stored in a hallway for an extended period of time, staff could be notified to relocate it in order to comply with the Joint Commission’s requirements. Real-time location information could also let carts automatically handle tedious tasks commonly performed by healthcare staff: when a cart is rolled into a patient room, signals exchanged between the entering cart and a device in the room could automatically open the patient’s electronic chart or display a patient visit summary.

Autonomous robots, such as the TUG developed by Aethon, are now starting to be used in larger hospitals. These robots increase efficiency and optimize staff time by allowing staff to focus on more mission-critical work. Medical carts in the near future will become smart robotic devices able to automatically relocate themselves to where they are needed, whether for scheduled telemedicine visits, the next patient in the rounding queue, or automated medication dispensing to patients.

Innovation in medical carts will continue as the need for mobile workspaces increases. What was once considered just a computer on a stick could be the groundwork for care automation in the future.

September 10,2012

9:35

This has been an eventful year for speech recognition companies. We are seeing increased development of intelligent systems that can interact via voice. Siri was simply a re-introduction of digital assistants into the consumer market, and since then other mobile platforms have implemented similar capabilities.

In hospitals and physicians’ practices, the use of voice recognition products tends to be around traditional speech-to-text dictation for SOAP (subjective, objective, assessment, plan) notes, plus some basic voice commands to interact with EHR systems. Several new initiatives will involve speech recognition, and natural language understanding and decision support tools are becoming the focus of many technology firms. These changes will begin a new era for speech engine companies in the health care market.

While there is clearly tremendous value in using voice solutions to assist in the capture of medical information, there are several other uses that health care organizations can benefit from. Consider a recent product by Nuance called “NINA”, short for Nuance Interactive Natural Assistant. It combines speech recognition with voice biometrics and natural language processing (NLP), helping the system understand the intent of its users and deliver what is being asked of it.

This app can provide a new way to access health care services without the complexity that comes with cumbersome phone trees and website mazes. From a patient’s perspective, the use of these virtual assistants means improved patient satisfaction, as well as quick and easy access to important information.

Two areas we can see immediate value in are:

Customer service: Simpler is always better, and with NINA-powered apps or Siri-like products, patients can easily find what they are looking for, whether they are calling a payer to see if a procedure is covered under their plan or contacting the hospital for information about the closest pediatric urgent care. These tools provide a quick way to get to the right information without having to navigate complex menus.

Accounting and PHR interaction: To see the true potential of these solutions, consider some of the use cases Nuance has been exhibiting. From a health care perspective, patients would have the ability to simply ask to schedule a visit without having to call the office. A patient could also ask to refill a medication.

Nuance has addressed some of the security concerns with tools such as VocalPassword, which tackles authentication and helps verify the identity of patients who are requesting services and giving commands. As more intelligent voice-driven systems mature, the areas to focus on will be operational costs, customer satisfaction, and data capture.

April 14,2014

17:44

I recently chaired a couple of conferences, and my next HealthIMPACT event is coming up later this month in NYC. At each of these events, and many times a year via Twitter and e-mail, I am asked whether the Direct Project is successful, whether it is worth implementing in health IT projects, and whether many people are sending secure messages using Direct. To help answer these questions, I reached out to Rachel A. Lunsford, Director of Product Management at Amida Technologies. Amida has amassed an impressive team of engineers focused on health IT for patient-centered care, so her answer is well grounded in facts. Here’s what Rachel said when I asked whether Direct is a myth or whether it’s real and in use:

Despite wide adoption in 44 States, there is a perception that Direct is not widely used. In a recent conversation, we discussed a potential Direct secure messaging implementation with a client when they expressed concern about being a rare adopter of Direct messaging.  While the team reassured them that their organization would in fact be joining a rich ecosystem of adopters, they still asked us to survey the market.

In 2012, the Office of the National Coordinator for Health Information Technology (ONC) awarded grants to State Health Information Exchanges to further the exchange of health information. There are two primary ways to exchange information: directed and query-based. ‘Directed’ exchange is what it sounds like: healthcare providers send secure messages, with health information attached, to other healthcare providers they know and trust. The most common type of directed exchange is Direct, a secure, scalable, standards-based way to send messages. Query-based exchange takes a federated database or central repository approach, which is much harder to implement, and growth in that area is slower. Thanks in part to the grants, and in part to the simplicity of the Direct protocol, 44 States have adopted Direct and widely implemented it. And yet the myth persists that Direct is not well adopted or used.

As with other new technologies, it may be hard to see the practical applications. When Edison and Tesla were dueling to find out which standard – direct or alternating current – would reign supreme, many were unsure if electricity would even be safe enough, never mind successful enough, to replace kerosene in the street lamps. It was impossible for people to foresee a world where many live in well-lit homes on well-lit streets, and none could have imagined using tools like the computer or the Internet. Thankfully, the standards debate was sorted out and we continue to benefit from it today.

There are two groupings of data we can look to for more detail on the use of Direct. The first is the States themselves, which self-report transaction and usage statistics to the ONC. For the third quarter of 2013 it was reported that the following were actively exchanging some 165 million ‘Directed’ messages:

  • 20,376 Ambulatory entities (Entities/organizations that provide outpatient services, including community health centers, independent and group practice, cancer treatment centers, dialysis centers, etc.)
  • 738 Acute Care Hospitals (Hospitals that provide inpatient medical care and other related services for surgery, acute medical conditions or injuries)
  • 157 Laboratories (Non-hospital clinical)
  • 16,329 other health care organizations (Home health care, long-term care, behavioral health programs/entities, psychiatric hospitals, payers, release of information vendors, health care billing services, etc.)

Another organization collecting data on Direct implementation is DirectTrust.org. Charged by ONC, DirectTrust.org oversees development of the interoperability framework and rules used by Direct implementers, works to reduce implementation costs, and removes barriers to implementation. Additionally, DirectTrust supports those who want to serve as sending and receiving gateways, known as health information service providers (HISPs). By DirectTrust.org’s count, users number well over 45,000, with at least 16 organizations accredited as HISPs. Further, over two million messages have been exchanged among the roughly 1,500 Direct-enabled sites. With Meaningful Use encouraging the use of Direct, we can expect even more physicians and healthcare organizations to join in.

As more doctors are able to exchange records, everyone will benefit. When a provider can receive notes and records from other providers to see a fuller, more complete view of her patient’s health, we have a greater possibility of lowering healthcare costs, improving health outcomes, and saving lives. Once we open up the exchange to patients through things like the Blue Button personal health record, the sky is the limit.

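
Rachel’s point that Direct is “standards-based” is worth making concrete. Under the hood, a Direct message is essentially S/MIME-secured e-mail exchanged between certified endpoints. As a rough sketch of the pieces involved (this shows only the signing and encryption step with stock OpenSSL, not a full Direct implementation, which also handles certificate discovery and trust validation; the file and certificate names are placeholders):

# sign a clinical document with the sender's Direct certificate and private key
openssl smime -sign -in summary.xml -signer sender_cert.pem -inkey sender_key.pem -out signed.txt

# encrypt the signed message so only the holder of the recipient's Direct certificate can read it
openssl smime -encrypt -aes256 -in signed.txt -out encrypted.txt recipient_cert.pem

The protected payload is then handed off over ordinary SMTP, typically through a health information service provider (HISP), for delivery to the recipient’s Direct address.
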
March 27,2014

13:52

As I’ve been preparing to chair the HealthIMPACT conference in Houston next Thursday I’ve been having some terrific conversations with big companies like Cisco, some of our publishing partners, and smaller vendors entering the health IT space for the first time. One great question I was asked yesterday during a discussion with a tech publisher was “so what’s it going to take to achieve real interoperability in healthcare and how long will it take?” To that my answer was:

  • We need to move from anecdote-driven systems engineering to evidence-driven systems engineering and
  • We need to move from complaint-based interoperability design to evidence- and workflow-driven interoperability design

Although the discussion was over an audio telecon I could almost see the eyebrows being raised by the editors on the other side of the phone and could tell they were thinking I might be a little weird. I proceeded to explain that systems engineering and interoperability design in healthcare IT suffer from three major flaws:

  • The myth that there is a lack of interoperability
  • The myth that we don’t have enough standards
  • The assumption that health IT leadership has provided staff with the tools they need to do proper systems engineering and interoperability design

The first myth is perpetuated usually through anecdote after anecdote by anyone who has ever had to fill out their name on two separate paper forms in a waiting room. The fact that you have to fill out forms (the anecdote) doesn’t mean that there isn’t interoperability — it just means that the cost of filling out a form is probably lower than the cost of integrating two systems. Healthcare systems are already interoperable in areas where they have to be — namely, where required by statute, regulation, or law. And systems are interoperable where there’s a reimbursement (payment) reason to have it, or in many cases where there’s a patient safety reason to have it (e.g. for pharmacy or lab orders). Unfortunately, convenience and preference (e.g. for patients to not have to fill out forms twice) don’t factor into designs much right now because we have bigger fish to fry. If a non-integrated multi-system workflow isn’t demonstrably unsafe for patients, isn’t costing a lot of money that can be easily counted, or isn’t required by a law that will force leadership’s hand, then complaining about lack of interoperability won’t make it so. We need to come up with crisp, clear, evidence-driven workflow reasons, patient safety reasons, cost savings reasons, or revenue generating reasons for interoperability if we want improvement.

The second myth, the lack of standards, is perpetuated by folks who are new to the industry, looking for excuses (vendors do this), or otherwise clueless (some of our health IT leaders are guilty of this). There are more than enough standards available to solve most of our interoperability woes. If we do workflow-based, evidence-driven analysis of systems, we come to see that most interoperability can be achieved quickly and without fanfare using existing MU-compliant standards. We have HL7, CCDA, ICD, CPT, LOINC, and many other format, transport, and related standards available. I’m not talking about flawless, pain-free, error-free interoperability across systems — I’m talking about “good enough” interoperability across systems where workflow reasons, patient safety reasons, cost savings reasons, or revenue generating reasons are clearly identifiable.

The third problem, lack of proper leadership, is probably the most difficult to tackle but perhaps the most important one. I’ve been as guilty of this as anyone else — we have many environments where we’re demanding interoperability and not giving the time, resources, budget, or tools to our staff that will allow them to prioritize and execute on our interoperability requirements. Leadership means understanding the real problem (workflow-driven, not anecdotal), making decisions, and then providing your staff with everything they need to do their jobs.

If we want to make progress in healthcare interoperability we need to train the next generation of leaders that proper systems engineering approaches are required, that better interoperability is possible because some of it already exists now, and that you shouldn’t wait for standards to get started on anything that will benefit patients and caregivers. Health IT integration woes can be overcome if we get beyond anecdotes and complaining and start doing something about it.

March 20,2014

14:17

My friend John Lynn was kind enough to cover the new HealthIMPACT Conference that I’m chairing in Houston on April 3 in his recent piece entitled “Getting Beyond the Health IT Cheerleaders, BS, and Hype Machine”. While the article was great, Beth Friedman’s comment was priceless:

What are the criteria to be considered part of the cheerleader squad? This PR agency wants to be sure we are providing valuable, actionable, [practical], relevant content….NOT HYPE! And we’re open to your guidance.

John gave her a great reply:

It’s interesting how in high school you always wanted to be a cheerleader, but in marketing you don’t want to be seen as the cheerleader;-)

I think your description describes what you need to do to avoid hype. You have to focus on what really matters to the customers. Provide value to the customer as opposed to trying to sale your product. A deep understanding of the domain will create a relationship where people trust your views and then can talk about what you’re doing to solve their problems which you understand deeply.

Since Beth posted a great question I wanted to do it justice by answering more specifically. By the way, we’ll be covering a lot of similar material at the inaugural Health IT Marketing Conference taking place in Vegas on the 7th and 8th of April. Join us!

Health IT “cheerleaders”, in my mind, are those who push technology without considering deep value, return on investment, return on assets, and productivity loss. The hype machine gets built around technology when health IT “cheerleaders” focus more on the gadgetry itself than on the value proposition. If content is built around workflows, workflow optimization, and those tasks within existing or new workflows that optimize patient care through the use of technology, then that’s real value.

This morning a college student sent a great question around health IT related productivity loss:

I am currently working on a capstone project for my MBA and my team is required to address a set of challenges as well as opportunities. One of the challenges they seem to be concerned the most [about] is the reduction in productivity of the physicians during and after the implementation of the EHR platform (they are currently working on a paper-base workflow). Since EMR requires doctors to type in the information which eventually takes significantly larger amount of time compared to their traditional method of handwriting, they are asking about ideas what they should do.

I replied that this sounds like a great project – and promptly advised them to conduct an analysis of whether the concern about productivity loss is warranted. I suggested they do a current workflow analysis to figure out the efficiency of existing steps and how those steps would change after an EHR is installed. If such an analysis is not done, evidence-driven technology choices cannot be made.

For a great example of how to build content around clinical workflows, check out HRSA’s guidance. It’s still surprising to me how many of us in the tech business suggest the use of technology without a deep understanding of workflow. Progress will come, and cheerleading will be reduced, when tech meets workflow in a measurable way.

March 12,2010

11:01
This blog is now located at http://blog.rodspace.co.uk/. You will be automatically redirected in 30 seconds, or you may click here. For feed subscribers, please update your feed subscriptions to http://blog.rodspace.co.uk/feeds/posts/default.

March 3,2010

4:07
I've just heard about the Information Technology and Communications in Health (ITCH) conference, which will be held February 24-27, 2011, at the Inn at Laurel Point, Victoria, BC, Canada. I'd not heard of this conference before, but the current call for papers looks interesting. Health Informatics: International Perspectives is the working theme for the 2011 international conference. Health informatics is now a …
3:59
The report of the Prime Minister’s Commission on the Future of Nursing and Midwifery in England, published yesterday, sets out the way forward for the future of the professions. It calls for the establishment of a "high-level group to determine how to build nursing and midwifery capacity to understand and influence the development and use of new technologies. It must consider how pre- and …

April 17,2014

13:54
Physical-Cyber-Social Computing

Final submissions due: 1 September 2014
Publication issue: May/June 2015

Please email the guest editors a brief description of the article you plan to submit by 15 August 2014

Guest Editors: Payam Barnaghi, Manfred Hauswirth, Amit Sheth, and Vivek Singh (ic3-2015 AT computer.org)

Computing, communication, and mobile technologies are among the most influential innovations that shape our lives today. Technology advancements such as mobile devices that reach over half of Earth's population, social networks with more than a billion members, and the rapid growth of Internet-connected devices (the Internet of Things) offer a unique opportunity to collect and communicate information among everybody and everything on the planet. Interacting with the physical world enriches our existing methods of information exchange — sharing our thoughts, communicating social events, and work collaboration via the new dimension of physical computing. This all-encompassing "new world of information" requires that we be able to process extremely large volumes of data to extract knowledge and insights related to our surrounding environment, personal life, and activities, on both local and global scales.

These trends have led to an emergence of physical-cyber-social (PCS) computing, which involves a holistic treatment of data, information, and knowledge from the physical, cyber, and social worlds to integrate, understand, correlate, and provide contextually relevant abstractions to humans and the applications that serve them. PCS computing builds on and significantly extends current progress in cyber-physical, socio-technical, and cyber-social systems. This emerging topic seeks to provide powerful ways to exploit data that are available through various IoT, citizen and social sensing, Web, and open data sources that have either seen or will soon see explosive growth. Providing interoperable information representations and extracting actionable knowledge from the deluge of human and machine sensory data are key issues.

This special issue seeks innovative contributions to computer systems and interaction design, information processing and knowledge engineering, and adaptive solutions associated with PCS computing and the novel applications it enables. Potential topics include:
  • semantics and information modeling; semantic integration, fusion, and abstraction strategies;
  • stream processing and reasoning on complex PCS data; real-time feedback control and response systems; human/event/situation-centered views of data streams;
  • pattern recognition, trend detection, anomaly and event detection, semantic event processing, and inferring actionable knowledge techniques;
  • spatio-temporal, location-aware, continuous, scalable, and dynamic analysis;
  • security, privacy, and trust issues in collection, storage, and processing; and
  • novel and significant PCS applications, deployments, and evaluations in areas including personalized and contextualized information and alerts, health, biomedicine, smart cities, and human/social/economic development.
Submission Guidelines

All submissions must be original manuscripts of fewer than 5,000 words, focused on Internet technologies and implementations. All manuscripts are subject to peer review on both technical merit and relevance to IC's international readership — primarily practicing engineers and academics who are looking for material that introduces new technology and broadens familiarity with current topics. We do not accept white papers, and we discourage strictly theoretical or mathematical papers. To submit a manuscript, please log on to ScholarOne (https://mc.manuscriptcentral.com:443/ic-cs) to create or access an account, which you can use to log on to IC's Author Center and upload your submission.

My source:
Announcements mailing list
Announcements AT ubicomp.org
http://ubicomp.org/mailman/listinfo/announcements_ubicomp.org

March 21,2014

12:48
http://hijournal.bcs.org/index.php/jhi
Informatics in Primary Care Journal
On Wednesday I visited HC2014. Having the week off, and with the event literally on the doorstep (London in 2015), it was very convenient.

I'd received an email the evening before about the new informatics federation and heard the official announcement in the opening session. My last HC event was 2005; my first was 1986, also in Manchester if I remember correctly.

I posted the federation news this morning. On the BCS stand I picked up a copy of Informatics in Primary Care.

This journal is open access. Despite the title, and perhaps illustrative of the dependencies within health (and social care) and the need for integration, the journal's coverage is broad and inclusive:
We are interested in how computerised medical records can better record the clinical status of patients and can be used to measure the quality, safety and efficiency of health care professionals and organisations – including primary care, hospital, mental health, and social and community care.  The scope of the journal also includes integrated care and how genetic data might be used to enhance health care.

I will reflect a little more on HC2014 soon.

4:24
UKCHIP IHRIM

19 March 2014

BCS, The Chartered Institute for IT, the UK Council of Health Informatics Professions (UKCHIP) and the Institute of Health Records and Information Management (IHRIM) are working collaboratively to create a new federation for the Informatics profession. The three autonomous bodies will work closely together in a federation to ensure that UK health informatics is recognised as a valued profession. 

Justin Whatling, Chair of BCS Health, part of the Chartered Institute for IT, explains: “This is a very exciting moment for health informatics. Today technology has an immense and profound impact on the health and wellbeing of people. Therefore it’s time for the profession to mature to meet the increasing demand on our skills and capability. We want health to be an attractive place for informatics professionals from other sectors to come and work, and we want to provide a clear career path and professional development opportunities to retain those already working in health. The federation will help us to achieve this.”

The initiative comes as the NHS is under increasing pressure to find and implement new models of health and social care that will provide services closer to people’s homes. This requires health professionals to share accurate information securely and confidentially. In addition, the Caldicott 2 Review has introduced a Duty of Care to Share health information. Both of these things have happened at a time when public trust in the NHS’ ability to handle personal health information has taken a hit.

The federation will be open to all other informatics professional bodies, the private sector, the home countries and lay representation. It will provide leadership of the overall profession with a single professional register and point of entry for professionals, oversee an agreed regulatory framework with a common code of ethical practice and coordinate access to resources providing a unified set of capabilities for all professional areas of practice.

October 14,2012

20:05

Twitter, like the Internet in general, has become a vast source of and resource for health care information. As with other tools on the Internet it also has the potential for misinformation to be distributed. In some cases this is done by accident by those with the best intentions. In other cases it is done on purpose such as when companies promote their products or services while using false accounts they created.

In order to help determine the credibility of tweets containing health-related content, I suggest using the following checklist (adapted from Rains & Karmikel, 2009):

  1. Author: Does the tweet contain a first and last name? Can this name be verified as being a real person by searching it on the Internet?
  2. Date: When was the tweet sent? If it is a re-tweet, when was the original tweet sent?
  3. Reference: Does the tweet reference a source? Is this source reliable?
  4. Statistics: Does the tweet make claims of effectiveness of a product or service using statistics? Are the statistics used properly?
  5. Personal story or testimonials: Does the tweet contain claims from an individual who has used or conducted research on the product or service? Is this individual credible?
  6. Quotations: Does the tweet quote or cite another source of information (e.g. a link) that can be checked? Is this source credible?

Ultimately it is up to the individual to determine how to use health information they find on Twitter or other Internet sources. For patients anecdotal or experiential information shared by others with the same illness may be considered very credible. Others conducting research may find this a less valuable information source. Conversely a researcher may only be looking for tweets that contain reference to peer-reviewed journal articles whereas patients and their caregivers may have little or no interest in this type of resource.

Reference

Rains, S. A., & Karmikel, C. D. (2009). Health information-seeking and perceptions of website credibility: Examining Web-use orientation, message characteristics, and structural features of websites. Computers in Human Behavior, 25(2), 544-553.

June 26,2012

14:35

The altmetric movement is intended to develop new measures of production and contribution in academia. The following article provides a primer for research scholars on what metrics they should consider collecting when participating in various forms of social media.

Twitter

ThinkUp

If you participate on Twitter you should be keeping track of the number of tweets you send, how many times your tweets are replied to or re-tweeted by other users, and how many @mentions (tweets that include your Twitter handle) you receive. ThinkUp is an open source application that allows you to track these metrics, as well as metrics from other social media tools such as Facebook and Google+. Please read my extensive review of this tool. This service is free.

Bit.ly

You should register with a URL shortening service such as bit.ly, which will provide you with an API key that you can enter into the applications you use to share links. This provides a means of keeping track of your click-through statistics in one location. Bit.ly records how many times a link you created was clicked on, along with the referrer and the location of the user. Consider registering your own domain name and using it to shorten your tweets as a means of branding. In addition, you can use your custom link on electronic copies of your CV or at your own web site; this will inform you when your links have been clicked on. You should also consider using bit.ly to create the links used at your web site, providing you with feedback on which are used most often. For example, all of the links in this article were created using my custom bit.ly domain. You can also tweet a link to any research study you publish, both to publicize it and to keep track of how many clicks it receives. Bit.ly is a free service.
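
As an illustration, the same click data can be pulled straight from bit.ly's REST API on the command line. This is only a rough sketch based on the v3 endpoints as I understand them (check bit.ly's API documentation for the current parameters); the login, API key, and links below are placeholders:

# shorten a long (URL-encoded) link; the JSON response contains the new bit.ly URL
curl "https://api-ssl.bitly.com/v3/shorten?login=YOUR_LOGIN&apiKey=YOUR_API_KEY&longUrl=http%3A%2F%2Fexample.com%2Fmy-paper"

# retrieve the click count for an existing bit.ly link
curl "https://api-ssl.bitly.com/v3/link/clicks?login=YOUR_LOGIN&apiKey=YOUR_API_KEY&link=http%3A%2F%2Fbit.ly%2FXXXXXX"

Either call returns JSON that can be logged or charted alongside your other metrics.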

TweetReach

Another tool to measure your tweets is TweetReach. This service allows you to track the reach of your tweets by Twitter handle or tweet. It provides output in formats that can be saved for use elsewhere (Excel, PDF or the option to print or save your output by link). To use these latter features you must sign up for an account but the service is free.

Buffer

Buffer is a tool that allows you to schedule your tweets in advance. You can also connect Buffer to your bit.ly account so that the links you use are included in your overall analytics. Although Buffer provides its own click-through counts, these can differ from what appears in bit.ly. This service is free but also has paid upgrade options that provide more detailed analytics.

Web presence

Google Scholar Citation Profile

You can set up a profile with Google Scholar based on your publication record. The metrics provided by this service include a citation count, h-index and i10-index. When someone searches your name using Google Scholar your profile will appear at the top before any of the citations. This provides a quick way to separate your articles from someone else who has the same name as you.

Google Feedburner for RSS feeds

If you maintain your own web site and use RSS feeds to announce new postings, you can also collect statistics on how many times your articles are clicked on. Feedburner, recently acquired by Google, provides one way to measure this. You enter your RSS feed URL and a report is generated, which can be saved in CSV format.

Journal article download statistics

Many journals provide statistics on the number of downloads of articles. Keep track of those associated with your publication by visiting the site. For example, BioMed Central (BMC) maintains an access count of the last 30 days, one year and all time for each of your publications.

Quora

Other means of contributing to the knowledge base in your field include participating on web-based forums or web sites such as Quora. Quora provides threaded discussions on topics and allows participants to both generate and respond to the question. Other users vote on your responses and points are accrued. If you want another user to answer your question you must “spend” some of your points. Providing a link to your public profile on Quora on your CV will demonstrate another form of contribution to your field.

Paper.li

Paper.li is a free service that curates content and renders it in a web-based format. The focus of my Paper.li is the use of technology in Canadian healthcare. I have also created a page that appears at my web site. Metrics on the number of times your paper has been shared via Facebook, Twitter, Google+ and LinkedIn are available.

Twylah

Twylah is similar to paper.li in that it takes content and displays it in a newspaper format except it uses your Twitter feed. There is an option to create a personalized page. I use tweets.lauraogrady.ca. I also have a Twylah widget at my web site that shows my trending tweets in a condensed magazine layout. It appears in the side bar. This free service does not yet provide metrics but can help increase your tweet reach. If you create a custom link for your Twylah page you can keep track of how many people visit it.

Analytics for your web site

Log file analysis

If you maintain your own web site you can use a variety of tools to capture and analyze its use. One of the most popular applications is Google Analytics. If you are using a content management system such as WordPress, there are many plug-ins that will add the tracking code to the pages at your site and produce reports. WordPress also provides built-in analytics available through its dashboard.

If you have access to the raw log files you could use a shareware log file program or the open source tool Piwik. These tools will provide summaries about what pages of your site are visited most frequently, what countries the visitors come from, how long visitors remain at your site and what search terms are used to reach your site.
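
If you only need a quick look before setting up one of these tools, the raw access log can be summarized directly from the command line. A minimal sketch, assuming a standard Apache or Nginx combined-format log (the log path is a placeholder):

# ten most requested pages
awk '{print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head

# ten most frequent visitor IP addresses
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head

The counts are crude compared with Piwik or Google Analytics (no filtering of bots or static assets), but they are often enough to see which pages get the most traffic.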

Summary

All of this information should be included in the annual report you prepare for your department and your tenure application. This will increase awareness of altmetrics and improve our ability to have these efforts “count” as contributions in your field.

June 24,2012

12:52
The following provides a timeline of articles that appeared in newspapers and blogs from January 2011 to the present. The articles demonstrate a progression from patient engagement in online communities to articles that reference increasing provider involvement.

  • January 5th, 2011
  • February 3rd, 2011
  • February 22nd, 2011
  • March 23rd, 2011
  • April 2nd, 2011
  • April 25th, 2011
  • May 14th, 2011
