Another incredibly powerful post published on KevinMD.com, this one from an anonymous medical student. Read it and weep. I did.

It was 4:30 a.m., and I was on the side of the road, drenched in sweat and tears. I had finally slowed my breathing to normal. I was going to be late for rounds. No time to obsess over possible questions. No time to memorize lab values, or practice regurgitating them.
“Isn’t it obvious?” (the PACSMan) asked. “Here’s the deal. No one knows where healthcare is going, so we’re all going to start enjoying Thanksgiving again for the first time in 75 years. Instead of freezing our asses off, we’ll do an interactive virtual conference with scheduled demos and everything. No muss, no fuss, and no ‘free’ meals. As a bonus, system prices will drop 30% because vendors won’t have to pay for RSNA. It’s sheer brilliance, I tell ya!”

Mike was referring to the vendor extravaganza at RSNA, but I think this applies to site visits as well. There is simply no need to haul people across the countryside (or country, for that matter) to see the scanner. They all look pretty much the same, and decisions are not made on the basis of their appearance. (Bore size and other specs are important, but that's all in the spec sheets.)
Given your interest in patients and families, how satisfied are you with the CCDA as a means of data exchange with patient-oriented apps?
Partially satisfied. I would really like to see HL7 standards created that make it possible for the patient to become the custodian of their own data. That’s really only possible if standards for data migration are created that enable a patient to securely pull complete data from all their sources of choice. Those sources would include not just EHRs, but the whole of health information technology. The patient could then choose when and how much to share using other existing standards, such as CCDA, or these new standards.
Question from Mandi Bishop, health IT consultant:
What could health IT innovators do to markedly improve healthcare and achieve Triple Aim goals, in lieu of forced compliance with CMS/ONC mandates, to offset Meaningful Use incentive dollars and associated reimbursement penalties?
Don’t laugh. I’m serious!
Instituting governance is the single most innovative thing health IT can do in lieu of Meaningful Use that would offset the loss of incentive dollars and reimbursement penalties, improve population health, and improve patient clinical outcomes.
Sound like Triple Aim goals? It should. You don’t have to implement new technology to make the most of what you have. It’s all about the process, baby.
How much faster would your revenue cycle be if 30% more patient data was valid?
What if identifying a vocabulary owner of LOINC freed your lab from its local compendium, and you were able to reconcile and aggregate your own labs with third-party labs? How many fewer unnecessary tests might you order?
How is the patient experience improved when the patient and all caregivers have access to accurate, timely, relevant health information from all their care sources?
Innovating is thinking outside the box. Right now, that box is Meaningful Use – the program whose deadlines don’t just discourage but blatantly ignore fundamental IT governance principles, and whose rollout has made our nation’s patients unwitting lab rats in the grandest series of human trials the FDA never realized it should have reviewed.
So be a rebel. Get back to basics. Block, tackle, and institute governance.
Question from Rob Brull, product manager at Corepoint Health:
In what ways does it make sense to extend Direct Project beyond those already defined by Meaningful Use?
Most of the implementations I am aware of for Direct protocol involve the sending and receiving of Summary of Care documents to satisfy the Transfer of Care (ToC) requirement for Meaningful Use. There is one radiology site my company is working with to send a Diagnostic Imaging Report via Direct. My thought is that lab orders, lab results, and radiology reports are obvious workflows for implementing the Direct protocol. No VPN or agent is required and hopefully all provider facilities will already have a Direct address based on ToC requirements. The real hurdle will be for vendors to extend their capabilities to send and receive payloads other than Summary of Care.
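The lab and radiology workflows above ride on the same transport: a Direct message is ordinary SMTP mail whose S/MIME signing and encryption are handled by the sender's HISP. Here is a minimal sketch of building such a message payload in Python; all addresses are hypothetical, and a real deployment would hand the message to a HISP rather than deliver it itself.

```python
# Sketch: wrapping a diagnostic imaging report for delivery to a Direct
# address. Direct messages are standard SMTP mail; S/MIME signing and
# encryption are applied by the HISP, so this sketch only builds the
# MIME message. All addresses below are hypothetical.
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication
from email.mime.text import MIMEText

def build_direct_message(sender: str, recipient: str, report_xml: bytes):
    """Build a MIME message carrying an imaging report as an XML attachment."""
    msg = MIMEMultipart()
    msg["From"] = sender          # the sending provider's Direct address
    msg["To"] = recipient         # the receiving provider's Direct address
    msg["Subject"] = "Diagnostic Imaging Report"
    msg.attach(MIMEText("Diagnostic imaging report attached.", "plain"))
    attachment = MIMEApplication(report_xml, _subtype="xml")
    attachment.add_header("Content-Disposition", "attachment",
                          filename="imaging-report.xml")
    msg.attach(attachment)
    return msg

msg = build_direct_message("drsmith@direct.hospital-a.example",
                           "drjones@direct.clinic-b.example",
                           b"<ClinicalDocument>...</ClinicalDocument>")
print(msg["Subject"])
```

The same wrapper would work for lab orders and results; only the attachment payload changes, which is exactly why extending vendor payload support is the real hurdle.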
Healthcare executives are continuously evaluating RFID and RTLS, whether the goal is to maintain the hospital's competitive advantage, differentiate in the market, improve compliance with requirements from AORN, JCAHO, and the CDC, or improve asset utilization and operating efficiency. A constant part of these evaluations is the concern over a tangible, measurable ROI for solutions that can come at a significant price.
When considering the areas that RTLS can affect within the hospital facilities as well as other patient care units, there are at least four significant points to highlight:
Disease surveillance: Hospitals face a range of challenges in disease management. RTLS technology can determine each and every staff member who may have been in contact with a patient classified as highly contagious or as having a specific condition.
Hand hygiene compliance: Many health systems report hand hygiene compliance as part of safety and quality initiatives. Some use “look-out” staff who walk the halls and record all hand hygiene activities. With RTLS, however, hand hygiene compliance can be dynamically tracked and reported whenever clinical staff enter a room or use a dispenser. Several systems available today also provide active alerts to clinicians who enter a patient’s room without complying with hand hygiene guidelines.
Locating equipment for maintenance and cleaning:
Having the ability to identify the location of equipment that is due for routine maintenance or cleaning is critical to ensuring the safety of patients. RTLS is capable of providing alerts on equipment to staff.
In one recent case, a hospital spent two months on a benchmarking analysis and found that it took an average of 22 minutes to find an infusion pump. After implementing RTLS, it took an average of two minutes. This cuts down on lag time in care and helps ensure that clinicians have the tools and equipment they need when the patient needs them.
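Under stated assumptions, the benchmark above translates into a quick back-of-envelope savings estimate; the searches-per-day figure below is hypothetical, not from the study.

```python
# Back-of-envelope estimate of staff time saved locating infusion pumps,
# using the benchmark figures above (22 minutes before RTLS, 2 after).
# SEARCHES_PER_DAY is an assumed hospital-wide figure for illustration.
BEFORE_MIN, AFTER_MIN = 22, 2
SEARCHES_PER_DAY = 40  # hypothetical

saved_min_per_day = (BEFORE_MIN - AFTER_MIN) * SEARCHES_PER_DAY
saved_hours_per_year = saved_min_per_day * 365 / 60
print(f"{saved_min_per_day} min/day, ~{saved_hours_per_year:.0f} staff-hours/year")
```

Even with conservative assumptions, this is the kind of arithmetic that turns a vague "efficiency gain" into a line item a CFO can evaluate against the price of the system.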
There are also other technologies and products which have been introduced and integrated into some of the current RTLS systems available.
Several RTLS systems are integrated with bed management systems as well as EHR products, delivering patient order status and in-application alerts. This enables nurses to work in a single screen and see a summary of up-to-date patient information.
Unified Communication systems:
Nurse call systems have enabled nurses to communicate efficiently anywhere the devices are deployed within the hospital. These capabilities are starting to enter the RTLS market, and for some unified communications firms it means their infrastructure can now serve as a backbone on which system integrators build RTLS functionality into their products.
In many of the recent implementations of RTLS products, hospital executives opted to deploy the solutions within one specific area to pilot the solutions. Many of these smaller implementations succeed and allow the decision makers to evaluate and measure the impacts these solutions can have on their environment. There are several steps that need to be taken into consideration when implementing asset tracking systems:
• Define the overall goals and driving forces behind the initiative
• Identify the challenges and opportunities the RTLS solution can address
• Identify the operational area that would yield the highest impact from RTLS
• Identify infrastructure requirements and technology of choice (WiFi based, RFID based, UC integration, interface capability requirements)
• Define overall organizational risks associated with these solutions
• Identify compliance requirements around standards of use
RFID is one facet of sensor data being considered by many health executives. It is providing strong ROI for many adopters applying it to improve care and increase the efficiency of equipment usage, maintenance, and workflow. While there are several hardware options to choose from, and technologies ranging from Wi-Fi to IR/RF, this technology has been showing real value and savings that health care IT and supply chain executives alike can’t ignore.
It was not long after mankind invented the wheel that carts came around. Throughout history people have been mounting wheels on boxes; now we have everything from golf carts and shopping carts to hand carts and my personal favorite, the hotdog cart. So you might ask yourself, “What is so smart about a medical cart?”
Today’s medical carts have evolved into more than just a storage box on wheels. Rubbermaid Medical Solutions, one of the largest manufacturers of medical carts, has created a cart specially designed to house computers, telemedicine equipment, and medical supplies, and to dispense medications. Currently the computers on medical carts are used to provide access to CPOE, eMAR, and EHR applications.
With mobility quickly on the rise in healthcare, organizations might question the future viability of medical carts. However, a recent HIMSS study showed that cart use at the point of care rose from 26 percent in 2008 to 45 percent in 2011. The need for medical carts will continue to grow; as a result, cart manufacturers are looking for innovative ways to separate themselves from the competition. Medical carts are evolving from healthcare products into healthcare solutions. Instead of selling medical carts with web cameras, cart manufacturers are developing complete telemedicine solutions that offer remote appointments throughout the country, allowing specialists to broaden their availability to patients in need. Carts are even interfaced with eMAR systems that increase patient safety; the evolution of the cart is rapidly changing the daily functions of the medical field.
Some medical carts of the future will automatically detect their location within a healthcare facility. For example, if a cart is improperly stored in a hallway for an extended period, staff could be notified to relocate it in order to comply with the Joint Commission’s requirements. Real-time location information could also let carts automate tedious tasks commonly performed by healthcare staff: when a cart is rolled into a patient room, it could automatically open the patient’s electronic chart or display a visit summary through signals exchanged between the entering cart and a device in the room.
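The room-entry behavior described above can be sketched as a simple event handler. The room IDs, patient identifiers, and chart-opening call below are hypothetical placeholders, not any vendor's API.

```python
# Sketch: when a cart's RTLS tag reports entering a new room, look up
# the patient assigned to that room and open their chart. All
# identifiers and the EHR call are hypothetical placeholders.
room_to_patient = {"ROOM-302": "MRN-0012345", "ROOM-303": "MRN-0067890"}

opened = []  # records chart-open events for illustration

def open_chart(mrn: str) -> None:
    opened.append(mrn)  # a real system would call into the EHR here

def on_cart_entered_room(room_id: str) -> None:
    mrn = room_to_patient.get(room_id)
    if mrn:  # ignore hallways and rooms with no assigned patient
        open_chart(mrn)

on_cart_entered_room("ROOM-302")
print(opened)  # ['MRN-0012345']
```

The interesting design work is in everything this sketch omits: authenticating the clinician at the cart, handling room reassignments, and deciding when a location fix is reliable enough to act on.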
Autonomous robots, such as the TUG developed by Aethon, are now starting to be used in larger hospitals. These robots increase efficiency and optimize staff time by allowing staff to focus on more mission-critical items. Medical carts in the near future will become smart robotic devices able to relocate themselves automatically to where they are needed, whether for a scheduled telemedicine visit, the next patient in the rounding queue, or automated medication dispensing.
Innovation in medical carts will continue as the need for mobile workspaces increases. What was once considered just a computer on a cart could be the groundwork for care automation in the future.
This has been an eventful year for speech recognition companies. We are seeing increased development of intelligent systems that can interact via voice. Siri was simply a re-introduction of digital assistants into the consumer market, and since then other mobile platforms have implemented similar capabilities.
In hospitals and physicians’ practices, the use of voice recognition products tends to center on traditional speech-to-text dictation for SOAP (subjective, objective, assessment, plan) notes, along with some basic voice commands for interacting with EHR systems. Several new initiatives involve speech recognition, however, and natural language understanding and decision support tools are becoming the focus of many technology firms. These changes will begin a new era for speech engine companies in the health care market.
While there is clearly tremendous value in using voice solutions to assist in capturing medical information, there are several other uses health care organizations can benefit from. Consider a recent product by Nuance called “NINA,” short for Nuance Interactive Natural Assistant. It combines speech recognition with voice biometrics and natural language processing (NLP) to help the system understand the intent of its users and deliver what is being asked of it.
This app can provide a new way to access health care services without the complexity that comes with cumbersome phone trees, and website mazes. From a patient’s perspective, the use of these virtual assistants means improved patient satisfaction, as well as quick and easy access to important information.
Two areas we can see immediate value in are:
Customer service: Simpler is always better, and with NINA-powered apps, or Siri-like products, patients can easily find what they are looking for, whether they are calling a payer to see if a procedure is covered under their plan or contacting the hospital for information about the closest pediatric urgent care. These tools provide quick access to the right information without having to navigate complex menus.
Accounting and PHR interaction: To see the potential of these solutions, consider some of the use cases Nuance has been exhibiting. From a health care perspective, patients would have the ability to simply ask to schedule a visit without having to call the office, or to refill a medication by voice.
Nuance has addressed some security concerns with tools such as VocalPassword, which tackles authentication and helps verify the identity of patients requesting services and giving commands. As more intelligent voice-driven systems mature, the areas to focus on will be operational costs, customer satisfaction, and data capture.
Cambridge HealthTech Institute (CHI) invited me to attend their Next Generation Point of Care Diagnostics Conference, and I came away thoroughly impressed with the content, speakers, and organization. Since I chair several conferences a year, I know how hard it is to pull off a good one, so I’d like to thank CHI for a job well done. While I took the notes and attended the event, this post was written by HITSphere‘s Vik Subbu, our Digital Health editor who focuses on Bio IT and Pharma IT. Bio IT, Pharma IT, Health IT, and MedTech are all going to be merging over the next few years, and Vik will be helping our audience understand those shifts and what they mean to Digital Health innovators. Here’s Vik’s recap of the conference:
Goals & Attendees
The goal of the event was to provide a progress update to the healthcare industry on the advances in next generation point-of-care (POC) diagnostics while highlighting the advent of innovative platforms and use of digital information systems to aid in the development of novel POC diagnostics. The conference was attended by industry experts from various disciplines ranging from academic institutions, non-profit computational and bioinformatics centers, venture capital, service providers, pharmaceutical, diagnostic and biotechnology companies.
Why does Point of Care Dx matter to Digital Health innovators?
The interactions and cross-fertilization of ideas among various disciplines in the diagnostic arena were the highlight of the conference. The ability to have real-time interactions among academic researchers, clinicians, product developers, and reimbursement specialists provided a ‘one stop’ venue for attendees to obtain a holistic overview of both the promises and pitfalls of developing point-of-care diagnostics. The outcome of the conference should yield greater public-private collaborations involving novel platforms, available NGS datasets, and academic laboratories. Such partnerships will hopefully enable the industry to overcome product development and reimbursement barriers while paving the way for an effective and streamlined approval process for next generation POC diagnostics. All of this will help integrate POC better into next generation Digital Health innovations.
The intimate setting and the organization of the parallel track discussions/presentations were well designed and covered key aspects of POC diagnostics. For anyone looking to learn the current and future directions of POC diagnostics, the conference provided a nice platform to learn, understand, and meet key contacts. Entrepreneurs and innovators focused on bridging the “gap” between healthcare IT and diagnostics will note a recurring theme that surfaced in many of the presentations but wasn’t really the focal point of any one of them: data. Many presentations highlighted the “use of genomic data,” “the use of computational super tools to assimilate or generate vast amounts of data,” or “the need for better data standards to achieve meaningful results.” While these were great presentations, none of the speakers focused on the “how” (which is a huge opportunity for entrepreneurs): for example, how can one gain broader insights from these datasets, or how can we solve the standardization of datasets? Perhaps this is the homework assignment we must complete in time for next year’s conference.
Top Ten Insights for Healthcare IT innovators:
Given the number of breaches we’ve seen this summer at healthcare institutions, I’ve spent a ton of time recently on several engineering engagements looking at “HIPAA compliant” encryption (HIPAA compliance is in quotes since it’s generally meaningless). Since I’ve heard a number of developers say “we’re HIPAA compliant because we encrypt our data,” I wanted to take a moment to unbundle that statement and make sure we all understand what it means. Cryptology in general, and encryption specifically, is difficult to get right; CISOs, CIOs, and HIPAA compliance officers shouldn’t just believe vendors who say “we encrypt our data” without asking for elaboration in these areas:
When you look at encrypting data, it’s not just “in transit” or “at rest” but can be in transiting or resting in a variety of places.
If you care about security, ask for the details.
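To make the point concrete, here is a minimal field-level encryption sketch using the third-party Python cryptography package (Fernet, which layers AES over an HMAC). Even this toy example forces the questions a vendor should be able to answer: which algorithm, where the key lives, who can read it, and how it is rotated. The field value is made up.

```python
# Sketch: field-level encryption at rest with the `cryptography`
# package. A vendor claiming "we encrypt our data" should be able to
# answer the questions raised in the comments below.
from cryptography.fernet import Fernet

# Which key, and where is it stored? In production this belongs in an
# HSM or key-management service, never in source code or the database.
key = Fernet.generate_key()
f = Fernet(key)

# What lands in the database is ciphertext, not the raw value.
ssn_ciphertext = f.encrypt(b"123-45-6789")
assert ssn_ciphertext != b"123-45-6789"

# Anyone holding the key (and only them) can recover the plaintext,
# which is why key management, not the cipher, is the hard part.
assert f.decrypt(ssn_ciphertext) == b"123-45-6789"
print("round-trip ok")
```

Note that this covers only one resting place; the same value may also sit in backups, logs, caches, and message queues, each of which needs its own answer.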
These days it’s pretty easy to build almost any kind of software you can imagine — what’s really hard, though, is figuring out what to build. As I work on complex software systems in government, medical devices, healthcare IT, and biomedical IT I find that tackling vague requirements is one of the most pervasive and difficult problems to solve. Even the most experienced developers have a hard time building something that has not been defined well for them; a disciplined software requirements engineering approach is necessary, especially in safety critical systems. One of my colleagues in France, Abder-Rahman Ali, is currently pursuing his Medical Image Analysis Ph.D. and is passionate about applying computer science to medical imaging to come up with algorithms and systems that aid in Computer Aided Diagnosis (CAD). He’s got some brilliant ideas, especially in the use of fuzzy logic and storytelling to elicit better requirements so that CAD may become a reality some day. I asked Abder-Rahman to share with us a series of blog posts about how to tackle the problem of vague requirements. The following is his first installment, focused on storytelling and how it can be used in requirements engineering:
I remember how, when I was a child, my grandmother used to tell us fictional and non-fictional stories. They still ring in my ears, even after all the years that have passed. We would just sit down, open our ears, fix our eyes, wander along with our thoughts, and not leave that spell until the story ended. Sometimes when we made trouble, simply being called to hear a story was enough to calm us, and those same feelings returned.
Phebe Cramer, in her book Storytelling, Narrative, and the Thematic Apperception Test, notes that storytelling has a long tradition in human history. She highlights what have been considered the significant means by which man told his story: the famous epic poems, the Iliad and the Odyssey from the ninth century B.C., the Aeneid from 20 B.C., and the east Indian Mahabharata and Ramayana from the fourth century A.D. This is how history was transmitted from one generation to the next.
Storytelling Tips and Tales emphasizes that stories connect us to the past and illuminate the future; lessons can be learned from stories, and information is transmitted transparently and smoothly through them. Teachers are even being encouraged to use storytelling in their classrooms. The book also holds that storytelling is an engaging process, rewarding for both the teller and the listener: listeners enter new worlds just by hearing the words of the teller. Schank and Abelson, in their Knowledge and Memory: The Real Story, even note that psychological studies have revealed that human beings learn best from stories.
Having mentioned all that, a requirements engineer may ask: why couldn’t we just bring storytelling to our domain, especially since in our work there is also a teller and a listener? Well, could that really be?
Let us examine the relationships between story elements and a software requirement in order to answer that question.
In his book Telling Stories: A Short Path to Writing Better Software Requirements, Ben Rinzler highlights such relationships as follows (some explanations of the points were also drawn from Using Storytelling to Record Requirements: Elements for an Effective Requirements Elicitation Approach):
So, yes, a relationship and an analogy exist between storytelling and software requirements.
In future posts in the series, Shahid and I will dig deeper into how storytelling could be employed in the requirements engineering process, and will also try to show how fuzzy logic can be embedded in the process to address issues inherent in the storytelling method.
Meanwhile, drop us comments if there are specific areas of requirements engineering complex software systems that you’re especially interested in learning more about.
“Large collections of electronic patient records have long provided abundant, but under-explored information on the real-world use of medicines. But when used properly these records can provide longitudinal observational data which is perfect for data mining,” Duan said. “Although such records are maintained for patient administration, they could provide a broad range of clinical information for data analysis. A growing interest has been drug safety.”
In this paper, the researchers proposed two novel algorithms for adverse drug effect discovery: a likelihood ratio model and a Bayesian network model. Although the performance of each algorithm on its own is comparable to that of the state-of-the-art algorithm, the Bayesian confidence propagation neural network, the researchers say that combining the three approaches yields better, more diverse results.
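For a feel of what such algorithms build on, here is a simplified disproportionality calculation from a 2x2 drug/event contingency table, including an information component of the kind the BCPNN family of methods uses. The counts are invented, and this is an illustration of the general idea, not the paper's algorithm.

```python
# Simplified signal-detection arithmetic for adverse drug effects.
# From a 2x2 contingency table of patient records, compute the relative
# risk of the event given the drug, and an information component
# IC = log2(observed / expected co-occurrences), where IC > 0 hints at
# a possible drug-event association. All counts are made up.
import math

a, b = 40, 960     # took drug:        with event, without event
c, d = 100, 8900   # did not take drug: with event, without event
n = a + b + c + d

p_event_given_drug = a / (a + b)
p_event_given_no_drug = c / (c + d)
relative_risk = p_event_given_drug / p_event_given_no_drug

expected = (a + b) * (a + c) / n   # co-occurrences expected by chance
ic = math.log2(a / expected)       # positive => more often than chance

print(f"RR={relative_risk:.2f}, IC={ic:.2f}")
```

Real systems add shrinkage and confidence bounds so that rare events with tiny counts do not produce spurious signals, which is precisely where the likelihood ratio and Bayesian models differ.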
I saw this a few weeks ago, and while I haven't had the time to delve deep into the details of this particular advance, it did at least give me more reason for hope with respect to the big picture of which it is a part.
It brought to mind the controversy over Vioxx starting a dozen or so years ago, documented in a 2004 article in the Cleveland Clinic Journal of Medicine. Vioxx, released in 1999, was a godsend to patients suffering from rheumatoid arthritic pain, but a longitudinal study published in 2000 unexpectedly showed a higher incidence of myocardial infarctions among Vioxx users compared with the former standard-of-care drug, naproxen. Merck, the patent holder, responded that the difference was due to a "protective effect" it attributed to naproxen rather than a causative adverse effect of Vioxx.
One of the sources of empirical evidence that eventually discredited Merck's defense of Vioxx's safety was a pioneering data mining epidemiological study conducted by Graham et al. using the live electronic medical records of 1.4 million Kaiser Permanente of California patients. Their findings were presented first in a poster in 2004 and then in the Lancet in 2005. Two or three other contemporaneous epidemiological studies of smaller non-overlapping populations showed similar results. A rigorous 18-month prospective study of the efficacy of Vioxx's generic form in relieving colon polyps showed an "unanticipated" significant increase in heart attacks among study participants.
Merck's withdrawal of Vioxx was an early victory for Big Data, though it did not win the battle alone. What the controversy did do was demonstrate the power of data mining in live electronic medical records. Graham and his colleagues were able to retrospectively construct what was effectively a clinical trial based on over 2 million patient-years of data. The fact that EMR records are not as rigorously accurate as clinical trial data capture was rendered moot by the huge volume of data analyzed.
Today, the value of Big Data in epidemiology is unquestioned, and the current focus is on developing better analytics and in parallel addressing concerns about patient privacy. The HITECH Act and Obamacare are increasing the rate of electronic biomedical data capture, and improving the utility of such data by requiring the adoption of standardized data structures and controlled vocabularies.
We are witnessing the dawning of an era, and hopefully the start of the transformation of our broken healthcare system into a learning organization.
I believe if we reduce the time between intention and action, it causes a major change in what you can do, period. When you actually get it down to two seconds, it’s a different way of thinking, and that’s powerful. And so I believe, and this is what a lot of people believe in academia right now, that these on-body devices are really the next revolution in computing.
I am convinced that wearable devices, in particular heads-up devices of which Google Glass is an example, will be playing a major role in medical practice in the not-too-distant future. The above quote from Thad Starner describes the leverage point such devices will exploit: the gap that now exists between deciding to make use of a device and being able to carry out the intended action.
Right now it takes me between 15 and 30 seconds to get my iPhone out and do something useful with it. Even in its current primitive form, Google Glass can do at least some of the most common tasks for which I get out my iPhone in under five seconds, such as taking a snapshot or doing a Web search.
Closing the gap between intention and action will open up potential computing modalities that do not currently exist, entirely novel use case scenarios that are difficult even to envision before a critical mass of early adopter experience is achieved.
The Technology Review interview from which I extracted the quote raises some of the potential issues wearable tech needs to address, but the value proposition driving adoption will soon be truly compelling.
I'm adding some drill-down links below.
Practices tended to use few formal mechanisms, such as formal care teams and designated care or case managers, but there was considerable evidence of use of informal team-based care and care coordination nonetheless. It appears that many of these practices achieved the spirit, if not the letter, of the law in terms of key dimensions of PCMH.
One bit of good news about the Patient Centered Medical Home (PCMH) model: here is a study showing that in spite of considerable challenges to PCMH implementation, the transformations it embodies can be and are being implemented even in small primary care practices serving disadvantaged populations.
We are delighted to introduce our new series of Health Insights. These free to attend events for healthcare professionals feature interactive round table activities, news on how the latest innovations support the health and care community, and best practice experiences from NHS Trust colleagues.
CLICK HERE TO SEE NEW DATES AND LOCATIONS
Starting in Leeds and Newbury this October and held in association with NHS England, each one day conference will feature:
Digital Discovery Sessions
- facilitated round tables exploring procurement issues
An update from NHS England on Tech Funds and Open Source Programme
Host Roy Lilley, popular Healthcare Broadcaster, with lively panel debates
Speakers will include Rob Webster, CEO of NHS Confederation, Tim Straughan, Director of Health and Innovation at Leeds and Partners, and Clive Kay, Chief Executive of Bradford Teaching Hospitals.
REGISTER FREE TODAY
We hope to see you at your local Health Insights.
readiness to hand
information storage and retrieval, access, efficiency, space, security, information sharing, patient safety, legibility
cost, savings, governance, reporting (locally, nationally, internationally), policy integration
Twitter, like the Internet in general, has become a vast source of and resource for health care information. As with other tools on the Internet, it also has the potential to spread misinformation. In some cases this happens by accident, at the hands of those with the best intentions; in other cases it is deliberate, as when companies promote their products or services using fake accounts.
In order to help determine the credibility of tweets containing health-related content, I suggest using the following checklist (adapted from Rains & Karmikel, 2009):
Ultimately it is up to the individual to determine how to use health information they find on Twitter or other Internet sources. For patients, anecdotal or experiential information shared by others with the same illness may be considered very credible, while those conducting research may find it a less valuable information source. Conversely, a researcher may only be looking for tweets that reference peer-reviewed journal articles, whereas patients and their caregivers may have little or no interest in that type of resource.
Rains, S. A., & Karmikel, C. D. (2009). Health information-seeking and perceptions of website credibility: Examining Web-use orientation, message characteristics, and structural features of websites. Computers in Human Behavior, 25(2), 544-553.
The altmetric movement is intended to develop new measures of production and contribution in academia. The following article provides a primer for research scholars on what metrics they should consider collecting when participating in various forms of social media.
If you participate on Twitter, you should keep track of the number of tweets you send, how many times your tweets are replied to or re-tweeted by other users, and how many @mentions (tweets that include your Twitter handle) you receive. ThinkUp is an open source application that lets you track these metrics across Twitter as well as other social media tools such as Facebook and Google+. Please read my extensive review of this tool. The service is free.
You should register with a URL-shortening service such as bit.ly, which will provide you with an API key you can enter into the applications you use to share links. This lets you keep your click-through statistics in one location: bit.ly records how many times each link you created was clicked, along with the referrer and the location of the user. Consider registering your own domain name and using it to shorten the links in your tweets as a means of branding. You can also use your custom links in electronic copies of your CV or at your own web site, and to create the links used on your site, which tells you which ones are clicked most often. For example, all of the links in this article were created using my custom bit.ly domain. In addition, you can tweet a link to any research study you publish, both to publicize it and to keep track of how many clicks it receives. Bit.ly is a free service.
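Once you have per-link click data out of a shortener, summarizing it takes only a few lines. A minimal sketch that totals clicks per link from a CSV export; the column names ("link", "clicks", "country") are assumptions for illustration, not bit.ly's actual export format:

```python
import csv
from collections import Counter
from io import StringIO

def clicks_per_link(csv_text):
    """Total clicks per short link from a CSV export.
    Column names "link" and "clicks" are assumed for this sketch."""
    totals = Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row["link"]] += int(row["clicks"])
    return totals

export = ("link,clicks,country\n"
          "bit.ly/abc,10,CA\n"
          "bit.ly/abc,5,US\n"
          "bit.ly/xyz,3,CA\n")
print(clicks_per_link(export).most_common(1))
# [('bit.ly/abc', 15)]
```

The same pattern works for any service that lets you download stats as CSV; only the column names change.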
Another tool for measuring your tweets is TweetReach. This service allows you to track the reach of your tweets by Twitter handle or by individual tweet. It provides output in formats that can be saved for use elsewhere (Excel, PDF, or the option to print or save your output by link). To use these latter features you must sign up for an account, but the service is free.
Buffer is a tool that allows you to schedule your tweets in advance. You can also connect Buffer to your bit.ly account so that the links you share are included in your overall analytics. Buffer provides its own click-through counts, but note that these may not match what appears in bit.ly. The service is free, with paid upgrade options that provide more detailed analytics.
Google Scholar Citation Profile
You can set up a profile with Google Scholar based on your publication record. The metrics provided by this service include a citation count, h-index and i10-index. When someone searches your name using Google Scholar, your profile appears at the top, before any of the citations. This provides a quick way to distinguish your articles from those of someone else with the same name.
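The h-index and i10-index that Google Scholar reports are simple functions of your per-paper citation counts, so you can verify them yourself. A quick sketch:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

# A hypothetical record of six papers:
record = [25, 8, 5, 3, 3, 1]
print(h_index(record))    # 3  (three papers with >= 3 citations each)
print(i10_index(record))  # 1  (one paper with >= 10 citations)
```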
Google Feedburner for RSS feeds
If you maintain your own web site and use RSS feeds to announce new postings, you can also collect statistics on how many times each article is clicked. Feedburner, recently acquired by Google, provides one way to measure this: you enter your RSS feed URL and a report is generated, which can be saved in CSV format.
Journal article download statistics
Many journals provide statistics on the number of times articles are downloaded. Keep track of those associated with your publications by visiting the journal's site. For example, BioMed Central (BMC) maintains an access count for the last 30 days, the last year, and all time for each of your publications.
Other means of contributing to the knowledge base in your field include participating in web-based forums or sites such as Quora. Quora provides threaded discussions on topics and allows participants to both pose and answer questions. Other users vote on your responses, and you accrue points; if you want another user to answer your question, you must "spend" some of those points. Providing a link to your public Quora profile on your CV demonstrates another form of contribution to your field.
Paper.li is a free service that curates content and renders it in a web-based format. The focus of my Paper.li is the use of technology in Canadian health care, and I have also created a page that appears at my web site. Metrics on the number of times your paper has been shared via Facebook, Twitter, Google+ and LinkedIn are available.
Twylah is similar to Paper.li in that it takes content and displays it in a newspaper format, except that it uses your Twitter feed. There is an option to create a personalized page; I use tweets.lauraogrady.ca. I also have a Twylah widget at my web site, in the side bar, that shows my trending tweets in a condensed magazine layout. This free service does not yet provide metrics, but it can help increase your tweet reach, and if you create a custom link for your Twylah page you can keep track of how many people visit it.
Analytics for your web site
Log file analysis
If you maintain your own web site, you can use a variety of tools to capture and analyze its use. One of the most popular applications is Google Analytics. If you use a content management system such as WordPress, there are many plug-ins that will add the tracking code to the pages at your site and produce reports. WordPress also provides built-in analytics available through its dashboard.
If you have access to the raw log files, you can use a shareware log-file analysis program or the open source tool Piwik. These tools provide summaries of which pages of your site are visited most frequently, which countries your visitors come from, how long visitors remain at your site, and which search terms were used to reach it.
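If you go the raw-log route, even a short script can answer the "most visited pages" question. A minimal sketch for Apache's combined log format; the sample lines are invented for illustration:

```python
import re
from collections import Counter

# Matches the request path and status code in an Apache "combined" log line.
LOG_RE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+) [^"]*" (\d{3})'
)

def top_pages(log_lines, n=5):
    """Return the n most frequently requested pages (successful hits only)."""
    hits = Counter()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group(2) == "200":
            hits[m.group(1)] += 1
    return hits.most_common(n)

sample_log = [
    '1.2.3.4 - - [10/Oct/2012:13:55:36 -0700] "GET /about.html HTTP/1.0" 200 2326',
    '5.6.7.8 - - [10/Oct/2012:14:01:00 -0700] "GET /about.html HTTP/1.1" 200 2326',
    '9.9.9.9 - - [10/Oct/2012:14:02:00 -0700] "GET /missing HTTP/1.1" 404 120',
]
print(top_pages(sample_log))  # [('/about.html', 2)]
```

Dedicated tools like Piwik go much further (visitor countries, session length, search terms), but a sketch like this shows how little machinery the basic counts require.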
All of this information should be included in the annual report you prepare for your department and in your tenure application. Doing so will increase awareness of altmetrics and improve our ability to have these efforts "count" as contributions in your field.