Physicians know that their clinical decisions should be informed by evidence. But it's a tall order to keep up with hundreds of journals and apply evidence from around the world to one's own patients.
So how can hospitalists determine the optimal course of evidence-based care while being sure to do no harm? Enter the evidence-based practice center, a model in which reviewers synthesize the evidence available for a specific drug, device, procedure, test, or other clinical innovation.
This year, ACP will launch its Center for Evidence Reviews, which will develop in-house evidence reviews to help inform the oldest clinical practice guidelines program in the country, said Amir Qaseem, MD, PhD, FACP, vice president of clinical policy for ACP. “We decided to establish an evidence-based practice center within the College so that we can select key clinical topic areas that are important for our membership,” he said.
Evidence-based practice centers also exist on the local health system level, albeit to a lesser extent. Hospital-based evidence review offers particular benefits, such as reviews that are more relevant to the local clinical climate and the opportunity to implement the findings into practice, experts said.
“This model of a center synthesizing evidence to inform decision making, it's not necessarily a novel model, per se. It's been used at many different levels across the globe, including by governments to make decisions....I think the novel aspect here is making this work at a local health system level,” said hospitalist Craig A. Umscheid, MD, MSc, FACP, who directs Penn Medicine's Center for Evidence-based Practice in Philadelphia.
So far, local evidence-based practice centers are uncommon in the U.S., but increasing focus on the value of health care could lead more hospitals and health systems to look at the model.
How it works
The University of Pennsylvania's center was established in 2006 for 2 main reasons: a desire to facilitate evidence-based decision making at the institutional level and concerns about industry influence on those decisions, Dr. Umscheid said. “For example, you could have a physician that's supported by industry at some level wanting to bring a product or a supply into the health system, or you could have a department advocating for itself to do a particular procedure,” he said. “So the question was, ‘How do you bring evidence to bear on these institutional decisions that, on some level, is more objective and impartial?’”
The center is organized as a group within the larger office of the chief medical officer, which serves the entire health system. In addition to Dr. Umscheid as director, other staff members include physician and nurse liaisons from different hospitals within the health system, librarians, a biostatistician, a health economist, and 3 research analysts who conduct the rapid literature reviews.
These reviews focus primarily on improving care for relatively common conditions among the inpatient population, said Dr. Umscheid, an assistant professor of medicine and epidemiology at Penn. “The external environment has really forced a closer look at quality and safety on the inpatient units, although incentives are broadening to focus on the ambulatory sites, as well,” he said. Past reviews have addressed topics such as sepsis care, blood product management, health care-associated infections, and hospital readmissions, Dr. Umscheid said.
In addition to the rapid reviews, the center also develops and deploys clinical decision support interventions through the electronic health record (EHR) and clinical pathways, Dr. Umscheid said. “Often, we'll do rapid evidence reviews to inform a decision, and then we'll implement the findings from that review through computerized clinical decision support,” he said.
One clinical decision support intervention the center launched in the EHR was an early warning and response system for sepsis. In real time, the tool monitored the lab values and vital signs of inpatients in the non-ICU acute setting and alerted clinicians if patients had 4 or more warning signs for sepsis at any given time. The tool produced a statistically significant increase in early sepsis care, ICU transfer, and sepsis documentation, according to results published in January 2015 by the Journal of Hospital Medicine (JHM).
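The trigger logic described above — continuously counting how many warning criteria a patient meets and alerting at a threshold — can be sketched as a simple rule-based check. This is an illustrative sketch only; the specific criteria, cutoffs, and the `Vitals`/`warning_signs` names are hypothetical and not Penn's actual rule set.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    """A snapshot of the lab values and vital signs the tool monitors."""
    temp_c: float          # body temperature, Celsius
    heart_rate: int        # beats per minute
    resp_rate: int         # breaths per minute
    wbc_k_per_ul: float    # white blood cell count, thousands per microliter
    systolic_bp: int       # mm Hg

def warning_signs(v: Vitals) -> list[str]:
    """Return the warning criteria this patient currently meets
    (thresholds here are illustrative sepsis-style screening values)."""
    signs = []
    if v.temp_c > 38.0 or v.temp_c < 36.0:
        signs.append("abnormal temperature")
    if v.heart_rate > 90:
        signs.append("tachycardia")
    if v.resp_rate > 20:
        signs.append("tachypnea")
    if v.wbc_k_per_ul > 12.0 or v.wbc_k_per_ul < 4.0:
        signs.append("abnormal WBC")
    if v.systolic_bp < 90:
        signs.append("hypotension")
    return signs

def should_alert(v: Vitals, threshold: int = 4) -> bool:
    """Fire an alert when the patient meets the threshold number of
    warning signs, mirroring the '4 or more' trigger described above."""
    return len(warning_signs(v)) >= threshold
```

In a production EHR integration, a check like this would run against each new result in real time and route the alert to the covering clinician, rather than being called manually.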
The center produced about 250 rapid reviews in its first 8 years, and it took an average of 70 days to complete a review, according to an assessment that was published in the March JHM. About 12% of the reports helped inform decision support interventions in the EHR. The most common topics reviewed were drugs, medical devices, and processes of care, and the most common requestors of reviews were clinical departments, chief medical officers from across the health system, and purchasing committees.
In Southern California, Kaiser Permanente's Evidence-Based Medicine (EBM) Services, which has existed since the early 1990s, conducts evidence reviews of new technologies and procedures for the entire health system. It differs from Penn not only in geographic location and patient population but also in structure, explained Marguerite A. Koster, MA, MFT, its senior manager.
The program's 10 evidence analysts conduct rapid reviews, which are used to inform Kaiser's clinical practice guidelines, as well as its medical technology assessment and health system implementation programs, she said. The group does about 350 to 400 evidence reviews each year, about 20 of them on new technologies, Ms. Koster said.
Evidence-based reviews are presented to a medical technology assessment team, which determines if the quality of evidence is high enough for Kaiser to recommend the technology, she said. Then, an implementation team identifies the best way to implement those technologies within the health care system, Ms. Koster said. The unit also develops evidence-based clinical practice guidelines specifically for the Southern California region and collaborates with Kaiser's national guidelines program to help develop a set of national guidelines shared by its 7 U.S. regions, she said.
Because the EBM unit works for the entire health system, review requests span both outpatient and inpatient settings, Ms. Koster said. “Primarily, we tend to focus on new medical technologies because that's where a lot of the action is right now. The existing procedures that have been going on for 10 or 20 years, unless there's a new application of that technology, we normally don't address those. It's really more around the new technologies that patients might be interested in,” she said. The unit has reviewed technologies such as automated hand hygiene monitors, intraoperative imaging in brain and spine surgery, and telemedicine in the ICU, Ms. Koster said.
Kaiser physicians can request rapid reviews, and patients can register appeals for a new technology, she said. EBM Services gathers the evidence and produces the reviews, but it leaves the clinical decision making to the physicians. “In my unit, we do not make any decisions about whether something is medically necessary,” Ms. Koster said.
Cincinnati Children's Hospital Medical Center in Ohio uses a similar review model to rapidly incorporate evidence into practice. Last year, the program created about 37 evidence summaries, which include a narrative, a summary table with details on the published studies, and a grade on the quality of evidence, according to Wendy Gerhardt, MSN, RN-BC, director of evidence-based decision making in the hospital's James M. Anderson Center for Health Systems Excellence.
The program has 3 full-time appraisers who review the evidence, and a typical rapid review takes about 2 to 4 weeks, she said. Requests generally come through clinical groups but are aligned with the organization's strategic plan, and a family adviser communicates with patients' families to get their input, as well, according to Ms. Gerhardt. The process is designed to meet the Institute of Medicine's standards for guideline development.
The program confirms evidence requests through use of the “PICO” method to maintain focus on the patient/population, intervention, comparison, and outcome. “I've always been surprised at how much clinicians think process rather than outcome.... We try to keep them focused on the outcome because that's actually what we're going to measure to demonstrate improvement has been obtained,” she said.
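A PICO-structured request can be thought of as a small, four-field record that forces the question to name a measurable outcome. The sketch below is illustrative; the `PicoQuestion` type and the example review topic are hypothetical, not the Anderson Center's actual intake format.

```python
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    """One evidence-review request, structured by the PICO method."""
    population: str    # P: the patient group the review applies to
    intervention: str  # I: the practice or technology under review
    comparison: str    # C: the alternative it is weighed against
    outcome: str       # O: the measurable result used to judge improvement

    def summary(self) -> str:
        """Render the request as a single focused clinical question."""
        return (f"In {self.population}, does {self.intervention} "
                f"compared with {self.comparison} improve {self.outcome}?")

# Hypothetical example of a PICO-framed request
q = PicoQuestion(
    population="children hospitalized with asthma",
    intervention="a standardized discharge checklist",
    comparison="usual discharge practice",
    outcome="30-day readmission rate",
)
```

Note that the `outcome` field is a measurement, not a process step — which is exactly the discipline Ms. Gerhardt describes keeping clinicians focused on.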
A local system that performs evidence reviews can provide validity, even if clinicians already know the evidence, Ms. Gerhardt said. “It actually provides [evidence] back in writing, and they have something they can actually do something with, like create a care recommendation, as well as help solidify standardized practice within their area,” she said. “I think that it's about credibility, as well—to make sure that people trust the care that they're getting.”
Although hospitals haven't been quick to develop their own evidence review centers, Dr. Umscheid said evidence is building that this is an effective model for improving quality and safety outcomes. And with reimbursements being relatively flat and the costs of care rising, hospital executives are paying close attention to the value of every dollar that they spend on patient care, he said.
“I think those forces converge to support evidence-based practice, not just at the individual physician-patient level but at an organizational or health system level,” Dr. Umscheid said. “I think there is real potential for this model to support an institutional culture of evidence-based practice.”