The unit team quickly saw several areas for improvement. Many patients needed endoscopy procedures, which require fasting as preparation. More frequently than we thought acceptable, the procedure was delayed, with the patient waiting, unable to eat, and confused about the timing. Or worse, the procedure was rescheduled for the next day, causing significant patient dissatisfaction because it required continued fasting. Sometimes the postponements delayed discharges, so this was an opportunity for operational improvement as well as enhanced patient satisfaction.
Because the unit’s population typically had multiple medical problems, significant care coordination was required for successful patient discharge. Cooperation between the social worker and care coordinator was essential, but it quickly became apparent through our weekly meetings that these individuals abhorred working together and did not communicate effectively. Their mutual dislike was so intense that they avoided being on the floor at the same time, which obviously complicated care coordination. Our team also discovered that the nurse manager was often sequestered in her office handling paperwork and other business and did not regularly round on staff or patients. Furthermore, physicians rarely talked with nurses about care plans.
This small team, huddling once a week for about an hour, unearthed a variety of opportunities, some of which could be fixed easily and others that required more time and effort. Getting the nurse manager to spend more time rounding required her manager to set and enforce new expectations. Getting the social worker and care coordinator to work together simply involved critical conversations about their job responsibilities and holding them accountable for participating in teamwork. Improved schedule coordination between the unit and the endoscopy suite would require a better process, but identifying the problem was an important first step.
This modest unit project was a “quick win” and captured some “low-hanging fruit.” A month after we started huddling to address problems, the unit’s HCAHPS scores saw the highest rise in the organization—and in the history of the hospital (see Figure 9.1). To say I was thrilled would be a vast understatement. I felt like I had struck gold. This pilot project proved we could actually impact HCAHPS scores with simple solutions driven by frontline caregivers.
Figure 9.1 A “quick win” in a unit project.
When I updated my fellow executive team members about the project and showed these metrics, Steve Glass, our CFO, looked at me and said, “Now, this is really important stuff.” Cosgrove agreed! We demonstrated we could change the patient experience in measurable ways.
These small initial projects taught us an important lesson about piloting at the local level. But our next steps demonstrated the challenges of rolling out something enterprise-wide. We had scored a success because we went right to the local level for implementation of the patient experience huddles. When we tried to expand this tactic to other units and hospitals, we found little agreement that this was a best practice, despite our data. The patient experience huddle was considered an option, not a mandate. Every unit required coaxing the unit manager, the physician leader, and others to participate. We did not yet have the ability to force implementation, so it was only a soft win.
This early project also taught us lessons about hospital processes and tactics. Hospitals are full of processes, literally thousands of interconnected systems and processes that together deliver the complex product we call healthcare. Before we could layer on any patient experience “solution,” we had to first ensure that the basic hospital processes were functional. These processes generally work efficiently and achieve what they’re designed to do. However, many processes are managed in silos, with staff in charge of one process having no idea what other silos deliver. Their systems do not communicate, and they’re not engineered to work together.
An endoscopy being delayed or cancelled without informing the nursing team or patient is just one example. Many nurses complained they were powerless over what was happening in endoscopy, and yet they had to break bad news to patients about postponed procedures. Hospitals run on process. System failure results when processes don’t function effectively (endoscopy scheduling) or don’t interface smoothly with other processes (coordination between the endoscopy suite and the nursing unit). No amount of smiling, or layering service excellence tactics on top of the problem, will improve the experience for the patient. We have to fix broken processes.
It’s difficult to identify and repair faulty hospital processes. It also takes leadership courage because broken processes are typically owned by poor managers or managers who lack accountability. If meals delivered to patients do not match their menu selections, that needs to be fixed. If patients receive a continual busy signal when they call for appointments, that’s a problem that needs to be fixed. It’s a fallacy to believe that more layered service excellence strategies, extra apologies, or work-arounds will mitigate these problems. Fix what’s broken, and develop or outplace the bad manager. Don’t just put a Band-Aid on something that doesn’t work.
If silos are beginning to come down and we’re reasonably sure hospital processes are functioning, the question becomes what can be implemented to help make a difference. This brings us to the topic of best practices and another lesson learned from our experiment on the floor. Certain processes and tactics have earned the label of best practices, and these are the ones worth implementing.
A best practice is defined as “a method or technique that has consistently shown results superior to those achieved with other means and that is used as a benchmark. In addition, a ‘best’ practice can evolve to become better as improvements are discovered.”1
To be worthy of consideration, a best practice should be scalable in your environment and help attain your goals. When the practice was implemented in other areas, was the improvement sustained, and for how long? There are differing opinions regarding how long something must work before it’s considered a best practice to be implemented more broadly; I recommend three to six months. Best practices should also have an associated metric so you know whether they actually make a difference.
Winnowing best practices is an important component of a patient experience improvement program. We cannot do everything, and what we choose to do should have broad impact. Nurse hourly rounding, or as some call it, purposeful hourly rounding, is an example of a best practice. This involves a nurse going into a patient’s room every hour and running through a checklist. The following are typical questions:
1. Do you have to use the bathroom?
2. Do you have pain?
3. Do you need to be repositioned?
4. Do you need your belongings moved closer?
5. Do you need anything else?
This practice has been demonstrated to improve patient satisfaction scores, reduce call-light usage, decrease falls and pressure ulcers,2 and reduce medication errors. Clearly, this best practice affects patient safety, quality, and satisfaction; its impact on the organization can be high yield.
At Cleveland Clinic, hourly rounding was practiced sporadically. When we evaluated the HCAHPS scores of floors where it was practiced routinely, performance was better. At one of our community hospitals, a nurse manager whose HCAHPS scores routinely achieved the 90th percentile was convinced it was due to consistent hourly rounding.
K. Kelly Hancock, now our executive chief nursing officer but at the time director of nursing for the Heart & Vascular Institute, agreed to conduct a pilot. She picked several units and mandated hourly rounding. We added a new question to the inpatient survey sent after discharge asking patients whether a nurse visited hourly. Using the standard HCAHPS format, we asked whether a nurse always, usually, sometimes, or never came every hour. We collected 4,000 patient responses during the 90-day pilot.
Results were striking. If patients said they “always” saw the nurse, nursing domain HCAHPS scores achieved 90th percentile performance, as shown in Figure 9.2. Scores progressively worsened as the patient responded “usually,” “sometimes,” or “never.” There was little doubt purposeful hourly rounding made a significant change in the scores. Hancock’s pilot validated in our organization what was well described in the nursing literature. The improvement was so significant that Cosgrove mandated nurse hourly rounding for all units, an unprecedented move that has had meaningful impact on the organization and how we care for patients. This is an example of how we took a best practice, tested it in several local environments, and, after confirming effectiveness, implemented it enterprise-wide.
Figure 9.2 Nurse hourly rounding and HCAHPS scores.
To ensure that rounds are done, we continue to survey patients about them, and we require bedside nurses to complete a tracking sheet in patient rooms. Nurse managers routinely audit the practice in their units. Nurse hourly rounding is a best practice that impacts not only patient satisfaction, but safety and quality, and it should be routinely practiced in every hospital worldwide.
This pilot also taught us an important lesson about partnering with critical stakeholders. Hancock was an early supporter and critical ally in all our efforts to improve the patient experience. While at the time she was responsible for only a small portion of our overall nursing infrastructure, without her leadership and support of this pilot, it would not have been successful. Once HCAHPS scores demonstrated the magnitude of improvement, the rest of the organization could not oppose implementation. Patient experience leaders need critical stakeholder collaborators like Hancock for efforts to succeed.
Our sophistication regarding how to tackle patient experience problems slowly improved. We were successfully piloting small projects, we had identified critical stakeholder partners, and we were slowly achieving success.
HCAHPS survey results would soon be linked to reimbursement in 2013, and we knew this would create a tremendous burning platform for our messaging. HCAHPS questions are neatly organized into different domains:
1. Nurse Communication
2. Doctor Communication
3. Responsiveness of Hospital Staff
4. Pain Management
5. Communication About Medicines
6. Discharge Information
7. Cleanliness and Quietness of Hospital Environment
8. Reputation-Related Measures
These domains allowed us to set HCAHPS scores as the initial primary outcome metrics for improving the patient experience. Anyone involved in hospital operations knows there are literally hundreds of metrics we could have chosen. For the patient experience alone, there are well over a hundred questions in the various surveys we distribute. As leaders, we cannot ask the organization to focus on all of them, but we must establish the most important ones.
We formed HCAHPS improvement teams for each domain, encompassing any projects or activities affecting that particular domain. Each team was led by a project manager and had broad representation from across the enterprise. We made it very clear that the team represented the enterprise; if a domain-related project was not sponsored by the team, it was not official and would not be resourced.
The quiet at night improvement team established the Help Us Sustain Healing (HUSH) protocol, which consists of the following elements:
1. Signs reminding people to be quiet posted on the nursing units
2. An announcement made at 8 p.m. to notify patients and visitors that it was nighttime and they needed to be mindful of patients resting
3. Dimming of lights on the nursing units
4. Closing the doors of some patient rooms
5. Providing education material asking patients and visitors to be mindful of patients’ recovery and to keep voices low and the television off after a specified time
The HUSH protocol also assigned team leads on every nursing unit to drive the tactics. In addition, the project leader audited individual floors for compliance and supplied sound recordings of each floor. This information was fed back immediately to nurse managers and the HUSH champions.
Dividing up the HCAHPS domains also allowed us to distribute responsibility throughout our operational areas. A good example is cleanliness: the environmental services (EVS) team, those responsible for cleaning the hospital, took ownership of the cleanliness scores. Every EVS caregiver is trained on how his or her work impacts HCAHPS scores and the patient experience. Unit HCAHPS scores are regularly distributed to EVS caregivers. Cleveland Clinic’s cleanliness scores have made significant improvements and lead our peer group of major health systems, as shown in Figure 9.3.
Figure 9.3 Improvement in cleanliness scores.
Led by an innovative leader, Michael Visniesky, Senior Director for Environmental Services, EVS has become an energized and engaged team, adopting slogans and contests to engage caregivers. The team created buttons proclaiming, “Always clean!” But since Medicare banned the word always from the lexicon of what we’re permitted to say to patients, the EVS team developed a new button slogan, “Our goal: Clean at all times!” Participating in leadership rounds one day, I asked an EVS caregiver assigned to clean a nursing unit exactly what her role was. She responded, “My job is to ensure a great patient experience by helping our patients.” That is employee engagement!
Sometimes Things Don’t Work
Not all ideas are good ones, and while we hope to figure that out before we implement them across the enterprise, sometimes we’re fully deployed before we realize that tactics are just not having the desired effect. Making sure your program is adding value is extremely important, and stopping a program that is not having impact, while difficult, is at times necessary.
I inherited a program called service navigators. These were 12 nonclinical individuals assigned to specific inpatient floors. They rounded on patients daily to ensure they had everything they needed. For instance, if a navigator rounded on a patient who complained of not seeing the doctor, the navigator would call the doctor. If the patient needed help with preparations for going home, the navigator contacted the social worker or care coordinator. If the patient needed an extra pillow or blanket, the navigator would get it. If the patient was in pain, a nurse was found.
These caregivers were not licensed and were not considered clinical practitioners, so they could participate only in very rudimentary nonclinical activities. But they took care of lots of little details and helped bridge gaps between providers.
At first, we believed this extra help was really impacting the patient experience. But we started to notice that floors with navigators were not performing any better than floors without them. The program was started after what appeared to be a very successful pilot on one of the inpatient floors. During the pilot, HCAHPS scores were evaluated before the start of the program and after it was up and running. Inpatient satisfaction scores had improved significantly. The improvement was attributed to the navigator program, so it was adopted in most units across our main campus hospital. After nearly two years of full implementation, we did not realize similarly improved scores in the other units.
More carefully evaluating the navigators, we discovered that while they were still rounding on patients, they had morphed into pseudo project managers and were conducting a variety of other activities for the units. One of the navigators chaperoned the volunteers who brought therapy dogs. This was just busywork; the volunteers didn’t need a chaperone. Some navigators had been “captured” by the units as extra caregivers to help out with duties as assigned. Overall, the navigators were spending less time rounding on patients and more time doing things not directly improving the patient experience performance.
We conducted a controlled trial, splitting a unit. One half had a navigator visit every patient daily, and we tracked all patient issues. The second half, which had a similar service line and patients, did not have a navigator. HCAHPS performance would be the ultimate measure. We ran the pilot for two months, reviewed patient feedback every week, and carefully tracked HCAHPS results.
The navigator certainly filled a variety of service gaps. Patients needed more frequent communication with providers, and they had a variety of service needs, like occasionally requiring clean bed linen. The navigator developed good relationships with patients and families. She was a trusted member of the team and generally viewed as someone who could get things done. However, the HCAHPS scores did not change. Intense navigator follow-up made no difference in how patients rated their in-hospital experience.
The program needed to be retired. It was a difficult decision to lay off a dozen people, but it was right for the organization. Throwing in the towel when something isn’t working is tough. Managers and leaders tend to become emotionally attached to “good ideas.” Once programs are started, they’re often hard to stop and even more difficult to relinquish when layoffs are involved. The navigators were a terrific group—committed, hardworking, passionate, and caring. Every unit manager with one thought the navigator was a treasured member of the team, invaluable for care delivery. I would never dispute that; however, the navigators were not having demonstrable impact.
When I communicated the reduction in force to the affected clinical chairs and nursing managers and our key leaders across the enterprise, people were not happy, and some were outraged. There was shock and disbelief that we would eliminate such a “vital service” to patients. Many predicted that HCAHPS scores on the affected floors would take an immediate, significant plunge. Some expressed their displeasure to me directly; others talked behind my back. It was clear the navigators had won the hearts and minds of their leaders, and extracting them from their units was not a pleasant task. Everyone warned me that patient care would suffer without them and, more important, that HCAHPS scores would nose-dive. It would be less than honest to say I wasn’t worried they might be right. Only one nursing director supported my decision; in retrospect, this was more likely out of friendship than her actual belief that it was the right thing to do.