by Thomas H. Lee
Data Issues
Even if clinicians accept the idea that patient experience is important and is important to measure, they often cite an array of concerns about the data that are being collected and, they feel, used against them. Creating an environment in which clinicians feel that the goal is to improve care rather than judge them is of obvious importance, since data collection will never be complete and the ability to analyze data will never be perfect. However, to be realistic, clinicians will always feel somewhat judged by patient experience measurement, and every effort must be made to make that judgment feel fair.
The types of concerns most frequently articulated by clinicians include the following:
• The sample sizes are too small.
• The respondents are not representative of the overall population of patients who are receiving care from a clinician.
• The respondents really received care from many clinicians, and it is inappropriate to attribute the results to any one of them.
• The data are old, and care has changed in the interim.
• The scores fluctuate.
• The scores are too tightly packed; so many patients give high ratings that patient-reported data cannot be used to discriminate among providers.
• The interpretation of the data/benchmarking does not have adequate adjustment for risk factors that may account for lower performance.
• Analyzing and reporting data to describe the relative performance of physicians or hospitals leads to a harsh picture; for example, physicians with fairly high reliability (e.g., 90 percent or more) on a measure might be described as average or even below average.
All these concerns have a basis in reality, and they can all be addressed or mitigated through the approaches described in the remainder of this chapter. More data can be collected in a timelier manner if new data collection approaches are used. The ideal of collecting data from 100 percent of patients will probably never be realized, but when more data are collected in a timely way, the trends they reveal become hard to ignore.
Risk adjustment will never be complete, and so even if 100 percent of patients respond to surveys, comparisons among providers will always be susceptible to error. That is why it is so important that clinicians consider the goal of performance measurement as improvement: they are competing with themselves, trying to be better next year than they are now. Assuming that a physician’s or hospital’s patient mix is not changing dramatically, trying to compete with oneself sounds like a fair fight and a good goal. Opportunities to learn and improve can be highlighted through the use of appropriate benchmarks, such as comparing performance against similar organizations or against doctors with a similar specialty.
It is true that patients are generous graders and tend to give providers high marks (e.g., 80 percent of patients give physicians a 5 on a 5-point scale when asked about their likelihood of recommending them), an observation that belies the concern that only angry patients take the time to fill out surveys. That said, 20 percent do not give top ratings. And when data are analyzed across multiple measures, plenty of patients decline to give hospitals or doctors top marks on every aspect of care. Top box analyses of all hospitals using HCAHPS between January 1, 2013, and June 30, 2014, show that almost a decade after HCAHPS was introduced, only 20 percent of patients reported that 100 percent of their needs were met. Nationwide, hospital inpatients rate individual HCAHPS items as optimal between 49 percent and 89 percent of the time.
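For readers unfamiliar with the term, a top box analysis simply counts the share of respondents who choose the highest available rating. Here is a minimal sketch in Python, using made-up 5-point responses rather than actual HCAHPS data:

```python
# Hypothetical 5-point "likelihood to recommend" responses; 5 is the top box.
responses = [5, 5, 4, 5, 3, 5, 5, 2, 5, 4, 5, 5]

# Top box rate = share of respondents who give the highest possible rating.
top_box_rate = sum(1 for r in responses if r == 5) / len(responses)
print(f"Top box rate: {top_box_rate:.0%}")
```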
In short, no providers are perfect, and there are plenty of opportunities for improvement if the data are used to find them. How do we resolve the tension between our need to improve and the imperfections in the data? The answer involves improving the measures, getting more data, and using the data appropriately.
Improving Measures, Data, and Reporting
Part of the leadership challenge driving an epidemic of empathy in the pursuit of better care is cultural. Ideally, clinicians and other personnel should accept the following:
• The organization has a noble goal that trumps all other concerns (e.g., the reduction of suffering).
• The goal is improvement, not being ranked the best. No one is the best. Everyone has aspects of care that can be improved, and everyone is starting from scratch with the very next patient.
• The orientation is toward care in the future, not what has happened in the past. Resting on one’s laurels is not an option in healthcare. Patients do not care what you did for those who came before them; they want relief of their present and future suffering. Data provide insight into opportunities for doing better.
• Measures and data will never be perfect. There will always be issues related to potentially perverse effects if measures are carried to an extreme (e.g., giving every patient narcotics). But the pursuit of that noble goal cannot be delayed until perfection in measurement and data collection is achieved. Not measuring would be a major strategic error. Fortunately, the organization uses common sense in the application of measures and data, mitigating those potentially perverse effects.
Acceptance of these cultural themes relies on good faith efforts by organizations to do all they can to make the measures and the data as good as they can be. Fortunately, tremendous progress in patient experience measurement and reporting is under way, enabling healthcare organizations to mitigate and even eliminate many of the most important concerns of clinicians. These advances allow clinicians to focus on the critical challenge—actually improving their care—rather than searching for weaknesses in the data that would provide a rationale for rejecting their implications. Examples are now available to show that modern era patient experience measurement can drive both patient-centered care and professional pride.
In sum, measurement of patient experience is imperfect and always will be. To use the resulting data to drive an epidemic of empathy, the right course is to make measurement better and more complete and use the data wisely. The key areas of improvement under way are the following:
1. Measuring what matters
2. Advances in data quantity and collection methods
3. Advances in data analysis
Measuring What Matters to Patients
The concept of patient-centered care is becoming increasingly clear to healthcare providers; it is nothing more and nothing less than organizing around meeting patients’ needs. Patients’ needs are not organized in patients’ minds in accordance with the traditional structure of medicine, which is based on various types of clinical expertise such as surgery and gastroenterology. Patients are not focused on whether individual clinicians are competent or reliable in their various roles; patients assume this competence exists, and they are usually correct.
These points lead to a conclusion that is disruptive for the measurement of performance in healthcare: the spotlight should be on the patient, not the provider. Clinicians’ reliability is important, of course, but it is a means to an end. The end is defined by whether patients’ needs are met.
My colleagues and I think that the goal of reducing patient suffering is consistent with most organizational mission statements and the motivations of virtually all healthcare clinicians and other personnel. The word suffering is an emotional one, of course, and one reason to use it is that it compels a response. However, the goal of performance measurement is not to make clinicians feel guilty; it is to help them respond to patients’ needs with reliability.
Can suffering be measured? What I learned from my career in clinical research is that if something is important, you will figure out how to measure it as well as possible. Even if the issue is difficult to measure, such as quality of life, pain, or functional status—the ability of people to do the things they want to do—you approach the issue with discipline and methodological rigor. You frequently need to collect data from many patients, knowing that they will give widely varying responses. But if you collect enough data and calculate the average, you will get valuable information.
The famous story of the British statistician Francis Galton and the oxen at the country fair offers valuable insight. In 1906, Galton went to the annual West of England Fat Stock and Poultry Exhibition and observed a competition in which people tried to guess what the weight of a fat ox would be after it was slaughtered and prepared for sale as meat. Nearly 800 people made guesses. Some were “experts” (butchers or farmers), but many were “non-experts” (regular citizens). The guesses varied widely, of course, but the average was only one pound off from the actual weight.
What Galton realized is that in any guess, there is information plus error. If that error is random and you average the responses from many people, the errors cancel out, and what you are left with is information. This insight is described in the book The Wisdom of Crowds, in which James Surowiecki shows how groups of people are often smarter than the smartest individual.
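A short simulation makes the arithmetic of “information plus error” concrete. The true weight, crowd size, and error spread below are assumptions chosen for illustration rather than Galton’s actual figures, but the mechanism is the same: each guess misses by a wide margin, yet the average of many guesses lands very close to the truth.

```python
import random

random.seed(1906)  # the year of Galton's observation, used here only as a seed

TRUE_WEIGHT = 1198   # assumed dressed weight of the ox, in pounds (illustrative)
N_GUESSERS = 800     # roughly the size of the crowd in the story

# Each guess = information (the true weight) + random error.
guesses = [TRUE_WEIGHT + random.gauss(0, 75) for _ in range(N_GUESSERS)]

crowd_average = sum(guesses) / len(guesses)
mean_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"Typical individual error: {mean_individual_error:.1f} lb")
print(f"Error of the crowd average: {abs(crowd_average - TRUE_WEIGHT):.1f} lb")
```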
The implication for the measurement of suffering is that if this enormous, complex issue is broken down into various components and information is collected from enough patients, providers can understand how their patients are suffering and try to reduce that suffering. There are many different types of suffering, of course; physical pain is just one of them. Therefore, the first critical step toward measuring suffering is to break it down into various types of unmet needs so that providers can organize themselves to address them.
My colleague Deirdre Mylod has been a key thought leader behind work to deconstruct suffering. Patient suffering can be categorized as inherent to the patient’s medical condition and associated treatment or as avoidable, resulting from dysfunction in the care delivery process13 (see Table 4.1).
Table 4.1 Deconstructing Suffering: Sources and Examples
The inherent suffering that patients experience before and after receiving a diagnosis may be unavoidable because of their specific medical problems. The role of providers is to anticipate, detect, and mitigate that suffering. Pain, other symptoms, and loss of function are just a few types of inherent suffering. Fear, anxiety, and distress over loss of autonomy are also of enormous concern to patients, sometimes even more than pain itself.
Inherent suffering also encompasses the impact of treatment for the patient’s condition. Medications and procedures can cause side effects, pain, discomfort, loss of function, and unwelcome changes in appearance even when they ultimately lead to recovery. Detecting and mitigating these side effects of treatment lead to improvement in Porter’s Tier 2 outcomes, which were described earlier in this chapter. Some pain cannot be eliminated. Some procedures will always be uncomfortable. Often, what providers can do to mitigate such suffering is help patients understand what to expect so that they are not frightened by the unknown.
If inherent suffering is driven by patients’ conditions and their necessary treatments, avoidable suffering has nothing to do with their diseases and everything to do with the way healthcare providers are organized. Poor coordination of care, excessive waits for appointments, uncertainty about what will happen next, and ineffective care transitions all erode patients’ trust and lead to anxiety, frustration, and fear. All these dysfunctions are preventable even if they seem beyond the control of individual personnel.
Collecting data that distinguish inherent from avoidable sources of suffering allows organizations to understand where patient needs have not been met and provides insight into what steps need to be taken to close that gap. Current questionnaires that measure patient experience do not directly ask patients about their level of suffering. However, they do ask patients to evaluate attributes of care, and those measures demonstrate where patients view their care as optimal versus less than optimal.
Suboptimal experiences help providers understand where patients’ needs are not being met. Table 4.2 organizes the measures into needs that stem from inherent suffering and from avoidable suffering. Although the examples are derived from the inpatient setting, the constructs are relevant to all types of patient care.
Table 4.2 Examples of Patient Needs in the Inpatient Setting
For example, patients have an inherent need for information. Uncertainty is unnerving and causes suffering. Survey questions on the extent to which physicians and nurses kept patients informed, the clarity of the communication, and the effectiveness of conveying to patients the side effects and purposes of tests and treatments provide insight into how well this need is being met.
There is an important and fundamental difference between organizing patient experience data around provider reliability and organizing them around patients’ unmet needs. With the former, more traditional approach, the analyses describe the overall reliability of physicians, nurses, and other personnel. With the latter, the data are analyzed around different types of patients’ needs.
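A minimal sketch of the latter approach, using invented item names, category labels, and scores (none of them drawn from an actual survey instrument), shows what it means to roll item-level results up to categories of patient need rather than to individual clinicians:

```python
# Hypothetical mapping of survey items to categories of patient need
# (item names and labels are invented for illustration).
NEED_CATEGORY = {
    "nurses_kept_you_informed": "inherent: need for information",
    "staff_addressed_your_pain": "inherent: need for pain relief",
    "wait_time_for_help": "avoidable: delays in care",
    "discharge_instructions_clear": "avoidable: care transitions",
}

def scores_by_need(item_scores):
    """Average item-level top box rates within each category of need."""
    totals, counts = {}, {}
    for item, score in item_scores.items():
        category = NEED_CATEGORY[item]
        totals[category] = totals.get(category, 0.0) + score
        counts[category] = counts.get(category, 0) + 1
    return {category: totals[category] / counts[category] for category in totals}

# Illustrative item-level top box rates for one unit or practice.
example_scores = {
    "nurses_kept_you_informed": 0.78,
    "staff_addressed_your_pain": 0.71,
    "wait_time_for_help": 0.63,
    "discharge_instructions_clear": 0.82,
}
for need, score in scores_by_need(example_scores).items():
    print(f"{need}: {score:.0%}")
```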
These needs vary with a patient’s condition, of course. An emerging trend is to segment patients into groups defined by condition. Patients with the same condition tend to have shared needs that can be best met by multidisciplinary teams organized around that condition. For example, congestive heart failure (CHF) patients endure a chronic, progressive, yet unpredictable disease course that results in needs for information that are different from the needs of patients with other diagnoses. Being aware of differing needs can help clinicians communicate more effectively with CHF patients to help them better understand their diagnosis and care plans. Patients with Parkinson’s disease, diabetes, and other chronic conditions also have specialized information needs. It is thus important to collect, analyze, and report data for the levels at which accountability can be created and improvement can occur.
Advances in Data Quantity and Collection Methods
If getting feedback is painful, getting feedback that is based on a small sample of patients can be enraging. To drive improvement rather than generate rage, we need more data on more patients, and those data should be timely. Ideally, patient experience data should be like a vital sign collected as part of routine care at every opportunity so that problems can be detected early and addressed.
That ideal may never be fully attainable, but the collection of data from patients is undergoing extraordinary change, as is the flow of information in every other sector of life. The traditional approach to collecting patient experience data has been to survey a modest sample of patients by mail or telephone. This approach enabled surveillance for major problems but not improvement.
Although many institutions still rely on telephone and mailed surveys (and CMS and other regulatory bodies still require data collection via these older methods), the clear trend is toward electronic data collection. Using e-mail and other electronic approaches to contact patients allows rapid and efficient data collection, and so an increasing number of hospitals and physician practices are trying to collect information from every patient after every encounter.
Of course, not every patient has e-mail or is comfortable with Internet web pages or mobile devices, although the conventional wisdom that the elderly do not use e-mail is incorrect. As the baby boomers age, they are changing the nature of being old, just as they have changed every other institution in which they have been involved. Statistical adjustments for differences in age and other factors in patient populations make it possible to compare electronic survey results with data collected through traditional methods so that provider organizations can tell whether they are improving or losing ground.
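One standard way to make such a comparison is direct standardization: weight each collection mode’s results to a common age mix and compare the adjusted figures. The sketch below uses hypothetical age bands, scores, and weights purely for illustration; it is not the specific adjustment methodology that CMS or any survey vendor applies.

```python
# Hypothetical top box rates by age band for two survey modes, plus a
# reference age mix to standardize against (all values are illustrative).
mail_scores   = {"18-49": 0.72, "50-64": 0.78, "65+": 0.85}
email_scores  = {"18-49": 0.74, "50-64": 0.80, "65+": 0.86}
reference_mix = {"18-49": 0.30, "50-64": 0.35, "65+": 0.35}

def age_adjusted(scores, mix):
    """Directly standardize band-level scores to a common age mix."""
    return sum(scores[band] * weight for band, weight in mix.items())

print(f"Mail,   age-adjusted: {age_adjusted(mail_scores, reference_mix):.1%}")
print(f"E-mail, age-adjusted: {age_adjusted(email_scores, reference_mix):.1%}")
```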
Response rates to e-surveys are about the same as those for mailed questionnaires, at roughly 20 percent. This percentage is higher in some patient subsets (family members whose loved ones experienced hospice care respond more than 60 percent of the time) and lower in others (emergency department patients respond about 11 percent of the time). However, the bottom line is that e-surveying allows far more data to be collected far more quickly. When physicians receive feedback that is based on 250 patients instead of 25, it is harder for them to dismiss the data even if the sample may never be perfectly representative of their entire patient population. The conversation changes from why the data should be ignored to what providers need to do to get better.
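A back-of-the-envelope calculation shows why the jump from 25 to 250 responses changes the conversation. Assuming a simple random sample and an illustrative 80 percent top box rate, the 95 percent margin of error shrinks from roughly 16 percentage points to about 5:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p based on n responses."""
    return z * math.sqrt(p * (1 - p) / n)

p_top_box = 0.80  # illustrative top box rate

for n in (25, 250):
    moe = margin_of_error(p_top_box, n)
    print(f"n = {n:3d}: {p_top_box:.0%} +/- {moe:.1%}")
# n =  25: 80% +/- 15.7%  -> too noisy to distinguish real differences
# n = 250: 80% +/- 5.0%   -> trends become much harder to dismiss
```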
The timeliness of data collected electronically can be startling. As shown in Figure 4.3, about half the results obtained by e-surveys come in within a day, compared with about two weeks for mailed surveys. The freshness of e-survey data makes them that much more compelling to physicians and other healthcare personnel. Even though the surveys are anonymous, clinicians can often recall which patient was likely to have written the comments, making the feedback all the more real.
Figure 4.3 Percentage of total returned surveys by days elapsed
Another interesting difference is that with e-surveys, patients have a lower threshold for writing comments and tend to write longer comments. They frequently write paragraphs that paint a vivid picture of what they really appreciated or disliked about their care. These comments are proving to be compelling drivers of improvement for clinicians; no one would argue that risk adjustment is needed for such vignettes.
The success of collecting data electronically depends on an organization’s active integration of e-mail capture into its operation. Organizations that have been effective at e-mail collection have embedded the process into their operations and have made it a clear priority with their staff. Some even give financial incentives to front-office staff for capturing e-mail addresses.
Research shows that organizations that capture more patient feedback tend to perform better because their data can be fully leveraged for quality improvement purposes. As depicted in Figure 4.4, regardless of hospital size, organizations that survey more than 81 percent of their patient populations see a higher average top box percentile ranking on the Overall Hospital Rating HCAHPS survey question than hospitals that survey a smaller share of patients.
Figure 4.4 Average percentile ranks by hospital size and sampling rate