More Than Good Intentions


by Dean Karlan


  The doctor took an X-ray of her lower leg and reviewed it with her. What had been a hairline fracture in January had, in the intervening two months, opened into a wider fissure, which helped to explain why the swelling and pain persisted. He was adamant that a plaster-of-Paris cast was now the only option. Anything less might earn Elizabeth an amputation somewhere down the line. This proved to be a convincing argument. She agreed to the cast then and there, though she would spend another sixteen hours over three days in the waiting room before it was put on.

  Sorry, We’re (Always) Closed

  Jake felt let down when Elizabeth first told him she was seeing the herbalist. It seemed she was being duped: half her salary for a jar of ancient paste. It was superstition; at best, witchcraft. In any event, it was no cure for a broken bone. He was sure of that.

  But hearing the story of her return(s) to the hospital, the dozens of hours waiting, the utterly indifferent employees, it was easier to sympathize. When she visited the herbalist for her weekly checkup, she walked right into his office, sat down, had her appointment, and left. Whatever can be said about his medieval treatments, he obviously knew how to attend to a patient.

  Relative to its neighbors and to other developing countries, Ghana actually has a pretty good public health care system, with extensive coverage, even in rural areas, and fairly well-trained staff. But clearly customer service is not its strong point. When Jake told some Ghanaian colleagues about Elizabeth’s weeklong odyssey to get an X-ray, they didn’t bat an eyelash. One said, “When you go to hospital, then that is it. You will wait. And when it is only your leg paining you, you can wait even two or three weeks. When you want to see a doctor fast, for anything, then you should go see a herbalist.”

  Why could the herbalist provide such efficient and attentive service, when the hospital could not? We see this across all facets of life in developing countries. People settle for second-best because first-best is inconvenient. They borrow from moneylenders at high rates because microfinance banks have inflexible repayment schedules. They save their money in non-interest-bearing clubs because the clubs offer deposit collection at subscribers’ businesses. They send their children to more expensive private schools because private school tuition can be paid in installments. And they treat their broken bones with herbal salves because they don’t have to endure a week in the waiting room—and give up a week’s earnings in the process—to do it.

  Halfway around the world, in the mountains of rural Rajasthan, India, patients were facing the same problem. When they visited public clinics they, too, were met by interminable waits—if they got in at all. A 2003 survey of health care facilities in the area found that the clinics, which were supposed to be open six days each week for six hours a day, were closed a staggering 54 percent of their scheduled operating hours. The doctors and nurses just weren’t showing up. Over time, people had learned to take their health problems elsewhere, to more expensive private facilities or traditional healers. Less than a quarter of doctor visits took place at public clinics.

  Seva Mandir, the Indian NGO that had conducted the 2003 survey, saw in these findings a lamentable waste, both of the government’s resources and the people’s time. It also saw a potential solution. If you want staff to show up, it reasoned, you have to make it worth their while. Conversely, if you want to reduce the number of absences, you have to make missing work a costly proposition. In short, clinic workers’ salaries had to be tied to their attendance records.

  This was not the first time Seva Mandir had found a way to fight absenteeism with incentives. As we saw in the last chapter, its scheme involving disposable cameras and attendance-based pay proved incredibly effective in getting schoolteachers to come to work. It hoped clinic workers would be equally susceptible to the power of the rupee.

  While it was surely encouraged by its earlier results—and rightfully so—Seva Mandir is not an organization that likes to stand on conjecture. It is committed to rigorously testing many of its programs.

  It partnered with local government (which employed the clinic workers) to implement the incentives package, and it worked with Abhijit Banerjee, Esther Duflo, and Rachel Glennerster of J-PAL to design the specifics of the program and evaluate it with an RCT. The program was to be rolled out in about fifty clinics, so Seva Mandir identified one hundred clinics to study, and the researchers essentially flipped a coin for each. They assigned forty-nine to the incentives program and monitored the remaining fifty-one for control.

  The incentive scheme itself was similar in design to its educational predecessor. Workers received full pay for the month if they worked at least half of the days. If they worked fewer than half, they had to pay a penalty for each day they had missed. Two consecutive months of poor (less-than-half) attendance would result in a summary dismissal.

  To back up all the tough talk, Seva Mandir needed a reliable way to track attendance. Instead of using disposable cameras to monitor, as it had in the schools, it provided the forty-nine selected clinics with machines that punched paper cards with a tamperproof time and date stamp. Each employee had her own card and was instructed to clock in three times each workday—once in the morning, once at midday, and once in the afternoon—to earn an attendance credit. At the end of a month, employees’ salaries were calculated by tallying their attendance credits.

  Workers reacted to the incentives quickly and with gusto. Attendance, observed in a series of unannounced visits to the clinics, spiked. During the first three months, employees at the forty-nine program clinics showed up about 60 percent of days, compared with 30 to 45 percent in the control clinics. It looked like Seva Mandir had succeeded again. But the response began to sag as the months wore on. By the time a year had gone by, the picnic was clearly over. Attendance in all one hundred centers had settled in a dismal plateau around 35 percent. The incentives, like the employees themselves, had stopped working.

  The deterioration was more troubling than it was puzzling, but Seva Mandir was puzzled nonetheless. While it was clear from the unannounced visits that employees were missing a lot of work, it was equally clear that they weren’t suffering the consequences. Paychecks—and the clinics’ internal attendance figures, from which salaries were calculated—were as high as ever.

  Seva Mandir went looking for answers, thumbing through the stacks of time cards. Sure enough, something stunk. Cards from some clinics had long stretches of days without time stamps, which, according to clinic supervisors, corresponded not to staff absences but to periods when the stamping machine was out of order. Consulting their records, Seva Mandir found that the machines had often sat broken for weeks before being discovered by visiting auditors. Some even appeared to have been deliberately damaged—a few “looked as if they had been hurled into a wall.” Instead of calling Seva Mandir to request repairs, the clinic supervisors were treating these equipment failures like attendance holidays. They would manually sign the cards to verify that their workers were showing up, even if they weren’t.

  Further investigation revealed that the equipment-failure scam was just the tip of the iceberg. Supervisors had another ace up their sleeves—the power to excuse an absence. This prerogative had been built into the incentive system to answer concerns that it was too rigid. Why should an employee be penalized if, for instance, she was attending to work-related duties outside the clinic? So supervisors were allowed to grant “exempt days”—and you can guess how that turned out. Across the forty-nine program clinics, workers had an excused absence about one in every six days. It is unclear whether the supervisors were actively covering for their subordinates or just failing to investigate excuses the workers brought to them, but from the standpoint of attendance it didn’t really matter. Like aggressive dentistry, the exempt-days loophole pulled the teeth of the incentive system right out. From there, the clinic workers’ response left no doubt: They weren’t afraid of a pair of gums.

  Seen alongside the success of its incentives program with teachers, Seva Mandir’s experience in the Rajasthani health clinics underscores one of the big themes in the story of development, which is also a central motivation for the IPA’s research: context matters. Sometimes we talk about development initiatives as giving people tools to improve their lives, but it’s not like handing out screwdriver sets. Think of it more as a transplant. Sometimes the graft and host are compatible, sometimes not. In this case the public health system was so weak that it could not support a seemingly effective tool to fix it.

  Indeed, even for programs designed on seemingly universal principles like incentives, success and failure are situational. All the more reason to test them—repeatedly, and in a variety of contexts—in order to learn what types of hosts will accept what types of grafts. In medicine, we know something about the theory (e.g., that the blood type matters), which helps us understand when the host and graft will match and when they will not. In economics, we can take the same approach. And sometimes the answer is remarkably intuitive: Incentives only work if the monitoring tool for administering them is corruption-proof.

  Paying Patients to See the Doctor

  It’s easy enough to see why the sick were not flocking to Rajasthan’s rural public health clinics. When Seva Mandir started working there, a patient arriving during scheduled business hours was more likely to find the place closed than open. That had changed, albeit briefly, for the few months at the beginning of Seva Mandir’s incentive scheme, but people did not seem to notice. Despite the period of increased staff attendance, the average number of visits per day to the clinics stayed the same throughout the project. Had the incentives survived and staff attendance persisted, maybe the public would have responded over time by visiting more often; unfortunately, we cannot know. But it’s possible that the clinics would have been underutilized even if they managed to stay open during business hours.

  In the view of the Mexican federal government, this was among the problems facing the country in 1997. There was a functioning nationwide network of health clinics ready to provide care and counseling in a variety of areas, but not enough people used them. Easily preventable and highly injurious conditions like low birth weight and child malnutrition were widespread. So the government folded doctor visits into its landmark conditional cash transfer program, Progresa. We heard about Progresa in the chapter about education, where it was used to encourage school attendance. In that part of the program, poor families were eligible for a cash payment if their children went to class. In the health component, poor families could earn money by making use of the public clinics.

  It was a pretty good deal, all things considered. In exchange for accepting free preventative care, immunizations, pre- and postnatal care, and nutritional supplements, and for attending health, hygiene, and nutrition education programs, families could receive a cash payment of about a quarter of their monthly income. The program targeted low birth weight and child malnutrition specifically, so mothers and children received the most attention. But since all family members had to commit at least to annual preventative checkups, there were gains to be made across the board.

  Of course, the whole thing would likely have ground to a halt if it encountered an obstacle like the one Seva Mandir found in the Rajasthani supervisors. Health clinic staff could have undermined the Progresa incentives by saying participants had attended education programs or appointments when they hadn’t. For that matter, any number of other elements of the program, administrative or substantive, could just as easily have failed. Aware of all the potential weak links, the government set aside the first two years of Progresa’s implementation as an evaluation period.

  They were also concerned about politics getting the best of the program. In Mexico there is a long history of new administrations (even from the same political party) shutting down all prior social programs and creating new ones. This process is costly and wasteful, but it appears to be inevitable—unless the prior programs garner great critical acclaim. With Progresa, the administration sought to conduct a rigorous evaluation that would be above the political fray. This way, if the program worked, it would be hard for the next administration to get rid of it.

  Evaluations are usually designed and staffed by people with experience in the country. The idea is that local expertise helps improve the quality of the evaluation. But that’s not always how it works. In 1997, Paul Gertler, a professor of economics at Berkeley, received a call from a Mexican government official. Do you speak Spanish? No. Do you have any experience working in Mexico? No. Perfect! They wanted someone completely new to Mexico, so removed from Mexican politics that he did not even speak the language, much less know anyone. Then there could be no suspicion of partisanship or foul play by the evaluator.

  It was an ironic selection process, but it proved to be tremendously effective. To this day Progresa—and Paul’s study of it—remains one of the shining lights that guides both the politics and practice of evaluation. In many meetings I’ve attended in Latin America, the mere mention of “doing something like Progresa” piques practitioners’ interest and pushes the conversation forward.

  The full version of Progresa, which by the year 2000 reached 2.6 million Mexican citizens, was so ambitious in scale, and carried such a hefty price tag, that the government was dead set on demonstrating its impacts on recipients’ health. If nothing else, being able to show conclusively that improvements were due to Progresa could help justify the expense. An RCT would be essential to establishing whether the program was a cost-effective tool. This was a dream assignment for a development researcher—a rigorous study on a massive scale (about eighty thousand people in 505 communities) that would provide hard evidence about the program’s impacts.

  It turned out to be a boon for Progresa’s advocates too. When the results came in, one thing they could say without a doubt was that the public liked the program. In the 320 communities that were offered Progresa during the study, 97 percent of eligible families signed up. More impressive, 99 percent of those who enrolled ultimately got paid, which meant they had satisfied all the health care requirements. Unlike its counterpart in Rajasthan, Mexico’s health administration proved strong enough to enforce the incentives. There was no evidence of systematic fraud by doctors or patients—those check marks in the clinics’ records actually stood for real patient visits.

  As the program’s designers had expected (and hoped), health outcomes followed close behind the increase in usage. Follow-up surveys conducted during the two-year pilot found big impacts on children: Enrolled children saw a 23 percent reduction in illness overall, an 18 percent drop in the incidence of anemia, and a 1 to 4 percent increase in height. The program might have been declared a success on the strength of these results alone, but happily there was more good news to report. Beyond requiring doctor visits to receive payment, Progresa actually had a second mechanism for improving health—the cash transfer itself. A separate study looked at the ways participating families spent their extra money and found that, on average, 70 percent went toward increasing the quantity and quality of food available to the household. That meant more, and more nutritious, food for everybody. No doubt this dynamic contributed to the program’s success in improving children’s health; it also rubbed off on other family members. The follow-up surveys found that enrolled adults across all age brackets saw a decrease in the number of days they had difficulty doing basic activities due to illness, and an increase in the distance they could walk without fatigue.

  Everybody was benefiting; manna seemed to rain down from the sky. The biggest winners were the program participants, but they weren’t the only ones to get a slice. The Mexican government came out just fine, too, and with them the researchers who had designed and run the evaluation and, in the process, established the gold standard for government-researcher collaboration to help learn what works.

  It was not the first time a country had mounted a successful nationwide antipoverty campaign, but it was the first time one had used an RCT to rigorously demonstrate impacts on so large a scale. The world took notice. In the years since, Progresa-style programs have sprung up in a half dozen other countries, and now serve tens of millions of families worldwide. Many of them are being rigorously evaluated. For advocates of effective development practice everywhere, this is a landmark victory and a great example of research-driven policymaking. We can make progress in the fight against poverty if we use tools that are proven to work.

  Make Your Own Incentives

  Progresa’s strategy of paying patients to see the doctor proved to be an effective tool to drive people’s health choices—so effective, in fact, that the social benefits from the resulting improvements in health justified the government’s expense. It was a big win for a big—and costly, and initially controversial—social program. But imagine: How much easier would it have been to pitch Progresa if people had been lining up to create their own monetary incentives for better health choices?

  This is exactly the idea behind stickK.com, the commitment contract Web site I started and mentioned in the chapter on saving. StickK.com gives anybody with an Internet connection and a credit card a way to push himself toward a goal of his own choosing by raising the stakes (many actually do not put money on the line, but rather put their reputation on the line by naming friends and family who will be informed of their success or failure). When there’s money (or reputation) on the line, slipups become an expensive proposition, and we work harder to avoid them.

  Since its inception in 2007, thousands of people in the United States and elsewhere have used stickK.com to achieve health goals like losing weight, exercising more, and quitting smoking. For many, these were long-standing ambitions. They had tried other approaches and come up short. StickK.com gave them the nudge they needed to succeed. Could a similar commitment approach help the poor get healthier?
