BI Beat: Increasing Work Activity Compliance
Lessons from MDRC’s work on TANF
Sometimes, taking something out of context makes it clearer. Recently, NASWA Behavioral Insights interviewed a member of the Center for Applied Behavioral Science (CABS), a team within MDRC that improves government through behavioral science. They are very much like us, and a recent report of theirs, “Applying Behavioral Science to Improve Participation in Work-Readiness Activities,” caught our eye. We had fun sitting down with Sophia Sutcliffe to compare notes.
Though the CABS project worked with clients and staff of a Temporary Assistance for Needy Families (TANF) program, providing benefits to the public requires similar steps no matter what program you work in. The ways that CABS diagnoses issues, tests solutions, and understands risks rang true for us. Moreover, they aimed to improve clients’ compliance with required work activities. We hope this context gives you a new perspective on behavioral science – and maybe new ideas to improve compliance within your UI program. We certainly learned a lot from the conversation.
Can you describe CABS and some examples of the work you do? I think there’s a lot of overlap with the mission of NASWA Behavioral Insights.
I work in the Center for Applied Behavioral Science (CABS), and we're at MDRC, which just celebrated its 50th anniversary as a nonprofit focused on social policy and education research. CABS is a research and design team that consults with a number of public organizations, such as those in the public benefits system. We try to make programs work better for the people who use them, as well as the people who operate them. Often, we’re working to address issues with low uptake, engagement, or outcomes. The “CABS approach” is really a kind of organizational diagnosis: we design shifts in program operations, communications, and increasingly in culture or leadership, to produce more successful outcomes.
Can you tell me what motivated the research in your recent article?
This research tells the story of our work to improve participation in work-readiness activities for the TANF workforce program in Washington State. The work was sponsored by the Administration for Children and Families under a project called Behavioral Interventions to Advance Self-Sufficiency - Next Generation. In Washington, there was concern about “drop off” [when clients would not complete their work-readiness activities]. It’s better for both clients and the program if people comply. Clients might lose some or all of their cash benefits if they don't continue to participate – they might be cut off from benefits and have to reapply. From the perspective of the agencies, noncompliance adds administrative tasks – following up with clients, determining if there's good cause for reducing benefits, and closing the case – often only to reopen it again later.
Can you talk about how you tried to solve this compliance problem?
We take a similar approach with all the organizations we work with. The first step is defining the problem that we want to focus on: a very clear, neutral problem statement in terms of behavior. Here, we focused on getting clients to complete their work-readiness activity. We then went through a behavioral diagnosis by mapping out the process clients go through. We talked to program managers, looked at documents, and interviewed frontline staff. We mapped it all out, from how clients apply all the way to what happens if they don't comply. We really put ourselves in the clients’ shoes.
Once we had the map, we conducted what's called a funnel analysis. We paired the steps of the map with administrative data to look for the “drop off”: out of everybody that applies, how many attend orientation, complete their activity, or end up reapplying? We also conducted interviews to understand why people might be dropping off. Based on all of that, we came up with a few hypotheses about behavioral bottlenecks – what aspects of the process might be influencing clients’ engagement – and designed shifts in communications or processes to address those bottlenecks. We landed on a set of print materials and interview guides for clients’ one-on-one interviews with workforce specialists, to uplift the things that we wanted clients to pay attention to and think about after they left the meeting, and ultimately to encourage sustained engagement in the program. We also got feedback on these by having a few clients and workforce specialists actually try them out.
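For readers who want a concrete picture of what a funnel analysis can look like, here is a minimal sketch in Python. The stage names and data are hypothetical – they are not the Washington program's actual records or MDRC's method – but the idea is the same: count how many clients reach each step, and look at where the share retained drops most sharply.

```python
# Minimal sketch of a funnel analysis on hypothetical administrative data.
# Stage names and fields are illustrative, not an actual program schema.
import pandas as pd

# One row per client; each flag records whether the client reached that step.
clients = pd.DataFrame({
    "applied":              [1, 1, 1, 1, 1, 1, 1, 1],
    "attended_orientation": [1, 1, 1, 1, 1, 1, 0, 0],
    "completed_activity":   [1, 1, 1, 0, 0, 0, 0, 0],
})

stages = ["applied", "attended_orientation", "completed_activity"]
counts = clients[stages].sum()

funnel = pd.DataFrame({
    "count": counts,
    # Share of all applicants who reached this stage.
    "pct_of_applicants": counts / counts["applied"],
    # Share retained from the previous stage; the largest drop between
    # adjacent stages is a candidate behavioral bottleneck.
    "pct_of_prior_stage": counts / counts.shift(1).fillna(counts["applied"]),
})
print(funnel)
```

In this toy data, the steepest drop (50%) happens between attending orientation and completing the activity, which is where a team like CABS would focus its diagnostic interviews.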
Are emotions relevant to government communications like this?
Yeah, I love this question. In some government communications, I think the emotions really do matter, and we need to be intentional about the emotional responses communications could provoke – like communications that let people know they need to do something, that explain what happens if they don’t, or that tell people to make a decision. Whether we like it or not, our emotions do shape our behavior and decision-making. As humans, we're not robots. And you know it! Do you like to do things that make you feel confused, stressed, or ashamed? I don't know that people should feel happy, necessarily, when they read these communications. It's more that people should feel like they have the capacity to take the next steps – that the benefit of taking action or making a decision is worth the effort.
There were some challenges when you went about analyzing the data. Can you tell me a little bit about that?
So, we set up a randomized experiment to assess the impact of the intervention materials we designed. We randomly chose half of the workforce specialists to incorporate the materials into their interviews with clients. The others just followed business as usual. But when we visited the offices to see how it was going, it was pretty clear that adoption of the materials was uneven. When we talked to clients afterward, the only client who remembered the intervention materials and, like, really explained them well was in the standard group – they had been served by somebody who was not trained in the materials or supposed to use them! It was pretty clear that our design had been positively received by some people, but it was not being used according to our random assignment.
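For the methodologically curious, staff-level random assignment like the one described can be done in a few lines. This is a hypothetical sketch – invented staff names, a fixed seed for reproducibility, not the study's actual procedure – and, as the story above shows, assignment on paper does not guarantee fidelity in the field.

```python
# Minimal sketch of staff-level random assignment (hypothetical names).
import random

random.seed(42)  # fix the seed so the assignment is reproducible and auditable

specialists = ["staff_a", "staff_b", "staff_c", "staff_d", "staff_e", "staff_f"]
random.shuffle(specialists)

half = len(specialists) // 2
treatment = sorted(specialists[:half])  # asked to use the new interview materials
control = sorted(specialists[half:])    # business as usual

print("treatment:", treatment)
print("control:  ", control)
# Note: randomization only creates comparable groups; whether treatment staff
# actually use the materials (and control staff don't) must be verified in person.
```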
I really appreciated the way the team pivoted the research into implementation research, to offer advice on how to avoid this problem. Can you tell me more about that?
When the test ended, we conducted implementation research as part of the project, and there was really so much to learn from it. If we want to change behavior – to move the field beyond nudges – we need to go deeper into the organizational setting. Like, what would it take to change organizational behavior? What are the guidelines or management structures that might be limiting staff? We see this as where the field is heading: from individual behavior change to organizational behavior change, or even to the design of policies themselves. That’s the goal, right?
What do you think this experiment has taught you about the staff's ability to adopt new expectations?
A major thing that came up is limits. Staff are doing so many things in their jobs, and the part of the job that we're asking them to change is, like, only 20% of what they're doing. One supervisor we interviewed said the only thing that is constant is the rate of change. Staff are being asked to try new things, and there can be frustration. Something I'm noticing in the field is more co-design – especially when a change is going to impact so many people's day-to-day lives. If we could do it over, we would have had more of a formal pilot testing period where the staff were trying it out for a month or two and we were revising the interview protocol – just to make sure that the staff had a say in it and bought into the design.
What did this experiment teach you about compliance?
To be in compliance with the program requirements, a lot of clients will be doing some activity for 32 to 40 hours a week. That's a full-time job. Also, within the interview [about employment barriers], clients are being asked a lot of really in-depth and sensitive questions. It's their whole life being poured out in this one conversation. And even if people felt, rationally, that the effort was worth the benefit, they might not have the psychological or physical capability to follow through consistently. They might not have a license, or their mind might be so busy trying to figure out who's going to pick up their kid from daycare, what to feed them for dinner, and how much money that leaves them with now. It’s a lot.
One of your paper's conclusions was that management systems influence staff priorities, sometimes in undesirable ways. Can you explain that?
Something that came out of our early conversations with staff was a concern that using the new materials from the experiment was going to take a really long time, and they were not allowed to take a really long time with each client. The program managers tried to make it clear, saying, “It’s OK. Longer sessions with clients are a good thing.” But staff still felt a time pressure. One of the reasons was their workflow management system. Staff would click a button when an interview started and when the interview ended, and there was a literal timer on their computer screen for most of the session. That's going to influence staff – even if it’s not intended. It’s important [for management] to message what the real metrics of success are, and to have those conversations with staff. Like, what is the goal here? I think sometimes it's assumed that staff know and there's a shared sense of mission, but I think that kind of exercise can be powerful. There is so much real magic and power at that local [staff] level that we have to remember.
What new strategies might you like to test or experiment with to promote compliance?
I think that service continuity could open up a lot in terms of quality of service. Being assigned to the same worker who conducted your intake, just having an opportunity to connect again after that initial conversation – I think it would improve the staff's capacity to actually address what clients are sharing, but also make the client feel heard. Clients then trust the staff member, and trust that public services might actually be able to help them make this life transition toward employment. An overlooked part of the job – and really the importance of government workers – is that they are the face of the government. Every interaction is an opportunity to build trust in our institutions. And that, in turn, can help clients address the barriers to employment.

Learn More
If this article has inspired you to consider behavioral insights to address program challenges, let us know! Reach out to NASWA Behavioral Insights at integrity@naswa.org.