Filed under: Development, Development 2.0, Social inclusion

My United Nations colleague Ian Thorpe has a habit of asking very important questions when it comes to global development.

His most recent question was about methods that can help us listen to our beneficiaries and better capture their perceptions, feelings and needs in order to jointly design more appropriate interventions, address those needs and track how well the interventions are doing.

Specifically he made a compelling argument for using polls to accomplish this. His post and my recent experience with applying complexity theory to development nudged me to present a case for an alternative method for detecting weak signals and subtle changes in the communities we work with.

But first, a bit about polls and surveys. I see three critical weaknesses of these inquiry methods that can blindside policy makers and researchers to emerging trends and changes.

  1. Time and attribution.

From the time a decision is made to conduct a poll to the time the results are in, my guess is that at least two to three months go by. Another few months pass in designing and implementing an initiative to address the results, and a few more before the results of the intervention are visible.

Each step gives us a snapshot of a particular sample of a population at a single moment – a moment long gone by the time a policy maker is considering interventions.

These methods don’t operate in a static environment (societies, especially in our increasingly interdependent world, are anything but static), and they are not particularly fit to capture subtle or not-so-subtle intervening variables that could have affected the definition of the problem or the resulting intervention.

  2. Biases.

I can think of at least three biases that weaken results of polls and surveys.

First, when polled, people generally tell you what you want to hear, and their responses reflect their most recent experience (which may or may not be relevant – and even if it is, you’re getting an aggregate perception of the recent experiences of many people… talk about a snapshot in time). Just think of the last employee satisfaction survey you completed – what were you thinking about when filling in the answers?

Second, the results are interpreted by “experts” – internal or external. This, in most cases, means that decision makers hear the story the “experts” believe is the correct one, or the one they think decision makers want to hear.

Third, and in my view most critical, when polled, people answer only the questions they’re asked. This limits researchers’ ability to scan for weak signals and emerging issues (the kind that typically end up blowing up in our faces).

  3. Tricks, loops, and hunches.

The combination of time lags, attribution problems and biases makes it relatively easy to manipulate polls and surveys in order to validate whatever conclusion one would like to arrive at.

Worse yet, the rapid feedback loop that would allow for corrective action is very weak (though I suppose that makes sense – if you were trying to trick a system that had a direct feedback link to you, you’d be quickly found out, wouldn’t you?).

Lastly, polling starts with a set of hypotheses one wants tested. This means we’re using the past to study the future – fitting in some cases, but likely to blindside the researcher.

(S)he is likely to miss emerging patterns that, viewed through the lens of the past, may seem insignificant but could hold great potential for wreaking havoc on our economies and communities (think Black Swan events).

Instead, how about we let the data lead our thinking and further inquiry? So, what is the alternative?

Contrast these methods with narratives. With people, it has always been about storytelling. That’s how we pass knowledge from generation to generation. That’s how you tell your friend about your child’s first day in kindergarten, and that’s how an elderly man complains to you about low pensions in a supermarket checkout queue.

Dave Snowden introduced us to a method of capturing these stories and using them to gain insight into real-time issues and changes in a society.

The method is based on prompting people to tell stories that they themselves interpret (so no “expert” opinion). With a clever strategy for capturing these stories (for example, having eighth graders spend a full semester recording weekly stories from adults about what frustrated them that week, and letting professors access the database of stories for research of their own), one can easily get thousands of stories streaming into the system.

If left unchecked, people’s petty and small annoyances, especially in an interconnected world, can quickly snowball into big issues. A real-time, continuous feed of stories allows a decision maker to address them as they arise. You notice a pattern forming and quickly put a workable policy solution in place? The next batch of weekly stories could show its impact.

Over the coming months, UNDP will test whether this method can help us find answers to some difficult development issues:

  • What causes resentment among local populations in protected areas?
  • What hinders or facilitates local level citizen engagement?
  • How can we measure the social inclusion of Roma in real time?

I haven’t seen many critiques of the method, and what I did find does not really challenge the principle (that stories are a powerful method for tracking changes in a society) but rather the ways and means of putting it into practice (capturing stories at scale, ensuring continuity, and processes for quick policy design).

The experience of Global Giving and Nominet Trust with narratives gives cause for optimism in this respect.

With disruption in just about every sector you can think of, from education and health to politics and energy, maybe the time has come to disrupt how we figure out what people really want?

P.S.  Make sure to read Ian’s rebuttal to my arguments.

  • davesnowden

    I don’t think it’s a rebuttal – he argues the approaches (as he understands them) are complementary. He’s not aware of the quant aspects of what you are doing either. It’s interesting and I will try and write a blog on it later this week

  • sonjablignaut

    Very interesting article. I’ve also done narrative work regarding citizen engagement and protected areas in South Africa. Didn’t get to use Sensemaker for that yet though. Happy to share some learnings/results.

    • Millie

      Sonja, I would be thrilled to compare notes! I will be back from vacation on July 29th, my email is, any chance we could catch up sometime after? I could take you through the analysis of the stories we’ve got to date, and I would love to learn about your projects, the methods you used, and the results you’re seeing. I am thrilled you left a comment, and hope we can connect soon. Best, Millie

      • sonjablignaut

        That’ll be great Millie. I’ve sent you an email so long.