How evaluation can kill: Why we can't just keep evaluating to justify

Posted on 19 Dec 2018

By Matthew Schulz, journalist, Our Community

Suicide, family violence, homelessness, inter-generational poverty: they are all huge social battles that place grantmakers on the frontline of life-and-death issues.

But leading social impact thinker Ross Wyatt, managing director of Think Impact, said many policymakers, funders and grantseekers are trapped in an unproductive cycle of using social impact evaluations to prove what they did was right, rather than using them to learn how to do things better.

He said grantmakers and recipients can be held hostage to factors that work to encourage evaluating to justify, rather than evaluating to learn. These include:

  • ignoring the need to collaborate with beneficiaries, and focusing on doing things for them instead
  • relenting to pressures to “evaluate to justify”, so that a program can be deemed a success and is therefore allowed – and encouraged – to continue
  • an excessive focus on funding, and propping up income streams, instead of a focus on program impact
  • an obsession about “attribution” (who should take the credit for a program’s impact) rather than “contribution”
  • pressure to avoid failure and a related lack of tolerance for risk
  • diving into the data flood and gathering information for its own sake without properly understanding which measures are most appropriate
  • producing an evaluation that’s inappropriate for the scale of the program it is supposed to measure – too small, too large, or too costly.

Each of these misfires can cause harm to programs that are supposed to help.

Think Impact director Suzi Young leading an assessment of a women's housing organisation.

“On average, eight people are committing suicide in Australia every day and every week a woman is killed by an intimate partner … people are dying under the current systems,” Mr Wyatt said.

He said the fact that so many social indicators continue to head in the wrong direction in Australia means we can’t keep assessing programs the same way and expect a better result.

Ross Wyatt urges funders not to 'evaluate to justify', but to learn from their assessments.

“Too many social impact evaluations are focused on detecting that some outcomes might be occurring, but the fundamental things aren't changing.

“If we're just going to continue to evaluate to justify our current activity, that just works to maintain the status quo, and under the status quo people are dying; that's how evaluation kills people.”

There are many reasons why we’re squeamish about facing up to what’s not working well, and government-funded programs are often the worst culprits.

Those programs have a “low tolerance for failure, and a low tolerance for risk”, Mr Wyatt said.

The reasons are partly political, with the aversion then reflected in policies, he said.

Funding programs may face pressure from elected officials to be seen to be doing something effective, and so must come up with results that demonstrate this.

By the same token, organisations that are being funded, and their beneficiaries, may produce the results they believe are expected of them – again reinforcing existing programs.

Mr Wyatt’s observations of several hundred evaluations in recent decades reveal a pattern of seeking to prove that “everyone is doing a great job”.

“There’s an enormous effort in evaluation that doesn’t lead to a substantive improvement.”

He said that too often the process is a feedback loop that entrenches disadvantage by wheeling people back into the system.

The perverse thing, he said, is that the continued survival, or even success, of many organisations relies on the continuation of the very problem they are working to eradicate.

He gave the example of a homelessness services provider that attracts funding to tackle the issue. If it succeeds in reducing homelessness, then funding will dry up.

“It’s a strange realisation but the very survival of these organisations depends on the misery continuing.”

He said the job of an evaluator is to “shine a light” on problems and issues and look to solutions instead of maintaining the status quo. Evaluation should always be conducted with a view to service improvement, he said.

Why it's time to fight the inertia of the status quo

Mr Wyatt cited the example of a program that aims to help isolated Bhutanese migrants adjust to a new life in Australia.

Many are recovering from trauma and struggling to settle into life in regional Australia. Many face significant language barriers and even struggle to understand what’s on television.

The program sees volunteers collect migrants on a bus each week to bring them to the local library to draw. This has been their main social activity for nearly a year.

It is of course a welcome respite, yet Mr Wyatt said the grateful migrants are loath to complain or push for a better program. Instead, they’re quick to tell program leaders they are happy with the help they are getting, believing that’s what the leaders want to hear.

This is an example of a program that would be easy to “evaluate to justify”: by demonstrating the volunteer effort and the satisfaction of the Bhutanese, and by showing that community connections have improved.

But Mr Wyatt asked what would happen if the community themselves were placed at the centre of the program design and evaluation. “I suspect we might see a different type of program emerge,” he said.

Collaborative effort is increasingly being adopted by organisations seeking to maximise their impact.

The revolution is coming: Collaboration for impact versus competition for funding

Mr Wyatt is calling for a “revolution and an evolution in program design and evaluation”. He summarised the keys to effective evaluation that can make a difference:

  • Collaborative approaches, especially those with clients “at the centre”
  • Unlocking and building the capacity for change
  • Supporting work on system change, including advocating for policy changes when needed
  • Focusing on funding impact, not activity
  • Focusing on contribution, not attribution, when looking at who should be taking credit for programs

New models of program design and evaluation are increasingly looking further towards the horizon when assessing the success of projects, particularly for the most difficult challenges.

Citing homelessness again, Mr Wyatt said it is a complex issue that can’t be tackled in isolation.

It instead requires new equity and finance models, building and design, local government zoning and policies, street-level social work and a community-wide realisation that housing is a fundamental human right, not just a vehicle for building personal wealth.

This systemic approach will get better results in the longer term and will do more for the economy as a whole.

In his office, this type of joint effort is described as “impact-led co-design”, pulling together community organisations, policy makers, government agencies, donors and others in a collective focus on impact.

This kind of collaborative mindset helps to ensure that those involved develop a nuanced understanding of difficult issues such as homelessness, suicide, mental health and family violence, rather than a reactive one.

“You’re no longer just a bureaucrat writing grant criteria, but someone who understands what paths there are for dealing with complex issues like homelessness.”

Mr Wyatt said organisations that are less focused on competition for funds or the battle to attribute credit are those more likely to attract support. They become experts and can speak with governments and funders “as equals”.

“Every organisation that we’ve worked with to focus on impact, rather than survival, is flourishing financially (because) those who are focused on learning and maximising impact make for a more attractive funding proposition.”

Higher risk, higher rewards for government funders

It’s a trend that has implications on the other side of the ledger too, when it comes to funding collaborative effort, instead of pitting organisations against each other in a competition for funds.

Until now, philanthropic organisations have had a greater appetite for risk than most other funders.

Now, some of the biggest government agencies are taking action to do better on rewarding risk-taking, with the help of organisations such as Think Impact, and as a result of changing perceptions and policies regarding impact.

The Federal Department of Social Services (DSS), for instance, is undertaking a review of its grants system, which it has dubbed “Commissioning for Better Outcomes”.

Think Impact has been working closely with many government officials to help them recalibrate their thinking on grants, funding and impact.

“This whole approach is to challenge the contested or competitive funding model,” Mr Wyatt said. “Competitive procurement is appropriate when you’re buying toilet paper or pens, but it’s not when you’re trying to solve intergenerational poverty, family violence or youth opportunities.”

“Really, a whole different set of rules applies, where people need to come around the table, not across it. And setting service providers against each other in that context is not getting the outcomes that they seek.”

The department said in a recent discussion paper that it accepts "existing methods for commissioning grants may be impacting our ability to achieve the outcomes sought".

As reported by AIGM earlier this year, the DSS – which distributes $217 million annually in grants to programs for families and children – has flagged a significant shakeup, shifting to tiered funding instead of program funding.

The department will demand a focus on outcomes instead of outputs. Future programs will also have to be collaborative, targeted and driven by evidence, with an increased focus on early intervention and prevention.

How much should I spend, and which model should I use?

The first issue to consider when allocating – or not allocating – resources to evaluation is the cost of not evaluating, Mr Wyatt said.

“You end up putting 100 percent of your money down a path that, year after year, you don't know whether it’s working or not.”

Some organisations might be comfortable with that, and it is their choice to make, he said.

Mr Wyatt said his organisation’s “rule of thumb” was to spend 2%–10% of a program’s budget on evaluation, depending on the size of the program.

“If you have a $2 million program you may need to spend up to $200,000 on evaluation to ensure you get the most value from the investment. But of course, it really depends on the scale of the grant.”
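As a rough illustration only – the sliding scale below is an assumption, not Think Impact’s formula – this minimal sketch shows how the quoted 2%–10% rule of thumb translates into dollar figures for a given program budget.

```python
# A minimal sketch of the 2%-10% "rule of thumb" quoted above.
# The function name and fixed percentages are illustrative assumptions;
# Mr Wyatt stresses that the right spend depends on the scale of the grant.

def evaluation_budget_range(program_budget: float) -> tuple[float, float]:
    """Return the (low, high) evaluation spend implied by the 2%-10% rule."""
    return 0.02 * program_budget, 0.10 * program_budget

low, high = evaluation_budget_range(2_000_000)
print(f"$2m program: roughly ${low:,.0f} to ${high:,.0f} on evaluation")
# -> $2m program: roughly $40,000 to $200,000 on evaluation
```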

He said the key to any evaluation was appropriateness.

By way of illustration, Mr Wyatt explained how Think Impact is assisting a program that aims to promote future environmental activists.

It found the organisation was probably trying to measure too many things: education curriculum, social media campaigns, and many other activities.

Think Impact advised the organisation to narrow its focus and spend less – to “spend a small proportion on understanding what’s changed, learning from that, and steering from there”.

Organisations that try to measure everything can discover that “you don’t know which thing worked”.

Appropriate evaluation also means collaborating with beneficiaries to use measures most appropriate to those groups.

Mr Wyatt described helping Aboriginal communities and government departments in regional Australia to work better together.

While departmental staff could point to numbers which seemed to show an improvement – a fact trumpeted by the agency involved as evidence of improvements to the service – “nothing had changed” from the point of view of the local communities.

That’s because their knowledge of the service was based on stories shared between users, not on the statistics the agency offered.

It’s a living example of how data gathering on its own can be ineffective, and of why data must be tied to an appropriate measure.

Mr Wyatt said organisations he’s worked with have measured connectedness by asking, for example, “Do you have someone who can help you move a fridge?”, or assessed family dynamics by asking people to “describe the sounds in your house”.

There are many models of evaluation and measurement out there, many of them spelt out by the Centre for Social Impact in its “Compass”, which featured in the April 2018 edition of Grants Management Intelligence (How do you measure up?).

We also examined the benefits of community-based measurement studies such as Vital Signs in the November 2017 edition (Community reports with clout).

So whether you’re going to use theory-driven economic analyses or “integrated” approaches, the journey is often just as important as the results – sometimes more so.

Mr Wyatt said Social Return on Investment (SROI), for instance, which puts a dollar figure on social benefits and outcomes, is not a silver bullet, but it is a valuable tool that helps people to understand “the relativity of outcomes”.

"The biggest value is in the process, not the result you get in the end. It's that deep focus that you're employing."
Evidence, joint effort and recognising contributions are important elements to help grantmakers to become more effective.

He disagrees with the critics who say social benefits and outcomes shouldn’t be measured in financial terms, saying that while SROI can’t always account for everything, dollar values can’t be beaten as a conversation starter.

For example, Think Impact helped measure the value of parks in Parramatta, western Sydney, showing a social value of between $8 and $37 for every dollar spent.

That included such things as health outcomes and pride of place, but it didn’t include biodiversity, pollination, or other local environmental effects.
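For readers who want to see the arithmetic behind a ratio like “$8 for every dollar spent”, here is a minimal sketch. The outcome categories and dollar figures are invented for illustration, not taken from the Parramatta study; a real SROI analysis would also discount values over time and adjust for deadweight, attribution and drop-off.

```python
# Hypothetical figures only - not the Parramatta study's data.
monetised_outcomes = {
    "health benefits": 450_000,   # e.g. value of increased physical activity
    "pride of place": 150_000,
    "social connection": 200_000,
}
investment = 100_000              # assumed annual spend on the parks

# SROI ratio: total monetised social value per dollar invested.
sroi = sum(monetised_outcomes.values()) / investment
print(f"Every $1 invested returns about ${sroi:.0f} in social value")
# -> Every $1 invested returns about $8 in social value
```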

Nevertheless, the value of those parks was “profound”, with some locals recalling those parks as places where they made marriage proposals or saw their kids on a swing for the first time.

“Monetising promotes the conversation about how value is created, what the value is that people place on things. We would much rather have a conversation about the level of the value than say that you can’t put a value on it, and therefore say the value is zero.”

Still disagree? Try this at home.

Randomly select a group of objects: a case of beer, a million dollars, a trip to the local park, a cup, a visit from the grandkids, a hug from your child, a book, a necklace, and other things. Now put them in order of how much you value them. Now try it on another member of your household and see if you get the same results. The same process could be repeated on a larger scale. It creates a sense of relative values.
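A minimal sketch of what “repeated on a larger scale” might look like: collect the same ranking from several people and average the ranks. The items and example rankings below are illustrative assumptions only.

```python
from statistics import mean

# Each person orders the same items from most valued (rank 1) downwards.
rankings = [
    ["a hug from your child", "a visit from the grandkids",
     "a trip to the local park", "a necklace", "a case of beer"],
    ["a visit from the grandkids", "a hug from your child",
     "a case of beer", "a trip to the local park", "a necklace"],
]

items = rankings[0]
average_rank = {item: mean(r.index(item) + 1 for r in rankings) for item in items}

# Sorting by average rank gives the "picture" of relative value.
for item, rank in sorted(average_rank.items(), key=lambda kv: kv[1]):
    print(f"{rank:.1f}  {item}")
```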

“All of a sudden you have a picture.”

For instance, in areas where parks are scarce, users place a higher value on them than in areas where parks are plentiful.

As Mr Wyatt points out, the actual process of ranking is helpful in building understanding.

“The biggest value is in the process, not the result you get in the end. It’s that deep focus that you’re employing.”

That is perhaps the true secret of good evaluation. There’s no one “best” method. Instead, practitioners and users of different methodologies should understand that each does different things.

When the revolution comes, Mr Wyatt will be there still pushing for the ultimate goal of powerful social impact, properly understood.