Measure twice, cut once: Why social impact measurement is an essential skill

Posted on 18 Jun 2019

Grantmakers are in prime position to steer their masters towards best practice when it comes to impact and outcomes measurement, an Australian expert says.

Australian Social Value Bank (ASVB) impact specialist Andrew Callaghan has analysed new Australian government strategies for "commissioning for outcomes" and says grantmakers should be ready to lead when it comes to understanding impact and achieving measurable results.

After a decade in the rapidly developing field, Mr Callaghan continues to be surprised by the government and corporate sectors' lack of strategic leadership and general understanding of the work, even as more governments adopt impact measurement as an essential part of funding.

It's a worrying observation, and it comes as Our Community's executive officer and "chaos controller", Kathy Richardson, pegs outcomes-oriented grantmaking as the biggest single shift in the sector in recent times.

Mr Callaghan says funders can stay ahead of problems by formalising a social impact measurement process early, using this quick checklist:

  • What is the procedure for communicating with the people you're giving grants to?
  • What is the support mechanism to ensure recipients are able to collect the data needed?
  • What is the process for auditing the data to ensure it is sound?

Don't waste your time: collect the right information

"I've worked on evaluations of large-scale government programs and grant-giving programs in Australia and New Zealand, and I've seen that lack of clarity and consistency in what we're asking for leads to a diverse range of types of reports," Mr Callaghan says.

"There's a lot of wasteful information collected - and time wasted - that could have been spent running the programs that were given the grants, if the grantmakers were clear about the information they wanted people to collect in the first place.

"This includes being clear on the kind of outcomes you're looking to achieve from those grants and having a simple and cost-effective way of being able to give provide that evidence."

Too often, he says, evaluations are conducted after the money has been spent, putting grantmakers under pressure to justify spending with low-quality assessments.

"In some cases, I wouldn't say organisations are making it up … but not far off it."

This is where grantmakers can step in and step up.

Why government grantmakers have the power

"Grant providers should understand that there are not a lot of people who would know how to do this well. And the people who do know how to do it well realise how hard it is to continually measure effectively across different types of programs and to get the data out within appropriate time frames," Mr Callaghan says.

Grantmakers in local, state and federal government organisations are among those at the coalface of the new measurement economy, either driving change or adopting new techniques.

They are front and centre partly because they have access to the cheapest and most effective way to conduct data analysis: using data that already exists, instead of collecting it from scratch.

Government grantmakers have access to data banks others can only dream of, and the power to work at the scale that's needed.

"Grant providers can be the leaders in showing how it can be done, partly because of the data sets that they have available," Mr Callaghan says.

He cites a hypothetical example of government grantmakers wanting to boost the financial resilience of people receiving government support: agencies can both track any reduction in benefits claimed and watch the effects over the longer term.

Governments can access statistics on crime, health, school attendance and more, backed by systems to protect that data. Governments are also able to partner with others by drawing on data collected by grant recipients, which can be confidentially or anonymously analysed "behind the scenes".


This enables a government entity to assess a program's effectiveness without releasing sensitive information, allowing grantmakers to inform recipients, "keep doing what you're doing, because we can see changes happening with the people that you're targeting."

This method, Mr Callaghan says, was used successfully in the United Kingdom through a joint government and privately funded "social impact bond" that aimed to reduce recidivism rates by tracking subjects' court appearances, bail commitments and returns to prison.
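
A minimal sketch of what that behind-the-scenes analysis can look like, using invented, de-identified records rather than any real program data (the "before and after" counting here is an illustration, not the UK bond's actual method):

```python
# Hypothetical, de-identified administrative records: one row per
# participant, counting returns to prison in the 12 months before
# and after the intervention. All data here is invented.
records = [
    {"id": "p01", "returns_before": 2, "returns_after": 0},
    {"id": "p02", "returns_before": 1, "returns_after": 1},
    {"id": "p03", "returns_before": 3, "returns_after": 1},
    {"id": "p04", "returns_before": 0, "returns_after": 0},
]

def reoffending_rate(records, key):
    """Share of participants with at least one return to prison."""
    return sum(1 for r in records if r[key] > 0) / len(records)

before = reoffending_rate(records, "returns_before")
after = reoffending_rate(records, "returns_after")

# Only the aggregate change leaves the agency; no individual is identifiable.
print(f"Reoffending rate: {before:.0%} before vs {after:.0%} after")
```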

Proper understanding of social impacts has massive implications for the funding of government programs. In New Zealand, the government continues to develop its Social Investment Agency, which uses big data to help the government to prioritise its spending by targeting the most vulnerable.

Mr Callaghan describes that approach as "borrowing money from the future", or taking a "forward liability" approach, based on the premise that spending now - on programs to reduce crime, for example - will save the government millions, and perhaps billions, in the longer term.
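
As a back-of-the-envelope sketch of that "forward liability" logic (all figures, including the discount rate, are invented for illustration):

```python
# Hypothetical forward-liability calculation: compare an upfront program
# cost against the discounted stream of future costs it is expected to
# avoid (e.g. reduced prison and welfare spending). All figures invented.
upfront_cost = 5_000_000           # spend on the program today
avoided_cost_per_year = 1_200_000  # estimated future savings per year
years = 10
discount_rate = 0.05               # future savings are worth less today

present_value_of_savings = sum(
    avoided_cost_per_year / (1 + discount_rate) ** t
    for t in range(1, years + 1)
)

net_benefit = present_value_of_savings - upfront_cost
print(f"PV of avoided future costs: ${present_value_of_savings:,.0f}")
print(f"Net benefit of spending now: ${net_benefit:,.0f}")
```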

Mr Callaghan accepts he's not just an observer of the various implications of adopting social impact measures. He has skin in the game, as a practitioner who would like to see more organisations use the ASVB's approach, a variation of the cost-benefit analysis (CBA) methodology adopted by many governments globally.

The Department of Social Services has revamped its families and children program with an outcomes focus.

The wave of the future: Governments establish outcomes frameworks

Governments and departments of all persuasions, as well as foundations and corporates, are creating new guidelines requiring partner organisations to prove the efficacy of their work.

Last year, for example, as the AIGM reported, the Department of Social Services moved to revamp its $217 million families and children program, shifting the focus to outcomes and away from the former program-based funding model.

The ASVB has been examining the release of outcomes frameworks for service areas in New South Wales and Western Australia.

The ASVB's basic argument is that organisations pitching for funding should be moving to "align their organisation's outcomes" to those frameworks.

New South Wales, for instance, has adopted a Human Services Outcomes Framework (HSOF) for its funding strategies, with outcomes now embedded into service contracts too.

The ASVB predicts, "In the future, it is highly likely that any organisation … unable to measure outcomes will find it hard to meet government selection criteria for contracts or funding."

There are significant similarities and important differences in the way the Western Australian government has approached the issue, Mr Callaghan says.

The overall aim of measuring achievement in the human services and social sector domains is the same, but in the west the government has not yet made outcomes a requirement of contracts.

More significantly, the WA Department of the Premier and Cabinet has commissioned the state's social sector peak body, the Western Australian Council of Social Service (WACOSS), to develop that framework, rather than developing it internally.


Mr Callaghan supports the decision. "You're essentially allowing the sector - who are out there working on the ground - to get consensus and to build up an agreed framework of collective impact for the sector."

For tackling large-scale social issues, "that is a really good way of making sure everyone has agreed to that model".

The NSW approach of creating those frameworks internally mirrors Victoria's process, he says. In both cases, those states have developed a "bank of metrics" for organisations to employ.

But he predicts a challenge for those states in translating and applying what he describes as "population-level" metrics. He's referring to measures that test for improvements at state level but could prove problematic for assessing smaller-scale grants.

He cites the hypothetical case of a not-for-profit working to combat absenteeism across several schools. While the government stipulates the use of indicators by postcode, this fails to capture results that might apply to just a few dozen students spread across a string of different postcodes.

"How does that relate to individual organisations that get a grant for $20,000? How are they meant to relate those metrics to the impact they're having? That's where the disconnect is."

Instead, he says, funding organisations must aim for a set of principles that allows information to be captured consistently across organisations and combined across programs.
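
To picture what consistent capture could enable, here is a hypothetical sketch: if every grantee reported outcomes in one minimal shape, a $20,000 project and a state-wide program could be rolled up against the same framework outcome. All field names and figures below are illustrative, not any state's actual schema.

```python
# A hypothetical common outcome record. If every grantee reports in this
# shape, small grants and large programs can be aggregated against the
# same framework outcome. Field names are invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class OutcomeRecord:
    program: str
    framework_outcome: str  # e.g. an outcome domain from a state framework
    people_reached: int
    people_improved: int    # people showing the measured improvement

reports = [
    OutcomeRecord("school-attendance-nfp", "education", 60, 42),
    OutcomeRecord("statewide-literacy", "education", 5000, 3100),
    OutcomeRecord("community-health-grant", "health", 300, 180),
]

# Roll up small and large programs under the same outcome domain.
totals = defaultdict(lambda: [0, 0])
for r in reports:
    totals[r.framework_outcome][0] += r.people_reached
    totals[r.framework_outcome][1] += r.people_improved

for outcome, (reached, improved) in totals.items():
    print(f"{outcome}: {improved}/{reached} improved ({improved/reached:.0%})")
```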

Not surprisingly, Mr Callaghan says the choice of measurement and evaluation model depends on the use case, but it helps to consider the purpose.

"At a more basic level where traditionally there was a pot of money, whether it's a grant or whether it's to deliver services, consider: what's the thinking going on within a department and how they're formulating that offer out to the people who win those grants or that funding? What are they getting back? How useful is it?"

Once those basic questions are answered, it is easier to improve that system and get on with the main game of "maximising impact through the use of that data".

Can we agree to disagree? Models for measurement

These NSW and WA approaches are but two of countless frameworks being tested across the globe.

Which raises the question: which model is best?

Perhaps the answer will be provided by the Universal Commons project, spearheaded by social investor Alan Schwartz, tied to a generous prize for anyone who can crack the code of valuing social good. (See our December 2018 report)

A single dominant model is perhaps a long way off, but there is a growing sense of the need to seek agreement on basic principles and practices.

"The methodology can be agnostic," Mr Callaghan says. "It doesn't have to be cost-benefit analysis (CBA), or social return on investment (SROI), but the reality is there has to be agreement on the principles, there has to be agreement around a certain level of kind of academic rigor, and especially that creation of value, which should be a kind of monetary value if we want social value to be easily understood."

Social investor Alan Schwartz is among those seeking a model of social good that can be measured and traded.

Earlier this year the OECD highlighted the problem, painting a picture of 590 different social impact investment policies across 45 countries, with most of the US$228 billion invested targeting "easy returns".

The OECD believes the current mix is less than desirable in the battle to achieve the UN's Sustainable Development Goals (SDGs).

"The challenge lies in defining and measuring impact," said OECD development co-operation director Jorge Moreira da Silva in a recent announcement.

"Different countries, public and private organisations are using different yardsticks to measure different elements. To counter the risk of 'impact washing', public authorities have a responsibility to set standards and ensure they are adhered to."

Mr Callaghan agrees, drawing a parallel with accounting standards.

"You have different peak bodies in accounting which make different judgments around value, depreciation percentages and other measures. That's an example of reaching consensus. And eventually people will start signing up to different principles and practices.

"If you want to take impact measurement and evaluation to a level where it becomes a profession, like accountancy, then you need to start understanding that it needs large-scale guidance."

That guidance includes pushing organisations to measure the whole picture of the impact they're having instead of selectively reporting the most positive aspects.

Again drawing a comparison with accounting, Mr Callaghan believes organisations should regularly report on their social impact, ideally alongside their financial reporting.

"The company I work for is based on a certain methodology, using cost-benefit analysis, which is the way government goes about looking at the effectiveness of social programs.

"Our perspective is that we're looking to create something that's cost-effective, light-touch resource-wise for organisations, but is rigorous enough to meet the standards of what government and other funders want."

He accepts that some academic institutions and others "fundamentally disagree with putting monetary value on social outcomes", but he believes it is still "early days".

Members of the Social Impact Measurement Network Australia (SIMNA) are among those actively setting standards, pushing for better policies and sharing knowledge.

Yes, but what can we do now?

Grantmakers at all levels of government - alongside community and philanthropic foundations and corporate grantmakers - might think these hurdles make the challenge bigger than ever. But Mr Callaghan offers hope.

"That's where you have this kind of idea of contributing towards those larger outcomes. If you're an organisation that is linked to government, at whatever level, you need to be aligning the outcomes of what you're doing to their outcomes."

Organisations reporting to funders may not need to provide reports at the same level of complexity as their masters, but they must at least understand how they are contributing to those "population-level" metrics and other measures.

Grantmakers must also rethink what they really expect from recipients, and Mr Callaghan points the finger at the multitude of acquittal reports that come with built-in rose-tinted lenses.

"I've produced many impact reports in my life. But have you ever seen a negative impact report that's been published? One that says this grant of $2 million we gave completely failed and had no impact? That needs to change."

This doesn't mean your failures need to be splashed on the front page of the newspaper, he says, but it does mean you should dispassionately learn and adapt from your mistakes.

When it comes to the cost of evaluation, Mr Callaghan says governments each have different spending guidelines, typically ranging from 5% to 20% of a program's budget.

The calculation may be based on the size of the project, or on spending thresholds.

For example, he says, one model might hold that projects worth less than $100,000 require "almost an anecdotal case study"; projects worth more than $100,000 require quantitative analysis; and projects worth more than $250,000 require "pre-post" social impact evaluation, meaning data is gathered before and after the intervention.
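
Expressed as a simple rule, that tiered model might look like the sketch below. The thresholds follow the hypothetical model he describes; the function itself is only an illustration.

```python
# The hypothetical evaluation-tier model described above, expressed as a
# simple lookup: bigger grants attract more rigorous evaluation.
def evaluation_requirement(project_value: float) -> str:
    """Return the evaluation tier for a project of the given dollar value."""
    if project_value > 250_000:
        # data gathered before and after the intervention
        return "pre-post social impact evaluation"
    if project_value > 100_000:
        return "quantitative analysis"
    return "anecdotal case study"

for value in (20_000, 150_000, 400_000):
    print(f"${value:,}: {evaluation_requirement(value)}")
```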

Cart before the horse? Getting the process right

Having guidelines doesn't mean people are using them properly, and processes are easily derailed by poor politics, poor timing and poor processes.

It probably wouldn't take any of us long to dig up a news report about a grants process that skipped a few stages, including evaluation.

Mr Callaghan has witnessed projects already well underway - with grants issued - even before the desired outcomes of the program or its logic model are fully defined.

"Then you're tendering for the evaluation at the very end of a project and need to retrospectively evaluate a whole program".

"If impact measurement evaluation is constructed around evidence-based policy, and evidence-based grant-giving, then fundamentally your desired set of outcomes should be driving the design of your activity, not the activities driving the outcomes you're looking to measure."

Mr Callaghan warns that trouble can occur when politics intervenes, when the whim of a minister means that public servants - who may not be specialists in the field - are required under intense time pressure to define reporting requirements or make decisions that don't reflect the evidence.

"A minister is going to give a presentation next week, so quickly go out and get me some quotes and get me some nice stats around what's happening, so that they can present it".

Authorities and funders can also leap into the unknown when the evidence doesn't materialise in line with policy, or with political or financial cycles, and the powers-that-be press on anyway.

"That's a waste of the evidence base that's been collected, because it's not being used in that decision, [and] if you foresee that's what is going to happen, then you've got to question why are you investing in that evidence base if you're never going to use it?"

He says it doesn't have to mean a conspiracy is afoot; it might simply mean timelines are mismatched or under pressure, or that there are competing priorities between departments.

"Your program is over, and your final report from each organisation that you've given funding to comes in at the end of June. You have all these reports, and people want to do something straight away, but from an observations analysis and recommendations side of things, there is still a lot of work to be done."

We'd like to see that work done right. Wouldn't you?

MORE INFORMATION

Methods and tools: Social Impact Assessment Strategy Report (HEC Paris)

OECD Report: Social Impact Investment 2019

Five ways grantmakers can improve their social impact process