Grantmakers fine-tune skills, nous and networks
Posted on 21 Aug 2019
By Matthew Schulz, journalist, Our Community
Hundreds of Australia’s leading grantmakers have recharged their enthusiasm for one of the toughest yet most rewarding jobs around, at the must-attend event for funding experts across government and philanthropy.
We’re talking, of course, about the Grantmaking in Australia conference, held August 8–9 in Melbourne.
Following the sell-out success of last year’s event, the Australian Institute of Grants Management (AIGM) this year moved to the top floor of Melbourne’s RACV Club to cater for 250 delegates.
The growing number of funding specialists reflects the times we live in, when funding effectively has never been more critical.
The conference theme, “better grantmaking evaluation”, was explored in presentations ranging from the highly practical to the theoretical – often both at once.
Conference builds on best practice, learning from mistakes
Key themes that buzzed throughout the event included the power of partnerships, the benefits of existing data, new tools developed here and overseas, and the need to accept and learn from “failures” to get the most out of grant schemes.
Conference MC Fiona Dempster, an AIGM director, said the event is the only one of its kind.
“It's the one and only time grantmakers get together. It’s the only cross-sectoral one that brings everyone together from across the country, with their unique work, to share ideas.”
Hosting the event cemented Ms Dempster’s view of grantmaking as a great profession.
“It's a really interesting area to work. You've got to do it against the background of highly researched and evidence-based information, yet you're working ‘on the ground’, and you're working with people, and you're trying to make an impact. You've got to be able to bring the ground up and the top down, and work in the middle. It's a great dance.”
Delegates heard from 21 speakers during the event, including Andrew Callaghan of the Australian Social Value Bank, Dr Squirrel Main from the Ian Potter Foundation and Dr Robyn Mildon of the Centre for Evidence and Implementation, who delivered three very different presentations that entertained while also providing tools and techniques to tackle the growing evaluation challenge.
For many delegates a highlight was the awarding of the AIGM Grantmaker of the Year gong to the four members of the NSW Heritage Grants team.
Witnessed by their peers, the four were recognised for their innovative and practical ideas, including a new grantmaker–grantseeker portal, a planning platform for prospective applicants, and methods for reforming the way grantees are supported by grants administration teams.
Earlier, panel discussions by leading funders and thinkers also tackled better grantmaking from both the funding and grantee sides of the equation.
In the first of these, Jodi Kennedy from Equity Trustees spelt out the challenges the organisation had faced on its journey towards outcomes-based grantmaking.
She told the conference Equity Trustees had realised many benefits by becoming transparent about how it distributed $100 million each year, and said the organisation aimed to be as open as it expected grant recipients to be.
In the Grantee Panel that followed, funding recipients discussed how they were coping with the strong push by governments and philanthropists to prove their programs were worth the funds they’d been granted.
Jocelyn Bignold of McAuley Community Services for Women, Cameron McLeod of the North Melbourne Football Club and Karen Sait of the Port Phillip Community Group agreed the key to success was equal partnerships built on trust, even if the process was tough at times.
Ms Bignold summed it up nicely: “We want to work with you. We don’t want you to dictate outcomes. We want you to invest.”
And delegates were treated to an insider’s view – from the Innovation Lab’s Sarah Barker – of the work being done by Our Community to take data intelligence to the next level for grantmakers: an “Outcomes Engine” that will track and analyse grantmaking impact via the SmartyGrants system.
Our Community’s top leaders and analysts had been hard at work in the Peace Room (not to be confused with the War Room), she said, tackling the prickly problem of measuring outcomes across a suite of different programs.
Grantmaking tribes meet for deep discussion
Many AIGM and SmartyGrants staff believe the true heart of the Grantmaking in Australia conference lies in the Tribal Gatherings held each year on day two of the jam-packed program.
State and federal grantmakers gathered in one room to hear their colleagues’ stories of data and evaluation from the front line.
Leah Andrews of the NSW Office of Environment and Heritage set out to answer the question “Can you afford not to evaluate?” (spoiler alert: the answer is no).
Then Anne Robinson of artsACT described how her organisation had adapted its grants program to better suit the arts community.
And data insights expert Paul Hyland dived into the depths of the data world, urging delegates to seek help from statisticians when the going gets hard, or risk being left behind.
Simultaneously, local government funders gathered in another room for a sometimes-raucous presentation by the mayor of the City of Port Phillip, Dick Gross, whose council takes in Melbourne’s lively seaside St Kilda region.
Only Councillor Gross would begin a presentation on good local government grantmaking with a pop quiz about the “most important thing grantmakers must know” using The Sound of Music’s “Do-Re-Mi” as a prop. (And the most important thing? The Local Government Act.)
He was followed by three more speakers: Light Regional Council’s Lorinda Bayley, whose municipality straddles the rural–metro divide in South Australia, on engaging youth; Dale Sutton from the southern Adelaide metro City of Onkaparinga, who shared his out-of-the-box thinking on how to bring in fresh grants and engage the community; and the City of Greater Geelong’s Justyn Rowe, who tackled the nitty-gritty of committees – the pain of “table bangers”, mind-numbing terms of reference and impossible questions – and how to avoid all three.
SmartyGrants to the max
Many conference delegates were also SmartyGrants users, so the Friday session titled Optimising SmartyGrants had power users all ears for ways to maximise their advantage.
Users were glued to the screen as SmartyGrants training and support services director Jodie Shanks demonstrated the developing Outcomes Engine in action, showing off its application across multiple measurement frameworks.
She also touched on the continued expansion of the CLASSIE taxonomy, which helps grantmakers to label their social sector work with greater consistency.
Delegates were keen to know more about the use of dashboards and “doughnuts”, both of which can help grantmakers work faster and more effectively.
Users dug into technical nooks and crannies to examine the potential development of non-standard grants, online contracts and better application of standard fields.
Ms Shanks stressed that reading SmartyNews – sent to users monthly – was the best way for users to find out about new product developments as soon as they were launched.
Grants guru Squirrel Main crunches the numbers
Delegates laughed and groaned with recognition as Dr Main – who leads research and evaluation work at the Ian Potter Foundation – convened a fictitious grants “panel” comprising four amateur actors who improvised their way around the egos, unrealistic expectations and demands of both sides.
Dr Main had gleefully crunched the numbers on “the cost of not funding evaluation” using data from the Ian Potter Foundation’s extensive database.
The answer? A loss of between 20c and 67c per dollar invested.
Dr Main also showed that good programs with evaluation can multiply an organisation’s “leverage” significantly.
As she put it, “leverage is the currency of philanthropy”.
It may be the first time anyone has crunched the numbers to bring into sharp relief the difference between unwittingly funding failing programs and having the data to build on your successes.
Her message? Allocating funds for evaluation pays for itself.
Dr Main said she’d been excited to answer the question about the cost of not evaluating. “No one’s asked me that before,” she said after her presentation, and she expects to explore the topic further.
“In every instance I made a conservative estimate, so it’s probably much higher.”
Why grantmaking evaluation is like baking a perfect loaf
Australian Social Value Bank impact specialist Andrew Callaghan told the conference that becoming a good evaluator took time, patience and skill, much like baking a good sourdough loaf.
“Over seven years, I've mastered that craft, but it’s the same with impact measurement and evaluation: I’ve learned over time, I’ve failed, I’ve produced reports that were okay but needed to be improved, and I’ve had to change the questions that I’ve used.”
Evaluation, he told delegates, wasn’t a short relationship, and it certainly wasn’t a glossy report aimed at boosting public relations, but a “journey” for those wanting to be good at it. And one way to keep that understanding on track was to visualise “Sonya”, a fictitious not-for-profit provider struggling with the basics of evaluation amid growing demands for sophisticated measures from grantmakers, governments and philanthropists.
“The number one driver of failure of evaluation and impact measurement is not having a culture and senior champions who are driving it … [and] just ticking boxes to meet the requirements of funding,” he said.
He suggested grantmakers needed to ask themselves these questions to avoid an “evaluation fail”:
- What are my desired outcomes?
- What am I going to invest to evaluate these outcomes?
- What is the support mechanism to ensure recipients are able to collect the data needed?
- What is my process for auditing the data to ensure it is sound?
- What are we going to use the evaluation data for?
They’re easy questions to ask, but as grantmakers will attest, much harder to answer.
Dr Mildon reveals how not to trip over your evaluation
A high point of the conference for many delegates was the high-powered – and hilarious – presentation by Dr Robyn Mildon of the Centre for Evidence and Implementation, based in both Sydney and Singapore, who disarmed the crowd with her tale of tripping and landing squarely on her chin while alighting from a flight to Melbourne.
And while Dr Mildon outed herself as a “dork” for tripping over her scarf before a planeload of passengers and copping a “hideous bruise” and a tongue that didn’t quite work, she rose to the occasion at the conference – with the help of some painkillers.
In fact, many picked Dr Mildon as the conference highlight for her unabashed honesty and shoot-from-the-hip commentary and “myth busting” about evaluation.
Her centre, she said, is an intermediary agency that straddles research, policy and practice, working with government, philanthropy and corporates.
She said that while evidence was now “the new black”, some practitioners weren’t doing it well.
She warned delegates of the “old school” set who painted evidence-based policy or evidence-informed practice as a “one-way street”, with research findings driving policies and programs. Context was also critical, she said.
“It's about making decisions based on information that you have, in the context of the community that you're working in, the resources you have, the mandate of the organisation.
“It's just a way of figuring out the best decisions to make under such circumstances.”
She said it was time for “scatter gun” funding to end, and to understand that “poor evaluation makes most things look good”.
That’s why her evaluation “superheroes”, she said, were the folk at Evidence Action, a US-based global not-for-profit that has demonstrated good evaluation in the field.
Evidence Action pulled the plug on a program designed to help the rural poor in Bangladesh. Dubbed No Lean Season, the pilot seemed like a success, but a large-scale randomised controlled trial revealed No Lean Season wasn’t actually having the desired impacts at all.
She warned that high-quality studies tended to be costly and difficult, whereas bad research was often cheaper, with results that tended to flatter the organisations concerned.
A useful alternative for grantmakers was to draw on existing research that had already demonstrated the effectiveness of interventions.
But beyond that, grantmakers needed to understand the ground they were operating in.
“You must understand what good evaluation research looks like … it's a very important tool in your toolbox to understand what is being put in front of you or what claims are being made.”
“So when you're thinking about giving some money to folks, and asking them to evaluate how effective something is, it is beyond whether it's a process evaluation or impact evaluation. It's what question are they trying to answer and therefore match your methods to your question.”
She said there was still not enough being done to assess the performance of funders in the giving of grants, and to monitor their effectiveness.
And she hoped grantmakers would continue to share good evaluations to help the sector learn from them.
This year's event was the best attended we've hosted.
MORE INFORMATION