Reporting the business of non-profits
For a long time, an organisation would typically evaluate itself on a single measure: the profit or loss statement at the end of the reporting period gave the organisation its report card.
For non-profit organisations in particular, there is a wide range of potential metrics that can be used to measure performance internally, both qualitative and quantitative.
That was the situation Professor Robert Chenhall and Professor David Smith, together with Dr Matthew Hall from the London School of Economics and Political Science, found at the UK-based international development charity, Voluntary Service Overseas (VSO).
VSO's continuing debate over how to account for the varying international contexts of its 35 country programs provides an insight into how both non-profit and commercial organisations can better report the non-financial aspects of their work.
"We found that a compromise between two competing ideologies was both possible, and effective," Professor Smith says. "While the solution found was certainly imperfect, it propelled the organisation to continue considering and developing its overall purpose."
VSO changed its internal evaluation procedures over two years, and during that time the researchers conducted interviews with senior personnel, observed day-to-day work practices and collected internal and publicly available documents.
From 2002, VSO had used its 'Strategic Resource Allocation' tool to measure the effectiveness of each program office. This asked country directors to rate their local performance against 16 set criteria. The ratings were then aggregated to give each country a single percentage figure for its effectiveness.
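As a purely illustrative sketch of how such an aggregation collapses many ratings into one number (the actual 16 criteria, rating scale and weightings are not described here, so the criterion names and 0–4 scale below are assumptions):

```python
# Hypothetical sketch of a Strategic-Resource-Allocation-style aggregation:
# per-criterion self-ratings are summed and expressed as a percentage of
# the maximum possible score. Criteria and scale are invented for example.

def effectiveness_score(ratings, max_rating=4):
    """Collapse per-criterion ratings into a single percentage figure."""
    if not ratings:
        raise ValueError("no ratings supplied")
    total = sum(ratings.values())
    maximum = max_rating * len(ratings)
    return round(100 * total / maximum, 1)

# Invented example office: three of the criteria, rated 0-4.
uganda = {"volunteer placement": 2, "partner relations": 3, "funding": 1}
print(effectiveness_score(uganda))  # prints 50.0
```

The sketch also makes the article's criticism concrete: the final figure carries no trace of which criteria drove it, or of how hard the local aid environment was.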
They quickly realised, however, that a two-digit number provided very little context for the target audiences. For example, if the Uganda program scored lower than the Sri Lanka program, did that mean it was actually performing less effectively, or was it struggling with a more difficult aid delivery environment? And how well did the self-rated scores equate to each other across country and regional boundaries?
There were also concerns that low scores could inadvertently discourage staff in those offices, reducing the future effectiveness of the local programs.
From 2005, VSO instead asked country managers to compile annual reports for their offices. These contained no quantitative information; they were written, narrative-style documents describing progress towards strategic objectives.
A third, compromise method, the 'Quality Framework' (QF), was developed from 2007.
"A key difficulty in developing the QF was tension between the desire to standardise and the need for indicators to be 'inspirational'," the researchers note.
The QF – which asked country directors to rate performance against a more carefully defined set of criteria, but also to provide qualitative notes on each point – delivered on both fronts. However, it has by no means been a perfect solution. It was further refined during the two years of the study, and Professor Smith expects it has continued to receive adjustments.
"We suggest it is the 'imperfect' nature of the QF that was pivotal to its continued existence as a compromising account," Professor Smith says.
"Changes privileging one mode of evaluation (such as quantitative and consistent scoring) were accompanied by changes that shifted the emphasis back to another mode of evaluation. This includes ensuring hard numbers were paired with narrative for context."
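A minimal sketch of that pairing, assuming nothing about VSO's actual forms (the criterion names, 0–4 scale and record layout below are all invented for illustration):

```python
# Illustrative only: a QF-style entry pairs a comparable numeric rating
# with a qualitative narrative, so neither mode of evaluation appears
# without the other. Field names and scale are assumptions.
from dataclasses import dataclass

@dataclass
class QFEntry:
    criterion: str
    rating: int      # standardised, comparable score (assumed 0-4 scale)
    narrative: str   # context that a bare number cannot carry

    def report_line(self):
        return f"{self.criterion}: {self.rating}/4 - {self.narrative}"

entry = QFEntry("partner relations", 3,
                "strong local partnerships despite a difficult aid environment")
print(entry.report_line())
```

The design choice the researchers describe is visible in the structure itself: a score can only be recorded alongside its narrative, keeping both modes of evaluation concurrently in view.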
This 'concurrent visibility' shows how organisations can bring together differing, competing ideologies and harness the resulting productive friction for the benefit of both points of view.