Why Standardized Grant Reporting Metrics Don’t Always Work…


One of the things I hated most when I worked in and with the non-profit sector was completing reports for funders. Some post-project reports were easy: you’d answer a few questions on impact, submit your expenses and report some metrics. We’d always include copies of evaluation reports, photos of the program in action, and shots from recognition events and political photo ops.

The Thing About Mandatory Questions…

The one thing that always got me was when funders would require us to report metrics into their statistical databases. One funder in particular kept implementing new **mandatory** statistical analysis programs to “standardize” reporting. What it created was a monster for both the organizations and the evaluator. Let me elaborate.

The program this funder adopted was a piece of statistical software some company had developed that promised to standardize and analyze impact based on one or two qualitative metrics. I will not call out the funder or the company, but the thing was crap.

It required participants to log in to a website that was anything but user friendly and answer questions about some very personal and intrusive topics, worded in a very complex way (supposedly to keep them unbiased).

Who We Were Serving…

Now this might not sound all that bad, but keep in mind these were employment and training interventions aimed at marginalized populations; those were the only intervention types this funder provided grants for. Over the years, we created programs for:

  • People with disabilities and other barriers
  • Newcomers
  • Individuals who had been out of work for a significant period of time
  • Low-income communities
  • New mothers with very young children at home

Many of these individuals lacked the time, ability, language skills, or self-confidence to answer the technical questions independently. The result was an antagonistic relationship with the teams administering the questions.

We tried several methods. We did phone interviews and offered support to help participants answer the questions. They would get frustrated with the wording because they could not understand what a question was asking, frustrated by us asking questions they considered very private, and they would not understand that it was the funder, NOT us, who needed this information.

We tried emailing the questions, only to have them ignored. We tried having employment counsellors administer the questions, and participants would get frustrated with the counsellor. In the end, we had to change the question language, or translate it for newcomers, so they had any hope of understanding what was being asked.

Creating Problems Where There Are, in Fact, Successes

The other problem was the “metrics” this funder deemed to constitute success for this type of project. I don’t remember them all, but two of the most important were:

a. Having full-time work

b. Having benefits

Together, these were deemed to reduce work precariousness.

While this sounds OK, anyone who has worked in the space knows that for folks with barriers, 35-40 hours a week is simply not possible for many individuals. Instead, we focus on getting them into the labour market, where they can work to their ability, and we offer the supports they need to just get going. I can honestly say that in 10+ years of working in these programs, full-time work was NOT the norm for most participants. Neither were benefits, because of the types of jobs they were getting.

The same often happens with newcomers. They just want to get working, so we get them jobs. Their first jobs in the country are not always the best jobs, but they want and need to work to support their families. Yes, they will eventually get better jobs, but first jobs tend to be part-time, sometimes several part-time jobs at once, and certainly not all of those jobs come with benefits.

I could go on for hours. This is a growing problem in grant reporting. I know standardizing data sets makes it much easier for a funder to compare apples to apples, but the problems we face include oranges, bananas, grapes and even a few watermelons.

Data sets can be standardized, but they need to allow different latitude for the different types of populations we serve. What is a “win” for you may not be a “win” for me. While we would love for everyone to be working full-time with benefits, that is not always the case. No employment program that is actually getting people jobs should show up in standardized statistical testing as if it were making people’s lives worse.

What To Do Instead

Never once did these programs ask individuals what they wanted. Data standardization could have been achieved by simply asking participants for a goal at the beginning of the program and then, post-intervention, asking them whether they met that goal. In this way, you measure “wins” against personal goals. This is just one example; there are so many ways we can standardize data sets. The true art is ensuring our measuring tools do not discriminate or create bias where there is none. Sometimes we just need to think creatively about how to work with different populations. So why does this happen? The short answer: because government wants a line in a report somewhere, one that demonstrates to the tax-paying public the effectiveness of the dollars spent and helps get them elected again.

Really, I am not a cynic. I have just seen this process so many times in the last 15 years that, in some ways, it is like an old record being played time and time again.

I am optimistic that the more feedback we provide to funders, the better the evaluation process will become. I did give them feedback, and I hope that you will too: when you see processes, questions and methods that do nothing to reflect the good work you and your teams are doing, call them out when these so-called specialist methods fail. Only through real-world experience can these issues be improved, so that the tools we use begin to mirror the real, lived experiences of the people we serve.


If you found value in this blog, we would love to hear from you.

Please feel free to contact hello@pharononprofit.com to give us feedback, ask questions or leave your comments.

You can also access more content on this and other issues facing nonprofits by joining our free or premium memberships at: https://pharononprofit.com/join-now/


