Funding the Invisible: Why Evaluation Systems Reproduce Inequity

International donors make resource redistribution decisions based on evidence. But what kind of evidence? In most cases: quantitative indicators, reach metrics, activity reports, and lists of beneficiaries. These are systems designed to capture what is easy to count—not what truly transforms.

This is not a technical problem. It is a political one. When evaluation frameworks fail to capture the transformations that feminist, trans, and social justice organizations produce within bodies, territories, relationships, and bonds, their value is systematically rendered invisible. And what is made invisible goes unfunded.

The limits of metrics in movement-building processes

Conventional monitoring and evaluation systems were designed under a linear logic: activities are executed, outputs are produced, and predefined results are achieved. This logic is not neutral: it privileges what can be verified in the short term, the individual over the collective, and the quantifiable over the relational.

But feminist, human rights, and social justice organizations do not operate under that logic. Their deepest transformations occur within dimensions that standard indicators were not designed to see: the trust woven between movements that historically operated in fragmentation; a shared political language emerging from months of collective analysis and encounter; the capacity to sustain one another in contexts of anti-rights offensives; and the care that makes it possible for people to continue their activism without burning out.

How do you measure solidarity? How do you capture in an indicator the moment an organization stops feeling alone?

An evaluation that sees what matters 

Addressing this gap requires evaluative frameworks that are systemic and non-linear—capable of recognizing that social change processes have their own timelines, that they do not develop in predictable sequences, and that their most significant effects often become visible long after a project has closed. This implies building tools that integrate qualitative and relational evidence, placing the perspectives of the evaluated organizations and communities at the center—not as informants, but as producers of knowledge about their own transformations.

In our research and evaluation practice, this translates into different questions than those that usually guide conventional systems: not just what activities were carried out, but what bonds were built and what weight they carry; not just how many people participated, but what changed in how they relate, how they decide, and how they care for one another.

In a project we conducted alongside Astraea Lesbian Foundation for Justice and Fòs Feminista—evaluating a transnational articulation process between trans and reproductive justice movements in Latin America and the Southern United States—this difference was fundamental. The most significant outcomes the process generated—trust between organizations that didn’t know each other, a shared political vocabulary, collective support networks in times of crisis—would not have appeared in any indicator table. They would have been, quite simply, invisible.

And that invisibility has direct consequences for funding.

If donors evaluated differently, they would fund differently 

When evaluation systems fail to capture the relational, the care, and the articulation between movements, donors make decisions based on an incomplete and biased image of reality. They fund what they can measure, which usually coincides with the most institutionalized organizations—those with a greater capacity to produce reports in the required formats and with agendas that translate more easily into standard results frameworks.

This is not about asking donors to fund based on faith. It is about building evaluation systems capable of producing the evidence that is currently missing—and using it to transform the decisions that define who exists and who does not on the global funding map.

If end-of-project reports could communicate the density of the bonds built, the depth of collective learning, and the capacity of movements to sustain one another in adverse contexts, the eligibility criteria and resource allocation models would change.

That is the argument: funding inequity is not reproduced solely by a lack of political will from donors. It is also reproduced because the instruments used to evaluate the value of organizations were not designed to see them.