AI Research Summary

Investors don't evaluate impact measurement frameworks—they evaluate whether the underlying data is defensible and verifiable. The asset class is maturing away from self-reported outputs (loans made, patients served) toward independently verified outcomes (financial trajectories improved, health conditions resolved), making third-party verification an increasingly expected prerequisite rather than a differentiator.

Article Snapshot

At-a-glance research context

Content Category: Impact Investing
Target Reader: Impact investors, fund managers
Key Data Point: Investors prioritize data credibility over framework compliance in impact deals
Time to Apply: 1–2 hours
Difficulty Level: Advanced

Here's the honest truth about impact measurement that most frameworks leave out:

Investors don't care which framework you use. They care whether you can defend your data.

I've watched founders spend months building elaborate impact measurement systems — beautifully documented, framework-compliant, methodologically rigorous — and still lose deals because the investor didn't trust the numbers. And I've watched founders with simpler, more limited data close deals because what they had was credible and they could explain exactly how it was collected.

The framework debate is real. But it's the second-order question. The first-order question is: do your impact numbers mean what you say they mean?


What IRIS+ Actually Is (and Isn't)

IRIS+ is the GIIN's managed catalog of generally accepted performance metrics [1] — a standardized vocabulary for describing impact. There are hundreds of metrics across dozens of sectors [1], covering everything from jobs created to acres of land converted to renewable energy to patients served.

What IRIS+ is: a common language that makes it possible to compare impact claims across different investors and different portfolio companies. When one investor says "we measured 12,000 direct jobs created using IRIS PI6359" and another investor uses the same metric, they're measuring the same thing in the same way. That comparability is genuinely valuable for portfolio reporting, industry benchmarking, and aggregate impact claims.

What IRIS+ isn't: a measurement system. It's a catalog of metrics, not a data collection methodology. Choosing the right IRIS+ metrics tells you what to measure; it doesn't tell you how to collect the data, what baseline to measure against, or how to verify the numbers.

Founders who say "we use IRIS+" have answered one question. The harder questions — methodology, baseline, verification — remain.
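The catalog-versus-system distinction can be made concrete in a few lines of code. This is a minimal sketch, not a real IRIS+ API: the class and field names are illustrative assumptions, and the metric ID simply reuses the jobs-created example cited above.

```python
from dataclasses import dataclass

@dataclass
class CatalogMetric:
    """What IRIS+ provides: a shared definition of WHAT to measure."""
    metric_id: str  # e.g. the jobs-created metric ID cited above
    name: str
    unit: str

@dataclass
class MeasurementPlan:
    """What IRIS+ does not provide: HOW the number is produced."""
    metric: CatalogMetric
    collection_method: str  # surveys, administrative records, etc.
    baseline: str           # the starting condition measured against
    verification: str       # self-reported, or a named third party

# Choosing the metric answers one question...
jobs = CatalogMetric("PI6359", "Direct jobs created", "jobs")

# ...the harder questions still have to be answered separately.
plan = MeasurementPlan(
    metric=jobs,
    collection_method="quarterly payroll records from portfolio companies",
    baseline="headcount at time of investment",
    verification="planned: annual third-party review",
)
```

Two founders can cite the same catalog metric while holding entirely different measurement plans — which is exactly why "we use IRIS+" answers only the vocabulary question.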


The Output vs. Outcome Problem

The most important measurement distinction in impact investing is the one between outputs and outcomes.

Outputs are what you produce: loans made, patients served, students trained, homes built. These are relatively easy to count, attributable to your direct activity, and usually what shows up in impact reports.

Outcomes are what changes: credit scores improved, health conditions resolved, employment secured, housing stability achieved. These are harder to measure, require longer time horizons, and involve attribution challenges (would the person have improved anyway, without your intervention?).

Most impact measurement is output measurement. This is understandable — outputs are easier and cheaper to track. But sophisticated investors increasingly want to see outcome data, because outputs don't prove impact. A lender who made 10,000 loans may or may not have improved borrower financial health. A healthcare company that enrolled 50,000 patients may or may not have improved health outcomes. The loan is the output. The improved financial trajectory is the outcome.

The founders who build genuine measurement sophistication work back from outcomes: what change in the world does the business exist to produce? Then: what output is causally connected to that outcome? Then: what's the minimum data collection that credibly demonstrates the connection?

Most impact measurement tracks what's easy to count, not what matters. Outputs tell investors what you did. Outcomes tell investors what changed because of it. The asset class is maturing toward demanding the second.
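The output/outcome split is easy to see in a toy calculation. This sketch assumes a lender with invented borrower numbers (illustrative data, not real figures): the output is a simple count, while the outcome requires a baseline and a follow-up measurement.

```python
# Toy borrower records (illustrative numbers, not real data):
# each tuple is (baseline_credit_score, score_12_months_later).
borrowers = [
    (580, 640), (610, 605), (555, 590), (620, 620), (590, 650),
]

# Output: what the lender produced. Easy to count.
loans_made = len(borrowers)

# Outcome: what changed for borrowers. Needs a baseline and a follow-up,
# and is what the outcome-first founder works backward from.
improved = sum(1 for before, after in borrowers if after > before)
improvement_rate = improved / loans_made

print(f"Output:  {loans_made} loans made")
print(f"Outcome: {improved}/{loans_made} borrowers improved "
      f"({improvement_rate:.0%})")
```

Note what the outcome calculation demands that the output count doesn't: a pre-intervention measurement for every borrower, and a follow-up window long enough for change to show up. That's the real cost of outcome measurement.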


Third-Party Verification: The Trust Signal

Third-party verification is the most underutilized trust signal in impact investing — and the one that's becoming more expected.

Self-reported impact data is not inherently untrustworthy, but it creates an obvious incentive problem: the people who benefit from high impact numbers are the people measuring them. Investors who've seen enough impact reports know that self-reported numbers tend to run optimistic.

Third-party verification breaks that incentive structure. Organizations like the Global Impact Investing Rating System (GIIRS), Sustainalytics, B Lab's B Impact Assessment, and sector-specific verification bodies provide independent assessment of impact claims. The methodology is examined. The data collection is reviewed. The numbers are checked.

For companies raising from institutional impact investors, third-party verification is increasingly a prerequisite rather than a differentiator. The GIIN's 2024 research documents that the most active institutional allocators now expect verification as part of standard due diligence [2].

For founders who don't yet have third-party verification: what matters is having a credible plan and a realistic timeline. "We don't have verification yet; here's the body we plan to work with, here's the methodology we'll use, here's the timeline" is a reasonable answer. "We track it ourselves and trust our numbers" is not.


What Actually Matters to Investors

After all the framework debates, the deals I've watched close most efficiently came down to four things investors were looking for:

1. Baseline data. Not projections — actual measurements of starting conditions. Where were your customers/beneficiaries before your intervention? What changed? Without a baseline, there's no way to attribute the change to you.

2. Methodological transparency. How was the data collected? Who collected it? What's the sampling methodology? What are the limitations? Investors who understand measurement appreciate honest methodological descriptions; investors who discover something hidden in the methodology notes kill deals faster than any metric can save them.

3. Causal logic. Why does your business model produce the impact you claim? The mechanism should be obvious: you make money when the impact happens, so the revenue model tracks the outcome model. If the causal logic requires a long narrative bridge, the impact may not be structural.

4. Improvement trajectory. Are your impact numbers getting better over time? Are you learning from the data and adjusting the model? Investors backing early-stage companies don't expect perfect measurement. They expect honest measurement that demonstrates the company is getting better at delivering what it claims.
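The four checks above can be sketched as a simple due-diligence screen. This is an illustrative assumption, not a standard schema: the field names and the example claim are hypothetical.

```python
def diligence_gaps(claim: dict) -> list[str]:
    """Return which of the four credibility checks an impact claim fails."""
    gaps = []
    if not claim.get("baseline"):
        gaps.append("baseline: no pre-intervention measurement")
    if not claim.get("methodology"):
        gaps.append("methodology: collection process not documented")
    if not claim.get("causal_logic"):
        gaps.append("causal logic: business model not tied to the outcome")
    if not claim.get("trend"):
        gaps.append("trajectory: no evidence the numbers improve over time")
    return gaps

# A hypothetical claim with one of the four elements missing:
claim = {
    "baseline": "borrower credit scores at loan origination",
    "methodology": "quarterly bureau pulls, full portfolio, limits noted",
    "causal_logic": "",  # missing: why does the model produce the impact?
    "trend": "improvement rate up three quarters running",
}
print(diligence_gaps(claim))
```

A founder can run the same screen on their own data room before an investor does — the point of the sketch is that each check is binary and inspectable, not a matter of framework choice.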

The framework you use matters less than whether you can defend your data. Choose the metrics that connect directly to your theory of change, measure them with an honest methodology, and be transparent about what you know and don't know. That credibility is worth more than any framework compliance.



The Bottom Line

The IRIS+ vs. custom framework debate is real but secondary. What sophisticated impact investors actually want: baseline data that precedes the intervention, methodological transparency about how numbers are collected, causal logic connecting the business model to the impact claim, and an improvement trajectory that shows the company is learning. Third-party verification is increasingly expected from institutional allocators [2]. Self-reported data with honest methodology is better than framework-compliant data with unexplained numbers. Build measurement infrastructure that you can defend, not just display.

FAQ

What is IRIS+ and how does it work in impact investing?

IRIS+ is the Global Impact Investing Network's standardized catalog of performance metrics [1] that creates a common language for measuring impact across different investors and portfolio companies. It's not a measurement system itself—it's a metric vocabulary that tells you what to measure, but not how to collect data, establish baselines, or verify numbers. When investors reference specific IRIS+ metrics like PI6359 for jobs created, they're using a shared definition, which enables portfolio comparison and industry benchmarking.

Why does impact measurement matter for founders building impact businesses?

Credible impact measurement is what separates fundable companies from unfundable ones—investors don't care which framework you use; they care whether you can defend your data. For entrepreneurs raising capital, the ability to prove your impact numbers with transparent methodology and baseline data is often the difference between closing a deal and losing it. Without measurement credibility, you can't scale beyond your own capital.

How do you measure impact outcomes versus outputs in your business?

Outputs are what you produce directly—loans made, patients served, students trained—and are easy to count but don't prove impact. Outcomes are what actually changes—credit scores improved, health conditions resolved, employment secured—and require longer time horizons and stronger methodology to verify. Sophisticated investors increasingly demand outcome data because outputs alone don't demonstrate that your intervention caused the change; working backward from the outcome you want to create ensures your measurement tracks what actually matters.

How much of a financial difference does impact measurement credibility make when raising capital?

The GIIN's 2024 research shows that institutional impact investors now expect third-party verification as standard due diligence [2], making it a prerequisite rather than a differentiator for institutional funding. Founders with credible, verifiable impact data close deals; those relying on self-reported numbers without transparent methodology routinely lose them, regardless of the actual impact being produced.

What are the risks of using only self-reported impact data without third-party verification?

Self-reported impact data creates an obvious incentive problem—the people benefiting from high impact numbers are the ones measuring them—so investors discount its credibility. Institutional allocators now expect independent verification from bodies like GIIRS, Sustainalytics, or B Lab [2], and founders without a clear plan for third-party verification face significantly more friction with sophisticated investors. The reputational risk of data that doesn't hold up to scrutiny can permanently damage investor relationships.

How do you get started building credible impact measurement for your startup?

Start by establishing baseline data—actual measurements of where your customers or beneficiaries were before your intervention—then build transparent methodology documentation explaining exactly how data is collected, who collects it, and what the limitations are. Create a clear causal logic showing why your business model produces the impact you claim, and develop a realistic timeline and plan for third-party verification from a recognized body like GIIRS or B Lab. Methodological honesty and a credible verification plan matter more than perfect data at the beginning.

What percentage of impact investors now require third-party verification as standard practice?

The GIIN's 2024 research documents that the most active institutional allocators now expect third-party verification as part of standard due diligence [2], marking it as an increasingly mandatory requirement rather than an optional differentiator. This shift reflects the maturation of the impact investing asset class toward demanding independent validation of impact claims rather than accepting self-reported metrics.


References

  1. Global Impact Investing Network. (2024). IRIS+ System: Generally Accepted Performance Metrics. GIIN
  2. Global Impact Investing Network. (2024). Sizing the Impact Investing Market 2024. GIIN