Overview

This report provides an in-depth exploration of Web3 grant programs, analyzing their best practices, effectiveness, development, and overall impact. Our research examines methods for measuring program success and develops a maturity framework for grant programs in this space. Additionally, we outline key impact metrics based on insights from a range of Web3 grant programs, and we identify opportunities to improve reputation systems within these programs.

To inform our analysis, we conducted interviews with grant program operators, engaged with forums across the ecosystem, and revisited key findings from the Grant Innovation Lab’s State of Web3 Grants report, along with other retrospective studies. Drawing from this data, we developed several tools: a framework to evaluate program effectiveness, a maturity model to assess different stages of grant programs, and a set of metrics to gauge impact. We advise operators to use these tools in combination for a more comprehensive assessment of program outcomes.

It’s important to acknowledge that grant programs in the Web3 space are still experimental by nature. The entire industry is in a state of evolution, and grant programs have had to adapt accordingly. Even the most advanced programs, often in their 5th or 6th iteration, are still relatively young. Therefore, we don’t recommend directly comparing them with long-established grant initiatives and programs outside of Web3.

Finally, this study is empirical, shaped by the realities of a fast-evolving industry, but it is also data-driven, incorporating our deep dives and analysis from a wide range of grant programs. Our hope is that the insights provided here will be valuable to the wider ecosystem, supporting the continued development of effective and impactful grant programs in Web3.

Team

ZER8: Solo-managed grant programs for ThankARB Milestone 1 and distributed over $500k. Former eligibility team lead at Gitcoin DAO, where he reviewed over 3,000 grant applications and helped save over $1M. He has spent the last three years full time in Web3, managing grant programs, launching QF rounds, reviewing grants (>3,000), and studying friction points and inefficiencies.

Twitter: @zer8_future | LinkedIn: Popescu Razvan Matei | Email: [email protected]

Mashal Waqar is the Head of Marketing at Octant. She is also the co-author of the State of Web3 Grants report and the Retroactive Grants report.

Her governance experience includes work with the Octant Community Fund, Gitcoin Grants Council, RARI Foundation, and ThankARB. Her roles include heading partnerships at Bankless Publishing, operations at VenturePunk, and research for seedclub, Protein, Gitcoin, and Ethereum Foundation’s Summer of Protocols. To date, she has collectively reviewed 130+ grants across various ecosystems in web3.

Mashal previously co-founded a global media company (The Tempest), a revenue-focused accelerator program for early-stage founders, a femtech startup, and a community-building consultancy. She holds a B.S. in Computing Security with a minor in International Business from Rochester Institute of Technology (RIT). She is a Forbes Middle East 30 Under 30 honoree and winner of the 19th WIL Economic Forum Young Leader of the Year award.

LinkedIn: Mashal Waqar (@mashal) | Twitter: @arlery | Email: [email protected]

This work was made possible by the Cartographers Syndicate via the RFP program funded by ThankARB ML1B.

Report Structure

1. Challenges & Best Practices
2. Program Effectiveness
3. Maturity Framework
4. Impact Metrics
5. Integration, Refinement and Implementation
6. Acknowledgements

  1. Challenges & Best Practices

Challenges

Depending on the type of grant program, different challenges arise.

Direct grant programs can be vulnerable to the principal-agent problem, while for grants issued through community voting or quadratic funding and voting mechanisms, a key challenge is tackling sybil attacks.
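
To make the sybil concern concrete, here is a minimal, hypothetical sketch of the standard quadratic funding matching weight (the square of the sum of the square roots of individual contributions), showing how splitting one donation across many fake accounts inflates a project's matching weight. The numbers are illustrative only and are not drawn from any specific program.

```python
import math

def qf_match_weight(contributions):
    """Quadratic funding weight: (sum of sqrt(contribution))^2.
    A matching pool is typically split across projects in proportion to
    these weights (implementations differ in normalization details)."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# One honest donor gives $100 to project A.
honest = qf_match_weight([100])       # (sqrt(100))^2 = 100

# A sybil attacker gives the same $100 to project B,
# split across 100 fake accounts of $1 each.
sybil = qf_match_weight([1] * 100)    # (100 * sqrt(1))^2 = 10,000

print(honest, sybil)  # 100.0 vs 10000.0: a 100x larger matching weight
```

This is why sybil resistance (identity checks, donation caps, and similar safeguards) is central to community-voting and QF-based programs.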

For traditional grants programs, here are a few challenges as shared in the State of Web3 Grants report:

  • Lack of reporting from grantees
  • Sourcing quality applications
  • Measuring the impact of grants. Several factors can contribute to this, such as:
    • Inability to measure and track grantee progress
    • Lack of reporting from grantees
    • Inability to measure grantee sustainability
    • For programs that don’t define categories, comparing grantees can be difficult if their projects are incomparable in nature
  • Grant farming
  • Crafting good RFPs, especially technical ones, is tough. Proper assessment takes resources.
  • Manual due diligence takes time and resources. It is even tougher to do at scale.
  • Coordination between programs to sift out grant farmers
  • Grantees wanting free money without any expectations for milestones or deliverables
  • Finding quality applicants for programs with large applicant volumes is tough.
  • Decentralizing to a community is tricky if the decision-making group does not have the relevant expertise. Another challenge specific to grants programs run by DAOs is that it’s tough to coordinate with a larger DAO.
  • Staying up to date with a large portfolio of grantees is also tough.

Solutions and Best Practices

  • Consistent Reporting - monthly reporting at a regular, consistent cadence is an excellent way to be transparent and to communicate updates to the larger community. Aave Grants DAO (AGD) is a good example: they share overall metrics such as the following (a sketch of how an operator might compute these appears after this list):
    • Total Grant Applications Received
    • Total Applications Approved
    • Acceptance Rate
    • Percentage of projects that make it to the written stage and video call stage
    • Total Amount Disbursed
    • Percentage of Complete Grant Payments
    • Percentage of In-progress/Incomplete Grant Payments
    • Status of all grants (percentage of complete, in-progress, and inactive or sunset)
    • Amount and Quantity Approved by Category
    • A summary of the grantees each month, along with how much they were awarded, and a brief description of their grant.
  • Transparency in the evaluation process and criteria helps limit duplicate proposals and helps applicants know what they need to improve in order to receive a grant in the first place.
  • Having different grant formats and flavors for different types of grants is an effective way to deploy capital and to run grant experiments.
  • Being adaptable and flexible with changing needs of the program and of the industry is a good practice that prevents the program from being outdated and ineffective.
  • A way to improve the quality of applications is to clearly outline the application process, making it easier for grantees to apply. An example of this is the Uniswap Foundation, whose website includes a checklist for potential applicants as well as tips and considerations to help applicants strengthen their submissions.
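
As a rough illustration of how an operator might compute AGD-style reporting metrics like those listed above, here is a minimal sketch; the record fields, values, and category names are hypothetical and not taken from AGD's actual data.

```python
from collections import Counter

# Hypothetical grant records; field names and values are illustrative only.
grants = [
    {"approved": True,  "amount": 25_000, "category": "Tooling",   "status": "complete"},
    {"approved": True,  "amount": 10_000, "category": "Education", "status": "in-progress"},
    {"approved": False, "amount": 0,      "category": "Tooling",   "status": "rejected"},
]

total_received = len(grants)
approved = [g for g in grants if g["approved"]]
acceptance_rate = len(approved) / total_received if total_received else 0.0
total_disbursed = sum(g["amount"] for g in approved)
status_breakdown = Counter(g["status"] for g in approved)

amount_by_category = Counter()
for g in approved:
    amount_by_category[g["category"]] += g["amount"]

print(f"Applications received: {total_received}")
print(f"Acceptance rate: {acceptance_rate:.0%}")
print(f"Total disbursed: {total_disbursed}")
print(f"Status breakdown: {dict(status_breakdown)}")
print(f"Amount by category: {dict(amount_by_category)}")
```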

Learnings from Grants Programs, Operators, and L2s

The following insights were derived from a variety of ecosystems by combing through governance forum retrospectives, as well as from direct interviews with several operators, grant tooling teams, and leadership from these projects.

Learnings from AGD (Aave Grants DAO) Grants Program

Program Overview

Aave Grants DAO (AGD) was a community-led grants program that fostered community growth and innovation in the Aave ecosystem by serving as a gateway for teams building on top of Aave Pools or with GHO. It was started in 2021 and announced it was winding down in 2024.

Insights from the AGD forum

  • Establishing a legal entity allows a DAO to begin operating more independently and with more protection and certainty for contributors by providing a legal structure for members and the ability to operate with traditional industries.
  • “Introducing a rotating committee of community experts to serve short (3-6 months) time frames as grant reviewers allows for further involvement of the community while also bringing expertise from different community members.”
  • “A well functioning grants team only works well if the team is nimble, can move fast, and act independently”.
  • “Term elections for choosing capital allocators from the community is a well-established norm in various reputed ecosystems such as ENS DAO, Gitcoin etc. By implementing a more rigorous and clearly defined set of criteria for selection, an election process would offer a fair and unbiased chance for community members, including those who are new contributors and proposers, to join the AGD team and contribute meaningfully to the DAO”.
  • “A transparent process empowers the community to keep on oversight on what proposals are getting accepted, rejected, funded and get a deeper insight into the review parameters and process…A transparent process will empower the community to ask the right questions and allow for constructive feedback, thus improving further iterations of AGD. Additional deliberation and community participation is better than having none.”
  • Grantee growth parameters should be objective and tied to tangible value creation to better evaluate the performance of the grants program.

Insights from the AGD Operations Team

  • The ecosystem context - stage of the protocol, contributor maturity, developer interest/awareness - is a really important factor in structuring and setting up a grants program. Giving one-size-fits-all advice for grants programs is difficult. Grants programs should start with their mission/goals/values and then work backwards to design and implement a program that best serves them.
  • Rotating committee: This is also something AGD explored. Reviewers being trusted and recognized by the community is crucial - unless the program is set up with direct community involvement, it is pretty much necessary for the community to trust the grants team.
  • A well-functioning grants team…: Being able to grow and evolve as a grants program alongside the respective ecosystem is key. Grants programs should complement the other stakeholders, not be a totally independent island.
  • Elections: This was actually brought up to the community and was shut down because it is hard not to turn elections into a popularity contest. It is tough, though: the surface-level appeal is clear, but there is skepticism about whether elections would put the best people in the role. In particular, keeping reviewers independent seems difficult if elections are heavily leaned on, since reelection can become a (or even the main) priority for reviewers.
  • “Additional deliberation and community participation is better than none”: The program mission, and even the context of each individual grant, has a huge influence on this. If the goal is contributor growth and the ecosystem is relatively unknown, then taking a long time to review and engage the community on each grant does not scale, and the process does more harm than good. Additionally, there may not be much of a community to engage early on.

Learnings from ThankArbitrum (Arbitrum DAO) Grants Program

Program Overview

Thank Arbitrum is the first pluralist grants program under Arbitrum and has committed to allocating $3.2M in ARB through 10 grant programs to top crypto builders and contributors. For this study, we researched ThankARB’s Milestone 1 program. ThankArbitrum can be seen as a multi-dimensional grant framework and a learning machine that evolves based on the inputs, learnings, and outputs of each iteration.

Insights from the Thank ARB Milestone 1 report in the Arbitrum Forum:

  • Legitimacy
    • The Thank ARB strategic priorities identified during #GovMonth (an initiative to reward people who want to help shape governance) did not receive support from top delegates, because the delegates were not included in the process in a way that was seen as legitimate.
    • Without legitimate priorities and processes it is hard for the DAO to enable more experimentation.
  • Communication
    • Communicating the results of a multi-dimensional grant framework is a complex task. Segmenting by persona (delegate/builder/grantee/etc.) could be a solution.
    • DAO members need communications to be simplified and aggregated. They’re unsure of where to go for maintaining context, knowing about responsibilities and potential roles as a DAO member, and learning about other ways to become a good Arbitrum citizen.
  • Organization
    • Potential contributors need a pathway to get a small grant for narrowly scoped work, which is then assessed to give the contributor clear next steps and to let the DAO double down on high performers.
    • The community is willing and capable; however, they’re unsure of what to do next.
    • Evaluation and organization of 13 grant programs makes for extreme complexity. A potential route to tackling this is with focused workstreams.
  • Experimentation
    • Quadratic funding is likely not a good meta-allocation mechanism. Exploring options with Quadratic Voting and other custom algorithms could yield better alternatives.
    • Successful tools usually solve more than one problem. For example, Hats Protocol provides many solutions, from workstream accountability to dynamic councils, and Hedgey streams are easy to use and can work for grants as well as salaries. When evaluating solutions, it’s useful to understand the context and to think through other ways a tool can be used in the ecosystem.
    • The DAO needs more indexed data available for the community to interpret. Overall, the DAO does not have any agreed-upon metrics to give definitive answers about what success looks like.
  • Funding Success
    • There have been many learnings with the foundation through compliance, processes, and systems implemented and funded within the ecosystem.
    • As decisions are made, criteria are observed and documented to open up the process in the future. Confirming community-led review capabilities is a priority before removing decision-making power in the DAO. This applies at the framework level, as multiple programs are attempting to draft and use criteria-based approaches.
    • The Firestarters program served a clear purpose and was the most successful program (as perceived by the DAO) because it drove tangible results for the DAO in a short time.
    • Arbitrum partnered with legacy organizations such as the American Cancer Society, BlackRock, and Franklin Templeton, signifying important achievements.
    • Data-driven funding will be critical in Milestone 2 (STIP support, OSO, etc.).

Learnings from ThankARB ML1 Programs

Thank Arbitrum is a plural grant framework involving multiple separate grant programs, which means each program reports separately.

List of programs with amounts funded:

Program Name | Amount
Thank ARB | 350,000 ARB
Firestarters | 300,000 ARB
MEV Research | 330,000 ARB
Arbitrum’s “Biggest Small Grants Yet” | 90,000 ARB
Arbitrum Co.Lab by RnDAO | 156,000 ARB
Open Data Community (ODC) Intelligence | 165,000 ARB
Grantships by DAO Masons | 154,000 ARB
Plurality Gov Boost | 552,500 ARB
Questbook Support Rounds on Gitcoin | 100,000 ARB
Arbitrum Citizen Retro Funding | 100,000 ARB
Allo on Arbitrum Hackathon | 122,500 ARB

Below we present the programs that participated, along with the reporting and learnings from each:

1. Arbitrum Matching Fest

Program Description: This program added extra funds to the matching pool for quadratic funding rounds on Arbitrum. Top programs running on Arbitrum were selected to receive additional funding.

Why the Program Was Funded

This program was designed to:

  • Make an agreement with Gitcoin to prioritize deployment of the Allo protocol on Arbitrum and integration into the Grants Stack interface.
  • Attract new audiences to Arbitrum via Gitcoin, who may find a home on Arbitrum or generate transaction fees.
  • Market Arbitrum in a positive way to the Web3 community.

Takeaways:

  • Partnering with a legacy health organization (American Cancer Society) turned out to be a great move, bringing in a lot of positive attention and significant marketing benefits.
  • Arbitrum could potentially handle all the main rounds on its own, given that it successfully hosted nearly half the rounds on Gitcoin.
  • Many of the organizations involved, like Metagov and TEC, had little to no prior experience with Arbitrum, yet they were successful in bringing their donor bases to Arbitrum for these rounds.

2. Firestarters

Program Description: The Firestarter program was designed to address specific and immediate needs within the DAO. For problems identified by delegates and Plurality Labs (acquired by Thrive Protocol), grants were given to do the initial cat-herding and research required to kickstart action.

Expected Outcomes

  • Quickly and effectively address urgent needs resulting in high-quality resolutions.
  • Demonstrate the ability to create fast and fair outcomes that benefit the ecosystem.
  • Frameworks for service providers to build a scalable foundation for Arbitrum growth.
  • High-quality resolution of key, immediate needs; fairer outcomes for the ecosystem; and fairer frameworks for service providers, delivered with speed and fairness.

Takeaways:

  • This program was well received and had a significant positive impact because it directly addressed needs the community recognized.
  • Firestarters need a clear next step rather than expecting indefinite continuation. The need for on-chain tools to assign roles for igniting and holding firestarters accountable stood out.
  • While giving someone like Disruption Joe decision-making power worked, it’s worth exploring other models that would function well without relying on a single individual.

3. Thank Arbitrum

Program Description: This program introduced a novel way for the community to connect with the DAO. It provided a foundation for the community to continually sense and respond to the fast-paced crypto environment. Its aim was to provide a place for DAO members to learn about what is happening, to voice their opinions, and to learn about opportunities to offer their skills to Arbitrum.

Expected Outcomes

  • Improve how grant funding is distributed
  • Increase delegate and community confidence that funds are allocated as intended based on a collaborative process.
  • Assure the community that we can manage the grant process inclusively and efficiently.

Takeaways:

  • Despite efforts to remove sybils, they struggled with engagement from participants who weren’t genuinely invested in the DAO. Future efforts should focus more on engaging delegates and valuable contributors.
  • While significant data was gathered on how token holders feel, it didn’t fully reflect the disengagement among delegates.
  • The need to develop better interfaces tailored to DAO members’ needs.
  • The initial attempt at a tARB reputation system didn’t work as planned. Discussions with Hats Protocol revealed that dynamic councils based on Thank ARB contributions could enhance decision-making, particularly for tasks requiring a high level of effort, similar to what’s needed in Optimism’s appeal system.

4. MEV (Miner Extractable Value) Research

Program Description: This grant was awarded to establish the Plural Research Society and build the first forum and plural voting tools needed to support it. As part of the grant, an experimental plural research workshop would be hosted where the participants – researchers from across the MEV space – would convene to present their research proposals, discuss and debate using a variety of pluralistic tools, then ultimately vote to decide how to allocate 100,000 ARB in grant funding.

Why the Program Was Funded

This program was designed to:

  • Create a scalable platform to build upon a successful proof of concept built for Zuzalu
  • Leverage the expertise of highly respected advisors to conduct novel research
  • Discover if a better forum can drastically improve the quality of conversation
  • Use the forum under-the-hood design to allocate funding on a niche, high-expertise topic
  • Ensure a platform for a broad selection of researcher opinions in the gas fee optimization and MEV space, which has incentives to platform only financially beneficial views

Expected Outcomes

  • A new forum dedicated to high expertise discussions, pushing the boundaries of decentralized governance and credibility.
  • The forum will be a place where qualified voices and thought leaders stand out and the most important discussion gets attention in a decentralized way.
  • Radically amplify new ideas and technology.

Takeaways

  • Having a dedicated, high-quality advisor who is truly invested in the project made a big difference in attracting top talent and ensuring adherence to specs.
  • Some viewed the program as merely a “forum” or “conference,” but it’s actually a critical experiment aimed at improving our understanding of gas fee optimization and information sharing in the ecosystem.
  • Due to delays in compliance, assembling the right team, and coordinating schedules for researchers, this program has taken longer than others, with more insights expected as it progresses.

5. Arbitrum Co.Lab by RnDAO

Program Description

RnDAO designed a programme to grow the Arbitrum ecosystem through a research fellowship focused on governance and operations challenges, with the goal of incubating sustainable Collaboration Tech ventures that build on Arbitrum. The ventures operate as a “business cluster” creating network effects to attract others through integrations, talent, investor networks, etc.

Why the Program Was Funded

This program was funded to:

  • Explore proven methods of deliberation in a Web3 context, such as citizens’ assemblies or Sociocracy 3.0
  • Explore AI solutions to automate components of DAO contributor experience and/or governance design
  • Provide a support network for researchers along with a pathway to future funding.

Takeaways:

  • Ways of tracking applicants who aren’t selected but remain enthusiastic about Arbitrum need to be developed and explored.
  • A clearer definition of the research topics for participants would have ensured more targeted outcomes.
  • The program is still ongoing, so learnings are still being observed.

6. Plurality Gov Boost

Program Description

This program included direct grants funded to increase the Arbitrum DAO’s ability to effectively govern its resources.

Why the Program Was Funded

This program was designed to:

  • Utilize PL’s delegated decision-making freedom to use different decision-making modalities to fund needed work.
  • Fund research and milestone-based work using milestone-based payouts.
  • Anticipate future needs of the DAO and build processes, procedures, and programs which may have substantial impact on the DAO but lack other ways of being funded.

Takeaways:

  • The program highlighted the need for clearer decision-making pathways and transparency, especially for specialized requirements.
  • Some decisions were made based on expert input that wasn’t widely understood by others. For example, Helika Gaming’s data was crucial for analyzing acquisition costs across different verticals, which will save the DAO significant funds over time, even if these choices weren’t always clear to everyone.

7. Questbook Rounds on Gitcoin

Program Description

This program started with two unique governance experiments. A “domain allocation” governance experiment allowed the community to direct funds to four matching pools based on Questbook program domains. Then, four quadratic funding rounds assisted the Questbook domain allocators in sourcing grants.

Why the Program Was Funded

This program was designed to:

  • Support Questbook program success, based on the programs’ own answers to why they would not succeed without this support.
  • Show how pluralist programs can be complementary to each other.

Takeaways:

  • The disparity between votes in the domain round and funding round, especially in categories like Gaming, suggests that Quadratic Funding might not be the best method for all cases.
  • The true success of this program will be evident in how many projects receive additional funding after it ends. This will be most telling if the initial allocation isn’t fully used.
  • Offering a small ARB incentive for donations over $10 could nearly double the average donation amount on the Thank ARB platform.

8. Arbitrum Citizens Retro Funding

Program Description

The goal of this program was to distribute 100k $ARB to the best Arbinauts and citizens who have proactively worked on and truly impacted the Arbitrum DAO since its launch. It aimed to reward work that helped kickstart the DAO and/or contributed directly to Arbitrum’s strategic goals.

Why the Program was Funded

This program was designed to:

  • Reward DAO contributors who had stepped up to contribute without reward since the DAO started
  • Encourage future participation by setting a precedent for retroactive rewards
  • Experiment with using Quadratic Funding retroactively
  • Market the Arbitrum community in a highly visible way

Takeaways:

  • By focusing only on individuals, certain organizations that have made significant contributions were not able to participate. Involving delegates in creating eligibility criteria could help address this.
  • A dynamic pot size might be worth considering in future rounds to ensure higher participation.

9. Allo on Arbitrum Hackathon

Program Description

This program was all about expanding what Gitcoin’s Allo protocol could do on the Arbitrum One Network. New funding strategies, interfaces, and curation modules were available for all 490 protocols on Arbitrum to use freely to fund what matters.

Why the Program Was Funded

This program was designed to:

  • Enable new on-chain allocation strategies and interfaces which can be used by all 500+ protocols building on Arbitrum
  • Support more projects to exist on the Allo protocol project registry to create consistent open data standards for future community review and evaluation
  • Discover new grant evaluation interfaces and mechanisms
  • Support more decision making modalities to be available for use and testing in Thank ARB governance and potentially for Arbitrum DAO governance if successful.

Takeaways:

  • It’s crucial to clearly communicate the judging mechanism from the start. Even if the judging happens at the end, sticking to announced prize amounts is key to maintaining trust.
  • A lack of clarity around eligibility led to some undesirable situations, which were fortunately resolved through direct conversations with the teams involved.

Learnings from ThankARB leadership

Web3 grants are decentralized resource allocation mechanisms, broader than traditional grants, and designed to address ecosystem needs. These grants foster innovation, infrastructure, and ecosystem growth through decentralized decision-making.

ThankArb functions as a learning machine, iterating and improving with each funding round. Its focus on adaptability allows it to identify and expand successful mechanisms, decentralizing decision-making by empowering competent individuals.

The future of Web3 grants also involves refining decentralized funding mechanisms such as quadratic funding (QF), possibly paired with other on-chain allocation mechanisms (e.g., conviction voting), but even more important is applying tailored solutions that address crucial ecosystem needs (e.g., direct funding, councils). Experimentation with off-chain and on-chain resource allocation remains critical.

Notable programs include ThankArb and ThankApe (ApeCoin community), both of which emphasize strong community-building before allocating grants. Iterative learning and adaptability make these programs stand out.

A natural comparison would be RPGF vs. ThankARB, as they both serve as the main allocators for their L2 ecosystems (Optimism and Arbitrum). In reality, though, ThankArb and Optimism RPGF1 serve distinct roles, with ThankArb focusing on decentralized allocation and RPGF1 on centralized decision-making. Success in both can be measured through ecosystem impact and milestone achievement.

Key best practices include:

  • Ecosystem Needs: Tailoring grants to the ecosystem’s specific requirements.
  • Multiple Pathways: Offering diverse funding mechanisms for different projects, protocols, etc
  • Iterative Learning: Running experiments to refine grant strategies and address evolving needs.

Grant program effectiveness can be assessed through metrics such as milestone achievements, impact on ecosystem needs, and efficiency in resource distribution. Allocators must play a proactive role in managing and growing projects.

While Web3 aims for decentralization, most grant programs are still centralized in decision-making. Programs like ThankArb are pushing the boundaries by experimenting with on-chain mechanisms to distribute decision-making power more effectively.

Learnings from Karma GAP

About GAP:

The Grantee Accountability Protocol (GAP) is a tool designed to address grants funding challenges by aiding grantees in building their reputation, assisting communities in maintaining grantee accountability, and enabling third parties to develop applications using this protocol.

Takeaways

Effective grant programs should have clear goals, transparent evaluation rubrics, and involve the community in decision-making. A maturity index can rate programs based on factors like financial sustainability and community support. Feedback and marketing assistance are also important for successful grant programs.

  • Articulating what an operator wants out of the grants program is crucial. This helps set the tone and guide the subsequent process of applications, grantee selection, and evaluation.
  • Having a rubric to evaluate applications against is also especially useful. It allows applicants to understand the reasons behind decisions.
  • Providing some form of feedback, even if generalized, is helpful to applicants and is an extra step that can improve the process.
  • Post-funding support is an area most programs fall short in. Following up with grantees and collaborating or sharing feedback where useful, and amplifying and providing marketing support to grantees can go a long way in the grantee journey.

Learnings from Optimism RPGF 3 and 4

About Retroactive Public Goods Funding Rounds:

RetroPGF (Retroactive Public Goods Funding) is a mechanism that rewards public goods based on proven impact.

Retro Funding is being run as an ongoing experiment where insights and learnings from each round feed the next iteration of the program and inform the design of future experiments.

Takeaways

  • “Having standardized, verifiable, and comparable impact metrics is crucial to objectively measure the impact of applications.
  • Having stronger eligibility criteria results in less spammy applications.
  • Applications for funding can be shortened, as deadlines are the real driver of submissions
  • Defined grants rounds set the stage for a more focused and impactful approach to incentivizing contributions
  • The broad round scope overwhelmed badgeholders and applicants.
  • The absence of standardized, verifiable, and comparable impact metrics, and the reliance on individual subjective review criteria, made it difficult to objectively measure the impact of applications.
  • The self-selection of applications to review by badgeholders did not ensure a fair review by a minimum number of badgeholders of each application.
  • The sheer volume of applications, combined with weak eligibility criteria, complicated the voting process for badgeholders. Lists were not effective in scaling the ability of badgeholders to accurately vote on more applications.
  • These learnings were derived from 300+ crowdsourced pieces of feedback, collected via a survey among badgeholders, the RetroPGF 3 feedback Gov Forum post, as well as the badgeholder Discord/Telegram channel and will inform the next iteration of Retro Funding.
  • Below we document these learnings in detail as part of a gradual process to open source Optimism governance design. This post is non-exhaustive and aims to focus on the core learnings and most popular requests.”.

Learnings from the RPGF Team

Grants in Web3 are designed to support future innovations and projects by providing funding based on anticipated impact. Programs like RetroPGF (RPGF) offer rewards for past achievements, contrasting with traditional grants that focus on future outcomes.

Takeaways

  • Optimism’s approach includes rewarding ecosystem impact through grants, with a commitment from the Collective to recognize and compensate valuable contributions. Optimism aims to transition towards a fully on-chain grant system as the social layer and metrics become more integrated with proper attestation mechanisms.
  • The future of Web3 grants involves transitioning to more on-chain systems, improving transparency, and creating more sustainable models. This includes developing grant systems that can effectively measure and reward impact over time, rather than just focusing on short-term growth.
  • Top grant programs, such as RPGF and the Grants Council, vary in their approach. RPGF has a larger budget and compensates for past impact, while the Grants Council emphasizes a high standard of professionalism. Different programs may have different strengths depending on their focus and metrics.
  • The effectiveness of a grant program depends on its goals and metrics. RPGF may be more effective for compensating past achievements and ensuring sustainability, while programs like the Grants Council focus on future-oriented projects and maintaining high standards of professionalism.
  • Best practices include maintaining clarity and transparency throughout the grant process, avoiding rule changes mid-term, and ensuring fairness through multiple scorers and feedback steps. Establishing clear objectives and metrics for evaluation can enhance the quality of grants and outcomes.
  • Effectiveness is evaluated through metrics assigned to each grant mission, ensuring that grantees meet specific criteria. This includes tracking the long-term impact and sustainability of projects, rather than just short-term growth. For educational grants, follow-up on the career progression of educated individuals is also important.
  • Challenges in decentralization include ensuring fairness, transparency, and effective decision-making processes. There is a need for improved coordination and understanding among ecosystem participants to address issues such as misaligned incentives and ensuring that grants genuinely solve real-world problems.

Learnings from Badgeholders

RetroPGF through Optimism is considered one of the best grant programs due to its low bureaucratic overhead and flexibility for grantees. Retro Quadratic Funding (QF) rounds also stand out for their ability to involve the community and simultaneously provide financial support and marketing exposure to projects that have already made significant contributions.

RetroPGF is an interesting model because its post-completion funding model ensures that projects have already delivered value before receiving grants. This reduces the pressure to meet predefined milestones and offers projects more freedom to adapt to the fast-moving industry landscape. The Optimism program’s simplicity, large funding rounds, and low overhead make it highly efficient for both projects and the ecosystem.

Learnings from Gitcoin Grants Rounds

Grants in Web3 are funds provided without an expectation of return or equity stake. Their historical purpose is to support public goods, infrastructure, or research that can advance human progress. In the crypto space, grants are often funded through token generation events (TGEs) and are meant to drive ecosystem growth by funding various initiatives.

The purpose of grants is to ignite growth within ecosystems by using incentives. They aim to foster innovation and development in the Web3 space, although many grants have become part of marketing strategies rather than solely focusing on growth.

Gitcoin’s grants are funded by the Ethereum community and the wider Web3 community, rather than through Gitcoin’s own treasury. This is different from many crypto grant programs that use funds generated through TGEs.

  • Gitcoin employs a novel funding mechanism called quadratic funding, which is designed to enhance capital efficiency and support digital public goods.
  • Gitcoin has its own treasury that funds development but does not run grant programs at scale. The grants are primarily for digital public goods and open-source software.

Gitcoin is developing systems for better capital efficiency and allocation, with the goal of influencing broader grant programs and ecosystems. There’s a trend towards grants that can be converted into equity, which could provide a return on investment for grantors and enhance sustainability.

  • The future may include more automated incentive structures based on on-chain network metrics, reducing reliance on human decision-making. Projects like Gitcoin’s work with Optimism on direct-to-contract incentives are examples of this trend.
  • Optimism: noted for its mature evaluation process and comprehensive review system. The RPGF program is well structured, with multiple stages of evaluation and metrics review.
  • ENS: recognized for its strong ecosystem and effective use of its treasury. ENS manages its grant programs creatively, balancing impact and capital allocation.
  • Solana Foundation: praised for its diverse programs and innovative approaches, including grants convertible to equity. The Solana Foundation has demonstrated significant growth and effectiveness in its grant strategies.
  • Each program excels in different areas, making direct comparisons challenging. Optimism is noted for its evaluation maturity, ENS for its effective ecosystem management and creative grant use, and Solana for its innovative approaches and recent growth. The best program depends on specific criteria and goals.
  • Clearly define the desired outcomes and return on investment (ROI). It’s crucial to understand how funding will translate into impact.
  • Strong eligibility criteria are essential for efficient allocation of funds. Protocol Guild’s success is attributed to its well-defined criteria.
  • Know your builders and provide additional support beyond funding. Implement milestones and accountability measures to ensure progress and alignment with goals.
  • Effectiveness is best measured by defining clear outcomes, ROI, and using appropriate metrics and accountability controls. Tools alone are not enough; understanding the underlying questions and processes is key.
  • The potential of decentralized grant programs lies in creating reputation systems that can track and evaluate the effectiveness of both programs and grantees. Open systems and public ledgers offer opportunities for developing these systems.
  • Building a comprehensive map of grant programs, their impact, and connections can improve the efficiency of the grant process. This includes understanding how programs and grantees interact and contribute to the ecosystem.
  • Gitcoin’s approach to grant funding: the ecosystem has a vision, and the grant program derives its mission from it. Next, granular objectives are derived from the mission, followed by key results for each objective and then leading indicators (a sketch of this cascade follows this list).
  • The Gitcoin leadership strongly believes in an evolutionary approach: no metric stays good forever, because it becomes redundant over time as people learn how to “hack” it. This is aligned with other thought leaders in the ecosystem, such as Metrics Garden.
  • Incorporating feedback from participants is invaluable. Gitcoin’s products have evolved significantly since their inception due to feedback. They put emphasis on streamlining the application process and introduced more robust support systems to assist grantees. As a result, there was an 80% grantee satisfaction rate in GG20, the highest they have seen in at least the past year.
  • Gitcoin is pioneering allocation mechanisms with Allo and growing the EVM grant pie as more and more L2s are supported by Grants Stack (the permissionless platform to manage grant programs).
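
To illustrate the vision-to-indicators cascade described above, here is a minimal, purely hypothetical sketch of how a program might encode it; the example goals, metrics, and targets are invented for illustration and are not Gitcoin's actual ones.

```python
# Hypothetical vision -> mission -> objectives -> key results -> leading
# indicators cascade; every name and number below is illustrative only.
grant_program = {
    "vision": "A thriving, self-sustaining open-source ecosystem",
    "mission": "Fund the public goods the ecosystem depends on",
    "objectives": [
        {
            "objective": "Grow the contributor base of funded projects",
            "key_results": [
                {"metric": "new active contributors per quarter", "target": 200},
            ],
            "leading_indicators": [
                "grantee onboarding docs published",
                "monthly contributor calls held",
            ],
        },
        {
            "objective": "Improve grantee accountability",
            "key_results": [
                {"metric": "share of grantees reporting monthly", "target": 0.9},
            ],
            "leading_indicators": ["standard reporting template adopted"],
        },
    ],
}

# Walk the cascade: each objective traces back to the mission and forward
# to the measurable key results that will be tracked.
for obj in grant_program["objectives"]:
    metrics = [kr["metric"] for kr in obj["key_results"]]
    print(f"{obj['objective']} -> {metrics}")
```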

Learnings from Octant

  • One big takeaway is that people often don’t read or pay attention to details. This observation applies to communities and users across the board, from grantees to donors. Unless there is repeated, consistent emphasis on communication and strong emphasis on any detail worth noting, people are likely to ignore key pieces of information. A learning here has been to make things easier for people to understand and to double down on communicating often and across channels.

Learnings from Giveth

Grants in Web3 are decentralized financial mechanisms aimed at stimulating ecosystem growth by supporting projects that contribute to public goods, open-source infrastructure, and innovation. Unlike traditional grants, they should emphasize decentralized decision-making and often align with token economies, enabling project sustainability through economic models and community/ecosystem alignment rather than direct funding alone.

Giveth challenges the traditional grant model, viewing grants as inefficient and unsustainable. Instead, Giveth will soon propose a more innovative approach by enabling projects to tokenize and create self-sustaining economic models, allowing for long-term growth and a better alignment of incentives. This approach shifts from one-time grants to fostering ecosystems where projects can generate value through bonding curves and tokenized economies.
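
As a rough illustration of the bonding-curve idea referenced above (and not Giveth's actual design), here is a minimal sketch of a linear bonding curve, where the token price grows with circulating supply and the cost of a purchase is the area under the price curve.

```python
# Minimal linear bonding curve sketch: price(s) = SLOPE * s.
# All numbers are illustrative; real curve designs and parameters vary widely.
SLOPE = 0.01  # price increase per token of supply

def price(supply: float) -> float:
    """Spot price at a given circulating supply."""
    return SLOPE * supply

def buy_cost(supply: float, amount: float) -> float:
    """Cost to mint `amount` tokens starting from `supply`
    (integral of the price curve from supply to supply + amount)."""
    return SLOPE * ((supply + amount) ** 2 - supply ** 2) / 2

print(price(1_000))          # 10.0   -> spot price at 1,000 tokens of supply
print(buy_cost(1_000, 100))  # 1050.0 -> cost to mint the next 100 tokens
```

Early buyers pay less than later buyers, which is the incentive-alignment property this model relies on for long-term value creation.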

The future of Web3 grants is seen as pluralistic, with a mix of decentralized governance models and council-led decisions. While grants might not ever be fully on-chain, the importance of efficiency, decentralized governance, and community involvement is clear. Retroactive funding mechanisms like RetroPGF (Retroactive Public Goods Funding) are predicted to become more prominent, reducing the bureaucratic burden on startups and fostering entrepreneurial freedom.

Tips for improving grant programs:

  • Eligibility clarity: Clear eligibility criteria and streamlined processes reduce wasted time for applicants.
  • Community involvement: Programs that incorporate community feedback, such as Quadratic Funding rounds, can foster broader project support.
  • Multiple program types: A pluralistic approach to funding (e.g., growth grants, builder grants, mission proposals) can cater to various ecosystem needs
  • Growing the ETH pie: Giveth’s diverse grant programs run on multiple chains and ecosystems.

Evaluating the net value of grant programs to the ecosystem is more relevant than evaluating individual project outcomes. A tolerance for failures is needed, especially with small grants, which highlights the importance of viewing programs in aggregate. Decentralized impact measurement can help avoid bureaucracy, especially for smaller grants, favoring metrics that reflect broader ecosystem health.

Decentralized governance in grant programs is seen as difficult to implement efficiently. While decentralized decision-making can bring greater community participation, the complexity of ensuring effective, scalable governance remains a significant challenge.

The primary issue lies in the misalignment of incentives within traditional grant structures. Grantees may only meet the minimum requirements for receiving funds, creating inefficiencies. However, the ecosystem as a whole, particularly the reliance on grants without long-term sustainability models, is a bigger problem. By enabling projects to create their own token economies, the incentive structure can shift toward long-term value creation.

2. Program Effectiveness