Effective Altruism Network Funding Guidelines

High-level thoughts:

  • Stewardship: Many people in the community view EA funders as stewards of EA money, much as citizens view government officials as stewards of public resources. Without fair, transparent processes detailing how and why funding is distributed, many EAs may rethink their commitment to advancing EA causes. This is especially true given the extraordinary interconnectedness of funders and fundees, which creates a very high likelihood of conflicts of interest (e.g., power grabs) and information cascades (e.g., unfounded gossip).
  • Maximizing EA Hours: An hour is an hour is an hour. Money matters primarily because it’s highly fungible with time. EA currently gauges “its” “success” by the number of organizations and employees, the number of dollars allocated, and other high-level metrics. But the more useful metric is the total number of EA hours allocated in the most valuable way possible, by altruists and non-altruists alike (including those outside the core EA community). Funding ought to maximize the number of hours people can dedicate to doing their highest-value work (e.g., launching or growing high-impact organizations), working in less impactful organizations while transferring their salary to others doing higher-impact work (e.g., earning to give), or working on self-development so they can more effectively dedicate their discretionary resources to high-impact areas (e.g., strategic life optimization). EA funders currently seem to reject this framing and focus primarily on the value of their own hours rather than the hours of others. This seems to be “missing the forest for the trees”.
  • Collaborative Feedback: Providing no feedback to rejected applicants means many thousands of hours of EA time are potentially wasted. Many projects that fail to get funded should either be closed or improved in ways the applicants likely don’t yet know how to execute. We should help those applicants if we’re all to succeed as a community. Generally speaking, the feedback should come from the evaluator (or a dedicated third-party service). If an evaluator doesn’t consider the net savings of potentially thousands or tens of thousands of EA hours, plus the possibility of preventing unexpected catastrophes from misguided applicants, worth <5 minutes of their time, they may be valuing their time in a very non-standard way. As a heuristic, if an applicant spends >500 hours building their organization or initiative, an evaluator ought to spend 1-60 minutes providing feedback (for all reasonable applications, and only where the evaluators have meaningful insights to contribute). This feedback time should be pre-allocated into the evaluator’s standard time budget per applicant. In other words, less money should go to applicants themselves and more to evaluators. (A back-of-envelope model of this trade-off follows this list.)
  • Evaluator Evaluations: In many cases it is not the applicant’s proposal that is incorrect; it is the evaluator’s evaluation, due to lack of skill, lack of care, or conflicts of interest. To partly mitigate this, the evaluator’s colleagues ought to independently rate the same applicant. Each evaluator’s inter-rater reliability scores ought to be made public for most $1M+ allocations, along with their track record on those allocations with 1-, 5-, and 10-year follow-ups (a sketch of one standard reliability measure also follows this list). The same would often, but not always, apply to “contended” evaluations; some of those would likely need to remain private. Providing feedback to applicants as suggested above also helps the evaluator improve their skill and/or ethics. If an evaluator has a history of poorly reasoned explanations, or regularly disagrees with their fellow evaluators, that information ought to be public. Having highly divergent evaluators might be a good thing, but even so their track record ought to be public so the community can come to its own conclusions.
    • Public Evaluation Example: “John Doe granted $1.5M to Jane Doe Inc. to do X for Y reasons with Z confidence that it would be deemed a reasonable grant in 10 years.”
    • Public Review Example: “Jane Doe Inc. is still operating at Y10 and is judged by our current evaluators in a shallow investigation as plausibly net positive.”
  • Status Quo Bias: In general, EA seems to have ossified around relatively conventional ways of thinking and operating. An “EA gospel” has emerged, and a culture of brown-nosing and status maximization has become entrenched. Some of this is unavoidable, but much isn’t. The flow of capital is plausibly the single biggest contributor to EA’s status quo and culture. EA has become heavily allergic to innovation.
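
To make the feedback trade-off concrete, here is a minimal back-of-envelope sketch. The 500-hour and 5-minute figures come from the heuristic above; the probability and salvage numbers are invented assumptions, not data:

```python
# Toy model: expected EA hours saved per evaluator-minute of feedback.
# All numbers below are illustrative assumptions, not empirical estimates.

feedback_minutes = 5          # suggested evaluator timebox (from the heuristic above)
p_feedback_redirects = 0.10   # assumed chance feedback meaningfully changes course
future_hours_at_stake = 500   # assumed hours the applicant would otherwise sink in
fraction_salvaged = 0.5       # assumed share of those hours saved if redirected

expected_hours_saved = p_feedback_redirects * future_hours_at_stake * fraction_salvaged
evaluator_hours_spent = feedback_minutes / 60

print(f"Expected EA hours saved:    {expected_hours_saved:.0f}")    # 25
print(f"Evaluator hours spent:      {evaluator_hours_spent:.2f}")   # 0.08
print(f"Hours saved per hour spent: {expected_hours_saved / evaluator_hours_spent:.0f}")  # 300
```

Even under these modest assumptions, a few evaluator minutes trade against hundreds of applicant hours, which is the core of the argument above.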
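The text above calls for publishing inter-rater reliability scores without naming a measure. Cohen’s kappa is one standard choice for two raters scoring the same applications; a self-contained sketch, with hypothetical rater names and ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement.
    1.0 = perfect agreement, 0.0 = no better than chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Hypothetical fund/reject calls by two evaluators on ten shared applications.
alice = ["fund", "fund", "reject", "fund", "reject",
         "reject", "fund", "reject", "fund", "fund"]
bob   = ["fund", "reject", "reject", "fund", "reject",
         "fund", "fund", "reject", "reject", "fund"]

print(f"kappa = {cohens_kappa(alice, bob):.2f}")  # 0.40: real but modest agreement
```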

There are many different arrangements of funders in and around the EA ecosystem, plausibly requiring different norms for each. Here is a non-comprehensive taxonomy (one possible encoding is sketched after the list):

  • Ownership: community “owned” (e.g., EA Funds), organizationally owned (e.g., shareholders, Boards of Directors), individually owned 
  • Size: small organizations, moderate-sized organizations, large organizations
  • Governing Role: high level of governing responsibility, moderate level of governing responsibility, no governing responsibility
  • Capital Type: donations/grants, investments, loans, and other arrangements 
  • Allocation Entity: direct, indirect (e.g., regrantors), crowdfunded 
  • EA Involvement: active involvement in effective altruism community, moderate involvement in effective altruism community, no involvement in effective altruism community
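
For concreteness, one way a funder could encode this taxonomy is as a typed record, so each funding arrangement can be tagged and filtered. A sketch in Python; all class and field names are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Ownership(Enum):
    COMMUNITY = "community"            # e.g., EA Funds
    ORGANIZATIONAL = "organizational"  # e.g., shareholders, Boards of Directors
    INDIVIDUAL = "individual"

class Size(Enum):
    SMALL = "small"
    MODERATE = "moderate"
    LARGE = "large"

class Level(Enum):  # reused for governing role and EA involvement
    NONE = "none"
    MODERATE = "moderate"
    HIGH = "high"

class CapitalType(Enum):
    DONATION_OR_GRANT = "donation/grant"
    INVESTMENT = "investment"
    LOAN = "loan"
    OTHER = "other"

class AllocationEntity(Enum):
    DIRECT = "direct"
    INDIRECT = "indirect"  # e.g., regrantors
    CROWDFUNDED = "crowdfunded"

@dataclass
class FunderProfile:
    ownership: Ownership
    size: Size
    governing_role: Level
    capital_type: CapitalType
    allocation_entity: AllocationEntity
    ea_involvement: Level

# Example: a community-"owned", moderate-sized regrantor making grants.
profile = FunderProfile(Ownership.COMMUNITY, Size.MODERATE, Level.MODERATE,
                        CapitalType.DONATION_OR_GRANT, AllocationEntity.INDIRECT,
                        Level.HIGH)
```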

Here are a few ideas to consider and potentially experiment with in some parts of the EA funding ecosystem:

  • Tiered Application System: Funders could consider a four-tiered application system to reduce applicant friction (a configuration sketch follows this list):
    • Round 1: Ask applicants to submit a one-pager and/or business model canvas for their project. Applicants generally should have already created those documents in their initial ideation phase, before they consider fundraising. The funder could also ask the applicant to share any extenuating circumstances the funder ought to take into consideration (e.g., potential conflicts of interest). Within 72 hours, funders could let promising applicants know via an automated email that they will continue to the next stage. Less promising applicants might also be given the option to continue, but with the generic response, “Based on a very cursory review, we don’t know if this proposal fits our funding criteria. That said, you’re welcome to continue to the next stage if you’d like. These entrepreneurial and writing coaches are available to help you craft application materials, sometimes at no cost to you. If you’d like to work on your application more before we review it deeply, please click this button to delay the process for 14 days. Resubmit your updated materials anytime before then. You will not be penalized at all for choosing to resubmit.” Applicant time: 1-20 hours; funder time: <1 hour.
    • Round 2: Ask applicants your 3-5 most decision-relevant questions in a standardized application. One question should be disclosure of conflicts of interest. Even after submission, the application should remain editable by the applicant, although the initial version should also be sent to the funder. Suggested turnaround time for applicant and evaluator: 7 days. Applicant time: 1-20 hours; funder time: 1-5 hours.
    • Round 3: If needed, ask applicants any remaining questions or request a full proposal; otherwise, skip this step. Suggested turnaround time for applicant and evaluator: 7 days. Applicant time: 1-20 hours; funder time: 1-10 hours.
    • Round 4: Have an open-ended verbal conversation on the project’s viability, strategic next steps, alternative funding channels, and any remaining open questions. Perform a final “gut” check on the decision to allocate or not. Suggested turnaround time for scheduling the call: 3 days. Applicant time: 1 hour; funder time: 1 hour.
  • Confirmation: Run approved applicants through due diligence, legal review, and other internal processes to confirm the decision to allocate. Then send money. Suggested turnaround time: <4 weeks.   
  • Standardized Application Forms: Funders could use a standardized application form, with different version lengths for different purposes. Some have tried to standardize forms, but even subtle differences in language and form length mean applicants usually spend hours re-editing their materials.
  • Example Applications: Two sample successful and two sample unsuccessful applications could be included on the funder’s website next to the application form, with the reasons for the rejections and acceptances included. These could be anonymized. At least one pair could be the before (rejected) and after (accepted) application for the same project.
  • Line By Line Feedback: Applications could be returned to applicants with anonymous positive and negative feedback on each line or section from the funders. A simple Google Doc with 1-3 comments would be exceptionally helpful. 
  • Holistic Feedback: For many rejected applicants, funders could give written or verbal feedback on why the project wasn’t funded and the top three things the applicant could do to improve (a) their odds of acceptance next time and/or (b) the project in general. In either case, the reviewer could verbally explain their thinking very quickly and have it transcribed into written form, or send the raw recording after it is masked by an anonymizing speech synthesizer. A clear disclaimer saying all feedback is rough, incomplete, and perhaps completely off the mark could be sent alongside it. If the evaluator prefers, feedback could always be framed as questions for the applicant to consider rather than declarative statements. To receive the feedback, the rejected applicant could be required to click a box waiving any right to use it in a legal context; note this might only be an honor-based system, given differences in enforceability across jurisdictions. Finally, as a heuristic, an evaluator might timebox feedback to <5 minutes in most cases.
    • For poor submissions, the feedback could be a simple generic email with helpful broad suggestions.
    • For good submissions, the feedback could be a simple generic email with helpful broad suggestions and 30-90 seconds of tailored feedback.
      • Feedback Example #1: “This is the 15th similar proposal that I’ve seen and it doesn’t appear to have a differentiator yet. Perhaps ask 2-3 mentors if they think it’s a tarpit idea or not, then decide whether you’d like to continue working on it. You can apply again in 30 days.” (30 seconds to write)
      • Feedback Example #2: “Hey I’m really sorry, but I just don’t think I get this proposal. I asked two of my colleagues to review it as well, but we’re all a little stuck here. Could you rework it and potentially re-apply in 30 days? See EASE for experts that can help you update your materials.” (25 seconds to write)
    • For outstanding submissions that almost met the funding bar, the feedback could be a simple generic email stating they were very close and might consider applying again when more funds are available. It could also include 30-90 seconds of tailored feedback.
      • Feedback Example #1: “This was a great proposal and I think you could do a lot of good with it. You just missed our funding bar, so feel free to apply again later.” (20 seconds)
      • Feedback Example #2: “We’re a little more funding constrained than we expected to be, so we can’t fund this now. But we do expect this to change in 4-6 months. We’ll email you when we have more capital and see if you need support then. Perhaps try [Funder X] in the meantime.” (30 seconds to write) 
  • Feedback Open Call: Funders could offer an optional one-hour call, after allocation decisions are made, in which any applicant can ask questions and receive answers. Funders can choose not to answer specific questions if privacy is a key priority for their approach.
  • Application Statistics: Funders could provide application statistics on the primary application page: the average or expected number of applications, average acceptance rate, average and median amounts granted, smallest and largest amounts granted, a ranking of the most common reasons for rejection, and changes to these metrics over time (a computation sketch follows this list). Note that putting statistics in other places, like the EA Forum, makes them difficult for applicants to find.
  • Private Submissions: Funders could give applicants the option to submit a “for your eyes only” application that is not circulated among the funder’s formal or informal advisors. Ideally the applicant could selectively choose which evaluators they would be comfortable having act as reviewers. The funder could acknowledge this might lower the odds of the submission being successful. There is an enormous range of organizations doing good in the world, but many of them cannot accommodate the open-ended sharing that many funding processes require; conflicts of interest, IP control, and bad-faith actors among the funders or their network sometimes make this difficult. Allowing applications to be privately submitted to only selected evaluators in the funding entity is especially important when a proposed organization would inherently critique people or organizations with links to some, but not necessarily all, of the evaluators in the funding entity.
  • Conflicts of Interest: Funders could declare any potential conflict of interest in their internal database for all applicants, in their public database for all successful applicants, and to every applicant in their acceptance or rejection communications. The internal database would be reviewed by the funding organization’s Board of Directors annually.  
  • Kind Encouragement: Funders could emphasize that applicants can re-apply in the next funding round and that many of the best applicants in the past did exactly that. 
  • Advisory Support: An independent service offering advisory support for the EA funding ecosystem could be made available.
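
The tiered system described above is effectively a pipeline configuration, so its time budgets can be written down and sanity-checked directly. A sketch in Python; the structure and names are assumptions, while the numbers are transcribed from the rounds above:

```python
from dataclasses import dataclass

@dataclass
class Round:
    name: str
    applicant_hours: tuple[int, int]  # (min, max) estimated applicant time
    funder_hours: tuple[int, int]     # (min, max) estimated evaluator time
    turnaround_days: int              # suggested response deadline

# Figures transcribed from the tiered application system above.
PIPELINE = [
    Round("Round 1: one-pager / canvas", (1, 20), (0, 1), 3),        # 72-hour reply
    Round("Round 2: 3-5 standard questions", (1, 20), (1, 5), 7),
    Round("Round 3: full proposal (optional)", (1, 20), (1, 10), 7),
    Round("Round 4: conversation + gut check", (1, 1), (1, 1), 3),
    Round("Confirmation: diligence and payout", (0, 0), (0, 0), 28),  # <4 weeks
]

worst_case_days = sum(r.turnaround_days for r in PIPELINE)
max_funder_hours = sum(r.funder_hours[1] for r in PIPELINE)
print(f"Worst-case pipeline length: {worst_case_days} days")     # 48
print(f"Max evaluator hours per applicant: {max_funder_hours}")  # 17
```

Summing the suggested deadlines shows that, even in the worst case, an applicant should hear a final yes or no within roughly seven weeks.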
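Similarly, the application statistics suggested above are cheap to compute from a funder’s own records. A sketch with invented example records; all field names and figures are hypothetical:

```python
from statistics import mean, median
from collections import Counter

# Invented records; a real funder would pull these from its grants database.
applications = [
    {"accepted": True,  "amount": 50_000,  "rejection_reason": None},
    {"accepted": False, "amount": None,    "rejection_reason": "no differentiator"},
    {"accepted": True,  "amount": 120_000, "rejection_reason": None},
    {"accepted": False, "amount": None,    "rejection_reason": "unclear theory of change"},
    {"accepted": False, "amount": None,    "rejection_reason": "no differentiator"},
]

grants = [a["amount"] for a in applications if a["accepted"]]
rejections = [a["rejection_reason"] for a in applications if not a["accepted"]]

stats = {
    "applications": len(applications),
    "acceptance_rate": len(grants) / len(applications),
    "mean_grant": mean(grants),
    "median_grant": median(grants),
    "smallest_grant": min(grants),
    "largest_grant": max(grants),
    "top_rejection_reasons": Counter(rejections).most_common(3),
}
print(stats)  # everything the primary application page would display
```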

Potential application rounds:

  • Round 1 – Short Form Application
    • Please submit your Business Model Canvas (1 page) or one-pager (1 page)
    • Please submit your short-term plan (<300 words)
    • If needed, please let us know anything else that might be pertinent for us to review this application fairly (<300 words)
  • Round 2 – Medium Form Application
    • 3-5 decision-relevant additional questions
  • Round 3 – Long Form Application (Optional)
    • 3-10 decision-relevant additional questions
  • Round 4 – Conversation
    • 20-minute (or longer) call exploring any relevant additional questions, and a chance for the evaluator to offer strategic advice to the applicant, especially on their next steps