(2024-11-19) Zvi The Big Nonprofits Post
Zvi Mowshowitz: The Big Nonprofits Post. There are lots of great charitable giving opportunities out there right now. (Attractive Charities)
The first time I served as a recommender in the Survival and Flourishing Fund (SFF) was back in 2021. I wrote in detail about my experiences then. At the time, I did not see many great opportunities, and I was able to give out as much money as I could find good places for.
How the world has changed in three years.
That means the focus of this post is different. In 2021, my primary goal was to share my perspective on the process and encourage future SFF applications. Sharing information on organizations was a secondary goal.
This time, my primary focus is on the organizations. Many people do not know good places to put donations. In particular, they do not know how to use donations to help AI go better and in particular to guard against AI existential risk. Until doing SFF this round, I did not have any great places to point them towards.
Organizations where I have the highest confidence in straightforward modest donations now, if your goals and model of the world align with theirs, are in bold.
Note that donations to some of the organizations below may not be tax deductible.
Do not let me, or anyone else, tell you any of:
What is important or what is a good cause.
What types of actions are best to make the change you want to see in the world.
What particular strategies seem promising to you.
That you have to choose according to some formula or you’re an awful person.
This is especially true when it comes to policy advocacy, and especially in AI.
Briefly on my own prioritization right now (but again you should substitute your own): I chose to deprioritize all meta-level activities and talent development, because of how much good object-level work I saw available to do, and because I expected others to often prioritize talent and meta activities. I was largely but not exclusively focused on those who in some form were helping ensure AI does not kill everyone. And I saw high value in organizations that were influencing lab or government AI policies in the right ways, and continue to value Agent Foundations style and other off-paradigm technical research approaches.
Use Your Local Knowledge
I believe that the best places to give are the places where you have local knowledge.
Unconditional Grants to Worthy Individuals Are Great
The process of applying for grants, raising money, and justifying your existence sucks.
It especially sucks for many of the creatives and nerds that do a lot of the best work.
If you have to periodically go through this process, and are forced to continuously worry about making your work legible and how others will judge it, that will substantially hurt your true productivity. At best it is a constant distraction. By default, it is a severe warping effect. A version of this phenomenon is doing huge damage to academic science.
Do Not Think Only On the Margin, and Also Use Decision Theory
You want to do some amount of retrospective funding. If people have done exceptional work in the past, you should be willing to give them a bunch more rope in the future, above and beyond the expected value of their new project.
And the Nominees Are
Time to talk about the organizations themselves. Rather than offer precise rankings, I divided organizations by cause category and into three confidence levels.
Low confidence is still high praise, and very much a positive assessment!
I’m tiering based on how I think about donations from you, from outside SFF. I think the regranting organizations were clearly wrong choices from within SFF, but they are reasonable picks if you don’t want to do extensive research, especially if you are giving small.
In terms of funding levels needed, I will similarly divide into three categories.
Everyone seems eager to double their headcount. But I’m not putting people into the High category unless I am confident they can scalably absorb more funding.
Organizations that Are Literally Me
Balsa Research
Focus: Groundwork starting with studies to allow repeal of the Jones Act
Don’t Worry About the Vase
Focus: Zvi Mowshowitz writes a lot of words, really quite a lot.
Thanks to generous anonymous donors, I am able to write full time and mostly not worry about money. That is what makes this blog possible. I want to as always be 100% clear: I am totally, completely fine as is, as is the blog.
Organizations Focusing On AI Non-Technical Research and Education
The Scenario Project
Focus: AI forecasting research projects, governance research projects, and policy engagement, in that order.
Leader: Daniel Kokotajlo, with Eli Lifland
Of all the ‘shut up and take my money’ applications, even before I got to participate in their tabletop wargame exercise, I judged this the most ‘shut up and take my money’-est. At The Curve, I got to play the exercise and take part in discussions around it, and I’m now even more confident this is an excellent pick.
Daniel walked away from OpenAI, and what looked to be most of his net worth, to preserve his right to speak up.
This is how he wants to speak up, and try to influence what is to come, based on what he knows. I don’t know if it would have been my move, but the move makes a lot of sense.
Lightcone Infrastructure
Focus: Rationality community infrastructure, LessWrong, the Alignment Forum (AF) and Lighthaven.
Leaders: Oliver Habryka, Raymond Arnold, Ben Pace
I think they are doing great work and are worthy of support. There is a large force multiplier here (although that is true of a number of other organizations I list as well).
Lightcone had been in a tricky spot for a while: it got sued by FTX, which made it very difficult to fundraise until the suit was settled, and the settlement itself cost a lot of money. OpenPhil is also unwilling to fund Lightcone, despite its recommenders finding Lightcone highly effective.
Effective Institutions Project (EIP)
Focus: AI governance, advisory and research, finding how to change decision points
Leader: Ian David Moss
Artificial Intelligence Policy Institute (AIPI)
Focus: Polls about AI
Leader: Daniel Colson
All those polls about how the public thinks about AI, including about SB 1047? These are the people who did that. Without them, no one would be asking those questions.
Psychosecurity Ethics at EURAIO
Focus: Summits to discuss AI respecting civil liberties and not using psychological manipulation or eroding autonomy.
Leader: Neil Watson
This provides something for those skeptical of existential concerns.
Palisade Research
Focus: AI capabilities demonstrations to inform decision makers
Leader: Jeffrey Ladish
AI Safety Info (Robert Miles)
Focus: Making YouTube videos about AI safety, starring Rob Miles
Leader: Rob Miles
I think these are pretty great videos in general.
Intelligence Rising
Focus: Facilitation of the AI scenario planning game Intelligence Rising.
Leader: Caroline Jeanmaire
I haven’t had the opportunity to play Intelligence Rising, but I have read the rules to it, and heard a number of excellent after action reports (AARs), and played Daniel Kokotajlo’s version. The game is clearly solid, and it would be good if they continue to offer this experience and if more decision makers play it.
Convergence Analysis
Focus: A series of sociotechnical reports on key AI scenarios, governance recommendations and conducting AI awareness efforts.
Leader: David Kristoffersson
I am not so interested in their Governance Research and AI Awareness tracks, where many others are doing similar work, some of which seem like better bets. Their Scenario Planning track is more exciting.
Longview Philanthropy
Focus: Conferences and advice on x-risk for those giving >$1 million per year
Leader: Simran Dhaliwal
They also do some amount of direct grantmaking, but are currently seeking funds for their conferences.
I presume this does successfully act as a donation multiplier, if you are more comfortable than I am with that sort of strategy.
Organizations Focusing Primarily On AI Policy and Diplomacy
Center for AI Safety and the CAIS Action Fund
Focus: AI research, field building and advocacy
Leader: Dan Hendrycks
Center for AI Policy (CAIP)
Focus: Lobbying Congress to adopt mandatory AI safety standards
Leader: Jason Green-Lowe
They’re a small organization starting out. Their biggest action so far has been creating a model AI governance bill, which I reviewed in depth. Other than too-low compute thresholds throughout, their proposal was essentially ‘the bill people are hallucinating when they talk about SB 1047, except very well written.’
Encode Justice
Focus: Youth activism on AI safety issues
Leader: Sneha Revanur
They have done quite a lot on a shoestring budget by using volunteers, helping with SB 1047 and in several other places. Now they are looking to turn pro, and would like to not be on a shoestring. I think they have clearly earned that right. The caveat is the risk of ideological capture: youth organizations tend to turn to left wing causes.
The Future Society
Focus: AI governance standards and policy.
Leader: Caroline Jeanmaire
Safer AI
Focus: Specifications for good AI safety, also directly impacting EU AI policy
Leader: Simeon Campos
MIRI
Focus: At this point, primarily AI policy advocacy, plus some research
Leaders: Malo Bourgon, Eliezer Yudkowsky
MIRI, concluding that it is highly unlikely alignment will make progress rapidly enough otherwise, has shifted its strategy largely to communications and to advocating that major governments reach an international agreement to halt AI progress.
Foundation for American Innovation (FAI)
Focus: Tech policy research, thought leadership, educational outreach to government
Leader: Grace Meyer
FAI is centrally about innovation. In AI, people calling for ‘supporting innovation’ are often using that as an argument against all regulation of AI, and indeed I am dismayed to see so many push so hard on this in exactly the one place I think they are deeply wrong – we could work together on it almost anywhere else.
Yet here they are rather high on the list. I have strong reasons to believe that we are closely aligned on key issues including compute governance, and private reasons to believe that FAI has been effective and we can expect that to continue, and its other initiatives also seem good.