Funding for AI alignment research
Last update: 01 November 2018

(1) Berkeley Existential Risk Initiative (BERI)
http://existence.org/getting-support/

If you work as an administrator at one of these groups (CHAI, CSER, FHI, or MIRI), you may ask BERI to support a researcher whom you nominate.
BERI’s mission is to improve human civilization’s long-term prospects for survival and flourishing. Currently, our main strategy is to take on ethical and legal responsibility, as a grant-maker and collaborator, for projects deemed to be important for reducing existential risk. These projects mostly revolve around reducing risk from technologies that may pose significant civilization-scale dangers, as determined by research collaborators who have adopted existential risk reduction as both their primary career ambition and their primary area of intellectual focus.

(2) MIRIx
https://intelligence.org/mirix/

MIRI wants to support AI safety research around the world. Our MIRIx program encourages mathematicians, computer scientists, and formal philosophers to organize their own workshops, and offers to reimburse the organizers for expenses.
The Machine Intelligence Research Institute is a research nonprofit studying the mathematical underpinnings of intelligent behavior. Our mission is to develop formal tools for the clean design and analysis of general-purpose AI systems, with the intent of making such systems safer and more reliable when they are developed.

(3) AI Alignment Prize
https://www.lesswrong.com/posts/4WbNGQMvuFtY3So7s/announcement-ai-alignment-prize-winners-and-next-round

Deadline: 31 December 2018
We are looking for technical, philosophical, and strategic ideas for AI alignment, posted publicly between July 15 and December 31, 2018. You can submit links to entries by leaving a comment on the linked post, or by email to apply@ai-alignment.com. We will try to give feedback on all early entries to allow improvement. Another change from previous rounds is that we ask each participant to submit only one entry (though possibly in multiple parts), rather than a list of several entries on different topics. The minimum prize pool will again be $10,000, with a minimum first prize of $5,000.

(4) AI-alignment.com
https://www.lesswrong.com/posts/DbPJGNS79qQfZcDm7/funding-for-ai-alignment-research

If you are interested in working on AI alignment and might do full- or part-time work given funding, consider submitting a short application to funding@ai-alignment.com.

(5) The Shannon Fellowship
https://shannonlabs.co/

The Shannon Fellowship supports independent researchers pursuing breakthrough ideas in underfunded areas of intelligence research. We support three broad categories of projects:
- Fundamental discoveries in your subfield of interest
- High-quality educational content that distills the research literature
- Meta-tools for accelerating research and information dissemination
Projects can be in Brain-Computer Interfaces, Intelligence Augmentation, Theoretical Neuroscience, or Artificial Intelligence; however, we welcome applications that do not strictly fit these labels.

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

As of 01 November 2018: Funding sources below are not currently accepting applications.

(6) The Future of Life Institute (FLI) – 2018 International AI Safety Grants Competition
https://futureoflife.org/2017/12/20/2018-international-ai-safety-grants-competition/

Initial proposal due: 25 February 2018, 11:59 PM Eastern Time
The focus of this RFP is on technical research or other projects enabling development of AI that is beneficial to society and robust in the sense that the benefits have some guarantees: our AI systems must do what we want them to do.
For maximal positive impact, this new grants competition thus focuses on Artificial General Intelligence, specifically research for safe and beneficial AGI. Successful grant proposals will either relate directly to AGI issues, or clearly explain how the proposed work is a necessary stepping stone toward safe and beneficial AGI.
Mission: To catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.

(7) OpenAI
https://blog.openai.com/openai-scholars/

Applications will close no later than 11:59 PM PT on 31 March 2018.
We’re providing 6-10 stipends and mentorship to individuals from underrepresented groups to study deep learning full-time for 3 months and open-source a project.
OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.

(8) GoodAI – General AI Challenge
https://www.general-ai-challenge.org/

Submission deadline: 18 May 2018
The General AI Challenge is made up of multiple rounds, each designed to tackle a crucial research problem in human-level AI development. GoodAI will award $5 million in prize money over the coming years.
GoodAI’s mission is to develop general artificial intelligence – as fast as possible – to help humanity and understand the universe.

(9) SIDNfonds
https://www.sidnfonds.nl/nieuws/call-2-nu-open-responsible-ai-projecten-gezocht

Call 2 is open until 17 September 2018, 13:00.
(Translated from Dutch) Our call for innovative internet projects is open again. In keeping with our annual theme, we are looking for projects that address Responsible AI (in addition to the open call). What role does AI play in our lives, and what influence do we have over it? What does AI mean for the relationship between people and technology? By supporting projects in this area, SIDN Fund wants to play an active role in building up knowledge and best practices in the area of Responsible AI.

(10) The Open Philanthropy Project AI Fellows Program
https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-philanthropy-project-ai-fellows-program

Applications and letters of recommendation are due by October 26, 2018, 11:59 PM Pacific time.
With this program, we seek to fully support a small group of the most promising PhD students in AI and ML who are interested in making the long-term, large-scale impacts of AI a central focus of their research. Fellows are funded through the 5th year of their PhD, and will receive a $40,000 per year stipend, $10,000 per year in research support, and payment of tuition and fees. We encourage applications from 5th-year students, who will be supported on a year-by-year basis; students who will be starting their PhD in Fall 2019; and students with pre-existing funding sources who find the mission and community of the Fellows Program appealing. We are committed to fostering a culture of inclusion, and encourage individuals with diverse backgrounds and experiences to apply; we especially encourage applications from qualified women and minorities.

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Note: If you know of a source of funding for AI alignment research that is not listed or if you find a mistake, please send an e-mail to eric@xriskfunding.com.