OpenAI is a non-profit dedicated to developing advanced AI for the public good and in an open manner. This carries with it risks, but is preferable to advanced AI being developed in secret to enhance private power.
OpenAI is currently developing a test framework for reinforcement learning.
OpenAI currently has pledged funding of $1b. It is important that this be just the beginning. At roughly $200k per researcher per year, this would probably be sufficient to fund 500 AI researchers for 10 years.
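The funding claim above is a back-of-envelope calculation, sketched below. The $200k fully-loaded cost per researcher-year is an assumption implied by the text's numbers, not a published figure.

```python
# Rough check: does $1B cover 500 researchers for 10 years?
pledged = 1_000_000_000   # dollars pledged to OpenAI
researchers = 500
years = 10

# Implied cost per researcher per year (an assumption, derived from the claim)
cost_per_researcher_year = pledged / (researchers * years)
print(cost_per_researcher_year)  # 200000.0 dollars per researcher-year
```

At ~$200k per researcher-year the claim is plausible for salaries, though it leaves little headroom for compute and overhead.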
Consider the scale of the problem. In 2015, Google Scholar showed 40,000 articles published containing the phrase "artificial intelligence". If we very roughly assume one article per researcher per year, that puts the number of AI researchers at 40,000. Thus the odds that OpenAI succeeds in developing safe smarter-than-human AI first are currently small. This is especially true if it makes its patents freely available, allowing others to build on them, while being unable to build on the patents of others.
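The scale argument above can be made concrete. The one-article-per-researcher-per-year rate is the text's own very rough assumption, and the 500 figure is the number of researchers the pledge could fund.

```python
# Estimate the size of the field and OpenAI's share of it.
articles_2015 = 40_000                  # Google Scholar hits for "artificial intelligence" in 2015
articles_per_researcher_per_year = 1    # very rough assumption from the text

estimated_researchers = articles_2015 / articles_per_researcher_per_year
openai_funded = 500                     # researchers the $1B pledge could fund

share = openai_funded / estimated_researchers
print(share)  # 0.0125, i.e. about 1.25% of the field
```

Even granting the crude estimate, OpenAI's funded headcount would be on the order of one percent of the field, which is why the odds of it getting there first look small.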
A glimmer of hope: a substantial fraction of AI researchers are academics who, while not necessarily seeking to develop technology to benefit humanity, aren't in it purely to enhance private power. If it were possible to bring a large portion of these academic researchers into the OpenAI fold, there might be hope. Creative copyright licensing, similar to the GNU GPL's copyleft, or patent licensing, similar to the MPEG LA patent pool, might be in order.
It seems likely that advanced AI will emerge through a string of inventions and the steady increase in computational power, rather than from a single invention. This means front-running is a real problem: it is easy for others to build on open research and then, using patents or secrecy, stifle such research and limit the public benefits of the final system.