What is “OpenAI?” Why we need Elon Musk’s new business venture
Elon Musk is an ambitious serial entrepreneur. A co-founder of PayPal and SpaceX and a driving force behind SolarCity and Tesla, he identified, as early as his college years, five major sectors that would need immense innovation to carry the world forward.
He identified them as the internet, the production and consumption of sustainable energy, space exploration, artificial intelligence, and the rewriting of human genetics. Musk has already tackled space exploration and sustainable energy; now he is taking on another of these concerns.
OpenAI is his latest venture.
OpenAI is a non-profit artificial intelligence research company. According to its website, the “goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The company adds that “in the short term, we’re building on recent advances in AI research and working towards the next set of breakthroughs.”
OpenAI is working on many projects to push forward the development of safe, useful artificial intelligence. Four of the company’s active projects are listed below.
- Detect if someone is using a covert breakthrough AI system in the world. As the number of organizations and resources allocated to AI research increases, the probability increases that an organization will make an undisclosed AI breakthrough and use the system for potentially malicious ends. It seems important to detect this. We can imagine a lot of ways to do this — looking at the news, financial markets, online games, etc.
- Build an agent to win online programming competitions. A program that can write other programs would be, for obvious reasons, very powerful.
- Cyber-security defense. An early use of AI will be to break into computer systems. We’d like AI techniques to defend against sophisticated hackers making heavy use of AI methods.
- A complex simulation with many long-lived agents. We’re interested in building a very large simulation with lots of different agents in it that can interact with each other, learn over a long period of time, discover language, and accomplish a rich variety of goals.
All of this comes at a time when many of the world’s greatest minds have expressed deep concern about the future of AI. Along with Musk, Bill Gates and Stephen Hawking have both warned about the potential of AI to supersede our species.
“Success in creating AI would be the biggest event in human history,” Hawking wrote in a 2014 op-ed in The Independent. “Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.” In a 2014 interview with the BBC, Hawking added that humans, “limited by slow biological evolution, couldn’t compete and would be superseded” by AI.
Hawking also joined Elon Musk, Steve Wozniak, and hundreds of others in signing an open letter presented at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. The letter warned that artificial intelligence could prove more dangerous than nuclear weapons.
There are many ways AI could go wrong. I am glad someone has founded a company like OpenAI to help ensure this technology is guided and that rules are set in place. The U.S. government has jumped headlong into the AI race with drones and various military AI systems that are becoming capable of many things. These systems can save lives, but weaponized, or given access to our weapons or information, they could pose a serious risk in the future, and rules need to be laid out. Powerful AI falling into enemy hands could be catastrophic.
As in the movie I, Robot, laws would need to be programmed into AI so that it cannot pursue its own motives should it progress far enough to become self-aware. Just like autonomous vehicles, AI will need to make ethical decisions, and we as a species have not yet reached consensus on many of our own ethical dilemmas. This poses the question: are we even ready for AI if we cannot settle these issues among ourselves?
AI could change the world for the better; it doesn’t have to be the dark road of many a science fiction novel or film. But we need more companies like Musk’s start-up to ease the transition and keep us from rushing into what may be the biggest decision we will ever make as a species, one that, if not thought through, could lead to the enslavement or extermination of mankind.