Professional Codes of Ethics

Chapter 2 Case Study: The Example of OpenAI

In 2015, Elon Musk of PayPal, Tesla, and SpaceX; Peter Thiel of Palantir; Reid Hoffman of LinkedIn; and Sam Altman of Y Combinator teamed up to launch OpenAI, a nonprofit artificial intelligence (AI) venture that would aim to develop AI technology for the benefit of humanity and make it available to everyone. The group maintains that it can best ensure AI is used only for human benefit by severing the profit motive from the research. The group holds a very high standard for what AI technology must do: “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.” In other words, AI should not primarily be a tool of institutions, public or private, to be used on individuals, whether by governments conducting mass surveillance to monitor and control citizens or by businesses using advanced AI to predict, influence, and control human spending. Rather, as an “extension of individual human wills,” AI should be democratized and put to the use of expanding individual autonomy. It is not hard to see why this would be difficult to pursue in an organization seeking profits. All the intellectual property OpenAI develops will be publicly available without charge; the only exception would be technology that poses a risk to human safety. So far, the main achievement of OpenAI has been its gaming bots, which have defeated Dota 2 players. Dota 2 is a team-based video game considered more difficult for computers than chess or Go.

Approximately three years after the founding of the organization, Elon Musk voluntarily stepped down as chairman to prevent a future conflict of interest. While Musk will still advise the nonprofit, his own company Tesla has come to emphasize AI more and more. Although Tesla is primarily a manufacturer of electric cars, it is moving toward producing autonomous electric cars. Clearly, insofar as the for-profit company develops its own AI, a conflict of interest would arise between its proprietary technology and the free AI OpenAI offers to the public.

How would the mission of OpenAI fit under the types of ethical principles discussed in the chapter? When Elon Musk stepped down as chairman, what sort of principle was he following?

Case study by Robert Reed

https://www.fastcompany.com/3054593/elon-musk-launches-openai-a-nonprofit-aimed-at-using-ai-to-benefit-humanity

https://motherboard.vice.com/en_us/article/qveedq/elon-musk-steps-down-from-open-source-ai-group
