In this April 26, 2018, file photo, a robot receptionist is seen at the booth of a Chinese automaker during the China Auto 2018 show in Beijing. (Ng Han Guan, AP)

Editor's note: A version of this commentary by Brigham Young University professor Darin Gates was published by BYU's Wheatley Institution on May 22.

Artificial intelligence, or AI, raises the most pressing ethical questions of any technology today, in part because of the nearly ubiquitous influence it will have on so many areas of our lives. AI will have an immense ethical impact, both in the good it can bring about and in the potential harms it can unleash.

Interest in the ethical dimensions of AI has increased dramatically in recent years — from the nearly daily reporting of ethics-related AI issues in various news outlets, to major companies such as Amazon, Google, Facebook, DeepMind, Microsoft and IBM coming together to create the Partnership on AI to Benefit People and Society. Elon Musk, the late Stephen Hawking and 8,000 others have signed an open letter expressing concerns about the future of AI. There has thus been an increasing recognition of the need to focus on the ethical aspects of AI. Here are some of the most significant ethical issues facing the continued implementation of artificial intelligence.

The positive impacts of AI

Developments in artificial intelligence and machine learning are providing some of the most exciting breakthroughs in technology, medicine, education and many other fields. To get a sense of the incredible possibilities associated with AI, one need only listen to Sebastian Thrun — the founder of Google X and of Google’s self-driving car project, and the current CEO of Udacity — who provides a highly optimistic appraisal of the endless possibilities for good that can come from advances in AI.

At Stanford, Thrun led a team that created an artificially intelligent diagnosis algorithm for detecting skin cancer. By combining visual processing and deep learning — a type of artificial intelligence modeled on neural networks — they were able to match or outperform the diagnoses of some of the world’s leading dermatologists. Thrun argues that the greatest benefits will come from the way AI frees us to be more creative by mastering the repetitive tasks that take up so much of so many people’s day-to-day lives. Of course, many people could be out of a job as a result. More on that shortly.
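
To make this concrete, here is a minimal, hypothetical sketch (in Python with PyTorch and torchvision, not the Stanford team's actual code) of the standard pattern such systems follow: take a network pretrained on ordinary photos and fine-tune it on labeled lesion images. The dataset folder and the two benign/malignant classes are placeholders for illustration.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing expected by an ImageNet-pretrained network.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of images sorted into "benign" / "malignant" subfolders.
train_data = datasets.ImageFolder("skin_lesions/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a network pretrained on millions of photos, then replace the
# final layer so it predicts two classes instead of a thousand.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:      # one pass over the training images
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

Systems like the one Thrun describes are trained on far larger, expert-labeled image sets and then validated against dermatologists' own diagnoses, but the underlying pattern is the same.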

One exciting yet controversial application of AI is its use in self-driving cars. The main ethical argument for self-driving cars is that 94 percent of fatal car accidents are caused by human error, which self-driving cars are purported to eliminate. But there are also serious ethical questions regarding self-driving cars. One issue is how they will be programmed to respond in unavoidable-harm scenarios. Should they always minimize total harm — even if that means crashing into a wall and potentially killing the driver and passengers in order to avoid killing a larger number of pedestrians? Studies have shown that only a minority of people would be willing to buy such utilitarian-programmed self-driving cars. Thus, ironically, from a consequentialist-utilitarian perspective, it might be best to allow such cars to operate in a more ethically egoistic manner: if protecting their occupants is what gets people to buy and ride in them, the overall reduction in accidents could save more lives in the end.
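
The programming question can be illustrated with a deliberately oversimplified sketch (Python, purely hypothetical, and not how any real vehicle is engineered): a single weight on the occupants' harm acts as the ethical "dial" between a utilitarian and an egoistic policy.

def choose_maneuver(options, occupant_weight=1.0):
    """options: list of (name, expected_occupant_harm, expected_pedestrian_harm).
    occupant_weight == 1.0 counts all lives equally (utilitarian);
    occupant_weight > 1.0 increasingly protects the car's occupants (egoistic)."""
    def weighted_harm(option):
        _, occupants, pedestrians = option
        return occupant_weight * occupants + pedestrians
    return min(options, key=weighted_harm)[0]

# A stylized unavoidable-harm scenario.
scenario = [
    ("swerve_into_wall", 1, 0),   # likely kills the occupant
    ("stay_in_lane",     0, 3),   # likely kills three pedestrians
]
print(choose_maneuver(scenario))                      # utilitarian -> swerve_into_wall
print(choose_maneuver(scenario, occupant_weight=5))   # egoistic -> stay_in_lane

The debate described above is, in effect, a debate over who gets to set that weight and what value it should take.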

The McKinsey Global Institute published a study this past year outlining many of the important contributions that will come from AI — from health care to manufacturing to retail to education. This past March, I advised a team of four students and accompanied them to Seattle to compete in the Milgard Invitational Corporate Social Responsibility case competition. The topic this year was “Microsoft and the Future of AI.” The students were given 72 hours to come up with a plan (without any outside help) to complement Microsoft’s existing social responsibility initiatives.

As part of their presentation, the team focused on how AI will be key to “adaptive learning.” This involves computer-assisted, AI-driven learning that, as Harrison Miner pointed out, “will be able to learn and adapt to (an) individual’s learning styles and the pace at which they learn,” with supposedly 90 percent higher engagement than traditional learning. Furthermore, as another of our students, Thomas Colton, pointed out, AI adaptive learning can “help career training development match the pace of the demands of a fast-evolving labor-market,” enabling those displaced by technology to “repurpose their skills” in optimal ways.
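
As a rough illustration of the "adapt to the learner's pace" idea (a toy Python sketch with made-up thresholds, not any particular product's algorithm), an adaptive tutor might simply raise or lower the difficulty of the next exercise based on the learner's recent accuracy:

def next_difficulty(current_level, recent_results, step=1):
    """recent_results: True/False outcomes of the learner's last few answers."""
    if not recent_results:
        return current_level
    accuracy = sum(recent_results) / len(recent_results)
    if accuracy > 0.8:                    # cruising: make it harder
        return current_level + step
    if accuracy < 0.5:                    # struggling: ease off
        return max(1, current_level - step)
    return current_level                  # otherwise hold steady

print(next_difficulty(3, [True, True, True, True, True]))     # -> 4
print(next_difficulty(3, [False, True, False, False, True]))  # -> 2

Real adaptive-learning systems model the learner far more richly, but the core loop is the same: estimate how the learner is doing, then choose what to show next.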

Ethical concerns

There are, however, harmful possibilities that advances in AI may bring about. One potential consequence is massive job loss. In that same McKinsey study, the authors argue that by 2030 nearly 70 million U.S. workers will need to find new occupations due to the automation of their jobs by AI. Many people have been concerned about this supposedly impending “robopocalypse.” In a recent article in Wired, James Surowiecki downplays the possibility, or at least the imminent inevitability, of such a takeover. He counters some of these concerns, saying “the problem we’re facing isn’t that robots are coming. It’s that they aren’t,” or at least aren’t coming fast enough. Others, such as Martin Ford, are not so optimistic.

There are numerous other ethical issues concerning AI — some having to do with questions of AI bias, many having to do with questions of privacy, as we saw with the recent Facebook congressional hearings. Another problem concerns the role of AI and machine learning in manipulating, and in some cases addicting, people. Social media platforms such as Facebook are often wonderful tools for helping us stay in touch (at least minimally) with friends and family. However, Tristan Harris, a former “design ethicist” at Google, argues that such platforms also have a built-in incentive to manipulate us as part of a “race for our attention.” As he points out, social media companies (and others) define success by the time users spend on their platforms. Harris argues for changing their design goal from a time-spent model to a time-well-spent model.

Harmful consequences

Some worry about even more serious unintended consequences. Several prominent figures — including Musk and Hawking — have warned about these more extreme possibilities of AI. They worry about AI running out of control in unanticipated and catastrophic ways. They point to a possible not-too-distant future in which AI achieves “superintelligence.” This possibility — which has been called the “technological singularity” or “intelligence explosion” — is that AI systems will improve themselves and increase in intelligence at such a rate that we lose control. In the more extreme version, runaway recursive self-improvement of such machines will lead to AI agents (or systems) taking over the world in some manner. Thrun sees such possibilities as extremely unlikely, pointing out that most AI systems are still limited to a single domain and that there has been very little progress toward artificial general intelligence.

In the recent documentary “Do You Trust This Computer?” Musk is quoted as saying: “AI doesn’t have to be evil to destroy humanity. If AI has a goal, and humanity just happens to be in the way, it will destroy humanity as a matter of course — without even thinking about it.” Musk continues: “The least scary future I can think of is one where we have at least democratized AI, because if one company or small group of people manages to develop god-like digital superintelligence, they can take over the world.” This is one reason Musk and Sam Altman founded OpenAI.

John Searle, one of the most famous critics of so-called Strong AI, argues that it is ridiculous to think that AI agents could ever engage in any sort of uprising, simply because they could never attain conscious desires and intentions. This claim is based on his well-known “Chinese room” argument. Searle argues that artificial intelligence works purely by means of syntax, the rule-governed manipulation of symbols, and that it has no human (or semantic) understanding. AI can thus simulate human intelligence but never duplicate it. For Searle, AI agents have no true understanding. When Deep Blue beat Kasparov, or when AlphaGo beat Lee Sedol at Go, neither program really knew it had won, nor did the victory mean to it what it would to a human. They didn’t care that they had won.

Even if the extreme concerns about an AI takeover or singularity are unwarranted, there are still many legitimate concerns about the consequences of AI. It is not hard to believe that there will be many who are willing and able to use the power of AI to advance harmful and seriously unethical ends. Furthermore, it is important to find ways to incorporate ethics into the behavior of AI agents and systems. Brigham Young University professor Dan Ventura and I are currently working on a project exploring the possibility of incorporating ethical limits into the “thinking” of AI agents. Professor Ventura will present a paper on which we collaborated at the International Conference on Computational Creativity in Spain this June.

There is thus no question that AI will transform our world in many positive ways, but it is also clear that the continued development and application of AI deserve a great deal of careful thought and attention because of their potential for serious harm.