How to Raise a Moral Robot

In the future, humans will create robots that are more intelligent and more moral than those portrayed in the recent film Chappie. What the film touches on is perhaps the greatest challenge of raising moral robots: how to integrate them safely into society. Humans are, as far as we know, the most powerful learning machines on Earth. If robots are to become part of human society, they will have to be at least the second-best learners. Is it possible for humans to produce moral robot learners?

More and more artificial intelligence (AI) researchers agree that true intelligence comes from learning, not just from programming. With a growing number of machine-learning approaches available, robots can take in new information, turn it into instructions, and learn from feedback to adjust their actions in an ever-changing environment.
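Read as a loop, the learning process described here is: observe, act, receive feedback, adjust. The Python sketch below is only an illustration of that loop; the two candidate actions, the scoring scheme and the stand-in feedback function are invented for the example and do not come from the passage.

    import random

    # The robot's current preference score for each candidate action.
    action_scores = {"help": 0.0, "ignore": 0.0}

    def choose_action():
        # Mostly pick the best-scoring action, occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(action_scores))
        return max(action_scores, key=action_scores.get)

    def feedback(action):
        # Stand-in for feedback from the environment: helping is praised.
        return 1.0 if action == "help" else -1.0

    for _ in range(100):
        action = choose_action()
        # Nudge the chosen action's score towards the feedback it received.
        action_scores[action] += 0.1 * (feedback(action) - action_scores[action])

    print(max(action_scores, key=action_scores.get))  # typically prints "help"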

Robot learning, however, must have limits. If scientists succeed in building sophisticated robots that can learn, they will have to set limits on what and how those robots learn. A robot allowed to learn anything it can and wants to learn may grow into a brutal bully. Programmers must therefore set rules that prohibit robots from learning anything socially undesirable.

One approach to that problem is democratic robot learning. Programmers write a small number of fundamental norms into the robot and let it learn the rest. These fundamental norms include the prevention of harm, especially harm to humans, as well as politeness and respect. The norms are then translated into behavior, for example, what it means to be polite in a particular context. They also define the conditions under which one fundamental norm can override another: it is acceptable, for instance, for a robot to drop politeness when it is trying to save someone from harm.
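As a rough illustration of this two-tier design, the sketch below hard-codes three fundamental norms with an override order and leaves everything else to be learned. The Norm class, the priority numbers and the resolve function are assumptions made for the example, not features of any real robot platform.

    from dataclasses import dataclass

    @dataclass
    class Norm:
        name: str
        priority: int  # a higher priority wins when two norms conflict

    # Hard-coded fundamental norms; all remaining norms would be learned.
    PREVENT_HARM = Norm("prevent_harm", priority=3)
    RESPECT = Norm("respect", priority=2)
    POLITENESS = Norm("politeness", priority=1)

    def resolve(active_norms):
        # When several norms apply at once, follow the highest-priority one.
        return max(active_norms, key=lambda norm: norm.priority)

    # Saving someone from harm overrides being polite.
    print(resolve([POLITENESS, PREVENT_HARM]).name)  # prints "prevent_harm"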

Democratic robot learning would also guide a robot in dealing with teachers who contradict each other. Say one person tries to teach the robot to share, while another tries to teach it to steal. In that case, the robot should ask the wider community which teacher to listen to. After all, the norms and morals of a community are typically those held by the majority of its members.
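The conflict-resolution rule described here amounts to a majority vote. A minimal sketch, assuming a hypothetical community_vote helper and made-up community responses:

    from collections import Counter

    def community_vote(lesson_a, lesson_b, responses):
        # Follow whichever conflicting lesson more community members endorse.
        tally = Counter(responses)
        return lesson_a if tally[lesson_a] >= tally[lesson_b] else lesson_b

    # One teacher says "share", another says "steal"; the robot polls the community.
    responses = ["share", "share", "steal", "share", "steal"]
    print(community_vote("share", "steal", responses))  # prints "share"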

This approach would also keep robots from learning the evil side of human society. Humans are generally cooperative and kind towards those they consider part of their group, but they can become wicked and cruel towards those outside it. If robots learn such hostile sentiments and evil actions, they may well become a threat to humanity. Take the robots in some science-fiction films, for example: they escape human control, turn their human masters into slaves, and even kill them. If such horrifying scenes ever became reality, it would be a disaster for mankind.

Somehow, society will have to keep robots from carrying on humanity's darker heritage. If it succeeds, robots will be helpful to humanity as a whole, lending a hand in production, health care, education and elder care. That is what AI scientists should pursue, and those are the moral robots human society should raise.
