Service Robots in Japan

AN ETHICAL, SOCIAL, AND POLICY-DRIVEN ANALYSIS

Background

Service robots, machines that assist humans by performing useful tasks for them, have developed at an astonishing rate over the past few years. Their expansion from primarily industrial settings into domestic and commercial sectors reflects profound technological innovation.

Why Japan?

We are focusing on Japan because it has welcomed humanoids most rapidly into everyday life.

  • At Haneda Airport, porter robots carry passengers’ luggage.
  • Aiko Chihira, a humanoid robot dressed in a kimono, directs customers at the Mitsukoshi department store.
  • The robot Pepper plays games with customers and provides product information at Mizuho Financial Group branches.
  • The humanoid ASIMO serves drinks, dances, and even recently played soccer with U.S. President Barack Obama.

Japan as a Leader

Japan leads the world in the technological development of robots.

Population Decrease

Japan's shrinking population creates a pressing demand for robotic labor.


Ethical Concerns

The exponential growth of service robot technology raises ethical concerns. As robots become smarter and more commonplace, they will increasingly face ethical dilemmas.

Moral Decision Making: Safety, Errors

Because robots run on very complex software, errors and vulnerabilities are likely to lurk in their code and could potentially result in fatalities.

  • As robots become networked, they can potentially be hacked and their abilities can be turned against us.
  • Is it even possible for us to create machine intelligence that can make nuanced distinctions, such as between a gun and an ice-cream cone?
  • Can robots differentiate non-combatants from combatants, non-threatening from threatening?
  • What should the safety threshold be prior to service robots’ introduction into society?

Responsibility and Liability: Whom to Hold Accountable

If a robot makes a mistake, it is unclear who should be held responsible for the resulting harm.

  • Currently, product liability laws are largely untested in robotics. In fact, the common trend is towards deregulation, which increasingly releases manufacturers from responsibility.
  • Are there unique legal or moral hazards in designing machines that can autonomously kill people?
  • Is it ethically acceptable to delegate the care of the elderly and children to machines?
  • Is robotic companionship for other purposes, such as pets or sex partners, morally problematic?

A Deontological Perspective

Deontology holds that the morality of an action is based on the action's adherence to rules and duties.

  • Over the years, robots have become smarter and more autonomous, but they still lack an essential feature: moral reasoning.
  • From a deontological perspective, how can robots know the difference between right and wrong? How does one impart morals to a robot? Simply program rules into its brain? Send it to obedience class?
  • In fact, the spread of service robots depends on our ability to implement moral reasoning in them. Otherwise, service robots are doomed to violate deontological moral rules, which in turn undermines any utilitarian case for them by harming society's intrinsic values.
  • The first colossal hurdle to moral reasoning: there is no universal set of human morals. Morality is culturally specific, continually evolving, and eternally debated. If robots are to live by an ethical code, where will it come from? What will it consist of? Who decides?

A Utilitarian Perspective

Utilitarianism holds that the best moral action is the one that maximizes utility.

  • The main obstacle to full-scale deployment of service robots is whether or not they harm our society. From a utilitarian standpoint, the essential ethical question is whether service robots do more good than harm to Japanese society.
  • Will robots create more jobs than they will destroy?
  • Are robots a promising solution for elderly care, thereby essential to the Japanese aging population?
  • Does the huge market ahead for service robotics compensate for the potential dangers involved?
  • In Japan, there is little moral dilemma regarding automation because of a striking labor shortage. But what about other parts of the world, where people desperately need jobs?

Shrinking Privacy: A Legal and Ethical Concern

Service robots can easily be equipped with surveillance devices that could be monitored or accessed by third parties.

  • Robots can become mechanical databases that could run background checks on an individual’s driving, medical, and banking records.
  • Do all people have equal needs for privacy? Or do some, who are more vulnerable in some way, need privacy more than others?
  • Should legislators pass new laws to protect privacy, given the new technologies and products that now collect information about us?
  • Should individuals bear the responsibility of protecting their own privacy? Do most people understand the way in which their information is accessed, collected, or otherwise used online?

Japan and the World: A Global Perspective

In Japan, there is less of a moral dilemma regarding automation and robotic replacement.

  • Japan’s shrinking workforce and aging population create demand for robotic labor. However, the same does not hold true for other parts of the world. The growth of service robot technology will inevitably lead to a progressive adoption of service robots across the entire world, and with it ethical concerns for the rest of the world. Most countries have expanding workforces and will not be able to switch easily to service robots. From a utilitarian perspective, to follow Japan’s lead in service robotics, the world would have to make sure that the growth associated with robots far exceeds the pitfalls of reducing the number of available jobs.

Self-Driving Car Example: Reflecting Upon Ethical Concerns

  • Self-driving cars can potentially eliminate virtually all driving errors and ease congestion. With self-driving cars, the US could save $110 billion annually (including 724 million gallons of fuel) and eliminate an average of 21,700 deaths per year.
  • There are also disadvantages: self-driving cars could eliminate more than 5 million jobs in the US, and they are expensive to build. However, from a utilitarian perspective, the massive economic and environmental savings, coupled with the number of lives saved, would do more good than harm to our society.
  • They also raise significant ethical concerns: To what extent should self-driving cars follow the law? Who is accountable for behavioral (driving) mistakes?
  • The trolley problem applied to self-driving cars is a choice between utilitarianism and deontology. Imagine an autonomous car that has two options:
    • Keep going and crash into the human-driven car, inadvertently killing the family of five.
    • Turn right and crash into another car, killing the one person sitting inside.

From a utilitarian perspective, the car would turn right and kill one person instead of five. In a survey, 62.8% of professional philosophers agreed that the utilitarian perspective should be adopted. Even if the utilitarian perspective is adopted, we still have to choose between rule utilitarianism and act utilitarianism. Rule utilitarianism states that "we must always pick the most utilitarian action regardless of the circumstances"; in essence, it always chooses whatever option benefits the majority. Act utilitarianism states that "we must consider each individual act as a separate subset action," meaning that no general rules can be made because each situation is unique. If we suddenly replace the family of five with five bank robbers, is it more legitimate to turn right? Even if the answer is yes, a computer cannot logically handle all the different complex cases and their respective subtleties. Thus, utilitarianism can appear downright naïve. Moreover, can we morally accept that, at any moment, our life could be sacrificed, through no fault of our own, to save the lives of others?

From a deontological perspective, the car would keep going, since "some values are simply categorically always true," in the sense that "murder is always wrong, and we should never do it." Regardless of the circumstances, an autonomous car should not decide to sacrifice its driver to save others. This approach abstracts away all the complexity of each situation and simply never chooses to actively kill someone for the benefit of others. In that way, this paradigm echoes Kant's categorical imperative, "act only according to that maxim whereby you can, at the same time, will that it should become a universal law," meaning that we should not merely use people as means to an end.
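The divergence between the two paradigms can be sketched as a toy decision function. This is purely illustrative: the option names, casualty counts, and the idea of flagging "active intervention" are assumptions for the sake of the example, not an actual autonomous-vehicle planner.

```python
# Toy model of the trolley-problem choice described above.
# Option names and casualty counts are hypothetical.

def utilitarian_choice(options):
    """Act-utilitarian rule: pick the option with the fewest expected casualties."""
    return min(options, key=lambda o: o["casualties"])

def deontological_choice(options):
    """Deontological rule: never actively intervene to sacrifice someone,
    so keep the current course regardless of casualty counts."""
    return next(o for o in options if not o["active_intervention"])

options = [
    {"name": "keep going", "casualties": 5, "active_intervention": False},
    {"name": "turn right", "casualties": 1, "active_intervention": True},
]

print(utilitarian_choice(options)["name"])    # turn right
print(deontological_choice(options)["name"])  # keep going
```

The sketch makes the contrast concrete: the utilitarian rule compares outcomes, while the deontological rule ignores outcomes entirely and filters on the nature of the act itself.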


Current Policy

Currently, the Japanese government is actively promoting the development of service robots. Prime Minister Abe has urged companies to “spread the use of robotics from large-scale factories to every corner of our economy and society.”

Japan Revitalization Strategy: Towards a “robot barrier-free” society

In 2014, the government of Japan — in its revised Japan Revitalization Strategy document — established a goal to realize a “New Industrial Revolution Driven by Robots.”

  • The New Robot Strategy aims at the gradual automation of just about everything: from agricultural equipment and automobiles to disaster-relief services, and robots in the food, cosmetics, and even pharmaceutical industries.
  • Another focus of the initiative is expanding the role of service robots, with an intermediate goal of 30 percent market penetration by 2020 and an ultimate target of 70 percent of such machines employed in the sector.
  • Japan plans to set up a structure that triggers innovation by promoting public-private partnerships, creating more occasions for matching users with manufacturers, and pressing ahead with standardization, all from the perspective of human resource development.

Deregulation and Funding at the Heart of Japan’s Robotic Revolution

To promote and expand the use of robots in society, Japan plans to carry out a well-balanced reform of regulations and institutions, combining deregulation with increased funding.

Absence of Specific Regulation: Identification, Precautionary Risk Control

There exists no standardized regulation to prevent dangers associated with networked, autonomous robots.

  • The absence of regulation in specific cases makes it difficult for the relevant stakeholders to understand potential legal risks in activities involving service robots.
  • When accidents happen, there is currently no method to identify the owner and manufacturer of the robots involved. This lack of an identification method makes it harder to prevent similar accidents in the future or to hold the involved parties accountable.
  • Service robots are not legally required to have precautionary risk mechanisms that could prevent dangerous situations.

Our Proposed Policy

Our proposal seeks to mitigate some of the potential ethical concerns. As the number of service robots in society grows, Japan needs to develop rules to manage them. Japan should not only strive to lead the technological development of service robots, but also the regulation of service robots. Thus, we propose three new laws that will help prevent dangers associated with the increase in service robots.

Kill Switch

All robots are required to have a precautionary risk control mechanism in the form of a human override capability (kill switch).

  • All service robots will be required to have a clearly visible and easily accessible kill switch.
  • The kill switch will immediately shut off the robot.
  • The switch is intended to allow humans to stop the robot in the case something goes wrong. It serves as a protective measure against faulty hardware and software, and also dangerous situations with which the robot may be unfamiliar.
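The proposed override could be sketched in software as a flag checked on every cycle of the robot's control loop. This is a minimal sketch under stated assumptions: the control-loop structure is hypothetical, and a real kill switch would also cut power at the hardware level rather than relying on software alone.

```python
# Minimal sketch of a human-override (kill switch) check in a robot
# control loop. The loop structure is hypothetical; a real kill switch
# would additionally cut actuator power in hardware.
import threading

kill_switch = threading.Event()  # set() when a human presses the switch

def control_loop(max_steps=1000):
    steps = 0
    while steps < max_steps:
        if kill_switch.is_set():
            # Stop immediately, before any further sensing or actuation.
            return "shut down"
        steps += 1  # placeholder for one cycle of sensing and actuation
    return "finished"

kill_switch.set()
print(control_loop())  # shut down
```

Checking the flag first in every iteration is what makes the shutdown "immediate" at the software level: no actuation step can run once the switch has been pressed.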

Identification Chip

Robot manufacturers must ensure their robots are clearly identifiable by including identification chips that are protected from alteration.

  • The identification chips can be used in case of an accident to identify the owner and the manufacturer, so that they can be held accountable for their mistakes.
  • In addition, manufacturers can learn from their mistakes and take action to prevent similar mistakes in the future.
  • It will be illegal to remove these identification chips from a robot.
  • These chips will be standardized products, produced either by the government or by a government-sponsored organization.

Human Obedience

Manufacturers must program robots so that they obey orders given to them by humans except when such orders could lead to the death or injury of humans.

  • Robots must obey all orders of their human owners.
  • For example, if a human takes control in a self-driving car, the car should obey the human owner and stop making autonomous decisions, as long as this does not result in the direct death or injury of humans.
  • In the case that a human takes over and something negative happens because of the human's decision, liability falls upon the human for making decisions that the robot followed.
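The obedience rule amounts to a simple predicate: comply unless harm to humans is predicted. The sketch below is a toy model of that rule; the risk-prediction function is a hypothetical stand-in for whatever safety model a real robot would use.

```python
# Sketch of the proposed obedience rule: obey human orders except those
# predicted to lead to the death or injury of humans. The risk predictor
# is a hypothetical placeholder for a real safety model.

def predicted_to_harm_humans(order):
    # Placeholder: a real robot would evaluate a safety model here.
    return order.get("risk_to_humans", False)

def should_obey(order):
    """Obey all human orders except those that could injure or kill humans."""
    return not predicted_to_harm_humans(order)

print(should_obey({"action": "take manual control"}))  # True
print(should_obey({"action": "accelerate", "risk_to_humans": True}))  # False
```

The liability rule in the last bullet follows naturally: whenever `should_obey` returns true and the robot complies, responsibility for the outcome shifts to the human who gave the order.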

Enforcement: These laws will be enforced by imposing financial penalties on manufacturers or owners of robots that do not comply with these regulations. A government agency should be established to enforce the proposed regulations and to oversee the ethical and social concerns of robotics manufacturing and development in Japan.


Alternatives

Our proposed policy involves a set of new regulations to prevent dangers associated with the proliferation of robots. We consider two alternative policies: continuing the current policy and banning service robots.

Keep Current Policy?

While Japan's government promotes the development of service robots, there is currently an absence of service robot specific policy regulations.

Ban Service Robots?

One way of dealing with difficult ethical questions is to avoid them altogether by either banning robots or heavily regulating the service robot industry.


Response To Alternatives

Both of these alternatives have significant weaknesses.

Maintaining the current policy is insufficient.

Counterarguments:

  • The lack of regulatory standards raises significant ethical and social considerations, particularly surrounding safety and human lives, as discussed in our current policy section.
  • Additionally, the absence of service robot specific laws makes it difficult for the relevant stakeholders to understand potential legal risks of activities involving service robots.
    • The fact that service robots often involve state-of-the-art technologies imposes a burden on robot makers in interpreting legal norms and complying with them.
    • Service robots may, in some aspects, differ from traditional machines such as in control methods or degrees of human attention needed for their operations; this makes it complicated to identify "what" service robots are in relation to existing laws.
    • This raises further questions as to whether the use of autonomous robots in public areas is allowed, and what conditions need to be met for their actual use.
    • Those who wish to develop and use robots need to identify and solve the legal issues faced by them on a case-by-case basis.
    • This significantly raises the barrier to entry for robot manufacturers, and as a result impedes innovation.

Banning Service Robots is Not the Solution.

Counterarguments:

  • Banning service robots is a form of overregulation, which is not a good idea: it impedes innovation.
  • Robots are needed to take care of Japan’s aging population, since Japan’s population and especially the working-age population are decreasing.
  • Robots save time and money by producing more products of higher quality in less time (increased ROI).
    • Because of its shrinking workforce, Japan has to rely on productivity as its primary catalyst for growth.
  • Robots save people from performing dangerous tasks.

Overall, these alternatives are worse than our proposed policy.

  • They impede both the growth of the service robot industry and the increase of productivity in Japan.
  • Productivity is essential for Japan. In fact, if Japan can successfully double its rate of productivity growth, with a sharp focus on increasing value added as well as reducing costs, it could boost annual GDP growth to approximately 3 percent. By 2025, this would increase Japan’s GDP by up to 30 percent and improve Japan’s future prospects. Some $1.4 trillion in GDP growth is at stake in 2025 alone.
  • Our three proposed laws can significantly help to increase productivity in Japan while tremendously reducing damages done by potential robotic failures.
  • In terms of compliance costs, consumers will have to pay up to $600 million more annually for the additional safety features. However, Merrill Lynch estimates that our changes will not affect the profit or revenue of service robot manufacturers. Adopting alternatives to our policy would have a far greater impact on the Japanese economy: banning service robots or heavily regulating them would reduce the size of the service robotics market by billions of dollars, while maintaining the current policy would not reduce the safety hazards associated with the increased autonomy and complexity of the tasks service robots have to execute. Forgoing the human obedience, kill switch, and identification requirements would lead to more fatalities and greater societal apprehension of robots, which would not only shrink the customer base but also damage the current positive perception of service robots.
  • Furthermore, the kill switch, human obedience mechanism, and identification chips are added features that companies can charge their customers for. The new requirements will also significantly improve the safety of service robots. As a result, the overall societal view of robots will improve, and the increased cost will not deter customers from buying more service robots.
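The productivity figures above are roughly self-consistent, as a quick compound-growth check shows. The 3 percent annual rate comes from the text; the ten-year horizon (roughly 2015 to 2025) is our assumption about the base year.

```python
# Compound-growth check of the claim that ~3% annual GDP growth sustained
# to 2025 yields up to ~30% higher GDP. A ten-year horizon is assumed.
annual_growth = 0.03
years = 10
cumulative = (1 + annual_growth) ** years - 1
print(f"{cumulative:.0%}")  # 34%
```

Compounding 3 percent over a decade gives about 34 percent, in the same range as the "up to 30 percent" cited above.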

Team

Teun de Planque

B.S. Computer Science, Electrical Engineering
Stanford University

Jessica Zhao

B.S. Computer Science
Stanford University

Chris Elamri

B.S. Computer Science, Electrical Engineering
Stanford University