Living and growing with AI robots: our ideal cooperation partners?

Hello! This is Robert from the Moonshot PR team.

Today we bring you a special interview with Professor Toshio Fukuda, Program Director for Moonshot Goal 3: “Realization of AI robots that autonomously learn, adapt to their environment, evolve in intelligence and act alongside human beings, by 2050.”

Our aging/shrinking societies could one day be supported by AI robots, living side-by-side with humans and learning and growing as we do. These autonomous allies could help us with daily chores, work in dangerous environments such as disaster sites, and even discover their own scientific principles and solutions. By growing alongside us, AI robots will be more understanding of our needs and provide greater benefits to individuals and society.

Ryosuke Matsuya, science communicator at the National Museum of Emerging Science and Innovation (Miraikan), talked to Professor Fukuda to find out more.

Toshio Fukuda
Visiting Professor, Institute of Innovation for Future Society, Nagoya University
Program Director of Moonshot Goal 3 of the Moonshot R&D Program since 2020

Ryosuke Matsuya
Science communicator, National Museum of Emerging Science and Innovation (Miraikan)

Addressing things that are not in the data: Training "wisdom" with associative power

Matsuya: Goal 3 aims to create a society that utilizes AI robots. This concept has been around for a long time, but what are the challenges in achieving it?

Fukuda: Today's robots are not as intelligent as humans, but they can move quickly and never forget things. Because humans and robots have such different abilities, spending time together can create a sense of discomfort. For robots to coexist with humans, determining how they can think and move in the same way as humans is an important issue.

Matsuya: When did research on AI robots first begin?

Fukuda: It all started in the 1980s when an American computer scientist, Professor Christopher Langton, proposed artificial life. Subsequent work has been done under different names, leading to current research on the topic. Moving according to a programmed command is not a problem for an industrial robot, but to be active in society, it is necessary to create "smart robots" that can move in a fashion more in line with the natural world.

Matsuya: What kind of robot is a "smart robot"?

Fukuda: Even if people have never experienced something, they can cope by associating it with previous experiences. Similarly, a smart robot is one that, even in the absence of data, can make choices in a new environment by drawing on its previous training. For example, in an unfamiliar environment, it can consider "is this the way?" and make a decision.

Matsuya: AI is what creates this associative power. Please tell us about AI research as well.

Fukuda: AI research began in the 1950s. Then, in the 1980s, neural networks (NNs), which imitate the mechanism by which neurons, the nerve cells of the human brain, exchange information with each other, came onto the stage, making it possible to handle complicated problems. In 2006, Dr. Geoffrey Hinton, a cognitive psychologist and computer scientist, proposed deep learning (DL).

Matsuya: What is the difference between NN and DL?

Fukuda: With an NN, a person designs the features in advance; when a value is input, the network outputs an answer based on rules set beforehand by a person. With DL, the computer creates the rules itself through trial and error and outputs answers without the features having to be designed in advance.
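The contrast Professor Fukuda describes can be sketched in a few lines of Python. This is an illustrative toy, not the project's actual methods: on one side a person hand-picks the feature (distance from the origin) and fixes the rule; on the other, the program searches for its own rule over raw inputs by trial and error.

```python
import numpy as np

# Hand-designed workflow (the earlier NN-era approach described above):
# a person chooses the feature and fixes the decision rule in advance.
def classify_by_hand(x, y):
    radius = np.hypot(x, y)          # feature designed by a human
    return 1 if radius > 1.0 else 0  # rule fixed in advance by a person

# DL-style workflow in miniature: the program discovers its own rule
# (here, just a threshold on the raw sum of inputs) by trial and error
# against labeled data, with no hand-made feature.
def learn_threshold(samples, labels):
    best_t, best_score = 0.0, -1
    for t in np.linspace(0, 3, 301):  # trial and error over candidate rules
        preds = [1 if x + y > t else 0 for x, y in samples]
        score = sum(p == l for p, l in zip(preds, labels))
        if score > best_score:
            best_t, best_score = t, score
    return best_t

samples = [(0.1, 0.2), (0.2, 0.1), (2.0, 2.0), (1.5, 2.5)]
labels = [classify_by_hand(x, y) for x, y in samples]
t = learn_threshold(samples, labels)  # rule discovered from data alone
```

The point of the toy is only that the feature-design step disappears in the second workflow; real DL replaces the threshold search with gradient-based training of many layers.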

Matsuya: DL trains on a large amount of data to improve accuracy.

Fukuda: The problem is that learning takes a significant amount of time and effort. On top of this, while DL can guess the relationship between given input and output values, it is not adept at handling unknown input values that deviate from that particular situation.

In a "Trinity" with people and the environment: A sense of expanding physical function

Matsuya: New AI learning methods appear to be required for robots to play an active role in society.

Fukuda: For example, when studying physics, students engage in "additional learning," in which the content deepens from junior high school through high school and university. I think such sequential learning is also important for AI robots. However, when conventional DL performs new learning, it is difficult to add new knowledge because the relationship between input and output values must be reconstructed from scratch. Goal 3 aims to be the first in the world to realize sequential learning in robots, as well as transfer learning through analogy.
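The retrain-from-scratch problem can be illustrated with a deliberately tiny "model", the running mean of the data. This is a toy contrast, not a claim about the project's actual techniques: the conventional workflow recomputes over everything seen so far whenever new data arrives, while a sequential learner folds each new sample into what it already knows.

```python
def retrain_from_scratch(all_data):
    # conventional workflow: rebuild the "model" (here, just a mean)
    # from zero over the full dataset every time new data arrives
    return sum(all_data) / len(all_data)

class SequentialLearner:
    # keeps what it has already learned and updates it incrementally,
    # without revisiting past samples
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n  # incremental mean update

learner = SequentialLearner()
for x in [1.0, 2.0, 3.0, 4.0]:
    learner.update(x)  # each new sample is absorbed without starting over
```

For a mean the two workflows give identical results; the research challenge is that deep networks have no such simple incremental update, so naive additional training overwrites earlier knowledge.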

Matsuya: The content to be learned is likely to be broad.

Fukuda: For example, if there is a robot that performs civil engineering work autonomously, it must be able to understand both written explanations and drawings, as humans do.

Matsuya: Can robots learn the rules used to create the drawings?

Fukuda: Specialist materials such as blueprints cannot be read even by people unless they are taught the rules, so the robot similarly needs to be taught. If possible, the robot should even be able to propose additional explanations and ways to rearrange things.

Matsuya: Because robots are physically present, you can physically change the situation on the spot.

Fukuda: We were doing research 30 years ago to get robots to find the best route; however, instead of finding a route to avoid obstacles, we had a robot with limbs move the obstacles to create the shortest route. If it's a hindrance to robots, it's probably a hindrance to humans, isn't it? Robots and humans learn from and have a relationship with each other. This is co-existence.

Matsuya: It is necessary to take a bird's-eye view of the environment in which robots are used, as well as their relationships with people.

Fukuda: We think of this as the "trinity" of people, robots, and the environment. It would be great to develop "active sensing" in which robots work on the environment to gather information and then generalize what they have learned while interacting with people and the environment so that they can be used in other places.

Matsuya: AI is important for this generalization, isn't it?

Fukuda: We call it "coevolution": I believe that AI and robots will interact and develop in a smart way, and eventually AI robots will be able to learn and act on their own. As the robot's intelligence increases, if a person thinks "I want water," the robot will be able to detect the situation and quickly bring some. Although the robot's functions remain unchanged, it moves by grasping the person's intention so that everything flows naturally, and the robot may come to feel like a part of one's own body.

Matsuya: In this case, it seems that humans and robots can be close to each other.

Fukuda: However, there are still many issues to be solved, so we incorporated four Project Managers (PMs) in this project to proceed with the research. Professor Shigeki Sugano of Waseda University is focusing on a general-purpose AI robot that can work with people in the field of welfare and medical care, as well as housework and customer service. Project Professor Keiji Nagatani of the University of Tokyo is focusing on a collaborative AI robot that can work on behalf of humans in difficult environments including the moon and disaster sites. Associate Professor Kanako Harada of the University of Tokyo is focusing on an AI robot that conducts science experiments while discussing with scientists on an equal footing and discovers scientific principles and solutions by itself. Professor Yasuhisa Hirata of Tohoku University is focusing on an adaptable AI robot that changes its shape and function based on the user and provides appropriate services. As a result, we will develop robots based on the situation in which each person uses them.

Safety is the key issue for social demand: Proposing problem solving to the world

Matsuya: The world is likely to change if this goal can be realized; however, it is also likely that some people will abuse AI robots.

Fukuda: In addition to the potential for human abuse, robots must be able to distinguish between right and wrong, and the systems that operate robots will need to be made universal. Discussions are also underway to give robots inspection systems, in the same way that vehicle inspections are undertaken. We would like to collaborate with Moonshot Goal 1, which was introduced in the first part of the series, to find solutions to common legal and ethical issues.

Matsuya: Laws and ways of thinking vary considerably from country to country. What is the key to acceptance in society?

Fukuda: There has been research on autonomous driving for some time, but it has not become widespread even after more than 40 years. Even if a technology is possible, a single accident will immediately delay its spread. Human error is said to occur about twice in every 100 attempts, an error rate of roughly 2%. At the same level of error, society will never accept the technology. Through steady systems engineering, we will solve every small problem without omission, establish 99.9999% safety, and create AI robots that society can accept with peace of mind.
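The gap between the two figures quoted above is worth making explicit. Using the interview's own illustrative numbers (a ~2% human error rate versus a 99.9999% reliability target), a quick calculation shows how much more reliable the robot must be:

```python
# Illustrative comparison using the figures quoted in the interview.
human_error_rate = 2 / 100                  # ~2 errors per 100 attempts
target_reliability = 0.999999               # the "99.9999% safety" target
robot_error_rate = 1 - target_reliability   # about one error per million

ratio = human_error_rate / robot_error_rate
print(f"Robot must be about {ratio:,.0f}x more reliable than a human")
```

That is, the 99.9999% target corresponds to roughly a 20,000-fold reduction in the quoted human error rate.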

Matsuya: Finally, do you have a message for the next generation that will lead into the future?

Fukuda: The Moonshot Program is aiming for the future in 2050. We want young researchers to enter this field and conduct research that will shape the future.

First published on Science Japan

Moonshot Goal 3: 2050

Commentary by Program Director

Toshio Fukuda 
Visiting Professor, Institute of Innovation for Future Society, Nagoya University

【Message from PD】
Our R&D aims to achieve the following three outcomes by 2050:
(1) AI robots that autonomously make judgements and act in environments where it is difficult for humans to act.
(2) An automated AI robot system that aims to discover impactful scientific principles and solutions, by thinking and acting in the field of natural science.
(3) AI robots that humans feel comfortable with, have physical abilities equivalent to or greater than humans, and grow in harmony with human life.
The following two concepts are core to our work:
(1) Coevolution: AI technology and robot technology cooperate to improve their own performance.
(2) Self-organization: AI technology and robot technology self-modify their own knowledge and functions to adapt to their environment.