Game-Changing AI Tool Mimics Human Motion

Creating human-like movement in robots has long been a challenge, especially for complex gaits such as walking and running. However, a pioneering method now promises to overcome these hurdles, introducing an approach that reproduces human motion in unprecedented detail.

Researchers have combined central pattern generators (CPGs), the neural circuits in the spinal cord that regulate rhythmic muscle activity, with deep reinforcement learning (DRL) to recreate complex human movement. The approach not only mimics movements such as walking and running but also compensates for frequencies where motion data is unavailable, transitions smoothly from stride to sprint, and adapts to unstable environments.

The findings were published in the journal IEEE Robotics and Automation Letters on April 15, 2024.

It's no secret that reproducing the intricacy and complexity of human movement in robots is beset with challenges. Existing AI models struggle to adjust to unexpected or technically challenging environments, causing inefficiencies. Living organisms admit a multitude of valid motion solutions for a given task, whereas AI models usually converge on only one or a few. It is this inherent biological redundancy that current models find hard to replicate.

Researchers have tried to surpass these limitations through DRL. This method extends traditional reinforcement learning with deep neural networks, enabling it to handle complex tasks and learn directly from raw sensory inputs. However, the approach is not without its setbacks. Chief among them is the monumental computational cost of exploring a huge input space, especially for systems with a high degree of freedom.
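To make the reinforcement-learning idea concrete, here is a minimal policy-gradient (REINFORCE) sketch on a toy two-armed bandit, where the second arm pays off more often and the policy learns to prefer it. This is purely illustrative, with made-up payouts; real DRL replaces the two-parameter policy with a deep network over raw sensory input, which is where the computational cost described above comes from.

```python
import numpy as np

rng = np.random.default_rng(1)

theta = np.zeros(2)            # preferences over the two actions
payout = np.array([0.2, 0.8])  # hypothetical mean rewards per arm

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)                # sample an action from the policy
    r = float(rng.random() < payout[a])   # stochastic 0/1 reward
    grad = -p
    grad[a] += 1.0                        # d log pi(a) / d theta
    theta += 0.1 * r * grad               # REINFORCE update

print(softmax(theta))  # probability mass shifts toward the better arm
```

Even on this two-dimensional toy problem, thousands of trial-and-error interactions are needed; the cost scales up sharply as the action and observation spaces grow.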

Imitation learning is another method, in which a robot learns by replicating recorded human motion data. While this approach has proven successful in stable environments, its effectiveness dwindles when new or unfamiliar terrain is introduced into the mix. With imitation learning, a robot's behaviours are constrained to the narrow scope of its training data, limiting its ability to adapt and navigate.
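In its simplest form, behavioural cloning, imitation learning fits a policy to recorded state-action pairs. The sketch below uses hypothetical, randomly generated "motion data" and a linear policy, purely to illustrate the idea and its brittleness; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recorded motion data: joint states -> expert motor commands.
states = rng.normal(size=(200, 4))   # e.g. joint angles and velocities
true_w = rng.normal(size=(4, 2))
actions = states @ true_w            # toy linear "expert" demonstrations

# Behavioural cloning: least-squares fit of a linear policy to the data.
w_fit, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy reproduces the expert on states it has seen...
print(np.allclose(states @ w_fit, actions))
# ...but it has no mechanism to correct itself on situations far outside
# the demonstrations, which is the narrow-scope limitation described above.
```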

Mitsuhiro Hayashibe, a professor at Tohoku University's Graduate School of Engineering, explains, "We overcame many of the limitations of these two approaches by combining them." The research team combined imitation learning with a CPG-like controller; instead of applying deep learning to the CPGs directly, they applied it to a reflex neural network that supports the CPGs.

CPGs direct rhythmic muscle movement patterns, similar to how a conductor leads an orchestra. Animals have a reflex circuit that works alongside the CPGs, offering the right feedback to adjust speed and movement according to the terrain. Hayashibe remarks, "By adopting the structure of CPG and its reflexive counterpart, the adaptive imitated CPG (AI-CPG) method achieves remarkable adaptability and stability in motion generation while imitating human motion."
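A CPG is commonly modelled as a limit-cycle oscillator: its rhythm is self-sustained, and a feedback term can nudge its pace or shape. The sketch below uses a standard Hopf oscillator with a placeholder reflex-feedback term; it is a generic CPG model, not the AI-CPG architecture from the paper.

```python
import numpy as np

# Hopf oscillator: converges to a circular limit cycle of radius sqrt(mu),
# producing a self-sustained rhythm, like a CPG driving a gait.
mu, omega, dt = 1.0, 2 * np.pi, 0.001  # target radius^2, frequency, time step

x, y = 0.1, 0.0   # start far from the limit cycle
radii = []
for step in range(5000):                  # simulate 5 seconds (Euler steps)
    r2 = x * x + y * y
    # Reflex-like feedback (hypothetical): a sensory signal could be injected
    # here to speed up, slow down, or reshape the rhythm, e.g. on rough terrain.
    feedback = 0.0
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x + feedback
    x += dx * dt
    y += dy * dt
    radii.append(np.sqrt(r2))

# Regardless of the starting state, the oscillation settles onto the limit
# cycle: the amplitude approaches sqrt(mu) = 1.
print(round(radii[-1], 1))  # → 1.0
```

The key property, as with a biological CPG, is that the rhythm recovers on its own after a perturbation, while the feedback pathway is where terrain-dependent adjustment enters.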

This achievement sets a new benchmark in imitating human-like movement, providing excellent environmental adaptation abilities. It marks a significant milestone for generative AI technologies in robot control with potential implications across various industries.

The project was a collaboration between researchers at Tohoku University's Graduate School of Engineering and Switzerland's École Polytechnique Fédérale de Lausanne.

Disclaimer: The above article was written with the assistance of AI. The original sources can be found on ScienceDaily.