Imagine a robot, like a gelatinous organism, that can morph its shape to squeeze through tight spaces, such as inside the human body during a medical procedure. Robots this flexible have not yet left the laboratory, but active research aims to make them a reality, with applications in healthcare, wearable technology, and industry leading the charge.
One significant hurdle is control. How does one issue commands to a robot that lacks joints and limbs to manipulate, but can instead drastically change its entire shape at will? This is the conundrum being tackled by researchers at the Massachusetts Institute of Technology.
These researchers have developed a control algorithm that can independently learn to direct the movements of a shape-shifting soft robot to complete a specific task, even when the task requires the robot to change its shape multiple times. The team has also built a simulator for testing control algorithms for deformable soft robots on a series of challenging tasks.
The method succeeded: it completed each of the eight tasks evaluated, outperforming the other algorithms tested, and it excelled in particular at multi-stage tasks.
For instance, in one simulated test, the robot had to shrink its height and sprout two small appendages to maneuver through a narrow pipe, then retract those appendages and elongate its body to open a lid at the end of the pipe.
Although this work is still in its early stages, the researchers hope the approach could pave the way for flexible, multipurpose robots that adapt their forms to accomplish a variety of tasks.
The ground-breaking work was carried out by Boyuan Chen, an electrical engineering and computer science graduate student and co-author of the paper on this approach; Suning Huang, an undergraduate student at Tsinghua University in China who completed this work during a visiting student residency at MIT; Huazhe Xu, an assistant professor at Tsinghua University; and senior author Vincent Sitzmann, an assistant professor at MIT who leads the Scene Representation Group in the Computer Science and Artificial Intelligence Laboratory.
Robots are traditionally trained through a trial-and-error technique known as reinforcement learning. This is far more challenging for shape-shifting robots: rather than just moving a limb, they can change their entire form, so the space of possible actions is vastly larger and control becomes much more complex.
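To get a sense of why control is harder, it helps to compare action-space sizes. The numbers below are purely illustrative (the article does not specify the robot's actuation resolution): a jointed arm is commanded with one value per joint, while a deformable robot is commanded with an activation for every patch of "muscle" across its body.

```python
import numpy as np

# A conventional jointed arm takes one command per joint:
# a handful of numbers per timestep.
jointed_action = np.zeros(7)          # hypothetical 7-joint arm

# A shape-shifting soft robot is instead actuated by a field of
# "muscles" spread over its whole body: one activation per grid cell.
GRID = 32                             # hypothetical 32x32 muscle grid
soft_action = np.zeros((GRID, GRID))

print(jointed_action.size)            # 7 control dimensions
print(soft_action.size)              # 1024 control dimensions
```

A trial-and-error learner must explore this much larger space, which is the core difficulty the MIT team set out to address.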
The researchers' solution was to first learn to control groups of adjacent muscles that work together, and then have the control algorithm progressively refine that coarse action plan with finer details, following a coarse-to-fine methodology.
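The paper's exact architecture is not described here, but the coarse-to-fine idea can be sketched as follows: a first policy proposes actions on a low-resolution grid (whole muscle groups), which are then upsampled to full resolution and corrected by a small residual from a finer policy. Grid sizes and the nearest-neighbor upsampling choice are assumptions for illustration.

```python
import numpy as np

def refine(coarse, residual):
    """Upsample a coarse action map and add a finer-grained correction."""
    factor = residual.shape[0] // coarse.shape[0]
    # Nearest-neighbor upsampling: each coarse cell drives a block of muscles.
    upsampled = np.kron(coarse, np.ones((factor, factor)))
    return upsampled + residual

# Stage 1: a coarse plan over 4x4 muscle groups.
coarse_plan = np.random.uniform(-1, 1, size=(4, 4))

# Stage 2: small corrections at the full 16x16 muscle resolution.
fine_residual = 0.1 * np.random.uniform(-1, 1, size=(16, 16))

actions = refine(coarse_plan, fine_residual)
print(actions.shape)  # (16, 16): full-resolution muscle activations
```

Learning the coarse plan first shrinks the effective search space; the fine stage only has to learn small deviations from it.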
A unique feature of their algorithm is that it exploits the strong correlations between nearby action points. For instance, points around the robot's "shoulder" will move in a similar pattern when it shape-shifts, while points on the robot's "leg" will also move similarly, though differently from those on the "shoulder".
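One simple way to see what "nearby action points are correlated" means in a 2D action space is to locally average an action field, so that neighboring points (say, around one "shoulder") receive similar commands. This is only a toy illustration of spatial correlation, not the authors' method; the grid size and 3x3 averaging window are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from independent per-point actions on a 16x16 grid...
raw = rng.standard_normal((16, 16))

# ...then average each point with its 3x3 neighborhood (wrapping at edges),
# so adjacent action points end up carrying similar values.
smooth = np.zeros_like(raw)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        smooth += np.roll(np.roll(raw, dy, axis=0), dx, axis=1)
smooth /= 9.0

# Correlation between horizontally adjacent points, before and after smoothing.
neighbor_corr = np.corrcoef(smooth[:, :-1].ravel(), smooth[:, 1:].ravel())[0, 1]
raw_corr = np.corrcoef(raw[:, :-1].ravel(), raw[:, 1:].ravel())[0, 1]
print(neighbor_corr, raw_corr)
```

An algorithm that builds this structure in, rather than treating every action point as independent, has far fewer effective degrees of freedom to learn.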
Looking to the future, it may be many years until shape-altering robots are deployed in the real world. Yet, the research conducted by Boyuan Chen and his team could inspire other scientists not only to study reconfigurable soft robots but also to contemplate the use of 2D action spaces for other complex control problems.
Disclaimer: The above article was written with the assistance of AI. The original sources can be found on ScienceDaily.