Robots Gain New Sense: Combining Visual and Tactile Data to Understand Environment

Monday 28 July 2025

Robotics researchers have made a significant breakthrough in robotic perception that allows robots to better understand and interact with their environment. A team of scientists has developed a new approach that combines visual and tactile information, enabling robots to perceive physical properties of objects such as hardness, elasticity, and roughness.

Traditionally, robots have relied on either visual or tactile sensors to gather information about the world around them. However, these methods have limitations. Visual sensors can struggle with complex lighting conditions or subtle changes in object shape, while tactile sensors can only provide limited information about an object’s properties. By combining both types of sensors, researchers hope to create a more comprehensive understanding of the environment.

The new approach combines computer vision, tactile sensing, and a large language model to infer the physical properties of objects. Computer vision processes images from cameras, tactile sensors capture contact signals when the robot touches an object, and a large language model, trained on vast amounts of text, supplies background knowledge about object properties. Fusing these streams yields a more accurate picture of the environment than any one of them alone.
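At a high level, this kind of multimodal fusion can be thought of as encoding each sensor stream into a shared embedding space and regressing property values from the combined features. The sketch below is a minimal, hypothetical illustration with toy linear encoders; all names, dimensions, and weights are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encoders: project each modality into a shared 16-dim embedding space.
# (Feature dimensions 64 and 32 are arbitrary placeholders.)
W_vision = rng.normal(size=(16, 64))
W_tactile = rng.normal(size=(16, 32))

# Regression head mapping the fused embedding to three property estimates:
# hardness, elasticity, roughness.
W_head = rng.normal(size=(3, 32))

def infer_properties(visual_feat, tactile_feat):
    """Fuse visual and tactile features, then regress physical properties."""
    v = W_vision @ visual_feat          # (16,) visual embedding
    t = W_tactile @ tactile_feat        # (16,) tactile embedding
    fused = np.concatenate([v, t])      # (32,) joint representation
    return W_head @ fused               # (3,) [hardness, elasticity, roughness]

props = infer_properties(rng.normal(size=64), rng.normal(size=32))
print(props.shape)  # (3,)
```

In a real system the linear projections would be replaced by learned neural encoders and the regression head by a language-model-based decoder, but the core idea, mapping heterogeneous sensor data into one joint representation before prediction, is the same.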

In a recent experiment, the researchers tested their approach on 35 common household objects spanning diverse materials and shapes. The method accurately estimated physical properties such as hardness, elasticity, and roughness, outperforming both visual-only and tactile-only baselines and demonstrating a clear gain in robotic perception.

The implications of this breakthrough are vast. Robots could be used in a variety of applications, from manufacturing and logistics to healthcare and education. For example, robots could be used to inspect products for defects, or assist surgeons during medical procedures. The new approach also has the potential to improve human-robot interaction, enabling robots to better understand and respond to human commands.

While there is still much work to be done, this breakthrough marks an important step forward in robotic perception. By combining visual and tactile information, the researchers have given robots a more comprehensive understanding of their environment, paving the way for even more advanced applications in the future.

Cite this article: “Robots Gain New Sense: Combining Visual and Tactile Data to Understand Environment”, The Science Archive, 2025.

Robotics, Perception, Robotic Sensors, Computer Vision, Large Language Models, Object Properties, Hardness, Elasticity, Roughness, Robotic Interaction.

Reference: Zexiang Guo, Hengxiang Chen, Xinheng Mai, Qiusang Qiu, Gan Ma, Zhanat Kappassov, Qiang Li, Nutan Chen, “Robotic Perception with a Large Tactile-Vision-Language Model for Physical Property Inference” (2025).
