EgoTouch: A Novel Approach to Capturing User Input in AR/VR

Tuesday 23 September 2025

The quest for more intuitive and natural ways to interact with virtual reality (VR) and augmented reality (AR) systems has led researchers to explore new methods for capturing user input. One such approach uses the cameras already integrated into AR/VR headsets to detect on-body touch, a technique dubbed EgoTouch.

In traditional VR and AR experiences, users typically rely on handheld controllers or mid-air gestures to interact with virtual objects. These methods can feel clumsy and imprecise, however, especially for tasks that demand fine motor control. On-body touch, by contrast, turns the user’s own body into an input surface, allowing for more intuitive, tactile interactions.

EgoTouch combines computer vision with machine learning to detect the subtle changes in the skin’s appearance and texture that occur when a user touches their arm or hand with a finger. Because it requires minimal setup and no hardware beyond the headset itself, it is an attractive candidate for widespread adoption.
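As a rough intuition for what “subtle appearance changes” means here, the toy sketch below flags large frame-to-frame pixel differences inside a skin region of interest. This is emphatically not the authors’ method (EgoTouch relies on learned models); the NumPy-only differencing and the threshold are assumptions chosen purely for illustration.

```python
# A toy illustration, NOT the authors' method: flag a frame-to-frame
# appearance change in a skin region of interest (ROI) with a simple
# pixel difference. It only conveys the intuition of "subtle changes
# in the skin's appearance"; the real system uses learned models.
import numpy as np

def appearance_change_score(prev_roi: np.ndarray, curr_roi: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two grayscale ROI crops."""
    return float(np.mean(np.abs(curr_roi - prev_roi)))

rng = np.random.default_rng(0)
prev_roi = rng.uniform(0, 255, size=(64, 64)).astype(np.float32)
curr_roi = prev_roi.copy()
curr_roi[20:40, 20:40] += 30.0          # simulate a shadow or skin indentation
if appearance_change_score(prev_roi, curr_roi) > 1.0:  # arbitrary threshold
    print("possible touch-related change in the ROI")
```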

Researchers have found that EgoTouch accurately detects touch events across a range of lighting conditions, skin tones, and body motions, even while the user is walking or moving around. Beyond binary touch detection, the system provides rich input metadata, including touch force, finger identification, angle of attack, and rotation, which downstream applications can use for richer interactions.
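To make that metadata concrete, here is a hypothetical per-frame event record in Python. The TouchEvent type, its field names, and its units are invented for this article and do not come from any published EgoTouch API.

```python
# Hypothetical event record illustrating the kind of metadata the
# article describes; field names and units are assumptions, not an
# actual EgoTouch interface.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    timestamp_ms: int       # when the touch was detected
    x: float                # contact position on the skin surface
    y: float
    force: str              # e.g. "light" or "hard" press
    finger: str             # which finger made contact, e.g. "index"
    angle_of_attack: float  # finger pitch relative to the skin, degrees
    rotation: float         # finger rotation around the contact point, degrees

# A downstream UI could route such events much like a touchscreen driver:
event = TouchEvent(1732000, 0.42, 0.77, "light", "index", 35.0, 10.0)
if event.force == "hard":
    print("long-press menu")
else:
    print(f"tap at ({event.x:.2f}, {event.y:.2f})")
```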

Under the hood, the system works in two stages. First, a convolutional neural network (CNN) segments and tracks the user’s skin in the camera feed, allowing later processing to focus on the region of interest. A second CNN is then trained to recognize the subtle changes in skin appearance and texture that signal a touch event.
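The sketch below shows a minimal PyTorch version of that two-stage idea, assuming a small segmentation network followed by a binary touch classifier. Layer sizes, input resolution, and the masking step are illustrative assumptions; the paper’s actual architectures are not reproduced here.

```python
# A minimal PyTorch sketch of the two-stage pipeline described above.
# All architecture details (layer sizes, masking, class labels) are
# illustrative assumptions, not the authors' published model.
import torch
import torch.nn as nn

class SkinSegmenter(nn.Module):
    """Stage 1: predict a per-pixel skin mask from a headset camera frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),            # 1-channel mask logits
        )

    def forward(self, frame):               # frame: (B, 3, H, W)
        return torch.sigmoid(self.net(frame))  # (B, 1, H, W) skin probability

class TouchClassifier(nn.Module):
    """Stage 2: classify a skin region as touch / no-touch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)        # logits for [no_touch, touch]

    def forward(self, crop):                # crop: (B, 3, h, w)
        x = self.features(crop).flatten(1)
        return self.head(x)

# Usage: run stage 1 on the full frame, mask out non-skin pixels, then
# run stage 2 on the masked image (a real system would likely crop
# around the tracked fingertip instead of masking the whole frame).
frame = torch.rand(1, 3, 224, 224)          # stand-in for a camera frame
mask = SkinSegmenter()(frame)               # (1, 1, 224, 224)
touch_logits = TouchClassifier()(frame * (mask > 0.5))
print(touch_logits.softmax(dim=1))          # e.g. tensor([[0.48, 0.52]])
```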

The results are promising: EgoTouch maintains high accuracy even under challenging conditions. Its robustness to varied lighting and body motion makes it well-suited to real-world use, where users rarely sit still under ideal illumination.

EgoTouch has significant implications for the future of VR and AR interaction. By enabling users to interact with virtual objects through their own bodies, the technology could revolutionize the way we engage with digital content. Imagine manipulating virtual objects with precision simply by touching your arm or hand, a prospect that promises to blur the line between the physical and digital worlds.

Cite this article: “EgoTouch: A Novel Approach to Capturing User Input in AR/VR”, The Science Archive, 2025.

Virtual Reality, Augmented Reality, EgoTouch, Computer Vision, Machine Learning, Neural Network, Touch Detection, Human-Computer Interaction, Gesture Recognition, Natural User Interface

Reference: Vimal Mollyn, Chris Harrison, “EgoTouch: On-Body Touch Input Using AR/VR Headset Cameras” (2025).
