Tuesday 25 February 2025
A team of researchers has developed a new method for recognizing gestures in historical artworks, with a focus on smell-related movements. The approach combines visual features of the depicted person with contextual information from the surrounding image to improve recognition accuracy.
The dataset used for this study, called SniffyArt, consists of 1941 images of people performing various gestures related to smelling, such as sniffing, holding their nose, and drinking from a cup. The images are taken from historical artworks, including paintings and prints from the 16th to the 19th centuries.
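The article does not describe the annotation format in detail, so the snippet below is only a minimal sketch assuming a COCO-style JSON file in which each annotated person carries a gesture label; the file name and the field names (images, annotations, gesture) are hypothetical.

```python
import json
from collections import Counter

# Hypothetical COCO-style annotation file; the real SniffyArt layout may differ.
ANNOTATION_FILE = "sniffyart_annotations.json"

with open(ANNOTATION_FILE) as f:
    data = json.load(f)

# Each annotation is assumed to describe one person with a gesture label
# such as "sniffing", "holding the nose", or "drinking".
gesture_counts = Counter(ann["gesture"] for ann in data["annotations"])

print(f"{len(data['images'])} images, {len(data['annotations'])} annotated persons")
for gesture, count in gesture_counts.most_common():
    print(f"  {gesture}: {count}")
```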
The researchers used two main approaches for gesture recognition: local features and global image context. Local features focus on the specific region of the image where the person is performing the gesture, while global image context considers the entire image, including the environment and other objects present in the scene.
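The exact fusion architecture is not spelled out in the article; the PyTorch sketch below shows one straightforward way to combine the two signals, with one backbone for the person crop and one for the full artwork, and their pooled features concatenated before classification. The class name, the choice of ResNet-50, and the use of separate rather than shared backbones are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ContextAwareGestureClassifier(nn.Module):
    """Fuse local person-crop features with global image context."""

    def __init__(self, num_gestures: int):
        super().__init__()
        # Separate backbones for the person crop and the whole artwork;
        # in practice these would start from pretrained weights (see the
        # transfer-learning sketch further below).
        self.local_backbone = resnet50(weights=None)
        self.global_backbone = resnet50(weights=None)
        feat_dim = self.local_backbone.fc.in_features  # 2048 for ResNet-50
        self.local_backbone.fc = nn.Identity()
        self.global_backbone.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat_dim, num_gestures)

    def forward(self, person_crop: torch.Tensor, full_image: torch.Tensor) -> torch.Tensor:
        local_feat = self.local_backbone(person_crop)    # (batch, 2048)
        global_feat = self.global_backbone(full_image)   # (batch, 2048)
        fused = torch.cat([local_feat, global_feat], dim=1)
        return self.classifier(fused)

# Forward pass on dummy 224x224 RGB tensors to check the shapes.
model = ContextAwareGestureClassifier(num_gestures=4)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```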
To test their method, the team used four different backbones (ResNet-50, HRNet-W32, ResNet-101, and SwinV2) and compared models that rely only on local person features against models that also incorporate global image context. They found that adding contextual information significantly improved accuracy, particularly for gestures related to smelling.
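The article does not say how the backbones were instantiated; one convenient way to run such a comparison is through the timm library, which exposes all four architecture families by name. The specific SwinV2 variant below is a guess, since the article only mentions "SwinV2", and the gesture count is illustrative.

```python
import timm
import torch

# Backbone names as exposed by the timm library; the exact SwinV2 variant
# used in the study is not specified, so a small one is chosen here.
BACKBONES = {
    "ResNet-50": "resnet50",
    "ResNet-101": "resnet101",
    "HRNet-W32": "hrnet_w32",
    "SwinV2": "swinv2_tiny_window8_256",
}
NUM_GESTURES = 4  # illustrative label count

for label, name in BACKBONES.items():
    model = timm.create_model(name, pretrained=False, num_classes=NUM_GESTURES)
    cfg = model.default_cfg
    dummy = torch.randn(1, *cfg["input_size"])  # e.g. (3, 224, 224) or (3, 256, 256)
    with torch.no_grad():
        logits = model(dummy)
    n_params = sum(p.numel() for p in model.parameters()) / 1e6
    print(f"{label}: {n_params:.1f}M params, output shape {tuple(logits.shape)}")
```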
One of the main challenges faced by the researchers was dealing with the limited size of the dataset. To address this, they used transfer learning to fine-tune pre-trained models on their dataset. This allowed them to leverage knowledge from larger datasets and adapt it to the specific task at hand.
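The study's exact fine-tuning recipe is not given; the sketch below shows the general transfer-learning pattern the paragraph describes: load ImageNet-pretrained weights, swap in a small gesture-classification head, freeze most of the backbone to limit overfitting on a small dataset, and train the remaining layers at a low learning rate. The hyperparameters and the four-class label set are illustrative, not those from the study.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

NUM_GESTURES = 4  # illustrative; the real label set comes from SniffyArt

# Start from ImageNet-pretrained weights (transfer learning).
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)

# Replace the 1000-class ImageNet head with a small gesture head.
model.fc = nn.Linear(model.fc.in_features, NUM_GESTURES)

# Freeze everything except the last residual stage and the new head,
# a common way to reduce overfitting on a small dataset.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4, weight_decay=1e-2
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data; a real loop would iterate
# over a DataLoader built from the SniffyArt annotations.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_GESTURES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy training step, loss = {loss.item():.3f}")
```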
The results of this study have significant implications for digital humanities and computational heritage. By developing more accurate methods for recognizing gestures in artworks, researchers can gain a deeper understanding of the cultural and historical context in which these images were created.
Furthermore, this approach could be used to create interactive exhibits that allow visitors to explore historical artworks in new ways. For example, a museum visitor could use a tablet or smartphone to identify specific gestures depicted in an artwork, such as a person sniffing a flower or holding their nose while drinking from a cup.
The researchers plan to continue refining their method and exploring its applications in digital humanities. They also hope to expand the SniffyArt dataset to include more images and annotations, which will enable even more accurate gesture recognition and analysis.
Overall, this study demonstrates the potential of combining computer vision techniques with contextual information to improve accuracy in gesture recognition. Its findings have important implications for our understanding of historical artworks and could lead to new and innovative ways of engaging with cultural heritage.
Cite this article: “Unwrapping the Secrets of Historical Artworks: A Novel Approach to Gesture Recognition”, The Science Archive, 2025.
Computer Vision, Gesture Recognition, Historical Artworks, Smell-Related Movements, SniffyArt Dataset, Contextual Information, Transfer Learning, Digital Humanities, Computational Heritage, Cultural Heritage







