Human Perceptual Weight Judgment
Presentation | Report | Dataset GitHub Page
Exploring the dynamic interplay between human actions and object properties, our research delves into the captivating realm of computer vision and computer graphics. Inspired by Zheng’s groundbreaking work in 2020, we set out to answer a fascinating question: Can we decipher hidden object characteristics solely from the motion of human interaction, without directly observing the objects themselves?
Intrigued by this methodology, which unveiled latent object properties through the analysis of 3D skeleton sequences, we embarked on our own experiment. Following Zheng’s lead, we focused on the ‘Lifting Box Experiment,’ an ingenious approach to estimating the weight of an object from human motion during interaction. Participants lifted a box across various scenarios, unaware of the changing weight concealed within.
Drawing inspiration from Hamilton’s 2005 study, which highlighted the importance of the early part of the lift movement in weight perception, we explored different facets of human-object interaction. Petrov’s 2023 research further underscored the significance of specific body points, particularly the hands, feet, and head, during object interaction.
Our hypothesis emerged from this rich background: the lower part of the legs, specifically the feet, plays a pivotal role in human perceptual weight judgment. To test this, we conducted an experiment using a carefully curated dataset. Subjects lifted boxes of varying weights, and observers judged those weights from full-body videos alongside cropped versions that deliberately hid the feet.
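To give a concrete feel for the stimulus preparation, here is a minimal sketch of how such cropped versions can be produced by trimming the bottom of each frame with OpenCV. The file names and crop ratio are hypothetical placeholders, not our exact pipeline:

```python
import cv2

# Hypothetical file names; the real stimuli live in our GitHub repository.
SRC = "lift_full_body.mp4"
DST = "lift_feet_hidden.mp4"
KEEP_RATIO = 0.8  # assumed: keep the top 80% of each frame, hiding the feet

cap = cv2.VideoCapture(SRC)
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
crop_height = int(height * KEEP_RATIO)

out = cv2.VideoWriter(DST, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, crop_height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame[:crop_height, :])  # drop the bottom rows where the feet appear

cap.release()
out.release()
```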
To dive deeper into the experiment’s details: we recruited participants aged 23 to 30. Seated at a high-resolution monitor with a numpad for responses, each participant completed 50 trials. The videos were presented in randomized order to guard against order effects. Each participant’s accuracy rate and average response time were recorded and shared, fostering a sense of healthy competition.
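Trial sequencing and response collection were ultimately handled by OpenSesame (described below), but the core logic is easy to sketch in plain Python. The clip names, weight labels, and response prompt here are illustrative stand-ins, not the actual design:

```python
import csv
import random
import time

# Illustrative trial list: 25 clips, each shown full-body and with feet hidden.
clips = [(f"clip_{i:02d}", ["light", "heavy"][i % 2]) for i in range(25)]
trials = [(f"{name}_{cond}.mp4", cond, weight)
          for name, weight in clips
          for cond in ("full_body", "feet_hidden")]
random.shuffle(trials)  # videos presented in randomized order, 50 trials total

with open("responses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["video", "condition", "true_weight", "response", "rt_ms"])
    for video, condition, true_weight in trials:
        start = time.monotonic()
        # Stand-in for playing the clip and reading a numpad key in OpenSesame.
        response = input(f"{video} -> weight judgment (light/heavy): ")
        rt_ms = round((time.monotonic() - start) * 1000)
        writer.writerow([video, condition, true_weight, response, rt_ms])
```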
OpenSesame, our chosen experiment builder, facilitated the seamless administration of the experiment, presentation of video stimuli, and data collection. Our commitment to transparency is evident in the availability of our full dataset and code on our GitHub repository.
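Once the logs are written out, summarizing them takes only a few lines. Here is a sketch with pandas; the column names follow the illustrative logger above and may differ from the actual log format in our repository:

```python
import pandas as pd

df = pd.read_csv("responses.csv")  # assumed log from the sketch above
df["correct"] = df["response"] == df["true_weight"]

# Accuracy rate and average response time per condition, to compare
# full-body clips against the feet-hidden crops.
summary = df.groupby("condition").agg(
    accuracy=("correct", "mean"),
    mean_rt_ms=("rt_ms", "mean"),
)
print(summary)
```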
In essence, our work contributes to the evolving narrative of human-object interaction perception, shedding light on the nuanced role of the lower limbs in weight judgment. Explore the details of our experiment, follow our journey on GitHub, and join us in unraveling the mysteries of perceptual weight judgment.
Our small but mighty participant pool has given us a taste of the exciting possibilities. Imagine expanding our group to uncover even richer insights. We’re considering everything from exploring facial expressions in weight judgments to the influence of color scenes on our perceptions.
We’re diving into eye-tracking adventures and envisioning a future where virtual reality meets real-world interaction! And guess what? You can be part of this journey too.
Our hope is that this little peek into our world sparks your curiosity. The data is available in our GitHub repository.