Attribute-based object grounding and robot grasp detection with spatial reasoning
2025
Enabling robots to grasp objects specified through natural language is essential for effective human–robot interaction, yet it remains a significant challenge. Existing approaches often struggle with open-form language expressions and typically assume unambiguous target objects without duplicates. Moreover, they frequently rely on costly, dense pixel-wise annotations for both object grounding and grasp configuration. We present Attribute-based Object Grounding and Robotic Grasping (OGRG), a novel framework that interprets open-form language expressions and performs spatial reasoning to ground target objects and predict planar grasp poses, even in scenes containing duplicated object instances. We investigate OGRG in two settings: (1) Referring Grasp Synthesis (RGS) under pixel-wise full supervision, and (2) Referring Grasp Affordance (RGA) using weakly supervised learning with only single-pixel grasp annotations. Key contributions include a bi-directional vision–language fusion module and the integration of depth information to enhance geometric reasoning, improving both grounding and grasping performance. Experimental results show that OGRG outperforms strong baselines in tabletop scenes with diverse spatial language instructions. In RGS, it operates at 17.59 FPS on a single NVIDIA RTX 2080 Ti GPU, enabling potential use in closed-loop or multi-object sequential grasping, while delivering superior grounding and grasp prediction accuracy compared to all the baselines considered. Under the weakly supervised RGA setting, OGRG also surpasses baseline grasp-success rates in both simulation and real-robot trials, underscoring the effectiveness of its spatial reasoning design. Project page: https://z.umn.edu/ogrg
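To make the "bi-directional vision–language fusion" idea concrete, the sketch below shows one possible PyTorch realization using cross-attention in both directions (language attending to vision and vision attending to language). The class name, dimensions, and interface are illustrative assumptions, not the paper's published implementation.

```python
# Minimal sketch of bi-directional vision-language fusion via two cross-attention
# passes. All module names and shapes here are assumptions for illustration only.
import torch
import torch.nn as nn


class BiDirectionalFusion(nn.Module):
    """Fuse visual and language tokens with cross-attention in both directions.

    Shapes (batch_first): visual tokens (B, Nv, D), language tokens (B, Nl, D).
    """

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Vision attends to language: grounds words in image regions.
        self.vis_from_lang = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Language attends to vision: refines word features with visual context.
        self.lang_from_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_l = nn.LayerNorm(dim)

    def forward(self, vis: torch.Tensor, lang: torch.Tensor):
        # Pass 1: visual queries, language keys/values.
        v_out, _ = self.vis_from_lang(query=vis, key=lang, value=lang)
        # Pass 2: language queries, visual keys/values.
        l_out, _ = self.lang_from_vis(query=lang, key=vis, value=vis)
        # Residual connections preserve the original unimodal features.
        return self.norm_v(vis + v_out), self.norm_l(lang + l_out)


if __name__ == "__main__":
    # Toy example: 196 visual tokens (e.g., a 14x14 feature map) and 12 word tokens.
    fusion = BiDirectionalFusion(dim=256, num_heads=8)
    vis = torch.randn(2, 196, 256)
    lang = torch.randn(2, 12, 256)
    fused_vis, fused_lang = fusion(vis, lang)
    print(fused_vis.shape, fused_lang.shape)
```

In a full pipeline of this kind, the fused visual tokens would typically feed a grounding/grasp head while the fused language tokens carry image-aware word features; depth could enter as an additional input channel to the visual encoder, though the exact integration in OGRG is not specified here.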
Research areas