
Haniye Merrikhi, Hossein Ebrahimnezhad
Volume 20, Issue 4 (11-2024)
Abstract

Robots have become integral to modern society, taking over both complex and routine human tasks. Recent advancements in depth camera technology have propelled computer vision-based robotics into a prominent field of research. Many robotic tasks, such as picking up, carrying, and using tools or objects, begin with an initial grasping step. Vision-based grasping requires the precise identification of grasp locations on objects, making the segmentation of objects into meaningful components a crucial stage in robotic grasping. In this paper, we present a system designed to detect the graspable parts of objects for a specific task. Recognizing that everyday household items are typically grasped at certain sections for carrying, we created a database of these objects and their corresponding graspable parts. Building on the success of the Dynamic Graph CNN (DGCNN) network in segmenting object components, we enhanced this network to detect the graspable areas of objects. The enhanced network was trained on the compiled database, and the visual results, along with the obtained Intersection over Union (IoU) metrics, demonstrate its success in detecting graspable regions. It achieved a grand mean IoU (gmIoU) of 92.57% across all classes, outperforming established networks such as PointNet++ in part segmentation on this dataset. Furthermore, statistical analysis using analysis of variance (ANOVA) and t-tests validates the superiority of our method.
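The abstract reports results in terms of per-part IoU and a grand mean IoU (gmIoU). As an illustration only, the sketch below shows one common way such part-segmentation metrics are computed over point clouds: per-part IoU averaged within each shape, then averaged over all test samples irrespective of object class. The function names and the exact averaging convention (including counting a part absent from both prediction and ground truth as IoU 1.0) are assumptions, not the authors' published definition.

import numpy as np

def shape_iou(pred: np.ndarray, target: np.ndarray, num_parts: int) -> float:
    """Mean per-part IoU for one point cloud.

    pred, target: integer part labels, one per point.
    A part missing from both prediction and ground truth is scored
    as 1.0 (a common convention; the paper's may differ).
    """
    ious = []
    for part in range(num_parts):
        inter = np.sum((pred == part) & (target == part))
        union = np.sum((pred == part) | (target == part))
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

def grand_mean_iou(preds, targets, num_parts: int) -> float:
    """Average the per-shape IoU over every test sample, pooling all
    object classes together (one plausible reading of 'gmIoU')."""
    return float(np.mean([shape_iou(p, t, num_parts)
                          for p, t in zip(preds, targets)]))

For example, with two-part labels over five points, grand_mean_iou([np.array([0, 0, 1, 1, 1])], [np.array([0, 1, 1, 1, 1])], num_parts=2) returns the mean of the two part IoUs (1/2 and 3/4), i.e. 0.625.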

