Showing 3 results for Point Cloud
Julian Herrera-Benavidez, Cesar Pachón-Suescún, Robinson Jimenez-Moreno
Volume 20, Issue 4 (11-2024)
Abstract
This paper presents the design and results of using a deep learning algorithm for robotic manipulation in object handling tasks in a virtual industrial environment. The simulation tool used is V-REP, and the environment corresponds to a production line consisting of a conveyor belt and a SCARA-type robot manipulator. The main contribution of this work is the integration of a depth camera mounted on the robot and the computation of gripping coordinates by identifying and locating three types of objects of interest, randomly placed on the conveyor belt, with a Faster R-CNN. The results show that the system performs the indicated activities, achieving a classification accuracy of 97.4% and a mean average precision of 0.93, which enabled correct detection and manipulation of the objects.
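The abstract does not describe how the gripping coordinates are computed from the depth camera, but a common approach is to back-project the centre of a detected bounding box through the pinhole camera model. The sketch below illustrates that step; the function name, the intrinsics (fx, fy, cx, cy), and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def grasp_point_from_detection(bbox, depth_image, fx, fy, cx, cy):
    """Back-project the centre of a detected bounding box into 3D
    camera coordinates using the pinhole model.

    bbox: (x_min, y_min, x_max, y_max) in pixels, e.g. from a Faster R-CNN.
    depth_image: HxW array of metric depth aligned with the RGB frame.
    fx, fy, cx, cy: camera intrinsics (focal lengths and principal point).
    """
    x_min, y_min, x_max, y_max = bbox
    u = int((x_min + x_max) / 2)        # pixel column of the box centre
    v = int((y_min + y_max) / 2)        # pixel row of the box centre
    z = float(depth_image[v, u])        # metric depth at the centre pixel
    # Pinhole back-projection: pixel (u, v) at depth z -> camera-frame (x, y, z)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical usage: a 640x480 depth frame and one detected object.
depth = np.full((480, 640), 0.75)       # flat scene 0.75 m from the camera
grasp = grasp_point_from_detection((300, 200, 340, 260), depth,
                                   fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(grasp)                            # camera-frame gripping coordinates (m)
```

The resulting point is in the camera frame; a real pipeline would still transform it into the robot's base frame before commanding the SCARA manipulator.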
Haniye Merrikhi, Hossein Ebrahimnezhad
Volume 20, Issue 4 (11-2024)
Abstract
Robots have become integral to modern society, taking over both complex and routine human tasks. Recent advancements in depth camera technology have propelled computer vision-based robotics into a prominent field of research. Many robotic tasks, such as picking up, carrying, and using tools or objects, begin with an initial grasping step. Vision-based grasping requires the precise identification of grasp locations on objects, making the segmentation of objects into meaningful components a crucial stage in robotic grasping. In this paper, we present a system designed to detect the graspable parts of objects for a specific task. Recognizing that everyday household items are typically grasped at certain sections for carrying, we created a database of these objects and their corresponding graspable parts. Building on the success of the Dynamic Graph CNN (DGCNN) network in segmenting object components, we enhanced this network to detect the graspable areas of objects. The enhanced network was trained on the compiled database, and the visual results, along with the Intersection over Union (IoU) metrics obtained, demonstrate its success in detecting graspable regions. It achieved a grand mean IoU (gmIoU) of 92.57% across all classes, outperforming established networks such as PointNet++ in part segmentation on this dataset. Furthermore, statistical analyses using analysis of variance (ANOVA) and t-tests validate the superiority of our method.
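The abstract reports per-class and grand mean IoU (gmIoU) without stating the exact aggregation. One plausible reading, sketched below, is the mean of per-class IoUs over point-wise part labels, skipping classes absent from a sample; the toy labels and the two-class setup are invented for illustration only.

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class IoU for point-wise part labels.

    pred, gt: integer arrays of shape (N,) with one part label per point.
    Returns a list of IoU values, with NaN for classes absent from both arrays.
    """
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))   # points labelled c in both
        union = np.sum((pred == c) | (gt == c))   # points labelled c in either
        ious.append(inter / union if union > 0 else np.nan)
    return ious

# Hypothetical usage: 6 points, 2 part classes (0 = body, 1 = graspable part).
gt   = np.array([0, 0, 1, 1, 1, 0])
pred = np.array([0, 0, 1, 1, 0, 0])
ious = per_class_iou(pred, gt, num_classes=2)
gmiou = np.nanmean(ious)      # one reading of a "grand mean" across classes
print(ious, gmiou)
```

Averaging per-class IoUs, rather than pooling all points, prevents a large "body" class from masking poor performance on the smaller graspable regions.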
Humairah Mansor, Shazmin Aniza Abdul Shukor, Razak Wong Chen Keng, Nurul Syahirah Khalid
Volume 21, Issue 2 (6-2025)
Abstract
Building fixtures such as lighting are important to model, especially when a high level of modelling detail is required for planning indoor renovation. LiDAR is often used to capture these details because of its capability to produce dense information. However, this results in a large amount of data that must be processed and requires a dedicated method, particularly for detecting lighting fixtures. This work proposes a method named Size Density-Based Spatial Clustering of Applications with Noise (SDBSCAN), which detects lighting fixtures by calculating the size of the clusters and classifying them, extracting the clusters that belong to lighting fixtures. It is based on Density-Based Spatial Clustering of Applications with Noise (DBSCAN), with geometrical features such as size incorporated to detect and classify the fixtures. The detection results on the raw point cloud data are validated using the F1-score and IoU to assess the accuracy of the predicted object classification and the positions of the detected fixtures. The results show that the proposed method successfully detects lighting fixtures, with scores above 0.9. The developed algorithm is expected to detect and classify fixtures from any 3D point cloud data representing buildings.
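The abstract describes SDBSCAN only at a high level. One plausible reading, sketched below, is DBSCAN clustering followed by an axis-aligned bounding-box size filter; the eps, min_samples, and size thresholds are placeholder values, not the authors' parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_fixtures_by_size(points, eps=0.1, min_samples=20,
                            size_min=(0.2, 0.2, 0.02),
                            size_max=(1.5, 1.5, 0.3)):
    """DBSCAN clustering followed by a size-based classification step.

    points: (N, 3) array of XYZ coordinates (e.g. ceiling-level points).
    size_min / size_max: axis-aligned bounding-box extents (metres) a
    cluster must fall between to be classified as a lighting fixture.
    Returns a boolean mask over the input points.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    mask = np.zeros(len(points), dtype=bool)
    for lab in set(labels) - {-1}:                # -1 marks DBSCAN noise
        cluster = points[labels == lab]
        extent = cluster.max(axis=0) - cluster.min(axis=0)   # cluster size
        if np.all(extent >= size_min) and np.all(extent <= size_max):
            mask[labels == lab] = True            # size matches a fixture
    return mask
```

Filtering on cluster extent rather than point count would keep the classification largely independent of scan density; the abstract's emphasis on size as the discriminating geometrical feature suggests a filter of this kind.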