LucidGrasp: Robotic Framework for Autonomous Manipulation of Laboratory Equipment with Different Degrees of Transparency via 6D Pose Estimation

Detailed bibliography
Published in: arXiv.org (Oct 31, 2024), p. n/a
Main author: Makarova, Maria
Other authors: Trinitatova, Daria; Liu, Qian; Tsetserukou, Dzmitry
Published:
Cornell University Library, arXiv.org
Subjects: Teleoperators; Visual fields; Visual tasks; Pose estimation; Visual perception; Automation; Liquid levels; Visual perception driven algorithms; Task complexity; Robotics
Online access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3115595574
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3115595574 
045 0 |b d20241031 
100 1 |a Makarova, Maria 
245 1 |a LucidGrasp: Robotic Framework for Autonomous Manipulation of Laboratory Equipment with Different Degrees of Transparency via 6D Pose Estimation 
260 |b Cornell University Library, arXiv.org  |c Oct 31, 2024 
513 |a Working Paper 
520 3 |a Many modern robotic systems operate autonomously; however, they often lack the ability to accurately analyze the environment and adapt to changing external conditions, while teleoperation systems often require special operator skills. In the field of laboratory automation, the number of automated processes is growing; however, such systems are usually developed to perform specific tasks. In addition, many of the objects used in this field are transparent, which makes them difficult to analyze through visual channels. The contributions of this work include the development of a robotic framework with an autonomous mode for manipulating liquid-filled objects of varying degrees of transparency in complex pose combinations. The conducted experiments demonstrated the robustness of the designed visual perception system in accurately estimating object poses for autonomous manipulation and confirmed the performance of the algorithms in dexterous operations such as liquid dispensing. The proposed robotic framework can be applied to laboratory automation, since it addresses the problem of performing non-trivial manipulation tasks that require high accuracy and repeatability, including the analysis of the poses of objects with varying degrees of transparency and of liquid levels. 
653 |a Teleoperators 
653 |a Visual fields 
653 |a Visual tasks 
653 |a Pose estimation 
653 |a Visual perception 
653 |a Automation 
653 |a Liquid levels 
653 |a Visual perception driven algorithms 
653 |a Task complexity 
653 |a Robotics 
700 1 |a Trinitatova, Daria 
700 1 |a Liu, Qian 
700 1 |a Tsetserukou, Dzmitry 
773 0 |t arXiv.org  |g (Oct 31, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3115595574/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2410.07801
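
Illustration only (not part of the record): the arXiv identifier 2410.07801 taken from the 856 $u link above can be resolved against the public arXiv Atom API at export.arxiv.org to retrieve the same title, authors, and abstract programmatically. A minimal Python sketch, assuming network access and that the entry is still available:

import urllib.request
import xml.etree.ElementTree as ET

ARXIV_ID = "2410.07801"  # from the 856 $u link http://arxiv.org/abs/2410.07801
API_URL = f"http://export.arxiv.org/api/query?id_list={ARXIV_ID}"
ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by the API response

with urllib.request.urlopen(API_URL) as resp:
    feed = ET.fromstring(resp.read())

# The feed contains one <entry> per requested identifier.
entry = feed.find(f"{ATOM}entry")
title = entry.findtext(f"{ATOM}title").strip()
authors = [a.findtext(f"{ATOM}name") for a in entry.findall(f"{ATOM}author")]
abstract = entry.findtext(f"{ATOM}summary").strip()

print(title)            # title as recorded in field 245
print("; ".join(authors))
print(abstract[:200])   # first part of the abstract recorded in field 520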