Vision-based Tactile Image Generation via Contact Condition-guided Diffusion Model

Saved in:
Bibliographic Details
Published in: arXiv.org (Dec 2, 2024), p. n/a
Main Author: Lin, Xi
Other Authors: Xu, Weiliang, Mao, Yixian, Wang, Jing, Lv, Meixuan, Liu, Lu, Luo, Xihui, Li, Xinming
Published:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3138992165
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3138992165 
045 0 |b d20241202 
100 1 |a Lin, Xi 
245 1 |a Vision-based Tactile Image Generation via Contact Condition-guided Diffusion Model 
260 |b Cornell University Library, arXiv.org  |c Dec 2, 2024 
513 |a Working Paper 
520 3 |a Vision-based tactile sensors, through high-resolution optical measurements, can effectively perceive the geometric shape of objects and the force information during the contact process, thus helping robots acquire higher-dimensional tactile data. Vision-based tactile sensor simulation supports the acquisition and understanding of tactile information without physical sensors by accurately capturing and analyzing contact behavior and physical properties. However, the complexity of contact dynamics and lighting modeling limits the accurate reproduction of real sensor responses in simulation, making it difficult to meet the needs of different sensor setups and reducing the reliability and effectiveness of transferring learned strategies to practical applications. In this letter, we propose a contact condition-guided diffusion model that maps RGB images of objects and contact force data to high-fidelity, detail-rich vision-based tactile sensor images. Evaluations show that the three-channel tactile images generated by this method achieve a 60.58% reduction in mean squared error and a 38.1% reduction in marker displacement error compared to existing approaches based on lighting and mechanical models, validating the effectiveness of our approach. The method is successfully applied to various types of vision-based tactile sensors and can effectively generate the corresponding tactile images under complex loads. Additionally, it demonstrates outstanding reconstruction of fine object texture features in a Montessori tactile board texture generation task. 
653 |a Image resolution 
653 |a Image reconstruction 
653 |a Color imagery 
653 |a Sensors 
653 |a Effectiveness 
653 |a Image acquisition 
653 |a Optical data processing 
653 |a Physical properties 
653 |a Error reduction 
653 |a Complexity 
653 |a Tactile sensors (robotics) 
653 |a Optical measurement 
653 |a Image processing 
653 |a Dimensional analysis 
653 |a Lighting 
653 |a Contact force 
653 |a Texture 
653 |a Optical properties 
700 1 |a Xu, Weiliang 
700 1 |a Mao, Yixian 
700 1 |a Wang, Jing 
700 1 |a Lv, Meixuan 
700 1 |a Liu, Lu 
700 1 |a Luo, Xihui 
700 1 |a Li, Xinming 
773 0 |t arXiv.org  |g (Dec 2, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3138992165/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2412.01639