International Journal of Computational Intelligence Systems

Volume 14, Issue 1, 2021, Pages 1753 - 1762

Tactile–Visual Fusion Based Robotic Grasp Detection Method with a Reproducible Sensor

Authors
Yaoxian Song1, 2, Yun Luo2, Changbin Yu3, 4, *
1School of Computer Science & Institute for Intelligent Robots, Fudan University, Shanghai, China
2School of Engineering, Westlake University, Hangzhou, China
3College of Artificial Intelligence and Big Data, Shandong First Medical University & Shandong Academy of Medical Sciences, Shandong, China
4Faculty of Engineering & the Built Environment, University of Johannesburg, Johannesburg, South Africa
*Corresponding author. Email: hzsongyaoxian@163.com
Corresponding Author
Changbin Yu
Received 15 March 2021, Accepted 26 May 2021, Available Online 11 June 2021.
DOI
10.2991/ijcis.d.210531.001
Keywords
Tactile sensor; Tactile–visual dataset; Multi-modal fusion; Deep learning; Grasp detection
Abstract

Robotic grasp detection is a fundamental problem in robotic manipulation. Conventional grasp methods, which use visual information only, can cause damage in force-sensitive tasks. In this paper, we propose a tactile–visual method with a reproducible sensor to realize fine-grained, haptic grasping. Although several tactile-based methods exist, they require expensive custom sensors tied to their own specific datasets. To overcome these limitations, we introduce a low-cost, reproducible tactile fingertip and build a general tactile–visual fusion grasp dataset containing 5,110 grasping trials. We further propose a hierarchical encoder–decoder neural network that predicts grasp points and grasp force in an end-to-end manner. We compare our method with state-of-the-art methods on the benchmark under both vision-only and tactile–visual fusion schemes, and it outperforms them in most scenarios. Furthermore, we compare our fusion method with a vision-only method in physical experiments, and the results indicate that our end-to-end method gives the robot a more fine-grained grasp ability, reducing force redundancy by 41%. Our project is available at https://sites.google.com/view/tvgd
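
As a rough illustration of the kind of pipeline the abstract describes, the PyTorch sketch below fuses a visual feature map with a tactile embedding in an encoder–decoder that outputs a grasp-point heatmap together with a scalar grasp force. The module layout, input sizes, and fusion-by-concatenation choice are illustrative assumptions, not the authors' actual hierarchical architecture; see the paper and project page for the real design.

# Minimal sketch (PyTorch) of a tactile-visual fusion encoder-decoder for
# grasp-point and force prediction. All module names, input dimensions, and
# the concatenation-based fusion are assumptions for illustration only.
import torch
import torch.nn as nn

class TactileVisualGraspNet(nn.Module):
    def __init__(self, tactile_dim=16):
        super().__init__()
        # Visual encoder: downsamples an RGB image to a feature map.
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Tactile encoder: embeds a low-dimensional fingertip reading.
        self.tactile_encoder = nn.Sequential(
            nn.Linear(tactile_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Decoder: upsamples fused features back to a grasp-point heatmap.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        # Separate head regressing a scalar grasp force from the fused features.
        self.force_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, image, tactile):
        v = self.visual_encoder(image)                     # (B, 128, H/8, W/8)
        t = self.tactile_encoder(tactile)                  # (B, 128)
        # Broadcast the tactile embedding over the spatial grid and concatenate.
        t = t[:, :, None, None].expand(-1, -1, v.shape[2], v.shape[3])
        fused = torch.cat([v, t], dim=1)                   # (B, 256, H/8, W/8)
        grasp_map = self.decoder(fused)                    # (B, 1, H, W) grasp-point heatmap
        force = self.force_head(fused)                     # (B, 1) predicted grasp force
        return grasp_map, force

# Example forward pass on dummy data.
if __name__ == "__main__":
    net = TactileVisualGraspNet(tactile_dim=16)
    img = torch.randn(2, 3, 224, 224)
    tac = torch.randn(2, 16)
    heatmap, force = net(img, tac)
    print(heatmap.shape, force.shape)  # (2, 1, 224, 224) and (2, 1)

Broadcasting the tactile embedding across the spatial grid before concatenation is one common fusion choice; the paper's hierarchical encoder–decoder may fuse the modalities differently or at multiple scales.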

Copyright
© 2021 The Authors. Published by Atlantis Press B.V.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).


Journal
International Journal of Computational Intelligence Systems
Volume-Issue
14 - 1
Pages
1753 - 1762
Publication Date
2021/06/11
ISSN (Online)
1875-6883
ISSN (Print)
1875-6891

Cite this article

TY  - JOUR
AU  - Yaoxian Song
AU  - Yun Luo
AU  - Changbin Yu
PY  - 2021
DA  - 2021/06/11
TI  - Tactile–Visual Fusion Based Robotic Grasp Detection Method with a Reproducible Sensor
JO  - International Journal of Computational Intelligence Systems
SP  - 1753
EP  - 1762
VL  - 14
IS  - 1
SN  - 1875-6883
UR  - https://doi.org/10.2991/ijcis.d.210531.001
DO  - 10.2991/ijcis.d.210531.001
ID  - Song2021
ER  -