Image retrieval based on ResNet and ITQ
Guijun Wang, Baohua Qiang, Xianchun Zou, Jinyun Lu
Available Online March 2018.
- https://doi.org/10.2991/mecae-18.2018.96
- Keywords: Learning to Hash, Deep Residual Network, Iterative Quantization.
- In recent years, more and more hash learning methods have been applied to solve large-scale vision problems. It has been shown that learning hash functions with supervised information can boost hashing quality. However, state-of-the-art image retrieval hashing methods based on hand-crafted visual features lack learning ability: their image expression power is weak and the efficiency of large-scale image retrieval is low. In this paper, we propose a new supervised hashing framework based on deep Residual Networks and Iterative Quantization hashing. Firstly, we exploit the learning ability of a deep residual network to mine the inherent hidden relationships of image content, extract deep feature descriptors, and strengthen the visual expression of images. Secondly, Iterative Quantization hashing is applied to the high-dimensional image features to map them into a low-dimensional Hamming space and obtain compact hash codes. Finally, image retrieval is performed in the low-dimensional Hamming space. Experimental results on MNIST, CIFAR-10, CIFAR-100, and Caltech 256 show that the expression ability of the visual features is effectively improved and that image retrieval performance is substantially boosted compared with related methods.
- Open Access
- This is an open access article distributed under the CC BY-NC license.
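The second step of the abstract, Iterative Quantization (ITQ), alternates between binarizing rotated features and solving an orthogonal Procrustes problem for the rotation. A minimal sketch of that alternation, assuming zero-centered, PCA-reduced features `X` as input (the function name `itq` and its parameters are illustrative, not from the paper):

```python
import numpy as np

def itq(X, n_iter=50, seed=0):
    """Sketch of Iterative Quantization: learn an orthogonal rotation R
    so that sign(X @ R) yields compact binary codes.

    X is assumed to be an (n, c) matrix of zero-centered, PCA-reduced
    features (e.g. deep descriptors from a residual network).
    """
    rng = np.random.default_rng(seed)
    c = X.shape[1]
    # Start from a random orthogonal rotation.
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))
    for _ in range(n_iter):
        # Fix R, update the binary codes by elementwise sign.
        B = np.sign(X @ R)
        # Fix B, update R via the orthogonal Procrustes solution:
        # maximize tr(R^T X^T B) with SVD of X^T B.
        U, _, Vt = np.linalg.svd(X.T @ B)
        R = U @ Vt
    return np.sign(X @ R), R
```

Retrieval then reduces to ranking database codes by Hamming distance to the query's code, which is fast because the codes are short binary vectors.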
Cite this article
TY  - CONF
AU  - Guijun Wang
AU  - Baohua Qiang
AU  - Xianchun Zou
AU  - Jinyun Lu
PY  - 2018/03
DA  - 2018/03
TI  - Image retrieval based on ResNet and ITQ
PB  - Atlantis Press
SP  - 491
EP  - 496
SN  - 2352-5401
UR  - https://doi.org/10.2991/mecae-18.2018.96
DO  - https://doi.org/10.2991/mecae-18.2018.96
ID  - Wang2018/03
ER  -