Manifold Learning Method for Large Scale Dataset Based on Gradient Descent
Wang Yunhe, Gao Yuan, Xu Chao
Available Online November 2013.
- https://doi.org/10.2991/icmt-13.2013.145
- manifold learning; LLE; gradient descent; time complexity.
- Dimension reduction has been a research hotspot in recent years, especially manifold learning for high-dimensional data. Because high-dimensional data have complex nonlinear structures, many researchers focus on nonlinear methods. However, when the data scale is very large, the memory cost and running time become prohibitive. To solve this problem, we use gradient descent to search for the low-dimensional embedding, replacing the eigenvalue decomposition of a large sparse matrix in the LLE (Locally Linear Embedding) algorithm. The time complexity is lower than before and the required storage memory declines markedly. Experimental results demonstrate that our approach performs better than the original algorithm. Furthermore, our approach can be applied to other manifold learning methods and to other research fields such as information retrieval and feature extraction.
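The idea in the abstract, replacing the eigendecomposition step of LLE with gradient descent on the embedding cost, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes the reconstruction weight matrix `W` has already been computed, minimizes the standard LLE cost tr(Yᵀ M Y) with M = (I − W)ᵀ(I − W) by plain gradient steps, and uses per-step centering and column normalization (an illustrative choice) to avoid the trivial zero solution.

```python
import numpy as np

def lle_embedding_gd(W, d, n_iter=500, lr=0.01, seed=0):
    """Sketch: search a d-dim embedding Y by gradient descent on the
    LLE cost tr(Y^T M Y), M = (I - W)^T (I - W), instead of computing
    the bottom eigenvectors of M.  Hypothetical parameters, not the
    paper's exact settings."""
    n = W.shape[0]
    I = np.eye(n)
    M = (I - W).T @ (I - W)          # sparse in practice; dense here for clarity
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n, d))
    for _ in range(n_iter):
        grad = 2.0 * M @ Y           # gradient of tr(Y^T M Y) w.r.t. Y
        Y = Y - lr * grad
        Y -= Y.mean(axis=0)          # keep the embedding centered
        Y /= np.linalg.norm(Y, axis=0, keepdims=True)  # prevent collapse to 0
    return Y
```

For a large sparse `W`, the product `M @ Y` costs O(nnz · d) per iteration, which is what makes this cheaper in time and memory than a full eigendecomposition of the n × n matrix.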
- Open Access
- This is an open access article distributed under the CC BY-NC license.
Cite this article
TY  - CONF
AU  - Wang Yunhe
AU  - Gao Yuan
AU  - Xu Chao
PY  - 2013/11
DA  - 2013/11
TI  - Manifold Learning Method for Large Scale Dataset Based on Gradient Descent
BT  - 3rd International Conference on Multimedia Technology (ICMT-13)
PB  - Atlantis Press
SN  - 1951-6851
UR  - https://doi.org/10.2991/icmt-13.2013.145
DO  - https://doi.org/10.2991/icmt-13.2013.145
ID  - Yunhe2013/11
ER  -