Proceedings of the 2023 International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2023)

Investigation Related to Influence of Multi-GPUs on Parallel Algorithms Based on PyTorch Distributed

Authors
Luyu Li1, *
1Department of Computer Science, Macau University of Science and Technology, Macau, 999078, China
*Corresponding author. Email: 20098535i011004@student.must.edu.mo
Corresponding Author
Luyu Li
Available Online 27 November 2023.
DOI
10.2991/978-94-6463-300-9_22
Keywords
Data Parallelism; Model Parallelism; PyTorch; Deep Learning
Abstract

This paper designs experiments to investigate the impact of different GPU counts on memory occupancy, accuracy, loss values and running time of the PyTorch distributed data parallel and model parallel modules, using the LeNet, VGG16 and ResNet models. PyTorch is a powerful deep learning framework, but as dataset sizes grow it can no longer load all the data into memory at once. Large-scale data are therefore divided into small chunks, which are processed simultaneously on multiple processors or machines, and the results are then combined. However, this can change the model's results. The experiments were therefore designed as follows: model parallel experiments with different numbers of GPUs were conducted on the LeNet and VGG16 models, and data parallel experiments with different numbers of GPUs were conducted on the ResNet model. The findings revealed that, in the model parallel experiments, the number of GPUs employed did not have a significant impact on the floating-point accuracy of the results. However, a slight variation in the accuracy of the output answers was observed among the LeNet models, with an overall average accuracy of 26.2%. In the data parallel experiments, the running time increased with the number of GPUs: the running time with one GPU was 265.173 seconds, while with five GPUs it was 347.273 seconds. These results suggest that, in this setting, using multiple GPUs for data parallelism increases running time.
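As a rough illustration of the data parallel setup the abstract describes (not the paper's exact code), the sketch below uses PyTorch's DistributedDataParallel to shard a dataset across the available GPUs, with one process per device. The ResNet-18 backbone, CIFAR-10 dataset, hyperparameters, and master address/port are assumptions chosen only to make the example self-contained.

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler
from torchvision import datasets, transforms, models


def train(rank, world_size):
    # One process per GPU; NCCL is the usual backend for GPU collectives.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # ResNet-18 stands in for the paper's ResNet model (assumption).
    model = models.resnet18(num_classes=10).cuda(rank)
    model = DDP(model, device_ids=[rank])

    # CIFAR-10 is a placeholder dataset (assumption).
    dataset = datasets.CIFAR10(
        root="./data", train=True, download=True,
        transform=transforms.ToTensor())
    # DistributedSampler splits the data into per-GPU shards.
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(5):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for images, labels in loader:
            images, labels = images.cuda(rank), labels.cuda(rank)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()   # gradients are all-reduced across GPUs here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    os.environ.setdefault("MASTER_ADDR", "localhost")  # assumed single node
    os.environ.setdefault("MASTER_PORT", "29500")
    torch.multiprocessing.spawn(train, args=(world_size,), nprocs=world_size)
```

Because every GPU runs a full copy of the model and synchronizes gradients after each backward pass, the per-step communication cost grows with the number of devices, which is consistent with the timing trend the abstract reports.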

Copyright
© 2023 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Volume Title
Proceedings of the 2023 International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2023)
Series
Advances in Computer Science Research
Publication Date
27 November 2023
ISBN
978-94-6463-300-9
ISSN
2352-538X
DOI
10.2991/978-94-6463-300-9_22

Cite this article

TY  - CONF
AU  - Luyu Li
PY  - 2023
DA  - 2023/11/27
TI  - Investigation Related to Influence of Multi-GPUs on Parallel Algorithms Based on PyTorch Distributed
BT  - Proceedings of the 2023 International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2023)
PB  - Atlantis Press
SP  - 212
EP  - 223
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6463-300-9_22
DO  - 10.2991/978-94-6463-300-9_22
ID  - Li2023
ER  -