Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)

Evaluating LoRA, QLoRA, and Full Fine-Tuning on Compact Language Models Under Limited GPU Resources

Authors
Congbo Ni1, *
1College of Arts and Science, New York University, New York, NY, USA
*Corresponding author. Email: cn2393@nyu.edu
Available Online 24 April 2026.
DOI
10.2991/978-94-6239-648-7_93
Keywords
Parameter-efficient fine-tuning; QLoRA; LoRA; LLM adaptation; sentiment analysis
Abstract

Fine-tuning language models can be surprisingly resource-intensive, even when the models themselves are small. This paper examines three approaches to adapting compact models to a simple classification task: updating all model parameters (full fine-tuning), adding low-rank adapters (LoRA), and combining adapters with quantization (QLoRA). The goal is not to maximize accuracy but to determine which method works best when computation and memory are limited. Across the experiments, the three methods behaved very differently. Full fine-tuning achieved good results with the smaller model. The adapter-based approach reduced the memory load but at times exhibited unstable loss curves. The quantized variant, however, ran without complications in all trials and made it possible to train the larger model. These results indicate that, in environments with limited GPU resources, a quantized adapter design offers a feasible trade-off among stability, efficiency, and final performance.
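The core idea behind the adapter-based methods compared in the abstract can be sketched numerically. The snippet below is a minimal illustration of a low-rank adapter (LoRA) applied to a single dense layer, not the paper's implementation: the pretrained weight W is frozen, and only two small matrices A and B are trained, with the effective weight W + (alpha / r) * B @ A. All dimensions and variable names here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the low-rank adapter (LoRA) idea for one dense layer
# W of shape (d_out, d_in). Instead of updating all of W, LoRA trains two
# small matrices A (r x d_in) and B (d_out x r); the effective weight is
# W_eff = W + (alpha / r) * B @ A. Dimensions below are illustrative.
d_in, d_out, rank, alpha = 768, 768, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, init 0

def forward(x):
    # Base path plus scaled low-rank update; because B starts at zero,
    # the adapter is a no-op before any training occurs.
    return x @ W.T + (alpha / rank) * (x @ A.T @ B.T)

full_params = W.size
lora_params = A.size + B.size
print(f"full: {full_params}, lora: {lora_params} "
      f"({100 * lora_params / full_params:.1f}% of full)")
# → full: 589824, lora: 12288 (2.1% of full)
```

This parameter reduction is what lowers the memory load reported in the abstract; QLoRA goes one step further by storing the frozen W in a 4-bit quantized format while keeping A and B in full precision.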

Copyright
© 2026 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)
Series
Advances in Computer Science Research
Publication Date
24 April 2026
ISBN
978-94-6239-648-7
ISSN
2352-538X
DOI
10.2991/978-94-6239-648-7_93

Cite this article

TY  - CONF
AU  - Congbo Ni
PY  - 2026
DA  - 2026/04/24
TI  - Evaluating LoRA, QLoRA, and Full Fine-Tuning on Compact Language Models Under Limited GPU Resources
BT  - Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)
PB  - Atlantis Press
SP  - 862
EP  - 869
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6239-648-7_93
DO  - 10.2991/978-94-6239-648-7_93
ID  - Ni2026
ER  -