Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)

Framework Design and Performance Comparative Analysis of Large Language Models

Authors
Mingxuan Deng1, *
1School of Engineering Science, Lappeenranta-Lahti University of Technology, Yliopistonkatu 34, 53850, Lappeenranta, Finland
*Corresponding author. Email: Mingxuan.deng@student.lut.fi
Available Online 24 April 2026.
DOI
10.2991/978-94-6239-648-7_89
Keywords
Large language models; Transformer; Model comparison; Multimodal; Resource efficiency
Abstract

With the emergence of the Transformer architecture, large language models (LLMs) have achieved breakthroughs in language understanding, reasoning, code generation, and multimodal interaction. This study systematically traces the technical evolution of mainstream LLMs over the past five years, from pre-training paradigms, architectural characteristics, and instruction fine-tuning to multimodal extension, and analyzes how models differ in data sources, training strategies, structural optimization, and task performance. Drawing on an extensive review of the literature, it summarizes the main differences between closed-source and open-source models in capability, scalability, and application potential, and argues that resource efficiency, hallucination control, safety alignment, and cross-modal consistency remain the core challenges of current development. Through a discussion of representative benchmark datasets and evaluation metric systems, the study further reveals the trade-offs among model performance, bias, safety, and interpretability. Finally, it proposes that future LLM development will deepen along the directions of efficient architectures, deployability, multimodal integration, and value alignment, providing a reference for subsequent academic research and engineering applications.

Copyright
© 2026 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)
Series
Advances in Computer Science Research
Publication Date
24 April 2026
ISBN
978-94-6239-648-7
ISSN
2352-538X
DOI
10.2991/978-94-6239-648-7_89

Cite this article

TY  - CONF
AU  - Mingxuan Deng
PY  - 2026
DA  - 2026/04/24
TI  - Framework Design and Performance Comparative Analysis of Large Language Models
BT  - Proceedings of the International Workshop on Advances in Deep Learning for Image Analysis and Computer Vision (IWADIC 2025)
PB  - Atlantis Press
SP  - 821
EP  - 828
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6239-648-7_89
DO  - 10.2991/978-94-6239-648-7_89
ID  - Deng2026
ER  -