Proceedings of the International Conference on Recent Trends in Intelligent Computing, Manufacturing, and Electronics (rTIME 2025)

Adaptive Layer Calibration: Performance Boost for Large Models

Authors
Siddhant Sukhatankar1, *
1Amazon, Arlington, VA, US
*Corresponding author. Email: siddhantsukhatankar@gmail.com
Available Online 31 March 2026.
DOI
10.2991/978-94-6239-628-9_8
Keywords
Neural Networks; LLM; Deep Learning
Abstract

This study centers on normalization layers, which have become a measurable overhead in contemporary neural network architectures. Rather than treating normalization as a separate step, the method folds the normalization weights directly into the core computation, tightening the feedback loop that keeps layer activations stable. The gains are modest but meaningful: inference time drops, and parts of the training pipeline become simpler to manage. At the scale of billions of parameters, even small per-layer savings compound into a substantial reduction in total compute cost. It is also worth noting that normalization has historically served stability rather than expressiveness, so this work does not change what the model can learn, only how efficiently it learns. In large language models, where inefficiencies compound with scale, such changes matter more than they first appear.
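The paper's specific calibration method is described in the full text; as a minimal sketch of the general idea of folding normalization weights into the core computation (all names and shapes here are hypothetical, and RMS normalization is used only as a stand-in), the per-channel norm scale can be absorbed into the following projection's weight matrix ahead of time, so inference runs one matmul instead of a scale step followed by a matmul:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 4
x = rng.normal(size=(3, d_in))

gamma = rng.normal(size=d_in)       # learned per-channel norm scale
W = rng.normal(size=(d_in, d_out))  # projection weights

def rms_norm(x, eps=1e-6):
    # Scale-free RMS normalization of the last axis.
    return x / np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + eps)

# Separate steps: normalize, scale by gamma, then project.
y_separate = (rms_norm(x) * gamma) @ W

# Folded: absorb gamma into the projection weights once, offline.
# (gamma * x) @ W == x @ (diag(gamma) @ W), so the two paths agree.
W_folded = gamma[:, None] * W
y_folded = rms_norm(x) @ W_folded

assert np.allclose(y_separate, y_folded)
```

The equivalence holds because the element-wise scale is a diagonal linear map and therefore commutes into the adjacent matrix multiply; whether the paper's method uses exactly this folding or a more elaborate calibration is not stated in the abstract.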

Copyright
© 2026 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.


Volume Title
Proceedings of the International Conference on Recent Trends in Intelligent Computing, Manufacturing, and Electronics (rTIME 2025)
Series
Advances in Engineering Research
Publication Date
31 March 2026
ISBN
978-94-6239-628-9
ISSN
2352-5401
DOI
10.2991/978-94-6239-628-9_8

Cite this article

TY  - CONF
AU  - Siddhant Sukhatankar
PY  - 2026
DA  - 2026/03/31
TI  - Adaptive Layer Calibration: Performance Boost for Large Models
BT  - Proceedings of the International Conference on Recent Trends in Intelligent Computing, Manufacturing, and Electronics (rTIME 2025)
PB  - Atlantis Press
SP  - 71
EP  - 88
SN  - 2352-5401
UR  - https://doi.org/10.2991/978-94-6239-628-9_8
DO  - 10.2991/978-94-6239-628-9_8
ID  - Sukhatankar2026
ER  -