A Survey of Pretrained Language Models for NLP Tasks

14 Pages | Posted: 1 Apr 2025

Date Written: March 06, 2022

Abstract

Pretrained language models (PLMs) have revolutionized natural language processing (NLP) by significantly improving performance across a wide range of tasks. This survey presents an overview of the evolution of PLMs, from traditional approaches to modern transformer-based models. We discuss the key components, training strategies, and major models, such as BERT, GPT, T5, and their variants. We then explore the applications of PLMs in various NLP tasks, including text classification, question answering, machine translation, and named entity recognition, and highlight challenges in fine-tuning, model efficiency, interpretability, and domain adaptation. The paper concludes by outlining future directions for PLMs, emphasizing the need for more efficient, robust, and domain-specific models.

Keywords: Pretrained Language Models, NLP, BERT, GPT, T5, Transformer, Fine-Tuning, Model Efficiency, Interpretability, Domain Adaptation

Suggested Citation

Heleen, Betty and Abubakar, Muhammad and Dodda, Suresh, A Survey of Pretrained Language Models for NLP Tasks (March 06, 2022). Available at SSRN: https://ssrn.com/abstract=5198602 or http://dx.doi.org/10.2139/ssrn.5198602

Muhammad Abubakar
Independent

Suresh Dodda
Independent
