Self-Training: A Survey

35 Pages Posted: 24 Jun 2024

Massih-Reza Amini (Contact Author)

affiliation not provided to SSRN

Vasilii Feofanov

affiliation not provided to SSRN

Loïc Pauletto

affiliation not provided to SSRN

Liès Hadjadj

affiliation not provided to SSRN

Emilie Devijver

affiliation not provided to SSRN

Yury Maximov

Government of the United States of America - Los Alamos National Laboratory

Abstract

Semi-supervised algorithms aim to learn prediction functions from a small set of labeled training examples and a large set of unlabeled observations. Because these approaches are relevant in many applications, they have received considerable interest in both academia and industry. Among the existing techniques, self-training methods have undoubtedly attracted the most attention in recent years. These models are designed to find the decision boundary in low-density regions without making additional assumptions about the data distribution, and they use the unsigned output score of a learned classifier, or its margin, as an indicator of confidence. The working principle of self-training algorithms is to learn a classifier iteratively by assigning pseudo-labels to the unlabeled training samples whose margin exceeds a certain threshold. The pseudo-labeled examples are then added to the labeled training data, and a new classifier is trained on the enriched set. In this paper, we present self-training methods for binary and multi-class classification, their variants, and two related approaches, namely consistency-based approaches and transductive learning. We also provide brief descriptions of self-supervised learning and reinforced self-training, two distinct approaches despite their similar names. Finally, we present the most popular applications in which self-training is employed. For pseudo-labeling, fixed thresholds usually lead to subpar results, which highlights the importance of dynamic thresholding. Moreover, reducing pseudo-label noise improves generalization and class differentiation, and augmenting the initial set of labeled training samples also impacts performance. To the best of our knowledge, this is the first thorough and complete survey on self-training.
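To make the working principle concrete, the following is a minimal sketch of the generic self-training loop the abstract describes, not any specific algorithm from the survey. The helper self_train, the choice of LogisticRegression as the base classifier, and the fixed confidence threshold of 0.9 are illustrative assumptions; as the abstract notes, fixed thresholds are usually subpar and dynamic thresholding tends to perform better.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, max_iter=10):
    """Generic self-training loop: pseudo-label the unlabeled points whose
    prediction confidence (margin) exceeds a threshold, then retrain."""
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(max_iter):
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        margin = proba.max(axis=1)          # unsigned confidence score
        confident = margin >= threshold     # fixed threshold, for simplicity
        if not confident.any():
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        # Enrich the labeled set with the pseudo-labeled examples ...
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]
        # ... and train a new classifier on the enriched set.
        clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    return clf

# Toy usage: 50 labeled and 950 unlabeled examples.
X, y = make_classification(n_samples=1000, random_state=0)
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, train_size=50, random_state=0)
model = self_train(X_lab, y_lab, X_unlab)

In this sketch the margin is taken as the maximum predicted class probability, which works for both binary and multi-class classification; a dynamic variant would adjust the threshold per iteration or per class rather than keeping it fixed.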

Keywords: Semi-supervised learning, Self-training

Suggested Citation

Amini, Massih-Reza and Feofanov, Vasilii and Pauletto, Loïc and Hadjadj, Liès and Devijver, Emilie and Maximov, Yury, Self-Training: A Survey. Available at SSRN: https://ssrn.com/abstract=4875054 or http://dx.doi.org/10.2139/ssrn.4875054
