Self-Training: A Survey
35 Pages · Posted: 24 Jun 2024
Abstract
Semi-supervised algorithms aim to learn prediction functions from a small set of labeled observations and a large set of unlabeled ones. Because these approaches are relevant to many applications, they have received considerable interest in both academia and industry. Among the existing techniques, self-training methods have undoubtedly attracted the most attention in recent years. These models are designed to find the decision boundary in low-density regions without making additional assumptions about the data distribution, and they use the unsigned output score of a learned classifier, or its margin, as an indicator of confidence. Self-training algorithms learn a classifier iteratively: pseudo-labels are assigned to the unlabeled training samples whose margin exceeds a certain threshold, and these pseudo-labeled examples then enrich the labeled training data and are used, together with the labeled training set, to train a new classifier. In this paper, we present self-training methods for binary and multi-class classification, their variants, and two related approaches, namely consistency-based approaches and transductive learning. We also provide brief descriptions of self-supervised learning and reinforced self-training, two distinct approaches despite their similar names. Finally, we present the most popular applications where self-training is employed. For pseudo-labeling, fixed confidence thresholds usually lead to subpar results, highlighting the importance of dynamic thresholding. Moreover, reducing pseudo-label noise improves generalization and class differentiation, and performance is also affected by augmenting the initial labeled training samples. To the best of our knowledge, this is the first thorough and complete survey on self-training.
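To make the iterative pseudo-labeling loop described above concrete, the following is a minimal sketch, not the survey's reference implementation: it assumes a scikit-learn-style classifier exposing predict_proba, and the function name self_train and the parameters threshold and max_iter are illustrative choices.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

def self_train(base_clf, X_l, y_l, X_u, threshold=0.9, max_iter=10):
    """Iteratively pseudo-label confident unlabeled samples and retrain."""
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    clf = clone(base_clf).fit(X_l, y_l)
    for _ in range(max_iter):
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)
        conf = proba.max(axis=1)             # confidence score (margin proxy)
        mask = conf >= threshold             # fixed threshold; the survey notes
                                             # dynamic thresholds tend to work better
        if not mask.any():
            break                            # no sample is confident enough
        pseudo = clf.classes_[proba[mask].argmax(axis=1)]
        X_l = np.vstack([X_l, X_u[mask]])    # enrich the labeled training set
        y_l = np.concatenate([y_l, pseudo])
        X_u = X_u[~mask]                     # drop the pseudo-labeled samples
        clf = clone(base_clf).fit(X_l, y_l)  # retrain on the enlarged set
    return clf

# Illustrative usage with placeholder arrays X_l, y_l, X_u:
# clf = self_train(LogisticRegression(max_iter=1000), X_l, y_l, X_u)
```

A maintained variant of this loop ships with scikit-learn as sklearn.semi_supervised.SelfTrainingClassifier, which wraps any probabilistic base estimator in the same way.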
Keywords: Semi-supervised learning, Self-training