TextGram: Towards a Better Domain-Adaptive Pretraining
Journal
Communications in Computer and Information Science
ISSN
1865-0929
Date Issued
2024-01-01
Author(s)
Hiwarkhedkar, Sharayu
Mittal, Saloni
Magdum, Vidula
Dhekane, Omkar
Joshi, Raviraj
Kale, Geetanjali
Ladkat, Arnav
Abstract
For green AI, it is crucial to measure and reduce the carbon footprint incurred while training large language models. In NLP, pre-training Transformer models requires significant computational resources. This pre-training consumes large amounts of text data to acquire the prior knowledge needed for downstream tasks. It is therefore important to select the right, domain-specific data from this vast corpus to achieve optimal results on domain-specific tasks. While training on large unsupervised corpora is expensive, the cost can be reduced by performing a data selection step before pre-training. Selecting important data reduces the storage overhead and the substantial time required to pre-train the model while maintaining accuracy. We investigate existing selection strategies and propose our own domain-adaptive data selection method, TextGram, that effectively selects essential data from large corpora. We compare and evaluate fine-tuned models on a text classification task with and without data selection, and show that the proposed strategy performs better than other selection methods.
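To make the data-selection idea in the abstract concrete, the sketch below ranks candidate documents by their n-gram overlap with a small in-domain seed set and keeps only the top fraction for pre-training. This is a minimal, hypothetical illustration of the general approach, not the paper's actual TextGram algorithm; the function names, the bigram-overlap score, and the keep_ratio parameter are assumptions introduced here for clarity.

```python
from collections import Counter
from typing import List

def ngrams(tokens: List[str], n: int = 2) -> Counter:
    """Count the n-grams in a token sequence."""
    return Counter(zip(*(tokens[i:] for i in range(n))))

def domain_overlap_score(doc: str, domain_ngrams: Counter, n: int = 2) -> float:
    """Fraction of the document's n-grams that also occur in the
    in-domain n-gram vocabulary (a simple proxy for domain relevance)."""
    doc_ngrams = ngrams(doc.lower().split(), n)
    if not doc_ngrams:
        return 0.0
    hits = sum(c for g, c in doc_ngrams.items() if g in domain_ngrams)
    return hits / sum(doc_ngrams.values())

def select_pretraining_data(corpus: List[str], domain_seed: List[str],
                            keep_ratio: float = 0.1, n: int = 2) -> List[str]:
    """Keep the top keep_ratio fraction of the corpus, ranked by
    n-gram overlap with the in-domain seed documents."""
    domain_ngrams = Counter()
    for doc in domain_seed:
        domain_ngrams.update(ngrams(doc.lower().split(), n))
    scored = sorted(corpus,
                    key=lambda d: domain_overlap_score(d, domain_ngrams, n),
                    reverse=True)
    k = max(1, int(len(scored) * keep_ratio))
    return scored[:k]

if __name__ == "__main__":
    # Hypothetical example: a medical seed set selects the clinical sentence
    # over the finance sentence from a mixed general corpus.
    seed = ["the patient was diagnosed with acute myocardial infarction"]
    corpus = [
        "the patient presented with chest pain and was diagnosed promptly",
        "the stock market closed higher on strong earnings reports",
    ]
    print(select_pretraining_data(corpus, seed, keep_ratio=0.5))
```

In this sketch the selection cost is linear in the corpus size plus a sort, which is what makes filtering before pre-training cheaper than training on the full unsupervised corpus.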
Volume
2046 CCIS
Subjects