Fine-Tuning

In machine learning, fine-tuning is an approach to transfer learning in which the weights of a pre-trained model are further trained on new data.[1] Fine-tuning can be applied to the entire neural network or to only a subset of its layers, in which case the layers that are not being fine-tuned are "frozen" (not updated during the backpropagation step). Fine-tuning is common in natural language processing (NLP), especially in language modeling. Large language models (LLMs) such as OpenAI's GPT-2 can be fine-tuned on downstream NLP tasks, that is, the specific target tasks a general-purpose pre-trained model is adapted to, and produce better results than the pre-trained model can normally achieve.

https://en.wikipedia.org/wiki/Fine-tuning_(machine_learning)
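As a concrete illustration of partial fine-tuning with frozen layers, the sketch below updates only the last transformer block and final layer norm of GPT-2 while freezing everything else. It assumes the Hugging Face transformers library and PyTorch are available; the training texts and step count are placeholders for illustration, not part of the original article.

```python
# A minimal sketch of partial fine-tuning (layer freezing) on GPT-2,
# assuming the Hugging Face `transformers` and `torch` packages.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token

# Freeze every parameter, then unfreeze only the last transformer block
# and the final layer norm; frozen layers get no updates during backprop.
for param in model.parameters():
    param.requires_grad = False
for param in model.transformer.h[-1].parameters():
    param.requires_grad = True
for param in model.transformer.ln_f.parameters():
    param.requires_grad = True

# Hypothetical downstream corpus; in practice this would be task data.
texts = [
    "Fine-tuning adapts a pre-trained model to a new task.",
    "Frozen layers are skipped during the backpropagation step.",
]
batch = tokenizer(texts, return_tensors="pt", padding=True)
# Ignore padding positions in the loss (-100 is ignored by the loss).
labels = batch["input_ids"].masked_fill(batch["attention_mask"] == 0, -100)

# Only parameters with requires_grad=True are handed to the optimizer.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=5e-5)

model.train()
for step in range(3):  # a few illustrative steps, not a real schedule
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss {outputs.loss.item():.3f}")
```

Fine-tuning the entire network is the same loop with the freezing step omitted; freezing most layers reduces memory and compute, which is useful when the downstream dataset is small.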

