What is fine-tuning in machine learning and AI?
Fine-tuning is the process of taking a pretrained machine learning model and further training it on a smaller, targeted data set. The aim of fine-tuning is to maintain the original capabilities of a pretrained model while adapting it to suit more specialized use cases.
Building on top of an existing sophisticated model through fine-tuning enables machine learning developers to create effective models for specific use cases more efficiently. This approach is especially beneficial when computational resources are limited or relevant data is scarce.
The performance of a fine-tuned model can surpass that of the original pretrained model on the specific tasks for which it was fine-tuned. For example, a business incorporating generative AI into customer support might train a large language model (LLM) on its product information, policies and past customer interactions. This enterprise-specific training helps the fine-tuned model produce more useful, relevant responses compared with its more general pretrained counterpart.
How does fine-tuning work?
Fine-tuning begins with an existing model that has already been trained on a large, diverse data set, learning a wide range of features and patterns. For instance, a pretrained image recognition model might be trained on millions of images, ranging from landscapes to household objects to people.
As part of this initial training, the pretrained model learns to generalize by identifying underlying patterns and features in its training data. Over time, the model becomes able to correctly interpret new input. A large image model like this would gradually learn to detect whether an image contains a bird after analyzing thousands of images of birds.
But despite their impressive generalization abilities, off-the-shelf pretrained models do not always work well for niche use cases. The aforementioned model trained on general images might recognize a bird, broadly speaking, but struggle to accurately distinguish among species -- a problem when developing an app to help bird-watchers identify their sightings, for example.
Building a comprehensive image processing model from scratch for such a niche task would be computationally intensive, expensive and likely beyond the means of a small app developer. Fine-tuning plays a crucial role in such scenarios, taking advantage of the extensive foundational learning of pretrained models and adapting that baseline knowledge for specific tasks. In this way, fine-tuning strikes a balance between general knowledge and task-specific expertise.
To start fine-tuning a machine learning model, the model developer builds or selects a smaller, specialized data set targeted to their use case, such as a collection of bird photos. Although these fine-tuning data sets might comprise hundreds or thousands of data points, they are still generally much smaller than the original model's training data set.
After acquiring and preprocessing this additional data, the developer further trains -- or fine-tunes -- the pretrained model. The early layers of the neural network, which capture basic features such as edges and textures in images or general word embeddings in text, typically remain unchanged, or "frozen." Later layers, in contrast, are adjusted or added to capture the new data and better match the task at hand.
This process aims to balance retaining the model's valuable foundational knowledge with improving its performance on the fine-tuning use case. To this end, model developers often set a lower learning rate -- a hyperparameter that describes how much a model's weights are adjusted during training. Setting a lower learning rate during fine-tuning helps prevent drastic changes to the already learned weights, ensuring the model preserves its existing knowledge.
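The interplay of frozen layers and a low learning rate can be sketched in a few lines of plain Python. This is a deliberately simplified toy example, not a real training loop -- in practice a framework such as PyTorch or Keras handles this, and the layer weights, gradients and `fine_tune_step` function below are all hypothetical illustrations.

```python
def fine_tune_step(weights, frozen, grads, lr=0.001):
    """Apply one gradient-descent step, skipping frozen layers.

    A deliberately low learning rate (lr) keeps each update small,
    so the pretrained weights are not drastically overwritten.
    """
    return [
        w if is_frozen else w - lr * g
        for w, g, is_frozen in zip(weights, grads, frozen)
    ]

# Pretrained weights for a hypothetical four-layer model.
pretrained = [0.5, -1.2, 0.8, 0.3]
# Freeze the first two (early) layers; adapt only the last two.
frozen = [True, True, False, False]
# Gradients computed from the fine-tuning data set (made-up values).
grads = [0.9, -0.4, 0.2, -0.6]

updated = fine_tune_step(pretrained, frozen, grads, lr=0.001)
print(updated)  # early layers unchanged; later layers nudged slightly
```

Note that the frozen layers come back unchanged, while the unfrozen layers move only a small distance from their pretrained values -- exactly the balance described above.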
What are the risks and benefits of fine-tuning?
As with any machine learning technique, fine-tuning a model has both benefits and drawbacks.
The key benefits of fine-tuning include the following:
- Cost and resource efficiency. Fine-tuning a pretrained model is generally much faster and more compute-efficient than training a model from scratch, which in turn lowers costs and eases infrastructure requirements.
- Better performance on narrow use cases. Fine-tuned pretrained models, with their combination of broad foundational learning and task-specific training, can achieve high performance in specialized use cases. This is especially useful in scenarios where task-specific data is limited.
- Democratization of machine learning capabilities. Fine-tuning helps make advanced machine learning models more accessible for individuals and organizations with limited compute and financial resources. Even smaller organizations that would be unable to build a model from scratch can adapt pretrained models for a range of applications.
However, fine-tuning also comes with a number of risks and challenges, including the following:
- Overfitting. A common problem when working with small data sets, overfitting occurs when a machine learning model hews too closely to its training data and learns irrelevant features, or "noise," causing poor performance on new, unseen data. Strategies such as data augmentation, regularization and incorporating dropout layers can help mitigate this limitation.
- Balancing new and previously learned knowledge. There is some risk that the fine-tuned model will forget the general knowledge acquired during pretraining, especially if the new data differs significantly from the original data. Freezing too many layers can prevent the model from adapting well to the new task, while freezing too few risks losing important pre-learned features.
- Reliance on pretrained models. Because fine-tuning depends so heavily on the pretrained model, any flaws or limitations in that model can affect its fine-tuned counterpart. For example, if the pretrained model exhibits biases or security vulnerabilities, those flaws could persist or even get worse in the fine-tuned model if not corrected before or during fine-tuning.
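Of the overfitting mitigations mentioned above, dropout is simple enough to sketch directly. The toy function below is a hypothetical pure-Python illustration of "inverted" dropout -- real frameworks provide this as a built-in layer (for example, a dropout layer in PyTorch or Keras).

```python
import random

def dropout(activations, p=0.5, training=True, rng=None):
    """Inverted dropout: during training, randomly zero a fraction p of
    activations and scale the survivors by 1/(1-p), so the expected value
    of each activation stays the same. At inference time it is a no-op."""
    if not training or p == 0.0:
        return list(activations)
    rng = rng or random.Random()
    scale = 1.0 / (1.0 - p)
    return [0.0 if rng.random() < p else a * scale for a in activations]

acts = [0.2, 0.7, 0.1, 0.9]
dropped = dropout(acts, p=0.5, rng=random.Random(0))
print(dropped)  # some activations zeroed, the rest doubled

# At inference time, activations pass through unchanged.
assert dropout(acts, training=False) == acts
```

Randomly disabling units this way forces the model not to rely on any single feature, which discourages it from memorizing noise in a small fine-tuning data set.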
Real-world applications for fine-tuning
Fine-tuning has many possible use cases in real-world settings. The following are a few examples:
- Customer service. A company could fine-tune a general LLM on a data set of previous customer support interactions specific to that industry or organization. The fine-tuned chatbot could then respond more effectively to users' questions about the company's industry and products.
- Retail and e-commerce. An e-commerce platform seeking to improve its product recommendation engine could fine-tune a pretrained model on the company's user interaction data, such as purchase history and user ratings. This fine-tuned model could then offer users more personalized, accurate product recommendations.
- Healthcare and medicine. Medical researchers investigating a rare condition could fine-tune a pretrained image processing model on a small data set of disease-specific CT scans. The fine-tuned model could then identify markers of the condition with higher accuracy.
- Historical research. A historian working on digitizing ancient texts might encounter difficulty performing automated optical character recognition on an archaic language. Fine-tuning a pretrained natural language processing (NLP) model on a corpus of these ancient texts could help it better recognize their distinctive linguistic features.
- Conservation and sustainability. An ecology team tracking wildlife in a forest might need to differentiate animal sounds from background noise in audio recordings. They could fine-tune a pretrained audio processing model on a data set of forest audio recordings, teaching the fine-tuned model to isolate specific animal noises.
RAG vs. fine-tuning vs. transfer learning
Retrieval-augmented generation (RAG), fine-tuning and transfer learning are distinct concepts that share some overarching similarities. Briefly, fine-tuning and transfer learning are strategies for applying preexisting models to new tasks, whereas RAG is a type of model architecture that blends external information retrieval with generative AI capabilities.
Transfer learning, the broadest concept of the three, involves using knowledge that a model learned from one task as the starting point for a second, related task. Transfer learning is a common strategy in deep learning fields such as NLP and computer vision, particularly for tasks where collecting extensive data is challenging.
Fine-tuning is a specific technique within the broader category of transfer learning that involves making small adjustments to a pretrained model's parameters to improve its performance on a specific task. This often includes modifying or adding certain layers in the model, while keeping most of the original pretrained model's structure.
RAG, in contrast, refers to a specific type of NLP model architecture that combines a pretrained language model with a knowledge retrieval system. Whereas fine-tuning and transfer learning are training methods, RAG is a technique for enhancing model output by incorporating additional information from external data sources.
In RAG, the model retrieves contextual information from an external knowledge source -- for example, a database or collection of documents -- in response to a user's query. A generative AI model, such as a transformer-based LLM, then uses that data to inform its output. This is particularly useful for applications where generating accurate responses requires information not contained within the LLM itself.
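The retrieve-then-generate flow can be sketched as follows. This is a minimal hypothetical illustration: it ranks documents by simple word overlap, whereas production RAG systems use vector embeddings and a similarity search, and the retrieved context would be passed to a generative LLM rather than just printed. The `retrieve` and `build_prompt` functions and the sample documents are all invented for this example.

```python
def retrieve(query, documents, top_k=1):
    """Rank documents by word overlap with the query; return the best matches."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Prepend retrieved context to the user's query before calling the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# A toy external knowledge source, standing in for a database or document store.
docs = [
    "Our return policy allows refunds within 30 days of purchase.",
    "The warehouse ships orders every weekday morning.",
]

prompt = build_prompt("What is the return policy for refunds?", docs)
print(prompt)
```

The key design point is that the knowledge lives outside the model: updating the document store immediately changes what the model can draw on, with no retraining required -- which is precisely what distinguishes RAG from fine-tuning.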