
How to Incrementally Train a Face Recognition Model Without Retraining From Scratch?


I’m new to deep learning and currently building a face recognition model. I’ve already trained a model using the images of two people (Cristiano Ronaldo and Lionel Messi). Now, I want to add more people (e.g., Maria Sharapova) to the model without retraining everything from scratch since my laptop doesn’t have enough computational power.

Is there a way to continue training the existing model using only the new dataset? If so, how can I efficiently merge the new training data with the existing model? I’m just trying to learn something new and would appreciate any guidance or code examples. Thank you!
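The rough idea I have in mind (from what I’ve read about transfer learning) is to reload the weights I already saved, swap the two-class head for a three-class one, and then train only that small final layer with the backbone frozen, so the laptop only has to update a tiny fraction of the network’s parameters. Here is an untested sketch of that idea; the three-class setup and the copying of the old head weights are just my assumptions:

import torch
from torchvision import models

# rebuild the old two-class model and load the weights saved by my script below
old_num_classes = 2
model = models.resnet18()  # architecture only; the trained weights are loaded next
model.fc = torch.nn.Linear(model.fc.in_features, old_num_classes)
model.load_state_dict(torch.load('model.pth', map_location='cpu'))

# swap in a three-class head (Ronaldo, Messi, Sharapova) and copy the two
# already-trained rows into it so the old people are not re-initialised
new_num_classes = 3
old_fc = model.fc
model.fc = torch.nn.Linear(old_fc.in_features, new_num_classes)
with torch.no_grad():
    model.fc.weight[:old_num_classes] = old_fc.weight
    model.fc.bias[:old_num_classes] = old_fc.bias

# freeze the backbone so only the small head is trained on my CPU-only laptop
for name, param in model.named_parameters():
    param.requires_grad = name.startswith('fc.')

optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
# ...then run the same training loop as in my script, but over a train folder that
# also contains maria_sharapova images (keeping some Ronaldo/Messi images in there
# should stop the model from forgetting them)

Is this roughly the right direction, or is there a more standard way to train incrementally?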

Here is my existing code:

import torch
import torchvision
from torchvision import datasets, models, transforms
import os

import ssl
ssl._create_default_https_context = ssl._create_unverified_context

data_transforms = {
    'train': transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'test': transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

data_dir = "./new_dataset"
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
                  for x in ['train', 'test']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True)
               for x in ['train', 'test']}
class_names = image_datasets['train'].classes

model = models.resnet18(pretrained=True, progress=True)  # ImageNet-pretrained backbone

num_classes = len(class_names)
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)  # one output per person

device = torch.device("cpu")
model = model.to(device)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

num_epochs = 10

for epoch in range(num_epochs):
    for inputs, labels in dataloaders['train']:
        inputs = inputs.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, labels)

        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), 'model.pth')

model.eval()

correct = 0
total = 0

with torch.no_grad():
    for inputs, labels in dataloaders['test']:
        inputs = inputs.to(device)
        labels = labels.to(device)

        outputs = model(inputs)
        _, predicted = torch.max(outputs.data, 1)

        total += labels.size(0)
        correct += (predicted == labels).sum().item()

accuracy = 100 * correct / total
print(f"Accuracy on the test set: {accuracy}%")

The folder ./new_dataset looks like this:

new_dataset/
--test/
----cristiano_ronaldo
----lione_messi
--train/
----cristiano_ronaldo
----lione_messi
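
If it matters: my understanding is that ImageFolder assigns class indices from the sorted folder names, so a new maria_sharapova folder should get index 2 while the two existing folders keep indices 0 and 1 (which seems important if I want to reuse the old weights). A quick check, with the output I expect as a comment:

from torchvision import datasets

# class_to_idx is built from the sorted folder names
print(datasets.ImageFolder('./new_dataset/train').class_to_idx)
# currently: {'cristiano_ronaldo': 0, 'lione_messi': 1}
# after adding maria_sharapova it should become
# {'cristiano_ronaldo': 0, 'lione_messi': 1, 'maria_sharapova': 2}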


