
FaceOff: A Video-to-Video Face Swapping System

Aditya Agarwal*1, Bipasha Sen*1, Rudrabha Mukhopadhyay1, Vinay Namboodiri2, C V Jawahar1
1International Institute of Information Technology, Hyderabad, 2University of Bath

*denotes equal contribution

This is the official implementation of the paper "FaceOff: A Video-to-Video Face Swapping System" published at WACV 2023.

For more results, information, and details, visit our project page and read our paper. Below are some outputs from our network on the V2V face swapping task.

Results on same identity

Getting started

  1. Set up a conda environment with all dependencies using the following commands:

    conda env create -f environment.yml
    conda activate faceoff
    

Training FaceOff

The following command trains the V2V face swapping network. At set intervals, it generates outputs on the validation dataset.

CUDA_VISIBLE_DEVICES=0,1,2,3 python train_faceoff_perceptual.py

Parameters
Below is the full list of parameters:

--dist_url - port on which the experiment is run
--batch_size - batch size, default is 32
--size - image size, default is 256
--epoch - number of epochs to train for
--lr - learning rate, default is 3e-4
--sched - scheduler to use
--checkpoint_suffix - folder where checkpoints are saved, in default mode a random folder name is created
--validate_at - number of steps after which validation is performed, default is 1024
--ckpt - indicates a pretrained checkpoint, default is None
--test - whether to run the model in test mode
--gray - whether to test on grayscale inputs
--colorjit - type of color jitter to add; possible options are const, random, or empty
--crossid - whether cross-identity swapping is performed during validation, default is True
--custom_validation - used to test FaceOff on two videos, default is False
--sample_folder - path where the validation videos are stored
--checkpoint_dir - dir path where checkpoints are saved
--validation_folder - dir path where validated samples are saved

All parameters can be left at their defaults to train FaceOff in the vanilla setting. An example is given below:

CUDA_VISIBLE_DEVICES=0,1,2,3 python train_faceoff_perceptual.py
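For instance, to train on a single GPU with a smaller batch size and more frequent validation, the documented flags can be combined as follows (the values here are illustrative, not recommended settings):

CUDA_VISIBLE_DEVICES=0 python train_faceoff_perceptual.py --batch_size 16 --lr 1e-4 --validate_at 512 --checkpoint_suffix vanilla_run

Similarly, a hypothetical test-time invocation on two custom videos would combine the testing flags, e.g. --test --ckpt <path-to-checkpoint> --custom_validation --sample_folder <path-to-videos>.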

Checkpoints

Pretrained checkpoints will be released soon!

We would love your contributions to improve FaceOff

FaceOff introduces the novel task of video-to-video (V2V) face swapping, which tackles a pressing challenge in the moviemaking industry: swapping an actor's face and expressions onto the face of their body double. Existing face-swapping methods swap only the identity of the source face without swapping the source (actor) expressions, which is undesirable as the starring actor's expressions are paramount. In video-to-video face swapping, we swap the source's facial expressions along with the identity onto the target's background and pose. Our method retains the face and expressions of the source actor and the pose and background information of the target actor. Currently, our model has a few limitations, and we strongly encourage contributions and further research into the limitations listed below.

  1. Video Quality: FaceOff combines the temporal motion of the source and the target face videos in the reduced space of a vector-quantized variational autoencoder (see the first sketch after this list). Consequently, it suffers from some quality issues, and the output resolution is limited to 256x256. The moviemaking industry typically requires very high-quality samples.

  2. Temporal Jitter: Although the 3Dconv modules remove most of the temporal jitter in the blended output, some noticeable jitter remains and requires attention (see the second sketch after this list). The jitter occurs because we try to photorealistically blend two different motions (source and target) in a temporally coherent manner.

  3. Extreme Poses: FaceOff was designed to swap an actor's face and expressions onto the double's pose and background, so the pose difference between the source and the target actors is expected not to be extreme. FaceOff can correct roll-related rotations in 2D space (see the third sketch after this list); it would be worth investigating fixing rotations due to yaw and pitch in 3D space and rendering the output back to 2D.
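To make the first limitation concrete, the sketch below shows the vector-quantization bottleneck at the heart of a VQ-VAE: encoder features are snapped to their nearest codebook entries, so reconstruction fidelity is capped by the codebook. This is a minimal illustrative sketch (the codebook size and feature dimension are assumptions), not the code in this repository:

    import torch
    import torch.nn as nn

    class VectorQuantizer(nn.Module):
        """Nearest-neighbour codebook lookup, as in a VQ-VAE bottleneck."""
        def __init__(self, num_codes=512, dim=64):  # hypothetical sizes
            super().__init__()
            self.codebook = nn.Embedding(num_codes, dim)
            self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

        def forward(self, z):  # z: (B, dim, H, W) encoder features
            b, c, h, w = z.shape
            flat = z.permute(0, 2, 3, 1).reshape(-1, c)  # (B*H*W, dim)
            # squared L2 distance from each feature vector to every codebook entry
            dist = (flat.pow(2).sum(1, keepdim=True)
                    - 2 * flat @ self.codebook.weight.t()
                    + self.codebook.weight.pow(2).sum(1))
            idx = dist.argmin(1)  # nearest codebook index per vector
            quantized = self.codebook(idx).view(b, h, w, c).permute(0, 3, 1, 2)
            # straight-through estimator: gradients bypass the discrete argmin
            return z + (quantized - z).detach()

    codes = VectorQuantizer()(torch.randn(1, 64, 32, 32))  # quantized features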
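For the second limitation, the sketch below shows a residual block of 3D convolutions of the kind that mixes information across neighbouring frames to suppress frame-to-frame jitter. The layer sizes are hypothetical; this is not the 3Dconv module used in FaceOff:

    import torch
    import torch.nn as nn

    class TemporalBlock(nn.Module):
        """Residual 3D-conv block that mixes neighbouring frames."""
        def __init__(self, channels=64):  # hypothetical channel count
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                # temporal-only convolution: kernel spans frames, not pixels
                nn.Conv3d(channels, channels, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
            )

        def forward(self, x):  # x: (B, C, T, H, W) video features
            return x + self.net(x)  # residual path keeps per-frame content intact

    smoothed = TemporalBlock()(torch.randn(1, 64, 8, 32, 32))  # 8-frame clip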
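For the third limitation, correcting roll in 2D amounts to rotating the frame so that the eye line becomes horizontal; yaw and pitch have no such in-plane fix. Below is a minimal OpenCV sketch, assuming eye landmarks are obtained elsewhere (e.g. from an off-the-shelf detector); it is not part of this codebase:

    import numpy as np
    import cv2

    def align_roll(frame, left_eye, right_eye):
        """Rotate a frame in-plane so that the eye line is horizontal."""
        dx = right_eye[0] - left_eye[0]
        dy = right_eye[1] - left_eye[1]
        angle = np.degrees(np.arctan2(dy, dx))  # roll angle of the eye line
        center = ((left_eye[0] + right_eye[0]) / 2.0,
                  (left_eye[1] + right_eye[1]) / 2.0)
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)  # in-plane rotation only
        h, w = frame.shape[:2]
        return cv2.warpAffine(frame, rot, (w, h))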

Thanks

VQVAE2

We would like to thank the authors of VQVAE2 (https://github.com/rosinality/vq-vae-2-pytorch) for releasing their code. We build on top of their codebase to perform V2V face swapping.

Citation

If you find our work useful in your research, please cite:

@InProceedings{Agarwal_2023_WACV,
    author    = {Agarwal, Aditya and Sen, Bipasha and Mukhopadhyay, Rudrabha and Namboodiri, Vinay P. and Jawahar, C. V.},
    title     = {FaceOff: A Video-to-Video Face Swapping System},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {3495-3504}
}

Contact

If you have any questions, please feel free to email the authors.

Aditya Agarwal: aditya.ag@research.iiit.ac.in
Bipasha Sen: bipasha.sen@research.iiit.ac.in
Rudrabha Mukhopadhyay: radrabha.m@research.iiit.ac.in
Vinay Namboodiri: vpn22@bath.ac.uk
C V Jawahar: jawahar@iiit.ac.in
