This repository contains the official implementation of PCFace: Retinex-guided Relighting and Latent-space Refinement for Realistic Diffusion-based Face Swapping.
A suitable Python virtual environment named pcface can be created and activated with:
python -m venv pcface
source ./pcface/bin/activate
cd greycfs/
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
pip install -r requirements.txt
pip install -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
pip install -e git+https://github.com/openai/CLIP.git@main#egg=clip
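After installation, it can be useful to confirm that the pinned PyTorch build imports correctly and sees the GPU. The helper below is an illustrative sketch, not part of the repository:

```python
def torch_env_summary():
    """Describe the installed torch build, or point back to the pip step if it is missing."""
    import importlib.util
    if importlib.util.find_spec("torch") is None:
        return "torch not installed; run the pip commands above first"
    import torch
    cuda = "available" if torch.cuda.is_available() else "NOT available"
    return f"torch {torch.__version__}, CUDA {cuda}"

print(torch_env_summary())
```

On a correctly configured node this should report version 1.13.1+cu117 with CUDA available.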
If the GPU node you are running on does not have direct internet access, download clip-vit-large-patch14 manually from its Hugging Face repository: place config.json, pytorch_model.bin, vocab.json, merges.txt, tokenizer_config.json, and preprocessor_config.json into a folder named clip-vit-large-patch14, then update clip_path in reface/ldm/modules/encoders/modules.py to point at that folder.
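For the manual download, the snippet below prints one wget command per required file, using the standard Hugging Face resolve/main URL layout (an assumption about the hub's URL scheme, not something this repository provides); run the printed commands on a machine with internet access:

```python
CLIP_REPO = "openai/clip-vit-large-patch14"
CLIP_FILES = [
    "config.json", "pytorch_model.bin", "vocab.json",
    "merges.txt", "tokenizer_config.json", "preprocessor_config.json",
]

def clip_urls(repo=CLIP_REPO, files=CLIP_FILES):
    """Return (filename, url) pairs for manual download of the CLIP files."""
    base = f"https://huggingface.co/{repo}/resolve/main"
    return [(name, f"{base}/{name}") for name in files]

if __name__ == "__main__":
    for name, url in clip_urls():
        print(f"wget -nc '{url}' -P clip-vit-large-patch14/")
```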
Download the following models from the provided links and place them at the corresponding paths; they are required for face swapping and quantitative evaluation.
lighting_transfer/model_lighting_transfer/model_epoch106.pth
reface/models/REFace/checkpoints/last.ckpt
reface/Other_dependencies/face_parsing/79999_iter.pth
reface/Other_dependencies/arcface/model_ir_se50.pth
reface/Other_dependencies/DLIB_landmark_det/shape_predictor_68_face_landmarks.dat
skin_refinement/pretrained_models/e4e_ffhq_encode.pt
skin_refinement/pretrained_models/d_skin_kmeans_20k.npy
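Before running inference, the checkpoint placement above can be verified with a small helper like the following (a sketch, not part of the repository):

```python
from pathlib import Path

# The seven pretrained files listed above, relative to the repository root.
REQUIRED_MODELS = [
    "lighting_transfer/model_lighting_transfer/model_epoch106.pth",
    "reface/models/REFace/checkpoints/last.ckpt",
    "reface/Other_dependencies/face_parsing/79999_iter.pth",
    "reface/Other_dependencies/arcface/model_ir_se50.pth",
    "reface/Other_dependencies/DLIB_landmark_det/shape_predictor_68_face_landmarks.dat",
    "skin_refinement/pretrained_models/e4e_ffhq_encode.pt",
    "skin_refinement/pretrained_models/d_skin_kmeans_20k.npy",
]

def missing_models(base_dir="."):
    """Return the entries of REQUIRED_MODELS not found under base_dir."""
    base = Path(base_dir)
    return [p for p in REQUIRED_MODELS if not (base / p).is_file()]

if __name__ == "__main__":
    for path in missing_models():
        print("missing:", path)
```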
To run our face-swapping pipeline on a folder of source images and a folder of target images, use:
CUDA_VISIBLE_DEVICES=${device} python3 pcface_inference.py \
--outdir "${Results_dir}" \
--Base_dir "${Base_dir}" \
--target_folder "${target_path}" \
--src_folder "${source_path}" \
--config "${REFACE_CONFIG}" \
--ckpt "${REFACE_CKPT}" \
--checkpoint_path "${PSP_CKPT}" \
--n_samples 1 \
--scale 3.5 \
--ddim_steps 50
or simply run:
sh pcface_inference.sh
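The shell variables referenced in the command above must be defined first. The assignments below are illustrative only: the two checkpoint paths come from the model list above, while the remaining paths (and the config filename in particular) are hypothetical placeholders for your own layout:

```shell
# Illustrative values for the inference command; adjust paths to your setup.
device=0
Results_dir="results/pcface"
Base_dir="$(pwd)"
target_path="data/target"
source_path="data/source"
REFACE_CONFIG="reface/configs/inference.yaml"  # hypothetical config path
REFACE_CKPT="reface/models/REFace/checkpoints/last.ckpt"
PSP_CKPT="skin_refinement/pretrained_models/e4e_ffhq_encode.pt"
```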