This repository contains the code for the paper "Learning Self-Shadowing for Clothed Human Bodies" by Farshad Einabadi, Jean-Yves Guillemaut and Adrian Hilton, presented at the 35th Eurographics Symposium on Rendering, London, England, 2024; proceedings published by Eurographics - The European Association for Computer Graphics.
## License
Copyright (C) 2024 University of Surrey.
The code repository is published under the CC-BY-NC 4.0 license (https://creativecommons.org/licenses/by-nc/4.0/deed.en).
## Acknowledgment
This repository uses code from the PIFuHD repository (https://github.com/facebookresearch/pifuhd), shared under the CC-BY-NC 4.0 license (https://github.com/facebookresearch/pifuhd/blob/main/LICENSE) with Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
The corresponding PIFuHD publication is: "PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization" by Shunsuke Saito, Tomas Simon, Jason Saragih and Hanbyul Joo, published in Proc. CVPR 2020 (https://shunsukesaito.github.io/PIFuHD/).
## How to use the code
Please follow the instructions below to perform inference on your input images. A sample frame is provided for convenience in `sample-input`.
### Input Directory
All input images should be stored in a single directory, e.g., `sample-input`, in which file names follow the pattern below.
Each input entry consists of

- `<base_name>_details.yaml`
- `<base_name>_input.png`
- `<base_name>_mask.png`

respectively for the light directions, the RGB input image, and the corresponding mask of the person.
The content of `<base_name>_details.yaml` is a list of light directions in (phi, theta) format, in radians within [0, π], as follows:
```yaml
light_directions:
-
  - 0.5
  - 0.6
-
  - 3.0
  - 2.3
```
At this stage, to benefit from GPU batching, all input images should have the same *number* of light directions, but not necessarily the same values.
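As an illustration, below is a minimal sketch of how an input entry could be written and an input directory validated against the conventions above. The script, its helper functions, and the use of PyYAML are not part of this repository; they only demonstrate the expected file layout.

```python
# prepare_entry.py -- illustrative helper, not part of this repository.
# Writes <base_name>_details.yaml and checks the expected *_input.png / *_mask.png files.
from pathlib import Path
import yaml  # pip install pyyaml


def write_details(input_dir, base_name, light_directions):
    """light_directions: list of (phi, theta) pairs in radians, e.g. [(0.5, 0.6), (3.0, 2.3)]."""
    path = Path(input_dir) / f"{base_name}_details.yaml"
    with open(path, "w") as f:
        yaml.safe_dump({"light_directions": [list(d) for d in light_directions]}, f)
    return path


def check_entries(input_dir):
    """Verify that every *_details.yaml has matching *_input.png and *_mask.png files,
    and that all entries list the same number of light directions (needed for GPU batching)."""
    input_dir = Path(input_dir)
    counts = set()
    for details in sorted(input_dir.glob("*_details.yaml")):
        base = details.name[: -len("_details.yaml")]
        for suffix in ("_input.png", "_mask.png"):
            assert (input_dir / f"{base}{suffix}").exists(), f"missing {base}{suffix}"
        with open(details) as f:
            counts.add(len(yaml.safe_load(f)["light_directions"]))
    assert len(counts) <= 1, f"entries differ in number of light directions: {counts}"


if __name__ == "__main__":
    write_details("./sample-input", "example", [(0.5, 0.6), (3.0, 2.3)])
    check_entries("./sample-input")
```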
### Example Usage
- Build the Docker image based on `Dockerfile`.
- Download the shared pre-trained models from https://cvssp.org/data/self-shadowing and store them in a path of your choice, e.g., `./checkpoints`:
  - Our self-shadowing model
  - Our re-shading model
  - Extracted PIFuHD's pre-trained frontal surface normal estimator (Saito et al., CVPR 2020)
- Run

  ```
  /workspace/venvs/torch/bin/python relight_minimal_export.py ./sample-input ./sample-output ./checkpoints/pifuhd.netG.netF.pt ./checkpoints/self_shadow_model.checkpoint.best ./checkpoints/relight_model.checkpoint.best --batch-size 2 --gpu
  ```
For the order and description of the positional and optional arguments, run `python relighting/relight_minimal_export.py -h`.
Please note that it takes one or two inference runs to reach optimal inference speed; this should mostly be attributed to the setup time required before the first inference.
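If you prefer to drive the command shown above from Python, for instance to process several input directories in succession, a minimal wrapper could look like the sketch below. The wrapper script is not part of this repository; the interpreter, script, and checkpoint paths simply mirror the example command and should be adjusted to your setup.

```python
# run_relighting.py -- illustrative wrapper, not part of this repository.
# Invokes the inference script from the example above via subprocess.
import subprocess

CHECKPOINTS = [
    "./checkpoints/pifuhd.netG.netF.pt",
    "./checkpoints/self_shadow_model.checkpoint.best",
    "./checkpoints/relight_model.checkpoint.best",
]


def relight(input_dir, output_dir, batch_size=2, use_gpu=True):
    cmd = [
        "/workspace/venvs/torch/bin/python", "relight_minimal_export.py",
        input_dir, output_dir, *CHECKPOINTS,
        "--batch-size", str(batch_size),
    ]
    if use_gpu:
        cmd.append("--gpu")
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure


if __name__ == "__main__":
    # Successive calls amortise the one-off setup cost mentioned above.
    relight("./sample-input", "./sample-output")
```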