From fef5472a2fe3c47d8c5ffa56ad13928a790df1ab Mon Sep 17 00:00:00 2001
From: rdanecek <danekradek@gmail.com>
Date: Tue, 14 Feb 2023 18:12:36 +0100
Subject: [PATCH] Update README

---
 README.md                | 4 ++++
 gdl_apps/EMOCA/README.md | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/README.md b/README.md
index 4c8cd40b..95376309 100644
--- a/README.md
+++ b/README.md
@@ -63,6 +63,8 @@ Compared to the original model it produces:
 1) Much better lip and eye alignment 
 2) Much better lip articulation 
 
+You can find the comparison video [here](https://download.is.tue.mpg.de/emoca/assets/emoca_v2_comparison.mp4).
+
 This is achieved by: 
 1) Using a subset of mediapipe landmarks for mouth, eyes and eyebrows (as opposed to FAN landmarks that EMOCA v1 uses)
 2) Using absolute landmark loss in combination with the relative losses (as opposed to only relative landmark losses in EMOCA v1)
@@ -70,6 +72,8 @@ This is achieved by:
 
 You will have to upgrade to the new environment in order to use EMOCA v2. Please follow the steps below to install the package. Then, go to the [EMOCA](gdl_apps/EMOCA) subfolder and follow the steps described there.
 
+
+
 While using the new version of this repo is recommended, you can still access the old release [here](https://github.com/radekd91/emoca/tree/EMOCA-v1.0).
 
 ## EMOCA project 
diff --git a/gdl_apps/EMOCA/README.md b/gdl_apps/EMOCA/README.md
index 31effe10..7186c2f0 100644
--- a/gdl_apps/EMOCA/README.md
+++ b/gdl_apps/EMOCA/README.md
@@ -35,6 +35,8 @@ The available models are:
 3) `EMOCA_v2_lr_cos_1.5` - EMOCA v2 trained with mediapipe landmarks and with the lip reading loss (cosine similarity on lip reading features, similarly to SPECTRE) 
 4) `EMOCA_v2_lr_mse_20` - (default) EMOCA v2 trained with mediapipe landmarks and with the lip reading loss (MSE on lip reading features)
 
+You can find the comparison video [here](https://download.is.tue.mpg.de/emoca/assets/emoca_v2_comparison.mp4).
+
 Notes: 
 The SPECTRE paper uses a cosine similarity metric on lip reading features for supervision. In practice, we found that the cosine similarity loss can sometimes be artifact-prone (over-exaggerated lip motion). This is the `EMOCA_v2_lr_cos_1.5` model. We found supervision by the mean squared error metric to be more stable in this regard, and hence we recommend using the `EMOCA_v2_lr_mse_20` model. If you find that even this one produces undesirable artifacts, we suggest using `EMOCA_v2_mp`, which does not use the lip reading loss but is still much better than the original `EMOCA` model.
 
-- 
GitLab
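
The two lip reading losses compared in the notes above can be sketched as follows. This is a minimal pure-Python illustration only: the function names and plain-list vector representation are assumptions, and EMOCA's actual training code computes these losses on lip reading network features in PyTorch, with loss weights (such as the 1.5 and 20 in the model names) not reflected here.

```python
import math

def cosine_loss(pred_feat, target_feat):
    # 1 - cosine similarity between two lip reading feature vectors;
    # low when the vectors point in the same direction, regardless of magnitude
    dot = sum(p * t for p, t in zip(pred_feat, target_feat))
    norm_p = math.sqrt(sum(p * p for p in pred_feat))
    norm_t = math.sqrt(sum(t * t for t in target_feat))
    return 1.0 - dot / (norm_p * norm_t)

def mse_loss(pred_feat, target_feat):
    # mean squared error between two lip reading feature vectors;
    # penalizes magnitude differences as well as direction
    return sum((p - t) ** 2 for p, t in zip(pred_feat, target_feat)) / len(pred_feat)
```

Because the cosine loss ignores feature magnitude, it can be minimized by exaggerated mouth motion that keeps the feature direction aligned, which is consistent with the artifacts noted above; the MSE variant also constrains magnitude, which may explain its greater stability.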