diff --git a/README.md b/README.md
index 4c8cd40b88d39e503c95a54ab8ba8b09b45b60b0..95376309f7aab62ef9434ea770727dfdce0921b2 100644
--- a/README.md
+++ b/README.md
@@ -63,6 +63,8 @@ Compared to the original model it produces:
 1) Much better lip and eye alignment 
 2) Much better lip articulation 
 
+You can find a [comparison video here](https://download.is.tue.mpg.de/emoca/assets/emoca_v2_comparison.mp4).
+
 This is achieved by: 
 1) Using a subset of mediapipe landmarks for mouth, eyes and eyebrows (as opposed to FAN landmarks that EMOCA v1 uses)
 2) Using absolute landmark loss in combination with the relative losses (as opposed to only relative landmark losses in EMOCA v1)
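+
+To make the second point concrete, here is a minimal, hypothetical sketch (not EMOCA's actual implementation; names, shapes, and weights are illustrative) of how an absolute landmark term can be combined with a translation-invariant relative term in PyTorch:
+
+```python
+import torch
+
+def landmark_loss(pred, gt, w_abs=1.0, w_rel=1.0):
+    # pred, gt: (B, N, 2) predicted / ground-truth 2D landmarks (illustrative shapes).
+    # Absolute term: penalize each landmark's deviation directly.
+    abs_loss = (pred - gt).abs().mean()
+    # Relative term: penalize errors in pairwise landmark offsets,
+    # which is invariant to a global 2D translation of the face.
+    pred_rel = pred.unsqueeze(2) - pred.unsqueeze(1)  # (B, N, N, 2)
+    gt_rel = gt.unsqueeze(2) - gt.unsqueeze(1)
+    rel_loss = (pred_rel - gt_rel).abs().mean()
+    return w_abs * abs_loss + w_rel * rel_loss
+```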
@@ -70,6 +89,6 @@ This is achieved by:
 
-You will have to upgrade to the new environment in order to use EMOCA v2. Please follow the steps bellow to install the package. Then, go to the [EMOCA](gdl_apps/EMOCA) subfolder and follow the steps described there.
+You will have to upgrade to the new environment to use EMOCA v2. Please follow the steps below to install the package. Then go to the [EMOCA](gdl_apps/EMOCA) subfolder and follow the steps described there.
 
 While using the new version of this repo is recommended, you can still access the old release [here](https://github.com/radekd91/emoca/tree/EMOCA-v1.0).
 
 ## EMOCA project 
diff --git a/gdl_apps/EMOCA/README.md b/gdl_apps/EMOCA/README.md
index 31effe107e30fcecd96bceb9af80e304868e814e..7186c2f00b86d94f5cb7c4057a7af4596b6a1d0f 100644
--- a/gdl_apps/EMOCA/README.md
+++ b/gdl_apps/EMOCA/README.md
@@ -35,6 +35,25 @@ The available models are:
 3) `EMOCA_v2_lr_cos_1.5` - EMOCA v2 trained with mediapipe landmarks and with the lip reading loss (cosine similarity on lip reading features, similarly to SPECTRE) 
 4) `EMOCA_v2_lr_mse_20` - (default) EMOCA v2 trained with mediapipe landmarks and with the lip reading loss (MSE on lip reading features)
 
+You can find a [comparison video here](https://download.is.tue.mpg.de/emoca/assets/emoca_v2_comparison.mp4).
+
 Notes: 
-The SPECTRE paper uses a cosine similarity metric on lip reading features for supervision. In practice, we found that the cosine similarity loss can sometimes be artifact prone (over-exaggerated lip motion). This is the `EMOCA_v2_lr_cos_1.5` model. We found the supervision by mean squared error metric to be more stable in this regard and hence we recommend using the `EMOCA_v2_lr_mse_20` model. If you find that even this one produces undesirable artifacts, we suggest using `EMOCA_v2_mp`, which does not use the lip reading loss but is still much better thatn the original `EMOCA` model.
+The SPECTRE paper uses a cosine similarity metric on lip reading features for supervision. In practice, we found that the cosine similarity loss can sometimes be prone to artifacts (over-exaggerated lip motion); this is the `EMOCA_v2_lr_cos_1.5` model. We found supervision by a mean squared error metric to be more stable in this regard and hence recommend the `EMOCA_v2_lr_mse_20` model. If even this one produces undesirable artifacts, we suggest using `EMOCA_v2_mp`, which does not use the lip reading loss but is still much better than the original `EMOCA` model.
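+
+To make the difference between the two supervision variants concrete, here is a minimal, hypothetical sketch (not this repository's actual code; names and shapes are illustrative) of cosine-similarity vs. MSE supervision on lip reading features:
+
+```python
+import torch.nn.functional as F
+
+def lip_reading_loss(feat_pred, feat_gt, mode="mse", weight=1.0):
+    # feat_pred, feat_gt: (B, T, D) lip reading feature sequences extracted
+    # from renders of the predicted mesh and from the input video.
+    if mode == "cos":
+        # Cosine-similarity supervision (as in SPECTRE); can over-exaggerate lip motion.
+        loss = (1.0 - F.cosine_similarity(feat_pred, feat_gt, dim=-1)).mean()
+    else:
+        # MSE supervision; reported above to be more stable in this regard.
+        loss = F.mse_loss(feat_pred, feat_gt)
+    return weight * loss
+```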