diff --git a/README.md b/README.md
index 0367d86f5d38270a1f15ea43af47a2ed850a498a..3309b3b5cbe0f136213a30aa7e401068d22a2b16 100644
--- a/README.md
+++ b/README.md
@@ -155,7 +155,7 @@ Note: EMOCA was developed with Pytorch 1.12.1 and Pytorch3d 0.6.2 running on CUD
 
 0) Activate the environment: 
 ```bash
-conda activate work36_cu11
+conda activate work38_cu11
 ```
 
 1) For running EMOCA examples, go to [EMOCA](gdl_apps/EMOCA) 
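 
 For instance, assuming you start from the repository root:
 ```bash
 cd gdl_apps/EMOCA
 ```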
diff --git a/gdl_apps/EMOCA/README.md b/gdl_apps/EMOCA/README.md
index 446467a0a36c3a2db030d806275b7c7449737229..4e9020b9ab1fe7d29371da24c60552d366502afd 100644
--- a/gdl_apps/EMOCA/README.md
+++ b/gdl_apps/EMOCA/README.md
@@ -63,7 +63,7 @@ conda activate work38
 ### Single Image Reconstruction 
 If you want to run EMOCA on images, run the following:
 ```bash
-python demos/test_emoca_on_images.py --input_folder <path_to_images> --output_folder <set_your_output_path> --model_name EMOCA 
+python demos/test_emoca_on_images.py --input_folder <path_to_images> --output_folder <set_your_output_path> --model_name EMOCA_v2_lr_mse_20 
 ```
 The script will detect faces in every image in the folder and output the results you select with `--save_images`, `--save_codes`, and `--save_mesh` to the output folder. 
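 
 For example, a hypothetical invocation that saves the renderings, the predicted codes, and the meshes (the folder paths are placeholders, and the `true` values are an assumption about how the flags are parsed; check the script's argument parser for the exact syntax):
 ```bash
 python demos/test_emoca_on_images.py --input_folder my_images \
     --output_folder my_output --model_name EMOCA_v2_lr_mse_20 \
     --save_images true --save_codes true --save_mesh true
 ```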
 
@@ -77,7 +77,7 @@ See `demos/test_emoca_on_images.py` for further details.
 ### Video Reconstruction 
 If you want to create a video of the reconstruction (like the teaser above), just pick your favourite emotional video and run the following:
 ```bash
-python demos/test_emoca_on_video.py --input_video <path_to_your_video> --output_folder <set_your_output_path> --model_name EMOCA 
+python demos/test_emoca_on_video.py --input_video <path_to_your_video> --output_folder <set_your_output_path> --model_name EMOCA_v2_lr_mse_20 
 ```
 The script will extract the frames from the video and run face detection on them to crop out the faces. Then EMOCA will be run, the reconstruction renderings saved, and finally a reconstruction video sequence created. Processing long videos may take some time.
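 
 Since long videos can take a while to process, one option is to trim the input first, e.g. with ffmpeg (a sketch; the file names are placeholders):
 ```bash
 # Keep only the first 10 seconds of the clip, without re-encoding,
 # and run EMOCA on the shorter file instead.
 ffmpeg -i my_video.mp4 -t 10 -c copy my_video_short.mp4
 ```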