diff --git a/README.md b/README.md
index 6da66bdfd19ea457976a7ca5539ea1c0a63d0e07..d5282a1f07f49888567fede032f68793c4e6c438 100644
--- a/README.md
+++ b/README.md
@@ -4,15 +4,15 @@
-This project contains the docker file and other associated files required to collect RGB and Depth images from Spots 5 cameras as well as obtaining the initial pose estimation from Spots body frame. The image that this docker file builds is meant to be turned into a container directly on the COREI/O.
+This project contains the Dockerfile and other associated files required to collect RGB and depth images from Spot's 5 cameras, as well as to obtain the initial pose estimate from Spot's body frame. The image that this Dockerfile builds is meant to be run as a container directly on the COREI/O.
 
-## How to run the docker file
+## How to build and run the Docker image
-1. On a local machine run ``docker build ./`` to create an Ubuntu image with VI, Python and BD pre-requisites.
+1. On a local machine run ``docker build ./`` to create an Ubuntu image with VI, Python and BD prerequisites.
 2. Update the name of the image as desired ``docker tag <image-id> <desired-image-name>``
 3. Save the image into a .tar file ``docker save <image-name>:<version> -o <filename-of-tar>``
-4. Copy across this image to the CORI/O ``scp -P 20022 <tar-file> spot@<robots-ip>`` 
+4. Copy this image across to the COREI/O ``scp -P 20022 <tar-file> spot@<robots-ip>``
-5. On the CORI/O load in the image ``sudo docker load -i <tar-file>``
-6. Run container ``sudo docker run --name dev_env  -it --network=host -v /data:/data -v /home/spot/data-collection:/data-collection <image-name>:<version>``
+5. On the COREI/O load in the image ``sudo docker load -i <tar-file>``
+6. Run the container ``sudo docker run --name dev_env -it --network=host -v /data:/data -v /home/spot/data-collection:/data-collection <image-name>:<version>``
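+
+Taken together, steps 1–6 can be sketched end to end as below. The image name, version, robot IP and tar filename are illustrative placeholders (not values fixed by this project), and ``docker build -t`` is used to fold the tagging of step 2 into step 1:
+
+```shell
+# On the local machine: build, tag, save and copy the image
+docker build -t spot-data-collection:v1 ./
+docker save spot-data-collection:v1 -o spot-data-collection.tar
+scp -P 20022 spot-data-collection.tar spot@192.168.50.3:
+
+# On the COREI/O (after ssh'ing in): load the image and start the container
+sudo docker load -i spot-data-collection.tar
+sudo docker run --name dev_env -it --network=host \
+  -v /data:/data -v /home/spot/data-collection:/data-collection \
+  spot-data-collection:v1
+```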
 
 ## How to run the data collection
-1. Within the container you can setup a basic data-collection structure i.e. `<scene-name>/<sequence_number>/images/<camera-name>` and `<scene-name>/<sequence_number>/poses`. You can do this by running `bash setup_data_format.bash <scene-name> <sequence-number>`. This will write out in whatever you have mounted to the /data-collection directory.
+1. Within the container you can set up a basic data-collection structure, i.e. `<scene-name>/<sequence-number>/images/<camera-name>` and `<scene-name>/<sequence-number>/poses`, by running `bash setup_data_format.bash <scene-name> <sequence-number>`. This will write out to whatever you have mounted at the /data-collection directory.
-2. To start the data collection `python data_collection.py <robots-ip> -output_path <data-collection-path>/<scene>/<sequence-no>`
+2. To start the data collection run `python data_collection.py <robots-ip> --output_path <data-collection-path>/<scene>/<sequence-no>`
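+
+Assuming the container was started with the mounts shown above, a full collection run might look like the following. The scene name `kitchen`, sequence `001` and robot IP are hypothetical example values:
+
+```shell
+# Inside the container: create <scene-name>/<sequence-number>/images/<camera-name> and .../poses
+bash setup_data_format.bash kitchen 001
+
+# Start collecting images and pose estimates, writing into the mounted directory
+python data_collection.py 192.168.50.3 --output_path /data-collection/kitchen/001
+```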
 
-Note: If you are not running an AutoWalk via the tablet you will not get pose estimates.
+Note: If you are not running an AutoWalk via the tablet, you will not get pose estimates.