diff --git a/gcloud/Deployment Steps.md b/gcloud/Deployment Steps.md
new file mode 100644
index 0000000000000000000000000000000000000000..19024ada83e1aac2a2de94f26ee429921ab8aa4c
--- /dev/null
+++ b/gcloud/Deployment Steps.md	
@@ -0,0 +1,82 @@
+## Mongo Volumes
+
+### Storage Class
+
+Here we create a Storage Class named **default-mongo** that defines the properties of the volumes that will be provisioned from it.
+
+`kubectl apply -f gcloud/mongo-pv/mongo-storage-class.yaml`
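+As a rough sketch, the manifest might look something like this (the provisioner and parameters are assumptions for a GKE standard persistent disk; the actual file may differ):
+
+```yaml
+# Hypothetical sketch of gcloud/mongo-pv/mongo-storage-class.yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: default-mongo
+provisioner: kubernetes.io/gce-pd   # GKE's built-in persistent disk provisioner
+parameters:
+  type: pd-standard                 # standard (non-SSD) persistent disk
+reclaimPolicy: Retain               # keep the disk when the claim is deleted
+```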
+
+You can check the current storage classes:
+
+`kubectl get storageclasses`
+
+### Creating Mongo Persistent Volumes
+
+We need a storage system for the Mongo instances. For this, we use [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
+
+- When we create a Persistent Volume Claim, Kubernetes dynamically provisions a Persistent Volume for us based on the Storage Class defined above.
+
+#### User Mongo Volume
+
+`kubectl apply -f gcloud/user-mongo/user-mongo-pvc.yaml`
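+A minimal sketch of what such a claim could look like (the name and size are assumptions, not the actual file):
+
+```yaml
+# Hypothetical sketch of gcloud/user-mongo/user-mongo-pvc.yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: user-mongo-pvc
+spec:
+  storageClassName: default-mongo   # triggers dynamic provisioning via our class
+  accessModes:
+    - ReadWriteOnce                 # mounted read-write by a single node
+  resources:
+    requests:
+      storage: 10Gi                 # assumed size
+```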
+
+### Checking Result
+
+You can check the current Persistent Volumes in our cluster:
+
+`kubectl get pv`
+
+As well as the current Persistent Volume Claims in our cluster:
+
+`kubectl get pvc`
+
+<br />
+
+## Deploying Mongo Services
+
+Now that we have the storage for our Mongo instances, we can deploy the services.
+
+- The deployment specifies that the Mongo instance should mount the volume we claimed above.
+
+#### User Mongo Service
+
+`kubectl apply -f gcloud/user-mongo/user-mongo-deployment.yaml`
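+The relevant part of the deployment is the volume wiring; roughly (field names are from the Kubernetes API, but the resource names are assumptions):
+
+```yaml
+# Hypothetical excerpt of gcloud/user-mongo/user-mongo-deployment.yaml
+spec:
+  template:
+    spec:
+      containers:
+        - name: user-mongo
+          image: mongo
+          volumeMounts:
+            - name: user-mongo-storage
+              mountPath: /data/db          # Mongo's default data directory
+      volumes:
+        - name: user-mongo-storage
+          persistentVolumeClaim:
+            claimName: user-mongo-pvc      # the claim created above
+```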
+
+<br />
+
+## Deploying Main Services
+
+For each of our main services, we first define its Horizontal Pod Autoscaler, then we deploy the service itself.
+
+### Frontend
+
+`kubectl apply -f gcloud/frontend-service/frontend-service-autoscaler.yaml`
+
+`kubectl apply -f gcloud/frontend-service/frontend-service-deployment.yaml`
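+An autoscaler manifest of this kind typically looks roughly like this (the target name and thresholds are assumptions):
+
+```yaml
+# Hypothetical sketch of gcloud/frontend-service/frontend-service-autoscaler.yaml
+apiVersion: autoscaling/v1
+kind: HorizontalPodAutoscaler
+metadata:
+  name: frontend-service
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: frontend-service            # deployment to scale (assumed name)
+  minReplicas: 1
+  maxReplicas: 5
+  targetCPUUtilizationPercentage: 80  # scale out above 80% average CPU
+```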
+
+### User Backend
+
+`kubectl apply -f gcloud/user-service/user-service-autoscaler.yaml`
+
+`kubectl apply -f gcloud/user-service/user-service-deployment.yaml`
+
+<br />
+
+## NGINX Service
+
+Thanks to our smort teammate Matt, we have an NGINX service. We use this service as a reverse proxy: none of our services are exposed directly. Instead, we expose only the NGINX service, and it routes requests to the services behind it!
+
+### ConfigMap
+
+In Kubernetes, a ConfigMap is a key-value store that stores configuration data for your application. The ConfigMap can then be mounted as a volume inside a container, allowing the configuration data to be read by the application running inside the container.
+
+So we define the config in the [nginx.conf](gcloud/nginx-service/nginx.conf) file and deploy it:
+
+`kubectl create configmap nginx-conf --from-file=gcloud/nginx-service/nginx.conf`
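+For illustration, a reverse-proxy config of this shape might route like so (the service names and ports are assumptions; the real rules live in gcloud/nginx-service/nginx.conf):
+
+```nginx
+# Hypothetical sketch of an nginx.conf reverse proxy
+events {}
+
+http {
+  server {
+    listen 80;
+
+    location /api/users/ {
+      # Kubernetes DNS resolves the backend by its Service name
+      proxy_pass http://user-service:8080/;
+    }
+
+    location / {
+      proxy_pass http://frontend-service:80/;
+    }
+  }
+}
+```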
+
+### Deploying NGINX Service
+
+Before deploying the NGINX service, we define its Horizontal Pod Autoscaler:
+
+`kubectl apply -f gcloud/nginx-service/nginx-service-autoscaler.yaml`
+
+We can now deploy NGINX as a service:
+
+`kubectl apply -f gcloud/nginx-service/nginx-service-deployment.yaml`
+
+What is different about this service is that it is exposed on a public endpoint!
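+The public exposure comes from the Service type; roughly (the name, selector, and ports are assumptions):
+
+```yaml
+# Hypothetical excerpt of gcloud/nginx-service/nginx-service-deployment.yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-service
+spec:
+  type: LoadBalancer        # GKE provisions an external IP for this Service
+  selector:
+    app: nginx
+  ports:
+    - port: 80              # public port
+      targetPort: 80        # NGINX container port
+```
+
+Once the external IP is provisioned, `kubectl get services` shows it in the `EXTERNAL-IP` column.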