Instructions
Step 1: microservices introductory demo
Often, we want to contribute to or further develop an open-source project in isolation from the original. We achieve this by forking the project for our personal use (always check the license), which has some benefits over branching. Click Fork at the top right corner of this project's home page, and you will have a personal copy of the repository which you can change to your heart's content. Clone the repository to your Azure VM. (You can also use your own laptop if you have python3 and Docker installed. **It will not work on Windows unless you have Windows Server**; it should work on macOS and Linux.)
You will see that you are on the master branch, with two files, movies.py and showtimes.py. Look through their source code to see how they function. movies.py fetches movie data from a json file in /database. showtimes.py fetches showtimes for the movies from another json file in /database. Each of them is a microservice which exposes RESTful API endpoints. You can run them as follows:
python3 movies.py
Now, the movies microservice is running. You can query it by sending HTTP GET requests. Note that we are using the port 5001 on localhost (127.0.0.1) as that is how the server is configured (see towards the bottom of the file in movies.py)
curl http://127.0.0.1:5001/ #returns list of endpoints exposed
curl http://127.0.0.1:5001/movies #returns list of all movies
curl http://127.0.0.1:5001/movies/720d006c-3a57-4b6a-b18f-9b713b073f3c #returns details of one movie
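Under the hood, the movies service is doing little more than a dictionary lookup. The sketch below is stdlib-only and not taken from the repository: a tiny inlined database stands in for the JSON file in /database, and the ID-to-title pairing is illustrative. It mimics what the /movies/&lt;id&gt; endpoint returns:

```python
import json

# Miniature stand-in for the JSON database in /database; the real
# movies.py loads this from a file. The title shown per ID is illustrative.
MOVIES_JSON = """
{
  "720d006c-3a57-4b6a-b18f-9b713b073f3c": {"title": "The Good Dinosaur"},
  "a8034f44-aee4-44cf-b32c-74cf452aaaae": {"title": "The Martian"}
}
"""

movies = json.loads(MOVIES_JSON)

def movie_record(movie_id):
    """Roughly what GET /movies/<id> returns: one movie's record, or None."""
    return movies.get(movie_id)

print(movie_record("720d006c-3a57-4b6a-b18f-9b713b073f3c"))
```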
Similarly, you can run the showtimes microservice
python3 showtimes.py
You can query its endpoints too:
$ curl 127.0.0.1:5002/
{"uri": "/", "subresource_uris": {"showtimes": "/showtimes", "showtime": "/showtimes/<date>"}}%
$ curl 127.0.0.1:5002/showtimes
{"20151130": ["720d006c-3a57-4b6a-b18f-9b713b073f3c", "a8034f44-aee4-44cf-b32c-74cf452aaaae", "39ab85e5-5e8e-4dc5-afea-65dc368bd7ab"], "20151201": ["267eedb8-0f5d-42d5-8f43-72426b9fb3e6", "7daf7208-be4d-4944-a3ae-c1c2f516f3e6", "39ab85e5-5e8e-4dc5-afea-65dc368bd7ab", "a8034f44-aee4-44cf-b32c-74cf452aaaae"], "20151202": ["a8034f44-aee4-44cf-b32c-74cf452aaaae", "96798c08-d19b-4986-a05d-7da856efb697", ...
Step 2: Making microservices talk to each other
As with the movies, the showtimes microservice allows you to query one single record. For a date that it knows about (e.g., 30 Nov 2015, i.e., 20151130), you can ask what movies were shown on that date.
$ curl 127.0.0.1:5002/showtimes/20151130
[
"720d006c-3a57-4b6a-b18f-9b713b073f3c",
"a8034f44-aee4-44cf-b32c-74cf452aaaae",
"39ab85e5-5e8e-4dc5-afea-65dc368bd7ab"
]%
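The showtimes lookup is equally simple: a dictionary keyed by date, loaded from the JSON file in /database. A rough stdlib-only sketch of the behaviour follows (not the repository's actual code; the real showtimes.py raises werkzeug's NotFound rather than KeyError):

```python
import json

# Stand-in for the showtimes JSON file in /database.
SHOWTIMES_JSON = """
{
  "20151130": ["720d006c-3a57-4b6a-b18f-9b713b073f3c",
               "a8034f44-aee4-44cf-b32c-74cf452aaaae",
               "39ab85e5-5e8e-4dc5-afea-65dc368bd7ab"]
}
"""

showtimes = json.loads(SHOWTIMES_JSON)

def showtimes_record(date):
    """Roughly what GET /showtimes/<date> returns: movie IDs shown that day."""
    if date not in showtimes:
        raise KeyError(date)  # the real service raises NotFound (HTTP 404)
    return showtimes[date]

print(showtimes_record("20151130"))
```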
You get back a list of movie IDs. This is nice, but not wholly satisfactory. Your next task is to modify the showtimes_record function to contact the movies microservice to get the movie title given its ID, and make the showtimes service return a more user-friendly response that looks like this:
$ curl 127.0.0.1:5002/showtimes/20151130
[
"The Good Dinosaur",
"The Martian",
"Spectre"
]%
The main change needed is simple. We just have to issue an HTTP GET request to the movies API endpoint. We do this by changing showtimes_record as follows (remember to add import requests at the top of showtimes.py):

movies_service = "http://127.0.0.1:5001/movies/{}" # we know this is where the service is running; {} is replaced with the actual ID of the movie we want

def showtimes_record(date):
    if date not in showtimes:
        raise NotFound
    print(showtimes[date])  # debug output
    result = []
    for movie_id in showtimes[date]:
        resp = requests.get(movies_service.format(movie_id))
        result.append(resp.json()["title"])
    return nice_json(result)
Step 3: Dockerization of microservices
We now have two microservices running on localhost and talking to each other. If we need to move these services, it is not easy. Your next task is to wrap these microservices as Docker images and make them portable. Essentially, this involves specifying all the dependencies and running parameters explicitly in a Dockerfile. You can have a go at this by copying from this tutorial.
In this branch, we have created two directories, movieservice and stservice, that create Docker images for the movies and showtimes microservices respectively. The Dockerfile for the movies microservice is shown here. The other one is very similar.
FROM python:3.8-alpine
WORKDIR /
COPY movieservice/requirements.txt .
RUN pip install -r requirements.txt
COPY ./movies.py /
COPY ./database /database
EXPOSE 5001
CMD python movies.py
Docker images are created in layers. The FROM command says this image is derived from a base python 3.8 image. WORKDIR sets the working directory for the main process in the running container. The COPY commands are interesting -- the first argument is on the host machine, and the second argument is in the container. Thus, for example, COPY ./movies.py / is an instruction to copy movies.py to the / directory in the container, which happens to be the working directory. So, when CMD python movies.py runs the main command, it finds movies.py in the / directory, which is its working directory.
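For comparison, the showtimes Dockerfile follows the same layering and differs only in the file names and the port. The sketch below is for orientation; the authoritative version is in stservice/Dockerfile in this branch:

```dockerfile
FROM python:3.8-alpine
WORKDIR /
COPY stservice/requirements.txt .
RUN pip install -r requirements.txt
COPY ./showtimes.py /
COPY ./database /database
EXPOSE 5002
CMD python showtimes.py
```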
Also interesting is the EXPOSE 5001 command, which declares that we want port 5001, on which the microservice is listening, to be exposed to the outside world. However, if you build and run the docker container and then try to query the dockerized service (e.g., with curl http://127.0.0.1:5001/movies), you get a connection refused error. There is a fundamental difference between exposing and publishing a port; publishing is done with the -p flag which we saw in last week's live session. This distinction is discussed here.
The Makefile provided in this branch has a target, make movieservice, which provides the right set of parameters. But even after publishing port 5001, we are still not able to access the microservice (try issuing another curl). The reason is Docker network namespaces. The container is given a different network namespace to isolate it, and because in movies.py we have asked the Flask server to listen on the localhost IP address, it does not respond to queries arriving on the external (container) IP address. The solution is to have the HTTP server listen on 0.0.0.0 (see the differences in movies.py between this branch and the simpleservices branch).
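The difference between the two bind addresses is easy to demonstrate with the standard library alone. This sketch (not from the repository) binds a listening socket to 0.0.0.0 the way the dockerized Flask server must; a socket bound to 127.0.0.1 instead would accept loopback traffic only, which is exactly why the container appeared deaf:

```python
import socket

# Bind to all interfaces (0.0.0.0), letting the OS pick a free port.
# Inside a container this makes the service reachable through Docker's
# virtual interface as well as through loopback.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))
srv.listen(1)
port = srv.getsockname()[1]

# The wildcard-bound socket is still reachable over loopback:
client = socket.create_connection(("127.0.0.1", port))
client.close()
srv.close()
print("socket on port", port, "accepted a loopback connection")
```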
The short solution/summary: Checkout this branch, and run
$ make movieservice stservice # This brings up Docker containers for both services
$ echo "After this we can query each service separately as with the curls below"
$ curl http://127.0.0.1:5001/movies
$ curl 127.0.0.1:5002/showtimes
A simple technical explanation of what is going on at the network level, together with illuminating visuals, can be found in this blog.
Step 4: How to make two dockerized microservices talk to each other
The above has just demonstrated that we can talk to each microservice from the host machine. You may therefore be under the impression that the two microservices should also be able to talk to each other, as they could before we dockerized them. Test this by invoking the showtimes API endpoint that calls the movies service (from Step 2 above): curl 127.0.0.1:5002/showtimes/20151130. Notice that this causes an exception. Interestingly, although we can reach each service from the host machine, the dockerized showtimes service cannot access the movies service container!
What is going on? Essentially, this is due to the excellent isolation provided by Docker through namespaces, which is making development slightly harder. An overview of the intricacies of Docker networking can be found here. The simple solution for us is to tell the Docker daemon to create a network that encompasses both the containers we have created. The Makefile has a target for this: make network.
Once the make target sets up the networking connections between the containers, curl 127.0.0.1:5002/showtimes/20151130 will return the results as it did in Step 2.
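Under the hood, make network boils down to Docker's user-defined bridge network commands, roughly as follows (the container names here are illustrative; see the Makefile for the exact invocation):

```shell
docker network create microservices-net   # create a user-defined bridge network
docker network connect microservices-net movieservice
docker network connect microservices-net stservice
```

A useful side effect of a user-defined network is name-based discovery: containers on it can resolve each other by container name, so the showtimes container can reach the movies service at a hostname like http://movieservice:5001 rather than 127.0.0.1 (check this branch's showtimes.py for the exact URL it uses).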
Step 5: Further steps
The Makefile in the above steps stitches together a number of different things - the ports to expose, the data directory, and creating a network. All this becomes very complicated when you have more services running that talk to each other. To make this simpler, we can use docker compose.

Docker compose takes a file called docker-compose.yaml where all the details of the Makefile can be specified in a very simple and straightforward manner:
services:
  movieservice:
    image: movieservice-img
    build:
      context: .
      dockerfile: movieservice/Dockerfile
    ports:
      - "5001:5001"
    networks:
      - microservices-net # allows communication to all services on this network, i.e., to stservice
  stservice:
    image: stservice-img
    build:
      context: .
      dockerfile: stservice/Dockerfile
    ports:
      - "5002:5002"
    networks:
      - microservices-net # allows communication to all services on this network, i.e., to movieservice

networks:
  # The presence of this object is sufficient to define it
  microservices-net: {}
The above says there are two services, movieservice and stservice, which are both on a network called microservices-net. Note how much simpler this is: we just say that the two services are on the same network, and docker-compose does the magic behind the scenes to make each service visible to the other. For each service, we also define the name of the image and how to build it (the build context directory and the Dockerfile to use). We also say how to map a port inside the container to one outside the container.
Essentially, docker-compose.yaml contains the same information as the Makefile, but it is easier to understand and more "object oriented". It also bundles the image with the service. Often, we do not care about the name of the container image; only the service name matters. Thus, docker compose makes it easier to create the service itself.
The services can be started with docker-compose up (or with docker-compose up --build to build the images as well). If you make a change to a service, you can run docker-compose down and then docker-compose up --build to bring down, rebuild, and bring up the services again. If you merely intend to restart the services, you can run docker-compose stop and docker-compose start.
The docker compose specification gives full details about what you can say in a docker-compose.yaml file. There is also a Getting Started guide and a full reference of the Docker Compose CLI.
Even further steps
The above is already much easier than running all the commands in the Makefile by hand, or creating the Makefile from scratch. However, during development, we will frequently find that the source code within the container has a bug, and we need to make a tiny change and restart the service. This quickly becomes a pain. We can avoid it if we can figure out a way to change source code within a running container.
Volumes are a way to map a directory in the host with a directory in the running container. This can be used for various things -- e.g., you can use it to output files or other persistent information from a running container to the host. You can also use it to have the running container read files from the host, passing config details and secrets (e.g., passwords).
You can find out more about configs and secrets in the docker compose specification. Essentially, they are a way to create volumes with appropriate file permissions for read, write and execute privileges. Here, we learn how to "hot reload" a source file in a running Docker container.
Hot reload requires two things: (i) a change made to a source file on the host should be visible in the running Docker container; (ii) the changed source code should be picked up by the running service. To enable (ii), we use a capability of Flask: by setting the environment variable FLASK_ENV=development, the running Flask service picks up any change to the underlying source code and reflects it the next time we issue an HTTP request. To set this environment variable, we use the environment element in docker-compose.yaml. To enable Flask, within the container, to recognise a change to the source code made on the host, we map the source code directory on the host to a directory within the container. Note that it is not allowed to map a host directory to the root (/) directory of the container. Therefore, we have to move the source code to another directory (say /src/) in the container. We change movieservice's Dockerfile to reflect this (see the diff between the corresponding movieservice Dockerfiles in this branch and other branches).
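Put together, the movieservice entry in docker-compose.yaml gains roughly the two stanzas sketched below. The exact host path and the rest of the entry are assumptions on my part; this branch's actual docker-compose.yaml is authoritative:

```yaml
services:
  movieservice:
    # image, build, ports and networks as before, plus:
    environment:
      - FLASK_ENV=development   # Flask's reloader picks up source changes
    volumes:
      - ./:/src                 # map the host source tree to /src in the container
```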
The docker-compose.yaml in this directory contains the required changes for movieservice. To see how this lets a change be reflected directly, bring up the services with docker-compose up (run from the root directory, where the docker-compose.yaml file is) and run the curl test as before:
$ curl 127.0.0.1:5002/showtimes/20151130
["The Good Dinosaur", "The Martian", "Spectre"]%
Now, if you uncomment line 31 of movieservice.py and run curl again, you should see that the movie titles change from The Good Dinosaur, The Martian and Spectre to hello world in each case:
$ curl 127.0.0.1:5002/showtimes/20151130
["hello world", "hello world", "hello world"]%
Note that if you change anything in showtimes.py, this is not reflected immediately as the stservice container does not have a volume mapping. As an exercise, implement hot source swapping for the stservice container as well.
Credits
The original code is taken from https://github.com/umermansoor/microservices. It has been lightly modified for Python3 compatibility, and further simplified to showcase microservice communications. We have also added a demo of dockerization.