
Using Docker Networks from the Host


Guillaume Briday


If you use Docker and Docker Compose extensively, you know they come with a few inconveniences, in particular managing networks from the host and between containers. Let's see how to simplify things (a bit).

Docker's DNS

Your containers live on the same network, as I explained in the article Docker: Custom Networks, which means they can communicate with each other. You can resolve a container's IP address using the service name, a link alias, or a net-alias. This is made possible by Docker's internal DNS, which takes care of everything for us.

Thus, if your docker-compose.yml looks like this:

version: '3'

services:
  app:
    image: php:7.2-fpm
    depends_on:
      - mysql

  mysql:
    image: mysql
    ports:
      - '3306:3306'
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=laravel

You can, for example, use the following configuration for MySQL from your php container:

DB_CONNECTION=mysql
DB_HOST=mysql
DB_DATABASE=laravel
DB_USERNAME=root
DB_PASSWORD=secret

As seen here, we can set DB_HOST=mysql because the DNS service will return the IP address of the container named mysql.

We can also view this information as follows. If I run docker inspect with the ID of my MySQL container and look under NetworkSettings, I'll find all the information regarding my container's network:

$ docker ps
946f13c85f9a    php        "docker-php-entrypoi..."   Less than a second ago   Up 5 seconds        9000/tcp                         laravelblog_app_1
8212e8b19f61    mysql      "docker-entrypoint.s..."   Less than a second ago   Up 6 seconds        0.0.0.0:3306->3306/tcp           laravelblog_mysql_1

$ docker inspect 8212e8b19f61

# Simplified output

[
    {
        "NetworkSettings": {
            "Bridge": "",
            "Ports": {
                "3306/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "3306"
                    }
                ]
            },
            "Networks": {
                "laravelblog_default": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "8212e8b19f61",
                        "mysql"
                    ],
                    "NetworkID": "299ff2144683c1a14757b5c6e717607217dacd4d774e1200411b7673dd7e1c66",
                    "EndpointID": "6ea28db687adae1105f57afe4b475e852fa4fad618c996b5cc111af26ab1a53e",
                    "Gateway": "172.21.0.1",
                    "IPAddress": "172.21.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:15:00:03",
                    "DriverOpts": null
                }
            }
        }
    }
]

You can see which ports are published; publishing a port is only useful if we want to access the container from the host. We then find the container's IP address on the network named laravelblog_default. A container can belong to multiple networks and thus have several different IP addresses.
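Rather than reading through the full JSON, you can ask docker inspect for just the IP address with a Go template. This is a small sketch using the container name from the docker ps output above; it assumes the containers are running:

```shell
# Print only the IP address of the mysql container, one per network it joins
docker inspect \
  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
  laravelblog_mysql_1
```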

We can also change our .env file in this way to get the same result:

DB_CONNECTION=mysql
DB_HOST=172.21.0.3
DB_DATABASE=laravel
DB_USERNAME=root
DB_PASSWORD=secret

However, be careful: this address may change over time. I only use it here to show how things work; you should keep using the DNS name.

Let's go even further. From a container, we can get DNS information and find the IP addresses of other services:

$ docker exec -it 946f13c85f9a bash

> cat /etc/resolv.conf
nameserver 127.0.0.11 # This gives us the IP address of Docker's DNS
options ndots:0

> apt-get update && apt-get install -y dnsutils # We need to install dig in the container for our tests
> dig mysql
; <<>> DiG 9.10.3-P4-Debian <<>> mysql
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28649
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;mysql.         IN    A

;; ANSWER SECTION:
mysql.   600    IN    A    172.21.0.3

The DNS service returns the container's address correctly.
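If you would rather not install dig, getent is already present in Debian-based images such as php:7.2-fpm and queries the same resolver. A lighter check, assuming the app service from the compose file above is running:

```shell
# Resolve the service name through Docker's embedded DNS, without dig
docker-compose exec app getent hosts mysql
```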

Accessing the network from the host

If I try to run a PHP command that needs the MySQL database, I will get an error. Indeed, Docker's DNS only works inside containers, and my host cannot resolve the hostname mysql.

This does not prevent us from accessing the containers from the host: if a port is published, we can simply change the address to 127.0.0.1. However, with that configuration the app container would no longer be able to reach the mysql service.
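As a concrete check, since the compose file publishes port 3306, you can reach MySQL from the host through the published port. This assumes a mysql client is installed locally:

```shell
# Connect from the host through the published port, not the container IP
mysql -h 127.0.0.1 -P 3306 -u root -psecret laravel
```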

We don't want to change our configuration every time we want to execute a command, so we have two choices: either run our commands in new containers, or modify our local /etc/hosts.

Run commands in a new container

For example, the command to run Laravel migrations:

# Without Docker
$ php artisan migrate

# With Docker
$ docker-compose run --rm app php artisan migrate

Docker will then launch a new container, which will be deleted at the end of the execution, on the same network as the other services to run the migrations.

This is quite fast (especially on Linux), and it works well. It is just annoying to have to prefix the command with docker-compose run --rm [service_name] [command]. Additionally, it's challenging to create a generic alias because it depends on the service name and the options we want to pass.
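One way to soften this is a small shell function that only fixes the invariant part of the command and leaves the service name as an argument. The name dcr is a hypothetical helper, not part of Docker:

```shell
# dcr: run a one-off command in a throwaway container on the compose network.
# The service name stays an argument, so the wrapper remains generic.
dcr() {
  docker-compose run --rm "$@"
}

# Usage:
#   dcr app php artisan migrate
```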

Modify /etc/hosts

In this case, we'll simply associate the hostname mysql with the address 127.0.0.1 so that we don't have to change the configuration every time but instead point to the correct address.

Add this line to the end of the /etc/hosts file:

# /etc/hosts

127.0.0.1   mysql

Now, I can run the following commands from my host, and my configuration will work for both containers and the host:

$ php artisan migrate

Of course, in this case, you will need to have the commands installed on your host, unlike running in a container, which only requires Docker images to function.

Additionally, the corresponding containers must already be running, whereas docker-compose run starts every container the command depends on.

I don't think this is a problem, as it's very rare to run these commands outside a development context (where all containers are already running).

Also, you will need one association per service that must be reachable under the same name from both the host and the containers.
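For example, if the stack also had a redis service (hypothetical here), the file would grow by one line per service:

```
# /etc/hosts

127.0.0.1   mysql
127.0.0.1   redis
```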

Conclusion

I find this trick very useful when working heavily with Docker. We retain the advantages of working on our host and the benefits of isolating processes within containers.
