Docker tips and tricks

Docker container as a systemd service

If you want to run a Docker-based service under systemd, write a templated unit file and enable it:

# Write systemd unit file
cat << EOF > /etc/systemd/system/docker-container@ecs-agent.service
[Unit]
Description=Docker Container %I
Requires=docker.service
After=docker.service cloud-final.service

[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f %i 
ExecStart=/usr/bin/docker run --name %i \
--privileged \
--restart=on-failure:10 \
--volume=/var/run:/var/run \
--volume=/var/log/ecs/:/log:Z \
--volume=/var/lib/ecs/data:/data:Z \
--volume=/etc/ecs:/etc/ecs \
--net=host \
--env-file=/etc/ecs/ecs.config \
amazon/amazon-ecs-agent:latest
ExecStop=/usr/bin/docker stop %i

[Install]
WantedBy=default.target
EOF

systemctl daemon-reload
systemctl enable docker-container@ecs-agent.service
systemctl start docker-container@ecs-agent.service
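
To check that the instance came up, look at its status and follow its logs (the unit name here assumes the ecs-agent instance from the example above):

systemctl status docker-container@ecs-agent.service
journalctl -u docker-container@ecs-agent.service -f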

Portainer

Portainer is a very useful GUI for managing Docker hosts and clusters. Simply open port 9000 on your host, run the container, open host_ip:9000 in your browser, create a user, and manage all your containers.

docker container run -d \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer

Install Docker

The Docker installation package available in the official Ubuntu repository may not be the latest version. To ensure we get the latest version, we’ll install Docker from the official Docker repository. To do that, we’ll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

Ubuntu

First, update your existing list of packages:

sudo apt update

Next, install a few prerequisite packages which let apt use packages over HTTPS:

sudo apt install apt-transport-https ca-certificates curl software-properties-common

Then add the GPG key for the official Docker repository to your system:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

# amd64
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
or
# arm64
sudo add-apt-repository \
   "deb [arch=arm64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

Next, update the package database with the Docker packages from the newly added repo and install Docker:

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

Output:

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
     Docs: https://docs.docker.com
 Main PID: 10096 (dockerd)
    Tasks: 16
   CGroup: /system.slice/docker.service
           ├─10096 /usr/bin/dockerd -H fd://
           └─10113 docker-containerd --config /var/run/docker/containerd/containerd.toml

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client.

To work more comfortably from your regular user account (i.e. run docker without sudo), create the docker group:

sudo groupadd docker

Add your user to the docker group:

sudo usermod -aG docker $USER
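
Log out and back in (or run newgrp docker) so the new group membership takes effect, then verify that docker works without sudo:

newgrp docker
docker run hello-world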

Container internals

Understanding containers has to cover things like chroot, the container file system and layers, cgroup isolation, and the pros and cons of containers.

An explanation of how these work without a discussion of cgroups and how that facilitates process/memory/network isolation is going to sound weak to an interviewer. So read up on how cgroups are implemented in the kernel. Then learn about the different ways lxc containers work vs docker containers. Then learn about emerging container specification standards. Why are they happening? Understand what’s going on both under the hood in the kernel and the shifts happening in the industry.
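
As a rough illustration of cgroup-based resource isolation, you can start a memory-limited container and read back the limit the kernel actually enforces. This is only a sketch: the cgroup paths below depend on your cgroup version and Docker's cgroup driver, so adjust them for your system.

# start a container with a 256 MiB memory limit
docker run -d --name memtest --memory=256m nginx

# find out which cgroup the container's main process was placed in
PID=$(docker inspect -f '{{.State.Pid}}' memtest)
cat /proc/$PID/cgroup

# with cgroup v2 the enforced limit lives in memory.max under that path, e.g.:
#   cat /sys/fs/cgroup/system.slice/docker-<full-container-id>.scope/memory.max
#   268435456
# with cgroup v1 it is memory.limit_in_bytes under /sys/fs/cgroup/memory/...

docker rm -f memtest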

chroot

A chroot environment provides functionality similar to that of a virtual machine, but it is a lighter solution. The captive system doesn’t need a hypervisor to be installed and configured, such as VirtualBox or Virtual Machine Manager. Nor does it need to have a kernel installed in the captive system. The captive system shares your existing kernel.

Creating a chroot environment

We need a directory to act as the root directory of the chroot environment. So that we have a shorthand way of referring to that directory we’ll create a variable and store the name of the directory in it. Here we’re setting up a variable to store a path to the “testroot” directory. It doesn’t matter if this directory doesn’t exist yet, we’re going to create it soon. If the directory does exist, it should be empty.

chr=/home/dave/testroot
mkdir -p $chr

We need to create directories to hold the portions of the operating system our chroot environment will require. We’re going to set up a minimalist Linux environment that uses Bash as the interactive shell. We’ll also include the touch, rm, and ls commands. That will allow us to use all Bash’s built-in commands and touch, rm, and ls. We’ll be able to create, list and remove files, and use Bash. And—in this simple example—that’s all.

mkdir -p $chr/{bin,lib,lib64}

cd $chr

Let’s copy the binaries that we need in our minimalist Linux environment from your regular “/bin” directory into our chroot “/bin” directory. The -v (verbose) option makes cp tell us what it is doing as it performs each copy action.

cp -v /bin/{bash,touch,ls,rm} $chr/bin

Dependencies

These binaries will have dependencies. We need to discover what they are and copy those files into our environment. This way, for example, we can add all the dependencies for /bin/bash:

list="$(ldd /bin/bash | egrep -o '/lib.*\.[0-9]')"
for i in $list; do cp -v --parents "$i" "${chr}"; done

Use that technique to capture the dependencies of each of the other commands, or write a single loop over all of the binaries, as in the sketch below.
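
A minimal sketch of such a loop, assuming the same $chr variable and the four binaries copied above:

for bin in /bin/{bash,touch,ls,rm}; do
    # copy every shared library the binary needs, preserving its path inside the chroot
    for lib in $(ldd "$bin" | egrep -o '/lib.*\.[0-9]'); do
        cp -v --parents "$lib" "$chr"
    done
done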

chroot command

The last of our dependencies are copied into our chroot environment. We’re finally ready to use the chroot command. This command sets the root of the chroot environment, and specifies which application to run as the shell.

sudo chroot $chr /bin/bash

Our chroot environment is now active. The terminal window prompt has changed, and the interactive shell is now being handled by the bash shell in our environment.
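
Inside the chroot you can exercise the handful of commands we copied in (a quick check, nothing more):

ls /
touch test.txt
ls
rm test.txt

Anything we did not copy, such as top, fails with "command not found", because it simply isn't present in the captive file system.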

Use exit to leave the chroot environment:

exit

Open sockets inside a Docker container

First, find the PID of the container's main process on the host:

docker inspect -f '{{.State.Pid}}' container_name_or_id

Once you have the PID, use it as the argument to the target (-t) option of nsenter. For example, to run netstat inside the container's network namespace:

$ sudo nsenter -t 15652 -n netstat
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN  
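
The two steps can be combined into a single command (substitute your own container name or ID):

sudo nsenter -t "$(docker inspect -f '{{.State.Pid}}' container_name_or_id)" -n netstat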
