
The Docker Book

James Turnbull

March 11, 2017

Version: v17.03.0 (381319)

Website: The Docker Book


Some rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted in any form or by any means, electronic,
mechanical or photocopying, recording, or otherwise, for commercial purposes
without the prior permission of the publisher.
This work is licensed under the Creative Commons
Attribution-NonCommercial-NoDerivs 3.0 Unported License. To view a copy of
this license, visit here.
© Copyright 2015 - James Turnbull <[email protected]>
Contents

Page

Foreword 1
Who is this book or? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
A note about versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Credits and Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . 2
Technical Reviewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Scott Collier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
John Ferlito . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Pris Nasrat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Technical Illustrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Prooreader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Conventions in the book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Code and Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Colophon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Errata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Chapter 1 Introduction 6
Introducing Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
An easy and lightweight way to model reality . . . . . . . . . . . . . 8
A logical segregation o duties . . . . . . . . . . . . . . . . . . . . . . 8
Fast, ecient development lie cycle . . . . . . . . . . . . . . . . . . 8
Encourages service oriented architecture . . . . . . . . . . . . . . . . 9
Docker components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Docker client and server . . . . . . . . . . . . . . . . . . . . . . . . . . 9


Docker images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Registries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Compose and Swarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
What can you use Docker or? . . . . . . . . . . . . . . . . . . . . . . . . . 13
Docker with conguration management . . . . . . . . . . . . . . . . . . . 14
Docker’s technical components . . . . . . . . . . . . . . . . . . . . . . . . . 15
What’s in the book? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Docker resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

Chapter 2 Installing Docker 18


Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Installing on Ubuntu and Debian . . . . . . . . . . . . . . . . . . . . . . . 20
Checking or prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . 21
Installing Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Docker and UFW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Installing on Red Hat and amily . . . . . . . . . . . . . . . . . . . . . . . 26
Checking or prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . 26
Installing Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Starting the Docker daemon on the Red Hat amily . . . . . . . . . . 30
Docker or Mac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Installing Docker or Mac . . . . . . . . . . . . . . . . . . . . . . . . . 31
Testing Docker or Mac . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Docker or Windows installation . . . . . . . . . . . . . . . . . . . . . . . . 33
Installing Docker or Windows . . . . . . . . . . . . . . . . . . . . . . 34
Testing Docker or Windows . . . . . . . . . . . . . . . . . . . . . . . . 34
Using Docker on OSX and Windows with this book . . . . . . . . . . . . 35
Docker installation script . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Binary installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
The Docker daemon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Conguring the Docker daemon . . . . . . . . . . . . . . . . . . . . . 38
Checking that the Docker daemon is running . . . . . . . . . . . . . 41
Upgrading Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Docker user interaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43


Chapter 3 Getting Started with Docker 45


Ensuring Docker is ready . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Running our rst container . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Working with our rst container . . . . . . . . . . . . . . . . . . . . . . . . 50
Container naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Starting a stopped container . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Attaching to a container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Creating daemonized containers . . . . . . . . . . . . . . . . . . . . . . . . 56
Seeing what’s happening inside our container . . . . . . . . . . . . . . . 57
Docker log drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Inspecting the container’s processes . . . . . . . . . . . . . . . . . . . . . . 60
Docker statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Running a process inside an already running container . . . . . . . . . . 62
Stopping a daemonized container . . . . . . . . . . . . . . . . . . . . . . . 64
Automatic container restarts . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Finding out more about our container . . . . . . . . . . . . . . . . . . . . 66
Deleting a container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

Chapter 4 Working with Docker images and repositories 70


What is a Docker image? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Listing Docker images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Pulling images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Searching or images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Building our own images . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Creating a Docker Hub account . . . . . . . . . . . . . . . . . . . . . . 81
Using Docker commit to create images . . . . . . . . . . . . . . . . . 83
Building images with a Dockerle . . . . . . . . . . . . . . . . . . . . 86
Building the image rom our Dockerle . . . . . . . . . . . . . . . . . 90
What happens i an instruction ails? . . . . . . . . . . . . . . . . . . 93
Dockerles and the build cache . . . . . . . . . . . . . . . . . . . . . . 95
Using the build cache or templating . . . . . . . . . . . . . . . . . . . 95
Viewing our new image . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Launching a container rom our new image . . . . . . . . . . . . . . 98
Dockerle instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . 102


Pushing images to the Docker Hub . . . . . . . . . . . . . . . . . . . . . . 126


Automated Builds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Deleting an image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Running your own Docker registry . . . . . . . . . . . . . . . . . . . . . . 133
Running a registry rom a container . . . . . . . . . . . . . . . . . . . 134
Testing the new registry . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Alternative Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Quay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

Chapter 5 Testing with Docker 138


Using Docker to test a static website . . . . . . . . . . . . . . . . . . . . . 139
An initial Dockerle or the Sample website . . . . . . . . . . . . . . 139
Building our Sample website and Nginx image . . . . . . . . . . . . 143
Building containers rom our Sample website and Nginx image . . 145
Editing our website . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Using Docker to build and test a web application . . . . . . . . . . . . . 150
Building our Sinatra application . . . . . . . . . . . . . . . . . . . . . 150
Creating our Sinatra container . . . . . . . . . . . . . . . . . . . . . . 152
Extending our Sinatra application to use Redis . . . . . . . . . . . . 157
Connecting our Sinatra application to the Redis container . . . . . 162
Docker internal networking . . . . . . . . . . . . . . . . . . . . . . . . 163
Docker networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Connecting containers summary . . . . . . . . . . . . . . . . . . . . . 181
Using Docker or continuous integration . . . . . . . . . . . . . . . . . . . 182
Build a Jenkins and Docker server . . . . . . . . . . . . . . . . . . . . 183
Create a new Jenkins job . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Running our Jenkins job . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Next steps with our Jenkins job . . . . . . . . . . . . . . . . . . . . . . 197
Summary o our Jenkins setup . . . . . . . . . . . . . . . . . . . . . . 198
Multi-conguration Jenkins . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Create a multi-conguration job . . . . . . . . . . . . . . . . . . . . . 198
Testing our multi-conguration job . . . . . . . . . . . . . . . . . . . 203
Summary o our multi-conguration Jenkins . . . . . . . . . . . . . . 205
Other alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206


Drone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Shippable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206

Chapter 6 Building services with Docker 207


Building our rst application . . . . . . . . . . . . . . . . . . . . . . . . . . 207
The Jekyll base image . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Building the Jekyll base image . . . . . . . . . . . . . . . . . . . . . . 209
The Apache image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Building the Jekyll Apache image . . . . . . . . . . . . . . . . . . . . 213
Launching our Jekyll site . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Updating our Jekyll site . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Backing up our Jekyll volume . . . . . . . . . . . . . . . . . . . . . . . 219
Extending our Jekyll website example . . . . . . . . . . . . . . . . . . 221
Building a Java application server with Docker . . . . . . . . . . . . . . . 221
A WAR le etcher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Fetching a WAR le . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Our Tomcat 7 application server . . . . . . . . . . . . . . . . . . . . . 225
Running our WAR le . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Building on top o our Tomcat application server . . . . . . . . . . . 228
A multi-container application stack . . . . . . . . . . . . . . . . . . . . . . 232
The Node.js image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
The Redis base image . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
The Redis primary image . . . . . . . . . . . . . . . . . . . . . . . . . . 238
The Redis replica image . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Creating our Redis back-end cluster . . . . . . . . . . . . . . . . . . . 240
Creating our Node container . . . . . . . . . . . . . . . . . . . . . . . . 246
Capturing our application logs . . . . . . . . . . . . . . . . . . . . . . 247
Summary o our Node stack . . . . . . . . . . . . . . . . . . . . . . . . 251
Managing Docker containers without SSH . . . . . . . . . . . . . . . . . . 252
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253

Chapter 7 Docker Orchestration and Service Discovery 254


Docker Compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Installing Docker Compose . . . . . . . . . . . . . . . . . . . . . . . . . 256


Getting our sample application . . . . . . . . . . . . . . . . . . . . . . 258


The docker-compose.yml le . . . . . . . . . . . . . . . . . . . . . . . 262
Running Compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Using Compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Compose in summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Consul, Service Discovery and Docker . . . . . . . . . . . . . . . . . . . . 272
Building a Consul image . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Testing a Consul container locally . . . . . . . . . . . . . . . . . . . . 277
Running a Consul cluster in Docker . . . . . . . . . . . . . . . . . . . 279
Starting the Consul bootstrap node . . . . . . . . . . . . . . . . . . . . 282
Starting the remaining nodes . . . . . . . . . . . . . . . . . . . . . . . 285
Running a distributed service with Consul in Docker . . . . . . . . . 293
Docker Swarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Understanding the Swarm . . . . . . . . . . . . . . . . . . . . . . . . . 307
Installing Swarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Setting up a Swarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Running a service on your Swarm . . . . . . . . . . . . . . . . . . . . 313
Orchestration alternatives and components . . . . . . . . . . . . . . . . . 318
Fleet and etcd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Kubernetes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Apache Mesos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Helios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Centurion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320

Chapter 8 Using the Docker API 321


The Docker APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
First steps with the Remote API . . . . . . . . . . . . . . . . . . . . . . . . 322
Testing the Docker Remote API . . . . . . . . . . . . . . . . . . . . . . . . 326
Managing images with the API . . . . . . . . . . . . . . . . . . . . . . 327
Managing containers with the API . . . . . . . . . . . . . . . . . . . . 329
Improving the TProv application . . . . . . . . . . . . . . . . . . . . . . . 334
Authenticating the Docker Remote API . . . . . . . . . . . . . . . . . . . . 339
Create a Certicate Authority . . . . . . . . . . . . . . . . . . . . . . . 340
Create a server certicate signing request and key . . . . . . . . . . 342


Conguring the Docker daemon . . . . . . . . . . . . . . . . . . . . . 346


Creating a client certicate and key . . . . . . . . . . . . . . . . . . . 347
Conguring our Docker client or authentication . . . . . . . . . . . 350
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352

Chapter 9 Getting help and extending Docker 353


Getting help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
The Docker orums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Docker on IRC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Docker on GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Reporting issues or Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Setting up a build environment . . . . . . . . . . . . . . . . . . . . . . . . 355
Install Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Install source and build tools . . . . . . . . . . . . . . . . . . . . . . . 356
Check out the source . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Contributing to the documentation . . . . . . . . . . . . . . . . . . . . 357
Build the environment . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
Running the tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
Use Docker inside our development environment . . . . . . . . . . . 362
Submitting a pull request . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Merge approval and maintainers . . . . . . . . . . . . . . . . . . . . . 365
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366

List o Figures 368

List o Listings 382

Index 383



Foreword

Who is this book or?

The Docker Book is or developers, sysadmins, and DevOps-minded olks who
want to implement Docker™ and container-based virtualization.
There is an expectation that the reader has basic Linux/Unix skills and is amiliar
with the command line, editing les, installing packages, managing services, and
basic networking.

A note about versions

This books ocuses on Docker Community Edition version v17.03.0-ce and later.
It is not generally backwards-compatible with earlier releases. Indeed, it is rec-
ommended that or production purposes you use Docker version v17.03.0-ce or
later.
In March 2017 Docker re-versioned and renamed their product lines. The Docker
Engine version went rom Docker 1.13.1 to 17.03.0. The product was renamed to
become the Docker Community Edition or Docker CE. When we reer to Docker
in this book we’re generally reerencing the Docker Community Edition.


Credits and Acknowledgments


• My partner and best riend, Ruth Brown, who continues to humor me despite
my continuing to write books.
• The team at Docker Inc., or developing Docker and helping out during the
writing o the book.
• The olks in the #docker channel and the Docker mailing list.
• Royce Gilbert or not only creating the amazing technical illustrations, but
also the cover.
• Abhinav Ajgaonkar or his Node.js and Express example application.
• The technical review team or keeping me honest and pointing out all the
stupid mistakes.
• Robert P. J. Day - who provided amazingly detailed errata or the book ater
release.

Images on pages 38, 45, 48, courtesy o Docker, Inc.


Docker™ is a registered trademark o Docker, Inc.

Technical Reviewers

Scott Collier

Scott Collier is a Senior Principal System Engineer for Red Hat’s Systems Design
and Engineering team. This team identifies and works on high-value solution
stacks based on input from Sales, Marketing, and Engineering teams and develops
reference architectures for consumption by internal and external customers. Scott
is a Red Hat Certified Architect (RHCA) with more than 15 years of IT experience,
currently focused on Docker, OpenShift, and other products in the Red Hat
portfolio.
When he’s not tinkering with distributed architectures, he can be found running,
hiking, camping, and eating barbecue around the Austin, TX, area with his wife
and three children. His notes on technology and other things can be found at
http://colliernotes.com.


John Ferlito

John is a serial entrepreneur as well as an expert in highly available and scalable
infrastructure. John is currently a founder and CTO of Bulletproof, who provide
Mission Critical Cloud, and CTO of Vquence, a Video Metrics aggregator.
In his spare time, John is involved in the FOSS communities. He was a co-
organizer of linux.conf.au 2007 and a committee member of SLUG in 2007,
and he has worked on various open-source projects, including Debian, Ubuntu,
Puppet, and the Annodex suite. You can read more about John’s work on his
blog. John has a Bachelor of Engineering (Computing) with Honors from the
University of New South Wales.

Pris Nasrat

Pris Nasrat works as an Engineering Manager at Etsy and is a Docker contributor.

They have worked on a variety of open source tools in the systems engineering
space, including boot loaders, package management, and configuration
management.
Pris has worked in a variety of Systems Administration and Software Development
roles, including working as an SRE at Google, a Software Engineer at Red Hat and
as an Infrastructure Specialist Consultant at ThoughtWorks. Pris has spoken at
various conferences, from talking about Agile Infrastructure at Agile 2009 during
the early days of the DevOps movement to smaller meetups and conferences.

Technical Illustrator
Royce Gilbert has over 30 years’ experience in CAD design, computer support,
network technologies, project management, and business systems analysis for major
Fortune 500 companies, including Enron, Compaq, Koch Industries, and Amoco
Corp. He is currently employed as a Systems/Business Analyst at Kansas State
University in Manhattan, KS. In his spare time he does Freelance Art and Technical
Illustration as sole proprietor of Royce Art. He and his wife of 38 years are living
in and restoring a 127-year-old stone house nestled in the Flinthills of Kansas.


Prooreader

Q grew up in the New York area and has been a high school teacher, cupcake icer,
scientist wrangler, orensic anthropologist, and catastrophic disaster response
planner. She now resides in San Francisco, making music, acting, putting together
ng-newsletter, and taking care o the ne olks at Stripe.

Author

James is an author and open-source geek. His most recent books are The
Terraform Book about infrastructure management tool Terraform, The Art of
Monitoring about monitoring, The Docker Book about Docker, and The Logstash Book
about the popular open-source logging tool. James also authored two books about
Puppet (Pro Puppet and the earlier book about Puppet). He is the author of three
other books, including Pro Linux System Administration, Pro Nagios 2.0, and
Hardening Linux.
He was formerly CTO at Kickstarter, at Docker as VP of Services and Support,
Venmo as VP of Engineering, and Puppet as VP of Technical Operations. He likes
food, wine, books, photography, and cats. He is not overly keen on long walks on
the beach or holding hands.

Conventions in the book

This is an inline code statement.


This is a code block:

Listing 1: Sample code block

This is a code block


Long code strings are broken.

Code and Examples

You can nd all the code and examples rom the book at http://www.dockerbook.
com/code/index.html, or you can check out the GitHub https://github.com/
turnbullpress/dockerbook-code.

Colophon

This book was written in Markdown with a large dollop of LaTeX. It was then
converted to PDF and other formats using PanDoc (with some help from scripts
written by the excellent folks who wrote Backbone.js on Rails).

Errata

Please email any errata you nd to [email protected].

Version

This is version v17.03.0 (381319) o The Docker Book.



Chapter 1

Introduction

Containers have a long and storied history in computing. Unlike hypervisor
virtualization, where one or more independent machines run virtually on physical
hardware via an intermediation layer, containers instead run in user space on top
of an operating system’s kernel. As a result, container virtualization is often called
operating system-level virtualization. Container technology allows multiple
isolated user space instances to be run on a single host.
As a result of their status as guests of the operating system, containers are
sometimes seen as less flexible: they can generally only run the same or a similar guest
operating system as the underlying host. For example, you can run Red Hat
Enterprise Linux on an Ubuntu server, but you can’t run Microsoft Windows on top
of an Ubuntu server.
Containers have also been seen as less secure than the full isolation of hypervisor
virtualization. Countering this argument is that lightweight containers lack the
larger attack surface of the full operating system needed by a virtual machine
combined with the potential exposures of the hypervisor layer itself.
Despite these limitations, containers have been deployed in a variety of use
cases. They are popular for hyperscale deployments of multi-tenant services, for
lightweight sandboxing, and, despite concerns about their security, as process
isolation environments. Indeed, one of the more common examples of a container
is a chroot jail, which creates an isolated directory environment for running
processes.
processes. Attackers, i they breach the running process in the jail, then nd
themselves trapped in this environment and unable to urther compromise a host.
More recent container technologies have included OpenVZ, Solaris Zones, and
Linux containers like lxc. Using these more recent technologies, containers can
now look like ull-blown hosts in their own right rather than just execution envi-
ronments. In Docker’s case, having modern Linux kernel eatures, such as control
groups and namespaces, means that containers can have strong isolation, their
own network and storage stacks, as well as resource management capabilities to
allow riendly co-existence o multiple containers on a host.
Containers are generally considered a lean technology because they require lim-
ited overhead. Unlike traditional virtualization or paravirtualization technologies,
they do not require an emulation layer or a hypervisor layer to run and instead
use the operating system’s normal system call interace. This reduces the over-
head required to run containers and can allow a greater density o containers to
run on a host.
Despite their history containers haven’t achieved large-scale adoption. A large
part o this can be laid at the eet o their complexity: containers can be complex,
hard to set up, and dicult to manage and automate. Docker aims to change that.

Introducing Docker

Docker is an open-source engine that automates the deployment of applications
into containers. It was written by the team at Docker, Inc (formerly dotCloud Inc,
an early player in the Platform-as-a-Service (PAAS) market), and released by them
under the Apache 2.0 license.

 NOTE Disclaimer and disclosure: I am an advisor at Docker.

So what is special about Docker? Docker adds an application deployment engine
on top of a virtualized container execution environment. It is designed to provide
a lightweight and ast environment in which to run your code as well as an ecient
workow to get that code rom your laptop to your test environment and then into
production. Docker is incredibly simple. Indeed, you can get started with Docker
on a minimal host running nothing but a compatible Linux kernel and a Docker
binary. Docker’s mission is to provide:

An easy and lightweight way to model reality

Docker is ast. You can Dockerize your application in minutes. Docker relies on a
copy-on-write model so that making changes to your application is also incredibly
ast: only what you want to change gets changed.
You can then create containers running your applications. Most Docker contain-
ers take less than a second to launch. Removing the overhead o the hypervisor
also means containers are highly perormant and you can pack more o them into
your hosts and make the best possible use o your resources.
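If you want a rough sense of that launch speed yourself, a simple timing test works (an illustrative command only; it assumes Docker is installed and the ubuntu image has already been pulled once, and your timings will vary with your host):

$ time docker run ubuntu /bin/true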

A logical segregation o duties

With Docker, Developers care about their applications running inside containers,
and Operations cares about managing the containers. Docker is designed to
enhance consistency by ensuring the environment in which your developers write
code matches the environments into which your applications are deployed. This
reduces the risk of ”worked in dev, now an ops problem.”

Fast, ecient development lie cycle

Docker aims to reduce the cycle time between code being written and code being
tested, deployed, and used. It aims to make your applications portable, easy to
build, and easy to collaborate on.


Encourages service oriented architecture

Docker also encourages service-oriented and microservices architectures. Docker
recommends that each container run a single application or process. This
promotes a distributed application model where an application or service is
represented by a series of inter-connected containers. This makes it easy to distribute,
scale, debug and introspect your applications.

 NOTE You don’t need to build your applications this way if you don’t wish.
You can easily run a multi-process application inside a single container.

Docker components

Let’s look at the core components that compose the Docker Community Edition:

• The Docker client and server, also called the Docker Engine.
• Docker Images
• Registries
• Docker Containers

Docker client and server

Docker is a client-server application. The Docker client talks to the Docker server
or daemon, which, in turn, does all the work. You’ll also sometimes see the Docker
daemon called the Docker Engine. Docker ships with a command line client binary,
docker, as well as a full RESTful API to interact with the daemon: dockerd. You
can run the Docker daemon and client on the same host or connect your local
Docker client to a remote daemon running on another host. You can see Docker’s
architecture depicted here:

Figure 1.1: Docker architecture
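
Once you have Docker installed, you can see this client-server split for yourself with the docker version command (a quick, hedged illustration; the exact fields in the output vary between Docker releases):

$ docker version   # reports the client version and the daemon (server) version it is talking to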


Docker images

Images are the building blocks of the Docker world. You launch your containers
from images. Images are the ”build” part of Docker’s life cycle. They are a
layered format, using Union file systems, that are built step-by-step using a series of
instructions. For example:

• Add a file.
• Run a command.
• Open a port.

You can consider images to be the ”source code” for your containers. They are
highly portable and can be shared, stored, and updated. In the book, we’ll learn
how to use existing images as well as build our own images.
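
As a hedged illustration of those three instruction types, here is a minimal Dockerfile sketch (not one of the book’s examples; the ubuntu:16.04 base image is an assumed choice and file.txt is a placeholder assumed to exist in the build context):

# Minimal Dockerfile: each instruction creates a new image layer
FROM ubuntu:16.04
ADD file.txt /opt/file.txt   # add a file
RUN apt-get update           # run a command
EXPOSE 80                    # open a port

We’ll see the real Dockerfile syntax in detail in Chapter 4.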

Registries

Docker stores the images you build in registries. There are two types of registries:
public and private. Docker, Inc., operates the public registry for images, called
the Docker Hub. You can create an account on the Docker Hub and use it to share
and store your own images.
The Docker Hub also contains, at last count, over 10,000 images that other people
have built and shared. Want a Docker image for an Nginx web server, the Asterisk
open source PABX system, or a MySQL database? All of these are available, along
with a whole lot more.
You can also store images that you want to keep private on the Docker Hub. These
images might include source code or other proprietary information you want to
keep secure or only share with other members of your team or organization.
You can also run your own private registry, and we’ll show you how to do that in
Chapter 4. This allows you to store images behind your firewall, which may be a
requirement for some organizations.


Containers

Docker helps you build and deploy containers inside of which you can package
your applications and services. As we’ve just learned, containers are launched
from images and can contain one or more running processes. You can think about
images as the building or packing aspect of Docker and the containers as the
running or execution aspect of Docker.
A Docker container is:

• An image format.
• A set of standard operations.
• An execution environment.

Docker borrows the concept of the standard shipping container, used to transport
goods globally, as a model for its containers. But instead of shipping goods, Docker
containers ship software.
Each container contains a software image -- its ’cargo’ -- and, like its physical
counterpart, allows a set of operations to be performed. For example, it can be
created, started, stopped, restarted, and destroyed.
Like a shipping container, Docker doesn’t care about the contents of the container
when performing these actions; for example, whether a container is a web server,
a database, or an application server. Each container is loaded the same as any
other container.
Docker also doesn’t care where you ship your container: you can build on your
laptop, upload to a registry, then download to a physical or virtual server, test,
deploy to a cluster of a dozen Amazon EC2 hosts, and run. Like a normal shipping
container, it is interchangeable, stackable, portable, and as generic as possible.
With Docker, we can quickly build an application server, a message bus, a utility
appliance, a CI test bed for an application, or one of a thousand other possible
applications, services, and tools. It can build local, self-contained test environments
or replicate complex application stacks for production or development purposes.
The possible use cases are endless.


Compose and Swarm

In addition to solitary containers we can also run Docker containers in stacks and
clusters, what Docker calls swarms. The Docker ecosystem contains two more
tools:

• Docker Compose - which allows you to run stacks of containers to represent
application stacks, for example web server, application server and database
server containers running together to serve a specific application.
• Docker Swarm - which allows you to create clusters of containers, called
swarms, that allow you to run scalable workloads.

We’ll look at Docker Compose and Swarm in Chapter 7.
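
As a small taste of what’s coming, a minimal docker-compose.yml might look like the following sketch (illustrative only, using the early single-level Compose file format; the nginx image choice is an assumption, and Chapter 7 covers the real syntax):

web:
  image: nginx
  ports:
    - "80:80"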

What can you use Docker or?

So why should you care about Docker or containers in general? We’ve discussed
briefly the isolation that containers provide; as a result, they make excellent
sandboxes for a variety of testing purposes. Additionally, because of their ’standard’
nature, they also make excellent building blocks for services. Some of the
examples of Docker running out in the wild include:

• Helping make your local development and build workflow faster, more
efficient, and more lightweight. Local developers can build, run, and share
Docker containers. Containers can be built in development and promoted to
testing environments and, in turn, to production.
• Running stand-alone services and applications consistently across multiple
environments, a concept especially useful in service-oriented architectures
and deployments that rely heavily on micro-services.
• Using Docker to create isolated instances to run tests like, for example, those
launched by a Continuous Integration (CI) suite like Jenkins CI.
• Building and testing complex applications and architectures on a local host
prior to deployment into a production environment.
• Building a multi-user Platform-as-a-Service (PAAS) infrastructure.
• Providing lightweight stand-alone sandbox environments for developing,
testing, and teaching technologies, such as the Unix shell or a programming
language.
• Software as a Service applications.
• Highly performant, hyperscale deployments of hosts.

You can see a list of some of the early projects built on and around the Docker
ecosystem in the blog post here.

Docker with conguration management


Since Docker was announced, there have been a lot of discussions about where
Docker fits with configuration management tools like Puppet and Chef. Docker
includes an image-building and image-management solution. One of the drivers
for modern configuration management tools was the response to the ”golden
image” model. With golden images, you end up with massive and unmanageable
image sprawl: large numbers of (deployed) complex images in varying states of
versioning. You create randomness and exacerbate entropy in your environment
as your image use grows. Images also tend to be heavy and unwieldy. This often
forces manual change or layers of deviation and unmanaged configuration on top
of images, because the underlying images lack appropriate flexibility.
Compared to traditional image models, Docker is a lot more lightweight: images
are layered, and you can quickly iterate on them. There are some legitimate
arguments to suggest that these attributes alleviate many of the management
problems traditional images present. It is not immediately clear, though, that this
alleviation represents the ability to totally replace or supplement configuration
management tools. There is amazing power and control to be gained through the
idempotence and introspection that configuration management tools can provide.
Docker itself still needs to be installed, managed, and deployed on a host. That
host also needs to be managed. In turn, Docker containers may need to be
orchestrated, managed, and deployed, often in conjunction with external services and
tools, which are all capabilities that configuration management tools are excellent
in providing.
It is also apparent that Docker represents (or, perhaps more accurately,
encourages) some different characteristics and architecture for hosts, applications, and
services: they can be short-lived, immutable, disposable, and service-oriented.
These behaviors do not lend themselves or resonate strongly with the need for
configuration management tools. With these behaviors, you are rarely concerned
with long-term management of state, entropy is less of a concern because
containers rarely live long enough for it to be, and the recreation of state may often be
cheaper than the remediation of state.
Not all infrastructure can be represented with these behaviors, however. Docker’s
ideal workloads will likely exist alongside more traditional infrastructure
deployment for a little while. The long-lived host, perhaps also the host that needs to
run on physical hardware, still has a role in many organizations. As a result of
these diverse management needs, combined with the need to manage Docker
itself, both Docker and configuration management tools are likely to be deployed
in the majority of organizations.

Docker’s technical components

Docker can be run on any x64 host running a modern Linux kernel; we recommend
kernel version 3.10 and later. It has low overhead and can be used on servers,
desktops, or laptops. Run inside a virtual machine, you can also deploy Docker
on OS X and Microsoft Windows. It includes:

• A native Linux container format that Docker calls libcontainer.
• Linux kernel namespaces, which provide isolation for filesystems, processes,
and networks.
• Filesystem isolation: each container is its own root filesystem.
• Process isolation: each container runs in its own process environment.
• Network isolation: separate virtual interfaces and IP addressing between
containers.
• Resource isolation and grouping: resources like CPU and memory are
allocated individually to each Docker container using the cgroups, or control
groups, kernel feature.
• Copy-on-write: filesystems are created with copy-on-write, meaning they
are layered and fast and require limited disk usage.
• Logging: STDOUT, STDERR and STDIN from the container are collected, logged,
and available for analysis or trouble-shooting.
• Interactive shell: You can create a pseudo-tty and attach to STDIN to provide
an interactive shell to your container (see the example following this list).
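
That last capability maps to a docker run invocation like the following (a sketch assuming the stock ubuntu image; we’ll use this for real in Chapter 3):

$ docker run -i -t ubuntu /bin/bash   # -i keeps STDIN open, -t allocates a pseudo-tty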

What’s in the book?

In this book, we walk you through installing, deploying, managing, and extending
Docker. We do that by first introducing you to the basics of Docker and its
components. Then we start to use Docker to build containers and services to perform
a variety of tasks.
We take you through the development life cycle, from testing to production, and
see where Docker fits in and how it can make your life easier. We make use of
Docker to build test environments for new projects, demonstrate how to integrate
Docker with continuous integration workflow, and then how to build application
services and platforms. Finally, we show you how to use Docker’s API and how
to extend Docker yourself.
We teach you how to:

• Install Docker.
• Take your first steps with a Docker container.
• Build Docker images.
• Manage and share Docker images.
• Run and manage more complex Docker containers and stacks of Docker
containers.
• Deploy Docker containers as part of your testing pipeline.
• Build multi-container applications and environments.
• Introduce the basics of Docker orchestration with Docker Compose, Consul,
and Swarm.
• Explore the Docker API.
• Get help and extend Docker.


It is recommended that you read through every chapter. Each chapter builds
on your Docker knowledge and introduces new features and functionality. By
the end of the book you should have a solid understanding of how to work with
Docker to build standard containers and deploy applications, test environments,
and standalone services.

Docker resources

• Docker homepage
• Docker Hub
• Docker blog
• Docker documentation
• Docker Getting Started Guide
• Docker code on GitHub
• Docker Forge - collection of Docker tools, utilities, and services.
• Docker mailing list
• The Docker Forum
• Docker on IRC: irc.reenode.net and channel #docker
• Docker on Twitter
• Get Docker help on StackOverow
• Docker.com

In addition to these resources, in Chapter 9 you’ll get a detailed explanation of
where and how to get help with Docker.



Chapter 2

Installing Docker

Installing Docker is quick and easy. Docker is currently supported on a wide
variety of Linux platforms, including shipping as part of Ubuntu and Red Hat
Enterprise Linux (RHEL). Also supported are various derivative and related
distributions like Debian, CentOS, Fedora, Oracle Linux, and many others. Using a virtual
environment, you can install and run Docker on OS X and Microsoft Windows.
Currently, the Docker team recommends deploying Docker on Ubuntu, Debian or
the RHEL family (CentOS, Fedora, etc) hosts and makes available packages that
you can use to do this. In this chapter, I’m going to show you how to install Docker
in four different but complementary environments:

• On a host running Ubuntu.
• On a host running Red Hat Enterprise Linux or derivative distribution.
• On OS X using Docker for Mac.
• On Microsoft Windows using Docker for Windows.

 TIP Docker or Mac and Docker or Windows are a collection o components
that installs everything you need to get started with Docker. It includes a tiny
virtual machine shipped with a wrapper script to manage it. The virtual machine
runs the daemon and provides a local Docker daemon on OS X and Microsot Win-
dows. The Docker client binary, docker, is installed natively on these platorms

18
Chapter 2: Installing Docker

and connected to the Docker daemon running in the virtual machine. It replaces
the legacy Docker Toolbox and Boot2Docker.

Docker runs on a number of other platforms, including Debian, SUSE, Arch Linux,
CentOS, and Gentoo. It’s also supported on several Cloud platforms including
Amazon EC2, Rackspace Cloud, and Google Compute Engine.

 TIP You can nd a ull list o installation targets in the Docker installation
guide.

I’ve chosen these our methods because they represent the environments that are
most commonly used in the Docker community. For example, your developers
and sysadmins may wish to start with building Docker containers on their OS X
or Windows workstations using Docker or Mac or Windows and then promote
these containers to a testing, staging, or production environment running one o
the other supported platorms.
I recommend you step through at least the Ubuntu or the RHEL installation to get
an idea o Docker’s prerequisites and an understanding o how to install it.

 TIP As with all installation processes, I also recommend you look at using
tools like Puppet or Chef to install Docker rather than using a manual process.
For example, you can find a Puppet module to install Docker here and a Chef
cookbook here.


Requirements

For all o these installation types, Docker has some basic prerequisites. To use
Docker you must:

• Be running a 64-bit architecture (currently x86_64 and amd64 only). 32-bit


architectures are NOT currently supported.
• Be running a Linux 3.10 or later kernel. Some earlier kernels rom 2.6.x and
later will run Docker successully. Your results will greatly vary, though,
and i you need support you will oten be asked to run on a more recent
kernel.
• The kernel must support an appropriate storage driver. For example,
– Device Mapper
– AUFS
– vs
– btrs
– ZFS (introduced in Docker 1.7)
– The deault storage driver is usually Device Mapper or AUFS.
• cgroups and namespaces kernel eatures must be supported and enabled.
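
You can check the first two requirements from a shell before going any further (illustrative commands; the exact output varies by distribution):

$ uname -m    # should report x86_64
$ uname -r    # kernel release; should be 3.10 or later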

Installing on Ubuntu and Debian

Installing Docker on Ubuntu and Debian is currently ocially supported on a


selection o releases:

• Ubuntu Yakkety 16.10 (64-bit)


• Ubuntu Xenial 16.04 (64-bit)
• Ubuntu Trusty 14.04 (LTS) (64-bit)
• Debian Stretch (64-bit)
• Debian 8.0 Jessie (64-bit)
• Debian 7.7 Wheezy (64-bit)


 NOTE This is not to say Docker won’t work on other Ubuntu (or Debian)
versions that have appropriate kernel levels and the additional required support.
They just aren’t officially supported, so any bugs you encounter may result in a
WONTFIX.

To begin our installation, we first need to confirm we’ve got all the required
prerequisites. I’ve created a brand new Ubuntu 16.04 LTS 64-bit host on which to
install Docker. We’re going to call that host darknight.example.com.

Checking or prerequisites

Docker has a small but necessary list o prerequisites required to install and run
on Ubuntu hosts.

Kernel

First, let’s conrm we’ve got a suciently recent Linux kernel. We can do this
using the uname command.

Listing 2.1: Checking or the Linux kernel version on Ubuntu

$ uname -a
Linux darknight.example.com 3.13.0-43-generic #72-Ubuntu SMP Mon
Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

We see that we’ve got a 3.13.0 x86_64 kernel installed. This is the deault or
Ubuntu 14.04 and later.
We also want to install the linux-image-extra and linux-image-extra-virtual
packages that contain the aufs storage driver.


Listing 2.2: Installing the linux-image-extra package

$ sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

If we’re using an earlier release of Ubuntu we may have an earlier kernel. We
should be able to upgrade our Ubuntu to the later kernel via apt-get:

Listing 2.3: Installing a 3.10 or later kernel on Ubuntu

$ sudo apt-get update
$ sudo apt-get install linux-headers-3.16.0-34-generic linux-image-3.16.0-34-generic linux-headers-3.16.0-34

 NOTE Throughout this book we’re going to use sudo to provide the
required root privileges.

We can then update the Grub boot loader to load our new kernel.

Listing 2.4: Updating the boot loader on Ubuntu Precise

$ sudo update-grub

Ater installation, we’ll need to reboot our host to enable the new 3.10 or later
kernel.


Listing 2.5: Reboot the Ubuntu host

$ sudo reboot

Ater the reboot, we can then check that our host is running the right version using
the same uname -a command we used above.

Installing Docker

Now we’ve got everything we need to add Docker to our host. To install Docker,
we’re going to use the Docker team’s DEB packages.
First, we need to install some prerequisite packages.

Listing 2.6: Adding prerequisite Ubuntu packages

sudo apt-get install \
  apt-transport-https \
  ca-certificates \
  curl \
  software-properties-common

Then add the ocial Docker GPG key.

Listing 2.7: Adding the Docker GPG key

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

And then add the Docker APT repository. You may be prompted to confirm that
you wish to add the repository and have the repository’s GPG key automatically
added to your host.

Listing 2.8: Adding the Docker APT repository

$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"

The lsb_release command should populate the Ubuntu distribution version of
your host.
Now, we update our APT sources.

Listing 2.9: Updating APT sources

$ sudo apt-get update

We can now install the Docker package itsel.

Listing 2.10: Installing the Docker packages on Ubuntu

$ sudo apt-get install docker-ce

This will install Docker and a number of additional required packages.

 TIP Prior to Docker 1.8.0 the package name was lxc-docker and between
Docker 1.8 and 1.13 the package name was docker-engine.


We should now be able to conrm that Docker is installed and running using the
docker info command.

Listing 2.11: Checking Docker is installed on Ubuntu

$ sudo docker info


Containers: 0
Images: 0
. . .

Docker and UFW

I you use the UFW, or Uncomplicated Firewall, on Ubuntu, then you’ll need to
make a small change to get it to work with Docker. Docker uses a network bridge
to manage the networking on your containers. By deault, UFW drops all or-
warded packets. You’ll need to enable orwarding in UFW or Docker to unction
correctly. We can do this by editing the /etc/default/ufw le. Inside this le,
change:

Listing 2.12: Old UFW orwarding policy

DEFAULT_FORWARD_POLICY="DROP"

To:

Listing 2.13: New UFW orwarding policy

DEFAULT_FORWARD_POLICY="ACCEPT"

Save the update and reload UFW.


Listing 2.14: Reload the UFW rewall

$ sudo ufw reload

Installing on Red Hat and amily

Installing Docker on Red Hat Enterprise Linux (or CentOS or Fedora) is currently
only supported on a small selection o releases:

• Red Hat Enterprise Linux (and CentOS) 7 and later (64-bit)


• Fedora 24 and later (64-bit)
• Oracle Linux 6 or 7 with Unbreakable Enterprise Kernel Release 3 or higher
(64-bit)

 TIP Docker is shipped by Red Hat as a native package on Red Hat Enterprise
Linux 7 and later. Additionally, Red Hat Enterprise Linux 7 is the only release on
which Red Hat ocially supports Docker.

Checking or prerequisites

Docker has a small but necessary list o prerequisites required to install and run
on Red Hat and the Red Hat amily o distributions.

Kernel

We need to conrm that we have a 3.10 or later kernel version. We can do this
using the uname command like so:


Listing 2.15: Checking the Red Hat or Fedora kernel

$ uname -a
Linux darknight.example.com 3.10.9-200.fc19.x86_64 #1 SMP Wed Aug
21 19:27:58 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

All o the currently supported Red Hat and the Red Hat amily o platorms should
have a kernel that supports Docker.

Installing Docker

The process or installing difers slightly between Red Hat variants. On Red Hat
Enterprise Linux 6 and CentOS 6, we will need to add the EPEL package reposi-
tories rst. On Fedora, we do not need the EPEL repositories enabled. There are
also some package-naming diferences between platorms and versions.

Installing on Red Hat Enterprise Linux 6 and CentOS 6

For Red Hat Enterprise Linux 6 and CentOS 6, we install EPEL by adding the
following RPM.

Listing 2.16: Installing EPEL on Red Hat Enterprise Linux 6 and CentOS 6

$ sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

Now we should be able to install the Docker package.


Listing 2.17: Installing the Docker package on Red Hat Enterprise Linux 6 and CentOS 6

$ sudo yum -y install docker-io

Installing on Red Hat Enterprise Linux 7

With Red Hat Enterprise Linux 7 and later you can install Docker using these
instructions.

Listing 2.18: Installing Docker on RHEL 7

$ sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
$ sudo yum install -y docker

You’ll need to be a Red Hat customer with an appropriate RHEL Server
subscription entitlement to access the Red Hat Docker packages and documentation.

Installing on Fedora

There have been several package name changes across versions of Fedora. For
Fedora 19, we need to install the docker-io package.

 TIP On newer Red Hat and family versions the yum command has been
replaced with the dnf command. The syntax is otherwise unchanged.


Listing 2.19: Installing the Docker package on Fedora 19

$ sudo yum -y install docker-io

On Fedora 20, the package was renamed to docker.

Listing 2.20: Installing the Docker package on Fedora 20

$ sudo yum -y install docker

For Fedora 21 the package name reverted back to docker-io.

Listing 2.21: Installing the Docker package on Fedora 21

$ sudo yum -y install docker-io

Finally, on Fedora 22 the package name became docker again. Also on Fedora 22
the yum command was deprecated and replaced with the dnf command.

Listing 2.22: Installing the Docker package on Fedora 22

$ sudo dnf install docker

 TIP For Oracle Linux you can find documentation on the Docker site.


Starting the Docker daemon on the Red Hat amily

Once the package is installed, we can start the Docker daemon. On Red Hat En-
terprise Linux 6 and CentOS 6 you can use.

Listing 2.23: Starting the Docker daemon on Red Hat Enterprise Linux 6

$ sudo service docker start

I we want Docker to start at boot we should also:

Listing 2.24: Ensuring the Docker daemon starts at boot on Red Hat Enterprise Linux 6

$ sudo chkconfig docker on

On Red Hat Enterprise Linux 7 and Fedora.

Listing 2.25: Starting the Docker daemon on Red Hat Enterprise Linux 7

$ sudo systemctl start docker

I we want Docker to start at boot we should also:

Listing 2.26: Ensuring the Docker daemon starts at boot on Red Hat Enterprise Linux 7

$ sudo systemctl enable docker

We should now be able to conrm Docker is installed and running using the docker
info command.


Listing 2.27: Checking Docker is installed on the Red Hat amily

$ sudo docker info


Containers: 0
Images: 0
. . .

 TIP Or you can directly download the latest RPMs from the Docker site for
RHEL, CentOS and Fedora.

Docker or Mac

I you’re using OS X, you can quickly get started with Docker using Docker or
Mac. Docker or Mac is a collection o Docker components including a tiny virtual
machine with a supporting command line tool that is installed on an OS X host
and provides you with a Docker environment.
Docker or Mac ships with a variety o components:

• Hyperkit.
• The Docker client and server.
• Docker Compose (see Chapter 7).
• Docker Machine - Which helps you create Docker hosts.
• Kitematic - is a GUI that helps you run Docker locally and interact with the
Docker Hub.

Installing Docker or Mac

To install Docker or Mac we need to download its installer. You can nd it here.


Let’s grab the current release:

Listing 2.28: Downloading the Docker for Mac DMG file

$ wget https://download.docker.com/mac/stable/Docker.dmg

Launch the downloaded installer and follow the instructions to install Docker for Mac.

Figure 2.1: Installing Docker for Mac on OS X

Testing Docker or Mac

We can now test that our Docker for Mac installation is working by trying to connect our local client to the Docker daemon running on the virtual machine. Make sure the Docker.app is running and then open a terminal window and type:


Listing 2.29: Testing Docker for Mac on OS X

$ docker info
Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 13
Server Version: 1.12.1
Storage Driver: aufs
. . .

And presto! We have Docker running locally on our OS X host.


There's a lot more you can use and configure with Docker for Mac and you can read its documentation on the Docker for Mac site.

Docker or Windows installation

I you’re using Microsot Windows, you can quickly get started with Docker using
Docker or Windows. Docker or Windows is a collection o Docker components
including a tiny Hyper-V virtual machine with a supporting command line tool
that is installed on a Microsot Windows host and provides you with a Docker
environment.
Docker or Windows ships with a variety o components:

• The Docker client and server.
• Docker Compose (see Chapter 7).
• Docker Machine - which helps you create Docker hosts.
• Kitematic - a GUI that helps you run Docker locally and interact with the Docker Hub.


Docker or Windows requires 64bit Windows 10 Pro, Enterprise or Education


(with the 1511 November update, Build 10586 or later) and Microsot Hyper-V. I
your host does not satisy these requirements, you can install the Docker Toolbox,
which uses Oracle Virtual Box instead o Hyper-V.

TIP You can also install a Docker client via the Chocolatey package manager.

Installing Docker or Windows

To install Docker or Windows we need to download its installer. You can nd it
here.
Let’s grab the current release:

Listing 2.30: Downloading the Docker for Windows .MSI file

https://download.docker.com/win/stable/InstallDocker.msi

Launch the downloaded installer and follow the instructions to install Docker for Windows.

Testing Docker or Windows

We can now test that our Docker for Windows installation is working by trying to connect our local client to the Docker daemon running on the virtual machine. Ensure the Docker application is running and open a terminal window and run:


Listing 2.31: Testing Docker for Windows

$ docker info
Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 13
Server Version: 1.12.1
Storage Driver: aufs
. . .

And presto! We have Docker running locally on our Windows host.


There's a lot more you can use and configure with Docker for Windows and you can read its documentation on the Docker for Windows site.

Using Docker on OS X and Windows with this book

I you are ollowing the examples in this book you will sometimes be asked to use
volumes or the docker run command with the -v ag to mount a local directory
into a Docker container may not work on Windows. You can’t mount a local
directory on host into the Docker host running in the Docker virtual machine
because they don’t share a le system. I you want to use any examples with
volumes, such as those in Chapters 5 and 6, I recommend you run Docker on a
Linux-based host.
It's also worth reading the Docker for Mac File Sharing section or the Docker for Windows Shared Drives section. These allow you to enable volume usage by mounting directories into the Docker for Mac and Docker for Windows client applications.
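As a minimal sketch, Docker for Mac shares the /Users directory by default, so mounting a directory under it into a container (the path here is a placeholder) looks like:

$ docker run -v /Users/james/data:/data ubuntu ls /data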

TIP All the examples in the book assume you are using the latest Docker for Mac or Docker for Windows versions.

Docker installation script

There is also an alternative method available to install Docker on an appropriate host using a remote installation script. To use this script we need to curl it from the get.docker.com website.

NOTE The script currently only supports Ubuntu, Fedora, Debian, and Gentoo installation. It may be updated shortly to include other distributions.

First, we’ll need to ensure the curl command is installed.

Listing 2.32: Testing or curl

$ whereis curl
curl: /usr/bin/curl /usr/bin/X11/curl /usr/share/man/man1/curl.1.gz

We can use apt-get to install curl if necessary.

Listing 2.33: Installing curl on Ubuntu

$ sudo apt-get -y install curl

Or we can use yum or the newer dnf command on Fedora.


Listing 2.34: Installing curl on Fedora

$ sudo yum -y install curl

Now we can use the script to install Docker.

Listing 2.35: Installing Docker from the installation script

$ curl https://get.docker.com/ | sudo sh

This will ensure that the required dependencies are installed and check that our
kernel is an appropriate version and that it supports an appropriate storage driver.
It will then install Docker and start the Docker daemon.

Binary installation

I we don’t wish to use any o the package-based installation methods, we can


download the latest binary version o Docker.

Listing 2.36: Downloading the Docker binary

$ wget http://get.docker.com/builds/Linux/x86_64/docker-latest.
tgz

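If you do take this path, the tarball typically unpacks into a docker directory containing the client and daemon binaries. A rough sketch of installing and running them by hand (the layout and destination here are assumptions, not a supported procedure):

$ tar -xvzf docker-latest.tgz
$ sudo cp docker/* /usr/bin/
$ sudo dockerd &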
I recommend not taking this approach, as it reduces the maintainability of your Docker installation. Using packages is simpler and easier to manage, especially if using automation or configuration management tools.


The Docker daemon

Ater we’ve installed Docker, we need to conrm that the Docker daemon is run-
ning. Docker runs as a root-privileged daemon process to allow it to handle op-
erations that can’t be executed by normal users (e.g., mounting lesystems). The
docker binary runs as a client o this daemon and also requires root privileges to
run. You can control the Docker daemon via the dockerd binary.

NOTE Prior to Docker 1.12 the daemon was controlled with the docker daemon subcommand.

The Docker daemon should be started by default when the Docker package is installed. By default, the daemon listens on a Unix socket at /var/run/docker.sock for incoming Docker requests. If a group named docker exists on our system, Docker will apply ownership of the socket to that group. Hence, any user that belongs to the docker group can run Docker without needing to use the sudo command.

WARNING Remember that although the docker group makes life easier, it is still a security exposure. The docker group is root-equivalent and should be limited to those users and applications who absolutely need it.
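For example, to let an existing user run Docker without sudo we can add them to the docker group (james here is a placeholder username; the user will need to log out and back in for the change to take effect):

$ sudo usermod -aG docker james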

Conguring the Docker daemon

We can change how the Docker daemon binds by adjusting the -H flag when the daemon is run.
We can use the -H flag to specify different interface and port configurations; for example, binding to the network:


Listing 2.37: Changing Docker daemon networking

$ sudo dockerd -H tcp://0.0.0.0:2375

This would bind the Docker daemon to all interfaces on the host. Docker isn't automatically aware of networking changes on the client side. We will need to specify the -H option to point the docker client at the server; for example, docker -H :4200 would be required if we had changed the port to 4200. Or, if we don't want to specify the -H on each client call, Docker will also honor the content of the DOCKER_HOST environment variable.

Listing 2.38: Using the DOCKER_HOST environment variable

$ export DOCKER_HOST="tcp://0.0.0.0:2375"
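With the variable exported, or the -H flag passed on each call, the client directs its requests at the remote daemon. For example, assuming a hypothetical host called docker.example.com:

$ docker -H tcp://docker.example.com:2375 info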

 WARNING By deault, Docker client-server communication is not au-


thenticated. This means that i you bind Docker to an exposed network interace,
anyone can connect to the daemon. There is, however, some TLS authentication
available in Docker 0.9 and later. You’ll see how to enable it when we look at the
Docker API in Chapter 8.

We can also speciy an alternative Unix socket path with the -H ag; or example,
to use unix://home/docker/docker.sock:

Listing 2.39: Binding the Docker daemon to a different socket

$ sudo dockerd -H unix://home/docker/docker.sock


Or we can speciy multiple bindings like so:

Listing 2.40: Binding the Docker daemon to multiple places

$ sudo dockerd -H tcp://0.0.0.0:2375 -H unix://home/docker/docker.sock

 TIP I you’re running Docker behind a proxy or corporate rewall you can
also use the HTTPS_PROXY, HTTP_PROXY, NO_PROXY options to control how the dae-
mon connects.
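As a sketch of one way to set these on a systemd-enabled distribution (proxy.example.com is a placeholder), you could put the variables in a drop-in file:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"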

We can also increase the verbosity of the Docker daemon by using the -D flag.

Listing 2.41: Turning on Docker daemon debug

$ sudo dockerd -D

I we want to make these changes permanent, we’ll need to edit the various startup
congurations. On SystemV-enabled Ubuntu and Debian releases, this is done by
editing the /etc/default/docker le and changing the DOCKER_OPTS variable.
On systemd-enabled distributions we would add an override le at:
/etc/systemd/system/docker.service.d/override.conf

With content like:


Listing 2.42: The systemd override file

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H ...
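After adding or changing an override file, tell systemd to reload its configuration and restart Docker so the change takes effect:

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker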

In earlier Red Hat and Fedora releases, we'd edit the /etc/sysconfig/docker file.

 NOTE On other platorms, you can manage and update the Docker dae-
mon’s starting conguration via the appropriate init mechanism.

Checking that the Docker daemon is running

On Ubuntu, i Docker has been installed via package, we can check i the daemon
is running with the Upstart status command:

Listing 2.43: Checking the status of the Docker daemon

$ sudo status docker


docker start/running, process 18147

We can then start or stop the Docker daemon with the Upstart start and stop
commands, respectively.


Listing 2.44: Starting and stopping Docker with Upstart

$ sudo stop docker


docker stop/waiting
$ sudo start docker
docker start/running, process 18192

On systemd-enabled Ubuntu and Debian releases, as well as Red Hat and Fedora, we can do the same using the service shortcuts.

Listing 2.45: Starting and stopping Docker on Red Hat and Fedora

$ sudo service docker stop


$ sudo service docker start

I the daemon isn’t running, then the docker binary client will ail with an error
message similar to this:

Listing 2.46: The Docker daemon isn’t running

2014/05/18 20:08:32 Cannot connect to the Docker daemon. Is 'dockerd' running on this host?

Upgrading Docker

Ater you’ve installed Docker, it is also easy to upgrade it when required. I you
installed Docker using native packages via apt-get or yum, then you can also use
these channels to upgrade it.


For example, run the apt-get update command and then install the new version of Docker. We're using the apt-get install command because the docker-engine (formerly lxc-docker) package is usually pinned.

Listing 2.47: Upgrade docker

$ sudo apt-get update


$ sudo apt-get install docker-engine

Docker user interaces

You can also potentially use a graphical user interface to manage Docker once you've got it installed. Currently, there are a small number of Docker user interfaces and web consoles available in various states of development, including:

• Shipyard - Shipyard gives you the ability to manage Docker resources, including containers, images, hosts, and more from a single management interface. It's open source, and the code is available from https://github.com/ehazlett/shipyard.

• Portainer - UI for Docker is a web interface that allows you to interact with the Docker Remote API. It's written in JavaScript using the AngularJS framework.

• Kitematic - a GUI for OS X and Windows that helps you run Docker locally and interact with the Docker Hub. It's a free product released by Docker Inc.

Summary

In this chapter, we've seen how to install Docker on a variety of platforms. We've also seen how to manage the Docker daemon.


In the next chapter, we're going to start using Docker. We'll begin with container basics to give you an introduction to basic Docker operations. If you're all set up and ready to go then jump into Chapter 3.



Chapter 3

Getting Started with Docker

In the last chapter, we saw how to install Docker and ensure the Docker daemon is up and running. In this chapter we're going to see how to take our first steps with Docker and work with our first container. This chapter will provide you with the basics of how to interact with Docker.

Ensuring Docker is ready

We're going to start with checking that Docker is working correctly, and then we're going to take a look at the basic Docker workflow: creating and managing containers. We'll take a container through its typical lifecycle from creation to a managed state and then stop and remove it.
First, let's check that the docker binary exists and is functional:


Listing 3.1: Checking that the docker binary works

$ sudo docker info


Containers: 33
Running: 0
Paused: 0
Stopped: 33
Images: 217
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 284
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
. . .
Username: jamtur01
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8

Here, we've passed the info command to the docker binary, which returns a list of any containers, any images (the building blocks Docker uses to build containers), the execution and storage drivers Docker is using, and its basic configuration.
As we've learned in previous chapters, Docker has a client-server architecture. It has two binaries: the Docker server, provided via the dockerd binary, and the docker binary, which acts as a client. As a client, the docker binary passes requests to the Docker daemon (e.g., asking it to return information about itself), and then processes those requests when they are returned.


NOTE Prior to Docker 1.12 all of this functionality was provided by a single binary: docker.

Running our rst container

Now let's try and launch our first container with Docker. We're going to use the docker run command to create a container. The docker run command provides all of the "launch" capabilities for Docker. We'll be using it a lot to create new containers.

 TIP You can nd a ull list o the available Docker commands here or by
typing docker help. You can also use the Docker man pages (e.g., man docker-run).
This will not work on Docker or Mac or Docker or OSX as no man pages are
shipped.


Listing 3.2: Running our rst container

$ sudo docker run -i -t ubuntu /bin/bash


Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
43db9dbdcb30: Pull complete
2dc64e8f8d4f: Pull complete
670a583e1b50: Pull complete
183b0bfcd10e: Pull complete
Digest: sha256:
c6674c44c6439673bf56536c1a15916639c47ea04c3d6296c5df938add67b54b

Status: Downloaded newer image for ubuntu:latest


root@fcd78e1a3569:/#

Wow. A bunch o stuf happened here when we ran this command. Let’s look at
each piece.

Listing 3.3: The docker run command

$ sudo docker run -i -t ubuntu /bin/bash

First, we told Docker to run a command using docker run. We passed it two command line flags: -i and -t. The -i flag keeps STDIN open from the container, even if we're not attached to it. This persistent standard input is one half of what we need for an interactive shell. The -t flag is the other half and tells Docker to assign a pseudo-tty to the container we're about to create. This provides us with an interactive shell in the new container. This line is the base configuration needed to create a container with which we plan to interact on the command line rather than run as a daemonized service.


 TIP You can nd a ull list o the available Docker run ags here or by
typing docker help run. You can also use the Docker man pages (e.g., example
man docker-run.)

Next, we told Docker which image to use to create a container, in this case the ubuntu image. The ubuntu image is a stock image, also known as a "base" image, provided by Docker, Inc., on the Docker Hub registry. You can use base images like the ubuntu base image (and the similar fedora, debian, centos, etc., images) as the basis for building your own images on the operating system of your choice. For now, we're just running the base image as the basis for our container and not adding anything to it.

TIP We'll hear a lot more about images in Chapter 4, including how to build our own images. Throughout the book we use the ubuntu image. This is a reasonably heavyweight image, measuring a couple of hundred megabytes in size. If you'd prefer something smaller, the Alpine Linux image is recommended as extremely lightweight, generally 5MB in size for the base image. Its image name is alpine.

So what was happening in the background here? First, Docker checked locally for the ubuntu image. If it couldn't find the image on our local Docker host, it reached out to the Docker Hub registry run by Docker, Inc., and looked for it there. Once Docker had found the image, it downloaded the image and stored it on the local host.
Docker then used this image to create a new container inside a filesystem. The container has a network, IP address, and a bridge interface to talk to the local host. Finally, we told Docker which command to run in our new container, in this case launching a Bash shell with the /bin/bash command.
When the container had been created, Docker ran the /bin/bash command inside it; the container's shell was presented to us like so:


Listing 3.4: Our rst container’s shell

root@f7cbdac22a02:/#

Working with our rst container


We are now logged into a new container, with the catchy ID o f7cbdac22a02, as
the root user. This is a ully edged Ubuntu host, and we can do anything we like
in it. Let’s explore it a bit, starting with asking or its hostname.

Listing 3.5: Checking the container’s hostname

root@f7cbdac22a02:/# hostname
f7cbdac22a02

We see that our container's hostname is the container ID. Let's have a look at the /etc/hosts file too.

Listing 3.6: Checking the container’s /etc/hosts

root@f7cbdac22a02:/# cat /etc/hosts


172.17.0.4 f7cbdac22a02
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters


Docker has also added a host entry for our container with its IP address.
Let's also check out its networking configuration.

Listing 3.7: Checking the container's interfaces

root@f7cbdac22a02:/# hostname -I
172.17.0.4

We see that we have an IP address of 172.17.0.4, just like any other host. We can also check its running processes.

Listing 3.8: Checking container’s processes

root@f7cbdac22a02:/# ps -aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME
COMMAND
root 1 0.0 0.0 18156 1936 ? Ss May30 0:00
/bin/bash
root 21 0.0 0.0 15568 1100 ? R+ 02:38 0:00
ps -aux

Now, how about we install a package?

Listing 3.9: Installing a package in our rst container

root@f7cbdac22a02:/# apt-get update; apt-get install vim

We'll now have Vim installed in our container.
You can keep playing with the container for as long as you like. When you're done, type exit, and you'll return to the command prompt of your Ubuntu host.


So what's happened to our container? Well, it has now stopped running. The container only runs for as long as the command we specified, /bin/bash, is running. Once we exited the container, that command ended, and the container was stopped.
The container still exists; we can show a list of current containers using the docker ps -a command.

Listing 3.10: Listing Docker containers

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES


1cd57c2cdf7f ubuntu "/bin/bash" A minute Exited gray_cat

By deault, when we run just docker ps, we will only see the running containers.
When we speciy the -a ag, the docker ps command will show us all containers,
both stopped and running.

TIP You can also use the docker ps command with the -l flag to show the last container that was run, whether it is running or stopped. You can also use the --format flag to further control what information is shown and how it is formatted.

We see quite a bit of information about our container: its ID, the image used to create it, the command it last ran, when it was created, and its exit status (in our case, 0, because it exited normally via the exit command). We can also see that each container has a name.

NOTE There are three ways containers can be identified: a short UUID (like f7cbdac22a02), a longer UUID (like f7cbdac22a02e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778), and a name (like gray_cat).


Container naming

Docker will automatically generate a name at random for each container we create. We see that the container we've just created is called gray_cat. If we want to specify a particular container name in place of the automatically generated name, we can do so using the --name flag.

Listing 3.11: Naming a container

$ sudo docker run --name bob_the_container -i -t ubuntu /bin/bash


root@aa3f365f0f4e:/# exit

This would create a new container called bob_the_container. A valid container name can contain the following characters: a to z, A to Z, the digits 0 to 9, the underscore, period, and dash (or, expressed as a regular expression: [a-zA-Z0-9_.-]).

We can use the container name in place of the container ID in most Docker commands, as we'll see. Container names are useful to help us identify and build logical connections between containers and applications. It's also much easier to remember a specific container name (e.g., web or db) than a container ID or even a random name. I recommend using container names to make managing your containers easier.

NOTE We'll see more about how to connect Docker containers in Chapter 5.

Names are unique. If we try to create two containers with the same name, the command will fail. We need to delete the previous container with the same name before we can create a new one. We can do so with the docker rm command.


Starting a stopped container

So what to do with our now-stopped bob_the_container container? Well, if we want, we can restart a stopped container like so:

Listing 3.12: Starting a stopped container

$ sudo docker start bob_the_container

We could also reer to the container by its container ID instead.

Listing 3.13: Starting a stopped container by ID

$ sudo docker start aa3f365f0f4e

TIP We can also use the docker restart command.

Now i we run the docker ps command without the -a ag, we’ll see our running
container.

NOTE In a similar vein there is also the docker create command, which creates a container but does not run it. This allows you more granular control over your container workflow.


Attaching to a container

Our container will restart with the same options we'd specified when we launched it with the docker run command. So there is an interactive session waiting on our running container. We can reattach to that session using the docker attach command.

Listing 3.14: Attaching to a running container

$ sudo docker attach bob_the_container

or via its container ID:

Listing 3.15: Attaching to a running container via ID

$ sudo docker attach aa3f365f0f4e

and we’ll be brought back to our container’s Bash prompt:

TIP You might need to hit Enter to bring up the prompt.

Listing 3.16: Inside our re-attached container

root@aa3f365f0f4e:/#

I we exit this shell, our container will again be stopped.


Creating daemonized containers

In addition to these interactive containers, we can create longer-running containers. Daemonized containers don't have the interactive session we've just used and are ideal for running applications and services. Most of the containers you're likely to run will probably be daemonized. Let's start a daemonized container now.

Listing 3.17: Creating a long running container

$ sudo docker run --name daemon_dave -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
1333bb1a66af402138485fe44a335b382c09a887aa9f95cb9725e309ce5b7db3

Here, we've used the docker run command with the -d flag to tell Docker to detach the container to the background.
We've also specified a while loop as our container command. Our loop will echo hello world over and over again until the container is stopped or the process stops.
With this combination of flags, you'll see that, instead of being attached to a shell like our last container, the docker run command has instead returned a container ID and returned us to our command prompt. Now if we run docker ps, we see a running container.

Listing 3.18: Viewing our running daemon_dave container

CONTAINER ID  IMAGE   COMMAND               CREATED      STATUS  PORTS  NAMES
1333bb1a66af  ubuntu  /bin/sh -c 'while tr  32 secs ago  Up 27          daemon_dave


Seeing what’s happening inside our container

We now have a daemonized container running our while loop; let's take a look inside the container and see what's happening. To do so, we can use the docker logs command. The docker logs command fetches the logs of a container.

Listing 3.19: Fetching the logs of our daemonized container

$ sudo docker logs daemon_dave


hello world
hello world
hello world
hello world
hello world
hello world
hello world
. . .

Here we see the results of our while loop echoing hello world to the logs. Docker will output the last few log entries and then return. We can also monitor the container's logs, much as the tail -f command operates, using the -f flag.


Listing 3.20: Tailing the logs of our daemonized container

$ sudo docker logs -f daemon_dave


hello world
hello world
hello world
hello world
hello world
hello world
hello world
. . .

TIP Use Ctrl-C to exit from the log tail.

You can also tail a portion of the logs of a container, again much like the tail command, with the --tail flag. For example, you can get the last ten lines of a log by using docker logs --tail 10 daemon_dave. You can also follow the logs of a container without having to read the whole log file with docker logs --tail 0 -f daemon_dave.
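As a quick sketch, both forms run against our daemon_dave container:

$ sudo docker logs --tail 10 daemon_dave
$ sudo docker logs --tail 0 -f daemon_dave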

To make debugging a little easier, we can also add the -t flag to prefix our log entries with timestamps.


Listing 3.21: Tailing the logs of our daemonized container

$ sudo docker logs -ft daemon_dave


2016-08-02T03:31:16.743679596Z hello world
2016-08-02T03:31:17.744769494Z hello world
2016-08-02T03:31:18.745786252Z hello world
2016-08-02T03:31:19.746839926Z hello world
. . .

TIP Again, use Ctrl-C to exit from the log tail.

Docker log drivers

Since Docker 1.6 you can also control the logging driver used by your daemon and container. This is done using the --log-driver option. You can pass this option to both the daemon and the docker run command.
There are a variety of options, including the default json-file, which provides the behavior we've just seen using the docker logs command.
Also available is syslog, which disables the docker logs command and redirects all container log output to Syslog. You can specify this with the Docker daemon to output all container logs to Syslog, or override it using docker run to direct output from individual containers.


Listing 3.22: Enabling Syslog at the container level

$ sudo docker run --log-driver="syslog" --name daemon_dwayne -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
. . .

 TIP I you’re running inside Docker or Mac or Windows you might need to
start the Syslog daemon inside the VM. Use docker-machine ssh to connect to
the Docker VM and run the syslogd command to start the Syslog daemon.

This will cause the daemon_dwayne container to log to Syslog and result in the
docker logs command showing no output.

Lastly, also available is none, which disables all logging in containers and results
in the docker logs command being disabled.

TIP Additional logging drivers continue to be added. Docker 1.8 introduced support for Graylog's GELF protocol, Fluentd, and a log rotation driver.

Inspecting the container’s processes

In addition to the container’s logs we can also inspect the processes running inside
the container. To do this, we use the docker top command.


Listing 3.23: Inspecting the processes of the daemonized container

$ sudo docker top daemon_dave

We can then see each process (principally our while loop), the user it is running
as, and the process ID.

Listing 3.24: The docker top output

PID   USER  COMMAND
977   root  /bin/sh -c while true; do echo hello world; sleep 1; done
1123  root  sleep 1

Docker statistics

In addition to the docker top command you can also use the docker stats command. This shows statistics for one or more running Docker containers. Let's see what these look like. We're going to look at the statistics for our daemon_dave and daemon_dwayne containers.


Listing 3.25: The docker stats command

$ sudo docker stats daemon_dave daemon_dwayne


CONTAINER      CPU %  MEM USAGE/LIMIT  MEM %  NET I/O          BLOCK I/O
daemon_dave    0.14%  212 KiB/994 MiB  0.02%  5.062 KiB/648 B  1.69 MB / 0 B
daemon_dwayne  0.11%  216 KiB/994 MiB  0.02%  1.402 KiB/648 B  24.43 MB / 0 B

We see a list of daemonized containers and their CPU, memory, network, and storage I/O performance metrics. This is useful for quickly monitoring a group of containers on a host.

NOTE The docker stats command was introduced in Docker 1.5.0.

Running a process inside an already running container

Since Docker 1.3 we can also run additional processes inside our containers using the docker exec command. There are two types of commands we can run inside a container: background and interactive. Background tasks run inside the container without interaction and interactive tasks remain in the foreground. Interactive tasks are useful for tasks like opening a shell inside a container. Let's look at an example of a background task.


Listing 3.26: Running a background task inside a container

$ sudo docker exec -d daemon_dave touch /etc/new_config_file

Here the -d ag indicates we’re running a background process. We then speciy
the name o the container to run the command inside and the command to be
executed. In this case our command will create a new empty le called /etc/
new_config_file inside our daemon_dave container. We can use a docker exec
background command to run maintenance, monitoring or management tasks in-
side a running container.

TIP Since Docker 1.7 you can use the -u flag to specify a new process owner for docker exec launched processes.

We can also run interactive tasks like opening a shell inside our daemon_dave
container.

Listing 3.27: Running an interactive command inside a container

$ sudo docker exec -t -i daemon_dave /bin/bash

The -t and -i ags, like the ags used when running an interactive container,
create a TTY and capture STDIN or our executed process. We then speciy the
name o the container to run the command inside and the command to be executed.
In this case our command will create a new bash session inside the container
daemon_dave. We could then use this session to issue other commands inside our
container.


NOTE The docker exec command was introduced in Docker 1.3 and is not available in earlier releases. For earlier Docker releases you should see the nsenter command explained in Chapter 6.

Stopping a daemonized container

I we wish to stop our daemonized container, we can do it with the docker stop
command, like so:

Listing 3.28: Stopping the running Docker container

$ sudo docker stop daemon_dave

or again via its container ID.

Listing 3.29: Stopping the running Docker container by ID

$ sudo docker stop c2c4e57c12c4

NOTE The docker stop command sends a SIGTERM signal to the Docker container's running process. If you want to stop a container a bit more enthusiastically, you can use the docker kill command, which will send a SIGKILL signal to the container's process.

Run docker ps to check the status of the now-stopped container. Useful here is the docker ps -n x flag, which shows the last x containers, running or stopped.
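For example, to show the last two containers, running or stopped:

$ sudo docker ps -n 2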


Automatic container restarts

I your container has stopped because o a ailure you can congure Docker to
restart it using the --restart ag. The --restart ag checks or the container’s
exit code and makes a decision whether or not to restart it. The deault behavior
is to not restart containers at all.
You speciy the --restart ag with the docker run command.

Listing 3.30: Automatically restarting containers

$ sudo docker run --restart=always --name daemon_alice -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"

In this example the --restart flag has been set to always. Docker will try to restart the container no matter what exit code is returned. Alternatively, we can specify a value of on-failure, which restarts the container if it exits with a non-zero exit code. The on-failure value also accepts an optional restart count.

Listing 3.31: On-ailure restart count

--restart=on-failure:5

This will attempt to restart the container a maximum of five times if a non-zero exit code is received.

NOTE The --restart flag was introduced in Docker 1.2.0.


Finding out more about our container

In addition to the inormation we retrieved about our container using the docker
ps command, we can get a whole lot more inormation using the docker inspect
command.

Listing 3.32: Inspecting a container

$ sudo docker inspect daemon_alice


[{
"ID": "c2c4e57c12c4c142271c031333823af95d64b20b5d607970c334784430bcbd0f",
"Created": "2014-05-10T11:49:01.902029966Z",
"Path": "/bin/sh",
"Args": [
"-c",
"while true; do echo hello world; sleep 1; done"
],
"Config": {
"Hostname": "c2c4e57c12c4",
. . .

The docker inspect command will interrogate our container and return its configuration information, including names, commands, networking configuration, and a wide variety of other useful data.
We can also selectively query the inspect results hash using the -f or --format flag.


Listing 3.33: Selectively inspecting a container

$ sudo docker inspect --format='{{ .State.Running }}'


daemon_alice
true

This will return the running state of the container, which in our case is true. We can also get useful information like the container's IP address.

Listing 3.34: Inspecting the container’s IP address

$ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}'


daemon_alice
172.17.0.2

TIP The --format or -f flag is a bit more than it seems on the surface. It's actually a full Go template being exposed. You can make use of all the capabilities of a Go template when querying it.

We can also list multiple containers and receive output for each.

Listing 3.35: Inspecting multiple containers

$ sudo docker inspect --format '{{.Name}} {{.State.Running}}' \


daemon_dave daemon_alice
/daemon_dave true
/daemon_alice true


We can select any portion of the inspect hash to query and return.

NOTE In addition to inspecting containers, you can see a bit more about how Docker works by exploring the /var/lib/docker directory. This directory holds your images, containers, and container configuration. You'll find all your containers in the /var/lib/docker/containers directory.
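For example, listing that directory on the Docker host (it is owned by root, hence the sudo) shows one directory per container, named for its long ID:

$ sudo ls /var/lib/docker/containers
1333bb1a66af402138485fe44a335b382c09a887aa9f95cb9725e309ce5b7db3
. . .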

Deleting a container

I you are nished with a container, you can delete it using the docker rm com-
mand.

NOTE Since Docker 1.6.2 you can delete a running Docker container using the -f flag to the docker rm command. Prior to this version you must stop the container first using the docker stop command or docker kill command.

Listing 3.36: Deleting a container

$ sudo docker rm 80430f8d0921


80430f8d0921

There isn't currently a way to delete all containers, but you can slightly cheat with a command like the following:


Listing 3.37: Deleting all containers

$ sudo docker rm -f `sudo docker ps -a -q`

This command will list all of the current containers using the docker ps command. The -a flag lists all containers, and the -q flag only returns the container IDs rather than the rest of the information about your containers. This list is then passed to the docker rm command, which deletes each container. The -f flag force removes any running containers. If you'd prefer to protect those containers, omit the flag.

Summary

We've now been introduced to the basic mechanics of how Docker containers work. This information will form the basis for how we'll learn to use Docker in the rest of the book.
In the next chapter, we're going to explore building our own Docker images and working with Docker repositories and registries.



Chapter 4

Working with Docker images and repositories

In Chapter 2, we learned how to install Docker. In Chapter 3, we learned how to use a variety of commands to manage Docker containers, including the docker run command.

Let’s see the docker run command again.

Listing 4.1: Revisiting running a basic Docker container

$ sudo docker run -i -t --name another_container_mum ubuntu \


/bin/bash
root@b415b317ac75:/#

This command will launch a new container called another_container_mum from the ubuntu image and open a Bash shell.
In this chapter, we're going to explore Docker images: the building blocks from which we launch containers. We'll learn a lot more about Docker images, what they are, how to manage them, how to modify them, and how to create, store, and share your own images. We'll also examine the repositories that hold images and the registries that store repositories.


What is a Docker image?

Let's continue our journey with Docker by learning a bit more about Docker images. A Docker image is made up of filesystems layered over each other. At the base is a boot filesystem, bootfs, which resembles the typical Linux/Unix boot filesystem. A Docker user will probably never interact with the boot filesystem. Indeed, when a container has booted, it is moved into memory, and the boot filesystem is unmounted to free up the RAM used by the initrd disk image.
So far this looks pretty much like a typical Linux virtualization stack. Indeed, Docker next layers a root filesystem, rootfs, on top of the boot filesystem. This rootfs can be one or more operating systems (e.g., a Debian or Ubuntu filesystem).
In a more traditional Linux boot, the root filesystem is mounted read-only and then switched to read-write after boot and an integrity check is conducted. In the Docker world, however, the root filesystem stays in read-only mode, and Docker takes advantage of a union mount to add more read-only filesystems onto the root filesystem. A union mount is a mount that allows several filesystems to be mounted at one time but appear to be one filesystem. The union mount overlays the filesystems on top of one another so that the resulting filesystem may contain files and subdirectories from any or all of the underlying filesystems.
Docker calls each of these filesystems images. Images can be layered on top of one another. The image below is called the parent image, and you can traverse each layer until you reach the bottom of the image stack, where the final image is called the base image. Finally, when a container is launched from an image, Docker mounts a read-write filesystem on top of any layers below. This is where whatever processes we want our Docker container to run will execute.
This sounds confusing, so perhaps it is best represented by a diagram.


Figure 4.1: The Docker filesystem layers

When Docker rst starts a container, the initial read-write layer is empty. As
changes occur, they are applied to this layer; or example, i you want to change
a le, then that le will be copied rom the read-only layer below into the read-
write layer. The read-only version o the le will still exist but is now hidden
underneath the copy.


This pattern is traditionally called "copy on write" and is one of the features that makes Docker so powerful. Each image layer is read-only; it never changes. When a container is created, Docker builds from the stack of images and then adds the read-write layer on top. That layer, combined with the knowledge of the image layers below it and some configuration data, forms the container. As we discovered in the last chapter, containers can be changed, they have state, and they can be started and stopped. This, and the image-layering framework, allows us to quickly build images and run containers with our applications and services.
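A quick way to see the read-write layer in action is the docker diff command, which lists files added (A) or changed (C) in a container relative to its image. As a sketch, assuming the ubuntu image is available:

$ sudo docker run --name cow_test ubuntu touch /tmp/new_file
$ sudo docker diff cow_test
C /tmp
A /tmp/new_file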

Listing Docker images

Let’s get started with Docker images by looking at what images are available to
us on our Docker host. We can do this using the docker images command.

Listing 4.2: Listing Docker images

$ sudo docker images


REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu latest c4ff7513909d 6 days ago 225.4 MB

We see that we've got an image, from a repository called ubuntu. So where does this image come from? Remember in Chapter 3, when we ran the docker run command, that part of the process was downloading an image? In our case, it's the ubuntu image.

NOTE Local images live on our local Docker host in the /var/lib/docker directory. Each image will be inside a directory named for your storage driver; for example, aufs or devicemapper. You'll also find all your containers in the /var/lib/docker/containers directory.


That image was downloaded from a repository. Images live inside repositories, and repositories live on registries. The default registry is the public registry managed by Docker, Inc., Docker Hub.

TIP The Docker registry code is open source. You can also run your own registry, as we'll see later in this chapter. The Docker Hub product is also available as a commercial "behind the firewall" product called Docker Trusted Registry, formerly Docker Enterprise Hub.

Figure 4.2: Docker Hub

Inside Docker Hub (or on a Docker registry you run yourself), images are stored in repositories. You can think of an image repository as being much like a Git repository. It contains images, layers, and metadata about those images.
Each repository can contain multiple images (e.g., the ubuntu repository contains images for Ubuntu 12.04, 12.10, 13.04, 13.10, 14.04, 16.04). Let's get another image from the ubuntu repository now.


Listing 4.3: Pulling the Ubuntu 16.04 image

$ sudo docker pull ubuntu:16.04


16.04: Pulling from library/ubuntu
Digest: sha256:
c6674c44c6439673bf56536c1a15916639c47ea04c3d6296c5df938add67b54b

Status: Downloaded newer image for ubuntu:16.04

Here we've used the docker pull command to pull down the Ubuntu 16.04 image from the ubuntu repository.
Let's see what our docker images command reveals now.

Listing 4.4: Listing the ubuntu Docker images

$ sudo docker images


REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu latest 5506de2b643b 3 weeks ago 199.3 MB
ubuntu 16.04 0b310e6bf058 5 months ago 127.9 MB

TIP Throughout the book we use the ubuntu image. This is a reasonably heavyweight image, measuring a couple of hundred megabytes in size. If you'd prefer something smaller, the Alpine Linux image is recommended as extremely lightweight, generally 5MB in size for the base image. Its image name is alpine.

You can see we've now got the latest Ubuntu image and the 16.04 image. This shows us that the ubuntu image is actually a series of images collected under a single repository.


NOTE We call it the Ubuntu operating system, but really it is not the full operating system. It's a cut-down version with the bare runtime required to run the distribution.

We identiy each image inside that repository by what Docker calls tags. Each
image is being listed by the tags applied to it, so, or example, 12.04, 12.10,
quantal, or precise and so on. Each tag marks together a series o image layers
that represent a specic image (e.g., the 16.04 tag collects together all the layers
o the Ubuntu 16.04 image). This allows us to store more than one image inside
a repository.
We can reer to a specic image inside a repository by suxing the repository
name with a colon and a tag name, or example:

Listing 4.5: Running a tagged Docker image

$ sudo docker run -t -i --name new_container ubuntu:16.04 /bin/


bash
root@79e36bff89b4:/#

This launches a container from the ubuntu:16.04 image, which is an Ubuntu 16.04 operating system.
It's always a good idea to build a container from specific tags. That way we'll know exactly what the source of our container is. There are differences, for example, between Ubuntu 14.04 and 16.04, so it would be useful to specifically state that we're using ubuntu:16.04 so we know exactly what we're getting.
There are two types of repositories: user repositories, which contain images contributed by Docker users, and top-level repositories, which are controlled by the people behind Docker.
A user repository takes the form of a username and a repository name; for example, jamtur01/puppet.


• Username: jamtur01
• Repository name: puppet

Alternatively, a top-level repository only has a repository name, like ubuntu. The top-level repositories are managed by Docker Inc and by selected vendors who provide curated base images that you can build upon (e.g., the Fedora team provides a fedora image). The top-level repositories also represent a commitment from vendors and Docker Inc that the images contained in them are well constructed, secure, and up to date.
In Docker 1.8 support was also added for managing the content security of images, essentially signed images. This is currently an optional feature and you can read more about it on the Docker blog.

WARNING User-contributed images are built by members of the Docker community. You should use them at your own risk: they are not validated or verified in any way by Docker Inc.

Pulling images

When we run a container from images with the docker run command, if the image isn't present locally already then Docker will download it from the Docker Hub. By default, if you don't specify a specific tag, Docker will download the latest tag, for example:

Listing 4.6: Docker run and the default latest tag

$ sudo docker run -t -i --name next_container ubuntu /bin/bash


root@23a42cee91c3:/#


This will download the ubuntu:latest image if it isn't already present on the host. Alternatively, we can use the docker pull command to pull images down ourselves preemptively. Using docker pull saves us some time launching a container from a new image. Let's see that now by pulling down the fedora:21 base image.

Listing 4.7: Pulling the edora image

$ sudo docker pull fedora:21


21: Pulling from library/fedora
d60b4509ad7d: Pull complete
Digest: sha256:4328
c03e6cafef1676db038269fc9a4c3528700d04ca1572e706b4a0aa320000
Status: Downloaded newer image for fedora:21

Let's see this new image on our Docker host using the docker images command. This time, however, let's narrow our review of the images to only the fedora images. To do so, we can specify the image name after the docker images command.

Listing 4.8: Viewing the edora image

$ sudo docker images fedora


REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
fedora 21 7d3f07f8de5f 6 weeks ago 374.1 MB

We see that the fedora:21 image has been downloaded. We could also download
another tagged image using the docker pull command.

Listing 4.9: Pulling a tagged edora image

$ sudo docker pull fedora:20


This would have just pulled the fedora:20 image.

Searching or images

We can also search all o the publicly available images on Docker Hub using the
docker search command:

Listing 4.10: Searching or images

$ sudo docker search puppet


NAME                    DESCRIPTION          STARS  OFFICIAL  AUTOMATED
macadmins/puppetmaster  Simple puppetmaster  21               [OK]
devopsil/puppet         Dockerfile for a     18               [OK]
. . .

TIP You can also browse the available images online at Docker Hub.

Here, we've searched the Docker Hub for the term puppet. It'll search images and return:

• Repository names
• Image descriptions
• Stars - these measure the popularity o an image
• Ocial - an image managed by the upstream developer (e.g., the fedora
image managed by the Fedora team)
• Automated - an image built by the Docker Hub’s Automated Build process


NOTE We'll see more about Automated Builds later in this chapter.

Let’s pull down an image.

Listing 4.11: Pulling down the jamtur01/puppetmaster image

$ sudo docker pull jamtur01/puppetmaster

This will pull down the jamtur01/puppetmaster image (which, by the way, contains a pre-installed Puppet master server).
We can then use this image to build a new container. Let's do that now using the docker run command again.

Listing 4.12: Creating a Docker container rom the puppetmaster image

$ sudo docker run -i -t jamtur01/puppetmaster /bin/bash


root@4655dee672d3:/# facter
architecture => amd64
augeasversion => 1.2.0
. . .
root@4655dee672d3:/# puppet --version
3.4.3

You can see we've launched a new container from our jamtur01/puppetmaster image. We've launched the container interactively and told the container to run the Bash shell. Once inside the container's shell, we've run Facter (Puppet's inventory application), which was pre-installed on our image. From inside the container, we've also run the puppet binary to confirm it is installed.


Building our own images

So we've seen that we can pull down pre-prepared images with custom contents. How do we go about modifying our own images and updating and managing them? There are two ways to create a Docker image:

• Via the docker commit command
• Via the docker build command with a Dockerfile

The docker commit method is not currently recommended, as building with a Dockerfile is far more flexible and powerful, but we'll demonstrate it to you for the sake of completeness. After that, we'll focus on the recommended method of building Docker images: writing a Dockerfile and using the docker build command.

NOTE We don't generally actually "create" new images; rather, we build new images from existing base images, like the ubuntu or fedora images we've already seen. If you want to build an entirely new base image, you can see some information on this in this guide.

Creating a Docker Hub account

A big part o image building is sharing and distributing your images. We do this
by pushing them to the Docker Hub or your own registry. To acilitate this, let’s
start by creating an account on the Docker Hub. You can join Docker Hub here.


Figure 4.3: Creating a Docker Hub account.

Create an account and veriy your email address rom the email you’ll receive ater
signing up.
Now let’s test our new account rom Docker. To sign into the Docker Hub you can
use the docker login command.

Listing 4.13: Logging into the Docker Hub

$ sudo docker login


Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username (jamtur01): jamtur01
Password:
Login Succeeded

This command will log you into the Docker Hub and store your credentials for future use. You can use the docker logout command to log out from a registry server.


NOTE Your credentials will be stored in the $HOME/.dockercfg file. Since Docker 1.7.0 this is now $HOME/.docker/config.json.
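If you're curious, the stored credentials are base64-encoded rather than encrypted. The newer file looks roughly like this (the auth value is elided):

$ cat $HOME/.docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "..."
    }
  }
}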

Using Docker commit to create images

The rst method o creating images uses the docker commit command. You can
think about this method as much like making a commit in a version control system.
We create a container, make changes to that container as you would change code,
and then commit those changes to a new image.
Let’s start by creating a container rom the ubuntu image we’ve used in the past.

Listing 4.14: Creating a custom container to modiy

$ sudo docker run -i -t ubuntu /bin/bash


root@4aab3ce3cb76:/#

Next, we’ll install Apache into our container.

Listing 4.15: Adding the Apache package

root@4aab3ce3cb76:/# apt-get -yqq update


. . .
root@4aab3ce3cb76:/# apt-get -y install apache2
. . .

We've launched our container and then installed Apache within it. We're going to use this container as a web server, so we'll want to save it in its current state. That will save us from having to rebuild it with Apache every time we create a new container. To do this we exit from the container, using the exit command,

and use the docker commit command.

Listing 4.16: Committing the custom container

$ sudo docker commit 4aab3ce3cb76 jamtur01/apache2


8ce0ea7a1528

You can see we've used the docker commit command and specified the ID of the container we've just changed (to find that ID you could use the docker ps -l -q command to return the ID of the last created container), as well as a target repository and image name, here jamtur01/apache2. Of note is that the docker commit command only commits the differences between the image the container was created from and the current state of the container. This means updates are lightweight.
Let’s look at our new image.

Listing 4.17: Reviewing our new image

$ sudo docker images jamtur01/apache2


. . .
jamtur01/apache2 latest 8ce0ea7a1528 13 seconds ago 90.63 MB

We can also provide some more data about our changes when committing our
image, including tags. For example:


Listing 4.18: Committing another custom container

$ sudo docker commit -m "A new custom image" -a "James Turnbull" \
4aab3ce3cb76 jamtur01/apache2:webserver
f99ebb6fed1f559258840505a0f5d5b6173177623946815366f3e3acff01adef

Here, we’ve specied some more inormation while committing our new image.
We’ve added the -m option which allows us to provide a commit message explain-
ing our new image. We’ve also specied the -a option to list the author o the
image. We’ve then specied the ID o the container we’re committing. Finally,
we’ve specied the username and repository o the image, jamtur01/apache2, and
we’ve added a tag, webserver, to our image.
We can view this inormation about our image using the docker inspect com-
mand.

Listing 4.19: Inspecting our committed image

$ sudo docker inspect jamtur01/apache2:webserver
[{
"Architecture": "amd64",
"Author": "James Turnbull",
"Comment": "A new custom image",
. . .
}]

 TIP You can nd a ull list o the docker commit ags here.

I we want to run a container rom our new image, we can do so using the docker

Version: v17.03.0 (38f1319) 85


Chapter 4: Working with Docker images and repositories

run command.

Listing 4.20: Running a container rom our committed image

$ sudo docker run -t -i jamtur01/apache2:webserver /bin/bash


root@9c2d3a843b9e:/# service apache2 status
* apache2 is not running

You’ll note that we’ve specied our image with the ull tag: jamtur01/apache2:
webserver.

Building images with a Dockerle

We don't recommend the docker commit approach. Instead, we recommend that
you build images using a definition file called a Dockerfile and the docker build
command. The Dockerfile uses a basic DSL (Domain Specific Language) with
instructions for building Docker images. We recommend the Dockerfile approach
over docker commit because it provides a more repeatable, transparent, and idempotent
mechanism for creating images.
Once we have a Dockerfile we then use the docker build command to build a
new image from the instructions in the Dockerfile.

Our rst Dockerle

Let’s now create a directory and an initial Dockerfile. We’re going to build a
Docker image that contains a simple web server.


Listing 4.21: Creating a sample repository

$ mkdir static_web
$ cd static_web
$ touch Dockerfile

We've created a directory called static_web to hold our Dockerfile. This
directory is our build environment, which is what Docker calls a context or build
context. Docker will upload the build context, as well as any files and directories
contained in it, to our Docker daemon when the build is run. This provides the
Docker daemon with direct access to any code, files or other data you might want
to include in the image.
We've also created an empty Dockerfile file to get started. Now let's look at an
example of a Dockerfile to create a Docker image that will act as a web server.

Listing 4.22: Our rst Dockerle

# Version: 0.0.1
FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
RUN apt-get update; apt-get install -y nginx
RUN echo 'Hi, I am in your container' \
>/var/www/html/index.html
EXPOSE 80

The Dockerfile contains a series of instructions paired with arguments. Each
instruction, for example FROM, should be in upper-case and be followed by an
argument: FROM ubuntu:16.04. Instructions in the Dockerfile are processed from
the top down, so you should order them accordingly.
Each instruction adds a new layer to the image and then commits the image.
Docker executes the instructions by roughly following this workflow:

• Docker runs a container rom the image.


• An instruction executes and makes a change to the container.
• Docker runs the equivalent o docker commit to commit a new layer.
• Docker then runs a new container rom this new image.
• The next instruction in the le is executed, and the process repeats until all
instructions have been executed.

This means that i your Dockerfile stops or some reason (or example, i an
instruction ails to complete), you will be let with an image you can use. This is
highly useul or debugging: you can run a container rom this image interactively
and then debug why your instruction ailed using the last image created.

 NOTE The Dockerfile also supports comments. Any line that starts with
a # is considered a comment. You can see an example of this in the first line of
our Dockerfile.

The rst instruction in a Dockerfile must be FROM. The FROM instruction species
an existing image that the ollowing instructions will operate on; this image is
called the base image.
In our sample Dockerfile we’ve specied the ubuntu:16.04 image as our base
image. This specication will build an image on top o an Ubuntu 16.04 base
operating system. As with running a container, you should always be specic
about exactly rom which base image you are building.
Next, we’ve specied the MAINTAINER instruction, which tells Docker who the au-
thor o the image is and what their email address is. This is useul or speciying
an owner and contact or an image.

 NOTE The MAINTAINER instruction is deprecated in Docker 1.13.0.

We’ve ollowed these instructions with two RUN instructions. The RUN instruction

Version: v17.03.0 (38f1319) 88


Chapter 4: Working with Docker images and repositories

executes commands on the current image. The commands in our example: up-
dating the installed APT repositories and installing the nginx package and then
creating the /var/www/html/index.html le containing some example text. As
we’ve discovered, each o these instructions will create a new layer and, i suc-
cessul, will commit that layer and then execute the next instruction.
By deault, the RUN instruction executes inside a shell using the command wrapper
/bin/sh -c. I you are running the instruction on a platorm without a shell or
you wish to execute without a shell (or example, to avoid shell string munging),
you can speciy the instruction in exec ormat:

Listing 4.23: A RUN instruction in exec orm

RUN [ "apt-get", " install", "-y", "nginx" ]

We use this ormat to speciy an array containing the command to be executed


and then each parameter to pass to the command.
Next, we’ve specied the EXPOSE instruction, which tells Docker that the applica-
tion in this container will use this specic port on the container. That doesn’t mean
you can automatically access whatever service is running on that port (here, port
80) on the container. For security reasons, Docker doesn’t open the port automat-
ically, but waits or you to do it when you run the container using the docker run
command. We’ll see this shortly when we create a new container rom this image.
You can speciy multiple EXPOSE instructions to mark multiple ports to be exposed.

 NOTE Docker also uses the EXPOSE instruction to help link together containers,
which we'll see in Chapter 5. You can expose ports at run time with the
docker run command with the --expose option.


Building the image rom our Dockerle

All o the instructions will be executed and committed and a new image returned
when we run the docker build command. Let’s try that now:


Listing 4.24: Running the Dockerle

$ cd static_web
$ sudo docker build -t="jamtur01/static_web" .
Sending build context to Docker daemon 2.56 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:16.04
---> ba5877dc9bec
Step 1 : MAINTAINER James Turnbull "[email protected]"
---> Running in b8ffa06f9274
---> 4c66c9dcee35
Removing intermediate container b8ffa06f9274
Step 2 : RUN apt-get update
---> Running in f331636c84f7
---> 9d938b9e0090
Removing intermediate container f331636c84f7
Step 3 : RUN apt-get install -y nginx
---> Running in 4b989d4730dd
---> 93fb180f3bc9
Removing intermediate container 4b989d4730dd
Step 4 : RUN echo 'Hi, I am in your container'
>/var/www/html/index.html
---> Running in b51bacc46eb9
---> b584f4ac1def
Removing intermediate container b51bacc46eb9
Step 5 : EXPOSE 80
---> Running in 7ff423bd1f4d
---> 22d47c8cb6e5
Successfully built 22d47c8cb6e5

We've used the docker build command to build our new image. We've specified
the -t option to mark our resulting image with a repository and a name, here the
jamtur01 repository and the image name static_web. I strongly recommend you
always name your images to make it easier to track and manage them.
You can also tag images during the build process by suffixing the tag after the
image name with a colon, for example:

Listing 4.25: Tagging a build

$ sudo docker build -t="jamtur01/static_web:v1" .

 TIP I you don’t speciy any tag, Docker will automatically tag your image
as latest.

The trailing . tells Docker to look in the local directory to find the Dockerfile.
You can also specify a Git repository as a source for the Dockerfile as we see
here:

Listing 4.26: Building rom a Git repository

$ sudo docker build -t="jamtur01/static_web:v1" \


github.com/turnbullpress/docker-static_web

Here Docker assumes that there is a Dockerfile located in the root of the Git
repository.

 TIP Since Docker 1.5.0 and later you can also specify a path to a file to use as a
build source using the -f flag. For example, docker build -t "jamtur01/static_web"
-f /path/to/file. The file specified doesn't need to be called Dockerfile
but must still be within the build context.


But back to our docker build process. You can see that the build context has
been uploaded to the Docker daemon.

Listing 4.27: Uploading the build context to the daemon

Sending build context to Docker daemon 2.56 kB
Sending build context to Docker daemon

 TIP I a le named .dockerignore exists in the root o the build context
then it is interpreted as a newline-separated list o exclusion patterns. Much like
a .gitignore le it excludes the listed les rom being treated as part o the build
context, and thereore prevents them rom being uploaded to the Docker daemon.
Globbing can be done using Go’s lepath.

Next, you can see that each instruction in the Dockerfile has been executed with
the image ID, 22d47c8cb6e5, being returned as the final output of the build process.
Each step and its associated instruction are run individually, and Docker has
committed the result of each operation before outputting that final image ID.

What happens i an instruction ails?

Earlier, we talked about what happens if an instruction fails. Let's look at an
example: let's assume that in Step 4 we got the name of the required package
wrong and instead called it ngin.
Let's run the build again and see what happens when it fails.


Listing 4.28: Managing a ailed instruction

$ cd static_web
$ sudo docker build -t="jamtur01/static_web" .
Sending build context to Docker daemon 2.56 kB
Sending build context to Docker daemon
Step 1 : FROM ubuntu:16.04
---> 8dbd9e392a96
Step 2 : MAINTAINER James Turnbull "[email protected]"
---> Running in d97e0c1cf6ea
---> 85130977028d
Step 3 : RUN apt-get update
---> Running in 85130977028d
---> 997485f46ec4
Step 4 : RUN apt-get install -y ngin
---> Running in ffca16d58fd8
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package ngin
2014/06/04 18:41:11 The command [/bin/sh -c apt-get install -y
ngin] returned a non-zero code: 100

Let’s say I want to debug this ailure. I can use the docker run command to create
a container rom the last step that succeeded in my Docker build, in this example
using the image ID o 997485f46ec4.

Listing 4.29: Creating a container rom the last successul step

$ sudo docker run -t -i 997485f46ec4 /bin/bash


dcge12e59fe8:/#


I can then try to run the apt-get install -y ngin step again with the right package
name or conduct some other debugging to determine what went wrong. Once
I've identified the issue, I can exit the container, update my Dockerfile with the
right package name, and retry my build.

Dockerles and the build cache

As a result o each step being committed as an image, Docker is able to be really


clever about building images. It will treat previous layers as a cache. I, in our
debugging example, we did not need to change anything in Steps 1 to 3, then
Docker would use the previously built images as a cache and a starting point.
Essentially, it’d start the build process straight rom Step 4. This can save you a
lot o time when building images i a previous step has not changed. I, however,
you did change something in Steps 1 to 3, then Docker would restart rom the rst
changed instruction.
Sometimes, though, you want to make sure you don’t use the cache. For example,
i you’d cached Step 3 above, apt-get update, then it wouldn’t reresh the APT
package cache. You might want it to do this to get a new version o a package. To
skip the cache, we can use the --no-cache ag with the docker build command..

Listing 4.30: Bypassing the Dockerle build cache

$ sudo docker build --no-cache -t="jamtur01/static_web" .

Using the build cache or templating

As a result o the build cache, you can build your Dockerfiles in the orm o
simple templates (e.g., adding a package repository or updating packages near
the top o the le to ensure the cache is hit). I generally have the same template
set o instructions in the top o my Dockerfile, or example or Ubuntu:


Listing 4.31: A template Ubuntu Dockerle

FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-07-01
RUN apt-get -qq update

Let's step through this new Dockerfile. Firstly, I've used the FROM instruction
to specify a base image of ubuntu:16.04. Next, I've added my MAINTAINER instruction
to provide my contact details. I've then specified a new instruction, ENV.
The ENV instruction sets environment variables in the image. In this case, I've
specified the ENV instruction to set an environment variable called REFRESHED_AT,
showing when the template was last updated. Lastly, I've specified the apt-get
-qq update command in a RUN instruction. This refreshes the APT package cache
when it's run, ensuring that the latest packages are available to install.
With my template, when I want to refresh the build, I change the date in my ENV
instruction. Docker then resets the cache when it hits that ENV instruction and runs
every subsequent instruction anew without relying on the cache. This means my
RUN apt-get update instruction is rerun and my package cache is refreshed with
the latest content. You can extend this template example for your target platform
or to fit a variety of needs. For example, for a fedora image we might:

Listing 4.32: A template Fedora Dockerle

FROM fedora:21
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-07-01
RUN yum -q makecache

Which perorms a similar caching unction or Fedora using Yum.


Viewing our new image

Now let’s take a look at our new image. We can do this using the docker images
command.

Listing 4.33: Listing our new Docker image

$ sudo docker images jamtur01/static_web
REPOSITORY TAG ID CREATED SIZE
jamtur01/static_web latest 22d47c8cb6e5 24 seconds ago 12.29 kB (virtual 326 MB)

I we want to drill down into how our image was created, we can use the docker
history command.

Listing 4.34: Using the docker history command

$ sudo docker history 22d47c8cb6e5
IMAGE CREATED CREATED BY SIZE
22d47c8cb6e5 6 minutes ago /bin/sh -c #(nop) EXPOSE map[80/tcp:{}] 0 B
b584f4ac1def 6 minutes ago /bin/sh -c echo 'Hi, I am in your container' 27 B
93fb180f3bc9 6 minutes ago /bin/sh -c apt-get install -y nginx 18.46 MB
9d938b9e0090 6 minutes ago /bin/sh -c apt-get update 20.02 MB
4c66c9dcee35 6 minutes ago /bin/sh -c #(nop) MAINTAINER James Turnbull " 0 B
. . .


We see each o the image layers inside our new jamtur01/static_web image and
the Dockerfile instruction that created them.

Launching a container rom our new image

Let's launch a new container using our new image and see if what we've built has
worked.

Listing 4.35: Launching a container rom our new image

$ sudo docker run -d -p 80 --name static_web jamtur01/static_web


nginx -g "daemon off;"
6751b94bb5c001a650c918e9a7f9683985c3eb2b026c2f1776e61190669494a8

Here I've launched a new container called static_web using the docker run command
and the name of the image we've just created. We've specified the -d option,
which tells Docker to run detached in the background. This allows us to run long-running
processes like the Nginx daemon. We've also specified a command for
the container to run: nginx -g "daemon off;". This will launch Nginx in the
foreground to run our web server.
We've also specified a new flag, -p. The -p flag manages which network ports
Docker publishes at runtime. When you run a container, Docker has two methods
of assigning ports on the Docker host:

• Docker can randomly assign a high port from the range 32768 to 61000 on
the Docker host that maps to port 80 on the container.
• You can specify a specific port on the Docker host that maps to port 80 on
the container.

The docker run command will open a random port on the Docker host that will
connect to port 80 on the Docker container.
Let's look at what port has been assigned using the docker ps command. The -l
flag tells Docker to show us the last container launched.


Listing 4.36: Viewing the Docker port mapping

$ sudo docker ps -l
CONTAINER ID IMAGE ... PORTS NAMES
6751b94bb5c0 jamtur01/static_web:latest ... 0.0.0.0:49154->80/tcp static_web

We see that port 49154 is mapped to the container port of 80. We can get the
same information with the docker port command.

Listing 4.37: The docker port command

$ sudo docker port 6751b94bb5c0 80
0.0.0.0:49154

We’ve specied the container ID and the container port or which we’d like to see
the mapping, 80, and it has returned the mapped port, 49154.
Or we could use the container name too.

Listing 4.38: The docker port command with container name

$ sudo docker port static_web 80
0.0.0.0:49154

The -p option also allows us to be flexible about how a port is published to the
host. For example, we can specify that Docker bind the port to a specific port:


Listing 4.39: Exposing a specic port with -p

$ sudo docker run -d -p 80:80 --name static_web_80 jamtur01/


static_web nginx -g "daemon off;"

This will bind port 80 on the container to port 80 on the local host. It's important
to be wary of this direct binding: if you're running multiple containers, only
one container can bind a specific port on the local host. This can limit Docker's
flexibility.
To avoid this, we could bind to a different port.

Listing 4.40: Binding to a different port

$ sudo docker run -d -p 8080:80 --name static_web_8080 jamtur01/static_web nginx -g "daemon off;"

This would bind port 80 on the container to port 8080 on the local host.
We can also bind to a specific interface.

Listing 4.41: Binding to a specic interace

$ sudo docker run -d -p 127.0.0.1:80:80 --name static_web_lb


jamtur01/static_web nginx -g "daemon off;"

Here we’ve bound port 80 o the container to port 80 on the 127.0.0.1 interace
on the local host. We can also bind to a random port using the same structure.


Listing 4.42: Binding to a random port on a specific interface

$ sudo docker run -d -p 127.0.0.1::80 --name static_web_random jamtur01/static_web nginx -g "daemon off;"

Here we’ve removed the specic port to bind to on 127.0.0.1. We would now
use the docker inspect or docker port command to see which random port was
assigned to port 80 on the container.

 TIP You can bind UDP ports by adding the suffix /udp to the port binding.
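
For example, to publish port 53 over UDP, as a DNS container might need (the
image name here is hypothetical), we could run:

$ sudo docker run -d -p 53:53/udp --name dns_server jamtur01/dnsmasq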

Docker also has a shortcut, -P, that allows us to publish all ports we’ve exposed
via EXPOSE instructions in our Dockerfile.

Listing 4.43: Exposing a port with docker run

$ sudo docker run -d -P --name static_web_all jamtur01/static_web nginx -g "daemon off;"

This would publish port 80 on a random port on our local host. It would also
publish any additional ports we had specified with other EXPOSE instructions in
the Dockerfile that built our image.

 TIP You can nd more inormation on port redirection here.

With this port number, we can now view the web server on the running container
using the IP address of our host or the localhost on 127.0.0.1.


 NOTE You can nd the IP address o your local host with the ifconfig or
ip addr command.

Listing 4.44: Connecting to the container via curl

$ curl localhost:49154
Hi, I am in your container

Now we’ve got a simple Docker-based web server.

Dockerle instructions

We’ve already seen some o the available Dockerfile instructions, like RUN and
EXPOSE. But there are also a variety o other instructions we can put in our
Dockerfile. These include CMD, ENTRYPOINT, ADD, COPY, VOLUME, WORKDIR, USER,
ONBUILD, LABEL, STOPSIGNAL, ARG, SHELL, HEALTHCHECK and ENV. You can see a ull
list o the available Dockerfile instructions here.
We’ll also see a lot more Dockerfiles in the next ew chapters and see how to
build some cool applications into Docker containers.

CMD

The CMD instruction species the command to run when a container is launched. It
is similar to the RUN instruction, but rather than running the command when the
container is being built, it will speciy the command to run when the container
is launched, much like speciying a command to run when launching a container
with the docker run command, or example:


Listing 4.45: Speciying a specic command to run

$ sudo docker run -i -t jamtur01/static_web /bin/true

This would be articulated in the Dockerfile as:

Listing 4.46: Using the CMD instruction

CMD ["/bin/true"]

You can also speciy parameters to the command, like so:

Listing 4.47: Passing parameters to the CMD instruction

CMD ["/bin/bash", "-l"]

Here we’re passing the -l ag to the /bin/bash command.

 WARNING You'll note that the command is contained in an array. This
tells Docker to run the command 'as-is'. You can also specify the CMD instruction
without an array, in which case Docker will prepend /bin/sh -c to the command.
This may result in unexpected behavior when the command is executed. As a
result, it is recommended that you always use the array syntax.

Lastly, it's important to understand that we can override the CMD instruction using
the docker run command. If we specify a CMD in our Dockerfile and one on the
docker run command line, then the command line will override the Dockerfile's
CMD instruction.


 NOTE It's also important to understand the interaction between the CMD
instruction and the ENTRYPOINT instruction. We'll see some more details of this
below.

Let’s look at this process a little more closely. Let’s say our Dockerfile contains
the CMD:

Listing 4.48: Overriding CMD instructions in the Dockerle

CMD [ "/bin/bash" ]

We can build a new image (let's call it jamtur01/test) using the docker build
command and then launch a new container from this image.

Listing 4.49: Launching a container with a CMD instruction

$ sudo docker run -t -i jamtur01/test
root@e643e6218589:/#

Notice something different? We didn't specify the command to be executed at the
end of the docker run. Instead, Docker used the command specified by the CMD
instruction.
If, however, I did specify a command, what would happen?


Listing 4.50: Overriding a command locally

$ sudo docker run -i -t jamtur01/test /bin/ps
PID TTY TIME CMD
1 ? 00:00:00 ps
$

You can see here that we have specied the /bin/ps command to list running
processes. Instead o launching a shell, the container merely returned the list
o running processes and stopped, overriding the command specied in the CMD
instruction.

 TIP You can only speciy one CMD instruction in a Dockerfile. I more than
one is specied, then the last CMD instruction will be used. I you need to run
multiple processes or commands as part o starting a container you should use a
service management tool like Supervisor.

ENTRYPOINT

Closely related to the CMD instruction, and often confused with it, is the ENTRYPOINT
instruction. So what's the difference between the two, and why are they both
needed? As we've just discovered, we can override the CMD instruction on the
docker run command line. Sometimes this isn't great when we want a container
to behave in a certain way. The ENTRYPOINT instruction provides a command
that isn't as easily overridden. Instead, any arguments we specify on the docker
run command line will be passed as arguments to the command specified in the
ENTRYPOINT. Let's see an example of an ENTRYPOINT instruction.


Listing 4.51: Speciying an ENTRYPOINT

ENTRYPOINT ["/usr/sbin/nginx"]

Like the CMD instruction, we also speciy parameters by adding to the array. For
example:

Listing 4.52: Speciying an ENTRYPOINT parameter

ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]

 NOTE As with the CMD instruction above, you can see that we've specified
the ENTRYPOINT command in an array to avoid any issues with the command being
prepended with /bin/sh -c.

Now let's rebuild our image with an ENTRYPOINT of ENTRYPOINT ["/usr/sbin/nginx"].

Listing 4.53: Rebuilding static_web with a new ENTRYPOINT

$ sudo docker build -t="jamtur01/static_web" .

And then launch a new container rom our jamtur01/static_web image.


Listing 4.54: Using docker run with ENTRYPOINT

$ sudo docker run -t -i jamtur01/static_web -g "daemon off;"

We've rebuilt our image and then launched an interactive container. We specified
the argument -g "daemon off;". This argument will be passed to the command
specified in the ENTRYPOINT instruction, which will thus become /usr/sbin/nginx
-g "daemon off;". This command would then launch the Nginx daemon in the
foreground and leave the container running as a web server.
We can also combine ENTRYPOINT and CMD to do some neat things. For example,
we might want to specify the following in our Dockerfile.

Listing 4.55: Using ENTRYPOINT and CMD together

ENTRYPOINT ["/usr/sbin/nginx"]
CMD ["-h"]

Now when we launch a container, any option we specify will be passed to the
Nginx daemon; for example, we could specify -g "daemon off;" as we did above
to run the daemon in the foreground. If we don't specify anything to pass to the
container, then the -h is passed by the CMD instruction and returns the Nginx help
text: /usr/sbin/nginx -h.
This allows us to build in a default command to execute when our container is run,
combined with overridable options and flags on the docker run command line.

 TIP I required at runtime, you can override the ENTRYPOINT instruction using
the docker run command with --entrypoint ag.


WORKDIR

The WORKDIR instruction provides a way to set the working directory for the container
and the ENTRYPOINT and/or CMD to be executed when a container is launched
from the image.
We can use it to set the working directory for a series of instructions or for the
final container. For example, to set the working directory for a specific instruction
we might:

Listing 4.56: Using the WORKDIR instruction

WORKDIR /opt/webapp/db
RUN bundle install
WORKDIR /opt/webapp
ENTRYPOINT [ "rackup" ]

Here we've changed into the /opt/webapp/db directory to run bundle install
and then changed into the /opt/webapp directory prior to specifying our
ENTRYPOINT instruction of rackup.
You can override the working directory at runtime with the -w flag, for example:

Listing 4.57: Overriding the working directory

$ sudo docker run -ti -w /var/log ubuntu pwd
/var/log

This will set the container’s working directory to /var/log.


ENV

The ENV instruction is used to set environment variables during the image build
process. For example:

Listing 4.58: Setting an environment variable in Dockerle

ENV RVM_PATH /home/rvm/

This new environment variable will be used or any subsequent RUN instructions,
as i we had specied an environment variable prex to a command like so:

Listing 4.59: Prexing a RUN instruction

RUN gem install unicorn

would be executed as:

Listing 4.60: Executing with an ENV prex

RVM_PATH=/home/rvm/ gem install unicorn

You can speciy single environment variables in an ENV instruction or since Docker
1.4 you can speciy multiple variables like so:

Listing 4.61: Setting multiple environment variables using ENV

ENV RVM_PATH=/home/rvm RVM_ARCHFLAGS="-arch i386"

We can also use these environment variables in other instructions.


Listing 4.62: Using an environment variable in other Dockerle instructions

ENV TARGET_DIR /opt/app
WORKDIR $TARGET_DIR

Here we’ve specied a new environment variable, TARGET_DIR, and then used its
value in a WORKDIR instruction. Our WORKDIR instruction would now be set to /opt
/app.

 NOTE You can also escape environment variables when needed by prefixing
them with a backslash.
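
As a minimal sketch of this, the backslash stops Docker from substituting the
variable and passes the literal string through (the variable names here are
illustrative):

ENV TARGET_DIR /opt/app
# Substituted at parse time: MESSAGE becomes "deploying to /opt/app"
ENV MESSAGE "deploying to $TARGET_DIR"
# Escaped: MESSAGE_RAW keeps the literal text "deploying to $TARGET_DIR"
ENV MESSAGE_RAW "deploying to \$TARGET_DIR"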

These environment variables will also be persisted into any containers created
from your image. So, if we were to run the env command in a container built
with the ENV RVM_PATH /home/rvm/ instruction we'd see:

Listing 4.63: Persistent environment variables in Docker containers

root@bf42aadc7f09:~# env
. . .
RVM_PATH=/home/rvm/
. . .

You can also pass environment variables on the docker run command line using
the -e flag. These variables will only apply at runtime, for example:


Listing 4.64: Runtime environment variables

$ sudo docker run -ti -e "WEB_PORT=8080" ubuntu env


HOME=/
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=792b171c5e9f
TERM=xterm
WEB_PORT=8080

Now our container has the WEB_PORT environment variable set to 8080.

USER

The USER instruction species a user that the image should be run as; or example:

Listing 4.65: Using the USER instruction

USER nginx

This will cause containers created rom the image to be run by the nginx user. We
can speciy a username or a UID and group or GID. Or even a combination thereo,
or example:


Listing 4.66: Speciying USER and GROUP variants

USER user
USER user:group
USER uid
USER uid:gid
USER user:gid
USER uid:group

You can also override this at runtime by specifying the -u flag with the docker run
command.
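
For example, a quick way to check the effective user, here assuming the stock
ubuntu image provides the nobody user and the whoami command:

$ sudo docker run -ti -u nobody ubuntu whoami
nobody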

 TIP The deault user i you don’t speciy the USER instruction is root.

VOLUME

The VOLUME instruction adds volumes to any container created from the image.
A volume is a specially designated directory within one or more containers that
bypasses the Union File System to provide several useful features for persistent or
shared data:

• Volumes can be shared and reused between containers.
• A container doesn't have to be running to share its volumes.
• Changes to a volume are made directly.
• Changes to a volume will not be included when you update an image.
• Volumes persist until no containers use them.

This allows us to add data (like source code), a database, or other content into an
image without committing it to the image and allows us to share that data between
containers. This can be used to do testing with containers and an application's
code, manage logs, or handle databases inside a container. We'll see examples of
this in Chapters 5 and 6.
You can use the VOLUME instruction like so:

Listing 4.67: Using the VOLUME instruction

VOLUME ["/opt/project"]

This would create a mount point at /opt/project in any container created
from the image.

 TIP Also useul and related is the docker cp command. This allows you
to copy les to and rom your containers. You can read about it in the Docker
command line documentation.
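
For example, a sketch of copying our index page out of the static_web container,
and a local file back in (the local paths are hypothetical):

$ sudo docker cp static_web:/var/www/html/index.html .
$ sudo docker cp ./index.html static_web:/var/www/html/index.html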

Or we can speciy multiple volumes by speciying an array:

Listing 4.68: Using multiple VOLUME instructions

VOLUME ["/opt/project", "/data" ]

 TIP We'll see a lot more about volumes and how to use them in Chapters 5
and 6. If you're curious you can read more about volumes in the Docker volumes
documentation.


ADD

The ADD instruction adds les and directories rom our build environment into our
image; or example, when installing an application. The ADD instruction species
a source and a destination or the les, like so:

Listing 4.69: Using the ADD instruction

ADD software.lic /opt/application/software.lic

This ADD instruction will copy the le software.lic rom the build directory to /
opt/application/software.lic in the image. The source o the le can be a URL,
lename, or directory as long as it is inside the build context or environment. You
cannot ADD les rom outside the build directory or context.
When ADD’ing les Docker uses the ending character o the destination to deter-
mine what the source is. I the destination ends in a /, then it considers the source
a directory. I it doesn’t end in a /, it considers the source a le.
The source o the le can also be a URL; or example:

Listing 4.70: URL as the source o an ADD instruction

ADD http://wordpress.org/latest.zip /root/wordpress.zip

Lastly, the ADD instruction has some special magic or taking care o local tar
archives. I a tar archive (valid archive types include gzip, bzip2, xz) is specied
as the source le, then Docker will automatically unpack it or you:

Listing 4.71: Archive as the source o an ADD instruction

ADD latest.tar.gz /var/www/wordpress/


This will unpack the latest.tar.gz archive into the /var/www/wordpress/ directory.
The archive is unpacked with the same behavior as running tar with the
-x option: the output is the union of whatever exists in the destination plus the
contents of the archive. If a file or directory with the same name already exists in
the destination, it will not be overwritten.

 WARNING Currently this will not work with a tar archive specified in a
URL. This is somewhat inconsistent behavior and may change in a future release.

Finally, i the destination doesn’t exist, Docker will create the ull path or us,
including any directories. New les and directories will be created with a mode
o 0755 and a UID and GID o 0.

 NOTE It's also important to note that the build cache can be invalidated
by ADD instructions. If the files or directories added by an ADD instruction change
then this will invalidate the cache for all following instructions in the Dockerfile.

COPY

The COPY instruction is closely related to the ADD instruction. The key difference
is that the COPY instruction is purely focused on copying local files from the build
context and does not have any extraction or decompression capabilities.


Listing 4.72: Using the COPY instruction

COPY conf.d/ /etc/apache2/

This will copy les rom the conf.d directory to the /etc/apache2/ directory.
The source o the les must be the path to a le or directory relative to the build
context, the local source directory in which your Dockerfile resides. You cannot
copy anything that is outside o this directory, because the build context is up-
loaded to the Docker daemon, and the copy takes place there. Anything outside
o the build context is not available. The destination should be an absolute path
inside the container.
Any les and directories created by the copy will have a UID and GID o 0.
I the source is a directory, the entire directory is copied, including lesystem
metadata; i the source is any other kind o le, it is copied individually along
with its metadata. In our example, the destination ends with a trailing slash /, so
it will be considered a directory and copied to the destination directory.
I the destination doesn’t exist, it is created along with all missing directories in
its path, much like how the mkdir -p command works.

LABEL

The LABEL instruction adds metadata to a Docker image. The metadata is in the
form of key/value pairs. Let's see an example.

Listing 4.73: Adding LABEL instructions

LABEL version="1.0"
LABEL location="New York" type="Data Center" role="Web Server"


The LABEL instruction is written in the orm o label="value". You can speciy
one item o metadata per label or multiple items separated with white space. We
recommend combining all your metadata in a single LABEL instruction to save
creating multiple layers with each piece o metadata. You can inspect the labels
on an image using the docker inspect command..

Listing 4.74: Using docker inspect to view labels

$ sudo docker inspect jamtur01/apache2
. . .
"Labels": {
"version": "1.0",
"location": "New York",
"type": "Data Center",
"role": "Web Server"
},

Here we see the metadata we just dened using the LABEL instruction.

 NOTE The LABEL instruction was introduced in Docker 1.6.

STOPSIGNAL

The STOPSIGNAL instruction sets the system call signal that will be sent
to the container when you tell it to stop. This signal can be a valid number from
the kernel syscall table, for instance 9, or a signal name in the format SIGNAME, for
instance SIGKILL.
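
For example, for an Nginx-based image we might ask Docker to send SIGQUIT,
which Nginx treats as a graceful shutdown, rather than the default SIGTERM:

STOPSIGNAL SIGQUIT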


 NOTE The STOPSIGNAL instruction was introduced in Docker 1.9.

ARG

The ARG instruction denes variables that can be passed at build-time via the
docker build command. This is done using the --build-arg ag. You can only
speciy build-time arguments that have been dened in the Dockerfile.

Listing 4.75: Adding ARG instructions

ARG build
ARG webapp_user=user

The second ARG instruction sets a deault, i no value is specied or the argument
at build-time then the deault is used. Let’s use one o these arguments in a docker
build now.

Listing 4.76: Using an ARG instruction

$ docker build --build-arg build=1234 -t jamtur01/webapp .

As the jamtur01/webapp image is built the build variable will be set to 1234 and
the webapp_user variable will inherit the default value of user.

 WARNING At this point you're probably thinking: this is a great way
to pass secrets like credentials or keys. Don't do this. Your credentials will be
exposed during the build process and in the build history of the image.


Docker has a set o predened ARG variables that you can use at build-time without
a corresponding ARG instruction in the Dockerfile.

Listing 4.77: The predened ARG variables

HTTP_PROXY
http_proxy
HTTPS_PROXY
https_proxy
FTP_PROXY
ftp_proxy
NO_PROXY
no_proxy

To use these predened variables, pass them using the --build-arg <variable
>=<value> ag to the docker build command.

 NOTE The ARG instruction was introduced in Docker 1.9 and you can read
more about it in the Docker documentation.

SHELL

The SHELL instruction allows the deault shell used or the shell orm o commands
to be overridden. The deault shell on Linux is ‘["/bin/sh", "-c"] and on Win-
dows is ["cmd", "/S", "/C"].
The SHELL instruction is useul on platorms such as Windows where there are
multiple shells, or example running commands in the cmd or powershell environ-
ments. Or when need to run a command on Linux in a specic shell, or example
Bash.
The SHELL instruction can be used multiple times. Each new SHELL instruction

Version: v17.03.0 (38f1319) 119


Chapter 4: Working with Docker images and repositories

overrides all previous SHELL instructions, and afects any subsequent instructions.
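
As a brief sketch, we could switch subsequent RUN instructions to Bash so that
Bash-only syntax works:

SHELL ["/bin/bash", "-c"]
# This RUN now executes under Bash; the [[ ]] test is a Bash
# builtin that plain /bin/sh may lack.
RUN [[ -f /etc/os-release ]] && cat /etc/os-release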

HEALTHCHECK

The HEALTHCHECK instruction tells Docker how to test a container to check that it
is still working correctly. This allows you to check things like a web site being
served or an API endpoint responding with the correct data, allowing you to identify
issues that appear, even if an underlying process still appears to be running
normally.
When a container has a health check specified, it has a health status in addition
to its normal status. You can specify a health check like:

Listing 4.78: Speciying a HEALTHCHECK instruction

HEALTHCHECK --interval=10s --timeout=1m --retries=5 CMD curl http


://localhost || exit 1

The HEALTHCHECK instruction contains options and then the command you wish to
run itself, separated by a CMD keyword.
We've first specified three options, each of which has a default:

• --interval - defaults to 30 seconds. This is the period between health
checks. In this case the first health check will run 10 seconds after container
launch and subsequently every 10 seconds.
• --timeout - defaults to 30 seconds. If the health check takes longer than the
timeout then it is deemed to have failed.
• --retries - defaults to 3. The number of failed checks before the container
is marked as unhealthy.

The command ater the CMD keyword can be either a shell command or an exec
array, or example as we’ve seen in the ENTRYPOINT instruction. The command
should exit with 0 to indicate health or 1 to indicate an unhealthy state. In our

Version: v17.03.0 (38f1319) 120


Chapter 4: Working with Docker images and repositories

CMD we’re executing curl on the localhost. I the command ails we’re exiting
with an exit code o 1, indicating an unhealthy state.
We can see the state o the health check using the docker inspect command.

Listing 4.79: Docker inspect the health state

$ sudo docker inspect --format '{{.State.Health.Status}}' static_web
healthy

The health check state and related data is stored in the .State.Health namespace
and includes current state as well as a history of previous checks and their output.
The output from each health check is also available via docker inspect.

Listing 4.80: Health log output

$ sudo docker inspect --format '{{range .State.Health.Log}} {{.ExitCode}} {{.Output}} {{end}}' static
0 Hi, I am in your container

Here we're iterating through the array of .Log entries in the docker inspect output.
There can only be one HEALTHCHECK instruction in a Dockerfile. If you list more
than one then only the last will take effect.
You can also disable any health checks specified in any base images you may have
inherited with the instruction:


Listing 4.81: Disabling inherited health checks

HEALTHCHECK NONE

 NOTE This instruction was added in Docker 1.12.

ONBUILD

The ONBUILD instruction adds triggers to images. A trigger is executed when the
image is used as the basis of another image (e.g., if you have an image that needs
source code added from a specific location that might not yet be available, or if
you need to execute a build script that is specific to the environment in which the
image is built).
The trigger inserts a new instruction in the build process, as if it were specified
right after the FROM instruction. The trigger can be any build instruction. For
example:

Listing 4.82: Adding ONBUILD instructions

ONBUILD ADD . /app/src
ONBUILD RUN cd /app/src; make

This would add an ONBUILD trigger to the image being created, which we see when
we run docker inspect on the image.


Listing 4.83: Showing ONBUILD instructions with docker inspect

$ sudo docker inspect 508efa4e4bf8
...
"OnBuild": [
"ADD . /app/src",
"RUN cd /app/src/; make"
]
...

For example, we’ll build a new Dockerfile or an Apache2 image that we’ll call
jamtur01/apache2.

Listing 4.84: A new ONBUILD image Dockerle

FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
RUN apt-get update; apt-get install -y apache2
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ONBUILD ADD . /var/www/
EXPOSE 80
ENTRYPOINT ["/usr/sbin/apache2"]
CMD ["-D", "FOREGROUND"]

Now we’ll build this image.


Listing 4.85: Building the apache2 image

$ sudo docker build -t="jamtur01/apache2" .


...
Step 7 : ONBUILD ADD . /var/www/
---> Running in 0e117f6ea4ba
---> a79983575b86
Successfully built a79983575b86

We now have an image with an ONBUILD instruction that uses the ADD instruction
to add the contents of the directory we're building from to the /var/www/ directory
in our image. This could readily be our generic web application template from
which I build web applications.
Let's try this now by building a new image called webapp from the following
Dockerfile:

Listing 4.86: The webapp Dockerle

FROM jamtur01/apache2
MAINTAINER James Turnbull "[email protected]"
ENV APPLICATION_NAME webapp
ENV ENVIRONMENT development

Let’s look at what happens when I build this image.


Listing 4.87: Building our webapp image

$ sudo docker build -t="jamtur01/webapp" .


...
Step 0 : FROM jamtur01/apache2
# Executing 1 build triggers
Step onbuild-0 : ADD . /var/www/
---> 1a018213a59d
---> 1a018213a59d
Step 1 : MAINTAINER James Turnbull "[email protected]"
...
Successfully built 04829a360d86

We see that straight ater the FROM instruction, Docker has inserted the ADD in-
struction, specied by the ONBUILD trigger, and then proceeded to execute the
remaining steps. This would allow me to always add the local source and, as I’ve
done here, speciy some conguration or build inormation or each application;
hence, this becomes a useul template image.
The ONBUILD triggers are executed in the order specied in the parent image and
are only inherited once (i.e., by children and not grandchildren). I we built an-
other image rom this new image, a grandchild o the jamtur01/apache2 image,
then the triggers would not be executed when that image is built.

 NOTE There are several instructions you can't ONBUILD: FROM, MAINTAINER,
and ONBUILD itself. This is done to prevent Inception-like recursion in Dockerfile
builds.


Pushing images to the Docker Hub

Once we've got an image, we can upload it to the Docker Hub. This allows us to
make it available for others to use. For example, we could share it with others in
our organization or make it publicly available.

 NOTE The Docker Hub also has the option of private repositories. These
are a paid-for feature that allows you to store an image in a private repository
that is only available to you or anyone with whom you share it. This allows you
to have private images containing proprietary information or code you might not
want to share publicly.

We push images to the Docker Hub using the docker push command.
Let's build an image without a user prefix and try to push it now.

Listing 4.88: Trying to push a root image

$ cd static_web
$ sudo docker build --no-cache -t="static_web" .
. . .
Successfully built a312a2ed58c7
$ sudo docker push static_web
The push refers to a repository [docker.io/library/static_web]
c0121fc36460: Preparing
8591faa9900d: Preparing
9a39129ae0ac: Preparing
98305c1a8f5e: Preparing
0185b3091e8e: Preparing
ea9f151abb7e: Waiting
unauthorized: authentication required


What’s gone wrong here? We’ve tried to push our image to the repository
static_web, but Docker knows this is a root repository. Root repositories are
managed only by the Docker, Inc., team and will reject our attempt to write to
them as unauthorized. Let’s try again.

Listing 4.89: Pushing a Docker image

$ sudo docker push jamtur01/static_web
The push refers to a repository [jamtur01/static_web] (len: 1)
Processing checksums
Sending image list
Pushing repository jamtur01/static_web to registry-1.docker.io (1 tags)
. . .

This time, our push has worked, and we've written to a user repository,
jamtur01/static_web. You would write to your own user ID, which we created
earlier, and to an appropriately named image (e.g., youruser/yourimage).
We can now see our uploaded image on the Docker Hub.


Figure 4.4: Your image on the Docker Hub.

 TIP You can nd documentation and more inormation on the eatures o
the Docker Hub here.

Automated Builds

In addition to being able to build and push our images from the command line,
the Docker Hub also allows us to define Automated Builds. We can do so by connecting
a GitHub or BitBucket repository containing a Dockerfile to the Docker
Hub. When we push to this repository, an image build will be triggered and a new
image created. This was previously also known as a Trusted Build.

 NOTE Automated Builds also work or private GitHub and BitBucket repos-
itories.


The rst step in adding an Automated Build to the Docker Hub is to connect your
GitHub account or BitBucket to your Docker Hub account. To do this, navigate to
Docker Hub, sign in, click on your prole link, then click the Create -> Create
Automated Build button.

Figure 4.5: The Add Repository button.

You will see a page that shows your options or linking to either GitHub or Bit-
Bucket. Click the Select button under the GitHub logo to initiate the account
linkage. You will be taken to GitHub and asked to authorize access or Docker
Hub.
On Github you have two options: Public and Private (recommended) and
Limited. Select Public and Private (recommended), and click Allow Access
to complete the authorization. You may be prompted to input your GitHub
password to conrm the access.
From here, you will be prompted to select the organization and repository from
which you want to construct an Automated Build.


Figure 4.6: Selecting your repository.

Select the repository rom which you wish to create an Automated Build and then
congure the build.

Figure 4.7: Conguring your Automated Build.

Speciy the deault branch you wish to use, and conrm the repository name.
Speciy a tag you wish to apply to any resulting build, then speciy the location o
the Dockerfile. The deault is assumed to be the root o the repository, but you
can override this with any path.


Finally, click the Create button to add your Automated Build to the Docker Hub.
You will now see your Automated Build submitted. Click on the Build Details
link to see the status of the last build, including log output showing the build
process and any errors. A build status of Done indicates the Automated Build is
up to date. An Error status indicates a problem; you can click through to see the
log output.

 NOTE You can't push to an Automated Build using the docker push command.
You can only update it by pushing updates to your GitHub or BitBucket
repository.

Deleting an image

We can also delete images when we don’t need them anymore. To do this, we’ll
use the docker rmi command.

Listing 4.90: Deleting a Docker image

$ sudo docker rmi jamtur01/static_web
Untagged: 06c6c1f81534
Deleted: 06c6c1f81534
Deleted: 9f551a68e60f
Deleted: 997485f46ec4
Deleted: a101d806d694
Deleted: 85130977028d

Here we've deleted the jamtur01/static_web image. You can see Docker's layer
filesystem at work here: each of the Deleted: lines represents an image layer
being deleted. If a running container is still using an image then you won't be
able to delete it. You'll need to stop all containers running that image, remove
them, and then delete the image.
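
For example, a sketch of that cleanup using the docker ps ancestor filter to find
containers created from the image:

$ sudo docker stop $(sudo docker ps -q --filter ancestor=jamtur01/static_web)
$ sudo docker rm $(sudo docker ps -a -q --filter ancestor=jamtur01/static_web)
$ sudo docker rmi jamtur01/static_web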

 NOTE This only deletes the image locally. If you've previously pushed
that image to the Docker Hub, it'll still exist there.

I you want to delete an image’s repository on the Docker Hub, you’ll need to sign
in and delete it there using the Settings -> Delete button.

Figure 4.8: Deleting a repository.

We can also delete more than one image by speciying a list on the command line.


Listing 4.91: Deleting multiple Docker images

$ sudo docker rmi jamtur01/apache2 jamtur01/puppetmaster

or, like the docker rm command cheat we saw in Chapter 3, we can do the same
with the docker rmi command:

Listing 4.92: Deleting all images

$ sudo docker rmi `docker images -a -q`

Running your own Docker registry

Having a public registry o Docker images is highly useul. Sometimes, however,


we are going to want to build and store images that contain inormation or data
that we don’t want to make public. There are two choices in this situation:

• Make use o private repositories on the Docker Hub.


• Run your own registry behind the rewall.

The team at Docker, Inc., have open-sourced the code they use to run a Docker
registry, thus allowing us to build our own internal registry. The registry does not
currently have a user interace and is only made available as an API service.

 TIP I you’re running Docker behind a proxy or corporate rewall you can
also use the HTTPS_PROXY, HTTP_PROXY, NO_PROXY options to control how Docker
connects.


Running a registry rom a container

Installing a registry rom a Docker container is simple. Just run the Docker-
provided container like so:

Listing 4.93: Running a container-based registry

$ docker run -d -p 5000:5000 --name registry registry:2

This will launch a container running version 2.0 of the registry application and
bind port 5000 to the local host.

 TIP I you’re running an older version o the Docker Registry, prior to 2.0,
you can use the Migrator tool to upgrade to a new registry.

Testing the new registry

So how can we make use o our new registry? Let’s see i we can upload one o
our existing images, the jamtur01/static_web image, to our new registry. First,
let’s identiy the image’s ID using the docker images command.

Listing 4.94: Listing the jamtur01 static_web Docker image

$ sudo docker images jamtur01/static_web
REPOSITORY TAG ID CREATED SIZE
jamtur01/static_web latest 22d47c8cb6e5 24 seconds ago 12.29 kB (virtual 326 MB)

Next we take our image ID, 22d47c8cb6e5, and tag it for our new registry. To
specify the new registry destination, we prefix the image name with the hostname
and port of our new registry. In our case, our new registry has a hostname of
docker.example.com.

Listing 4.95: Tagging our image or our new registry

$ sudo docker tag 22d47c8cb6e5 docker.example.com:5000/jamtur01/


static_web

Ater tagging our image, we can then push it to the new registry using the docker
push command:

Listing 4.96: Pushing an image to our new registry

$ sudo docker push docker.example.com:5000/jamtur01/static_web
The push refers to a repository [docker.example.com:5000/jamtur01/static_web] (len: 1)
Processing checksums
Sending image list
Pushing repository docker.example.com:5000/jamtur01/static_web (1 tags)
Pushing 22d47c8cb6e556420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c
Buffering to disk 58375952/? (n/a)
Pushing 58.38 MB/58.38 MB (100%)
. . .

The image is then posted in the local registry and available for us to build new
containers using the docker run command.

Listing 4.97: Building a container from our local registry

$ sudo docker run -t -i docker.example.com:5000/jamtur01/static_web /bin/bash

This is the simplest deployment of the Docker registry behind your firewall. It
doesn't explain how to configure the registry or manage it. To find out details
like configuring authentication, managing the backend storage for your images,
and managing your registry, see the full configuration and deployment details in
the Docker Registry deployment documentation.
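
As a small taste of that configuration, here's a sketch of running the registry with its image data stored on the Docker host, so your images survive the registry container being removed. The /opt/registry-data path is purely illustrative; /var/lib/registry is the registry's default storage location inside the container.

$ docker run -d -p 5000:5000 --name registry \
-v /opt/registry-data:/var/lib/registry \
registry:2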

Alternative Indexes

There are a variety of other services and companies out there starting to provide
custom Docker registry services.

Quay

The Quay service provides a private hosted registry that allows you to upload both
public and private containers. Unlimited public repositories are currently free.
Private repositories are available in a series of scaled plans. The Quay product
has recently been acquired by CoreOS and will be integrated into that product.

Summary

In this chapter, we've seen how to use and interact with Docker images and the
basics of modifying, updating, and uploading images to the Docker Hub. We've
also learned about using a Dockerfile to construct our own custom images. Finally,
we've discovered how to run our own local Docker registry and some hosted
alternatives. This gives us the basis for starting to build services with Docker.

We’ll use this knowledge in the next chapter to see how we can integrate Docker
into a testing workow and into a Continuous Integration liecycle.

Chapter 5

Testing with Docker

We’ve learned a lot about the basics o Docker in the previous chapters. We’ve
learned about images, the basics o launching, and working with containers. Now
that we’ve got those basics down, let’s try to use Docker in earnest. We’re going
to start by using Docker to help us make our development and testing processes a
bit more streamlined and ecient.
To demonstrate this, we’re going to look at three use cases:

• Using Docker to test a static website.
• Using Docker to build and test a web application.
• Using Docker for Continuous Integration.

 NOTE We’re using Jenkins or CI because it’s the platorm I have the most
experience with, but you can adapt most o the ideas contained in those sections
to any CI platorm.

In the rst two use cases, we’re going to ocus on local, developer-centric devel-
oping and testing, and in the last use case, we’ll see how Docker might be used in
a broader multi-developer liecycle or build and test.

This chapter will introduce you to using Docker as part of your daily life and
workflow, including useful concepts like connecting Docker containers. The chapter
contains a lot of useful information on how to run and manage Docker in general,
and I recommend you read it even if these use cases aren't immediately relevant
to you.

Using Docker to test a static website

One o the simplest use cases or Docker is as a local web development environ-
ment. Such an environment allows you to replicate your production environment
and ensure what you develop will also likely run in production. We’re going to
start with installing the Nginx web server into a container to run a simple website.
Our website is originally named Sample.

An initial Dockerle or the Sample website

To do this, let's start with creating some structure, some configuration files for our
container and a Dockerfile from which to build our image. We start by creating
a directory to hold our Dockerfile first.

Listing 5.1: Creating a directory for our Sample website Dockerfile

$ mkdir sample
$ cd sample

We’re also going to need some Nginx conguration les to run our website. We can
download some example les I’ve prepared earlier rom GitHub into the sample
directory.

Listing 5.2: Getting our Nginx configuration files

$ wget https://raw.githubusercontent.com/jamtur01/dockerbook-code/master/code/5/sample/nginx/global.conf
$ wget https://raw.githubusercontent.com/jamtur01/dockerbook-code/master/code/5/sample/nginx/nginx.conf

Now let’s look at the Dockerfile you’re going to create or our Sample website.

Listing 5.3: The Dockerfile for the Sample website

FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01
RUN apt-get -yqq update; apt-get -yqq install nginx
RUN mkdir -p /var/www/html/website
ADD global.conf /etc/nginx/conf.d/
ADD nginx.conf /etc/nginx/nginx.conf
EXPOSE 80

Here we’ve written a Dockerfile that:

• Installs Nginx.
• Creates a directory, /var/www/html/website/, in the container.
• Adds the Nginx configuration from the local files we downloaded to our
image.
• Exposes port 80 on the image.

Our two Nginx configuration files configure Nginx for running our Sample website.
The global.conf file is copied into the /etc/nginx/conf.d/ directory by the ADD
instruction. The global.conf configuration file specifies:

Listing 5.4: The global.conf file

server {
listen 0.0.0.0:80;
server_name _;

root /var/www/html/website;
index index.html index.htm;

access_log /var/log/nginx/default_access.log;
error_log /var/log/nginx/default_error.log;
}

This sets Nginx to listen on port 80 and sets the root of our webserver to
/var/www/html/website, the directory we just created with a RUN instruction.

We also need to congure Nginx to run non-daemonized in order to allow it to


work inside our Docker container. To do this, the nginx.conf le is copied into
the /etc/nginx/ directory and contains:

Listing 5.5: The nginx.conf configuration file

user www-data;
worker_processes 4;
pid /run/nginx.pid;
daemon off;

events { }

http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
}

In this conguration le, the daemon off; option stops Nginx rom going into
the background and orces it to run in the oreground. This is because Docker
containers rely on the running process inside them to remain active. By deault,
Nginx daemonizes itsel when started, which would cause the container to run
briey and then stop when the daemon was orked and launched and the original
process that orked it stopped.
This le is copied to /etc/nginx/nginx.conf by the ADD instruction.
You’ll also see a subtle diference between the destinations o the two ADD instruc-

tions. The rst ends in the directory, /etc/nginx/conf.d/, and the second in a
specic le /etc/nginx/nginx.conf. Both styles are accepted ways o copying
les into a Docker image.

 NOTE You can nd all the code and sample conguration les or this at
The Docker Book Code site or the Docker Book site. You will need to specically
download or copy and paste the nginx.conf and global.conf conguration les
into the nginx directory we created to make them available or the docker build.

Building our Sample website and Nginx image

From this Dockerfile, we can build ourselves a new image with the docker build
command; we’ll call it jamtur01/nginx.

Listing 5.6: Building our new Nginx image

$ sudo docker build -t jamtur01/nginx .

This will build and name our new image, and you should see the build steps
execute. We can take a look at the steps and layers that make up our new image
using the docker history command.

Listing 5.7: Showing the history of the Nginx image

$ sudo docker history jamtur01/nginx
IMAGE          CREATED       CREATED BY                                      SIZE
f99cb0a6726d   7 secs ago    /bin/sh -c #(nop) EXPOSE 80/tcp                 0 B
d0741c80034e   7 secs ago    /bin/sh -c #(nop) ADD file:d6698a182fafaf3cb0   415 B
f1b8d3ab6b4f   8 secs ago    /bin/sh -c #(nop) ADD file:9778ae1b43896011cc   286 B
4e88da941d2b   About a min   /bin/sh -c mkdir -p /var/www/html/website       0 B
1224c6db31b7   About a min   /bin/sh -c apt-get -yqq update; apt-get -yq     39.32 MB
2cfbed445367   About a min   /bin/sh -c #(nop) ENV REFRESHED_AT=2016-06-01   0 B
6b5e0485e5fa   About a min   /bin/sh -c #(nop) MAINTAINER James Turnbull "   0 B
91e54dfb1179   2 days ago    /bin/sh -c #(nop) CMD ["/bin/bash"]             0 B
d74508fb6632   2 days ago    /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/   1.895 kB
c22013c84729   2 days ago    /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic   194.5 kB
d3a1f33e8a5a   2 days ago    /bin/sh -c #(nop) ADD file:5a3f9e9ab88e725d60   188.2 MB

The history starts with the final layer, our new jamtur01/nginx image, and works
backward to the original parent image, ubuntu:16.04. Each step in between shows
the new layer and the instruction from the Dockerfile that generated it.

Building containers rom our Sample website and Nginx image

We can now take our jamtur01/nginx image and start to build containers from it,
which will allow us to test our Sample website. To do that we need to add the
Sample website's code. Let's download it now into the sample directory.

Listing 5.8: Downloading our Sample website

$ mkdir website; cd website
$ wget https://raw.githubusercontent.com/jamtur01/dockerbook-code/master/code/5/sample/website/index.html
$ cd ..

This will create a directory called website inside the sample directory. We then
download an index.html file for our Sample website into that website directory.
Now let's look at how we might run a container using the docker run command.

Listing 5.9: Running our first Nginx testing container

$ sudo docker run -d -p 80 --name website \
-v $PWD/website:/var/www/html/website \
jamtur01/nginx nginx

NOTE You can see we've passed the nginx command to docker run. Normally
this wouldn't make Nginx run interactively. In the configuration we supplied
to Docker, though, we've added the directive daemon off. This directive
causes Nginx to run interactively in the foreground when launched.

You can see we’ve used the docker run command to build a container rom our

jamtur01/nginx image called website. You will have seen most of the options
before, but the -v option is new. This new option allows us to create a volume in
our container from a directory on the host.
Let’s take a brie digression into volumes, as they are important and useul in
Docker. Volumes are specially designated directories within one or more contain-
ers that bypass the layered Union File System to provide persistent or shared data
or Docker. This means that changes to a volume are made directly and bypass
the image. They will not be included when we commit or build an image.

TIP Volumes can also be shared between containers and can persist even
when containers are stopped. We'll see how to make use of this for data management
in later chapters.

In our immediate case, we see the value of volumes when we don't want to bake
our application or code into an image. For example:

• We want to work on and test it simultaneously.
• It changes frequently, and we don't want to rebuild the image during our
development process.
• We want to share the code between multiple containers.

The -v option works by specifying a directory or mount on the local host separated
from the directory on the container with a :. If the container directory doesn't
exist Docker will create it.
We can also specify the read/write status of the container directory by adding
either rw or ro after that directory, like so:

Listing 5.10: Controlling the write status of a volume

$ sudo docker run -d -p 80 --name website \
-v $PWD/website:/var/www/html/website:ro \
jamtur01/nginx nginx

This would make the container directory /var/www/html/website read-only.

In our Nginx website container, we've mounted a local website we're developing.
To do this we've mounted, as a volume, the directory $PWD/website to
/var/www/html/website in our container. In our Nginx configuration (in the
/etc/nginx/conf.d/global.conf configuration file), we've specified this directory
as the location to be served out by the Nginx server.

TIP The website directory we're using is contained in the source code for this
book here and on GitHub here. You can see the index.html file we downloaded
inside that directory.

Now, i we look at our running container using the docker ps command, we see
that it is active, it is named website, and port 80 on the container is mapped to
port 49161 on the host.

Listing 5.11: Viewing the Sample website container

$ sudo docker ps -l
CONTAINER ID   IMAGE                   ... PORTS                   NAMES
6751b94bb5c0   jamtur01/nginx:latest   ... 0.0.0.0:49161->80/tcp   website

I we browse to port 49161 on our Docker host, we’ll be able to see our Sample

website displayed.

Figure 5.1: Browsing the Sample website.

Editing our website

Neat! We’ve got a live site. Now what happens i we edit our website? Let’s open
up the index.html le in the sample/website older on our local host and edit it.

Listing 5.12: Editing our Sample website

$ cd sample
$ vi $PWD/website/index.html

We’ll change the title rom:

Listing 5.13: Old title

This is a test website

To:

Listing 5.14: New title

This is a test website for Docker

Let’s reresh our browser and see what we’ve got now.

Figure 5.2: Browsing the edited Sample website.

We see that our Sample website has been updated. This is a simple example
of editing a website, but you can see how you could easily do so much more.
More importantly, you're testing a site that reflects production reality. You can
now have containers for each type of production web-serving environment (e.g.,
Apache, Nginx), for running varying versions of development frameworks like
PHP or Ruby on Rails, or for database back ends, etc.

Using Docker to build and test a web application

Now let’s look at a more complex example o testing a larger web application.
We’re going to test a Sinatra-based web application instead o a static website
and then develop that application whilst testing in Docker. Sinatra is a Ruby-
based web application ramework. It contains a web application library and a
simple Domain Specic Language or DSL or creating web applications. Unlike
more complex web application rameworks, like Ruby on Rails, Sinatra does not
ollow the model–view–controller pattern but rather allows you to create quick
and simple web applications.
As such it’s perect or creating a small sample application to test. In our case our
new application is going to take incoming URL parameters and output them as a
JSON hash. We’re also going to take advantage o this application architecture to
show you how to link Docker containers together.

Building our Sinatra application

Let’s create a directory, sinatra, to hold our new application and any associated
les we’ll need or the build.

Listing 5.15: Create directory for web application testing

$ mkdir -p sinatra
$ cd sinatra

Inside the sinatra directory let’s start with a Dockerfile to build the basic image
that we will use to develop our Sinatra web application.

Listing 5.16: Dockerfile for our Sinatra container

FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01

RUN apt-get update -yqq; apt-get -yqq install ruby ruby-dev build-essential redis-tools
RUN gem install --no-rdoc --no-ri sinatra json redis

RUN mkdir -p /opt/webapp

EXPOSE 4567

CMD [ "/opt/webapp/bin/webapp" ]

You can see that we've created another Ubuntu-based image, installed Ruby and
RubyGems, and then used the gem binary to install the sinatra, json, and redis
gems. The sinatra and json gems contain Ruby's Sinatra library and support for
JSON. The redis gem we're going to use a little later on to provide integration to
a Redis database.
We've also created a directory to hold our new web application and exposed the
default WEBrick port of 4567.
Finally, we've specified a CMD of /opt/webapp/bin/webapp, which will be the
binary that launches our web application.
Let’s build this new image now using the docker build command.

Listing 5.17: Building our new Sinatra image

$ sudo docker build -t jamtur01/sinatra .

Creating our Sinatra container

We’ve built our image. Let’s now download our Sinatra web application’s source
code. You can nd the code or this Sinatra application here or at The Docker
Book site. The application is made up o the bin and lib directories rom the
webapp directory.
Let’s download it now into the sinatra directory.

Listing 5.18: Download our Sinatra web application

$ cd sinatra
$ wget --cut-dirs=3 -nH -r --reject Dockerfile,index.html --no-parent http://dockerbook.com/code/5/sinatra/webapp/

Let’s quickly look at the core o the webapp source code contained in the sinatra
/webapp/lib/app.rb le.

Listing 5.19: The Sinatra app.rb source code

require "rubygems"
require "sinatra"
require "json"

class App < Sinatra::Application

set :bind, '0.0.0.0'

get '/' do
"<h1>DockerBook Test Sinatra app</h1>"
end

post '/json/?' do
params.to_json
end

end

This is a simple application that converts any parameters posted to the /json
endpoint to JSON and displays them.
We also need to ensure that the webapp/bin/webapp binary is executable prior to
use, using the chmod command.

Listing 5.20: Making the webapp/bin/webapp binary executable

$ chmod +x webapp/bin/webapp

Now let’s launch a new container rom our image using the docker run command.
To launch we should be inside the sinatra directory because we’re going to mount

our source code into the container using a volume.

Listing 5.21: Launching our first Sinatra container

$ sudo docker run -d -p 4567 --name webapp \
-v $PWD/webapp:/opt/webapp jamtur01/sinatra

Here we’ve launched a new container rom our jamtur01/sinatra image, called
webapp. We’ve specied a new volume, using the webapp directory that holds our
new Sinatra web application, and we’ve mounted it to the directory we created in
the Dockerfile: /opt/webapp.
We’ve not provided a command to run on the command line; instead, we’re using
the command we specied via the CMD instruction in the Dockerfile o the image.

Listing 5.22: The CMD instruction in our Dockerfile

. . .
CMD [ "/opt/webapp/bin/webapp" ]
. . .

This command will be executed when a container is launched from this image.
We can also use the docker logs command to see what happened when our
command was executed.

Listing 5.23: Checking the logs of our Sinatra container

$ sudo docker logs webapp
[2016-08-03 17:34:46] INFO WEBrick 1.3.1
[2016-08-03 17:34:46] INFO ruby 2.3.1 (2016-04-26) [x86_64-linux-gnu]
== Sinatra (v1.4.7) has taken the stage on 4567 for development with backup from WEBrick
[2016-08-03 17:34:46] INFO WEBrick::HTTPServer#start: pid=1 port=4567

By adding the -f flag to the docker logs command, you can get similar behavior
to the tail -f command and continuously stream new output from the STDERR
and STDOUT of the container.

Listing 5.24: Tailing the logs of our Sinatra container

$ sudo docker logs -f webapp
. . .

We can also see the running processes of our Sinatra Docker container using the
docker top command.

Listing 5.25: Using docker top to list our Sinatra processes

$ sudo docker top webapp
UID    PID     PPID    C   STIME   TTY   TIME       CMD
root   21506   15332   0   20:26   ?     00:00:00   /usr/bin/ruby /opt/webapp/bin/webapp

We see rom the logs that Sinatra has been launched and the WEBrick server is
waiting on port 4567 in the container or us to test our application. Let’s check to
which port on our local host that port is mapped:

Listing 5.26: Checking the Sinatra port mapping

$ sudo docker port webapp 4567
0.0.0.0:49160

Right now, our basic Sinatra application doesn’t do much. It just takes incoming
parameters, turns them into JSON, and then outputs them. We can now use the
curl command to test our application.

Listing 5.27: Testing our Sinatra application

$ curl -i -H 'Accept: application/json' \
-d 'name=Foo&status=Bar' http://localhost:49160/json
HTTP/1.1 200 OK
Content-Type: text/html;charset=utf-8
Content-Length: 29
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
Server: WEBrick/1.3.1 (Ruby/2.3.1/2016-04-26)
Date: Wed, 03 Aug 2016 18:30:06 GMT
Connection: Keep-Alive
{"name":"Foo","status":"Bar"}

We see that we’ve passed some URL parameters to our Sinatra application and
returned to us as a JSON hash: {"name":"Foo","status":"Bar"}.
Neat! But let’s see i we can extend our example application container to an actual
application stack by connecting to a service running in another container.

Extending our Sinatra application to use Redis

We’re going to extend our Sinatra application now by adding a Redis back end
and storing our incoming URL parameters in a Redis database. To do this, we’re
going to download a new version o our Sinatra application. We’ll also create an
image and container that run a Redis database. We’ll then make use o Docker’s
capabilities to connect the two containers.

Updating our Sinatra application

Let’s start with downloading an updated Sinatra-based application with a con-


nection to Redis congured. From inside our sinatra directory let’s download a
Redis-enabled version o our application into a new directory: webapp_redis.

Listing 5.28: Download our updated Sinatra web application

$ cd sinatra
$ wget --cut-dirs=3 -nH -r --reject Dockerfile,index.html --no-parent http://dockerbook.com/code/5/sinatra/webapp_redis/

We see we’ve downloaded the new application. Let’s look at its core code in lib/
app.rb now.

Listing 5.29: The webapp_redis app.rb file

require "rubygems"
require "sinatra"
require "json"
require "redis"

class App < Sinatra::Application

redis = Redis.new(:host => 'db', :port => '6379')

set :bind, '0.0.0.0'

get '/' do
"<h1>DockerBook Test Redis-enabled Sinatra app</h1>"
end

get '/json' do
params = redis.get "params"
params.to_json
end

post '/json/?' do
redis.set "params", [params].to_json
params.to_json
end
end

 NOTE You can see the ull source or our updated Redis-enabled Sinatra
application here or at The Docker Book site.

Our new application is basically the same as our previous application with support
for Redis added. We now create a connection to a Redis database on a host called
db on port 6379. We also post our parameters to that Redis database and then get
them back from it when required.
We also need to ensure that the webapp_redis/bin/webapp binary is executable
prior to use, using the chmod command.

Listing 5.30: Making the webapp_redis/bin/webapp binary executable

$ chmod +x webapp_redis/bin/webapp

Building a Redis database image

To build our Redis database, we're going to create a new image. Let's create a
directory, redis, inside our sinatra directory, to hold any associated files we'll
need for the Redis container build.

Listing 5.31: Create directory for Redis container

$ mkdir redis
$ cd redis

Inside the sinatra/redis directory let's start with another Dockerfile for our
Redis image.

Listing 5.32: Dockerfile for Redis image

FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01
RUN apt-get -yqq update; apt-get -yqq install redis-server redis-tools
EXPOSE 6379
ENTRYPOINT ["/usr/bin/redis-server", "--protected-mode no" ]
CMD []

We’ve specied the installation o the Redis server, exposed port 6379, and speci-
ed an ENTRYPOINT that will launch that Redis server. Let’s now build that image
and call it jamtur01/redis.

Listing 5.33: Building our Redis image

$ sudo docker build -t jamtur01/redis .

Now let’s create a container rom our new image.

Listing 5.34: Launching a Redis container

$ sudo docker run -d -p 6379 --name redis jamtur01/redis
2df899db52baf469633459fa2abd34148ae4456a8c4a2343a0f372f2ee407756

We’ve launched a new container named redis rom our jamtur01/redis image.
Note that we’ve specied the -p ag to publish port 6379. Let’s see what port it’s
running on.

Listing 5.35: Checking the Redis port

$ sudo docker port redis 6379
0.0.0.0:49161

Our Redis port is published on port 49161. Let’s try to connect to that Redis
instance now.
We’ll need to install the Redis client locally to do the test. This is usually the
redis-tools package on Ubuntu.

Listing 5.36: Installing the redis-tools package on Ubuntu

$ sudo apt-get -y install redis-tools

Or the redis package on Red Hat and related distributions.

Listing 5.37: Installing the redis package on Red Hat et al

$ sudo yum install -y -q redis

Then we can use the redis-cli command to check our Redis server.

Listing 5.38: Testing our Redis connection

$ redis-cli -h 127.0.0.1 -p 49161
redis 127.0.0.1:49161>

Here we’ve connected the Redis client to 127.0.0.1 on port 49161 and veried

that our Redis server is working. You can use the quit command to exit the Redis
CLI interface.

Connecting our Sinatra application to the Redis container

Let’s now update our Sinatra application to connect to Redis and store our incom-
ing parameters. In order to do that, we’re going to need to be able to talk to the
Redis server. There are two ways we could do this using:

• Docker’s own internal network.


• From Docker 1.9 and later, using Docker Networking and the docker
network command.

So which method should I choose? Well the first method, Docker's internal
network, is not an overly flexible or powerful solution. We're mostly going to discuss
it to introduce you to how Docker networking functions. We don't recommend it
as a solution for connecting containers.
The more realistic method for connecting containers is Docker Networking.

• Docker Networking can connect containers to each other across different
hosts.
• Containers connected via Docker Networking can be stopped, started or
restarted without needing to update connections.
• With Docker Networking you don't need to create a container before you can
connect to it. You also don't need to worry about the order in which you
run containers and you get internal container name resolution and discovery
inside the network.

We’re going to look Docker Networking or connecting Docker containers together
in the ollowing sections.

Version: v17.03.0 (38f1319) 162


Chapter 5: Testing with Docker

Docker internal networking

The rst method involves Docker’s own network stack. So ar, we’ve seen Docker
containers exposing ports and binding interaces so that container services are
published on the local Docker host’s external network (e.g., binding port 80 inside
a container to a high port on the local host). In addition to this capability, Docker
has a acet we haven’t yet seen: internal networking.
Every Docker container is assigned an IP address, provided through an interace
created when we installed Docker. That interace is called docker0. Let’s look at
that interace on our Docker host now.

TIP Since Docker 1.5.0 IPv6 addresses are also supported. To enable this run
the Docker daemon with the --ipv6 flag.

Listing 5.39: The docker0 interface

$ ip a show docker0
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP
link/ether 06:41:69:71:00:ba brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
inet6 fe80::1cb3:6eff:fee2:2df1/64 scope link
valid_lft forever preferred_lft forever
. . .

NOTE Depending on your distribution, you may need the iproute2 package
to run the ip command.

The docker0 interace has an RFC1918 private IP address in the 172.16-172.30


range. This address, 172.17.42.1, will be the gateway address or the Docker
network and all our Docker containers.

 TIP Docker will deault to 172.17.x.x as a subnet unless that subnet is


already in use, in which case it will try to acquire another in the 172.16-172.30
ranges.

The docker0 interace is a virtual Ethernet bridge that connects our containers and
the local host network. I we look urther at the other interaces on our Docker
host, we’ll nd a series o interaces starting with veth.

Listing 5.40: The veth interfaces

vethec6a Link encap:Ethernet HWaddr 86:e1:95:da:e2:5a
inet6 addr: fe80::84e1:95ff:feda:e25a/64 Scope:Link
. . .

Every time Docker creates a container, it creates a pair of peer interfaces that are
like opposite ends of a pipe (i.e., a packet sent on one will be received on the
other). It gives one of the peers to the container to become its eth0 interface and
keeps the other peer, with a unique name like vethec6a, out on the host machine.
You can think of a veth interface as one end of a virtual network cable. One end is
plugged into the docker0 bridge, and the other end is plugged into the container.
By binding every veth* interface to the docker0 bridge, Docker creates a virtual
subnet shared between the host machine and every Docker container.
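
If you'd like to see this pairing on your own Docker host, iproute2 can list the interfaces attached to the bridge; a quick sketch (the veth names you see will differ):

$ ip link show master docker0

This should show one veth* interface for each running container.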
Let’s look inside a container now and see the other end o this pipe.

Listing 5.41: The eth0 interface in a container

$ sudo docker run -t -i ubuntu /bin/bash
root@b9107458f16a:/# ip a show eth0
1483: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether f2:1f:28:de:ee:a7 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.29/16 scope global eth0
inet6 fe80::f01f:28ff:fede:eea7/64 scope link
valid_lft forever preferred_lft forever

We see that Docker has assigned an IP address, 172.17.0.29, for our container that
will be peered with a virtual interface on the host side, allowing communication
between the host network and the container.
Let's trace a route out of our container and see this now.

Listing 5.42: Tracing a route out of our container

root@b9107458f16a:/# apt-get -yqq update; apt-get install -yqq traceroute
. . .
root@b9107458f16a:/# traceroute google.com
traceroute to google.com (74.125.228.78), 30 hops max, 60 byte packets
1 172.17.42.1 (172.17.42.1) 0.078 ms 0.026 ms 0.024 ms
. . .
15 iad23s07-in-f14.1e100.net (74.125.228.78) 32.272 ms 28.050 ms 25.662 ms

We see that the next hop from our container is the docker0 interface gateway IP
172.17.42.1 on the host network.

But there’s one other piece o Docker networking that enables this connectivity:
rewall rules and NAT conguration allow Docker to route between containers
and the host network.
Exit out o our container and let’s look at the IPTables NAT conguration on our
Docker host.

Listing 5.43: Docker iptables and NAT

$ sudo iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target       prot opt source          destination
DOCKER       all  --  0.0.0.0/0       0.0.0.0/0       ADDRTYPE match dst-type LOCAL

Chain OUTPUT (policy ACCEPT)
target       prot opt source          destination
DOCKER       all  --  0.0.0.0/0       !127.0.0.0/8    ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target       prot opt source          destination
MASQUERADE   all  --  172.17.0.0/16   !172.17.0.0/16

Chain DOCKER (2 references)
target       prot opt source          destination
DNAT         tcp  --  0.0.0.0/0       0.0.0.0/0       tcp dpt:49161 to:172.17.0.18:6379

Here we have several interesting IPTables rules. Firstly, we can note that there
is no default access into our containers. We specifically have to open up ports to
communicate with them from the host network. We see one example of this in the
DNAT, or destination NAT, rule that routes traffic arriving on port 49161 on the
Docker host to our container's port 6379.

TIP To learn more about advanced networking configuration for Docker, this
guide is useful.

Our Redis container’s network

Let’s examine our new Redis container and see its networking conguration using
the docker inspect command.

Listing 5.44: Redis container's networking configuration

$ sudo docker inspect redis
. . .
"NetworkSettings": {
"Bridge": "",
. . .
"Ports": {
"6379/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "49161"
}
]
},
. . .
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.18",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:08",
. . .

The docker inspect command shows the details of a Docker container, including
its configuration and networking. We've truncated much of this information in
the example above and only shown the networking configuration. We could also
use the -f flag to only acquire the IP address.

Listing 5.45: Finding the Redis container’s IP address

$ sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis
172.17.0.18

Using the results of the docker inspect command we see that the container has an
IP address of 172.17.0.18 and uses the gateway address of the docker0 interface.
We can also see that the 6379 port is mapped to port 49161 on the local host, but,
because we're on the local Docker host, we don't have to use that port mapping.
We can instead use the 172.17.0.18 address to communicate with the Redis server
on port 6379 directly.

Listing 5.46: Talking directly to the Redis container

$ redis-cli -h 172.17.0.18
redis 172.17.0.18:6379>

Once you’ve conrmed the connection is working you can exit the Redis interace
using the quit command.

NOTE Docker binds exposed ports on all interfaces by default; therefore,
the Redis server will also be available on the localhost or 127.0.0.1.
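
If you'd rather not publish a port on every interface, docker run also accepts an explicit bind address in the -p flag; a sketch that publishes Redis only on the loopback interface:

$ sudo docker run -d -p 127.0.0.1:6379:6379 --name redis jamtur01/redis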

So, while this initially looks like it might be a good solution for connecting our
containers together, sadly, this approach has two big rough edges: Firstly, we'd
need to hard-code the IP address of our Redis container into our applications.
Secondly, if we restart the container, Docker changes the IP address. Let's see this
now using the docker restart command (we'll get the same result if we kill our
container using the docker kill command).

Listing 5.47: Restarting our Redis container

$ sudo docker restart redis

Let’s inspect its IP address.

Listing 5.48: Finding the restarted Redis container’s IP address

$ sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis
172.17.0.19

We see that our new Redis container has a new IP address, 172.17.0.19, which
means that if we'd hard-coded our Sinatra application, it would no longer be able
to connect to the Redis database. That's not helpful.
Since Docker 1.9, Docker's networking has become a lot more flexible. Let's look
at how we might connect our containers with this new networking framework.

Docker networking

Container connections are created using networks. This is called Docker Networking
and was introduced in the Docker 1.9 release. Docker Networking allows
you to set up your own networks through which containers can communicate.
Essentially this supplements the existing docker0 network with new, user-managed
networks. Importantly, containers can now communicate with each other across
hosts, and your networking configuration can be highly customizable. Networking
also integrates with Docker Compose and Swarm; we'll see more of both in Chapter 7.

NOTE The networking support is also pluggable, meaning you can add
network drivers to support specific topologies and networking frameworks from
vendors like Cisco and VMware.

To use Docker networks we first need to create a network and then launch a
container inside that network.

Listing 5.49: Creating a Docker network

$ sudo docker network create app
ec8bc3a70094a1ac3179b232bc185fcda120dad85dec394e6b5b01f7006476d4

This uses the docker network command to create a bridge network called app. A
network ID is returned for the network.
We can then inspect this network using the docker network inspect command.

Listing 5.50: Inspecting the app network

$ sudo docker network inspect app
[
{
"Name": "app",
"Id": "ec8bc...",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [ {} ]
},
"Containers": {},
"Options": {}
}
]

Our new network is a local, bridged network much like the docker0 network, and
currently no containers are running inside it.

TIP In addition to bridge networks, which exist on a single host, we can also
create overlay networks, which allow us to span multiple hosts. You can read
more about overlay networks in the Docker multi-host network documentation.

You can list all current networks using the docker network ls command.

Listing 5.51: The docker network ls command

$ sudo docker network ls
NETWORK ID NAME DRIVER
a74047bace7e bridge bridge
ec8bc3a70094 app bridge
8f0d4282ca79 none null
7c8cd5d23ad5 host host

And you can remove a network using the docker network rm command.
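For example, a sketch of removing a network; this will fail if containers are still connected, so you'd disconnect or remove those containers first. (We don't want to remove our app network, though, as we're about to use it.)

$ sudo docker network rm app
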
Let’s add some containers to our network, starting with a Redis container.

Listing 5.52: Creating a Redis container inside our Docker network

$ sudo docker run -d --net=app --name db jamtur01/redis

Here we’ve run a new container called db using our jamtur01/redis image. We’ve
also specied a new ag: --net. The --net ag species a network to run our
container inside.
Now i we re-run our docker network inspect command we’ll see quite a lot
more inormation.

Listing 5.53: The updated app network

$ sudo docker network inspect app
[
{
"Name": "app",
"Id": "ec8bc3a...",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [ {} ]
},
"Containers": {
"9a5ac1...": {
"Name": "db",
"EndpointID": "21a90...",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {}
}
]

Now, inside our network, we see a container with a MAC address and an IP address,
172.18.0.2.

Now let’s add a container to the network we’ve created. To do this we need to be
back in the sinatra directory.

Listing 5.54: Linking our Redis container

$ cd sinatra
$ sudo docker run -p 4567 \
--net=app --name network_test -t -i \
jamtur01/sinatra /bin/bash
root@305c5f27dbd1:/#

We’ve launched a container named network_test inside the app network. We’ve
launched it interactively so we can peek inside to see what’s happening.
As the container has been started inside the app network, Docker will have taken
note o all other containers running inside that network and populated their ad-
dresses in local DNS. Let’s see this now in the network_test container.
We rst need the dnsutils and iputils-ping packages to get the nslookup and
ping binaries respectively.

Listing 5.55: Installing nslookup

root@305c5f27dbd1:/# apt-get install -y dnsutils iputils-ping

Then let’s do the lookup.

Listing 5.56: DNS resolution in the network_test container

root@305c5f27dbd1:/# nslookup db
Server: 127.0.0.11
Address: 127.0.0.11#53

Non-authoritative answer:
Name: db
Address: 172.18.0.2

We see that using the nslookup command to resolve the db container returns the
IP address 172.18.0.2. Docker will also add the network name, app, as a domain
suffix for the network; any host in the app network can be resolved as
hostname.app, here db.app. Let's try that now.

Listing 5.57: Pinging db.app in the network_test container

root@305c5f27dbd1:/# ping db.app
PING db.app (172.18.0.2) 56(84) bytes of data.
64 bytes from db (172.18.0.2): icmp_seq=1 ttl=64 time=0.290 ms
64 bytes from db (172.18.0.2): icmp_seq=2 ttl=64 time=0.082 ms
64 bytes from db (172.18.0.2): icmp_seq=3 ttl=64 time=0.111 ms
. . .

In our case we just need the db entry to make our application function. To make
that work our webapp's Redis connection code already uses the db hostname.

Listing 5.58: The Redis DB hostname in code

redis = Redis.new(:host => 'db', :port => '6379')

We could now start our application and have our Sinatra application write its
variables into Redis via the connection between the db and webapp containers that
we've established via the app network.
Let's try it now by exiting the network_test container and starting up a new
container running our Redis-enabled web application.

Listing 5.59: Starting the Redis-enabled Sinatra application

$ sudo docker run -d -p 4567 \
--net=app --name webapp_redis \
-v $PWD/webapp_redis:/opt/webapp jamtur01/sinatra

NOTE This is the Redis-enabled Sinatra application we installed earlier in
the chapter. It's available on GitHub here.

Here we’ve launched a new container called webapp_redis running our Redis-
enabled web application. Now let’s just check, on the Docker host, what port our
Sinatra container has bound the application.

Listing 5.60: Checking the Sinatra container’s port mapping

$ sudo docker port webapp_redis 4567
0.0.0.0:49162

Okay, port 4567 in the container is bound to port 49162 on the Docker host. Let's
use this information to test our application from the Docker host using the curl
command.

Listing 5.61: Testing our Redis-enabled Sinatra application

$ curl -i -H 'Accept: application/json' \
-d 'name=Foo&status=Bar' http://localhost:49162/json
HTTP/1.1 200 OK
Content-Type: text/html;charset=utf-8
Content-Length: 29
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
Server: WEBrick/1.3.1 (Ruby/2.3.1/2016-04-26)
Date: Wed, 03 Aug 2016 18:30:06 GMT
Connection: Keep-Alive
{"name":"Foo","status":"Bar"}

And now let’s conrm that our Redis instance has received the update by querying
the Sinatra web application in webapp_redis.

Listing 5.62: Confirming Redis contains data

$ curl -i http://localhost:49162/json
"[{\"name\":\"Foo\",\"status\":\"Bar\"}]"

Here we’ve connected to our application, which has connected to Redis, checked
a list o keys to nd that we have a key called params, and then queried that key
to see that our parameters (name=Foo and status=Bar) have both been stored in
Redis. Our application works!
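
We can also verify directly against Redis, bypassing the application; a quick sketch from the Docker host, assuming the db container still holds the 172.18.0.2 address we saw earlier:

$ redis-cli -h 172.18.0.2 -p 6379 get params
"[{\"name\":\"Foo\",\"status\":\"Bar\"}]"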

Connecting existing containers to the network

You can also add already running containers to existing networks using the docker
network connect command. So we can add an existing container to our app
network. Let’s say we have an existing container called db2 that also runs Redis.

Listing 5.63: Running the db2 container

$ sudo docker run -d --name db2 jamtur01/redis

Let’s add that to the app network (we could have also used the --net ag to
automatically add the container to the network at runtime).

Listing 5.64: Adding a new container to the app network

$ sudo docker network connect app db2

Now i we inspect the app network we should see three containers.

Listing 5.65: The app network after adding db2

$ sudo docker network inspect app
. . .
"Containers": {
"2fa7477c58d7707ea14d147f0f12311bb1f77104e49db55ac346d0ae961ac401": {
"Name": "webapp_redis",
"EndpointID": "c510c78af496fb88f1b455573d4c4d7fdfc024d364689a057b98ea20287bfc0d",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"305c5f27dbd11773378f93aa58e86b2f710dbfca9867320f82983fc6ba79e779": {
. . .
"Name": "db2",
"EndpointID": "47faec311dfac22f2ee8c1b874b87ce8987ee65505251366d4b9db422a749a1e",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
}
},
. . .

We can also disconnect a container from a network using the docker network
disconnect command.

Listing 5.66: Disconnecting a host from a network

$ sudo docker network disconnect app db2

This would remove the db2 container from the app network.
Containers can belong to multiple networks at once so you can create quite
complex networking models.
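
As a sketch of that, we could connect our webapp_redis container to a second network alongside app; the app2 network here is purely illustrative:

$ sudo docker network create app2
$ sudo docker network connect app2 webapp_redis

The webapp_redis container would then be resolvable by the containers in both the app and app2 networks.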

 TIP Further inormation on Docker Networking is available in the Docker


documentation.

Connecting containers summary

We’ve now seen all the ways Docker can connect containers together. You can see
that it is easy to create a ully unctional web application stack consisting o:

• A web server container running Sinatra.
• A Redis database container.
• A secure connection between the two containers.

You should also be able to see how easy it would be to extend this concept to
provide any number of application stacks and manage complex local development
with them, like:

• Wordpress, HTML, CSS, JavaScript.
• Ruby on Rails.
• Django and Flask.
• Node.js.
• Play!
• Or any other framework that you like!

This way you can build, replicate, and iterate on production applications, even
complex multi-tier applications, in your local environment.

Using Docker or continuous integration

Up until now, all our testing examples have been local, single developer-centric
examples (i.e., how a local developer might make use of Docker to test a local
website or application). Let's look at using Docker's capabilities in a multi-developer
continuous integration testing scenario.

Docker excels at quickly generating and disposing of one or multiple containers.
There's an obvious synergy between Docker's capabilities and the concept of
continuous integration testing. Often in a testing scenario you need to install software
or deploy multiple hosts frequently, run your tests, and then clean up the hosts to
be ready to run again.

In a continuous integration environment, you might need these installation steps
and hosts multiple times a day. This adds a considerable build and configuration
overhead to your testing lifecycle. Package and installation steps can also be
time-consuming and annoying, especially if requirements change frequently or steps
require complex or time-consuming processes to clean up or revert.

Docker makes the deployment and cleanup of these steps and hosts cheap. To
demonstrate this, we're going to build a testing pipeline in stages using Jenkins
CI: Firstly, we're going to build a Jenkins server in Docker that runs other Docker
containers.
Once we've got Jenkins running, we'll demonstrate a basic single-container test
run. Finally, we'll look at a multi-container test scenario.

TIP There are a number of continuous integration tool alternatives to Jenkins,
including Strider (http://stridercd.com/) and Drone.io (https://drone.io/), which
actually makes use of Docker.

Build a Jenkins and Docker server

To provide our Jenkins server, we're going to build an image from a Dockerfile
that installs both Jenkins and Docker.

Listing 5.67: Jenkins and Docker Dockerfile

FROM jenkins
MAINTAINER [email protected]
ENV REFRESHED_AT 2016-06-01

USER root
RUN apt-get -qqy update; apt-get install -qqy sudo
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN wget http://get.docker.com/builds/Linux/x86_64/docker-latest.tgz
RUN tar -xvzf docker-latest.tgz
RUN mv docker/* /usr/bin/

USER jenkins
RUN /usr/local/bin/install-plugins.sh junit git git-client ssh-slaves greenballs chucknorris ws-cleanup

We see that our Dockerfile inherits from the jenkins image. The jenkins image
is the official Jenkins image maintained by their community on the Docker Hub.
The Dockerfile then does a lot of other stuff. Indeed, it is probably the most
complex Dockerfile we've seen so far. Let's walk through what it does.

We’ve rst set the USER to root, installed the sudo package and allowed the
jenkins user to make use o sudo. We then installed the Docker binary. We’ll
use this to connect to our Docker host and run containers or our builds.
Next we switch back to the jenkins user. This user is the deault or the jenkins
image and is required or containers launched rom the image to run Jenkins
correctly. We then use a RUN instruction to execute the install-plugins.sh com-
mand to install a list o Jenkins plugins we’re going to use.
Next, let’s create a directory, /var/jenkins_home, to hold our Jenkin’s congura-
tion. This means every time we restart Jenkins we won’t lose our conguration.

TIP Another approach would be to use Docker data volumes, which we'll
discuss further in Chapter 6.

Listing 5.68: Create directory for Jenkins

$ sudo mkdir -p /var/jenkins_home
$ cd /var/jenkins_home
$ sudo chown -R 1000 /var/jenkins_home

 TIP I you’re running this example on OS X you might need to create the
directory at /private/var/jenkins_home.

We also set the ownership of the jenkins_home directory to 1000, which is the UID
of the jenkins user inside the image we're about to build. This will allow Jenkins
to write into this directory and store our Jenkins configuration.
Now that we have our Dockerfile and our Jenkins home directory, let's build a
new image using the docker build command.

Listing 5.69: Building our Docker-Jenkins image

$ sudo docker build -t jamtur01/jenkins .

We’ve called our new image, somewhat unoriginally, jamtur01/jenkins. We can


now create a container rom this image using the docker run command.

Listing 5.70: Running our Docker-Jenkins image

$ sudo docker run -d -p 8080:8080 -p 50000:50000 \
-v /var/jenkins_home:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
--name jenkins \
jamtur01/jenkins
cc130210491ee959a287f04b5e4c46340bbcb6a46971de15d3899699b7718656

We can see that we’ve used the -p ag to publish port 8080 on port 8080 on the
local host, which would normally be poor practice, but we’re only going to run
one Jenkins server. We’ve also bound port 50000 on port 50000 which will be used
by the Jenkins build API.
Next, we bind two volumes using the -v ag. The rst mounts our /var/
jenkins_home directory into the container at /var/jenkins_home. This will
contain Jenkin’s conguration data and allow us to perpetuate its state across
container launches.
The second volume mounts /var/run/docker.sock, the socket or Docker’s dae-
mon into the Docker container. This will allow us to run Docker containers rom
inside our Jenkins container.

WARNING This is a security risk. By binding the Docker socket inside

the Jenkins container you give the container access to the underlying Docker host.
This is not overly secure. I recommend you only do this i you are comortable
that the Jenkins container, any other containers on that Docker host are at a
comparable security level.

We see that our new container, jenkins, has been started. Let’s check out its logs.

Listing 5.71: Checking the Docker Jenkins container logs

$ sudo docker logs jenkins
Running from: /usr/share/jenkins/jenkins.war
webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
Aug 04, 2016 3:11:50 AM org.eclipse.jetty.util.log.JavaUtilLog info
INFO: Logging initialized @1760ms
Aug 04, 2016 3:11:51 AM winstone.Logger logInternal
INFO: Beginning extraction from war file
. . .
*************************************************************
*************************************************************
*************************************************************

Jenkins initial setup is required. An admin user has been created
and a password generated.
Please use the following password to proceed to installation:

e9eef9d4a4e44741b0368877a9efb17c

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

*************************************************************
*************************************************************
*************************************************************
. . .
INFO: Jenkins is fully up and running

You can keep checking the logs, or run docker logs with the -f flag, until you
see a message similar to:

Listing 5.72: Checking that Jenkins is up and running

INFO: Jenkins is fully up and running

Take note of the initial admin password, in our case:

e9eef9d4a4e44741b0368877a9efb17c

This is also stored in a file in the jenkins_home directory at:

/var/jenkins_home/secrets/initialAdminPassword

Finally, our Jenkins server should now be available in your browser on port 8080,
as we see here:

Figure 5.3: Browsing the Jenkins server.

Put in the admin password generated during installation and click the Continue
button.

Figure 5.4: The Getting Started workflow

This will initiate the Jenkins Getting Started workflow. You can follow it or
cancel it by clicking the X in the top right of the dialogue.
If you cancel the Getting Started dialogue you'll also skip creating any users. To
log into Jenkins again we would use a user name of admin and our initial admin
password.

Create a new Jenkins job


Now that we have a running Jenkins server, let's continue by creating a Jenkins
job to run. To do this, we'll click the create new jobs link, which will open up
the New Job wizard.

Figure 5.5: Creating a new Jenkins job.

Let’s name our new job Docker_test_job, select a job type o Freestyle project,
and click OK to continue to the next screen.
Now let’s ll in a ew sections. We’ll start with a description o the job. Then
click the Advanced. . . button, tick the Use Custom workspace radio button, and
speciy /var/jenkins_home/jobs/${JOB_NAME}/workspace as the Directory. This
is the workspace in which our Jenkins job is going to run. It’s also stored in our
Jenkins home directory to ensure we maintain state across builds.
Under Source Code Management, select Git and speciy the ollowing test reposi-
tory: https://github.com/turnbullpress/docker-jenkins-sample.git. This is
a simple repository containing some Ruby-based RSpec tests.

Figure 5.6: Jenkins job details part 1.

Now we’ll scroll down and update a ew more elds. First, we’ll add a build step
by clicking the Add Build Step button and selecting Execute shell. Let’s speciy
this shell script that will launch our tests and Docker.


Listing 5.73: The Docker shell script for Jenkins jobs

# Build the image to be used for this job.
IMAGE=$(sudo docker build . | tail -1 | awk '{ print $NF }')

# Build the directory to be mounted into Docker.
MNT="$WORKSPACE/.."

# Execute the build inside Docker.
CONTAINER=$(sudo docker run -d -v $MNT:/opt/project/ $IMAGE /bin/bash -c 'cd /opt/project/workspace; rake spec')

# Attach to the container so that we can see the output.
sudo docker attach $CONTAINER

# Get its exit code as soon as the container stops.
RC=$(sudo docker wait $CONTAINER)

# Delete the container we've just used.
sudo docker rm $CONTAINER

# Exit with the same value as that with which the process exited.
exit $RC

So what does this script do? Firstly, it will create a new Docker image using a
Dockerfile contained in the Git repository we've just specified. This Dockerfile
provides the test environment in which we wish to execute. Let’s take a quick
look at it now.


Listing 5.74: The Docker test job Dockerfile

FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01
RUN apt-get update
RUN apt-get -y install ruby rake
RUN gem install --no-rdoc --no-ri rspec ci_reporter_rspec

 TIP I we add a new dependency or require another package to run our tests,
all we’ll need to do is update this Dockerfile with the new requirements, and the
image will be automatically rebuilt when the tests are run.

Here we’re building an Ubuntu host, installing Ruby and RubyGems, and then
installing two gems: rspec and ci_reporter_rspec. This will build an image that
we can test using a typical Ruby-based application that relies on the RSpec test
ramework. The ci_reporter_rspec gem allows RSpec output to be converted
to JUnit-ormatted XML that Jenkins can consume. We’ll see the results o this
conversion shortly.
Back to our script. We're building an image from this Dockerfile. Next, we're
creating a new environment variable called $MNT using the $WORKSPACE variable.
This is a variable, created by Jenkins, holding the workspace directory we defined
earlier in our job. This is where our Git repository containing the code we want
to test is going to be checked out to, and it is this directory we're going to mount
into our Docker container. We can then execute our tests from this checkout.
Next we create a container from our image and run the tests. Inside this container,
we've mounted our workspace via a volume to the /opt/project directory. When
the container runs, we're executing a command that changes into this directory
tree and executes the rake spec command, which actually runs our RSpec tests.


Now we’ve got a started container and we’ve grabbed the container ID.

 TIP Docker also comes with a command line option called --cidfile that
captures the container's ID and stores it in the file specified in the --cidfile
option, like so: --cidfile=/tmp/containerid.txt

While the container is running, we attach to it to get the output from it using
the docker attach command; this echoes the test output into our Jenkins job.
The docker wait command then blocks until the command the container is
executing finishes, and returns the exit code of the container. The RC variable
captures the exit code from the container when it completes.
Finally, we clean up and delete the container we've just created and exit with the
container's exit code. This should be the exit code of our test run. Jenkins relies
on this exit code to tell it if a job's tests have run successfully or failed.
Next we click the Add post-build action and add Publish JUnit test result
report. In the Test report XMLs, we need to specify spec/reports/*.xml; this
is the location of the ci_reporter gem's XML output, and locating it will allow
Jenkins to consume our test history and output.
Finally, we must click the Save button to save our new job.


Figure 5.7: Jenkins job details part 2.

Running our Jenkins job

We now have our Jenkins job, so let’s run it. We’ll do this by clicking the Build
Now button; a job will appear in the Build History box.

Figure 5.8: Running the Jenkins job.


 NOTE The rst time the tests run, it’ll take a little longer because Docker
is building our new image. The next time you run the tests, however, it’ll be much
aster, as Docker will already have the required image prepared.

We’ll click on this job to get details o the test run we’re executing.

Figure 5.9: The Jenkins job details.

We can click on Console Output to see the commands that have been executed as
part of the job.


Figure 5.10: The Jenkins job console output.

We see that Jenkins has downloaded our Git repository to the workspace. We can
then execute our shell script and build a Docker image using the docker build
command. Then, we'll capture the image ID and use it to build a new container
using the docker run command. Running this new container executes the RSpec
tests and captures the results of the tests and the exit code. If the job exits with
an exit code of 0, then the job will be marked as successful.
You can also view the precise test results by clicking the Test Result link. This
will have captured the RSpec output of our tests in JUnit form. This is the output
that the ci_reporter gem produces and our After Build step captures.

Next steps with our Jenkins job

We can also automate our Jenkins job further by enabling SCM polling, which
triggers automatic builds when new commits are made to the repository. Similar
automation can be achieved with a post-commit hook or via a GitHub or Bitbucket
repository hook.
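
As a rough sketch of the post-commit hook approach (the Jenkins host name and
build token here are hypothetical, and assume the job has remote build triggering
enabled):

#!/bin/sh
# .git/hooks/post-commit -- ask Jenkins to build after each commit.
# jenkins.example.com and SOME_TOKEN are placeholders for your setup.
curl -X POST "http://jenkins.example.com:8080/job/Docker_test_job/build?token=SOME_TOKEN"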


Summary o our Jenkins setup

We’ve achieved a lot so ar: we’ve installed Jenkins, run it, and created our rst
job. This Jenkins job uses Docker to create an image that we can manage and
keep updated using the Dockerfile contained in our repository. In this scenario,
not only does our inrastructure conguration live with our code, but managing
that conguration becomes a simple process. Containers are then created (rom
that image) in which we then run our tests. When we’re done with the tests, we
can dispose o the containers, which makes our testing ast and lightweight. It is
also easy to adapt this example to test on diferent platorms or using diferent
test rameworks or numerous languages.

 TIP You could also use parameterized builds to make this job and the shell
script step more generic to suit multiple frameworks and languages.
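
For example, here is a hedged sketch of the same shell step with the test command
passed in as a build parameter ($TEST_COMMAND below is a hypothetical parameter;
Jenkins exposes job parameters as environment variables):

# $TEST_COMMAND is a hypothetical Jenkins build parameter.
IMAGE=$(sudo docker build . | tail -1 | awk '{ print $NF }')
CONTAINER=$(sudo docker run -d -v "$WORKSPACE/..":/opt/project $IMAGE \
/bin/bash -c "cd /opt/project/workspace; $TEST_COMMAND")
sudo docker attach $CONTAINER
RC=$(sudo docker wait $CONTAINER)
sudo docker rm $CONTAINER
exit $RC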

Multi-conguration Jenkins

We’ve now seen a simple, single container build using Jenkins. What i we wanted
to test our application on multiple platorms? Let’s say we’d like to test it on
Ubuntu, Debian, and CentOS. To do that, we can take advantage o a Jenkins job
type called a ”multi-conguration job” that allows a matrix o test jobs to be run.
When the Jenkins multi-conguration job is run, it will spawn multiple sub-jobs
that will test varying congurations.

Create a multi-conguration job

Let’s look at creating our new multi-conguration job. Click on the New Item link
rom the Jenkins console. We’re going to name our new job Docker_matrix_job,
select Multi-configuration project, and click OK.


Figure 5.11: Creating a multi-configuration job.

We’ll see a screen that is similar to the job creation screen we saw earlier. Let’s
add a description or our job, select Git as our repository type, and speciy our
sample application repository: https://github.com/turnbullpress/docker-
jenkins-sample.git.

Next, let’s scroll down and congure our multi-conguration axis. The axis is
the list o matrix elements that we’re going to execute as part o the job. We’ll
click the Add Axis button and select User-defined Axis. We’re going to speciy
an axis named OS (which will be short or operating system) and speciy three
values: centos, debian, and ubuntu. When we execute our multi-conguration
job, Jenkins will look at this axis and spawn three jobs: one or each point on the
axis.
Then, in the Build Environment section, we tick Delete workspace before
build starts. This option cleans up our build environment by deleting the
checked-out repository prior to initiating a new set o jobs.


Figure 5.12: Configuring a multi-configuration job, part 2.

Lastly, we’ve specied another shell build step with a simple shell script. It’s a
modication o the shell script we used earlier.


Listing 5.75: Jenkins multi-configuration shell step

# Build the image to be used for this run.
cd $OS; IMAGE=$(sudo docker build . | tail -1 | awk '{ print $NF }')

# Build the directory to be mounted into Docker.
MNT="$WORKSPACE/.."

# Execute the build inside Docker.
CONTAINER=$(sudo docker run -d -v "$MNT:/opt/project" $IMAGE /bin/bash -c "cd /opt/project/$OS; rake spec")

# Attach to the container's streams so that we can see the output.
sudo docker attach $CONTAINER

# As soon as the process exits, get its return value.
RC=$(sudo docker wait $CONTAINER)

# Delete the container we've just used.
sudo docker rm $CONTAINER

# Exit with the same value as that with which the process exited.
exit $RC

We see that this script has a modification: we're changing into directories named
for each operating system for which we're executing a job. Inside our test
repository we have three directories: centos, debian, and ubuntu. Inside each
directory is a different Dockerfile containing the build instructions for a CentOS,
Debian, or Ubuntu image, respectively. This means that each job that is started
will change into the appropriate directory for the required operating system, build
an image based on that operating system, install any required prerequisites, and
launch a container based on that image in which to run our tests.


Let’s look at one o these new Dockerfile examples.

Listing 5.76: Our CentOS-based Dockerfile

FROM centos:latest
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01
RUN yum -y install ruby rubygems rubygem-rake
RUN gem install --no-rdoc --no-ri rspec ci_reporter_rspec

This is a CentOS-based variant of the Dockerfile we were using as a basis of our
previous job. It basically performs the same tasks as that previous Dockerfile
did, but uses the CentOS-appropriate commands like yum to install packages.
We’re also going to add a post-build action o Publish JUnit test result
report and speciy the location o our XML output: spec/reports/*.xml. This
will allow us to check the test result output.
Finally, we’ll click Save to create our new job and save our proposed conguration.
We can now see our reshly created job and note that it includes a section called
Configurations that contains sub-jobs or each element o our axis.


Figure 5.13: Our Jenkins multi-configuration job

Testing our multi-conguration job


Now let’s test this new job. We can launch our new multi-conguration job by
clicking the Build Now button. When Jenkins runs, it will create a master job.
This master job will, in turn, generate three sub-jobs that execute our tests on
each o the three platorms we’ve chosen.

 NOTE Like our previous job, it may take a little time to run the first time,
as it builds the required images in which we'll test. Once they are built, though,
the next runs should be much faster. Docker will only change the image if you
update the Dockerfile.

We see that the master job executes first, and then each sub-job executes. Let's
look at the output of one of these sub-jobs, our new centos job.


Figure 5.14: The centos sub-job.

We see that it has executed: the green ball tells us it executed successfully. We
can drill down into its execution to see more. To do so, click on the #1 entry in
the Build History.

Figure 5.15: The centos sub-job details.

Here we see some more details of the executed centos job. We see that the job has
been Started by upstream project Docker_matrix_job and is build number 1.
To see the exact details of what happened during the run, we can check the console
output by clicking the Console Output link.


Figure 5.16: The centos sub-job console output.

We see that the job cloned the repository, built the required Docker image,
spawned a container from that image, and then ran the required tests. All of
the tests passed successfully (we can also check the Test Result link for the
uploaded JUnit test results if required).
We've now successfully completed a simple, but powerful example of a
multi-platform testing job for an application.

Summary o our multi-conguration Jenkins

These examples show simplistic implementations of Jenkins CI working with
Docker. You can enhance both of the examples shown with a lot of additional
capabilities ranging from automated, triggered builds to multi-level job matrices
using combinations of platform, architecture, and versions. Our simple shell
build step could also be rewritten in a number of ways to make it more
sophisticated or to further support multi-container execution (e.g., to provide separate
containers for web, database, or application layers to better simulate an actual
multi-tier production application).


Other alternatives

One o the more interesting parts o the Docker ecosystem is continuous integra-
tion and continuous deployment (CI/CD). Beyond integration with existing tools
like Jenkins, we’re also seeing people build their own tools and integrations on
top o Docker.

Drone

One o the more promising CI/CD tools being developed on top o Docker is Drone.
Drone is a SAAS continuous integration platorm that connects to GitHub, Bit-
bucket, and Google Code repositories written in a wide variety o languages, in-
cluding Python, Node.js, Ruby, Go, and numerous others. It runs the test suites
o repositories added to it inside a Docker container.

Shippable

Shippable is a ree, hosted continuous integration and deployment service or


GitHub and Bitbucket. It is blazing ast and lightweight, and it supports Docker
natively.

Summary

In this chapter, we’ve seen how to use Docker as a core part o our development
and testing workow. We’ve looked at developer-centric testing with Docker on a
local workstation or virtual machine. We’ve also explored scaling that testing up
to a continuous integration model using Jenkins CI as our tool. We’ve seen how
to use Docker or both point testing and how to build distributed matrix jobs.
In the next chapter, we’ll start to see how we can use Docker in production to
provide containerized, stackable, scalable, and resilient services.



Chapter 6

Building services with Docker

In Chapter 5, we saw how to use Docker to facilitate better testing by using
containers in our local development workflow and in a continuous integration
environment. In this chapter, we're going to explore using Docker to run production
services.
We're going to build a simple application first and then build some more complex
multi-container applications. We'll explore how to make use of Docker features
like networking and volumes to combine and manage applications running in
Docker.

Building our rst application

The rst application we’re going to build is an on-demand website using the Jekyll
ramework. We’re going to build two images:

• An image that both installs Jekyll and the prerequisites we’ll need and builds
our Jekyll site.
• An image that serves our Jekyll site via Apache.

We’re going to make it on demand by creating a new Jekyll site when a new
container is launched. Our workow is going to be:


• Create the Jekyll base image and the Apache image (once-off).
• Create a container from our Jekyll image that holds our website source
mounted via a volume.
• Create a Docker container from our Apache image that uses the volume
containing the compiled site and serves that out.
• Rinse and repeat as the site needs to be updated.

You could consider this a simple way to create multiple hosted website instances.
Our implementation is simple, but you will see how we extend it beyond this
simple premise later in the chapter.

The Jekyll base image

Let’s start creating a new Dockerfile or our rst image: the Jekyll base image.
Let’s create a new directory rst and an empty Dockerfile.

Listing 6.1: Creating our Jekyll Dockerfile

$ mkdir jekyll
$ cd jekyll
$ vi Dockerfile

Now let’s populate our Dockerfile.


Listing 6.2: Jekyll Dockerfile

FROM ubuntu:16.04
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

RUN apt-get -yqq update
RUN apt-get -yqq install ruby ruby-dev build-essential nodejs
RUN gem install jekyll -v 2.5.3
RUN gem install jekyll -v 2.5.3

VOLUME /data
VOLUME /var/www/html
WORKDIR /data

ENTRYPOINT [ "jekyll", "build", "--destination=/var/www/html" ]

Our Dockerfile uses the template we saw in Chapter 3 as its basis. Our image
is based on Ubuntu 16.04 and installs Ruby and the prerequisites necessary to
support Jekyll. It creates two volumes using the VOLUME instruction:

• /data/, which is going to hold our new website source code.
• /var/www/html/, which is going to hold our compiled Jekyll site.

We also need to set the working directory to /data/ and specify an ENTRYPOINT
instruction that will automatically build any Jekyll site it finds in the /data/
working directory into the /var/www/html/ directory.

Building the Jekyll base image

With this Dockerfile, we will now build an image from which we will launch
containers. We'll do this using the docker build command.


Listing 6.3: Building our Jekyll image

$ sudo docker build -t jamtur01/jekyll .
Sending build context to Docker daemon 2.56 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:16.04
---> 99ec81b80c55
Step 1 : MAINTAINER James Turnbull <[email protected]>
. . .
Step 7 : ENTRYPOINT [ "jekyll", "build", "--destination=/var/www/html" ]
---> Running in 542e2de2029d
---> 79009691f408
Removing intermediate container 542e2de2029d
Successfully built 79009691f408

We see that we’ve built a new image with an ID o 79009691f408 named jamtur01
/jekyll that is our new Jekyll image. We view our new image using the docker
images command.

Listing 6.4: Viewing our new Jekyll Base image

$ sudo docker images
REPOSITORY TAG ID CREATED SIZE
jamtur01/jekyll latest 79009691f408 6 seconds ago 12.29 kB (virtual 671 MB)
. . .


The Apache image

Finally, let’s build our second image, an Apache server to serve out our new site.
Let’s create a new directory rst and an empty Dockerfile.

Listing 6.5: Creating our Apache Dockerfile

$ mkdir apache
$ cd apache
$ vi Dockerfile

Now let’s populate our Dockerfile.


Listing 6.6: Jekyll Apache Dockerfile

FROM ubuntu:16.04
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

RUN apt-get -yqq update
RUN apt-get -yqq install apache2

VOLUME [ "/var/www/html" ]
WORKDIR /var/www/html

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2

RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR

EXPOSE 80

ENTRYPOINT [ "/usr/sbin/apache2" ]
CMD ["-D", "FOREGROUND"]

This nal image is again based on Ubuntu 16.04 and installs Apache. It creates a
volume using the VOLUME instruction, /var/www/html/, which is going to hold our
compiled Jekyll website. We also set /var/www/html to be our working directory.
We’ll then use some ENV instructions to set some required environment variables,
create some required directories, and EXPOSE port 80. We’ve also specied an
ENTRYPOINT and CMD combination to run Apache by deault when the container

Version: v17.03.0 (38f1319) 212


Chapter 6: Building services with Docker

starts.

Building the Jekyll Apache image

With this Dockerfile, we will now build an image from which we will launch
containers. We do this using the docker build command.

Listing 6.7: Building our Jekyll Apache image

$ sudo docker build -t jamtur01/apache .
Sending build context to Docker daemon 2.56 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:16.04
---> 99ec81b80c55
Step 1 : MAINTAINER James Turnbull <[email protected]>
---> Using cache
---> c444e8ee0058
. . .
Step 11 : CMD ["-D", "FOREGROUND"]
---> Running in 7aa5c127b41e
---> fc8e9135212d
Removing intermediate container 7aa5c127b41e
Successfully built fc8e9135212d

We see that we’ve built a new image with an ID o fc8e9135212d named jamtur01
/apache that is our new Apache image. We view our new image using the docker
images command.


Listing 6.8: Viewing our new Jekyll Apache image

$ sudo docker images
REPOSITORY TAG ID CREATED SIZE
jamtur01/apache latest fc8e9135212d 6 seconds ago 12.29 kB (virtual 671 MB)
. . .

Launching our Jekyll site

Now we’ve got two images:

• Jekyll - Our Jekyll image with Ruby and the prerequisites installed.
• Apache - The image that will serve our compiled website via the Apache web
server.

Let’s get started on our new site by creating a new Jekyll container using the
docker run command. We’re going to launch a container and build our site.

We’re going to need some source code or our blog. Let’s clone a sample Jekyll
blog into our $HOME directory (in my case /home/james).

Listing 6.9: Getting a sample Jekyll blog

$ cd $HOME
$ git clone https://github.com/turnbullpress/james_blog.git

You can see a basic Twitter Bootstrap-enabled Jekyll blog inside this directory. If
you want to use it, you can easily update the _config.yml file and the theme to
suit your purposes.


Now let’s use this sample data inside our Jekyll container.

Listing 6.10: Creating a Jekyll container

$ sudo docker run -v /home/james/james_blog:/data/ \
--name james_blog jamtur01/jekyll
Configuration file: none
Source: /data
Destination: /var/www/html
Generating...
done.
Auto-regeneration: disabled. Use --watch to enable.

We’ve started a new container called james_blog and mounted our james_blog
directory inside the container as the /data/ volume. The container has taken
this source code and built it into a compiled site stored in the /var/www/html/
directory.
So we’ve got a completed site, now how do we use it? This is where volumes
become a lot more interesting. When we briey introduced volumes in Chapter
4, we discovered a bit about them. Let’s revisit that.
A volume is a specially designated directory within one or more containers that
bypasses the Union File System to provide several useul eatures or persistent or
shared data:

• Volumes can be shared and reused between containers.


• A container doesn’t have to be running to share its volumes.
• Changes to a volume are made directly.
• Changes to a volume will not be included when you update an image.
• Volumes persist even when no containers use them.

This allows you to add data (e.g., source code, a database, or other content) into
an image without committing it to the image and allows you to share that data
between containers.


Volumes live on your Docker host, in the /var/lib/docker/volumes directory.
You can identify the location of specific volumes using the docker inspect
command; for example:

docker inspect -f "{{ range .Mounts }}{{.}}{{end}}" james_blog

 TIP In Docker 1.9 volumes have been expanded to also support third-party
storage systems like Ceph, Flocker and EMC via plugins. You can read about them
in the volume plugins documentation and the docker volume create command
documentation.
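
For example, creating and listing a named volume with the default local driver
looks like this (the volume name is arbitrary):

$ docker volume create --name my_data
$ docker volume ls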

So i we want to use our compiled site in the /var/www/html/ volume rom another
container, we can do so. To do this, we’ll create a new container that links to this
volume.

Listing 6.11: Creating an Apache container

$ sudo docker run -d -P --volumes-from james_blog jamtur01/apache
09a570cc2267019352525079fbba9927806f782acb88213bd38dde7e2795407d

This looks like a typical docker run, except that we've used a new flag:
--volumes-from. The --volumes-from flag adds any volumes in the named container to the
newly created container. This means our Apache container has access to the
compiled Jekyll site in the /var/www/html volume within the james_blog container
we created earlier. It has that access even though the james_blog container is not
running. As you'll recall, that is one of the special properties of volumes. The
container does have to exist, though.

 NOTE Even i you delete the last container that uses a volume, the volume
will still persist.


What is the end result o building our Jekyll website? Let’s see onto what port our
container has mapped our exposed port 80:

Listing 6.12: Resolving the Apache container’s port

$ sudo docker port 09a570cc2267 80
0.0.0.0:49160

Now let’s browse to that site on our Docker host.

Figure 6.1: Our Jekyll website.

We have a running Jekyll website!

Updating our Jekyll site

Things get even more interesting when we want to update our site. Let's say we'd
like to make some changes to our Jekyll website. We're going to rename our blog
by editing the james_blog/_config.yml file.


Listing 6.13: Editing our Jekyll blog

$ vi james_blog/_config.yml

And update the title eld to James' Dynamic Docker-driven Blog.

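The relevant line in _config.yml would then read (the rest of the file is unchanged):

title: James' Dynamic Docker-driven Blog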

So how do we update our blog? All we need to do is start our Docker container
again with the docker start command.

Listing 6.14: Restarting our james_blog container

$ sudo docker start james_blog
james_blog

It looks like nothing happened. Let’s check the container’s logs.

Listing 6.15: Checking the james_blog container logs

$ sudo docker logs james_blog
Configuration file: /data/_config.yml
Source: /data
Destination: /var/www/html
Generating...
done.
Configuration file: /data/_config.yml
Source: /data
Destination: /var/www/html
Generating...
done.

We see that the Jekyll build process has been run a second time and our site has
been updated. The update has been written to our volume. Now if we browse to
the Jekyll website, we should see our update.

Figure 6.2: Our updated Jekyll website.

This all happened without having to update or restart our Apache container,
because the volume it was sharing was updated automatically. You can see how easy
this workflow is and how you could expand it for more complicated deployments.

Backing up our Jekyll volume

You’re probably a little worried about accidentally deleting your volume (although
we can prettily easily rebuild our site using the existing process). One o the
advantages o volumes is that because they can be mounted into any container,
we can easily create backups o them. Let’s create a new container now that backs
up the /var/www/html volume.


Listing 6.16: Backing up the /var/www/html volume

$ sudo docker run --rm --volumes-from james_blog \
-v $(pwd):/backup ubuntu \
tar cvf /backup/james_blog_backup.tar /var/www/html
tar: Removing leading '/' from member names
/var/www/html/
/var/www/html/assets/
/var/www/html/assets/themes/
. . .
$ ls james_blog_backup.tar
james_blog_backup.tar

Here we’ve run a stock Ubuntu container and mounted the volume rom
james_blog into that container. That will create the directory /var/www/html
inside the container. We’ve then used the -v ag to mount our current directory,
using the $(pwd) command, inside the container at /backup. Our container then
runs the command.

 TIP We’ve also specied the --rm ag, which is useul or single-use or throw-
away containers. It automatically deletes the container ater the process running
in it is ended. This is a neat way o tidying up ater ourselves or containers we
only need once.

Listing 6.17: Backup command

tar cvf /backup/james_blog_backup.tar /var/www/html

This will create a tarle called james_blog_backup.tar containing the contents o

Version: v17.03.0 (38f1319) 220


Chapter 6: Building services with Docker

the /var/www/html directory and then exit. This process creates a backup o our
volume.
This is a simple example o a backup process. You could easily extend this to back
up to storage locally or in the cloud (e.g., to Amazon S3 or to more traditional
backup sotware like Amanda).

 TIP This example could also work for a database stored in a volume or similar
data. Simply mount the volume in a fresh container, perform your backup, and
discard the container you created for the backup.

Extending our Jekyll website example

Here are some ways we could expand on our simple Jekyll website service:

• Run multiple Apache containers, all of which use the same volume from the
james_blog container. Put a load balancer in front of it, and we have a web
cluster.
• Build a further image that cloned or copied a user-provided source (e.g., a
git clone) into a volume. Mount this volume into a container created from
our jamtur01/jekyll image. This would make the solution portable and
generic and would not require any local source on a host.
• With the previous expansion, you could easily build a web front end for our
service that built and deployed sites automatically from a specified source.
Then you would have your own variant of GitHub Pages.

Building a Java application server with Docker

Now let’s take a slightly diferent tack and think about Docker as an application
server and build pipeline. This time we’re serving a more ”enterprisey” and tra-

Version: v17.03.0 (38f1319) 221


Chapter 6: Building services with Docker

ditional workload: etching and running a Java application rom a WAR le in a
Tomcat server. To do this, we’re going to build a two-stage Docker pipeline:

• An image that pulls down specied WAR les rom a URL and stores them
in a volume.
• An image with a Tomcat server installed that runs those downloaded WAR
les.

A WAR le etcher

Let’s start by building an image to download a WAR le or us and mount it in a
volume.

Listing 6.18: Creating our fetcher Dockerfile

$ mkdir fetcher
$ cd fetcher
$ touch Dockerfile

Now let’s populate our Dockerfile.


Listing 6.19: Our WAR file fetcher

FROM ubuntu:16.04
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

RUN apt-get -yqq update
RUN apt-get -yqq install wget

VOLUME [ "/var/lib/tomcat7/webapps/" ]
WORKDIR /var/lib/tomcat7/webapps/

ENTRYPOINT [ "wget" ]
CMD [ "-?" ]

This incredibly simple image does one thing: it wgets whatever file from a URL
that is specified when a container is run from it and stores the file in the
/var/lib/tomcat7/webapps/ directory. This directory is also a volume and the working
directory for any containers. We're going to share this volume with our Tomcat
server and run its contents.
Finally, the ENTRYPOINT and CMD instructions allow our container to run when no
URL is specified; they do so by returning the wget help output when the container
is run without a URL.
Let’s build this image now.

Listing 6.20: Building our fetcher image

$ sudo docker build -t jamtur01/fetcher .


Fetching a WAR le

Let’s etch an example le as a way to get started with our new image. We’re
going to download the sample Apache Tomcat application rom https://tomcat.
apache.org/tomcat-7.0-doc/appdev/sample/.

Listing 6.21: Fetching a WAR file

$ sudo docker run -t -i --name sample jamtur01/fetcher \
https://tomcat.apache.org/tomcat-7.0-doc/appdev/sample/sample.war
--2014-06-21 06:05:19-- https://tomcat.apache.org/tomcat-7.0-doc/appdev/sample/sample.war
Resolving tomcat.apache.org (tomcat.apache.org)...
140.211.11.131, 192.87.106.229, 2001:610:1:80bc:192:87:106:229
Connecting to tomcat.apache.org (tomcat.apache.org)|140.211.11.131|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4606 (4.5K)
Saving to: 'sample.war'

100%[=================================>] 4,606 --.-K/s in 0s

2014-06-21 06:05:19 (14.4 MB/s) - 'sample.war' saved [4606/4606]

We see that our container has taken the provided URL and downloaded the sample
.war file. We can't see it here, but because we set the working directory in the
container, that sample.war file will have ended up in our
/var/lib/tomcat7/webapps/ directory.
Our WAR file is in the /var/lib/docker directory. Let's first establish where the
volume is located using the docker inspect command.


Listing 6.22: Inspecting our Sample volume

$ sudo docker inspect -f "{{ range .Mounts }}{{.}}{{end}}" sample
{c20a0567145677ed46938825f285402566e821462632e1842e82bc51b47fe4dc /var/lib/docker/volumes/c20a0567145677ed46938825f285402566e821462632e1842e82bc51b47fe4dc/_data /var/lib/tomcat7/webapps local true}

We then list this directory.

Listing 6.23: Listing the volume directory

$ ls -l /var/lib/docker/volumes/c20a0567145677ed46938825f285402566e821462632e1842e82bc51b47fe4dc/_data
total 8
-rw-r--r-- 1 root root 4606 Mar 31 2012 sample.war

Our Tomcat 7 application server

We have an image that will get us WAR les, and we have a sample WAR le down-
loaded into a container. Let’s build an image that will be the Tomcat application
server that will run our WAR le.


Listing 6.24: Creating our Tomcat 7 Dockerfile

$ mkdir tomcat7
$ cd tomcat7
$ touch Dockerfile

Now let’s populate our Dockerfile.

Listing 6.25: Our Tomcat 7 Application server

FROM ubuntu:16.04
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

RUN apt-get -yqq update
RUN apt-get -yqq install tomcat7 default-jdk

ENV CATALINA_HOME /usr/share/tomcat7
ENV CATALINA_BASE /var/lib/tomcat7
ENV CATALINA_PID /var/run/tomcat7.pid
ENV CATALINA_SH /usr/share/tomcat7/bin/catalina.sh
ENV CATALINA_TMPDIR /tmp/tomcat7-tomcat7-tmp

RUN mkdir -p $CATALINA_TMPDIR

VOLUME [ "/var/lib/tomcat7/webapps/" ]

EXPOSE 8080

ENTRYPOINT [ "/usr/share/tomcat7/bin/catalina.sh", "run" ]


Our image is pretty simple. We need to install a Java JDK and the Tomcat server.
We'll specify some environment variables Tomcat needs in order to get started,
then create a temporary directory. We'll also create a volume called
/var/lib/tomcat7/webapps/, expose port 8080 (the Tomcat default), and finally use an
ENTRYPOINT instruction to launch Tomcat.

Now let’s build our Tomcat 7 image.

Listing 6.26: Building our Tomcat 7 image

$ sudo docker build -t jamtur01/tomcat7 .

Running our WAR le

Now let’s see our Tomcat server in action by creating a new Tomcat instance
running our sample application.

Listing 6.27: Creating our first Tomcat instance

$ sudo docker run --name sample_app --volumes-from sample \
-d -P jamtur01/tomcat7

This will create a new container named sample_app that reuses the volumes from
the sample container. This means our WAR file, stored in the /var/lib/tomcat7/
webapps/ volume, will be mounted from the sample container into the sample_app
container and then loaded by Tomcat and executed.
Let's look at our sample application in the web browser. First, we must identify
the port being exposed using the docker port command.


Listing 6.28: Identiying the Tomcat application port

$ sudo docker port sample_app 8080
0.0.0.0:49154

Now let’s browse to our application (using the URL and port and adding the /
sample sux) and see what’s there.

Figure 6.3: Our Tomcat sample application.

We should see our running Tomcat application.

Building on top o our Tomcat application server

Now we have the building blocks of a simple on-demand web service. Let's look
at how we might expand on this. To do so, we've built a simple Sinatra-based web
application to automatically provision Tomcat applications via a web page. We've
called this application TProv. You can see its source code here or on GitHub.
Let's install it as a demo of how you might extend this or similar examples. First,
we'll need to ensure Ruby is installed. We're going to install our TProv application
on our Docker host because our application is going to be directly interacting with
our Docker daemon, so that's where we need to install Ruby.


 NOTE We could also install the TProv application inside a Docker container.

Listing 6.29: Installing Ruby

$ sudo apt-get -qqy install ruby make ruby-dev

We then install our application rom a Ruby gem.

Listing 6.30: Installing the TProv application

$ sudo gem install --no-rdoc --no-ri tprov
. . .
Successfully installed tprov-0.0.6

This will install the TProv application and some supporting gems.
We then launch the application using the tprov binary.

Listing 6.31: Launching the TProv application

$ sudo tprov
[2014-06-21 16:17:24] INFO WEBrick 1.3.1
[2014-06-21 16:17:24] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux]
== Sinatra/1.4.5 has taken the stage on 4567 for development with
backup from WEBrick
[2014-06-21 16:17:24] INFO WEBrick::HTTPServer#start: pid=14209
port=4567


This command has launched our application; now we can browse to the TProv
website on port 4567 of the Docker host.

Figure 6.4: Our TProv web application.

We speciy a Tomcat application name and the URL to a Tomcat WAR le. Let’s
download a sample calendar application rom here and call it Calendar.


Figure 6.5: Downloading a sample application.

We click Submit to download the WAR file, place it into a volume, run a Tomcat
server, and serve the WAR file in that volume. We see our instance by clicking on
the List instances link.
This shows us:

• The container ID.
• The container's internal IP address.
• The interface and port it is mapped to.

Figure 6.6: Listing the Tomcat instances.


Using this inormation, we check the status o our application by browsing to the
mapped port. We can also use the Delete? checkbox to remove an instance.
You can see how we achieved this by looking at the TProv application code. It’s a
pretty simple application that shells out to the docker binary and captures output
to run and remove containers.
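
As a hedged illustration of that shell-out approach (this is not TProv's actual
code, just the shape of the commands it wraps):

# Launch a Tomcat instance reusing the fetched WAR file's volume,
# then look up the mapped port. Later, remove the instance.
CONTAINER_ID=$(sudo docker run -d -P --volumes-from sample jamtur01/tomcat7)
sudo docker port $CONTAINER_ID 8080
sudo docker rm -f $CONTAINER_ID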
You’re welcome to use the TProv code or adapt or write your own 1 , but its primary
purpose is to show you how easy it is to extend a simple application deployment
pipeline built with Docker.

 WARNING The TProv application is pretty simple and lacks some error
handling and tests. It's simple code, built in an hour to demonstrate how powerful
Docker can be as a tool for building applications and services. If you find a bug
with the application (or want to make it better), please let me know with an issue
or PR here.

A multi-container application stack

In our last service example, we're going full hipster by Dockerizing a Node.js
application that makes use of the Express framework with a Redis back end. We're
going to demonstrate a combination of all the Docker features we've learned over
the last two chapters, including networking and volumes.
In our sample application, we're going to build a series of images that will allow
us to deploy a multi-container application:

• A Node container to serve our Node application, linked to:
• A Redis primary container to hold and cluster our state, linked to:
• Two Redis replica containers to cluster our state.
• A logging container to capture our application logs.

We’re then going to run our Node application in a container with Redis in primary-
replica conguration in multiple containers behind it.

The Node.js image

Let’s start with an image that installs Node.js, our Express application, and the
associated prerequisites.

Listing 6.32: Creating our Node.js Dockerfile

$ mkdir -p nodejs/nodeapp
$ cd nodejs/nodeapp
$ wget https://raw.githubusercontent.com/jamtur01/dockerbook-code
/master/code/6/node/nodejs/nodeapp/package.json
$ wget https://raw.githubusercontent.com/jamtur01/dockerbook-code
/master/code/6/node/nodejs/nodeapp/server.js
$ cd ..
$ vi Dockerfile

We’ve created a new directory called nodejs and a sub-directory, nodeapp, to hold
our application code. We’ve then changed into this directory and downloaded the
source code or our Node.JS application.

 NOTE You can get our Node application’s source code here or on GitHub
here.

Finally, we’ve changed back to the nodejs directory and now we populate our
Dockerfile.


Listing 6.33: Our Node.js image

FROM ubuntu:16.04
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

RUN apt-get -yqq update
RUN apt-get -yqq install nodejs npm
RUN ln -s /usr/bin/nodejs /usr/bin/node
RUN mkdir -p /var/log/nodeapp

ADD nodeapp /opt/nodeapp/

WORKDIR /opt/nodeapp
RUN npm install

VOLUME [ "/var/log/nodeapp" ]

EXPOSE 3000

ENTRYPOINT [ "nodejs", "server.js" ]

Our Node.js image installs Node and makes a simple workaround of linking the
binary nodejs to node to address some backwards compatibility issues on Ubuntu.
We then add our nodeapp code into the /opt/nodeapp directory using an ADD
instruction. Our Node.js application is a simple Express server and contains both
a package.json file holding the application's dependency information and the
server.js file that contains our actual application. Let's look at a subset of that
application.


Listing 6.34: Our Node.js server.js application

. . .

var logFile = fs.createWriteStream('/var/log/nodeapp/nodeapp.log', {flags: 'a'});

app.configure(function() {

. . .

app.use(express.session({
  store: new RedisStore({
    host: process.env.REDIS_HOST || 'redis_primary',
    port: process.env.REDIS_PORT || 6379,
    db: process.env.REDIS_DB || 0
  }),
  cookie: {

. . .

app.get('/', function(req, res) {
  res.json({
    status: "ok"
  });
});

. . .

var port = process.env.HTTP_PORT || 3000;
server.listen(port);
console.log('Listening on port ' + port);


The server.js le pulls in all the dependencies and starts an Express application.
The Express app is congured to store its session inormation in Redis and exposes
a single endpoint that returns a status message as JSON. We’ve congured its
connection to Redis to use a host called redis_primary with an option to override
this with an environment variable i needed.
The application will also log to the /var/log/nodeapp/nodeapp.log le and will
listen on port 3000.

 NOTE You can get our Node application’s source code here or on GitHub
here.

We’ve then set the working directory to /opt/nodeapp and installed the prereq-
uisites or our Node application. We’ve also created a volume that will hold our
Node application’s logs, /var/log/nodeapp.
We expose port 3000 and nally speciy an ENTRYPOINT o nodejs server.js that
will run our Node application.
Let’s build our image now.

Listing 6.35: Building our Node.js image

$ sudo docker build -t jamtur01/nodejs .

The Redis base image

Let’s continue with our rst Redis image: a base image that will install Redis. It
is on top o this base image that we’ll build our Redis primary and replica images.


Listing 6.36: Creating our Redis base Dockerfile

$ mkdir redis_base
$ cd redis_base
$ vi Dockerfile

Now let’s populate our Dockerfile.

Listing 6.37: Our Redis base image

FROM ubuntu:16.04
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

RUN apt-get -yqq update
RUN apt-get install -yqq software-properties-common python-software-properties
RUN add-apt-repository ppa:chris-lea/redis-server
RUN apt-get -yqq update
RUN apt-get -yqq install redis-server redis-tools

VOLUME [ "/var/lib/redis", "/var/log/redis" ]

EXPOSE 6379
CMD []

Our Redis base image installs the latest version of Redis (from a PPA rather than
using the older packages shipped with Ubuntu), specifies two VOLUMEs
(/var/lib/redis and /var/log/redis), and exposes the Redis default port 6379. It doesn't
have an ENTRYPOINT or CMD because we're not actually going to run this image.
We're just going to build on top of it.


Let’s build our Redis primary image now.

Listing 6.38: Building our Redis base image

$ sudo docker build -t jamtur01/redis .

The Redis primary image

Let’s continue with our rst Redis image: a Redis primary server.

Listing 6.39: Creating our Redis primary Dockerfile

$ mkdir redis_primary
$ cd redis_primary
$ vi Dockerfile

Now let’s populate our Dockerfile.

Listing 6.40: Our Redis primary image

FROM jamtur01/redis
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

ENTRYPOINT [ "redis-server", "--protected-mode no", "--logfile /


var/log/redis/redis-server.log" ]

Our Redis primary image is based on our jamtur01/redis image and has an
ENTRYPOINT that runs the default Redis server with logging directed to
/var/log/redis/redis-server.log.


Let’s build our Redis primary image now.

Listing 6.41: Building our Redis primary image

$ sudo docker build -t jamtur01/redis_primary .

The Redis replica image

As a complement to our Redis primary image, we're going to create an image
that runs a Redis replica to allow us to provide some redundancy to our Node.js
application.

Listing 6.42: Creating our Redis replica Dockerfile

$ mkdir redis_replica
$ cd redis_replica
$ touch Dockerfile

Now let’s populate our Dockerfile.

Listing 6.43: Our Redis replica image

FROM jamtur01/redis
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

ENTRYPOINT [ "redis-server", "--protected-mode no", "--logfile /


var/log/redis/redis-replica.log", "--slaveof redis_primary 6379
" ]


Again, we base our image on jamtur01/redis and specify an ENTRYPOINT that runs
the default Redis server with our logfile and the slaveof option. This configures
our primary-replica relationship and tells any containers built from this image
that they are a replica of the redis_primary host and should attempt replication
on port 6379.
Let’s build our Redis replica image now.

Listing 6.44: Building our Redis replica image

$ sudo docker build -t jamtur01/redis_replica .

Creating our Redis back-end cluster

Now that we have both a Redis primary and replica image, we build our own Redis
replication environment. Let’s start by creating a network to hold our Express
application. We’ll call it express.

Listing 6.45: Creating the express network

$ sudo docker network create express
dfe9fe7ee5c9bfa035b7cf10266f29a701634442903ed9732dfdba2b509680c2

Now let’s run the Redis primary container inside this network.

Listing 6.46: Running the Redis primary container

$ sudo docker run -d -h redis_primary \
--net express --name redis_primary jamtur01/redis_primary
d21659697baf56346cc5bbe8d4631f670364ffddf4863ec32ab0576e85a73d27


Here we’ve created a container with the docker run command rom the jamtur01
/redis_primary image. We’ve used a new ag that we’ve not seen beore, -h,
which sets the hostname o the container. This overrides the deault behavior
(setting the hostname o the container to the short container ID) and allows us to
speciy our own hostname. We’ll use this to ensure that our container is given a
hostname o redis_primary and will thus be resolved that way with local DNS.
We’ve specied the --name ag to ensure that our container’s name is
redis_primary and we’ve specied the --net ag to run the container in
the express network. We’re going to use this network or our container
connectivity, as we’ll see shortly.
Let’s see what the docker logs command can tell us about our Redis primary
container.

Listing 6.47: Our Redis primary logs

$ sudo docker logs redis_primary

Nothing? Why is that? Our Redis server is logging to a file rather than to standard
out, so we see nothing in the Docker logs. So how can we tell what's happening
to our Redis server? To do that, we use the /var/log/redis volume we created
earlier. Let's use this volume and read some log files now.

Listing 6.48: Reading our Redis primary logs

$ sudo docker run -ti --rm --volumes-from redis_primary \
ubuntu cat /var/log/redis/redis-server.log
. . .
1:M 05 Aug 15:22:21.697 # Server started, Redis version 3.0.7
. . .
1:M 05 Aug 15:22:21.698 * The server is now ready to accept
connections on port 6379


Here we’ve run another container interactively. We’ve specied the --rm ag,
which automatically deletes a container ater the process it runs stops. We’ve
also specied the --volumes-from ag and told it to mount the volumes rom our
redis_primary container. Then we’ve specied a base ubuntu image and told it to
cat the /var/log/redis/redis-server.log log le. This takes advantage o vol-
umes to allow us to mount the /var/log/redis directory rom the redis_primary
container and read the log le inside it. We’re going to see more about how we
use this shortly.
Looking at our Redis logs, we see some general warnings, but everything is looking
pretty good. Our Redis server is ready to receive data on port 6379.
So next, let’s create our rst Redis replica.

Listing 6.49: Running our first Redis replica container

$ sudo docker run -d -h redis_replica1 \
--name redis_replica1 \
--net express \
jamtur01/redis_replica
0ae440b5c56f48f3190332b4151c40f775615016bf781fc817f631db5af34ef8

We’ve run another container: this one rom the jamtur01/redis_replica image.
We’ve again specied a hostname (with the -h ag) o redis_replica1 and a
name (with --name) o redis_replica1. We’ve also used the --net ag to run our
Redis replica container inside the express network.
Let’s check this new container’s logs.


Listing 6.50: Reading our Redis replica logs

$ sudo docker run -ti --rm --volumes-from redis_replica1 \
ubuntu cat /var/log/redis/redis-replica.log
...
1:S 05 Aug 15:23:57.733 # Server started, Redis version 3.0.7
1:S 05 Aug 15:23:57.733 * The server is now ready to accept
connections on port 6379
1:S 05 Aug 15:23:57.733 * Connecting to MASTER redis_primary:6379
1:S 05 Aug 15:23:57.743 * MASTER <-> SLAVE sync started
1:S 05 Aug 15:23:57.743 * Non blocking connect for SYNC fired the
event.
1:S 05 Aug 15:23:57.743 * Master replied to PING, replication can
continue...
1:S 05 Aug 15:23:57.744 * Partial resynchronization not possible
(no cached master)
1:S 05 Aug 15:23:57.751 * Full resync from master: 692
b4d19978a2d6add881944a079ab8b8dae6653:1
1:S 05 Aug 15:23:57.841 * MASTER <-> SLAVE sync: receiving 18
bytes from master
1:S 05 Aug 15:23:57.841 * MASTER <-> SLAVE sync: Flushing old
data
1:S 05 Aug 15:23:57.841 * MASTER <-> SLAVE sync: Loading DB in
memory
1:S 05 Aug 15:23:57.841 * MASTER <-> SLAVE sync: Finished with
success

We’ve run another container to query our logs interactively. We’ve again specied
the --rm ag, which automatically deletes a container ater the process it runs
stops. We’ve specied the --volumes-from ag and told it to mount the volumes
rom our redis_replica1 container this time. Then we’ve specied a base ubuntu
image and told it to cat the /var/log/redis/redis-replica.log log le.


Woot! We’re of and replicating between our redis_primary container and our
redis_replica1 container.
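
If you want to double-check the replication state from the primary's side, one
hedged option is to query Redis directly via docker exec:

$ sudo docker exec redis_primary redis-cli info replication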

Let’s add another replica, redis_replica2, just to be sure.

Listing 6.51: Running our second Redis replica container

$ sudo docker run -d -h redis_replica2 \
--name redis_replica2 \
--net express \
jamtur01/redis_replica
72267cd74c412c7b168d87bba70f3aaa3b96d17d6e9682663095a492bc260357

Let’s see a sampling o the logs rom our new container.


Listing 6.52: Our Redis replica2 logs

$ sudo docker run -ti --rm --volumes-from redis_replica2 ubuntu \
cat /var/log/redis/redis-replica.log
. . .
1:S 05 Aug 15:27:38.355 # Server started, Redis version 3.0.7
1:S 05 Aug 15:27:38.355 * The server is now ready to accept
connections on port 6379
1:S 05 Aug 15:27:38.355 * Connecting to MASTER redis_primary:6379
1:S 05 Aug 15:27:38.366 * MASTER <-> SLAVE sync started
1:S 05 Aug 15:27:38.366 * Non blocking connect for SYNC fired the
event.
1:S 05 Aug 15:27:38.366 * Master replied to PING, replication can
continue...
1:S 05 Aug 15:27:38.366 * Partial resynchronization not possible
(no cached master)
1:S 05 Aug 15:27:38.372 * Full resync from master: 692
b4d19978a2d6add881944a079ab8b8dae6653:309
1:S 05 Aug 15:27:38.465 * MASTER <-> SLAVE sync: receiving 18
bytes from master
1:S 05 Aug 15:27:38.465 * MASTER <-> SLAVE sync: Flushing old
data
1:S 05 Aug 15:27:38.465 * MASTER <-> SLAVE sync: Loading DB in
memory
1:S 05 Aug 15:27:38.465 * MASTER <-> SLAVE sync: Finished with
success

And again, we’re of and away replicating!


Creating our Node container

Now that we’ve got our Redis cluster running, we launch a container or our
Node.js application.

Listing 6.53: Running our Node.js container

$ sudo docker run -d \
--name nodeapp -p 3000:3000 \
--net express \
jamtur01/nodejs
9a9dd33957c136e98295de7405386ed2c452e8ad263a6ec1a2a08b24f80fd175

We’ve created a new container rom our jamtur01/nodejs image, specied a name
o nodeapp, and mapped port 3000 inside the container to port 3000 outside. We’ve
also run our new nodeapp container in the express network.
We use the docker logs command to see what’s going on in our nodeapp con-
tainer.

Listing 6.54: The nodeapp console log

$ sudo docker logs nodeapp
Listening on port 3000

Here we see that our Node application is bound and listening at port 3000.
Let’s browse to our Docker host and see the application at work.


Figure 6.7: Our Node application.

We see that our simple Node application returns an OK status.

Listing 6.55: Node application output

{
"status": "ok"
}

That tells us it’s working. Our session state will also be recorded and stored in
our primary Redis container, redis_primary, then replicated to our Redis replicas:
redis_replica1 and redis_replica2.

Capturing our application logs

Now that our application is up and running, we’ll want to put it into production,
which involves ensuring that we capture its log output and put it into our logging
servers. We are going to use Logstash to do so. We’re going to start by creating
an image that installs Logstash.


Listing 6.56: Creating our Logstash Dockerfile

$ mkdir logstash
$ cd logstash
$ touch Dockerfile

Now let’s populate our Dockerfile.

Listing 6.57: Our Logstash image

FROM ubuntu:16.04
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

RUN apt-get -yqq update
RUN apt-get -yqq install wget
RUN wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | apt-key add -
RUN echo 'deb http://packages.elasticsearch.org/logstash/1.5/debian stable main' > /etc/apt/sources.list.d/logstash.list
RUN apt-get -yqq update
RUN apt-get -yqq install logstash default-jdk

ADD logstash.conf /etc/

WORKDIR /opt/logstash

ENTRYPOINT [ "bin/logstash" ]
CMD [ "--config=/etc/logstash.conf" ]

We’ve created an image that installs Logstash and adds a logstash.conf le to

Version: v17.03.0 (38f1319) 248


Chapter 6: Building services with Docker

the /etc/ directory using the ADD instruction. Let’s quickly create this le in the
logstash directory. Add a le called logstash.conf and populate it like so:

Listing 6.58: Our Logstash configuration

input {
  file {
    type => "syslog"
    path => ["/var/log/nodeapp/nodeapp.log", "/var/log/redis/redis-server.log"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}

This is a simple Logstash configuration that monitors two files: /var/log/nodeapp/nodeapp.log and /var/log/redis/redis-server.log. Logstash will watch these
files and send any new data inside of them into Logstash. The second part of our
configuration, the output stanza, takes any events Logstash receives and outputs
them to standard out. In a real-world Logstash configuration we would output to
an Elasticsearch cluster or other destination, but we're just using this as a demo,
so we're going to skip that.

 NOTE I you don’t know much about Logstash, you can learn more rom
my book or the Logstash documentation.

We’ve specied a working directory o /opt/logstash. Finally, we have specied


an ENTRYPOINT o bin/logstash and a CMD o --config=/etc/logstash.conf to

Version: v17.03.0 (38f1319) 249


Chapter 6: Building services with Docker

pass in our command ags. This will launch Logstash and load our /etc/logstash
.conf conguration le.

Let’s build our Logstash image now.

Listing 6.59: Building our Logstash image

$ sudo docker build -t jamtur01/logstash .

Now that we’ve built our Logstash image, we launch a container rom it.

Listing 6.60: Launching a Logstash container

$ sudo docker run -d --name logstash \
--volumes-from redis_primary \
--volumes-from nodeapp \
jamtur01/logstash

We’ve launched a new container called logstash and specied the --volumes-
from ag twice to get the volumes rom the redis_primary and nodeapp. This
gives us access to the Node and Redis log les. Any events added to those les
will be reected in the volumes in the logstash container and passed to Logstash
or processing.
Let’s browse to our web application again and reresh it to generate an event. We
should see that event reected in our logstash container output.
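We can also generate an event from the command line rather than the browser; a quick sketch, assuming we're on the Docker host itself:

$ curl http://localhost:3000

Each request is appended to /var/log/nodeapp/nodeapp.log, which Logstash is watching.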


Listing 6.61: A Node event in Logstash

{
       "message" => "::ffff:198.179.69.250 - - [Fri, 05 Aug 2016 16:39:25 GMT] \"GET / HTTP/1.1\" 200 20 \"-\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36\"",
      "@version" => "1",
    "@timestamp" => "2016-08-05T16:39:25.945Z",
          "host" => "1bbc26b1ed7d",
          "path" => "/var/log/nodeapp/nodeapp.log",
          "type" => "syslog"
}

And now we have our Node and Redis containers logging to Logstash. In a production environment, we'd be sending these events to a Logstash server and storing
them in Elasticsearch. We could also easily add our Redis replica containers or
other components of the solution to our logging environment.

 NOTE We could also do Redis backups via volumes if we wanted to.

Summary o our Node stack

We’ve now seen a multi-container application stack. We’ve used Docker network-
ing to connect our application together and Docker volumes to help manage a
variety o aspects o our application. We can build on this oundation to produce
more complex applications and architectures.


Managing Docker containers without SSH

Lastly, beore we wrap up our chapter on running services with Docker, it’s im-
portant to understand some o the ways we can manage Docker containers and
how those difer rom some more traditional management techniques.
Traditionally, when managing services, we’re used to SSHing into our environ-
ment or virtual machines to manage them. In the Docker world, where most
containers run a single process, this access isn’t available. As we’ve seen much o
the time, this access isn’t needed: we will use volumes or networking to perorm
a lot o the same actions. For example, i our service is managed via a network
interace, we expose that on a container; i our service is managed through a Unix
socket, we expose that with a volume. I we need to send a signal to a Docker
container, we use the docker kill command, like so:

Listing 6.62: Using docker kill to send signals

$ sudo docker kill -s <signal> <container>

This will send the specic signal you want (e.g., a HUP) to the container in question
rather than killing the container.
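For example, to send a HUP to the redis_primary container from earlier in the chapter we could run:

$ sudo docker kill -s HUP redis_primary

What the process inside does with that signal is, of course, up to the process itself.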
Sometimes, however, we do need to sign into a container. To do that, though, we
don't need to run an SSH service or open up any access. We can use the docker
exec command.

 NOTE The docker exec command, introduced in Docker 1.3, replaces the
previous tool, nsenter.


Listing 6.63: Launching a shell inside a container with docker exec

$ sudo docker exec -ti nodeapp /bin/bash

This will launch an interactive Bash shell inside our nodeapp container.
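The same command runs one-off, non-interactive processes too, which is often all we need for a quick check; for example, to peek at the application log path we used earlier in the Logstash configuration:

$ sudo docker exec nodeapp tail /var/log/nodeapp/nodeapp.log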

Summary

In this chapter, we’ve seen how to build some example production services using
Docker containers. We’ve seen a bit more about how we build multi-container
services and manage those stacks. We’ve combined eatures like Docker network-
ing and volumes and learned how to potentially extend those eatures to provide
us with capabilities like logging and backups.
In the next chapter, we’ll look at orchestration with Docker using the Docker
Compose, Docker Swarm and Consul tools.



Chapter 7

Docker Orchestration and Service Discovery

Orchestration is a pretty loosely defined term. It's broadly the process of automated configuration, coordination, and management of services. In the Docker
world we use it to describe the set of practices around managing applications running in multiple Docker containers and potentially across multiple Docker hosts.
Native orchestration is in its infancy in the Docker community but an exciting
ecosystem of tools is being integrated and developed.
In the current ecosystem there are a variety of tools being built and integrated
with Docker. Some of these tools are simply designed to elegantly "wire" together
multiple containers and build application stacks using simple composition. Other
tools provide larger scale coordination between multiple Docker hosts as well as
complex service discovery, scheduling and execution capabilities.
Each of these areas really deserves its own book but we've focused on a few useful
tools that give you some insight into what you can achieve when orchestrating
containers. They provide some useful building blocks upon which you can grow
your Docker-enabled environment.
In this chapter we will focus on three areas:

• Simple container orchestration. Here we'll look at Docker Compose. Docker
Compose (previously Fig) is an open source Docker orchestration tool developed by the Orchard team and then acquired by Docker Inc in 2014. It's
written in Python and licensed with the Apache 2.0 license.

• Distributed service discovery. Here we'll introduce Consul. Consul is also
open source, licensed with the Mozilla Public License 2.0, and written in
Go. It provides distributed, highly available service discovery. We're going
to look at how you might use Consul and Docker to manage application
service discovery.

• Orchestration and clustering of Docker. Here we're looking at Swarm.
Swarm is open source, licensed with the Apache 2.0 license. It's written in
Go and developed by the Docker Inc team. As of Docker 1.12 the Docker
Engine now has a Swarm mode built in and we'll be covering that in this
chapter.

 TIP We’ll also talk about many o the other orchestration tools available to
you later in this chapter.

Docker Compose

Now let’s get amiliar with Docker Compose. With Docker Compose, we dene a
set o containers to boot up, and their runtime properties, all dened in a YAML
le. Docker Compose calls each o these containers ”services” which it denes as:

A container that interacts with other containers in some way and that
has specic runtime properties.

We’re going to take you through installing Docker Compose and then using it to
build a simple, multi-container application stack.


Installing Docker Compose

We start by installing Docker Compose. Docker Compose is currently available for
Linux, Windows, and OS X. It can be installed directly as a binary, via Docker for
Mac or Windows or via a Python Pip package.
To install Docker Compose on Linux we can grab the Docker Compose binary from
GitHub and make it executable. Like Docker, Docker Compose is currently only
supported on 64-bit Linux installations. We'll need the curl command available
to do this.

Listing 7.1: Installing Docker Compose on Linux

$ sudo bash -c "curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose"
$ sudo chmod +x /usr/local/bin/docker-compose

This will download the docker-compose binary from GitHub and install it into
the /usr/local/bin directory. We've also used the chmod command to make the
docker-compose binary executable so we can run it.

If we're on OS X Docker Compose comes bundled with Docker for Mac or we can
install it like so:

Listing 7.2: Installing Docker Compose on OS X

$ sudo bash -c "curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-Darwin-x86_64 > /usr/local/bin/docker-compose"
$ sudo chmod +x /usr/local/bin/docker-compose


 TIP Replace the 1.8.0 with the release number of the current Docker Compose
release.

If we're on Windows Docker Compose comes bundled inside Docker for Windows.
Compose is also available as a Python package if you're on another platform or
if you prefer installing via package. You will need to have the Python-Pip tool
installed to use the pip command. This is available via the python-pip package
on most Red Hat, Debian and Ubuntu releases.

Listing 7.3: Installing Compose via Pip

$ sudo pip install -U docker-compose

Once you have installed the docker-compose binary you can test it's working using
the docker-compose command with the --version flag:

Listing 7.4: Testing Docker Compose is working

$ docker-compose --version
docker-compose version 1.8.0, build f3628c7

 NOTE I you’re upgrading rom a pre-1.3.0 release you’ll need to mi-


grate any existing container to the new 1.3.0 ormat using the docker-compose
migrate-to-labels command.


Getting our sample application

To demonstrate how Compose works we're going to use a sample Python Flask
application that combines two containers:

• An application container running our sample Python application.
• A container running the Redis database.

Let’s start with building our sample application. Firstly, we create a directory and
a Dockerfile.

Listing 7.5: Creating the composeapp directory

$ mkdir composeapp
$ cd composeapp

Here we’ve created a directory to hold our sample application, which we’re calling
composeapp.

Next, we need to add our application code. Let’s create a le called app.py in the
composeapp directory and add the ollowing Python code to it.


Listing 7.6: The app.py le

from flask import Flask
from redis import Redis
import os

app = Flask(__name__)
redis = Redis(host="redis", port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello Docker Book reader! I have been seen {0} times'.format(redis.get('hits'))

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)

 TIP You can nd this source code on GitHub here or on the Docker Book site
here.

This simple Flask application tracks a counter stored in Redis. The counter is
incremented each time the root URL, /, is hit.
We also need to create a requirements.txt file to store our application's dependencies. Let's create that file now and add the following dependencies.


Listing 7.7: The requirements.txt file

flask
redis

Now let’s populate our Compose Dockerfile.

Listing 7.8: The composeapp Dockerfile

# Compose Sample application image
FROM python:2.7
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

ADD . /composeapp

WORKDIR /composeapp

RUN pip install -r requirements.txt

Our Dockerfile is simple. It is based on the python:2.7 image. We add our app.py
and requirements.txt files into a directory in the image called /composeapp.
The Dockerfile then sets the working directory to /composeapp and runs the pip
installation process to install our application's dependencies: flask and redis.
Let's build that image now using the docker build command.


Listing 7.9: Building the composeapp application

$ sudo docker build -t jamtur01/composeapp .
Sending build context to Docker daemon 16.9 kB
Sending build context to Docker daemon
Step 0 : FROM python:2.7
---> 1c8df2f0c10b
Step 1 : MAINTAINER James Turnbull <[email protected]>
---> Using cache
---> aa564fe8be5a
Step 2 : ADD . /composeapp
---> c33aa147e19f
Removing intermediate container 0097bc79d37b
Step 3 : WORKDIR /composeapp
---> Running in 76e5ee8544b3
---> d9da3105746d
Removing intermediate container 76e5ee8544b3
Step 4 : RUN pip install -r requirements.txt
---> Running in e71d4bb33fd2
Downloading/unpacking flask (from -r requirements.txt (line 1))
. . .
Successfully installed flask redis Werkzeug Jinja2 itsdangerous
markupsafe
Cleaning up...
---> bf0fe6a69835
Removing intermediate container e71d4bb33fd2
Successfully built bf0fe6a69835

This will build a new image called jamtur01/composeapp containing our sample
application and its required dependencies. We can now use Compose to deploy
our application.


 NOTE We’ll be using a Redis container created rom the deault Redis
image on the Docker Hub so we don’t need to build or customize that.

The docker-compose.yml le

Now we’ve got our application image built we can congure Compose to create
both the services we require. With Compose, we dene a set o services (in the
orm o Docker containers) to launch. We also dene the runtime properties we
want these services to start with, much as you would do with the docker run
command. We dene all o this in a YAML le. We then run the docker-compose
up command. Compose launches the containers, executes the appropriate runtime
conguration, and multiplexes the log output together or us.
Let’s create a docker-compose.yml le or our application inside our composeapp
directory.

Listing 7.10: Creating the docker-compose.yml file

$ touch docker-compose.yml

Let’s populate our docker-compose.yml le. The docker-compose.yml le is a


YAML le that contains instructions or running one or more Docker containers.
Let’s look at the instructions or our example application.


Listing 7.11: The docker-compose.yml file

web:
  image: jamtur01/composeapp
  command: python app.py
  ports:
    - "5000:5000"
  volumes:
    - .:/composeapp
  links:
    - redis
redis:
  image: redis

Each service we wish to launch is specified as a YAML hash here: web and redis.
For our web service we've specified some runtime options. Firstly, we've specified
the image we're using: the jamtur01/composeapp image. Compose can also build
Docker images. You can use the build instruction and provide the path to a
Dockerfile to have Compose build an image and then create services from it.

Listing 7.12: An example of the build instruction

web:
  build: /home/james/composeapp
  . . .

This build instruction would build a Docker image from a Dockerfile found in
the /home/james/composeapp directory.
We've also specified the command to run when launching the service. Next we
specify the ports and volumes as a list of the port mappings and volumes we want
for our service. We've specified that we're mapping port 5000 inside our service
to port 5000 on the host. We're also creating /composeapp as a volume. Finally,
we specify any links for this service. Here we link our web service to the redis
service.
If we were executing the same configuration on the command line using docker
run we'd do it like so:

Listing 7.13: The docker run equivalent command

$ sudo docker run -d -p 5000:5000 -v .:/composeapp \
--link redis:redis --name web \
jamtur01/composeapp python app.py

Next we’ve specied another service called redis. For this service we’re not set-
ting any runtime deaults at all. We’re just going to use the base redis image. By
deault, containers run rom this image launches a Redis database on the standard
port. So we don’t need to congure or customize it.

 TIP You can see a ull list o the available instructions you can use in the
docker-compose.yml le here.

Running Compose

Once we’ve specied our services in docker-compose.yml we use the docker-


compose up command to execute them both.


Listing 7.14: Running docker-compose up with our sample application

$ cd composeapp
$ sudo docker-compose up
Creating composeapp_redis_1
Creating composeapp_web_1
Attaching to composeapp_redis_1, composeapp_web_1
redis_1 |                 _._
redis_1 |            _.-``__ ''-._
redis_1 |       _.-``    `.  `_.  ''-._           Redis 3.2.3 (00000000/0)
redis_1 |   .-`` .-```.  ```\/    _.,_ ''-._
redis_1 |  (    '      ,       .-`  | `,    )     Running in standalone mode
redis_1 |  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
redis_1 |  |    `-._   `._    /     _.-'    |     PID: 1
redis_1 |   `-._    `-._  `-./  _.-'    _.-'
redis_1 |  |`-._`-._    `-.__.-'    _.-'_.-'|
redis_1 |  |    `-._`-._        _.-'_.-'    |     http://redis.io
redis_1 |   `-._    `-._`-.__.-'_.-'    _.-'
redis_1 |  |`-._`-._    `-.__.-'    _.-'_.-'|
redis_1 |  |    `-._`-._        _.-'_.-'    |
redis_1 |   `-._    `-._`-.__.-'_.-'    _.-'
redis_1 |       `-._    `-.__.-'    _.-'
redis_1 |           `-._        _.-'
redis_1 |               `-.__.-'
redis_1 |
redis_1 | 1:M 05 Aug 17:49:17.839 * The server is now ready to accept connections on port 6379
web_1   |  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)


 TIP You must be inside the directory with the docker-compose.yml file in
order to execute most Compose commands.

Compose has created two new services: composeapp_redis_1 and composeapp_web_1.
So where did these names come from? Well, to ensure our services are unique,
Compose has prefixed and suffixed the names specified in the docker-compose.yml file with the directory and a number respectively.
Compose then attaches to the logs of each service, each line of log output is prefixed with the abbreviated name of the service it comes from, and outputs them
multiplexed:

Listing 7.15: Compose service log output

redis_1 | 1:M 05 Aug 17:49:17.839 * The server is now ready to accept connections on port 6379

The services (and Compose) are being run interactively. That means if you use
Ctrl-C or the like to cancel Compose then it'll stop the running services. We
could also run Compose with the -d flag to run our services daemonized (similar to
the docker run -d flag).

Listing 7.16: Running Compose daemonized

$ sudo docker-compose up -d

Let’s look at the sample application that’s now running on the host. The applica-
tion is bound to all interaces on the Docker host on port 5000. So we can browse
to that site on the host’s IP address or via localhost.


Figure 7.1: Sample Compose application.

We see a message displaying the current counter value. We can increment the
counter by refreshing the site. Each refresh stores the increment in Redis. The
Redis update is done via the link between the Docker containers controlled by
Compose.
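We can do the same from the command line; each request bumps the counter (a quick check, assuming we're on the Docker host itself):

$ curl localhost:5000
Hello Docker Book reader! I have been seen 1 times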

 TIP By deault, Compose tries to connect to a local Docker daemon but it’ll
also honor the DOCKER_HOST environment variable to connect to a remote Docker
host.
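For example, something like this would run our stack against a hypothetical remote daemon listening on TCP:

$ DOCKER_HOST=tcp://docker.example.com:2375 docker-compose up -d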

Using Compose

Now let’s explore some o Compose’s other options. Firstly, let’s use Ctrl-C to
cancel our running services and then restart them as daemonized services.
Press Ctrl-C inside the composeapp directory and then re-run the docker-compose
up command, this time with the -d ag.


Listing 7.17: Restarting Compose as daemonized

$ sudo docker-compose up -d
Recreating composeapp_redis_1...
Recreating composeapp_web_1...
$ . . .

We see that Compose has recreated our services, launched them and returned to
the command line.
Our Compose-managed services are now running daemonized on the host. Let's
look at them now using the docker-compose ps command; a close cousin of the
docker ps command.

 TIP You can get help on Compose commands by running docker-compose
help and the command you wish to get help on, for example docker-compose
help ps.

The docker-compose ps command lists all of the currently running services from
our local docker-compose.yml file.


Listing 7.18: Running the docker-compose ps command

$ cd composeapp
$ sudo docker-compose ps
Name                 Command                      State   Ports
------------------------------------------------------------------------------
composeapp_redis_1   docker-entrypoint.sh redis   Up      6379/tcp
composeapp_web_1     python app.py                Up      0.0.0.0:5000->5000/tcp

This shows some basic inormation about our running Compose services. The
name o each service, what command we used to start the service, and the ports
that are mapped on each service.
We can also drill down urther using the docker-compose logs command to show
us the log events rom our services.

Listing 7.19: Showing a Compose service's logs

$ sudo docker-compose logs
Attaching to composeapp_redis_1, composeapp_web_1
redis_1 |  (    '      ,       .-`  | `,    )     Running in standalone mode
redis_1 |  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
redis_1 |  |    `-._   `._    /     _.-'    |     PID: 1
. . .

This will tail the log files of your services, much as the tail -f command. Like
the tail -f command you'll need to use Ctrl-C or the like to exit from it.
We can also stop our running services with the docker-compose stop command.


Listing 7.20: Stopping running services

$ sudo docker-compose stop
Stopping composeapp_web_1...
Stopping composeapp_redis_1...

This will stop both services. If the services don't stop you can use the docker-compose kill command to force kill the services.
We can verify this with the docker-compose ps command again.

Listing 7.21: Veriying our Compose services have been stopped

$ sudo docker-compose ps
Name Command State Ports
-------------------------------------------------
composeapp_redis_1 redis-server Exit 0
composeapp_web_1 python app.py Exit 0

I you’ve stopped services using docker-compose stop or docker-compose kill


you can also restart them again with the docker-compose start command. This
is much like using the docker start command and will restart these services.
Finally, we can remove services using the docker-compose rm command.


Listing 7.22: Removing Compose services

$ sudo docker-compose rm
Going to remove composeapp_redis_1, composeapp_web_1
Are you sure? [yN] y
Removing composeapp_redis_1...
Removing composeapp_web_1...

You’ll be prompted to conrm you wish to remove the services and then both
services will be deleted. The docker-compose ps command will now show no
running or stopped services.

Listing 7.23: Showing no Compose services

$ sudo docker-compose ps
Name Command State Ports
------------------------------

Compose in summary

Now in one le we have a simple Python-Redis stack built! You can see how much
easier this can make constructing applications rom multiple Docker containers.
This, however, just scratches the surace o what you can do with Compose. There
are some more examples using Rails, Django and Wordpress on the Compose web-
site that introduce some more advanced concepts.

 TIP You can see a ull command line reerence here.


Consul, Service Discovery and Docker

Service discovery is the mechanism by which distributed applications manage
their relationships. A distributed application is usually made up of multiple components. These components can be located together locally or distributed across
data centers or geographic regions. Each of these components usually provides or
consumes services to or from other components.
Service discovery allows these components to find each other when they want
to interact. Due to the distributed nature of these applications, service discovery
mechanisms also need to be distributed. As they are usually the "glue" between
components of distributed applications they also need to be dynamic, reliable,
resilient and able to quickly and consistently share data about these services.
Docker, with its focus on distributed applications and service-oriented and microservices architectures, is an ideal candidate for integration with a service discovery tool. Each Docker container can register its running service or services
with the tool. This provides the information needed, for example an IP address or
port or both, to allow interaction between services.
Our example service discovery tool, Consul, is a specialized datastore that uses
consensus algorithms. Consul specifically uses the Raft consensus algorithm to
require a quorum for writes. It also exposes a key value store and service catalog
that is highly available, fault-tolerant, and maintains strong consistency guarantees. Services can register themselves with Consul and share that registration
information in a highly available and distributed manner.
Consul is also interesting because it provides:

• A service catalog with an API instead of the traditional key=value store of
most service discovery tools.
• Both a DNS-based query interface through an inbuilt DNS server and a HTTP-based REST API to query the information. The choice of interfaces, especially
the DNS-based interface, allows you to easily drop Consul into your existing
environment.
• Service monitoring AKA health checks. Consul has powerful service monitoring built into the tool.


To get a better understanding of how Consul works, we're going to see how to
run distributed Consul inside Docker containers. We're then going to register
services from Docker containers to Consul and query that data from other Docker
containers. To make it more interesting we're going to do this across multiple
Docker hosts.
To do this we're going to:

• Create a Docker image for the Consul service.
• Build three hosts running Docker and then run Consul on each. The three
hosts will provide us with a distributed environment to see how resiliency
and failover works with Consul.
• Build services that we'll register with Consul and then query that data from
another service.

 NOTE You can see a more generic introduction to Consul here.

Building a Consul image

We’re going to start with creating a Dockerfile to build our Consul image. Let’s
create a directory to hold our Consul image rst.

Listing 7.24: Creating a Consul Dockerle directory

$ mkdir consul
$ cd consul
$ touch Dockerfile

Now let’s look at the Dockerfile or our Consul image.


Listing 7.25: The Consul Dockerfile

FROM ubuntu:16.04
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2014-08-01

RUN apt-get -qqy update
RUN apt-get -qqy install curl unzip

ADD https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip /tmp/consul.zip
RUN cd /usr/sbin; unzip /tmp/consul.zip; chmod +x /usr/sbin/consul; rm /tmp/consul.zip

RUN mkdir -p /webui/
ADD https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_web_ui.zip /webui/webui.zip
RUN cd /webui; unzip webui.zip; rm webui.zip

ADD consul.json /config/

EXPOSE 53/udp 8300 8301 8301/udp 8302 8302/udp 8400 8500

VOLUME ["/data"]

ENTRYPOINT [ "/usr/sbin/consul", "agent", "-config-dir=/config" ]
CMD []

Our Dockerfile is pretty simple. It's based on an Ubuntu 16.04 image. It installs
curl and unzip. We then download the Consul zip file containing the consul
binary. We move that binary to /usr/sbin/ and make it executable. We also
download Consul's web interface and place it into a directory called /webui. We're
going to see this web interface in action a little later.


We then add a conguration le or Consul, consul.json, to the /config directory.
Let’s create and look at that le now.

Listing 7.26: The consul.json configuration file

{
  "data_dir": "/data",
  "ui_dir": "/webui",
  "client_addr": "0.0.0.0",
  "ports": {
    "dns": 53
  },
  "recursor": "8.8.8.8"
}

The consul.json conguration le is JSON ormatted and provides Consul with
the inormation needed to get running. We’ve specied a data directory, /data,
to hold Consul’s data. We also speciy the location o the web interace les: /
webui. We use the client_addr variable to bind Consul to all interaces inside our
container.
We also use the ports block to congure on which ports various Consul services
run. In this case we’re speciying that Consul’s DNS service should run on port
53. Lastly, we’ve used the recursor option to speciy a DNS server to use or
resolution i Consul can’t resolve a DNS request. We’ve specied 8.8.8.8 which
is one o the IP addresses o Google’s public DNS service.

 TIP You can nd the ull list o available Consul conguration options here.

Back in our Dockerfile we've used the EXPOSE instruction to open up a series of
ports that Consul requires to operate. I've added a table showing each of these
ports and what they do.

Table 7.1: Consul’s deault ports.

Port Purpose
53/udp DNS server
8300 Server RPC
8301 + udp Ser LAN port
8302 + udp Ser WAN port
8400 RPC endpoint
8500 HTTP API

You don’t need to worry about most o them or the purposes o this chapter. The
important ones or us are 53/udp which is the port Consul is going to be running
DNS on. We’re going to use DNS to query service inormation. We’re also going to
use Consul’s HTTP API and its web interace, both o which are bound to port 8500.
The rest o the ports handle the backend communication and clustering between
Consul nodes. We’ll congure them in our Docker container but we don’t do
anything specic with them.

 NOTE You can nd more details o what each port does here.

Next, we’ve also made our /data directory a volume using the VOLUME instruction.
This is useul i we want to manage or work with this data as we saw in Chapter
6.
Finally, we’ve specied an ENTRYPOINT instruction to launch Consul using the
consul binary when a container is launched rom our image.

Let’s step through the command line options we’ve used. We’ve specied the
consul binary in /usr/sbin/. We’ve passed it the agent command which tells
Consul to run as an agent and the -config-dir ag and specied the location o
our consul.json le in the /config directory.


Let’s build our image now.

Listing 7.27: Building our Consul image

$ sudo docker build -t="jamtur01/consul" .

 NOTE You can get our Consul Dockerfile and configuration file here or
on GitHub here. If you don't want to use a home grown image there is also an
officially sanctioned Consul image on the Docker Hub.

Testing a Consul container locally

Beore we run Consul on multiple hosts, let’s see it working locally on a single
host. To do this we’ll run a container rom our new jamtur01/consul image.


Listing 7.28: Running a local Consul node

$ sudo docker run -p 8500:8500 -p 53:53/udp \
-h node1 jamtur01/consul -server -bootstrap
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
    Node name: 'node1'
    Datacenter: 'dc1'
    Server: true (bootstrap: true)
    Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
    Cluster Addr: 172.17.0.8 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
    Atlas: <disabled>

==> Log data will now stream in as it occurs:

. . .

2016/08/05 17:59:38 [INFO] consul: cluster leadership acquired
2016/08/05 17:59:38 [INFO] consul: New leader elected: node1
2016/08/05 17:59:38 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/08/05 17:59:38 [INFO] consul: member 'node1' joined, marking health alive
2016/08/05 17:59:40 [INFO] agent: Synced service 'consul'

We’ve used the docker run command to create a new container. We’ve mapped
two ports, port 8500 in the container to 8500 on the host and port 53 in the con-
tainer to 53 on the host. We’ve also used the -h ag to speciy the hostname o

Version: v17.03.0 (38f1319) 278


Chapter 7: Docker Orchestration and Service Discovery

the container, here node1. This is going to be both the hostname o the container
and the name o the Consul node. We’ve then specied the name o our Consul
image, jamtur01/consul.
Lastly, we’ve passed two ags to the consul binary: -server and -bootstrap. The
-server ag tells the Consul agent to operate in server mode. The -bootstrap ag
tells Consul that this node is allowed to sel-elect as a leader. This allows us to
see a Consul agent in server mode doing a Rat leadership election.

 WARNING It is important that no more than one server per datacenter
be running in bootstrap mode. Otherwise consistency cannot be guaranteed if
multiple nodes are able to self-elect. We'll see some more on this when we add
other nodes to the cluster.

We see that Consul has started node1 and done a local leader election. As we've
got no other Consul nodes running it is not connected to anything else.
We can also see this via the Consul web interface if we browse to our local host's
IP address on port 8500.

Figure 7.2: The Consul web interface.

Running a Consul cluster in Docker

As Consul is distributed we'd normally create three (or more) hosts to run in separate data centers, clouds or regions. Or even add an agent to every application
server. This will provide us with sufficient distributed resilience. We're going
to mimic this required distribution by creating three hosts, each with a Docker
daemon, to run Consul. To do this we have created three new Ubuntu 16.04 hosts:
larry, curly, and moe. On each host we've installed a Docker daemon. We've
also pulled down the jamtur01/consul image.

 TIP To install Docker you can use the installation instructions in Chapter 2.

Listing 7.29: Pulling down the Consul image

$ sudo docker pull jamtur01/consul

On each host we’re going to run a Docker container with the jamtur01/consul
image. To do this we need to choose a network to run Consul over. In most cases
this would be a private network but as we’re just simulating a Consul cluster I
am going to use the public interaces o each host. To start Consul on this public
network I am going to need the public IP address o each host. This is the address
to which we’re going to bind each Consul agent.
Let’s grab that now on larry and assign it to an environment variable, $PUBLIC_IP.

Listing 7.30: Getting public IP on larry

larry$ PUBLIC_IP="$(ifconfig eth0 | awk -F ' *|:' '/inet addr/{print $4}')"
larry$ echo $PUBLIC_IP
162.243.167.159

And then create the same $PUBLIC_IP variable on curly and moe too.


Listing 7.31: Assigning public IP on curly and moe

curly$ PUBLIC_IP="$(ifconfig eth0 | awk -F ' *|:' '/inet addr/{print $4}')"
curly$ echo $PUBLIC_IP
162.243.170.66
moe$ PUBLIC_IP="$(ifconfig eth0 | awk -F ' *|:' '/inet addr/{print $4}')"
moe$ echo $PUBLIC_IP
159.203.191.16

We see we’ve got three hosts and three IP addresses, each assigned to the
$PUBLIC_IP environmental variable.

Table 7.2: Consul host IP addresses

Host IP Address
larry 162.243.167.159
curly 162.243.170.66
moe 159.203.191.16

We’re also going to need to nominate a host to bootstrap to start the cluster. We’re
going to choose larry. This means we’ll need larry’s IP address on curly and moe
to tell them which Consul node’s cluster to join. Let’s set that up now by adding
larry’s IP address o 162.243.167.159 to curly and moe as the environment vari-
able, $JOIN_IP.


Listing 7.32: Adding the cluster IP address

curly$ JOIN_IP=162.243.167.159
moe$ JOIN_IP=162.243.167.159

Starting the Consul bootstrap node

Let’s start our initial bootstrap node on larry. Our docker run command is going
to be a little complex because we’re mapping a lot o ports. Indeed, we need to
map all the ports listed in Table 7.1 above. And, as we’re both running Consul in
a container and connecting to containers on other hosts, we’re going to map each
port to the corresponding port on the local host. This will allow both internal and
external access to Consul.
Let’s see our docker run command now.

Listing 7.33: Start the Consul bootstrap node

larry$ sudo docker run -d -h $HOSTNAME \
-p 8300:8300 -p 8301:8301 \
-p 8301:8301/udp -p 8302:8302 \
-p 8302:8302/udp -p 8400:8400 \
-p 8500:8500 -p 53:53/udp \
--name larry_agent jamtur01/consul \
-server -advertise $PUBLIC_IP -bootstrap-expect 3

Here we’ve launched a daemonized container using the jamtur01/consul image


to run our Consul agent. We’ve set the -h ag to set the hostname o the container
to the value o the $HOSTNAME environment variable. This sets our Consul agent’s
name to be the local hostname, here larry. We’re also mapped a series o eight
ports rom inside the container to the respective ports on the local host.


We’ve also specied some command line options or the Consul agent.

Listing 7.34: Consul agent command line arguments

-server -advertise $PUBLIC_IP -bootstrap-expect 3

The -server ag tell the agent to run in server mode. The -advertise ag tells
that server to advertise itsel on the IP address specied in the $PUBLIC_IP environ-
ment variable. Lastly, the -bootstrap-expect ag tells Consul how many agents
to expect in this cluster. In this case, 3 agents. It also bootstraps the cluster.
Let’s look at the logs o our initial Consul container with the docker logs com-
mand.


Listing 7.35: Starting bootstrap Consul node

larry$ sudo docker logs larry_agent
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
    Node name: 'larry'
    Datacenter: 'dc1'
    Server: true (bootstrap: false)
    Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
    Cluster Addr: 162.243.167.159 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
    Atlas: <disabled>

==> Log data will now stream in as it occurs:

. . .

2016/08/06 12:35:11 [INFO] serf: EventMemberJoin: larry.dc1 162.243.167.159
2016/08/06 12:35:11 [INFO] consul: adding LAN server larry (Addr: 162.243.167.159:8300) (DC: dc1)
2016/08/06 12:35:11 [INFO] consul: adding WAN server larry.dc1 (Addr: 162.243.167.159:8300) (DC: dc1)
2016/08/06 12:35:11 [ERR] agent: failed to sync remote state: No cluster leader
2016/08/06 12:35:12 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.

We see that the agent on larry is started but because we don't have any more
nodes yet no election has taken place. We know this from the only error returned.


Listing 7.36: Cluster leader error

[ERR] agent: failed to sync remote state: No cluster leader

Starting the remaining nodes

Now we’ve bootstrapped our cluster we can start our remaining nodes on curly
and moe. Let’s start with curly. We use the docker run command to launch our
second agent.

Listing 7.37: Starting the agent on curly

curly$ sudo docker run -d -h $HOSTNAME \
-p 8300:8300 -p 8301:8301 \
-p 8301:8301/udp -p 8302:8302 \
-p 8302:8302/udp -p 8400:8400 \
-p 8500:8500 -p 53:53/udp \
--name curly_agent jamtur01/consul \
-server -advertise $PUBLIC_IP -join $JOIN_IP

We see our command is similar to our bootstrapped node on larry with the exception of the command we're passing to the Consul agent.

Listing 7.38: Launching the Consul agent on curly

-server -advertise $PUBLIC_IP -join $JOIN_IP

Again we’ve enabled the Consul agent’s server mode with -server and bound the
agent to the public IP address using the -advertise ag. Finally, we’ve told Con-

Version: v17.03.0 (38f1319) 285


Chapter 7: Docker Orchestration and Service Discovery

sul to join our Consul cluster by speciying larry’s IP address using the $JOIN_IP
environment variable.
Let’s see what happened when we launched our container.


Listing 7.39: Looking at the Curly agent logs

curly$ sudo docker logs curly_agent
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Joining cluster...
    Join completed. Synced with 1 initial agents
==> Consul agent running!
    Node name: 'curly'
    Datacenter: 'dc1'
    Server: true (bootstrap: false)
    Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
    Cluster Addr: 162.243.170.66 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
    Atlas: <disabled>

==> Log data will now stream in as it occurs:

. . .

2016/08/06 12:37:17 [INFO] consul: adding LAN server curly (Addr: 162.243.170.66:8300) (DC: dc1)
2016/08/06 12:37:17 [INFO] consul: adding WAN server curly.dc1 (Addr: 162.243.170.66:8300) (DC: dc1)
2016/08/06 12:37:17 [INFO] agent: (LAN) joining: [162.243.167.159]
2016/08/06 12:37:17 [INFO] serf: EventMemberJoin: larry 162.243.167.159
2016/08/06 12:37:17 [INFO] agent: (LAN) joined: 1 Err: <nil>
2016/08/06 12:37:17 [ERR] agent: failed to sync remote state: No cluster leader
2016/08/06 12:37:17 [INFO] consul: adding LAN server larry (Addr: 162.243.167.159:8300) (DC: dc1)
2016/08/06 12:37:18 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.

We see curly has joined larry; indeed, on larry we should see something like the
following:

Listing 7.40: Curly joining Larry

2016/08/06 12:37:17 [INFO] serf: EventMemberJoin: curly 162.243.170.66
2016/08/06 12:37:17 [INFO] consul: adding LAN server curly (Addr: 162.243.170.66:8300) (DC: dc1)

But we’ve still not got a quorum in our cluster, remember we told -bootstrap-
expect to expect 3 nodes. So let’s start our nal agent on moe.

Listing 7.41: Starting the agent on moe

moe$ sudo docker run -d -h $HOSTNAME \
-p 8300:8300 -p 8301:8301 \
-p 8301:8301/udp -p 8302:8302 \
-p 8302:8302/udp -p 8400:8400 \
-p 8500:8500 -p 53:53/udp \
--name moe_agent jamtur01/consul \
-server -advertise $PUBLIC_IP -join $JOIN_IP

Our docker run command is basically the same as what we ran on curly. But this
time we have three agents in our cluster. Now, if we look at the container's logs,
we will see a full cluster.


Listing 7.42: Consul logs on moe

moe$ sudo docker logs moe_agent
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Joining cluster...
    Join completed. Synced with 1 initial agents
==> Consul agent running!
    Node name: 'moe'
    Datacenter: 'dc1'
    Server: true (bootstrap: false)
    Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
    Cluster Addr: 159.203.191.16 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
    Atlas: <disabled>

==> Log data will now stream in as it occurs:

. . .

2016/08/06 12:39:14 [ERR] agent: failed to sync remote state: No cluster leader
2016/08/06 12:39:15 [INFO] consul: New leader elected: larry
2016/08/06 12:39:16 [INFO] agent: Synced service 'consul'

We see rom our container’s logs that moe has joined the cluster. This causes Consul
to reach its expected number o cluster members and triggers a leader election. In
this case larry is elected cluster leader.
We see the result o this nal agent joining in the Consul logs on larry too.


Listing 7.43: Consul leader election on larry

2016/08/06 12:39:14 [INFO] consul: Attempting bootstrap with nodes: [162.243.170.66:8300 159.203.191.16:8300 162.243.167.159:8300]
2016/08/06 12:39:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/08/06 12:39:15 [INFO] raft: Node at 162.243.170.66:8300 [Candidate] entering Candidate state
2016/08/06 12:39:15 [WARN] raft: Remote peer 159.203.191.16:8300 does not have local node 162.243.167.159:8300 as a peer
2016/08/06 12:39:15 [INFO] raft: Election won. Tally: 2
2016/08/06 12:39:15 [INFO] raft: Node at 162.243.170.66:8300 [Leader] entering Leader state
2016/08/06 12:39:15 [INFO] consul: cluster leadership acquired
2016/08/06 12:39:15 [INFO] consul: New leader elected: larry
2016/08/06 12:39:15 [INFO] raft: pipelining replication to peer 159.203.191.16:8300
2016/08/06 12:39:15 [INFO] consul: member 'larry' joined, marking health alive
2016/08/06 12:39:15 [INFO] consul: member 'curly' joined, marking health alive
2016/08/06 12:39:15 [INFO] raft: pipelining replication to peer 162.243.170.66:8300
2016/08/06 12:39:15 [INFO] consul: member 'moe' joined, marking health alive

We can also browse to the Consul web interface on larry on port 8500 and select
the Consul service to see the current state.


Figure 7.3: The Consul service in the web interface.
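We can ask for the same membership information from the command line via Consul's HTTP API; a quick sketch using the catalog endpoint (the field layout can vary between Consul versions):

larry$ curl http://localhost:8500/v1/catalog/nodes
[{"Node":"curly","Address":"162.243.170.66"},{"Node":"larry","Address":"162.243.167.159"},{"Node":"moe","Address":"159.203.191.16"}]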

Finally, we can test the DNS is working using the dig command. We specify our
local Docker bridge IP as the DNS server. That's the IP address of the Docker
interface: docker0.

Listing 7.44: Getting the docker0 IP address

larry$ ip addr show docker0
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::5484:7aff:fefe:9799/64 scope link
       valid_lft forever preferred_lft forever

We see the interace has an IP o 172.17.0.1. We then use this with the dig
command.


Listing 7.45: Testing the Consul DNS

larry$ dig @172.17.0.1 consul.service.consul

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @172.17.0.1 consul.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42298
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;consul.service.consul.        IN      A

;; ANSWER SECTION:
consul.service.consul.  0      IN      A       162.243.170.66
consul.service.consul.  0      IN      A       159.203.191.16
consul.service.consul.  0      IN      A       162.243.167.159

;; Query time: 1 msec
;; SERVER: 172.17.0.1#53(172.17.0.1)
;; WHEN: Sat Aug 06 12:54:18 UTC 2016
;; MSG SIZE  rcvd: 150

Here we’ve queried the IP o the local Docker interace as a DNS server and asked
it to return any inormation on consul.service.consul. This ormat is Consul’s
DNS shorthand or services: consul is the host and service.consul is the domain.
Here consul.service.consul represent the DNS entry or the Consul service itsel.
For example:


Listing 7.46: Querying another Consul service via DNS

larry$ dig @172.17.0.1 webservice.service.consul

Would return all DNS A records for the service webservice. We can also query
individual nodes.

Listing 7.47: Querying a Consul node via DNS

larry$ dig @172.17.0.1 curly.node.consul +noall +answer

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @172.17.0.1 curly.node.consul +noall +answer
; (1 server found)
;; global options: +cmd
curly.node.consul.  0  IN  A  162.243.170.66

 TIP You can see more details on Consul's DNS interface here.

We now have a running Consul cluster inside Docker containers running on three
separate hosts. That's pretty cool but it's not overly useful. Let's see how we can
register a service in Consul and then retrieve that data.

Running a distributed service with Consul in Docker

To register our service we're going to create a phony distributed application written in the uWSGI framework. We're going to build our application in two pieces.

• A web application, distributed_app. It runs web workers and registers
them as services with Consul when it starts.
• A client for our application, distributed_client. The client reads data
about distributed_app from Consul and reports the current application
state and configuration.

We’re going run the distributed_app on two o our Consul nodes: larry and
curly. We’ll run the distributed_client client on the moe node.

Building our distributed application

We’re going to start with creating a Dockerfile to build distributed_app. Let’s


create a directory to hold our image rst.

Listing 7.48: Creating a distributed_app Dockerle directory

$ mkdir distributed_app
$ cd distributed_app
$ touch Dockerfile

Now let’s look at the Dockerfile or our distributed_app application.


Listing 7.49: The distributed_app Dockerfile

FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01

RUN apt-get -qqy update
RUN apt-get -qqy install ruby-dev git libcurl4-openssl-dev curl build-essential python
RUN gem install --no-ri --no-rdoc uwsgi sinatra

RUN mkdir -p /opt/distributed_app
WORKDIR /opt/distributed_app

RUN uwsgi --build-plugin https://github.com/unbit/uwsgi-consul

ADD uwsgi-consul.ini /opt/distributed_app/
ADD config.ru /opt/distributed_app/

ENTRYPOINT [ "uwsgi", "--ini", "uwsgi-consul.ini", "--ini", "uwsgi-consul.ini:server1", "--ini", "uwsgi-consul.ini:server2" ]
CMD []

Our Dockerfile installs some required packages including the uWSGI and Sinatra
frameworks as well as a plugin to allow uWSGI to write to Consul. We create a
directory called /opt/distributed_app/ and make it our working directory. We
then add two files, uwsgi-consul.ini and config.ru, to that directory.
The uwsgi-consul.ini file configures uWSGI itself. Let's look at it now.


Listing 7.50: The uWSGI configuration

[uwsgi]
plugins = consul
socket = 127.0.0.1:9999
master = true
enable-threads = true

[server1]
consul-register = url=http://%h.node.consul:8500,name=distributed_app,id=server1,port=2001
mule = config.ru

[server2]
consul-register = url=http://%h.node.consul:8500,name=distributed_app,id=server2,port=2002
mule = config.ru

The uwsgi-consul.ini le uses uWSGI’s Mule construct to run two identical ap-
plications that do ”Hello World” in the Sinatra ramework. Let’s look at those in
the config.ru le.


Listing 7.51: The distributed_app config.ru file

require 'rubygems'
require 'sinatra'

get '/' do
  "Hello World!"
end

run Sinatra::Application

Each application is dened in a block, labelled server1 and server2 respectively.


Also inside these blocks is a call to the uWSGI Consul plugin. This call connects
to our Consul instance and registers a service called distributed_app with an ID
o server1 or server2. Each service is assigned a diferent port, 2001 and 2002
respectively.
When the ramework runs this will create our two web application workers and
register a service or each on Consul. The application will use the local Consul
node to create the service with the %h conguration shortcut populating the Consul
URL with the right hostname.

Listing 7.52: The Consul plugin URL

url=http://%h.node.consul:8500...

Lastly, we’ve congured an ENTRYPOINT instruction to automatically run our web


application workers.
Let’s build our image now.


Listing 7.53: Building our distributed_app image

$ sudo docker build -t="jamtur01/distributed_app" .

 NOTE You can get our distributed_app Dockerfile and configuration
and application files on the book's site here or on GitHub here.

Building our distributed client

We’re now going to create a Dockerfile to build our distributed_client image.


Let’s create a directory to hold our image rst.

Listing 7.54: Creating a distributed_client Dockerle directory

$ mkdir distributed_client
$ cd distributed_client
$ touch Dockerfile

Now let’s look at the Dockerfile or the distributed_client application.


Listing 7.55: The distributed_client Dockerfile

FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01

RUN apt-get -qqy update
RUN apt-get -qqy install ruby ruby-dev build-essential
RUN gem install --no-ri --no-rdoc json

RUN mkdir -p /opt/distributed_client
ADD client.rb /opt/distributed_client/

WORKDIR /opt/distributed_client

ENTRYPOINT [ "ruby", "/opt/distributed_client/client.rb" ]
CMD []

The Dockerfile installs Ruby and some prerequisite packages and gems. It creates
the /opt/distributed_client directory and makes it the working directory. It
copies our client application code, contained in the client.rb file, into the /opt/distributed_client directory.
Let's take a quick look at our application code now.


Listing 7.56: The distributed_client application

require "rubygems"
require "json"
require "net/http"
require "uri"
require "resolv"

uri = URI.parse("http://consul.service.consul:8500/v1/catalog/
service/distributed_app")

http = Net::HTTP.new(uri.host, uri.port)


request = Net::HTTP::Get.new(uri.request_uri)
response = http.request(request)

while true
if response.body == "{}"
puts "There are no distributed applications registered in
Consul"
sleep(1)
elsif
result = JSON.parse(response.body)
result.each do |service|
puts "Application #{service['ServiceName']} with element #{
service["ServiceID"]} on port #{service["ServicePort"]}
found on node #{service["Node"]} (#{service["Address"]}).
"
dns = Resolv::DNS.new.getresources("distributed_app.service
.consul", Resolv::DNS::Resource::IN::A)
puts "We can also resolve DNS - #{service['ServiceName']}
resolves to #{dns.collect { |d| d.address }.join(" and ")
}."
sleep(1)
end
end
Version: v17.03.0 (38f1319) 300
end

Our client checks the Consul HTTP API and the Consul DNS for the presence of
a service called distributed_app. It queries the host consul.service.consul,
which is the DNS CNAME entry we saw earlier that contains all the A records of
our Consul cluster nodes. This provides us with a simple DNS round robin for our
queries.
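If you want to see what the client will receive, you can query the same catalog endpoint by hand. As a quick sketch, run from one of our Consul nodes (where the HTTP API listens locally on port 8500):

$ curl http://localhost:8500/v1/catalog/service/distributed_app

This should return the same JSON array of service entries that our client parses.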
I no service is present it puts a message to that efect on the console. I it detects
a distributed_app service then it:

• Parses out the JSON output rom the API call and returns some useul inor-
mation to the console.
• Perorms a DNS lookup or any A records or that service and returns them
to the console.

This will allow us to see the results o launching our distributed_app containers
on our Consul cluster.
Lastly our Dockerfile species an ENTRYPOINT instruction that runs the client.rb
application when the container is started.
Let’s build our image now.

Listing 7.57: Building our distributed_client image

$ sudo docker build -t="jamtur01/distributed_client" .

 NOTE You can get our distributed_client Dockerfile and configuration
and application files on the book’s site here or on GitHub here.

Starting our distributed application

Now that we’ve built the required images, we can launch our distributed_app
application container on larry and curly. We’ve assumed that you have Consul running


as we’ve configured it earlier in the chapter. Let’s start by running one application
instance on larry.

Listing 7.58: Starting distributed_app on larry

larry$ sudo docker run --dns=172.17.0.1 -h $HOSTNAME -d --name larry_distributed \
jamtur01/distributed_app

Here we’ve launched the jamtur01/distributed_app image and specified the
--dns flag to add a DNS lookup from the Docker server, here represented by the
docker0 interface bridge IP address of 172.17.0.1. As we bound Consul’s DNS
lookup when we ran the Consul server, this will allow the application to look up
nodes and services in Consul. You should replace this with the IP address of your
own docker0 interface.
We’ve also specied -h ag to set the hostname. This is important because we’re
using this hostname to tell uWSGI what Consul node to register the service on.
We’ve called our container larry_distributed and run it daemonized.
I we check the log output rom the container we should see uWSGI starting our
web application workers and registering the service on Consul.


Listing 7.59: The distributed_app log output

larry$ sudo docker logs larry_distributed
*** Starting uWSGI 2.0.13.1 (64bit) on [Sat Aug 6 13:44:26 2016] ***
compiled with version: 5.4.0 20160609 on 06 August 2016 12:58:54
os: Linux-4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016

. . .

Sat Aug 6 13:44:26 2016 - [consul] workers ready, let's register the service to the agent
spawned uWSGI mule 2 (pid: 13)
[consul] service distributed_app registered successfully
Sat Aug 6 13:44:27 2016 - [consul] workers ready, let's register the service to the agent
[consul] service distributed_app registered successfully

We see a subset of the logs here and that uWSGI has started. The Consul plugin has
constructed a service entry for each distributed_app worker and then registered
them with Consul. If we now look at the Consul web interface, we should be able
to see our new services.


Figure 7.4: The distributed_app service in the Consul web interface.

Let’s start some more web application workers on curly now.

Listing 7.60: Starting distributed_app on curly

curly$ sudo docker run --dns=172.17.0.1 -h $HOSTNAME -d --name curly_distributed \
jamtur01/distributed_app

I we check the logs and the Consul web interace we should now see more services
registered.


Figure 7.5: More distributed_app services in the Consul web interface.

Starting our distributed application client

Now that we’ve got web application workers running on larry and curly, let’s start
our client on moe and see if we can query data from Consul.

Listing 7.61: Starting distributed_client on moe

moe$ sudo docker run -ti --dns=172.17.0.1 --name moe_distributed_client \
jamtur01/distributed_client

This time we’ve run the jamtur01/distributed_client image on moe and created
an interactive container called moe_distributed_client. It should start emitting
log output like so:


Listing 7.62: The distributed_client logs on moe

Application distributed_app with element server1 on port 2001 found on node curly (162.243.170.66).
We can also resolve DNS - distributed_app resolves to 162.243.167.159 and 162.243.170.66.
Application distributed_app with element server2 on port 2002 found on node curly (162.243.170.66).
We can also resolve DNS - distributed_app resolves to 162.243.167.159 and 162.243.170.66.
Application distributed_app with element server1 on port 2001 found on node larry (162.243.167.159).
We can also resolve DNS - distributed_app resolves to 162.243.170.66 and 162.243.167.159.
Application distributed_app with element server2 on port 2002 found on node larry (162.243.167.159).
We can also resolve DNS - distributed_app resolves to 162.243.167.159 and 162.243.170.66.

We see that our distributed_client application has queried the HTTP API and
found service entries for distributed_app and its server1 and server2 workers
on both larry and curly. It has also done a DNS lookup to discover the IP addresses
of the nodes running that service, 162.243.167.159 and 162.243.170.66.
If this were a real distributed application, our client and our workers could take
advantage of this information to configure, connect, and route between elements of
the distributed application. This provides a simple, easy, and resilient way to build
distributed applications running inside separate Docker containers and hosts.

Docker Swarm

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts


into a single virtual Docker host. Swarm has a simple architecture. It clusters
together multiple Docker hosts and serves the standard Docker API on top of that
cluster. This is incredibly powerful because it moves the abstraction of Docker
containers up to the cluster level without you having to learn a new API. This makes
integration with tools that already support the Docker API easy, including the
standard Docker client. To a Docker client, a Swarm cluster is just another Docker
host.
Swarm, like many other Docker tools, follows a design principle of “batteries
included but removable”. This means it ships with tooling and backend integration
for simple use cases, and provides an API for integration with more complex tools
and use cases. Swarm has shipped integrated into Docker since Docker 1.12; prior
to that it was a standalone application licensed under the Apache 2 license.

Understanding the Swarm

A swarm is a cluster of Docker hosts onto which you can deploy services. Since
Docker 1.12 the Docker command line tool has included a swarm mode. This
allows the docker binary to create and manage swarms as well as run local
containers.
A swarm is made up of manager and worker nodes. Managers do the dispatching
and organizing of work on the swarm. Each unit of work is called a task. Managers
also handle all the cluster management functions that keep the swarm healthy and
active. You can have many manager nodes; if there is more than one, the manager
nodes will conduct an election for a leader.
Worker nodes run the tasks dispatched from manager nodes. Out of the box, every
node, managers and workers alike, will run tasks. You can instead configure a
swarm manager node to only perform management activities and not run tasks.
As a task is a pretty atomic unit, swarms use a bigger abstraction, called a service,
as a building block. Services define which tasks are executed on your nodes.
Each service consists of a container image and a series of commands to execute
inside one or more containers on the nodes. You can run services in a number of
modes:


• Replicated services - a swarm manager distributes replica tasks amongst
workers according to a scale you specify.

• Global services - a swarm manager dispatches one task for the service on
every available worker.

The swarm also manages load balancing and DNS much like a local Docker host.
Each swarm can expose ports, much like Docker containers publish ports. Like
container ports, these can be automatically or manually defined. The swarm
handles internal DNS much like a Docker host, allowing services and workers to
be discoverable inside the swarm.
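For example, when we create services later in the chapter, a service can publish a port at creation time. As a sketch, using a hypothetical service named web running the public nginx image:

$ sudo docker service create --name web --replicas 2 --publish 8080:80 nginx

Port 8080 would then be reachable on every node in the swarm, with requests load balanced across the service’s tasks.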

Installing Swarm

The easiest way to install Swarm is to use Docker itself. As a result, Swarm doesn’t
have any more prerequisites than what we saw in Chapter 2. These instructions
assume you’ve installed Docker in accordance with those instructions.

 TIP Prior to Docker 1.12, when Swarm was integrated into Docker, you could
use Swarm via a Docker image provided by the Docker, Inc. team called swarm.
Instructions for installation and usage are available on the Docker Swarm
documentation site.

We’re going to reuse our larry, curly and moe hosts to demonstrate Swarm.
The latest Docker release is already installed on these hosts, and we’re going to
turn them into nodes of a Swarm cluster.

Setting up a Swarm

Now let’s create a Swarm cluster. Each node in our cluster runs a Swarm node
agent. Each agent registers its related Docker daemon with the cluster. Also


available is the Swarm manager that we’ll use to manage our cluster. We’re going
to create two cluster workers and a manager on our three hosts.

Table 7.3: Swarm addresses and roles

Host    IP Address         Role
larry   162.243.167.159    Manager
curly   162.243.170.66     Worker
moe     159.203.191.16     Worker

We also need to make sure some ports are open between all our nodes. We need
to consider the following access:

Table 7.4: Docker Swarm deault ports.

Port Purpose
2377 Cluster Management
7946 + udp Node communication
4789 + udp Overlay network
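How you open these ports will depend on your environment. As a sketch, on an Ubuntu host using the ufw firewall you might run something like:

$ sudo ufw allow 2377/tcp
$ sudo ufw allow 7946/tcp
$ sudo ufw allow 7946/udp
$ sudo ufw allow 4789/udp

Adjust these commands to suit your own firewall tooling and network layout.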

We’re going to start by registering a Swarm on our larry node and use this host
as our Swarm manager. We’re again going to need larry’s public IP address. Let’s
make sure it’s still assigned to an environment variable.

Listing 7.63: Getting public IP on larry again

larry$ PUBLIC_IP="$(ifconfig eth0 | awk -F ' *|:' '/inet addr/{print $4}')"
larry$ echo $PUBLIC_IP
162.243.167.159

Now let’s initialize a swarm on larry using this address.


Listing 7.64: Initializing a swarm on larry

$ sudo docker swarm init --advertise-addr $PUBLIC_IP
Swarm initialized: current node (bu84wfix0h0x31aut8qlpbi9x) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join \
--token SWMTKN-1-2mk0wnb9m9cdwhheoysr3pt8orxku8c7k3x3kjjsxatc5ua72v-776lg9r60gigwb32q329m0dli \
162.243.167.159:2377

To add a manager to this swarm, run the following command:

docker swarm join \
--token SWMTKN-1-2mk0wnb9m9cdwhheoysr3pt8orxku8c7k3x3kjjsxatc5ua72v-78bsc54abf35rhpr3ntbh98t8 \
162.243.167.159:2377

You can see we’ve run a docker command: swarm. We’ve then used the init option
to initialize a swarm and the --advertise-addr flag to specify the management
IP of the new swarm.
We can see the swarm has been started, assigning larry as the swarm manager.
Each swarm has two registration tokens initialized when the swarm begins: one
token for managers and another for worker nodes. Each type of node can use
its token to join the swarm. We can see one of our tokens:
SWMTKN-1-2mk0wnb9m9cdwhheoysr3pt8orxku8c7k3x3kjjsxatc5ua72v-776lg9r60gigwb32q329m0dli

You can see that the output from initializing the swarm has also provided sample
commands for adding workers and managers to the new swarm.


 TIP I you ever need to get this token back again then you can run the docker
swarm join-token worker command on the Swarm manager to retrieve it.
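For example, on larry:

larry$ sudo docker swarm join-token worker

This will print the full docker swarm join command, worker token included, ready to paste onto a new node.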

Let’s look at the state of our Swarm by running the docker info command.

Listing 7.65: The docker info output

larry$ sudo docker info


. . .
Swarm: active
NodeID: bu84wfix0h0x31aut8qlpbi9x
Is Manager: true
ClusterID: 0qtrjqv37gs3yc5f7ywt8nwfq
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot interval: 10000
Heartbeat tick: 1
Election tick: 3
Dispatcher:
Heartbeat period: 5 seconds
CA configuration:
Expiry duration: 3 months
Node Address: 162.243.167.159
. . .

By enabling a swarm you’ll see a new section in the docker info output.
We can also view information on the nodes inside the swarm using the docker
node command.


Listing 7.66: The docker node command

larry$ sudo docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
bu84wfix0h0x31aut8qlpbi9x *  larry     Ready   Active        Leader

The docker node command with the ls subcommand shows the list of nodes in the
swarm. Currently we only have one node, larry, which is active and shows its role
as Leader of the manager nodes.

Let’s add our curly and moe hosts to the swarm as workers. We can use the
command emitted when we initialized the swarm.

Listing 7.67: Adding worker nodes to the cluster

curly$ sudo docker swarm join \
--token SWMTKN-1-2mk0wnb9m9cdwhheoysr3pt8orxku8c7k3x3kjjsxatc5ua72v-776lg9r60gigwb32q329m0dli \
162.243.167.159:2377
This node joined a swarm as a worker.

The docker swarm join command takes a token, in our case the worker token,
and the IP address and port of a Swarm manager node, and adds that Docker host
to the swarm.
We then run the same command on the moe node. Now let’s look at our node list
again on the larry host.


Listing 7.68: Running the docker node command again

larry$ sudo docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
bu84wfix0h0x31aut8qlpbi9x *  larry     Ready   Active        Leader
c6viix7oja1twnyuc8ez7txhd    curly     Ready   Active
dzxrvk6awnegjtj5aixnojetf    moe       Ready   Active

Now we can see two more nodes added to our swarm as workers.

Running a service on your Swarm

With the swarm running, we can now start to run services on it. Remember,
services are a container image and commands that will be executed on our swarm
nodes. Let’s create a simple replica service now. Remember that replica services
run the number of tasks you specify.

Listing 7.69: Creating a swarm service

$ sudo docker service create --replicas 2 --name heyworld ubuntu \
/bin/sh -c "while true; do echo hey world; sleep 1; done"
8bl7yw1z3gzir0rmcvnrktqol

 TIP You can nd the ull list o docker service create ags on the Docker
documentation site.

We’ve used the docker service command with the create keyword. This creates


services on our swarm. We’ve used the --name flag to call the service heyworld.
The heyworld service runs the ubuntu image and a while loop that echoes hey world.
The --replicas flag controls how many tasks are run on the swarm. In this case
we’re running two tasks.
Let’s look at our service using the docker service ls command.

Listing 7.70: Listing the services

$ sudo docker service ls
ID            NAME      REPLICAS  IMAGE   COMMAND
8bl7yw1z3gzi  heyworld  2/2       ubuntu  /bin/sh -c while true; do echo hey world; sleep 1; done

This command lists all services in the swarm. We can see that our heyworld service
is running with two replicas. We can inspect the service in further detail using the
docker service inspect command. We’ve also passed in the --pretty flag to
return the output in an elegant form.


Listing 7.71: Inspecting the heyworld service

$ sudo docker service inspect --pretty heyworld
ID:             8bl7yw1z3gzir0rmcvnrktqol
Name:           heyworld
Mode:           Replicated
 Replicas:      2
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
ContainerSpec:
 Image:         ubuntu
 Args:          /bin/sh -c while true; do echo hey world; sleep 1; done
Resources:

But we still don’t know where the service is running. Let’s look at another command:
docker service ps.

Listing 7.72: Checking the heyworld service process

$ sudo docker service ps heyworld
ID       NAME        IMAGE   NODE   DESIRED STATE  CURRENT STATE
103q...  heyworld.1  ubuntu  larry  Running        Running about a minute ago
6ztf...  heyworld.2  ubuntu  moe    Running        Running about a minute ago

We can see each task, suxed with the task number, and the node it is running
on.


Now, let’s say we wanted to add another task to the service, scaling it up. To do
this we use the docker service scale command.

Listing 7.73: Scaling the heyworld service

$ sudo docker service scale heyworld=3
heyworld scaled to 3

We speciy the service we want to scale and then the new number o tasks we
want run, here 3. The swarm has then let us know it has scaled. Let’s again check
the running processes.

Listing 7.74: Checking the heyworld service process

$ sudo docker service ps heyworld
ID       NAME        IMAGE   NODE   DESIRED STATE  CURRENT STATE
103q...  heyworld.1  ubuntu  larry  Running        Running 5 minutes ago
6ztf...  heyworld.2  ubuntu  moe    Running        Running 5 minutes ago
1gib...  heyworld.3  ubuntu  curly  Running        Running about a minute ago

We can see that our service is now running on a third node.

In addition to running replica services we can also run global services. Rather
than running as many replicas as you specify, global services run on every worker
in the swarm.


Listing 7.75: Running a global service

$ sudo docker service create --name heyworld_global --mode global ubuntu \
/bin/sh -c "while true; do echo hey world; sleep 1; done"

Here we’ve started a global service called heyworld_global. We’ve specified the
--mode flag with a value of global and run the same ubuntu image and the same
command we ran above.
Let’s see the processes for the heyworld_global service using the docker service
ps command.

Listing 7.76: The heyworld_global process

$ sudo docker service ps heyworld_global
ID       NAME                IMAGE   NODE   DESIRED STATE  CURRENT STATE
c8c1...  heyworld_global     ubuntu  moe    Running        Running 30 seconds ago
48wm...  \_ heyworld_global  ubuntu  curly  Running        Running 30 seconds ago
8b8u...  \_ heyworld_global  ubuntu  larry  Running        Running 29 seconds ago

We can see that the heyworld_global service is running on every one of our nodes.
If we want to stop a service, we can run the docker service rm command.


Listing 7.77: Deleting the heyworld service

$ sudo docker service rm heyworld
heyworld

We can now list the running services.

Listing 7.78: Listing the remaining services

$ sudo docker service ls
ID       NAME             REPLICAS  IMAGE   COMMAND
5k3t...  heyworld_global  global    ubuntu  /bin/sh -c...

And we can see that only the heyworld_global service remains running.

 TIP Swarm mode also allows for scaling, draining, and staged upgrades. You
can find some examples of this in the Docker Swarm tutorial.

Orchestration alternatives and components

As we mentioned earlier, Compose and Consul aren’t the only games in town when
it comes to Docker orchestration tools. There’s a fast-growing ecosystem of them.
This is a non-comprehensive list of some of the tools available in that ecosystem.
Not all of them have matching functionality, and broadly they fall into two categories:

• Scheduling and cluster management.


• Service discovery.


 NOTE All o the tools listed are open source under various licenses.

Fleet and etcd

Fleet and etcd are released by the CoreOS team. Fleet is a cluster management tool,
and etcd is a highly available key-value store for shared configuration and service
discovery. Fleet combines systemd and etcd to provide cluster management and
scheduling for containers. Think of it as an extension of systemd that operates at
the cluster level instead of the machine level.

Kubernetes

Kubernetes is a container cluster management tool open sourced by Google. It
allows you to deploy and scale applications using Docker across multiple hosts.
Kubernetes is primarily targeted at applications composed of multiple containers,
such as elastic, distributed micro-services.

Apache Mesos

The Apache Mesos project is a highly available cluster management tool. Since
Mesos 0.20.0 it has had built-in Docker integration to allow you to use containers
with Mesos. Mesos is popular with a number of startups, notably Twitter and Airbnb.

Helios

The Helios project has been released by the team at Spotify and is a Docker
orchestration platform for deploying and managing containers across an entire fleet.
It creates a “job” abstraction that you can deploy to one or more Helios hosts
running Docker.


Centurion

Centurion is ocused on being a Docker-based deployment tool open sourced by


the New Relic team. Centurion takes containers rom a Docker registry and runs
them on a eet o hosts with the correct environment variables, host volume map-
pings, and port mappings. It is designed to help you do continuous deployment
with Docker.

Summary

In this chapter we’ve introduced you to orchestration with Compose. We’ve shown
you how to add a Compose configuration file to create simple application stacks.
We’ve shown you how to run Compose and build those stacks, and how to perform
basic management tasks on them.
We’ve also shown you a service discovery tool, Consul. We’ve installed Consul
onto Docker and created a cluster of Consul nodes. We’ve also demonstrated how
a simple distributed application might work on Docker.
We also took a look at Docker Swarm as a Docker clustering and scheduling tool.
We saw how to install Swarm, how to manage it, and how to schedule workloads
across it.
Finally, we’ve seen some of the other orchestration tools available to us in the
Docker ecosystem.
In the next chapter we’ll look at the Docker API, how we can use it, and how we
can secure connections to our Docker daemon via TLS.



Chapter 8

Using the Docker API

In Chapter 6, we saw some excellent examples of how to run services and build
applications and workflow around Docker. One of those examples, the TProv
application, focused on using the docker binary on the command line and capturing
the resulting output. This is not an elegant approach to integrating with Docker,
especially when Docker comes with a powerful API you can use to integrate directly.
In this chapter, we’re going to introduce you to the Docker API and see how to
make use of it. We’re going to take you through binding the Docker daemon to
a network port. We’ll then take you through the API at a high level and hit on
the key aspects of it. We’ll also look at the TProv application we saw in Chapter 6
and rewrite some portions of it to use the API instead of the docker binary. Lastly,
we’ll look at authenticating the API via TLS.

The Docker APIs

There are three specic APIs in the Docker ecosystem.

• The Registry API - provides integration with the Docker registry, which
stores our images.
• The Docker Hub API - provides integration with the Docker Hub.


• The Docker Remote API - provides integration with the Docker daemon.

All three APIs are broadly RESTul. In this chapter, we’ll ocus on the Remote API
because it is key to any programmatic integration and interaction with Docker.

First steps with the Remote API

Let’s explore the Docker Remote API and see its capabilities. Firstly, we need
to remember the Remote API is provided by the Docker daemon. By default, the
Docker daemon binds to a socket, unix:///var/run/docker.sock, on the host on
which it is running. The daemon runs with root privileges so as to have the access
needed to manage the appropriate resources. As we also discovered in Chapter 2,
if a group named docker exists on your system, Docker will apply ownership of
the socket to that group. Hence, any user that belongs to the docker group can
run Docker without needing root privileges.
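For example, to add a user to the docker group you might run something like this, substituting your own username:

$ sudo usermod -aG docker jamtur01

The user will need to log out and back in for the new group membership to take effect.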

 WARNING Remember that although the docker group makes life easier,
it is still a security exposure. The docker group is root-equivalent and should be
limited to those users and applications that absolutely need it.

This works ne i we’re querying the API rom the same host running Docker, but
i we want remote access to the API, we need to bind the Docker daemon to a
network interace. This is done by passing or adjusting the -H ag to the Docker
daemon.
I you want to use the Docker API locally we use the curl command to query it,
like so:


Listing 8.1: Querying the Docker API locally

$ curl --unix-socket /var/run/docker.sock http:/info
{"ID":"PH4R:BT7H:44F6:GQGP:FS2O:7OZO:HQ2P:NSVF:MK27:NBGZ:N3VP:K2O5","Containers":3,"ContainersRunning":3,"ContainersPaused":0,"ContainersStopped":0,"Images":3,"
. . .
}

On most distributions, we can bind the Docker daemon to a network interface by
editing the daemon’s startup configuration files. For older Ubuntu or Debian
releases, this would be the /etc/default/docker file; for those releases with Upstart,
it would be the /etc/init/docker.conf file; for systemd releases it’ll be
/lib/systemd/system/docker.service. For Red Hat, Fedora, and related distributions,
it would be the /etc/sysconfig/docker file; for those releases with systemd, it is
the /usr/lib/systemd/system/docker.service file.
Let’s bind the Docker daemon to a network interface on a Red Hat derivative
running systemd. We’ll edit the /usr/lib/systemd/system/docker.service file
and change:

Listing 8.2: Deault systemd daemon start options

ExecStart=/usr/bin/dockerd --selinux-enabled

To:


Listing 8.3: Network binding systemd daemon start options

ExecStart=/usr/bin/dockerd --selinux-enabled -H tcp://0.0.0.0:2375

This will bind the Docker daemon to all interfaces on the host using port 2375.
We then need to reload and restart the daemon using the systemctl command.

Listing 8.4: Reloading and restarting the Docker daemon

$ sudo systemctl --system daemon-reload
$ sudo systemctl restart docker

 TIP You’ll also need to ensure that any firewall on the Docker host or between
you and the host allows TCP communication to the IP address on port 2375.

We now test that this is working using the docker client binary, passing the -H flag
to specify our Docker host. Let’s connect to our Docker daemon from a remote
host.


Listing 8.5: Connecting to a remote Docker daemon

$ sudo docker -H docker.example.com:2375 info
Containers: 0
Images: 0
Driver: devicemapper
 Pool Name: docker-252:0-133394-pool
 Data file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
. . .

This assumes the Docker host is called docker.example.com; we’ve used the -H
flag to specify this host. Docker will also honor the DOCKER_HOST environment
variable rather than requiring the continued use of the -H flag.

Listing 8.6: Revisiting the DOCKER_HOST environment variable

$ export DOCKER_HOST="tcp://docker.example.com:2375"

 WARNING Remember this connection is unauthenticated and open to
the world! Later in this chapter, we’ll see how we add authentication to this
connection.


Testing the Docker Remote API

Now that we’ve established and confirmed connectivity to the Docker daemon via
the docker binary, let’s try to connect directly to the API. To do so, we’re going to
use the curl command. We’re going to connect to the info API endpoint, which
provides roughly the same information as the docker info command.

Listing 8.7: Using the ino API endpoint

$ curl http://docker.example.com:2375/info | python3 -mjson.tool


{
"ID": "PH4R:BT7H:44F6:GQGP:FS2O:7OZO:HQ2P:NSVF:MK27:NBGZ:N3VP
:K2O5",
"Containers": 7,
"ContainersRunning": 1,
"ContainersPaused": 0,
"ContainersStopped": 6,
"Images": 3,
"Driver": "aufs",
"DriverStatus": [
[
. . .

We’ve connected to the Docker API on http://docker.example.com:2375
using the curl command, and we’ve specified the path to the Docker API:
docker.example.com on port 2375 with the endpoint info.
We see that the API has returned a JSON hash, of which we’ve included a sample,
containing the system information for the Docker daemon. This demonstrates that
the Docker API is working and we’re getting some data back. We’ve passed the
JSON through Python’s JSON tool to prettify it.


Managing images with the API

Let’s start with some API basics: working with Docker images. We’re going to
start by getting a list of all the images on our Docker daemon.

Listing 8.8: Getting a list o images via API

$ curl http://docker.example.com:2375/images/json | python3 -


mjson.tool
[
{
"Id": "sha256:
b608dbb10e2564f5bd0eef045bf297e56b1149edc70bece54fef4b217261a473
",
"ParentId": "",
"RepoTags": [
"jamtur01/distributed_app:latest"
],
"RepoDigests": [
"jamtur01/distributed_app@sha256:
ecc6b617e9c776d8bd7ed281a55b02e9214d701cad72b9628f5668edfbb86a26
"
],
"Created": 1470488372,
"Size": 469434429,
"VirtualSize": 469434429,
"Labels": {}
},
. . .
]

We’ve used the /images/json endpoint, which will return a list of all images on
the Docker daemon. It’ll give us much the same information as the docker images


command. We can also query specic images via ID, much like docker inspect
on an image ID.

Listing 8.9: Getting a specic image

$ curl http://docker.example.com:2375/images/
b608dbb10e2564f5bd0eef045bf297e56b1149edc70bece54fef4b217261a473
/json | python3 -mjson.tool
{
"Id": "sha256:
b608dbb10e2564f5bd0eef045bf297e56b1149edc70bece54fef4b217261a473
",
"RepoTags": [
"jamtur01/distributed_app:latest"
],
"RepoDigests": [
"jamtur01/distributed_app@sha256:
ecc6b617e9c776d8bd7ed281a55b02e9214d701cad72b9628f5668edfbb86a26
"
],
"Parent": "",
"Comment": "",
"Created": "2016-08-06T12:59:32.957396238Z",
. . .
}
}

Here we see a subset of the output of inspecting our jamtur01/distributed_app
image. And finally, like the command line, we can search for images on the
Docker Hub.


Listing 8.10: Searching or images with the API

$ curl "http://docker.example.com:2375/images/search?term=
jamtur01" | python3 -mjson.tool
[
{
"star_count": 0,
"is_official": false,
"name": "jamtur01/docker-jenkins-sample",
"is_automated": true,
"description": ""
},
{
"star_count": 5,
"is_official": false,
"name": "jamtur01/docker-presentation",
"is_automated": true,
"description": ""
},
. . .
]

Here we’ve searched or all images containing the term jamtur01 and displayed a
subset o the output returned. This is just a sampling o the actions we can take
with the Docker API. We can also build, update, and remove images.
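For example, deleting an image works much like docker rmi does on the command line. A quick sketch, removing the image we listed above:

$ curl -X DELETE http://docker.example.com:2375/images/jamtur01/distributed_app

The API returns a JSON list of the untagged and deleted layers.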

Managing containers with the API

The Docker Remote API also exposes all of the container operations available to
us on the command line. We can list running containers using the /containers/json
endpoint, much as we would with the docker ps command.

Version: v17.03.0 (38f1319) 329


Chapter 8: Using the Docker API

Listing 8.11: Listing running containers

$ curl -s "http://docker.example.com:2375/containers/json" |
python3 -mjson.tool
[
{
"Id": "
d580b605fa1bcd210af0d2fe28e50a018f9ea546b56e8b28806d8dc16596340e
",
"Names": [
"/heyworld_global.0.bbctscdrhkro371mkieb0roid"
],
"Image": "ubuntu:latest",
"ImageID": "sha256:42118
e3df429f09ca581a9deb3df274601930e428e452f7e4e9f1833c56a100a
",
"Command": "/bin/sh -c 'while true; do echo hey world;
sleep 1; done'",
"Created": 1470676972,
"Ports": [],
"Labels": {
"com.docker.swarm.node.id": "
c6viix7oja1twnyuc8ez7txhd",
"com.docker.swarm.service.id": "5
k3tw55i050qqh16ob9651pqx",
"com.docker.swarm.service.name": "heyworld_global",
"com.docker.swarm.task": "",
"com.docker.swarm.task.id": "
bbctscdrhkro371mkieb0roid",
"com.docker.swarm.task.name": "heyworld_global.0"
},
"State": "running",
"Status": "Up 11 hours",
"HostConfig": {
"NetworkMode": "default"
Version: v17.03.0 (38f1319) 330
},
. . .
}
]

Our query will show all running containers on the Docker host, in our case, a
single container. To see both running and stopped containers, we can add the all
flag to the endpoint and set it to 1.

Listing 8.12: Listing all containers via the API

http://docker.example.com:2375/containers/json?all=1

We can also use the API to create containers by using a POST request to the
/containers/create endpoint. Here is the simplest possible container creation
API call.

Listing 8.13: Creating a container via the API

$ curl -X POST -H "Content-Type: application/json" \
http://docker.example.com:2375/containers/create \
-d '{
"Image":"jamtur01/jekyll"
}'
{"Id":"591ba02d8d149e5ae5ec2ea30ffe85ed47558b9a40b7405e3b71553d9e59bed3","Warnings":null}

We call the /containers/create endpoint and POST a JSON hash containing an
image name to the endpoint. The API returns the ID of the container we’ve just
created and potentially any warnings. This will create a container.
We can further configure our container creation by adding key/value pairs to our
JSON hash.


Listing 8.14: Conguring container launch via the API

$ curl -X POST -H "Content-Type: application/json" \
"http://docker.example.com:2375/containers/create?name=jekyll" \
-d '{
"Image":"jamtur01/jekyll",
"Hostname":"jekyll"
}'
{"Id":"591ba02d8d149e5ae5ec2ea30ffe85ed47558b9a40b7405e3b71553d9e59bed3","Warnings":null}

Here we’ve specied the Hostname key with a value o jekyll to set the hostname
o the resulting container.
To start the container we use the /containers/start endpoint.

Listing 8.15: Starting a container via the API

$ curl -X POST -H "Content-Type: application/json" \
http://docker.example.com:2375/containers/591ba02d8d149e5ae5ec2ea30ffe85ed47558b9a40b7405e3b71553d9e59bed3/start \
-d '{
"PublishAllPorts":true
}'

In combination, this provides the equivalent of running:


Listing 8.16: API equivalent or docker run command

$ sudo docker run -P jamtur01/jekyll

We can also inspect the resulting container via the /containers/<id>/json endpoint.

Listing 8.17: Inspecting a container via the API

$ curl http://docker.example.com:2375/containers/591ba02d8d149e5ae5ec2ea30ffe85ed47558b9a40b7405e3b71553d9e59bed3/json | python3 -mjson.tool
{
    "Args": [
        "build",
        "--destination=/var/www/html"
    ],
. . .
    "Hostname": "591ba02d8d14",
    "Image": "jamtur01/jekyll",
. . .
    "Id": "591ba02d8d149e5ae5ec2ea30ffe85ed47558b9a40b7405e3b71553d9e59bed3",
    "Image": "29d4355e575cff59d7b7ad837055f231970296846ab58a037dd84be520d1cc31",
. . .
    "Name": "/hopeful_davinci",
. . .
}


Here we see we’ve queried our container using the container ID and shown a
sampling of the data available to us.

Improving the TProv application


Now let’s look at one o the methods inside the TProv application that we used
in Chapter 6. We’re going to look specically at the methods which create and
delete Docker containers.

Listing 8.18: The legacy TProv container launch methods

def create_instance(name)
  cid = `docker run -P --volumes-from #{name} -d -t jamtur01/tomcat7 2>&1`.chop
  [$?.exitstatus == 0, cid]
end

 NOTE You can see the previous TProv code on the book’s site here or on GitHub.

Pretty crude, eh? We’re directly calling out to the docker binary and capturing its
output. There are lots of reasons that this will be problematic, not least of which
is that you can only run the TProv application somewhere with the Docker client
installed.
We can improve on this interface by using the Docker API via one of its client
libraries, in this case the Ruby Docker-API client library.

 TIP You can nd a ull list o the available client libraries here. There are
client libraries or Ruby, Python, Node.JS, Go, Erlang, Java, and others.


Let’s start by looking at how we establish our connection to the Docker API.

Listing 8.19: The Docker Ruby client

require 'docker'
. . .

module TProvAPI
  class Application < Sinatra::Base

    . . .

    Docker.url = ENV['DOCKER_URL'] || 'http://localhost:2375'
    Docker.options = {
      :ssl_verify_peer => false
    }

We’ve added a require or the docker-api gem. We’d need to install this gem
rst to get things to work or add it to the TProv application’s gem specication.
We can then use the Docker.url method to speciy the location o the Docker
host we wish to use. In our code, we speciy this via an environment variable,
DOCKER_URL, or use a deault o http://localhost:2375.

We’ve also used the Docker.options to speciy options we want to pass to the
Docker daemon connection.
We can test this idea using the IRB shell locally. Let’s try that now. You’ll need
to have Ruby installed on the host on which you are testing. Let’s assume we’re
using a Fedora host.


Listing 8.20: Installing the Docker Ruby client API prerequisites

$ sudo yum -y install ruby ruby-irb
. . .
$ sudo gem install docker-api json
. . .

Now we can use irb to test our Docker API connection.


Listing 8.21: Testing our Docker API connection via irb

$ irb
irb(main):001:0> require 'docker'; require 'pp'
=> true
irb(main):002:0> Docker.url = 'http://docker.example.com:2375'
=> "http://docker.example.com:2375"
irb(main):003:0> Docker.options = { :ssl_verify_peer => false }
=> {:ssl_verify_peer=>false}
irb(main):004:0> pp Docker.info
{"Containers"=>9,
 "Debug"=>0,
 "Driver"=>"aufs",
 "DriverStatus"=>[["Root Dir", "/var/lib/docker/aufs"], ["Dirs", "882"]],
 "ExecutionDriver"=>"native-0.2",
. . .
irb(main):005:0> pp Docker.version
{"ApiVersion"=>"1.12",
 "Arch"=>"amd64",
 "GitCommit"=>"990021a",
 "GoVersion"=>"go1.2.1",
 "KernelVersion"=>"3.10.0-33-generic",
 "Os"=>"linux",
 "Version"=>"1.0.1"}
. . .

We’ve launched irb and loaded the docker gem (via a require) and the pp library
to help make our output look nicer. We’ve then used the Docker.url and
Docker.options methods to set the target Docker host and our required options
(here disabling SSL peer verification to use TLS, but not authenticate the client).


We’ve then run two global methods, Docker.info and Docker.version, which
provide the Ruby client API equivalents of the binary commands docker info
and docker version.
We can now update our TProv container management methods to use the API via
the docker-api client library. Let’s look at some code that does this now.

Listing 8.22: Our updated TProv container management methods

def get_war(name, url)
  container = Docker::Container.create('Cmd' => url, 'Image' => 'jamtur01/fetcher', 'name' => name)
  container.start
  container.id
end

def create_instance(name)
  container = Docker::Container.create('Image' => 'jamtur01/tomcat7')
  container.start('PublishAllPorts' => true, 'VolumesFrom' => name)
  container.id
end

def delete_instance(cid)
  container = Docker::Container.get(cid)
  container.kill
end

You can see we’ve replaced the previous shelling out to the docker binary with a
rather cleaner implementation using the Docker API. Our get_war method creates
and starts our jamtur01/fetcher container using the Docker::Container.create
and Docker::Container.start methods. The create_instance method does the
same for the jamtur01/tomcat7 container. Finally, our delete_instance method has been


updated to retrieve a container using the container ID via the Docker::Container.get
method. We then kill the container with the Docker::Container.kill method.
You can install the API-enabled version of the TProv application via gem to see it
in action.

Listing 8.23: Installing TProvAPI

$ sudo gem install tprov-api

 NOTE You can see the updated TProv code on the book’s site or on GitHub.

Authenticating the Docker Remote API

Whilst we’ve shown that we can connect to the Docker Remote API, that means
that anyone else can also connect to the API. That poses a bit of a security issue.
The Remote API has an authentication mechanism that has been available since
the 0.9 release of Docker. The authentication uses TLS/SSL certificates to secure
your connection to the API.

 TIP This authentication applies to more than just the API. By turning this
authentication on, you will also need to configure your Docker client to support
TLS authentication. We’ll see how to do that in this section, too.

There are a couple o ways we could authenticate our connection, including using
a ull PKI inrastructure, either creating our own Certicate Authority (CA) or


using an existing CA. We’re going to create our own certificate authority because
it is a simple and fast way to get started.

 WARNING This relies on a local CA running on your Docker host. This
is not as secure as using a full-fledged Certificate Authority.

Create a Certicate Authority

We’re going to quickly step through creating the required CA certicate and key,
as it is a pretty standard process on most platorms. It requires the openssl binary
as a prerequisite.

Listing 8.24: Checking or openssl

$ which openssl
/usr/bin/openssl

Let’s create a directory on our Docker host to hold our CA and related materials.

Listing 8.25: Create a CA directory

$ sudo mkdir /etc/docker

Now let’s create our CA.

We first generate a private key.


Listing 8.26: Generating a private key

$ cd /etc/docker
$ echo 01 | sudo tee ca.srl
$ sudo openssl genrsa -des3 -out ca-key.pem
Generating RSA private key, 512 bit long modulus
....++++++++++++
.................++++++++++++
e is 65537 (0x10001)
Enter pass phrase for ca-key.pem:
Verifying - Enter pass phrase for ca-key.pem:

We’ll speciy a passphrase or the CA key, make note o this phrase, and secure it.
We’ll need it to create and sign certicates with our new CA.
This also creates a new le called ca-key.pem. This is our CA key; we’ll not want
to share it or lose it, as it is integral to the security o our solution.
Now let’s create a CA certicate.


Listing 8.27: Creating a CA certicate

$ sudo openssl req -new -x509 -days 365 -key ca-key.pem -out ca.
pem
Enter pass phrase for ca-key.pem:
You are about to be asked to enter information that will be
incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished
Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:docker.example.com
Email Address []:

This will create the ca.pem le that is the certicate or our CA. We’ll need this
later to veriy our secure connection.
Now that we have our CA, let’s use it to create a certificate and key for our Docker
server.

Create a server certicate signing request and key

We can use our new CA to sign and validate a certicate signing request or CSR
and key or our Docker server. Let’s start with creating a key or our server.


Listing 8.28: Creating a server key

$ sudo openssl genrsa -des3 -out server-key.pem
Generating RSA private key, 512 bit long modulus
...................++++++++++++
...............++++++++++++
e is 65537 (0x10001)
Enter pass phrase for server-key.pem:
Verifying - Enter pass phrase for server-key.pem:

This will create our server key, server-key.pem. As above, we need to keep this
key safe: it is what secures our Docker server.

 NOTE Speciy any pass phrase here. We’re going to strip it out beore we
use the key. You’ll only need it or the next couple o steps.

Next let’s create our server certificate signing request (CSR).


Listing 8.29: Creating our server CSR

$ sudo openssl req -new -key server-key.pem -out server.csr


Enter pass phrase for server-key.pem:
You are about to be asked to enter information that will be
incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished
Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:*
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

This will create a le called server.csr. This is the request that our CA will sign
to create our server certicate. The most important option here is Common Name or
CN. This should either be the FQDN (ully qualied domain name) o the Docker
server (i.e., what is resolved to in DNS; or example, docker.example.com) or *,
which will allow us to use the server certicate on any server.
We also know olks connect to our host via IP address so we need to congure or


that too.

Listing 8.30: Connect via IP address

$ echo subjectAltName = IP:x.x.x.x,IP:127.0.0.1 > extfile.cnf

Replacing x.x.x.x with the IP address(es) of your Docker daemon.

Now let’s sign our CSR and generate our server certificate.

Listing 8.31: Signing our CSR

$ sudo openssl x509 -req -days 365 -in server.csr -CA ca.pem \
-CAkey ca-key.pem -out server-cert.pem -extfile extfile.cnf
Signature ok
subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=*
Getting CA Private Key
Enter pass phrase for ca-key.pem:

We’ll enter the passphrase of the CA’s key file, and a file called server-cert.pem
will be generated. This is our server’s certificate.
Now let’s strip out the passphrase from our server key. We can’t enter one when
the Docker daemon starts, so we need to remove it.

Listing 8.32: Removing the passphrase rom the server key

$ sudo openssl rsa -in server-key.pem -out server-key.pem


Enter pass phrase for server-key.pem:
writing RSA key

Now let’s add some tighter permissions to the files to better protect them.


Listing 8.33: Securing the key and certicate on the Docker server

$ sudo chmod 0600 /etc/docker/server-key.pem /etc/docker/server-


cert.pem \
/etc/docker/ca-key.pem /etc/docker/ca.pem

Conguring the Docker daemon

Now that we’ve got our certicate and key, let’s congure the Docker daemon to
use them. As we did to expose the Docker daemon to a network socket, we’re
going to edit its startup conguration. As beore, or Ubuntu or Debian, we’ll edit
the /etc/default/docker le; or those distributions with Upstart, it’s the /etc/
init/docker.conf le. For Red Hat, Fedora, and related distributions, we’ll edit
the /etc/sysconfig/docker le; or those releases with Systemd, it’s the /usr/
lib/systemd/system/docker.service le.

Let’s again assume a Red Hat derivative running Systemd and edit the /usr/lib/
systemd/system/docker.service le:

Listing 8.34: Enabling Docker TLS on systemd

ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --tlsverify \
--tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem \
--tlskey=/etc/docker/server-key.pem

 NOTE You can see that we’ve used port number 2376; this is the default
TLS/SSL port for Docker. You should only use 2375 for unauthenticated
connections.


This code will enable TLS using the --tlsverify flag. We’ve also specified the
location of our CA certificate, certificate, and key using the --tlscacert, --tlscert,
and --tlskey flags, respectively. There are a variety of other TLS options that we
could also use.

 TIP You can use the --tls ag to enable TLS, but not client-side authentica-
tion.

We then need to reload and restart the daemon using the systemctl command.

Listing 8.35: Reloading and restarting the Docker daemon

$ sudo systemctl --system daemon-reload
$ sudo systemctl restart docker

Creating a client certicate and key

Our server is now TLS enabled; next, we need to create and sign a certicate and
key to secure our Docker client. Let’s start with a key or our client.

Listing 8.36: Creating a client key

$ sudo openssl genrsa -des3 -out client-key.pem
Generating RSA private key, 512 bit long modulus
..........++++++++++++
.......................................++++++++++++
e is 65537 (0x10001)
Enter pass phrase for client-key.pem:
Verifying - Enter pass phrase for client-key.pem:


This will create our key le client-key.pem. Again, we’ll need to speciy a tem-
porary passphrase to use during the creation process.
Now let’s create a client CSR.

Listing 8.37: Creating a client CSR

$ sudo openssl req -new -key client-key.pem -out client.csr
Enter pass phrase for client-key.pem:
You are about to be asked to enter information that will be
incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished
Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []: Docker daemon host FQDN
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Replace the Docker daemon host FQDN with the fully qualified domain name of
your Docker daemon host.


We next need to enable client authentication for our key by adding some extended
SSL attributes.

Listing 8.38: Adding Client Authentication attributes

$ echo extendedKeyUsage = clientAuth > extfile.cnf

Now let’s sign our CSR with our CA.

Listing 8.39: Signing our client CSR

$ sudo openssl x509 -req -days 365 -in client.csr -CA ca.pem \
-CAkey ca-key.pem -out client-cert.pem -extfile extfile.cnf
Signature ok
subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
Getting CA Private Key
Enter pass phrase for ca-key.pem:

Again, we use the CA key’s passphrase and generate another certificate:
client-cert.pem.
Finally, we strip the passphrase from our client-key.pem file to allow us to use
it with the Docker client.

Listing 8.40: Stripping out the client key pass phrase

$ sudo openssl rsa -in client-key.pem -out client-key.pem
Enter pass phrase for client-key.pem:
writing RSA key


Conguring our Docker client or authentication

Next let’s congure our Docker client to use our new TLS conguration. We need
to do this because the Docker daemon now expects authenticated connections or
both the client and the API.
We’ll need to copy our ca.pem, client-cert.pem, and client-key.pem les to the
host on which we’re intending to run the Docker client.

 TIP Remember that these keys provide root-level access to the Docker
daemon. You should protect them carefully.

Let’s install them into the .docker directory. This is the default location where
Docker will look for certificates and keys. Docker will specifically look for key.pem,
cert.pem, and our CA certificate: ca.pem.

Listing 8.41: Copying the key and certicate on the Docker client

$ mkdir -p ~/.docker/
$ cp ca.pem ~/.docker/ca.pem
$ cp client-key.pem ~/.docker/key.pem
$ cp client-cert.pem ~/.docker/cert.pem
$ chmod 0600 ~/.docker/key.pem ~/.docker/cert.pem

Now let’s test the connection to the Docker daemon from the client. To do this,
we’re going to use the docker info command.


Listing 8.42: Testing our TLS-authenticated connection

$ sudo docker -H=docker.example.com:2376 --tlsverify info
Containers: 33
Images: 104
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Dirs: 170
Execution Driver: native-0.1
Kernel Version: 3.10.0-33-generic
Username: jamtur01
Registry: [https://index.docker.io/v1/]
WARNING: No swap limit support

We see that we’ve specified the -H flag to tell the client which host it should
connect to. We could instead specify the host using the DOCKER_HOST environment
variable if we didn’t want to specify the -H flag each time. We’ve also specified the
--tlsverify flag, which enables our TLS connection to the Docker daemon. We
don’t need to specify any certificate or key files, because Docker has automatically
looked these up in our ~/.docker/ directory. If we did need to specify these files,
we could with the --tlscacert, --tlscert, and --tlskey flags.
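If you prefer, you can set this up once per shell session with environment variables instead of flags; Docker honors DOCKER_HOST and DOCKER_TLS_VERIFY:

$ export DOCKER_HOST="tcp://docker.example.com:2376"
$ export DOCKER_TLS_VERIFY=1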
So what happens i we don’t speciy a TLS connection? Let’s try again now without
the --tlsverify ag.

Listing 8.43: Testing our TLS-authenticated connection

$ sudo docker -H=docker.example.com:2376 info
2014/04/13 17:50:03 malformed HTTP response "\x15\x03\x01\x00\x02\x02"

Ouch. That’s not good. If you see an error like this, you know you’ve probably not


enabled TLS on the connection, you’ve not specified the right TLS configuration,
or you have an incorrect certificate or key.
Assuming you’ve got everything working, you should now have an authenticated
Docker connection!

Summary

In this chapter, we’ve been introduced to the Docker Remote API. We’ve also seen
how to secure the Docker Remote API via SSL/TLS certificates. We’ve explored the
Docker API and how to use it to manage images and containers. We’ve also seen
how to use one of the Docker API client libraries to rewrite our TProv application
to directly use the Docker API.
In the next and last chapter, we’ll look at how you can contribute to Docker.

Chapter 9

Getting help and extending Docker

Docker is in its infancy -- sometimes things go wrong. This chapter will talk about:

• How and where to get help.


• Contributing fixes and features to Docker.

You'll find out where to find Docker folks and the best way to ask for help. You'll
also learn how to engage with Docker's developer community: there's a huge
amount of development effort surrounding Docker with hundreds of committers
in the open-source community. If you're excited by Docker, then it's easy to make
your own contribution to the project. This chapter will also cover the basics of
contributing to the Docker project, how to build a Docker development environment,
and how to create a good pull request.

 NOTE This chapter assumes some basic familiarity with Git, GitHub, and
Go, but doesn't assume you're a fully fledged developer.


Getting help

The Docker community is large and friendly. Most Docker folks congregate in
three places:

 NOTE Docker, Inc. also sells enterprise support for Docker. You can find
the information on the Support page.

The Docker orums

There is a Docker forum available.

Docker on IRC

The Docker community also has two strong IRC channels: #docker and
#docker-dev. Both are on the Freenode IRC network.

The #docker channel is generally or user help and general Docker issues, whereas
#docker-dev is where contributors to Docker’s source code gather.

You can nd logs or #docker at https://botbot.me/freenode/docker/ and or


#docker-dev at https://botbot.me/freenode/docker-dev/.

Docker on GitHub

Docker (and most of its components and ecosystem) is hosted on GitHub. The
principal repository for Docker itself is https://github.com/docker/docker/.
Other repositories of note are:

• distribution - The stand-alone Docker registry and distribution tools.


• runc - The Docker container format and CLI tools.


• Docker Swarm - Docker's orchestration framework.


• Docker Compose - The Docker Compose tool.

Reporting issues for Docker

Let's start with the basics around submitting issues and patches and interacting
with the Docker community. When reporting issues with Docker, it's important to
be an awesome open-source citizen and provide good information that can help
the community resolve your issue. When you log a ticket, please remember to
include the following background information (a quick way to capture it all is
sketched after this list):

• The output of docker info and docker version.
• The output of uname -a.
• Your operating system and version (e.g., Ubuntu 16.04).
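
A quick way to capture all three in one file might be the following; the
docker-issue.txt file name is just an example:

$ (docker version; docker info; uname -a) > docker-issue.txt 2>&1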

Then provide a detailed explanation of your problem and the steps others can take
to reproduce it.
I you’re logging a eature request, careully explain what you want and how you
propose it might work. Think careully about generic use cases: is your eature
something that will make lie easier or just you or or everyone?
Please take a moment to check that an issue doesn’t already exist documenting
your bug report or eature request. I it does, you can add a quick ”+1” or ”I have
this problem too”, or i you eel your input expands on the proposed implementa-
tion or bug x, then add a more substantive update.
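
GitHub's issue search is the quickest way to check. For example, a URL like this
(the search terms are only an example) lists existing issues mentioning a keyword:

https://github.com/docker/docker/issues?q=is%3Aissue+malformed+HTTP+response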

Setting up a build environment

To make it easier to contribute to Docker, we're going to show you how to build
a development environment. The development environment provides all of the
required dependencies and build tooling to work with Docker.


Install Docker

You must first install Docker in order to get a development environment, because
the build environment is a Docker container in its own right. We use Docker to
build and develop Docker. Use the steps from Chapter 2 to install Docker on your
local host. You should install the most recent version of Docker available.

Install source and build tools

Next, you need to install Make and Git so that we can check out the Docker source
code and run the build process. The source code is stored on GitHub, and the build
process is built around a Makefile.
On Ubuntu, we would install the git and make packages.

Listing 9.1: Installing git on Ubuntu

$ sudo apt-get -y install git make

On Red Hat and derivatives we would do the following:

Listing 9.2: Installing git on Red Hat et al

$ sudo yum install git make

Check out the source

Now let's check out the Docker source code (or, if you're working on another
component, the relevant source code repository) and change into the resulting
directory.


Listing 9.3: Check out the Docker source code

$ git clone https://github.com/docker/docker.git


$ cd docker

You can now work on the source code and fix bugs, update documentation, and
write awesome features!

Contributing to the documentation

One o the great ways anyone, even i you’re not a developer or skilled in Go, can
contribute to Docker is to update, enhance, or develop new documentation. The
Docker documentation lives on the Docs website. The source documentation, the
theme, and the tooling that generates this site are stored in the Docker repo on
GitHub.
You can nd specic guidelines and a basic style guide or the documentation at:

• https://github.com/docker/docker/blob/master/docs/README.md.

You can build the documentation locally using Docker itself.


Make any changes you want to the documentation, and then you can use the make
command to build and review the new or changed documentation.


Listing 9.4: Building the Docker documentation

$ cd docker
$ make docs
. . .
docker run --rm -it -e AWS_S3_BUCKET -p 8000:8000 "docker-docs:
master" mkdocs serve
Running at: http://0.0.0.0:8000/
Live reload enabled.
Hold ctrl+c to quit.

You can then browse to a local version of the Docker documentation on port 8000.

Build the environment

I you want to contribute to Docker Engine, you can now use make and Docker
to build the development environment. The Docker source code ships with a
Dockerfile that we use to install all the build and runtime dependencies necessary
to build and test Docker.

Listing 9.5: Building the Docker environment

$ sudo make build

 TIP This command will take some time to complete when you first execute
it. It may also require a host with at least 2 GB of RAM to run the development
build.

This command will create a full, running Docker development environment. It


will upload the current source directory as build context for a Docker image, build
the image containing Go and any other required dependencies, and then launch
a container from this image.
Using this development image, we also create a Docker binary to test any fixes or
features. We do this using the make tool again.

Listing 9.6: Building the Docker binary

$ sudo make binary

This command will create docker and dockerd binaries in a volume at ./bundles
/<version>-dev/binary-client/ and ./bundles/<version>-dev/binary-daemon/
respectively. For example, we would create a client binary and associated
checksums like so:

Listing 9.7: The Docker dev client binary

$ ls -l bundles/1.13.0-dev/binary-client/
total 15344
lrwxrwxrwx 1 root root 17 Aug 9 13:59 docker -> docker-
1.13.0-dev
-rwxr-xr-x 1 root root 15700192 Aug 9 13:59 docker-1.13.0-dev
-rw-r--r-- 1 root root 52 Aug 9 13:59 docker-1.13.0-dev.
md5
-rw-r--r-- 1 root root 84 Aug 9 13:59 docker-1.13.0-dev.
sha256

You can then use this binary for live testing by running it instead of the local
Docker daemon. To do so, we need to stop Docker and run this new binary instead.


Listing 9.8: Using the development daemon

$ sudo service docker stop


$ ~/docker/bundles/1.13.0-dev/binary-daemon/dockerd

This will run the development Docker daemon interactively. You can also
background the daemon if you wish, as sketched below.
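
If you do background it, a minimal sketch might look like this (the log file
location is just an example):

$ sudo ~/docker/bundles/1.13.0-dev/binary-daemon/dockerd > ~/dockerd-dev.log 2>&1 &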
We can then test the docker binary by running it against this daemon.

Listing 9.9: Using the development binary

$ ~/docker/bundles/1.13.0-dev/binary-client/docker version
Client:
Version: 1.13.0-dev
API version: 1.25
Go version: go1.6.3
Git commit: 04e021d
Built: Tue Aug 9 13:58:52 2016
OS/Arch: linux/amd64

Server:
Version: 1.13.0-dev
API version: 1.25
Go version: go1.6.3
Git commit: 04e021d
Built: Tue Aug 9 13:58:52 2016
OS/Arch: linux/amd64

You can see that we're running a 1.13.0-dev client, this binary, against the
1.13.0-dev daemon we just started. You can use this combination to test and ensure any
changes you've made to the Docker source are working correctly.


Running the tests

It's also important to ensure that all of the Docker tests pass before contributing
code back upstream. To execute all the tests, you need to run this command:

Listing 9.10: Running the Docker tests

$ sudo make test

This command will again upload the current source as build context to an image
and then create a development image. A container will be launched from this
image, and the tests will run inside it. Again, this may take some time for the
initial build.
If the tests are successful, then the end of the output should look something like
this:


Listing 9.11: Docker test output

. . .
[PASSED]: save - save a repo using stdout
[PASSED]: load - load a repo using stdout
[PASSED]: save - save a repo using -o
[PASSED]: load - load a repo using -i
[PASSED]: tag - busybox -> testfoobarbaz
[PASSED]: tag - busybox's image ID -> testfoobarbaz
[PASSED]: tag - busybox fooo/bar
[PASSED]: tag - busybox fooaa/test
[PASSED]: top - sleep process should be listed in non privileged
mode
[PASSED]: top - sleep process should be listed in privileged mode
[PASSED]: version - verify that it works and that the output is
properly formatted
PASS
PASS github.com/docker/docker/integration-cli 178.685s

 TIP You can use the $TESTFLAGS environment variable to pass in arguments
to the test run.
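
For instance, to limit the run to tests matching a name, something like the
following might work; the exact flag syntax depends on the Go test framework in
use, so treat it as an assumption to verify:

$ sudo TESTFLAGS='-run ^TestBuild$' make test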

Use Docker inside our development environment

You can also launch an interactive session inside the newly built development
container:


Listing 9.12: Launching an interactive session

$ sudo make shell

To exit the container, type exit or Ctrl-D.

Submitting a pull request

I you’re happy with your documentation update, bug x, or new eature, you
can submit a pull request or it on GitHub. To do so, you should ork the Docker
repository and make changes on your ork in a eature branch:

• I it is a bug x branch, name it XXXX-something, where XXXX is the number


o the issue.
• I it is a eature branch, create a eature issue to announce your intentions,
and name it XXXX-something, where XXXX is the number o the issue.
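
A minimal sketch of that workflow (the fork URL, issue number 1234, and branch
name are all hypothetical):

$ git clone https://github.com/yourname/docker.git
$ cd docker
$ git checkout -b 1234-fix-networking-typo
# make and test your changes, then commit and push them
$ git commit -s
$ git push origin 1234-fix-networking-typo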

You should always submit unit tests for your changes. Take a look at the existing
tests for inspiration. You should also always run the full test suite on your branch
before submitting a pull request.
Any pull request with a feature in it should include updates to the documentation.
You should use the process above to test your documentation changes before you
submit your pull request. There are also specific guidelines (as we mentioned
above) for documentation that you should follow.
We have some other simple rules that will help get your pull request reviewed and
merged quickly:

• Always run gofmt -s -w file.go on each changed file before committing
your changes. This produces consistent, clean code.
• Pull request descriptions should be as clear as possible and include a
reference to all the issues that they address.


• Pull requests must not contain commits from other users or branches.
• Commit messages must start with a capitalized and short summary (50
characters maximum) written in the imperative, followed by an optional, more
detailed explanatory text that is separated from the summary by an empty
line. (An example follows this list.)
• Squash your commits into logical units of work using git rebase -i and
git push -f. Include documentation changes in the same commit so that a
revert would remove all traces of the feature or fix.
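
A hypothetical commit message following these rules might look like this (the
change it describes is invented for illustration):

Fix teardown race when a client detaches

Closing the attach stream twice could crash the daemon. Guard
the second close and add a regression test.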

Lastly, the Docker project uses a Developer Certificate of Origin to verify that you
wrote any code you submit or otherwise have the right to pass it on as an open-source
patch. You can read about why we do this at http://blog.docker.com/
2014/01/docker-code-contributions-require-developer-certificate-of-origin/.
The certificate is easy to apply. All you need to do is add a line to each Git
commit message.

Listing 9.13: The Docker DCO

Docker-DCO-1.1-Signed-off-by: Joe Smith <[email protected]> (github: github_handle)

 NOTE You must use your real name. We do not allow pseudonyms or
anonymous contributions for legal reasons.

There are several small exceptions to the signing requirement. Currently these
are:

• Your patch fixes spelling or grammar errors.


• Your patch is a single-line change to documentation contained in the docs
directory.
• Your patch xes Markdown ormatting or syntax errors in the documentation
contained in the docs directory.


It's also pretty easy to automate the signing of your Git commits using the git
commit -s command.
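
For example, to sign a hypothetical documentation fix:

$ git commit -s -m "Fix typo in build documentation"

Note that -s adds Git's standard Signed-off-by trailer; check the current
contribution guidelines for the exact sign-off line the project expects.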

 NOTE The signing script expects to find your GitHub user name in git
config --get github.user. You can set this option with the git config
github.user username command.

Merge approval and maintainers

Once you've submitted your pull request, it will be reviewed, and you will
potentially receive feedback. Docker uses a maintainer system much like the Linux
kernel. Each component inside Docker is managed by one or more maintainers
who are responsible for ensuring the quality, stability, and direction of that
component. The maintainers are supplemented by Docker's benevolent dictator and
chief maintainer, Solomon Hykes. He's the only one who can override a maintainer,
and he has sole responsibility for appointing new maintainers.
Docker maintainers use the shorthand LGTM (for Looks Good To Me) in comments
on the code review to indicate acceptance of a pull request. A change requires
LGTMs from an absolute majority of the maintainers of each component affected
by the changes (or, for documentation, a minimum of two maintainers). If a
change affects docs/ and registry/, then it needs two maintainers of docs/ and
an absolute majority of the maintainers of registry/.

 TIP For more details, see the maintainer process documentation.


Summary

In this chapter, we've learned about how to get help with Docker and the places
where useful Docker community members and developers hang out. We've also
learned about the best way to log an issue with Docker, including the sort of
information you need to provide to get the best response.
We've also seen how to set up a development environment to work on the Docker
source or documentation and how to build and test inside this environment to
ensure your fix or feature works. Finally, we've learned about how to create a
properly structured and good-quality pull request with your update.



List of Figures

1.1 Docker architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.1 Installing Docker for Mac on OS X . . . . . . . . . . . . . . . . . . . . . 32

4.1 The Docker filesystem layers . . . . . . . . . . . . . . . . . . . . . . . . 72


4.2 Docker Hub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.3 Creating a Docker Hub account. . . . . . . . . . . . . . . . . . . . . . . 82
4.4 Your image on the Docker Hub. . . . . . . . . . . . . . . . . . . . . . . 128
4.5 The Add Repository button. . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.6 Selecting your repository. . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.7 Configuring your Automated Build. . . . . . . . . . . . . . . . . . . . . 130
4.8 Deleting a repository. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

5.1 Browsing the Sample website. . . . . . . . . . . . . . . . . . . . . . . . 148


5.2 Browsing the edited Sample website. . . . . . . . . . . . . . . . . . . . 149
5.3 Browsing the Jenkins server. . . . . . . . . . . . . . . . . . . . . . . . . 188
5.4 The Getting Started workflow . . . . . . . . . . . . . . . . . . . . . . . . 189
5.5 Creating a new Jenkins job. . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.6 Jenkins job details part 1. . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.7 Jenkins job details part 2. . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5.8 Running the Jenkins job. . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5.9 The Jenkins job details. . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.10 The Jenkins job console output. . . . . . . . . . . . . . . . . . . . . . . 197
5.11 Creating a multi-configuration job. . . . . . . . . . . . . . . . . . . . . 199
5.12 Configuring a multi-configuration job Part 2. . . . . . . . . . . . . . 200
5.13 Our Jenkins multi-configuration job . . . . . . . . . . . . . . . . . . . 202
5.14 The centos sub-job. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.15 The centos sub-job details. . . . . . . . . . . . . . . . . . . . . . . . . . 204


5.16 The centos sub-job console output. . . . . . . . . . . . . . . . . . . . . 205

6.1 Our Jekyll website. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217


6.2 Our updated Jekyll website. . . . . . . . . . . . . . . . . . . . . . . . . . 219
6.3 Our Tomcat sample application. . . . . . . . . . . . . . . . . . . . . . . 228
6.4 Our TProv web application. . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.5 Downloading a sample application. . . . . . . . . . . . . . . . . . . . . 231
6.6 Listing the Tomcat instances. . . . . . . . . . . . . . . . . . . . . . . . . 231
6.7 Our Node application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247

7.1 Sample Compose application. . . . . . . . . . . . . . . . . . . . . . . . . 267


7.2 The Consul web interface. . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7.3 The Consul service in the web interface. . . . . . . . . . . . . . . . . . 291
7.4 The distributed_app service in the Consul web interface. . . . . . . . 304
7.5 More distributed_app services in the Consul web interface. . . . . . . 305



Listings

1 Sample code block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4


2.1 Checking for the Linux kernel version on Ubuntu . . . . . . . . . . . . 21
2.2 Installing the linux-image-extra package . . . . . . . . . . . . . . . . . 22
2.3 Installing a 3.10 or later kernel on Ubuntu . . . . . . . . . . . . . . . . 22
2.4 Updating the boot loader on Ubuntu Precise . . . . . . . . . . . . . . . 22
2.5 Reboot the Ubuntu host . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6 Adding prerequisite Ubuntu packages . . . . . . . . . . . . . . . . . . . 23
2.7 Adding the Docker GPG key . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.8 Adding the Docker APT repository . . . . . . . . . . . . . . . . . . . . . 24
2.9 Updating APT sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.10 Installing the Docker packages on Ubuntu . . . . . . . . . . . . . . . 24
2.11 Checking Docker is installed on Ubuntu . . . . . . . . . . . . . . . . . 25
2.12 Old UFW forwarding policy . . . . . . . . . . . . . . . . . . . . . . . . 25
2.13 New UFW forwarding policy . . . . . . . . . . . . . . . . . . . . . . . . 25
2.14 Reload the UFW firewall . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.15 Checking the Red Hat or Fedora kernel . . . . . . . . . . . . . . . . . 27
2.16 Installing EPEL on Red Hat Enterprise Linux 6 and CentOS 6 . . . . 27
2.17 Installing the Docker package on Red Hat Enterprise Linux 6 and
CentOS 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.18 Installing Docker on RHEL 7 . . . . . . . . . . . . . . . . . . . . . . . . 28
2.19 Installing the Docker package on Fedora 19 . . . . . . . . . . . . . . 29
2.20 Installing the Docker package on Fedora 20 . . . . . . . . . . . . . . 29
2.21 Installing the Docker package on Fedora 21 . . . . . . . . . . . . . . 29
2.22 Installing the Docker package on Fedora 22 . . . . . . . . . . . . . . 29
2.23 Starting the Docker daemon on Red Hat Enterprise Linux 6 . . . . . 30


2.24 Ensuring the Docker daemon starts at boot on Red Hat Enterprise
Linux 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.25 Starting the Docker daemon on Red Hat Enterprise Linux 7 . . . . . 30
2.26 Ensuring the Docker daemon starts at boot on Red Hat Enterprise
Linux 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.27 Checking Docker is installed on the Red Hat family . . . . . . . . . . 31
2.28 Downloading the Docker for Mac DMG file . . . . . . . . . . . . . . . 32
2.29 Testing Docker for Mac on OS X . . . . . . . . . . . . . . . . . . . . . 33
2.30 Downloading the Docker for Windows .MSI file . . . . . . . . . . . . 34
2.31 Testing Docker for Windows . . . . . . . . . . . . . . . . . . . . . . . . 35
2.32 Testing for curl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.33 Installing curl on Ubuntu . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.34 Installing curl on Fedora . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.35 Installing Docker from the installation script . . . . . . . . . . . . . . 37
2.36 Downloading the Docker binary . . . . . . . . . . . . . . . . . . . . . 37
2.37 Changing Docker daemon networking . . . . . . . . . . . . . . . . . . 39
2.38 Using the DOCKER_HOST environment variable . . . . . . . . . . . . 39
2.39 Binding the Docker daemon to a different socket . . . . . . . . . . . 39
2.40 Binding the Docker daemon to multiple places . . . . . . . . . . . . 40
2.41 Turning on Docker daemon debug . . . . . . . . . . . . . . . . . . . . 40
2.42 The systemd override file . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.43 Checking the status of the Docker daemon . . . . . . . . . . . . . . . 41
2.44 Starting and stopping Docker with Upstart . . . . . . . . . . . . . . . 42
2.45 Starting and stopping Docker on Red Hat and Fedora . . . . . . . . 42
2.46 The Docker daemon isn’t running . . . . . . . . . . . . . . . . . . . . . 42
2.47 Upgrade docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.1 Checking that the docker binary works . . . . . . . . . . . . . . . . . . 46
3.2 Running our first container . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3 The docker run command . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4 Our first container's shell . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.5 Checking the container's hostname . . . . . . . . . . . . . . . . . . . . 50
3.6 Checking the container's /etc/hosts . . . . . . . . . . . . . . . . . . . . 50
3.7 Checking the container's interfaces . . . . . . . . . . . . . . . . . . . . 51
3.8 Checking container's processes . . . . . . . . . . . . . . . . . . . . . . . 51
3.9 Installing a package in our first container . . . . . . . . . . . . . . . . 51


3.10 Listing Docker containers . . . . . . . . . . . . . . . . . . . . . . . . . . 52


3.11 Naming a container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.12 Starting a stopped container . . . . . . . . . . . . . . . . . . . . . . . . 54
3.13 Starting a stopped container by ID . . . . . . . . . . . . . . . . . . . . 54
3.14 Attaching to a running container . . . . . . . . . . . . . . . . . . . . . 55
3.15 Attaching to a running container via ID . . . . . . . . . . . . . . . . . 55
3.16 Inside our re-attached container . . . . . . . . . . . . . . . . . . . . . 55
3.17 Creating a long running container . . . . . . . . . . . . . . . . . . . . 56
3.18 Viewing our running daemon_dave container . . . . . . . . . . . . . 56
3.19 Fetching the logs of our daemonized container . . . . . . . . . . . . 57
3.20 Tailing the logs of our daemonized container . . . . . . . . . . . . . 58
3.21 Tailing the logs of our daemonized container . . . . . . . . . . . . . 59
3.22 Enabling Syslog at the container level . . . . . . . . . . . . . . . . . . 60
3.23 Inspecting the processes of the daemonized container . . . . . . . . 61
3.24 The docker top output . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.25 The docker stats command . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.26 Running a background task inside a container . . . . . . . . . . . . . 63
3.27 Running an interactive command inside a container . . . . . . . . . 63
3.28 Stopping the running Docker container . . . . . . . . . . . . . . . . . 64
3.29 Stopping the running Docker container by ID . . . . . . . . . . . . . 64
3.30 Automatically restarting containers . . . . . . . . . . . . . . . . . . . 65
3.31 On-failure restart count . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.32 Inspecting a container . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.33 Selectively inspecting a container . . . . . . . . . . . . . . . . . . . . 67
3.34 Inspecting the container’s IP address . . . . . . . . . . . . . . . . . . . 67
3.35 Inspecting multiple containers . . . . . . . . . . . . . . . . . . . . . . 67
3.36 Deleting a container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.37 Deleting all containers . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.1 Revisiting running a basic Docker container . . . . . . . . . . . . . . . 70
4.2 Listing Docker images . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.3 Pulling the Ubuntu 16.04 image . . . . . . . . . . . . . . . . . . . . . . 75
4.4 Listing the ubuntu Docker images . . . . . . . . . . . . . . . . . . . . . 75
4.5 Running a tagged Docker image . . . . . . . . . . . . . . . . . . . . . . 76
4.6 Docker run and the default latest tag . . . . . . . . . . . . . . . . . . . 77
4.7 Pulling the fedora image . . . . . . . . . . . . . . . . . . . . . . . . . . . 78


4.8 Viewing the fedora image . . . . . . . . . . . . . . . . . . . . . . . . . . 78


4.9 Pulling a tagged fedora image . . . . . . . . . . . . . . . . . . . . . . . 78
4.10 Searching for images . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.11 Pulling down the jamtur01/puppetmaster image . . . . . . . . . . . 80
4.12 Creating a Docker container from the puppetmaster image . . . . . 80
4.13 Logging into the Docker Hub . . . . . . . . . . . . . . . . . . . . . . . 82
4.14 Creating a custom container to modify . . . . . . . . . . . . . . . . . 83
4.15 Adding the Apache package . . . . . . . . . . . . . . . . . . . . . . . . 83
4.16 Committing the custom container . . . . . . . . . . . . . . . . . . . . 84
4.17 Reviewing our new image . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.18 Committing another custom container . . . . . . . . . . . . . . . . . . 85
4.19 Inspecting our committed image . . . . . . . . . . . . . . . . . . . . . 85
4.20 Running a container from our committed image . . . . . . . . . . . . 86
4.21 Creating a sample repository . . . . . . . . . . . . . . . . . . . . . . . 87
4.22 Our first Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.23 A RUN instruction in exec form . . . . . . . . . . . . . . . . . . . . . . 89
4.24 Running the Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.25 Tagging a build . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.26 Building from a Git repository . . . . . . . . . . . . . . . . . . . . . . . 92
4.27 Uploading the build context to the daemon . . . . . . . . . . . . . . . 93
4.28 Managing a failed instruction . . . . . . . . . . . . . . . . . . . . . . . 94
4.29 Creating a container from the last successful step . . . . . . . . . . . 94
4.30 Bypassing the Dockerfile build cache . . . . . . . . . . . . . . . . . . 95
4.31 A template Ubuntu Dockerfile . . . . . . . . . . . . . . . . . . . . . . . 96
4.32 A template Fedora Dockerfile . . . . . . . . . . . . . . . . . . . . . . . 96
4.33 Listing our new Docker image . . . . . . . . . . . . . . . . . . . . . . . 97
4.34 Using the docker history command . . . . . . . . . . . . . . . . . . . . 97
4.35 Launching a container from our new image . . . . . . . . . . . . . . 98
4.36 Viewing the Docker port mapping . . . . . . . . . . . . . . . . . . . . 99
4.37 The docker port command . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.38 The docker port command with container name . . . . . . . . . . . . 99
4.39 Exposing a specific port with -p . . . . . . . . . . . . . . . . . . . . . . 100
4.40 Binding to a different port . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.41 Binding to a specific interface . . . . . . . . . . . . . . . . . . . . . . . 100
4.42 Binding to a random port on a specific interface . . . . . . . . . . . . 101


4.43 Exposing a port with docker run . . . . . . . . . . . . . . . . . . . . . 101


4.44 Connecting to the container via curl . . . . . . . . . . . . . . . . . . . 102
4.45 Specifying a specific command to run . . . . . . . . . . . . . . . . . . 103
4.46 Using the CMD instruction . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.47 Passing parameters to the CMD instruction . . . . . . . . . . . . . . . 103
4.48 Overriding CMD instructions in the Dockerfile . . . . . . . . . . . . . 104
4.49 Launching a container with a CMD instruction . . . . . . . . . . . . 104
4.50 Overriding a command locally . . . . . . . . . . . . . . . . . . . . . . 105
4.51 Specifying an ENTRYPOINT . . . . . . . . . . . . . . . . . . . . . . . . 106
4.52 Specifying an ENTRYPOINT parameter . . . . . . . . . . . . . . . . . 106
4.53 Rebuilding static_web with a new ENTRYPOINT . . . . . . . . . . . 106
4.54 Using docker run with ENTRYPOINT . . . . . . . . . . . . . . . . . . 107
4.55 Using ENTRYPOINT and CMD together . . . . . . . . . . . . . . . . . 107
4.56 Using the WORKDIR instruction . . . . . . . . . . . . . . . . . . . . . 108
4.57 Overriding the working directory . . . . . . . . . . . . . . . . . . . . . 108
4.58 Setting an environment variable in Dockerfile . . . . . . . . . . . . . 109
4.59 Prefixing a RUN instruction . . . . . . . . . . . . . . . . . . . . . . . . 109
4.60 Executing with an ENV prefix . . . . . . . . . . . . . . . . . . . . . . . 109
4.61 Setting multiple environment variables using ENV . . . . . . . . . . 109
4.62 Using an environment variable in other Dockerfile instructions . . 110
4.63 Persistent environment variables in Docker containers . . . . . . . . 110
4.64 Runtime environment variables . . . . . . . . . . . . . . . . . . . . . . 111
4.65 Using the USER instruction . . . . . . . . . . . . . . . . . . . . . . . . 111
4.66 Specifying USER and GROUP variants . . . . . . . . . . . . . . . . . . 112
4.67 Using the VOLUME instruction . . . . . . . . . . . . . . . . . . . . . . 113
4.68 Using multiple VOLUME instructions . . . . . . . . . . . . . . . . . . 113
4.69 Using the ADD instruction . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.70 URL as the source of an ADD instruction . . . . . . . . . . . . . . . . 114
4.71 Archive as the source of an ADD instruction . . . . . . . . . . . . . . 114
4.72 Using the COPY instruction . . . . . . . . . . . . . . . . . . . . . . . . 116
4.73 Adding LABEL instructions . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.74 Using docker inspect to view labels . . . . . . . . . . . . . . . . . . . 117
4.75 Adding ARG instructions . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.76 Using an ARG instruction . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.77 The predefined ARG variables . . . . . . . . . . . . . . . . . . . . . . . 119


4.78 Specifying a HEALTHCHECK instruction . . . . . . . . . . . . . . . . 120


4.79 Docker inspect the health state . . . . . . . . . . . . . . . . . . . . . . 121
4.80 Health log output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.81 Disabling inherited health checks . . . . . . . . . . . . . . . . . . . . . 122
4.82 Adding ONBUILD instructions . . . . . . . . . . . . . . . . . . . . . . . 122
4.83 Showing ONBUILD instructions with docker inspect . . . . . . . . . 123
4.84 A new ONBUILD image Dockerfile . . . . . . . . . . . . . . . . . . . . 123
4.85 Building the apache2 image . . . . . . . . . . . . . . . . . . . . . . . . 124
4.86 The webapp Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.87 Building our webapp image . . . . . . . . . . . . . . . . . . . . . . . . 125
4.88 Trying to push a root image . . . . . . . . . . . . . . . . . . . . . . . . 126
4.89 Pushing a Docker image . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.90 Deleting a Docker image . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.91 Deleting multiple Docker images . . . . . . . . . . . . . . . . . . . . . 133
4.92 Deleting all images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
4.93 Running a container-based registry . . . . . . . . . . . . . . . . . . . . 134
4.94 Listing the jamtur01 static_web Docker image . . . . . . . . . . . . . 134
4.95 Tagging our image for our new registry . . . . . . . . . . . . . . . . . 135
4.96 Pushing an image to our new registry . . . . . . . . . . . . . . . . . . 135
4.97 Building a container from our local registry . . . . . . . . . . . . . . 136
5.1 Creating a directory for our Sample website Dockerfile . . . . . . . . 139
5.2 Getting our Nginx configuration files . . . . . . . . . . . . . . . . . . . 140
5.3 The Dockerfile for the Sample website . . . . . . . . . . . . . . . . . . 140
5.4 The global.conf file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.5 The nginx.conf configuration file . . . . . . . . . . . . . . . . . . . . . . 142
5.6 Building our new Nginx image . . . . . . . . . . . . . . . . . . . . . . . 143
5.7 Showing the history of the Nginx image . . . . . . . . . . . . . . . . . 144
5.8 Downloading our Sample website . . . . . . . . . . . . . . . . . . . . . 145
5.9 Running our first Nginx testing container . . . . . . . . . . . . . . . . . 145
5.10 Controlling the write status of a volume . . . . . . . . . . . . . . . . . 147
5.11 Viewing the Sample website container . . . . . . . . . . . . . . . . . 147
5.12 Editing our Sample website . . . . . . . . . . . . . . . . . . . . . . . . 148
5.13 Old title . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
5.14 New title . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
5.15 Create directory for web application testing . . . . . . . . . . . . . . 150


5.16 Dockerfile for our Sinatra container . . . . . . . . . . . . . . . . . . . 151


5.17 Building our new Sinatra image . . . . . . . . . . . . . . . . . . . . . . 151
5.18 Download our Sinatra web application . . . . . . . . . . . . . . . . . 152
5.19 The Sinatra app.rb source code . . . . . . . . . . . . . . . . . . . . . . 153
5.20 Making the webapp/bin/webapp binary executable . . . . . . . . . 153
5.21 Launching our first Sinatra container . . . . . . . . . . . . . . . . . . 154
5.22 The CMD instruction in our Dockerfile . . . . . . . . . . . . . . . . . 154
5.23 Checking the logs of our Sinatra container . . . . . . . . . . . . . . . 155
5.24 Tailing the logs of our Sinatra container . . . . . . . . . . . . . . . . 155
5.25 Using docker top to list our Sinatra processes . . . . . . . . . . . . . 155
5.26 Checking the Sinatra port mapping . . . . . . . . . . . . . . . . . . . . 156
5.27 Testing our Sinatra application . . . . . . . . . . . . . . . . . . . . . . 156
5.28 Download our updated Sinatra web application . . . . . . . . . . . . 157
5.29 The webapp_redis app.rb file . . . . . . . . . . . . . . . . . . . . . . . 158
5.30 Making the webapp_redis/bin/webapp binary executable . . . . . . 159
5.31 Create directory for Redis container . . . . . . . . . . . . . . . . . . . 159
5.32 Dockerfile for Redis image . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.33 Building our Redis image . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.34 Launching a Redis container . . . . . . . . . . . . . . . . . . . . . . . . 160
5.35 Checking the Redis port . . . . . . . . . . . . . . . . . . . . . . . . . . 161
5.36 Installing the redis-tools package on Ubuntu . . . . . . . . . . . . . . 161
5.37 Installing the redis package on Red Hat et al . . . . . . . . . . . . . . 161
5.38 Testing our Redis connection . . . . . . . . . . . . . . . . . . . . . . . 161
5.39 The docker0 interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.40 The veth interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.41 The eth0 interface in a container . . . . . . . . . . . . . . . . . . . . . 165
5.42 Tracing a route out of our container . . . . . . . . . . . . . . . . . . . 165
5.43 Docker iptables and NAT . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.44 Redis container's networking configuration . . . . . . . . . . . . . . . 168
5.45 Finding the Redis container’s IP address . . . . . . . . . . . . . . . . 169
5.46 Talking directly to the Redis container . . . . . . . . . . . . . . . . . 169
5.47 Restarting our Redis container . . . . . . . . . . . . . . . . . . . . . . 170
5.48 Finding the restarted Redis container’s IP address . . . . . . . . . . . 170
5.49 Creating a Docker network . . . . . . . . . . . . . . . . . . . . . . . . . 171
5.50 Inspecting the app network . . . . . . . . . . . . . . . . . . . . . . . . 172


5.51 The docker network ls command . . . . . . . . . . . . . . . . . . . . . 173


5.52 Creating a Redis container inside our Docker network . . . . . . . . 173
5.53 The updated app network . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.54 Linking our Redis container . . . . . . . . . . . . . . . . . . . . . . . . 175
5.55 Installing nslookup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5.56 DNS resolution in the network_test container . . . . . . . . . . . . . 176
5.57 Pinging db.app in the network_test container . . . . . . . . . . . . . 176
5.58 The Redis DB hostname in code . . . . . . . . . . . . . . . . . . . . . . 177
5.59 Starting the Redis-enabled Sinatra application . . . . . . . . . . . . . 177
5.60 Checking the Sinatra container’s port mapping . . . . . . . . . . . . 177
5.61 Testing our Redis-enabled Sinatra application . . . . . . . . . . . . . 178
5.62 Confirming Redis contains data . . . . . . . . . . . . . . . . . . . . . . 178
5.63 Running the db2 container . . . . . . . . . . . . . . . . . . . . . . . . . 179
5.64 Adding a new container to the app network . . . . . . . . . . . . . . 179
5.65 The app network after adding db2 . . . . . . . . . . . . . . . . . . . . 180
5.66 Disconnecting a host from a network . . . . . . . . . . . . . . . . . . 181
5.67 Jenkins and Docker Dockerfile . . . . . . . . . . . . . . . . . . . . . . 183
5.68 Create directory for Jenkins . . . . . . . . . . . . . . . . . . . . . . . . 184
5.69 Building our Docker-Jenkins image . . . . . . . . . . . . . . . . . . . 185
5.70 Running our Docker-Jenkins image . . . . . . . . . . . . . . . . . . . 185
5.71 Checking the Docker Jenkins container logs . . . . . . . . . . . . . . 187
5.72 Checking that Jenkins is up and running . . . . . . . . . . . . . . . . 188
5.73 The Docker shell script for Jenkins jobs . . . . . . . . . . . . . . . . . 192
5.74 The Docker test job Dockerfile . . . . . . . . . . . . . . . . . . . . . . 193
5.75 Jenkins multi-configuration shell step . . . . . . . . . . . . . . . . . . 201
5.76 Our CentOS-based Dockerfile . . . . . . . . . . . . . . . . . . . . . . . 202
6.1 Creating our Jekyll Dockerfile . . . . . . . . . . . . . . . . . . . . . . . 208
6.2 Jekyll Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
6.3 Building our Jekyll image . . . . . . . . . . . . . . . . . . . . . . . . . . 210
6.4 Viewing our new Jekyll Base image . . . . . . . . . . . . . . . . . . . . 210
6.5 Creating our Apache Dockerfile . . . . . . . . . . . . . . . . . . . . . . . 211
6.6 Jekyll Apache Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
6.7 Building our Jekyll Apache image . . . . . . . . . . . . . . . . . . . . . 213
6.8 Viewing our new Jekyll Apache image . . . . . . . . . . . . . . . . . . 214
6.9 Getting a sample Jekyll blog . . . . . . . . . . . . . . . . . . . . . . . . 214


6.10 Creating a Jekyll container . . . . . . . . . . . . . . . . . . . . . . . . . 215


6.11 Creating an Apache container . . . . . . . . . . . . . . . . . . . . . . . 216
6.12 Resolving the Apache container’s port . . . . . . . . . . . . . . . . . . 217
6.13 Editing our Jekyll blog . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
6.14 Restarting our james_blog container . . . . . . . . . . . . . . . . . . . 218
6.15 Checking the james_blog container logs . . . . . . . . . . . . . . . . . 218
6.16 Backing up the /var/www/html volume . . . . . . . . . . . . . . . . 220
6.17 Backup command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
6.18 Creating our fetcher Dockerfile . . . . . . . . . . . . . . . . . . . . . . 222
6.19 Our war file fetcher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
6.20 Building our fetcher image . . . . . . . . . . . . . . . . . . . . . . . . . 223
6.21 Fetching a war file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
6.22 Inspecting our Sample volume . . . . . . . . . . . . . . . . . . . . . . 225
6.23 Listing the volume directory . . . . . . . . . . . . . . . . . . . . . . . . 225
6.24 Creating our Tomcat 7 Dockerfile . . . . . . . . . . . . . . . . . . . . 226
6.25 Our Tomcat 7 Application server . . . . . . . . . . . . . . . . . . . . . 226
6.26 Building our Tomcat 7 image . . . . . . . . . . . . . . . . . . . . . . . 227
6.27 Creating our first Tomcat instance . . . . . . . . . . . . . . . . . . . . 227
6.28 Identifying the Tomcat application port . . . . . . . . . . . . . . . . . 228
6.29 Installing Ruby . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
6.30 Installing the TProv application . . . . . . . . . . . . . . . . . . . . . . 229
6.31 Launching the TProv application . . . . . . . . . . . . . . . . . . . . . 229
6.32 Creating our Node.js Dockerfile . . . . . . . . . . . . . . . . . . . . . . 233
6.33 Our Node.js image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
6.34 Our Node.js server.js application . . . . . . . . . . . . . . . . . . . . . 235
6.35 Building our Node.js image . . . . . . . . . . . . . . . . . . . . . . . . 236
6.36 Creating our Redis base Dockerfile . . . . . . . . . . . . . . . . . . . . 237
6.37 Our Redis base image . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
6.38 Building our Redis base image . . . . . . . . . . . . . . . . . . . . . . 238
6.39 Creating our Redis primary Dockerfile . . . . . . . . . . . . . . . . . . 238
6.40 Our Redis primary image . . . . . . . . . . . . . . . . . . . . . . . . . . 238
6.41 Building our Redis primary image . . . . . . . . . . . . . . . . . . . . 239
6.42 Creating our Redis replica Dockerfile . . . . . . . . . . . . . . . . . . 239
6.43 Our Redis replica image . . . . . . . . . . . . . . . . . . . . . . . . . . 239
6.44 Building our Redis replica image . . . . . . . . . . . . . . . . . . . . . 240


6.45 Creating the express network . . . . . . . . . . . . . . . . . . . . . . . 240


6.46 Running the Redis primary container . . . . . . . . . . . . . . . . . . 240
6.47 Our Redis primary logs . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.48 Reading our Redis primary logs . . . . . . . . . . . . . . . . . . . . . . 241
6.49 Running our first Redis replica container . . . . . . . . . . . . . . . . 242
6.50 Reading our Redis replica logs . . . . . . . . . . . . . . . . . . . . . . 243
6.51 Running our second Redis replica container . . . . . . . . . . . . . . 244
6.52 Our Redis replica2 logs . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
6.53 Running our Node.js container . . . . . . . . . . . . . . . . . . . . . . 246
6.54 The nodeapp console log . . . . . . . . . . . . . . . . . . . . . . . . . . 246
6.55 Node application output . . . . . . . . . . . . . . . . . . . . . . . . . . 247
6.56 Creating our Logstash Dockerfile . . . . . . . . . . . . . . . . . . . . . 248
6.57 Our Logstash image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
6.58 Our Logstash configuration . . . . . . . . . . . . . . . . . . . . . . . . 249
6.59 Building our Logstash image . . . . . . . . . . . . . . . . . . . . . . . . 250
6.60 Launching a Logstash container . . . . . . . . . . . . . . . . . . . . . . 250
6.61 A Node event in Logstash . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.62 Using docker kill to send signals . . . . . . . . . . . . . . . . . . . . . 252
6.63 Installing nsenter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
7.1 Installing Docker Compose on Linux . . . . . . . . . . . . . . . . . . . . 256
7.2 Installing Docker Compose on OS X . . . . . . . . . . . . . . . . . . . . 256
7.3 Installing Compose via Pip . . . . . . . . . . . . . . . . . . . . . . . . . . 257
7.4 Testing Docker Compose is working . . . . . . . . . . . . . . . . . . . . 257
7.5 Creating the composeapp directory . . . . . . . . . . . . . . . . . . . . 258
7.6 The app.py file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
7.7 The requirements.txt file . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
7.8 The composeapp Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . 260
7.9 Building the composeapp application . . . . . . . . . . . . . . . . . . . 261
7.10 Creating the docker-compose.yml file . . . . . . . . . . . . . . . . . . 262
7.11 The docker-compose.yml file . . . . . . . . . . . . . . . . . . . . . . . 263
7.12 An example of the build instruction . . . . . . . . . . . . . . . . . . . 263
7.13 The docker run equivalent command . . . . . . . . . . . . . . . . . . 264
7.14 Running docker-compose up with our sample application . . . . . . 265
7.15 Compose service log output . . . . . . . . . . . . . . . . . . . . . . . . 266
7.16 Running Compose daemonized . . . . . . . . . . . . . . . . . . . . . . 266


7.17 Restarting Compose as daemonized . . . . . . . . . . . . . . . . . . . 268


7.18 Running the docker-compose ps command . . . . . . . . . . . . . . . 269
7.19 Showing a Compose service's logs . . . . . . . . . . . . . . . . . . . . . 269
7.20 Stopping running services . . . . . . . . . . . . . . . . . . . . . . . . . 270
7.21 Verifying our Compose services have been stopped . . . . . . . . . . 270
7.22 Removing Compose services . . . . . . . . . . . . . . . . . . . . . . . . 271
7.23 Showing no Compose services . . . . . . . . . . . . . . . . . . . . . . . 271
7.24 Creating a Consul Dockerfile directory . . . . . . . . . . . . . . . . . 273
7.25 The Consul Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
7.26 The consul.json configuration file . . . . . . . . . . . . . . . . . . . . 275
7.27 Building our Consul image . . . . . . . . . . . . . . . . . . . . . . . . . 277
7.28 Running a local Consul node . . . . . . . . . . . . . . . . . . . . . . . 278
7.29 Pulling down the Consul image . . . . . . . . . . . . . . . . . . . . . . 280
7.30 Getting public IP on larry . . . . . . . . . . . . . . . . . . . . . . . . . 280
7.31 Assigning public IP on curly and moe . . . . . . . . . . . . . . . . . . 281
7.32 Adding the cluster IP address . . . . . . . . . . . . . . . . . . . . . . . 282
7.33 Start the Consul bootstrap node . . . . . . . . . . . . . . . . . . . . . . 282
7.34 Consul agent command line arguments . . . . . . . . . . . . . . . . . 283
7.35 Starting bootstrap Consul node . . . . . . . . . . . . . . . . . . . . . . 284
7.36 Cluster leader error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
7.37 Starting the agent on curly . . . . . . . . . . . . . . . . . . . . . . . . . 285
7.38 Launching the Consul agent on curly . . . . . . . . . . . . . . . . . . 285
7.39 Looking at the Curly agent logs . . . . . . . . . . . . . . . . . . . . . . 287
7.40 Curly joining Larry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
7.41 Starting the agent on moe . . . . . . . . . . . . . . . . . . . . . . . . . 288
7.42 Consul logs on moe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
7.43 Consul leader election on larry . . . . . . . . . . . . . . . . . . . . . . 290
7.44 Getting the docker0 IP address . . . . . . . . . . . . . . . . . . . . . . 291
7.45 Testing the Consul DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
7.46 Querying another Consul service via DNS . . . . . . . . . . . . . . . . 293
7.47 Querying another Consul service via DNS . . . . . . . . . . . . . . . . 293
7.48 Creating a distributed_app Dockerfile directory . . . . . . . . . . . . 294
7.49 The distributed_app Dockerfile . . . . . . . . . . . . . . . . . . . . . . 295
7.50 The uWSGI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 296
7.51 The distributed_app config.ru file . . . . . . . . . . . . . . . . . . . . . 297


7.52 The Consul plugin URL . . . . . . . . . . . . . . . . . . . . . . . . . . . 297


7.53 Building our distributed_app image . . . . . . . . . . . . . . . . . . . 298
7.54 Creating a distributed_client Dockerfile directory . . . . . . . . . . . 298
7.55 The distributed_client Dockerfile . . . . . . . . . . . . . . . . . . . . . 299
7.56 The distributed_client application . . . . . . . . . . . . . . . . . . . . . 300
7.57 Building our distributed_client image . . . . . . . . . . . . . . . . . . 301
7.58 Starting distributed_app on larry . . . . . . . . . . . . . . . . . . . . . 302
7.59 The distributed_app log output . . . . . . . . . . . . . . . . . . . . . . 303
7.60 Starting distributed_app on curly . . . . . . . . . . . . . . . . . . . . . 304
7.61 Starting distributed_client on moe . . . . . . . . . . . . . . . . . . . . 305
7.62 The distributed_client logs on moe . . . . . . . . . . . . . . . . . . . . 306
7.63 Getting public IP on larry again . . . . . . . . . . . . . . . . . . . . . . 309
7.64 Initializing a swarm on larry . . . . . . . . . . . . . . . . . . . . . . . . 310
7.65 The Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
7.66 The docker node command . . . . . . . . . . . . . . . . . . . . . . . . 312
7.67 Adding worker nodes to the cluster . . . . . . . . . . . . . . . . . . . 312
7.68 Running the docker node command again . . . . . . . . . . . . . . . 313
7.69 Creating a swarm service . . . . . . . . . . . . . . . . . . . . . . . . . . 313
7.70 Listing the services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
7.71 Inspecting the heyworld service . . . . . . . . . . . . . . . . . . . . . . 315
7.72 Checking the heyworld service process . . . . . . . . . . . . . . . . . 315
7.73 Scaling the heyworld service . . . . . . . . . . . . . . . . . . . . . . . 316
7.74 Checking the heyworld service process . . . . . . . . . . . . . . . . . 316
7.75 Running a global service . . . . . . . . . . . . . . . . . . . . . . . . . . 317
7.76 The heyworld_global process . . . . . . . . . . . . . . . . . . . . . . . 317
7.77 Deleting the heyworld service . . . . . . . . . . . . . . . . . . . . . . . 318
7.78 Listing the remaining services . . . . . . . . . . . . . . . . . . . . . . . 318
8.1 Querying the Docker API locally . . . . . . . . . . . . . . . . . . . . . . 323
8.2 Default systemd daemon start options . . . . . . . . . . . . . . . . . . . 323
8.3 Network binding systemd daemon start options . . . . . . . . . . . . . 324
8.4 Reloading and restarting the Docker daemon . . . . . . . . . . . . . . 324
8.5 Connecting to a remote Docker daemon . . . . . . . . . . . . . . . . . 325
8.6 Revisiting the DOCKER_HOST environment variable . . . . . . . . . . 325
8.7 Using the info API endpoint . . . . . . . . . . . . . . . . . . . . . . . . . 326
8.8 Getting a list of images via API . . . . . . . . . . . . . . . . . . . . . . . 327


8.9 Getting a specific image . . . . . . . . . . . . . . . . . . . . . . . . . . . 328


8.10 Searching for images with the API . . . . . . . . . . . . . . . . . . . . 329
8.11 Listing running containers . . . . . . . . . . . . . . . . . . . . . . . . . 330
8.12 Listing all containers via the API . . . . . . . . . . . . . . . . . . . . . 331
8.13 Creating a container via the API . . . . . . . . . . . . . . . . . . . . . 331
8.14 Configuring container launch via the API . . . . . . . . . . . . . . . . 332
8.15 Starting a container via the API . . . . . . . . . . . . . . . . . . . . . . 332
8.16 API equivalent for docker run command . . . . . . . . . . . . . . . . 333
8.17 Listing all containers via the API . . . . . . . . . . . . . . . . . . . . . 333
8.18 The legacy TProv container launch methods . . . . . . . . . . . . . . 334
8.19 The Docker Ruby client . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
8.20 Installing the Docker Ruby client API prerequisites . . . . . . . . . . 336
8.21 Testing our Docker API connection via irb . . . . . . . . . . . . . . . 337
8.22 Our updated TProv container management methods . . . . . . . . . 338
8.23 Installing TProvAPI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8.24 Checking for openssl . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
8.25 Create a CA directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
8.26 Generating a private key . . . . . . . . . . . . . . . . . . . . . . . . . . 341
8.27 Creating a CA certificate . . . . . . . . . . . . . . . . . . . . . . . . . . 342
8.28 Creating a server key . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
8.29 Creating our server CSR . . . . . . . . . . . . . . . . . . . . . . . . . . 344
8.30 Connect via IP address . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
8.31 Signing our CSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
8.32 Removing the passphrase from the server key . . . . . . . . . . . . . 345
8.33 Securing the key and certificate on the Docker server . . . . . . . . 346
8.34 Enabling Docker TLS on systemd . . . . . . . . . . . . . . . . . . . . . 346
8.35 Reloading and restarting the Docker daemon . . . . . . . . . . . . . 347
8.36 Creating a client key . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
8.37 Creating a client CSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
8.38 Adding Client Authentication attributes . . . . . . . . . . . . . . . . . 349
8.39 Signing our client CSR . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
8.40 Stripping out the client key pass phrase . . . . . . . . . . . . . . . . . 349
8.41 Copying the key and certificate on the Docker client . . . . . . . . . 350
8.42 Testing our TLS-authenticated connection . . . . . . . . . . . . . . . 351
8.43 Testing our connection without TLS . . . . . . . . . . . . . . . . . . . 351


9.1 Installing git on Ubuntu . . . . . . . . . . . . . . . . . . . . . . . . . . . 356


9.2 Installing git on Red Hat et al . . . . . . . . . . . . . . . . . . . . . . . . 356
9.3 Check out the Docker source code . . . . . . . . . . . . . . . . . . . . . 357
9.4 Building the Docker documentation . . . . . . . . . . . . . . . . . . . . 358
9.5 Building the Docker environment . . . . . . . . . . . . . . . . . . . . . 358
9.6 Building the Docker binary . . . . . . . . . . . . . . . . . . . . . . . . . 359
9.7 The Docker dev client binary . . . . . . . . . . . . . . . . . . . . . . . . 359
9.8 Using the development daemon . . . . . . . . . . . . . . . . . . . . . . 360
9.9 Using the development binary . . . . . . . . . . . . . . . . . . . . . . . 360
9.10 Running the Docker tests . . . . . . . . . . . . . . . . . . . . . . . . . . 361
9.11 Docker test output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
9.12 Launching an interactive session . . . . . . . . . . . . . . . . . . . . . 363
9.13 The Docker DCO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364

Index

.dockerignore, 93
/etc/hosts, 50
/var/lib/docker, 46, 68, 73, 216

Apache, 207, 213
API, 321
    /containers, 329
    /containers/create, 331
    /images/json, 327
    /info, 326
    Client libraries, 334
    containers, 333
    info, 326
API documentation, 321
AUFS, 20
Automated Builds, 128
Automatically restarting containers, 65

Back up volumes, 219
Boot2Docker, 18, 31, 33
btrfs, 20
Build content, 115
Build context, 87, 93, 359
    .dockerignore, 93
Building images, 86
Bypassing the Dockerfile cache, 95

CentOS, 26
cgroups, 15, 20
Chef, 14, 19
Chocolatey, 34
chroot, 6
CI, 13, 182
Compose, 255
    services, 255
Connecting containers, 139
Consul, 272
    configuration, 275
    DNS, 272, 275, 301
    HTTP API, 272, 276, 301
    ports, 275
    web interface, 276
container
    logging, 57
    names, 53
Container ID, 52
container ID, 50, 53, 54, 56, 64
containers
    introduction, 6
Context, 87
Continuous Integration, 13, 138, 182
Copy-on-write, 16
curl, 156

DCO, 364
Debian, 21
Debugging Dockerfiles, 94
default storage driver, 20
Developer Certificate of Origin, see also DCO
Device Mapper, 20
dind, 229
DNS, 241
Docker
    API, 321, 339
        Client libraries, 334
        List images, 327
    APT repository, 23
    Authentication, 339
    automatic container restart, 65
    binary installation, 37
    Bind UDP ports, 101
    build context, 359
    build environment, 355, 358
    clustering, 307
    Configuration Management, 14
    connecting containers, 139, 156
    container ID, 50, 52–54, 56, 64
    container names, 53
    curl installation, 36
    daemon, 38
        --tls, 347
        --tlsverify, 351
        -H flag, 38
        defaults, 40
        DOCKER_HOST, 39, 325, 351
        DOCKER_OPTS, 40
        network configuration, 38
        Unix socket, 39
        Upstart, 40
    DCO, 364
    dind, 229
    DNS, 241
    docker binary, 38
    docker group, 38, 322
    Docker Hub, 74
    docker0, 163
    Dockerfile
        ADD, 114
        ARG, 118
        CMD, 102, 151, 154
        COPY, 115
        ENTRYPOINT, 105, 160
        ENV, 109
        EXPOSE, 89, 101
        FROM, 88
        LABEL, 116
        MAINTAINER, 88
        ONBUILD, 122
        RUN, 88, 89
        STOPSIGNAL, 117
        USER, 111
        VOLUME, 112
        WORKDIR, 108
    Documentation, 357
    Fedora
        installation, 28
    Forum, 354
    Getting help, 354
    Hub API, 321
    installation, 20, 26
    iptables, 166
    IPv6, 163
    IRC, 354
    kernel versions, 20
    launching containers, 47
    license, 7
    listing containers, 52
    naming containers, 53
    NAT, 166
    networking, 163
    OS X, 18
        installation, 31
    packages, 23
    Red Hat Enterprise Linux
        installation, 26
    registry, 49
    Registry API, 321
    Remote API, 322
    remote installation script, 36
    required kernel version, 21
    Running your own registry, 133
    Security, 77
    set container hostname, 241
    setting the working directory, 108
    signals, 252
    specifying a Docker build source, 92
    SSH, 252
    tags, 76
    testing, 138
    TLS, 340
    Ubuntu
        installation, 20
    Ubuntu firewall, 25
    ubuntu image, 49
    UFW, 25
    upgrading, 42
    use of sudo, 22
    volumes, 146, 215, 216, 219
    Windows, 18, 34
        installation, 33
docker
    --log-driver, 59
    attach, 55, 194
    build, 86, 90, 91, 118, 143, 151, 184, 197, 209, 213, 260
        --no-cache, 95
        -f, 92
        context, 87
    commit, 84
    create, 54
    daemon, 38
    exec, 62, 252
        -d, 63
        -i, 63
        -t, 63
        -u, 63
    history, 97, 143
    images, 73, 78, 97, 134, 210, 213, 327
    info, 25, 30, 46, 311, 326, 338
    inspect, 66, 85, 101, 117, 122, 167, 168, 328
    kill, 68, 169, 252
        signals, 252
    log driver, 59
    logs, 57, 154, 241, 246
        --tail, 58
        -f, 57, 155
        -t, 58
    network, 162, 171
        connect, 179
        create, 171
        disconnect, 181
        inspect, 171
        ls, 172
        rm, 173
    node, 311
        ls, 312
    port, 99, 101, 227
    ps, 52, 56, 64, 69, 98, 147, 268, 329
        --format, 52
        -a, 52, 69
        -l, 52
        -n, 64
        -q, 69
    pull, 75, 78, 280
    push, 126, 131, 135
    restart, 54, 169
    rm, 68, 69, 133, 194
        -f, 69
    rmi, 131, 133
    run, 47, 56, 59, 65, 70, 77, 80, 89, 94, 98, 102, 103, 135, 145, 154, 197, 214, 264, 332
        --cidfile, 194
        --dns, 302
        --entrypoint, 107
        --expose, 89
        --hostname, 241
        --name, 53, 227, 242, 246
        --net, 173
        --restart, 65
        --rm, 220, 242, 243
        --volumes-from, 216, 227, 242, 243
        -P, 101
        -d, 56
        -e, 110
        -h, 241
        -u, 112
        -v, 215, 220
        -w, 108
        set environment variables, 110
    search, 79
    service, 313
        create, 314
        inspect, 314
        ls, 314
        ps, 315, 317
        rm, 317
        scale, 316
    start, 54, 218, 270
    stats, 61
    stop, 64, 68
    swarm, 310
        init, 310
        join, 312
        join-token, 311
    tag, 134
    top, 60, 155
    version, 338
    wait, 194
Docker API, 9
Docker Compose, 31, 33, 255
    --version, 257
    Installation, 256
    upgrading, 257
Docker Content Trust, 77
Docker Engine, 9
Docker for Mac, 31, 256
Docker for Windows, 33, 256
docker group, 38, 322
Docker Hub, 74, 79, 126, 128, 321
    Logging in, 82
    Private repositories, 126
Docker Hub Enterprise, 74
Docker Inc, 7, 77, 354
Docker Machine, 31, 33
Docker Networking, 162, 170
    bridge, 172
    documentation, 181
    overlay, 172
docker run
    -h, 279
Docker Swarm, 255
Docker Trusted Registry, 74
Docker user interfaces
    DockerUI, 43
    Shipyard, 43
docker-compose
    kill, 270
    logs, 269
    ps, 268
    rm, 270
    start, 270
    stop, 269
    up, 264
Docker-in-Docker, 229
docker0, 162, 163, 172
DOCKER_HOST, 39, 267, 325, 351
dockerd, 38
Dockerfile, 86, 123, 128, 136, 139, 144, 150, 159, 183, 192, 193, 198, 208, 209, 211, 213, 222, 226, 233, 237–239, 248, 358
    ADD, 140, 142, 234, 249
    CMD, 213, 223
    DSL, 86
    ENTRYPOINT, 209, 213, 223, 227, 236, 238, 240, 276, 297
    ENV, 212
    exec format, 89
    EXPOSE, 140, 212, 227
    RUN, 141
    template, 95
    VOLUME, 209, 212, 223, 237, 276
    WORKDIR, 209, 223, 224
DockerUI, 43
Documentation, 357
dotCloud, 7
Drone, 206

EPEL, 27
exec format, 89

Fedora, 26
Fluentd, 60
Forum, 354

GELF, 60
Getting help, 354
GitHub, 128
gofmt, 363
Golden image, 14

HTTP_PROXY, 40, 133
HTTPS_PROXY, 40, 133

Image management, 14
iptables, 166
IPv6, 163
IRC, 354

jail, 6
Jekyll, 207, 210
Jenkins CI, 13, 138, 182
    automated builds, 197
    parameterized builds, 198
    post commit hook, 197
JSON, 156

kernel, 20, 21
Kitematic, 31, 33, 43
Kubernetes, 319

libcontainer, 15
license, 7
logging, 57, 59
    timestamps, 58
lxc, 7

Microservices, 9

names, 53
namespaces, 20
NAT, 166
Nginx, 139
NO_PROXY, 40, 133
nsenter, 64, 252

openssl, 340
OpenVZ, 7
Orchestration, 255

PAAS, 7, 14
Platform-as-a-Service, 14
Port mapping, 89
Portainer, 43
Private repositories, 126
proxy, 40, 133
Puppet, 14, 19

Red Hat Enterprise Linux, 26
Redis, 157, 162
Registry
    private, 133
Registry API, 321
Remote API, 322
REST, 322
RFC1918, 164

Service Oriented Architecture, 9
Shipyard, 43
Signals, 252
Sinatra, 156
SOA, 9
Solaris Zones, 7
SSH, 252
SSL, 339
sudo, 22
Supervisor, 105
Swarm, 255, 307

tags, 76
Testing applications, 138
Testing workflow, 138
TLS, 321, 339
Trusted builds, 128

Ubuntu, 20
UI for Docker, 43
Union mount, 71
Upstart, 41

vfs, 20
Volumes, 146, 215, 216
    backing up, 219, 251
    logging, 247

ZFS, 20


Thanks! I hope you enjoyed the book.

© Copyright 2015 - James Turnbull <[email protected]>
