Notes taken from watching “Docker for Web Developers” by Dan Wahlin on Pluralsight
What is Docker?
- Lightweight, open, secure platform
- Simplify building, shipping, and running apps
- Shipping container system for code
- Without standard-size shipping containers, real-world shipping was needlessly complex. With standardized containers, transportation became simple and efficient. Docker does the same for code.
- Runs natively on Linux or on Windows Server 2016 and later
- Relies on “images” and “containers”
- Image:
- simply the definition or blueprint for a container. Has the files necessary to run something, for example an app or a database, plus the supporting files for these.
- A read-only template composed of layered filesystems used to share common files and create Docker container instances
- Container:
- created using an image. Runs your application.
- isolated and secured shipping container created from an image that can be run, started, stopped, moved, and deleted very quickly
- (Diagram) The Docker Client talks to the Docker Engine, which integrates with Linux Container Support (LXC) on Linux or Windows Server Container Support on Windows.
- VMs vs Docker Containers
- There isn’t a copy of the Guest OS for each app
- Because of a VM's size, it can take a long time to start compared to a container

Docker Benefits
- Accelerate Developer Onboarding
- installing a full stack on a new hire’s computer is tedious and you can’t be sure that everything will play well in the new environment
- Eliminate App Conflicts
- You can have and run different versions of software without conflict
- Environment Consistency
- Movement from Dev to Staging to Production is seamless. If it runs in dev, it will run in production
- Ship software faster
- Predictability
- Consistency
Common Docker Commands
Note that [container id] and [container name] are interchangeable in docker commands. So are [image id] and [image name]
Also note that you only need to enter enough of the [container id] or [image id] for Docker to tell them apart; you can indeed use only the first couple of characters.
- `docker pull [image name]` to download an image from Docker Hub
- `docker run [image name]` will run an image. Common flags are described below.
  - Anything written after the image name is run as a command in that container.
- `-p [external port]:[container port]` to route ports in/out of the container
- `-v <path/in/container>` will create a volume with a random UUID connected to that path. This is explained later.
- `-v <path/in/host>:<path/in/container>` creates a bind-mount style volume. This is explained later.
- `-w <working/directory>` will set the working directory for any command that you want to run. This is explained later.
- `-it` will link your shell to the Docker container's shell after executing run. This is also called "interactive mode". You'll need to run a command like `/bin/bash` after the image name in order to get a shell into the machine that you can use.
- `-d` runs the container in "daemon mode", in the background
- `--name <name>` will assign a name to that container
- `--link <name>:<internal alias>` will link the network of this container to the name of another container. This is not needed if using a custom network. This is explained later.
- `--net=<network name>` will run the container in an already existing custom network.
- `docker images` to list images and basic facts
- `docker ps` to list running containers (add `-a` to see stopped ones too) and basic facts
- `docker rm [container id]` to delete a container (but not the image)
  - `-v` will delete the Docker-made volumes that the container was using. Don't run this unless this is the last container using the volume. Volumes are explained below.
- `docker rmi [image id]` to delete an image
- `docker exec <container name> <command>` will execute a command in a container. This can be combined with the aforementioned `-it` flag to get a shell in an already running container.
- `docker network create --driver bridge <network name>` will create a custom bridge network that you can run containers in
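Tying these together, a minimal end-to-end sketch; the public `nginx` image and the container name `web` are placeholders, not from the course:

```bash
docker pull nginx                           # download an image from Docker Hub
docker run -d -p 8080:80 --name web nginx   # run it in the background, routing host port 8080 to container port 80
docker ps                                   # list running containers
docker exec -it web /bin/bash               # open a shell inside the running container
docker stop web                             # stop the container
docker rm web                               # delete the container (the image stays)
docker rmi nginx                            # delete the image
```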
Hook Source Code into a Container
Note that I may use $(pwd) in some commands as a way to get the current working directory. This is the syntax on Linux and Mac. This is different on Windows and DOS. In PowerShell, it would be ${PWD}, for example.
- Docker uses a layered file system
- Each layer in an image is read-only
- A container adds a thin read/write layer on top of the read only image layers. This thin layer is the difference between an image and a container.
- Containers can share image layers! So you don't need a new copy of the file system for multiple containers of the same image. Taking it a step further, if one specific layer is used in multiple images, that layer does not need to be pulled down from Docker Hub again, saving even more space.
- You could put your code into the thin read-write layer…but even better is to use a volume.

- Volumes are used for persistent storage across container creation/deletion.
- Typically referred to as a Data Volume
- Multiple containers can read-write to the same volume simultaneously.
- Any updates to an image don't change the volume
- Data volumes are persisted even after the container is deleted.
- Volumes just act as aliases to folders on the Host
- Without a volume, any changes in docker are not saved
- Let Docker create a data volume for you by adding `-v <path/in/container>` to your `docker run` command. Docker will make a volume with a random UUID and store it in a place of its choosing. You can see where this volume lives by running `docker inspect <container name>`. On Linux it will be in `/var/lib/docker/volumes/`
- Customize the volume by adding your own folder path. Add `-v <host/path>:<path/in/container>` to your `docker run` command. Example: `docker run -p 8080:3000 -v $(pwd):/var/www node` will mount the current working directory at `/var/www` inside the container.
- Source code can thus live on your host but be passed into a container with a volume
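A quick way to see where Docker put an anonymous volume. Here `web` is an illustrative container name, and the `--format` filter just narrows the `docker inspect` output down to the mounts:

```bash
docker run -d --name web -v /var/www node
docker inspect --format '{{ json .Mounts }}' web
# Prints the volume's generated name and its Source path,
# e.g. under /var/lib/docker/volumes/<uuid>/_data on Linux
```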

- You can link your code into a Container using the current working directory (as seen two bullet points ago).
- Anything written after the image name in the `docker run` command is executed in the container on start. So `docker run -p 8080:3000 -v $(pwd):/var/www node npm start` will run `npm start` in the node container on boot.
  - However, `npm start` wouldn't be run in the current folder! You can use the `-w` flag in `docker run` to specify the working directory for the commands that are about to be run. So `-w <path/to/run/commands/in>` will run `npm start` in that folder.
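Putting the flags together, a sketch of the full command this section builds toward (ports and paths follow the course's running Node example):

```bash
# Mount the current directory at /var/www, make it the working
# directory, then run "npm start" there on boot
docker run -d -p 8080:3000 -v $(pwd):/var/www -w /var/www node npm start
```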
Building Custom Images with Dockerfile
- Think about images as being a type of pre-built – or compiled – code. How can you compile your own image? With `docker build` and a `Dockerfile`
- The `Dockerfile` is a text file with pre-defined Docker instructions in it. These instructions are documented on Docker's website.
- You can name this file whatever you want, really, but you'll need an extra flag (`-f <filename>`) in `docker build`
- BASIC FORMAT:
- FROM (what image to use as your base, to build upon)
  - without a version tag on the end, `:latest` is implied
- LABEL (specify things like who owns or maintains the image)
- RUN (will run commands on the container)
- COPY (can copy things like source code into the container)
- ENTRYPOINT (the initial starting command for the container, written as a JSON array. This is different from RUN: RUN executes while the image is being built, before any container exists, whereas ENTRYPOINT runs when a container is started from the image.)
- WORKDIR (set context directory for next commands)
- EXPOSE (will expose a port)
- ENV (defines environment variables to be used in the container)
- VOLUME (define the volume and how it stores data on the host system)
- Examples:
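A minimal Dockerfile sketch for a Node app in the spirit of the course (values are illustrative, not the course's exact example):

```dockerfile
FROM node:latest
LABEL author="your name here"
ENV NODE_ENV=production
ENV PORT=3000
COPY . /var/www
WORKDIR /var/www
RUN npm install
EXPOSE $PORT
ENTRYPOINT ["npm", "start"]
```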


- Build the image with `docker build -t <your username>/<image name>:<optional version info> .`. The `-t` stands for "tag"
- Every instruction in the `Dockerfile` creates an "intermediate container", which is cached behind the scenes so each new build won't take nearly as long as the first one!
- You can push your image to a registry with `docker push <your username>/<image name>`. You may need to log in to the registry first; `docker login` will do this.
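A build-and-push sketch (the username `janedoe` and image name `node-app` are placeholders):

```bash
docker build -t janedoe/node-app:1.0 .   # build from the Dockerfile in the current directory
docker login                             # authenticate against the registry (Docker Hub by default)
docker push janedoe/node-app:1.0         # publish the tagged image
```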
Communicating between Docker Containers
- There are two ways to link containers
- Legacy Linking – using container names.
- To add a name to a container, use the `--name <name>` flag on `docker run`
- On the second container, you'll need the `--link <name>:<internal alias>` flag. The name must be the same name as the first container in order to link to it. The internal alias is a name that the second container can use internally instead of the original name.
- Custom Bridge Network – isolated network where only containers in the network can communicate with each other.
- Basic idea: create a custom bridge network, then run containers in that network
- You can create a new bridge network with `docker network create --driver bridge <network name>`
- For each container you'd like to run in that network, use the `--net=<network name>` flag in the `docker run` command. Make sure to name the container with `--name <name>` so that other containers know which hostname they can use to connect to it.
- You can see all containers connected to a network with `docker network inspect <network name>`
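A sketch of the custom bridge network approach (the network and container names are illustrative):

```bash
docker network create --driver bridge isolated_network
docker run -d --net=isolated_network --name mongodb mongo
docker run -d --net=isolated_network --name node -p 8080:3000 \
  -v $(pwd):/var/www -w /var/www node npm start
# The node container can now reach the database at the hostname "mongodb"
docker network inspect isolated_network
```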
Managing Containers with Docker-Compose
- Provides a great way to get multiple containers up and running, named, connected, and configured with a single command
- In docker-compose context, a container in a docker-compose “stack” is called a “service”
- docker-compose manages the whole application lifecycle
- Start, stop, and rebuild services
- View the status of running services
- Stream the log output of running services
- Run a one-off command on a service
- There's a single file that controls it: `docker-compose.yml`
- version: (version of docker-compose)
- services: (define what containers you want to be running)
- build: (build context for this service container)
- context: (which folder to use for building)
- dockerfile: (name of dockerfile to use to build service container)
- container_name: (name the container)
- environment: (define environment variables)
- image: (which image to use if not using “build”)
- networks: (which network to connect to)
- ports: (which ports are exposed)
- volumes: (how volumes will be mounted in the service container)
- networks: (what networks to create for this stack)
- <network name>: (name your network)
  - driver: (driver the network will use, e.g. bridge)
- volumes: (which volumes to create for this stack)
- <volume name>: (name your volume)

- docker-compose commands:
- `docker-compose build` will build everything but not run it
  - You can add a service name to the end to build only that one service
- `docker-compose up` will create and run every service container (and build it first if not done already)
  - The `--no-deps <service name>` flag will create and bring up a single service and leave out any services it depends on. This "depends on" relationship is part of the `docker-compose.yml` file. This is especially useful when you're rebuilding one service but don't want to rebuild the others (which would effectively take them out of commission while they build)
  - You may want the `-d` flag for daemon mode, to run the stack in the background
- `docker-compose down` will tear down all the services and remove them. Use `stop` instead if you don't want to remove the service containers.
  - The `--rmi <"all" or "local">` flag will even remove the images the services used
  - The `--volumes` flag will delete any volumes the services were using
- `docker-compose logs` will show the logs for all the services. The output looks the same as `docker-compose up` does when run without `-d`. Unlike `docker-compose up`, it is safe to exit with Ctrl+C without stopping the stack.
- `docker-compose ps` will show you the running services
- `docker-compose stop` will stop all the services
- `docker-compose start` will start all the services
- `docker-compose rm` will remove all the services
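A typical day-to-day flow with these commands, as a sketch:

```bash
docker-compose build        # build all the service images
docker-compose up -d        # create and start the stack in the background
docker-compose logs         # stream logs; Ctrl+C exits without stopping anything
docker-compose ps           # list the running services
docker-compose down         # stop and remove the service containers
```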
- Examples:
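A `docker-compose.yml` sketch exercising the keys from the BASIC FORMAT above (service names, files, and ports are illustrative):

```yaml
version: "3"

services:
  node:
    build:
      context: .
      dockerfile: node.dockerfile
    container_name: node
    ports:
      - "8080:3000"
    networks:
      - nodeapp-network

  mongodb:
    image: mongo
    container_name: mongodb
    networks:
      - nodeapp-network

networks:
  nodeapp-network:
    driver: bridge
```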


Moving to Kubernetes
- Docker-Compose doesn’t do everything we might need in production…
- How can we scale, run, and manage containers without the nitty-gritty commands?
- Docker-Compose does have a scaling feature (and the ability to restart containers if they fail), but it does not do load balancing. In production, you might want something more robust.

- What if we could define the containers we want and then hand it off to a system that manages it all for us? This is Kubernetes.
- Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
- Kubernetes is the conductor of a container orchestra


A Kubernetes “cluster” has a “Master” node that directs the worker nodes
- Worker nodes are VMs that contain “pods”, which contain containers

You can set up Kubernetes locally with
- Minikube, or
- Docker Desktop (has Kubernetes support by flipping one switch)
- Go to Docker Tray Icon > Preferences or Settings > Kubernetes > Enable Kubernetes
Kubernetes Key Concepts:
- Deployment
- Describe the desired state
- Can be used to replicate pods
- Support rolling updates and rollbacks
- Service
- Pods can live and die, so we can't rely on their IP addresses or on which worker node they're running
- Services can abstract pod IP addresses from consumers.
- Load balancing between pods
- A service has an IP address, but so does a pod. A consumer of the pod needs the service IP so that it can keep a connection to the pod, even after the pod moves and changes IPs.
- Thus, the networking is abstracted for the consumer, because all they need to know is the service ip which does not change.
In summary, for each type of image in a cluster stack, there will be one Service and one Deployment, and the Deployment runs one or more containers in pods.

- You can convert from docker-compose to Kubernetes in a couple ways…
- You can use a Docker Desktop feature called "Compose on Kubernetes"
- Pre-installed on Docker Desktop
- Uses docker “stacks”
- Docker has its own orchestration solution called Docker Swarm, but Docker Desktop supports Kubernetes nonetheless.
- You can deploy a stack from a docker-compose file with `docker stack deploy --orchestrator=kubernetes -c <docker-compose file name> <stack name>`
- Notice the `deploy:` and `replicas:` keywords in the `docker-compose.yml`. These set how many of each container you'd like
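A fragment showing where those keywords sit in the compose file (the service name and replica count are illustrative):

```yaml
services:
  node:
    image: node-app
    deploy:
      replicas: 3   # run three copies of this service
```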

- You can also use an open-source project called Kompose, which you'll need to install yourself; follow the instructions on kompose.io
- Convert a `docker-compose.yaml` file with `kompose convert`.
  - You can add the `--out <file name>.yml` flag to get everything `kompose` creates into a single file instead of many.
- This will create many files for Kubernetes to work with – mainly `deployment` and `service` files for each type of container
  - In deployment files, the `replicas:` keyword is also present to define how many of each container is wanted
  - In service files, the `port:` keyword is set so that the service can connect to the pod on the right port whenever a consumer asks the service for data.
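A rough sketch of the kind of deployment/service pair such tools generate (names, labels, and ports are illustrative, not exact kompose output):

```yaml
# node-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
spec:
  replicas: 2                  # how many pods of this container to run
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
    spec:
      containers:
        - name: node
          image: node-app
          ports:
            - containerPort: 3000
---
# node-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: node
spec:
  selector:
    app: node                  # route to pods carrying this label
  ports:
    - port: 3000               # port consumers call the service on
      targetPort: 3000         # port the pod listens on
```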
- Common Kubernetes Commands
- `kubectl version` to get version info
- `kubectl get [deploy | services | pods]` will get you information on each. This is how you can see the names and other info in your cluster
- `kubectl run <name> --image=<image name>:<image version>` to get a specific container up and running
- `kubectl delete deployment <deployment name>` will delete a single deployment (a set of one or more containers in a pod)
- `kubectl apply -f [fileName | folderName]` to create or update all the resources specified in the config file(s)
- `kubectl delete -f [fileName | folderName]` to delete all the resources specified in the config file(s)
- `kubectl port-forward <name of pod> <external port>:<internal port>` to forward a local port to a port on the pod
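A sketch of the typical loop with the generated config (the file name `k8s.yml` is illustrative):

```bash
kompose convert --out k8s.yml                 # convert docker-compose.yaml into Kubernetes resources
kubectl apply -f k8s.yml                      # create the deployments and services
kubectl get pods                              # watch the pods come up
kubectl port-forward <pod name> 8080:3000     # reach a pod directly from localhost
kubectl delete -f k8s.yml                     # tear it all back down
```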
- There are many Pluralsight courses on Kubernetes to get you started on it.




