250 Practice Questions for the DCA Exam

Sample questions get you ready for the Docker Certified Associate exam

Bhargav Bachina
Nov 2, 2019 · 28 min read

Docker is an essential tool in every organization nowadays. Every company is implementing DevOps and containerizing their applications for easier and faster production deployments. The Docker Certified Associate exam is designed to validate the Docker skills necessary for any individual to succeed in today's DevOps world.

These practice questions are intended for those who want to take the DCA exam and are entirely based on the official study guide from Docker. All the material for these questions is taken from the official Docker documentation. The questions are divided according to the sections of the study guide.

Orchestration (25%)

docker swarm init --advertise-addr <MANAGER-IP>

This flag configures the IP address for the manager node. The other nodes in the swarm must be able to reach the manager at this IP address.

docker info // you can find the info under the swarm section

docker node ls

// it generates the instructions for adding a manager node
docker swarm join-token manager

// it generates the instructions for adding a worker node
docker swarm join-token worker

docker run <image>

When Docker restarts, both the TLS key used to encrypt communication among swarm nodes, and the key used to encrypt and decrypt Raft logs on disk, are loaded into each manager node's memory. Docker 1.13 introduces the ability to protect the mutual TLS encryption key and the key used to encrypt and decrypt Raft logs at rest, by allowing you to take ownership of these keys and to require manual unlocking of your managers. This feature is called autolock.

// This command produces an unlock key. Store it in a safe place.
docker swarm init --autolock

docker swarm unlock

No. You can lock an existing swarm as well:

//enable autolock
docker swarm update --autolock=true
//disable autolock
docker swarm update --autolock=false

docker swarm unlock-key

docker swarm unlock-key --rotate

Yes

// for the nginx image
docker service create --replicas 3 --name nginx-web nginx

docker service ls

docker service ps <service name>

docker service inspect <service name>

docker service inspect <service> --pretty

docker service ps <service>

// you need to run this command on the particular node
docker ps

stack

A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together.

// deploy a new stack or update an existing one
docker stack deploy -c <compose-file> <stack-name>
// list the services in the stack
docker stack services <stack-name>
// list the tasks in the stack
docker stack ps <stack-name>
// remove the stack
docker stack rm <stack-name>
// list stacks
docker stack ls

// with the help of the --filter flag
docker stack services nginx-web --filter name=web

docker stack services --format "{{.ID}}: {{.Mode}} {{.Replicas}}"

docker service scale SERVICE=REPLICAS
// example
docker service scale frontend=50
// you can scale multiple services as well
docker service scale frontend=50 backend=30
// you can also scale with the update command
docker service update --replicas=50 frontend

docker service rollback my-service

Overlay networks manage communications among the Docker daemons participating in the swarm. You can attach a service to one or more existing overlay networks as well, to enable service-to-service communication.
The ingress network is a special overlay network that facilitates load balancing among a service's nodes. When any swarm node receives a request on a published port, it hands that request off to a module called IPVS. IPVS keeps track of all the IP addresses participating in that service, selects one of them, and routes the request to it, over the ingress network.
The docker_gwbridge is a bridge network that connects the overlay networks (including the ingress network) to an individual Docker daemon's physical network.

Yes

Yes

docker network create --driver overlay my-network
// you can customize it
docker network create \
--driver overlay \
--subnet 10.0.9.0/24 \
--gateway 10.0.9.99 \
my-network

docker network inspect my-network

docker service create \
--replicas 3 \
--name my-web \
--network my-network \
nginx

Yes

docker network inspect my-network
or
docker service ls // for the name
docker service ps <SERVICE> // to list the networks

Yes

docker network rm ingress
docker network create \
--driver overlay \
--ingress \
--subnet=10.11.0.0/16 \
--gateway=10.11.0.2 \
--opt com.docker.network.mtu=1200 \
my-ingress

Originally, the -v or --volume flag was used for standalone containers and the --mount flag was used for swarm services. However, starting with Docker 17.06, you can also use --mount with standalone containers. In general, --mount is more explicit and verbose.
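
For a standalone container the two flags are interchangeable; for example, the following commands are equivalent (run one or the other, since both use the name devtest):
// -v syntax
docker run -d --name devtest -v myvol2:/app nginx:latest
// --mount syntax
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest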

docker service create -d \
--replicas=4 \
--name devtest-service \
--mount source=myvol2,target=/app \
nginx:latest

No

When building fault-tolerant applications, you might need to configure multiple replicas of the same service to have access to the same files. Volume drivers allow you to abstract the underlying storage system from the application logic. For example, if your services use a volume with an NFS driver, you can update the services to use a different driver, as an example to store data in the cloud, without changing the application logic.

docker volume create --driver vieux/sshfs \
-o sshcmd=test@node2:/home/test \
-o password=testpassword \
sshvolume

docker service create -d \
--name nfs-service \
--mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,volume-opt=o=addr=10.0.0.10' \
nginx:latest

global

replicated

It is always best practice to use a client bundle to troubleshoot UCP clusters.

docker service ls
docker service ps <service>
docker service inspect <service>
docker inspect <task>
docker inspect <container>
docker logs <container>

You can use labels to add metadata about a node.

docker node update --label-add foo worker1
// add multiple labels
docker node update --label-add foo --label-add bar worker1

docker node update --label-rm foo worker1

--placement-pref
// example: if we have three datacenters, 3 replicas will be placed in each datacenter
docker service create \
--replicas 9 \
--name redis_2 \
--placement-pref 'spread=node.labels.datacenter' \
redis:3.0.6

--constraint
// example: the following limits tasks for the redis service to nodes where the node type label equals queue
docker service create \
--name redis_2 \
--constraint 'node.labels.type == queue' \
redis:3.0.6

Raft Consensus Algorithm

Quorum ensures that the cluster state stays consistent in the presence of failures by requiring a majority of nodes to agree on values. Raft tolerates up to (N-1)/2 failures and requires a majority or quorum of (N/2)+1 members to agree on values proposed to the cluster. For example, a swarm with five managers tolerates two manager failures and needs three managers to reach consensus. Without a quorum, the swarm won't be able to serve requests.

--env
--mount
--hostname
// example
docker service create --name hosttempl \
--hostname="{{.Node.Hostname}}-{{.Node.ID}}-{{.Service.Name}}" \
busybox top

Image Creation, Management, and Registry (20%)

FROM

No. ARG is the only instruction that may precede FROM; see the ARG example further below.

shell form: RUN <command>
exec form: RUN ["executable", "param1", "param2"]

The RUN instruction will execute any commands in a new layer on top of the current image and commit the results.

--no-cache
docker build --no-cache .

Yes. ADD

CMD ["executable","param1","param2"] (exec form, this is the preferred form)CMD ["param1","param2"] (as default parameters to ENTRYPOINT)CMD command param1 param2 (shell form)

Yes

These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.

Use ENTRYPOINT in combination with CMD.
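
A minimal Dockerfile sketch (assuming the base image provides ping): ENTRYPOINT fixes the executable, while CMD supplies default arguments that docker run can override.
# always run ping; the target defaults to localhost
ENTRYPOINT ["ping"]
CMD ["localhost"]
Running docker run <image> example.com replaces localhost while keeping the ping entrypoint.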

It adds metadata to the image.

docker inspect // Under Labels section

No. It serves as a type of documentation between the image publisher and the image consumer.

Use the -p flag when running a container.

ENV <key> <value>
The ENV instruction sets the environment variable <key> to <value>; it is available in subsequent build steps and in the running container as well.

docker run --env <key>=<value>

ADD [--chown=<user>:<group>] <src>... <dest>
The ADD instruction copies new files, directories or remote file URLs from <src> and adds them to the filesystem of the image at the path <dest>.
COPY [--chown=<user>:<group>] <src>... <dest>
The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.

An ENTRYPOINT allows you to configure a container that will run as an executable. Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD.

docker run --entrypoint
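
For example, to override the image's entrypoint at run time (my-image is a placeholder name):
docker run -it --entrypoint /bin/sh my-image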

The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers.

docker run -v

The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.

The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.

WORKDIR /a
WORKDIR b
WORKDIR c
RUN pwd
result: /a/b/c

WORKDIR /a
WORKDIR /b
WORKDIR c
RUN pwd
result: /b/c

ARG <name>[=<default value>]
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.
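
A minimal sketch (VERSION is an illustrative name); as noted earlier, ARG is also the one instruction that may precede FROM:
# Dockerfile
ARG VERSION=latest
FROM ubuntu:${VERSION}
// override the default at build time
docker build --build-arg VERSION=18.04 .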

The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build.
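
A minimal sketch (paths are illustrative). In the base image's Dockerfile:
# runs only when a downstream image is built FROM this one
ONBUILD COPY . /app/src
Any image built FROM this base then automatically copies its own build context into /app/src.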

STOPSIGNAL signal

HEALTHCHECK

--interval=DURATION (default: 30s)
--timeout=DURATION (default: 30s)
--start-period=DURATION (default: 0s)
--retries=N (default: 3)
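
For example, a health check that polls a web server inside the container (the endpoint is illustrative):
HEALTHCHECK --interval=5m --timeout=3s \
CMD curl -f http://localhost/ || exit 1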

The SHELL instruction allows the default shell used for the shell form of commands to be overridden. The default shell on Linux is ["/bin/sh", "-c"], and on Windows is ["cmd", "/S", "/C"]. The SHELL instruction must be written in JSON form in a Dockerfile.
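
For example, to switch shell-form instructions to bash (assuming the image contains bash):
SHELL ["/bin/bash", "-c"]
RUN echo "now interpreted by bash"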

Yes

use .dockerignore file

Multi Stage Builds

Only the instructions RUN, COPY, and ADD create layers. Where possible, use multi-stage builds, and only copy the artifacts you need into the final image (a multi-stage sketch follows the example below). Sort multi-line arguments:
RUN apt-get update && apt-get install -y \
bzr \
cvs \
git \
mercurial \
subversion
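
A minimal multi-stage sketch (the Go program and paths are illustrative); only the compiled binary is copied into the small final image:
# build stage
FROM golang:1.13 AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app .
# final stage
FROM alpine:3.10
COPY --from=builder /bin/app /bin/app
ENTRYPOINT ["/bin/app"]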

Put instructions that are likely to change often at the bottom of the Dockerfile.

docker image prune

docker image history

by using the --format flag
// examples
docker inspect --format='{{range .NetworkSettings.Networks}}{{.MacAddress}}{{end}}' $INSTANCE_ID
docker inspect --format='{{.LogPath}}' $INSTANCE_ID

docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
docker tag 0e5574283393 fedora/httpd:version1.0 // by id
docker tag httpd fedora/httpd:version1.0 // by name
docker tag httpd:test fedora/httpd:version1.0.test // by name and tag
docker tag 0e5574283393 myregistryhost:5000/fedora/httpd:version1.0

docker run -d -p 5000:5000 --restart=always --name registry registry:2

// pull an image from the Docker Hub
docker pull ubuntu
// tag an image
docker tag ubuntu:16.04 localhost:5000/my-ubuntu
// push the image
docker push localhost:5000/my-ubuntu

docker container stop registry && docker container rm -v registry

docker image inspect // under the Layers section

docker image load
// example
docker image load -i example.tar

// take any multiple layer image
// run the container
docker export <container> > single-layer.tar
docker import /path/to/single-layer.tar
// check the history
docker image history

Yes

Yes

Copy-on-write is a strategy of sharing and copying files for maximum efficiency. If a file or directory exists in a lower layer within the image, and another layer (including the writable layer) needs read access to it, it just uses the existing file. The first time another layer needs to modify the file (when building the image or running the container), the file is copied into that layer and modified. This minimizes I/O and the size of each of the subsequent layers.
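
A quick way to observe this (container names are illustrative): containers started from the same image share its read-only layers and each adds only a thin writable layer.
docker run -dit --name cow1 ubuntu bash
docker run -dit --name cow2 ubuntu bash
// the SIZE column shows each container's small writable layer
docker ps -s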

// customize published port
docker run -d \
-p 5001:5000 \
--name registry-test \
registry:2
// If you want to change the port the registry listens on within the container
docker run -d \
-e REGISTRY_HTTP_ADDR=0.0.0.0:5001 \
-p 5001:5001 \
--name registry-test \
registry:2
// storage customization
docker run -d \
-p 5000:5000 \
--restart=always \
--name registry \
-v /mnt/registry:/var/lib/registry \
registry:2

The Registry configuration is based on a YAML file. You can specify a configuration variable from the environment by passing -e arguments to your docker run stanza, or from within a Dockerfile using the ENV instruction.
// for example, you have a configuration like this for the root directory
storage:
  filesystem:
    rootdirectory: /var/lib/registry
// you can create an environment variable like this
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/somewhere
This changes the root directory from /var/lib/registry to /somewhere.

/etc/docker/registry/config.yml

docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/config.yml:/etc/docker/registry/config.yml \
registry:2

docker login localhost:5000

/etc/docker/daemon.json

docker search nginx --limit=2

docker search --format "{{.Name}}: {{.StarCount}}" nginx

docker push [OPTIONS] NAME[:TAG]
--disable-content-trust=true

export DOCKER_CONTENT_TRUST=1
docker push <dtr-domain>/<repository>/<image>:<tag>

docker pull [OPTIONS] NAME[:TAG|@DIGEST]
// pulling from Docker Hub by default
docker pull debian
// pulling from other repositories
docker pull myregistry.local:5000/testing/test-image

-a or --all-tags
docker pull --all-tags fedora

docker image prune -a

by using the --filter flag
docker image prune -a --filter "until=24h"

docker rmi <IMAGE ID>

docker rmi --no-prune <IMAGE ID>

Log in to the DTR web UI.
Go to the Tags section and delete the specific tag.
You can also delete all images by deleting the entire repository.

Installation and Configuration (15%)

Set up Docker's repositories and install from them, for ease of installation and upgrade tasks.

sudo apt-get update
install Docker following the instructions here

sudo apt-get purge docker-ce
sudo rm -rf /var/lib/docker

No. You need to explicitly delete those

sudo usermod -aG docker your-user

1. using repositories
2. using DEB package
3. using convenience scripts

// uninstall older versions
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
// install required libs
sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
// set up the stable repo
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
// install
sudo yum install docker-ce docker-ce-cli containerd.io
// if you want to install specific versions
sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
// start docker
sudo systemctl start docker

// uninstall older versions
sudo apt-get remove docker docker-engine docker.io containerd runc
// update
sudo apt-get update
// install required
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
// add dockers official gpg key
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
// set up stable repo
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
// update and install
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
// if you want to install specific versions
sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io

// uninstall old versions
sudo dnf remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
// install required packages
sudo dnf -y install dnf-plugins-core
// add the stable repo
sudo dnf config-manager \
--add-repo \
https://download.docker.com/linux/fedora/docker-ce.repo
// install community version
sudo dnf install docker-ce docker-ce-cli containerd.io
// if you want specific versions
sudo dnf -y install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
// start docker
sudo systemctl start docker

// uninstall old versions
sudo apt-get remove docker docker-engine docker.io containerd runc
// update and install required packages
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
// add official gpg key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
// stable repo
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
// update and install
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
// if you want specific versions
sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io

CentOS: overlay2
Ubuntu supports the overlay2, aufs, and btrfs storage drivers; overlay2 is the default.

Stable gives you the latest releases for general availability.
Test gives pre-releases that are ready for testing before general availability.
Nightly gives you the latest builds of work in progress for the next major release.

Docker Engine - Community binaries for a release are available on download.docker.com as packages for the supported operating systems.

Docker Hub

Docker has multiple mechanisms to get the logging information from running docker containers and services. These mechanisms are called logging drivers

configure the log-driver in /etc/docker/daemon.json
{
  "log-driver": "syslog"
}

json-file

use log-opts in the daemon.json file
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "production_status",
    "env": "os,customer"
  }
}

docker info --format '{{.LoggingDriver}}'

docker run -it --log-driver json-file --log-opt max-size=10m alpine ash

json-file
local
journald

Yes

No. All the existing tasks will continue to run, but new nodes cannot be added and new tasks can't be created.

Yes. If the whole swarm restarts and every manager node subsequently gets a new IP address, there is no way for any node to contact an existing manager. Therefore the swarm hangs while nodes try to contact one another at their old IP addresses.

Yes.

Manager Nodes    Availability Zones
3                1-1-1
5                2-2-1
7                3-2-2
9                3-3-3

docker node update --availability drain <NODE>

1. To demote the node to a worker, run docker node demote <NODE>
2. To remove the node from the swarm, run docker node rm <NODE>
3. Re-join the node to the swarm with a fresh state using docker swarm join

docker node rm --force <NODE>

Yes. You must ensure that there is a quorum

/var/lib/docker/swarm

1. If autolock is enabled, you must unlock the swarm.
2. Stop Docker on the manager node so that you don't get unpredictable results.
3. Save the entire contents of /var/lib/docker/swarm.
4. Start the manager again.

1. Shut down Docker on the target machine.
2. Remove the contents of /var/lib/docker/swarm.
3. Restore the /var/lib/docker/swarm directory from the backup.
4. Start Docker on the node and re-initialize the swarm so that it doesn't connect to the old ones:
docker swarm init --force-new-cluster
5. Verify the state of the swarm with docker service ls.
6. Rotate the autolock key.
7. Add manager and worker nodes for the required capacity.
8. Back up this swarm.

A team defines the permissions a set of users have for a set of repositories.

Read Only: View repository and pull images.
Read Write: View repository, pull and push images.
Admin: Manage repository and change its settings, pull and push images.

/var/lib/docker on Linux
C:\ProgramData\docker on Windows

1. Add this flag in /etc/docker/daemon.json:
{
  "debug": true
}
2. Send a HUP signal to the daemon to cause it to reload its configuration:
sudo kill -SIGHUP $(pidof dockerd)

// all these can be used depending on the operating system
docker info
sudo systemctl is-active docker
sudo status docker
sudo service docker status

Minimum
1. 8GB of RAM for manager nodes or nodes running DTR
2. 4GB of RAM for worker nodes
3. 3GB of free disk space
Recommended
1. 16GB of RAM for manager nodes or nodes running DTR
2. 4 vCPUs for manager nodes or nodes running DTR
3. 25-100GB of free disk space

UCP
DTR
Docker Engine with enterprise-grade support

/etc/docker/certs.d

80/tcp  - Web app and API client access to DTR
443/tcp - Web app and API client access to DTR

Yes

To create a UCP backup, run the docker/ucp backup command on a single UCP manager:
docker container run \
--log-driver none --rm \
--interactive \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp:2.2.22 backup \
--id <ucp-instance-id> \
--passphrase "secret" > /tmp/backup.tar

To restore UCP, run the docker/ucp restore command:
docker container run --rm -i --name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp:2.2.22 restore --passphrase "secret" < /tmp/backup.tar

Yes

To perform a backup of a DTR node, run the docker/dtr backup command.

Yes

sudo systemctl enable docker

Networking (15%)

Bridge

docker network ls

// since no network is specified, it will be connected to the default bridge network
docker run -dit --name alpine1 alpine ash

docker network inspect bridge

Yes

docker network create --driver bridge my-network

docker network inspect my-network

docker run -dit --name alpine1 --network my-network alpine ash

docker network connect my-network alpine2

// using  nicolaka/netshoot
docker run -it --rm --network container:<container_name> nicolaka/netshoot

docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t <image>

// List the containers
docker ps
// use this command with container name
docker port <CONTAINER NAME>
// use a specific port
docker port <CONTAINER NAME> <specific port>

Bridge Network Driver
Overlay Network Driver
MACVLAN Driver
Host
None

The bridge driver creates a private network internal to the host so containers on this network can communicate. The bridge driver does the service discovery for us automatically if two containers are on the same network. The bridge driver is a local scope driver, which means it only provides service discovery, IPAM, and connectivity on a single host.

local

The built-in Docker overlay network driver radically simplifies many of the complexities in multi-host networking. It is a swarm scope driver, which means that it operates across an entire Swarm or UCP cluster rather than on individual hosts.

swarm

The macvlan driver is the newest built-in network driver and offers several unique characteristics. It’s a very lightweight driver, because rather than using any Linux bridging or port mapping, it connects container interfaces directly to host interfaces.

local
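
A minimal macvlan sketch (the subnet, gateway, and parent interface are illustrative):
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 \
my-macvlan-net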

With the host driver, a container uses the networking stack of the host. There is no namespace separation, and all interfaces on the host can be used directly by the container.

local
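
For example, the container below shares the host's network stack, so no -p port mapping is needed:
docker run --rm -d --network host --name my-nginx nginx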

The none driver gives a container its own networking stack and network namespace but does not configure interfaces inside the container. Without additional configuration, the container is completely isolated from the host networking stack.

local
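
For example, only the loopback interface exists inside this container:
docker run --rm -it --network none alpine ash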

Yes

A Sandbox contains the configuration of a container's network stack. This includes the management of the container's interfaces, routing table, and DNS settings. An implementation of a Sandbox could be a Windows HNS or Linux Network Namespace, a FreeBSD Jail, or other similar concept. A Sandbox may contain many endpoints from multiple networks.

An Endpoint joins a Sandbox to a Network. The Endpoint construct exists so the actual connection to the network can be abstracted away from the application. This helps maintain portability so that a service can use different types of network drivers without being concerned with how it's connected to that network.

The CNM does not specify a Network in terms of the OSI model. An implementation of a Network could be a Linux bridge, a VLAN, etc. A Network is a collection of endpoints that have connectivity between them. Endpoints that are not connected to a network do not have connectivity on a network.

Network Drivers

Docker has a native IP Address Management Driver that provides default subnets or IP addresses for the networks and endpoints if they are not specified.

edit the /etc/docker/daemon.json
{
  "dns": ["10.0.0.2", "8.8.8.8"]
}
// restart the docker daemon
sudo systemctl restart docker

ingress

docker_gwbridge

ingress

docker network create -d overlay my-overlay

create with the --attachable flag
docker network create -d overlay --attachable my-attachable-overlay

Yes

No

// use the --opt encrypted flag
docker network create --opt encrypted --driver overlay --attachable my-attachable-multi-host-network

To publish a service’s port directly on the node where it is running, use the mode=host option to the --publish flag.
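
For example (a sketch; the dns-cache image name is illustrative):
docker service create --name dns-cache \
--publish published=53,target=53,protocol=udp,mode=host \
dns-cache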

Security (15%)

Through DCT, image publishers can sign their images and image consumers can ensure that the images they use are signed.

Docker Content Trust

docker trust key generate

docker trust key load

docker trust sign dtr.example.com/admin/demo:1

export DOCKER_CONTENT_TRUST=1

docker trust inspect

docker trust revoke

A grant defines who has how much access to a set of resources.

A subject can be a user, team, or organization, and is granted a role for a set of resources.

A role is a set of permitted API operations that you can assign to a specific subject and collection by using a grant

A client bundle is a group of certificates downloadable directly from the Docker Universal Control Plane (UCP) user interface within the admin section for “My Profile”. This allows you to authorize a remote Docker engine to a specific user account managed in Docker EE, absorbing all associated RBAC controls in the process. You can now execute docker swarm commands from your remote machine that take effect on the remote cluster.
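
A typical usage sketch (the bundle file name varies per user):
// download the bundle from the UCP UI, then
unzip ucp-bundle-admin.zip
eval "$(<env.sh)"
// commands now run against the UCP cluster
docker node ls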

Client Bundle

Namespaces

Control Groups

user

Yes

Storage and Volumes (10%)

Storage Drivers

Overlay2

direct-lvm

loopback-lvm

docker info

// stop docker
sudo systemctl stop docker
// set the device-mapper in /etc/docker/daemon.json file
{
  "storage-driver": "devicemapper"
}
//start docker
sudo systemctl start docker

dm.directlvm_device

{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
}

Block Storage
File System Storage
Object Storage

Yes

/var/lib/docker/<storage-driver>

Volumes are completely managed by Docker.
Bind mounts are dependent on the host directory structure.
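
For example, a bind mount (the host path is illustrative and must already exist on the host):
docker run -d \
--name bindtest \
-v /home/user/target:/app \
nginx:latest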

Yes. Because volumes live outside of containers

Volumes

docker volume create my-volume

docker volume ls

docker volume inspect my-vol

docker volume rm my-vol

Yes

myvol2

docker run -d \
--name devtest \
-v myvol2:/app \
nginx:latest

// Look for the mounts section
docker inspect devtest

docker run -d \
--name devtest \
--mount source=myvol2,target=/app \
nginx:latest

docker system prune --all

Recent Updates in the Exam

Update 1:

The DCA exam has recently included some Kubernetes questions, and I would encourage you to go through this article, where Jon Middaugh has put together some excellent questions on those.

Update 2:

Now the exam format is completely different from before. You will get questions in the DOMC (Discrete Option Multiple Choice) format.

Conclusion

These 250 questions will help you understand the concepts and prepare you for the Docker Certified Associate exam. You might get similar questions or completely different ones. Either way, this article is intended to prepare you for the exam.
