Docker Guide – BMC Software | Blogs

Docker CMD vs. ENTRYPOINT: What's the Difference and How to Choose
https://s7280.pcdn.co/docker-cmd-vs-entrypoint/

CMD and ENTRYPOINT are two Dockerfile instructions that together define the command that runs when your container starts. You must use these instructions in your Dockerfiles so that users can easily interact with your images. Because CMD and ENTRYPOINT work in tandem, they can often be confusing to understand. This article helps clear up any potential confusion.

In a cloud-native setup, Docker containers are essential elements that ensure an application runs effectively across different computing environments. These containers are meant to carry out specific tasks and processes of an application workflow and are backed by Docker images.

The images, in turn, are built from instructions declared in a Dockerfile. There are three main instruction types (commands) that you use to build images and run containers:

  • RUN. Mainly used to build images and install applications and packages, RUN builds a new layer over an existing image by committing the results.
  • CMD. Sets default parameters that can be overridden from the Docker Command Line Interface (CLI) when a container is running.
  • ENTRYPOINT. Sets the executable that always runs when the container starts; CLI parameters passed to docker run are appended as arguments rather than overriding it.

Any Docker image must have an ENTRYPOINT or CMD declaration for a container to start. Though the ENTRYPOINT and CMD instructions may seem similar at first glance, there are fundamental differences in how they determine the command that a container runs.


Shell form vs. executable form

First, we need to understand how a Docker Daemon processes instructions once they are passed.

All Docker instruction types (commands) can be specified in either shell or exec form. Let's build a sample Dockerfile to understand these two forms.

(Explore more Docker commands.)

Shell command form

As the name suggests, an instruction written in shell form runs its process inside a shell: Docker executes it by invoking /bin/sh -c <command>.

Because the command passes through the shell, shell processing, such as environment variable substitution, is applied before the result is returned.

Syntaxes of shell commands are specified in the form:
<instruction> <command>
Examples of shell form commands include:

RUN         yum -y update
RUN         yum -y install httpd
COPY        ./index.html /var/www/index.html
CMD         echo "Hello World"

A Dockerfile named Darwin that uses the shell form will have the following specifications:

ENV name Darwin
ENTRYPOINT /bin/echo "Welcome, $name"

(The command specifications used above are for reference. You can include any other shell command based on your own requirements.)

Based on the specification above, the output of the docker run -it darwin command will be:

Welcome, Darwin

Because this form launches an extra shell process to interpret the command, it adds overhead. As a result, shell form is usually not preferred unless you specifically need shell processing, such as variable substitution.

Executable command form

Unlike the shell command type, an instruction written in executable form directly runs the executable binaries, without going through shell validation and processing.

Executable command syntaxes are specified in the form:

<instruction> ["executable", "parameter 1", "parameter 2", ...]

Examples of executable commands include:

RUN ["yum", "-y", "update"]
CMD ["yum", "-y" "install" "httpd"]
COPY ["./index.html/var/www/index.html"]

To build a Dockerfile named Darwin in exec form:

ENV name Darwin
ENTRYPOINT ["/bin/echo", "Welcome, $name"]

Because this form bypasses shell processing, the output of the docker run -it darwin command will be returned literally as: Welcome, $name.

This is because no shell is present to substitute the environment variable. To get shell processing in exec form, specify the shell itself as the executable, i.e.:

ENV name Darwin
ENTRYPOINT ["/bin/bash", “-c” "echo Welcome, $name"]

This invokes shell processing, so the output of the container will be: Welcome, Darwin.

Commands in a containerized setup are essential instructions that are passed to the operating environment for a desired output. It is of utmost importance to use the right command form for passing instructions in order to:

  • Return the desired result
  • Ensure that you don’t push the environment into unnecessary processing, thereby impacting operational efficiency


CMD vs. ENTRYPOINT: Fundamental differences

CMD and ENTRYPOINT instructions have fundamental differences in how they function, making each one suitable for different applications, environments, and scenarios.

They both specify programs that execute when the container starts running, but:

  • CMD instructions are ignored by the daemon when parameters are stated within the docker run command.
  • ENTRYPOINT instructions are not ignored; instead, the command-line parameters are appended as arguments to the ENTRYPOINT command.

Next, let’s take a closer look. We’ll use both command forms to go through the different stages of running a Docker container.

Docker CMD

Docker CMD commands are passed through a Dockerfile that consists of:

  • Instructions on building a Docker image
  • Default binaries for running a container over the image

With a CMD instruction type, a default command/program executes even if no command is specified in the CLI.

Ideally, there should be a single CMD instruction within a Dockerfile. If there are multiple CMD instructions in a Dockerfile, all except the last one are ignored.

An essential feature of a CMD command is its ability to be overridden. This allows users to execute commands through the CLI to override CMD instructions within a Dockerfile.

A Docker CMD instruction can be written in both Shell and Exec forms as:

  • Exec form: CMD ["executable", "parameter1", "parameter2"]
  • Shell form: CMD command parameter1 parameter2

Stage 1. Creating a Dockerfile

When building a Dockerfile, the CMD instruction specifies the default program that will execute once the container runs. A quick point to note: CMD commands will only be utilized when command-line arguments are missing.

We’ll look at a Dockerfile named Darwin with CMD instructions and analyze its behavior.

The Dockerfile specifications for Darwin are:

FROM centos:7
RUN yum -y update
RUN yum -y install python
COPY ./opt/source code
CMD ["echo", "Hello, Darwin"]

The CMD instruction in the file above echoes the message Hello, Darwin when the container is started without a CLI argument.

Stage 2. Building an image

Docker images are built from Dockerfiles using the command:

$ docker build -t darwin .

The above command does two things:

  • Tells the Docker Daemon to build an image
  • Tags the image as darwin, using the Dockerfile in the current directory

Stage 3. Running a Docker container

To run a Docker container, use the docker run command:

$ docker run darwin

Since no command-line argument is passed, the container runs the default CMD instruction and displays Hello, Darwin as output.

If we add an argument with the run command, it overrides the default instruction, i.e.:

$ docker run darwin hostname

Because the argument overrides the default CMD instruction, the above command runs the container and displays the hostname, ignoring the echo instruction in the Dockerfile. The output is:

6e14beead430

which is the hostname of the Darwin container.

When to use CMD

The best way to use a CMD instruction is by specifying default programs that should run when users do not input arguments in the command line.

This instruction ensures the container is in a running state by starting an application as soon as the image is run, while still allowing users to replace that default command when they need to.

Additionally, in specific use cases, a docker run command can be executed through a CLI to override instructions specified within the Dockerfile.



Docker ENTRYPOINT

In Dockerfiles, an ENTRYPOINT instruction is used to set executables that will always run when the container is initiated.

Unlike CMD instructions, ENTRYPOINT instructions are not overridden by command-line arguments passed to docker run; those arguments are appended instead. (The only way to replace an ENTRYPOINT at run time is the --entrypoint flag.)

A Docker ENTRYPOINT instruction can be written in both shell and exec forms:

  • Exec form: ENTRYPOINT ["executable", "parameter1", "parameter2"]
  • Shell form: ENTRYPOINT command parameter1 parameter2

Stage 1. Creating a Dockerfile

ENTRYPOINT instructions are used to build Dockerfiles meant to run specific commands.

These are reference Dockerfile specifications with an ENTRYPOINT instruction:

FROM centos:7
RUN yum -y update
RUN yum -y install python
COPY ./opt/source code
ENTRYPOINT ["echo", "Hello, Darwin"]

The above Dockerfile uses an ENTRYPOINT instruction that echoes Hello, Darwin when the container is running.

Stage 2. Building an Image

The next step is to build a Docker image. Use the command:

$ docker build -t darwin .

When building this image, the daemon looks for the ENTRYPOINT instruction and specifies it as a default program that will run with or without a command-line input.

Stage 3. Running a Docker container

When running a Docker container using the Darwin image without command-line arguments, the default ENTRYPOINT instructions are executed, echoing Hello, Darwin.

In case additional command-line arguments are introduced through the CLI, the ENTRYPOINT is not ignored. Instead, the command-line parameters are appended as arguments for the ENTRYPOINT command, i.e.:

$ docker run darwin hostname

will append hostname as an extra argument to the echo command. The word is echoed rather than executed, so the output is:

Hello, Darwin hostname

When to use ENTRYPOINT

ENTRYPOINT instructions are suitable for both single-purpose and multi-mode images where there is a need for a specific command to always run when the container starts.

One popular use case is building wrapper container images that encapsulate legacy programs for containerization, using an ENTRYPOINT instruction to ensure the program will always run.
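A minimal sketch of that wrapper pattern, assuming a hypothetical legacy binary at /opt/legacy-app/run (the script and path names here are illustrative, not from the article):

#!/bin/sh
# entrypoint.sh -- wrapper around a legacy program.
# Do any required setup here (config templating, migrations, etc.), then
# hand control to the legacy binary, passing through any CLI arguments.
exec /opt/legacy-app/run "$@"

The Dockerfile then pins the wrapper as the ENTRYPOINT and supplies a default flag through CMD:

FROM centos:7
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["--default-flag"]

With this layout, docker run <image> runs the legacy program with its default flag, while docker run <image> --other-flag swaps in a different argument without touching the wrapper.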

Using CMD and ENTRYPOINT instructions together

While there are fundamental differences in their operations, CMD and ENTRYPOINT instructions are not mutually exclusive. Several scenarios may call for the use of their combined instructions in a Dockerfile.

A very popular use case for blending them is to automate container startup tasks. In such a case, the ENTRYPOINT instruction can be used to define the executable while using CMD to define parameters.

Let’s walk through this with the Darwin Dockerfile, with its specifications as:

FROM centos:7
RUN yum -y update
RUN yum -y install python
COPY ./opt/source code
ENTRYPOINT ["echo", "Hello"]
CMD ["Darwin"]

The image is then built with the command:

$ docker build -t darwin .

If we run the container without CLI parameters, it will echo the message Hello Darwin.

Appending a parameter to the command, such as a username, will override the CMD instruction; the ENTRYPOINT still executes, using the CLI parameter as its argument. For example, the command:

$ docker run darwin User_JDArwin

will return the output:

Hello User_JDArwin

This is because the ENTRYPOINT instruction still runs, while the command-line argument overrides the CMD instruction.



Using ENTRYPOINT or CMD

Both ENTRYPOINT and CMD are essential for building and running containers—it simply depends on your use case. As a general rule of thumb:

  • Use ENTRYPOINT instructions when building an executable Docker image using commands that always need to be executed.
  • Use CMD instructions when you need an additional set of arguments that act as default instructions until there is explicit command-line usage when a Docker container runs.
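As a closing illustration, assuming the darwin image built above (ENTRYPOINT ["echo", "Hello"] plus CMD ["Darwin"]), the three behaviors look like this:

$ docker run darwin                          # ENTRYPOINT + default CMD: Hello Darwin
$ docker run darwin BMC                      # CLI argument replaces CMD: Hello BMC
$ docker run --entrypoint hostname darwin    # --entrypoint replaces the ENTRYPOINT itself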

A container image requires different elements, including runtime instructions, system tools, and libraries to run an application. To get the best out of a Docker setup, it is strongly advised that your administrators understand various functions, structures, and applications of these instructions, as they are critical functions that help you build images and run containers efficiently.

Introduction To Docker: A Beginner's Guide
https://www.bmc.com/blogs/docker-101-introduction/

Docker is one of the most popular tools for application containerization. Docker enables efficiency and reduces operational overheads so that any developer, in any dev environment, can build stable and reliable applications.

Let’s take a look at Docker, starting with application development before Docker.


App development today

One common challenge for DevOps teams is managing an application’s dependencies and technology stack across various cloud and development environments. As part of their routine tasks, they must keep the application operational and stable—regardless of the underlying platform that it runs on.

Development teams, on the other hand, focus on releasing new features and updates. Unfortunately, these releases often compromise the application's stability by deploying code that introduces environment-dependent bugs.

To avoid this inefficiency, organizations are increasingly adopting a containerized framework that allows them to design a stable application environment without adding:

  • Complexities
  • Security vulnerabilities
  • Operational loose ends

Put simply, containerization is the process of packaging an application’s code—with dependencies, libraries, and configuration files that the application needs to launch and operate efficiently—into a standalone executable unit.

Initially, containers didn’t gain much prominence, mostly due to usability issues. However, since Docker entered the scene by addressing these challenges, containers have become practically mainstream.

What is Docker?

Docker is a Linux-based, open-source containerization platform that developers use to build, run, and package applications for deployment using containers. Unlike virtual machines, Docker containers offer:

  • OS-level abstraction with optimum resource utilization
  • Interoperability
  • Efficient build and test
  • Faster application execution

Fundamentally, Docker containers modularize an application’s functionality into multiple components that allow deploying, testing, or scaling them independently when needed.

Take, for instance, a Docker containerized database of an application. With such a framework, you can scale or maintain the database independently from other modules/components of the application without impacting the workloads of other critical systems.

Components of a Docker architecture


Docker comprises the following different components within its core architecture:

  • Images
  • Containers
  • Registries
  • Docker Engine

Images

Images are like blueprints containing instructions for creating a Docker container. Images define:

  • Application dependencies
  • The processes that should run when the application launches

You can get images from Docker Hub or create your own by writing specific instructions in a file called a Dockerfile.
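As a minimal sketch (the file and image names here are arbitrary examples), a Dockerfile and the commands to build and run an image from it might look like:

# Dockerfile: package a small Python script into an image
FROM python:3.9-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]

$ docker build -t my-first-image .
$ docker run my-first-image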

Containers

Containers are live instances of images on which an application or its independent modules are run.

In an object-oriented programming analogy, an image is a class and a container is an instance of that class. This improves operational efficiency by letting you run multiple containers from a single image.

Registries

A Docker registry is like a repository of images.

The default registry is Docker Hub, a public registry that stores public and official images for different languages and platforms. By default, when you request an image, Docker searches for it in the Docker Hub registry.

You can also own a private registry and configure it to be the default source of images for your custom requirements.
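For example, publishing an image to a hypothetical private registry at registry.example.com typically means tagging the image with the registry host and pushing it:

$ docker login registry.example.com
$ docker tag my-first-image registry.example.com/team/my-first-image:1.0
$ docker push registry.example.com/team/my-first-image:1.0
$ docker pull registry.example.com/team/my-first-image:1.0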

Docker Engine

The Docker Engine is one of the core components of a Docker architecture and is where the application runs. You can also think of the Docker Engine as the application installed on the system that manages containers, images, and builds.

A Docker Engine uses a client-server architecture and consists of the following sub-components:

  • The Docker Daemon is basically the server that runs on the host machine. It is responsible for building images and managing containers.
  • The Docker Client is a command-line interface (CLI) for sending instructions to the Docker Daemon using special Docker commands. The client can run on the same host as the daemon or connect to a remote daemon through the Docker Engine's REST API.
  • A REST API supports interactions between the client and the daemon (a minimal example follows this list).
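As a minimal sketch, you can talk to that REST API directly over the daemon's Unix socket with curl (the exact API version and socket path can vary by installation):

$ curl --unix-socket /var/run/docker.sock http://localhost/version
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json    # same data as docker ps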

Benefits of Docker in the SDLC

There are numerous benefits that Docker enables across an application architecture. These are some of the benefits that Docker brings across multiple stages of the software development lifecycle (SDLC):

  • Build. Docker allows development teams to save time, effort, and money by dockerizing their applications into single or multiple modules. By taking the initial effort to create an image tailored for an application, a build cycle can avoid the recurring challenge of having multiple versions of dependencies that may cause problems in production.
  • Testing. With Docker, you can independently test each containerized application (or its components) without impacting other components of the application. This also enables a secured framework by omitting tightly coupled dependencies and enabling superior fault tolerance.
  • Deploy & maintain. Docker helps reduce the friction between teams by ensuring consistent versions of libraries and packages are used at every stage of the development process. Besides, deploying an already tested container eliminates the introduction of bugs into the build process, thereby enabling an efficient migration to production.

When it comes to enterprise use of containers, you can rest easy knowing that Docker works with many popular enterprise IT and DevOps tools.

Docker alternatives

Although Docker is one of the most popular choices for application containerization, there are alternatives:

  • Containerd. Originally a tool that was part of the Docker ecosystem, this Docker alternative has morphed into its own high-level container runtime. Unlike Docker, which handles network plugins and overlays, Containerd abstracts these functionalities and focuses on running and managing images.
  • LXC/LXD Linux Containers. An open-source containerization platform with a set of language bindings, libraries, and tools that enables the creation and management of virtual environments. Being tightly bound to the Linux ecosystem, its adoption rate is comparatively limited.
  • CoreOS rkt. Pronounced "rocket," this is another open-source containerization alternative to Docker. An essential feature of rkt is that it is arguably a more secure containerization platform that addresses some of the vulnerable flaws within Docker's design.

A few other lesser-known alternatives include OpenVZ and runc.

Docker supports business agility

The idea of an agile, consistent, and independent environment that allowed faster builds and application interoperability turned out to be more challenging in virtual machines than initially thought.

Thanks to Docker, an organization can now fill the gaps left by virtual machines—without duplicating computing resources and while avoiding effort redundancy. In today's cloud-native environment, Docker is synonymous with application efficiency and maintainability.

No wonder organizations continue adopting Docker!

Docker Security: 14 Best Practices for Securing Docker Containers
https://www.bmc.com/blogs/docker-security-best-practices/

Containerization of applications involves packaging application code in a virtual container with its dependencies—the required libraries, frameworks, and configuration files. This approach aids portability and operates consistently across various computing environments and infrastructure, without losing efficiency.

One particularly popular container platform is Docker. Organizations use Docker for developing applications that are:

  • Efficiently optimized
  • Highly scalable
  • Portable
  • Agile

Through its lightweight runtime environments, Docker lets containers share the underlying operating system to host applications that support a DevOps environment. As a critical element of the cloud-native framework, Docker brings numerous benefits to your software development lifecycle (SDLC). But those benefits aren't without risk. You're likely to face complexities, particularly when it comes to securing the Docker framework.

By default, Docker containers are secure. However, it is imperative that you know possible vulnerabilities in order to adopt an approach that safeguards against potential security risks.

So, in this article, we’ll look at the best practices for securing a Docker-based architecture across three key areas:

  • Infrastructure
  • Images
  • Access and authentication

Let’s get started.



Securing Docker infrastructure

Containers are virtualized units that can host applications. To do so, containers hold:

  • Code binaries
  • Configuration files
  • Related dependencies

Since containers form the foundation of a cloud-native setup, securing them from potential attack vectors is a critical activity throughout the container lifecycle. A holistic approach to securing such a framework is to protect not only the Docker container but also its underlying infrastructure.

Let’s break down the best approach to securing infrastructure and see how it works.

Update your Docker version regularly

First things first: Ensure that your Docker version is up to date. Obsolete versions are susceptible to security attacks. New version releases often contain patches and bug fixes that address vulnerabilities of older versions.

The same holds true for the host environment: ensure that supporting applications are up-to-date and free of known bugs or security loopholes.

Maintain lean & clean containers

An extended container environment expands the attack surface and is comparatively more prone to security breaches than lean setups. To avoid this, configure your containers to contain only the necessary components that keep them operating as you intend:

  • Software packages
  • Libraries
  • Configuration files

Further, routinely check host instances for unused containers and base images and discard those that aren’t in use.

Configure APIs & network

Docker Engine uses HTTP APIs to communicate across a network. Poorly configured APIs carry security flaws that hackers can exploit.

To avoid this, protect your containers by configuring the API securely so that it is not publicly exposed. One approach is to enforce encrypted communication and certificate-based authentication.
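A sketch of that certificate-based setup (the certificate file names and host are placeholders; you must generate the certificates with your own CA first):

# Daemon side: require TLS client certificates on the remote API port
$ dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376

# Client side: present a certificate signed by the same CA
$ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=<docker-host>:2376 version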

(Get more details on securing Docker APIs.)

Limit usage of system resources

Set a limit on the proportion of infrastructure resources that each container can use. These infrastructure resources include:

  • CPU
  • Memory
  • Network bandwidth

Docker uses Linux control groups (cgroups) to limit how resources are allocated and distributed among processes. This prevents a compromised container from consuming excessive resources and disrupting service delivery in the event of a security breach.
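For instance (the limit values and image are arbitrary), per-container limits can be set at run time:

$ docker run -d --cpus="1.5" --memory="512m" --memory-swap="1g" nginx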

Maintain host isolation

Run containers with different security requirements on separate hosts.

Maintaining container isolation through separate namespaces helps protect critical data from a full-blown attack. It also prevents noisy neighbors on shared resource pools from consuming excessive resources and impacting the services of other containers.

Restrict container capabilities

By default, Docker containers can maintain and acquire additional privileges that may or may not be necessary to run their core services.

As a best practice, you should limit a container's permissions to only what is required to run its applications. To do so, use the following option to drop all privileges of the Docker container:

$ docker run --cap-drop ALL <image>

Following this, add back only the specific capabilities the container needs with the --cap-add flag. This approach restricts Docker containers from obtaining unnecessary privileges that could be exploited during a security breach.
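A hedged example of that pattern, assuming a web server that only needs the capability to bind a privileged port:

$ docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE nginx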

Filter system calls

Apply system call filters that allow you to choose which calls can be made by containers to the Linux kernel.

This approach uses secure computing mode (seccomp), reducing the kernel attack surface exposed to containers and helping avert exploitation of kernel vulnerabilities.
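Docker ships with a default seccomp profile that is applied automatically; to supply a custom one (the profile path here is a placeholder), pass it at run time:

$ docker run -d --security-opt seccomp=/path/to/custom-profile.json nginx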

Securing Docker images

Now, let’s move to security best practices beyond the infrastructure.

Docker images are templates of executable code that are used to create containers and host applications. A Docker image consists of runtime libraries and the root file system—making the image one of the most critical fundamentals of a Docker container.

Here are some best practices to follow when it comes to securing Docker images.

Use trusted images

Get Docker base images only from trusted sources that are up-to-date and properly configured.

Additionally, ensure Docker images are correctly signed by enabling the Docker Content Trust feature, which filters out unsigned or questionable sources.
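For example, Docker Content Trust can be switched on per shell session with an environment variable, after which pulls of unsigned tags are rejected:

$ export DOCKER_CONTENT_TRUST=1
$ docker pull nginx:latest    # succeeds only if the tag has valid signatures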

Scan images regularly

It is crucial to maintain a robust security profile of Docker Images and routinely scan them for vulnerabilities. Do this in addition to the initial scan before downloading an image to ensure it is safe to use.

With regular image scans, you can also minimize exposure by:

  • Auditing critical files and directories
  • Keeping them updated with the latest security patches

Favor minimal base images

Prefer smaller, minimal base images over larger generic ones to minimize security vulnerabilities. This offers two valuable outcomes:

  • Reduces the attack surface
  • Gets rid of default configurations that are more susceptible to hacks

Access & Authentication Management

The final category for Docker Security involves access and authentication.

Securing the Docker Daemon through access control is often described as the first layer of security. Without securing the Docker Daemon, everything is vulnerable:

  • The underlying operations
  • Applications
  • Business functions

Implement least privileged user

By default, processes within Docker containers have root privileges that grant them administrative access to both the container and the host. This opens up containers and the underlying host to security vulnerabilities that hackers might exploit.

To avoid these vulnerabilities, set up a least-privileged user that grants only the privileges necessary to run containers. Alternatively, use run-time configuration to prohibit running as a privileged user.
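A minimal Dockerfile sketch of that practice (user and group names are illustrative):

FROM python:3.9-slim
RUN groupadd -r appgroup && useradd -r -g appgroup appuser
COPY app.py /app/app.py
USER appuser
CMD ["python", "/app/app.py"]

Alternatively, an existing image can be forced to run as a non-root user at launch time with docker run --user 1000:1000 <image>.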

Use a secrets management tool

Never store secrets in a Dockerfile; anyone with access to the Dockerfile, or to the image layers built from it, could misuse them and compromise an entire framework's security.

Standard best practice is to safely encrypt key secrets in third-party tools, such as HashiCorp Vault. You can use this same approach for other container secrets beyond access credentials.

Limit direct access to container files

Transient containers require consistent upgrades and bug fixes. As a result, such container files are exposed each time a user accesses them.

As a best practice, maintain container logs outside the container. This drastically reduces consistent direct usage of container files. It also enables your team to troubleshoot issues without accessing logs within a container directory.

Enable encrypted communication

Limit Docker Daemon’s access to only a handful of key users. Additionally, limit direct access to container files by enforcing SSH-only access for general users.

Use TLS Certificates for encrypting host-level communication. It’s also essential to disable unused ports and keep default ports exposed only for internal use.

Securing Docker secures your IT environment

Security within an IT landscape is a critical mission that you should never overlook.

To secure a cloud-native framework, the first step always is to factor in the vulnerabilities of your framework’s key elements. As a result, organizations should maintain a robust security profile that centers around containers and their underlying infrastructure.

Though approaches to implementing end-to-end security may differ, the goal is always to factor in vulnerable points and adopt best practices that mitigate risks.

Top Docker Certifications To Earn Today
https://www.bmc.com/blogs/docker-certifications/

Docker is an important tool for DevOps developers who use this platform to create faster, leaner, more agile objects, programs, and code using containers. Containerization is a process that allows a developer to create a development environment in which configurations and customizations are designed into the container environment, ensuring peak performance of applications and resulting in faster, easier builds.

In DevOps environments, operations managers may decide to bolster their enterprise IT talent-base by investing in education and certifications for the development team. Docker could be an important certification for your enterprise dev team if containerization is practiced at your organization.

In this article, you'll learn all about Docker certifications: which are the best for a developer's career and how to invest in them. But first, a look at what to expect when you enroll in a Docker certification program.


What to expect from a Docker certification?

Docker certifications can include any number of certifications for DevOps that will improve your Docker skills, and that includes learning about peripheral principles in DevOps that can be applied to Docker. For that reason, Docker certifications are a much broader group than something like data science certifications.

These certifications usually consist of online or in-person coursework followed by a test. As with most certifications, tests can be online or taken in person at testing centers. Many certifying bodies offer proctored tests online, so check with the certification board to learn what steps they are taking to bring digital education and testing to you.

Beyond online coursework and testing, people interested in certifying in Docker are likely to certify in related areas of IT as well.

Because Docker has broad appeal across IT organizations, a Docker certification is an asset for any enterprise organization that wants to grow and scale.

The relationship between Docker & DevOps

Docker supports some of the key principles of DevOps. Primarily, it allows for more agility and speed in programming. How does it do that?

Containerization platforms, like Docker, allow developers to quickly replicate development environments. This reduces the amount of time a DevOps team spends configuring the environment before moving on to the next phase in programming, and it reduces the chance of mistakes.

Importantly, Docker also allows automation to be configured. This includes:

  • Automated builds that use your existing containers to automatically create new ones tailored to your needs
  • Automated testing

Using programs like Docker, developers can write basic code and set up instructions, allowing Docker to do a lot of the heavy lifting. For this reason, it’s an ideal tool for DevOps teams that must be able to act fast and with precision.

Top certifications for Docker professionals

Below are the top certifications for Docker developers in 2021:

 1. Docker Certified Associate

There is one primary certification offered by Docker for power users who want to increase their knowledge of Docker and demonstrate skill through certification. That’s the Docker Certified Associate.

Docker's course is comprehensive and is considered the gold standard in Docker certifications.

The Docker Certified Associate certification program offers three specialties:

  • Infrastructure
  • Containers
  • Plugins

The cost of becoming a Docker Certified Associate is $195 for the exam, plus the cost of any education coursework in which you choose to invest.

 2. Certified Kubernetes Administrator (CKA) or Application Developer

Kubernetes is a container orchestration platform, originally developed by Google, that works with Docker to run streamlined environments entirely in the cloud. Learning about Kubernetes, from a certification standpoint, can also teach you about Docker and help you elevate your employees who use Docker within the Kubernetes framework.

The exams for these certifications are $300 each.

(Read our comparison of Kubernetes and Docker.)

 3. Udemy or Udacity Courses

While training courses like these don’t necessarily end in a professional certification, you can be awarded a certificate at the end that might be valuable to your employer. Further, if you are an employer looking to bolster your team, Udemy and Udacity have comprehensive coursework, some of which is offered for free.

Udemy, for example, offers a course called Docker Mastery Tutorial, taught by Bret Fisher, a Docker pro who has trained more than 30,000 people. Among the highlights of this course:

  • Advanced code development
  • No paid software required for course
  • Learn to make fast updates to swarm services

Using these techniques, even uncertified Docker professionals can display leadership that makes teams more efficient and propels Docker developers to new heights in their careers.

Docker certification tips

Below are our favorite tips for developers looking to achieve a Docker certification:

  • Don't forget the practice tests. The Docker platform offers practice tests for people preparing for Docker certifications. By working through Docker's study materials, those who are studying for a certification can be better prepared to pass.
  • Start from meager beginnings. The best way to learn Docker is to use it. It’s recommended you start with small projects, learning from your mistakes along the way, to best prepare for certification.
  • Be a strong community member. Docker’s platform allows developers to access the Docker community for questions, support and general information. Be a good steward of the Docker community by participating. This will also help you learn more practical knowledge about Docker as you prepare for certification.
  • Take a Docker for Beginners course. Many third-party vendors offer Docker courses designed to bring beginners up to speed. Taking one of these courses could be valuable to your learning experience.

Containerization allows businesses to be more flexible and better equipped to handle changes to the business ecosystem. By using tools like Docker, developers can work faster and more consistently, and they can ensure that the correct customizations and configurations follow a development project through every phase, avoiding mistakes that might otherwise be found only in the testing phase.

If your enterprise business uses Docker containers as part of your build, a certification or two within your developer-base can improve your organization’s use of the tool. And if you’re a developer working for an enterprise company, adding Docker certifications to your education can increase the value you offer as an employee.

Docker Commands: A Cheat Sheet
https://www.bmc.com/blogs/docker-commands/

Docker's purpose is to build and manage images and to launch them in containers. So, the most useful commands do exactly that and expose this information.

Here’s a cheat sheet on the top Docker commands to know and use.


Images and containers

The docker command line interface follows this pattern:
docker <COMMAND>

docker image
docker container

The docker image and docker container command groups grant access to the images and containers. From here, you are permitted to do something with them, hence:

docker image <COMMAND>
docker container <COMMAND>

The subcommands include:

  • ls lists the resources.
  • cp copies files/folders between the container and the local file system.
  • create creates new container.
  • diff inspects changes to files or directories in a running container.
  • logs fetches the logs of a container.
  • pause pauses all processes within one or more containers.
  • rename renames a container.
  • run runs a new command in a container.
  • start starts one or more stopped containers.
  • stop stops one or more running containers.
  • stats displays a live stream of container resource usage statistics.
  • top displays the running processes of a container.

View resources with ls

docker image ls
docker container ls

From the container ls command, the container id can be accessed (first column).


Control timing with start, stop, restart, prune

  • start starts one or more stopped containers.
  • stop stops one or more running containers.
  • restart restarts one or more containers.
  • prune (the best one!) removes all stopped containers.
docker container stop <container id>
docker container start <container id>
docker container restart <container id>
docker container prune

Name a container

docker run -d --name myfirstcontainer <image>

View vital information: Inspect, stats, top

docker container inspect <container id>

docker container top <container id>

docker container stats <container id>

  • stats displays a live stream of container(s) resource usage statistics


  • top displays the running processes of a container:


  • inspect displays detailed information on one or more containers. With inspect, JSON is returned detailing the container's name, state, and more (a short example follows below).

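As a small illustration, the --format flag can pull a single field out of that JSON, and stats can be limited to a one-off snapshot:

docker container inspect --format '{{.State.Status}}' <container id>
docker container stats --no-stream <container id>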

Additional resources

For more on this topic, there's always the Docker documentation and the BMC DevOps Blog.

What's Docker Monitoring? How to Monitor Containers & Microservices
https://www.bmc.com/blogs/docker-monitoring-explained-monitor-containers-microservices/

Docker Monitoring is the activity of monitoring the performance of microservice containers in Docker environments. Monitoring is the first step towards optimizing and improving performance.


Setting the stage for Docker Monitoring

Not long ago, most software application systems ran on bare-metal infrastructure hosted in data centers. The hardware—consisting mainly of compute, storage, and networking components—was largely fixed in those environments, and so was the monitoring infrastructure. Once a comprehensive monitoring solution was rolled out, it required few updates, because production changes, both hardware- and application-related, that affected monitoring configuration were infrequent.

The virtualization of compute resources didn't change that scenario much. Though it provided a lot of flexibility in provisioning non-production environments, the changes that impacted monitoring remained few, except in situations where an application component running in a cluster was auto-scaled. Such dynamic configurations made sense only when elasticity resulted in cost savings, which generally required environments on public cloud platforms like AWS, where charges are usage-driven.

As the virtual machine (VM) retained the concept of a machine that runs an operating system, the tools and methods used for bare-metal infrastructure could still be useful for VM based environments with occasional tweaks. However, the use of containers to build application environments has a disruptive impact on traditional monitoring methods because containers don’t fit well with the assumptions made by traditional tools and methods that were originally designed for bare-metal machines.

Containers, of which Docker is a popular implementation, are normally brought up and down on demand. They are ephemeral: because they are lightweight and start with little system overhead, they can be discarded when not actively in use.

Dockerization also forced applications to be redesigned as distributed systems, with each functional element running in one or more containers. That enabled a container-based system to be scaled easily and the available compute resources to be allocated much more efficiently. As a result of the architectural change that containerization brought in, production environments built on containers became highly dynamic, and monitoring such environments became much more important than it used to be.

Common challenges

The dynamic nature of container-based application infrastructure brings new problems to monitoring tools. Docker also adds another layer of infrastructure and network monitoring requirements to the overall scope.

Think of the typical scenario of multiple VMs provisioned on a bare-metal machine and containers come and go on each one of those VMs. The monitoring requirements include checking:

  • The health of bare-metal host
  • The VMs provisioned on it
  • The containers active at a given point of time

Of course, how well these components interact with each other and with the outside world should also be checked from the networking side of the monitoring requirements. It can soon become very complex.

Levels of Docker monitoring

As mentioned earlier, the container is an extra layer for infrastructure monitoring purposes. To address the monitoring requirements of a container-based application environment systematically, monitoring should be implemented at various levels of the infrastructure and application.

Docker host

Docker containers are run on a cluster of large bare-metal or virtual machines. Monitoring of these machines for their availability and performance is important. This falls into the traditional infrastructure monitoring.

Typically, CPU, memory, and storage usage are tracked and alerted on based on thresholds set up for those metrics. Implementing this is relatively easy: any monitoring tool supports it as part of its core features.

Docker containers

The Docker containers are run on a cluster of hosts and a specific Docker instance could be running on any one of those hosts depending on the scheduling and scaling strategies set in the container orchestration system used, such as:

  • Docker Swarm
  • Kubernetes
  • Apache Mesos
  • Hashicorp Nomad

(Read our comparison of Docker Swarm and Kubernetes.)

Ideally, there is no need to track where the containers are running. But, things are rarely ideal in production (and that’s why you need monitoring in the first place) and you may want to look at a specific container instance. Tracking information on the up and running containers would be handy in such situations and also to make sure that scheduling and scaling rules are actually enforced.

Runtime resource usage

As with bare-metal and virtual machines, CPU, memory, and storage metrics are tracked for Docker containers as well. Container-specific metrics can also be tracked, such as CPU throttling, which occurs when CPU cycles are allocated according to preset priorities because containers compete for the available CPU.

Tracking these system performance metrics helps determine whether resources on the bare-metal and virtual machines hosting the containers need to be upgraded. It also provides insights for fine-tuning the resources allocated to a Docker image so that its future container instances start up with adequate runtime resources.

The native Docker command docker stats returns some of these metrics, but a tool like TrueSight is needed to capture these statistics system-wide, to get notified of potential issues and resolve them proactively.
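For example, a one-time snapshot of per-container usage can be pulled from the CLI with a Go template to choose the columns:

$ docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"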

Container networking

Checking on container-level networking is one of the most important aspects of Docker monitoring. A container is supposed to run a lightweight component of a distributed application system, and communication between these components has to be reliable and predictable, especially in a highly dynamic container-based environment in which instances come and go.

Docker provides a container-level network layer, and there are also third-party tools to expose the services running on containers. Other components in the system can then access a specific service using a supported method such as a REST API.

In a highly distributed environment, Docker networking configuration can quickly become complex, and it is important to monitor the various network access requirements and proxy settings for the whole system to work.

Container performance

Just as on a bare-metal or virtual machine, runtime requirements impact the overall performance of a container and, in turn, the service running on it. Gathering performance data from containers is important for fine-tuning them.

Application endpoints

A container-based environment typically runs a large, highly distributed application, with each service running on one or more containers. Application checks can be done at three levels:

  • Container level
  • Pod level (A pod is a group of containers that offers a service.)
  • System-wide level

Usually, REST API endpoints are available to perform such checks, and they can easily be plugged into any modern monitoring system to check the availability of the related services.
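As a hedged illustration (the service URL and endpoint path are hypothetical, not something Docker itself provides), such a check is often just a scheduled HTTP probe:

$ curl -f http://payments.example.internal:8080/health || echo "service check failed"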

Benefits of Docker monitoring

The benefits of Docker monitoring are not different from those of traditional monitoring. These are the main points:

  • Monitoring helps identify issues proactively, which helps avoid system outages.
  • The monitoring time-series data provide insights to fine-tune applications for better performance and robustness.
  • With full monitoring in place, changes can be rolled out safely, since issues are caught early and resolved quickly before they become the root cause of an outage.
  • Change is inherent in container-based environments, and its impact also gets monitored indirectly.

The last point makes it clear that monitoring is an essential part of Docker-based environments: because of their dynamic nature, the availability of application services has to be checked constantly.

Getting started with Docker Monitoring

A group of tools is needed to fully monitor an application system running in production. Typically, they will cover:

  • Infrastructure
  • Network
  • Application features and performance
  • Last-mile monitoring and log analytics (Last-mile monitoring refers to checking on user experience.)

Docker monitoring requirements dictate that the selected monitoring tools should cover container-level monitoring as well, or that extra tools, like those discussed already, be added to take care of that aspect.

Tracking ephemeral containers

Traditional monitoring is host-centric. Tools in that category assume that an application environment is made of devices, each with a unique IP address assigned to it. That approach poses problems beyond simple infrastructure monitoring, because requirements such as checking an application feature are not tied to just one or more hosts.

Containers come and go, and it is better not to track them individually. The best method is to tag containers with keywords or labels. That way, time-series data from the same type of container can be looked up for monitoring and operational insights, irrespective of lifecycle status.
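A sketch of that label-based approach (label keys and values are arbitrary examples):

$ docker run -d --label app=payments --label env=prod nginx
$ docker ps --filter "label=app=payments"    # find every container of this type, old or new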

Containers add complexity

Using containers adds to the operational complexity of an application system, and so do the monitoring requirements. Many popular monitoring tools are not equipped to monitor Docker containers, though it is not hard to extend them to support containers. New-generation monitoring tools, both open source and licensed, support Docker monitoring out of the box.

The challenges in rolling out a good Docker monitoring system remain the same as with any generic monitoring systems:

  • Selecting a set of tools that are most suitable from a wide range of product offerings
  • Identifying the most important monitoring requirements
  • Mapping those to the features of selected tools
  • Making customizations to fill any gaps that are not covered by the tools, especially in the area of application monitoring
  • Setting up an alerting and response strategy that will not overwhelm the operations staff

How To Introduce Docker Containers in The Enterprise
https://www.bmc.com/blogs/3-steps-to-introduce-docker-containers-in-enterprise/

Docker container technology has seen a rapid rise in early adoption and broad market acceptance. It is a technology that is seen to be a strategic enabler of business value because of the benefits it can provide in terms of:

  • Reduced cost
  • Reduced risk
  • Increased speed

For enterprises that haven’t worked with Docker, introducing it can seem daunting. How do you achieve business value, run Docker in development, test, and production, or effectively use automation with Docker?

As experienced users of this transformative tool, we have had success with a three-step yellow brick road approach. This process will enable your enterprise to embark on the Docker journey too.


Getting started with Docker containers

Step 1: Evaluation

In the early phases, engineers play with and evaluate Docker technology by dockerizing a small set of applications.

  1. First, you'll need a Docker host. Ubuntu or Red Hat machines can be used to set up Docker in a few minutes by following the instructions on the Docker website.
  2. Once the Docker host is set up, initial development can be done in insecure mode (no need for certificates in this phase). You can log in to the Docker host and use the docker pull and docker run commands to run a few containers from the public Docker Hub.
  3. Finally, selecting the right applications to dockerize is extremely important. Stateless internal or non-production apps are a good place to start converting applications to containers. Conversion requires the developer to write Dockerfiles and become familiar with the docker build command as well. The output of the build is a Docker image. Usually, an internal private Docker registry is installed, or the public Docker Hub is used with a private account so your images do not become public. (A minimal command walk-through follows this list.)
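A minimal command walk-through for this evaluation phase might look like the following (the image name, tag, and registry host are placeholders):

$ docker pull nginx                                    # pull a public image from Docker Hub
$ docker run -d -p 8080:80 nginx                       # run it and expose a port
$ docker build -t internal-app:0.1 .                   # build your own image from a Dockerfile
$ docker tag internal-app:0.1 registry.example.com/internal-app:0.1
$ docker push registry.example.com/internal-app:0.1    # push to a private registry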

Step 2: Pilot

In the pilot phase, the primary goal is to start bringing in IT and DevOps teams to work through the infrastructure and operations needed to set up Docker applications. An important part of this phase is to "IT-ize" the Docker containers and run a pilot in IT production so that the IT operations team can start managing them. This phase requires that IT operations manage dual stacks: the traditional virtualized infrastructure and the new Docker infrastructure.

Management systems and software tools will be needed in four primary areas:

  1. Build Docker infrastructure. Carve out a new Docker infrastructure consisting of a farm of Docker hosts to run containers alongside traditional virtualization platforms and hybrid clouds.
  2. Define & deploy your app as a collection of containers. Management system software can provide blueprints to define an application topology consisting of Docker containers, spin them up, and then provide "Day 2" management of the containers for end users, such as start/stop and monitoring of Docker applications. It can also integrate with Docker Hub or Docker Trusted Registry for sourcing images.
  3. Build your delivery pipeline. DevOps products can offer CI/CD workflows for continuous integration and continuous deployment of Docker images.
  4. Vulnerability testing of containers. Server automation tools can be used to do SCAP vulnerability testing of Docker images.

Step 3: Production

Now, you can deploy Docker containers to your production infrastructure. This will require not just DevOps and deployment of containers to a set of Docker hosts, but also security, compliance, and monitoring.

Supporting complex application topologies is a degree of sophistication many enterprises will, in fact, desire in order to:

  • Allow a gradual introduction to the benefits of containers
  • Keep the data in traditional virtual or physical machines

Another degree of sophistication is the introduction of more complex distributed orchestration to improve data center utilization and reduce operational placement costs.

While in the previous phase we used static partitioning of infrastructure resources into clusters, this phase will use more state-of-the-art cluster schedulers such as Kubernetes or Fleet.

Governance, change control, CMDB integration, and quota management are some of the ways an enterprise can start governing the usage of Docker as it grows. Reducing container sprawl through reclamation is an additional process that needs to be automated at this level.

Final thoughts

Evaluate the business benefits at the end of each of these steps to determine if you’ve achieved ROI and accomplished your goals.

We believe that using this three-step phased approach to introducing Docker, with increasingly sophisticated usage and automation, will make it easy to test-drive and productize Docker inside the enterprise.
