Dan Merron – BMC Software | Blogs

Infrastructure as Code (IaC): The Complete Beginner’s Guide
https://www.bmc.com/blogs/infrastructure-as-code/

Infrastructure is one of the core components of any software development process: it is directly responsible for the stable operation of a software application. This infrastructure can range from servers, load balancers, firewalls, and databases all the way to complex container clusters.

Infrastructure considerations extend beyond production environments and span the complete development process, covering tools and platforms such as CI/CD platforms, staging environments, and testing tools. These considerations grow as the complexity of the software product increases, and the traditional approach of managing infrastructure manually quickly becomes unable to meet the demands of modern, DevOps-based rapid software development cycles.

That is why Infrastructure as Code (IaC) has become the de facto solution in development today: it lets you meet the growing need for infrastructure changes in a scalable, trackable manner.

What is infrastructure as code?

Infrastructure as Code (IaC) is the practice of provisioning and managing infrastructure through code instead of through manual processes.

As infrastructure is defined as code, it allows users to easily edit and distribute configurations while ensuring the desired state of the infrastructure. This means you can create reproducible infrastructure configurations.

Moreover, defining infrastructure as code also:

  • Allows infrastructure to be easily integrated into version control mechanisms to create trackable and auditable infrastructure changes.
  • Provides the ability to introduce extensive automation for infrastructure management. All these things lead to IaC being integrated into CI/CD pipelines as an integral part of the SDLC.
  • Eliminates the need for manual infrastructure provisioning and management. Thus, it allows users to easily manage the inevitable config drift of underlying infrastructure and configurations and keep all the environments within the defined configuration.

Declarative vs imperative Infrastructure as Code

When dealing with IaC tools, there are two major differentiating approaches for writing code. These two approaches are declarative and imperative. Simply put:

  • An imperative approach allows users to specify the exact steps to be taken for a change, and the system does not deviate from the specified steps.
  • A declarative approach essentially means users only need to define the end requirement, and the specific tool or platform handles the steps to take in order to achieve the defined requirement.

The declarative approach is preferred for most infrastructure management use cases because it offers a greater degree of flexibility.

Chef is considered an imperative tool, while Terraform, Pulumi, CloudFormation, Azure Resource Templates, and Puppet are all declarative. Uniquely, Ansible is mostly declarative with support for imperative commands.
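To make the distinction concrete, here is a rough sketch (the AMI, instance, and tag values are placeholders): the imperative version spells out each AWS CLI call in order, while the declarative version, written in Terraform, only states the desired end result.

# Imperative: specify the exact steps, in the exact order
$ aws ec2 run-instances --image-id ami-0123456 --instance-type t3.small
$ aws ec2 create-tags --resources i-0abc1234 --tags Key=Name,Value=Web_Server

# Declarative: describe the end state; the tool works out the steps
resource "aws_instance" "web_server" {
  ami           = "ami-0123456"
  instance_type = "t3.small"
  tags = {
    Name = "Web_Server"
  }
}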

IaC vs IaaS

Importantly, IaC is not a derivative of infrastructure as a service (IaaS). They are two different concepts.

  • Infrastructure as a Service is one of the core cloud services: virtualized computing resources—servers, networking infrastructure, storage, etc.—are provided via the cloud service.
  • Infrastructure as Code is an approach, supported by a range of tools, for provisioning and managing infrastructure. It is not limited to cloud-based resources; in fact, IaC can be applied to a wide variety of environments, including on-premises.


When & how to use Infrastructure as Code

IaC may seem unnecessary for simpler, less complex infrastructure requirements, but that isn’t accurate: every modern software development pipeline should use Infrastructure as Code to handle its infrastructure.

Besides, the advantages of IaC far outweigh any implementation and management overheads.

Advantages of IaC

Here are the top benefits of IaC:

  • Reducing shadow IT within organizations and allowing timely and efficient infrastructure changes that are done in parallel to application development.
  • Integrating directly with CI/CD platforms.
  • Enabling version-controlled infrastructure and configuration changes leading to trackable and auditable configurations.
  • Easily standardizing infrastructure with reproducible configurations.
  • Effectively managing configuration drift and keeping infrastructure and configurations in their desired state.
  • Having the ability to easily scale up infrastructure management without increasing CapEx or OpEx. With IaC, you’ll reduce CapEx and OpEx spending overall, as automation eliminates the need for time-consuming manual interactions and reduces incorrect configurations.

When to use IaC

Not sure when to use IaC? The simplest answer is whenever you have to manage any type of infrastructure.

However, the answer becomes more nuanced once exact requirements and tools come into play. Some situations call for strict infrastructure management, while others require both infrastructure and configuration management. Then come platform-specific questions, such as whether the tool has the necessary feature set, security implications, and integrations. On top of that, the learning curve matters, since users prefer a simple, straightforward tool over a complex one.

The below table shows a categorization of the tools mentioned above according to their ideal use cases.

Use case | Tools to use
Infrastructure management | Terraform, Pulumi, AWS CloudFormation, Azure Resource Templates
Configuration management with somewhat limited infrastructure management capabilities | Ansible, Chef, Puppet
Configuration management only | CFEngine

One tool may not be sufficient in most scenarios. For instance, Terraform may be excellent for managing infrastructure across multiple cloud environments yet may be limited when in-depth configurations are required. In those kinds of situations, users can utilize a tool such as Ansible to carry out the necessary configurations.

Likewise, users can mix and match any IaC tool and use them in their CI/CD pipelines depending on the exact requirements.

(Learn how to set up your own CI/CD pipeline.)

Infrastructure as Code tools & platforms


Under the big IaC umbrella, there are all sorts of tools, from dedicated infrastructure management tools to configuration management, from open-source tools to platform-specific IaC options.

Let’s look at some of the most popular IaC tools and platforms.

Terraform

Terraform by HashiCorp is the leading IaC tool, specializing in managing infrastructure across a range of platforms, from AWS, Azure, and GCP to Oracle Cloud and Alibaba Cloud, and even platforms like Kubernetes and Heroku.

As a platform-agnostic tool, Terraform can be used to facilitate any infrastructure provisioning and management use cases across different platforms and providers while ensuring the desired state across the configurations.

Ansible

Ansible is not a dedicated infrastructure management tool but rather an open-source configuration management tool with IaC capabilities. It supports both cloud and on-premises environments and works agentlessly over SSH or WinRM. Ansible excels at configuration management and infrastructure provisioning, yet is limited when it comes to managing that infrastructure afterwards.

(Find out why people often compare Ansible & Control-M.)

Pulumi

Pulumi is a relatively new tool that aims to provide a developer-first IaC experience. Unlike tools that require a specific language or format, Pulumi gives users the freedom to use any supported programming language however they like.

The tool supports Python, TypeScript, JavaScript, Go, C#, and F#, and state is managed through the Pulumi service by default.

Chef/Puppet

Chef and Puppet are two powerful configuration management tools. Both aim to provide configuration management and automation capabilities with some infrastructure management capabilities across the development pipeline.

  • Chef is designed to integrate easily into DevOps practices, with strong collaboration tooling.
  • Puppet evolved with a focus on automating processes at scale. Today, Puppet has built-in watchers that automatically identify configuration drift.

(Check out Puppet’s State of DevOps report.)

CFEngine

CFEngine is one of the most mature tools solely focused on configuration management. Even though there is no capability to manage the underlying infrastructure, CFEngine can accommodate even the most complex configuration requirements, covering everything from security hardening to compliance.

AWS CloudFormation

CloudFormation is AWS’s proprietary, platform-specific IaC tool for managing AWS infrastructure. As a first-party solution, it integrates deeply with all AWS services and can facilitate any AWS configuration.

Azure Resource Templates

Microsoft Azure uses JSON-based Azure Resource Templates to facilitate IaC practices within the Azure platform. These resource templates ensure consistency of the infrastructure and can be used for any type of resource configuration.

In addition to the above, there are specialized tools aimed at specific infrastructure and configuration management tasks such as:

  • Packer, EC2 Image Builder, and Azure Image Builder create deployable custom OS images.
  • Cloud-init is the industry-standard, cross-platform cloud instance initialization tool. It lets users run scripts when resources (servers) are provisioned.
  • (R)?ex is a fully featured infrastructure automation framework.

(Get acquainted with Azure DevOps.)

Examples of Infrastructure as Code

Let’s consider a simple scenario of provisioning an AWS EC2 Instance. In the following example, we can see how Terraform, Ansible, and AWS CloudFormation codes are used for this requirement.

Terraform

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}
 
provider "aws" {
  access_key = "aws_access_key"
  secret_key = "aws_secret_key"
  // shared_credentials_file = "/Users/.aws/creds"
  region = "us-west-1"
}
 
resource "aws_instance" "web_server" {
  ami                    = "ami-0123456"
  instance_type          = "t3.small"
  subnet_id              = "subnet-a000111x"
  vpc_security_group_ids = ["sg-dfdd00011"]
  key_name               = "web_server_test_key"
 
  tags = {
    Name = "Web_Server"
  }
}
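Assuming the configuration above is saved in a working directory (for example, as main.tf), provisioning follows the standard Terraform workflow:

$ terraform init    # download the AWS provider
$ terraform plan    # preview the changes
$ terraform apply   # provision the EC2 instance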

Ansible

- hosts: localhost
  gather_facts: False
  vars_files:
    - credentials.yml
  tasks:
    - name: Provision EC2 Instance
      ec2:
        aws_access_key: "{{aws_access_key}}"
        aws_secret_key: "{{aws_secret_key}}"
        key_name: web_server_test_key
        group: test
        instance_type: t3.small
        image: "ami-0123456"
        wait: true
        count: 1
        region: us-west-1
        instance_tags:
          Name: Web_Server
      register: ec2

AWS CloudFormation

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  WebInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.small
      ImageId: ami-0123456
      KeyName: web_server_test_key
      SecurityGroupIds:
        - sg-dfdd00011
      SubnetId: subnet-a000111x
      Tags:
        - Key: Name
          Value: Web_Server

A real world example: IaC for DevOps

Within software development, a fundamental constraint is that the environment where newly developed code is tested must exactly mirror the live environment where that code will be deployed. This is the only way to ensure the new code will not collide with existing code by generating errors or conflicts that could compromise the entire system.

In the past, software delivery would follow this sort of pattern:

  1. A System Administrator would set up a physical server and install the operating system, with all necessary service packs and tuning, to mirror the main live machine that supports the production environment.
  2. Then a Database Administrator would go through the same process for the supporting database, and the machine would be handed off to a test team.
  3. The developer would deliver the code/program by copying it to the test machine, and the test team would run several operational and compliance tests.
  4. Once the new code has gone through the entire process, you can deploy it to the live, operational environment. In many cases, the new code won’t work correctly, so additional troubleshooting and rework are necessary.

(Understand the differences between deploying & releasing software.)

Manually recreating a live environment leaves the door open to a multitude of human errors, most likely minor but potentially significant, regarding:

  • OS version
  • Patch level
  • Time zone
  • Etc.

A live environment clone, created from exactly the same IaC as the live environment, comes with the guarantee that if it works in the cloned environment, it will work in live.

Imagine a software delivery process with separate environments for DEV, UAT, and Production. There is little value in DEV and UAT environments that aren’t exact mirrors of the production environment, given that those early environments are critical to measuring the quality and production readiness of a software build.

The introduction of virtualization expedited this process, especially the phase of creating and updating a test server to mirror the live environment. Yet the process remained manual: a human still had to create and update the machine, and do so in a timely fashion. With the introduction of DevOps, these processes became even more agile, as automation of the server virtualization and testing phases replaced human intervention, improving productivity and efficiency.

To summarize: In the past, several man-hours and human resources were required to complete the software deployment cycle (Developers, Systems Administrators, Database Administrators, Operation testers). Now, it is possible to have the developer alone complete all tasks:

  1. The developer writes the application code and the configuration management-related instructions that will trigger actions from the virtualization environment, and other environments such as the database, appliances, testing tools, delivery tools, and more.
  2. Upon new code delivery, the configuration management instructions will automatically create a new virtual test environment with an application server plus database instance that exactly mirrors the live operational environment structure, both in terms of service packs and versioning as well as live data that is transferred to such virtual test environment. (This is the Infrastructure as Code part of the process.)
  3. Then a set of tools will perform necessary compliance tests and error identification and resolution. The new code is then ready for deployment to the live IT environment.

Quick, trackable infrastructure changes

Infrastructure as Code has become a vital part of modern application development and deployment pipelines, facilitating quick, trackable infrastructure changes that integrate directly into CI/CD platforms. Infrastructure as Code is crucial for both:

  • Facilitating scalable infrastructure management
  • Efficiently managing the config drift in all environments

Getting started with Infrastructure as Code may seem daunting with many different tools and platforms targeted at different use cases. However, cross this hurdle, and you will have a powerful infrastructure management mechanism at your fingertips.

Related reading

Deployment Pipelines (CI/CD) in Software Engineering
https://www.bmc.com/blogs/deployment-pipeline/

On any Software Engineering team, a pipeline is a set of automated processes that allow developers and DevOps professionals to reliably and efficiently compile, build, and deploy their code to their production compute platforms.

There is no hard and fast rule stating how a pipeline should look and the tools it must utilise. However, the most common components of a pipeline are:

  • Build automation/continuous integration
  • Test automation
  • Deploy automation

A pipeline generally consists of a set of tools that fall into the categories above.

The key objective of a Software Delivery Pipeline is automation, with no manual steps or changes required in or between any stages of the pipeline. Human error can and does occur when these boring, repetitive tasks are carried out manually, and botched deployments ultimately affect the ability to meet deliverables and, potentially, SLAs.

Deployment Pipeline

A Deployment pipeline is the process of taking code from version control and making it readily available to your application’s users in an automated fashion. When a team of developers is working on projects or features, they need a reliable and efficient way to build, test and deploy their work. Historically, this would have been a manual process involving lots of communication and a lot of human error.

The stages of a typical deployment pipeline are as follows.


Version Control

Software developers working on their code generally commit their changes to source control (e.g. GitHub). When a commit to source control is made, the first stage of the deployment pipeline starts, which triggers:

  • Code compilation
  • Unit tests
  • Code analysis
  • Installer creation

If all of these steps complete successfully, the executables are assembled into binaries and stored in an artefact repository for later use.
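As an illustration only, using GitHub Actions as the CI tool and placeholder make targets, the commit-triggered stage described above might be defined like this:

# .github/workflows/ci.yml (illustrative sketch)
name: ci
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Compile
        run: make build      # code compilation
      - name: Unit tests
        run: make test       # unit tests
      - name: Package
        run: make package    # produce the artefact for the repository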

Acceptance Tests

Acceptance testing is a process of running a series of tests over compiled/built code to test against the predefined acceptance criteria set by the business.

Independent Deployment

An independent deployment is the process of deploying the compiled and tested artefacts onto development environments. Development environments should ideally be a carbon copy of your production environments, or at worst very similar. This allows the software to be functionally tested on production-like infrastructure, ready for any further automated or manual testing.

Production Deployment

This process is normally handled by the Operations or DevOps team. It should be very similar to independent deployments and should deliver the code to live production servers. Typically this process involves either Blue/Green deployments or canary releases to allow for zero-downtime deployments and easy version rollbacks in the event of unpredicted issues. Where zero-downtime deployments are not possible, release windows are normally negotiated with the business.

Continuous Integration & Continuous Delivery Pipelines

Continuous Integration (CI) is a practice in which developers check their code into a version-controlled repository several times per day. These check-ins trigger automated build pipelines, which allow errors to be detected quickly and located easily.

The key benefits of CI are:

  • Smaller changes are easier to integrate into larger code bases.
  • It is easier for other team members to see what you have been working on.
  • Bugs in larger pieces of work are identified early, making them easier to fix and reducing debugging work.
  • Code compile/build testing is consistent.
  • Fewer integration issues allow rapid code delivery.

Continuous Delivery (CD) is the process that allows developers and operations engineers to deliver bug fixes, features and configuration changes into production reliably, quickly and sustainably. Continuous Delivery gives you delivery pipelines that are exercised routinely and can be run on demand with confidence.

The benefits of CD are:

  • Lower-risk releases. Blue/Green deployments and canary releases allow for zero downtime deployments which are not detectable by users and make rolling back to a previous release relatively pain free.
  • Faster bug fixes & feature delivery. With CI & CD when features or bug fixes are finished, and have passed the acceptance and integration tests, a CD pipeline allows these to be quickly delivered into production.
  • Cost savings. Continuous Delivery allows teams to work on features and bug fixes in small batches which means user feedback is received much quicker. This allows for changes to be made along the way thus reducing the overall time and cost of a project.

Blue/Green Deployments

Utilisation of a Blue/Green Deployment process reduces risk and downtime by creating a mirror copy of your production environment, naming one Blue and one Green. Only one of the environments is live at any given time, serving live production traffic.

During a deployment, software is deployed to the non-live environment – meaning live production traffic is unaffected during the process. Tests are run against this currently non-live environment and once all tests have satisfied the predefined criteria traffic routing is switched to the non-live environment making it live.

The process is repeated in the next deployment with the original live environment now becoming non-live.
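On Kubernetes, for example, the traffic switch can be as small as repointing a Service selector from the blue deployment’s labels to the green one’s (the names and labels below are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: green   # was "blue"; changing this label flips live traffic
  ports:
    - port: 80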

Canary Deployments

Different from Blue/Green deployments, Canary Deployments do not rely on duplicate environments to be running in parallel. Canary Deployments roll out a release to a specific number or percentage of users/servers to allow for live production testing before continuing to roll out the release across all users/servers.

The prime benefit of canary releases is the ability to detect failures early and roll back changes limiting the number of affected users/services in the event of exceptions and failures.

Tools for automating software quality

There are many different tools you can use to build CI/CD pipelines, all of which can be used to build reliable and robust pipelines, with the added bonus that you can get started for free.

In summary, CI is the automated process that enables software development teams to check in their code and verify that it compiles and meets quality standards. CD allows Development and Operations teams to reliably and efficiently deliver new features and bug fixes to their end users in an automated fashion.

Related reading

What is Kubernetes (K8s)? A Kubernetes Basics Tutorial
https://www.bmc.com/blogs/what-is-kubernetes/

In this post, we’re going to explain Kubernetes, also known as K8s. In this introduction, we’ll cover:

  • What Kubernetes is
  • What it can’t do
  • The problems it solves
  • K8s architectural components
  • Installation
  • Alternative options

This article assumes you are new to Kubernetes and want to get a solid understanding of its concepts and building blocks.

(This article is part of our Kubernetes Guide. Use the right-hand menu to navigate.)

What is Kubernetes?

To begin to understand the usefulness of Kubernetes, we have to first understand two concepts: immutable infrastructure and containers.

  • Immutable infrastructure is a practice where servers, once deployed, are never modified. If something needs to change, you never do so directly on the server. Instead, you build a new server from a base image that has all your needed changes baked in, then simply replace the old server with the new one without any additional modification.
  • Containers offer a way to package code, runtime, system tools, system libraries, and configs altogether. This shipment is a lightweight, standalone executable. This way, your application will behave the same every time no matter where it runs (e.g., Ubuntu, Windows, etc.). Containerization is not a new concept, but it has gained immense popularity with the rise of microservices and Docker.

Armed with those concepts, we can now define Kubernetes as a container or microservice platform that orchestrates computing, networking, and storage infrastructure workloads. Because it doesn’t limit the types of apps you can deploy (any language works), Kubernetes extends how we scale containerized applications so that we can enjoy all the benefits of a truly immutable infrastructure. The general rule of thumb for K8S: if your app fits in a container, Kubernetes will deploy it.

By the way, if you’re wondering where the name “Kubernetes” came from, it is a Greek word, meaning helmsman or pilot. The abbreviation K8s is derived by replacing the eight letters of “ubernete” with the digit 8.

The Kubernetes project was open-sourced by Google in 2014, drawing on more than a decade of experience running production workloads at scale. Kubernetes provides the ability to run dynamically scaling, containerised applications, with an API for management. It is a vendor-agnostic container management tool, minimising cloud computing costs whilst simplifying the running of resilient and scalable applications.

Kubernetes has become the standard for running containerised applications in the cloud, with the main Cloud Providers (AWS, Azure, GCE, IBM and Oracle) now offering managed Kubernetes services.

Kubernetes basic terms and definitions

To begin understanding how to use K8s, we must understand the objects in its API. There are basic K8s objects and several higher-level abstractions known as controllers. Together these are the building blocks of your application lifecycle.

Basic objects include:

  • Pod. A group of one or more containers.
  • Service. An abstraction that defines a logical set of pods as well as the policy for accessing them.
  • Volume. An abstraction that lets us persist data. (This is necessary because containers are ephemeral—meaning data is deleted when the container is deleted.)
  • Namespace. A segment of the cluster dedicated to a certain purpose, for example a certain project or team of devs.

Controllers, or higher-level abstractions, include:

  • ReplicaSet (RS). Ensures the desired number of pod replicas is running.
  • Deployment. Offers declarative updates for pods and ReplicaSets (a minimal manifest sketch follows this list).
  • StatefulSet. A workload API object that manages stateful applications, such as databases.
  • DaemonSet. Ensures that all or some worker nodes run a copy of a pod. This is useful for daemon applications like Fluentd.
  • Job. Creates one or more pods, runs a certain task(s) to completion, then deletes the pod(s).
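For instance, a minimal Deployment manifest that keeps three replicas of an nginx pod running might look like this (the names and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: nginx
          ports:
            - containerPort: 80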

Micro Service

A specific part of a previously monolithic application. A traditional microservice-based architecture has multiple services making up one or more end products. Microservices are typically shared between applications and make Continuous Integration and Continuous Delivery easier to manage. Explore the difference between monolithic and microservices architecture.

Images

Typically a Docker container image: an executable image containing everything you need to run your application, including application code, libraries, a runtime, environment variables and configuration files. At runtime, a container image becomes a container, which runs everything packaged into that image.

Pods

A single or group of containers that share storage and network with a Kubernetes configuration, telling those containers how to behave. Pods share IP and port address space and can communicate with each other over localhost networking. Each pod is assigned an IP address on which it can be accessed by other pods within a cluster. Applications within a pod have access to shared volumes – helpful for when you need data to persist beyond the lifetime of a pod. Learn more about Kubernetes Pods.

Namespaces

Namespaces are a way to create multiple virtual Kubernetes clusters within a single cluster. Namespaces are normally used for wide scale deployments where there are many users, teams and projects.

Replica Set

A Kubernetes replica set ensures that the specified number of pods in a replica set are running at all times. If one pod dies or crashes, the replica set configuration will ensure a new one is created in its place. You would normally use a Deployment to manage this in place of a Replica Set. Learn more about Kubernetes ReplicaSets.

Deployments

A way to define the desired state of pods or a replica set. Deployments are used to apply HA policies to your containers by defining how many replicas of each container must be running at any one time.

Services

Coupling of a set of pods to a policy by which to access them. Services are used to expose containerised applications to origins from outside the cluster. Learn more about Kubernetes Services.

Nodes

A host, usually virtual, on which containers/pods run.

Kubernetes architecture and components

A K8s cluster is made up of a master node, which exposes the API, schedules deployments, and generally manages the cluster, plus multiple worker nodes, which run a container runtime such as Docker or rkt along with an agent that communicates with the master.

Master components

These master components comprise a master node:

  • Kube-apiserver. Exposes the API.
  • Etcd. A key-value store that holds all cluster data. (Can be run on the same server as the master node or on a dedicated cluster.)
  • Kube-scheduler. Schedules new pods on worker nodes.
  • Kube-controller-manager. Runs the controllers.
  • Cloud-controller-manager. Talks to cloud providers.

Node components

  • Kubelet. Agent that ensures containers in a pod are running.
  • Kube-proxy. Maintains network rules and performs forwarding.
  • Container runtime. Runs containers.

What benefits does Kubernetes offer?

Out of the box, K8s provides several key features that allow us to run immutable infrastructure. Containers can be killed, replaced, and self-heal automatically, and the new container gets access to the supporting volumes, secrets, configurations, etc. that make it function.

These key K8S features make your containerized application scale efficiently:

  • Horizontal scaling. Scale your application as needed from the command line or UI (see the kubectl commands after this list).
  • Automated rollouts and rollbacks. Roll out changes while monitoring the health of your application, ensuring all instances don’t fail or go down simultaneously. If something goes wrong, K8s automatically rolls back the change.
  • Service discovery and load balancing. Containers get their own IP, so you can put a set of containers behind a single DNS name for load balancing.
  • Storage orchestration. Automatically mount local, public cloud, or network storage.
  • Secret and configuration management. Create and update secrets and configs without rebuilding your image.
  • Self-healing. The platform heals many problems: restarting failed containers, replacing and rescheduling containers as nodes die, killing containers that don’t respond to your user-defined health check, and waiting to advertise containers to clients until they’re ready.
  • Batch execution. Manage your batch and Continuous Integration workloads and replace failed containers.
  • Automatic binpacking. Automatically schedules containers based on resource requirements and other constraints.
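For example, horizontal scaling and rollbacks are exposed directly through kubectl (the deployment name below is illustrative):

$ kubectl scale deployment hello-world --replicas=5   # scale out on demand
$ kubectl rollout undo deployment hello-world         # roll back a bad release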

What won’t Kubernetes do?

Kubernetes can do a lot of cool, useful things. But it’s just as important to consider what Kubernetes isn’t capable of:

  • It does not replace tools like Jenkins—so it will not build your application for you.
  • It is not middleware—so it will not perform tasks that a middleware performs, such as message bus or caching, to name a few.
  • It does not care which logging solution is used. Have your app log to stdout, then you can collect the logs with whatever you want.
  • It does not care about your config language (e.g., JSON).

K8s is deliberately unopinionated about these things, so we can build our app the way we want, expose any type of information, and collect that information however we want.

Kubernetes competitors

Of course, Kubernetes isn’t the only tool on the market. There are a variety, including:

  • Docker Compose—good for staging but not production-ready.
  • Nomad—allows for cluster management and scheduling, but it does not address secret and config management, service discovery, or monitoring needs.
  • Titus—Netflix’s open-source orchestration platform; it doesn’t have many people using it in production.

Overall, Kubernetes offers the best out-of-the-box features along with countless third-party add-ons to easily extend its functionality.

Getting Started with Kubernetes

Typically, you would install Kubernetes on either on-premises hardware or one of the major cloud providers. Many cloud providers and third parties now offer managed Kubernetes services; however, for a testing/learning experience this is both costly and unnecessary. The easiest and quickest way to get started with Kubernetes in an isolated development/test environment is minikube.

How to install Kubernetes

Installing K8S locally is simple and straightforward. You need two things to get up and running: Kubectl and Minikube.

  • Kubectl is a CLI tool that makes it possible to interact with the cluster.
  • Minikube is a binary that deploys a cluster locally on your development machine.

With these, you can start deploying your containerized apps to a cluster locally within just a few minutes. For a production-grade cluster that is highly available, you can use tools such as:

  • Kops
  • EKS, which is an AWS managed service
  • GKE, provided by Google

Minikube allows you to run a single-node cluster inside a virtual machine (typically running inside VirtualBox). Follow the official Kubernetes documentation to install minikube on your machine: https://kubernetes.io/docs/setup/minikube/.

With minikube installed, you are now ready to run a virtualised single-node cluster on your local machine. You can start your minikube cluster with:

$ minikube start

Interacting with Kubernetes clusters is mostly done via the kubectl CLI or the Kubernetes Dashboard. The kubectl CLI also supports bash autocompletion which saves a lot of typing (and memory). Install the kubectl CLI on your machine by using the official installation instructions https://kubernetes.io/docs/tasks/tools/install-kubectl/.

To interact with your Kubernetes clusters you will need to set your kubectl CLI context. A Kubernetes context is a group of access parameters that defines which cluster, user, and namespace kubectl commands run against. When starting minikube, the context is automatically switched to minikube by default. The following kubectl commands let you view, switch, and delete the contexts that commands execute against:

$ kubectl config get-contexts
$ kubectl config use-context <context-name>

$ kubectl config delete-context <context-name>

Deploying your first containerised application to Minikube

So far you should have a local single-node Kubernetes cluster running on your local machine. The rest of this tutorial outlines the steps required to deploy a simple Hello World containerised application, inside a pod, with an endpoint exposed on the minikube node IP address. Create the Kubernetes deployment with:

$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080

We can see that our deployment was successful, so we can view the deployment with:

$ kubectl get deployments

Our deployment should have created a Kubernetes Pod. We can view the pods running in our cluster with:

$ kubectl get pods

Before we can hit our Hello World application with an HTTP request from outside our cluster (i.e., our development machine), we need to expose the pod as a Kubernetes service. By default, pods are only accessible on their internal IP address, which cannot be reached from outside the cluster.

$ kubectl expose deployment hello-minikube --type=NodePort

Exposing a deployment creates a Kubernetes service. We can view the service with:

$ kubectl get services

When using a cloud provider you would normally set --type=LoadBalancer to allocate the service either a private or a public IP address outside of the ClusterIP range. Minikube doesn’t support load balancers, being a local development/testing environment, so --type=NodePort uses the minikube host IP for the service endpoint. To find the URL used to access your containerised application, type:

$ minikube service hello-minikube --url

Curl the URL from your terminal to test that our exposed service is reaching our pod.

$ curl http://<minikube-ip>:<port>

Now that we have made an HTTP request to our pod via the Kubernetes service, we can confirm that everything is working as expected. Checking the pod logs, we should see our HTTP request.

$ kubectl logs hello-minikube-c8b6b4fdc-sz67z

To conclude, we are now running a simple containerised application inside a single-node Kubernetes cluster, with an exposed endpoint via a Kubernetes service.
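When you are finished experimenting, the resources and the local cluster can be cleaned up with commands along these lines:

$ kubectl delete service hello-minikube
$ kubectl delete deployment hello-minikube
$ minikube stop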

Minikube for learning

Minikube is great for getting to grips with Kubernetes and learning the concepts of container orchestration at scale, but you wouldn’t want to run your production workloads from your local machine. Following the above you should now have a functioning Kubernetes pod, service and deployment running a simple Hello World application.

From here, if you are looking to start using Kubernetes for your containerized applications, you would be best positioned looking into building a Kubernetes Cluster or comparing the many Managed Kubernetes offerings from the popular cloud providers.

Additional resources

For more on Kubernetes, explore these resources:

Using Kubernetes Port, TargetPort, and NodePort
https://www.bmc.com/blogs/kubernetes-port-targetport-nodeport/

(This article is part of our Kubernetes Guide. Use the right-hand menu to navigate.)

Port configurations for Kubernetes Services

In Kubernetes there are several different port configurations for Kubernetes services:

  • Port exposes the Kubernetes service on the specified port within the cluster. Other pods within the cluster can communicate with this service on the specified port.
  • TargetPort is the port to which the service sends requests and on which your pod is listening. The application in the container must listen on this port as well.
  • NodePort exposes the service externally to the cluster using the target node’s IP address and the NodePort. If the nodePort field is not specified, a port from the NodePort range is allocated automatically.

Let’s look at how to use these ports in your Kubernetes manifest.



Using Port, TargetPort, and NodePort

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30036

From the above example, the hello-world service will be exposed internally to cluster applications on port 8080 and externally to the cluster on the node IP address on port 30036. It will also forward requests to pods with the label “app: hello-world” on port 80.

The configuration of the above settings can be verified with the command:

$ kubectl describe service hello-world


Create a pod running nginx to which the service will forward requests:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: hello-world
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
      - containerPort: 80

To test and demonstrate the above configuration, we can create a pod running an ubuntu container to execute some curl commands to verify connectivity.

$ kubectl run -i --tty ubuntu --image=ubuntu --restart=Never -- sh 

From this pod run the following commands:

Curl the service on the ‘port’ defined in the Kubernetes manifest for the service.

$ curl hello-world:8080

This proves that curling the Kubernetes service on port 8080 forwards the request to our nginx pod listening on port 80.

To test the NodePort on your machine (not in the ubuntu pod) you will need to find the IP address of the node that your pod is running on.

$ kubectl describe pod nginx

Now, you can curl the Node IP Address and the NodePort and should reach the nginx container running behind the Kubernetes service.
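With the manifests above, that request from your machine would look something like this, where the node IP is whatever the describe output reported:

$ curl http://<node-ip>:30036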

Additional resources

For more on Kubernetes, explore these resources:

Introduction to Kubernetes Helm Charts
https://www.bmc.com/blogs/kubernetes-helm-charts/

In this post we are going to discuss a tool used with Kubernetes called Helm. Part of our multi-part Kubernetes Guide, this article will:

  • Explore Helm charts
  • Determine when and why to use Helm and Helm Charts
  • Provide instructions for getting started

To explore other K8s topics, use the navigation menu on the right-hand side.

(This article is part of our Kubernetes Guide. Use the right-hand menu to navigate.)

What is Helm?

In simple terms, Helm is a package manager for Kubernetes, the K8s equivalent of yum or apt. Helm deploys charts, which you can think of as packaged applications: a collection of all your versioned, pre-configured application resources that can be deployed as one unit. You can then deploy another version of the chart with a different set of configuration values.

Helm helps in three key ways:

  • Improves productivity
  • Reduces the complexity of deployments of microservices
  • Enables the adoption of cloud-native applications

Why use Helm?

Writing and maintaining Kubernetes YAML manifests for all the required Kubernetes objects can be a time-consuming and tedious task. For the simplest of deployments, you would need at least three YAML manifests with duplicated and hardcoded values. Helm simplifies this process by creating a single package that can be installed into your cluster.

Helm is a client/server application and, until recently, relied on Tiller (the Helm server) being deployed in your cluster. Tiller is installed when installing/initializing Helm on your client machine; it receives requests from the client and installs the package into your cluster. Helm can be compared to RPM or DEB packages in Linux, providing a convenient way for developers to package and ship an application to their end users.

Once you have Helm installed and configured (details below), you are able to install production-ready applications from software vendors, such as MongoDB, MySQL and others, into your Kubernetes cluster with one very simple helm install command. Additionally, removing installed applications in your cluster is as easy as installing them.
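As a rough illustration, assuming Helm v3 syntax and the Bitnami chart repository as the source, installing and removing MySQL looks something like this:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-database bitnami/mysql
$ helm uninstall my-database    # removing it is just as simple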

What are Helm charts?

Helm Charts are simply Kubernetes YAML manifests combined into a single package that can be installed into your Kubernetes clusters. Once packaged, installing a Helm Chart into your cluster is as easy as running a single helm install, which really simplifies the deployment of containerized applications.

Describing Helm

Helm has two parts to it:

  • The client (CLI), which lives on your local workstation.
  • The server (Tiller), which lives on the Kubernetes cluster to execute what’s needed.

The idea is that you use the CLI to push the resources you need, and Tiller makes sure that state is actually the case by creating, updating, or deleting resources from the chart. To fully grasp Helm, there are three concepts we need to be familiar with:

  • Chart: A package of pre-configured Kubernetes resources.
  • Release: A specific instance of a chart which has been deployed to the cluster using Helm.
  • Repository: A group of published charts which can be made available to others.

Benefits of Helm

Developers like Helm charts for many reasons:

Boosts productivity

Software engineers are good at writing software, and their time is best spent doing just that. Using Helm allows engineers to deploy their test environments at the click of a button.

An example of this might be that, in order to test a new feature, an engineer needs a SQL database. Instead of going through the process of installing the software locally, creating the databases and tables required, the engineer can simply run a single Helm Install command to create and prepare the database ready for testing.

Reduces duplication & complexity

Once a chart is built, it can be used over and over again and by anyone. Being able to use the same chart for any environment reduces the complexity of creating something for dev, test, and prod. You can simply tune your chart and make sure it is ready to apply to any environment, and you get the benefit of using a production-ready chart in dev.

Smooths the K8S learning curve

It’s no secret that the learning curve for Kubernetes and containers is long for your average developer. Helm simplifies that learning curve: developers do not require a full, detailed understanding of the function of each Kubernetes object in order to start developing and deploying container applications.

Helm easily integrates into CI/CD pipelines and allows software engineers to focus on writing code—not deploying applications.

Simplifies deployments

Helm Charts make it easy to set overridable defaults in the values.yaml file, allowing software vendors or administrators of charts to define a base setting. Developers and users of charts can override these settings when installing their chart to suit their needs. If the default installation is required, then no override is required.

Deploying applications to Kubernetes is not a straightforward process, with different objects being tightly coupled. It requires specific knowledge of these objects and their functions in order to deploy successfully. Helm takes the complexity out of that, doing much of the hard work for you.

Describing a Helm chart

Helm has a certain structure when you create a new chart. To create, run “helm create YOUR-CHART-NAME”. Once this is created, the directory structure should look like:

YOUR-CHART-NAME/
 |
 |- .helmignore 
 | 
 |- Chart.yaml 
 | 
 |- values.yaml 
 | 
 |- charts/ 
 |
 |- templates/
  • .helmignore: This holds all the files to ignore when packaging the chart. Similar to .gitignore, if you are familiar with git.
  • Chart.yaml: This is where you put all the information about the chart you are packaging. So, for example, your version number, etc. This is where you will put all those details.
  • Values.yaml: This is where you define all the values you want to inject into your templates. If you are familiar with Terraform, think of this as Helm’s variables.tf file.
  • Charts: This is where you store other charts that your chart depends on. You might be calling another chart that your chart needs to function properly.
  • Templates: This folder is where you put the actual manifests you are deploying with the chart. For example, you might be deploying an nginx deployment that needs a service, configmap and secrets. You will have your deployment.yaml, service.yaml, config.yaml and secrets.yaml all in the templates dir. They will all get their values from values.yaml, as illustrated after this list.
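To illustrate how values flow into templates, here is a trimmed-down sketch with made-up values; the real files in your chart will be more complete:

# values.yaml
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"

# templates/deployment.yaml (fragment)
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"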

Installing Helm and configuring Helm Charts

Ready to use Helm? Installing and configuring Helm for your K8S cluster is a very quick and straight forward process—there are multiple versions of Helm that can be installed (v1/v2 and most recently v3), all of which can be configured to your organization’s needs. Check out the getting started page for instructions on downloading and installing Helm.

Beginning your first Helm chart is as simple as installing some charts from the stable repository, which is available on GitHub. The Helm stable repository is a collection of curated applications ready to be deployed into your cluster.

Helm users can write their own charts or can obtain charts from the stable repository.

If you want to write your own Helm Charts for your applications, Helm provides a simple Developer’s Guide for getting started.

Helm simplifies software deployment

In summary, Helm takes a lot of the complexity out of deploying software and your own applications to Kubernetes. It can be installed in a matter of minutes, and you can be deploying charts from the stable repository in no time. Writing your own charts is also a straightforward process, though it does require an understanding of Kubernetes objects. Helm can empower your developers to work more efficiently, giving them the tools they need to test their code locally.

Additional resources

For more on Kubernetes, explore these resources:

How To Write Kubectl Subcommands
https://www.bmc.com/blogs/kubernetes-how-to-write-kubectl-subcommands/

Since the release of Kubernetes v1.12 it has been possible to extend kubectl with subcommands (plugins) to make automating repetitive Kubernetes tasks seem more kubectl native and easier to use for developers.

Before you can start to make use of kubectl subcommands, you need to upgrade your kubectl CLI to at least v1.12. Follow the official Kubernetes documentation for your platform: https://kubernetes.io/docs/tasks/tools/install-kubectl/.

What are the benefits of kubectl subcommands?

Using subcommands doesn’t add any benefit that you couldn’t get by writing your own scripts and distributing them as you normally would. However, they do allow your scripts to look as if they are built right into kubectl, which feels more natural for users if you are distributing these scripts to developer machines.

Kubectl plugins suit two major languages, each with its own benefits:

Go: the go-to choice for cloud-native/Kubernetes tooling, which can be distributed as a single binary.

Bash: cross-platform and dependable, since users invoking kubectl are already working from a shell.

As an example, this guide focuses on building a kubectl subcommand, executed with `kubectl cmd`, that runs an argument-based command on all pods in a given namespace that match a grep filter. The example uses `env` as the command to run on the pods, simply printing out all environment variables, but it can be invoked with any command you desire.

Save the following script anywhere in your `$PATH`.
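The script can be a plain bash file; the sketch below is one way it could look, assuming the namespace, a pod-name filter, and the command to run are passed as the arguments:

#!/usr/bin/env bash
# kubectl-cmd: run a command on every pod in a namespace whose name matches a filter.
# Usage: kubectl cmd <namespace> <filter> <command...>   e.g. kubectl cmd default web env
set -euo pipefail

NAMESPACE="$1"
FILTER="$2"
shift 2

for pod in $(kubectl get pods -n "$NAMESPACE" -o name | grep "$FILTER"); do
  echo "==> ${pod}"
  kubectl exec -n "$NAMESPACE" "${pod#pod/}" -- "$@"
done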

Make the script executable with `chmod +x /usr/local/bin/kubectl-cmd`

To confirm that kubectl is aware of your new plugin, type:

`kubectl plugin list`
Now that the plugin is listed as available inside kubectl, there is nothing left to do. You can now invoke your new kubectl subcommand with `kubectl cmd <arg1> <arg2> <arg3>`.

Invoking kubectl plugins

As seen in the example above, arguments that your plugin is invoked with will be passed to your executable in the same way they would if you were to run this in any other way.

You can read more about the kubectl plugin mechanisms on the official Kubernetes GitHub https://github.com/kubernetes/enhancements/blob/c665c8d7203e15cc4b0ad53343d357ca3019c22c/keps/sig-cli/0024-kubectl-plugins.md.

Aside from writing your own kubectl plugins you can use krew (link: https://github.com/kubernetes-sigs/krew) to find and install community plugins written by other developers to extend the built in functionality of kubectl. You can also publish your own plugins to krew for others to use.

The code used in this guide can be found on GitHub so you can quickly get started. I’ve also included a simple Ansible Playbook to simplify installing any custom plugins you have written, which can be found at https://github.com/dpmerron-ltd/How-To-Write-Kubectl-Subcommands/tree/master/kubectl-plugin-installer

https://github.com/dpmerron-ltd/How-To-Write-Kubectl-Subcommands
