Developers are discovering the power of using containers to package applications quickly and easily. This trend is driven by the need for speed and agility as it enables developers to rapidly release new applications and functions by taking a DevOps approach. While most enterprise organizations are using traditional environments for development, they are also gradually adding containers to the mix. According to Forrester, “Container strategies are critical to digital business… EA pros must define their container strategy to accelerate digital transformation.”1
What makes containers so popular?
As organizations develop container strategies, those strategies are generally based on Docker, which has become a de facto standard for containers. A container is an application packaging structure that runs on any host supported by Docker. What makes containers unique and appealing is that they insulate applications from the idiosyncrasies of specific infrastructure or middleware, so moving from machine to machine (or environment to environment) is greatly simplified. They’re small, dynamic, and give developers the freedom to concentrate on coding the best solutions without having to worry about the technical configurations of their environments. Containers run consistently wherever they’re deployed, start up quickly, and scale easily.
Because the application is isolated from the environment, you don’t have to worry about whether the application you develop on a personal laptop will work on a test or staging machine. You simply start up a container when you need it. So, developers can build applications faster and add the business functionality that customers require.
A look at the challenges of managing containerized applications
But consider this. Containerized applications are still applications. They require all the same operational and run-time management used for traditional applications along with some new wrinkles related to this new technology.
Same management requirements as traditional applications
When a containerized application executes, you need to determine if it was successful and identify any subsequent actions to take. This includes managing the upstream/downstream dependency relationships so that actions occur in the correct sequence and at the correct time. You need to know if SLAs are being met; be notified when errors occur; open incidents when necessary; analyze problems when failures occur; and have facilities to repair or restart failed applications. These are the basic capabilities and requirements for managing enterprise application workloads.
A workflow automation and job scheduling tool is ideal for expediting the execution of containerized business processes and ensuring the consistent standards needed to meet production requirements.
Unique container requirements
In addition to the orchestration, management, and visibility requirements of traditional environments, containers add some unique new ones.
Like any new technology in the enterprise, containers are likely to be added into an already complex mix of applications, middleware, operating systems and tools. It’s highly likely there will be a need to integrate, or at least to interact, between old and new.
By their very nature, containerized services tend to be composed of significantly more components than monolithic applications. Keeping track of many pieces is more challenging than tracking a few.
Because containers usually disappear once an application completes, you need to collect logs, output, and audit information, and generally ensure that all of the required application history is preserved before the container is removed.
To provide this level of insight and visibility, your management solution must understand both containers and traditional applications.
Save time and eliminate headaches with a proven digital business automation solution
Control-M provides these management capabilities in both containerized and traditional environments. In fact, the more complex the environment, the greater value you’ll get out of Control-M. Think about it.
Mature solutions don’t grow on trees or spring up fully formed. Either you build one from scratch, maintain it, and go through many iterations that can take years, or you take something that already works — like Control-M — and extend it to a new environment. That’s what we’ve done at BMC to evolve the solution to support containers while delivering true digital business automation.
You can now embed Control-M’s execution agent into your application container image. As containers are instantiated, the embedded agents dynamically register with Control-M and are available to run jobs. Work can be distributed transparently to specific containers, among container groups, or to traditional execution environments.
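As a rough illustration of this embedding pattern, a Dockerfile can layer an execution agent on top of an existing application image. The image name, agent package, install path, and startup wrapper below are hypothetical placeholders for illustration, not actual Control-M artifacts:

```dockerfile
# Sketch only: layer a workload-automation agent into an application image.
# All names below (base image, agent tarball, scripts) are hypothetical.
FROM mycompany/order-processing:1.4

# Copy and install the (hypothetical) embedded execution agent
COPY agent-install.tar.gz /tmp/
RUN tar -xzf /tmp/agent-install.tar.gz -C /opt/agent \
    && /opt/agent/install.sh --silent

# On startup, a wrapper registers the agent with the scheduler so it can
# receive work, then launches the application process it manages.
ENTRYPOINT ["/opt/agent/run-with-agent.sh", "/app/start.sh"]
```

With a pattern like this, every container started from the image announces itself to the scheduler automatically, so no manual registration step is needed as containers come and go.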
This approach gives organizations complete flexibility and abstraction of workload placement while gaining all the benefits of Control-M for the work being managed. Because all these capabilities are accessible via RESTful web services or a simple Node.js CLI, developers and DevOps teams can access these operational capabilities directly from their CI/CD toolchain, eliminating custom scripting and simplifying comprehensive, automated testing.
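To give a flavor of what a CI/CD pipeline might submit through such a REST interface, a job definition in this style of API is typically just JSON. The folder, job, and host names below are hypothetical, and the structure is a sketch of the general pattern rather than a definitive schema:

```json
{
  "DemoFolder": {
    "Type": "Folder",
    "ProcessOrders": {
      "Type": "Job:Command",
      "Command": "python process_orders.py",
      "RunAs": "batchuser",
      "Host": "order-processing-containers"
    }
  }
}
```

Because the definition is plain JSON, a pipeline can generate it, validate it, and deploy it as part of the same automated build that produces the container image itself.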
Why this matters
Developers need the capability to iterate quickly. Businesses are pressured to deliver applications with new capabilities and services to their customers. Control-M provides the rich functionality that’s essential for managing digital business automation for containers without having to build these capabilities yourself. As a result, DevOps teams can manage containers more effectively, deal with the complexity of enterprise production environments, and adapt to meet new requirements.
These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.
See an error or have a suggestion? Please let us know by emailing firstname.lastname@example.org.