Choosing a container management platform can make your head swim these days. In addition to the Open Source options (e.g. Kubernetes, Swarm, and Rancher), it seems every cloud vendor has its own custom offering as well, and despite all being based on Docker, no two are quite alike. Never satisfied with just keeping pace, Amazon actually offers two options, so even customers accustomed to saying “I just run it in Amazon” still have a choice to make.

“Here is my application, run it for me, when and where I want it, securely. That’s the end game.” – Kelsey Hightower, Staff Developer Advocate, Google Cloud Platform

Ultimately, both Elastic Container Service (ECS) and Elastic Container Service for Kubernetes (EKS) are “container management platforms,” sometimes also called “container orchestration platforms.” To understand the difference between the two, it helps to understand what “container management” means in the first place.

Docker.com defines a container image as “… a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.” Code and tools and settings… What more do you need? Well, real-world applications need much more to be useful. They:

  • Send and respond to network calls,
  • Connect to databases, caches, and APIs,
  • Output log and QOS data, and
  • Execute in an environment with CPU, RAM, and storage resources.

Building a Docker image only encapsulates the application itself – the container management platform provides the rest of this functionality.

ECS and EKS are Amazon’s two offerings in this space, and both provide all of the above features. However, the way they do so is very different. ECS is sometimes referred to as a “simplified version” of Kubernetes, but this is misleading. Each platform actually takes a different approach toward container orchestration, and the differences become important as you begin to scale applications.

The easiest place to start comparing the two is with the scheduler, the component responsible for determining where a container is run, how many copies are started, and how resources are allocated to them. As shown in Figure 1 below, ECS follows a very traditional, easy to understand model. Each application in your stack (e.g. an API or Thumbnailer) is defined as a “Service” in ECS. ECS then schedules (runs) “Tasks” (instances) of the desired container on one or more underlying hosts to meet the resource requirements defined for that service:
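To make the Service/Task relationship concrete, here is a minimal sketch of the ECS model in Python. The names, image, and resource values are illustrative only, and the dicts mirror the shape of ECS task and service definitions rather than calling any real AWS API:

```python
# Sketch of the ECS scheduling model: a "Service" keeps a desired
# number of "Tasks" (running copies of a container) alive.
# All names/images below are hypothetical examples.

def make_task_definition(family, image, cpu, memory):
    """Build an ECS-style task definition: which container image to run
    and how much CPU (in CPU units) and RAM (in MiB) each task gets."""
    return {
        "family": family,
        "containerDefinitions": [
            {"name": family, "image": image, "cpu": cpu, "memory": memory}
        ],
    }

def make_service(name, task_family, desired_count):
    """Build an ECS-style service: the scheduler starts (and replaces)
    tasks of the given definition until desiredCount are healthy."""
    return {
        "serviceName": name,
        "taskDefinition": task_family,
        "desiredCount": desired_count,
    }

task = make_task_definition("thumbnailer", "example/thumbnailer:latest", 256, 512)
service = make_service("thumbnailer-svc", "thumbnailer", 3)
print(service["desiredCount"])  # the scheduler's target: 3 running tasks
```

The key point is the small vocabulary: one task definition describes the container and its resources, and one service declares how many copies to keep running. The scheduler handles placement across hosts.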

This structure tends to be very easy to implement because, other than defining Services, it closely mimics familiar server-based workloads. Migrating an existing application to ECS may require little more than a Dockerfile, pushing an image to Amazon’s Elastic Container Registry (ECR), and defining a service to run that image. Management and monitoring tools are basic but functional, and require little training to use.

In contrast, EKS is essentially just a hosted form of Kubernetes. Where ECS provides networking and support functionality via AWS services such as Application Load Balancers (ALBs), Route 53, and CloudWatch, Kubernetes provides these mechanisms internally, and also adds the concept of a “Pod”. Pods are a powerful and flexible (but often confusing) mechanism that provides finer-grained control over the components within a service.

For instance, suppose the Thumbnail service was actually three components working together: a microservice API, an image processor, and a storage engine:

As shown in Figure 2, in addition to Kubernetes internally providing services such as routing and discovery, “Pods” are used here to define the sub-components that make up the thumbnailing service. Containers within a Pod run co-located with one another, and have easy access (via “localhost”) to each other and to shared resources such as storage volumes. Application architects can leverage these concepts to create more sophisticated stacks than in ECS.
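A three-container Pod like the one described above can be sketched as follows. This is the Pod manifest expressed as a Python dict; the container names and images are hypothetical, but the structure matches a Kubernetes Pod spec, where all listed containers share one network namespace and can mount shared volumes:

```python
# Sketch of a Kubernetes Pod grouping the three thumbnailing
# components into one schedulable unit. Names and images are
# illustrative. Because the containers share a network namespace,
# the API container can reach the processor via localhost.

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "thumbnailer"},
    "spec": {
        "containers": [
            {"name": "api", "image": "example/thumb-api:latest"},
            {"name": "processor", "image": "example/thumb-processor:latest"},
            {"name": "storage", "image": "example/thumb-storage:latest"},
        ],
        # An ephemeral volume any of the three containers could mount.
        "volumes": [{"name": "scratch", "emptyDir": {}}],
    },
}

# Kubernetes always schedules every container in a Pod onto the
# same node, together - that is what enables the localhost access.
names = [c["name"] for c in pod["spec"]["containers"]]
print(names)
```

There is no equivalent grouping in ECS: each ECS task maps to a single container definition set, without this per-component placement guarantee.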

Finally, EKS enables the use of a wide range of Kubernetes-ecosystem add-ons such as Project Calico, a sophisticated network policy engine, and Prometheus, a powerful performance monitoring service.

What About Fargate? (… or “What Is Fargate?”)

Container management platforms slice and dice a server’s CPU and RAM to better allocate them to your workloads. The servers still exist, they are just subdivided further than before. Even though they become nearly identical (typically just a base operating system plus an agent), there is still a small but measurable burden in managing these systems.

Fargate is a fancy name for a simple concept: Amazon takes over management of the underlying servers. Instead of booting a server, installing the agent, and keeping it up to date, you simply create a cluster and add workloads to it. Amazon automatically adds pre-configured servers to the “pool” to support your workload requirements.
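In practice, the difference shows up as a single launch-type choice when you run a task. The sketch below models that request shape in Python; the cluster, task, and subnet names are hypothetical, and no real AWS call is made:

```python
# Sketch of the operational difference Fargate makes. With the EC2
# launch type you supply and manage the host instances; with FARGATE
# you only declare the task, and Amazon supplies the hosts.
# All identifiers below are illustrative placeholders.

def run_task_request(cluster, task_family, launch_type):
    """Build an ECS-style run-task request for the given launch type."""
    req = {
        "cluster": cluster,
        "taskDefinition": task_family,
        "launchType": launch_type,  # "EC2" or "FARGATE"
    }
    if launch_type == "FARGATE":
        # Fargate tasks use the awsvpc network mode: each task gets
        # its own network interface in one of your VPC subnets.
        req["networkConfiguration"] = {
            "awsvpcConfiguration": {"subnets": ["subnet-example"]}
        }
    return req

print(run_task_request("demo-cluster", "thumbnailer", "FARGATE")["launchType"])
```

Note that the Fargate path carries extra networking configuration: this is the “awsvpc-only” constraint discussed in the limitations below.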

In this author’s opinion, Fargate is a big win in most cases. Used properly it should cost no more than self-management, and often costs less because it is easy to forget to shut down unused host capacity when self-managing. However, there are some valid exceptions that may preclude its use:

  • Highly regulated environments may require organizations to take more responsibility for the entire “stack”, down to the hardware level. Fargate is not compatible with “dedicated tenancy” hosting requirements.
  • ECS + Fargate currently supports only one networking mode, “awsvpc”, which has some limitations if deep control over the networking layer is required (see below).
  • Fargate automatically allocates resources to meet workload demands, with few controls over how this is done. In environments with heavy R&D activity, this could easily lead to uncontrolled cost growth if not tightly monitored. Self-hosting allows the creation of limited-capacity clusters for R&D purposes that eliminate this risk.

Which Platform is Right For You?

If EKS is so powerful, why wouldn’t it be the automatic choice for new workloads? It turns out that because ECS is simple yet mature, it still has a lot to recommend it:

  • DevOps teams leveraging Terraform, Elastic Beanstalk, or other “software defined infrastructure” tools will generally find ECS well supported in these tools.
  • The learning curve in ECS is much lower. Organizations with limited DevOps resources, or that are not prepared to re-architect applications around concepts like Pods, may find ECS easier to adopt.
  • While Kubernetes offers many more choices regarding add-ons in its ecosystem, each choice requires time, resources, and maintenance to leverage fully. ECS has only one option in each category: if it meets your needs, you’re already done.
  • If Kubernetes is a long-term goal but “too much” to adopt all at once, ECS can be a compatible first step, allowing an organization to implement a containerization strategy and move its workloads into a managed service with less up-front investment.

On the other hand, ECS can sometimes be too simple, and there are even a few lightly-documented “gotchas” that may be obstacles for some apps:

  • Without a concept similar to Pods, fine-grained control over container placement is not possible. Many (most?) applications can live without this – but for those that require it, this could be a total blocker.
  • Particularly when run via ECS Fargate, some additional technical limitations may apply. For instance, the only networking mode available is “awsvpc”, which at the time of this writing does not allow custom hosts table definitions for running tasks (they are overridden), and tasks may only run images from ECR or public Docker Hub repositories.
  • ECS management tools are limited to the Web Console, CLI, and SDKs. Logging and performance monitoring are done through CloudWatch, deployments through ECS itself, and service discovery via Route 53. If any of these tools are unacceptable, it may be time to step up to EKS.

In the end, every organization has different needs, and both options have pros and cons: there is no one “right” answer here. However, in this author’s opinion, there is a simple litmus test that can be applied: If you know you need Kubernetes, then you need EKS. If you do not know, you probably don’t – consider starting with ECS first.

Last updated: 07/19/2018

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.


About the author

Chad Robinson

Chad Robinson

Chad Robinson is a cloud software architect and development team lead based in Denver, CO. His specialties include Web and mobile architectures, AWS and Google Cloud Platforms, and Agile SDLCs. When not at the keyboard he is most likely hiking in the Rocky Mountains or enjoying one of Colorado's many fine breweries.