AWS – BMC Software | Blogs (https://s7280.pcdn.co)

AWS Management Tools: What’s Available & How To Choose
https://s7280.pcdn.co/aws-management-tools/ (Thu, 17 Feb 2022)

When cloud computing was introduced to the masses, new and innovative startups were among the early adopters. Cloud vendors such as Amazon, Microsoft, and Google offered a myriad of cloud resources designed to run different types of IT workloads. The flexibility and variety of choice sharpened the appetite for a cloud-first business paradigm:

  • Legacy applications and workloads were quickly relocating to the cloud.
  • IT began building containerized apps and delivering services to a global user base via the internet.

The growing cloud adoption trend quickly ran into IT management and governance challenges. According to industry research, solving the cloud governance challenge is a top priority for SMBs investing in cloud solutions. Large enterprises are equally concerned: 84% are worried about managing cloud spending.

Fortunately, large vendors such as Amazon Web Services (AWS) offer a vast library of cloud management and governance tools. In this article, we will explore the three categories of AWS cloud management solutions:

  • Enable: Built-in governance control tools.
  • Provision: AWS cloud management tools that allow users to allocate and use resources efficiently based on defined policies.
  • Operate: Tools that maximize the performance of your AWS cloud systems, streamline governance and control, and ensure compliance.

(This tutorial is part of our AWS Guide. Use the right-hand menu to navigate.)

Enable tools

AWS Control Tower

AWS Control Tower manages multiple AWS accounts and teams across your AWS cloud environment. Security, compliance, and visibility guardrails extend to every account you provision, with just a few clicks.

Benefits:

  • Easy provisioning and configuration of multiple AWS accounts.
  • Automate policy management: enforce rules, Service Control Policies (SCPs).
  • Gain full dashboard visibility into accounts and policies.

AWS Organizations

Grow and scale your AWS environment by programmatically provisioning accounts, allocating resources, organizing workflows for account groups and simplifying the billing process for grouped accounts.

Benefits:

  • Easily and quickly scale your AWS cloud environment.
  • Central audit of scalable cloud environments.
  • Simplified identity and access control systems.
  • Optimize resource provisioning and reduce duplication with AWS Resource Access Manager (RAM) and AWS License Manager.
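Much of this centralized control is implemented by attaching Service Control Policies (SCPs) to accounts or organizational units. As a minimal sketch (the statement ID and region allowlist below are hypothetical examples), an SCP is just a JSON policy document:

```python
import json

# Build a minimal Service Control Policy (SCP) document that denies all
# actions outside an approved set of regions -- the kind of guardrail
# AWS Organizations can attach to an account or OU.
# The allowed-region list is a hypothetical example.
def region_guardrail_scp(allowed_regions):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
            },
        }],
    }

scp = region_guardrail_scp(["us-east-1", "eu-west-1"])
print(json.dumps(scp, indent=2))
```

In practice you would pass a document like this to the Organizations `CreatePolicy` API and then attach it to a target account or OU.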

AWS Well-Architected Tool

Review existing workloads and compare your IT environment against AWS architectural best practices. The tool applies the AWS Well-Architected Framework, which helps users design secure, high-performing, and resilient workloads on AWS.

Benefits:

  • Free AWS cloud architecture guidance.
  • Cloud workload monitoring for compliance to AWS architectural best practices.
  • Identify performance bottlenecks, monitor workloads, and track changes.

(Image: the five pillars of the AWS Well-Architected Framework)

Provisioning Tools

AWS CloudFormation

AWS CloudFormation provides a common language to provision foundational assets in your cloud instance. Using a basic text file, CloudFormation enables you to model and provision each asset required.

Benefits:

  • Model your infrastructure from a single source: a text file
  • Standardize the infrastructure for your entire organization in a simplified way
  • Provisions can be automated and deployed over and over again without being rebuilt
  • Demystify infrastructure by treating it like what it is: code
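That text file is plain JSON (or YAML). A minimal sketch, modeling a hypothetical stack containing a single versioned S3 bucket and serializing it to the file CloudFormation consumes:

```python
import json

# Model the infrastructure as data, then serialize it to the text file
# CloudFormation consumes. "ArtifactBucket" is a hypothetical logical name.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: a single versioned S3 bucket",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"},
            },
        },
    },
    "Outputs": {
        # Ref returns the bucket's generated name after deployment
        "BucketName": {"Value": {"Ref": "ArtifactBucket"}},
    },
}

with open("stack.json", "w") as f:
    json.dump(template, f, indent=2)
```

Deploying the same file repeatedly (for example with `aws cloudformation deploy --template-file stack.json`) yields the same provisioned infrastructure, which is what makes the "deploy over and over without rebuilding" benefit possible.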

AWS Service Catalog

AWS Service Catalog lets organizations create and manage catalogs of IT services approved for use on AWS, covering everything from virtual machine images, servers, and applications to databases. It centralizes administration and empowers users to rapidly deploy the IT services they need, on demand.

Benefits:

  • Ensure your organization complies with industry standards
  • Help users find IT services to deploy
  • Manage IT services from one central point

AWS OpsWorks

Lets you write small amounts of code to automate configurations. AWS OpsWorks’ main benefit is that it offers application and server management through Chef, Puppet, and Stacks; Chef and Puppet are automation platforms that let you use code to automate the configuration of your servers.

Using instances of Chef and Puppet designed for AWS, developers can deploy code that keeps their configurations in check. OpsWorks has three offerings:

  • AWS OpsWorks for Chef Automate
  • AWS OpsWorks for Puppet Enterprise
  • AWS OpsWorks Stacks

AWS Trusted Advisor

AWS Trusted Advisor is a provisioning resource that provides on-demand, real-time guidance to AWS users to improve the overall performance of your AWS environment. It inspects your instances and recommends changes that reduce cost, increase security, improve performance, and more.

Benefits:

  • Full access to a wide range of perks that optimize your AWS instance
  • Increased security
  • Fine-tuned performance
  • Alerts and notifications

Operate Tools

Amazon CloudWatch

Amazon CloudWatch provides monitoring services for AWS cloud resources and applications. Use CloudWatch to collect and track metrics, monitor log files, set alarms, and respond to changes in your AWS resources.

Benefits:

  • Amazon EC2 monitoring
  • AWS resource monitoring
  • Custom metrics monitoring
  • Log monitoring and storage
  • View data in visual reports
  • React to resource changes
  • Set alarms

Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics produced by your applications and services.
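To make the custom-metric and alarm ideas concrete, here is a sketch of the keyword arguments boto3's CloudWatch client accepts for `put_metric_data` and `put_metric_alarm`. The namespace, metric name, and threshold are hypothetical; only the payloads are built here, and no API call is made:

```python
# Build the parameters for publishing a custom metric datapoint and for an
# alarm on a built-in EC2 metric. These dicts match the keyword arguments of
# boto3's cloudwatch.put_metric_data() and put_metric_alarm(); the namespace,
# metric name, and threshold are hypothetical examples.
def custom_metric(namespace, name, value, unit="Count"):
    return {
        "Namespace": namespace,
        "MetricData": [{"MetricName": name, "Value": value, "Unit": unit}],
    }

def cpu_alarm(alarm_name, threshold_percent):
    return {
        "AlarmName": alarm_name,
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Statistic": "Average",
        "Period": 300,               # evaluate 5-minute averages
        "EvaluationPeriods": 2,      # require two consecutive breaches
        "Threshold": threshold_percent,
        "ComparisonOperator": "GreaterThanThreshold",
    }

metric = custom_metric("MyApp/Checkout", "FailedPayments", 3)
alarm = cpu_alarm("high-cpu", 80.0)
```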

AWS CloudTrail

An important operational tool, AWS CloudTrail helps enterprises achieve compliance and track user activity. The service offers governance, compliance, operational, and risk auditing of your account. CloudTrail records a comprehensive log of actions taken across AWS and integrated services.

Benefits:

  • User activity is recorded in a secure log
  • Compliance audits become easier with pre-stored event logs generated by the system
  • Find areas where your system is vulnerable and monitor or fix them
  • Security automation
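CloudTrail delivers that activity log as JSON event records. A sketch of summarizing one trimmed, hypothetical record (real events follow this field layout but carry many more fields):

```python
import json

# A trimmed CloudTrail event record; the field names follow the CloudTrail
# record format, but the values are hypothetical.
sample_event = json.loads("""{
  "eventTime": "2022-01-15T09:30:00Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "TerminateInstances",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}""")

def summarize(event):
    """One audit-friendly line: who called what, where, and from which IP."""
    who = event["userIdentity"].get("userName", "unknown")
    return (f'{who} called {event["eventName"]} '
            f'on {event["eventSource"]} from {event["sourceIPAddress"]}')

print(summarize(sample_event))
# Flag potentially destructive actions for review
is_destructive = sample_event["eventName"].startswith(("Delete", "Terminate"))
```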

AWS Config

Manage and audit configurations of your AWS environments and systems. AWS Config keeps a repository of configuration records and evaluates them against desired specifications.

It also tracks changes and dependencies between AWS resources, helping users monitor the many configurations of their AWS services, an otherwise time-consuming process. AWS Config brings monitoring, assessing, auditing, and evaluating configurations into one place.

Benefits:

  • Continuously monitor and track configuration changes.
  • Stay up to date with compliance and audit requirements.
  • Manage changes at scale; troubleshooting is simplified and can be automated.
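The evaluation step can be pictured as comparing each recorded configuration item against a desired specification. A toy sketch (the rule and the bucket records below are hypothetical; real AWS Config rules run as managed or Lambda-backed evaluations):

```python
# Compare recorded configuration items against a desired specification and
# report compliance, mirroring the COMPLIANT / NON_COMPLIANT results that
# AWS Config rules produce. The rule and items below are hypothetical.
def evaluate(config_item, rule):
    ok = all(config_item.get(key) == want for key, want in rule.items())
    return "COMPLIANT" if ok else "NON_COMPLIANT"

rule = {"encrypted": True, "publicAccessBlocked": True}

buckets = [
    {"name": "logs-bucket", "encrypted": True, "publicAccessBlocked": True},
    {"name": "scratch-bucket", "encrypted": False, "publicAccessBlocked": True},
]

results = {b["name"]: evaluate(b, rule) for b in buckets}
```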

AWS Systems Manager

AWS Systems Manager gives you visibility into, and control of, your infrastructure on AWS. Systems Manager offers a powerful, easy-to-use UI so you can view operational data from multiple sources and automate the tasks needed for smooth operation. With Systems Manager, you can group resources by application, monitor operational data, and act on your resources.

Benefits:

  • Ensures security and compliance
  • Includes management of hybrid environments
  • Full visibility of resource groups and configurations lets you have greater control
  • Perfect for automation, easy-to-use
  • Detect problems more quickly

Visit the AWS Management Tools homepage for more tools and detailed descriptions.

Third-party tools for managing AWS

In addition to the tools created by AWS, a number of third-party vendors offer resources for provisioning, ops management, monitoring and configurations.

RightScale

RightScale is a multi-purpose tool that helps with operations management and provisioning, as well as governance monitoring and cost optimization. This cloud management platform lets users manage all their clouds from one UI.

SCALR

Similar to RightScale, SCALR offers a number of functions helpful in an AWS environment. The service aims to increase productivity, reduce cost, enhance security, and prevent common concerns such as vendor lock-in, all while offering a flexible environment on public, private, or hybrid clouds.

Hybridfox

Hybridfox is a popular Firefox add-on that works with a number of IaaS/PaaS providers, including AWS. It can be used with public and private clouds. It’s well suited to users who have multiple cloud environments because it allows for switching between them seamlessly.

Cloudability

Cloudability is a full-service cloud suite that offers users migration assistance, configuration management, and operations management. Cloudability helps to ensure governance and compliance needs are met, while offering a full suite of services to AWS users.

Ylastic

Ylastic is a cloud management service that focuses on managing user instances of AWS in an intuitive way and offering data analytic and backup options. Ylastic touches operations management, configuration management, security, compliance and more.

While the differences between some of these tools may seem small, something like red-flag resolution and alerts could make all the difference for enterprise business leaders. In many instances, it comes down to personal preference.

Overall, when purchasing any new services or applications, it’s important to first take inventory of the unique needs of your business, then decide on the right course of action. Apart from choosing the right services, implementing an effective cloud management strategy is also of paramount importance.

Related reading

AWS Certifications in 2022
https://www.bmc.com/blogs/aws-certifications/ (Thu, 03 Feb 2022)

Amazon Web Services (AWS) certifications are highly sought-after credentials in today’s environment. AWS certs provide an industry standard for demonstrating AWS cloud expertise, rigorously testing competency, and providing an accurate representation of the test-taker’s skills.

This article covers several aspects of getting and maintaining an AWS certification and how they relate to your AWS certification journey.

(This article is part of our AWS Guide. Use the right-hand menu to navigate.)

Available AWS certifications & categories

There are eleven active AWS certifications that you can achieve in 2022:

  • 1 foundational certification
  • 3 associate-level certifications
  • 2 professional-level certifications
  • 5 specialty certifications

Here’s a brief description of what each certification level offers and what proficiencies they certify.

Foundational Level

Cloud Practitioner is the only Foundational AWS certificate. It is ideal for candidates with at least six months of experience in any role involving the AWS cloud, such as technical, sales, purchasing, financial, or managerial positions.

This certification verifies that the candidate has an overall familiarity with the AWS Cloud and related knowledge and skills. While other certifications tie into specific technical roles such as Architect, Developer or Operations, the Cloud Practitioner certification provides a more general foundation for any career path.

Associate Level

Each of the Associate certifications typically requires at least a year of direct experience and knowledge of AWS services and related technologies. The three certifications within the Associate level are:

  • Solutions Architect focuses on designing and implementing AWS distributed systems
  • SysOps Administrator focuses on deploying, managing, and operating workloads on AWS
  • Developer focuses on writing and deploying cloud-based applications

Professional Level

The Professional certifications form the highest certification category; each requires a full two years of experience, with candidates expected to be successful and highly capable within their respective roles. The two Professional-level certifications are:

  • Solutions Architect validates the candidate’s ability to design, deploy, and evaluate AWS applications inside complex and diverse environments
  • DevOps Engineer validates the candidate’s ability to automate the testing and deployment of AWS applications and infrastructure

Specialty certifications

Whereas the previous three levels represent the core role-based certifications that AWS offers, the Specialty certifications provide evaluations in specific technical areas. These certifications include:

  • Advanced networking
  • Security
  • Machine Learning
  • Data analytics
  • Database

Requirements vary for each specialty certification. Candidates must possess experience with AWS technology, along with 2-5 years’ experience in the specialty domain. Check each individual certification for prerequisites and requirements.

How much does AWS certification cost?

To earn an AWS certification, you’ll have to pass a test. Each exam requires an AWS testing fee, typically between $100 and $300 (visit this page for Amazon’s current pricing).

Be prepared though—the examination fee won’t be your only certification cost. You may also have to invest cash and time in test preparation, including:

  • Paid classroom or remote training
  • Course materials
  • Practice exams

But there is an upside! Amazon offers an AWS Free Tier account that includes short-term trials, 12 months of free access to some services, and a set of always-free services. This is valuable when studying for certification. However, if you’re studying specific certification scenarios, you may have to purchase additional services with your free account.

Benefits of an AWS Certification

AWS remains one of the top cloud service providers in the market. For good reason.

Obtaining AWS certifications demonstrates competency in AWS services. Certifications also help candidates clearly demonstrate to potential employers exactly what skills they have, which helps you:

  • Increase your competitiveness
  • Negotiate your salary

Many significant IT professional and management opportunities aren’t available without a related AWS certification. While great salaries aren’t guaranteed, jobs requiring AWS certification frequently offer salaries ranging from $90k to $160k+ USD, depending on the AWS certification category and job environment.

Of course, AWS certifications can also aid candidates in improving skills or learning new ones. Preparing for the exams through practice exercises and studying can:

  • Reinforce knowledge on key concepts
  • Correct outdated/wrong knowledge
  • Introduce you to new areas

Picking the right AWS certification

The two main factors that determine the right AWS certification are your experience level and your desired career path. If you already work in a particular field and wish to move up to higher positions, look for certifications that match your capabilities. Then check the experience and skill requirements to determine whether an Associate, Professional, or Specialty certification is the best fit.

AWS outlines several “learning paths” that can help guide candidates toward the best certifications for obtaining specific professional roles in the future.

If you’re just starting out, the Foundational Cloud Practitioner certificate can be a good choice. Explore the various learning paths to help identify specific professional goals and the best certificates to reach them.
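Those two factors can be read as a simple decision rule. A rough helper reflecting the experience thresholds mentioned in this article (six months for Foundational, about a year for Associate, two years for Professional); the function name and return strings are our own:

```python
# Map years of relevant AWS experience (and specialty intent) to the
# certification level suggested by the thresholds in this article.
def suggest_level(years_experience, wants_specialty=False):
    if wants_specialty and years_experience >= 2:
        return "Specialty"
    if years_experience >= 2:
        return "Professional"
    if years_experience >= 1:
        return "Associate"
    return "Foundational (Cloud Practitioner)"

print(suggest_level(0.5))                      # Foundational (Cloud Practitioner)
print(suggest_level(1.5))                      # Associate
print(suggest_level(3))                        # Professional
print(suggest_level(4, wants_specialty=True))  # Specialty
```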

Study & practice

There are many options for exam preparation. Useful ways to get ready for AWS certification include:

  • Taking training classes
  • Using study guides
  • Taking practice exams
  • Reading AWS whitepapers

Training classes are available through AWS, and third-party global and local AWS training partners. You can find AWS approved instruction at the AWS Classroom Training Overview web page.

Remote and (sometimes) in-person training offer the best options for learning AWS skills and certifications. They provide instructor-led training and labs, as well as practice exams, books, and exercises. Amazon also offers study guides in both ebook and physical formats.

Studying for AWS exams

Studying for any given exam is likely to require anywhere from 80 to 120 hours.

For candidates working full-time jobs, this can mean months of preparation. Start a study regimen about two or three months before the exam date with a consistent weekly schedule designed to cover all the relevant material in the given timeframe. Certification exams cover a lot of material in an Amazon-specific format so give yourself plenty of time to absorb the material.
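The arithmetic behind that schedule is simple. For example, spreading the 80-120 hour estimate over a 12-week (roughly three-month) runway:

```python
import math

# How many study hours per week does a given runway require?
def weekly_hours(total_hours, weeks):
    return math.ceil(total_hours / weeks)

light = weekly_hours(80, 12)    # light end of the estimate: 7 hours/week
heavy = weekly_hours(120, 12)   # heavy end of the estimate: 10 hours/week
```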

Regarding exam practice, regularly take the certification practice tests provided with your study materials. Even if you’ve been working with the material for years, certification exam questions may contain specific terminology and phrasings that you’re unfamiliar with. Taking the practice tests helps prepare you for the tone and pace of the exam.

For more details on the format, type, delivery method, time limit, costs, and available languages of each exam, check the page of your intended certification by clicking on its badge on the AWS learn about training by role or solution page.

Related reading

AWS Cloud Databases Explained: Innovation in the Multi-Cloud
https://www.bmc.com/blogs/aws-cloud-databases/ (Wed, 10 Nov 2021)

In the technology industry, innovation is all about agility and performance. If a new technology product cannot keep up with the fast-paced, data-driven user ecosystem, chances are that product will soon be replaced by a better alternative…one that’s developed by a startup firm that focuses all its internal resources on product innovation enabled by cloud computing capabilities such as cloud databases and multi-cloud infrastructure.

But what exactly is a cloud database? And how do AWS cloud database systems running in a multi-cloud world deliver innovation? Let’s discuss!

(This tutorial is part of our AWS Guide. Use the right-hand menu to navigate.)

What is a cloud database?

A database is a collection of structured information. In the context of software engineering, the organized collection of stored data is controlled, managed, and modified according to the principles defined by a database management system (DBMS).

The structure of a DBMS is determined by different types of underlying data models, classified as relational and non-relational. Examples include:

  • Network
  • Hierarchical
  • Object-oriented
  • Object-relational
  • Multidimensional
  • Additional models

In the digital age, data grows exponentially, so database storage systems must scale accordingly. On-premises servers are often insufficient to meet the growing needs of big data applications. Managing databases requires a suite of technology solutions governing all aspects of big data, including DBMS, security, analytics, infrastructure, and operations management.

All this encourages organizations to break free from legacy commercial databases that offer limited scalability, management controls and flexibility to integrate with competing products and standardized non-proprietary technologies.

AWS Cloud Database Service

Cloud computing has accelerated the transition from legacy databases to cloud-native alternatives with fully managed, built-in DBMS features. A popular example is the AWS Database solutions suite, which lets users break free from legacy commercial databases.

By definition, a cloud database is any cloud-native database system, including the managed database services accessed from the cloud on a subscription-based pricing model. Cloud database systems allow users to spend time on application-centric work instead of spending time and capital on resource-intensive database management and administrative tasks.

Services such as the AWS database managed services allow organizations to take advantage of commercial-grade database capabilities at high performance, dependability and security as an affordable OpEx.

Cloud database examples

Consider the case of Airbnb, one of the earliest customers of the AWS Relational Database Service (RDS). Airbnb migrated its database workloads to the AWS database platform in 2010, and over the next three years the company scaled its operations significantly:

  • 2 billion rows stored in RDS (that’s 150,000 listings on the Airbnb platform every day)
  • From 24 to over 1,000 EC2 instances
  • From 300 GB to 50 TB of high-quality photos stored in Amazon S3

Why did Airbnb choose the cloud database service model?

All of this growth and scale took only a five-person operations team. (We’re impressed.) The cloud database service empowered Airbnb to concentrate its engineering efforts entirely on application development and innovation. AWS offered the tools and capabilities necessary to meet the fast-changing and growing database management needs of Airbnb, while the company was engaged in reinventing and disrupting the entire travel industry.

Multi-cloud enables innovation

Business organizations face a diverse set of requirements that are never entirely satisfied with a single server infrastructure model. They need the security and performance of an on-premises private cloud system, the scalability and cost optimization of a public cloud, and flexibility of a hybrid cloud datacenter—all delivered within a heterogeneous infrastructure environment.

(Read our cloud primer: public vs private vs hybrid.)

This is exactly what defines a multi-cloud environment. Multi-cloud refers to the combination of two or more cloud infrastructure environments. The services can come from different vendors, in different architectural models. The workloads are distributed selectively across the multi-cloud infrastructure systems and users can optimize the multi-cloud system for cost, scalability, performance, dependability, and technology capabilities.

Here’s how a cloud database system fits in a multi-cloud environment to deliver innovation and performance in today’s era of big data:

  • Cloud databases offer the features and familiarity of a legacy relational database system without the associated limitations such as lock-in, high cost, administrative workload, performance issues, feature limitations and resource-intensive operations.
  • The management, security, flexibility, and scalability of cloud databases is similar to that of a single-tenant application. Cloud database services such as AWS are purpose-built and feature rich, which enables highly customizable database operations within the scope of a third-party vendor managed service offering. This means that users have limited responsibilities on developing, managing, and maintaining a cloud database.
  • The pricing model is that of a shared multi-tenant service. High CapEx is replaced by affordable OpEx.

Choosing a database in the cloud

Sifting through the thousands of cloud options isn’t easy. When you opt for a single vendor, like AWS or Azure, you can take advantage of the guidance and migration assistance the company provides.

As always, consider your company’s needs—your product, your competitive edge, and, perhaps most importantly, what your customers expect. Do your customers need the scalability and flexibility that the cloud offers?

Today, AWS still offers the most databases under a single umbrella. Google and Azure are solid alternatives, rapidly adding databases and serverless options. Still, your company may prefer the cutting-edge innovation of a multi-cloud solution, even with its inherent drawbacks. Review the list of cloud database services, their features, and cost options before migrating your legacy database workloads to the cloud.

Related reading

AWS ECS vs EKS: What’s The Difference? How To Choose?
https://www.bmc.com/blogs/aws-ecs-vs-eks/ (Thu, 28 Oct 2021)

The increased popularity of containerized applications has illustrated the need for proper container orchestration platforms to support applications at scale.

Containers need to be managed throughout their lifecycle, and many products have been created to fulfill this need. These container orchestration products range from open-source solutions such as Kubernetes and Rancher to provider-specific implementations such as:

  • Amazon Elastic Container Service (ECS)
  • Azure Kubernetes Service (AKS)
  • Elastic Kubernetes Service (EKS)

All these different platforms come with their unique advantages and disadvantages. Amazon itself offers an extensive array of container management services and associated tools like the ECS mentioned above, EKS, AWS Fargate, and the newest option, EKS Anywhere.

AWS users need to evaluate these solutions carefully before selecting the right container management platform for their needs—and we’re here to help!

(This tutorial is part of our AWS Guide. Use the right-hand menu to navigate.)

How container management works

A container is a lightweight, stand-alone, portable, and executable package that includes everything required to run an application, from the application itself to its configurations, dependencies, and system libraries. Containerization greatly simplifies the development and deployment of applications. However, containers still need supporting facilities throughout their lifecycle, such as scheduling, networking, scaling, and monitoring.

While containers encapsulate the application itself, a container management or orchestration platform provides these supporting facilities.

ECS and EKS are the primary offerings by AWS that aim to provide this container management functionality. In the following sections, we will see what exactly these two offerings bring to the table.

What is Amazon Elastic Container Service (ECS)?

The Elastic Container Service can be construed as a simplified version of Kubernetes—but that’s misleading. The Elastic Container Service is an AWS-opinionated, fully managed container orchestration service. ECS is built with simplicity in mind without sacrificing management features. It easily integrates with AWS services such as AWS Application/Network load balancers and CloudWatch.

Amazon Elastic Container Service uses its scheduler to determine:

  • Where a container is run
  • The number of copies started
  • How resources are allocated

As shown in the following image, ECS follows a simple, easily understood model. Each application in your stack (API, Thumb, Web) is defined as a service in ECS, which schedules (runs) tasks (instances) on one or more underlying hosts that meet the resource requirements defined for each service.

(Image: the ECS model, with services scheduling tasks onto underlying hosts)

This model is relatively simple to understand and implement for containerized workloads, as it closely resembles a traditional server-based deployment. Migrating an application to ECS thus becomes a simple task: containerize the application, push the image to the Amazon Elastic Container Registry (ECR), and define the service that runs the image in ECS.

Most teams can easily adapt to such a workflow. ECS also provides simple yet functional management and monitoring tools that suit most needs.
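That workflow boils down to two objects: a task definition (the image plus its resource requirements) and a service (how many copies to keep running). A sketch of their shape, matching the keyword arguments of boto3's `ecs.register_task_definition()` and `create_service()`; the names, cluster, and ECR image URI below are hypothetical placeholders:

```python
# The task definition: which image to run and with what resources.
# Family name, cluster, and the ECR image URI are hypothetical.
task_definition = {
    "family": "thumb",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "256",        # 0.25 vCPU: the resource requirements the scheduler uses
    "memory": "512",     # MiB
    "containerDefinitions": [{
        "name": "thumb",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/thumb:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
}

# The service: how many task copies the scheduler keeps running, and where.
service = {
    "cluster": "prod",
    "serviceName": "thumb",
    "taskDefinition": "thumb",
    "desiredCount": 3,   # the "number of copies" decision described above
    "launchType": "FARGATE",
}
```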

What is Elastic Kubernetes Service (EKS)?

The Elastic Kubernetes Service is essentially a fully managed Kubernetes Cluster. The primary difference between ECS and EKS is how they handle services such as networking and support components.

  • ECS relies on AWS-provided services such as ALB and Route 53.
  • EKS handles these mechanisms internally, just as a standard Kubernetes cluster does.

The Elastic Kubernetes Service provides all the features and flexibility of Kubernetes while leveraging the managed nature of the service. However, all these advantages come with the increased complexity of the overall application architecture.

EKS introduces the Kubernetes concept of Pods to deploy and manage containers, while ECS deploys individual containers directly. Pods can contain one or more containers with a shared resource pool, and they provide far more flexibility and fine-grained control over the components within a service. The image below shows that all the services needed to run containers (e.g., proxy, service discovery) live within the Kubernetes cluster.

Let’s assume that our Thumb service is a combination of three separate components.

Kubernetes allows us to run these three components as distinct containers within a single Pod that makes up the Thumb service.

(Image: the Kubernetes model, with multi-container Pods and supporting services running inside the cluster)

Containers within a Pod run collocated with one another. They have easy access to each other and can share resources like storage without relying on complex configurations or external services. All of this makes it possible to build more complex application architectures with EKS.
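Continuing the Thumb example, here is what such a multi-container Pod looks like, expressed as the Python dict you would serialize into a manifest. The three container names and images are hypothetical; the shared `emptyDir` volume illustrates the storage sharing just described:

```python
# A Kubernetes Pod manifest as a Python dict: three hypothetical containers
# sharing one Pod, and thus one network namespace and a shared scratch volume.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "thumb", "labels": {"app": "thumb"}},
    "spec": {
        "volumes": [{"name": "scratch", "emptyDir": {}}],
        "containers": [
            {"name": "resizer", "image": "example/resizer:1.0",
             "volumeMounts": [{"name": "scratch", "mountPath": "/work"}]},
            {"name": "uploader", "image": "example/uploader:1.0",
             "volumeMounts": [{"name": "scratch", "mountPath": "/work"}]},
            {"name": "metrics-sidecar", "image": "example/metrics:1.0"},
        ],
    },
}

# Which containers share the scratch volume?
shared = [c["name"] for c in pod["spec"]["containers"]
          if any(v["name"] == "scratch" for v in c.get("volumeMounts", []))]
```

Only `resizer` and `uploader` mount the shared volume; the sidecar relies on the Pod's shared network namespace instead.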

Additionally, EKS enables users to tap into the wider Kubernetes ecosystem and use add-ons like:

  • The Calico network policy provider
  • The Romana Layer 3 networking solution
  • The flexible CoreDNS DNS service
  • Many other third-party add-ons and integrations

Since EKS is based on Kubernetes, users have the flexibility to move their workloads between different Kubernetes clusters without being vendor-locked into a specific provider or platform.

What is Fargate? How does it affect all this?

Even with managed services, servers still exist, and users can decide which types of compute options to use with ECS or EKS.

AWS Fargate is a serverless, pay-as-you-go compute engine that allows you to focus on building applications without managing servers. This means that AWS will take over the management of the underlying server without requiring users to create a server, install software, and keep it up to date. With AWS Fargate, you only need to create a cluster and add a workload—then, AWS will automatically add pre-configured servers to match your workload requirements.

Fargate is the better solution in most cases. It will not cost more than self-managed servers and usually costs less, since you are charged only for exact usage. Users therefore do not have to worry about unused capacity, which on self-managed servers must be shut down manually to save costs.
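The cost argument can be illustrated with back-of-envelope math. The rates below are placeholder assumptions, not current AWS prices, and `fargate_monthly_cost` / `ec2_monthly_cost` are hypothetical helpers for illustration only:

```python
# Sketch: pay-per-use Fargate vs. an always-on EC2 instance.
# All rates are illustrative placeholders -- check current AWS pricing
# for your region before relying on any numbers.

FARGATE_VCPU_HOUR = 0.04048    # assumed $/vCPU-hour
FARGATE_GB_HOUR = 0.004445     # assumed $/GB-hour
EC2_INSTANCE_HOUR = 0.0416     # assumed $/hour for a small always-on instance

def fargate_monthly_cost(vcpu: float, memory_gb: float, hours_used: float) -> float:
    """Fargate charges only for the hours a task actually runs."""
    return (vcpu * FARGATE_VCPU_HOUR + memory_gb * FARGATE_GB_HOUR) * hours_used

def ec2_monthly_cost(hours_in_month: float = 730) -> float:
    """A self-managed instance bills for every hour unless you shut it down."""
    return EC2_INSTANCE_HOUR * hours_in_month

# A task that runs ~8 hours a day (~240 h/month) on 0.5 vCPU / 1 GB:
fargate = fargate_monthly_cost(0.5, 1.0, 240)
ec2 = ec2_monthly_cost()
print(f"Fargate: ${fargate:.2f}/mo, always-on EC2: ${ec2:.2f}/mo")
```

With these assumed rates, the part-time Fargate workload comes out far cheaper than the always-on instance; the gap narrows, and can reverse, as utilization approaches 24/7.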

However, here are some notable exceptions where Fargate falls short:

  • Fargate cannot be used in highly regulated environments with strict security and compliance requirements. The reason is that users lose access to the underlying servers, which they might need control over to meet those stringent regulatory requirements. Additionally, Fargate does not support “dedicated tenancy” hosting requirements.
  • ECS and Fargate only support AWS VPC networking mode, which may not be suitable when deep control over the networking layer is required.
  • Fargate automatically allocates resources depending on the workload with limited control over the exact mechanism. This automatic resource allocation can lead to unexpected cost increases, especially in R&D environments where many workloads are tested. Therefore, self-managed servers with capacity limitations will be a better solution for these kinds of scenarios.

What about EKS Anywhere?

EKS Anywhere extends the functionality of EKS by allowing users to create and operate Kubernetes clusters on customer-managed infrastructure with default configurations. It provides the necessary tools to manage the cluster using the EKS console.

EKS Anywhere is built upon the Amazon EKS Distro and provides all the necessary, up-to-date software to run Kubernetes on your own infrastructure. It offers a far more reliable Kubernetes platform than a fully self-managed cluster.

EKS Anywhere is also an excellent option for powering a hybrid cloud architecture while maintaining operational consistency between the cloud and on-premises environments. It is likewise an ideal solution for keeping data on-premises where data sovereignty is a primary concern, while leveraging AWS to manage application architecture and delivery.

Choosing ECS vs EKS: which is right for you?

EKS is undoubtedly the more powerful platform. However, it does not mean EKS is the de facto choice for any workload. ECS is still suitable for many workloads with its simplicity and feature set.

When to use ECS

  • ECS is much simpler to get started with and has a lower learning curve. Small organizations or teams with limited resources will find ECS a better option for managing container workloads than taking on the overhead associated with Kubernetes.
  • Tighter AWS integrations allow users to use already familiar resources like ALB, NLB, Route 53, etc., to manage the application architectures. It helps them to get the application up and running quickly.
  • ECS can be a stepping stone to Kubernetes. Rather than adopting EKS at once, users can use ECS to implement a containerization strategy and move workloads into a managed service with less up-front investment.

When to use EKS

On the other hand, ECS can sometimes be too simple with limited configuration options. This is where EKS shines. It offers far more features and integrations to build and manage workloads at any scale.

  • Many workloads may not require Pods. However, Pods offer unparalleled control over placement and resource sharing, which can be invaluable for most service-based architectures.
  • EKS offers far more flexibility in managing the underlying resources, with the ability to run on EC2, Fargate, and even on-premises via EKS Anywhere.
  • EKS provides the ability to use any public and private container repositories.
  • Monitoring and management tools of ECS are limited to the ones provided by AWS. While they are sufficient for most use cases, EKS allows greater management and monitoring capabilities both via built-in Kubernetes tools and readily available external integrations.

All in all, the choice of the platform comes down to specific user needs. Both options have their pros and cons, and any of them can be the right choice depending on the workload.

To sum up, it’s better to go with EKS if you are familiar with Kubernetes and want the flexibility and features it provides. On the other hand, try ECS first if you are just starting out with containers or want a simpler solution.

Related reading

]]>
AWS Global Cloud Infrastructure: Regions, Zones & More https://www.bmc.com/blogs/aws-regions-availability-zones/ Thu, 09 Sep 2021 00:16:13 +0000 https://www.bmc.com/blogs/?p=12596 Amazon Web Services (AWS) pioneered the modern-day cloud computing IT service delivery model. Amazon’s cloud service delivery success relies on the technology’s accessibility and availability for end-users. The AWS Cloud has grown rapidly since its inception, expanding its services across worldwide geographic locations. Its vast global cloud infrastructure footprint sets it apart from leading competitors […]]]>

Amazon Web Services (AWS) pioneered the modern-day cloud computing IT service delivery model. Amazon’s cloud service delivery success relies on the technology’s accessibility and availability for end-users.

The AWS Cloud has grown rapidly since its inception, expanding its services across worldwide geographic locations. Its vast global cloud infrastructure footprint sets it apart from leading competitors like Microsoft Azure and Google Cloud.

This primer describes how the AWS global cloud infrastructure works and what that means for AWS customers. We’ll look at the following items:

(This tutorial is part of our AWS Guide. Use the right-hand menu to navigate.)

What is the AWS Global Cloud Infrastructure?

The AWS Global Cloud infrastructure is the backbone network of global data centers and other platforms that Amazon uses to deliver application workloads and AWS services.

The AWS Global Cloud Infrastructure is the backbone network for delivering AWS workloads and services (Source)

For cloud application and service delivery, customers provision and connect their end users and organizational environments to the following AWS global infrastructure components:

  • AWS Regions
  • AWS Availability Zones
  • AWS Local Zones
  • AWS Wavelength Zones

AWS Regions

AWS Regions are the heart of the AWS Global Cloud: physical locations around the world where Amazon clusters data centers for application and service delivery in AWS Availability Zones. Regions also provide extensions for other delivery options, such as AWS Local Zones.

Each AWS Region may offer different service quality in terms of latency, solutions portfolio, and cost, based on its geographic location and distance from customer sites.

(Explore regions from other cloud providers in Availability Regions & Zones for AWS, Azure & GCP.)

AWS Availability Zones

An Availability Zone (AZ) is a grouping of one or more discrete data centers that provide applications and services in an AWS region.

Each AZ contains redundant connectivity, power, and networking capabilities, and individual AZs are physically separated (isolated) from each other by a meaningful distance. All AZs in an AWS Region are connected through low latency and high throughput networking channels.

Because of their connectivity and redundancy, AZs provide customer application and database operating environments that are more scalable and fault tolerant. Because regional AZs are physically isolated from each other, applications can be partitioned across multiple AZs for high availability.
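The idea of partitioning an application across multiple AZs can be sketched as simple round-robin placement. The AZ names are examples, and `spread_replicas` is a hypothetical helper for illustration, not an AWS API:

```python
# Sketch: spread application replicas across AZs round-robin, so losing any
# single AZ removes only a fraction of total capacity.
from itertools import cycle

def spread_replicas(replicas: int, azs: list[str]) -> dict[str, int]:
    """Assign replicas to AZs round-robin and return a per-AZ count."""
    placement = {az: 0 for az in azs}
    for _, az in zip(range(replicas), cycle(azs)):
        placement[az] += 1
    return placement

# 7 replicas over 3 AZs -> counts of 3, 2, and 2; an AZ outage leaves
# at least 4 replicas running in the remaining zones.
print(spread_replicas(7, ["us-east-1a", "us-east-1b", "us-east-1c"]))
```

The same even-spread principle underlies what services like Auto Scaling groups do automatically when you supply subnets in multiple AZs.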

AWS Local Zones

AWS Local Zones are AWS Region extensions that place compute, storage, databases, and other AWS services in closer proximity to large populations, IT centers, and industries.

AWS Local Zones are provisioned to run high-speed applications—such as media, entertainment, real-time gaming, live video streaming, and machine learning—that require single-digit millisecond latency to service users in specific geographic locations.

AWS Wavelength Zones

AWS Wavelength Zones extend AWS infrastructure into 5G telecommunications networks. Wavelength Zones embed AWS compute and storage capabilities within communications service providers’ (CSPs) data centers at the edge of their 5G networks.

5G devices can reach apps running in Wavelength Zones without ever leaving the 5G network, allowing them to take advantage of 5G bandwidth and latency.

AWS Outposts

AWS Outposts is a fully managed service that allows customers to build and deploy AWS compute and storage capacity at their own sites. Outposts extends AWS infrastructure, services, tools, and APIs to customer locations, enabling a hybrid delivery experience.

AWS Outposts can be deployed in almost any customer-provided space, including:

  • Data centers
  • Co-location facilities
  • Customer on-premises facilities

AWS cloud infrastructure delivery environments

Customers can use AWS global cloud infrastructure zones and outposts to deliver workloads and services to users inside specific geographical locations (AWS Regions), through the following infrastructure delivery environments:

  • Cloud data centers: Scalable, clustered, and redundant Amazon provided data centers (AWS Availability Zones).
  • Cloud single-digit millisecond latency environments: Amazon ultralow latency data centers that are significantly closer to users in large population centers, IT centers, and industries (AWS Local Zones).
  • 5G environments: 5G telecommunication network delivery for single-digit millisecond latency access for mobile devices and end users (AWS Wavelength Zones).
  • Hybrid delivery: Customer hosted sites where workloads and services can be delivered with the same infrastructure, services, tools, and APIs used in AWS facilities (AWS Outposts).

Amazon’s Global Cloud Infrastructure Delivery Environments

How big is the AWS cloud?

As of this writing, here’s how big the AWS cloud is:

  • The AWS cloud spans 81 Availability Zones within 25 worldwide geographic regions. Announced plans include 21 more Availability Zones and seven more regions in Europe, Asia, and Australia.
  • There are currently eight AWS Local Zones for ultralow latency applications, with nine more Local Zones on the way.
  • There are 17 Wavelength Zones available for ultralow latency and 5G processing.

Up-to-date information on the location and count of AWS Regions, Availability Zones, Local Zones, and Wavelength Zones can be found on the official AWS Global Infrastructure page.

How to choose AWS global infrastructure components

Choosing your AWS global cloud infrastructure components may come down to a tradeoff across a range of factors, including:

Latency & proximity

The distance between the cloud deployments (AWS Regions and Zones) and end-users is a key factor that determines the latency and network performance of the cloud service.

  • For ultralow latency workloads, you may need to provision AWS Local Zones to meet user needs.
  • For 5G access, there may be no other choice than to provision an AWS Wavelength Zone.

Performance is further affected when the cloud solution is integrated with on-premises legacy technologies and apps as part of a hybrid cloud strategy (AWS Outposts). In a hybrid strategy, performance is highly influenced by local factors, such as on-premises network and telecommunications speed, that AWS capabilities may not be able to compensate for.

Selecting the AWS Region closest to customer or end-user proximity helps ensure the best user experience. The closest region is also usually the least expensive option when compared to choosing an AWS Region in a distant geographic location.

Amazon also offers services such as AWS Route 53 to automatically direct global network traffic through the most optimal channels to maximize availability and performance.

Cost

Pricing varies across AWS Regions because of differing CapEx, OpEx, and regulations in different geographic locations. Organizations may need to identify the optimal tradeoff between cost and other factors—including service catalog items, latency, network performance, and regulatory compliance—when configuring their AWS infrastructure.

AWS offers a cost calculator to estimate the expected costs of AWS services in different regions. The more complex your AWS environment is, the harder it will be to accurately estimate your AWS global cloud infrastructure costs. The cost calculator is a good place to start for estimating costs on expected AWS regions used.

Service catalog availability

Amazon offers a vast portfolio of cloud-based solutions spanning AWS Regions. While the most popular AWS services are available across all AWS Regions, not every region offers all services. Consult the AWS Regional Services page to determine AWS service availability in every Region.

AWS Region choices should be based on current and future workload needs. Workload requirement changes may require additional AWS service investments in different regions, particularly when servicing many worldwide locations.

Regulatory compliance & security

Regulatory compliance and security can also affect which AWS regions and zones will host your workloads. Consider the following factors when selecting which AWS global cloud infrastructure components to deploy for applications and services.

  • Regulatory compliance. Specific industry and regulatory specifications—such as the EU’s General Data Protection Regulation (GDPR) and state, provincial, or local data handling regulations—may require sensitive end-user data to be processed in specific geographic locations.
  • Data mobility. Cloud computing makes it easy for organizations to transfer, store, and process information in data centers at distant locations, which may violate compliance regulations and lead to costly lawsuits and damage to brand reputation.
  • Security requirements. Organizations may also be obliged to distribute workloads across multiple geographically disparate cloud data centers to ensure high availability and the security of sensitive business information and IT-enabled services.

Service level agreements

To meet the desired standards of IT service availability and performance, AWS provides many Service Level Agreements (SLAs) covering service uptime and credits for service failure.

Consult the AWS Service Level Agreement page for more information on the SLAs Amazon offers for delivered services.

The green factor

Many organizations are pushing to achieve carbon neutrality and adopt environmentally friendly business practices. Amazon is no different: using renewable energy and contributing to various renewable energy and data center projects, AWS aims to operate a net carbon-neutral global infrastructure.

The location of AWS data centers is therefore one criterion to consider in your annual sustainability report and could serve as a competitive differentiator in your sustainability efforts.

Considering your cloud strategy

AWS offers a range of global cloud infrastructure offerings that can fit well in your IT strategy, whether you’re focusing on service uptime, product portfolio, availability, compliance, or sustainability factors.

CIOs need to make well-informed decisions pertaining to AWS global cloud infrastructure with considerations for both the near-term and long-term implications of their IT investments.

(See how to build your multi-cloud strategy.)

Related reading

]]>
AWS Serverless Applications: The Beginner’s Guide https://www.bmc.com/blogs/aws-serverless-applications/ Fri, 23 Jul 2021 00:00:22 +0000 http://www.bmc.com/blogs/?p=12265 Serverless applications are changing the way companies do business by enabling them to deploy much faster and more frequently—a competitive advantage. Amazon’s AWS Serverless Application Model (AWS SAM) has been a game changer in this space, making it easy for developers to create, access and deploy applications, thanks to simplified templates and code samples. (This […]]]>

Serverless applications are changing the way companies do business by enabling them to deploy much faster and more frequently—a competitive advantage.

Amazon’s AWS Serverless Application Model (AWS SAM) has been a game changer in this space, making it easy for developers to create, access and deploy applications, thanks to simplified templates and code samples.

(This tutorial is part of our AWS Guide. Use the right-hand menu to navigate.)

What you’ll learn in this guide

This guide is a primer for developers who are interested in learning how to program in AWS SAM. We’ll discuss the following key AWS SAM topics and features:

Key terminology

Here are three key terms you need to understand to start programming in AWS SAM:

  • Function-as-a-Service (FaaS). Think of FaaS as a ready-to-implement framework that can be easily tailored to the needs of an enterprise business. FaaS allows customers to develop and run code without the need to provision and manage infrastructure.
  • Compute service. An on-demand FaaS that specializes in serverless computing.
  • Serverless application. An application, programmed in the cloud, that requires no server maintenance.

(Learn more about serverless architecture.)

Who uses AWS SAM?

AWS SAM is an important resource for any developer who:

  • Is ready to use serverless computing
  • Wants to learn more about serverless architecture

The resources available within AWS SAM make it easy for any programmer to get their feet wet with low-cost, efficient serverless computing services provided by Amazon.

What is the AWS Serverless Application Model?

The AWS Serverless Application Model (SAM) is designed to make the creation, deployment, and execution of serverless applications as simple as possible. This can be done using AWS SAM templates with just a few choice code snippets—this way, practically anyone can create a serverless app.

What AWS resources are used for AWS SAM?

Let’s look at the resources that are used in the AWS Serverless Application Model.

AWS SAM CLI, YAML & CloudFormation

AWS SAM applications can be created using the AWS SAM CLI command line tool. SAM CLI lets you build, test, and deploy applications using either:

  • SAM CLI templates
  • The AWS Cloud Development Kit (CDK)

SAM CLI can also be used for application deployment.

You can define and model SAM application templates using YAML.

AWS SAM templates are an extension of AWS CloudFormation templates. At deployment, SAM transforms the SAM syntax into CloudFormation syntax. CloudFormation sets up and configures the server infrastructure that your serverless applications will run under, allowing you to concentrate on your application rather than your infrastructure.
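The template-to-CloudFormation workflow described above can be sketched with a minimal SAM template. This is a hedged illustration: the resource name, handler, runtime, and code path are assumed placeholders.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31    # tells CloudFormation to expand SAM syntax
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function      # SAM shorthand for a Lambda function plus related resources
    Properties:
      Handler: app.lambda_handler        # file.function entry point (illustrative)
      Runtime: python3.9
      CodeUri: hello/                    # local path to the function code (illustrative)
      Events:
        HelloApi:
          Type: Api                      # expands into an API Gateway endpoint
          Properties:
            Path: /hello
            Method: get
```

At deployment, the `Transform` line is what triggers SAM-to-CloudFormation expansion: the single `AWS::Serverless::Function` resource becomes the Lambda function, IAM role, and API Gateway resources in the generated CloudFormation stack.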

AWS Lambda

The AWS Lambda FaaS platform exists so that developers can run code without administering servers. Using the Lambda compute service, developers can upload deployable code. Then, Amazon handles the administration, charging only for the accumulated runtime used.

The Lambda console makes it easy to create applications with just a few clicks. Lambda supports many languages through Lambda runtime environments, including C#, Go, Java, .Net, Node.js, and Python.

Developers can create single, simple, and scalable Lambda functions that can be invoked in several ways, including:

  • Automatic invocation using triggers in Lambda or another AWS service. Using triggers, Lambda functions can be invoked in response to lifecycle events, external events, or from a schedule.
  • Event source mapping from another AWS resource. Event source mappings read items from an Amazon Kinesis or Amazon DynamoDB stream; an Amazon SQS queue; or another AWS resource and send those items to a Lambda function.
  • Other AWS services that can directly invoke Lambda functions.

Of note, functions created on Lambda’s architecture are scalable for optimum performance.
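To make the function model concrete, here is a minimal Python Lambda handler sketch. The event shape and the greeting logic are illustrative assumptions, not a prescribed AWS pattern:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda entry point: Lambda passes in the triggering event and a
    runtime context object; the return value goes back to synchronous callers."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the handler is just a function you can call directly
# (the context object is unused in this sketch):
print(lambda_handler({"name": "AWS"}, None))
```

This same `module.function` name is what you would reference as the handler when configuring the function in the Lambda console or a SAM template.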

(Compare AWS Lambda with AWS ECS.)

Amazon API Gateway

Using the Amazon API Gateway service, developers can create, deploy, secure, and monitor the frontend of their serverless application. API Gateway extensions can be used in your AWS SAM templates.

While AWS Lambda is designed to power the backend of an application, Amazon API Gateway takes the complexity out of writing and deploying the code that serves as the front-door entry point to your application, including authorization. Gateway APIs work with AWS Lambda backends, as well as Amazon EC2 and other web application services.

Amazon keeps its API Gateway service low-cost and affordable for developers by only charging for:

  • Calls made to APIs
  • Data transfers out of them

AWS Serverless Application Repository

The AWS Serverless Application Repository is a searchable ecosystem that allows developers to find serverless applications and their components for deployment. It helps simplify serverless application development by providing ready-to-use apps.

AWS Serverless Application Repository Basic Steps

Here are the basic steps:

  1. Search and discover. A developer can search the repository for code snippets, functions, serverless applications, and their components.
  2. Integrate with the AWS Lambda console. Repository components are already available to developers.
  3. Configure. Before deploying, developers can set environment variables, parameter values, and more. For example, you can go the plug-and-play route by adding repository components to a larger application framework, or you can deconstruct and tinker with the code for further customization. If needed, pull requests can also be submitted to repository authors.
  4. Deploy. Deployed applications can be managed from the AWS Management Console. A developer can follow prompts to name, describe and upload their serverless applications and components to the ecosystem where they can be shared internally and with other developers across the ecosystem. This feature makes AWS SAM a truly open-source environment.

Benefits of programming in AWS SAM

You can build serverless applications for almost any type of backend service without having to worry about scalability and managing servers. Here are some of the many benefits that building serverless applications in AWS SAM has to offer.

Low cost & efficient

AWS SAM is low-cost and efficient for developers because of its pay-as-you-go structure. The platform only charges developers for usage, meaning you never pay for more of a service than you use.

Simplified processes

The overarching goal of AWS SAM is ease-of-use. By design, it’s focused on simplifying application development so that programmers have more freedom to create in the open-source ecosystem.

Quick, scalable deployment

AWS SAM makes deployment quick and simple by allowing developers to upload code to AWS and letting Amazon handle the rest. They also provide robust testing environments, so developers don’t miss a beat. All of this occurs on a platform that is easy to scale, allowing apps to grow and change to meet business objectives.

Convenient & accessible

Undoubtedly, AWS SAM offers a convenient solution for developing in the cloud. Its serverless nature also means that it is a universally accessible platform. The wide reach of the internet makes it easy to execute code on-demand from anywhere.

Decreased time to market

Overall, choosing a serverless application platform saves time and money that would otherwise be spent managing and operating servers or runtimes, whether on-premises or in the cloud. Because developers can create apps in a fraction of the time (think hours—not weeks or months), they are able to focus more of their attention on accelerating innovation in today’s competitive digital economy.

AWS SAM for Serverless Applications

It’s clear that AWS SAM is a highly efficient, highly scalable, low-cost, and convenient solution for cloud programming.

But for those who haven’t yet made the switch, there are some concerns that arise from developing using AWS SAM, including:

  • A general lack of control over the ecosystem that developers are coding in.
  • Vendor lock-in that may occur when you sign up for any FaaS.
  • Session timeouts that require developers to rewrite code, making it more complex instead of simplifying the process.
  • AWS Lambda timeouts: Lambda functions are limited by a timeout value that can be configured from 3 seconds to 900 seconds (15 minutes). Lambda automatically terminates functions running longer than its time-out value.

The first two points are common drawbacks of any outsourcing strategy. The latter two concerns, however, might mean you’ll implement a few new workarounds. Regardless, the many benefits of programming in AWS SAM cannot be overlooked.

Related reading

]]>
AWS ECS vs AWS Lambda: What’s The Difference & How To Choose https://www.bmc.com/blogs/aws-ecs-vs-aws-lambda/ Thu, 06 May 2021 07:59:38 +0000 https://www.bmc.com/blogs/?p=49562 There are several options for code and application deployment in the AWS ecosystem. Two of the most popular options for AWS application deployment are: Elastic Container Service (ECS) AWS Lambda Today, let’s look at ECS and Lambda, explaining what each option does and discuss some critical considerations for deploying one option over another. (This article […]]]>

There are several options for code and application deployment in the AWS ecosystem. Two of the most popular options for AWS application deployment are:

  • Elastic Container Service (ECS)
  • AWS Lambda

Today, let’s look at ECS and Lambda, explaining what each option does and discuss some critical considerations for deploying one option over another.

(This article is part of our AWS Guide. Use the right-hand menu to navigate.)

AWS ECS

Elastic Container Service (ECS): Container orchestration

ECS is used to manage and deploy Docker containers at scale (container orchestration).

ECS uses the following AWS components to store and run ECS applications:

  • AWS ECR
  • ECS cluster
  • Task definitions
  • ECS service scheduler
  • Other AWS services

Let’s take a look.

The AWS Elastic Container Repository (ECR)

ECR manages, stores, compresses, secures, and controls access to container images. Alternatively, you can use Docker Hub for storing and accessing container images.

An AWS ECS cluster for running containerized applications

A cluster is a logical grouping of tasks and containers that use several different launch types (infrastructure) to run. Clusters can contain:

  • Elastic Cloud Compute (EC2) instances
  • Amazon-managed instances using AWS Fargate
  • External servers outside the Amazon ecosystem using Amazon ECS Anywhere

When adding EC2 instances to an ECS cluster, you will need to manage the EC2 servers in that cluster, including locking down security and access, patching operating systems, and performing maintenance.

AWS Fargate is a serverless computing environment that works with ECS. When adding Fargate tasks to an ECS cluster, AWS provisions and manages the EC2 servers your containers run in, relieving you from administering a separate EC2 infrastructure.

Amazon ECS Anywhere is an ECS extension that allows you to run containerized applications on servers outside the AWS ecosystem. ECS Anywhere loads an agent onto customer managed operating systems, making them managed instances and allowing those instances to register into an ECS cluster.

Task definitions

A task definition describes one to ten containers (and their parameters) that together define an ECS application. Task definitions are stored as JSON files.
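A minimal task definition might look like the following sketch. The family name, container name, image URI (including the account ID), and port mapping are all illustrative placeholders:

```json
{
  "family": "web-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```

Here the `image` field points at a container stored in ECR (or Docker Hub), and `requiresCompatibilities` declares which launch type, such as Fargate or EC2, the task can run on.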

The AWS ECS service scheduler

This service schedules tasks (instantiations of the task definitions) to run inside your ECS cluster infrastructure. It executes tasks on a cron-like schedule or your own custom schedule.

The task scheduler also:

Other services

ECS is also integrated with other AWS services.

How ECS works

Putting it together, the ECS service scheduler launches tasks into ECS clusters using task definitions that reference and use Docker containers stored in the Elastic Container Repository or Docker Hub.

Using ECS is notable in that you can define your own EC2 server cluster, let AWS manage and deploy EC2 servers (AWS Fargate), use external servers (ECS Anywhere), or a combination of the three. Clusters and ECS tasks run inside specified AWS Virtual Private Clouds (VPCs) and subnets.

AWS Lambda: Serverless function deployment

AWS Lambda excels at running smaller on-demand applications that are triggered by new events and information.

Lambda is an AWS service that runs code without the need to provision and manage infrastructure. There is no EC2 provisioning or clusters to define. Lambda totally controls the EC2 instances your code runs on, and it auto-scales as needed. Lambda operates at the function and code level in the following way:

You create the code you want to run

Lambda supports many languages through Lambda runtimes, including Node.js, Python, Go, Java, Ruby, .NET, and C#.

You package and upload your code

Package your code in a zip or jar deployment package, and upload it using the Lambda console or AWS CLI.

You designate the file and function name that serves as your entry point. Once uploaded, Lambda provides an Amazon Resource Name (ARN), which serves as an ID for the function you just created.

Your Lambda function is invoked and runs on EC2 instances

Lambda functions can be invoked several ways, including:

  • Directly from the Lambda console, the Lambda API, the AWS software development kit (AWS SDK), AWS CLI, and AWS toolkits.
  • From triggers in a Lambda resource or another resource.
  • From a Lambda event source mapping that invokes a Lambda function. Lambda reads events from several AWS services such as Amazon DynamoDB, Amazon Simple Queue Service (SQS), etc. Event sources are created in AWS CLI or AWS SDK.
  • From AWS services that invoke Lambda directly, including AWS ELB, AWS Simple Notification Service (SNS), Amazon API Gateway, and several others.

Functions can be invoked either:

  • Synchronously, where Lambda waits for a response after running the function and returns the response code with additional data to the client, allowing for retries and error responses
  • Asynchronously, where the client hands the event off to Lambda and doesn’t wait for response code and data.

After the function’s initial call, Lambda can scale by an additional 500 instances per minute when traffic spikes, until there are enough instances to handle the traffic. Scaling continues until a Lambda concurrency limit is reached or requests come in faster than your function can scale. You can also contract for provisioned concurrency to prevent Lambda scaling waits. As traffic dies down, Lambda stops unused instances.

How Lambda works

Lambda provisions serverless application deployment, including:

  • EC2 instance setup
  • Load balancing
  • Target groups
  • Auto-scaling

Your main responsibility is setting up your functions and invocation configurations. Lambda does the rest. Lambda functions are easy to set up and worry-free as far as infrastructure goes.

AWS Lambda

ECS vs Lambda: which is right for you?

Now that we’ve looked at what each application deployment option does and how it works, here are some considerations in choosing one option over the other.

Consider Lambda over ECS when…

  • You have a smaller application that runs on-demand in 15 minutes or less. Lambda functions are limited by a timeout value that can be configured from 3 seconds to 900 seconds (15 minutes). Lambda automatically terminates functions running longer than its time-out value.
  • You don’t care or need advanced EC2 instance configuration. Lambda manages, provisions, and secures EC2 instances for you, along with providing target groups, load balancing, and auto-scaling. It eliminates the complexity of managing EC2 instances.
  • You want to pay only for capacity used. Lambda charges are metered by milliseconds used and the number of times your code is triggered. Costs are correlated to usage. Lambda also has a free usage tier.
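
The 15-minute timeout above can be sketched locally. This example mocks the Lambda context object (the real one is supplied by the runtime) so a handler can check its remaining time budget via `get_remaining_time_in_millis()`, a method the actual Lambda context does provide; everything else here is an illustrative local harness:

```python
import time

# Local sketch of how a handler can watch Lambda's timeout budget.
# In real Lambda, the context object is supplied by the runtime; here we
# mock it so the example runs anywhere. The 900,000 ms below mirrors the
# maximum configurable timeout of 900 seconds (15 minutes).
class MockContext:
    def __init__(self, timeout_ms):
        self._deadline = time.monotonic() * 1000 + timeout_ms

    def get_remaining_time_in_millis(self):
        return max(0, int(self._deadline - time.monotonic() * 1000))

def handler(event, context):
    processed = []
    for item in event["items"]:
        # Stop early rather than let Lambda terminate the function mid-item.
        if context.get_remaining_time_in_millis() < 1000:
            break
        processed.append(item * 2)
    return {"processed": processed}

result = handler({"items": [1, 2, 3]}, MockContext(timeout_ms=900_000))
print(result)  # {'processed': [2, 4, 6]}
```

Checking the remaining budget inside the handler lets long batches exit gracefully instead of being killed at the hard timeout.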

Consider ECS over Lambda when… 

  • You are running Docker containers. While Lambda now has Container Image Support, ECS is a better choice for a Docker ecosystem, especially if you are already creating Docker containers.
  • You want the flexibility to run in a managed EC2 environment or in a serverless environment. You can provision your own EC2 instances, or Amazon can provision them for you.
  • You have tasks or batch jobs running longer than 15 minutes. Choose ECS when dealing with longer-running jobs, as it avoids the Lambda timeout limit above.
  • You need to schedule jobs. ECS provides a service scheduler for long-running tasks and applications, along with the ability to run tasks manually.

Generally, ECS is best used for running a Docker environment on AWS using clustered instances. Lambda is best used for quickly deploying small, on-demand applications in a serverless environment.

Related reading

]]>
What Is AWS Organizations? How It Works & Best Practices https://www.bmc.com/blogs/aws-organizations/ Mon, 19 Apr 2021 07:54:02 +0000 https://www.bmc.com/blogs/?p=49348 AWS Organizations is an AWS account management service that lets users centrally manage and control groups of AWS accounts, and the workflows and policies that apply to them. The management process can be done manually or programmatically at the API level. Users can: Integrate multiple AWS services with multiple unique AWS accounts. Manage the user […]]]>

AWS Organizations is an AWS account management service that lets users centrally manage and control groups of AWS accounts, and the workflows and policies that apply to them.

The management process can be done manually or programmatically at the API level. Users can:

  • Integrate multiple AWS services with multiple unique AWS accounts.
  • Manage the user environments based on organizational, legal, or project-based policies.

Accounts can also share resources, security mechanisms, audit requirements, configurations, and policies across the organization.

In this article, we will give an overview of the AWS Organizations service and how you can use it as a best practice for your AWS user environments.

(This article is part of our AWS Guide. Use the right-hand menu to navigate.)

User accounts in AWS Organizations

Originally, Amazon Web Services used a one-account-per-user model: each person had a single AWS account and subscribed to multiple AWS services as necessary.

However, a single account per user limits how organizations can manage services, security permissions, audits, policies, and billing across the multiple business divisions and projects assigned to that account.

The concept of an AWS account has evolved significantly since the inception of the AWS cloud service, and it continues to grow, particularly in the areas of:

  • Solutions
  • Resource options
  • Billing
  • Configuration features

Now, we can think of AWS accounts as containers for these capabilities: multiple accounts, each governed and managed within the same centralized environment.

(Explore other AWS management tools.)

Benefits of AWS Organizations

Here’s why it makes sense to split the same categories of AWS resources across multiple unique, manageable account environments:

  • Easily categorize and discover services. Find and assign AWS applications programmatically using APIs, command line interface (CLI), and GUIs.
  • Apply logical boundaries to all aspects of policies. Different projects within the organization may be exposed to different security and compliance requirements. For example, by segregating AWS resources within multiple AWS accounts across those different projects, you can easily enforce unique identity policies in compliance with the applicable regulatory frameworks.
  • Contain damage within logically isolated user accounts. If a specific user account is compromised, only the resources assigned to that AWS Organizations user account will be exposed to the higher risk.
  • Easily manage billing and resources on a project or task basis. Employees can switch between the AWS Organizations accounts assigned to them and utilize resources optimally as required.


Key features of AWS Organizations

AWS Organizations is a service that enables organizations to define, manage, and govern groups of AWS user accounts; centrally provision services and policies; and maintain a single bill for the organization and its underlying user accounts.

To realize these capabilities, AWS Organizations lets you:

  • Manage multiple AWS Accounts in separate environments. Establish boundaries that define the policies, services, and resources used across grouped organizational units (OUs).
  • Control access & permissions. Enforce policies for identity and access management across users based on teams, business divisions, and projects.
  • Share resources across accounts. Once an AWS service is created and configured, you can share that service across multiple users, both within and beyond the same AWS organization.
  • Audit for compliance. Maintain an extensive audit trail of all accounts for auditing purposes.
  • Manage cost. A single consolidated billing process allows users to track, manage, and optimize usage across all accounts and user environments.
  • Use for free. Activating this feature is free. The accounts are only charged for the AWS services and resources they consume, as usual.

Drawbacks of AWS Organizations

While AWS Organizations makes it easy to manage multiple user accounts, it can also make the entire system more complicated and possibly introduce security lapses.

Here are a few security best practices, suggested by AWS, to make sure AWS Organizations works best for managing multiple user account environments:

  • Security. Use an email address managed by your organization for the AWS Organizations root user. Use complex passwords, multi-factor authentication, and a phone number for account recovery.
  • Monitoring. Apply the necessary controls to monitor the usage of root user accounts. Root user access should be rare and should trigger flags immediately in the event of unauthorized access.
  • Restrict privileges. Attach a Service Control Policy (SCP) to the organization’s root so the security policy extends to all accounts in the AWS organization.
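
As an illustration, here is a hypothetical SCP that denies member accounts the ability to leave the organization, a common guardrail. The policy content is an example written for this sketch, not an AWS-provided document; only the overall structure (Version, Statement, Effect, Action, Resource) follows the standard IAM policy grammar:

```python
import json

# A hypothetical Service Control Policy (SCP) document. Attached at the
# organization's root, it denies member accounts the ability to leave
# the organization. The structure follows standard IAM policy grammar.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLeavingOrganization",
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }
    ],
}

# Serialize for use with the AWS CLI or the Organizations API.
policy_json = json.dumps(scp, indent=2)
print(policy_json)
```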

By understanding how AWS Organizations works, you can limit any concerns and maximize its advantages for your AWS usage.

Related reading

]]>
Amazon’s Elastic File System (EFS) Explained https://www.bmc.com/blogs/aws-efs-elastic-file-system/ Mon, 05 Apr 2021 00:00:13 +0000 https://www.bmc.com/blogs/?p=49262 Amazon’s Elastic File System (EFS) is a scalable storage solution that can be used for general purpose workloads. An EFS can be attached to multiple Amazon Web Services (AWS) compute instances and on-premises servers, providing a common resource for applications and data storage in many different environments. Let’s look at how an EFS functions and […]]]>

Amazon’s Elastic File System (EFS) is a scalable storage solution that can be used for general purpose workloads. An EFS can be attached to multiple Amazon Web Services (AWS) compute instances and on-premises servers, providing a common resource for applications and data storage in many different environments.

Let’s look at how an EFS functions and the benefits it provides organizational computing environments, including:

  • What is an EFS?
  • Advantages to using an EFS
  • Disadvantages to using an EFS
  • Use cases for EFS volumes
  • Creating an EFS
  • Mounting an EFS on an on-premises server
  • EFS pricing
  • Creating & using an EFS

(This article is part of our AWS Guide. Use the right-hand menu to navigate.)

What is an EFS?

An EFS is a Network File System (NFS) that organizes data in a logical file hierarchy. Data is stored in a path-based system, where data files are organized in folders and sub-folders.

Mapped file server drives and detachable USB drives both use hierarchical file systems, so the concept should be familiar to anyone who has ever dealt with personal computers and servers.

EFSs are ideal candidates for storing:

  • Organizational data
  • File server data
  • Individual user data
  • Application data

Amazon states that a single EFS can be simultaneously connected to thousands of Elastic Compute Cloud (EC2) instances or on-premises resources, allowing you to share EFS data with as many resources as needed.

Access to shared EFS folders and data is provided through native operating system interfaces.

Advantages to using an EFS

An Amazon EFS is elastic. That means its storage capacity can be automatically scaled up (add more storage) or scaled down (shrink storage capacity) as folders and files are added to or removed from the system. This is a major advantage over traditional storage solutions—you can add or remove capacity without disrupting users or applications.

Importantly, EFS storage is permanent. When attached to an AWS compute instance, data will not disappear when that instance is relaunched.

Disadvantages to using an EFS

Amazon EFSs do have a couple of limitations:

  • No Windows instances. Amazon EFSs are not supported on AWS Windows EC2 instances. EFS volumes can only be used with non-Windows instances, such as Linux, that support NFS volumes.
  • No system boot volumes. Amazon EFS volumes also cannot be used for system boot volumes. AWS EC2 instances must use Elastic Block Store (EBS) volumes for booting their systems. EBS volumes are like EFS volumes with one exception. An EBS volume can only be connected to one EC2 instance or server, while EFS volumes can be connected to several EC2 instances and on-premises resources.

Use cases for EFS volumes

An EFS is suitable for the following use cases:

  • Web serving and content management
  • Enterprise application usage
  • Media and entertainment
  • Shared and home directories
  • Database backups
  • Developer and application tools
  • Container storage
  • Big data analytics
  • Other applications where you need to connect a common data source to a single server or multiple servers

Creating an EFS

An EFS is created within an AWS Virtual Private Cloud (AWS VPC) and must be attached to EC2 instances within the same VPC.

All the resources associated with an EFS—VPC, EC2 instances, and the EFS itself—must reside in the same AWS region. To host an EFS, you can use a default VPC or a custom VPC.

(Learn how VPCs work in our AWS VPCs introduction.)

You can create and manage an EFS using:

  • The AWS Management Console
  • An AWS Command Line Interface (CLI)
  • AWS API interfaces

Setup is relatively easy. AWS EC2 instances can mount existing EFSs to store and access data.

Mounting an EFS on an on-premises server

You can also mount an EFS on an on-premises server using AWS Direct Connect or AWS VPN. Using an EFS from on-premises servers allows you to:

  • Migrate data to EFS
  • Backup on-premises storage
  • Perform additional tasks

EFS pricing

EFS file system throughput scales automatically as capacity grows. Throughput can also be provisioned independent of capacity, if needed.

EFS storage can be provisioned under several different AWS pricing models, depending on:

  • Whether your EFS data will be accessed frequently or infrequently
  • Whether an EFS will store data in one or several AWS zones
  • Whether you are using throughput provisioning
  • What AWS storage class is used

How to practice creating & using an EFS

AWS offers a Getting Started exercise where you can create a sample Amazon Elastic File System, attach it to an EC2 instance, and transfer files to your new EFS using AWS DataSync (AWS account required).

This exercise also provides instructions for deleting the AWS sample resources you create, so your account is not charged after your EFS testing.

An Elastic File System can be created through the AWS Management Console

For more information on using an Amazon EFS for your data needs, consult the Amazon site directly.

Related reading

]]>
What’s AWS VPC? Amazon Virtual Private Cloud Explained https://www.bmc.com/blogs/aws-vpc-virtual-private-cloud/ Wed, 03 Mar 2021 15:39:59 +0000 https://www.bmc.com/blogs/?p=20326 Amazon’s Virtual Private Cloud (VPC) is a foundational AWS service in both the Compute and Network AWS categories. Being foundational means that other AWS services, such as Elastic Compute Cloud (EC2), cannot be accessed without an underlying VPC network. Creating a VPC is critical to running in the AWS cloud. Let’s take a look at: […]]]>

Amazon’s Virtual Private Cloud (VPC) is a foundational AWS service in both the Compute and Network AWS categories. Being foundational means that other AWS services, such as Elastic Compute Cloud (EC2), cannot be accessed without an underlying VPC network.

Creating a VPC is critical to running in the AWS cloud. Let’s take a look at:

(This tutorial is part of our AWS Guide. Use the right-hand menu to navigate.)

How VPCs work: virtual networking environments

Each VPC creates an isolated virtual network environment in the AWS cloud, dedicated to your AWS account. Other AWS resources and services operate inside of VPC networks to provide cloud services.

AWS VPC will look familiar to anyone used to running a physical Data Center (DC). A VPC behaves like a traditional TCP/IP network that can be expanded and scaled as needed. However, the DC components you are used to dealing with—such as routers, switches, VLANS, etc.—do not explicitly exist in a VPC. They have been abstracted and re-engineered into cloud software.

Using VPC, you can quickly spin up a virtual network infrastructure that AWS instances can be launched into. Each VPC defines what your AWS resources need, including:

  • IP addresses
  • Subnets
  • Routing
  • Security
  • Networking functionality

Where VPCs live

All VPCs are created and exist in one—and only one—AWS region. AWS regions are geographic locations around the world where Amazon clusters its cloud data centers.

The advantage of regionalization is that a regional VPC provides network services originating from that geographical area. If you need to provide closer access for customers in another region, you can set up another VPC in that region.

This aligns nicely with the theory of AWS cloud computing where IT applications and resources are delivered through the internet on-demand and with pay-as-you-go pricing. Limiting VPC configurations to specific regions allows you to selectively provide network services where they are needed, as they are needed.

Each Amazon account can host multiple VPCs. Because VPCs are isolated from each other, you can duplicate private subnets among VPCs the same way you could use the same subnet in two different physical data centers. You can also add public IP addresses that can be used to reach VPC-launched instances from the internet.

Amazon creates one default VPC for each account, complete with:

  • Default subnets
  • Routing tables
  • Security groups
  • Network access control list

You can modify or use that VPC for your cloud configurations or you can build a new VPC and supporting services from scratch.

Managing your VPCs

VPC administration is handled through these AWS management interfaces:

  • AWS Management Console is the web interface for managing all AWS functions (image below).
  • AWS Command Line Interface (CLI) provides Windows, Linux, and Mac commands for many AWS services. AWS frequently provides configuration instructions as CLI commands.
  • AWS Software Development Kit (SDK) provides language-specific APIs for AWS services, including VPCs.
  • Query APIs. Low-level API actions can be submitted through HTTP or HTTPS requests. Check AWS’s EC2 API Reference for more information.

The AWS Management Console manages your VPCs and other AWS services

(Learn about more AWS management tools.)

Elements of a VPC

The web-based AWS Management Console, shown above, displays most of the VPC resources you can create and manage. VPC network services include:

  • IPv4 and IPv6 address blocks
  • Subnet creation
  • Route tables
  • Internet connectivity
  • Elastic IP addresses (EIPs)
  • Network/subnet security
  • Additional networking services

Let’s look briefly at each.

IPv4 and IPv6 address blocks

VPC IP address ranges are defined using Classless Inter-Domain Routing (CIDR) IPv4 and IPv6 blocks. You can add primary and secondary CIDR blocks to your VPC, provided the secondary CIDR block comes from the same address range as the primary block.

AWS recommends that you specify CIDR blocks from the private address ranges specified in RFC 1918: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. See the AWS VPCs and Subnets page for restrictions on which CIDR blocks can be used.
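
The RFC 1918 recommendation can be checked programmatically. This sketch uses Python’s standard `ipaddress` module; the CIDR values are the standard private ranges from the RFC, not tied to any particular VPC:

```python
import ipaddress

# The three RFC 1918 private address ranges AWS recommends for VPC CIDRs.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(cidr):
    """Return True if the candidate CIDR falls inside a private range."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(private) for private in RFC1918)

print(is_rfc1918("10.0.0.0/16"))     # True
print(is_rfc1918("203.0.113.0/24"))  # False (a public documentation range)
```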

Subnet creation

Launched EC2 instances run inside a designated VPC subnet (sometimes referred to as launching an instance into a subnet).

For IP addressing, each subnet’s CIDR contains a subset of the VPC CIDR block. Each subnet isolates its individual traffic from all other VPC subnet traffic. A subnet can only contain one CIDR block. You can designate different subnets to handle different types of traffic.

For example, file server instances can be launched into one subnet, web and mobile applications can be launched into a different subnet, printing services into another, and so on.
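
To illustrate how subnet CIDRs relate to the VPC CIDR, this sketch carves a hypothetical VPC block into four equal, non-overlapping subnets using Python’s standard `ipaddress` module (the addresses and prefix lengths are illustrative):

```python
import ipaddress

# A hypothetical VPC CIDR block.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# prefixlen_diff=2 splits the /16 into 2**2 = 4 non-overlapping /18s.
# Each subnet's CIDR is a subset of the VPC CIDR, mirroring how VPC
# subnets are allocated.
subnets = list(vpc_cidr.subnets(prefixlen_diff=2))
for net in subnets:
    print(net)
# 10.0.0.0/18
# 10.0.64.0/18
# 10.0.128.0/18
# 10.0.192.0/18
```

Each resulting subnet could then carry a different traffic type, as described above.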

Route tables

Route tables contain the rules (routes) that determine how network traffic is directed inside your VPC and subnets. VPC creates a default route table, called the main route table, which is automatically associated with all VPC subnets. Here, you have two options:

  • Update and use the main route table to direct network traffic.
  • Create your own route table to be used for individual subnet traffic.
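
As a rough model of what a route table does, this sketch picks the route with the longest matching prefix for a destination IP, the same rule a VPC route table applies. The routes and gateway ID are illustrative values invented for this example, not real AWS resources:

```python
import ipaddress

# A toy route table: the "local" route covers traffic inside the VPC,
# and the default route (0.0.0.0/0) sends everything else to a
# hypothetical internet gateway.
routes = {
    ipaddress.ip_network("10.0.0.0/16"): "local",
    ipaddress.ip_network("0.0.0.0/0"): "igw-12345",
}

def route_for(dest_ip):
    """Return the target of the most specific (longest-prefix) route."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [net for net in routes if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(route_for("10.0.4.20"))  # local
print(route_for("8.8.8.8"))    # igw-12345
```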

Internet connectivity

For Internet access, each VPC configuration can host one Internet Gateway and provide network address translation (NAT) services using the Internet Gateway, NAT instances, or a NAT gateway.

Elastic IP addresses (EIPs)

EIPs are static public IPv4 addresses that are permanently allocated to your AWS account (EIP is not offered for IPv6). EIPs are used for public Internet access to:

  • An instance
  • An AWS elastic network interface (ENI)
  • Other services needing a public IP address

You allocate EIPs for long-term permanent network usage.

Network/subnet security

VPCs use security groups to provide stateful protection (the state of the connection session is maintained) for instances. AWS describes security groups as virtual firewalls.

VPCs also provide network access control lists (NACLs) for VPC subnets. NACLs are stateless; that is, the state of the connection is not maintained.

Additional networking services

Of course, these are not the only AWS services a VPC provides. You can use VPC to configure other common networking services such as:

  • Virtual Private Networks (VPNs)
  • Direct connectivity between VPCs (VPC peering)
  • Gateways
  • Mirror sessions


VPCs & shared responsibility

Before you start configuring VPCs, check out Amazon’s Shared Responsibility model. Per Amazon, security and compliance is a shared responsibility between AWS and its customers.

For your AWS account and configurations, AWS is responsible for the “Security of the Cloud” while customers are responsible for “Security in the Cloud.” Generally:

  • AWS is responsible for the AWS cloud infrastructure (hardware, cloud software, networking, facilities) that run AWS services.
  • Customers are responsible for what they run in the cloud, such as servers, data, encryption, applications, security, access, operating systems, etc.

The shared responsibility model lays out who is responsible for specific issues when you experience AWS downtime, security breaches, or loss of business. It is important to understand these limits as you set up your VPC configuration. Consult the shared responsibility model for more information.

Related reading

]]>